The Uncertain Future of Moore’s Law
In 1965, Gordon Moore, then director of research and development at Fairchild Semiconductor and later a co-founder of Intel, predicted that the number of transistors on an integrated circuit (the main component of a computer chip) would double every year for at least the next decade; in 1975 he revised the rate to a doubling every two years. This prediction, known today as Moore’s Law, has held remarkably well ever since. Despite its name, Moore’s prediction is not truly a law; rather, it is a trend that chipmakers around the world have worked to match through research, development, and technological advancement.
An integrated circuit is the basic building block of a computer and is made up of many components, the most numerous of which is the transistor. The more transistors you can pack onto one integrated circuit, the more computing power that circuit can deliver.
In the early 1970s, in line with Moore’s prediction, an integrated circuit held a few thousand transistors; Intel’s 4004 processor, released in 1971, contained 2,300. By contrast, Intel’s Skylake chip, released in August 2015, holds roughly 1.7 billion transistors.
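As a quick sanity check on those numbers, we can work backward from two data points (the Intel 4004’s 2,300 transistors in 1971 and Skylake’s roughly 1.7 billion in 2015) to the doubling period they imply. This is just illustrative arithmetic, not a rigorous fit:

```python
import math

# Two data points: Intel's 4004 (1971) had 2,300 transistors;
# Skylake (2015) holds roughly 1.7 billion.
t0, n0 = 1971, 2_300
t1, n1 = 2015, 1_700_000_000

# Number of doublings between the two chips, then the implied
# doubling period in years.
doublings = math.log2(n1 / n0)
period = (t1 - t0) / doublings

print(f"{doublings:.1f} doublings -> one doubling every {period:.2f} years")
# -> 19.5 doublings -> one doubling every 2.26 years
```

The implied period of about 2.3 years sits remarkably close to Moore’s revised two-year rate.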
As transistors get smaller, we can fit more of them onto a single silicon chip, and smaller transistors also switch faster and use less power. As a result, advancements in transistor size have had a profound impact on computing power and, by extension, on the economy and society.
In this post, I will be discussing the impacts of Moore’s Law, the challenges it is facing, and the solutions that will enable advancements in the future.
Why Does Moore’s Law Matter?
Moore’s Law has been a driving force in the continual advancements in integrated circuit and transistor technology. The breakthroughs from our ability to continue packing additional transistors into such small chips have had massive impacts in computer modeling, satellites, GPS, smartphones, PCs, and more.
In fact, nearly all digital technology has progressed thanks to advancements in transistors. Smaller transistors allow for faster, more efficient computation, which in turn lets digital technology keep evolving. For example, smaller transistors enabled the development of smartphones and the miniaturization of satellite and GPS hardware, putting location services at the touch of a button.
Moore’s Law has not only led to technological advancements. An IHS Technology report published in 2015 found that the advancements made under Moore’s Law accounted for between $3 trillion and $11 trillion of global gross domestic product (GDP) between 1995 and 2015.
Moore’s Law has played an instrumental role in both the technology we are able to enjoy today, as well as the economic boom that we have celebrated in recent years.
How Does Moore’s Law Keep Up?
As I mentioned above, Moore’s Law is not technically a law; it is an ideal rate of advancement that chipmakers have worked tirelessly to maintain. In the beginning, these advancements were made through several technological innovations:
- The integrated circuit was invented in 1959, making it relatively new technology at the time of Moore’s prediction. It played a large part in advancing both chip and transistor technology: by combining components on a single piece of semiconductor, it eliminated much of the discrete wiring and was far simpler to manufacture and maintain.
- The transistor types developed at the time, metal-oxide-semiconductor field-effect transistors (MOSFETs) and the CMOS designs built from them, helped keep Moore’s Law on track.
- Processing techniques such as photolithography, a microfabrication process that uses light to pattern features onto silicon, allowed chipmakers to print chips far more quickly and accurately than building each chip by hand.
Today, chipmakers must discover more creative innovations to continue advancing technology along the path of Moore’s Law. Chipmakers today continue advancements through:
- Altering the materials used;
- Altering the architecture of the chips themselves;
- Investing in research and development.
As our technology continues to advance, it is becoming harder and harder for researchers to maintain the fast-paced nature of Moore’s Law.
Problems Moving Forward
We are now hitting the physical limits of further miniaturization. Leading-edge transistors are built on manufacturing processes labeled five to seven nanometers (nm). For comparison, a sheet of paper is about 100,000 nanometers thick!
The problem is that at this small size, you start entering the realm of quantum mechanics, which brings its own set of complications (just look at the trouble Ant-Man has had in the quantum realm!). Some of the obstacles to shrinking transistors further include:
- Quantum Tunneling: At this scale, electrons no longer behave as classical physics predicts. Through quantum tunneling, they can pass straight through barriers, such as a transistor’s gate insulator, that would normally contain them, causing current to leak even when a transistor is switched off.
- Power and Heat Dissipation: At this scale, leakage drives up power consumption, and packing more transistors into the same area concentrates heat, making both power and heat dissipation increasingly difficult to manage.
- Research and Development: The cost of research and development has grown exponentially as transistors have shrunk, meaning the cost of designing and manufacturing new chips is quickly becoming prohibitive.
Researchers are already investigating solutions that will allow technological advances to continue, including:
- Additional Innovation: A great deal of work is being done in materials science. Researchers are investigating carbon nanotubes and graphene as possible alternatives to today’s silicon, and exploring more complex architectures: instead of laying out flat chips, they are finding ways to build upward, stacking transistors into three-dimensional designs.
- Specialization: Most chips today are designed for general-purpose use: they provide raw computing power that can be applied however you choose. Through specialization, you can instead design chips that solve one class of problem extremely well. Chip specialization is already common in Artificial Intelligence (AI) and machine learning, where Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs) are tailored to particular workloads, allowing them to outperform general-purpose chips at those tasks.
- Better Use of What We Have: Because the technology field is competitive and fast-paced, one trend in software development has been to ship products as quickly as possible, with little regard for the resources they consume. As long as hardware kept improving, efficient resource use did not need to be a priority. As we approach this physical barrier, however, we may be forced to focus on working better with what we have. Specifically, software developers can continue to raise the efficiency of their products through:
- Minimizing Software Bloat: When creating software, developers often add overhead that makes development more convenient but does not improve the user experience. Eliminating that overhead minimizes software bloat and makes the resulting system more efficient.
- Writing Better Code: Today, developers often reuse code to speed up development, even when that code was written for an older version of the system. Writing code tailored to the task at hand makes the system more efficient.
- Taking Advantage of Multi-Threaded Processing: Techniques such as parallelism and concurrency let software exploit the multiple cores that existing hardware already provides, yielding large gains in efficiency.
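As one illustration of that last point, here is a minimal Python sketch using the standard library’s concurrent.futures module (the fetch task is a hypothetical stand-in for real work). Running independent tasks concurrently reclaims wall-clock time that serial code would spend waiting:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(n):
    """Stand-in for an I/O-bound task (e.g. a network request)."""
    time.sleep(0.2)  # simulate waiting on I/O
    return n * n

start = time.perf_counter()
# Four tasks run concurrently instead of one after another, so total
# wall time is close to one task's duration rather than four.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, range(4)))
elapsed = time.perf_counter() - start

print(results)        # [0, 1, 4, 9]
print(elapsed < 0.8)  # beats the ~0.8 s a serial loop would take
```

Threads suit I/O-bound work in CPython; for CPU-bound work, the analogous ProcessPoolExecutor sidesteps the interpreter’s global lock and uses multiple cores directly.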
Though Moore’s Law has lasted well beyond the decade Gordon Moore originally predicted, it is now on the precipice of failure. It will take the dedication of many individuals across dozens of fields for technology to continue advancing at its current rate. Has Moore’s Law outlived its relevance? Or will chipmakers come up with new innovations to keep chip technology on pace with Moore’s Law?