
What’s Next? AI Scaling and its Implications

November 17, 2023

Can AI progress continue at the same pace we’ve seen in the last few years, and should it? What are the bottlenecks to growth, and how can they be solved? What are the greatest risks?

These and other vital questions were explored at the recent MIT FutureTech Workshop on AI Scaling and its Implications, held October 12-13 at the MIT Museum. The invitation-only event gathered more than 70 prominent computer scientists, engineers, and economists to discuss scaling laws and their implications for AI development, automation, and more.

Attendees listened to 15 talks covering different aspects of AI development and adoption. Speakers included Pamela Fine Mishkin from OpenAI; Eric Drexler from the University of Oxford; Dan Hendrycks from the Center for AI Safety; and Jacob Steinhardt from UC Berkeley (see the full agenda here).

Neil Thompson, Director of MIT FutureTech and MIT IDE research lead, led a discussion on prioritizing research so that new projects can address some of the challenges identified.

IDE Content and Editorial Manager Paula Klein asked Thompson and researchers Tamay Besiroglu and Peter Slattery to describe the day’s highlights and key takeaways.

Q: What was the main focus of the workshop?

Neil Thompson: The workshop centered on the future scalability of AI—whether AI models will continue to grow in size and power.

We think there are many reasons to explore this question. First, there are reasons to think that progress in AI performance may slow. Past progress came from training larger and more powerful AI models with disproportionately more computation. This approach worked as long as computing performance kept pace with the demand for training AI models. However, the same approach may not work in the future. In fact, our research shows that demand for computation is now outpacing the growth in computing supply [See Figure 1 below]. If this continues, training larger models will become prohibitively expensive and the rate of progress will slow.

“Demand for computation is now outpacing the growth in computing supply. If this continues, training larger models will become prohibitively expensive and the rate of progress will slow.”

Figure 1

[Source]
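To make this concern concrete, here is a small back-of-the-envelope sketch. It is not from the workshop, and the growth rates and starting cost are illustrative assumptions rather than FutureTech estimates; it simply shows how training cost escalates when demand for compute grows faster than hardware price-performance.

```python
# Illustrative back-of-the-envelope projection (assumed numbers, not workshop data).
# If the compute used for frontier training runs grows faster than hardware
# price-performance, the dollar cost of each new run grows exponentially.

COMPUTE_GROWTH_PER_YEAR = 4.0      # assumed: compute per frontier run ~4x per year
PRICE_PERF_GROWTH_PER_YEAR = 1.4   # assumed: FLOP per dollar ~1.4x per year
INITIAL_COST_USD = 10e6            # assumed starting cost of a frontier training run

def projected_cost(years: int) -> float:
    """Cost after `years` years if both trends continue unchanged."""
    cost_multiplier = (COMPUTE_GROWTH_PER_YEAR / PRICE_PERF_GROWTH_PER_YEAR) ** years
    return INITIAL_COST_USD * cost_multiplier

for year in range(0, 9, 2):
    print(f"year {year}: ~${projected_cost(year):,.0f}")

# Under these assumptions the cost compounds at roughly 2.9x per year,
# which is why training ever-larger models could become prohibitively expensive.
```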

Additionally, while less immediate, it is also possible that access to data will affect the speed of AI improvement. The amount of high-quality language data required to train models is currently increasing much more rapidly than the supply of such data, as shown in Figure 2.

Figure 2

 

[Source]
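As a rough illustration of the data bottleneck, the sketch below projects when demand for high-quality text would overtake the available stock. The token counts and growth rates are hypothetical assumptions of ours, not the figures behind Figure 2.

```python
# Hypothetical illustration of the data bottleneck (assumed numbers only).
# If demand for high-quality training text grows much faster than the stock
# of such text, demand eventually overtakes supply.

stock_tokens = 1e13        # assumed current stock of high-quality tokens
demand_tokens = 1e12       # assumed tokens used by the largest training runs today
STOCK_GROWTH = 1.05        # assumed: stock of text grows ~5% per year
DEMAND_GROWTH = 2.5        # assumed: data used per frontier run grows ~2.5x per year

year = 0
while demand_tokens < stock_tokens and year < 50:
    stock_tokens *= STOCK_GROWTH
    demand_tokens *= DEMAND_GROWTH
    year += 1

print(f"Under these assumptions, demand overtakes the available stock in ~{year} years.")
```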
Q: Which innovations might affect AI scaling?

Peter Slattery: We discussed four areas of innovation that may help address the compute and data bottlenecks. These are hardware, algorithms, AI model design, and synthetic data.

1. Hardware improvements: We might develop new chips with more specialized hardware or better chip-management technologies. However, hardware performance gains are slowing after years of rapid growth and investment, so continued diminishing returns seem more likely. [See Figure 3]

Figure 3

[Source]

2. Improvements in algorithms: If we discover new algorithms that improve machine learning performance, we might counteract the slowdown in hardware improvement. Our research suggests that this is certainly possible: algorithmic improvements for certain problems have significantly outpaced hardware gains in the past. [See Figure 4; a stylized sketch follows below.]

Figure 4

[Source]
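One way to see why algorithmic progress matters is to treat “effective compute” as the product of hardware gains and algorithmic gains. The sketch below is ours, with illustrative growth rates rather than the workshop’s measurements.

```python
# Stylized view of "effective compute": hardware gains and algorithmic gains multiply.
# The growth rates are illustrative assumptions, not measurements from the workshop.

HARDWARE_GAIN_PER_YEAR = 1.3    # assumed: slowing hardware improvement
ALGORITHM_GAIN_PER_YEAR = 2.0   # assumed: algorithms halve compute needs each year

def effective_compute_gain(years: int) -> float:
    """Combined improvement factor if both trends hold for `years` years."""
    return (HARDWARE_GAIN_PER_YEAR * ALGORITHM_GAIN_PER_YEAR) ** years

print(f"Hardware alone over 5 years: {HARDWARE_GAIN_PER_YEAR ** 5:.1f}x")
print(f"Hardware plus algorithms over 5 years: {effective_compute_gain(5):.0f}x")

# Even with modest hardware gains, algorithmic progress can dominate the total,
# which is why better algorithms could offset a hardware slowdown.
```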

3. Improved model design: We could also improve AI model design. Instead of boosting performance by building ever-larger models capable of many tasks, we could use combinations of smaller, specialized models that need less computation and data to train. [See Figure 5; a toy routing sketch follows below.]

Figure 5

[Source]
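To illustrate the idea of combining smaller specialized models, here is a minimal routing sketch of our own. It is a toy, not any specific architecture from the workshop: a lightweight router sends each request to a small model suited to the task rather than to one large general model.

```python
# Toy sketch of composing small specialized models behind a router.
# The "models" here are stand-in functions; a real system would call small,
# separately trained models (e.g., one for code, one for summarization).

from typing import Callable, Dict

def code_model(prompt: str) -> str:
    return f"[code model] draft implementation for: {prompt}"

def summarize_model(prompt: str) -> str:
    return f"[summary model] short summary of: {prompt}"

def general_small_model(prompt: str) -> str:
    return f"[small general model] answer to: {prompt}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "code": code_model,
    "summarize": summarize_model,
}

def route(prompt: str) -> str:
    """Very simple keyword router; real routers are usually learned classifiers."""
    for keyword, model in SPECIALISTS.items():
        if keyword in prompt.lower():
            return model(prompt)
    return general_small_model(prompt)

print(route("Please summarize this workshop report."))
print(route("Write code to parse a CSV file."))
```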

4. Synthetic data: We could use synthetic data generated by AI to address the looming shortage of training data. Studies suggest that synthetic data can perform nearly as well as, or in some cases even better than, real-world data for training machine learning models. Synthetic data might also be used to create bespoke data sets that train models more efficiently than natural data (a toy sketch follows below).

“Synthetic data can perform nearly as well as, or in some cases even better than, real-world data for training machine learning models.”
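As a toy illustration of the synthetic-data idea, the sketch below augments a small labeled dataset with generated examples before training. It is our own sketch: the “generator” is a trivial template standing in for a real generative model.

```python
# Toy sketch of augmenting real training data with synthetic examples.
# The "generator" here is a trivial template; in practice it would be a
# trained generative model producing bespoke examples for the target task.

import random

real_examples = [
    ("the movie was wonderful", "positive"),
    ("the movie was terrible", "negative"),
]

POSITIVE_WORDS = ["great", "excellent", "delightful"]
NEGATIVE_WORDS = ["awful", "boring", "disappointing"]

def generate_synthetic(n: int) -> list[tuple[str, str]]:
    """Stand-in generator: produces simple labeled sentences from templates."""
    synthetic = []
    for _ in range(n):
        if random.random() < 0.5:
            synthetic.append((f"the movie was {random.choice(POSITIVE_WORDS)}", "positive"))
        else:
            synthetic.append((f"the movie was {random.choice(NEGATIVE_WORDS)}", "negative"))
    return synthetic

training_set = real_examples + generate_synthetic(8)
print(f"{len(real_examples)} real + {len(training_set) - len(real_examples)} synthetic examples")
```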

Q: What are the risks and implications of scaling AI?

Tamay Besiroglu: Several participants discussed the impact of AI on productivity and labor markets. Some anticipated extensive job replacement and rapid productivity growth, while others were less confident. This paper provides a summary of many of the key disagreements and uncertainties.

One talk explored how AI might impact scientific productivity and why the impact might differ between research problems. For instance, some areas of science may be easier to accelerate because abundant relevant data is available for training AI. However, the benefits of AI may be much more limited for research problems where relevant training data is sparse or nonexistent.

Several speakers discussed shortcomings of existing models and the risks posed by more advanced AI. They suggested that we should be carefully preparing for risks and potentially slowing down particularly risky development and deployment.

The workshop ended with Jacob Steinhardt discussing predictions for AI in 2030. For example, he forecast that models will automate most mathematics research and some other research areas, while posing significant risks of misuse in cyberattacks, persuasion, and manipulation.

Q: What is the connection between AI scaling and Quantum Computing?

Neil Thompson: The likely impact of quantum computing on AI scaling is very uncertain. Our recent research suggests that current quantum computers outperform classical computers for only a relatively small subset of computing problems. At this stage, quantum computers are nowhere near the scale or error-resistance needed to outperform classical computers for a broad range of tasks relevant to AI. However, it is hard to anticipate the speed of progress in this technology, so this is another area we need to watch.

Q: What were some key takeaways and recommendations?

Tamay Besiroglu: The speed of AI scaling is uncertain and hinges on model design and trends in the performance and availability of hardware, algorithms, and data. To understand it better, we need more research into each of these topics.

 “We have good reasons to think that AI will impact labor markets, but there is still considerable uncertainty about how immediate and significant the impact will be.”

If used well, AI has tremendous potential for positive social impact, for instance on productivity and scientific progress, but it may also cause significant social harms if scaled too quickly and carelessly.