2024 Stanford AI Index Report

Fabio Oppini
7 min read · Apr 27, 2024


On April 16, 2024, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) published the seventh edition of its annual AI Index Report, which extensively covers essential trends such as technical advances in AI, the geopolitical dynamics surrounding its development, and public perceptions of the technology.

This article presents 10 key takeaways from the report.

1. AI beats humans on some tasks, but not on all

Artificial intelligence outperforms humans on certain tasks, such as image classification, visual reasoning, and English comprehension. However, it still lags behind on more complex tasks such as competitive-level mathematics, visual commonsense reasoning, and planning. Since 2021, competitive-level mathematics has shown the greatest improvement over time relative to the human baseline.

2. Model training becomes significantly pricier

AI Index estimates show that training costs for cutting-edge AI models have risen to unprecedented levels, growing exponentially over the years. For example, training Google’s Gemini Ultra cost an estimated $191 million in compute alone, while OpenAI’s GPT-4 required an estimated $78 million worth of computing resources.

3. The U.S. leads in the number of top AI models, but China dominates in the number of patents

The United States leads China, the EU, and the UK as the main source of top AI models. In 2023, U.S.-based institutions produced 109 foundation models (73%), significantly surpassing China’s 20 (13%) and the European Union’s 15 (10%). In total, 149 foundation models were released in 2023, more than double the number released in 2022. Of these newly released models, 66% were open-source, up from 44% in 2022 and 33% in 2021, marking a steady increase in the share of open-source releases.

On the other hand, China accounts for the largest share of granted AI patents (61%), far outpacing the United States (21%). The U.S. share has declined from 54% in 2010.

From 2021 to 2022, AI patent grants worldwide increased sharply by 63%. Since 2010, the number of granted AI patents has increased more than 31 times.
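To put that growth in perspective, here is a minimal back-of-the-envelope sketch that converts the 31-fold increase into an implied compound annual growth rate; the 2010–2022 span is an assumption of this sketch, not a figure stated in this summary:

```python
# Rough illustration: implied compound annual growth rate (CAGR)
# if granted AI patents grew about 31x between 2010 and 2022.
# The 2022 endpoint is an assumption made for this sketch.
growth_factor = 31
years = 2022 - 2010  # 12 years

cagr = growth_factor ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")  # roughly 33% per year
```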

According to visualcapitalist.com, AI patents in the U.S. are held primarily by major corporations such as IBM, Microsoft, and Google. In China, by contrast, AI patents are spread across government entities, universities, and tech companies (e.g., Tencent). In terms of focus areas, Chinese patents often revolve around computer vision, while American efforts are distributed more evenly across research fields.

4. Closed LLMs significantly outperform open ones

Across 10 prominent AI benchmarks, closed-source models demonstrated a significant edge over open-source ones, with a median performance advantage of 24%. These disparities have significant implications for ongoing discussions around AI policy and governance.
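To make the comparison concrete, the sketch below shows how a median performance advantage across benchmarks can be computed; the benchmark names and scores are hypothetical, chosen only to illustrate the calculation, not taken from the report:

```python
from statistics import median

# Hypothetical (closed_score, open_score) pairs per benchmark,
# used only to illustrate how a median advantage is derived.
scores = {
    "benchmark_a": (86.4, 70.0),
    "benchmark_b": (67.0, 54.2),
    "benchmark_c": (95.3, 91.0),
}

# Relative advantage of the closed model on each benchmark, in percent.
boosts = [(closed - open_) / open_ * 100 for closed, open_ in scores.values()]

print(f"Median closed-vs-open advantage: {median(boosts):.1f}%")
```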

5. In Responsible AI, many concerns remain and new ones emerge

  • Robust and standardized evaluations of LLM responsibility are seriously lacking. AI developers also show little transparency, particularly when it comes to disclosing details about their training data and methods. This lack of openness impedes attempts to thoroughly evaluate the reliability and safety of AI systems, ultimately undermining confidence in their performance.
  • Political deepfakes are easy to generate and difficult to detect. Fully autonomous disinformation systems such as CounterCloud show how an LLM can be engineered to scrape, generate, and distribute AI-generated disinformation at scale without human intervention. With over 4 billion people worldwide expected to vote in 2024, AI poses new risks to political processes and to decision-making more generally.
    A recent study has also revealed a statistically significant bias within ChatGPT towards the Democrats in the U.S. and the Labour Party in the UK, sparking concerns about the tool’s capacity to shape users’ political views.
  • The AI Incident Database (AIID) documents instances of AI systems being used in unethical or harmful ways, such as autonomous vehicles causing pedestrian fatalities and facial recognition technology leading to wrongful arrests. The frequency of these incidents has risen steadily over the years, with a notable surge in 2023: 123 reported incidents, a 32% increase over the previous year.

6. Generative AI investment skyrockets

Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.

A new McKinsey survey reveals that 42% of surveyed organizations report cost reductions from implementing AI (including generative AI), and 59% report revenue increases. Compared to the previous year, there was a 10 percentage point increase in respondents reporting decreased costs, suggesting AI is driving significant business efficiency gains.

7. AI adoption within organizations is on the rise

A 2023 McKinsey report shows that 55% of organizations now use AI, including generative AI, in at least one business unit or function, up from 50% in 2022 and a mere 20% in 2017.

8. AI accelerates its impact on science and medicine

AI began to advance scientific discovery in 2022, but 2023 saw the introduction of even more impactful AI applications in science and medicine, such as:

  • AlphaDev: a new AI reinforcement learning system which discovered a faster sorting algorithm.
  • FlexiCubes: a gradient-based representation for differentiable mesh optimization.
  • Synbot: an AI-driven robotic chemist for synthesizing organic molecules.
  • GraphCast: a new weather forecasting system that delivers highly accurate 10-day weather predictions in under a minute, utilizing graph neural networks and machine learning.
  • GNoME: a model that simplifies the materials discovery process, capable of revealing 2.2 million new crystal structures, many overlooked by human researchers.
  • Flood Forecasting: innovative AI-based methods developed by Google researchers to produce highly accurate large-scale flood predictions.
  • SynthSR: an AI tool that processes clinical brain scans for advanced analysis.
  • Coupled plasmonic infrared sensors: a new approach to neurodegenerative disease diagnosis that couples AI with plasmonic infrared sensors, combining surface-enhanced infrared absorption (SEIRA) spectroscopy with an immunoassay technique (ImmunoSEIRA).
  • EVEscape: a new AI deep learning model, trained on historical sequences and biophysical and structural information, that predicts the evolution of viruses.
  • AlphaMissense: a new AI model that predicted the pathogenicity of 71 million genetic variants that affect the functionality of human proteins.
  • GPT-4 Medprompt and MediTron-70B: two LLMs evaluated on MedQA, a comprehensive benchmark derived from professional medical board exams and featuring over 60,000 clinical questions designed to challenge doctors.
    GPT-4 Medprompt is closed-source, while MediTron-70B is open-source.
  • CoDoC: a system designed to discern when to rely on AI for diagnosis and when to defer to traditional clinical methods.
  • CT Panda: an AI model, developed by a Chinese research team, capable of efficiently detecting and classifying pancreatic lesions in CT scans.

9. The public is pessimistic about AI’s economic impact

According to an Ipsos survey, just 37% of respondents express confidence that AI will enhance their job prospects. Similarly, only 34% foresee AI contributing to economic growth, while 32% believe it will positively impact the job market.

10. ChatGPT is widely known and widely used

According to an international survey conducted by the University of Toronto, 63% of participants are familiar with ChatGPT. Among those who are aware of it, approximately half indicate using ChatGPT at least once a week.

Conclusion

This year’s AI Index Report offers insights into fascinating developments, ranging from AI models that surpass human performance on specific tasks to the rising computational costs of training cutting-edge systems. It also highlights increasing public awareness of, and apprehension about, AI’s expanding influence.

As AI influences ever more areas of society, understanding how every aspect of the technology is evolving becomes essential. The report provides a concrete picture of AI’s current capabilities, persistent challenges, and the areas that require greater attention to ensure the responsible advancement of this influential technology.

The full report is available here: https://aiindex.stanford.edu/report/
