The Oslo Patient.
AI’s annual report card.
“I read it almost every morning.” — James Walker, distant cousin.
1. MIT Tech Review:
The 2026 AI Index from Stanford University’s Institute for Human-Centered Artificial Intelligence, AI’s annual report card, is now freely available on the web.
Despite predictions that AI development may hit a wall, the report says that the top models just keep getting better. People are adopting AI faster than they picked up the personal computer or the internet. AI companies are generating revenue faster than companies in any previous technology boom, but they’re also spending hundreds of billions of dollars on data centers and chips. The benchmarks designed to measure AI, the policies meant to govern it, and the job market are struggling to keep up. AI is sprinting, and the rest of us are trying to find our shoes.
All that speed comes at a cost. AI data centers around the world can now draw 29.6 gigawatts of power, enough to run the entire state of New York at peak demand. Annual water use from running OpenAI’s GPT-4o alone may exceed the drinking water needs of 12 million people. At the same time, the supply chain for chips is alarmingly fragile. The US hosts most of the world’s AI data centers, and one company in Taiwan, TSMC, fabricates almost every leading AI chip.
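The two comparisons above are easy to sanity-check with a back-of-envelope calculation. The figures below for New York's peak demand (~34 GW, roughly the NYISO all-time record) and per-person drinking water (~2 liters/day) are assumptions for illustration, not numbers from the report:

```python
# Rough sanity check of the report's two comparisons.
# Assumed reference values (not from the AI Index report):
ny_peak_demand_gw = 33.9          # approx. NYISO all-time peak demand
liters_per_person_per_day = 2.0   # typical drinking-water need

ai_dc_power_gw = 29.6             # reported global AI data-center draw
people = 12_000_000               # population in the water comparison

share_of_ny_peak = ai_dc_power_gw / ny_peak_demand_gw
annual_liters = people * liters_per_person_per_day * 365

print(f"AI data-center draw vs. NY peak demand: {share_of_ny_peak:.0%}")
print(f"Drinking water for 12M people: {annual_liters / 1e9:.2f} billion liters/year")
```

Under these assumptions, 29.6 GW is indeed on the order of New York's peak load, and the 12-million-person figure corresponds to roughly nine billion liters of drinking water per year.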
The data reveals a technology evolving faster than we can manage. (Sources: technologyreview.com, hai.stanford.edu)
2. Among the report’s findings:
(1) AI capability is not plateauing; it is accelerating and reaching more people than ever.
(2) The U.S.-China AI model performance gap has effectively closed.
(3) The United States hosts 5,427 data centers, more than ten times as many as any other country, and they consume more energy than those of any other country.
(4) A single company, TSMC, fabricates almost every leading AI chip, making the global AI hardware supply chain dependent on one foundry in Taiwan.
(5) The United States leads in AI investment, but its ability to attract global talent is declining.
(6) Productivity gains from AI are appearing in many of the same fields where entry-level employment is starting to decline.
(7) AI models for science can outperform human scientists, though bigger models do not always perform better.
(8) AI sovereignty is becoming a defining feature of national policy, but capabilities remain uneven, even as open-source development helps redistribute who participates.
The HAI report is worth reading in full. (Source: hai.stanford.edu)
3. Quantamagazine:
Mathematicians who had dismissed AI models as too error-prone to be useful started playing around with them. Those early adopters found, to their surprise, not only that the models were good at puzzles, but that they could help break genuinely new ground. Soon, mathematicians were using AI to discover and prove new results, accomplishing in a day what would have once taken them weeks or months. “2025 was the year when AI really started being useful for many different tasks,” said Terence Tao, a prominent mathematician at the University of California, Los Angeles…
By the start of 2026, shock at the power of AI had turned into something more like wonder. A February challenge called First Proof gave entrants a week to have their AI models solve 10 research-level questions in various areas of math. Mathematicians had chosen the questions so that they were unlikely to have appeared in the models’ training data. With varying levels of autonomy, the models succeeded in solving over half the problems. If the Olympiad results represented the moment AI entered an ambitious college math program, the First Proof results were arguably the moment it finished graduate school. (Sources: quantamagazine.org, math.ucla.edu, daniellitt.com)