1. After beating humans at everything from the game of Go to strategy board games, Google DeepMind now says it is on the verge of besting the world’s top students at solving mathematics problems. The London-based machine-learning company yesterday announced that its artificial intelligence (AI) systems had solved four of the six problems that were given to school students at the 2024 International Mathematical Olympiad (IMO) in Bath, UK, this month. The AI produced rigorous, step-by-step proofs that were marked by two top mathematicians and earned a score of 28/42 — just one point shy of the gold-medal range. “It’s clearly a very substantial advance,” says Joseph Myers, a mathematician based in Cambridge, UK, who — together with Fields Medal-winner Tim Gowers — vetted the solutions and who had helped select the original problems for this year’s IMO. (Source: nature.com, deepmind.google)
2. Meaghan Tobin and Cade Metz:
While the United States has had a head start on A.I. development, China is catching up. In recent weeks, several Chinese companies have unveiled A.I. technologies that rival the leading American systems. And these technologies are already in the hands of consumers, businesses and independent software developers across the globe.
While many American companies are worried that A.I. technologies could accelerate the spread of disinformation or cause other serious harm, Chinese companies are more willing to release their technologies to consumers or even share the underlying software code with other businesses and software developers. This kind of sharing of computer code, called open source, allows others to more quickly build and distribute their own products using the same technologies.
Open source has been a cornerstone of the development of computer software, the internet and, now, artificial intelligence. The idea is that technology advances faster when its computer code is freely available for anyone to examine, use and improve upon. (Source: nytimes.com)
3. The world’s biggest tech and AI companies, including Google, OpenAI and Tesla, are racing to build the AI “brain” that can autonomously operate robots, in moves that could transform industries from manufacturing to healthcare. In particular, improved computer vision and spatial reasoning capabilities have allowed robots to gain greater autonomy while navigating varied environments, from construction sites to oil rigs and city roads. Training and programming robots previously required engineers to hardwire rules and instructions that taught the machine how to behave, often specific to each system or environment. The advent of deep learning models in recent years has enabled experts to train AI software that makes machines far more adaptive and reactive to unexpected physical challenges in the real world, and able to learn by themselves. (Source: ft.com)
4. OpenAI is launching an online search tool in a direct challenge to Google, opening up a new front in the tech industry’s race to commercialize advances in generative artificial intelligence. The experimental product, known as SearchGPT, will initially only be available to a small group of users, with the San Francisco-based company opening a 10,000-person waiting list to test the service on Thursday. The product is visually distinct from ChatGPT as it goes beyond generating a single answer by offering a rail of links — similar to a search engine — that allows users to click through to external websites. SearchGPT was developed with feedback from publishers that OpenAI has recently signed deals with, including News Corp, Axel Springer and the Financial Times. (Source: ft.com)
5. Training artificial intelligence (AI) models on AI-generated text quickly leads to the models churning out nonsense, a study has found. This cannibalistic phenomenon, termed model collapse, could halt the improvement of large language models (LLMs) as they run out of human-derived training data and as increasing amounts of AI-generated text pervade the Internet. “The message is, we have to be very careful about what ends up in our training data,” says co-author Zakhar Shumaylov, an AI researcher at the University of Cambridge, UK. Otherwise, “things will always, provably, go wrong,” he says. The team used a mathematical analysis to show that the problem of model collapse is likely to be universal, affecting all sizes of language model that use uncurated data, as well as simple image generators and other types of AI. (Source: nature.com)
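The core mechanism behind model collapse can be seen in a toy simulation (this is an illustrative sketch, not the study’s actual experiment): repeatedly fit a distribution to samples drawn from the previous fit, and rare events in the tails get lost for good. Here a long-tailed “vocabulary” of 50 tokens is refit over 30 generations; any token that happens not to be sampled in a generation gets probability zero and can never return, so the distribution’s support only shrinks.

```python
import random
from collections import Counter

def resample_distribution(probs, n_samples, generations, seed=0):
    """Toy 'model collapse' loop: each generation draws n_samples from
    the current distribution, then refits the distribution by maximum
    likelihood (observed frequencies). Unsampled tokens drop to zero
    probability permanently, so diversity can only decrease."""
    rng = random.Random(seed)
    tokens = list(range(len(probs)))
    for _ in range(generations):
        draws = rng.choices(tokens, weights=probs, k=n_samples)
        counts = Counter(draws)
        probs = [counts[t] / n_samples for t in tokens]  # MLE refit
    return probs

# Start from a long-tailed (Zipf-like) vocabulary of 50 "tokens".
initial = [1.0 / (i + 1) for i in range(50)]
total = sum(initial)
initial = [p / total for p in initial]

final = resample_distribution(initial, n_samples=200, generations=30)
survivors = sum(1 for p in final if p > 0)
```

After the loop, `survivors` is well below the original 50: the tail of the vocabulary has vanished, which is the “forgetting improbable events” effect the study describes, scaled down to a few lines.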