News Items

A Watershed Event.

The "kill switch" won't work.

John Ellis
Apr 24, 2026


1. The Food and Drug Administration on Thursday approved a gene therapy that can cure a rare, inherited form of deafness. The treatment is the first to restore normal hearing in children who were born deaf. The maker of the therapy, Regeneron, plans to provide it free to any child who needs it. (Sources: nytimes.com, investor.regeneron.com)


2. Chris McGuire:

April 2026 marked a turning point in artificial intelligence: America’s leading AI companies developed models so powerful that they decided not to immediately release them to the public. These systems are the most capable cyber-weapons ever built, and AI leaders have warned for years that their arrival would reshape national security. That moment is here.

The consequences of losing the AI arms race are no longer theoretical. AI models are now the decisive offensive and defensive tools in cyberspace, and American and allied cybersecurity depends on maximizing the US lead over China in AI.

Anthropic’s newest model, Claude Mythos Preview, is the first AI model that can autonomously discover, chain together, and exploit or patch software vulnerabilities more effectively than almost any human researcher, and at unprecedented scale. Cybersecurity experts describe Mythos as a “watershed event in the history of cybersecurity”.

Instead of releasing Mythos publicly, Anthropic is deploying it only to select US technology companies to shore up US cyber defenses. OpenAI announced that its upcoming model, Spud, would similarly be released only to select cybersecurity partners. The White House, Treasury Department, and Federal Reserve have launched parallel efforts to harden US critical infrastructure against AI-enabled cyberattacks.

These efforts are urgent because China will soon develop a model as capable as Mythos. China’s best AI models currently lag behind leading US models by about seven months, perhaps slightly more. Seven months, then, is the window in which to harden America’s entire digital infrastructure before Chinese cyber weapons surpass current US defenses. (Sources: cfr.org, ft.com)


3. The White House has accused China of undertaking industrial-scale theft of American artificial intelligence labs’ intellectual property and warned that it would crack down on a practice that exploits US innovation. “The US government has information indicating that foreign entities, principally based in China, are engaged in deliberate, industrial-scale campaigns to distil US frontier AI systems,” Michael Kratsios, director of the White House Office of Science and Technology Policy, wrote in a memo seen by the Financial Times. The accusation marks the latest escalation in tensions around Chinese groups allegedly raiding advanced American AI research amid an arms race to lead in the technology. It comes just weeks before President Donald Trump will meet President Xi Jinping in Beijing. (Source: ft.com)


4. DeepSeek rolled out preview versions of a new flagship artificial intelligence model a year after upending Silicon Valley, calling it the most powerful open-source platform in a challenge to rivals from OpenAI to Anthropic PBC. The Chinese startup unveiled the V4 Flash and V4 Pro series, touting top-tier performance in coding benchmarks and big advancements in reasoning and agentic tasks. The models come with architecture upgrades and optimization improvements, the startup said on Hugging Face. DeepSeek singled out a technique it dubbed Hybrid Attention Architecture, which it said improves an AI platform's ability to remember queries across long conversations. It also pushed the context window to 1 million tokens, a leap that allows entire codebases or long documents to be sent as a single prompt. (Sources: bloomberg.com, huggingface.co)


5. The artificial intelligence company Anthropic said this month that it would share its latest A.I. technology with only a small number of partners because of cybersecurity concerns. Yesterday, Anthropic’s chief rival, OpenAI, took a different approach. The company unveiled a new flagship A.I. model, GPT-5.5, and began sharing the technology with the hundreds of millions of people who use ChatGPT, its online chatbot. The companies’ contrasting strategies are a clear indication that Anthropic and OpenAI disagree on how they should handle technology that is increasingly useful both to people trying to defend computer networks and to those trying to break into them. But OpenAI is not throwing caution to the wind. The company said it was not yet releasing the technology as an application programming interface, or A.P.I., which would allow companies and individuals to fold the technology into their own software applications and other tools. (Source: nytimes.com)


6. Anthropic says it has no way to control or shut down its AI models once they’re deployed by the Pentagon, according to a new court filing. The Pentagon designated Anthropic a supply chain risk, contending the AI firm is inappropriately getting involved in how its technology can be used in sensitive military operations. Anthropic argues in the filing to a federal appeals court in D.C. that it has no visibility, technical ability or any kind of “kill switch” for its technology once it’s deployed. (Sources: axios.com, storage.courtlistener.com)
