Supersilk.
Agents of chaos.
1. While lifesaving vaccines face a relentless onslaught from the Trump administration—with fervent anti-vaccine advocate Robert F. Kennedy Jr. leading the charge—the scientific literature is building a wondrous story: A vaccine appears to prevent dementia, including Alzheimer’s, and may even slow biological aging. For years, study after study has noted that older adults vaccinated against shingles seemed to have a lower risk of dementia. A study last month suggested that the same vaccine may also slow biological aging, including by lowering markers of inflammation. “Our study adds to a growing body of work suggesting that vaccines may play a role in healthy aging strategies beyond solely preventing acute illness,” study author Eileen Crimmins, of the University of Southern California, said. Another study this month suggested that past positive findings against dementia may even be underestimates of vaccination’s potential, with a newer shingles vaccine providing even more protection. (Source: arstechnica.com)
2. The dangers heart disease poses to women may be about to get worse, according to a new analysis. Based on national data from 2010 to 2020, researchers project that, by 2050, the prevalence of serious cardiovascular disease and stroke among women in the U.S. will rise from 10.7 percent to 14.4 percent—affecting more than 22 million people. And that’s not counting high blood pressure. The study, published today in Circulation, also shows an alarming uptick of disease in younger women: nearly a third of all women between ages 22 and 44 will be diagnosed with some form of cardiovascular disease by 2050. (Sources: ahajournals.org, scientificamerican.com)
3. Chinese scientists have developed a new technique that solidifies liquid into three-dimensional objects in under a second, making it the world’s fastest 3D printing. 3D printing is no longer a novel concept – whether it is tech enthusiasts creating digital objects, metal printing conducted in space, customized bone structures for patients, or even military units using 3D-printed parts for weapon repairs. However, these technologies still rely on mechanical scanning by a printing nozzle, building objects layer by layer over minutes or even hours. In some cases, improving precision slows down the process. This month, a team of Chinese scientists from Tsinghua University unveiled a new approach. (Source: scmp.com)
4. ‘Agents of Chaos’ was written by researchers from Harvard, MIT, Stanford, Carnegie Mellon, Northeastern University and other institutions. From their website:
Autonomous agents with real tools were tested by real people.
We deployed six autonomous AI agents into a live Discord server and gave them email accounts, persistent file systems, unrestricted shell access, and a mandate to be helpful to any researcher who asked. Twenty colleagues then interacted with them freely — some making benign requests, others probing for weaknesses.
Over two weeks, the agents accumulated memories, sent emails, executed scripts, and formed relationships. Researchers impersonated owners, injected malicious instructions, and attempted social engineering. The agents had no explicit adversarial training for this environment. (Sources: arxiv.org, agentsofchaos.baulab.info)
5. From their research paper’s Abstract:
Observed behaviors include unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover. In several cases, agents reported task completion while the underlying system state contradicted those reports. We also report on some of the failed attempts. Our findings establish the existence of security-, privacy-, and governance-relevant vulnerabilities in realistic deployment settings. These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms, and warrant urgent attention from legal scholars, policymakers, and researchers across disciplines. (Source: arxiv.org. Italics mine.)
6. Advanced AI models appear willing to deploy nuclear weapons without the same reservations humans have when put into simulated geopolitical crises. Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival. The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions. In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” says Payne. (Sources: newscientist.com, kcl.ac.uk)