News Items

The Sunday Weekend Edition
Don't talk about it.

John Ellis
Jul 27, 2025

1. Renowned scientists and business leaders from the US and China are calling for greater collaboration in the field of artificial intelligence amid growing concerns that humanity might lose control of the rapidly evolving technology. At the World Artificial Intelligence Conference (WAIC), which commenced in Shanghai on Saturday, Nobel laureate and AI pioneer Geoffrey Hinton proposed the establishment of “an international community of AI safety institutes and associations that works on techniques for training AI to be benevolent”. In his talk, Hinton acknowledged the challenges of international cooperation owing to divergent national interests on issues such as cyberattacks, lethal autonomous weapons, and the creation of fake videos that manipulate public opinion. However, he emphasised a critical common ground: “No country wants AI to take over”. Hinton warned that AI was akin to a “cute tiger cub” kept as a pet by humans, but which could become dangerous as it matured. He stressed the importance of preventing this scenario through international cooperation, drawing parallels to US-Soviet collaboration on nuclear non-proliferation during the Cold War. (Source: scmp.com)


2. Jakob Stenseke:

How do you control a superintelligent artificial being given the possibility that its goals or actions might conflict with human interests? Over the past few decades, this concern – the AGI control problem – has remained a central challenge for research in AI safety. This paper develops and defends two arguments that provide pro tanto support for the following policy for those who worry about the AGI control problem: don’t talk about it. The first is the argument from counter-productivity, which states that unless kept secret, efforts to solve the control problem could be used by a misaligned AGI to counter those very efforts. The second is the argument from suspicion, which states that open discussion of the control problem may make humanity appear threatening to an AGI, increasing the risk that the AGI perceives humanity as a threat. I consider objections to the arguments and find them unsuccessful. Yet I also consider objections to the don’t-talk policy itself and find it inconclusive whether it should be adopted. Additionally, the paper examines whether the arguments extend to other areas of AI safety research, such as AGI alignment, and argues that they likely do, albeit not necessarily as directly. I conclude by offering recommendations on what one can safely talk about, regardless of whether the don’t-talk policy is ultimately adopted. (Sources: fil.lu.se/en/person/jakobstenseke, link.springer.com)
