1. Interview with Sam Altman, OpenAI CEO:
Josh Tyrangiel: What’s the threshold where you’re going to say, “OK, we’ve achieved AGI now”?
Sam Altman: The very rough way I try to think about it is when an AI system can do what very skilled humans in important jobs can do—I’d call that AGI. There’s then a bunch of follow-on questions like, well, is it the full job or only part of it? Can it start as a computer program and decide it wants to become a doctor? Can it do what the best people in the field can do or the 98th percentile? How autonomous is it? I don’t have deep, precise answers there yet, but if you could hire an AI as a remote employee to be a great software engineer, I think a lot of people would say, “OK, that’s AGI-ish.”
Now we’re going to move the goalposts, always, which is why this is hard, but I’ll stick with that as an answer. And then when I think about superintelligence, the key thing to me is, can this system rapidly increase the rate of scientific discovery that happens on planet Earth? (Source: bloomberg.com)
2. More from the interview:
Tyrangiel: What’s the most helpful thing the Trump administration can do for AI in 2025?
Altman: US-built infrastructure and lots of it. The thing I really deeply agree with the president on is, it is wild how difficult it has become to build things in the United States. Power plants, data centers, any of that kind of stuff. I understand how bureaucratic cruft builds up, but it’s not helpful to the country in general. It’s particularly not helpful when you think about what needs to happen for the US to lead AI. And the US really needs to lead AI. (Source: bloomberg.com)
3. A two-hour conversation with an artificial intelligence (AI) model is all it takes to make an accurate replica of someone's personality, researchers have discovered. In a new study published Nov. 15 to the preprint database arXiv, researchers from Google and Stanford University created "simulation agents" — essentially, AI replicas — of 1,052 individuals based on two-hour interviews with each participant. These interviews were used to train a generative AI model designed to mimic human behavior. To evaluate the accuracy of the AI replicas, each participant completed two rounds of personality tests, social surveys and logic games, and was asked to repeat the process two weeks later. When the AI replicas underwent the same tests, they matched the responses of their human counterparts with 85% accuracy. (Source: livescience.com; the preprint is on arXiv.)
4. Chinese venture capitalists are hounding failed founders, pursuing personal assets and adding them to a national debtor blacklist when they fail to pay up, in moves that are throwing the country’s start-up funding ecosystem into crisis. The hard-nosed tactics by risk capital providers have been facilitated by clauses known as redemption rights, included in nearly all the financing deals struck during China’s boom times. “My investors verbally promised they wouldn’t enforce them, that they had never enforced them before — and in ’17 and ’18 that was true — no one was enforcing them,” said Neuroo Education founder Wang Ronghui, who now owes investors millions of dollars after her childcare chain stumbled during the pandemic. While they are relatively rare in US venture investing, Shanghai-based law firm Lifeng Partners estimates that more than 80 per cent of venture and private equity deals in China contain redemption provisions. (Source: ft.com)