“In my dreams I would have the perfect AI generated email first thing every morning with the most substantive articles of interest to me in foreign affairs, domestic policy, business, tech and media distilled with on point commentary. Oh wait! I already have it with News Items by John Ellis.” — Annie Lamont, co-founder of Oak HC/FT, a leading venture capital firm.
1. The Wall Street Journal:
Trump administration officials ordered eight senior FBI employees to resign or be fired, and asked for a list of agents and other personnel who worked on investigations into the Jan. 6, 2021, attack on the U.S. Capitol, people familiar with the matter said, a dramatic escalation of President Trump’s plans to shake up U.S. law enforcement.
On Friday, the Justice Department also fired roughly 30 prosecutors at the U.S. attorney’s office in Washington who have worked on cases stemming from the Capitol riot, according to people familiar with the move and a Justice Department memo reviewed by The Wall Street Journal. The prosecutors had initially been hired for short-term roles as the U.S. attorney’s office staffed up for the wave of more than 1,500 cases that arose from the attack by Trump supporters.
Trump appointees at the Justice Department also began assembling a list of FBI agents and analysts who worked on the Jan. 6 cases, some of the people said. Thousands of employees across the country were assigned to the sprawling investigation, which was one of the largest in U.S. history and involved personnel from every state. Acting Deputy Attorney General Emil Bove gave Federal Bureau of Investigation leadership until noon on Feb. 4 to identify personnel involved in the Jan. 6 investigations and provide details of their roles. Bove said in a memo he would then determine whether further disciplinary action is necessary.
Acting FBI Director Brian Driscoll said in a note to employees that he would be on that list, as would acting Deputy Director Robert Kissane.
“We are going to follow the law, follow FBI policy and do what’s in the best interest of the workforce and the American people—always,” Driscoll wrote.
Across the FBI and on Capitol Hill, the preparation of the list stirred fear and rumors of more firings to come—potentially even a mass purge. (Source: wsj.com, italics mine. The big question is whether “the list” will include FBI informants.)
2. OpenAI Chief Executive Sam Altman said he believes his company should consider giving away its AI models, a potentially seismic strategy shift in the same week China’s DeepSeek has upended the artificial-intelligence industry. DeepSeek’s AI models are open-source, meaning anyone can use them freely and alter the way they work by changing the underlying code. In an “ask-me-anything” session on Reddit Friday, a participant asked Altman if the ChatGPT maker would consider releasing some of the technology within its AI models and publish more research showing how its systems work. Altman said OpenAI employees were discussing the possibility. “(I) personally think we have been on the wrong side of history here and need to figure out a different open source strategy,” Altman responded. He added, “not everyone at OpenAi shares this view, and it’s also not our current highest priority.” (Source: wsj.com)
3. Quanta Magazine:
On December 17, 1962, Life International published a logic puzzle consisting of 15 sentences describing five houses on a street. Each sentence was a clue, such as “The Englishman lives in the red house” or “Milk is drunk in the middle house.” Each house was a different color, with inhabitants of different nationalities, who owned different pets, and so on. The story’s headline asked: “Who Owns the Zebra?” Problems like this one have proved to be a measure of the abilities — limitations, actually — of today’s machine learning models.
Also known as Einstein’s puzzle or riddle (likely an apocryphal attribution), the problem tests a certain kind of multistep reasoning. Nouha Dziri, a research scientist at the Allen Institute for AI, and her colleagues recently set transformer-based large language models (LLMs), such as ChatGPT, to work on such tasks — and largely found them wanting. “They might not be able to reason beyond what they have seen during the training data for hard tasks,” Dziri said. “Or at least they do an approximation, and that approximation can be wrong.”
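For readers curious what makes these puzzles mechanically tractable for a computer yet hard for LLMs: each one is a constraint-satisfaction problem, solvable by exhaustively checking assignments against the clues. Below is a minimal sketch in Python using a scaled-down, three-house version with invented clues (not the actual 15-clue Life International puzzle); the nationalities, colors, and clues here are illustrative assumptions only.

```python
from itertools import permutations

def solve():
    """Brute-force a toy zebra-style puzzle with three houses.

    Invented clues (not from the original puzzle):
      1. The Norwegian lives in the first house.
      2. The Englishman lives in the red house.
      3. The green house is immediately to the right of the red house.
    """
    nats = ("Norwegian", "Englishman", "Spaniard")
    cols = ("red", "green", "blue")
    for nat in permutations(nats):        # nat[i] = nationality in house i
        for col in permutations(cols):    # col[i] = color of house i
            if nat[0] != "Norwegian":                       # clue 1
                continue
            if col[nat.index("Englishman")] != "red":       # clue 2
                continue
            if col.index("green") != col.index("red") + 1:  # clue 3
                continue
            return list(zip(nat, col))    # unique satisfying assignment
    return None

print(solve())
# → [('Norwegian', 'blue'), ('Englishman', 'red'), ('Spaniard', 'green')]
```

The full five-house puzzle works the same way, just over a larger search space (5! permutations per attribute category). That a few nested loops dispatch what trips up a large language model is exactly the gap Dziri and her colleagues are probing: the solver searches systematically, while an LLM approximates multistep deduction from patterns in its training data.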