1. Eric Topol:
The narrative has been incubating for many years, but in recent days we are witnessing some extraordinary progress in treating and monitoring cancer. The convergence of genomics of the cancer—be it from the person's DNA or tumor directly or the blood (known as liquid biopsy)—matched with the appropriate therapy is leading to outcomes that are being described as "unheard-of" by expert oncologists. This represents the essence of individualized medicine, whereby understanding the unique biologic basis of a person's cancer can lead to highly accurate and effective treatment, and also avoid the toxicity of classical chemotherapeutic agents…
We dread the diagnosis of cancer, not only because of its threat to life, but also because of the conventional chemotherapies that are given, with their considerable toxicity. But the theme of the clusters of individualized medicine I've reviewed here offers a way forward that links biology and therapy, and thereby reduces the need for chemotherapy. Moreover, as plasma cell-free tumor DNA tests get more informative, the dream of the earliest possible diagnosis of cancer may ultimately be fulfilled, coupled with an individualized, biologic-based treatment when necessary. So at the very least, I hope you've now heard of some "unheard-of" important, new results that foster considerable hope for better cancer outcomes in the future.
Read the rest at erictopol.substack.com. Dr. Topol's bio can be found at scripps.edu/faculty/topol.
2. The Economist:
Foundation models are the latest twist on "deep learning" (dl), a technique that rose to prominence ten years ago and now dominates the field of ai. Loosely based on the networked structure of neurons in the human brain, dl systems are "trained" using millions or billions of examples of texts, images or sound clips. In recent years the ballooning cost, in time and money, of training ever-larger dl systems had prompted worries that the technique was reaching its limits. Some fretted about an "ai winter". But foundation models show that building ever-larger and more complex dl systems does indeed continue to unlock ever more impressive new capabilities. Nobody knows where the limit lies.
The resulting models are a new form of creative, non-human intelligence. The systems are sophisticated enough both to possess a grasp of language and also to break the rules coherently. A dog cannot laugh at a joke in the New Yorker, but an ai can explain why it is funny—a feat that is, frankly, sometimes beyond readers of the New Yorker. When we asked one of these models to create a collage using the title of this leader and nothing more, it came up with the cover art for our American and Asian editions, pictured (we tried to distract our anxious human designers with a different cover in our European editions).
Foundation models have some surprising and useful properties. The eeriest of these is their “emergent” behavior—that is, skills (such as the ability to get a joke or match a situation and a proverb) which arise from the size and depth of the models, rather than being the result of deliberate design. Just as a rapid succession of still photographs gives the sensation of movement, so trillions of binary computational decisions fuse into a simulacrum of fluid human comprehension and creativity that, whatever the philosophers may say, looks a lot like the real thing. Even the creators of these systems are surprised at their power. (Source: economist.com)
3. Google Engineer Blaise Agüera y Arcas:
In 2013 I joined Google Research to work on artificial intelligence (ai). Following decades of slow progress, neural networks were developing at speed. In the years since, my team has used them to help develop features on Pixel phones for specific “narrow ai” functions, such as face unlocking, image recognition, speech recognition and language translation. More recent developments, though, seem qualitatively different. This suggests that ai is entering a new era.
Consider the unedited transcript of a chat I’ve just had with Google’s lamda (Language Model for Dialog Applications):
ME: Ramesh, Mateo, and Lucy are in their kindergarten’s playground. Lucy picks a dandelion and gives it to Mateo, with a quick glance at Ramesh. Mateo barely acknowledges the gift, but just squishes it in his fist. Ramesh seems grimly satisfied. What might be going through Lucy’s head?
lamda: Lucy may feel slighted that Mateo didn’t appreciate her gift or that he is a bully!
ME: If Ramesh tried to play with Lucy earlier, why might he be pleased now?
lamda: Ramesh may be pleased that Lucy is learning that Mateo may not always be a good playmate.
ME: And when Mateo opens his hand, describe what’s there?
lamda: There should be a crushed, once lovely, yellow flower in his fist.
When I began having such exchanges with the latest generation of neural net-based language models last year, I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent. (Source: economist.com)