Gary Marcus: LLMs & the Future of AI
Gary Marcus, a leading AI skeptic, casts doubt on current generative AI, emphasizing its limitations and potential harms. He advocates for urgent AI regulation and oversight, highlighting the technology’s lack of true understanding and reasoning capabilities. The cognitive scientist, with decades of experience, believes the field’s focus on large language models (LLMs) is shortsighted, pushing for neuro-symbolic AI and a balanced approach. Marcus criticizes the current state of U.S. AI regulation, calling for a process akin to the FDA’s drug approval system.
Gary Marcus: AI Skeptic on Regulation and the Future of AI
Cognitive scientist Gary Marcus, a longtime artificial intelligence researcher, remains a prominent skeptic of generative AI. Speaking at Web Summit Vancouver, Marcus reiterated his concerns about the technology’s limitations and potential harms.
Marcus, who studied under Steven Pinker and founded machine learning startups, has been involved in AI for decades. He expressed disillusionment with the field’s focus on large language models (LLMs), arguing they require supplementation with symbolic AI, which uses logic and reasoning.
“I think we’re very early in the history of A.I.,” Marcus said, emphasizing the need for regulation to mitigate emerging risks associated with artificial intelligence.
Marcus recalled his early interest in AI, sparked at age 10 when he learned to program. He described explaining computer simulations on a TV show called Ray’s Way.
Despite his long involvement, Marcus said he has “never been fully optimistic” about AI’s progress. He believes fundamental questions about knowledge representation and acquisition remain unanswered.
Upon ChatGPT’s rise, Marcus immediately predicted its limitations, including its tendency to “hallucinate” and make errors. He argues these issues persist in subsequent models, making them unreliable and potentially useful for spreading misinformation.
Marcus champions neuro-symbolic AI, combining neural networks with classical AI approaches. He cited AlphaFold as an early example. He also stated that artificial general intelligence (A.G.I.) is unlikely to be achieved this decade, and possibly for longer.
Marcus lamented the current state of AI regulation in the U.S., describing it as “fully fallen apart.” He expressed disappointment that bipartisan support for regulation has waned, increasing the risk of cybercrime and discrimination.
He continues to advocate for a regulatory process similar to the FDA’s drug approval system, arguing that society should have a say in the release of technologies that could cause serious harm.
Marcus also criticized federal funding cuts to science, calling them “the best thing that ever happened to China” and detrimental to U.S. innovation in artificial intelligence.
What’s next
Marcus plans to continue advocating for responsible AI development and regulation, pushing for a more balanced approach that combines neural networks with symbolic AI to address the limitations of current large language models.
