
Sam Altman Predicts an AI-Fueled Scientific Explosion — Hugging Face’s Chief Scientist Isn’t Convinced

By Kurt Robson
Edited by Samantha Dunn
Key Takeaways
  • Thomas Wolf, chief science officer at AI firm Hugging Face, has cast doubt on Sam Altman’s vision of AI-powered scientific breakthroughs.
  • OpenAI CEO Altman believes “superintelligent” AI could “massively accelerate scientific discovery.”
  • Wolf is not alone in his skepticism of AI’s scientific potential.

As the race to build ever more powerful artificial intelligence continues, OpenAI CEO Sam Altman believes decades of AI progress will unfold within the next few years.

However, not everyone in the industry is as optimistic. Thomas Wolf, chief science officer at AI firm Hugging Face, believes that today’s AI isn’t built for true scientific breakthroughs and instead risks becoming “yes-men on servers.”

Wolf warned that unless AI learns to challenge assumptions and ask bold new questions, it won’t fuel the scientific explosion Altman predicts is just around the corner.

Hugging Face Not Convinced by AI’s Scientific Explosion

In a lengthy post published to X on Thursday, March 8, Hugging Face’s chief science officer argued that AI is like a group of “straight A” students.

“The main mistake people usually make is thinking Newton or Einstein were just scaled-up good students, that a genius comes to life when you linearly extrapolate a top-10% student,” Wolf wrote.

Wolf said this misses the most crucial aspect of science: the skill to ask the right questions and to challenge one’s own learning.

“A real science breakthrough is Copernicus proposing, against all the knowledge of his days (in ML terms, we would say ‘despite all his training dataset’), that the earth may orbit the sun rather than the other way around.”

Wolf notes that recent tests of AI usually feature extremely difficult questions but come with “clear, closed-end answers.”

“However, real scientific breakthroughs will come not from answering known questions, but from asking challenging new questions and questioning common conceptions and previous ideas,” Wolf wrote.

Sam Altman’s AI Vision

Wolf’s perspective casts doubt on Altman’s grand vision for AI. Last month, Altman wrote on his blog that he believed that “superintelligent” AI could “massively accelerate scientific discovery.”

“…we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential,” Altman said.

The OpenAI CEO said that the company was beginning to roll out AI agents, which would eventually begin to feel like virtual co-workers.

“Imagine that this agent will eventually be capable of doing most things a software engineer at a top company with a few years of experience could do, for tasks up to a couple of days long,” he wrote.

Altman claimed the agent “will not have the biggest new ideas, it will require lots of human supervision and direction, and it will be great at some things but surprisingly bad at others.”

Regardless, Altman claimed that AI, in some ways, “may turn out to be like the transistor economically—a big scientific discovery that scales well and that seeps into almost every corner of the economy.”

AI Reasoning

Wolf believes that in order to combat a possibly narrow-minded future of AI, we do not “need a system that knows all the answers […] but rather one that can ask questions nobody else has thought of or dared to ask.”

“One that writes ‘What if everyone is wrong about this?’ when all textbooks, experts, and common knowledge suggest otherwise,” Wolf added.

Wolf is not alone in his concerns for future AI. François Chollet, an ex-Google engineer, expressed skepticism about AI’s ability to generate new reasoning in novel situations.

Chollet believes that while AI models may excel at memorizing and reproducing known reasoning patterns, they lack the capacity to adapt beyond their training data.

“AGI is going to be a kind of super-competent scientist,” he told Time Magazine.

Kurt Robson is a London-based reporter at CCN with a diverse background across several prominent news outlets. Having transitioned into the world of technology journalism several years ago, Kurt has developed a keen fascination with all things AI. Kurt’s reporting blends a passion for innovation with a commitment to delivering insightful, accurate and engaging stories on the cutting edge of technology.