
Google DeepMind Shares Plans for AI Lab Assistants After Noam Shazeer Returns in $2.7B Deal

By James Morales
Verified by Samantha Dunn

Key Takeaways

  • Google DeepMind plans to build an “AI lab assistant” for scientific researchers.
  • Several other AI developers are also working on research-focused chatbots.
  • Google’s AI research is being boosted by the return of Noam Shazeer.

Three years after Noam Shazeer left Google to found Character.AI, the AI entrepreneur has returned courtesy of a $2.7 billion licensing agreement with the startup.

Shazeer will return to work on Google’s flagship large language model (LLM) Gemini just as the company embarks on one of its most ambitious LLM projects yet: a planned AI lab assistant announced by Google DeepMind CEO Demis Hassabis on Wednesday, Oct. 2.

Google’s Scientific Research Bot

As reported by the Financial Times, Hassabis said Google is developing “a science large language model that could be like a research assistant and maybe help you predict […] the outcome of an experiment.”

The proposed model would add to Google’s already extensive array of AI tools designed specifically for scientific research. 

These include AlphaFold, a predictive AI system that is widely used for drug discovery, and NotebookLM, an LLM-enhanced notetaking program that lets users ground the language model in their own notes and sources.

Amid a growing appetite for domain-specific chatbots, Google isn’t the only company targeting the scientific research community. Other takes on the concept include scienceOS and InstaDeep’s Laila.

The new generation of research bots combines ChatGPT-style conversational AI with specialized scientific knowledge. They can be integrated into research workflows with automatic citations and options to fine-tune with new data.

But at their core, they still share the same transformer architecture as mass-market chatbots.

The Transformer Architecture

Famously known as the T in OpenAI’s GPT, a transformer is a deep-learning architecture first described by Shazeer and seven other Google researchers in 2017.
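For readers curious about what that architecture actually computes, the paper’s central building block is scaled dot-product attention, which lets every token in a sequence weigh its relevance to every other token. Below is a minimal NumPy sketch of that operation; the function name, variable names, and toy dimensions are illustrative, not taken from any reference implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation from "Attention Is All You Need" (2017).

    Q (queries) and K (keys) have shape (seq_len, d_k);
    V (values) has shape (seq_len, d_v). Returns a weighted sum
    of values for each position in the sequence.
    """
    d_k = Q.shape[-1]
    # Similarity score between every query and every key,
    # scaled by sqrt(d_k) to keep the softmax well-behaved.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns scores into attention weights summing to 1 per row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: a 4-token sequence with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (4, 8)
```

In a full transformer, this operation is run in parallel across multiple “heads” and stacked in layers, but the weighted-sum idea above is the part the 2017 paper is best known for.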

The authors of the original landmark research paper, “Attention Is All You Need,” are some of the brightest minds in AI. With the exception of Lukasz Kaiser, who currently works at OpenAI, all went on to found their own AI ventures after leaving Google.

From Shazeer’s Character.AI to Aidan Gomez’s Cohere, startups founded by the original transformer inventors have sought to build on their legacy, developing new LLM designs and applications. 

Chatbot Visionaries Return

Shazeer’s return to Google was one of the conditions of the Big Tech firm’s $2.7 billion investment in Character.AI, which was announced in August of this year.

The comeback marks a significant turnaround for Shazeer, who, after working at Google for over two decades, publicly criticized the firm for being too risk-averse in its AI releases.

Specifically, he expressed frustration that the company didn’t initially release LaMDA, an LLM developed by fellow Character.AI co-founder Daniel De Freitas. 

Three years later, however, Shazeer and De Freitas are back to work on LaMDA’s successor, Gemini.

Alongside a small team of leading researchers who have come on board from Character.AI, Google has also secured a non-exclusive license to use the startup’s current LLM technology as part of the deal.
