
Google’s AI Search Engine Tool Threatens OpenAI by Cutting Out The Middleman

Last Updated May 10, 2024 11:43 AM
Giuseppe Ciccomascolo

Key Takeaways

  • OpenAI has launched a new version of ChatGPT, dismissing speculation that it would unveil a new search engine.
  • Google has strengthened its leadership in online searches by introducing new AI features at its I/O conference.
  • Rumors had circulated about OpenAI’s next product launch and whether it would release a search product.

Just one day after OpenAI launched a new version of ChatGPT, Google unveiled its latest generative artificial intelligence (AI) features during its I/O conference on Tuesday, May 14, 2024. The Alphabet-controlled company added new AI-powered products to its already market-leading search engine, further strengthening its leadership in this sector.

Google’s leadership was said to be under threat from OpenAI’s next product, but Sam Altman’s company instead introduced GPT-4o, a faster iteration of its GPT-4 language model, dismissing speculation about a rival search product.

Google Introduces AI Overviews in Search

At its annual I/O developer conference, Google unveiled its ambitious integration of generative AI directly into Google Search, marking a significant leap forward in information retrieval technology.

Furthermore, Google announced an expansion of its AI-driven search experience, which it says will change how people explore and engage with information.

Among the forthcoming enhancements are:

  • Customizable overviews, letting users tailor the depth of information to their preferences.
  • Multi-step reasoning capabilities to tackle intricate queries.
  • Integrated planning functionalities for tasks such as meal preparation and travel arrangements.
  • AI-curated search result pages offering curated insights and inspiration.
  • The ability to conduct visual search queries using uploaded videos and images.

Google is also integrating AI Overviews from its experimental Search Labs directly into its mainstream search results pages. The rollout will give hundreds of millions of US users access to AI Overviews, with Google projecting over 1 billion users globally by this year’s end.

Soon, searchers will be able to fine-tune the language and level of detail within AI Overviews, tailoring the information to their specific needs and comprehension levels and making the search experience more accessible and useful.

New AI Features

Among the array of new capabilities, Gemini’s multi-step reasoning stands out: it lets users pose intricate questions and receive detailed, nuanced responses.

Beyond answering complex questions, Google is extending Search into daily life management, offering help with meal planning and vacation arrangements. A query such as “Craft a 3-day meal plan suitable for a group, emphasizing simplicity” returns tailored recipes sourced from across the web.

Google is also introducing AI-organized results pages, arranged under distinct, AI-generated headings that surface a range of insights and content. Debuting first for dining and recipes, the feature will expand to domains such as entertainment, travel, literature, accommodation, and retail.

Moreover, Google is opening Search to questions asked through video. With visual search, users can go beyond text-based queries and capture their questions in recorded videos.

Liz Reid, vice president of Google Search, said: “Now, with generative AI, Search can do more than you ever imagined. So you can ask whatever’s on your mind or whatever you need to get done — from researching to planning to brainstorming — and Google will take care of the legwork.”

OpenAI Launches GPT-4o

OpenAI has unveiled a faster and more cost-effective iteration of its foundational AI model, the powerhouse behind ChatGPT, as it strives to maintain its edge in an ever-expanding market.

During a live event streamed on Monday, May 13, 2024, OpenAI introduced GPT-4o, an upgraded version of its GPT-4 model, now over a year old. Trained on extensive internet-derived data, the large language model gains capabilities across text, audio, and image processing in real time. The updates are set to roll out in the coming weeks.

OpenAI reports that, when prompted verbally, the system can deliver audio responses within milliseconds, promising smoother conversational exchanges. In a demonstration, OpenAI researchers and Chief Technology Officer Mira Murati conversed with the new ChatGPT solely through their voices, showcasing the tool’s conversational prowess. During the presentation, the chatbot translated speech between languages seemingly instantaneously and even delighted the audience by singing a portion of a story on request.

The forthcoming update will bring several features previously reserved for paid subscribers to free users, including web searches for queries, conversations with the chatbot in various voices, and the ability to instruct it to store information for future recall.

In an unusual move, OpenAI CEO Sam Altman penned a blog post reflecting on the transformation brought about by GPT-4o. He remarked that while the original ChatGPT hinted at the potential of language-based computer interaction, using GPT-4o evokes a “viscerally different” experience.

“It feels like AI from the movies, and it’s still a bit surprising to me that it’s real,” he said. “Achieving human-level response times and expressiveness marks a significant leap forward.”

What’s New

GPT-4o brings a substantial upgrade for free ChatGPT users, who previously relied on the text-only GPT-3.5 model. They can now tap into the robust capabilities of GPT-4, including image and document analysis. The upgrade opens the door to a smarter model, web browsing, data analysis and charting, access to the GPT Store, and a memory feature that lets the app retain user information and preferences, whether typed or spoken.

With GPT-4o, ChatGPT Free users gain access to a suite of features: intelligence on par with GPT-4, responses drawing on both the model and the web, data analysis and graph generation, real-time discussion of photos, file uploads for summarization, writing assistance, or analysis, access to GPTs and the GPT Store, and an enhanced experience with Memory.

During the event demonstration, OpenAI showcased ChatGPT, empowered by GPT-4o as a real-time translation tool, seamlessly converting spoken Italian to English and vice versa. The company emphasized that ChatGPT now supports over 50 languages across sign-up, login, user settings, and more.

While GPT-4o will eventually be accessible to free ChatGPT users, OpenAI is prioritizing paid subscribers: GPT-4o is rolling out to ChatGPT Plus and Team users first, with availability for Enterprise users to follow shortly. The rollout to ChatGPT Free users also begins today, albeit with usage limits. Plus users will enjoy message limits up to five times higher than free users, while Team and Enterprise users will receive even more generous limits.

New Search Engine Speculations Dismissed

OpenAI had been expected to unveil an AI-powered search product on Monday, according to sources familiar with the matter — a move that would have intensified its competition with search giant Google.

Speculation suggested that OpenAI’s new offering would have let users pose questions to ChatGPT and receive responses with citations from online sources such as news articles, blog posts, and Wikipedia, with images included when relevant to the query.

The announcement’s timing, potentially a day before the start of Google’s annual I/O conference on Tuesday, May 14, 2024, suggested a strategic move. The rumored search product would have built upon the flagship ChatGPT product, allowing ChatGPT to access information directly from the web and provide citations. ChatGPT, known for its human-like responses to text prompts, has been considered by industry observers as an alternative for gathering online information. However, it has faced challenges in delivering accurate and real-time web information.

OpenAI’s Previous Effort With Microsoft’s Bing

Observers have questioned the potential impact on OpenAI’s partnership with Microsoft, its primary backer. Last week’s news builds on a decision the AI startup made last September, when it introduced a browsing feature that allows websites to control how ChatGPT interacts with them.

OpenAI made browsing available to Plus and Enterprise users and said it would expand it to all users soon. To enable the feature, users choose Browse with Bing in the selector under GPT-4.

The startup also announced a significant update enabling ChatGPT to engage in voice conversations with users and interact with them using images, bringing it closer to popular AI assistants like Apple’s Siri.

Previously, OpenAI had tested a feature within its premium ChatGPT Plus offering that allowed users to access the latest information through the Bing search engine. However, OpenAI later disabled this feature due to concerns that it could potentially enable users to bypass paywalls.
