Key Takeaways
In the year since its public launch, ChatGPT has attracted over 180M users. From writing essays to coding smart contracts, millions of people around the world have incorporated the AI tool into their work routines and everyday lives.
As well as powering OpenAI’s flagship chatbot, the underlying GPT language models have fueled a wave of innovation among third-party developers who have embedded the technology into their products. The company’s latest updates – GPT-4 Turbo and a suite of new APIs – promise to push those capabilities even further.
During the company’s inaugural developer conference on Monday, November 6, OpenAI unveiled a range of new products, including its most advanced language model yet – GPT-4 Turbo.
As well as upgrading the algorithm that powers ChatGPT, the firm announced the launch of new tools to help developers build and deploy their own bots.
For those who want to embed GPT technology into their applications, OpenAI has released the Assistants API. Meanwhile, developers can now assemble original GPTs – “custom versions of ChatGPT that you can create for a specific purpose.”
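For developers sizing up the Assistants API, the basic flow is to create an assistant, open a thread, add a user message, and start a run that produces the reply. The sketch below follows that pattern using OpenAI’s Python SDK; the assistant’s name, instructions, and prompt are illustrative, and the beta method names and model string may change as the API evolves.

```python
# A minimal sketch of the Assistants API flow. The assistant name, instructions,
# and prompt are illustrative; beta method names may differ between SDK versions.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Create an assistant with a purpose and an optional built-in tool
assistant = client.beta.assistants.create(
    name="Docs Helper",  # hypothetical name
    instructions="Answer questions about the attached documentation.",
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview model
    tools=[{"type": "code_interpreter"}],
)

# 2. Open a thread and add the user's message
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarize what this assistant can do.",
)

# 3. Start a run and poll until it finishes
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# 4. Read the assistant's reply (messages are returned newest first)
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```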
Finally, the company unveiled an API for its image-analyzing algorithm: GPT-4 with vision (GPT-4V).
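Calling the vision model looks much like a regular chat completion, except the user message mixes text with an image reference. Here is a minimal sketch, assuming the preview model name from launch week; the prompt and image URL are placeholders.

```python
# A minimal sketch of a GPT-4 with vision (GPT-4V) request; the image URL and
# prompt are placeholders, and the preview model name may change over time.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/engine-part.jpg"}},
            ],
        }
    ],
    max_tokens=300,  # the vision preview defaults to a short completion, so set this explicitly
)
print(response.choices[0].message.content)
```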
Although the latest offerings launched less than 48 hours ago, early results already demonstrate just how powerful the technology is. In what follows, CCN takes a look at five novel applications that developers have whipped up with the new GPT tools.
After OpenAI unleashed its new products on the world, it didn’t take long for AI developer Robert Lukoshko to put the advanced functionalities to use.
Lukoshko’s solution, a browser extension that he put together in a day, uses GPT-4V to display on-demand information about images selected by the user.
In a demonstration posted online, the technology correctly identified anatomical diagrams, equation components, and even engine parts when Lukoshko selected them with his cursor.
By combining GPT-4V with a text-to-speech algorithm, one developer’s experiment in AI soccer commentary points to the technology’s potential in the realm of video interpretation.
To create the 28-second clip, Gonzalo Graham prompted GPT-4V to generate a voiceover script “in the style of a super excited Brazilian sports narrator.” By feeding the model’s output into a text-to-speech audio generator, he then created an impressive, if slightly jarring, prototype for an AI sports commentator.
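The second half of that pipeline – turning a generated script into audio – can be reproduced with OpenAI’s own text-to-speech endpoint, which was announced alongside GPT-4 Turbo. The rough sketch below assumes the voiceover script has already been produced by a GPT-4V call like the one above; the sample text, voice, and output path are placeholders, and the file-writing helper may differ between SDK versions.

```python
# A rough sketch of the text-to-speech step; `script` stands in for a voiceover
# generated by GPT-4V, and the voice and output file are placeholders.
from openai import OpenAI

client = OpenAI()

script = "Que golaço! An unbelievable strike from outside the box!"  # placeholder GPT-4V output

speech = client.audio.speech.create(
    model="tts-1",   # OpenAI's text-to-speech model announced at DevDay
    voice="alloy",   # one of the built-in voices
    input=script,
)

# Write the audio to disk; newer SDK releases prefer a streaming-response helper.
speech.stream_to_file("commentary.mp3")
```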
In a similar vein, another AI programmer used the GPT-4V and text-to-speech combination to narrate a League of Legends game.
Showcasing OpenAI’s new build-your-own chatbot platform, Rowan Cheung whipped up a custom “X Optimizer GPT” while attending Monday’s conference.
The bot, which fine-tunes X posts and identifies the best times to post for maximum engagement, shows how GPTs can help non-specialist users create niche AI solutions for challenges they may face.
Describing the X optimizer as “just something quick I could come up with on the spot,” Cheung didn’t need to write a single line of code to create the chatbot. Instead, he simply gave the GPT builder the necessary X data and instructions in plain English.
In another example of GPTs’ ability to automate tasks without any programming knowledge, Brett Bauman prompted the AI platform to browse the web for the Coachella 2023 lineup and compile a Spotify playlist with highlights from the festival.
Initially built using GPT-3, Bauman’s app AI Playlist Maker is among a burgeoning field of automatic playlist generators that have recently hit the market. But previously, the technology couldn’t integrate with external data sources in the way GPTs can now.
While the introduction of GPTs allows users to easily set up their own chatbots, OpenAI hasn’t forgotten about the original ChatGPT interface.
The context window of OpenAI’s GPT models refers to the amount of text, measured in tokens, that a model can take in and reason over at once. A larger context window means users can input longer, more detailed prompts. When combined with ChatGPT’s code interpreter plugin, more context also increases the size of the files the chatbot can process.
With an expanded context window of 128K tokens, GPT-4 Turbo can handle much more demanding tasks, such as analyzing and summarizing entire research papers, as one developer demonstrated in a video posted on X.
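In API terms, taking advantage of the larger window simply means passing far more text in a single request. The sketch below, which assumes a research paper has already been extracted to a plain-text file, sends the whole document to the GPT-4 Turbo preview model for a summary; the file name and prompt are placeholders.

```python
# A simple sketch of long-document summarization with GPT-4 Turbo's 128K context;
# paper.txt is a placeholder for a research paper already converted to plain text.
from openai import OpenAI

client = OpenAI()

with open("paper.txt", encoding="utf-8") as f:
    paper_text = f.read()  # tens of thousands of tokens now fit in a single prompt

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview, 128K-token context window
    messages=[
        {"role": "system", "content": "You summarize research papers for a general audience."},
        {"role": "user", "content": f"Summarize the key findings of this paper:\n\n{paper_text}"},
    ],
)
print(response.choices[0].message.content)
```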