
Oprah Winfrey: ‘AI and the Future of Us’ Highlights Deepfake Dangers – Coincides With OpenAI’s New Model Launch

Last Updated September 13, 2024 12:02 PM
Giuseppe Ciccomascolo

Key Takeaways

  • Oprah Winfrey’s new special, “AI and the Future of Us,” aired on Thursday, Sept. 12.
  • Guests included OpenAI CEO Sam Altman and Microsoft co-founder Bill Gates.
  • The premiere occurred the same day OpenAI launched its new o1 model.

Oprah Winfrey’s latest special, “AI and the Future of Us,” explored the impact of artificial intelligence (AI) on our daily lives. Deepfakes, reasoning models, online safety, and regulations were just some of the matters touched on during the one-hour show that premiered on ABC.

Featuring in-depth interviews with industry leaders like OpenAI’s Sam Altman and Microsoft co-founder Bill Gates, the program offered a broad view of AI’s potential to revolutionize science, health, education, and beyond.

AI and the Future of Us

“AI and the Future of Us” is a one-hour primetime event that aims to offer a deep dive into advancements in AI technology while equipping viewers with the knowledge to navigate the fast-changing digital world.

“AI and the Future of Us: An Oprah Winfrey Special provides a serious, entertaining, and meaningful foundation for every viewer to understand AI and empowers everyone to be a part of one of the most important global conversations of the 21st century,” ABC, the network that broadcast the show, said in its presentation.

Altman’s Push for Regulation

OpenAI CEO Sam Altman was the first to answer Oprah Winfrey’s questions. The tech entrepreneur claimed that today’s AI systems can learn underlying concepts from the data on which they are trained. “We are showing the system a thousand words in a sequence and asking it to predict what comes next,” he explained. In learning to predict, he said, the system also learns the underlying concepts.

While systems like ChatGPT and OpenAI’s new o1 can predict the most likely next words in a sentence, they do so based on statistical patterns, not genuine understanding. These systems are essentially machines that learn from data, lacking the intentionality or comprehension often attributed to human intelligence.
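For readers curious what “predicting the next word from statistical patterns” looks like in practice, here is a deliberately tiny sketch: a bigram counter in Python that picks the most frequent continuation it has seen in its training text. This is an illustration only, not how OpenAI builds its models; modern systems use neural networks trained on vastly larger corpora, but the basic idea of learning statistical patterns over word sequences is the same.

```python
from collections import Counter, defaultdict

# Toy illustration (not OpenAI's actual method): a bigram model that predicts
# the next word purely from co-occurrence counts in its training text. It has
# no understanding of meaning, only statistics.

corpus = "the cat sat on the mat and the cat chased the dog".split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word seen in training."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))   # 'cat' -- the most frequent continuation
print(predict_next("sat"))   # 'on'
```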

Oprah Winfrey interviews OpenAI’s CEO Sam Altman. | Credit: ABC

Altman also emphasized the urgent need for safety testing of these systems. He called for government regulations similar to those in place for aircraft or new medicines, stating that he personally engages in frequent discussions with government officials on this topic. Altman’s push for regulation clashes with OpenAI’s opposition to California’s AI safety bill, SB 1047. The company behind ChatGPT argued that it would stifle innovation. However, former OpenAI employees have supported the bill, emphasizing the need for safeguards to prevent potential harm.

Oprah also challenged Altman on his leadership role at OpenAI, questioning why people should trust him. Altman avoided a direct answer, stating that his company is working to build trust over time. Previously, he had explicitly stated that people should not rely on any single individual to ensure AI benefits humanity.

In response to a news headline suggesting he was the “most powerful and dangerous man in the world,” Altman dismissed the notion but acknowledged his responsibility to guide AI development towards positive outcomes for humanity.

Gates Shares Optimistic Vision

Winfrey then spoke with Microsoft co-founder Bill Gates about his optimistic outlook on the potential of artificial intelligence to revolutionize education and healthcare.

Gates envisions AI as a valuable assistant in medical appointments, transcribing conversations, suggesting prescriptions, and ensuring accurate documentation. However, he overlooked the significant risk of bias that can arise from poorly trained AI systems, which can reinforce harmful stereotypes and lead to misdiagnosis and unequal treatment.

Bill Gates shared his view on AI’s impact on everyday life. | Credit: ABC

Recognizing these challenges, the UN Educational, Scientific and Cultural Organization (UNESCO) has called for government regulations to govern the use of AI in education. These regulations would include age restrictions, data protection measures, and safeguards for user privacy.

Focus On Deepfakes

In a segment dedicated to deepfakes and disinformation, Oprah Winfrey discussed the rising threat of AI-driven deception. YouTube creator and technologist Marques Brownlee compared sample footage from OpenAI’s Sora to older AI-generated content, demonstrating how quickly synthetic media has advanced. The Sora footage was significantly more realistic.

The discussion then shifted to an interview with FBI Director Christopher Wray, who recounted his first encounter with AI-enhanced deepfakes: he described being shown a video of himself saying things he had never said.

Wray also discussed AI-aided sextortion, which has risen sharply in recent years, describing how perpetrators use AI-generated compromising images to target and blackmail young people.

Turning to the upcoming U.S. presidential election, Wray expressed concerns about the potential for AI-powered disinformation campaigns. While he didn’t advocate for panic, he emphasized the need for increased vigilance and caution in evaluating information from social media. He warned that seemingly authentic content could be generated by foreign adversaries.

A Statista poll revealed that many U.S. respondents encountered misleading information online in late 2023. This year, AI-generated images of candidates Vice President Kamala Harris and former President Donald Trump have garnered millions of views on social networks, further underscoring the threat of AI-driven misinformation.

OpenAI o1 Debut

The new Oprah Winfrey TV show premiered the same day OpenAI launched o1, “a new series of reasoning models for solving hard problems,” the company said. It remains unclear whether Altman deliberately timed the launch to coincide with his televised interview with Winfrey.

Alongside o1, OpenAI introduced a new safety training approach designed to ensure its AI models adhere to ethical guidelines. By teaching the models to reason about safety rules in context, the company aims to improve their ability to apply these principles effectively.

To assess model safety, OpenAI said it conducted rigorous testing, including attempts to “jailbreak” the models by circumventing their safety rules. The latest model, o1-preview, demonstrated significant improvement in these tests compared to its predecessor, GPT-4o.

OpenAI launched o1 on the same day as Oprah Winfrey’s AI special. | Credit: OpenAI

OpenAI has strengthened its internal safety processes, governance, and collaboration with governments to match the growing capabilities of its models. This includes testing and evaluation using the Preparedness Framework, rigorous red teaming, and oversight from the Safety & Security Committee.
