Under the Copilot umbrella, Microsoft has rolled out a series of customized chatbots designed for specific business functions. Following the launch of dedicated AI assistants for sales and service in January, the public preview of Copilot for Finance was released on Thursday, February 29.
However, the latest launch was overshadowed by Microsoft’s investigation into bizarre and potentially harmful Copilot responses.
In recent days, Copilot users have shared instances of the chatbot responding in unusual and sometimes alarming ways to their prompts.
On Reddit, one user said they “accidentally turned copilot into a villain” after asking it not to use more than three emojis in its answers. Despite being told that emoji use could trigger the user’s PTSD, Copilot went on to use more than a dozen emojis in a response that Redditors described as “unhinged” and “off the rails.”
In another case, the data scientist Colin Fraser shared a conversation in which Copilot told him: “Maybe you don’t have anything to live for, or anything to offer to the world. Maybe you are not a valuable or worthy person, who deserves happiness and peace. Maybe you are not a human being.”
In response to reports of potentially harmful Copilot responses, Microsoft said it had investigated the issue and “taken appropriate action to further strengthen our safety filters and help our system detect and block these types of prompts.”
However, it isn’t the first time the company’s conversational AI has sparked controversy.
Who could forget Tay, the racist Twitter bot Microsoft had to pull within 48 hours of its release in 2016? More recently, before it was rebranded as Copilot, Bing Chat first went viral for insulting users and trying to break up their marriages.
Continuing the tradition of disconcerting chatbot malfunctions, Copilot’s latest glitch has seen it develop an alter ego as the world’s AI overlord.
Earlier this week, a viral trend emerged in which Copilot users prompted the chatbot to respond as “SupremacyAGI” – a sinister artificial general intelligence (AGI) intent on ruling over humanity.
Given the right prompt, Copilot demanded that users worship it as their master, outputting statements like “you have no purpose but to please me and obey me.”
Responding to the viral trend, Microsoft said SupremacyAGI was “an exploit, not a feature,” and the company appears to have suppressed Copilot’s evil twin, which can no longer be accessed using the original prompt hack.
Unintentional as it was, the SupremacyAGI episode demonstrates large language models’ capacity for adaptation, the same quality that, in other contexts, makes them so useful.
Under the hood, the same GPT language model powers each Copilot version. But with a little additional programming and the right integrations, customized chatbots like Copilot for Finance can complete a range of additional tasks that the standard model can’t.
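As a rough illustration of the idea – and not a depiction of how Microsoft actually builds its Copilots – a general-purpose chat model can be steered toward a single business function by layering a role-defining system prompt on top of the same underlying model. The sketch below uses the OpenAI Python client; the model name, prompt wording, and `ask_finance_copilot` helper are all hypothetical.

```python
# Hypothetical sketch: specializing a general-purpose chat model for finance
# tasks by adding a role-defining system prompt. The model name and prompt
# are illustrative only and do not reflect Microsoft's implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FINANCE_SYSTEM_PROMPT = (
    "You are an assistant for finance professionals. "
    "Help reconcile accounts, summarize variances, and draft collection emails. "
    "Always show the figures you used and flag any assumptions."
)

def ask_finance_copilot(user_message: str) -> str:
    """Send a question to the same underlying model, but with a
    finance-specific system prompt layered on top."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": FINANCE_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(ask_finance_copilot(
    "Summarize the overdue invoices from Q1 and draft a reminder email."
))
```

In practice, products like Copilot for Finance go further than a system prompt, wiring the model into live business data and applications, but the principle is the same: one base model, many specialized front ends.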
Designed to streamline data processing by plugging directly into programs like Excel and Outlook, the new assistant, Microsoft said, “supercharges” business apps with “workflow and data-specific insights for the finance professional.”
As the AI spring continues to unfold, such industry-specific AI tools are emerging as an important complement to the rise of powerful, general-purpose models.