Key Takeaways
ByteDance-owned TikTok has joined the growing list of Big Tech companies choosing to lay off hundreds of employees in favor of AI.
Most of the layoffs will affect workers in the company’s content moderation operations, according to Reuters, citing two sources familiar with the matter.
TikTok announced on Friday, Oct. 11, that it planned to lay off hundreds of employees from its global workforce, many of whom were based in Malaysia. Over 400 roles were cut in the country, Reuters reported.
ByteDance currently has over 150,000 employees across 120 cities globally, according to its website.
TikTok already uses AI extensively for content moderation: 80 percent of guideline-violating content is removed by automated systems, a company spokesperson told Reuters.
TikTok said the layoffs were to “further strengthen the global operating model for content moderation.” The company plans to spend a further $2 billion on trust and safety initiatives this year, with a continued focus on efficiency.
A wave of layoffs has continued to sweep through the industry this year as companies look to downsize in favor of automation.
In January, Google CEO Sundar Pichai warned his employees to brace themselves for a year of layoffs as the company dives further into AI.
Pichai said his company would “remove layers [of its workforce]” to free up funds for investment in the company’s main priorities.
Since then, the search engine giant has laid off hundreds of employees across its engineering, hardware, assistant, real estate, and finance departments.
In March, an undisclosed number of employees were fired from IBM following the company’s massive shift to AI-driven operations.
The move followed comments from IBM CEO Arvind Krishna, who said the company planned to cut 8,000 jobs in favor of AI.
In recent years, AI has become a central tool for content moderation across various social media platforms.
The likes of Facebook, YouTube and X have embraced automated systems to handle the sheer volume of user-generated content.
Meta, for instance, uses various machine learning algorithms and computer vision models to detect and remove harmful content. According to former CTO Mike Schroepfer, between 2017 and 2021, the percentage of hate speech taken down from Facebook that was automatically flagged by AI rose from 24% to 97%.
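To make that statistic concrete, the sketch below shows how a “proactive rate” of this kind can be computed: the share of removed content that automated systems flagged before any user report. The data layout and field names are hypothetical for illustration, not Meta’s actual schema.

```python
# Hypothetical sketch of the metric behind the 24% -> 97% figure: the share
# of removed content that was flagged by AI before any user report.
# The record structure and "flagged_by" values are illustrative assumptions.

def proactive_rate(removals: list[dict]) -> float:
    """Fraction of removals where automated systems flagged the content first."""
    if not removals:
        return 0.0
    flagged_first = sum(1 for r in removals if r["flagged_by"] == "ai")
    return flagged_first / len(removals)

removals = [
    {"id": 1, "flagged_by": "ai"},
    {"id": 2, "flagged_by": "user_report"},
    {"id": 3, "flagged_by": "ai"},
]
print(f"{proactive_rate(removals):.0%}")  # 67%
```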
While AI tools were initially focused on flagging potential violations of social media content policies, they are increasingly being entrusted to automatically take down offensive materials.
Google, for instance, notes that when YouTube’s automated moderation systems “have a high degree of confidence that content is violative, they may make an automated decision.” However, it adds that “in the majority of cases, our automated systems will simply flag content to a trained human reviewer for evaluation before any action is taken.”
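The two-tier policy Google describes amounts to a confidence-threshold triage: act automatically only at very high confidence, otherwise route to a human. The following is a minimal sketch of that idea; the threshold values, names, and routing logic are illustrative assumptions, not YouTube’s or TikTok’s actual implementation.

```python
# Illustrative confidence-threshold triage in the spirit of the policy
# described above. Thresholds and the ModerationResult type are hypothetical.

from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.98   # assumption: act automatically only at very high confidence
REVIEW_THRESHOLD = 0.50        # assumption: below this, leave the content up

@dataclass
class ModerationResult:
    action: str        # "remove", "human_review", or "allow"
    confidence: float

def triage(violation_confidence: float) -> ModerationResult:
    """Route content based on a model's confidence that it violates policy."""
    if violation_confidence >= AUTO_REMOVE_THRESHOLD:
        # High-confidence violations may be removed automatically.
        return ModerationResult("remove", violation_confidence)
    if violation_confidence >= REVIEW_THRESHOLD:
        # The majority of flagged content goes to a trained human reviewer.
        return ModerationResult("human_review", violation_confidence)
    return ModerationResult("allow", violation_confidence)

if __name__ == "__main__":
    for score in (0.99, 0.70, 0.20):
        print(score, triage(score).action)
```

In practice, the removal threshold would be tuned per policy area, since the cost of wrongly removing legitimate content differs from the cost of leaving a violation up.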
While AI has dramatically transformed content moderation in recent years, the complete replacement of human moderators remains unlikely for now.
AI systems still struggle with nuanced understanding of context, cultural sensitivities, and evolving language—issues that human moderators are better equipped to handle.
Efforts to increase automation have also faced a backlash from regulators, and platforms that rush to remove humans from the equation too fast risk provoking their ire.
For instance, in 2023 EU officials reportedly told X owner Elon Musk to hire more human moderators, citing concerns that AI-only systems could overlook harmful content.
As they embrace AI-powered moderation, TikTok and its peers will need to strike the right balance and ensure that their ability to identify and remove harmful content doesn’t suffer.