TikTok is well aware it’s no longer a service used solely by bubbly teens doing dance challenges, lip-synching, or creating comic skits. It’s also a place where people spread harmful misinformation—a problem that has proliferated on its more mature rivals.
The service said on Wednesday that it’s taking new steps to crack down on the problem. It will now label posts containing “unsubstantiated content” in an effort to reduce how often they’re shared. If users still try to share a flagged video, they’ll receive another warning reminding them that the video contains “unverified” content before they can share it.
But social media experts say TikTok’s new step raises a lot of questions.
For example, what is considered “unsubstantiated content”? Will users ultimately be responsible for flagging such posts, or does TikTok have a plan in place to proactively find them? How much of this work will be done by artificial intelligence versus humans? Is its content moderation team big enough to handle the massive number of posts users publish every day? And how much does TikTok’s Chinese owner, ByteDance, influence those decisions?
“I think there is some intentional ambiguity in the message here,” said Yotam Ophir, an assistant professor at the University at Buffalo who studies misinformation. “It’s hard for me to understand what they’re going to do.”
TikTok had not responded to a request for comment by the time this newsletter was published.
Here’s what we know. TikTok says it already removes misinformation as it identifies it, though we don’t know how it’s identified. It also says in the U.K., it partners with Logically, a company that combines human fact-checkers with artificial intelligence to fight misinformation. When Logically determines a video has misinformation, TikTok removes it. It’s unclear if this is only happening in the U.K.
Meanwhile, lawmakers have been turning up the pressure on social media companies, concerned about hate speech, violent posts, and misinformation that could lead to real-world harm. As a result, Facebook and Twitter have rapidly ramped up their rules, getting tougher on posts that contain problematic content like election misinformation and fake coronavirus cures. And they’ve all seemingly been rattled by the riots at the U.S. Capitol, which were fueled by false claims of voter fraud that were bolstered on social media.
“It’s certainly a come-to-Jesus moment,” said Sarah Roberts, an associate professor at the University of California, Los Angeles who studies content moderation. “It was yet again a demonstration of the ways in which it’s not feasible to pretend like the world of social media is somehow completely divorced from the rest of our reality.”
TikTok’s decision to crack down on misinformation will likely come with a host of new challenges and a lot more scrutiny over how it executes the plan. But experts agree it’s a step in the right direction, both for society and the business. Roberts points out that, if anything, TikTok can now claim it’s making a good-faith effort to control the problem and reduce its business risks.
But as Gautam Hans, director of Vanderbilt University’s Stanton Foundation First Amendment Clinic, says, TikTok is wading into the gray area of regulating speech. And ultimately, “speech is messy.”
Danielle Abril
@DanielleDigest
[email protected]