TikTok won’t use end-to-end encryption, citing harm to users
While social media platforms have a habit of copying each other, there’s one area where TikTok is forging its own path.
TikTok doesn’t use end-to-end encryption (E2EE) for direct messages, the BBC reports. In contrast, the security measure is used by Meta Platforms for services like Facebook and WhatsApp. It’s also built into Signal, Apple’s and Google’s on-device messaging apps, and Snapchat.
End-to-end encryption means only the participants in a conversation can read its messages. These other platforms argue this is critical for users’ privacy, since neither the companies themselves nor law enforcement can see any of the content that users send.
However, in its conversation with the BBC, TikTok argued that end-to-end encryption can enable harm to users and the sharing of illegal content without any possibility of investigation.
TikTok instead uses standard encryption, which means certain authorized employees can access messages — for example, in response to a request from law enforcement officials.
Notably, TikTok’s security has long been called into question because of its Chinese owner ByteDance. In January, TikTok’s U.S. operations transferred to an American subsidiary backed by investors including Oracle founder Larry Ellison.
Fast Company has reached out to TikTok for comment on its security practices. We will update this post if we hear back.
TikTok’s stance aligns with anti-CSAM policies
The platform’s line of argument echoes that of many governments and child protection charities.
“We believe personal security is extremely important and support efforts to improve online privacy,” the U.S. National Center for Missing and Exploited Children states. “But, if this solution is implemented with no exceptions for detecting child sexual exploitation, millions of incidents of abuse will remain hidden, leaving these young victims without any help or protection from these horrific crimes.”
The U.K. government takes a similar stance: “Intentionally implementing E2EE without necessary safety features will blind social media companies to the child sexual abuse material that is being repeatedly shared on their platforms.
“We are not asking companies to stop the implementation of E2EE across their messaging services,” the U.K. government’s statement continues. “We are instead urging all social media companies to implement sufficient child safety measures on their messaging platforms that will maintain and/or enhance the identification and prevention of child sexual abuse.”