That’s according to a report Wednesday (March 25) from the Financial Times (FT), which says this move comes in the wake of pressure from the U.K. government on smartphone makers to do more to protect younger users.
According to the FT, the U.K. is believed to be the first market in Europe where Apple is introducing these age controls, designed to make sure adults are the only ones downloading apps rated as being appropriate for people 18 and older.
Adults who do not verify their age will face limits on web browsing and “communication safety” checks on their messages and FaceTime video calls, which are designed to identify nude photos and videos, the report added.
Digital services such as social media apps and pornographic websites have begun mandating age verification in the U.K. after the country introduced new rules under the Online Safety Act that impose stricter limits on what children can see and do online.
While app stores are not covered by this law, the U.K. media regulator Ofcom cheered Apple’s move, the FT added.
“Apple’s decision that the U.K. will be one of the first countries in the world to receive new child safety protections on devices is a real win for children and families,” Ofcom said.
The news comes weeks after British Prime Minister Sir Keir Starmer announced his government would step up enforcement and close loopholes in laws designed to protect children online to cover artificial intelligence (AI) chatbots.
This came after Elon Musk’s xAI promised to make changes to its Grok AI assistant in the U.K. to stop its use to create non-consensual sexual deepfakes—including of children—after the government threatened to impose criminal sanctions on the company.
Now, Starmer said it is time to do the same with “all AI bots.” No platform will “get a free pass” over children’s online safety, he added, promising to “crack down on the addictive elements of social media.”
In related news, a jury in New Mexico Tuesday (March 24) ruled against Meta in a case brought by the state attorney general. Jurors found that the company misled users about the safety of its social media platforms and failed to properly prevent child sexual exploitation.
Meta said it disagreed with the verdict and will appeal, arguing that it works “hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content.”