Google and Meta colluded to target young users with ads, breaking Google’s own rules on how minors should be treated online. The collaboration, in which Instagram ads ran on YouTube, was designed to reach users between the ages of 13 and 17, a group Google supposedly shields from personalized ad targeting. The covert arrangement raised eyebrows, especially given the companies’ usual rivalry and the sensitive nature of marketing to young audiences online.
The Financial Times obtained documents and insider testimony showing how the tech giants circumvented Google’s policy prohibiting personalized ads from being shown to anyone under the age of 18. The campaign deliberately targeted Google’s “unknown” user category, which Meta knew skewed toward minors. Steps were also taken to keep the campaign’s true purpose hidden, allowing the ads to run outside the reach of Google’s normal policy enforcement.
The dark side of advertising: Google and Meta’s plan to target children
The move has drawn criticism, particularly because it comes at a time when Meta is under heavy scrutiny over its handling of child-oriented content. The revelations have inflamed ongoing debates about Big Tech’s obligation to safeguard underage users, prompting fresh calls for tighter regulation of internet platforms.
The collaboration between Google and Meta was not just a casual partnership but a calculated effort to exploit a loophole in Google’s advertising system. This loophole allowed Meta to target Instagram ads at a demographic that, according to Google’s guidelines, should have been off-limits for personalized advertising. By focusing on the “unknown” group—a segment of users whose age, gender, and other demographic details were not identified—Meta was able to reach teenagers under the guise of broad audience targeting.
Google was fully aware of what the “unknown” category represented, people familiar with the project said. Despite the company’s professed commitment to protecting kids, documents show that Google’s internal teams knew the category contained users under the age of 18. The campaign exploited that knowledge to ensure Instagram ads reached the very audience Google had promised to shield from such tailored content.
The campaign’s development coincided with a highly publicized appearance by Meta CEO Mark Zuckerberg before the U.S. Congress. During this appearance, Zuckerberg apologized for the harm that had come to children on his platforms, particularly relating to issues of sexual exploitation. The timing of the Instagram ad push raises questions about the sincerity of these apologies and the ethical considerations—or lack thereof—behind Meta’s marketing strategies.
Unmasking the collaboration
The partnership between Google and Meta began to take shape in early 2023, with the help of Spark Foundry, a U.S. subsidiary of the French advertising giant Publicis. Spark was tasked with rejuvenating Instagram’s appeal among Gen Z, particularly teenagers who had been increasingly drawn to rival platforms like TikTok. According to internal documents, Spark specifically sought to target 13- to 17-year-olds, a group that has been crucial to Instagram’s user base but has become increasingly difficult to engage.
Google’s involvement in the initiative raises serious questions about the company’s commitment to user safety, especially for young audiences. Despite publicly promoting its rules against personalized ad targeting for minors, Google’s internal teams actively collaborated on a campaign aimed at the very users those policies were designed to protect. According to the records, the teams were advised not to mention age explicitly in communications, instead using euphemisms such as “embrace the unknown” to allude to the target audience.
The workaround did not go unnoticed. As the campaign quietly launched in Canada and then in the United States, it quickly became clear that the “unknown” group was a thinly veiled means of circumventing Google’s safeguards. The success of these pilot campaigns prompted discussions about extending the approach to other markets and promoting other Meta-owned apps such as Facebook.
The campaign’s success came at a cost, however. When contacted by the Financial Times, Google launched an internal investigation into the initiative. It concluded that, while no registered under-18 users were explicitly targeted, the use of the “unknown” category was an attempt to circumvent Google’s rules. Following this finding, the project was promptly canceled.
Fallout and repercussions
The disclosure of this secret operation has rekindled debate over digital companies’ ethical duties, particularly when it comes to protecting underage users. Meta’s exploitation of the “unknown” loophole, and Google’s willingness to enable it, exemplifies the continuing difficulty of regulating online advertising. Lawmakers, child welfare groups, and the public have all criticized the companies’ behavior, calling for greater transparency and stronger laws.
Last week, the United States Senate passed the Kids Online Safety Act, which seeks to impose a duty of care on social media platforms to protect children from harmful content. This legislative action, while unrelated to the Google-Meta collaboration, reflects growing concern about Big Tech’s influence on young users. Lawmakers such as Republican Senator Marsha Blackburn have voiced frustration with firms like Google and Meta, accusing them of putting profits ahead of children’s well-being.
The fallout from this episode is likely to have lasting consequences for both companies. Meta already faces multiple lawsuits over its treatment of minors, and the Federal Trade Commission is weighing new restrictions on how the company may monetize underage users. Meanwhile, Google’s participation has cast doubt on its stated commitment to user safety, potentially inviting closer scrutiny of its advertising practices.
A lesson in trust
The hidden agreement between Google and Meta is a stark reminder of the need for greater oversight in the digital industry. Despite their professed dedication to user safety, both companies were willing to engage in practices that contradicted the standards they claimed to uphold. As the debate over Big Tech’s role in society continues, this episode will undoubtedly be cited as a prime example of why tougher controls are needed.
For now, the project’s cancellation may offer some reassurance that the companies are taking steps to correct course. The damage to their reputations, and to their users’ trust, may be harder to repair. As policymakers and regulators continue to wrestle with the challenges of governing cyberspace, Google and Meta’s conduct will remain under close scrutiny.
Featured and all images credit: Furkan Demirkaya