By Nicole Frost | APR 17, 2019
Following a series of data security scandals, Facebook has overhauled its ad targeting features, changing how personal information is sourced and distributed. Here is a history of Facebook’s ad targeting restrictions and the reasons for their implementation.
Limiting Demographic and Location Data for Credit, Housing and Job Ads
In response to backlash and lawsuits from advocacy groups such as the National Fair Housing Alliance and Communication Workers of America, Facebook removed age, gender, and location options from targeted ads related to employment, housing, and credit. The previous options enabled discriminatory ads, and the lawsuits cost the social media giant nearly $5 million in settlements.
Additionally, Facebook removed 5,000 targeting options after the Department of Housing and Urban Development determined that the platform violated the Fair Housing Act by discriminating against users based on race and religion.
Banning Anti-Vax Ads to Prevent Misinformation
Following a surge in hoax-based propaganda from users who oppose vaccination, the platform put an end to anti-vax advertisements earlier this spring. Facebook also reduced the ranking of anti-vax groups in an effort to stop the spread of dangerous misinformation about vaccines.
News Organizations Exempt from Ad Labels
In 2018, Facebook stated that reputable news organizations would be exempt from ad labeling on Facebook. This means that certain news organizations do not need to include a “paid for by” line in their advertisements.
Removing Nazi Terminology from Targeting Options
After ProPublica discovered that antisemitic and white supremacist phrases were being used to target ads, Facebook modified its ad targeting options to exclude language associated with those topics. This change was implemented mainly to prevent the spread of hateful radicalization and to help users distinguish between legitimate news and extremist content.
Disclosing Advertiser Information
To better inform users about the companies that are marketing to them using Custom Audiences, Facebook now publishes advertiser information.
Adding More Restrictions to Custom Audiences
To prepare for the updated General Data Protection Regulation that took effect in spring 2018, Facebook began requiring advertisers to show that they have authorization to use the data they upload to create Custom Audiences.
After Cambridge Analytica collected information from millions of Facebook’s user profiles, the social media site was held responsible for failing to protect user data. Facebook was assessed a fine of £500,000 by the United Kingdom’s Information Commissioner’s Office in response to the incident.
Banning Ad Targeting from Third-Party Providers
Facebook removed the ability to use data from third-party sources in targeted advertisements, which is another change that was catalyzed by the Cambridge Analytica scandal. Previously, advertisers could use information obtained by data brokers to target users based on criteria such as recent purchases or household income.
Since this change took effect in September 2018, advertisers must rely on their own information or on data collected directly through Facebook.
Disclaimers for Political Ads or Issues of National Importance
After discovering that Russian users ran politically divisive ads during the 2016 US presidential election, Facebook began requiring political advertisers to publicly disclose who paid for each ad. In order to post an ad related to politics or certain public issues, advertisers must also verify their identities and locations.
Facebook continues to make updates to protect user information, but many of these changes were implemented as a direct result of data misuse. Advertisers who want to use the platform to reach potential consumers should stay aware of Facebook’s changes and advertising regulations.
By Nicole Frost, writer at AdvertiseMint