In the 2012 US presidential election, the Obama campaign deftly used a Facebook app to register voters and have friends message them to get out the vote. But it was the 2016 presidential campaign that really changed social media's impact on electoral politics.
Voters were subjected to unprecedented, technology-driven efforts to sway their votes, some coming from as far away as Russia, according to US government claims. Social media in general, and Facebook specifically, were the weapons of choice. It worked so well that spending by US politicians on these platforms in the 2018 midterm elections was up 25 times from the 2014 midterms.
Now, 16 months away from the next election, efforts are under way to prevent voter tampering from happening again. But I don't think significant progress has been made, a view bolstered by Robert Mueller's recent testimony before Congress. What's worse, users themselves seem unconcerned. Engagement on various applications on the Facebook platform is up. Users appear comfortable with the trade they make: giving up privileged information in exchange for a range of convenient and free services. Without a push by Facebook's customers or more fundamental federal government regulation, history is likely to repeat itself.
After all, the ability of advertisers to get hold of personal data from Facebook users is not an accident; it's a deliberate business strategy, part of Facebook's business model since at least 2010. That's when Facebook opened up its Graph application programming interface (API) to advertisers, giving them access to user data including users' friends, activities, and the history of content they "liked" on the platform.
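To give a rough sense of the access this opened up, here is a minimal Python sketch of what a third-party call against the long-deprecated Graph API v1.0 might have looked like. The access token, field selections, and permission grants are assumptions for illustration, not a working recipe; Facebook shut down this level of friend-data access in later API versions.

```python
# Illustrative sketch only: roughly what third-party access to friend data
# looked like under the long-deprecated Graph API v1.0. The token and field
# names here are hypothetical, and these calls no longer work today.
import requests

GRAPH = "https://graph.facebook.com/v1.0"
TOKEN = "USER_ACCESS_TOKEN"  # obtained after the user authorized the app

def get(path, **params):
    params["access_token"] = TOKEN
    resp = requests.get(f"{GRAPH}/{path}", params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()

# The authorizing user's own profile and page "likes"
me = get("me", fields="id,name")
my_likes = get("me/likes")

# Pre-2014, an app with friend permissions could also walk the user's
# social graph and pull data about friends who never installed the app.
for friend in get("me/friends")["data"]:
    friend_likes = get(f"{friend['id']}/likes")  # hypothetical permission grant
    print(friend["name"], len(friend_likes["data"]), "likes visible")
```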
Advertisers used that data to create psychographic profiles of potential customers that went beyond mere demographics to give insights into what they like and value. Studies have found that with just 10 "likes," an algorithm can predict a person's personality traits better than a co-worker can; with 150, better than a family member; and with 300, better than a spouse.
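To make the mechanics concrete, here is a minimal, self-contained Python sketch of the general technique those studies describe: fitting a linear model on a binary user-by-likes matrix to predict a personality-trait score. The data is synthetic and the pipeline simplified; this is not the researchers' actual code.

```python
# Minimal sketch of likes-based trait prediction, in the spirit of the
# studies cited above. All data here is synthetic; this is not the
# researchers' actual pipeline.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_users, n_pages = 1000, 500
likes = (rng.random((n_users, n_pages)) < 0.05).astype(float)  # binary "likes"
signal = rng.normal(size=n_pages)              # hidden page-to-trait weights
trait = likes @ signal + rng.normal(scale=0.5, size=n_users)   # e.g., openness

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)
model = make_pipeline(TruncatedSVD(n_components=50, random_state=0), Ridge())
model.fit(X_train, y_train)
print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
```

The real studies worked at far larger scale, but the core idea is the same: each page someone likes is a weak signal, and a simple linear model can aggregate thousands of such signals into a surprisingly accurate trait estimate.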
But then came the Cambridge Analytica fiasco, in which that firm got 250,000 Facebook users to voluntarily grant access to their personal data by playing an online game. Because of the consent those users gave, Cambridge Analytica could also access the profiles of their friends, who didn't even use the application, ultimately harvesting information on 87 million people, many of whom had never authorized it. Facebook CEO Mark Zuckerberg later said Cambridge Analytica was not supposed to take the data for its own purposes.
Facebook makes changes
In 2014, Facebook changed the API and limited access to data, effectively closing the barn door after the horse got out. In 2018, after the scandal broke, Facebook announced a six-point plan for a "privacy-focused" platform to better safeguard data, investing billions of dollars and assigning tens of thousands of employees and contractors to implement it. The company's market capitalization dropped over $100 billion in a single day, though it has since recovered to roughly its pre-crisis level.
At the same time, Facebook implemented changes to its privacy settings to allow users more granular control over what they share and with whom. According to studies, however, very few users have changed their privacy settings, and those who do often end up sharing more data by opening up some avenues for sharing rather than just blocking everything.
What Facebook has not changed is its advertising-funded business model, which is predicated on giving advertisers the ability to target ads according to psychographic user profiles, and advertisers have shown no sign of abandoning the platform. Ad loads have gone up across Facebook's applications. That's no surprise: The return on investment of the targeted ad model Facebook has perfected is far superior to that of traditional advertising. Google and Facebook together hold an effective duopoly on digital advertising, though Amazon is starting to put up meaningful numbers. As such, advertising customers have limited realistic alternatives.
When the Federal Trade Commission announced in July a $5 billion penalty and other actions against the company over privacy transgressions, the agency said it well: "… through at least June 2018, Facebook subverted users' privacy choices to serve its business needs."
By the way, despite recent public statements, Facebook has lobbied against further government regulation, typically ranking among the top 10 US spenders on lobbying efforts.
What’s the solution?
To Facebook's credit, it has been partnering with government and nongovernmental organizations to weed out fake and inflammatory content. On the transparency front, it publishes quarterly reports across nine categories of content, showing how much it takes down before users see it.
The European Union's General Data Protection Regulation, which places tight controls on how companies can employ user data, went into effect last May, and California passed a similar law last year that takes effect in 2020. It's an open question whether California's law will spur the US Congress to enact similar federal measures or, instead, to preempt and water it down. Congress has been talking about a privacy bill for two years, but if nothing emerges before this year's August recess, a federal bill is unlikely, and a hodgepodge of state laws, led by California's CCPA, will start coming into effect in 2020.
Beyond privacy and content, there have been a range of calls to revisit how US antitrust law applies to many of the large tech players, effectively challenging the power they wield. The Justice Department announced last week that it's launching a multiyear review, but didn't name potential targets.
For Facebook, the most often cited solution would be to break up the company by unwinding the Instagram and WhatsApp acquisitions. Similar efforts around AT&T and Microsoft took over a decade to bring to conclusion.
A new direction?
Whatever emerges from Washington or the EU, and whenever it arrives, may be moot. In a recently announced strategic pivot, what it calls a privacy-focused plan, Facebook is shifting direction, merging its main app with its Messenger, Instagram, and WhatsApp acquisitions. This would create an encrypted messaging platform that emphasizes small groups over public content.
While that may seem like a step toward privacy, it may have the opposite effect: limiting Facebook's ability to see, and therefore police, encrypted messages, while still allowing advertisers to target users with ads. In addition, Facebook has launched Libra, a cryptocurrency and payments initiative, and is increasingly pursuing ecommerce revenues via Instagram and Facebook Marketplace.
One interpretation of these changes is that Facebook is admitting the "public square" problem it has created over the last 15 years is too hard to solve. It will still be able to target users of these private channels with ads, but won't be able to moderate the content within them. If bad actors target users with political ads or misinformation, Facebook will have "plausible deniability," since it won't even be able to see the content itself.
One way around that problem would be for regulators to repeal Section 230 of the 1996 Communications Decency Act, which granted internet platforms immunity for user-posted content. The issue isn't political bias; it's bullying and hate speech. A carve-out for smaller players could avoid overly burdening innovation. The purpose would be to hold social media sites to the same level of societal accountability as traditional publishers, making them responsible for libel and defamation. Facebook already has a well-grooved process for dealing with copyrighted and trademarked content that could be applied to user-generated content: give notice, require a timely response, and pursue legal means if unsatisfied. Another idea would be to prohibit political advertising on Facebook and other social media platforms altogether.
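To make that notice-and-response process concrete, here is a minimal Python sketch of such a workflow as a tiny state machine. Every name, status, and deadline below is a hypothetical illustration of the process the paragraph describes, not Facebook's actual system.

```python
# Hypothetical sketch of a notice-and-response workflow like the copyright
# process described above, applied to user-generated content. All names,
# statuses, and deadlines are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum, auto

class Status(Enum):
    NOTICE_FILED = auto()
    CONTENT_REMOVED = auto()
    NOTICE_REJECTED = auto()
    ESCALATED_TO_COURT = auto()  # "pursue legal means if unsatisfied"

@dataclass
class TakedownNotice:
    content_id: str
    claimant: str
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: Status = Status.NOTICE_FILED
    response_deadline: timedelta = timedelta(days=14)  # "timely response"

    def review(self, violates_policy: bool) -> None:
        # Platform reviews the notice and either removes or rejects.
        self.status = (Status.CONTENT_REMOVED if violates_policy
                       else Status.NOTICE_REJECTED)

    def escalate(self) -> None:
        # Claimant may escalate if rejected, or if the deadline passes.
        overdue = datetime.now(timezone.utc) - self.filed_at > self.response_deadline
        if self.status is Status.NOTICE_REJECTED or overdue:
            self.status = Status.ESCALATED_TO_COURT
```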
Unless regulatory or governance change happens, it's hard to see how any changes Facebook implements will solve the Pandora's box of problems it opened when it allowed advertisers access to user data, or prevent a new scandal from emerging in the next election. Facebook has many issues, but making money isn't one of them.
I'm a bit of a skeptic about the ability of the purported changes to drive fundamental impact, given the incentives of Facebook's business model. The company has always optimized for user engagement, which drives its ad model. Shifting to a completely new business model would be more than difficult: individual users seem well served, advertisers like the platform's reach and targeting ability, and shareholders can't complain either. It is society that pays the costs, a digital version of the tragedy of the commons.