OPINION: Facebook Stock Has Fallen, But Its Exploitation Of User Data Is Active As Ever

(Photo credit: REUTERS/Stephen Lam)

Sandeep Gopalan Contributor

Is Facebook too big and too powerful to be left intact? Should it be subjected to harsh regulation or chopped up into smaller entities? Is it time for regulators to apply Facebook’s own motto, “Move fast and break things,” to the company itself?

The company’s stock has been battered by a long string of scandals, and user growth appears to have hit a wall. So will the market work its magic and eventually bring about Facebook’s demise?

Facebook built its success on a staggeringly simple idea — give naïve users a platform that lulls them into a false sense of privacy, and they will tell you everything not only about themselves but about others as well. That data is valuable to every commercial entity chasing customers — it lets them target their marketing dollars at the likeliest buyers instead of dissipating them across a wide audience, as in the pre-Facebook era.

Facebook’s model is equally simple — be indifferent to what people say or do, ignore users’ motivations and the falsity of content, and just keep enabling its creation and dissemination to an ever-expanding audience. The reason is clear — the company produces little itself and therefore incurs few costs; its users are the product, and they essentially create their own value through their own actions. Facebook enables this self-generating product and laughs all the way to the bank by selling it to marketers: the average U.S. user yielded revenue of $27.61 in Q3 2018.

The company could have chosen to police those who misuse its platform to spread hate speech or political propaganda, but the incentives do not align. As its user base in the U.S. reached saturation, the company had to rely on growth in overseas markets, and monitoring users and content across the globe would have been costly. For instance, its largest pool of daily users is in the Asia-Pacific region (561 million in Q3 2018) — effective policing there would require employing local-language experts versed in diverse cultural expectations and social norms, establishing a greater physical presence in many countries, and potentially inhibiting the growth of user numbers.

Clearly, Facebook staff in the U.S. would have been ineffective at policing hate speech directed at the Rohingya in Myanmar — that would have required Burmese-language experts familiar with historical prejudices, with insults calculated to offend the Rohingya, with trigger words likely to incite offline violence, and so on. Facebook’s own report on its human rights impact in Myanmar, released this month, showed that it did not invest in such monitoring capability. The company has 20 million users in Myanmar but employs fewer than 100 staff with the necessary capabilities, despite the evidence of its platform being used to enable mass violence.

This brings us back to the original question of how to tackle the problems Facebook generates. Critics might object that regulation is unnecessary because the market will discipline Facebook — stagnating growth and small declines in U.S. user numbers point in that direction. But they miss the point. The problem is not one company — Facebook is the symptom, not the disease. And there is an even bigger elephant in the room: Google.

Hence, rather than getting bogged down in distractions about Facebook, broader regulation needs to tackle the extensive collection of data and the use of private information, for commercial or other purposes, contrary to the intentions of those who supply it. If regulation were built on the edifice of a fundamental right to privacy, companies such as Facebook could not operate as surveillance and data-gobbling machines. To be meaningful, such regulation must empower users to lodge complaints and obtain compensation, and must penalize companies for using data for purposes unrelated to those for which it was supplied. Under such a model, Facebook, Google and the rest would lose the incentive for such activity, because the data could not be sold to advertisers without consent.

At the same time, Facebook and other platforms must be required to verify the identity of users on their platforms. This would prevent manipulation and abuse, whether by Russian operatives during the 2016 election or by those inciting violence in Myanmar. Identity verification and an end to anonymity would also curb the worst aspects of the internet — vicious abuse, hate, defamation, and incitement to violence. The claim that ending anonymity would chill free speech overstates anonymity’s benefits and ignores its harms — legitimate whistleblowers and those with clean hands have other avenues for disseminating necessary and true information.

Proposals for laws tackling fake news or breaking up Facebook miss their mark and are unlikely to come to fruition. They also aim at the wrong problem — Facebook is just one entity milking the private information of unwary users for money. There are others. So the law needs to strike at the heart of the problem: the exploitation of private information for commercial gain.

Sandeep Gopalan (@DrSGopalan) is a professor of law and pro vice chancellor for academic innovation at Deakin University in Melbourne, Australia. He previously was co-chairman or vice chairman of American Bar Association committees on aerospace/defense and international transactions, a member of the ABA’s immigration commission, and dean of three law schools in Ireland and Australia. He has taught law in four countries and served as a visiting scholar at universities in France and Germany.

The views and opinions expressed in this commentary are those of the author and do not reflect the official position of The Daily Caller.