All social media companies operate on the same basic principle: you are not Facebook's customer, you are its product. When we speak about data, we mean bite-sized pieces of information about people. Where you live, how you vote, and what your favorite soda is are all individual data points. Social media companies build platforms that draw you in, hook you, and then give you reasons to hand over information. Slowly, bit by bit, data point by data point, a picture of you develops. These data portraits are incredibly detailed characterizations. In fact, with the exception of your immediate family, nothing knows you as well as your personalized data file, and Facebook, if pressed, could likely provide more information about you than the federal government could.
With enough data points about you, and about users like you (who serve as a proxy for the gaps in your own file), massive algorithms can begin predicting and even altering your behavior. These individualized prediction webs are perfect for advertisers who want to micro-target users with personalized ads. Those advertisers pay YouTube and Facebook, and that is where the tech giants make their profit: they sell their users.
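To make the proxy idea concrete, here is a minimal sketch of how an attribute missing from one profile can be inferred from similar users. Everything in it, from the toy profiles to the nearest-neighbor vote, is an invented illustration of the general technique, not a description of any platform's actual system.

```python
# Toy illustration: filling a gap in one user's profile by polling the
# most similar users. All names and data points are invented.

from collections import Counter

# Each profile is a dict of known data points; None marks a gap.
profiles = {
    "alice":  {"city": "Austin", "votes": "D", "soda": "cola"},
    "bob":    {"city": "Austin", "votes": "D", "soda": "cola"},
    "carol":  {"city": "Austin", "votes": "D", "soda": "root beer"},
    "dave":   {"city": "Boston", "votes": "R", "soda": "cola"},
    "target": {"city": "Austin", "votes": "D", "soda": None},  # the gap
}

def similarity(a, b):
    """Count the attributes two users are known to share."""
    return sum(
        1 for k in a
        if k in b and a[k] is not None and b[k] is not None and a[k] == b[k]
    )

def infer(user, attribute, k=3):
    """Guess a missing attribute from the k most similar users who have it."""
    others = [
        (similarity(profiles[user], p), p[attribute])
        for name, p in profiles.items()
        if name != user and p.get(attribute) is not None
    ]
    neighbors = sorted(others, reverse=True)[:k]
    votes = Counter(value for _, value in neighbors)
    return votes.most_common(1)[0][0]

print(infer("target", "soda"))  # -> "cola": the gap is filled by proxy
```

Scaled up to billions of users and thousands of attributes, this same logic is what lets an advertiser reach people who never volunteered the relevant data point.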
In the early days of social media, this business model was not well understood by the American public. Concerns about the internet were typically tied to cultural flashpoints like pornography, or to versions of the argument that social media makes teenagers braindead. In recent years, however, public attention has finally begun to turn toward the problems of data collection, aided both by a generation of young adults who grew up with the internet and by a range of high-profile scandals.
Likely the most famous of these scandals was Russia's attempt to influence the 2016 election through social media. Russia essentially tried to weaponize this data collection and manipulation system for its own ends. Its operatives created legions of fake accounts and fake stories, and as the fake stories gained popularity, boosted in part by micro-targeted ads, the platforms funneled more and more users toward them. Many of those users began with moderately conservative views and were incrementally funneled toward radicalization. Following this and similar scandals, demands rose for restrictions on how companies could use people's data and on when they were allowed to sell it, especially given how many of these practices were buried in arcane terms and conditions.
This article will explore the future of the American legislative response to these concerns about data usage.
The United States Status Quo
The United States lacks a federal data protection law. Instead, it has a patchwork of state and federal regulations that either deal with data directly or have been repurposed to that end.
Most state legislation dates from the early 2000s, when these data issues were not well understood, and it deals primarily with data security. Under these laws, it is fine for corporations to hold your data as long as they make sure it isn't leaked or stolen (though, of course, nothing stops companies from simply purchasing the right to use the information). New York, for example, has a strong data breach law that requires companies to maintain “reasonable” safeguards against breaches, though “reasonable” is a vague standard. Illinois has a surprisingly strong law covering biometric information, a rarity in state legislation, although it too deals mainly with protecting data rather than governing its use. Both are strong protections against data breaches, but incredibly weak when it comes to regulating how companies may use the data they hold.
Massachusetts has a similar law that sets specific standards to keep data security up to date. It is important to note, however, that the law, like most state laws in this area, protects only the data of Massachusetts citizens. It does not set standards for everything a company operating in Massachusetts does with data, just for the data relating to the state's population. This is a small detail with large implications. If the law instead required any company operating in Massachusetts to comply with certain general protections, companies might have to alter their products across the board to avoid losing a major market, rather than merely changing how they handle some of their data.
Despite the lack of a general federal law, regulators have used existing laws in new and creative ways. For example, the 1998 Children's Online Privacy Protection Act (COPPA), written long before the modern data economy took shape, was recently turned against YouTube for collecting data on children. Following several successful settlements, the law has been used against a range of companies that build data profiles on children.
California and CPRA
California stands out in this regard. Often described as running a few years ahead of the federal government on legislation, California has passed real data protection restrictions, covering not only data security but also how companies may use data. All this from the very state where the large tech companies reside, and where their lobbying might be expected to be strongest. The reason California was able to pass such legislation likely lies in its unusual institutional design.
At the beginning of 2020, the California Consumer Privacy Act (CCPA) went into effect. The bill contained not only security requirements but also some restrictions on how companies could use data. Those restrictions were neither strong nor expansive, but at the time they went further than any other state law or federal standard. The limited bill arose from a political concession: a stronger version of the law was originally headed for the ballot as a public initiative. Fearing that a public vote would produce an overly restrictive law, tech lobbyists and their supporters in the state legislature agreed to pass a weaker bill in return for the initiative's withdrawal.
California's public initiative system, one of the most powerful in the nation, is a potent tool for both good and ill. In theory, it allows laws to bypass corruption and lobbying in the legislature by appealing directly to the people. In practice, it has a complicated and messy history, and it has at times been captured outright by corporate interests; 2020's Proposition 22, in which rideshare companies spent record sums to pass an initiative written in their favor, is a recent example. The CCPA, however, shows the initiative process serving its creators' original intentions. California is a huge market and home to many tech companies; had the bill gone through the normal legislative process, it would have faced an intensely well-funded lobbying campaign that would likely have stopped it in its tracks. Only the fear of a public vote led to the CCPA's passage.
The CCPA, however, was soon seen as too weak and the compromise as not worth it. Data protection went back onto the ballot directly, as California Proposition 24. The lobbyists' fears proved correct: the measure passed with 56.23% of the vote, even in the face of lobbying against it. The proposition created the California Privacy Rights Act (CPRA), which dramatically expanded the CCPA. It established the California Privacy Protection Agency, a new regulator with punitive powers focused specifically on data issues, and it greatly expanded consumers' rights to know about, and to reject, the collection and use of their data.
The law is not perfect; opponents have rightly pointed out that it still contains giveaways to corporations and is not as restrictive as some had hoped. But that is not what deserves the focus. The shocking thing is not the presence of giveaways; it is that the bill passed at all in a state with such a huge market and so many resident tech companies. On those fundamentals alone, California looks like the least likely state to pass such a law, and the bill should have died in the cradle, as similar efforts have in other states and nations.
It was California's distinctive institutional design that allowed such a law to pass, and the history of the CPRA shows it. The initiative process let proponents leap over lobbying gridlock in the legislature; when a subject is sufficiently popular with the public, initiatives can overcome lobbying campaigns that would smother an ordinary bill.
The GDPR and the EU
As encouraging as this story is for California, it does not seem to bode well for the United States. California passed its law because of an institutional feature the country as a whole lacks. What is to stop the gridlock that has blocked such laws in other states from repeating itself at the national level, where lobbying is likely to be even more extensive?
Looking to the inspiration for the CPRA, the picture seems even worse. The CPRA was based on the European Union's General Data Protection Regulation (GDPR), the most expansive data protection law on Earth and, notably, the only major one operating at a supranational level. Why is the EU the only international body to have successfully implemented a data protection law? Its supranational structure is the key.
The division of powers inherent to the EU lowered the stakes of the GDPR. Because the regulation is enforced by national data protection authorities and courts, the EU itself became a less attractive venue for targeted lobbying; efforts to blunt the law's impact had to be aimed at each member state instead. The result is an overarching legal regime for the entire continent combined with diverse national enforcement. By pushing implementation down, the EU shielded itself while still achieving its goal, since the regulation requires any website or service with an EU audience to conform to EU standards. The diversity of venues in which enforcement must occur, or be challenged, lowers the payoff of EU-wide lobbying, while the logic of market competition pushes companies across the EU to build to the standard most likely to be enforced everywhere: the GDPR itself.
Just as California is unusual among the states for passing such a law, the EU is unusual among international bodies for doing the same. Both relied on distinctive institutional structures to achieve that goal.
The United States
So does the United States need a similar institutional framework in order to pass such a bill? And if it lacks one, is the national movement doomed?
In fact, the U.S. needs no such framework, and the movement is not doomed. Rather, this is a striking example of the power of institutional fragmentation: the division of power across multiple independent institutions, which lets changemakers shift targets and build momentum through alternative avenues.
California and the European Union are two huge markets. Their populations and native corporations are unusually technology-savvy, and both are home to thousands of other national and multinational businesses that rely on technology products. Their laws, the CPRA and the GDPR, are strikingly similar; the former was built by emulating the latter. Social media platforms thrive on interconnectivity: the ability to connect more and more people is a key part of their appeal. That reliance on interconnection makes it very difficult to split a platform into separate versions for each nation and still have it function properly. It simply doesn't make sense to run two versions of the same product, one operating unrestricted and another heavily regulated.
After all, the CPRA and the GDPR fundamentally shift how these platforms are monetized. In principle, a social media company could maintain at least two versions of its platform: one for CA/EU users and one for everyone else. In practice, it is usually easier to build a single product that meets the highest regulatory standard, thereby satisfying every weaker regime beneath it, as the sketch below illustrates. As more nations and states expand their data protection laws, the alternative is an incredibly costly fragmentation that is both expensive to run and corrosive to the platforms' appeal.
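A minimal sketch of that "highest common denominator" logic, assuming a handful of invented boolean requirements; real compliance obligations are far richer than on/off switches, and these regime summaries are illustrative caricatures rather than legally precise descriptions.

```python
# Illustrative only: jurisdictional privacy regimes reduced to invented
# boolean knobs. The "GDPR" and "CPRA" entries are rough sketches, not law.

REGIMES = {
    "GDPR": {"opt_in_consent": True,  "right_to_delete": True,  "sale_opt_out": True},
    "CPRA": {"opt_in_consent": False, "right_to_delete": True,  "sale_opt_out": True},
    "none": {"opt_in_consent": False, "right_to_delete": False, "sale_opt_out": False},
}

def strictest(regimes):
    """Build one global configuration satisfying every regime at once:
    a protection is switched on if any jurisdiction requires it."""
    keys = {key for regime in regimes.values() for key in regime}
    return {key: any(r.get(key, False) for r in regimes.values()) for key in keys}

# One product, one configuration, compliant everywhere.
GLOBAL_CONFIG = strictest(REGIMES)
print(GLOBAL_CONFIG)  # -> every protection enabled
```

The alternative, a separate build per jurisdiction, multiplies engineering and maintenance costs with every new law that passes, which is precisely the fragmentation pressure described above.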
As such, the logic of economic efficiency predicts that the CA/EU-compliant version will increasingly become the standard over time, if only because running one product is easier than running two. This will be further reinforced if California's new privacy regulator begins enforcing its protections against all companies based within its borders, even those operating elsewhere, a difference between the CPRA and the Massachusetts law discussed above. Eventually, there will be no real choice but to comply with the heightened standards.
This changes the political environment around a new U.S. bill. With companies already having to comply with California and EU regulations, pressure against a national bill shrinks. And if the regulated version of each platform becomes standard even in unregulated areas, that pressure shrinks further, since the U.S. would effectively be following the regulations despite having no national version of them.
With political will still strong, a robust national data privacy bill no longer seems an impossibility. Institutional fragmentation is on the side of data protection advocates, and even if a national bill never comes to fruition, tech companies are already on the defensive. Increased protections seem well on their way to becoming the norm.
This argument received unexpected empirical support while this article was being written: Google announced that it would stop selling ads targeted to individuals' browsing histories. Explaining the surprising shift in company policy, Google cited several factors: public outrage over data usage, a desire to prepare for future national data regulation, and a declining ability to keep such a system profitable under existing pressures. In short, the forces of institutional fragmentation described above played out as expected. Similar decisions by other large technology companies now seem likely, as market forces and spreading regulation make individualized data collection an increasingly unprofitable business model.