Until relatively recently, the regulation of technology was largely discussed through the prism of contract. Technology is often sold or accessed as a proprietary product, licensed through contract to the user. Clickwrap terms of service on major service platforms, for example, allow companies to operate broadly on a take-it-or-leave-it basis.
The law has traditionally respected the rights of private parties to make agreements on their own terms, and courts have been reluctant to intervene and set aside freely bargained arrangements except in the most extreme cases. But many of the contracts we enter into on an almost daily basis have little in common with the context in which the law of contract developed, not least because they lack some of the central foundations that have traditionally supported the relevant jurisprudence. Modern digital contracts are characterized by grossly unequal bargaining power and the absence of a meeting of minds. There is no genuine consent, and no shared understanding among users of each party’s rights and obligations. Such a contract is the formalized exploitation of our digital lives for profit.
Moreover, treating consent as something the individual is empowered to offer is something of a category error. As service platform companies collect data, they come to know not only what they know but also what they do not know. Put differently, companies can draw inferences about the data they have not collected from the data they have. If a company has sufficient intelligence about a certain class of people, it can draw conclusions about anyone who fits that demographic, on the basis that they form part of a lookalike audience. It is not possible to opt out of this; we all end up bound by the decisions of others to consent to invasive data collection practices. In some ways it is like buying a car with faulty brakes: a consumer choice that not only puts you at risk, but also makes the road less safe for everyone.
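The inference mechanism described above can be sketched in a few lines of code. Even if one person withholds their data, a model trained on similar ("lookalike") consenting users can predict their attributes anyway. This is a deliberately minimal illustration with entirely hypothetical data and a simple nearest-neighbour vote, not any company's actual system:

```python
from collections import Counter

# Hypothetical profiles of users who consented to data collection.
# Features: (age, postcode prefix, device), plus a recorded interest.
consented = [
    ((30, "2000", "ios"), "fitness"),
    ((31, "2000", "ios"), "fitness"),
    ((29, "2001", "ios"), "fitness"),
    ((55, "3000", "android"), "gardening"),
    ((57, "3001", "android"), "gardening"),
]

def similarity(a, b):
    """Count how many features two profiles share."""
    return sum(1 for x, y in zip(a, b) if x == y)

def infer_interest(profile, k=3):
    """Predict an attribute for a user who never shared it, by
    majority vote among the k most similar consenting users."""
    nearest = sorted(consented, key=lambda c: -similarity(profile, c[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# A user who opted out of data collection but fits the first cohort:
print(infer_interest((30, "2001", "ios")))  # → fitness
```

The point of the sketch is that the opted-out user contributed nothing, yet the prediction about them is made regardless, because others who resemble them did consent.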
[...] The task of improving safety could not be left to industry, as market incentives militated against such an investment. “A democratic government is far better equipped to resolve competing interests and determine whatever is required [to make transport safer] than are firms whose all-absorbing aim is higher and higher profits,” wrote Ralph Nader in 1965 in his seminal book, Unsafe at Any Speed. These safety problems were fixable; they were problems of design rather than individual responsibility. But fixing them required centrally imposed rules.
There are strategic limits to this logic. Arguments framed around consumer rights still rely on assumptions about the inherent value of the free market, and a commitment to making it a more functional mode of relations. But if we neglect this field, we lose important ground in the public debate about regulation. Even the most committed libertarian would struggle to justify abolishing the Food and Drug Administration on the basis that it limited individual freedom. No one would agree that an ideal society should require people to take responsibility for testing their own food to check that it has not been poisoned. We expect a well-run society to have a centrally administered process in place to enforce the relevant rules as efficiently as possible.
There is no reason why technological products could not be subject to similar testing and approval. Biased algorithms could be identified, and automated processes that produce perverse outcomes could be stopped before they ship. Finding such cases would, in turn, create a platform for public debate about how to respond to them. A consumer protection lens can also help us think of other potential reforms. These might include prohibiting the use of data (including its sale) for any purpose other than the one for which the user provided it. This is what consumers currently expect, but not what companies actually deliver.
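What would such pre-release testing look like in practice? One candidate is a statistical bias audit. The sketch below is hypothetical (the tolerance threshold and data are invented, and this is not any regulator's actual method): it compares a system's approval rates across demographic groups and flags the system when the gap exceeds a tolerance, a simple demographic-parity test.

```python
def approval_rate(decisions):
    """Fraction of decisions that were approvals (1 = approved)."""
    return sum(decisions) / len(decisions)

def bias_audit(decisions_by_group, tolerance=0.1):
    """Flag a system whose approval rates differ across groups by
    more than the tolerance (a basic demographic-parity check)."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "pass": gap <= tolerance}

# Hypothetical audit data: 1 = approved, 0 = denied.
result = bias_audit({
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1],  # 87.5% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
})
print(result["pass"])  # → False: a 50-point gap fails the audit
```

Real audits would need far more care (sample sizes, multiple fairness definitions, context), but even a test this crude shows that the question "does this system treat groups differently?" is empirically checkable before a product ships.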
[...] The Sherman Act, the key antitrust legislation in U.S. history, was never just about markets; it was about power. Legal scholar Lina M. Khan argues that the importance of antitrust law has traditionally not been understood in economic terms alone, but in political terms as well. Legislators were animated by an understanding that the “concentration of economic power also consolidates political power,” she writes. Monopolies that vest control of markets in a single person create “a kingly prerogative, inconsistent with our form of government,” declared Senator Sherman in 1890 when he proposed the bill that would become his eponymous act. “If anything is wrong this is wrong. If we will not endure a king as a political power we should not endure a king over the production, transportation, and sale of any of the necessaries of life.” Khan has observed that interpretations of antitrust law over the last half century have given more weight to consumer welfare—often understood in the form of lower prices. On this modern interpretation, platform monopolies fall outside the frame of antitrust protection.
For example, at the recent Facebook developer conference F8 the presentations were focused on getting people onto Facebook-owned apps (including Instagram and WhatsApp) and making it so they never need to leave. Facebook wants us to buy things, find a date, and apply for a job without ever leaving Facebook. If Zuckerberg gets his way, and it is hard to think that he will not, users will also be able to pay for things with Facebook’s cryptocurrency. (With a potential market of almost three billion users, this could easily become the largest traded currency in the world.) This corporate domination strategy is about creating a private version of the web, where a significant portion of our online lives is mediated through a company. “In a lot of ways Facebook is more like a government than a traditional company,” Zuckerberg has said. A lot of ways, except that, critically, its constituents are disenfranchised. These are the foundations of corporate totalitarianism, where billions of people are made subservient to the whims of a boardroom dictator.
One possible alternative is to consider socializing or nationalizing major platforms. The centralization of users is a key feature of a successful platform like Facebook, but now that the technology has been built and there is a critical mass of users, it is arguable that the benefits of public ownership might outweigh those of private ownership. It is possible to imagine a process whereby users are given control, like shareholders in a company, to appoint people to run the enterprise; alternatively, an accountable authority of some description could become responsible for managing the platform, like a public broadcaster.
Government procurement practices could be another way to undermine monopolies and clear space for newcomers with alternative approaches. Imagine, for example, that software products were required to meet certain criteria of ethical and open source design before being used by public bodies. This could foster a culture of collaboration and keep the internet open, bringing down the walls of proprietary gardens like Facebook’s.
These approaches have the potential to open up space for thinking about technological development differently. Imagine if the web was less about titans of industry jostling for domination of the market, and more about improving public participation, inclusion, and community organizing. Such revolutionary ideas from our past have renewed potential in the digital age. [...]
In part, the motivation for this book comes from observing the ahistorical nature of discussions about technology. This has, at best, led to a benign yet thoughtless form of technological optimism. “When you give everyone a voice and give people power, the system usually ends up in a really good place,” declared Mark Zuckerberg back in the early days of Facebook, with an impressive combination of naiveté and disingenuousness. At worst, and dismayingly, this sees revolutionary moments recast as cultural shifts generated by disruptive thought leaders: history understood as the march of great entrepreneurial CEOs. This kind of thinking sees the future as defined by universal progress—rather than by a messy, contradictory struggle between different interests and forces—and never driven by the aspirations of those from below. It reduces the value of human agency to entrepreneurialism and empty consumerism.
History has a role in telling us about the present, but not if we use a frame that valorizes those who currently hold positions of power. We need to reclaim the present as the cause of a different future, using history as our guide.
[...] As the planet slides further toward a potential future of catastrophic climate change, and as society glorifies billionaires while billions languish in poverty, digital technology could be a tool for arresting capitalism’s death drive and radically transforming the prospects of humanity. But this requires that we politically organize to demand something different.
These methodologies for predicting and shaping our behavior have grown more sophisticated over the first two decades of the twenty-first century. Collection and analysis of big data about people is a well-established industry. It includes the companies collecting data (miners), those trading it (brokers) and those using it to generate advertising messages (marketers), often with overlap between all three. It can be hard to obtain reliable estimates of the size of the industry, given its complexity, but one study says that by 2012 it was worth around $156 billion in the United States and accounted for 675,000 jobs. It has undoubtedly grown since then. Like slum landlords who rent out dilapidated apartments, or greedy hotshot developers who take advantage of legal loopholes to build luxury condos, companies that trade in personal data represent the sleazy side of how digital technology is impacting the real estate of our minds. This industry uses the faux luxuries of choice and convenience to entice us to part with our data, but often what they are really selling is overpriced and dodgy.
The most valuable consumer platforms have both the capacity to collect highly valuable personal data and the opportunity to use it to market to users at the most lucrative moments of their daily lives. These are the places in which the invisible hand of what I call technology capitalism is at work—between data miners and advertisers, with data on users as the commodity being traded.
There is much still to be won and lost in the battle for our online autonomy. As the next generation of web technology improves the integration of all our digital activities, allowing machines to organize even more of our lives, others will continue to learn more about us than we know about ourselves. In this context, focusing on our power as consumers over this process is a mistake: the power being exercised over us rests precisely on our being socialized as consumers. This prepares us to accept a city where every park is paved over to build freeways and every sports field and roller-skating rink is demolished to build a shopping mall. Such a city would not be a functional, let alone enjoyable, place to live. But it would be a place where data traders and retailers made a lot of money.
This is not an attempt to pin blame on evil engineers or designers. The people who made these cars were working in a specific corporate climate. Their organizations were led by ruthless executives. The leadership of companies like Ford and GM ignored safety concerns in competition with other companies that did the same. It was not even a problem confined to the auto industry; there have been many other similar scandals involving corporate indifference to the human consequences of poorly designed consumer products. These scandals are not aberrant; they occur in a context, and to avoid them happening again requires a political strategy to attack the logic that produces them.