And here’s where EFF showed its true colors. The group published a string of blog posts and communiqués that attacked Figueroa and her bill, painting her staff as ignorant and out of their depth. Leading the publicity charge was Wentworth, who, as it turned out, would jump ship the following year for a “strategic communications” position at Google. She called the proposed legislation “poorly conceived” and “anti-Gmail” (apparently already a self-evident epithet in EFF circles). She also trotted out an influential roster of EFF experts who argued that regulating Google wouldn’t remedy privacy issues online. What was really needed, these tech savants insisted, was a renewed initiative to strengthen and pass laws that restricted the government from spying on us. In other words, EFF had no problem with corporate surveillance: companies like Google were our friends and protectors. The government—that was the bad hombre here. Focus on it.
In the public sphere, meanwhile, EFF’s vision won out. Concerns about private surveillance were pushed out of the spotlight, crowded out by utopian proclamations about how companies like Google and Big Data would change the world for the better. Privacy would come to mean “privacy from government surveillance.” And corporations? Corporate intentions were assumed to be good—or, at worst, neutral. Corporations like Google didn’t spy; they “collected data”—they “personalized.”
about a bill that was meant to prevent gmail from showing targeted ads by requiring opt-in consent from all parties relevant to the email. this is mildly interesting, though perhaps a very slanted portrayal; what this makes me think is how limited privacy-centred measures are, from the outset. if you dismantle google's (anyone's) ability to show targeted ads, yes you may reduce ad click-through rates, but do you diminish power? you'll still get ads, just less targeted, possibly more obtrusive (making up for precision with volume). plus google already controls so much ad tech infrastructure. it's too late, now; the cat's out of the bag
The defeat of SOPA was naturally a time of great celebration for EFF. The group’s campaign was successful, effectively short-circuiting any possible discussion about using copyright and anti-piracy enforcement to make sure people aren’t being exploited. From 2012 forward, the bid to license and preserve online copyright has been monstrously, and misleadingly, framed as a struggle against totalitarianism, conflating Silicon Valley’s right to pirate content at will with liberty and freedom for the masses. As such, the SOPA battle was just one more successful application of EFF’s rhetorical public relations strategy: equate any attempt to regulate Silicon Valley power with totalitarianism, all while conflating the interests of regular internet dwellers with those of the plutocrats who own the internet.
this is smart (and obvs applicable beyond the EFF)
[...] Section 230 of the 1996 Communications Decency Act is a piece of deregulation that says that a platform isn't responsible for the content that a user posts.
I'd like to see it amended in one specific way. There are a lot of people posting content for which you can't necessarily make the platform responsible. But if the platforms were to make the curatorial choice to promote that content in a recommendation engine, and it reached a certain number of people, they would then have to be responsible for it. I think that would have the effect of curtailing, for example, YouTube's pushing of conspiracy theories by placing certain videos in their 'top trending' boxes. [...]
it's an algorithm, sure, but it's an algorithm that has at least some human curation, and could probably have more. have simple guidelines: don't promote it if it could be construed as hate speech or something similar under those guidelines. it might not be perfect but it would sure as hell be better than now
and ofc the human curators should be paid very well, given lots of employment security and authority and guidelines and report to someone high up
something to think about: the dream is to get rid of human action entirely, but maybe that's a dumb dream? an impractical one? there'll always be human curation needed as long as human society and culture and interaction cannot be described/generated by a finite algorithm
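a rough sketch of how the threshold rule above might look in code — the threshold number, class names, and review flags are all made up here, just to make the mechanism concrete: hosting carries no liability, but promotion past a certain projected reach requires human curator sign-off.

```python
# Hypothetical sketch of the proposed Section 230 amendment: a platform is not
# responsible for merely hosting content, but if it actively promotes content
# to a threshold-sized audience, it takes on responsibility for it -- so
# promotion at that scale should require human review first.

REACH_THRESHOLD = 10_000  # hypothetical audience size that triggers responsibility

class Content:
    def __init__(self, content_id, flagged_by_guidelines=False, human_approved=False):
        self.content_id = content_id
        self.flagged_by_guidelines = flagged_by_guidelines  # e.g. possible hate speech
        self.human_approved = human_approved                # signed off by a paid curator
        self.promoted_reach = 0

def may_promote(content, projected_reach):
    """Return True if the platform can place this in e.g. a 'top trending' box."""
    if projected_reach < REACH_THRESHOLD:
        return True  # below the threshold, promotion carries no extra liability
    # Above the threshold, promotion requires explicit human curation
    # and a clean pass against the content guidelines.
    return content.human_approved and not content.flagged_by_guidelines

video = Content("conspiracy-video", flagged_by_guidelines=True)
print(may_promote(video, projected_reach=500_000))  # False: no curator sign-off
print(may_promote(video, projected_reach=2_000))    # True: hosting-scale reach only
```

note how this preserves the hosting safe harbour entirely — liability attaches only at the point of a curatorial decision, which is exactly the point where a well-paid human reviewer (as above) can be inserted.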
Is there any way to give ourselves antibodies versus targeted persuasion?
To me this is about using everything we know about design and human psychology to fight back. For example, Amazon famously found that for every hundred milliseconds more slowly their page loads, they lose one percent of revenue. So we know that attention is directly correlated to how fast apps or pages load. Why not just turn that back around and design it whereby the more you use something you know you don't want to use, you insert a longer and longer delay, and soon you'll just ask 'why am I here anyway?' It would give your brain the chance to catch up with its impulses.
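the delay idea is easy to sketch — here's a toy version. the exponential schedule, base delay, and cap are my own guesses, not from the interview; the point is just that friction grows with each use of an app you've said you don't want to use.

```python
# Toy friction mechanism: each time you open a self-flagged "unwanted" app,
# the load delay grows, giving your brain a chance to catch up with its
# impulses. All parameters are invented for illustration.

def friction_delay(open_count, base=0.5, growth=1.5, cap=30.0):
    """Seconds to wait before loading, growing exponentially per open."""
    return min(base * (growth ** open_count), cap)

# Print the schedule for the first few opens instead of actually sleeping.
for n in range(6):
    print(f"open #{n + 1}: delay {friction_delay(n):.2f}s")
```

this is the Amazon finding turned inside out: if 100ms of extra latency measurably drives users away, deliberately added latency should work just as well at driving *yourself* away from the apps you've flagged.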
After Apple recently won the race to surpass a $1tn valuation, CEO Tim Cook emailed staff to explain, “Financial returns are simply the result of Apple’s innovation, putting our products and customers first, and always staying true to our values.”
While seductive, this story is, like the Apple store itself, a managed fiction.
Apple’s system of operation is less the result of genius than of capture and control. Semiconductors, microprocessors, hard drives, touch screens, the internet and its protocols, GPS: all of these ingredients of Apple’s immense profitability were funded through public dollars channeled into research through the Keynesian institution called the US military. They are the basis of Apple’s products, as the economist Mariana Mazzucato has shown.
The company’s extraordinary wealth is not simply a reward for innovation, or the legacy of “innovators” like Steve Jobs. Rather, it flows from the privatization of publicly funded research, mixed with the ability to command the low-wage labor of our Chinese peers, sold by empathetic retailers forbidden from saying “crash”. The profits have been stashed offshore, tax free, repatriated only to enrich those with enough spare cash to invest.
some thoughts on this (relevant for book)
what apple has done (like most successful tech companies) is figure out how to put the pieces together (with ofc some creative control, innovation) in such a way as to mint a ton of money. now, the big q is: is this good (intended behaviour), or is it bad (an aberration)?
something about profit's morality being socially constructed: if you forge money, or say forge/steal something and sell it, you're committing a crime. if you pay chinese workers a pittance to assemble devices based on tech you're cobbling together from various sources, you're innovating. this is considered legal because it's been created to be so - it's not a natural state of affairs. you could imagine alternative systems where excess profit is essentially made impossible (or even criminalised) - workers must be better paid, prices must be lower, corporate taxes must be higher. whatever the rationale for allowing companies like apple to rake in profits in exchange for monopolising supply chains, it doesn't seem worth it (not worth the costs)
[...] Apple's profits, at root, are a product of its power to control.
Apple's ability to govern its employees, supply chains, and image allows it to restrict behavior and creativity in its interests - try getting a genius to say "crash," the company to pay tax, or your music out of your iPhone. Apple's ability to assert proprietary control over public goods, from the town square to government research, allows it to generate income far in excess of anything it could hope to wring from its staff. Apple's performance of friendliness and innovation allows it to soothe customers while convincing both them and investors that it is the source of a happier, richer destiny. Apple's profit does not come from packaging the labor of the past, in other words, but from the power to organize the present in a way that makes others believe that it is inventing the future.
damn this is great
When we consider the social effects of computers in political and social life, we usually think in terms of expanded power and new possibilities. This perspective on computation permeates even our critical visions of technology. But we should also be attentive to the power that computers and the accompanying language of “systems” and “complexity” have to narrow our conception of the politically possible.
Another fallacy in the lead-up to the financial crisis was the assumption that financial markets were so efficient that participants didn’t need to do the underlying work to figure out what the securities were actually worth. Because you could rely on the market to efficiently incorporate all available information about the bond. All you need to think about is the price that someone else is willing to buy it from you at or sell it to you at.
Of course, if all participants believe that, then the price starts to become arbitrary. It starts to become detached from any analysis of what that bond represents. If new forms of quantitative trading rely on assumptions of market efficiency—if they assume that the price of an instrument already reflects all of the information and analysis that you could possibly do—then they are vulnerable to that assumption being false.
Is Uber worth $60 billion? Well, Uber is worth $60 billion because we believe someone is willing to pay $60 billion for it. But maybe Uber is worth zero. Maybe that’s the actual value of the revenues that Uber will make in the future. In the current environment, we rely on liquidity to sustain prices for financial assets. When liquidity dries out and you’re forced to rely on the things that those financial assets actually represent, however, you could see painful shocks if there’s a big disconnect between price and reality—the kind of shocks you saw during the financial crisis.
If people didn’t want to do the analysis before, they’re probably even less inclined to do it now. They figure the machine learning models are taking care of it.
Right. The machines are taking care of it. Or other market participants are taking care of it.
I might think that the share of a particular company is worth 20 dollars. But its price can go up to 100 dollars well before it drops down to 20, in which case I can’t sustain my measure of its actual value. So if all of the computers are pushing the price to 100 dollars, I might as well not do the work of figuring out what the company is actually worth because it’s somewhat irrelevant to the price that it trades at. Paraphrasing Keynes, “Markets can remain irrational longer than you can remain solvent.”
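a toy numeric version of the keynes point above — all the numbers here are invented. even a trader who is right about fundamental value gets wiped out if the market-driven price overshoots for long enough before converging:

```python
# Illustrative only: a short position opened against an "overpriced" share.
# The trader's analysis (value = 20) is ultimately vindicated, but the
# price path runs through 100 first, exhausting the margin buffer.

fundamental_value = 20.0   # what the analysis says the share is worth
entry_price = 40.0         # price at which we short one share
margin = 50.0              # cash buffer backing the position

price_path = [40, 55, 70, 85, 100, 60, 20]  # overshoots before converging

wiped_out_at = None
for price in price_path:
    mark_to_market_loss = price - entry_price  # loss on the short so far
    if mark_to_market_loss > margin:
        wiped_out_at = price  # margin call: forced out before being proved right
        break

print(wiped_out_at)  # -> 100: insolvent before the price ever reaches 20
```

the market stayed "irrational" (price 100 vs value 20) longer than the trader stayed solvent — which is exactly why doing the fundamental analysis can feel irrelevant to the price something trades at.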
[...] Beneath the specific events that I experienced, I recognised a universal story – the story of what happens when human beings find themselves at the mercy of cruel circumstances that have been generated by an inhuman, mostly unseen network of power relations. This is why there are no ‘goodies’ or ‘baddies’ in this book. Instead, it is populated by people doing their best, as they understand it, under conditions not of their choosing. Each of the persons I encountered and write about in these pages believed they were acting appropriately, but, taken together, their acts produced misfortune on a continental scale. Is this not the stuff of authentic tragedy? Is this not what makes the tragedies of Sophocles and Shakespeare resonate with us today, hundreds of years after the events they relate became old news?
When a large-scale crisis hits, it is tempting to attribute it to a conspiracy between the powerful. Images spring to mind of smoke-filled rooms with cunning men (and the occasional woman) plotting how to profit at the expense of the common good and the weak. These images are, however, delusions. If our sharply diminished circumstances can be blamed on a conspiracy, then it is one whose members do not even know that they are part of it. That which feels to many like a conspiracy of the powerful is simply the emergent property of any network of super black boxes.
not the most elegant wording but an important point