When a tech company captures an audience, it gets more than the opportunity to sell products and ideas. It also harvests the discretely quantified and collated bits of individual user data that people hand over, wittingly and unwittingly, as they stare at their computer and smartphone screens. As valuable as this information is for what it reveals about individual consumer habits and preferences, it’s even more precious in the aggregate, as so-called big data, which can be used to predict political shifts, market trends, and even the public mood. Who knows, wins, as the old military adage goes—and this is equally true in the world of business. Watch the video, click the link, fill out the form—this is the labor that tech companies turn into profits. The people who carry out this labor consider themselves customers, but they are also uncompensated workers. The process whereby eyeballs get turned into money is mysterious, but not totally opaque—just discouragingly complicated and boring.
sounds kinda similar to Christian Fuchs' arg, which i still don't really like. need to think more about how this theory fits into the larger canon, and how it relates to social reproduction theory (division between work and non-work etc). figure this out for my ad-tech essay!!
Fraud was the hot topic that year, because digital ad buyers were starting to wise up. More than two decades after the arrival of the commercialized Web, a trade group finally funded a proper scientific study on the problem of online ad fraud. The study found, among other things, that marketers were losing $6.3 billion a year to various forms of fraud, much of it staged by organized criminal networks. In outline, such scams allow fraudsters to siphon the fat from corporate ad budgets by employing bots that pose as genuine consumers to click on ads. The crooks are able to grab a piece of the money advertisers are paying out because online publishers—that is, people who run websites on which the ads appear—receive a cut of the money paid to online ad sellers by the companies that buy ads. The crooks are even able to redirect ad revenue from legitimate publishers to their own fraudulent sites through a process known as “injection,” or to generate bogus clicks by hijacking users’ browsers with automated hacking tools. Several experts at the conference told me the study lowballed its multibillion-dollar estimate of industry-wide fraud losses and that the real figure was multiples higher.
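Note to self: the money flow behind these scams reduces to a toy calculation. Every number below is hypothetical (the CPM, the network's cut, the bot share are all invented); only the mechanism, in which publishers receive a cut of what advertisers pay and bots inflate the traffic being paid for, comes from the passage above:

```python
def publisher_payout(impressions, cpm, network_cut, bot_share):
    """How much ad spend reaches a publisher, and how much of that
    payout was earned by bot traffic rather than genuine consumers.

    All parameters are illustrative, not industry figures:
      cpm         - what the advertiser pays per 1,000 impressions
      network_cut - fraction kept by the ad seller / exchange
      bot_share   - fraction of the publisher's traffic that is automated
    """
    spend = impressions / 1000 * cpm
    payout = spend * (1 - network_cut)
    return payout, payout * bot_share

# A fraudulent publisher serving 10M impressions at a $2 CPM,
# where the exchange keeps 30% and every "viewer" is a bot:
payout, siphoned = publisher_payout(10_000_000, 2.00, 0.30, 1.0)
print(f"${payout:,.0f} paid out, ${siphoned:,.0f} of it for phantom audiences")
```

Scale that up across thousands of sites and the study's multibillion-dollar estimate stops looking surprising.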
In other words, online advertising—the basis for the attention economy that fueled all speculative investment in digital media, from giants like Google on down to low-rent email marketers—was a racket. In the case of Google, an ad buyer will fill out a form saying what search keywords they’d like to associate themselves with, so that when a Google user types in, say, “soap,” they might see an ad for Irish Spring. On Facebook, it would work a little differently. There, ad buyers are able to specify a certain demographic they want to reach—say, expectant mothers with household incomes of $80,000 a year and up, or people with bachelor’s degrees who drive secondhand cars in the Cleveland, Ohio, metro area. This kind of targeting is the core promise of digital advertising. But during the course of the Ad:Tech talks, I came to see that the promise was a sham. The old knock on print and broadcast advertising was that half of ad budgets were wasted, but no one knew which half—the ads went out to everyone. Online ad targeting was supposed to change that by essentially surveilling users and letting advertisers see who actually viewed their ad and did or didn’t buy their product as a result. In reality, though, the new data collection tools didn’t work nearly so well as was promised. A full half of ad budgets was still getting flushed down the toilet.
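The two targeting models described here, keyword matching on search and attribute matching on social, can be sketched as a toy ad matcher. All campaign names and audience attributes below are invented; this is not either company's actual system, just the shape of the promise:

```python
# Toy versions of the two targeting models described above.
# All campaigns, keywords, and attributes are invented for illustration.

search_campaigns = {
    "soap": ["Irish Spring"],
    "sneakers": ["RunFast Co."],
}

social_campaigns = [
    # (advertiser, required audience attributes)
    ("StrollerWorld", {"expecting": True, "income_min": 80_000}),
    ("UsedAutoTrader", {"degree": "bachelor", "metro": "Cleveland"}),
]

def search_ads(query):
    """Keyword model: the buyer picks search terms, the user types one."""
    return search_campaigns.get(query, [])

def social_ads(profile):
    """Demographic model: the buyer specifies attributes the user must match."""
    matches = []
    for advertiser, spec in social_campaigns:
        ok = all(
            profile.get("income", 0) >= v if k == "income_min"
            else profile.get(k) == v
            for k, v in spec.items()
        )
        if ok:
            matches.append(advertiser)
    return matches

print(search_ads("soap"))                                 # ['Irish Spring']
print(social_ads({"expecting": True, "income": 95_000}))  # ['StrollerWorld']
```

The sham isn't in the matching logic, which is trivial; it's in whether the profile data feeding it bears any relation to a real, purchasing human.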
i mean tbf the prices are arbitrary anyway, but yes, i see his point
(reminds me of me clicking my own ads on TROD, circa 2005-2007)
“It’s all about getting the chart that goes up,” a disaffected social media marketing expert told me over drinks. “There’s a whole industry devoted to making charts that go up.” To illustrate his point, he pointed me to the Guardian newspaper’s now-defunct “partner zones” program, which afforded large institutional advertisers the opportunity to pay massive sums to the newspaper in exchange for the right to post promotional “news” stories on its website—a form of “sponsored content,” in the industry’s parlance. To create the all-important “chart that goes up,” the clients would then pay Facebook to generate traffic to their advertorials. Ostensibly this traffic came through “organically” promoted links targeting genuine potential customers who are so enchanted by the serendipitous appearance of an online advertorial that speaks to their personal desires that they make a conscious choice to click, read, like, and share—or so the story goes. However, the expert, who was a friend of mine, had noticed that a suspiciously high percentage of the paid traffic came from far-flung, low-wage countries such as Bhutan. So the new model supporting digital media was for floundering corporations to pay to place stories about how awesome they were, which publishers would then promote by buying phantom readers. “Then the advertisers can go to the boss and say, ‘Look, we got an article in the Guardian,’” my friend the marketing cynic said. Those phony measures of success supplied fodder for still more charts that went up—these ones for internal consumption, and used to justify the marketing department budget to higher-ups.
In Ghazi’s telling, each tech boom began with a constant and a variable. The constant was easy financing, whether from government-subsidized borrowing or unsophisticated investors. The variable was whatever Silicon Valley was trying to sell at the time. In the early nineties, the boom was about hardware. IBM and Apple had found a way to commercialize military-funded computer research by churning out personal desktop computers and accessories. “Back then, Silicon Valley was small,” Ghazi said. “It was focused almost exclusively on the technology with very little thought on how to market it.” Then, in the late nineties, came another commercial boom, also underwritten by government research: the internet. This time, something changed. Wall Street got involved.
“All of a sudden, Silicon Valley got the first taste of the big money,” Ghazi said. Certain venture capital funds, such as Kleiner Perkins and Sequoia, grew large and powerful—even more so after the bubble popped in 2000. The big crash cleared out the competition. While industry down cycles drove lots of people out of business, the surviving players claimed even more ground. This pattern went back a long way. In fact, as my subsequent research revealed, it went back to the beginning.
The privatization led by Clinton and Gore enabled the dot-com boom of the 1990s as well as the bust that followed. Once more, the best-connected insiders emerged from the chaos in an even stronger position. Ghazi was working as an investment analyst during the boom, but his former company promoted him to VC only after the bubble popped. This meant he missed his first chance at easy money. In 2005, there was another great inflation, this one fueled by social media companies like Facebook. The dot-com boom had lasted only five years. The social media boom—which was called, for a while, Web 2.0, and closely followed Google’s massive 2004 initial public stock offering—had been going strong for more than a decade by the time I met Ghazi. Some things hadn’t changed since he first arrived in the Valley. It ran on the same old mix of government-subsidized research, cheap labor, and a regulatory outlook inherited from the Ronald Reagan era that permitted corporations to unload the costs of doing business on customers, employees, taxpayers, and the ecosystem.
nothing especially insightful, just worth noting (even if he doesn't talk about neoliberalism using that word)
[...] Ghazi shook his head. Neither was having revenue, or customers. In fact, the last thing that mattered in Silicon Valley was technological innovation. Marketing came first and foremost. The actual products of the tech industry—computers and software—were less important than the techniques used to sell those products, and to sell shares in the companies that made them. The portfolios of venture capital firms were composed largely of go-nowhere companies built on bluster. There was a paucity of genuine innovation among these companies, because incremental advances in technology were less reliable generators of profit than, say, finding clever ways to rip people off, or exploiting regulatory loopholes. The overwhelming majority of VC-backed startups were destined to flame out quickly—or, at best, to sputter along for a few years producing modest annual returns of, say, 1 percent. This was not necessarily a problem, at least from the investors’ point of view. Ghazi explained that 60 percent of a venture fund’s earnings typically come from 10 percent of its investments, and “everything else is crap.” Thus financiers were almost guaranteed to profit, eventually. The odds were much worse for entrepreneurs, who were almost certainly doomed even if they secured VC funding. A 2012 Harvard Business School study of two thousand venture-backed companies found that more than 95 percent failed. “You don’t hear a lot about the failures,” Ghazi said.
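Ghazi's portfolio arithmetic can be made concrete with a toy fund. The multiples below are invented; only the shape, with a couple of breakouts carrying a long tail of duds, reflects his 60/10 rule of thumb and the Harvard failure figures:

```python
# A toy 20-company venture fund. Each entry is cash returned per $1 invested.
# The individual multiples are invented for illustration; the point is the
# concentration of returns in a handful of breakouts.
multiples = [35.0, 25.0,                 # the two breakouts (top 10% of bets)
             2.0, 1.5, 1.2,              # modest survivors
             0.5, 0.5, 0.3, 0.2, 0.2,    # sputterers returning pennies
             0.1, 0.1, 0.0, 0.0, 0.0,    # outright failures
             0.0, 0.0, 0.0, 0.0, 0.0]

invested = float(len(multiples))         # $1 per company
returned = sum(multiples)
top_decile = sum(sorted(multiples, reverse=True)[:len(multiples) // 10])

print(f"fund multiple: {returned / invested:.1f}x")
print(f"share of earnings from top 10% of bets: {top_decile / returned:.0%}")
```

Even with eight outright zeros and most of the rest underwater, the fund as a whole returns several times its capital (here the two breakouts alone supply roughly 90 percent of the cash), which is why the odds look so different to financiers and to founders.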
He was right. Techies only talked about their past failures as a necessary prelude to their present success. But most failures were permanent, and founders didn’t easily bounce back. I contemplated those numbers from the Harvard study. If 95 percent of startups failed, that meant 5 percent of startups received most of the attention from inside and outside the industry. Which meant that the mediated image of Silicon Valley bore little resemblance to the reality. I was living the reality. The reality was that almost everyone was a loser like me, trying to break through.
We were chum. Fodder. Marks. Ghazi shared with me his pity for “fresh-off-the-boat” entrepreneurs who lacked elite connections and still believed the hype about meritocracy, opportunity, collaboration, and geek camaraderie. As Ghazi saw it, one single factor determined who even got the chance to join the 5 percent of winners: “It’s about who you know,” he said. “Go to Stanford, and if you have a bad idea, it will get funded.”
One company, which the Wall Street Journal called “The Epitome of a Stanford-Fueled Startup,” encapsulated in every respect the sham of the Silicon Valley meritocracy, from its charmed beginnings to its ignominious downfall. This company, called Clinkle, secured investors before settling on a product. Clinkle was, in the words of its founder, Lucas Duplan, “a movement to push the human race forward,” but beyond that no one seemed quite sure what the company was all about. Duplan was a nineteen-year-old Stanford computer science major and an insufferable showboat. No need to dwell on Duplan’s shortcomings, however—the important thing is that his academic adviser was Stanford president and Google board member John Hennessy. Along with Hennessy, several professors backed Duplan’s charge into the private sector, endorsing what the Wall Street Journal called “one of the largest exoduses” in departmental history. More than a dozen students abandoned their studies to work for Duplan, who rented a house to serve as Clinkle’s headquarters-cum-dormitory with money invested by his parents and a VC firm, Highland Capital.
With the prestige and power of Stanford’s leadership behind it, and still without a solid business plan, Clinkle raised $25 million in seed money to develop some sort of app that would exist somewhere in the “mobile-payments space.” The predictable squandering of that impressive sum was chronicled with due skepticism and schadenfreude on Gawker’s Valleywag blog and elsewhere, though many a suck-up rose to Clinkle’s defense. Employees were resigning in frustration even before pictures emerged of Duplan posing like P. Diddy with handfuls of cash. Layoffs followed. Panicked investors called in a series of experienced managers as “adult supervision,” one of whom quit within twenty-four hours. Long overdue and well over budget, Clinkle eventually launched a digital payments service, and later pivoted to a digital twist on an old-fashioned lottery. Clinkle became a punch line and Duplan a pariah. But the real blame belonged to some members of the Stanford administration and the sheeplike VCs of the Valley. It seemed to me that they were the ones who were seeking to exploit the bountiful energy of relatively naïve tuition-paying kids to make a fast buck on pointless, unworkable, and otherwise dubious investment schemes. The dropout entrepreneurs were just eager saps who, when handed shovels, dug holes for themselves.
Along with the cloistered military agencies that underwrote the research for the smartphone, the personal computer, and the internet, these institutions—yes, even NASA—shared a set of overarching goals: to extend the reach of machines to all spheres of human activity; to ensure those machines remained under private control, unaccountable to the public at large; to automate the countless individual political and economic decisions that constitute a nominally free society; and, oh yes, to get richer than the Medicis. The tech tycoons who ruled this land elevated their profane designs with a sacred mythography. The Singularity was its theological expression, but they wrote their own history, too, as I had seen at the Computer History Museum. Their preferred discourse was reverent contemplation of the lofty arc of scientific progress and homilies on the fortitude of a few pioneers of industry—Father Gates, Saint Musk. What time had they for the vulgar problems of the misfortunate many: housing, wages, police, debt, drugs, disease? Here was the dream of a new order that was at once futuristic and antiquated, a feudal fantasy played out on a sci-fi stage that looked deceptively like any boring stretch of asphalt in America.
It is no wonder the Singularitarian fantasies have captured the imaginations of the world’s most zealously self-interested businesspeople: these visions promise ultimate, permanent power. The stated ambitions of America’s tech oligarchs are almost comically solipsistic—endless lifespans, superhuman powers, personal hyperspeed transport. They truly imagine themselves as a superior race. And while it is unlikely that they will attain everything they imagine, it is unfortunately true that this hyper-elite class will reap the benefits of any new technologies society develops, while the costs will fall, as ever, on the rest of us. This will not be a situation without precedent. It’s exactly how things were with the rotten kings of yore. But if history teaches us one thing, it’s that complex problems often have simple solutions. Off with their heads.
lol damn