Welcome to Bookmarker!

This is a personal project by @dellsystem. I built this to help me retain information from the books I'm reading.

Source code on GitHub (MIT license).



[...] Given Bourdieu's central role in developing practice theory, it would be surprising for him to suddenly come down on the side of structural determinism, arguing that people are simply recipients of large-scale social forces. Instead, Bourdieu's theory of taste hinges on his understanding of the habitus - the set of embodied dispositions that people acquire as they are socialized and that they exercise when making judgments of taste (among other things). For Bourdieu, the concept of habitus provides an alternative to visions of people as either free-willed autonomous subjects or unthinking vehicles of structural dynamics (see Sterne 2003, 376). People acquire the sensibilities that constitute their habitus in worlds full of the myriad entities Hennion describes: a person's taste in music is going to be shaped by how the people around them act, the forms they encounter music in, and a host of other situational factors. Because people in similar social positions grow up under similar conditions, they end up with a similar habitus, and the mystery of how tastes come to mirror social structure is at least partly resolved (Lizardo 2014, 346).

—p.11 Introduction: Technology with Humanity (1) by Nick Seaver

Conventional ways of thinking about access mislead researchers into thinking that, once they have cracked through the black box's wall, knowledge will be waiting there for the taking. But access is not an event; it is the ongoing navigation of relationships. Access has a texture, and this texture -- patterns of disclosure and refusal -- can be instructive in itself. When I interviewed junior employees, they were often worried about what they were allowed to say to me, even when I had signed the same nondisclosure agreements they had. In contrast, CEOs and founders usually spoke quite freely, sure in their ability to define the parameters of the permissible and to draw the corporation's limits into strict, punishing existence at will. The social structure of the firm shimmered into view through these interactions.

—p.15 Introduction: Technology with Humanity (1) by Nick Seaver

Chapter 6, "Parks and Recommendation," builds on that discussion of space by examining a set of spatial metaphors commonly used by the makers of music recommendation: pastoral metaphors that figure technical workers as gardeners or park rangers who tend to the music space and the listeners who travel within it. Many critics have argued that such metaphors naturalize the work of machine learning, mystifying how it actually works. I offer a different interpretation, suggesting that developers find pastoral metaphors useful because they describe an ambivalent form of control: while the people who manage the music space are aware that they determine a good deal of its structure, they also understand their work as tending to lively data sources beyond their influence. Analyzing these metaphors helps us interpret how the makers of music recommendation think about their power and responsibility in relation to the objects of their labor and to music more generally.

—p.21 Introduction: Technology with Humanity (1) by Nick Seaver

We have no shortage of explanations that place the blame at the foot of capitalism itself: in the ceaseless production of desire that capital demands. The musicologist Eric Drott (2018c, 333), for instance, convincingly argues that the promotional materials for music streaming services "transfigure plenitude into a form of lack." These services provide users access to the catalog, and then suggest that the size of the catalog is a problem that they can solve for those same users, keeping the wheels of capital moving. The cultural critic Jonathan Cohn (2018, 50) follows a similar line of reasoning, arguing that recommender systems operate in "bad faith," framing choice as a "burden" to be relieved rather than as the location of users' agency, which recommendations diminish.

These explanations are not wrong; they reach for large-scale dynamics of desire and production. But they do not capture the local reasoning of people working on these systems, who feel the reality of overload in their everyday lives and come to understand their work as a form of care for users who are similarly beset by the paradox of choice. If we want to understand the logic of people working in these systems, we cannot reduce their efforts at understanding the world to "bad faith" or the epiphenomena of capitalist machinery. This does not mean that the makers of music recommendation can't be wrong about themselves, their users, or the cultural dynamics they try to understand. They may indeed be caught up in large-scale processes in which their ultimate function is the ongoing production of consumer desire. But the political economy of the music industry does not directly determine how people working in these settings make decisions or think about their work.

My goal here is to understand how recommender systems make sense to their makers -- how they work, who they're for, why they exist. To do this, we need to understand overload. Overload haunts the utopian fantasies of the information age, lurking beneath dreams of exponential growth and threatening to turn computing's successes into failures. It feels real, it feels new, and it feels tightly bound up with contemporary technologies of media circulation. And yet as we've already seen, its newness is old. What seemed like a natural response to on-demand streaming in the 2010s also seemed like a natural response to the ocean of CDs in the 1990s.

—p.29 Too Much Music (22) by Nick Seaver

Mythological discourse is conventionally understood to be concerned with form over content, abstract types over concrete instances. Roland Barthes (1972, 143) has argued that myths' abstraction lets them function as "depoliticized speech": by tying together timeless cosmic orders and ordinary historical experience, myths naturalize the archetypes and structures they contain, giving historical contingencies "a natural and eternal justification, ... a clarity which is not that of an explanation but that of a statement of fact." In computer science, abstraction is also a central practice and value, which identifies underlying coherence by disregarding details considered extraneous. To suggest that collaborative filtering and prehistoric ant trails are the same kind of thing requires just such an abstraction, shedding the many features that might distinguish them in favor of a timeless, underlying unity. Critics of computer science have, echoing Barthes, suggested that this commitment to abstraction has made the field "antipolitical" -- aggressively dismissive of historical particularity [...]

We can think of these myths as scaling devices. They establish the scope of discussion, indicating that we are not talking about minor acts of coding but about enduring problems of existence. If the ordinary work of programming seems boring -- like staying put all day and typing -- these stories reimagine telling computers what to do as transformative action on the largest possible scale. As the linguistic anthropologist Judith Irvine (2016, 228) argues, "scale-climbing" is an ideological operation: by claiming the broader view, people try to encompass one another within their own explanatory frameworks (see Gal and Irvine 1995). Epochal software stories set human species-being within a computational frame, recasting practically all social activities as precursors to their narrators' technological projects. David Golumbia, in The Cultural Logic of Computation (2009), has adapted a term from the philosophy of mind -- "computationalism" -- to describe this expansionist tendency in the rhetoric of computing, which enables software to alternately lay claim to the future and the past: new companies figure themselves as both innovators and inheritors of timeless truths.

Identifying these myths as myths is a first step toward reimagining our situation, making received wisdom contestable by reinstalling it in historical time (see Bowker 1994). We can locate overload in concrete situations, with all the particularities that abstraction scrapes away. But we can also analyze how the myth works, as a story that is intellectually productive and world-enframing for the people who tell it. In anthropological terms, we can take myths not as falsehoods to be disproved but as keys to their tellers' cosmology: their worldview, their sense of the order of things, their background theory of society and of existence more generally.

—p.33 Too Much Music (22) by Nick Seaver

[...] Across disciplines undergoing "cognitive turns," key concepts were thus reinterpreted as filtering methods, which protected limited minds from overload. Cognitive anthropology, for instance, reconceptualized culture and classification as an adaptive technique for coping with an overwhelming world: "We classify because life in a world where nothing was the same would be intolerable. It is through naming and classification that the whole rich world of infinite variability shrinks to manipulable size and becomes bearable" (Tyler 1969, 7).

—p.43 Too Much Music (22) by Nick Seaver

Today [...] bandwidth constraints are largely considered a thing of the past, and recommender systems are no longer commonly framed as techniques for optimizing the bandwidth of a computer network. When people suggest that recommender systems are necessary to manage an overwhelming amount of information, they are not making a claim about digital computers. It would be technically easy, for instance, for Facebook to simply present every update from a user's friends in chronological order. The problem with this much-requested feature, Facebook suggests, is human bandwidth: users would be overwhelmed.
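Just to make the contrast concrete for myself: a chronological feed really is trivial to produce, so the case for ranking has to rest on human attention rather than machine limits. A rough, purely illustrative sketch in Python (the post structure and the scoring are hypothetical, not anything Facebook actually does):

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime
    predicted_interest: float  # stand-in for whatever a ranking model would output

def chronological_feed(posts: list[Post]) -> list[Post]:
    # The "technically easy" option: newest first, nothing filtered or reordered.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def ranked_feed(posts: list[Post]) -> list[Post]:
    # The recommender's framing: order (and implicitly filter) by relevance,
    # on the premise that an unranked stream would overwhelm the reader.
    return sorted(posts, key=lambda p: p.predicted_interest, reverse=True)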


—p.45 Too Much Music (22) by Nick Seaver

[...] he has just finished an internship at a large software company known for such techniques, and he is convinced that feature learning is the future. He derisively calls feature representations like MFCCs "handcrafted," suggesting that they are archaic and ready to be replaced.

"handcrafted" (derogatory) is a good idea

—p.106 Hearing and Counting (95) by Nick Seaver

The instructor asks us to look for patterns in the space, and the students find one: the movies seem to have separated by genre. Romances are on one side, while action movies are on the other. A student asks how the algorithm knew about genre, when the input only contained rating data. "It seems like magic, I know," the instructor replies. "Magically, we've measured two secret things about these movies." Those two secrets are the two dimensions of our space, although we could have instructed the computer to generate more of them. Our horizontal dimension appears to correspond to genre, as though information about the movies' genres were hidden in the data, waiting to be revealed. The significance of the vertical dimension is less obvious, although the instructor suggests that it may reflect how "serious" the movies are.

What made matrix factorization "magical," like McDonald's alchemy, was its ability to uncover such cultural secrets from data that appeared to be about something else. We were being instructed not only in the fundamental spatiality of data but in what the sociologist of science Catelijne Coopmans (2014) has called "artful revelation" -- the rhetorical use of visualization to make manifest hidden patterns in data. Revelations like the one performed in class are a common way to claim authority for data analytic practices. Coordinate spaces are often presented as ways to make data intuitively understandable by people -- an easy means to see, rather than to calculate, the similarities among a set of data points. The geographers Martin Dodge and Rob Kitchin (2001, 30) call images like these "spatializations," to distinguish them from proper "maps," which they reserve for visualizations that have geographical referents. But, as we will see, this distinction is not a significant one for most people working in machine learning. Spatializations effectively summarize differences along various axes into singular, readily comparable distances, appealing to intuitions about the relationship between distance and similarity. When visualized, these spaces are almost always described as "maps."
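A rough sketch of what a classroom demo like this might look like: factor a toy user-by-movie rating matrix into two latent dimensions and read the movie coordinates as a "space." The ratings and movie names are made up, and truncated SVD stands in for whatever factorization method the class actually used:

# Toy version of the demo described above: a small user-by-movie rating matrix
# factored into two latent dimensions, with movies plotted as points in that space.
import numpy as np

movies = ["Romance A", "Romance B", "Action A", "Action B"]

# Rows are users, columns are movies; entries are hypothetical 1-5 star ratings.
ratings = np.array([
    [5, 4, 1, 2],
    [4, 5, 2, 1],
    [1, 2, 5, 4],
    [2, 1, 4, 5],
], dtype=float)

# Center each movie's ratings so the factorization captures relative preferences.
centered = ratings - ratings.mean(axis=0)

# Truncated SVD: keep the two strongest latent dimensions ("two secret things").
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
movie_coords = Vt[:2].T * s[:2]  # one 2-D point per movie

for name, (x, y) in zip(movies, movie_coords):
    print(f"{name:10s}  dim1={x:+.2f}  dim2={y:+.2f}")

# "Spatialization": similarity read off as distance between points.
dist = np.linalg.norm(movie_coords[0] - movie_coords[2])
print(f"Distance between {movies[0]} and {movies[2]}: {dist:.2f}")

With ratings like these, the first dimension splits the romances from the action movies: the "secret" recovered from nothing but rating data, as in the class.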

—p.122 Space Is the Place (116) by Nick Seaver

A grad student told me a joke that captured the impossibility of developing an intuition for machine learning's highly dimensional spaces, attributed to one of the field's most senior figures, Geoffrey Hinton: "How do you deal with a fourteen-dimensional space? To deal with a fourteen-dimensional space, visualize a three-dimensional space and say 'fourteen' to yourself very loudly. Everyone does it."

—p.126 Space Is the Place (116) by Nick Seaver
