Interpretability, or Learning to Listen to Algorithms is the first session of the Fall conference programme organized by LaDHUL (Laboratoire de cultures et humanités digitales de l'Université de Lausanne).
With Nick Seaver, assistant professor in the Department of Anthropology at Tufts University, where he also teaches in the Science, Technology, and Society program. He is writing a book about the people who make music recommender systems and how they think about culture.
Abstract: How do algorithms work? As algorithmic systems—from Google's search engine to Spotify's recommender—have become objects of popular concern, this question has proven vexing. Not only are these black boxes hidden from public view and illegible to the untrained eye, they are also complex, distributed systems. With the advent of techniques like deep learning, algorithmic systems are often described as "uninterpretable"—so complex that it is impossible, even for insider experts, to explain their outputs. And yet, engineers, like ordinary users, are tenacious interpreters, eager to make sense of algorithmic behavior, regardless of its internal complexity. In this talk, I draw on ethnographic fieldwork with developers of algorithmic music recommenders in the US to theorize "interpretability," describing how engineers interpret supposedly uninterpretable systems. Music and listening offer useful models for making sense of this interpretive work, for the engineers as well as outside critics. Developers are not uniquely able to "see" inside algorithmic black boxes but rather learn to listen to them, and their own musical sensibilities are knit into the supposedly rational and quantitative operations of algorithmic systems.
- Time and place: 3:15 pm to 5 pm, Geopolis, room 2207
- Free admission, no registration required
- Full programme of the seminars