Writing
Journal Articles
Pereira, G., & Moreschi, B. (2023). Living with images from large-scale data sets: A critical pedagogy for scaling down. Photographies. doi:10.1080/17540763.2023.2189285.
The emergence of contemporary computer vision coincides with the growth and dissemination of large-scale image data sets. The grandeur of such image collections has raised fascination and concern. This article critically interrogates the assumption of scale in computer vision by asking: What can be gained by scaling down and living with images from large-scale data sets? We present results from a practice-based methodology: an ongoing exchange of individual images from data sets with selected participants. The results of this empirical inquiry help to consider how a durational engagement with such images elicits profound and variously situated meanings beyond the apparent visual content used by algorithms. We adopt the lens of critical pedagogy to untangle the role of data sets in teaching and learning, thus raising two discussion points: First, regarding how the focus on scale ignores the complexity and situatedness of images, and what it would mean for algorithms to embed more reflexive ways of seeing; Second, concerning how scaling down may support a critical literacy around data sets, raising critical consciousness around computer vision. To support the dissemination of this practice and the critical development of algorithms, we have produced a teaching plan and a tool for classroom use.
Pereira, G., & Raetzsch, C. (2022). From Banal Surveillance to Function Creep: Automated License Plate Recognition (ALPR) in Denmark. Surveillance & Society. doi:10.24908/ss.v20i3.15000.
This article discusses how Automated License Plate Recognition (ALPR) has been implemented in Denmark across three different sectors – parking, environmental zoning, and policing. ALPR systems are deployed as a configuration of cameras, servers, and algorithms of computer vision for automatically reading and recording license plates of passing cars. Through digital ethnography and interviews with key stakeholders in Denmark, we contribute to the fields of critical algorithm and surveillance studies with a concrete empirical study on how ALPR systems are configured according to user-specific demands. Each case gives nuance to how ALPR systems are implemented: (1) how the seamless charging for a “barrier-free” parking experience poses particular challenges for customers and companies; (2) how environmental zoning enforcement through automated fines avoids dragnet data collection through customized design and regulation; (3) how the Danish Police has widened its dragnet data collection with little public oversight and questionable efficacy. We argue that ALPR enacts a form of “banal surveillance” because such systems operate inconspicuously under the radar of public attention. As the central analytic perspective, banality highlights how the demand for increasing efficiency in different domains makes surveillance socially and politically acceptable in the long run. Although we find that legal and civic modes of regulation are important for shaping the deployment of ALPR, the potential for function creep is embedded into the very process of infrastructuring due to a lack of public understanding of these technologies. We discuss wider consequences of ALPR as a specific and overlooked instance of algorithmic surveillance, contributing to academic and public debates around the embedding of algorithmic governance and computer vision into everyday life.
Pereira, G. (2022). Apple Memories and Automated Memory-Making: The Networked Image Inside the iPhone Chip. Digital Culture & Society. doi:10.14361/dcs-2021-070210, or Open Access pdf.
In 2016, Apple introduced for the first time what it called “advanced computer vision” (Federighi 2016) to organize and curate users’ images. What was particularly new about Apple Memories, and what became its key selling point, was that all computation would happen inside the user’s device, relying on the privacy afforded by Apple’s widely used smartphone, the iPhone. Even if it operates on-device and in a privacy-preserving manner, Apple Memories still relies on the datafication of the personal archive via the automation of image analysis (computer vision) and personalization. As I argue in this article, control of the networked image today goes beyond buying and selling data for behavioural analysis and advertising. Building upon Mackenzie & Munster’s (2019) notion of “platform seeing,” my argument is that infrastructures of machine seeing are important means for the consolidation of platform power. To engage with this issue, this article offers a case study of Apple Memories and its automated memory-making, focusing on three dimensions: the vision of Apple Memories; how this vision gets infrastructured through the iPhone hardware (the A11 Bionic chip); and how Apple Memories engages users in automated memory-making. By bridging these facets, I discuss the consequences Apple Memories raises for notions of privacy and surveillance capitalism, suggesting that we think instead of platform and infrastructural power. The centralization of power in Apple’s hands means that, although it may not “own” the data, it retains a monopoly over how the networked image will be computed and distributed as data. The framing of computer vision as an intimate, always-on, and personal way of remembering is part of a wider goal of converting such personal data into user engagement, generating even more data, and ultimately accumulating infrastructural power across Apple’s “walled garden” digital ecosystem.
Grohmann, R., Pereira, G., Guerra, A., Abílio, L.C., Moreschi, B., & Jurno, A. (2022). Platform scams: Brazilian workers’ experiences of dishonest and uncertain algorithmic management. New Media & Society. doi:10.1177/14614448221099225, or see the Open Access Accepted Manuscript.
This article discusses how Brazilian platform workers experience and respond to platform scams through three case studies. Drawing from digital ethnographic research, vlogs/interviews of workers, and a literature review, we argue for a conceptualization of “platform scam” that focuses on multiple forms of platform dishonesty and uncertainty. We characterize scam as a structuring element of the algorithmic management enacted by platform labor. The first case engages with when platforms scam workers by discussing Uber drivers’ experiences with the illusive surge pricing. The second case discusses when workers (have to) scam platforms by focusing on Amazon Mechanical Turk microworkers’ experiences with faking their identities. The third case presents when platforms lead workers to scam third parties, by engaging with how Brazilian click farm platforms’ workers use bots/fake accounts to engage with social media. Our focus on “platform scams” thus highlights the particular dimensions of faking, fraud, and deception operating in platform labor. This notion of platform scam expands and complexifies the understanding of scam within platform labor studies. Drawing on workers’ experiences, we engage with the asymmetries and unequal power relations present in the algorithmic management of labor.
Heemsbergen, L., Treré, E., & Pereira, G. (2022). Introduction to algorithmic antagonisms: Resistance, reconfiguration, and renaissance for computational life. Media International Australia. doi:10.1177/1329878X221086042, or see the Open Access Accepted Manuscript.
“Fuck the Algorithm! Fuck the Algorithm! Fuck the Algorithm!” The chant of hundreds of UK school students echoed across the streets in front of the UK's Department of Education. Students, livid, were protesting their chances of university entrance being snuffed out or otherwise manipulated by ‘the algorithm’. The exasperation focussed on the Office of Qualifications and Examinations Regulation's automated process for estimating marks for final exams that were cancelled due to COVID-19: ‘the algorithm’ deployed had taken predictable biases (Noble, 2018) and delivered inflationary marks to private school students, while deflating the marks (and dreams) of students at government schools and of lower socio-economic status. This example from 2020 is just one that shows how problematics of algorithmic governance pervade society. Yet the focus here on algorithmic antagonisms is not meant to comment on being oppressed by algorithms, or resisting their power through protest, or even being resigned to their effects. Instead it asks what happens when energy is put into fucking with algorithms to produce political movement within societies that are increasingly governed by them? Specifically, this introduction and the papers that follow in the special issue turn the critical study of data to consider algorithms’ antagonistic and tactical uses.
Pereira, G., Moreschi, B., Mintz, A. & Beiguelman, G. (2022). We’ve always been antagonistic: Algorithmic resistances and dissidences beyond the Global North. Media International Australia. doi:10.1177/1329878X221074792, or see the Open Access Accepted Manuscript. Also published in Portuguese.
In this article we suggest that otherwise unacknowledged histories of technological antagonism can help us (artists, activists, and researchers) to more deeply appreciate the foundations on which we develop activist resistances to contemporary computing. Starting from the case of Brazil, our goal is to bridge historical and contemporary perspectives by: (1) discussing the everyday practices of technological dissidence in the country, and how appropriation has been used to resist unequal power structures; (2) presenting how particular tactical ruptures in the history of art and media activism have sought to contaminate and re-envision networked technologies; (3) exploring the particular notions of algorithmic antagonism that two contemporary projects (PretaLab/Olabi and Silo/Caipiratech) advance, and how they relate to their historical counterparts. In sum, these different threads remind us that we’ve always been antagonistic, and that recognizing a longer genealogy of technological dissidences and ruptures can strengthen current practices against algorithmic oppressions.
Pereira, G., Schorr, S., & de Oliveira, C. (2021). Algorithmic Sea: The Fluid Critical Making and Seeing of Color. the digital review. doi:10.7273/z7pt-h352.
Transparent. Translucent. Opaque. Reflective. What is the color of water? Water is a reflecting pool for both the human and computational understanding of color. However, these perceptions are always partial and transforming. Through the critical, technical, and artistic making of our installation The Color of Water: Algorithmic Sea, we explore the sociotechnical perception of color, especially in how it is actualized through computer systems and their algorithms. If, as described by Carolyn Kane (2014), color is not about vision, but about a “system of control used to manage and discipline perception and thus reality,” then how do computers see color? Our installation seeks to spark reflection about seeing color inside, through, and in relation to the algorithmic sea.
Pereira, G., Bojczuk Camargo, I.B., & Parks, L. (2021). WhatsApp Disruptions in Brazil: A content analysis of user and news media responses, 2015-2018. Global Media and Communication. doi:10.1177/17427665211038530, or see the Open Access Preprint.
Brazilians have adopted WhatsApp as a national media and communication infrastructure over the past several years, although it is controlled by its private US-based owner, Facebook. This article explores the diverse, contentious and influential roles the app played in the country during disruptions to its use from 2015 to 2018. Using content analysis, we critically engage with user-generated memes and news media coverage responding to these disruptions. In these cases, Brazilians self-reflexively questioned the app’s role in their everyday lives and country, reassessing what it means to rely on a national infrastructure owned by an unaccountable global media conglomerate.
Pereira, G. (2021). Towards Refusing as a Critical Technical Practice: Struggling With Hegemonic Computer Vision. A Peer-Reviewed Journal About. doi:10.7146/aprja.v10i1.128185.
Computer Vision (CV) algorithms are overwhelmingly presented as efficient, impartial, and desirable further developments of datafication and automation. In reality, hegemonic CV is a particular way of seeing that operates under the goal of identifying and naming, classifying and quantifying, and generally organizing the visual world to support surveillance, be it military or commercial. This paradigm of Computer Vision forms a ‘common sense’ that is difficult to break from, and thus requires radical forms of antagonism. The goal of this article is to sketch how refusing CV can be part of a counter-hegemonic practice – be it the refusal to work or other, more creative, responses. The article begins by defining hegemonic CV, the ‘common sense’ that frames machine seeing as neutral and impartial, while ignoring its wide application for surveillance. Then, it discusses the emergent notion of refusal, and why critical technical practice can be a useful framework for questioning hegemonic sociotechnical systems. Finally, several potential paths for refusing hegemonic CV are outlined by engaging with different layers of the systems’ 'stack.'
Pereira, G., & Moreschi, B. (2020). Artificial Intelligence and Institutional Critique 2.0: Unexpected Ways of Seeing with Computer Vision. AI & Society. doi:10.1007/s00146-020-01059-y, or see the Open Access Accepted Manuscript.
During 2018, as part of a research project funded by a Deviant Practice Grant, artist Bruno Moreschi and digital media researcher Gabriel Pereira worked with the Van Abbemuseum collection (Eindhoven, NL), reading its artworks through commercial image-recognition (Computer Vision) Artificial Intelligences from leading tech companies. The main takeaways were: somewhat as expected, AI is constructed through a capitalist and product-focused reading of the world (values that are embedded in this sociotechnical system); and this process of using AI is an innovative way of doing institutional critique, as AI offers an untrained eye that reveals the inner workings of the art system through its glitches. This research aims to regard these glitches as potentially revealing of the art system, and even poetic at times. We also look at them as a way of revealing the inherent fallibility of the commercial use of AI and Machine Learning to catalogue the world: it cannot comprehend other ways of knowing about the world, outside the logic of the algorithm. But, at the same time, due to their “glitchy” capacity to level and reimagine, these faulty readings can also serve as a new way of reading art; a new way of thinking critically about the art image in a moment when visual culture has changed form to hybrids of human-machine cognition and “machine-to-machine seeing”.
Mangabeira, M., Goveia, F., Moreschi, B., Pereira, G., & Criste, I. (2020). Network of Speeches: Constructing “Another 33rd Sao Paulo Biennial” from digital traces on Twitter and Facebook. Trama Interdisciplinar. Only in Portuguese: doi:10.5935/2177-5672/trama.v10n1p23-47.
As part of the artwork “Another 33rd Sao Paulo Biennial”, which sought to create an alternative archive of the Biennial’s artistic system, a group of multidisciplinary artists and researchers analyzed the discourses constructed and published on social networks about the exhibition. Through the collection and analysis of public traces of user interactions on Facebook and Twitter, the objective of this research was to investigate which unofficial speeches about the event reverberated out of the exhibition space. Data was collected and mined using the Ford Parse script, developed by Labic (Ufes). We obtained 2,764 Twitter posts, and 38 posts and 827 comments on Facebook between August 28 and November 2, 2018. Quantitative and qualitative analysis allowed us to observe a close relationship between artistic space and social and political issues. The most relevant theme in the period was the pro-Lula act, called “Lulaço”, which was held at the opening of the Biennial. Other results point to diverse unofficial discourses that reverberated through social networks during the main art event in Brazil: between press attention, negative and positive emotions, a traffic radio, and check-ins through photos. Rather than an elucidative or even definitive study on the subject, this work intended to contribute to creating a culture of investigating the repercussions that escape the traditional methods of measuring the impact of a large art event. We offer a broader understanding, one that sees the art system as something larger than its traditional encoded discourse. By carefully gathering, selecting, and analyzing other voices, this paper emphasizes that art is above all a set of social practices and should be viewed as a field beyond its objects.
Further studies can draw on the collected data sets, which are kept as part of the institution’s archive, preserving layers of the exhibition that are usually not taken into account.
Moreschi, B., Pereira, G., & Cozman, F.G. (2020). The Brazilian Workers in Amazon Mechanical Turk: Dreams and realities of ghost workers. Contracampo – Brazilian Journal of Communication [Special Issue on Platform Labor]. In English: doi:10.22409/contracampo.v39i1.38252, or in Portuguese.
Contributing to research on digital platform labor in the Global South, this research surveyed 149 Brazilian workers on the Amazon Mechanical Turk (AMT) platform. We begin by offering a demographic overview of the Brazilian turkers and their relation with work in general. In line with previous studies of turkers in the USA and India, AMT offers poor working conditions for Brazilian turkers. Other findings we discuss include: how a large share of respondents reported having been formally unemployed for a long period of time; the relative importance of the pay they receive to their financial subsistence; and how Brazilian turkers cannot receive their pay directly into their bank accounts due to Amazon restrictions, leading them to resort to creative circumventions of the system. Importantly, these “ghost workers” (Gray & Suri, 2019) find ways to support each other and self-organize through WhatsApp groups, where they also mobilize to fight for changes on the platform. As this type of work is still in formation in Brazil, and will potentially grow in the coming years, we argue this is a matter of concern.
Markham, A., & Pereira, G. (2019). Analyzing public interventions through the lens of experimentalism: The case of the Museum of Random Memory. Digital Creativity [Special Issue on Hybrid Pedagogies]. doi:10.1080/14626268.2019.1688838.
Over 30 computational scientists, designers, artists, and activists collaboratively performed eight Museum of Random Memory workshops and exhibitions from 2016 to 2018. Here, we explore how the framework of ‘experimentation’ helped us analyze our own iterative development of techniques to foster critical data literacy. After sketching key aspects of experimentation across disciplines, we detail moments where researchers tweaked, observed, tested, reflected, and tweaked again. This included changing scale, format, and cultural context; observing how people responded to digital versus analog memory-making activities; modifying prompts to evoke different conversations among participants about how future memories might be imagined or read by future archeologists; and finding creative ways to discuss and trouble ethics of data sharing. We conclude that co-opting some of the techniques typical in natural science laboratories can prompt scholar activists to continuously recalibrate their processes and adjust interactions as they build pedagogical strategies for fostering critical data literacy in the public sphere.
Raetzsch, C., Pereira, G., Vestergaard, L. S., & Brynskov, M. (2019). Weaving seams with data: Conceptualizing City APIs as elements of infrastructures. Big Data & Society, 6(1), doi:10.1177/2053951719827619.
This article addresses the role of application programming interfaces (APIs) for integrating data sources in the context of smart cities and communities. On top of the built infrastructures in cities, APIs make it possible to weave new kinds of seams from static and dynamic data sources into the urban fabric. Contributing to debates about “urban informatics” and the governance of urban information infrastructures, this article provides a technically informed and critically grounded approach to evaluating APIs as crucial but often overlooked elements within these infrastructures. The conceptualization of what we term City APIs is informed by three perspectives: In the first part, we review established criticisms of proprietary social media APIs and their crucial function in current web architectures. In the second part, we discuss how the design process of APIs defines conventions of data exchanges that also reflect negotiations between API producers and API consumers about affordances and mental models of the underlying computer systems involved. In the third part, we present recent urban data innovation initiatives, especially CitySDK and OrganiCity, to underline the centrality of API design and governance for new kinds of civic and commercial services developed within and for cities. By bridging the fields of criticism, design, and implementation, we argue that City APIs as elements of infrastructures reveal how urban renewal processes become crucial sites of socio-political contestation between data science, technological development, urban management, and civic participation.
Full-Length Articles in Conference Proceedings (Refereed)
Pötzsch, H., & Pereira, G. (2022). Reimagining Algorithmic Governance: Cultural expressions and the negotiation of social imaginaries. In Proceedings of the 12th Nordic Conference on Human-Computer Interaction (pp. 1–12). Association for Computing Machinery. doi:10.1145/3546155.3547298.
This paper investigates the means through which a series of artistic works invite critical responses to algorithmic governance and the systems of surveillance and data capture these draw upon. In combining the theoretical frameworks of Cornelius Castoriadis, Jacques Rancière, and Chantal Mouffe, we conceptualize how, and to what possible effects, art can engage politics, and focus the discussion on concrete techniques through which selected works and performances question contemporary systems of algorithm-based management and control. Firstly, we offer an introduction to Cornelius Castoriadis’s [13] theoretical framework regarding the imaginary institution and reproduction of society. So far, his concepts have been treated in a rather metaphorical manner in the literature on algorithmic governance, and we set out to provide a thorough theoretical grounding of this valuable framework. We then add Jacques Rancière’s [49, 51] concept of a distribution of the sensible and draw upon Chantal Mouffe’s [47] theories of art, democracy, and hegemony to account for art’s political function. As a second step, we focus on specific media artworks that respond to concrete instances of algorithmic governance by defamiliarizing and transgressing received social imaginaries to enable an active reshaping of received technologies and sedimented practices. We identify and illustrate a selection of tactics available to artists to question and contain these increasingly automated systems, including appropriating, rejecting, inverting (perspectives, scales, values/norms), and creating alternatives. We show how these tactics are deployed to invite a subversion and reimagination of dominant algorithmic imaginaries and the specific frames of sensing, speaking, and doing they imply.
Our objective is to give an overview of available artistic tactics, and to facilitate further research of, and critical engagements with, algorithmic forms of governance and their enactments of surveillance and automation. Through our inquiry, we hope to contribute to a further development of critical media literacy practices in art and higher education that can facilitate a creative reimagination and reshaping of the socio-technical systems that become increasingly constitutive of contemporary identities and societies.
Markham, A., & Pereira, G. (2019). Experimenting with algorithmic memory-making: Lived experience and future-oriented ethics in critical data science. Proceedings of the Workshop on Critical Data Science of the AAAI ICWSM Conference 2019. Frontiers in Big Data. doi:10.3389/fdata.2019.00035.
In this paper, we focus on one specific participatory installation developed for an exhibition in Aarhus (Denmark) by the Museum of Random Memory, a series of arts-based, public-facing workshops and interventions. The multichannel video installation experimented with how one memory (Trine's) can be represented in three very different ways, through algorithmic processes. We describe how this experiment troubles the everyday (mistaken) assumption that digital archiving naturally includes the necessary codecs for future decoding of digital artifacts. We discuss what's at stake in critical (theory) discussions of data practices. Through this case, we argue that, from an ethical as well as epistemological perspective, critical data studies can't be separated from an understanding of data as lived experience.
Tiidenberg, K., Markham, A., Pereira, G., Rehder, M. M., Dremljuga, R-R., Sommer, J. K., & Dougherty, M. (2017). “I’m an addict” and other sensemaking devices: a discourse analysis of self-reflections on lived experience of social media. Proceedings of the 8th International Conference on Social Media & Society, 21, Association for Computing Machinery. doi:10.1145/3097286.3097307.
How do young people make sense of their social media experiences, which rhetoric do they use, and which grand narratives of technology and social media do they rely on? Based on discourse analysis of approximately 500 pages of written data and 390 minutes of video (generated by 50 college students aged 18–30 between 2014 and 2016), this article explores how young people negotiate their own experience and existing discourses about social media. Our analysis shows that young people rely heavily on canonic binaries from utopian and dystopian interpretations of networked technologies to apply labels to themselves, others, and social media in general. As they are prompted to reflect on their experience, their rhetoric about social media use and its implications becomes more nuanced yet remains inherently contradictory. This reflects a dialectical struggle to make sense of their lived experiences and feelings. Our unique methodology for generating deeply self-reflexive, auto-ethnographic narrative accounts suggests a way for scholars to understand the ongoing struggles for meaning that occur within the granularity of everyday reflections about our own social media use.
Edited Books and Special Issues
Heemsbergen, L., Treré, E., & Pereira, G. (eds.). (2022). Algorithmic Antagonisms: Resistance, Reconfiguration, and Renaissance for Computational Life. Special Issue for Media International Australia.
Benfield, D.M., Moreschi, B., Pereira, G., & Ye, K. (eds.). (2020). Affecting Technologies, Machining Intelligences. ISBN: 978-1-7356981-0-6. CAD+SR.
Book Chapters
Pereira, G., & Couldry, N. (2023). Data Colonialism Now: Harms and Consequences. In: Resisting Data Colonialism – A Practical Intervention, eds. Tierra Común Network. Institute of Network Cultures. ISBN: 9789083328263.
Pereira, G., & Ricci, B. (2023). “Calling all the cattle”: Music live streams during the covid-19 pandemic in Brazil. In: Live Streaming Culture, eds. Johanna Brewer, Bo Ruberg, Amanda L. L. Cullen, and Christopher Persaud. MIT Press. doi:10.7551/mitpress/14526.003.0006.
Benfield, D.M., Bratton, C., Eastmond, E., Eifler, M., & Pereira, G. (2022). Impossible Spaces and Other Embodiments: Co-constructing Virtual Realities. In: Art as Social Practice: Technologies for Change, eds. xtine burrough & Judy Walgren. Routledge. ISBN: 9780367758462.
Moreschi, B., Pereira, G., & Cozman, F.G. (2021). Os Brasileiros Que Trabalham na Amazon Mechanical Turk. In: Os Laboratórios do Trabalho Digital, ed. Rafael Grohmann. Boitempo Editorial. ISBN: 9786557170748.
Pereira, G. (2021). Alternative ways of algorithmic seeing-understanding. In: Affecting Technologies, Machining Intelligences. CAD+SR.
Moreschi, B., & Pereira, G. (2019). Recoding Art: Van Abbemuseum collection. Deviant Practice Research Programme, 2018-2019. ISBN: 9789490757205. Portuguese translation published at Revista Farol.
Extended Abstracts in Conference Proceedings (Refereed)
Pereira, G. (2019). Apple Memories and automated memory-making: Marketing speak, chip-engineering, and the politics of prediction. Selected Papers of #AoIR2019: Proceedings of the 20th international conference of the Association of Internet Researchers.
Rehder, M., Pereira, G., & Markham, A. (2017). “Clip, move, adjust”: Video editing as reflexive rhythmanalysis in networked publics. Selected Papers of #AoIR2017: Proceedings of the 18th international conference of the Association of Internet Researchers.
Book Reviews
Pereira, G. (2023). Data justice. Information, Communication & Society. doi:10.1080/1369118X.2023.2183084
Pereira, G. (2021). Automating vision: The social impact of the new camera consciousness. New Media & Society. doi:10.1177/1461444821989639
Other Publications (lightly or non-peer-reviewed)
Moreschi, B., & Pereira, G. (2021). Future Movement Future – REJECTED. Surveillance & Society 19(4). doi:10.24908/ss.v19i4.15126
Moreschi, B., Bratton, C., Benfield, D.M., Pereira, G., & Falcão, G. (2021). _rt movements. ARTMargins 10(2). doi:10.1162/artm_a_00294
Carvalho, A., Moreschi, B., & Pereira, G. (2019). The History of _rt: deconstructions of the official history of art narrative. Revista do Centro de Pesquisa e Formação do SESC, 8. In Portuguese.