Introduction

Coding.Care

“I’m stuck here inputting and outputting the data of a story I can’t change.”
–Italo Calvino, “The Burning of the Abominable House” (1976)

Hello, World => Another World Is Possible

I came late to both queerness and coding. Perhaps you can relate. Both delays were rooted in scarcity and fear. I did not know, and could not access, the variety of ways to be queer or to be a coder. Once I learned some ways, I still worried that my version (of queerness, of coding) was “not enough” to count.

My interest in code and in AI systems grew from wanting to understand how I might tell stories in new forms using and building digital tools, to write books that expanded books. I was conscious of the growing impacts that automated systems were having on my voice and on others who might also feel left out. I spent a long time struggling to learn in tech (and queer) communities without feeling I belonged. I did not yet know how much each of these communities would shape my understanding of the other.

Over the last decade, I have begun developing the approaches shared across the works collected in Coding.Care: Guidebooks for Intersectional AI as practices of exploration and investigation, experimentation and imagination, repair and resistance toward and with technologies like machine learning. These are practices that ask how to better use technology toward relationality and building the systems and worlds we want.

No intervention in disproportionately harmful algorithmic systems is effective without critically aware approaches to technologies from deeply plural perspectives. Meanwhile, no such proliferation of perspectives is possible without inviting spaces to understand, interrogate, and reimagine the infrastructures that support those systems. Coding.Care argues for the essential entanglement of critical, intersectional AI approaches and creative-critical coding communities, demonstrating how each needs the other. It shows what intersectional, interdisciplinary, creative–critical approaches to AI systems and other emergent technologies can look like. Its multimodal guides apply these approaches as in-practice experiments—in different contexts, for different audiences, for different aspects of these urgent issues.

The stakes are high: technologies like machine learning urgently require transformative interventions that recalibrate these systems’ values and their stakeholders. Automated decision-making systems disproportionately harm the marginalized majority (Browne 2015; Noble 2018; Buolamwini and Gebru 2018; Benjamin 2019). The communities most impacted by them, and best poised to intervene, currently go unheard as powerful actors profit from their data and labor. Large companies warn of hyperbolic coming dangers in order to distract from the clear, current dangers they perpetuate (Kapoor and Narayanan 2023; Nedden and Dongus 2017; Davies, McKernan, and Sabbagh 2023). Already-toothless policy recommendations are watered down and ignored (Heikkilä and Ryan-Mosley 2023). Meanwhile, more and more data represent less and less diversity (Bender et al. 2021), and more and more processing power destroys more and more of the planet (Dodge et al. 2022). Glib fun with AI on one side of the world relies on extractive labor for pennies a day on the other (Perrigo 2023). But why must it be this way?

“We must begin with the knowledge that new technologies will not simply redistribute power equitably within already established hierarchies of difference. The idea that they will is the stuff of utopian naivete and technological determinism that got us here to begin with.” (Sharma 2020)

We cannot expect technology to solve the problems of technology. To face these challenges, those who already have access and aptitude with tech must embrace a wider range of essential perspectives from the marginalized majority. In order to change systems to suit more communities, these communities must be able to participate on their own terms. Working together with critical engagement, we as users, makers, scholars, and arbiters of tech can reclaim more collective agency and access by considering tech at tangible scales while also grappling with its systemic impacts. While learning programming and engaging with tech are often intimidating, we must reclaim technology as a widely accessible craft. We can transform technologies themselves rather than accepting their current shapes:

“Anyone who has ever woven or knitted knows that one can change patterns […] but, more importantly, they know there are other patterns. The web of technology can indeed be woven differently, but even to discuss such intentional changes of pattern requires an examination of the features of the current pattern and an understanding of the origins and the purpose of the present design.” (Franklin 2004)

I argue that such changes in tech require fostering critical–creative coding communities as spaces of radical belonging. I have seen and helped build spaces that effectively shift conversations, implement cooperation, and produce innovative tools. What distinguishes such spaces is their emphasis on process and materiality—a set of practices I trace back to technology as handcraft—and care and relationality—an ethos of interdependence and radical difference I find modeled in queer, trans*, and intersectional feminist spaces. Craft methods become expressions of these theories, ethics, and tactics as enacted through technology. Together, these practices help build approachable spaces that provide a basis for the deep interdisciplinary thought, interrogation of formative principles, and openness to co-creation necessary to reshape AI systems. Finding common vocabularies across diverse communities, using these combined approaches, makes it feasible to reengage emergent technologies as craftable materials, rather than unassailable forces, and to respond with impactful, sustainable interventions.

Why AI Alone Can’t Create the Worlds We Want

A large interdisciplinary community is pushing for understanding, critiquing, and rethinking how we define, develop, deploy, regulate, use, and mitigate the effects of machine learning tasks, datasets, models, algorithms, architectures, and agents, which we collectively and nebulously understand as ‘artificial intelligence’ or ‘AI’ systems. I say ‘AI systems’ or ‘machine learning’ to refer collectively to an entire set of tasks and operations including and especially the human decision-making involved at every stage of this pipeline. Despite the term ‘pipeline’, it is a non-linear set of processes, conventions, and histories that feeds back into itself and is rarely straightforward.

Tech industry implementations of so-called ethical AI reduce complex concepts into flattened ideas of fairness and representation (Ovalle et al. 2023). Machine learning, as a mass production, produces a false sense of certainty out of uncertainty, argues political geographer Louise Amoore (2020), describing the millions or billions of small uncertainties that it reduces and presents as acquired knowledge. As artist and researcher Mashinka Firunts Hakopian (2022) points out, these uncertainties are also claims about “what we should know, how we should know what we know, and how that knowledge should be deployed. Each exposure to a dataset occurs because someone concluded that the information in that dataset should be used to determine a possible future.” In short, current versions of AI systems, as well as current attempts to improve them, end up using reductionist logics to limit both knowledge and the values that structure it.

While the need for AI oversight is clear, many are calling for a total overhaul that moves beyond audits and inert critique, like computer scientist Joy Buolamwini (2017), who argues for what she terms algorithmic justice. Overhaul and even oversight can appear out of reach when the scale of AI systems seems unfathomable and entangled. The valid criticism that algorithmic systems are biased because their data are biased—often summed up as “garbage in, garbage out”—sets up a quest already doomed to fail. Yes, in many cases it would be preferable to have more, better data. But what would be better data? Or an optimized system? For what goal, and for whom exactly? There is no ‘unbiased’ data or system; there is only ‘good enough’, and only for some tasks, in particular settings. As digital media researcher Yanni Alexander Loukissas (2019) says, “all data are local,” meaning they come from specific contexts and are shaped by human processes into ‘data’ and produced as ‘datasets’. There are many ways to do any task, informed by minute choices at every step. As these choices scale exponentially with computation, their impacts magnify exponentially too.

Yet local data do not scale up in ways that are well-suited to the massive tools currently being created by AI companies. These generative AI models now frequently rely on foundation models, because models have become so large and because the processes for training new models have become so slow and expensive. Foundation models are previously built models, usually designed for different or ‘general’ tasks, which are then used as the building blocks for new models. Like a sourdough starter, foundation models carry with them the histories of how their datasets were designed and for what purpose. They retain marks of whose data was included or excluded, and the choices their creators made when preprocessing them. The latest systems still rely on often decades-old datasets that leave traces of debunked or erroneous information in their outputs. Their errors are compounded by the computational speeds that allow thousands of operations to run per second.
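
To make the sourdough analogy concrete, here is a minimal sketch, assuming the Hugging Face transformers library and an illustrative model name, of how a new classifier is commonly built on top of a pretrained foundation model. It is a generic example of the technique, not code from any system discussed here.

    # A generic sketch of building on a foundation model: download a pretrained
    # network and attach a small task head. Model name and label count are
    # illustrative assumptions.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    base = "bert-base-uncased"  # a pretrained foundation model, history included
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

    # Only the new classification head starts from scratch; every other weight,
    # shaped by the original dataset and its curators' choices, carries over.
    inputs = tokenizer("an example sentence to classify", return_tensors="pt")
    print(model(**inputs).logits)

Fine-tuning then adjusts these inherited weights only slightly, which is exactly why the histories of the original dataset travel along into the new model.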

AI hype and automation hype help normalize, naturalize, and neutralize these very subjective decisions. Critical AI researcher Kate Crawford (2021) argues, “We can easily forget that the classifications that are casually chosen to shape a technical system can play a dynamic role in shaping the social and material world.” These classification choices emerge from material culture and feed back into it, and they cannot be solved with technical tweaks. Artist Hito Steyerl (2023) argues, “the supposed elimination of bias within datasets creates more problems than it solves. The process limits changes to parts of the output, making these more palatable for Western liberal consumers, while leaving the structure of the industry and its modes of production intact.” Thus, “fixing” inputs or outputs does little to address the structures of the systems (both cultural and technological) which produced them. When the stakes are this high, no data could be ‘good enough’ to make life-or-death decisions. We cannot rely on computational systems for infallibility and rationality, nor can we look uncritically to these technologies as bandages for the problems they exacerbate.

Practical applications of these critiques and calls for change have remained incredibly difficult to implement, and it is easy to feel like actively rebuilding the foundational structures of AI systems is out of reach. How do we get there? We need hands-on, intersectional strategies!

AI Needs Critical, Intersectional Approaches

Intersectional lenses can reveal the tangible human and more-than-human costs entangled in algorithmic systems–from proliferating data and its material infrastructures, to consolidated power and its sociocultural infrastructures. In 1977 the Black feminist Combahee River Collective called for “integrated analysis and practice based upon the fact that the major systems of oppression are interlocking” (Moraga and Anzaldúa 1981, 1983). Named intersectionality by Kimberlé Crenshaw (1989), and emerging from centuries of work by women of color (Haschemi Yekani, Nowicka, and Roxanne 2022), intersectional analysis of institutional power is often misinterpreted as individual identity politics. In fact, intersectionality “critiques systems of power and how those systems structure themselves to impact groups and individuals unequally” (Cooper 2016). Crenshaw (2021) argues that intersectionality is as useful for understanding privilege as it is for understanding marginality. Intersectional analysis shows that power is differential by design; it reveals the inequalities within inequalities and asks that “communities and movements are inclusive of differences and work together towards equality” (“What Is Intersectionality” n.d.). Conversations about AI fairness, transparency, explainability, ethics, public good, and the hype cycles of new technologies are grossly incomplete without intersectional analyses of power and intersectional tactics of (and beyond) equity and inclusion. No change about us without us.

Intersectional AI (Ciston 2019) calls for demystifying normative AI systems and learning from a wide range of marginalized ethics and tactics, in order to fundamentally transform AI. It requires multimodal, polyvocal, experimental approaches that cut through technological solutionism. It requires slow, long-term investments in algorithmic justice, rather than extractive, performative forms of inclusion that erase friction, context, and agency as they scale up for machine learning tasks (Sloane et al. 2022; Buolamwini and Gebru 2018). Intersectional AI celebrates and documents the work done by many related efforts that call for diverse knowledge systems to be incorporated toward reimagining machine learning as a more care-ful set of practices, including but not limited to abolitionist (Earl 2021), anticolonialist (Chakravartty and Mills 2018; Rights 2020), antiracist (Abebe et al. 2022), crip (Hamraie and Fritsch 2019), Indigenous (“CARE Principles of Indigenous Data Governance” n.d.; “INDIGENOUS AI” n.d.; Lewis et al. 2020), intersectional (Ciston 2019; Klumbytė, Draude, and Taylor 2022), feminist (Sinders n.d.; “Feminist.AI” n.d.), neurodivergent (Goodman, n.d.), and queer and trans* ways of knowing (Keeling 2014; Barnett et al. 2016; Klipphahn-Karge and Koster, n.d.; Martinez and Ciston n.d.).

Simultaneously, a growing field of critical AI studies has been using interdisciplinary techniques to analyze the pitfalls of existing AI methods and to argue these cannot be addressed with technical improvements alone. (A field can have many names. What do fields owe to each other? Here I use Critical AI as an umbrella for much work being done across Science and Technology Studies, Arts and Humanities, Computer and Data Science, Fine Arts, and elsewhere, in relation to these questions. I acknowledge the critical perspectives that have long been situated within computer science, such as Phil Agre’s (1998). I draw from critical data and dataset studies (Corry et al., n.d.) and from critical algorithm studies (Gillespie and Seaver 2015), from software studies (Chun 2021; Cox and McLean 2013; Wendy Hui Kyong Chun 2008) and from critical code studies (M. Marino 2006). I cannot promise to be comprehensive, but I do hope to connect some important, related conversations happening across a wide range of fields.) Critical AI is distinguished from tech industry approaches like “AI for Good” or “AI for Society,” which can lack critical perspectives on AI’s impacts, despite an intended altruism. Critical AI researchers and professors Rita Raley and Jennifer Rhee (2023) argue that AI makers and researchers need to engage these systems as sociotechnical objects embedded in their historical, social context. They argue that we must be “situated in proximity to the thing itself, cultivating some degree of participatory and embodied expertise, whether archival, ethnographic, or applied.” This level of engagement requires interdisciplinary and intersectional perspectives in order to permeate the entire AI pipeline, transforming it altogether. Critical AI research is often paired with the urgent calls for alternative approaches and knowledge systems to be applied to machine learning discussed above. Yet, importantly, none of these mixed methods of analysis and intervention has yet been adopted widely into standard machine learning practices, even as the use and awareness of AI escalate and its issues grow more urgent.

Throughout this project, I adopt the term ‘trans*formative’ because it attends to root causes and radical alternatives, suggesting intersectional queer and trans* imaginaries that embrace radical difference and radical belonging. Here this means seeking fundamentally reworked alternatives to the computational logics that perpetuate harmful systemic inequities. It means beginning with “positive refusals” and radical alternatives that, as creative and social computing scholar and activist Dan McQuillan (2022) suggests, “restructur[e] the conditions that give rise to AI.” He and many others have shown these systems are direct products of the cultural and historical conditions in which they are embedded. Therefore, transformation must go beyond isolating AI’s problems, when it is itself an expression of broader problems. Rhetorician Adam Banks (2006) argues for what he calls “transformative access” to digital technologies, saying that access is more than owning or using tools, participating in processes, or even critiquing their failings. Transformative access is “always an attempt to both change the interfaces [where people use that system] and fundamentally change the codes that determine how that system works.” Banks highlights the important role Black people play in technology’s transformations, saying, “Black people have hacked or jacked access to and transformed the technologies of American life to serve the needs of Black people and all of its citizens.” Supporting the needs of people who are pushed to the margins is essential in its own right and frequently leads to more effective support for many others as well (Costanza-Chock 2020). Access and agency with technology are not a favor or a handout; more encounters across communities with different backgrounds and knowledge, engaging and imagining different possibilities for technology, benefit the collective.

Hyped AI discourse explores limited questions about AI because it continues to draw from limited perspectives, letting normalized narratives about humans and automated systems frame the terms of debate. “Machine learning […] is an expression of that which has already been categorized,” says digital culture researcher Ramon Amaro (2022). Failing to train on a variety of faces, or failing to raise concerns about training on faces at all, happens because there is not a variety of perspectives in the room when these very human decisions are being made: before, during, and after the data are being collected; and before, during, and after the code is being written and run. The spaces where technologies are discussed, designed, and implemented are missing the essential perspectives of those pushed to the margins, who are most capable of addressing the concerns facing technology now. These concerns are not new, nor strictly digital.

Such questions—about representation, equity, ethics, and more—have been addressed by a wide range of communities with different types of knowledge for centuries. Yet intimidating, isolating cultures around the specialization of computation and programming practices have left so many people out of these conversations. Down to the very language chosen to describe it, the seemingly neutral choices about technology reinforce narrow cultural ideas about it. Specialized terms exclude, and imprecise terms obscure. For example, a machine isn’t ‘learning’ like a person learns in the process of ‘machine learning’. In ‘natural language understanding’ tasks, nothing is ‘understood’ the way you or I understand. Generative AI systems are now said to ‘suffer’ from ‘hallucinations’, but all these terms signify purely mathematical operations happening under the hood. A lot of math is happening very fast, but it is still just math. In an example from cognitive robotics professor Murray Shanahan (2023), generative systems like GPT do not ‘know’ that Neil Armstrong went to the moon, but only that the characters representing the word ‘moon’ are highly likely to follow Armstrong’s name in a text. GPT’s results hinge on next-word prediction, using simple mathematical functions that are decided on and adjusted by its designers. Humanities scholar Francis Hunger (2023) suggests new terminology to replace these misnomers, offering instead ‘machine conditioning’ to suggest the active role of designers in adjusting, tuning, and producing these systems.
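
As a toy illustration of next-word prediction (my own example, not Shanahan’s), the mechanism can be reduced to counting which words follow which in a corpus and then choosing the most frequent follower. Large language models use vastly more elaborate functions and vastly more text, but the output remains a statistical guess rather than knowledge.

    # A toy sketch of next-word prediction: tally which word follows which in a
    # tiny corpus, then "predict" by choosing the most frequent follower.
    # Nothing here "knows" anything about Apollo 11; it is only counting.
    from collections import Counter, defaultdict

    corpus = ("neil armstrong walked on the moon . "
              "armstrong stepped onto the moon .").split()

    followers = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        followers[current_word][next_word] += 1

    print(followers["the"].most_common(1))  # [('moon', 2)]: a count, not a fact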

Curator and critic Nora Khan says of explainability in AI:

“the brutal realities of algorithmic supremacy are often contingent upon its mystification and its remove. We can map a growing hierarchy of computational classes of ‘knowers’ versus those without knowledge, those with more access to the workings of technology, those with partial access, and those with nearly none.” (2022)

With the goal of imagining different systems, code literacy should not be defined only on the narrow terms of those creating existing systems (Vee 2017). It is clear that bootcamps and hiring initiatives, though useful, do not result in more variety of voices in effective positions of change (Abbate 2021; Hicks 2021; Dunbar-Hester 2019; Vee 2017). Joining an ‘elite’ tech field is a moving target, entangled with race, gender, and economic politics that rig the game. Diversity quick-fixes do not acknowledge the many people already participating in the production of technologies in the global majority, from those harvesting rare earth minerals and manufacturing circuit boards (Nakamura 2014; AI Now Institute n.d.; Crawford 2021) to the content moderators (Roberts 2016) to the crowd workers (Sunder 2023). Communications scholar Christina Dunbar-Hester (2019) calls for interventions that go deeper than training more people in tech jobs, pointing out that this also does too little to examine the structures that organize and value existing work sectors. She argues this calls for “a larger reevaluation and appropriation of categories themselves—the boundaries of what is ‘social’ and what is ‘technical’ are flexible categories.” Part of reworking inclusion involves acknowledging how many more people are already engaged with sociotechnical practices and impacted by them, as user-practitioners, data subjects and subjectees (Ciston 2023), skilled crafters and critics. It requires rethinking access as mutually beneficial connection across communities of practice, by finding shared spaces and common vocabularies.

Critical, Intersectional AI Needs Creative, Care-full Approaches

How do we reconnect the communities of practice who are currently building technologies and those who are equipped with the skills and knowledge necessary to transform them?

Coding.Care argues for craft-based, process-oriented, community-driven approaches to AI that can better help meet this challenge. It demonstrates that implementing critical approaches into AI systems more broadly requires building inviting, inclusive spaces where more people can engage both creatively and critically with each other and with machine learning techniques as malleable materials.

As important as the criticisms of current AI systems are, effective critique should be embedded and actively practiced in order to produce meaningful alternatives. Abstract calls for access are not enough; to create truly transformative alternatives, we need spaces that acknowledge diverse capacities across backgrounds and disciplines, while connecting us in shared goals. Caring, creative, and critical approaches must be combined in order to adapt these conversations and these technologies to welcome different communities and the wider range of knowledge necessary for such transformation.

‘Critical–creative coding’ is the term I use to describe combining the application of existing critical AI approaches with creative coding, an existing set of artistic approaches to software and the diverse communities surrounding these approaches. Creative coding has long lineages that can be traced to 1990s net art, analog fine arts, and early computation. Like ‘tactical media’ (Raley 2009) and ‘critical engineering’ (Oliver, Savičić, and Vasiliev 2011), critical–creative coding creates software and other technological objects, not only for their aesthetic qualities, but in order to investigate them as objects of study and critique. Like ‘critical making’ (Bogers and Chiappini 2019) and many hacklabs and makerspaces (Dunbar-Hester 2019), critical–creative coding considers the tools it takes up and the community practices which root it. For example, as an intervention into deep learning algorithms, designer and researcher Catherine Griffiths argues for “reflexive software development” that critically considers and interactively presents the circumstances of its own production (Griffiths 2022). Such research can produce tools that continue to probe their research questions, both through the very processes of their creation and through their later use by others. Critical–creative coding consolidates and combines a collection of practices from this wide range of critical and creative spaces. Building on these inspirations, I find it essential in my own practice that critical–creative coding also emphasizes care, co-learning, and process—practices learned from crafting communities and from queer and trans* communities.

Process-oriented (creativity, craft) and radical belonging (care) practices are at the core of this approach. Care and creativity are not additions, affectations, or antonyms to critical theory or technical savvy; rather, they are its central fortifications. Importantly, critical–creative is not a binary to straddle but a deeply integrated way of knowing. Critical methods activate creative modes and root them in sociotechnical complexity. Creative methods in turn activate critical modes and root them in care and connection, taking the critical out of the abstract and into action. These strategies are interlocking and have found disciplinary grounding internationally, sometimes called arts-based research, artistic research, design research, or research–creation (Willis 2016; Loveless 2019; Fournier 2021). I find that artistic research has the capacity to combine rigorous scholarly investigation, deep community building and activism, material artistic experimentation, and queer and creative play in ways that facilitate connections across broader non-academic audiences. In short, the technical means (coding skills), the analytical means (analytical, political, aesthetic, ethical contexts), and the material means (data, energy, hardware, infrastructure, care) are inseparable and essential.

Code work is critical work is care work is creative work.

Code can do, mean, and be so much more—if we let it. Code is collaborative, says Mark C. Marino (2020), who helped develop the practice of Critical Code Studies, which applies the close reading attention of the humanities to the deeper understanding of software code. Code can be an inviting, interpretive practice: “Code’s meaning is communal, subjective, opened through discussion and mutual inquiry, to be contested and questioned, requiring creativity and interdisciplinarity, enriched through the variety of its readers and their backgrounds, intellectually and culturally.” Such collaborations are vital to the creation of hybrid communities capable of applying the interdisciplinary, intersectional capacities of programming.

I theorize this overall approach as “crafting queer trans*formative systems” or simply “coding care.” As these guides demonstrate, process-oriented, craft-focused practices and intersectional queer care practices make room for in-betweenness—for rejoining the divisions between theory and practice, and between user and programmer, which were artificially split from the start (Hoff 2022; Artist n.d.; Nardi 1993); for un-siloing domains and disciplines, the artificial boundaries that divide technologists from activists and critics from creators; for finding common language and common values that come with working knowledge of whole systems and with openness to new systems. Craft, criticality, and care support more nuanced and more mutual understanding. This fosters code collaborations that more fundamentally challenge and change technologies.

“any theory that cannot be shared in everyday conversation cannot be used to educate the public.” (hooks 1994)

An Invitation

Coding.Care is a collection of public-facing resources that strive to put this thinking into action. Read in any order you like. Read on the bias. Read in conversation with other texts. Read as openings for discussion and expansion.

Produced in various non-academic formats like zines and wikis, and in contexts like public workshops, these resources aim to support discussions across communities of practice—including code creators, AI researchers, and marginalized outsiders. The works prioritize finding a common language to connect across boundaries, by offering plainspoken translations of technical, critical, aesthetic, and ethical concepts relevant to the questions raised around emerging tech. They are written to be as jargon-free as possible so they can travel as broadly as possible.

The texts are not final, fixed, or authoritative versions. Rather, they are (and describe) ongoing dynamic processes. They are caught moments of queer space-time, local data slices of thinking in process. In the static, linear formats of bound books and archives, they are temporarily constrained to an order, but please don’t let that constrain you in reading. We work with the tools we have, and we work to build new ones.

Where possible I have tried to combine, misuse, and expand on the forms available. This work has been drafted using a GitHub repository, which archives and makes public its composition process and includes a detailed version history since May 2021. Its latest version is hosted online at Coding.Care, which supports updates outside of and beyond the institutionally archived edition. Unless created as part of another outside project, the writing is produced in plain-text Markdown files, which allows for conversion to many output formats. I coded small scripts that use these same text files to produce multiple outputs: the HTML files for the website, the PDF files for the dissertation version, and the PDF files for printable zine booklets. I used Markdown, Pandoc, LaTeX, and Make files here. If you’re interested you can find examples in the GitHub repo or ask me for more information. I don’t necessarily recommend my technique but I haven’t found a better one. It was a silly, but fulfilling, process—much like life.
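
For readers curious what such a script can look like, here is a minimal sketch, with hypothetical folder names and only basic Pandoc options, of the kind of converter described above. It illustrates the general approach rather than reproducing the actual scripts in the repository.

    # A minimal sketch (illustrative paths, not the project's actual scripts):
    # render the same Markdown sources to HTML for the web and to PDF for print
    # by calling Pandoc, which uses a LaTeX engine for the PDF output.
    import subprocess
    from pathlib import Path

    for folder in ("site", "print"):
        Path(folder).mkdir(exist_ok=True)

    for src in sorted(Path("text").glob("*.md")):  # hypothetical source folder
        html_out = Path("site") / src.with_suffix(".html").name
        pdf_out = Path("print") / src.with_suffix(".pdf").name
        subprocess.run(["pandoc", str(src), "-s", "-o", str(html_out)], check=True)
        subprocess.run(["pandoc", str(src), "-o", str(pdf_out)], check=True)

In practice a Makefile can wrap calls like these so that only changed source files are rebuilt, which is one reason Make appears in the toolchain described above.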

I developed this working method because I value the multimodal possibilities of digital systems and the dynamic states of text. The minimalist structure and simple formatting, made without complex software frameworks, means both the code and writing are more adaptable to other complex forms and also more legible to other people.

I am excited by how code writing and text writing work together, by how they might work together more, and by how form(s) inform content and distribution. I discuss this further in Crafting Queer Trans*formative Systems and in Coding.Care: Field Notes for Making Friends with Code. How a text is made might seem a small concern. But to me, these questions of how we read, how we code, the forms these take, and how they travel all relate directly to the future of sophisticated multimodal AI models as much as to fundamental concepts of first-year programming courses and first-year writing classes. They will continue shaping emerging technologies at every scale and every step.

Together, the range of resources collected by Coding.Care shows how different ways of knowing are necessary to access and address the questions raised by automated, algorithmic, and emerging technologies. They also show how different aspects of these urgent issues must be addressed simultaneously.

And so, I invite you to find the resources and the ways of knowing that suit you. Find the people with whom you want to read this, the conversations you want to have, the action you want to take, and the world you want to create. I hope that this plurality of forms, tactics, topics, and voices helps you find something you need, answers some questions, and raises more.

Coding.Care: Field Notes for Making Friends with Code

As a pocket guide to making and sustaining friendly coding communities, Coding.Care: Field Notes for Making Friends with Code shows why we need these communities, how to build them, and how to let them thrive. It draws on lessons I learned from Code Collective, the diverse hack lab that I started in early 2019 when I yearned for the adaptable, encouraging environment I had needed when I was first struggling to learn to program. I wanted to make a space where I wouldn’t feel like an outsider for ‘not knowing everything’ about programming, and I suspected others might feel the same. I wondered how to recreate the positive experiences that changed coding for me, inspired by teachers like Brett Stalbaum, who had shown me that code could feel creative instead of prescriptive.

In Code Collective, a mix of media artists, activists, makers, scientists, scholars, and engineers gather to co-work and co-learn, thinking critically with code in an inclusive, interdisciplinary space that supports many kinds of learners. The Collective unites students who may have zero technical experience with those who may have lots but perhaps lack a critical or creative lens. We value their experiences equally, reinforcing the idea that, “We all have something to teach each other.”

This guide looks at a variety of the strategies and tools we have explored and developed as the community has grown. It discusses how we have adapted to meet our shifting needs—from hosted workshops to hybrid-format meetups, from pandemic support to alumni programming. Code Collective’s approaches draw on many existing methodologies and methods from intersectional queer, feminist, anti-ableist, and anti-racist theories. The guide connects these approaches to cooperative organizations like Varia and p5.js, to Critical Code Studies, and to practices like working iteratively and breaking critically.

As a guide for making friends with code, Coding.Care discusses how practices such as process-oriented skillbuilding, co-teaching and co-learning, and snacks (always snacks) embody the Collective’s guiding values, such as “scrappy artistic strategies not perfect code.” The guide shares projects and feedback from members of the Collective, who report how these values and practices have shaped them as emerging makers and thinkers. Personally, I have found this community to be the strongest influence on my own research, above and beyond my role as facilitator. Code Collective has become a joyful space for creative risk-taking that nourishes my practice. The guide offers practical advice for getting comfortable with code, while situating these approaches and groups within an “ethics of coding care”—grounded in shared embodied knowledge, embedded co-creation, and programming with and for community—as an antidote to technocratic values and as an enactment of its ethos.

In her book Coding Literacy, Annette Vee (2017) argues that, “Changing ‘how the system works’ would move beyond material access to education and into a critical examination of the values and ideologies embedded in that education. […] Programming is a literacy practice with many applications beyond a profession defined by a limited set of values.” Vee calls this kind of programming access “transformative.” Through its intimidation-free, learner-led, process-oriented approaches, Coding.Care both theorizes and models the creation of caring communities and innovative spaces that can transfer knowledge across social strata and intellectual disciplines in order to reshape technological systems.

OBJECTIVES: Through Coding.Care, understand how to approach programming with less fear and more fun, with less constraint and more community support. Think creatively and critically about the kinds of technologies you want to make and support. Learn to choose and use tools, languages, and platforms that match your goals and ethics. Create or join communities of practice that feel supportive and generative.

Crafting Queer Trans*formative Systems, a Theory in Process

As a guide to the theories and tactics, metaphors and materials, that ground the rest of this collection, Crafting Queer Trans*formative Systems explores how artists, activists, scholars, and technologists can recast our relationships to emerging technologies by reframing them as handcraft processes (e.g. crochet, woodworking, printmaking), in order to embody the intersectional approaches necessary to transform systems. It argues that these strategies help deflate AI hype, lower barriers to learning, build community, and emphasize slowness and sustainability. Thinking craft-as-technology honors the inherited knowledge of many outsider communities. Thinking technology-as-craft provides a framework to implement those theories, ethics, and tactics as intersectional critical AI.

This text describes a “theory in process” using craft as a framework to surface reconsiderations of scale and dimension, care as infrastructure, counterhistories of digitization from marginalized legacies, the merging of theory and practice, the creation of access and the drawing of boundaries, and material resistance. Rather than relying on ephemeral cloud-y metaphors, attending to the material realities of even the most sophisticated technology brings it back within a scope of human- and eco-relations. Material resistance considers both how materials push back as well as how to (mis)use materials as modes for refusal, including extreme use, adversarial use, handmaking, and making esoteric systems.

Central to this theory are queer and trans* becoming, radical belonging, and radical difference, particularly as ways of (un)knowing that ground my perspective on community-building and worldbuilding, art-making and research. These intersectional practices are rooted in anticolonial and queer-of-color histories (Muñoz 2009; Driskill 2010); they both make plain the urgent stakes of current AI systems and offer the fluidity of thinking necessary to produce substantive change in those systems.

This thinking-guide works through these stakes by unpacking each of its key terms and how they relate to machine learning: craft as both process and knowledge; queer futures and queer enough; the trans*ness of transformer models and the formative as a shape, a scaffold, and an origin story; even the asterisk as a pivot and a portal; and systems as both sociotechnical systems and speculative systems for change.

OBJECTIVES: Through Crafting Queer Trans*formative Systems, understand some examples of process-based and craft-based approaches to technologies and how they offer alternative perspectives and opportunities for intervention. Consider queer and trans* lenses for technologies and their essential role in reshaping and reimagining tech spaces and tech systems. Understand the role of form as ingrained assumptions and structuring systems for sociotechnical tools. Imagine how these combine at scale and could be reconfigured.

A Critical Field Guide for Working with Machine Learning Datasets and Inclusive Datasets Research Guide

Datasets provide the foundation for all of the large-scale machine learning systems we encounter today, and they are increasingly part of many other research fields and daily life. Many technical guides exist for learning to work with datasets, and much scholarship has emerged to study datasets critically (Corry et al., n.d.; Gillespie and Seaver 2015). Still, no guides attempt to combine technical and critical approaches comprehensively. Every dataset is partial, imperfect, and historically and socially contingent—yet the abundance of problematic datasets and models shows how little attention is given to these critical concerns in typical use.

A Critical Field Guide for Working with Machine Learning Datasets helps navigate the complexity of working with giant datasets. Its accessible tone and zero assumed knowledge support direct use by practitioners of all stripes—including activists, artists, journalists, scholars, students—anyone who is interacting with datasets in the wild and wants to use them in their work, while being mindful of their impacts. Developed with Kate Crawford and Mike Ananny, as part of their research team Knowing Machines, the field guide discusses parts and types of datasets, how they are transformed, why bias cannot be eliminated, and questions to ask at every stage of the dataset lifecycle. Importantly, it shares some of the benefits of working critically with datasets when (on the surface) it may seem just as easy not to take that care.

In a similar vein, the Inclusive Datasets Research Guide is an interactive digital guide for academic researchers working with datasets. It supports them with an overview of key concepts and considerations for working with datasets, as well as tools and software, books and tutorials, and recommendations for thinking inclusively. Like A Critical Field Guide, the Inclusive Datasets Research Guide focuses on a blend of technical and critical decisions that arise when working with datasets. Because this guide is aimed at students and teachers, the format is brief collections of resources rather than conceptual deep-dives. The guide appears on the USC Libraries’ website along with its other research guides on many topics.

Developed by a team at USC Libraries, with the support of a grant from the USC Office of Research, this research guide was written as part of a grant to acquire core research datasets to support areas of inquiry by USC researchers into arts, humanities, and machine learning. I was recruited to provide interdisciplinary perspective on inclusive approaches to machine learning, and I joined a team including a chief library technologist, data science graduate students, special collections librarians, a research communications specialist, and a multimedia digital humanities specialist. We conducted 18 interviews with faculty across campus who worked with datasets in order to develop an internal rubric to support collection development. Through this process, we found less pressing need for dataset acquisition, because researchers do not yet look to libraries for their datasets but access them elsewhere. Still, the rubric we developed was used to acquire approximately 50 collections identified as more accessible, inclusive, ‘datafiable’, and meaningfully engaged, with the aim of offering alternative options for researchers. Additionally, we identified the need for more curated resources and more training on how to select and use datasets critically, while remaining mindful of their origins and impacts. This led to the expanded aim of the grant and the development of the Inclusive Datasets Research Guide.

Both A Critical Field Guide and the Inclusive Datasets Research Guide reflect on the stakes of datasets and the human choices they rely on. Reframing this information in two different forms shows how it can be made more effective depending on different audiences’ needs. Both works are examples of how concepts and processes researched in the Intersectional AI Toolkit (below) can be reworked for new institutional contexts. Adapting the Toolkit to new audiences in library science, data science, and the social sciences posed interesting challenges that both expanded and refined the work. It required that the ideas be scaled up and applied, and sometimes renegotiated until their rewordings no longer felt watered down. In each project, I learned how another field has addressed the problems of knowledge organization and bias, historically and in the present. Library scientists have at least a century of practice considering questions of how to categorize, curate, and archive. Social scientists have been asking how and what to measure for just as long. None of this is perfect, either, but learning from each institution, and combining these findings with the questions machine learning is trying to ask, helps me understand better how we got where AI systems are today. This includes understanding each field’s starting baselines and vocabularies. Combining these with other intersectional practices helps me better understand what each domain might learn from the other.

OBJECTIVES: Through A Critical Field Guide for Working with Machine Learning Datasets and the Inclusive Datasets Research Guide, understand the importance of working critically with datasets as part of any machine learning practice. Identify the parts, types, and functions of datasets as you encounter them. Determine whether a particular dataset is a good fit for your project by understanding critical questions to ask at each phase of the dataset lifecycle. Learn to collaborate with communities impacted by your research and to create strategies for addressing potential harms in the datasets you utilize.

The Intersectional AI Toolkit

The Intersectional AI Toolkit argues that anyone should be able to understand AI and help shape its futures. Through collaborative zine-making workshops, it aims to find common vocabularies to connect diverse communities around AI’s urgent questions. It clarifies, without math or jargon, the inner workings of AI systems and the ways in which they always operate as sociotechnical systems. The Toolkit celebrates intersectional work done by many other researchers and artists working to address these issues in interdisciplinary fields; and it gathers and synthesizes legacies of anti-racist, queer, transfeminist, neurodiverse, anti-ableist theories, ethics, and tactics that can contribute valuable perspective. Its three formats allow for multiple entry points: The digital wiki offers a forum for others to discuss and expand upon its topics. The collection of printed zines shares AI topics at a concise, approachable scale. And the in-person and hybrid-online workshops invite multiple communities to participate directly in the systems that impact them.

Selecting the toolkit format was a key consideration of the development process for the Intersectional AI Toolkit. The toolkit form taken up here was first modeled after Ahmed’s ‘Killjoy Survival Kit’ (Ahmed 2017); Ahmed says in Living a Feminist Life that the killjoy survival kit should contain books, things, tools, time, life, permission notes, other killjoys, humor, feelings, bodies, and your own survival kit. The term ‘toolkit’ was thoroughly contested (too instrumental, too object-oriented, not quite compendium or catalogue or care package, not index or hub or gazetteer, not manifesto or knapsack or portal), but settled upon after nothing else quite suited. The technologies used went through many iterations, from git repo to wiki to self-hosted hybrid back to repo again, in search of a platform that would facilitate guest user access without heavy onboarding, that could track edits, and that could adapt to multimedia print and digital zine forms. I am still remaking the work and searching for the perfect form. I suspect I will have to create it, and it will continue to change. In its various iterations, the Toolkit has grown into eleven workshops, compiled into eight printed zines, plus a selection of focused topic pages online. Work on the Toolkit also resulted in the two related dataset projects (above), which reflected back to inform the Toolkit. Citizen data researcher Jennifer Gabrys (2019) says toolkits “provide instructions not just for assembly and use but also for attending to the social and political ramifications of digital devices.” She says they are spaces of “instruction, contingency, action, and alternative engagement.” As such, the Intersectional AI Toolkit hopes to provide resources and access points for engaging differently with machine learning systems in non-intimidating ways that connect different audiences.

OBJECTIVES: Through the Intersectional AI Toolkit, appreciate the need for plural perspectives on AI systems. Understand key terminology related to machine learning and to intersectionality. Share perspectives on the impact of emerging technology as it relates to you. Choose critically which AI tools and resources you will engage and how.

Interstitial Portals

  • (Un)Limiting: Rebecca Horn, constraint, and COVID-era art
  • (Un)Raveling: Sonya Rapoport, fiber art, and computation
  • (Un)Forming: VALIE EXPORT, glitch feminism, and broken machines
  • (Un)Living: On Kawara, dailiness, death, and data
  • (Un)Knowing: Pipilotti Rist, black boxes, and tech trauma

The essay form is a kind of embodied processing that moves the corpus (book) through the corpus (body), a reckoning in throat and gut that pairs bodily processing with computational processing. These interstitial essays serve, as queer scholar KJ Cerankowski (2021) writes, “to let this book be the crisis rather than about the crisis or crises, rather, a plurality of traumas and pains felt collectively and individually.” Long traditions of artistic and literary outliers have maintained the need for such forms, like autotheory and lyric essay, which bridge aesthetic, personal, and political concerns. Known as the ‘first’ essayist, Michel de Montaigne called the essay form a ‘trial’ or ‘attempt’. After Trinh T. Minh–ha’s “speaking nearby” (Chen 1992), these interstitial essays are attempts to speak in the “nearbyness” of Coding.Care. They are trials, in the sense of struggles, to get closer to the core of a strange creature by sneaking between its ribs. An oblique strategy (Eno and Schmidt 1975), they glance against logical modes of critical analysis or direct address, in order to become, probe, and interrupt simultaneously. From these traditions, I am interested both in the constitutive act of form-making (as prefiguration) and in the reconstitution of critical forms into poetic, personalized, or approachable forms (Fournier 2021), into forms that can remake the forms we see around us as places we want to inhabit (abolitionist forms).

Locating “a correspondence, not an assemblage,” the essays join together by “living with” concepts (Ingold 2015). They are portals to a co-existing “past-present-future” (Olufemi 2021) for exploring our relationships with systems differently and intimately, in which “the past is not lost, however, but rather a space of potential” (Chun 2021). The essays also use correspondence as a form, relying on epistolary address to conjure up analogue antecedents to the digital media discussed in other sections of Coding.Care. I read the works of five 20th century media artists as pre-responses to automated systems. Their wide range of practices—from minimalist daily rituals to queer feminist body art and performance—shows how we have always-already been living in, talking about, and performing with the questions amplified by automated systems, classification, and datafication. Their works offer a breadth of artistic possibilities for reconsidering our relationships with computational systems—and these responses were already being established in parallel to the development of those systems. They help me reimagine how I want to respond now.

“the future is not in front of us, it is everywhere simultaneously: multidirectional, variant, spontaneous. We only have to turn around. Relational solidarities, even in their failure, reveal the plurality of the future-present, help us to see through the impasse, help temporarily eschew what is stagnant, help build and then prepare to shatter the many windows of the here and now.” (Olufemi 2021)

These are alternate takes on algorithmic “bias,” because bias cannot be “optimized” out of systems. These are bias cuts moving diagonally or diffractively across the warp and weft of the fabric of the other texts here. Cutting and sewing on the bias puts fabric in tension, making garments that take the shape of the bodies they surround. Bias cuts are ways of working with and against materials, acknowledging their limits and not resolving them to right angles. Thus, these essays sustain the research tensions of Coding.Care, unfurling the questions in the materials rather than folding them away.

OBJECTIVES: Engage the questions and concerns of this collection through artistic practice and poetic language. Consider the personal, emotional, physical, social impacts of automated systems through modes that might not be accessed through other texts. Expand the timelines, media formats, and contexts with which you frame the algorithmic. Imagine how you will choose to respond.

References

Abbate, Janet. 2021. “Coding Is Not Empowerment.” In Your Computer Is on Fire, edited by Thomas S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip, 253–72. The MIT Press. https://doi.org/10.7551/mitpress/10993.003.0018.
Abebe, Veronica, Gagik Amaryan, Marina Beshai, Ilene, Ali Ekin Gurgen, Wendy Ho, Naaji R. Hylton, et al. 2022. “Anti-Racist HCI: Notes on an Emerging Critical Technical Practice.” In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–12. CHI EA ’22. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3491101.3516382.
Agre, Philip E. 1998. “Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI.” In Social Science, Technical Systems, and Cooperative Work. Psychology Press.
Ahmed, Sara. 2017. Living a Feminist Life. Durham: Duke University Press.
AI Now Institute, dir. n.d. The Labor That Makes AI “Magic” | Lilly Irani | AI Now 2016. Accessed September 2, 2018. https://www.youtube.com/watch?time_continue=68&v=5vXqpc2jCKs.
Amaro, Ramon. 2022. The Black Technical Object: On Machine Learning and the Aspiration of Black Being. London: Sternberg Press.
Amoore, Louise. 2020. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. https://doi.org/10.1215/9781478009276.
Artist, American. n.d. “Black Gooey Universe.” Unbag. Accessed February 19, 2021. https://unbag.net/end/black-gooey-universe.
Banks, Adam J. 2006. Race, Rhetoric, and Technology: Searching for Higher Ground. Routledge.
Barnett, Fiona, Zach Blas, Micha Cárdenas, Jacob Gaboury, Jessica Marie Johnson, and Margaret Rhee. 2016. “QueerOS: A User’s Manual.” In Debates in the Digital Humanities 2016, edited by Matthew K. Gold and Lauren F. Klein, 50–59. University of Minnesota Press. https://doi.org/10.5749/j.ctt1cn6thb.8.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922.
Benjamin, Ruha. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. 1 edition. Medford, MA: Polity.
Bogers, Loes, and Letizia Chiappini, eds. 2019. The Critical Makers Reader: (Un)learning Technology.
Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press.
Buolamwini, Joy, dir. 2017. How I’m Fighting Bias in Algorithms | Joy Buolamwini. TEDx BeaconStreet. https://www.youtube.com/watch?v=UG_X_7g63rY.
Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77–91. PMLR. https://proceedings.mlr.press/v81/buolamwini18a.html.
Calvino, Italo. 1976. Numbers in the Dark: And Other Stories. Edited by Tim Parks. New York: Pantheon Books.
“CARE Principles of Indigenous Data Governance.” n.d. Global Indigenous Data Alliance. Accessed March 14, 2022. https://www.gida-global.org/care.
Cerankowski, K. J. 2021. Suture: Trauma and Trans Becoming. 1st ed. Santa Barbara: Punctum Books.
Chakravartty, Paula, and Mara Mills. 2018. “Virtual Roundtable on ‘Decolonial Computing’.” Catalyst: Feminism, Theory, Technoscience 4 (2): 1–4. https://doi.org/10.28968/cftt.v4i2.29588.
Chen, Nancy N. 1992. “‘Speaking Nearby:’ A Conversation with Trinh T. Minh–Ha.” Visual Anthropology Review 8 (1): 82–91. https://doi.org/10.1525/var.1992.8.1.82.
Chun, Wendy Hui Kyong. 2021. Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. The MIT Press. https://doi.org/10.7551/mitpress/14050.001.0001.
Ciston, Sarah. 2019. “Intersectional AI Is Essential: Polyvocal, Multimodal, Experimental Methods to Save Artificial Intelligence.” Journal of Science and Technology of the Arts 11 (2): 3–8. https://doi.org/10.7559/citarj.v11i2.665.
———. 2023. “A Critical Field Guide for Working with Machine Learning Datasets.” Edited by Kate Crawford and Mike Ananny. https://knowingmachines.org/critical-field-guide.
Coding Rights. 2020. “Decolonising AI: A Transfeminist Approach to Data and Social Justice.” Medium. September 10, 2020. https://medium.com/codingrights/decolonising-ai-a-transfeminist-approach-to-data-and-social-justice-a5e52ac72a96.
Cooper, Brittney. 2016. “Intersectionality.” In The Oxford Handbook of Feminist Theory, edited by Lisa Disch and Mary Hawkesworth. Vol. 1. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199328581.013.20.
Corry, Frances, Edward B. Kang, Hamsini Sridharan, Sasha Luccioni, Mike Ananny, and Kate Crawford. n.d. “Critical Dataset Studies Reading List.” Knowing Machines. https://knowingmachines.org/reading-list.
Costanza-Chock, Sasha. 2020. Design Justice: Community-Led Practices to Build the Worlds We Need. https://doi.org/10.7551/mitpress/12255.001.0001.
Cox, Geoff, and Alex McLean. 2013. Speaking Code: Coding as Aesthetic and Political Expression. Software Studies. Cambridge, MA: The MIT Press.
Crawford, Kate. 2021. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press.
Crenshaw, Kimberle. 1989. “Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics.” University of Chicago Legal Forum 1989: 139–68. https://heinonline.org/HOL/P?h=hein.journals/uchclf1989&i=143.
Crenshaw, Kimberlé. 2021. “What Does Intersectionality Mean?” 1A. NPR. March 29, 2021. https://www.npr.org/2021/03/29/982357959/what-does-intersectionality-mean.
Davies, Harry, Bethan McKernan, and Dan Sabbagh. 2023. “‘The Gospel’: How Israel Uses AI to Select Bombing Targets in Gaza.” The Guardian, December 1, 2023, sec. World news. https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets.
Dodge, Jesse, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A. Smith, Nicole DeCario, and Will Buchanan. 2022. “Measuring the Carbon Intensity of AI in Cloud Instances.” In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1877–94. FAccT ’22. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3531146.3533234.
Driskill, Qwo-Li (Cherokee). 2010. “Doubleweaving Two-Spirit Critiques: Building Alliances Between Native and Queer Studies.” GLQ: A Journal of Lesbian and Gay Studies 16 (1): 69–92. https://muse.jhu.edu/pub/4/article/372445.
Dunbar-Hester, Christina. 2019. Hacking Diversity: The Politics of Inclusion in Open Technology Cultures. Princeton, New Jersey: Princeton University Press. https://doi.org/10.2307/j.ctvhrd181.
Earl, Charles C. 2021. “Towards an Abolitionist AI: The Role of Historically Black Colleges and Universities.” arXiv. https://doi.org/10.48550/arXiv.2101.02011.
Eno, Brian, and Peter Schmidt. 1975. Oblique Strategies.
“Feminist.AI.” n.d. Accessed March 17, 2019. https://www.feminist.ai/.
Fournier, Lauren. 2021. Autotheory as Feminist Practice in Art, Writing, and Criticism. http://direct.mit.edu/books/book/5028/Autotheory-as-Feminist-Practice-in-Art-Writing-and.
Franklin, Ursula M. 2004. The Real World of Technology. Rev. ed. CBC Massey Lectures Series. Toronto: House of Anansi Press.
Gabrys, Jennifer. 2019. How to Do Things with Sensors. University of Minnesota Press. https://doi.org/10.5749/j.ctvpbnq7k.
Gillespie, Tarleton, and Nick Seaver. 2015. “Critical Algorithm Studies: A Reading List.” Social Media Collective (blog). November 5, 2015. https://socialmediacollective.org/reading-lists/critical-algorithm-studies/.
Goodman, Andrew. n.d. “The Secret Life of Algorithms: Speculation on Queered Futures of Neurodiverse Analgorithmic Feeling and Consciousness.”
Griffiths, Catherine. 2022. “Toward Counteralgorithms: The Contestation of Interpretability in Machine Learning.” PhD diss., University of Southern California. https://digitallibrary.usc.edu/CS.aspx?VP3=DamView&VBID=2A3BXZ8B021HS&SMLS=1&RW=1334&RH=859.
Hakopian, Mashinka Firunts. 2022. The Institute for Other Intelligences. Edited by Anuradha Vikram and Ana Iwataki. X Artists’ Books.
Hamraie, Aimi, and Kelly Fritsch. 2019. “Crip Technoscience Manifesto.” Catalyst: Feminism, Theory, Technoscience 5 (1): 1–33. https://doi.org/10.28968/cftt.v5i1.29607.
Haschemi Yekani, Elahe, Magdalena Nowicka, and Tiara Roxanne. 2022. Revisualising Intersectionality. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-93209-1.
Heikkilä, Melissa, and Tate Ryan-Mosley. 2023. “Three Things to Know about the White House’s Executive Order on AI.” MIT Technology Review. October 30, 2023. https://www.technologyreview.com/2023/10/30/1082678/three-things-to-know-about-the-white-houses-executive-order-on-ai/.
Hicks, Mar. 2021. “Sexism Is a Feature, Not a Bug.” In Your Computer Is on Fire, edited by Thomas S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip, 135–58. The MIT Press. https://doi.org/10.7551/mitpress/10993.003.0011.
Hoff, Melanie. 2022. “Always-Already-Programming.md.” Gist. October 17, 2022. https://gist.github.com/melaniehoff/95ca90df7ca47761dc3d3d58fead22d4.
hooks, bell. 1994. Teaching to Transgress: Education as the Practice of Freedom. New York: Routledge.
Hunger, Francis. 2023. “Unhype Artificial ‘Intelligence’! A Proposal to Replace the Deceiving Terminology of AI.” Zenodo. https://doi.org/10.5281/zenodo.7524493.
“INDIGENOUS AI.” n.d. INDIGENOUS AI (blog). Accessed April 25, 2022. https://www.indigenous-ai.net/.
Ingold, Tim. 2015. The Life of Lines. London; New York: Routledge.
Kapoor, Sayash, and Arvind Narayanan. 2023. “A Misleading Open Letter about Sci-Fi AI Dangers Ignores the Real Risks.” Substack newsletter. AI Snake Oil (blog). March 29, 2023. https://aisnakeoil.substack.com/p/a-misleading-open-letter-about-sci.
Keeling, Kara. 2014. “Queer OS.” Cinema Journal 53 (2): 152–57. https://doi.org/10.1353/cj.2014.0004.
Khan, Nora. 2022. “HOLO 3: This Isn’t Even My Final Form.” HOLO (blog). 2022. https://www.holo.mg/dossiers/holo-3/.
Klipphahn-Karge, Michael, and Ann-Kathrin Koster. n.d. “Queere KI - Zum Coming-out smarter Maschinen” [Queer AI: On the Coming-Out of Smart Machines].
Klumbytė, Goda, Claude Draude, and Alex S. Taylor. 2022. “Critical Tools for Machine Learning: Working with Intersectional Critical Concepts in Machine Learning Systems Design.” In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1528–41. FAccT ’22. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3531146.3533207.
Lewis, Jason Edward, Angie Abdilla, Noelani Arista, Kaipulaumakaniolono Baker, Scott Benesiinaabandan, Michelle Brown, Melanie Cheung, et al. 2020. “Indigenous Protocol and Artificial Intelligence Position Paper.” Honolulu, HI: Indigenous Protocol and Artificial Intelligence Working Group and the Canadian Institute for Advanced Research. https://doi.org/10.11573/spectrum.library.concordia.ca.00986506.
Loukissas, Yanni Alexander. 2019. All Data Are Local: Thinking Critically in a Data-Driven Society. Cambridge, MA: The MIT Press.
Loveless, Natalie. 2019. How to Make Art at the End of the World: A Manifesto for Research-Creation. Durham: Duke University Press.
Marino, Mark C. 2006. “Critical Code Studies.” Electronic Book Review. https://electronicbookreview.com/essay/critical-code-studies/.
———. 2020. Critical Code Studies. Software Studies. Cambridge, MA: The MIT Press.
Martinez, Emily, and Sarah Ciston. n.d. “Unsupervised Pleasures.” Unsupervised Pleasures. Accessed January 29, 2023. https://unsupervisedpleasures.com/.
McQuillan, Dan. 2022. Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. https://bristoluniversitypress.co.uk/resisting-ai.
Moraga, Cherríe, and Gloria Anzaldúa, eds. (1981) 1983. This Bridge Called My Back: Writings by Radical Women of Color. 2nd ed. Watertown, MA: Persephone Press.
Muñoz, José Esteban. 2009. Cruising Utopia: The Then and There of Queer Futurity. NYU Press. https://www.jstor.org/stable/j.ctt9qg4nr.
Nakamura, Lisa. 2014. “Indigenous Circuits: Navajo Women and the Racialization of Early Electronic Manufacture.” American Quarterly 66 (4): 919–41. https://muse.jhu.edu/pub/1/article/563663.
Nardi, Bonnie A. 1993. A Small Matter of Programming: Perspectives on End User Computing. https://doi.org/10.7551/mitpress/1020.001.0001.
Nedden, Christina zur, and Ariana Dongus. 2017. “Biometrie: Getestet an Millionen Unfreiwilligen” [Biometrics: Tested on Millions of Unwilling Subjects]. Die Zeit, December 17, 2017. https://www.zeit.de/digital/datenschutz/2017-12/biometrie-fluechtlinge-cpams-iris-erkennung-zwang.
Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.
Oliver, Julian, Gordan Savičić, and Danja Vasiliev. 2011. “The Critical Engineering Manifesto.” 2011. https://criticalengineering.org/.
Olufemi, Lola. 2021. Experiments in Imagining Otherwise. Hajar Press.
Ovalle, Anaelia, Arjun Subramonian, Vagrant Gautam, Gilbert Gee, and Kai-Wei Chang. 2023. “Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness.” arXiv. https://doi.org/10.48550/arXiv.2303.17555.
Perrigo, Billy. 2023. “Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer.” Time. January 18, 2023. https://time.com/6247678/openai-chatgpt-kenya-workers/.
Raley, Rita. 2009. Tactical Media. Vol. 28. Minneapolis, MN: University of Minnesota Press.
Raley, Rita, and Jennifer Rhee. 2023. “Critical AI: A Field in Formation.” American Literature, March. https://doi.org/10.1215/00029831-10575021.
Roberts, Sarah T. 2016. “Commercial Content Moderation: Digital Laborers’ Dirty Work.” In The Intersectional Internet: Race, Sex, Class, and Culture Online, edited by Safiya Umoja Noble and Brendesha M. Tynes, 147–60. Digital Formations, vol. 105. New York: Peter Lang Publishing.
Shanahan, Murray. 2023. “Talking About Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2212.03551.
Sharma, Sarah. 2020. “A Manifesto for the Broken Machine.” Camera Obscura: Feminism, Culture, and Media Studies 35 (2 (104)): 171–79. https://doi.org/10.1215/02705346-8359652.
Sinders, Caroline. n.d. “Feminist Data Set.” Accessed January 29, 2023. https://carolinesinders.com/feminist-data-set/.
Sloane, Mona, Emanuel Moss, Olaitan Awomolo, and Laura Forlano. 2022. “Participation Is Not a Design Fix for Machine Learning.” In Equity and Access in Algorithms, Mechanisms, and Optimization, 1–6. EAAMO ’22. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3551624.3555285.
Steyerl, Hito. 2023. “Mean Images.” New Left Review, no. 140/141 (April): 82–97.
Sunder, Aarti, ed. 2023. Platformisation: Around, Inbetween and Through. Singapore Art Museum. https://aartisunder.com/2022/11/07/platformisation-around-in-between-and-through/.
Vee, Annette. 2017. Coding Literacy: How Computer Programming Is Changing Writing. http://direct.mit.edu/books/book/3543/Coding-LiteracyHow-Computer-Programming-Is.
“What Is Intersectionality.” n.d. Center for Intersectional Justice. Accessed February 16, 2024. https://www.intersectionaljustice.org/what-is-intersectionality.
Willis, Holly. 2016. Fast Forward: The Future(s) of the Cinematic Arts. Wallflower Press.