Resisting AI, by Dan McQuillan

It's a short book, but it felt pretty dense: lots of sociological jargon I wasn't familiar with, and many references to works by other authors. It felt a bit like a slightly expanded, relatively accessible academic paper.

The author makes the case, through the first four of its seven chapters, that AI naturally tends to encourage fascist approaches to framing and solving problems within society. This tendency is compounded by the contexts in which AI is being developed and used, by who is using it, and by the ends to which it is being put. He gives a layman's explanation of how AI works in the first chapter, explaining that it's effectively just an application of statistics, and then builds on this until, by chapter four, he's explaining how AI enables necropolitics: politics which determine which sorts of people are allowed to live and which are to be killed, whether directly or (more likely) through systemic oppression and neglect.

The fifth chapter critiques "science" and offers "different ways of knowing" as a solution; it introduced me to the phrases "feminist science" and "post-normal science." A fair amount of the criticism applied to science as it has been institutionalized and bastardized in some places; there was also criticism of those who effectively ignore "model risk." Post-normal science in particular deals with situations where certainty cannot be reached before action must be taken. I'll probably try to read more about this subject.

The sixth chapter suggests establishing "people's councils," modeled after workers' councils but with broader membership, in order to mount an effective political challenge to the dangers posed by AI.

The final chapter suggests that we ought to pursue the development of "anti-fascist AI." This chapter repeats themes that should be familiar to people entrenched in the development and promotion of libre software: reclaiming the commons and fighting the enclosure represented by proprietary AI, empowering users, and taking collective political and technical control over the technology itself. Among the citations, I noted the Ostrom book I'd been reading before I misplaced it somewhere (Governing the Commons), as well as a critique of Bastani's Fully Automated Luxury Communism, a book whose apparent naivety had kept me from finishing it.

There are a lot of threads in this book I could, and probably should, start pulling on. Someone better versed in the language of sociology and political theory might get more out of it than I did, though I did feel I got enough out of it, given its small size.

Page Created: 2025-06-11

Last Updated: 2025-06-12

Last Reviewed: 2025-06-12