
Monthly contestation updates

Since our launch we have produced a running update in newsletter form of the different types of contestation as they happen and are reported. Sign up to our email list to receive each issue as it is produced.

December 2025/February 2026

Some campaigning news:

Opposition from employees continues: 15,000 nurses went on strike in New York in January/February 2026, with AI as one of the strike issues - as of now it looks like an agreement has been reached in every hospital hit by the action, winning AI safeguards in their contracts for the first time - victory to the New York nurses! TMK produced an excellent podcast with healthcare workers who are members of the National Union of Healthcare Workers about their ongoing contract dispute with Kaiser Permanente. A whistleblower (a former Google employee) alleged that Google helped an Israeli military contractor with AI. Employee whistleblowing also helped expose Musk’s deliberate use of porn to generate interest in Grok. There is a good summary here from Power at Work of worker responses.

Artists continue to mobilise in different ways in response to the impact of AI. Hollywood continues to be focused on limiting the impact of AI on film-making. A recent film about AI, Mercy, saw Amazon ban the use of AI - actors continue to be unhappy about the use of AI in film, and actors’ agents are including clauses in their contracts prohibiting the use of performances to make AI replicas - and there is already talk about another strike this year once the widely-cited 2023 contracts, which famously saw a prolonged strike result in a deal to limit AI, expire. In the UK, 99% of members of the actors’ union, Equity, voted to refuse to be digitally scanned on set over concerns that the scans will be used by AI. Audiences also don’t seem very keen on AI-generated content. Platforms are moving to ban AI-generated content: Bandcamp announced a ban on AI-generated songs, and Comic-Con banned AI art.

On copyright, the UK government continues its consultation on how to legislate in the area - the consultation saw an overwhelming rejection - 97% of submissions - of a plan to introduce a tech-friendly approach to the copyright of artists’ work. India released a plan to make AI companies pay for training data. The European Parliament has called for new copyright laws.

Data centres continue to be a major focus of opposition, especially from nearby residents - in fact ‘everyone hates them’. Resident campaigns are reported in Northern Virginia, Oklahoma, Georgia (which is seeing a push for a ban), London, and Buckinghamshire (UK - where planning permission was retracted in an embarrassing U-turn by the government). Rest of World produced a good summary of environmental campaigns arising in response to the impact of data centres.

Politicians also look increasingly concerned about the unpopular rise of data centres in the US. This has prompted investigations into the impact on electricity costs. There is also growing pressure, especially - but not only - within the Democratic Party to adopt a strongly anti-AI position. The Republican Governor of Florida, Ron DeSantis, has adopted a strong anti-AI stance. Internal divisions also continue to be evident within the MAGA coalition over AI - with Republican politicians concerned about local opposition amongst voters unhappy about Trump’s attempt to ban state-level regulation of AI. Indeed, despite Trump’s attempted ban, many states continue to move towards tighter regulation. Trump was eventually moved by this opposition to pledge to encourage Big Tech to do more to avoid pushing up electricity bills (citing Microsoft’s recent announcement on this).

The publication by Grok of sexualised deepfake images, including of children, sparked global outrage - with Apple and Google also pulled into the controversy - and widespread calls for bans (and in some cases actual bans), including in the EU (where an investigation was launched), India, the UK (also see here and here - including the launch of an investigation by the UK media regulator, Ofcom, another by the Information Commissioner’s Office, and proposals for legislation to ban the creation of online sexual images), Malaysia, Indonesia (where it was banned and then later ‘conditionally’ had the ban lifted), Brazil, California, and France (including a police raid). This eventually prompted Grok to turn off the image creation function, except for subscribers, which caused a further political storm as critics claimed this amounted to allowing deepfakes of children if the Grok user paid for the service. More coverage of this here. ControlAI speaker Andrea Miotti was interviewed on Sky News to talk about the risks of AI as a result, and by a committee of the Canadian House of Commons. AOC and Paris Hilton announced their attempt to seek legislation in response in the US. The controversy is also seen as one of the reasons Spain (and later Brazil) is moving to ban social media for children (which Musk was very angry about), prompting some social media firms to agree to independent assessments of health risks, and a more general move that Rest of World claims amounts to an attempt to move off US Big Tech.

The rise in AI relationships (or human-like interactions) has also prompted concerns - China recently adopted regulation. Related, the impact of AI delusions is becoming a common theme in public discussion.

Various legal proceedings continue - Google and Character.AI settled a lawsuit relating to a child’s suicide after the child developed a relationship with a chatbot. A lawsuit was also initiated in LA over the addictiveness of social media and its algorithms (prompting TikTok to settle).

The adoption of new regulation of AI is ongoing. Nigeria announced new plans to regulate AI. South Korea adopted strict new AI regulations. The EU’s regulatory landscape continues to be the subject of debate. The EU issued a warning to Meta over its restriction of access to rival AI assistants on WhatsApp. Starlink was blocked in South Africa (and restricted in Uganda). A new angle also opened up in the debate over job displacement when a Labour minister called for UBI, paid for by tech companies, as a way of responding to the job losses associated with AI.

The AI-fuelled semiconductor chip industry is also prompting social opposition. In Taiwan, the push for large-scale green energy, such as wind turbines, is devastating local fishing and farming - prompting ongoing protests and opposition by those affected.

Kim Crawley, founder of the excellent StopGenAI website, is crowdfunding the publication of a forthcoming book, Technofascism Survival Guide.

The limited usefulness (un-usefulness?) of AI also continues to be exposed by a wide range of reports and surveys. One report found that half of AI projects have been shelved due to complex infrastructure. CNN reports a new trend of ‘analog lifestyles’ where people explicitly commit to removing AI from their lives. A survey in the Wall Street Journal highlights the stark difference in perception between bosses and employees - with the former heralding efficiency savings, and over 90% of CEOs claiming that at least 2 hours per week is saved, yet 40% of employees claiming no time has been saved at all. A report in Harvard Business Review finds AI does not reduce work; rather it makes employees work longer - which they conclude is probably unsustainable (and which has also prompted concerns about AI-related burnout). A West Midlands police chief was caught out for using AI in producing a report on football hooliganism, citing football matches that never existed - prompting discussion of ways to avoid using AI. A majority of CEOs report zero payoff from AI spending - with very similar findings in another report. Researchers showed that AI can be prompted to quote copyrighted work word-for-word, despite AI firms’ denials that it would do so. DeepMind chief Demis Hassabis warned AI investment looks ‘bubble-like’. Dario Amodei, Anthropic chief, warned ‘Humanity needs to wake up’ to the dangers of AI. The Register reports that ‘Only 3.3 percent of Microsoft 365 and Office 365 users who touch Copilot Chat actually pay for it, an awkward figure that landed alongside Microsoft’s $37.5 billion quarterly AI splurge and its insistence that the payoff is coming’. The announcement of a $660 billion investment in AI infrastructure by the Big Tech firms was not received well by the markets.
There was an interesting lengthy discussion in Bloomberg on whether robotaxis are safer than human-driven taxis - in sum, at least in terms of deaths per mile driven the answer is no, and otherwise we don’t know. Another report discusses the rise in hiring of good writers, due to the poor quality of AI slop writing (or ‘slopaganda’).

The link between the ICE raids in the US and AI has been getting increasing attention - prompting French company Capgemini to drop a contract with ICE following pressure from French officials (and then to move to sell the unit altogether). Palantir and Roshel have also faced scrutiny.

The use of AI by organisations and firms remains controversial. Campaigners highlighted the use of AI to write the political platforms of far-right parties during the 2024 European Parliament elections. The British Museum faced criticism for using AI-created content.

In terms of detrimental impact: Campaigners highlighted the negative impact of AI when used to teach children in schools. Opposition is also growing over the poor health advice that AI is producing - with Google removing some of its summaries. MPs in the UK are highlighting the risks of unregulated AI for finance. The Washington Post covered a story on academics increasingly turning to oral examinations as an alternative form of assessment to deal with the difficulties of students using ChatGPT. Concerns are also increasingly being raised regarding the impact of AI use on younger employees’ capacity to learn.

Many critics look to open source alternatives. This article discusses the use of the open source mapping platform OpenStreetMap by Palestinians to navigate roads which they are prohibited from using by the Israeli state in the West Bank.

An attempt to stage a pro-billionaire march in San Francisco flopped spectacularly.

November/December 2025 

Data centres continue to be a major target of public opposition, especially in the US, over the pollution suffered by local residents and over water and electricity use. Amazon was under pressure after being found to have actively sought to keep the water use of its data centres secret. As Harvard Business Review reports, some state agencies have begun conducting health impact assessments of data centres’ on-site diesel generators to inform emission limits. Bloomberg carries a report claiming that electricity prices have doubled since 2020, with the highest rate of increase in locations within 50 miles of a data centre - all of which is creating considerable public anger. Elections are increasingly focused on candidates’ positions on data centres: a recent election in Northern Virginia saw both candidates pledge to block the expansion of data centres, with the vote seemingly going to the candidate who was most ardent in his opposition. This comes amid reports of America’s coming war on data centres here. The campaign organisation Data Centre Watch produced a report highlighting how $64 billion in US data centre projects have been blocked or delayed by a growing wave of local, bipartisan opposition since 2024. Also see more details of the ongoing campaign by the Pyramid Lake Paiute Tribe to oppose the building of data centres in the region. 230 environmental groups demanded a national moratorium on new data centres in the US. This is alongside a new online campaign to oppose the building of data centres in Pennsylvania. At the same time, Amazon has complained to the Public Utility Commission of Oregon, alleging that Portland-based PacifiCorp is failing to provide enough electricity to its data centre!

Worker and employee resistance and opposition to AI also continues. Ex-AI product safety lead Steven Adler broke ranks with his former firm to highlight the continued risk of serious mental health issues associated with the use of AI chatbots. Likewise, the Guardian reports from a range of AI workers and raters who report the flawed models of AI training and speak out against the use of AI. For a broader discussion, see this podcast by two founders of a new mutual aid and advocacy group called Stop Gen AI, which formed this year out of the critical need to provide material support for creatives, knowledge workers, and anyone else impacted by generative AI. Research by Deloitte found that employees’ trust in their employers’ AI tools is dramatically falling - by 31% in the most recent findings. Quartz reports widespread hatred towards MS Copilot, in part due to employers forcing its adoption upon employees, prompting widespread online mockery and reports that Microsoft is now having to downgrade sales expectations.

The impact of AI on worker productivity has also been questioned. Research into the use of AI by software developers found that it made them 19% slower than when they worked without AI - yet despite this they believed that AI had sped them up by 20%.

AI has also faced ongoing legal challenges. In Japan, the Yomiuri Shimbun and the Asahi Shimbun filed a lawsuit in August with the Tokyo District Court against U.S. AI startup Perplexity AI; Amazon also sued Perplexity in an attempt to stop its chatbot from shopping on its platform. OpenAI was sued by the firm Cameo, accused of deliberately confusing consumers by introducing a new “Cameo” feature on Sora; and a series of lawsuits has been brought by OpenAI chatbot users over suicides and harmful delusions - confirming media reports of a trend of mental health hospitalisations as a result of chatbot use. A lawsuit was filed in California against discriminatory recruitment practices resulting from AI decision-making. Another lawsuit brought against OpenAI claims ChatGPT persuaded a user to kill his mother and himself. The European Commission announced an investigation into Google’s AI overview feature.

AI firms are being forced to act as a result of these legal challenges. For instance, the fallout from an earlier adverse legal ruling against it prompted Character.AI to move to ban children under 18 from using its chatbots.

Not only AI firms, but also AI users have faced legal challenges. Most notably, courts in Utah, Indiana and California have fined lawyers for using AI-generated legal submissions that included hallucinated citations to non-existent research and case law.

In terms of AI regulations, family-led campaigns over the risks of AI chatbots have featured heavily in recent weeks. Legislation proposed by Senators Hawley and Blumenthal, now passing through the US Congress, seeks to limit the availability of chatbots to minors, following pressure from parents of children who have taken their own lives under the influence of chatbots. The UK government also announced that it is considering new laws regulating the use of AI chatbots by children. In addition, China adopted regulation designed to restrict fake images and videos, adding to existing legislation which requires AI-related companies to provide generative AI services in line with the values of socialism (and which already governs the design and use of DeepSeek). A new law in New York regulates the use of AI to develop personalised pricing. The Data (Use and Access) Act 2025 was adopted in the UK, making deepfake porn a criminal offence, prompted by a feminist campaign group made up of victims of deepfake porn and experts.

As the UK Government continues to consider AI-related copyright law, Paul McCartney contributed to ongoing objections from the music industry regarding the infringement of copyright by AI firms, releasing a track consisting of an almost completely silent recording studio to highlight what the music industry might become if copyright laws aren’t updated to take account of AI.

Schools and parents are becoming increasingly focused on the risks of AI, with a parent-led campaign creating pressure on Malden Public Schools and demanding better regulation, and with many parents calling for an outright ban on AI in schools.

A broader anti-AI sentiment can also be discerned in public opinion, and in how this is translating into political pressure and forcing firms and organisations to reconsider their approach to AI. Research into the degree to which the public would trust politicians to use AI to take decisions found overwhelming opposition in both the UK and Japan. Google was forced to remove its AI model Gemma from its Studio platform after a Republican senator said it “fabricated serious criminal allegations” against her. The academic preprint platform arXiv announced it will no longer accept computer science review articles and position papers due to the overwhelmingly high level of “AI slop” being submitted to the platform. McDonald’s faced online opposition to its AI-generated Christmas advert, forcing the firm to turn off comments on YouTube, before setting the video to private.

In terms of political consequences, divisions within the MAGA camp have emerged following Trump’s backing for a plan to restrict state-level regulation of AI, with Ron DeSantis calling Trump’s plan “an insult to voters” (another similar report in the Washington Post). Divisions have also emerged within Meta, with reports of a schism between the AI team and the rest of the company.

A number of other campaign developments: The Verge reports a campaign seeking to influence the Pope to speak out against the risks of AGI. A detailed set of recommendations was published for campaign organisations seeking to use social media platforms in ways that outsmart algorithms designed to minimise the visibility of Palestine solidarity campaigns. The campaign by ControlAI asking UK-based MPs to sign a pledge calling for “binding regulation on the most powerful AI systems” reached 100 MP signatures.

The European Conference on AI (ECAI), which took place in Bologna in October 2025, witnessed a demonstration outside the Conference staged by local Palestine solidarity activists and organisations, declaring that such conferences are ‘just advertising stores for criminal multinational companies that finance the development of new technologies, devices placed on the market with the label “for civil use”, and then sold to the “defense” sector, to companies like Leonardo S.p.a., that still export weapons to “Israel”. The production of genocide begins in European classrooms, laboratories and research centers: ECAI is the emblem of neoliberal hypocrisy, the same one that tries to hide death and destruction behind empty principles of peace, freedom and democracy’.

For more summaries of contemporary anti-AI campaigns, see this nice article in Vox listing some of the most impressive recent achievements of anti-AI campaigns, and another similarly good overview of recent campaign successes in this article in MROnline.

October 2025

At the micro-level we continue to see an ongoing series of everyday forms of ‘misbehaviour’ or subversion regarding the use of AI. A recent survey of company employees found that over 30% were actively sabotaging their company’s AI strategy, with the most common reason given being an explicit attempt to subvert the use of AI to replace employees’ jobs. Reflecting this trend, a recent McKinsey report found that while 80% of companies are introducing gen AI in some form, around 80% also report no improvement to their bottom line as a result; the report identifies a key reason as “implicit resistance from business teams and middle management due to fear of disruption, uncertainty around job impact, and lack of familiarity with the technology.” Likewise, the NYT reported on attempts by job applicants to trick chatbot recruiters for job applications. The Information reports on how coders are refusing to use AI tools to code, with two software engineers at Mixus, San Francisco, staging a rebellion by refusing to follow instructions to rely on AI coding-assistance software. Higher up the hierarchy within AI companies, the FT reported an exodus of disgruntled senior staff from Musk’s xAI. Also as a result of worker opposition, Microsoft announced it would disable some services to Israel’s Defense Ministry, after a company review concluded that Israel was using Microsoft’s cloud storage services to hold surveillance data on Palestinians - this came after disgruntled employees entered company president Brad Smith’s office to protest, hang banners and occupy rooms in the company HQ.

In terms of regulation, we have seen a number of key developments. These include the introduction of the new California AI Companion Law (SB243), as this piece from James Muldoon reports. Newsom also signed the first AI Safety Law (SB53) in California - as reported in more detail here by TechCrunch - although he also vetoed another piece of legislation (AB1064) that sought to restrict access to AI for children. An AI Risk Evaluation Act was introduced into the US Senate. The much-vaunted UK AI Bill still remains waiting in the pipeline. Calls for further regulation were also voiced. Perhaps most visibly, the Global Call for AI Red Lines saw over 300 prominent figures call for governments to reach international agreements on red lines for AI. Similarly, a group of leading experts in the cognitive sciences signed an open letter calling for the rejection of the uncritical introduction of AI in academia. For a more global summary of AI policies and regulations, see the excellent new Artificial Intelligence Policy Observatory for the World of Work (AIPOWW) Symposium in the current issue of the journal Global Political Economy. This reports on the work of the Artificial Intelligence Policy Observatory for the World of Work, based at the University of Essex’s Centre for Commons Organising, Values, Equalities and Resilience and led by Phoebe V. Moore and Peter Bloom (both members of our Contentious Politics Of AI Network). It includes excellent commentaries on AI policies and regulations in the EU, Brazil, India, China, and Canada.

The wider impact of AI-related infrastructure is also being contested. Efforts to oppose the building of data centres are growing in prominence. Earlier this year saw reports of the Sierra Club seeking to overturn the licence for a data centre as part of their campaign in Reno, Nevada. MIT Technology Review covered a related campaign by the Pyramid Lake Paiute Tribe to tackle the use of water by data centres that threatens to divert it away from Pyramid Lake. More generally, the impact of AI on energy costs is starting to have political consequences, with voters increasingly calling for political measures to offset or reduce the impact of AI energy use on household energy costs. In the US, Democrats, led by Elizabeth Warren, have sought to politicise the issue and called for Trump to act. A recent election in Virginia saw both the Democrat and Republican candidates cite the rise in electricity costs as a key reason why the building of any new data centres should be blocked. This reflects a more general trend whereby public opinion in the US is becoming increasingly sceptical about the merits of AI. As the Atlantic reports, the ongoing negative public opinion of Musk continues to have a detrimental impact on the introduction of AI technologies at Tesla.

Finally, in terms of legal challenges, Anthropic agreed a much-reported $1.5 billion settlement to resolve the copyright class action brought against it.

Contact

The Contentious Politics Of AI network is coordinated by David J. Bailey (University of Birmingham), Masoumeh (Iran) Mansouri (University of Birmingham), and Gary Smith.

If you have any questions or want to get involved, email us at contestingAInetwork@gmail.com