
Monthly contestation updates

Since our launch we have produced a running newsletter update covering the different types of contestation as they happen and are reported. Sign up to our email list to receive each issue as it is produced.

February/March 2026

Campaign updates:

*April 25, 2026 - Resisting Big Tech Empires - London South Bank University - details here

*London saw the biggest anti-AI protest ever. Reports here and here.

*Pull the Plug recently postered the OpenAI office in London to protest their role in AI-powered warfare.

*NYC saw mass vandalizing of adverts for the AI wearable Friend.

*Humans First was launched - a new AI-safety social movement

*The QuitGPT campaign is getting increasing attention - more coverage here.

*See Stop Gen AI for a list of AI-free software - and also for support groups for those seeking to push back against gen AI

*Some good anti-AI memes - and this one seems pertinent

*The UK government backtracked on plans to make it easy for AI companies to use copyrighted works for AI training - this is a victory for the artists’ campaign.

The continuing effects of the Grok scandal: the UK government announced it would tighten the Online Safety Act to make sure that it covers AI chatbots. It also announced it would require platforms to remove abusive content within 48 hours of it being identified. The Spanish government launched an investigation into TikTok, X, and Meta for potentially spreading AI-generated child sexual abuse material. The Indian government adopted a new rule imposing a 3-hour time limit for unlawful AI content to be removed once identified. Data protection watchdogs worldwide are calling for better regulation. There is another lawsuit against Grok, including claims brought by children. And Baltimore is suing xAI.

Ongoing battles over copyright: ByteDance agreed to limit use of its AI video generator Seedance 2.0 after Walt Disney threatened legal action. David Greene, a public radio presenter, has filed a lawsuit against Google over NotebookLM, which he claims used his voice to create an AI-generated podcast. Recent research showing that LLMs can reproduce exact replicas of the texts they’ve been trained on further increases the risk of copyright lawsuits against AI firms. Another collective action lawsuit has been levelled against Nvidia over its scraping of YouTube. Nielsen’s Gracenote is suing OpenAI for copyright infringement. The London Book Fair saw people walking around with a book - Don’t Steal This Book - on display, part of a campaign by authors against gen AI and for better copyright legislation on AI training models. Relatedly, a new report by the Society of Authors (SoA), the Independent Society of Musicians (ISM), Equity, the Association of Illustrators (AOI), and the Association of Photographers (AOP) highlights the impact of gen AI on creative workers and was used to launch a campaign calling on the UK government to adopt new regulations.

Recent research by Social Change Lab highlights how the risk of AI-enabled warfare - autonomous weapons such as robots and drones operating without meaningful human oversight - causes the greatest level of concern about AI, and is therefore the most likely to prompt social movements and protest in opposition to AI. This is especially pertinent at present, given that the Pentagon strongarmed AI firms into falling in line before the attacks on Iran. Also see a recent post by Masoumeh Iran Mansouri (one of our convenors) on exactly this question of AI for military use. The FT also published an op-ed discussing the risks. The discussion surrounding the horrific murder of 175 school children, seemingly by Palantir and the US Government, at Shajareh Tayyebeh primary school in Minab, southern Iran, is also obviously relevant here.

Palantir’s toxic reputation is beginning to have consequences: the Swiss government has repeatedly refused to award contracts to the firm, British MPs are urging the government to halt its latest contract with Palantir, and US politicians are likewise distancing themselves from the firm.

Employees don’t like or want AI - new research based on two surveys of over 1,000 employees finds that those who use AI more are both more anxious about its effects and more inclined to resist its introduction. More research here on employees’ reluctance to take up AI. Another survey saw 89% of managers report no increase in productivity. Likewise, the Economist reports productivity gains at roughly zero. Workers are also reporting detrimental mental health consequences. Even managers are now reporting that AI is a major risk. In all of this, the obvious reason that AI introduction is struggling is human opposition. Regardless of the benefits, employees are increasingly being told to embrace AI or suffer the wrath of their managers - Accenture released a memo telling staff they needed to embrace AI if they wanted a chance of promotion; Google is also dialling up the pressure on its employees; and PwC threatens to get rid of any naysayers. That such an authoritarian approach is needed is obviously telling in terms of how employees are reacting to AI.

Everyone hates AI slop - here are some people on Reddit complaining about its infection of Pinterest. Here is Hoopla agreeing to remove AI-slop books following a 404 Media investigation. This mirrors a more general trend whereby the public don’t seem so keen on AI.

Political ramifications continue: intra-Republican divisions were further exposed as the White House attacked Utah Republicans over their proposal for a new AI Transparency Act - prompting a massive billboard campaign against the White House in Utah in response. An intra-industry political division has also emerged, with AI billionaires on both sides - a ‘safety’ AI PAC (Public First Action) backing pro-regulation candidates (in both the Democratic and Republican parties) stands in opposition to a right-wing (anti-safety?) PAC (Leading the Future). Axios has a much more detailed discussion of the political divisions that AI is creating in the US. The Global AI Impact Summit in New Delhi saw various efforts by political leaders to espouse their commitment to varying degrees of AI safety: Abhishek Singh, chief executive of India’s AI mission, called for “democratising AI access to the global south” and rolling out technology for social ends such as education, health and agriculture - although US-based Big Tech firms largely dismissed these suggestions. Florida voted on an AI Bill of Rights, which didn’t get through - further exposing the divide within the Republican Party. The White House released its AI policy, but it is not at all clear this will quell the opposition within the MAGA coalition itself.

Everyone hates data centres. The political fallout from, and opposition to, the spread of data centres saw Trump seek to shift the burden/blame to Big Tech firms directly. As a partial response to the ongoing outcry across the US, Anthropic pledged to cover the additional energy costs created by data centres - with most of the other Big Tech firms subsequently signing the pledge (although obviously the proof will be in the pudding). A new Senate bill proposed by Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) sought to achieve a similar goal, although it looks unlikely to make it through the legislative process. Bernie Sanders has also proposed a bill that would pause the construction of data centres. Chicago’s biggest electricity company, Commonwealth Edison Co., announced plans to make sure data centres pay for the increased fuel costs. Authoritarian moves against data centre opposition also surfaced, as a man was arrested for speaking too long against data centres at a city council meeting in Oklahoma. Farmers in Kentucky are refusing to sell their land to data centre developers. Data centres also continue to face refusals of planning applications, including in Pennsylvania, New Jersey, and Edinburgh (despite the city’s own planners recommending approval) - and there is a petition calling for a ban in Ohio. Campaign groups in the UK are calling for more transparency over data centres’ energy use. Data centres are also behind schedule (due in part to local opposition). People don’t seem to much like AI in general.

There are also financial implications associated with data centres. Moody’s questioned the accounting methods data centre providers are using, including special purpose vehicles, which appear to hide the level of financial risk the data centres are incurring - this is likely to become more of an issue as data centres increasingly seek credit ratings. Blue Owl Capital failed to secure financing for a $4 billion data center project in Pennsylvania. CIOs are also getting worried. The FT published a discussion piece considering the possibility of a $9 trillion bust. The Guardian reports on the possibility that it is all based on ‘phantom investments’. As a result, data centres are struggling to find insurance cover.

Bloomberg has a nice piece discussing the fallout from the blog post published by tech entrepreneur and investor Matt Shumer, Something Big Is Happening, which sought to highlight how AI could decimate professional jobs. As the Bloomberg piece argues, the severe market reaction to a single blog post reveals that “AI is trading on vibes and anecdotes”. The same seems to apply to the market fallout that followed the Citrini post.

Political contestation is also arising from data centre opposition: Illinois Governor JB Pritzker (Democrat) announced a two-year suspension of state tax incentives for new data center developments - with the Virginia Senate proposing to do the same. Pennsylvania Gov. Josh Shapiro (Democrat) announced a plan for better regulation of data centers. More generally, the Democratic Party is increasingly positioning itself as the anti-data centre party - and in the process experiencing internal divisions (see the Democratic primary in North Carolina). US Senators opened an investigation into energy use by data centres.

Data centres were also exposed as a military target as the Iranian regime attacked Amazon data centres (see also here - and the regime’s declaration that it would continue to do so here).

The lack of usefulness (uselessness?) of AI: a KPMG partner was fined for using AI to answer an AI-training exam. A report commissioned by a group of non-profits, including Beyond Fossil Fuels and Climate Action Against Disinformation, discredited gen AI firms’ claims to have the potential to tackle the climate crisis - the report highlights how the firms promised that “AI” would solve the climate crisis while failing to mention that all such claims relate to machine learning, not generative AI/LLMs. New research published by Anthropic(!) on an experiment with AI-assisted coding found that AI-assisted coders were marginally more productive (not statistically significant) alongside a significant decline in their understanding of the code itself. The UK Government’s website - Gov.uk - is using chatbots to answer citizens’ queries, but producing unpredictable errors in the process, a study found. 404 Media includes a report on Alpha School, an AI-based school heralded by right-wing politicians and press in the US, which turns out to be generating faulty lessons that are doing more harm than good to students. Amazon’s agentic AI, Kiros, sabotaged Amazon’s AWS - deleting and recreating the code it was supposed to be working on. Relatedly, Amazon’s own workforce prefer not to use Kiros, but are being forced to, and are resisting those instructions. A recent study finds 13% of biomedical research is AI-assisted text - and arXiv has moved to restrict submissions of AI-generated text. European journalist Peter Vandermeersch was suspended over AI-generated quotes. Essex police had to pause facial recognition on the grounds of racial bias. OpenAI closed its Sora video-making app and cancelled its $1bn Disney deal - all of which seems to highlight the difficulties AI will face in replacing artists’ creative work. Now AI is also resulting in flawed polling.

Lilli, McKinsey’s AI platform used by its 40,000 staff, was hacked.

The murky world of AI billionaires was spotlighted as ‘Godfather of AGI’ Ben Goertzel was seen in recent emails to have courted Epstein for funding and to have congratulated him on his release from jail. Billionaires also had a falling-out over the Fermi America data center.

Given that ICE is powered by AI, are anti-ICE activities a form of anti-AI activism? Wired has an excellent piece covering the different ways - analog and digital - that hackers and makers are working to provide activists with the tools needed for anti-ICE evasion and solidarity activities.

We continue to see leaks from inside the AI industry by employees exposing its practices: OpenAI employees sought to raise the alarm regarding Jesse Van Rootselaar, who later became the suspect in the mass shooting in Tumbler Ridge, British Columbia, Canada. The employees viewed Van Rootselaar’s writings as indicating a potential for real-world violence and sought to report this to the local authorities, but OpenAI bosses decided not to report it - a decision that has since been followed by a lawsuit against OpenAI. More generally, whistleblowers continue to be a thorn in the side of the AI industry. Workers also exposed the falsehoods used to justify the sacking of half of Block’s staff under the guise of AI-related efficiency savings - and the press ran with it.

Rest of World reports on gig workers (data labellers) in Africa working on data to train AI systems that could be being used for surveillance, including in the kidnapping of Maduro - with the Data Labellers Association among the groups raising concerns. More coverage of the Data Labellers Association here.

More lawsuits have been brought against AI firms: one here claiming Gemini caused a suicide; another here against Grammarly; another here against Tesla over a Cybertruck FSD crash. Meta and YouTube were found negligent in a landmark social media case.

This tool sabotages gen AI by making it very slow.

Wikipedia has banned AI-generated content.

Other interesting sources:

*New database on Worker Mobilizations around AI in Arts, Culture, and Media

*Rest of World covers a range of campaigners against AI

December 2025/February 2026

Some campaigning news:

Opposition from employees continues: 15,000 nurses went on strike in New York in January/February 2026, with AI as one of the strike issues - as of now it looks like an agreement has been reached in every hospital hit by the action, winning AI safeguards in their contracts for the first time - victory to the New York nurses! TMK produced an excellent podcast with healthcare workers who are members of the National Union of Healthcare Workers about their ongoing contract dispute with Kaiser Permanente. A whistleblower (a former Google employee) alleged that Google helped an Israeli military contractor with AI. Employee whistleblowing also helped expose Musk’s deliberate use of porn to generate interest in Grok. There is a good summary here from Power at Work of worker responses.

Artists continue to mobilise in different ways in response to the impact of AI. Hollywood continues to focus on limiting the impact of AI on film-making. A recent film about AI, Mercy, saw Amazon ban the use of AI. Actors continue to be unhappy about the use of AI in film, and actors’ agents are including contract clauses barring the use of performances to make AI replicas - and there is already talk of another strike this year once the widely-cited 2023 contracts, which famously followed a prolonged strike that resulted in a deal to limit AI, expire. In the UK, 99% of members of the actors’ union, Equity, voted to refuse to be digitally scanned on set over concerns that the scans will be used by AI. Audiences also don’t seem very keen on AI-generated content, and platforms are moving to ban it: Bandcamp announced a ban on AI-generated songs, and Comic-Con banned AI art.

On copyright, the UK government continues its consultation on how to legislate in the area - the recent consultation saw an overwhelming rejection (97% of submissions) of a plan to introduce a tech-friendly approach to copyright of artists’ work. India released a plan to make AI companies pay for training data. The European Parliament has called for new copyright laws.

Data centres continue to be a major focus of opposition, especially from nearby residents - in fact ‘everyone hates them’. Resident campaigns are reported in Northern Virginia, Oklahoma, Georgia (which is seeing a push for a ban), London, and Buckinghamshire (UK - where planning permission was retracted in an embarrassing U-turn by the government). Rest of World produced a good summary of environmental campaigns arising in response to the impact of data centres.

Politicians also look increasingly concerned about the unpopular rise of data centres in the US. This has prompted investigations into the impact on electricity costs. There is also growing pressure, especially - but not only - within the Democratic Party, to adopt a strongly anti-AI position. The Republican Governor of Florida, Ron DeSantis, has adopted a strong anti-AI stance. Internal divisions also continue to be evident within the MAGA coalition over AI, with Republican politicians concerned about local opposition amongst voters unhappy about Trump’s attempt to ban state-level regulation of AI. Indeed, despite Trump’s attempted ban, many states continue to move towards tighter regulation. Trump was eventually moved by this opposition to pledge to encourage Big Tech to do more to avoid pushing up electricity bills (citing Microsoft’s recent announcement on this).

The publication by Grok of sexualised deepfake images, including of children, sparked global outrage - with Apple and Google also pulled into the controversy - and widespread calls for bans (and in some cases actual bans). These came from the EU (where an investigation was launched), India, the UK (also see here and here - including the launch of an investigation by the UK media regulator, Ofcom, another by the Information Commissioner’s Office, and proposals for legislation to ban the creation of online sexual images), Malaysia, Indonesia (where it was banned and then later ‘conditionally’ had the ban lifted), Brazil, California, and France (including a police raid). Grok was eventually prompted to turn off the image-creation function except for subscribers, which caused a further political storm as critics claimed this amounted to allowing deepfakes of children if the Grok user paid for the service. More coverage of this here. As a result, ControlAI speaker Andrea Miotti was interviewed on Sky News about the risks of AI, and by a committee of the Canadian House of Commons. AOC and Paris Hilton announced their attempt to seek legislation in response in the US. The controversy is also seen as one of the reasons Spain (and later Brazil) is moving to ban social media for children (which Musk was very angry about, and which prompted some social media firms to agree to independent assessments of health risks), and as part of a more general move that Rest of World claims amounts to an attempt to shift off US Big Tech.

The rise in AI relationships (or human-like interactions) has also prompted concerns - China recently adopted regulation. Related, the impact of AI delusions is becoming a common theme in public discussion.

Various legal proceedings continue - Google and Character.AI settled a lawsuit relating to a child’s suicide after the child developed a relationship with a chatbot. A lawsuit was also initiated in LA over the addictiveness of social media and its algorithms (prompting TikTok to settle).

The adoption of new AI regulation is ongoing. Nigeria announced new plans to regulate AI. South Korea adopted strict new AI regulations. The EU’s regulatory landscape continues to be the subject of debate. The EU issued a warning to Meta over its restriction of access to rival AI assistants on WhatsApp. Starlink was blocked in South Africa (and restricted in Uganda). A new angle also opened up in the debate over job displacement when a Labour minister called for UBI, paid for by tech companies, as a way of responding to AI-related job losses.

The AI-fuelled semiconductor chip industry is also prompting social opposition. In Taiwan, the need for large-scale green energy, such as wind turbines, is devastating local fishing and farming - prompting ongoing protests and opposition from those affected.

Kim Crawley, founder of the excellent StopGenAI website, is crowdfunding the publication of a forthcoming book, Technofascism Survival Guide.

The limited usefulness (un-usefulness?) of AI also continues to be exposed by a wide range of reports and surveys. One report found that half of AI projects have been shelved due to complex infrastructure. CNN reports a new trend of ‘analog lifestyles’, in which people explicitly commit to removing AI from their lives. A survey in the Wall Street Journal highlights the stark difference in perception between bosses and employees - with the former heralding efficiency savings: over 90% of CEOs claim that at least 2 hours per week are saved, yet 40% of employees claim no time has been saved at all. A report in Harvard Business Review finds AI does not reduce work; rather, it makes employees work longer - which the authors conclude is probably unsustainable (and which has also prompted concerns about AI-related burnout). A West Midlands police chief was caught out using AI to produce a report on football hooliganism, citing football matches that never existed - prompting discussion of ways to avoid using AI. A majority of CEOs report zero payoff from AI spending - with very similar findings in another report. Researchers showed that AI can be prompted to quote copyrighted work word-for-word, despite AI firms’ denials that it would do so. DeepMind chief Demis Hassabis warned AI investment looks ‘bubble-like’. Dario Amodei, Anthropic chief, warned ‘Humanity needs to wake up’ to the dangers of AI. The Register reports that ‘Only 3.3 percent of Microsoft 365 and Office 365 users who touch Copilot Chat actually pay for it, an awkward figure that landed alongside Microsoft’s $37.5 billion quarterly AI splurge and its insistence that the payoff is coming’. The announcement of a $660b investment in AI infrastructure by Big Tech firms was not received well by the markets.
There was an interesting lengthy discussion in Bloomberg on whether robotaxis are safer than human-driven cars - in sum, at least in terms of deaths per mile driven the answer is no, and otherwise we don’t know. Another report discusses a rise in the hiring of good writers, due to the poor quality of AI slop writing (or ‘slopaganda’).

The link between the ICE raids in the US and AI has been getting increasing attention - prompting French company Capgemini to drop a contract with ICE following pressure from French officials (and then to move to sell the unit altogether). Palantir and Roshel have also faced scrutiny.

The use of AI by organisations and firms remains controversial. Campaigners highlighted the use of AI to write the political platforms of far-right parties during the 2024 European Parliament elections. The British Museum faced criticism for using AI-created content.

In terms of detrimental impact: Campaigners highlighted the negative impact of AI when used to teach children in schools. Opposition is also growing over the poor health advice that AI is producing - with Google removing some of its summaries. MPs in the UK are highlighting the risks of unregulated AI for finance. The Washington Post covered a story on academics increasingly turning to oral examinations as an alternative form of assessment to deal with the difficulties of students using ChatGPT. Concerns are also increasingly being raised regarding the impact of AI use on younger employees’ capacity to learn.

Many critics look to open source alternatives. This article discusses the use of open source mapping platform, OpenStreetMap, used by Palestinians to navigate roads which they are prohibited from using by the Israeli state in the West Bank.

An attempt to stage a pro-billionaire march in San Francisco flopped spectacularly.

November/December 2025 

Data centres continue to be a major target of public opposition, especially in the US, over the pollution affecting local residents and over water and electricity use. Amazon came under pressure after being found to have actively sought to keep its data centres’ water use secret. As Harvard Business Review reports, some state agencies have begun conducting health impact assessments of data centers’ on-site diesel generators to inform emission limits. Bloomberg carries a report claiming that electricity prices have doubled since 2020, with the highest rate of increase in locations within 50 miles of a data centre - all of which is creating considerable public anger. Elections are increasingly focused on candidates’ positions on data centres: a recent election in Northern Virginia saw both candidates pledge to block the expansion of data centres, with the vote seemingly going to the candidate who was most ardent in his opposition. This comes amid reports of America’s coming war on data centres here. The campaign organisation Data Centre Watch produced a report highlighting how $64 billion in U.S. data center projects have been blocked or delayed by a growing wave of local, bipartisan opposition since 2024. Also see more details of the ongoing campaign by the Pyramid Lake Paiute Tribe to oppose the building of data centres in the region. 230 environmental groups demanded a national moratorium on new data centers in the US. This is alongside a new online campaign to oppose the building of data centres in Pennsylvania. At the same time, Amazon has complained to the Public Utility Commission of Oregon, alleging that Portland-based PacifiCorp is failing to provide enough electricity to its data centre!

Worker and employee resistance and opposition to AI also continues. Ex-AI product safety lead Steven Adler broke ranks with his former firm to highlight the continued risk of serious mental health issues associated with the use of AI chatbots. Likewise, the Guardian reports from a range of AI workers and raters who describe the flawed models of AI training and speak out against the use of AI. For a broader discussion, see this podcast by two founders of a new mutual aid and advocacy group called Stop Gen AI, which formed this year out of the critical need to provide material support for creatives, knowledge workers, and anyone else impacted by generative AI. Research by Deloitte found that employees’ trust in their employers’ AI tools is falling dramatically - by 31% in the most recent findings. Quartz reports widespread hatred towards MS Copilot, in part due to employers forcing its adoption upon employees, prompting widespread online mockery and reports that MS now needs to downgrade its sales expectations.

The impact of AI on worker productivity has also been questioned. Research into the use of AI by software developers found that it made them 19% slower than when they worked without AI - yet despite this they believed that AI had sped them up by 20%.

AI has also faced ongoing legal challenges. In Japan, the Yomiuri Shimbun and the Asahi Shimbun filed a lawsuit in August with the Tokyo District Court against U.S. AI startup Perplexity AI; Amazon also sued Perplexity in an attempt to stop its chatbot from shopping on its platform. OpenAI was sued by the firm Cameo, which accused OpenAI of deliberately confusing consumers by introducing a new “Cameo” feature on Sora. There has been a series of lawsuits by OpenAI chatbot users over suicides and harmful delusions - confirming media reports of a trend of mental health hospitalizations resulting from chatbot use. A lawsuit was filed in California over discriminatory recruitment practices resulting from AI decision-making. A lawsuit brought against OpenAI claims ChatGPT persuaded a user to kill his mother and himself. The European Commission announced an investigation into Google’s AI overview feature.

AI firms are being forced to act as a result of these legal challenges. For instance, the fallout from an earlier adverse legal ruling against it prompted Character.AI to move to ban children under 18 from using its chatbots.

Not only AI firms, but also AI users have faced legal challenges. Most notably, courts in Utah, Indiana and California have fined lawyers for using AI-generated legal submissions that included hallucinations reporting non-existent research and case law. 

In terms of AI regulation, family-led campaigns over the risks of AI chatbots have featured heavily in recent weeks. Legislation proposed by Senators Hawley and Blumenthal, now passing through the US Congress, seeks to limit the availability of chatbots to minors, following pressure from parents of children who have taken their own lives under the influence of chatbots. The UK government also announced that it is considering new laws regulating the use of AI chatbots by children. In addition, China adopted regulation designed to restrict fake images and videos, adding to existing legislation which requires AI-related companies to provide generative AI services in line with the values of socialism (and which already governs the design and use of DeepSeek). A new law in New York regulates the use of AI to develop personalised pricing. The Data (Use and Access) Act 2025 was adopted in the UK, making deepfake porn a criminal offence - prompted by a feminist campaign group made up of victims of deepfake porn and experts.

As the UK Government continues to consider AI-related copyright law, Paul McCartney contributed to ongoing objections from the music industry regarding the infringement of copyright by AI firms, releasing a track of an almost completely silent recording studio, to highlight what the music industry might become if copyright laws aren’t updated to take account of AI.

Schools and parents are becoming increasingly focused on the risks of AI: a parent-led campaign is putting pressure on Malden Public Schools and demanding better regulation, with many parents calling for an outright ban in schools.

A broader anti-AI sentiment can also be discerned in public opinion, and in how this is translating into political pressure and forcing firms and organisations to reconsider their approach to AI. Research found the public in both the UK and Japan overwhelmingly opposed to politicians using AI to take decisions. Google was forced to remove its AI model Gemma from its Studio platform after a Republican senator said it “fabricated serious criminal allegations” against her. The academic paper publishing platform arXiv announced it will no longer accept computer science review articles and position papers due to the overwhelmingly high level of “AI slop” being submitted to the platform. McDonald’s faced online opposition to its AI-generated Christmas advert, forcing the firm to turn off comments on YouTube before setting the video to private.

In terms of political consequences, divisions within the MAGA camp have emerged following Trump’s backing for a plan to restrict state-level regulation of AI, with Ron DeSantis calling Trump’s plan “an insult to voters” (another similar report in the Washington Post). Divisions have also emerged within Meta, with reports of a schism between the AI team and the rest of the company.

A number of other campaign developments: The Verge reports on a campaign seeking to influence the Pope to speak out against the risks of AGI. A detailed set of recommendations has been published for campaign organisations seeking to use social media platforms in a way that outsmarts algorithms designed to minimise the visibility of Palestine solidarity campaigns. The ControlAI campaign asking UK-based MPs to sign a pledge calling for "binding regulation on the most powerful AI systems" has now been signed by 100 MPs.

The European Conference on AI (ECAI), held in Bologna in October 2025, witnessed a demonstration staged outside the Conference by local Palestine solidarity activists and organisations, declaring that such conferences are 'just advertising stores for criminal multinational companies that finance the development of new technologies, devices placed on the market with the label "for civil use", and then sold to the "defense" sector, to companies like Leonardo S.p.a., that still export weapons to "Israel". The production of genocide begins in European classrooms, laboratories and research centers: ECAI is the emblem of neoliberal hypocrisy, the same one that tries to hide death and destruction behind empty principles of peace, freedom and democracy'.

For more summaries of contemporary anti-AI campaigns, see this article in Vox listing some of the most impressive recent achievements of anti-AI campaigns, and a similarly good overview of recent campaign successes in this article in MROnline.

October 2025

At the micro-level we continue to see an ongoing series of everyday forms of 'misbehaviour' or subversion regarding the use of AI. A recent survey of company employees found that over 30% were actively sabotaging their company's AI strategy, with the most commonly given reason being that the introduction of AI amounted to an explicit attempt to replace employees' jobs. Reflecting this trend, a recent McKinsey report found that while 80% of companies are introducing gen AI in some form, around 80% also report no improvement to their bottom line as a result; the report identifies a key reason as "implicit resistance from business teams and middle management due to fear of disruption, uncertainty around job impact, and lack of familiarity with the technology." Likewise, the NYT reported on attempts by job applicants to trick the chatbot recruiters used to screen job applications. The Information reports that some coders are refusing to use AI tools to code, with two software engineers at Mixus, San Francisco, staging a rebellion by refusing to follow instructions to rely on AI coding-assistance software. Higher up the hierarchy within AI companies, the FT reported an exodus of disgruntled senior staff from Musk's xAI. Also as a result of worker opposition, Microsoft announced it would disable some services to Israel's Defense Ministry, after a company review concluded that Israel was using Microsoft's cloud storage services to hold surveillance data on Palestinians - this came after disgruntled employees entered company president Brad Smith's office in protest, hanging banners and occupying rooms in the company HQ.

In terms of regulation, we have seen a number of key developments. These include the introduction of the new California AI Companion Law (SB243), as this piece from James Muldoon reports. Newsom also signed the first AI Safety Law (SB53) in California - as reported in more detail here by TechCrunch - although he vetoed another piece of legislation (AB1064) that sought to restrict children's access to AI. An AI Risk Evaluation Act was introduced into the US Senate. The much-vaunted UK AI Bill remains in the pipeline. Calls for further regulation were also voiced. Perhaps most visibly, the Global Call for AI Red Lines saw over 300 prominent figures call on governments to reach international agreements on red lines for AI. Similarly, a group of leading experts in the cognitive sciences signed an Open Letter calling for the rejection of the uncritical introduction of AI in academia. For a more global summary of AI policies and regulations, see the excellent new Artificial Intelligence Policy Observatory for the World of Work (AIPOWW) Symposium in the current issue of the journal Global Political Economy. This reports on the work of the Observatory, based at the University of Essex's Centre for Commons Organising, Values, Equalities and Resilience and led by Phoebe V. Moore and Peter Bloom (both members of our Contentious Politics Of AI Network). It includes excellent commentaries on AI policies and regulations in the EU, Brazil, India, China, and Canada.

The wider impact of AI-related infrastructure is also being contested. Efforts to oppose the building of data centres are growing in prominence. Earlier this year saw reports of the Sierra Club seeking to overturn the licence for a data centre as part of its campaign in Reno, Nevada. MIT Technology Review covered a related campaign by the Pyramid Lake Paiute Tribe against data centres' use of water, which threatens to divert it away from Pyramid Lake. More generally, the impact of AI on energy costs is starting to have political consequences, with voters increasingly calling for political measures to offset or reduce the impact of AI energy use on household energy bills. In the US, Democrats, led by Elizabeth Warren, have sought to politicise the issue and called on Trump to act. A recent election in Virginia saw both the Democrat and Republican candidates cite the rise in electricity costs as a key reason why the building of any new data centres should be blocked. This reflects a more general trend whereby US public opinion is becoming increasingly sceptical about the merits of AI. As the Atlantic reports, the ongoing negative public opinion of Musk continues to have a detrimental impact on the introduction of AI technologies at Tesla.

Finally, in terms of legal challenges, Anthropic agreed a much-reported $1.5 billion settlement to resolve the copyright class action brought against it.

Contact

The Contentious Politics Of AI network is coordinated by David J. Bailey (University of Birmingham), Masoumeh (Iran) Mansouri (University of Birmingham), and Gary Smith.

If you have any questions or want to get involved, email us at contestingAInetwork@gmail.com