The opposition appeared overwhelming: Tens of thousands of emails poured into Southern California’s top air pollution authority as its board weighed a June proposal to phase out gas-powered appliances. But in reality, many of the messages that may have swayed the powerful regulatory agency to scrap the plan were generated by a platform that is powered by artificial intelligence.
Public records requests reviewed by The Times and corroborated by staff members at the South Coast Air Quality Management District confirm that more than 20,000 public comments submitted in opposition to last year’s proposal were generated by a Washington, D.C.-based company called CiviClick, which bills itself as “the first and best AI-powered grassroots advocacy platform.”
A Southern California-based public affairs consultant, Matt Klink, has taken credit for using CiviClick to wage the opposition campaign, including in a sponsored article on the website Campaigns and Elections. The campaign “left the staff of the Southern California Air Quality Management District (SCAQMD) reeling,” the article says.
Wow! Didn’t know I could get any more disgusted!
Oh cool, now there’s a new way of using AI to destroy the environment. Old one wasn’t deliberate enough.
CiviClick, which bills itself as “the first and best AI-powered grassroots advocacy platform.”
so the word “grassroots” just doesn’t fucking mean anything anymore, huh
That’s just outright fraudulent activity, is it not?
That annoys me as well. They call it “astroturfing” because it’s fake grassroots. I wonder if we should call this “cyberturfing.”
Syntheturfing?
GPTurfing?
Slopturfing
Slopturding
or
Turdturfing
company called CiviClick, which bills itself as “the first and best AI-powered grassroots advocacy platform.”
I have been saying this since 2016, when we were dealing with both Cambridge Analytica and Correct the Record flooding the internet with paid political speech masquerading as the real opinions of real people who weren’t being paid to spout nonsense.
Paid political speech online, whether posted by a human or a bot, should legally be required to disclose that it is paid. There should be hefty penalties: large fines for single instances (one person, one message), up to prison time for an organized group (something akin to RICO). The fines/prison time should be even more severe when AI generated messages are fraudulently being promoted as real humans, simply due to the industrial speed and scale AI generation allows.
Paid political advertising on television and radio has long been required to state that it is paid. This should have been priority number one for the Democrats when Biden got into office and they held slim majorities in both houses.
Sure, there’s nothing we can do about foreign bot farms, but that’s not what this article is about. This is about a US company based in our nation’s capital whose business is abusing public comment with disinformation. This is a private company absolutely flooding an agency’s open public comment period and killing the proposal with messages that came not from real people at all but from AI.
The fact that getting this under control at the very least within our own borders is not a priority for any politicians is a fucking travesty and makes our entire democratic apparatus an outright farce.
This is fraud.
Highly lucrative fraud.
So you know nobody is going to do anything about it
No, it didn’t. They had already decided that they were going to side with the lobbyists. They just used this as their excuse to side with the oil companies.
Do they not fucking check this stuff? When I sign a petition to my government, they want to know whether I’m a constituent or not. Otherwise, why would they care about my signature on a page?
It is shocking that this (apparently???) doesn’t seem to be illegal.
Fuck AI
This was happening before AI, with less sophisticated tools, often called “persona management,” which allowed one person to control numerous bots with pre-written scripts that could be called up as needed. The only difference AI has made is the speed and scale at which the same can be done, and that the results are more convincingly varied rather than all culled from the same script.
https://www.axios.com/2017/12/15/bots-flooded-the-fcc-with-comments-about-net-neutrality-1513307159
Here’s an article about a flood of bot comments to an FCC open comment period regarding Net Neutrality in 2017, five years before OpenAI released ChatGPT. So it was definitely going on before the AI tools as they now exist were available. It’s a quantitative difference, not a qualitative one; in other words, it’s the same thing at a larger scale due to the speed of AI.
It does make it harder to find them, because the phrasing is similar, but not identical due to randomness.
Whereas before, you could probably filter a good chunk of it out by just finding the same message/keywords and filtering by that.
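That pre-AI filtering idea can be sketched in a few lines of Python. This is just an illustration, not how any agency actually screens comments; the sample messages and the repeat threshold are invented. Scripted campaigns normalize away to identical strings and get flagged, which is exactly the trick that AI-varied phrasing defeats:

```python
import re
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, collapse whitespace, and strip punctuation so that
    trivially restyled copies of the same script compare equal."""
    text = re.sub(r"\s+", " ", text.lower())
    return re.sub(r"[^a-z0-9 ]+", "", text).strip()

def flag_scripted(comments, threshold=3):
    """Return normalized messages appearing at least `threshold` times --
    the copy-paste signature that pre-AI bot campaigns left behind."""
    counts = Counter(normalize(c) for c in comments)
    return {msg for msg, n in counts.items() if n >= threshold}

# Hypothetical batch: three copies of one script plus two organic comments.
batch = [
    "Please reject the proposal!",
    "Please REJECT the proposal.",
    "please  reject the proposal",
    "I support the phase-out of gas appliances.",
    "This rule will raise my utility bill.",
]
print(flag_scripted(batch))  # → {'please reject the proposal'}
```

Against AI-generated comments, every message normalizes to a distinct string, so exact-match counting finds nothing; you would need fuzzy similarity measures instead, which is the harder problem described above.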
Yeah, you can kill a man with a knife but you can do it a lot faster and easier with a nuclear warhead. People aren’t scared of an aggressive chihuahua, but they’ll have an aggressive pitbull put down. The scale and scope of damage matters.
Allow me to quote myself, from my initial comment in this thread, which was the first comment in this thread.
The fines/prison time should be even more severe when AI generated messages are fraudulently being promoted as real humans, simply due to the industrial speed and scale AI generation allows.
I know this, I made it clear why it’s a problem when nobody else had even commented in this thread yet… I was merely pointing out that this has been a growing problem for a long time before AI became part of it.
Yeah AI is an acceleration of that, which is why it sucks.
Public comment shouldn’t be used as an opinion poll. It should give regulators and politicians a range of viewpoints they may not have previously considered.
We have so chosen to not be ready for what is to come.