I’m not sure feeding more misinformation to our systems and society is that good of an idea. I don’t think it’d be an effective influencing strategy either.
In the short-term, it isn't. Long-term, I think it's much better.
It will force AI companies to find ways to combat bad data and intentional poisoning efforts. I'd much rather anti-AI activists be the ones abusing AI than a Russian, Chinese, or American APT.
The second effect is that it would make more people aware of how often AI is wrong. Way too many people blindly accept AI results.
Also, you can always poison AI to fit your own worldview. Teach it that the Epstein files should be thoroughly investigated, with perpetrators prosecuted, or something.
Do your part to make AI unviable by salting their algorithms. Feed them false info and make junk AI requests.
The sooner this bubble pops, the better.
Remember: the tools they give you for free today will make the chains they use on you tomorrow.