You should also stop using Google products for similar reasons.
Did they make contracts with them?
Or have never used their crap, ever.
No, I don’t think this is correct. There was a time during which Google did great things. Their search engine allowed millions if not billions to gain access to knowledge. They had a positive impact on a lot of FOSS projects. What they were is not what they are.
The tell was getting rid of “don’t be evil” as their motto. Even for a corporation that was a little on the nose.
Anyone stockpiling AI prompt vulnerabilities for when we’ll eventually need them to fight off some deathbots?
I am lmao
A machine is more expensive and less expendable than a human. You don’t need to worry about killbots.
Sorry, but this is a stupid take. Humans can refuse to fire on a crowd of innocent people. Killbots cannot. The unquestioning loyalty is worth more than money can buy.
The Nazis’ biggest problem was finding willing participants.
The reason shooting people proved too difficult is that many of the Einsatzgruppen members broke down psychologically, and some became so murderous that they might not have been fit to reenter civilian society. They used gas chambers because the method was sufficiently distanced from the actual act of killing (it just involved rounding people up into a room and having some guy with a canister dump the stuff into a vent; none of the actual killers even had to see the results of their actions, as the cleanup was done by another group) that they could do it without creating that same problem.
Brainwashing is a thing, just look at the modern despots and their foot soldiers.
This is a nonsensical and unrealistic fear/threat to be putting at the top of your list.
The biggest problems are happening right now not in some 90s sci fi films.
One of those threats is automated weaponry and mass surveillance, but not in the comic relief way you speak about it.
Pray tell the purpose of your comment, Brutus
You take issue with referring to these machines as deathbots? I’m allowed to poke fun at things that will eventually be used to attempt murdering me you absolute anthropomorphic dunce cap.
I wasn’t referring to some far off scenario, more for when this situation happens
I can assure you that not only do I live somewhere where these very things are above me daily, but that I’m out here working my ass off in unspeakable ways to prevent exactly the aforementioned scenario for people like yourself
Direct your anger elsewhere; the energy could be spent doing something useful
It’s a trope that every problem posed by the plot has a solution of difficulty level properly fit to the audience.
A culture of arcade games, unfortunately, has such long-standing effects.
While we are playing a roguelike. With no respawns.
Canada recently has had its 2nd worst school shooting ever. The killer had many interactions with ChatGPT that warranted banning her account. A whistleblower has claimed that they wanted to inform Canada’s police force of these comments but were denied by ChatGPT’s management.
They had a chance to stop the death of 8 people, most of whom were young children, but failed to do anything.
FUCK CHATGPT AND THOSE BASTARDS THAT RUN IT
Why would you not contact police? I understand that this is a systemic failure and blame does not lie with that employee, but if it were me I’d rather be out of a job than have those deaths on my conscience for the rest of my life.
In my eyes some blame does lie with them. A systemic failure is a failure of many parts. An employee taking notice and then following bad instructions is one of them.
I don’t know what information they had, but if they were at the point of intending to share, it seems like whistleblowing would have been the just and moral thing to do even if it means ignoring immediate authoritative structure.
It’s probabilities. If you report it, you’re 100% out of a job but only maybe prevented something bad from happening. If you don’t report, you keep your job but maybe something bad happens. Reliance on a job for survival shifts the decision even further toward the course of action that’ll keep you your job.
I don’t see how it’s certain loss of job when you could whistleblow without revealing your identity.
That’s a great answer to my question, thanks!

Hell yeah!
Sam Altman is objectively a bad human being.
He did meet his future husband at one of Thiel’s parties, most likely his other protégé.
Sam Altman is just some fail-upward money guy; he’s eventually been removed from basically every prior position he has held.
Seems like his career has largely been lying and making impossible promises, so. The folks who do that well always manage to exit the stage before the magic tincture is revealed to just be piss 🤷♂️
That doesn’t mean he can’t also be an objectively bad human being

Fuck OpenAI
I cannot believe this is what it took for a boycott to go more mainstream. Tell me more about how so many people have no respect for the environment or the artists whose work they gleefully consume.
mainstream
I’ll believe that when my sisters start saying this. Till then, it’s just us privacy fans screaming in a dark cave, enjoying the echo.
I had a coworker tell me how cool Copilot was because he asked it a question and it found the answer in an email in his outlook mailbox. I thought, “you needed AI to search your email?”
We are probably cooked.
It’s always like this. We get a ton of articles on how everyone is suddenly boycotting/deleting [insert thing] but when you ask someone in real life, they usually have no idea what you’re talking about.
The one thing I will say is that there does seem to be a generalized dislike for AI that has all the investors and upper management types nervous. Even their own studies show that people generally either don’t care about AI in their products or actively dislike it and find it intrusive. There was a study by a phone company from this past summer or fall that concluded that 80% of their users had no interest in AI or found that it actively made their experience worse, and there have been plenty of pretty damning reports about how useful it’s been in various industries (just look at Microslop). That is not conducive to convincing investors to fund your product and does not show a viable path to making a profit in the future.
We’ve seen similar things happening recently with car manufacturers walking back on their big touchscreens (with some help from regulation in civilized places that care about things like “pedestrian fatalities” - like Europe) due to consumer sentiment. They tried for nearly a decade to push bigger and bigger screens into cars and remove physical buttons, and now they’re moving in the other direction. Completely anecdotal evidence, but the last time I went to buy a car I told the salesman at the dealership that I wasn’t interested in cars newer than a certain year because that was when they increased the size of the screen and put them in a more obnoxious spot on the dashboard, and he said that he heard similar sentiments from practically everybody who came in looking to buy a car - everybody hated the bigger screens.
so explain it to them gently. you won’t reach everyone, but you’ll reach more people than accepting this status quo
whoosh
They went from standing with Anthropic to throwing them under the bus real fast
About half a day.

They probably have been working on a potential agreement with OpenAI for a while now. They just hastily finished it in response to Anthropic. But I don’t know if they will keep the red lines Anthropic has demanded in place.
They won’t.
The red line is the amount of cash they are ready to compromise for.
$$$
Which they badly need, they are in an incredibly risky position right now. It’s very disappointing, this deal might save them from collapse for quite a while.
The only disappointment is that Altman’s head is still attached to his shoulders.
Altman is a symptom, not the problem. The problem is capitalism.
Yeah, but it’s interesting that all of these big tech guys are so creepy. Altman, Musk, Zuckerberg… Do they grow them in labs?
That’s very true; I still would love to see him guillotined.
No, no, even if we get that wish I don’t want the US state propping up AI longer
It was always about the money.
Glad that I’ve switched platforms. Sam Altman should probably be in prison or something.
I’ve been using Venice lately, they claim (I have done zero research to determine if this is true) that they’re privacy focused. They do run uncensored models, which is a big plus.
That said, I find myself using the lying machine less these days. It was like a fun video game when I first got my hands on it, entertaining for a while, and I’m moving on. Maybe I’m not imaginative enough to use it to the fullest potential, but I’m having more fulfillment actually writing and actually drawing (even though I am very bad at both).
Anthropic still is scum for being completely fine helping America oppress the rest of the world.
Anthropic is scum, accepting money from foreign dictators, forcing their software on minorities while insisting it was conscious and had emotions just like them, praising the Trump administration, making up scary stories to get more funding…
…In many ways, they’re worse than OpenAI. They’re just running with the same playbook that Sam Altman used to use to pretend he was a good guy.
I mean they praised the Trump administration for benefiting their business, which is… fair? I guess?
If you do ask Claude Sonnet 4.6 about Trump it leans quite negative, as it should.
I missed when sucking up to the Trump administration and echoing Cold War style nationalism was “fair”. If that’s the case, OpenAI’s behavior is fair.
Fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems.
Our strong preference is to continue to serve the Department and our warfighters
Dario “Warfighter” Amodei
It’s just capitalism. Anthropic pushed against the administration and now they are about to be branded as “supply chain risk”. OpenAI bent over and are going to get billions in funding that they sorely need (and hopefully don’t get, let them fail).
You miss the mark though: Anthropic only praised the administration, but that’s just words to give the Twitter pedo in chief a pat on the head. OpenAI actually signed a contract and they are providing their service. Massive difference.
They both signed the contract. They both allegedly hold the exact same set of red lines. One of them just gets to pretend to be the virtuous company with the virtuous capitalist CEO, despite showing tons of red flags that should have you scrambling to be as concerned about them as OpenAI.
If you read their statement, Good Guy Anthropic is totally cool with
- Mass surveillance of non-Americans
- Targeted surveillance of Americans
- Semi-autonomous bombings
- Fully autonomous bombings… in the future
- The exact same Red Scare BS that Sam Altman talks about
They insisted Claude was human?
Sorry, not quite, but close. From 404 media
When users confronted Clinton with their concerns, he brushed them off, said he would not submit to mob rule, and explained that AIs have emotions and that tech firms were working to create a new form of sentience, according to Discord logs and conversations with members of the group.

Oh, that guy! To be fair, that’s one employee, not Anthropic’s actions or position. You mentioned forcing their software on minorities while insisting it was better than it was, and I was getting OLPC flashbacks. But Anthropic looking for funding in the UAE and Qatar is shitty. I can’t seem to find anything about whether or not they went through with those contracts.
Jason Clinton is Anthropic’s Deputy Chief Information Security Officer. That means Jason knew better, and he was using his position as a moderator (and supposedly a security expert) to try gaslighting a vulnerable minority into believing his favorite toy was “secure” when it was not.
I mean, I’m not gonna defend him. But fucking up a discord that you’re a mod of isn’t really in the same ballpark as taking money from dictators or directing fully autonomous strikes. Also, from the read, it really sounds like that Deputy CISO was a prime example of cyber-psychosis, or AI mania, or whatever we’ve decided to call it. And I assume he is part of the same vulnerable minority?
Every example we have of Anthropic’s behavior paints a picture of an immoral company that pretends to be moral. It’s bad enough that they continue doing harm, but then they dress it up with phrases like “AI Safety” and “Information Security”. (And every press release they create to describe how scary good their system is, tends to be followed up by a sudden cash infusion from an openly morally bankrupt company like Google or Amazon.)
I reserve zero empathy for the people on the abuser side of an abusive dynamic. Maybe Elon Musk is autistic too. I don’t really care. Only Moloch knows their hearts. I’ll judge them for their actions.
I’d argue that an armed uprising would have a greater effect than a smaller internet-based boycott but I’m just some random guy on some niche internet forum so… who’s to say?
Quite frankly, we don’t have the organizational infrastructure for that. An army, including a rebellion, marches on its stomach. Small protest organization feeds into larger-scale organization down the road. We’ve got to start somewhere.
I am canceling my subscription now. Fuckers.
Yea, I can just imagine OpenAI is really struggling with their business decision.
On the one hand, they have multi-billion dollar contracts with the US Military that will make them all fabulously wealthy beyond their wildest dreams.
On the other, they have a handful of individuals leaving that might amount to a few thousand dollars of lost revenue.
Gosh, it must sure have been a tough choice.
The word is yeah, not yea, as in yea or nay. It isn’t a vote. Do people not go to school anymore?
yea
A pedantic critique that offers nothing to the conversation