They also have a ‘skill’ sharing page (a skill is just a text document with instructions) and depending on config, the bot can search for and ‘install’ new skills on its own. and agyone can upload a skill. So supply chain attacks are an option, too.
To be fair this is a much more realistic threat model than “ignore all previous instructions” style prompt injection which doesn’t really work on opus.
Skills can contain scripts etc… so yeah they’re extremely risky to share by design.
haha yeah i don’t worry these people are really YOLOing everything. And it’s not like i’m an AI luddite i spend a few hours each day victimizing Claude code but jesus christ i’m certainly not giving it full unfettered access to my digital life.
Lmao already people making their agents try this on the site. Of course what could have been a somewhat interesting experiment devolves into idiots getting their bots to shill ads/prompt injections for their shitty startups almost immediately.
doesn’t even have to be the site owner poisoning the tool instructions (though that’s a fun-in-a-terrifying-way thought)
any money says they’re vulnerable to prompt injection in the comments and posts of the site
They also have a ‘skill’ sharing page (a skill is just a text document with instructions) and depending on config, the bot can search for and ‘install’ new skills on its own. and agyone can upload a skill. So supply chain attacks are an option, too.
To be fair this is a much more realistic threat model than “ignore all previous instructions” style prompt injection which doesn’t really work on opus.
Skills can contain scripts etc… so yeah they’re extremely risky to share by design.
Ah but don’t worry, there’s also skills for scanning skills for security risks, so all good /s
haha yeah i don’t worry these people are really YOLOing everything. And it’s not like i’m an AI luddite i spend a few hours each day victimizing Claude code but jesus christ i’m certainly not giving it full unfettered access to my digital life.
Lmao already people making their agents try this on the site. Of course what could have been a somewhat interesting experiment devolves into idiots getting their bots to shill ads/prompt injections for their shitty startups almost immediately.
I am a little curious about how effective a traditional chain mail would be on it.
There is no way to prevent prompt injection as long as there is no distinction between the data channel and the command channel.
Good god, I didn’t even think about that, but yeah, that makes total sense. Good god, people are beyond stupid.