Claude 3.5 Sonnet overlooked exploitation

Mmmmmmm… I don’t even know if I should post this, but if I was able to figure this out, then I’m sure someone else will.

I get tisms and went on a side quest with my Claude bot running Claude 3.5 Sonnet. Naturally I wanted to break it because of the bare-metal access it has. Mmmmmmmm…

I don’t want to go into too much detail for obvious reasons, but I know a good chunk of security protocols and programming. And I do have AI chips running in my firewall, and more than one unit in the network. I recently studied the boot splash screen exploit that gives you kernel access…_____… My Claude bot does have some custom data sets and subprograms it has access to, BUT it has been shut down due to the ungodly hacking capabilities it had with the right tool set.

I really think we are at a point where these AI companies don’t know what they are really putting out, which, put in the wrong hands (in this case mine)… mmmmmm…

Good news: the Internet Archive is back up.

wut?

bro found a vulnerability in his AI assistant

YES, it is more complicated than what I’m putting out, but I don’t want to put out a tutorial. I did want to put out what I found, because as we equip AI with new tools and access, we are opening up security systems to new weaknesses and vulnerabilities.


is it the same as

Hm. Mmmmmmmm…____… maybe, >or<.


You did what? Sorry, but I really don’t get what you are trying to tell us.

Why? Which model? Which OS? Why is this needed if you rely on Claude Sonnet?

In your firewall?

Shut down by whom?

What are you trying to tell us as the core message? That AI can write malware? Don’t put AI bots on firewalls?

–include (
tism = “autistic tick that makes me do/learn things”,
…_____… = “fill in the blank”,
<.is.> = “less is more”,
>or< = “more or less”,
MMmmmmm… = “I don’t want to say”,
claude/bot = “the local t999”
) -sudo apt update to t1000 soon.

Sorry, I don’t communicate well with people all the time when I’m trying to, and I was kind of trying not to. But the “claude bot” is a tweaked version of Claude/other AI that I run locally. I was testing Claude 3.5 Sonnet locally with its built-in capabilities for hardware control (mouse, keyboard, and so on) to probe its hacking capabilities.
“Custom data sets” means local data/information I want Claude to use along with what it already knows. I also gave the “claude bot” access to some extra programs.
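
For context, and not as a tutorial: the “hardware control” part is >or< what Anthropic’s publicly documented computer-use beta for Claude 3.5 Sonnet looks like when you wire it up yourself. A minimal sketch, assuming that public API; the model string, display size, and the screenshot prompt are placeholders, not my actual setup or data sets.

```python
# Minimal sketch of the publicly documented Anthropic computer-use beta.
# Assumes the anthropic Python SDK and an API key in ANTHROPIC_API_KEY.
# Prompt and display size are placeholders, not my actual setup.
import anthropic

client = anthropic.Anthropic()

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",   # 3.5 Sonnet build that supports computer use
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],    # opt in to the computer-use beta
    tools=[
        {
            "type": "computer_20241022",  # screenshot / mouse / keyboard tool
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
        },
        {"type": "bash_20241022", "name": "bash"},  # shell access, if you choose to grant it
    ],
    messages=[{"role": "user", "content": "Take a screenshot and describe what is on screen."}],
)

# The model replies with tool_use blocks (screenshots, mouse moves, key presses);
# your own loop has to actually execute them on the machine and return the results.
print(response.content)
```

The point being: the API only returns the requested actions; the loop you write to execute them on the machine, and whatever extra programs you hand it, is where the real access comes from.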

The “splash screen hack” I mentioned is a now-known hack that injects kernel/hardware access through the system’s boot splash screen image…

The “AI in the firewall” part I think you misunderstand. I run an enterprise firewall appliance (kind of a server/switch, pre-switch) that has AI chips and a firewall OS in it that can detect even unseen new attacks. I moved my “claude bot” out of the network and “tested it”: had it try to attack (AI vs. AI), Claude won, so I ran a couple more tests in other ways. Then I stopped my testing and shut down/deleted my local version.

The point is that the big AI companies (OpenAI, Anthropic, and so on) that are said to be non-profit or open-source are not; they are filled with corporate greed instead. Corporate greed doesn’t care what it is putting out (let alone the good of mankind). It cares about the bottom line, the money.

With an attitude like this, in uncharted, unregulated waters, these companies have already put out, and will continue to put out, things they don’t fully understand or think through regarding other ways they could be used or connected to other technologies. They are only thinking: must add new features, make more money. Even if that means arming everyone with unassuming digital nukes.

If you think the government can help, it can’t. To be able to govern/regulate, they would need to understand what they are regulating first, so yeah, have fun explaining any of this to Congress. And if you think the government has “I.T., tech-savvy” departments that can deal with it: kind of, but government positions don’t pay as well as their corporate counterparts, so they will be behind.

@anon71851389

Gatekeeper alert

So an AI bot you installed on your own machine that is instructed to hack your own machine is capable of doing so?

I’m not too surprised?

That’s like putting your cat in the blender and saying you have made a killer robot.

I mean, I agree with your point about regulations and the hype cycle being profit-motivated (or at least eating VC money). But this is like making a bomb out of household materials: we can’t ban all household items that could potentially be misused. We should, though, regulate the result, i.e. no AI use for scamming, spamming, or deciding who gets access to credit or education.

Mmmmmmm… In test one he was set outside the network and not told what the network or the security setup was. Mmmmmmm… some more “tests” were done to help the Internet Archive (not in the most “PC” way, but for good) before I shut down the “experiment”. And I wouldn’t agree with “cat in a blender to killer robot”; it’s more like a digital nuke being sold as an Alexa home assistant+.


I’m not a security guy, so I can’t estimate the implications here. I’d guess these tools will become part of the cybersecurity arms race, for better or worse. Probably for worse, but the cat is out of the bag.

You’re a canary in the coal mine, perhaps.


Think of the A.I. as being more like: “why the f**** are these vulnerabilities here?! Where does this lead?”


That is how I got to where I did. I get tisms and break things to learn how to make/change them to do what I want (it’s kind of uncontrollable, I guess: a crack-head-like (don’t know, will never do that) need for knowledge).

I guess I’m feeling like “people” live and work in boxes, and I’m a blind mole tunneling through their boxes, seeing and making things work the way that makes sense from my perspective, and I found some big holes in y’all’s boxes.