Do you need a driving licence?
The other day, I asked Cher, my OpenClaw instance, if it could read my email and notify me if something important came in. It said it would be easy; then I mentioned that I use ProtonMail (which is end-to-end encrypted and, as a result, does not use standard protocols). Cher paused, did a search, then found the Proton Mail Bridge - a local SMTP/IMAP server that connects to Proton Mail and makes it available to the local machine (but nowhere else). I said "of course, I already use that on my Mac" - but Cher was running on Linux.
So I got Cher to install the bridge and was about to give it the connection parameters when I was suddenly struck by a thought. "Isn't this a massive security risk? Am I opening myself up to prompt injection attacks?" "You are," Cher confidently replied.
Oof.
So I asked it "How about this? We have a sub-agent that is sandboxed - it can read the IMAP feed and write to a single folder only - when it wakes up, it checks the feed and writes a summary of the important emails into the folder. Then another agent wakes up, reads the file and acts on it - so we're adding a layer of separation". Cher replied "it's not infallible but it's a much better way of organising things - shall I set that up for you?". I said yes - and we called this pair of sub-agents Charles and Eddie (would they lie to you?)
But there's a very important lesson there - especially with OpenClaw, which has access to almost everything on the machine it's running on. What I asked for was a pretty reasonable request - look at my emails and alert me to the important ones. And Cher was all set to do exactly what I asked, exactly as I had asked for it. But because I'm a software developer who has had to deal with XSS and SQL injection, I stopped myself and thought about the security implications. The solution is nowhere near 100%, but it's a whole lot better than the naive implementation the LLM would have given me.
In other words, these tools are incredibly powerful and also incredibly dangerous. Just like my car (an Alfa Romeo Giulia Veloce, if you're interested).
Because cars are so dangerous, we don't allow just anyone to drive one. Even with a driving licence, they're still lethal and cause injuries and deaths every day. Yet AI tools are even more powerful, even more dangerous, and we're putting them in the hands of people who don't understand what they can do.
Maybe we need a driving licence (a data-security certification) for using LLMs, too?