vrighter 33 minutes ago

So how does it save me from having to read the man page?

How do I know if the command is safe to execute when I can't trust LLM output?

Also, for "safe" commands, how do I know that it is doing what I asked it to do (without reading man pages) when I can't trust LLM output?

Anything your script returns is "untrusted input" to me, which requires careful scrutiny. That means this adds work rather than reducing it, while also making running commands cost real money!

iamdamian 2 days ago

Tucked at the bottom of the README:

> Privacy note: Shelly sends your requests, including your recent shell history, to Anthropic's API.

  • nestorD 2 days ago

    Yes, it is API-based and uses your last 100 unique shell commands as part of its prompt: it seemed important to remind users that this data does leave their machine. A fork using a local model should be fairly easy to set up.
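
    For anyone curious what that would involve, here is a minimal sketch of the two pieces mentioned above: collecting the last 100 unique commands from shell history and sending them, plus a request, to a locally hosted model instead of a remote API. The history path, prompt wording, model name, and the local endpoint (Ollama's `/api/generate`) are all assumptions for illustration; Shelly's actual prompt construction and API handling will differ.

    ```python
    # Sketch: build a prompt from recent shell history and query a local model.
    # Assumptions: bash history at ~/.bash_history, Ollama running locally,
    # model "llama3" pulled. Adjust paths and model names for your setup.
    import json
    import os
    import urllib.request

    HISTORY_FILE = os.path.expanduser("~/.bash_history")  # zsh/fish users: adjust
    LOCAL_URL = "http://localhost:11434/api/generate"     # assumed Ollama endpoint

    def last_unique_commands(path: str, limit: int = 100) -> list[str]:
        """Return the most recent `limit` unique commands, oldest first."""
        with open(path, encoding="utf-8", errors="replace") as f:
            lines = [line.strip() for line in f if line.strip()]
        seen: set[str] = set()
        unique_recent: list[str] = []
        for cmd in reversed(lines):          # walk from newest to oldest
            if cmd not in seen:
                seen.add(cmd)
                unique_recent.append(cmd)
            if len(unique_recent) == limit:
                break
        return list(reversed(unique_recent))

    def ask_local_model(request_text: str) -> str:
        history = "\n".join(last_unique_commands(HISTORY_FILE))
        prompt = (
            "Recent shell history (for context):\n"
            f"{history}\n\n"
            f"Suggest a shell command for: {request_text}\n"
            "Reply with the command only."
        )
        payload = json.dumps({"model": "llama3", "prompt": prompt, "stream": False})
        req = urllib.request.Request(
            LOCAL_URL,
            data=payload.encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        print(ask_local_model("find files larger than 1 GB under /var"))
    ```

    With a setup like this, the shell history never leaves the machine, which addresses the privacy note quoted above, though the quality of suggestions will depend on the local model used.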