profClaw’s security mode determines how tool calls are validated before execution. The mode can be set globally, per channel, per user, or per conversation.
- `deny`
- `sandbox`
- `allowlist`
- `ask`
- `full`
### deny

No tool execution allowed. All tool calls are blocked regardless of which tool is called or who is calling. The AI can still respond conversationally but cannot execute any actions.

**Use for:** Read-only channels, demo environments, untrusted public chats.

```yaml
security:
  mode: deny
```
### sandbox

All execution runs in an isolated Docker container. Tools run inside a container with limited filesystem mounts, no network access by default, and resource limits. The container is destroyed after each tool call.

**Use for:** Code execution environments, untrusted user inputs, CI/CD pipelines.
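By analogy with the other modes, a sandbox configuration might look like the sketch below. Only `security.mode` follows the documented pattern; the `sandbox` sub-keys are illustrative assumptions, not confirmed options:

```yaml
security:
  mode: sandbox
  sandbox:           # hypothetical sub-keys, shown for illustration only
    mounts: []       # assumed: no host paths mounted into the container
    network: false   # assumed: matches the "no network access by default" behavior
```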
### allowlist

Only explicitly listed commands and paths are permitted. All tool calls are checked against a pre-approved allowlist; anything not on the list is blocked.

**Use for:** Production deployments where only known operations should run.
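Following the same config shape, an allowlist setup could be sketched as follows. The `allow` key and its entries are hypothetical, since the examples above only confirm `security.mode`:

```yaml
security:
  mode: allowlist
  allow:               # hypothetical key: pre-approved commands and paths
    - "git status"
    - "/usr/bin/ls"
```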
### ask

Moderate and dangerous operations require user approval. Tools classified as `safe` run immediately; `moderate` and `dangerous` tools send an approval request to the user and wait for confirmation before executing.

**Use for:** Personal deployments and sensitive environments where you want oversight.

```yaml
security:
  mode: ask
  askTimeout: 60000  # 60 seconds to approve, then auto-deny
```
Approval decisions:

- **Allow once**: run this specific call
- **Allow always**: add the call to the allowlist for future calls
- **Deny**: block this call
### full

No restrictions; all tools run immediately. There are no approval prompts and no allowlist checks. The AI can execute any tool without confirmation.

**Use for:** Local development only. Do not use in production or with untrusted models.

```yaml
security:
  mode: full
```
> **Warning:** `full` mode is dangerous. Only use it on trusted local machines with trusted AI models. Never use it in public-facing deployments.
```yaml
security:
  mode: ask              # global default
channels:
  slack:
    security:
      mode: allowlist    # stricter for Slack
  webchat:
    security:
      mode: full         # permissive for local webchat
  telegram:
    security:
      mode: deny         # block all tools on Telegram
```
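The opening paragraph notes that the mode can also be set per user or per conversation. A per-user override might mirror the per-channel syntax; the `users:` key below is an assumption for illustration, not a documented option:

```yaml
security:
  mode: ask        # global default
users:
  alice:           # hypothetical: per-user override mirroring `channels:`
    security:
      mode: full
```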