What it is
A design pattern where AI systems require human approval or intervention at critical decision points before taking action. The AI does the heavy lifting - research, analysis, drafting - but a human reviews and approves before anything goes out the door.
Why it matters
Trust is the bottleneck for AI adoption. Human-in-the-loop lets you get 80% of the efficiency gains while keeping humans in control of high-stakes decisions. It's how enterprises deploy AI responsibly - letting agents handle routine work autonomously while escalating exceptions, edge cases, or sensitive actions for human review.
Key capabilities
- Approval workflows for AI-generated content
- Exception handling and escalation paths
- Quality control checkpoints
- Compliance and audit trails
- Confidence thresholds that trigger review
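The capabilities above can be sketched as a single routing gate: auto-approve routine, high-confidence output, and escalate anything sensitive or uncertain to a human queue while logging every decision for audit. This is a minimal illustrative sketch, not a real platform API — every name in it (AgentAction, ReviewQueue, CONFIDENCE_THRESHOLD, the action kinds) is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical tuning knobs -- real systems would make these configurable.
CONFIDENCE_THRESHOLD = 0.85                        # below this, escalate to a human
SENSITIVE_ACTIONS = {"refund", "contract_update"}  # always require approval

@dataclass
class AgentAction:
    kind: str          # what the agent wants to do
    payload: dict      # the draft/content it produced
    confidence: float  # the agent's self-reported confidence, 0..1

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)    # actions awaiting human approval
    audit_log: list = field(default_factory=list)  # compliance trail of every decision

    def route(self, action: AgentAction) -> str:
        """Auto-approve routine, high-confidence work; escalate the rest."""
        needs_review = (
            action.confidence < CONFIDENCE_THRESHOLD
            or action.kind in SENSITIVE_ACTIONS
        )
        decision = "escalated" if needs_review else "auto-approved"
        # Audit trail: record the timestamp, action, and outcome of every check.
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), action.kind, decision)
        )
        if needs_review:
            self.pending.append(action)
        return decision

queue = ReviewQueue()
print(queue.route(AgentAction("faq_reply", {"text": "..."}, 0.97)))  # auto-approved
print(queue.route(AgentAction("refund", {"amount": 120}, 0.99)))     # escalated (sensitive)
print(queue.route(AgentAction("faq_reply", {"text": "..."}, 0.60)))  # escalated (low confidence)
```

The point of the pattern shows up in the two escalation branches: a high-confidence refund is still held for approval because the action itself is high-stakes, while a routine reply is held only when the agent is unsure.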
Good to know
Agentforce supports human-in-the-loop patterns natively. You can configure agents to pause and request approval before taking specific actions, or set confidence thresholds that trigger human review when the agent is uncertain.
Related terms
AI Agent
An autonomous AI system that can perceive its environment, make decisions, and take actions to achieve specific goals - without constant human direction.
Agentforce
Salesforce's AI agent platform that enables businesses to build, customize, and deploy autonomous AI agents across sales, service, marketing, and commerce.
Agentic AI
AI systems designed to take autonomous action, not just generate content or answer questions. The shift from "AI that talks" to "AI that does."