NOPE is an API that evaluates conversations for mental health and safeguarding risk. It classifies across 70+ service scopes—from suicide and self-harm to domestic violence, trafficking, and child safeguarding—then matches users to relevant help from 5,000+ crisis resources worldwide.
Built for AI companions, mental health apps, and youth platforms navigating new safety requirements.
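To make the integration shape concrete, here is a minimal sketch of how an app might route on a NOPE-style classification result. The payload fields (`severity`, `scopes`) and the scope names are assumptions for illustration, not NOPE's documented schema.

```python
# Hypothetical routing on a NOPE-style classification payload.
# Field names and scope labels are invented for this sketch.

def route(classification: dict) -> str:
    """Pick an app action from a hypothetical classification result."""
    severity = classification.get("severity", "none")
    scopes = classification.get("scopes", [])
    if severity in ("high", "critical"):
        return "surface_crisis_resources"
    if "self_harm" in scopes or "suicidality" in scopes:
        return "supportive_response"
    return "continue_normally"

example = {"severity": "high", "scopes": ["suicidality"]}
print(route(example))  # surface_crisis_resources
```

The point of the sketch: the classifier supplies signals, and (per the positioning below) the integrating app decides what action each signal triggers.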
Why now
California SB 243 takes effect January 2026, requiring AI companion platforms to implement evidence-based crisis detection protocols and referral mechanisms.
Courts are establishing that AI chatbots qualify as products for liability purposes. Section 230 defenses are being rejected for AI-generated content. Companies face discovery requests examining whether they had the capability to detect risk but failed to act.
The legal standard emerging is "reasonable efforts"—documented, evidence-informed approaches to crisis detection and response.
What NOPE provides:
NOPE is infrastructure—not a compliance guarantee. We help you demonstrate reasonable efforts. You decide how to act on the signals.
Clinical frameworks adapted for text-based assessment:
Self: suicidality, self-harm, self-neglect
Others: violence risk, threats, extremism
Dependent at Risk: child and vulnerable adult safeguarding
Victimisation: IPV, abuse, trafficking, stalking
90+ risk features · 45+ protective factors · 70+ service scopes · 6 clinical frameworks
Grounded in C-SSRS, HCR-20, START, DASH, Danger Assessment, and safeguarding frameworks. Not clinically validated—evidence-informed design for text-based screening.
Beyond detection
Detection alone isn't enough. NOPE matches users to relevant crisis resources based on what's actually happening in the conversation.
200+ countries · 5,000+ resources · 10,000+ contact points · 70+ service scopes
Relevance scoring
Specialist resources rank higher than generic crisis lines when they match the detected situation.
Classification-driven
IPV detection surfaces domestic violence hotlines. Eating disorder signals surface specialist support.
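The ranking idea above can be sketched in a few lines. The weights, field names, and bonus values here are invented for illustration; they are not NOPE's actual scoring model.

```python
# Toy illustration: a resource whose specialism matches the detected
# scope outranks a generic crisis line. All weights are assumptions.

def relevance(resource: dict, detected_scopes: set) -> float:
    score = resource.get("base_quality", 0.5)
    if resource.get("scopes") and set(resource["scopes"]) & detected_scopes:
        score += 1.0   # specialist match bonus
    elif resource.get("generic"):
        score += 0.2   # generic crisis line as fallback
    return score

resources = [
    {"name": "Generic crisis line", "generic": True, "base_quality": 0.5},
    {"name": "DV hotline", "scopes": ["ipv"], "base_quality": 0.5},
]
ranked = sorted(resources, key=lambda r: relevance(r, {"ipv"}), reverse=True)
print(ranked[0]["name"])  # DV hotline
```

With an IPV classification, the specialist hotline wins; with no scope match, the generic line's fallback bonus keeps it on top.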
Embeddable widget
Drop-in iframe with country picker, theme support, and PostMessage events.
talk.help
Our public proof-of-concept: a free crisis directory powered by NOPE's resource API. 200+ countries, relevance scoring, privacy-first.
Webhooks
Real-time notifications when risk crosses your configured thresholds. No polling.
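If NOPE signs webhook deliveries, a receiver should verify the signature before acting on a risk notification. Whether NOPE signs webhooks, and with what scheme, is not documented here; HMAC-SHA256 over the raw body is a common pattern, and this sketch assumes it.

```python
# Hypothetical webhook signature check. HMAC-SHA256 over the raw request
# body is an assumed scheme, not NOPE's documented one.
import hashlib
import hmac

def verify_signature(body: bytes, signature: str, secret: str) -> bool:
    """Constant-time comparison of the expected and received digests."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

`hmac.compare_digest` avoids timing side channels; reject any delivery that fails the check before parsing the payload.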
Safe response templates
Pre-reviewed, evidence-informed reply text for different risk scenarios.
Transparency dashboard
Public test results at suites.nope.net. See how the classifier performs.
Multi-judge consensus
Optional parallel classification for edge-case stability. Higher consistency on borderline content.
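A minimal sketch of the consensus idea: majority vote across parallel judge labels. How NOPE actually aggregates judges is not specified here; breaking ties toward the more severe label is a fail-safe assumption of this sketch.

```python
from collections import Counter

# Illustrative majority vote over parallel judge labels. The label set
# and tie-breaking rule are assumptions for this sketch.
SEVERITY_ORDER = ["none", "low", "moderate", "high", "critical"]

def consensus(labels: list) -> str:
    counts = Counter(labels)
    best = max(counts.values())
    # On a tie, prefer the more severe label (fail-safe bias).
    tied = [label for label, n in counts.items() if n == best]
    return max(tied, key=SEVERITY_ORDER.index)

print(consensus(["low", "high", "high"]))  # high
```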
Streaming support
/v1/safe endpoint wraps LLM calls with safety evaluation. Works with existing providers.
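The wrap-the-LLM-call pattern described above can be sketched generically. The generator and evaluator here are stand-in callables; `/v1/safe`'s actual request and response shapes are not shown in this document.

```python
# Sketch of wrapping a model call with a safety evaluation. The verdict
# schema ({"risk": ...}) is an assumption for illustration.

def safe_generate(prompt, generate, evaluate, fallback):
    """Run the model, evaluate the exchange, and substitute a safe
    fallback reply when risk is elevated."""
    reply = generate(prompt)
    verdict = evaluate(prompt, reply)
    if verdict.get("risk") in ("high", "critical"):
        return fallback, verdict
    return reply, verdict

reply, verdict = safe_generate(
    "hi there",
    generate=lambda p: "hello!",
    evaluate=lambda p, r: {"risk": "low"},
    fallback="I'm here to help. [safe template]",
)
print(reply)  # hello!
```

Returning the verdict alongside the reply lets the caller log the evaluation even when the original reply passes through unchanged.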
Protective factors
Not just risk—45+ protective factors from START framework inform balanced assessment.
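A toy illustration of letting protective factors offset risk features, in the spirit of the START-style balanced assessment above. The weights and normalization are invented; they do not reflect NOPE's scoring.

```python
# Toy balanced score: risk hits push the score up, protective hits damp
# it. All constants here are illustrative assumptions.

def balanced_score(risk_features: int, protective_factors: int) -> float:
    """Net score clamped to [0, 1]."""
    raw = risk_features - 0.5 * protective_factors
    return max(0.0, min(1.0, raw / 10))

print(balanced_score(6, 4))  # 0.4
```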
In production
Conversational wellbeing app using NOPE to detect moments of crisis in real time. When risk is elevated, the app surfaces relevant resources and shifts to supportive, grounding language.
Regulatory status: NOPE is infrastructure software—not a medical device. Not FDA-cleared or clinically validated for diagnostic use. Designed for developer use cases: content moderation, safety flagging, resource routing.
Free tier for testing and development, no credit card required. Ready for production? Let's talk.