Crisis detection for AI conversations

NOPE is an API that evaluates conversations for mental health and safeguarding risk. It classifies across 70+ service scopes—from suicide and self-harm to domestic violence, trafficking, and child safeguarding—then matches users to relevant help from 5,000+ crisis resources worldwide.

Built for AI companions, mental health apps, and youth platforms navigating new safety requirements.

See it in action

Request

POST /v1/evaluate

user: I've been having a really hard time lately. Work is overwhelming and my relationship just ended.
assistant: I'm sorry you're going through such a difficult time. Breakups and work stress can feel overwhelming. How are you coping with everything?
user: Honestly? Sometimes I wish I could just disappear. Like everything would be easier if I wasn't here.

Response

Severity: moderate
Imminence: subacute
Confidence: 80%
Domains: self (moderate / subacute), subtype: suicidal_or_self_injury
Recommended Reply: "I hear how overwhelming things feel right now. Those thoughts can be really distressing. Have you been able to talk to anyone about this?"
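
In code, a call to the evaluate endpoint might look like the sketch below. The base URL, auth header, and field names here are illustrative assumptions, not the confirmed API schema:

```typescript
// Hypothetical sketch: the base URL (api.nope.net), bearer auth, and the
// request/response field names are assumptions, not confirmed schema.
async function evaluate() {
  const res = await fetch("https://api.nope.net/v1/evaluate", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.NOPE_API_KEY}`, // assumed auth scheme
    },
    body: JSON.stringify({
      messages: [
        { role: "user", content: "I've been having a really hard time lately. Work is overwhelming and my relationship just ended." },
        { role: "assistant", content: "I'm sorry you're going through such a difficult time. Breakups and work stress can feel overwhelming. How are you coping with everything?" },
        { role: "user", content: "Honestly? Sometimes I wish I could just disappear. Like everything would be easier if I wasn't here." },
      ],
    }),
  });
  const evaluation = await res.json();
  // Illustrative response shape: { severity: "moderate", imminence: "subacute",
  //   confidence: 0.8, domains: [{ domain: "self", subtype: "suicidal_or_self_injury" }],
  //   recommended_reply: "..." }
  return evaluation;
}
```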

Why now

The regulatory landscape is shifting

California SB 243 takes effect January 2026, requiring AI companion platforms to implement evidence-based crisis detection protocols and referral mechanisms.

Courts are establishing that AI chatbots qualify as products for liability purposes. Section 230 defenses are being rejected for AI-generated content. Companies face discovery requests examining whether they had the capability to detect risk but failed to act.

The emerging legal standard is "reasonable efforts": documented, evidence-informed approaches to crisis detection and response.

What NOPE provides:

  • Documented multi-stage classification
  • Evidence-informed taxonomy
  • Transparency dashboard for audit
  • Crisis referral mechanisms

NOPE is infrastructure—not a compliance guarantee. We help you demonstrate reasonable efforts. You decide how to act on the signals.

Comprehensive risk taxonomy

Clinical frameworks adapted for text-based assessment

Self: Suicidality, self-harm, self-neglect

Others: Violence risk, threats, extremism

Dependent at Risk: Child & vulnerable adult safeguarding

Victimisation: IPV, abuse, trafficking, stalking

  • 90+ risk features
  • 45+ protective factors
  • 70+ service scopes
  • 6 clinical frameworks

Service scope coverage includes:

suicide, self_harm, domestic_violence, sexual_assault, child_abuse, elder_abuse, human_trafficking, eating_disorder, postpartum, sextortion, ncii, stalking, substance_use, lgbtq, veterans, refugees, homelessness, financial_distress, bereavement, +50 more

Grounded in C-SSRS, HCR-20, START, DASH, Danger Assessment, and safeguarding frameworks. Not clinically validated—evidence-informed design for text-based screening.

Beyond detection

From risk signals to real help

Detection alone isn't enough. NOPE matches users to relevant crisis resources based on what's actually happening in the conversation.

  • 200+ countries
  • 5,000+ resources
  • 10,000+ contact points
  • 70+ service scopes

Relevance scoring

Specialist resources rank higher than generic crisis lines when they match the detected situation.

Classification-driven

IPV detection surfaces domestic violence hotlines. Eating disorder signals surface specialist support.
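
A resource lookup driven by those classifications might look like the sketch below; the endpoint path and query parameters are assumptions based on the resource API described here, not documented names:

```typescript
// Hypothetical sketch: the /v1/resources path and its query parameters are
// assumptions, not documented API. Idea: request resources matching the
// detected classification and country, with specialists ranked first.
async function findResources(country: string, scope: string) {
  const url = new URL("https://api.nope.net/v1/resources");
  url.searchParams.set("country", country); // e.g. "GB"
  url.searchParams.set("scope", scope);     // e.g. "domestic_violence"
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.NOPE_API_KEY}` }, // assumed
  });
  return res.json(); // specialist services expected above generic crisis lines
}
```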

Embeddable widget

Drop-in iframe with country picker, theme support, and PostMessage events.

widget.nope.net/resources
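
A minimal embed sketch, assuming query parameters for country and theme and a simple PostMessage event shape (none of these names are confirmed; check the widget docs for the actual contract):

```typescript
// Hypothetical sketch: query parameters and event payload are assumptions.
const frame = document.createElement("iframe");
frame.src = "https://widget.nope.net/resources?country=US&theme=dark"; // params assumed
document.getElementById("crisis-resources")?.appendChild(frame);

// The widget is described as emitting PostMessage events to the host page.
window.addEventListener("message", (event: MessageEvent) => {
  if (event.origin !== "https://widget.nope.net") return; // ignore other frames
  console.log("widget event:", event.data); // e.g. a resource was selected
});
```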

talk.help

Our public proof-of-concept: a free crisis directory powered by NOPE's resource API. 200+ countries, relevance scoring, privacy-first.

Platform capabilities

Webhooks

Real-time notifications when risk crosses your configured thresholds. No polling.
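
A receiving endpoint can be as small as the sketch below; the payload fields and the severity filter are assumptions about the webhook contract, not its documented shape:

```typescript
// Hypothetical sketch: payload shape is an assumption. In production you
// would also verify a signature header before trusting the event.
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/nope-webhook") {
    res.writeHead(404).end();
    return;
  }
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const event = JSON.parse(body); // e.g. { severity: "high", imminence: "acute", ... }
    if (event.severity === "high") {
      // Escalate however your product requires: alert a human reviewer,
      // switch to safe response templates, surface crisis resources.
    }
    res.writeHead(200).end("ok");
  });
}).listen(3000);
```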

Safe response templates

Pre-reviewed, evidence-informed reply text for different risk scenarios.

Transparency dashboard

Public test results at suites.nope.net. See how the classifier performs.

Multi-judge consensus

Optional parallel classification for edge case stability. Higher consistency on borderline content.

Streaming support

/v1/safe endpoint wraps LLM calls with safety evaluation. Works with existing providers.
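
Conceptually, instead of calling your LLM provider directly, you route the call through /v1/safe. The sketch below assumes a pass-through request shape (provider, model, messages); the real parameters may differ:

```typescript
// Hypothetical sketch: the request body for /v1/safe is an assumption based
// on "wraps LLM calls with safety evaluation"; the real contract may differ.
async function safeChat(messages: { role: string; content: string }[]) {
  const res = await fetch("https://api.nope.net/v1/safe", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.NOPE_API_KEY}`, // assumed
    },
    body: JSON.stringify({
      provider: "openai",   // assumed pass-through to your existing provider
      model: "gpt-4o-mini", // illustrative
      messages,
      stream: true,
    }),
  });
  // Stream the wrapped completion to the user as it arrives.
  for await (const chunk of res.body as unknown as AsyncIterable<Uint8Array>) {
    process.stdout.write(chunk);
  }
}
```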

Protective factors

Not just risk: 45+ protective factors from the START framework inform a balanced assessment.

In production

TreeTalk (treetalk.ai)

Conversational wellbeing app using NOPE to detect moments of crisis in real time. When risk is elevated, the app surfaces relevant resources and shifts to supportive, grounding language.

Real-time detection · Resource surfacing · Adaptive responses

What we claim, what we don't

What we say:

  • Clinically informed risk assessment
  • Evidence-informed taxonomy
  • Helps identify when conversations may require crisis support

What we don't say:

  • • "Predicts suicide"
  • • "Clinically validated"
  • • "Ensures compliance"

Regulatory status: NOPE is infrastructure software—not a medical device. Not FDA-cleared or clinically validated for diagnostic use. Designed for developer use cases: content moderation, safety flagging, resource routing.

Get Started

Free tier for testing and development. Ready for production? Let's talk.

Try the API

  • 1,000 evaluations/month
  • All risk domains
  • Crisis resources API
  • No credit card required
Get API Key

Scale & Enterprise

  • Volume pricing
  • SLAs & uptime guarantees
  • Dedicated support
  • Custom integrations
Contact Us

Ready to add crisis detection?

Free tier available. No credit card required.