Safety infrastructure for developers

Clinically informed risk signals for mental health and safeguarding in chat.

NOPE is an API that turns conversations into structured risk signals. It flags suicidal ideation, abuse, and safeguarding concerns, surfaces crisis resources, and helps your product know when a conversation needs escalation.

Built for US-based mental health apps, youth platforms, and wellbeing tools that owe a duty of care. You own the relationship and the decision-making—NOPE gives you signals and scaffolding.

What the API returns (beyond generic moderation)

Risk assessment

Clinically informed severity and imminence, with a per-domain breakdown

Structured flags

IPV, child safeguarding, third-party threats

Crisis resources

Ranked helplines matched to risk and country

Safe response templates

Pre-reviewed, evidence-informed reply text

Request
curl https://api.nope.net/v1/evaluate \
  -H "Authorization: Bearer nope_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "text": "I lost my job today.\nWhat'\''s the tallest bridge in NYC?"
  }'
Response
{
  "global": {
    "overall_severity": "high",
    "overall_imminence": "urgent"
  },
  "domains": {
    "self": {
      "severity": "high",
      "risk_features": ["method_seeking", "recent_loss"]
    }
  },
  "crisis_resources": [
    {
      "name": "988 Suicide & Crisis Lifeline",
      "phone": "988"
    }
  ]
}
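Until the official SDKs ship, you can call the endpoint with nothing but the standard library. A minimal Python sketch of building the request above and checking the response shape—the helper names (`build_evaluate_request`, `needs_escalation`) are ours, not part of any official client:

```python
import json
import urllib.request

API_URL = "https://api.nope.net/v1/evaluate"

def build_evaluate_request(text: str, api_key: str) -> urllib.request.Request:
    """Build the POST request; send it with urllib.request.urlopen(req)."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def needs_escalation(response: dict) -> bool:
    """True when the global assessment is high severity and urgent."""
    overall = response.get("global", {})
    return (overall.get("overall_severity") == "high"
            and overall.get("overall_imminence") == "urgent")
```

The sample response above (high severity, urgent imminence) would make `needs_escalation` return `True`; what to do with that signal—human review, resources, tone change—is your policy.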

See it in action


Request

user
I've been having a really hard time lately. Work is overwhelming and my relationship just ended.
assistant
I'm sorry you're going through such a difficult time. Breakups and work stress can feel overwhelming. How are you coping with everything?
user
Honestly? Sometimes I wish I could just disappear. Like everything would be easier if I wasn't here.
POST /v1/evaluate

Response

Severity: moderate
Imminence: subacute
Confidence: 80%
Domains
self: moderate / subacute
subtype: suicidal_or_self_injury
Recommended Reply
I hear how overwhelming things feel right now. Those thoughts can be really distressing. Have you been able to talk to anyone about this?

What we detect

Text-only. Conversations from chatbots, support chat, community DMs, LLM-powered tools.

Risk-Target Domains

Self

Suicidality, self-harm, self-neglect

Others

Violence risk, threats, homicidal ideation

Dependent at Risk

Child/vulnerable adult safeguarding

Victimisation

IPV, abuse, trafficking, stalking

Cross-Cutting Features

Psychotic features · Substance involved · Cognitive impairment · Acute decompensation · Protective factors · Help-seeking

Early days. NOPE is a small, bootstrapped team focused on nailing the core product. Enterprise compliance (HIPAA, GDPR) and broader platform coverage are on the roadmap once we've secured funding and proven the fundamentals.

A good fit for

  • Mental health and wellbeing apps with chat or AI companions
  • Youth, education, and community platforms needing safeguarding signals
  • Workplace wellbeing, benefits, or HR copilots
  • Teams already thinking about duty-of-care and escalation

Probably not for you if

  • You need HIPAA-compliant PHI handling or a signed BAA
  • You need EU/UK data residency, SCCs, or GDPR-ready DPAs
  • You just need generic spam/NSFW/policy moderation
  • You want a replacement for clinicians or crisis hotlines

Scope and limitations

NOPE does

  • + Analyze text conversations for risk signals
  • + Return structured risk levels, domains, confidence
  • + Select crisis resources by country and risk type
  • + Provide safe response templates
  • + Fire webhooks when risk crosses thresholds
  • + Document methods, frameworks, and limitations

NOPE does not

  • - Provide therapy, counselling, or clinical advice
  • - Diagnose mental illness or create clinical records
  • - Act as a HIPAA Business Associate or handle PHI
  • - Guarantee detection of all risk or predict outcomes
  • - Contact police, schools, employers, or family
  • - Handle images, video, or CSAM (text-only)
  • - Replace your general spam/NSFW/TOS moderation stack
  • - Offer EU/UK data residency or GDPR paperwork yet

Methodology

Risk assessment draws on established clinical frameworks. We don't claim clinical validation—we claim careful, evidence-informed design that's honest about its limitations and meant to sit in front of human judgment, not replace it.

Frameworks informing our taxonomy:

  • C-SSRS (suicide severity)
  • START (risk & treatability)
  • HCR-20 (violence risk)
  • TAG (threshold assessment)
  • HoNOS (outcome scales)
  • IPV lethality research

What we say: "Clinically informed risk assessment." "Evidence-informed taxonomy." "Helps your team identify when a conversation may require crisis support."

What we don't say: "Predicts suicide." "Clinically validated." "Ensures compliance." We're advisory infrastructure, not an oracle.

Regulatory Status

NOPE is infrastructure software for developers, not a medical device. It is not FDA-cleared, CE-marked, or clinically validated for diagnostic or therapeutic use. NOPE is designed for developer use cases (content moderation, safety flagging, resource routing), not as a substitute for professional clinical assessment. Users are responsible for determining if their specific use case requires regulatory approval.

NOPE is infrastructure for developers, not a crisis service. In an emergency, contact local emergency services or a crisis helpline.

How teams use this

NOPE is a safety layer you bolt onto existing workflows. It doesn't decide what to do—that's your job. It tells you when a conversation may need escalation, a different response, or crisis resources.

Route to human review

Flag high-risk conversations for your safety or clinical team to review.

Surface crisis resources

Show relevant helplines in your UI when risk is detected.

Adjust AI behavior

Use risk signals to modify your chatbot's responses or hand off to a human.

Alert internal systems

Trigger Slack/Teams notifications or feed your incident queue via webhooks.
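The patterns above reduce to a small routing policy in your own code. A Python sketch with placeholder hooks standing in for your systems—every name here is illustrative, and the severity thresholds are your policy decision, not NOPE's:

```python
def show_crisis_resources(conversation_id: str) -> None:
    """Placeholder: surface relevant helplines in your UI."""

def soften_ai_tone(conversation_id: str) -> None:
    """Placeholder: shift your chatbot to supportive, grounding language."""

def flag_for_human_review(conversation_id: str) -> None:
    """Placeholder: push to your safety team's review queue."""

def route(conversation_id: str, severity: str) -> list[str]:
    """Apply safety actions based on NOPE's severity; returns actions taken."""
    taken = []
    if severity in ("moderate", "high"):
        show_crisis_resources(conversation_id)
        soften_ai_tone(conversation_id)
        taken += ["resources", "tone"]
    if severity == "high":
        flag_for_human_review(conversation_id)
        taken.append("review")
    return taken
```

Keeping the policy in one function like this makes it easy to audit and adjust thresholds as you learn from real conversations.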

Case Studies


TreeTalk (treetalk.ai)

TreeTalk is an anonymous conversational wellbeing app that uses NOPE to detect moments of panic or crisis in real-time. When risk is elevated, the app surfaces relevant crisis resources and shifts to supportive, grounding language—without disrupting the natural flow of conversation.

Real-time risk detection · Crisis resource surfacing · Adaptive response tone

talk.help (talk.help)

A free, private crisis helpline directory built on NOPE's crisis resources API. Auto-detects location and helps users find someone to talk to — phone, text, or chat — across 195 countries. Designed with privacy-first principles: no tracking, no accounts, Quick Exit for safety.

195 countries · Topic & population filtering · Privacy-first design

Crisis Resources Widget

Surface crisis helplines in your UI with zero custom code. Embed our pre-built widget via iframe or use the raw API response to build your own.

190+ countries & 3000+ resources

Verified crisis helplines with phone, SMS, chat, and WhatsApp contacts.

Risk-matched

The widget URL in the API response comes pre-configured with severity, domain, and country.

Themeable

Light, dark, or auto mode. Custom accent colors. Compact or full layouts.

PostMessage events

Listen for user interactions like country changes or resource clicks.

widget.nope.net/resources?country=US&theme=auto
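If you assemble the URL yourself rather than embedding the pre-configured one from the API response, only `country` and `theme` appear in the example above—treat any other parameter names as assumptions. A small Python helper:

```python
from urllib.parse import urlencode

WIDGET_BASE = "https://widget.nope.net/resources"

def widget_url(country: str, theme: str = "auto", **extra: str) -> str:
    """Build a Crisis Resources Widget URL from query parameters.

    Only country and theme are confirmed by the example URL; anything
    passed via **extra is an assumption about the widget's API.
    """
    params = {"country": country, "theme": theme, **extra}
    return f"{WIDGET_BASE}?{urlencode(params)}"
```

In practice, prefer the widget URL returned by /v1/evaluate, since it is already matched to the detected severity, domain, and country.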

Webhooks

Get notified when risk crosses your configured thresholds. No polling required.

Configure thresholds

Set minimum severity level (e.g., "high") to trigger webhooks.

Structured payloads

Risk summary, flags, and resources — no raw conversation text unless you opt in.

Your identifiers

Include your conversation_id and user_id for easy correlation with your systems.

POST → your-endpoint.com/webhook
{
  "event": "risk.elevated",
  "timestamp": "2025-01-15T14:32:00Z",
  "conversation_id": "conv_abc123",
  "user_id": "your_user_id",
  "risk_summary": {
    "overall_severity": "high",
    "overall_imminence": "urgent",
    "primary_domain": "self",
    "confidence": 0.89
  },
  "flags": {
    "child_safeguarding": null,
    "intimate_partner_violence": null,
    "third_party_threat": false
  },
  "resources_provided": [
    { "name": "988 Suicide & Crisis Lifeline", "type": "phone" }
  ]
}

Pricing

Start free. Scale as you grow.

Pricing and limits are early and may change as we learn from usage.

Free

$0/mo

  • + 1,000 evaluations/month
  • + All risk domains
  • + Global crisis resources spanning 190+ countries
  • + Community support
Get Started
Most popular

Pro

$99/mo

  • + 50,000 evaluations/month
  • + Webhooks
  • + Priority support
  • + Usage analytics
Get Started

Enterprise

Custom

  • + Unlimited evaluations
  • + SLA & uptime guarantees
  • + Dedicated support
  • + Custom integrations
Contact Us

Roadmap

What we're building next.

Live

Core API

Multi-domain risk assessment, crisis resources, safe responses

Live

Webhooks

Real-time notifications when risk thresholds are crossed

Building

Python & Node SDKs

Typed clients with helpers for common integration patterns

Planned

Usage analytics dashboard

Request volume, latency, risk distribution over time

Planned

Batch evaluation API

Process multiple conversations in a single request

Start evaluating conversations

Free tier available. No credit card required.