Live demo available now

EmpathIQ reads your face, your voice, and your words simultaneously — and responds with genuine emotional understanding.

Try EmpathIQ Live · Watch Demo

AI talks. But does it feel?

Every AI today processes your words. None of them process you.

🗣️
Text without tone

Current chatbots read what you type — but miss the exhaustion in your voice, the tension in your face, and the anxiety behind your words.

🧱
One-size responses

Identical replies regardless of whether you're devastated or delighted. No emotional calibration. No real empathy — just pattern-matched politeness.

💔
Disconnected channels

Your face, voice, and text are processed in silos — never together. Real emotional intelligence requires reading all three at once.

Three signals. One understanding.

EmpathIQ fuses multimodal inputs in real time so every response is tuned to your true emotional state.

👁️
Face

face-api.js detects 7 core emotions from your webcam feed with 68-point facial landmark tracking.
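The seven classes above match face-api.js's expression model (neutral, happy, sad, angry, fearful, disgusted, surprised), each returned as a probability. A minimal sketch of reducing one reading to a dominant emotion — the helper and its shape are illustrative, not EmpathIQ's actual code:

```typescript
// face-api.js returns seven class probabilities per detected face.
// This helper picks the most likely one; purely illustrative.
type Expressions = Record<
  "neutral" | "happy" | "sad" | "angry" | "fearful" | "disgusted" | "surprised",
  number
>;

function dominantEmotion(expressions: Expressions): { emotion: string; confidence: number } {
  const [emotion, confidence] = Object.entries(expressions)
    .reduce((best, cur) => (cur[1] > best[1] ? cur : best));
  return { emotion, confidence };
}

// In the browser this would be fed from something like:
// const det = await faceapi.detectSingleFace(video)
//   .withFaceLandmarks().withFaceExpressions();
// if (det) dominantEmotion(det.expressions);
```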

🎙️
Voice

Hume EVI analyses prosody in real time — pitch, cadence, and tone reveal 48+ nuanced emotional dimensions.
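A live display only needs the strongest few of those 48+ dimensions. A sketch of the ranking step, assuming a simple name-to-score map (not Hume's exact payload shape):

```typescript
// Rank a prosody reading and keep the top k dimensions for display.
// The score-map shape is an assumption, not Hume EVI's actual payload.
function topEmotions(scores: Record<string, number>, k = 3): string[] {
  return Object.entries(scores)
    .sort((a, b) => b[1] - a[1])  // highest score first
    .slice(0, k)
    .map(([name]) => name);
}
```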

💬
Words

Claude reads your messages and fuses them with live face + voice data to craft contextually and emotionally aware replies.

When your face shows sadness but your words say "I'm fine" — EmpathIQ catches the dissonance and responds to what's real.
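That dissonance check can be sketched as a fusion step before the message reaches Claude. Everything here is hypothetical — field names, the negative-emotion set, and the injected context are assumptions, not EmpathIQ's actual prompt:

```typescript
// Hypothetical fusion of the three channels into context for Claude,
// flagging dissonance when the face contradicts the words.
interface EmotionSnapshot {
  face: string;          // dominant facial emotion, e.g. "sad"
  voice: string;         // top prosody dimension, e.g. "Tiredness"
  textSentiment: string; // e.g. "positive" for "I'm fine"
}

const NEGATIVE = new Set(["sad", "angry", "fearful", "disgusted"]);

function buildContext(s: EmotionSnapshot): string {
  const dissonant = NEGATIVE.has(s.face) && s.textSentiment === "positive";
  return [
    `Facial emotion: ${s.face}. Voice: ${s.voice}. Text sentiment: ${s.textSentiment}.`,
    dissonant
      ? "The user's face contradicts their words; respond gently to the underlying feeling."
      : "Channels are aligned; respond to the stated feeling.",
  ].join(" ");
}
```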

Everything your emotional wellbeing needs

🫂
9 Coaching Modes

Companion, Dating Coach, Sales Coach, Meditation, Anger Release, Focus Coach, Sleep Guide, Confidence Booster — each with customisable system prompts.

📊
Coherence Score

Real-time emotional coherence ring that scores alignment between what your face shows and what your words say — 0–100 live.
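One plausible way to score that alignment is cosine-style agreement between the face's emotion distribution and a sentiment vector derived from the words, scaled to 0–100. This is a sketch of the idea, not the shipped formula:

```typescript
// Illustrative coherence score: cosine similarity between a facial
// emotion vector and a text sentiment vector, scaled to 0-100.
function coherenceScore(face: number[], words: number[]): number {
  const dot = face.reduce((sum, v, i) => sum + v * words[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  const denom = norm(face) * norm(words);
  return denom === 0 ? 0 : Math.round((dot / denom) * 100);
}
```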

🔮
Proactive Check-ins

If the AI detects sustained negative emotion for over 30 seconds, it gently reaches out — without you saying a word.
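The trigger described above amounts to a small state machine: fire once when negative readings have persisted for 30 seconds, and reset when the emotion lifts. A sketch under those assumptions:

```typescript
// Fire a check-in once after 30s of continuously negative readings.
// Threshold and state shape are assumptions from the description above.
const THRESHOLD_MS = 30_000;

class CheckInMonitor {
  private negativeSince: number | null = null;
  private fired = false;

  // Call on every emotion reading; returns true when a check-in is due.
  update(isNegative: boolean, now: number): boolean {
    if (!isNegative) {
      this.negativeSince = null; // streak broken, reset
      this.fired = false;
      return false;
    }
    this.negativeSince ??= now;  // streak starts on first negative reading
    if (!this.fired && now - this.negativeSince >= THRESHOLD_MS) {
      this.fired = true;         // fire once per sustained streak
      return true;
    }
    return false;
  }
}
```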

📜
Session History

Every session is saved with timestamps, dominant emotion, and a Claude-generated summary so you can track your emotional arc over time.
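A record shaped like the fields listed above is enough to reconstruct that arc. Field names here are illustrative, not the actual schema:

```typescript
// Hypothetical saved-session record: timestamps, dominant emotion,
// and a Claude-generated summary, as described above.
interface SessionRecord {
  startedAt: string;       // ISO timestamp
  endedAt: string;
  dominantEmotion: string; // e.g. "joy"
  summary: string;         // generated by Claude at session end
}

// Chronological list of dominant emotions across sessions.
function emotionalArc(sessions: SessionRecord[]): string[] {
  return [...sessions]
    .sort((a, b) => a.startedAt.localeCompare(b.startedAt))
    .map((s) => s.dominantEmotion);
}
```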

🥽
Smart Glasses Coach

Real-time coaching overlaid on a glasses-style HUD — powered by what your face and voice reveal in the moment.

🎙️
Full Voice Mode

Hume EVI voice sessions with male/female voice selection, live prosody emotion display, and transcript replay after every session.

"The next frontier of human–computer interaction isn't smarter AI. It's more empathetic AI."

Wearable integration — real glasses, real-time coaching in your field of view

Longitudinal emotional health tracking with weekly insight reports

Couples and group modes — reading the emotional room, not just one person

Clinical-grade PTSD and anxiety support with therapist dashboard integration

Built on the best

Assembled in Replit. Powered by frontier APIs. Deployed in hours.

👁️ face-api.js 🤖 Claude API 🎙️ Hume EVI ⚡ Replit ⚛️ React + Vite 🗄️ PostgreSQL 🔷 TypeScript 🎨 Tailwind CSS

Built for the Replit AI Buildathon

EmpathIQ was conceived, designed, and shipped end-to-end during the buildathon — from blank canvas to full multimodal emotional AI. Zero pre-built templates. 100% original.

Ready to be truly heard?

No sign-up. No download. Just open it and start talking.

Built on Replit

EmpathIQ · Emotionally Intelligent AI · Built with ♥ on Replit