MrrrMe Avatar Frontend
Next.js 16 Web Application with 3D Avatar Integration
Real-time emotion detection interface with customizable 3D avatars, WebSocket communication, and multi-lingual support.
Technology Stack
Framework: Next.js 16.0.0 (App Router)
UI Library: React 19.2.0
Language: TypeScript 5.9+
3D Engine: React Three Fiber 9.4.0 + Three.js 0.180.0
Avatar SDK: Avaturn SDK (CDN)
Styling: Tailwind CSS v4 + Custom CSS Variables
Build Tool: Next.js Standalone Output
Project Structure
avatar-frontend/
│
├── app/                      # Next.js App Router
│   ├── api/
│   │   └── avaturn-proxy/
│   │       └── route.ts      # CORS proxy for avatar assets
│   │
│   ├── app/                  # Main application (authenticated)
│   │   └── page.tsx          # Avatar UI + WebSocket + Emotion detection
│   │
│   ├── login/
│   │   └── page.tsx          # Authentication page (signup/login)
│   │
│   ├── layout.tsx            # Root layout (fonts, metadata)
│   ├── page.tsx              # Landing page (marketing)
│   └── globals.css           # Design system (light/dark mode)
│
├── public/
│   ├── idle-animation.glb    # Avatar idle animation (Git LFS, 199 KB)
│   ├── next.svg              # Next.js logo
│   ├── vercel.svg            # Vercel logo
│   ├── file.svg              # UI icons
│   ├── globe.svg
│   └── window.svg
│
├── package.json              # Node dependencies
├── tsconfig.json             # TypeScript configuration
├── next.config.ts            # Next.js config (standalone output)
├── postcss.config.mjs        # PostCSS config
├── eslint.config.mjs         # ESLint config (Next.js 16)
└── .gitignore
Key Features
1. 3D Avatar System
- Avaturn SDK Integration: Create custom avatars via embedded modal
- React Three Fiber: Real-time 3D rendering with WebGL
- Lip-Sync Animation: Viseme-based mouth animation synchronized with TTS
- Idle Animations: Natural breathing and blinking using GLB animations
- Customizable Positioning: Adjustable camera angle, position, and scale
Avatar Pipeline:
User clicks "Create Avatar"
  ↓ Avaturn SDK modal opens
  ↓ User customizes avatar
  ↓ Exports GLB URL
  ↓ CORS proxy fetches asset
  ↓ Three.js loads and renders
  ↓ Visemes drive morph targets
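The last three steps amount to rewriting the exported GLB URL so the browser fetches it through the same-origin proxy before useGLTF loads it. A minimal sketch, assuming helper names (toProxiedUrl and handleAvatarExport are illustrative, not the actual code):
// Rewrite an Avaturn export URL so the asset is fetched via the same-origin CORS proxy.
function toProxiedUrl(glbUrl: string): string {
  return `/api/avaturn-proxy?url=${encodeURIComponent(glbUrl)}`;
}

// On export, store the proxied URL; the Avatar component then loads it with useGLTF.
function handleAvatarExport(glbUrl: string, setAvatarUrl: (url: string) => void): void {
  setAvatarUrl(toProxiedUrl(glbUrl));
}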
2. Multi-Modal Emotion Detection
- Facial Emotion: Real-time ViT-Face-Expression analysis
- Voice Emotion: HuBERT-Large prosody detection
- Text Sentiment: DistilRoBERTa with rule overrides
- Fusion Display: Combined emotion with confidence scores
Emotion Test Modal:
- Live probability distribution (4 emotions)
- Confidence percentage
- Quality score
- Prediction counter
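Face and voice results arrive over the same WebSocket and simply update React state. A minimal sketch of the message handling, assuming ws is the open socket and setter names like setFaceConfidence and setFaceProbabilities (illustrative; payload fields follow the protocol section below):
// Route incoming emotion messages into the React state that drives the modal.
ws.onmessage = (event: MessageEvent) => {
  const msg = JSON.parse(event.data);
  switch (msg.type) {
    case "face_emotion":
      setFaceEmotion(msg.emotion);              // e.g. "Happy"
      setFaceConfidence(msg.confidence);        // 0..1
      setFaceProbabilities(msg.probabilities);  // [Neutral, Happy, Sad, Angry]
      break;
    case "voice_emotion":
      setVoiceEmotion(msg.emotion);
      break;
  }
};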
3. WebSocket Communication
- Protocol: ws:// (dev) or wss:// (production)
- Authentication: Token-based session management
- Real-Time Streams: Video (200ms), Audio (500ms), Transcription
- Bidirectional: Client sends frames, server sends emotions + responses
Message Types:
// Client → Server
type: "auth" | "video_frame" | "audio_chunk" | "speech_end" | "preferences"
// Server → Client
type: "authenticated" | "face_emotion" | "voice_emotion" | "llm_response" | "error"
4. Conversation Interface
- Message History: Persistent chat with timestamps
- Speech Recognition: Web Speech API (continuous, interim results; see the sketch after this list)
- Text Input: Keyboard fallback for typing
- Auto-Greeting: AI initiates conversation on connect
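A minimal sketch of the speech-recognition setup behind this interface, assuming the prefixed webkitSpeechRecognition constructor available in Chrome/Edge (handler wiring simplified; the actual startSpeechRecognition() is more involved):
// Continuous transcription via the Web Speech API (Chrome/Edge only).
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
const recognition = new SpeechRecognitionImpl();
recognition.continuous = true;       // keep listening across utterances
recognition.interimResults = true;   // surface partial transcripts while the user speaks
recognition.lang = selectedLanguage === "nl" ? "nl-NL" : "en-US";

recognition.onresult = (event: any) => {
  const result = event.results[event.resultIndex];
  if (result.isFinal) {
    // Final transcript → forwarded to the backend as a speech_end message.
    wsRef.current?.send(JSON.stringify({ type: "speech_end", text: result[0].transcript }));
  }
};
recognition.start();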
5. User Preferences
- Languages: English, Dutch (switch mid-conversation)
- Voice: Male (Damien Black), Female (Ana Florence)
- Personality: Therapist (empathetic) or Coach (action-focused)
- Theme: Light/Dark mode with smooth transitions
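Switching any of these mid-conversation works because each change is pushed to the backend over the open socket as a preferences message (schema in the protocol section). A minimal sketch with an illustrative helper name:
// Push the current preference trio to the backend; subsequent responses use them.
function sendPreferences(
  ws: WebSocket,
  prefs: { voice: "male" | "female"; language: "en" | "nl"; personality: "therapist" | "coach" }
): void {
  ws.send(JSON.stringify({ type: "preferences", ...prefs }));
}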
6. Privacy & Authentication
- Session-Based Auth: Token stored in localStorage
- No Data Upload: Only video/audio chunks sent for processing
- Logout Cleanup: Properly closes WebSocket and media streams
- CORS Proxy: Secure avatar asset loading
Installation
Prerequisites
- Node.js 20+ (LTS recommended)
- npm 10+ or pnpm 9+
- Git LFS (for idle-animation.glb)
Setup
# Navigate to frontend directory
cd avatar-frontend
# Install Git LFS (if not installed)
git lfs install
git lfs pull
# Install dependencies
npm install
# or
pnpm install
# Run development server
npm run dev
# or
pnpm dev
# Open browser
open http://localhost:3000
Environment Variables
Create .env.local:
# Backend WebSocket URL (auto-detected if not set)
NEXT_PUBLIC_BACKEND_URL=http://localhost:8000
# Avatar TTS URL (auto-detected if not set)
NEXT_PUBLIC_AVATAR_URL=http://localhost:8765
Configuration
WebSocket Connection
File: app/app/page.tsx
const getWebSocketURL = () => {
if (typeof window === "undefined") return "ws://localhost:8000/ws";
const protocol = window.location.protocol === "https:" ? "wss:" : "ws:";
return `${protocol}//${window.location.host}/ws`;
};
Automatically uses:
- ws://localhost:3000/ws (local dev)
- wss://your-domain.com/ws (production)
Avatar Positioning
File: app/app/page.tsx (lines 493-495)
const [avatarPosition] = useState({ x: -0.01, y: -2.12, z: 0.06 });
const [avatarRotation] = useState({ x: 0.00, y: 0.51, z: 0.00 });
const [avatarScale] = useState(1.25);
Adjust these values to change camera framing.
Theme Customization
File: app/globals.css
Light Mode:
:root {
--background: #ffffff;
--foreground: #1d1d1f;
--accent-gradient-from: #007aff;
--accent-gradient-to: #5e5ce6;
--surface: rgba(255, 255, 255, 0.72);
--border: rgba(0, 0, 0, 0.06);
/* ... */
}
Dark Mode:
:root.dark-mode {
--background: #000000;
--foreground: #f5f5f7;
--accent-gradient-from: #0a84ff;
--accent-gradient-to: #5e5ce6;
--surface: rgba(28, 28, 30, 0.72);
--border: rgba(255, 255, 255, 0.08);
/* ... */
}
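Because both palettes hang off CSS variables, switching themes is just a matter of toggling the dark-mode class on the root element. A minimal sketch (the storage key here is illustrative):
// Flip the class that the CSS variables key off, and remember the choice.
function setDarkMode(enabled: boolean): void {
  document.documentElement.classList.toggle("dark-mode", enabled);
  localStorage.setItem("theme", enabled ? "dark" : "light"); // hypothetical key
}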
Component Architecture
Main Application (app/app/page.tsx)
State Management:
// Authentication
const [username, setUsername] = useState("");
const [userToken, setUserToken] = useState("");
// Emotion Detection
const [faceEmotion, setFaceEmotion] = useState("Neutral");
const [voiceEmotion, setVoiceEmotion] = useState("Neutral");
// Avatar
const [liveBlend, setLiveBlend] = useState<Blend>({});
const [avatarUrl, setAvatarUrl] = useState(DEFAULT_AVATAR);
// UI State
const [showHistory, setShowHistory] = useState(false);
const [showSettings, setShowSettings] = useState(false);
const [isAvatarSpeaking, setIsAvatarSpeaking] = useState(false);
// Preferences
const [selectedLanguage, setSelectedLanguage] = useState<"en" | "nl">("en");
const [selectedVoice, setSelectedVoice] = useState<"male" | "female">("female");
const [selectedPersonality, setSelectedPersonality] = useState<"therapist" | "coach">("therapist");
Key Functions:
- connectWebSocket() - Establishes WebSocket connection with auth
- startCapture() - Initializes camera/microphone access
- startVideoCapture() - Sends video frames at 5 FPS (200ms intervals; see the sketch below)
- startAudioCapture() - Sends audio chunks every 500ms
- startSpeechRecognition() - Web Speech API for transcription
- playAvatarResponse() - Syncs audio + visemes for lip-sync
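A minimal sketch of the video path referenced above: draw the webcam stream onto a canvas, encode a JPEG at ~70% quality, and send it every 200ms (canvas size and exact wiring are assumptions; the real startVideoCapture() may differ):
// Capture a frame from the <video> element every 200ms and ship it as a base64 JPEG.
function startVideoCapture(video: HTMLVideoElement, ws: WebSocket): number {
  const canvas = document.createElement("canvas");
  canvas.width = 320;   // small frames are enough for emotion detection (assumed size)
  canvas.height = 240;
  const ctx = canvas.getContext("2d")!;

  return window.setInterval(() => {
    if (ws.readyState !== WebSocket.OPEN) return;
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const frame = canvas.toDataURL("image/jpeg", 0.7); // 70% JPEG quality
    ws.send(JSON.stringify({ type: "video_frame", frame }));
  }, 200); // 5 FPS
}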
Avatar Component (app/app/page.tsx lines 171-227)
function Avatar({ liveBlend, avatarUrl, position, rotation, scale }) {
  const gltf = useGLTF(avatarUrl);
  const { scene, animations } = gltf;
  const idleAnimGLTF = useGLTF('/idle-animation.glb');

  // Find all meshes with morph targets (for lip-sync)
  const morphMeshes = useMemo(() => {
    const arr = [];
    scene.traverse((o) => {
      if (o.morphTargetDictionary && o.morphTargetInfluences) {
        arr.push(o);
      }
    });
    return arr;
  }, [scene]);

  // Animation loop: update morph targets + play idle animation
  // (mixerRef holds the AnimationMixer that plays idleAnimGLTF's clip; its setup is omitted from this excerpt)
  useFrame((_, dt) => {
    if (mixerRef.current) mixerRef.current.update(dt);
    morphMeshes.forEach((m) => {
      Object.entries(liveBlend).forEach(([name, target]) => {
        const i = m.morphTargetDictionary[name];
        if (i !== undefined) {
          // Ease each influence toward its viseme target (exponential smoothing, factor dt * 25)
          m.morphTargetInfluences[i] += (target - m.morphTargetInfluences[i]) * dt * 25;
        }
      });
    });
  });
}
Blend Shapes (ARKit standard):
- jawOpen: Mouth opening
- mouthSmile: Smile intensity
- mouthFrown: Frown intensity
- mouthPucker: Lip pucker (for "oo" sounds)
- And ~50 more ARKit blend shapes
Avaturn Modal (app/app/page.tsx lines 91-170)
function AvaturnModal({ open, onClose, onExport, subdomain = "mrrrme" }) {
  // Excerpt: in the full component the awaits below live inside an async setup routine that
  // runs once `open` is true, and an `sdk` instance is created from the imported module
  // (construction omitted here).

  // Dynamically import Avaturn SDK from CDN
  const AvaturnSDK = await importFromCdn(
    "https://cdn.jsdelivr.net/npm/@avaturn/sdk/dist/index.js"
  );

  // Initialize SDK in container
  await sdk.init(containerRef.current, {
    url: `https://${subdomain}.avaturn.dev`
  });

  // Listen for export event
  sdk.on("export", (data) => {
    const glbUrl = data?.links?.glb?.url;
    onExport(glbUrl); // Pass URL to parent
  });
}
Emotion Test Modal (app/app/page.tsx lines 20-89)
Real-time emotion dashboard:
- Current emotion with confidence
- 4-class probability distribution (Neutral, Happy, Sad, Angry)
- Face quality score
- Prediction counter
WebSocket Protocol
Client Messages
Authentication:
{
"type": "auth",
"token": "session_token_from_login"
}
Video Frame (every 200ms):
{
"type": "video_frame",
"frame": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."
}
Audio Chunk (every 500ms):
{
"type": "audio_chunk",
"audio": "base64_webm_audio_data"
}
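These chunks are typically produced by a MediaRecorder started with a 500ms timeslice. A sketch of one way to build the payload (micStream is the getUserMedia audio stream, and the FileReader-based base64 conversion is an assumption; the actual startAudioCapture() may differ):
// Record microphone audio as WebM and send a base64 chunk every 500ms.
const recorder = new MediaRecorder(micStream, { mimeType: "audio/webm" });
recorder.ondataavailable = (e: BlobEvent) => {
  const reader = new FileReader();
  reader.onloadend = () => {
    // Strip the "data:audio/webm;base64," prefix before sending.
    const audio = (reader.result as string).split(",")[1];
    ws.send(JSON.stringify({ type: "audio_chunk", audio }));
  };
  reader.readAsDataURL(e.data);
};
recorder.start(500); // emit a chunk every 500ms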
Speech End (when user stops talking):
{
"type": "speech_end",
"text": "transcribed speech from Web Speech API"
}
Update Preferences:
{
"type": "preferences",
"voice": "female" | "male",
"language": "en" | "nl",
"personality": "therapist" | "coach"
}
Request Greeting:
{
"type": "request_greeting"
}
Server Messages
Authentication Success:
{
"type": "authenticated",
"username": "alice",
"summary": "User summary from previous conversations..."
}
Face Emotion Update:
{
"type": "face_emotion",
"emotion": "Happy",
"confidence": 0.87,
"probabilities": [0.05, 0.87, 0.04, 0.04],
"quality": 0.92
}
Voice Emotion Update:
{
"type": "voice_emotion",
"emotion": "Happy"
}
LLM Response (with avatar TTS):
{
"type": "llm_response",
"text": "That's wonderful to hear!",
"emotion": "Happy",
"intensity": 0.75,
"audio_url": "/static/tts_12345.mp3",
"visemes": [
{"t": 0.0, "blend": {"jawOpen": 0.0}},
{"t": 0.1, "blend": {"jawOpen": 0.3, "mouthSmile": 0.2}},
{"t": 0.2, "blend": {"jawOpen": 0.5}}
]
}
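On the client, playAvatarResponse() plays audio_url and feeds the viseme track into liveBlend as the playback clock advances. A simplified sketch (the per-frame "last keyframe at or before currentTime" lookup is an assumption; as described under Conversation Loop, speech recognition resumes when the audio ends):
// Play TTS audio and drive the avatar's blend shapes from the viseme keyframes.
function playAvatarResponse(
  audioUrl: string,
  visemes: { t: number; blend: Record<string, number> }[],
  setLiveBlend: (b: Record<string, number>) => void
): void {
  const audio = new Audio(audioUrl);

  const tick = () => {
    const t = audio.currentTime;
    // Use the most recent keyframe at or before the current playback time.
    const current = [...visemes].reverse().find((v) => v.t <= t);
    if (current) setLiveBlend(current.blend);
    if (!audio.ended) requestAnimationFrame(tick);
    else setLiveBlend({}); // relax the mouth once playback finishes
  };

  audio.onplay = () => requestAnimationFrame(tick);
  audio.play();
}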
Error:
{
"type": "error",
"message": "Invalid session - please login again"
}
API Routes
CORS Proxy (app/api/avaturn-proxy/route.ts)
Proxies avatar GLB files to bypass CORS restrictions.
Allowed Domains:
- *.avaturn.dev
- *.avaturn.me
- *.cloudfront.net
- storage.googleapis.com
- *.amazonaws.com
- models.readyplayer.me
Usage:
GET /api/avaturn-proxy?url=https://models.readyplayer.me/avatar.glb
Response:
- Success: Binary GLB file with model/gltf-binary content type
- Error 400: URL not allowed
- Error 502: Upstream fetch failed
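A simplified sketch of what the handler does: check the hostname against the allow-list, fetch upstream, and stream the bytes back with the GLB content type (the regex list here is abridged and error handling is minimal; see the actual route.ts for the full behaviour):
// app/api/avaturn-proxy/route.ts (simplified sketch)
import { NextRequest, NextResponse } from "next/server";

const ALLOWED = [/\.avaturn\.(dev|me)$/, /\.cloudfront\.net$/, /^storage\.googleapis\.com$/, /\.amazonaws\.com$/, /^models\.readyplayer\.me$/];

export async function GET(req: NextRequest) {
  const url = req.nextUrl.searchParams.get("url");
  let hostname = "";
  try { hostname = new URL(url ?? "").hostname; } catch { /* malformed URL → rejected below */ }

  if (!hostname || !ALLOWED.some((re) => re.test(hostname))) {
    return NextResponse.json({ error: "URL not allowed" }, { status: 400 });
  }
  const upstream = await fetch(url!);
  if (!upstream.ok) {
    return NextResponse.json({ error: "Upstream fetch failed" }, { status: 502 });
  }
  return new NextResponse(upstream.body, {
    headers: { "Content-Type": "model/gltf-binary" },
  });
}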
Design System
Color Palette
Light Mode:
- Background: #ffffff (pure white)
- Foreground: #1d1d1f (near black)
- Accent: #007aff → #5e5ce6 (blue gradient)
- Surface: rgba(255, 255, 255, 0.72) (frosted glass)
Dark Mode:
- Background: #000000 (pure black)
- Foreground: #f5f5f7 (off-white)
- Accent: #0a84ff → #5e5ce6 (lighter blue gradient)
- Surface: rgba(28, 28, 30, 0.72) (dark frosted glass)
Glass Morphism
.glass {
background: var(--surface);
backdrop-filter: blur(24px) saturate(180%);
border: 1px solid var(--border);
}
.glass-elevated {
background: var(--surface-elevated);
backdrop-filter: blur(32px) saturate(200%);
border: 1px solid var(--border-strong);
}
Animations
@keyframes fadeIn {
from { opacity: 0; }
to { opacity: 1; }
}
@keyframes scaleIn {
from { opacity: 0; transform: scale(0.95); }
to { opacity: 1; transform: scale(1); }
}
@keyframes slideInRight {
from { transform: translateX(100%); }
to { transform: translateX(0); }
}
Component API
Avatar Component
<Avatar
liveBlend={liveBlend} // Current viseme blend shapes
avatarUrl={avatarUrl} // GLB URL from Avaturn
position={[-0.01, -2.12, 0.06]} // [x, y, z] position
rotation={[0.00, 0.51, 0.00]} // [x, y, z] Euler angles
scale={1.25} // Uniform scale
/>
Message Bubble
<MessageBubble
message={{
id: "unique_id",
role: "user" | "assistant",
content: "Message text",
timestamp: new Date(),
emotion: "Happy" // optional
}}
/>
Control Button
<ControlButton
onClick={() => handleAction()}
icon={<svg>...</svg>}
label="Button label"
variant="default" | "danger" | "primary"
/>
Emotion Test Modal
<EmotionTestModal
open={showEmotionTest}
onClose={() => setShowEmotionTest(false)}
wsRef={wsRef} // WebSocket ref for listening to face_emotion messages
/>
Build & Deployment
Development
npm run dev
Runs on http://localhost:3000
Production Build
npm run build
npm start
Creates .next/standalone directory for deployment.
Docker Build
Handled by root Dockerfile:
# Stage 1: Install frontend dependencies
FROM node:20-alpine AS frontend-deps
WORKDIR /app/avatar-frontend
COPY avatar-frontend/package*.json ./
RUN npm ci
# Stage 2: Build frontend
FROM node:20-alpine AS frontend-builder
WORKDIR /app/avatar-frontend
COPY --from=frontend-deps /app/avatar-frontend/node_modules ./node_modules
COPY avatar-frontend/ ./
RUN npm run build
# Stage 3: Copy to standalone
COPY --from=frontend-builder /app/avatar-frontend/.next/standalone ./avatar-frontend/.next/standalone
Nginx Proxy
Frontend served through Nginx on port 7860:
# Next.js frontend
location / {
proxy_pass http://127.0.0.1:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
# WebSocket (backend)
location /ws {
proxy_pass http://127.0.0.1:8000/ws;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
# Backend API
location /api {
proxy_pass http://127.0.0.1:8000/api;
}
# Avatar TTS
location /static {
proxy_pass http://127.0.0.1:8765/static;
}
User Flow
1. Landing Page (/)
- Marketing content
- Features showcase
- "Get Started" CTA β redirects to
/login
2. Authentication (/login)
- Username + password form
- Signup or login toggle
- Stores token in
localStorageon success - Redirects to
/app
3. Main App (/app)
On Mount:
- Check localStorage for token
- Redirect to /login if not authenticated
- Apply theme preference (light/dark)
On Start:
- Request camera + microphone permissions
- Connect WebSocket with auth token
- Start video/audio capture
- Start speech recognition
- Show "Create Avatar" welcome screen
Avatar Creation:
- Click "Create Your Avatar"
- Avaturn SDK modal opens
- Customize avatar
- Export GLB URL
- Proxy fetches GLB via /api/avaturn-proxy
- Three.js loads and renders avatar
- Auto-greeting plays with lip-sync
Conversation Loop:
- User speaks (Web Speech API transcribes)
- On silence, send speech_end message
- Backend processes emotions + generates response
- Server sends llm_response with audio + visemes
- Play audio while animating avatar mouth
- On audio end, resume speech recognition
Performance Optimizations
1. Frame Rate Control
- Video: 5 FPS (200ms intervals) - emotions change slowly
- Audio: 2 Hz (500ms chunks) - sufficient for real-time
- Canvas Rendering: 60 FPS via requestAnimationFrame
2. Asset Loading
- Git LFS: Large files (idle-animation.glb) not in git history
- Image Optimization: Disabled (unoptimized: true) for faster builds
- Standalone Build: Minimal production bundle
3. State Management
- Refs for Non-Reactive State: isPausedRef, recognitionRef, wsRef
- Minimal Re-Renders: Only update UI when necessary
- Memoization: useMemo for morph mesh detection
4. Network Efficiency
- JPEG Compression: 70% quality for video frames
- Base64 Encoding: Binary data transmission
- WebSocket Keep-Alive: Single persistent connection
Troubleshooting
Camera/Microphone Access Denied
Problem: Browser doesn't request permissions
Solution:
- Use HTTPS in production (required for getUserMedia)
- Check browser settings → Site permissions
- Try different browser (Chrome recommended)
WebSocket Connection Failed
Problem: ws://localhost:3000/ws not connecting
Check:
# Ensure backend is running
curl http://localhost:8000/health
# Check the WebSocket endpoint (plain curl won't upgrade; any HTTP response means the server is reachable)
curl -i http://localhost:8000/ws
Fix: Update BACKEND_WS URL in app/app/page.tsx
Avatar Not Loading
Problem: Avatar shows blank screen
Possible Causes:
- GLB URL blocked by CORS → use /api/avaturn-proxy?url=...
- Invalid GLB format → re-export from Avaturn
- Git LFS not installed → run git lfs pull
Debug:
# Check if idle-animation.glb is real file (not pointer)
file public/idle-animation.glb
# Should show: "glTF binary" not "ASCII text"
# Check if proxy works
curl "http://localhost:3000/api/avaturn-proxy?url=https://models.readyplayer.me/some-avatar.glb"
Speech Recognition Not Working
Problem: Microphone captures but no transcription
Fixes:
- Check browser support: Chrome/Edge only (Safari doesn't support continuous mode)
- Language mismatch: Ensure recognition.lang matches selectedLanguage
- Restart: Change language to force a recognition restart
Debug:
// Add to startSpeechRecognition()
recognition.onerror = (event) => {
console.log('[SpeechRec] Error:', event.error, event.message);
};
recognition.onresult = (event) => {
console.log('[SpeechRec] Result:', event.results[event.resultIndex][0].transcript);
};
Avatar Lip-Sync Out of Sync
Problem: Mouth moves too early/late
Fix 1: Adjust viseme interpolation in useFrame():
// Faster interpolation (default factor is dt * 25)
m.morphTargetInfluences[i] += (target - m.morphTargetInfluences[i]) * dt * 40;
// Slower interpolation
m.morphTargetInfluences[i] += (target - m.morphTargetInfluences[i]) * dt * 15;
Fix 2: Add offset to viseme timing:
const t = audioRef.current.currentTime + 0.05; // 50ms lookahead
Dependencies
Core
{
"next": "16.0.0",
"react": "19.2.0",
"react-dom": "19.2.0",
"three": "^0.180.0",
"@react-three/fiber": "^9.4.0",
"@react-three/drei": "^10.7.6"
}
Dev Dependencies
{
"typescript": "^5",
"tailwindcss": "^4",
"@tailwindcss/postcss": "^4",
"eslint": "^9",
"eslint-config-next": "16.0.0"
}
Three.js Ecosystem
- React Three Fiber: React renderer for Three.js
- Drei: Helper components (Environment, Html, useGLTF)
- Three.js 0.180: Core 3D engine
Why Three.js 0.180?
- Compatible with Avaturn SDK exports
- Supports ARKit blend shapes
- GLTFLoader with morph targets
Authentication Flow
Signup
POST /api/signup
{
"username": "alice",
"password": "secure123"
}
Response:
{
"success": true,
"message": "Account created!"
}
Login
POST /api/login
{
"username": "alice",
"password": "secure123"
}
Response:
{
"success": true,
"token": "random_session_token_32_chars",
"username": "alice",
"user_id": "user_abc123",
"summary": "Previous conversation summary or null"
}
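A minimal client-side sketch of the login call (the endpoint and response shape are as above; router comes from next/navigation's useRouter, and error handling is elided):
// POST credentials, store the session token, then enter the app.
async function login(username: string, password: string) {
  const res = await fetch("/api/login", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username, password }),
  });
  const data = await res.json();
  if (data.success) {
    localStorage.setItem("mrrrme_token", data.token);
    localStorage.setItem("mrrrme_username", data.username);
    router.push("/app");
  }
  return data;
}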
Token Storage
// Save to localStorage
localStorage.setItem("mrrrme_token", token);
localStorage.setItem("mrrrme_username", username);
// Retrieve on app load
const token = localStorage.getItem("mrrrme_token");
if (!token) router.push("/login");
Logout
POST /api/logout
{
"token": "session_token"
}
// Frontend cleanup:
localStorage.removeItem("mrrrme_token");
localStorage.removeItem("mrrrme_username");
wsRef.current?.close();
mediaRecorderRef.current?.stop();
recognitionRef.current?.stop();
Responsive Design
Breakpoints (Tailwind)
// Mobile
default (< 640px)
// Tablet
md: (>= 768px)
// Desktop
lg: (>= 1024px)
Mobile Adaptations
History Panel:
// Mobile: Full width
className="w-full"
// Desktop: Fixed 420px
className="md:w-[420px]"
Message Bubbles:
// Mobile: 85% width
maxWidth: "85%"
// Desktop: 70% width
md:maxWidth: "70%"
Known Issues
Current Limitations
- Browser Support: Chrome/Edge only for speech recognition
- Mobile Safari: No continuous speech recognition
- Avatar Loading: Requires stable internet for GLB download
- Viseme Coverage: Not all phonemes have perfect ARKit mappings
- Memory Usage: Three.js can consume 200-400 MB RAM
Workarounds
Speech Recognition on Safari:
- Use text input instead (bottom bar)
- Fallback to server-side Whisper transcription
Slow Avatar Loading:
- Preload idle-animation.glb (already in /public)
- Cache Avaturn exports in IndexedDB (future work)
High Memory Usage:
- Clear previous avatar before loading new one:
if (oldUrl !== DEFAULT_AVATAR) {
(useGLTF as any).clear?.(oldUrl);
}
if (objectUrlRef.current) {
URL.revokeObjectURL(objectUrlRef.current);
}
Future Enhancements
Planned Features (Weeks 10-15)
Avatar Improvements:
- Emotion-driven facial expressions (smile, frown, concern)
- Eye gaze tracking (looks at camera)
- Head movement (subtle nodding, tilting)
- Blink animation at natural intervals
UI/UX:
- Emotion timeline graph (Chart.js or Recharts)
- Export conversation to CSV/JSON
- Session statistics dashboard
- Advanced settings (fusion weights, model selection)
Performance:
- WebWorker for audio processing
- OffscreenCanvas for video encoding
- IndexedDB caching for avatars
Accessibility:
- Screen reader support
- Keyboard navigation
- High contrast mode
- Text size controls
Development Guidelines
Code Style
TypeScript:
- Strict mode enabled
- Explicit types for function parameters
- Avoid
anytypes
React:
- Functional components only
- Hooks for state management
- useCallback for expensive functions
- useMemo for computed values
Naming Conventions:
- Components: PascalCase
- Functions: camelCase
- Constants: UPPER_SNAKE_CASE
- CSS Variables: --kebab-case
File Organization
app/
page.tsx # Default export component
layout.tsx # Layout wrapper
api/
route.ts # API route handler
State Management
Local State: useState for UI toggles
Refs: useRef for non-reactive values (WebSocket, MediaRecorder)
Global State: Props drilling (no Redux/Zustand needed for small app)
Testing
Manual Testing Checklist
- Login with new account
- Login with existing account
- Create avatar via Avaturn
- Avatar loads and displays
- Camera permission granted
- Microphone permission granted
- Face emotion updates in real-time
- Speech recognition transcribes correctly
- LLM response plays with lip-sync
- Switch language (English ↔ Dutch)
- Switch voice (Male ↔ Female)
- Switch personality (Therapist ↔ Coach)
- Toggle light/dark mode
- View conversation history
- Pause/resume listening
- Logout properly closes connections
Browser Testing
Recommended:
- Chrome 120+ (full support)
- Edge 120+ (full support)
Limited Support:
- Firefox (no Web Speech API continuous mode)
- Safari (no Web Speech API on desktop)
Mobile:
- Chrome Android (works)
- Safari iOS (limited - no continuous speech)
Performance Metrics
Bundle Size
npm run build
# Output:
Route (app)                 Size       First Load JS
/                           15.2 kB    105 kB
/api/avaturn-proxy          0 B        0 B
/app                        89.3 kB    195 kB
/login                      12.8 kB    102 kB
Total First Load JS: ~195 kB (with Three.js)
Load Times (Local Dev)
- Initial page load: 1.2s
- Avatar GLB download: 2-4s (depends on size)
- WebSocket connection: <100ms
- First video frame: 200ms
Runtime Performance
- FPS: 60 (Three.js canvas)
- Memory: 200-400 MB
- CPU: 15-25% (with webcam)
- Network: 50-100 KB/s (video + audio upload)
Security Considerations
Authentication
- Tokens are random 32-character strings
- Passwords hashed with SHA-256 (server-side)
- Session validation on every WebSocket message
- Auto-logout on invalid token
Data Privacy
- Video/audio chunks sent to backend, not stored
- No face recognition or identification
- Conversation history saved per-user in backend SQLite
- localStorage tokens cleared on logout
CORS
- Avaturn proxy restricts to whitelisted domains
- Backend CORS allows all origins (dev only)
- Production should restrict to specific domains
Contributing
Development Setup
# Fork repository
git clone https://github.com/YourUsername/MrrrMe.git
cd MrrrMe/avatar-frontend
# Install dependencies
npm install
# Create feature branch
git checkout -b feature/your-feature
# Run dev server
npm run dev
# Make changes, test thoroughly
# Commit and push
git add .
git commit -m "Add your feature"
git push origin feature/your-feature
# Open Pull Request
Code Review Checklist
- TypeScript types are correct
- No console.log in production code
- Components are properly memoized
- CSS variables used (no hardcoded colors)
- Responsive design tested
- WebSocket cleanup on unmount
- Error handling implemented
License
MIT License - See root LICENSE file
Contact
Project Team:
- Musaed Al-Fareh - [email protected]
- Michon Goddijn - [email protected]
- Lorena KraljiΔ - [email protected]
Course: Applied Data Science - Artificial Intelligence
Institution: Breda University of Applied Sciences
Acknowledgments
- Avaturn: 3D avatar creation platform
- Pmndrs: React Three Fiber ecosystem
- Next.js Team: Framework development
- Three.js: WebGL rendering engine
Last Updated: December 10, 2024
Version: 2.0.0 (Next.js 16 + React 19)
Status: Production Ready