# MrrrMe Avatar Frontend

**Next.js 16 Web Application with 3D Avatar Integration**

Real-time emotion detection interface with customizable 3D avatars, WebSocket communication, and multilingual support.

---

## Technology Stack

- **Framework**: Next.js 16.0.0 (App Router)
- **UI Library**: React 19.2.0
- **Language**: TypeScript 5.9+
- **3D Engine**: React Three Fiber 9.4.0 + Three.js 0.180.0
- **Avatar SDK**: Avaturn SDK (CDN)
- **Styling**: Tailwind CSS v4 + custom CSS variables
- **Build Tool**: Next.js standalone output

---

## Project Structure

```
avatar-frontend/
β”‚
β”œβ”€β”€ app/                       # Next.js App Router
β”‚   β”œβ”€β”€ api/
β”‚   β”‚   └── avaturn-proxy/
β”‚   β”‚       └── route.ts       # CORS proxy for avatar assets
β”‚   β”‚
β”‚   β”œβ”€β”€ app/                   # Main application (authenticated)
β”‚   β”‚   └── page.tsx           # Avatar UI + WebSocket + emotion detection
β”‚   β”‚
β”‚   β”œβ”€β”€ login/
β”‚   β”‚   └── page.tsx           # Authentication page (signup/login)
β”‚   β”‚
β”‚   β”œβ”€β”€ layout.tsx             # Root layout (fonts, metadata)
β”‚   β”œβ”€β”€ page.tsx               # Landing page (marketing)
β”‚   └── globals.css            # Design system (light/dark mode)
β”‚
β”œβ”€β”€ public/
β”‚   β”œβ”€β”€ idle-animation.glb     # Avatar idle animation (Git LFS, 199 KB)
β”‚   β”œβ”€β”€ next.svg               # Next.js logo
β”‚   β”œβ”€β”€ vercel.svg             # Vercel logo
β”‚   β”œβ”€β”€ file.svg               # UI icons
β”‚   β”œβ”€β”€ globe.svg
β”‚   └── window.svg
β”‚
β”œβ”€β”€ package.json               # Node dependencies
β”œβ”€β”€ tsconfig.json              # TypeScript configuration
β”œβ”€β”€ next.config.ts             # Next.js config (standalone output)
β”œβ”€β”€ postcss.config.mjs         # PostCSS config
β”œβ”€β”€ eslint.config.mjs          # ESLint config (Next.js 16)
└── .gitignore
```

---

## Key Features

### 1. 3D Avatar System
- **Avaturn SDK Integration**: Create custom avatars via an embedded modal
- **React Three Fiber**: Real-time 3D rendering with WebGL
- **Lip-Sync Animation**: Viseme-based mouth animation synchronized with TTS
- **Idle Animations**: Natural breathing and blinking using GLB animations
- **Customizable Positioning**: Adjustable camera angle, position, and scale

**Avatar Pipeline**:
```
User clicks "Create Avatar"
  β†’ Avaturn SDK modal opens
  β†’ User customizes avatar
  β†’ Exports GLB URL
  β†’ CORS proxy fetches asset
  β†’ Three.js loads and renders
  β†’ Visemes drive morph targets
```

### 2. Multi-Modal Emotion Detection
- **Facial Emotion**: Real-time ViT-Face-Expression analysis
- **Voice Emotion**: HuBERT-Large prosody detection
- **Text Sentiment**: DistilRoBERTa with rule overrides
- **Fusion Display**: Combined emotion with confidence scores

**Emotion Test Modal**:
- Live probability distribution (4 emotions)
- Confidence percentage
- Quality score
- Prediction counter

### 3. WebSocket Communication
- **Protocol**: `ws://` (dev) or `wss://` (production)
- **Authentication**: Token-based session management
- **Real-Time Streams**: Video (200ms), audio (500ms), transcription
- **Bidirectional**: Client sends frames; server sends emotions + responses

**Message Types**:
```typescript
// Client β†’ Server
type: "auth" | "video_frame" | "audio_chunk" | "speech_end" | "preferences"

// Server β†’ Client
type: "authenticated" | "face_emotion" | "voice_emotion" | "llm_response" | "error"
```
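
The server-to-client types above lend themselves to a discriminated union. As a minimal sketch (the `ServerMessage` shapes and `describeMessage` helper are illustrative, not the app's actual code), incoming messages can be parsed and routed on their `type` field:

```typescript
// Hypothetical message shapes, following the protocol summary above
type ServerMessage =
  | { type: "authenticated"; username: string; summary?: string }
  | { type: "face_emotion"; emotion: string; confidence: number }
  | { type: "voice_emotion"; emotion: string }
  | { type: "llm_response"; text: string; audio_url?: string }
  | { type: "error"; message: string };

// Parse a raw WebSocket payload and dispatch on its "type" field
function describeMessage(raw: string): string {
  const msg = JSON.parse(raw) as ServerMessage;
  switch (msg.type) {
    case "authenticated": return `logged in as ${msg.username}`;
    case "face_emotion":  return `face: ${msg.emotion} (${msg.confidence})`;
    case "voice_emotion": return `voice: ${msg.emotion}`;
    case "llm_response":  return `reply: ${msg.text}`;
    case "error":         return `error: ${msg.message}`;
  }
}
```

TypeScript narrows `msg` inside each `case`, so every handler sees only the fields that exist for that message type.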

### 4. Conversation Interface
- **Message History**: Persistent chat with timestamps
- **Speech Recognition**: Web Speech API (continuous, interim results)
- **Text Input**: Keyboard fallback for typing
- **Auto-Greeting**: AI initiates conversation on connect

### 5. User Preferences
- **Languages**: English, Dutch (switch mid-conversation)
- **Voice**: Male (Damien Black), Female (Ana Florence)
- **Personality**: Therapist (empathetic) or Coach (action-focused)
- **Theme**: Light/dark mode with smooth transitions

### 6. Privacy & Authentication
- **Session-Based Auth**: Token stored in localStorage
- **No Data Upload**: Only video/audio chunks sent for processing
- **Logout Cleanup**: Properly closes WebSocket and media streams
- **CORS Proxy**: Secure avatar asset loading

---

## Installation

### Prerequisites

- Node.js 20+ (LTS recommended)
- npm 10+ or pnpm 9+
- Git LFS (for idle-animation.glb)

### Setup

```bash
# Navigate to frontend directory
cd avatar-frontend

# Install Git LFS (if not installed)
git lfs install
git lfs pull

# Install dependencies
npm install
# or
pnpm install

# Run development server
npm run dev
# or
pnpm dev

# Open browser
open http://localhost:3000
```

### Environment Variables

Create `.env.local`:

```bash
# Backend WebSocket URL (auto-detected if not set)
NEXT_PUBLIC_BACKEND_URL=http://localhost:8000

# Avatar TTS URL (auto-detected if not set)
NEXT_PUBLIC_AVATAR_URL=http://localhost:8765
```

---

## Configuration

### WebSocket Connection

File: `app/app/page.tsx`

```typescript
const getWebSocketURL = () => {
  if (typeof window === "undefined") return "ws://localhost:8000/ws";
  const protocol = window.location.protocol === "https:" ? "wss:" : "ws:";
  return `${protocol}//${window.location.host}/ws`;
};
```

Automatically uses:
- `ws://localhost:3000/ws` (local dev)
- `wss://your-domain.com/ws` (production)

### Avatar Positioning

File: `app/app/page.tsx` (lines 493-495)

```typescript
const [avatarPosition] = useState({ x: -0.01, y: -2.12, z: 0.06 });
const [avatarRotation] = useState({ x: 0.00, y: 0.51, z: 0.00 });
const [avatarScale] = useState(1.25);
```

Adjust these values to change camera framing.

### Theme Customization

File: `app/globals.css`

**Light Mode**:
```css
:root {
  --background: #ffffff;
  --foreground: #1d1d1f;
  --accent-gradient-from: #007aff;
  --accent-gradient-to: #5e5ce6;
  --surface: rgba(255, 255, 255, 0.72);
  --border: rgba(0, 0, 0, 0.06);
  /* ... */
}
```

**Dark Mode**:
```css
:root.dark-mode {
  --background: #000000;
  --foreground: #f5f5f7;
  --accent-gradient-from: #0a84ff;
  --accent-gradient-to: #5e5ce6;
  --surface: rgba(28, 28, 30, 0.72);
  --border: rgba(255, 255, 255, 0.08);
  /* ... */
}
```

---

## Component Architecture

### Main Application (`app/app/page.tsx`)

**State Management**:
```typescript
// Authentication
const [username, setUsername] = useState("");
const [userToken, setUserToken] = useState("");

// Emotion detection
const [faceEmotion, setFaceEmotion] = useState("Neutral");
const [voiceEmotion, setVoiceEmotion] = useState("Neutral");

// Avatar
const [liveBlend, setLiveBlend] = useState<Blend>({});
const [avatarUrl, setAvatarUrl] = useState(DEFAULT_AVATAR);

// UI state
const [showHistory, setShowHistory] = useState(false);
const [showSettings, setShowSettings] = useState(false);
const [isAvatarSpeaking, setIsAvatarSpeaking] = useState(false);

// Preferences
const [selectedLanguage, setSelectedLanguage] = useState<"en" | "nl">("en");
const [selectedVoice, setSelectedVoice] = useState<"male" | "female">("female");
const [selectedPersonality, setSelectedPersonality] = useState<"therapist" | "coach">("therapist");
```

**Key Functions**:

1. `connectWebSocket()` - Establishes the WebSocket connection with auth
2. `startCapture()` - Initializes camera/microphone access
3. `startVideoCapture()` - Sends video frames at 5 FPS (200ms intervals)
4. `startAudioCapture()` - Sends audio chunks every 500ms
5. `startSpeechRecognition()` - Web Speech API for transcription
6. `playAvatarResponse()` - Syncs audio + visemes for lip-sync

### Avatar Component (`app/app/page.tsx`, lines 171-227)

```typescript
// Abridged excerpt; mixer setup for the idle animation and the returned JSX are omitted
function Avatar({ liveBlend, avatarUrl, position, rotation, scale }) {
  const gltf = useGLTF(avatarUrl);
  const { scene, animations } = gltf;
  const idleAnimGLTF = useGLTF('/idle-animation.glb');

  // Find all meshes with morph targets (for lip-sync)
  const morphMeshes = useMemo(() => {
    const arr = [];
    scene.traverse((o) => {
      if (o.morphTargetDictionary && o.morphTargetInfluences) {
        arr.push(o);
      }
    });
    return arr;
  }, [scene]);

  // Animation loop: update morph targets + play idle animation
  useFrame((_, dt) => {
    if (mixerRef.current) mixerRef.current.update(dt);

    morphMeshes.forEach((m) => {
      Object.entries(liveBlend).forEach(([name, target]) => {
        const i = m.morphTargetDictionary[name];
        if (i !== undefined) {
          m.morphTargetInfluences[i] += (target - m.morphTargetInfluences[i]) * dt * 25;
        }
      });
    });
  });

  // ...
}
```

**Blend Shapes** (ARKit standard):
- `jawOpen`: Mouth opening
- `mouthSmile`: Smile intensity
- `mouthFrown`: Frown intensity
- `mouthPucker`: Lip pucker (for "oo" sounds)
- And ~50 more ARKit blend shapes

### Avaturn Modal (`app/app/page.tsx`, lines 91-170)

```typescript
// Abridged excerpt; in the component these calls run inside an async effect
function AvaturnModal({ open, onClose, onExport, subdomain = "mrrrme" }) {
  // Dynamically import the Avaturn SDK from its CDN
  const AvaturnSDK = await importFromCdn(
    "https://cdn.jsdelivr.net/npm/@avaturn/sdk/dist/index.js"
  );

  // Initialize the SDK in a container element
  await sdk.init(containerRef.current, {
    url: `https://${subdomain}.avaturn.dev`
  });

  // Listen for the export event
  sdk.on("export", (data) => {
    const glbUrl = data?.links?.glb?.url;
    onExport(glbUrl); // Pass the URL to the parent
  });
}
```

### Emotion Test Modal (`app/app/page.tsx`, lines 20-89)

A real-time emotion dashboard showing:
- Current emotion with confidence
- 4-class probability distribution (Neutral, Happy, Sad, Angry)
- Face quality score
- Prediction counter

---

## WebSocket Protocol

### Client Messages

**Authentication**:
```json
{
  "type": "auth",
  "token": "session_token_from_login"
}
```

**Video Frame** (every 200ms):
```json
{
  "type": "video_frame",
  "frame": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."
}
```

**Audio Chunk** (every 500ms):
```json
{
  "type": "audio_chunk",
  "audio": "base64_webm_audio_data"
}
```

**Speech End** (when the user stops talking):
```json
{
  "type": "speech_end",
  "text": "transcribed speech from Web Speech API"
}
```

**Update Preferences**:
```json
{
  "type": "preferences",
  "voice": "female" | "male",
  "language": "en" | "nl",
  "personality": "therapist" | "coach"
}
```

**Request Greeting**:
```json
{
  "type": "request_greeting"
}
```

### Server Messages

**Authentication Success**:
```json
{
  "type": "authenticated",
  "username": "alice",
  "summary": "User summary from previous conversations..."
}
```

**Face Emotion Update**:
```json
{
  "type": "face_emotion",
  "emotion": "Happy",
  "confidence": 0.87,
  "probabilities": [0.05, 0.87, 0.04, 0.04],
  "quality": 0.92
}
```
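
The `probabilities` array maps onto the four classes shown in the Emotion Test Modal. A minimal sketch of turning it into a labeled top prediction, assuming the class order is (Neutral, Happy, Sad, Angry) as listed there (`topEmotion` is an illustrative helper, not the app's actual code):

```typescript
// Assumed class order, taken from the Emotion Test Modal description
const CLASSES = ["Neutral", "Happy", "Sad", "Angry"] as const;

// Return the highest-probability class and its probability as confidence
function topEmotion(probabilities: number[]): { emotion: string; confidence: number } {
  let best = 0;
  for (let i = 1; i < probabilities.length; i++) {
    if (probabilities[i] > probabilities[best]) best = i;
  }
  return { emotion: CLASSES[best], confidence: probabilities[best] };
}
```

For the example message above, `topEmotion([0.05, 0.87, 0.04, 0.04])` yields `Happy` with confidence `0.87`, matching the `emotion` and `confidence` fields the server already sends.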

**Voice Emotion Update**:
```json
{
  "type": "voice_emotion",
  "emotion": "Happy"
}
```

**LLM Response** (with avatar TTS):
```json
{
  "type": "llm_response",
  "text": "That's wonderful to hear!",
  "emotion": "Happy",
  "intensity": 0.75,
  "audio_url": "/static/tts_12345.mp3",
  "visemes": [
    {"t": 0.0, "blend": {"jawOpen": 0.0}},
    {"t": 0.1, "blend": {"jawOpen": 0.3, "mouthSmile": 0.2}},
    {"t": 0.2, "blend": {"jawOpen": 0.5}}
  ]
}
```

**Error**:
```json
{
  "type": "error",
  "message": "Invalid session - please login again"
}
```
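
During playback, the `visemes` track from `llm_response` has to be sampled against the audio's current time. A minimal sketch (a step-function sampler; the app may interpolate instead, and `sampleVisemes` is an illustrative name):

```typescript
// Keyframe shape matching the "visemes" entries in llm_response
type Keyframe = { t: number; blend: Record<string, number> };

// Pick the latest keyframe at or before time t (track assumed sorted by t)
function sampleVisemes(track: Keyframe[], t: number): Record<string, number> {
  let active: Record<string, number> = {};
  for (const kf of track) {
    if (kf.t <= t) active = kf.blend;
    else break;
  }
  return active;
}
```

Calling this each frame with `audio.currentTime` and feeding the result into the avatar's `liveBlend` keeps the mouth shapes tied to the audio clock rather than to render timing.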

---

## API Routes

### CORS Proxy (`app/api/avaturn-proxy/route.ts`)

Proxies avatar GLB files to bypass CORS restrictions.

**Allowed Domains**:
- `*.avaturn.dev`
- `*.avaturn.me`
- `*.cloudfront.net`
- `storage.googleapis.com`
- `*.amazonaws.com`
- `models.readyplayer.me`

**Usage**:
```
GET /api/avaturn-proxy?url=https://models.readyplayer.me/avatar.glb
```

**Response**:
- Success: Binary GLB file with `model/gltf-binary` content-type
- Error 400: URL not allowed
- Error 502: Upstream fetch failed
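
The 400 case comes from an allow-list check on the requested URL. A sketch of that kind of check, mirroring the domain list above (the actual logic lives in `route.ts`; `isAllowedUrl` and the HTTPS-only restriction are assumptions here):

```typescript
// Suffixes with a leading dot match subdomains (*.example.com);
// bare hostnames must match exactly. Mirrors the "Allowed Domains" list.
const ALLOWED = [
  ".avaturn.dev",
  ".avaturn.me",
  ".cloudfront.net",
  "storage.googleapis.com",
  ".amazonaws.com",
  "models.readyplayer.me",
];

function isAllowedUrl(raw: string): boolean {
  try {
    const { protocol, hostname } = new URL(raw);
    if (protocol !== "https:") return false; // assumed: only https upstreams
    return ALLOWED.some((d) =>
      d.startsWith(".") ? hostname.endsWith(d) : hostname === d
    );
  } catch {
    return false; // not a parseable URL β†’ reject with 400
  }
}
```

Checking the parsed hostname (rather than substring-matching the raw URL) avoids bypasses like `https://evil.com/?x=.avaturn.dev`.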

---

## Design System

### Color Palette

**Light Mode**:
- Background: `#ffffff` (pure white)
- Foreground: `#1d1d1f` (near black)
- Accent: `#007aff` β†’ `#5e5ce6` (blue gradient)
- Surface: `rgba(255, 255, 255, 0.72)` (frosted glass)

**Dark Mode**:
- Background: `#000000` (pure black)
- Foreground: `#f5f5f7` (off-white)
- Accent: `#0a84ff` β†’ `#5e5ce6` (lighter blue gradient)
- Surface: `rgba(28, 28, 30, 0.72)` (dark frosted glass)

### Glass Morphism

```css
.glass {
  background: var(--surface);
  backdrop-filter: blur(24px) saturate(180%);
  border: 1px solid var(--border);
}

.glass-elevated {
  background: var(--surface-elevated);
  backdrop-filter: blur(32px) saturate(200%);
  border: 1px solid var(--border-strong);
}
```

### Animations

```css
@keyframes fadeIn {
  from { opacity: 0; }
  to { opacity: 1; }
}

@keyframes scaleIn {
  from { opacity: 0; transform: scale(0.95); }
  to { opacity: 1; transform: scale(1); }
}

@keyframes slideInRight {
  from { transform: translateX(100%); }
  to { transform: translateX(0); }
}
```

---

## Component API

### Avatar Component

```typescript
<Avatar
  liveBlend={liveBlend}           // Current viseme blend shapes
  avatarUrl={avatarUrl}           // GLB URL from Avaturn
  position={[-0.01, -2.12, 0.06]} // [x, y, z] position
  rotation={[0.00, 0.51, 0.00]}   // [x, y, z] Euler angles
  scale={1.25}                    // Uniform scale
/>
```

### Message Bubble

```typescript
<MessageBubble
  message={{
    id: "unique_id",
    role: "user" | "assistant",
    content: "Message text",
    timestamp: new Date(),
    emotion: "Happy" // optional
  }}
/>
```

### Control Button

```typescript
<ControlButton
  onClick={() => handleAction()}
  icon={<svg>...</svg>}
  label="Button label"
  variant="default" | "danger" | "primary"
/>
```

### Emotion Test Modal

```typescript
<EmotionTestModal
  open={showEmotionTest}
  onClose={() => setShowEmotionTest(false)}
  wsRef={wsRef} // WebSocket ref for listening to face_emotion messages
/>
```

---

## Build & Deployment

### Development

```bash
npm run dev
```

Runs on `http://localhost:3000`.

### Production Build

```bash
npm run build
npm start
```

Creates a `.next/standalone` directory for deployment.

### Docker Build

Handled by the root `Dockerfile`:

```dockerfile
# Stage 1: Install frontend dependencies
FROM node:20-alpine AS frontend-deps
WORKDIR /app/avatar-frontend
COPY avatar-frontend/package*.json ./
RUN npm ci

# Stage 2: Build frontend
FROM node:20-alpine AS frontend-builder
WORKDIR /app/avatar-frontend
COPY --from=frontend-deps /app/avatar-frontend/node_modules ./node_modules
COPY avatar-frontend/ ./
RUN npm run build

# Stage 3 (in the final runtime stage): copy the standalone output
COPY --from=frontend-builder /app/avatar-frontend/.next/standalone ./avatar-frontend/.next/standalone
```

### Nginx Proxy

The frontend is served through Nginx on port 7860:

```nginx
# Next.js frontend
location / {
    proxy_pass http://127.0.0.1:3001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}

# WebSocket (backend)
location /ws {
    proxy_pass http://127.0.0.1:8000/ws;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}

# Backend API
location /api {
    proxy_pass http://127.0.0.1:8000/api;
}

# Avatar TTS
location /static {
    proxy_pass http://127.0.0.1:8765/static;
}
```

---

## User Flow

### 1. Landing Page (`/`)
- Marketing content
- Features showcase
- "Get Started" CTA β†’ redirects to `/login`

### 2. Authentication (`/login`)
- Username + password form
- Signup or login toggle
- Stores token in `localStorage` on success
- Redirects to `/app`

### 3. Main App (`/app`)

**On Mount**:
1. Check `localStorage` for a token
2. Redirect to `/login` if not authenticated
3. Apply theme preference (light/dark)

**On Start**:
1. Request camera + microphone permissions
2. Connect WebSocket with auth token
3. Start video/audio capture
4. Start speech recognition
5. Show "Create Avatar" welcome screen

**Avatar Creation**:
1. Click "Create Your Avatar"
2. Avaturn SDK modal opens
3. Customize avatar
4. Export GLB URL
5. Proxy fetches the GLB via `/api/avaturn-proxy`
6. Three.js loads and renders the avatar
7. Auto-greeting plays with lip-sync

**Conversation Loop**:
1. User speaks (Web Speech API transcribes)
2. On silence, send `speech_end` message
3. Backend processes emotions + generates response
4. Server sends `llm_response` with audio + visemes
5. Play audio while animating the avatar's mouth
6. On audio end, resume speech recognition
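
The conversation loop above can be sketched as a tiny state machine. The state and event names here are illustrative (the app drives equivalent transitions from Web Speech API callbacks and the audio element's `ended` event):

```typescript
// listening: mic + speech recognition active
// waiting:   speech_end sent, awaiting llm_response
// speaking:  playing TTS audio with lip-sync
type Phase = "listening" | "waiting" | "speaking";
type Event = "speech_end" | "llm_response" | "audio_ended";

// Only valid transitions advance the phase; stray events are ignored
function nextPhase(phase: Phase, event: Event): Phase {
  switch (event) {
    case "speech_end":   return phase === "listening" ? "waiting" : phase;
    case "llm_response": return phase === "waiting" ? "speaking" : phase;
    case "audio_ended":  return phase === "speaking" ? "listening" : phase;
  }
}
```

Modeling the loop this way makes it explicit that recognition is paused while the avatar speaks, which prevents the avatar from transcribing its own TTS output.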

---

## Performance Optimizations

### 1. Frame Rate Control
- **Video**: 5 FPS (200ms intervals) - emotions change slowly
- **Audio**: 2 Hz (500ms chunks) - sufficient for real-time
- **Canvas Rendering**: 60 FPS via `requestAnimationFrame`

### 2. Asset Loading
- **Git LFS**: Large files (idle-animation.glb) kept out of git history
- **Image Optimization**: Disabled (`unoptimized: true`) for faster builds
- **Standalone Build**: Minimal production bundle

### 3. State Management
- **Refs for Non-Reactive State**: `isPausedRef`, `recognitionRef`, `wsRef`
- **Minimal Re-Renders**: Only update the UI when necessary
- **Memoization**: `useMemo` for morph-mesh detection

### 4. Network Efficiency
- **JPEG Compression**: 70% quality for video frames
- **Base64 Encoding**: Binary data transmission
- **WebSocket Keep-Alive**: Single persistent connection

---

## Troubleshooting

### Camera/Microphone Access Denied

**Problem**: Browser doesn't request permissions

**Solution**:
- Use HTTPS in production (required for `getUserMedia`)
- Check browser settings β†’ Site permissions
- Try a different browser (Chrome recommended)

### WebSocket Connection Failed

**Problem**: `ws://localhost:3000/ws` not connecting

**Check**:
```bash
# Ensure the backend is running
curl http://localhost:8000/health

# Check the WebSocket endpoint
curl http://localhost:8000/ws
```

**Fix**: Update the `BACKEND_WS` URL in `app/app/page.tsx`

### Avatar Not Loading

**Problem**: Avatar shows a blank screen

**Possible Causes**:
1. GLB URL blocked by CORS β†’ use `/api/avaturn-proxy?url=...`
2. Invalid GLB format β†’ re-export from Avaturn
3. Git LFS not installed β†’ run `git lfs pull`

**Debug**:
```bash
# Check that idle-animation.glb is a real file (not an LFS pointer)
file public/idle-animation.glb
# Should show "glTF binary", not "ASCII text"

# Check that the proxy works
curl "http://localhost:3000/api/avaturn-proxy?url=https://models.readyplayer.me/some-avatar.glb"
```

### Speech Recognition Not Working

**Problem**: Microphone captures but no transcription

**Fixes**:
- **Check browser support**: Chrome/Edge only (Safari doesn't support continuous mode)
- **Language mismatch**: Ensure `recognition.lang` matches `selectedLanguage`
- **Restart**: Change the language to force a recognition restart

**Debug**:
```javascript
// Add to startSpeechRecognition()
recognition.onerror = (event) => {
  console.log('[SpeechRec] Error:', event.error, event.message);
};

recognition.onresult = (event) => {
  console.log('[SpeechRec] Result:', event.results[event.resultIndex][0].transcript);
};
```

### Avatar Lip-Sync Out of Sync

**Problem**: Mouth moves too early or too late

**Fix 1**: Adjust the viseme interpolation speed in `useFrame()`:
```typescript
// Faster interpolation (current: dt * 25)
m.morphTargetInfluences[i] += (target - current) * dt * 40;

// Slower interpolation
m.morphTargetInfluences[i] += (target - current) * dt * 15;
```

**Fix 2**: Add an offset to the viseme timing:
```typescript
const t = audioRef.current.currentTime + 0.05; // 50ms lookahead
```

---

## Dependencies

### Core

```json
{
  "next": "16.0.0",
  "react": "19.2.0",
  "react-dom": "19.2.0",
  "three": "^0.180.0",
  "@react-three/fiber": "^9.4.0",
  "@react-three/drei": "^10.7.6"
}
```

### Dev Dependencies

```json
{
  "typescript": "^5",
  "tailwindcss": "^4",
  "@tailwindcss/postcss": "^4",
  "eslint": "^9",
  "eslint-config-next": "16.0.0"
}
```

### Three.js Ecosystem

- **React Three Fiber**: React renderer for Three.js
- **Drei**: Helper components (Environment, Html, useGLTF)
- **Three.js 0.180**: Core 3D engine

**Why Three.js 0.180?**
- Compatible with Avaturn SDK exports
- Supports ARKit blend shapes
- GLTFLoader with morph targets

---

## Authentication Flow

### Signup

```typescript
POST /api/signup
{
  "username": "alice",
  "password": "secure123"
}

Response:
{
  "success": true,
  "message": "Account created!"
}
```

### Login

```typescript
POST /api/login
{
  "username": "alice",
  "password": "secure123"
}

Response:
{
  "success": true,
  "token": "random_session_token_32_chars",
  "username": "alice",
  "user_id": "user_abc123",
  "summary": "Previous conversation summary or null"
}
```

### Token Storage

```typescript
// Save to localStorage
localStorage.setItem("mrrrme_token", token);
localStorage.setItem("mrrrme_username", username);

// Retrieve on app load
const token = localStorage.getItem("mrrrme_token");
if (!token) router.push("/login");
```

### Logout

```typescript
POST /api/logout
{
  "token": "session_token"
}

// Frontend cleanup:
localStorage.removeItem("mrrrme_token");
localStorage.removeItem("mrrrme_username");
wsRef.current?.close();
mediaRecorderRef.current?.stop();
recognitionRef.current?.stop();
```

---
924
+
925
+ ## Responsive Design
926
+
927
+ ### Breakpoints (Tailwind)
928
+
929
+ ```typescript
930
+ // Mobile
931
+ default (< 640px)
932
+
933
+ // Tablet
934
+ md: (>= 768px)
935
+
936
+ // Desktop
937
+ lg: (>= 1024px)
938
+ ```
939
+
940
+ ### Mobile Adaptations
941
+
942
+ **History Panel**:
943
+ ```typescript
944
+ // Mobile: Full width
945
+ className="w-full"
946
+
947
+ // Desktop: Fixed 420px
948
+ className="md:w-[420px]"
949
+ ```
950
+
951
+ **Message Bubbles**:
952
+ ```typescript
953
+ // Mobile: 85% width
954
+ maxWidth: "85%"
955
+
956
+ // Desktop: 70% width
957
+ md:maxWidth: "70%"
958
+ ```
959
+
960
+ ---
961
+
962
+ ## Known Issues
963
+
964
+ ### Current Limitations
965
+
966
+ 1. **Browser Support**: Chrome/Edge only for speech recognition
967
+ 2. **Mobile Safari**: No continuous speech recognition
968
+ 3. **Avatar Loading**: Requires stable internet for GLB download
969
+ 4. **Viseme Coverage**: Not all phonemes have perfect ARKit mappings
970
+ 5. **Memory Usage**: Three.js can consume 200-400 MB RAM
971
+
972
+ ### Workarounds
973
+
974
+ **Speech Recognition on Safari**:
975
+ - Use text input instead (bottom bar)
976
+ - Fallback to server-side Whisper transcription
977
+
978
+ **Slow Avatar Loading**:
979
+ - Preload idle-animation.glb (already in /public)
980
+ - Cache Avaturn exports in IndexedDB (future work)
981
+
982
+ **High Memory Usage**:
983
+ - Clear previous avatar before loading new one:
984
+ ```typescript
985
+ if (oldUrl !== DEFAULT_AVATAR) {
986
+ (useGLTF as any).clear?.(oldUrl);
987
+ }
988
+ if (objectUrlRef.current) {
989
+ URL.revokeObjectURL(objectUrlRef.current);
990
+ }
991
+ ```
992
+
993
+ ---
994
+
995
## Future Enhancements

### Planned Features (Weeks 10-15)

**Avatar Improvements**:
- Emotion-driven facial expressions (smile, frown, concern)
- Eye gaze tracking (looks at the camera)
- Head movement (subtle nodding, tilting)
- Blink animation at natural intervals

**UI/UX**:
- Emotion timeline graph (Chart.js or Recharts)
- Export conversation to CSV/JSON
- Session statistics dashboard
- Advanced settings (fusion weights, model selection)

**Performance**:
- WebWorker for audio processing
- OffscreenCanvas for video encoding
- IndexedDB caching for avatars

**Accessibility**:
- Screen reader support
- Keyboard navigation
- High contrast mode
- Text size controls

---

## Development Guidelines

### Code Style

**TypeScript**:
- Strict mode enabled
- Explicit types for function parameters
- Avoid `any` types

**React**:
- Functional components only
- Hooks for state management
- `useCallback` for expensive functions
- `useMemo` for computed values

**Naming Conventions**:
- Components: `PascalCase`
- Functions: `camelCase`
- Constants: `UPPER_SNAKE_CASE`
- CSS Variables: `--kebab-case`

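A quick illustration of these conventions side by side (all names here are hypothetical, not taken from the codebase):

```typescript
// Constant: UPPER_SNAKE_CASE
const MAX_HISTORY_MESSAGES = 50;

// Function: camelCase
function trimHistory(messages: string[]): string[] {
  return messages.slice(-MAX_HISTORY_MESSAGES);
}

// Component: PascalCase; CSS custom properties: --kebab-case
// function HistoryPanel() { return <div style={{ color: "var(--accent-color)" }} />; }

console.log(trimHistory(Array.from({ length: 60 }, (_, i) => `msg-${i}`)).length); // 50
```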
### File Organization

```
app/
  page.tsx          # Default export component
  layout.tsx        # Layout wrapper
  api/
    route.ts        # API route handler
```

### State Management

**Local State**: `useState` for UI toggles
**Refs**: `useRef` for non-reactive values (WebSocket, MediaRecorder)
**Global State**: prop drilling (no Redux/Zustand needed for an app this small)

---

## Testing

### Manual Testing Checklist

- [ ] Login with new account
- [ ] Login with existing account
- [ ] Create avatar via Avaturn
- [ ] Avatar loads and displays
- [ ] Camera permission granted
- [ ] Microphone permission granted
- [ ] Face emotion updates in real-time
- [ ] Speech recognition transcribes correctly
- [ ] LLM response plays with lip-sync
- [ ] Switch language (English ↔ Dutch)
- [ ] Switch voice (Male ↔ Female)
- [ ] Switch personality (Therapist ↔ Coach)
- [ ] Toggle light/dark mode
- [ ] View conversation history
- [ ] Pause/resume listening
- [ ] Logout properly closes connections

### Browser Testing

**Recommended**:
- Chrome 120+ (full support)
- Edge 120+ (full support)

**Limited Support**:
- Firefox (no Web Speech API continuous mode)
- Safari (no Web Speech API on desktop)

**Mobile**:
- Chrome Android (works)
- Safari iOS (limited - no continuous speech)

---

## Performance Metrics

### Bundle Size

```bash
npm run build

# Output:
Route (app)                 Size      First Load JS
┌ ○ /                       15.2 kB   105 kB
├ ○ /api/avaturn-proxy      0 B       0 B
├ ○ /app                    89.3 kB   195 kB
└ ○ /login                  12.8 kB   102 kB

Total First Load JS: ~195 kB (with Three.js)
```

### Load Times (Local Dev)

- Initial page load: 1.2s
- Avatar GLB download: 2-4s (depends on size)
- WebSocket connection: <100ms
- First video frame: 200ms

### Runtime Performance

- FPS: 60 (Three.js canvas)
- Memory: 200-400 MB
- CPU: 15-25% (with webcam)
- Network: 50-100 KB/s (video + audio upload)

---

## Security Considerations

### Authentication

- Tokens are random 32-character strings
- Passwords are hashed with SHA-256 (server-side)
- The session token is validated on every WebSocket message
- Auto-logout on invalid token

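For reference, token generation and hashing of this shape are one-liners with a standard crypto library. A sketch using Node's built-in `crypto` module (illustrative only — the actual hashing happens on the backend, which may use a different runtime):

```typescript
import { createHash, randomBytes } from "node:crypto";

// Random 32-character session token: 16 random bytes -> 32 hex characters
function makeToken(): string {
  return randomBytes(16).toString("hex");
}

// SHA-256 password hash, hex-encoded
function hashPassword(password: string): string {
  return createHash("sha256").update(password, "utf8").digest("hex");
}

console.log(makeToken().length); // 32
console.log(hashPassword("abc")); // ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```

For production, a dedicated password-hashing scheme (bcrypt, scrypt, or Argon2) with per-user salts is generally preferred over plain SHA-256.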
### Data Privacy

- Video/audio chunks are sent to the backend, not stored
- No face recognition or identification
- Conversation history is saved per user in the backend SQLite database
- localStorage tokens are cleared on logout

### CORS

- The Avaturn proxy restricts requests to whitelisted domains
- Backend CORS allows all origins (dev only)
- Production should restrict CORS to specific domains

---

## Contributing

### Development Setup

```bash
# Fork the repository, then clone your fork
git clone https://github.com/YourUsername/MrrrMe.git
cd MrrrMe/avatar-frontend

# Install dependencies
npm install

# Create a feature branch
git checkout -b feature/your-feature

# Run the dev server
npm run dev

# Make changes, test thoroughly

# Commit and push
git add .
git commit -m "Add your feature"
git push origin feature/your-feature

# Open a Pull Request
```

### Code Review Checklist

- [ ] TypeScript types are correct
- [ ] No console.log in production code
- [ ] Components are properly memoized
- [ ] CSS variables used (no hardcoded colors)
- [ ] Responsive design tested
- [ ] WebSocket cleanup on unmount
- [ ] Error handling implemented

---

## License

MIT License - See root LICENSE file

---

## Contact

**Project Team**:
- Musaed Al-Fareh - [email protected]
- Michon Goddijn - [email protected]
- Lorena Kraljić - [email protected]

**Course**: Applied Data Science - Artificial Intelligence
**Institution**: Breda University of Applied Sciences

---

## Acknowledgments

- **Avaturn**: 3D avatar creation platform
- **Pmndrs**: React Three Fiber ecosystem
- **Next.js Team**: Framework development
- **Three.js**: WebGL rendering engine

---

**Last Updated**: December 10, 2024
**Version**: 2.0.0 (Next.js 16 + React 19)
**Status**: Production Ready