MusaedMusaedSadeqMusaedAl-Fareh225739 committed
Commit 10fba92 · Parent(s): 6b3e217

backend folder
.gitignore CHANGED
@@ -4,5 +4,3 @@ __pycache__/
   node_modules/
   .next/
   *.log
- mrrrme/backend/
- avatar-frontend/backend/

mrrrme/backend/README.md ADDED
@@ -0,0 +1,127 @@
+ # MrrrMe Backend - Refactored Structure
+
+ This is a refactored version of `backend_server.py`, split into modular components.
+
+ ## 📁 Directory Structure
+
+ ```
+ backend/
+ ├── __init__.py          # Main package init
+ ├── app.py               # FastAPI app setup
+ ├── config.py            # Configuration & constants
+ ├── websocket.py         # WebSocket handler (TO BE CREATED)
+ │
+ ├── auth/                # Authentication
+ │   ├── __init__.py
+ │   ├── models.py        # Pydantic models
+ │   ├── database.py      # Database init & helpers
+ │   └── routes.py        # Auth endpoints
+ │
+ ├── session/             # Session management
+ │   ├── __init__.py
+ │   ├── manager.py       # Token validation, user context
+ │   └── summary.py       # AI summary generation
+ │
+ ├── models/              # AI model loading
+ │   ├── __init__.py
+ │   └── loader.py        # Async model initialization
+ │
+ ├── processing/          # Core processing logic
+ │   ├── __init__.py
+ │   ├── video.py         # Video frame processing
+ │   ├── audio.py         # Audio chunk processing
+ │   ├── speech.py        # Speech processing (TO BE CREATED)
+ │   └── fusion.py        # Emotion fusion
+ │
+ ├── debug/               # Debug endpoints
+ │   ├── __init__.py
+ │   └── routes.py        # Debug routes
+ │
+ └── utils/               # Utilities
+     ├── __init__.py
+     ├── helpers.py       # Helper functions
+     └── patches.py       # GPU & system patches
+ ```
+
+ ## 🚀 How to Use
+
+ ### Option 1: Drop-in Replacement
+ 1. Copy the `backend/` folder to `mrrrme/backend/`
+ 2. Create/update `mrrrme/backend_new.py`:
+
+ ```python
+ """New modular backend server"""
+ from backend import app
+
+ if __name__ == "__main__":
+     import uvicorn
+     uvicorn.run(app, host="0.0.0.0", port=8000)
+ ```
+
+ 3. Run: `python mrrrme/backend_new.py`
+
+ ### Option 2: Gradual Migration
+ Keep your old `backend_server.py` and slowly migrate endpoints:
+
+ 1. Import refactored modules:
+    ```python
+    from backend.auth import router as auth_router
+    from backend.session import validate_token
+    ```
+ 2. Replace sections one at a time
+
+ ## 🔧 Still Need to Create
+
+ The following files still need the full logic ported from `backend_server.py`:
+
+ ### `websocket.py` (~400 lines)
+ - Main WebSocket endpoint logic
+ - Message type routing
+ - Greeting generation
+ - Video/audio/speech handling orchestration
+
+ ### `processing/speech.py` (~250 lines)
+ - Full speech processing pipeline
+ - Transcription filtering
+ - Emotion detection coordination
+ - LLM context preparation
+ - Avatar TTS integration
+
+ ## 📋 Migration Checklist
+
+ - [x] Config & environment setup
+ - [x] Authentication (signup/login/logout)
+ - [x] Database management
+ - [x] Session validation
+ - [x] AI summary generation
+ - [x] Model loading
+ - [x] Video frame processing
+ - [x] Audio chunk processing
+ - [x] Emotion fusion logic
+ - [x] Debug endpoints
+ - [ ] WebSocket handler (main logic)
+ - [ ] Speech processing pipeline
+ - [ ] Greeting generation logic
+ - [ ] Full integration testing
+
+ ## 🎯 Benefits
+
+ 1. **Modularity**: Each component has a single responsibility
+ 2. **Testability**: Easy to unit test individual modules
+ 3. **Maintainability**: Find and fix bugs faster
+ 4. **Scalability**: Add new features without bloating one file
+ 5. **Collaboration**: Multiple devs can work on different modules
+
+ ## 📦 Original File Size
+
+ - `backend_server.py`: 47KB (1,300 lines)
+ - Refactored: 12 modules (~100-300 lines each)
+
+ ## ⚠️ Important Notes
+
+ - Ensure all imports match your project structure
+ - Update `mrrrme` module imports in `models/loader.py`
+ - Test thoroughly before replacing production code
+ - Keep `backend_server.py` as a backup during migration
+
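The message routing that the missing `websocket.py` will need can be sketched as a pure dispatch table, independent of FastAPI. Everything here is illustrative: the handler names are placeholders standing in for `processing.video.process_video_frame` and `processing.audio.process_audio_chunk`, not functions from this commit.

```python
import asyncio

async def handle_video(payload):
    # Stand-in for processing.video.process_video_frame
    return {"type": "face_emotion", "emotion": "Neutral"}

async def handle_audio(payload):
    # Stand-in for processing.audio.process_audio_chunk
    return {"type": "voice_emotion", "emotion": "Neutral"}

HANDLERS = {"video_frame": handle_video, "audio_chunk": handle_audio}

async def route_message(msg: dict):
    """Dispatch an incoming WebSocket message by its 'type' field."""
    handler = HANDLERS.get(msg.get("type"))
    if handler is None:
        return {"type": "error", "detail": f"unknown type: {msg.get('type')}"}
    return await handler(msg.get("data"))

result = asyncio.run(route_message({"type": "video_frame", "data": "..."}))
```

Keeping the dispatch table separate from the `WebSocket` accept/receive loop makes it unit-testable without a running server, which is the point of the refactor.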
mrrrme/backend/__init__.py ADDED
@@ -0,0 +1,10 @@
+ """MrrrMe Backend - Refactored Modular Structure"""
+ from .app import app
+ from .config import *
+ from .utils import apply_all_patches
+
+ # Apply system patches on import
+ apply_all_patches()
+
+ __version__ = "2.0.0"
+ __all__ = ['app']
mrrrme/backend/app.py ADDED
@@ -0,0 +1,61 @@
+ """MrrrMe Backend - FastAPI Application Setup"""
+ import asyncio
+ from fastapi import FastAPI
+ from fastapi.middleware.cors import CORSMiddleware
+
+ # Import the loader module (not its attributes) so models_ready reflects
+ # the live value after background loading finishes.
+ from .models import loader
+ from .utils.helpers import get_avatar_api_url, check_avatar_service
+ from .auth.routes import router as auth_router
+ from .debug.routes import router as debug_router
+ from .websocket import websocket_endpoint
+
+ # Create FastAPI app
+ app = FastAPI(title="MrrrMe AI Backend", version="1.0.0")
+
+ # CORS for browser access
+ app.add_middleware(
+     CORSMiddleware,
+     allow_origins=["*"],
+     allow_credentials=True,
+     allow_methods=["*"],
+     allow_headers=["*"],
+ )
+
+ # Include routers
+ app.include_router(auth_router, prefix="/api")
+ app.include_router(debug_router, prefix="/api/debug")
+
+ # WebSocket endpoint
+ app.add_api_websocket_route("/ws", websocket_endpoint)
+
+ @app.on_event("startup")
+ async def startup_event():
+     """Start loading models in the background once the server is ready"""
+     print("[Backend] 🚀 Starting up...")
+
+     # Check avatar service availability
+     avatar_api = get_avatar_api_url()
+     print(f"[Backend] 🎭 Avatar API URL: {avatar_api}")
+     await check_avatar_service(avatar_api)
+
+     # Load models asynchronously
+     asyncio.create_task(loader.load_models())
+
+ @app.get("/")
+ async def root():
+     """Root endpoint"""
+     return {
+         "status": "running",
+         "models_ready": loader.models_ready,
+         "message": "MrrrMe AI Backend"
+     }
+
+ @app.get("/health")
+ async def health():
+     """Health check - responds immediately"""
+     return {
+         "status": "healthy",
+         "models_ready": loader.models_ready
+     }
mrrrme/backend/auth/__init__.py ADDED
@@ -0,0 +1,7 @@
+ """MrrrMe Backend - Authentication Package"""
+ from .routes import router
+ from .models import SignupRequest, LoginRequest, LogoutRequest
+ from .database import init_db, hash_password, get_db_connection
+
+ __all__ = ['router', 'SignupRequest', 'LoginRequest', 'LogoutRequest',
+            'init_db', 'hash_password', 'get_db_connection']
mrrrme/backend/auth/database.py ADDED
@@ -0,0 +1,67 @@
+ """MrrrMe Backend - Database Management"""
+ import sqlite3
+ import hashlib
+ from ..config import DB_PATH
+
+ def init_db():
+     """Initialize SQLite database with required tables"""
+     conn = sqlite3.connect(DB_PATH)
+     cursor = conn.cursor()
+
+     # Users table
+     cursor.execute("""
+         CREATE TABLE IF NOT EXISTS users (
+             user_id TEXT PRIMARY KEY,
+             username TEXT UNIQUE NOT NULL,
+             password_hash TEXT NOT NULL,
+             created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
+         )
+     """)
+
+     # Sessions table
+     cursor.execute("""
+         CREATE TABLE IF NOT EXISTS sessions (
+             session_id TEXT PRIMARY KEY,
+             user_id TEXT NOT NULL,
+             token TEXT UNIQUE NOT NULL,
+             created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+             is_active BOOLEAN DEFAULT 1
+         )
+     """)
+
+     # Messages table
+     cursor.execute("""
+         CREATE TABLE IF NOT EXISTS messages (
+             message_id INTEGER PRIMARY KEY AUTOINCREMENT,
+             session_id TEXT NOT NULL,
+             role TEXT NOT NULL,
+             content TEXT NOT NULL,
+             emotion TEXT,
+             timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
+         )
+     """)
+
+     # User summaries table
+     cursor.execute("""
+         CREATE TABLE IF NOT EXISTS user_summaries (
+             user_id TEXT PRIMARY KEY,
+             summary_text TEXT NOT NULL,
+             updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
+         )
+     """)
+
+     conn.commit()
+     conn.close()
+     print(f"[Database] ✅ Initialized at {DB_PATH}")
+
+ def hash_password(password: str) -> str:
+     """Hash password using SHA-256.
+     NOTE: unsalted SHA-256 is weak for passwords; prefer PBKDF2/bcrypt/argon2."""
+     return hashlib.sha256(password.encode()).hexdigest()
+
+ def get_db_connection():
+     """Get database connection"""
+     return sqlite3.connect(DB_PATH)
+
+ # Initialize database on module import
+ init_db()
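The `hash_password` helper above uses unsalted SHA-256, which is fast to brute-force and maps equal passwords to equal hashes. A sturdier standard-library-only alternative is PBKDF2; the function names below are illustrative sketches, not part of this commit.

```python
import hashlib
import hmac
import os
from typing import Optional

def hash_password_pbkdf2(password: str, salt: Optional[bytes] = None,
                         iterations: int = 200_000) -> str:
    """Salted PBKDF2-HMAC-SHA256 hash, stored as salt$iterations$digest."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"{salt.hex()}${iterations}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    """Recompute the digest with the stored salt; compare in constant time."""
    salt_hex, iterations, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    bytes.fromhex(salt_hex), int(iterations))
    return hmac.compare_digest(candidate.hex(), digest_hex)
```

Because the salt is stored alongside the digest, swapping this in only requires changing the two call sites in `auth/routes.py` (hash on signup, verify on login) rather than the schema.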
mrrrme/backend/auth/models.py ADDED
@@ -0,0 +1,16 @@
+ """MrrrMe Backend - Authentication Models"""
+ from pydantic import BaseModel
+
+ class SignupRequest(BaseModel):
+     """User signup request model"""
+     username: str
+     password: str
+
+ class LoginRequest(BaseModel):
+     """User login request model"""
+     username: str
+     password: str
+
+ class LogoutRequest(BaseModel):
+     """User logout request model"""
+     token: str
mrrrme/backend/auth/routes.py ADDED
@@ -0,0 +1,111 @@
+ """MrrrMe Backend - Authentication Routes"""
+ import secrets
+ import sqlite3
+ from fastapi import APIRouter, HTTPException
+ from .models import SignupRequest, LoginRequest, LogoutRequest
+ from .database import hash_password, get_db_connection
+ from ..session.summary import generate_session_summary
+
+ router = APIRouter()
+
+ @router.post("/signup")
+ async def signup(req: SignupRequest):
+     """Create new user account"""
+     conn = get_db_connection()
+     cursor = conn.cursor()
+
+     try:
+         user_id = secrets.token_urlsafe(16)
+         cursor.execute(
+             "INSERT INTO users (user_id, username, password_hash) VALUES (?, ?, ?)",
+             (user_id, req.username, hash_password(req.password))
+         )
+         conn.commit()
+         conn.close()
+         return {"success": True, "message": "Account created!"}
+     except sqlite3.IntegrityError:
+         conn.close()
+         raise HTTPException(status_code=400, detail="Username already exists")
+
+ @router.post("/login")
+ async def login(req: LoginRequest):
+     """Login user and create session"""
+     conn = get_db_connection()
+     cursor = conn.cursor()
+
+     # Verify credentials
+     cursor.execute(
+         "SELECT user_id, username FROM users WHERE username = ? AND password_hash = ?",
+         (req.username, hash_password(req.password))
+     )
+
+     result = cursor.fetchone()
+
+     if not result:
+         conn.close()
+         raise HTTPException(status_code=401, detail="Invalid credentials")
+
+     user_id, username = result
+
+     # Create session
+     session_id = secrets.token_urlsafe(16)
+     token = secrets.token_urlsafe(32)
+
+     cursor.execute(
+         "INSERT INTO sessions (session_id, user_id, token) VALUES (?, ?, ?)",
+         (session_id, user_id, token)
+     )
+
+     # Get user summary
+     cursor.execute(
+         "SELECT summary_text FROM user_summaries WHERE user_id = ?",
+         (user_id,)
+     )
+     summary_row = cursor.fetchone()
+     summary = summary_row[0] if summary_row else None
+
+     conn.commit()
+     conn.close()
+
+     return {
+         "success": True,
+         "token": token,
+         "username": username,
+         "user_id": user_id,
+         "summary": summary
+     }
+
+ @router.post("/logout")
+ async def logout(req: LogoutRequest):
+     """Logout user and generate session summary"""
+     conn = get_db_connection()
+     cursor = conn.cursor()
+
+     # Get session info before closing
+     cursor.execute(
+         "SELECT session_id, user_id FROM sessions WHERE token = ? AND is_active = 1",
+         (req.token,)
+     )
+     result = cursor.fetchone()
+
+     if result:
+         session_id, user_id = result
+
+         # Mark session as inactive
+         cursor.execute(
+             "UPDATE sessions SET is_active = 0 WHERE token = ?",
+             (req.token,)
+         )
+         conn.commit()
+         conn.close()
+
+         # Generate summary on explicit logout
+         print(f"[Logout] 📝 Generating summary for user {user_id}...")
+         summary = await generate_session_summary(session_id, user_id)
+         if summary:
+             print(f"[Logout] ✅ Summary generated")
+
+         return {"success": True, "message": "Logged out successfully"}
+     else:
+         conn.close()
+         return {"success": True, "message": "Session already closed"}
mrrrme/backend/config.py ADDED
@@ -0,0 +1,63 @@
+ """MrrrMe Backend - Configuration and Environment Setup"""
+ import os
+
+ # ===== SET CACHE DIRECTORIES FIRST =====
+ os.environ['HF_HOME'] = '/tmp/huggingface'
+ os.environ['TRANSFORMERS_CACHE'] = '/tmp/transformers'
+ os.environ['HF_HUB_CACHE'] = '/tmp/huggingface/hub'
+ os.environ['TORCH_HOME'] = '/tmp/torch'
+
+ # Create cache directories
+ for cache_dir in ['/tmp/huggingface', '/tmp/transformers', '/tmp/huggingface/hub', '/tmp/torch']:
+     os.makedirs(cache_dir, exist_ok=True)
+
+ # ===== DATABASE CONFIGURATION =====
+ # Use /data for Hugging Face Spaces (persistent) or /tmp for local dev
+ if os.path.exists('/data'):
+     DB_PATH = "/data/mrrrme_users.db"
+     print("[Config] 📁 Using persistent storage: /data/mrrrme_users.db")
+ else:
+     DB_PATH = "/tmp/mrrrme_users.db"
+     print("[Config] ⚠️ Using ephemeral storage: /tmp/mrrrme_users.db")
+     print("[Config] ⚠️ Data will reset on rebuild! Enable persistent storage in HF Spaces.")
+
+ # ===== API KEYS =====
+ # Never commit a real key as the fallback default; set it in the environment
+ # (e.g. a Hugging Face Spaces secret) and rotate any key that was committed.
+ GROQ_API_KEY = os.getenv("GROQ_API_KEY", "")
+
+ # ===== FUSION WEIGHTS =====
+ FUSION_WEIGHTS = {
+     'face': 0.5,
+     'voice': 0.3,
+     'text': 0.2
+ }
+
+ # ===== EMOTION MAPPING =====
+ EMOTION_MAP = {'Neutral': 0, 'Happy': 1, 'Sad': 2, 'Angry': 3}
+ FUSE4 = ['Neutral', 'Happy', 'Sad', 'Angry']
+
+ # ===== SPEECH PROCESSING =====
+ AUDIO_BUFFER_SIZE = 5   # Number of audio chunks to buffer
+ AUDIO_BUFFER_KEEP = 3   # Number of chunks to keep after processing
+
+ # ===== MESSAGE FILTERING =====
+ HALLUCINATION_PHRASES = {"thank you", "thanks", "okay", "ok", "you", "yeah", "yep"}
+ MIN_TRANSCRIPTION_LENGTH = 2
+
+ # ===== SUMMARY GENERATION =====
+ MIN_MESSAGES_FOR_SUMMARY = 3
+ SUMMARY_MODEL = "llama-3.1-8b-instant"
+ SUMMARY_MAX_TOKENS = 150
+ SUMMARY_TEMPERATURE = 0.7
+
+ # ===== GREETING MESSAGES =====
+ GREETINGS = {
+     "en": {
+         "new": "Hey {username}! I'm MrrrMe, your emotion AI companion. How are you feeling today?",
+         "returning": "Welcome back, {username}! It's great to see you again. How have you been?"
+     },
+     "nl": {
+         "new": "Hoi {username}! Ik ben MrrrMe, jouw emotie AI-metgezel. Hoe voel je je vandaag?",
+         "returning": "Welkom terug, {username}! Fijn je weer te zien. Hoe gaat het met je?"
+     }
+ }
mrrrme/backend/debug/__init__.py ADDED
@@ -0,0 +1,4 @@
+ """MrrrMe Backend - Debug Package"""
+ from .routes import router
+
+ __all__ = ['router']
mrrrme/backend/debug/routes.py ADDED
@@ -0,0 +1,60 @@
+ """MrrrMe Backend - Debug Routes"""
+ from fastapi import APIRouter
+ from ..auth.database import get_db_connection
+ from ..config import DB_PATH
+
+ router = APIRouter()
+
+ @router.get("/users")
+ async def debug_users():
+     """Debug endpoint - view all users and their summaries"""
+     conn = get_db_connection()
+     cursor = conn.cursor()
+
+     cursor.execute("""
+         SELECT u.username, u.user_id, s.summary_text, s.updated_at
+         FROM users u
+         LEFT JOIN user_summaries s ON u.user_id = s.user_id
+         ORDER BY u.created_at DESC
+     """)
+
+     users = []
+     for username, user_id, summary, updated in cursor.fetchall():
+         users.append({
+             "username": username,
+             "user_id": user_id,
+             "summary": summary,
+             "summary_updated": updated
+         })
+
+     conn.close()
+
+     return {"users": users, "database": DB_PATH}
+
+ @router.get("/sessions")
+ async def debug_sessions():
+     """Debug endpoint - view all active sessions"""
+     conn = get_db_connection()
+     cursor = conn.cursor()
+
+     cursor.execute("""
+         SELECT s.session_id, s.token, u.username, s.is_active, s.created_at
+         FROM sessions s
+         JOIN users u ON s.user_id = u.user_id
+         ORDER BY s.created_at DESC
+         LIMIT 20
+     """)
+
+     sessions = []
+     for session_id, token, username, is_active, created_at in cursor.fetchall():
+         sessions.append({
+             "session_id": session_id,
+             "token_preview": token[:10] + "..." if token else None,
+             "username": username,
+             "is_active": bool(is_active),
+             "created_at": created_at
+         })
+
+     conn.close()
+
+     return {"sessions": sessions, "database": DB_PATH}
mrrrme/backend/models/__init__.py ADDED
@@ -0,0 +1,29 @@
+ """MrrrMe Backend - Models Package"""
+ # NOTE: the worker instances below are rebound inside loader.load_models(),
+ # so importing them from here captures the pre-load value (None). Import
+ # the loader module and read loader.<name> when you need the live instance.
+ from .loader import (
+     load_models,
+     get_models,
+     models_ready,
+     face_processor,
+     text_analyzer,
+     whisper_worker,
+     voice_worker,
+     llm_generator,
+     fusion_engine,
+     FusionEngine
+ )
+
+ __all__ = [
+     'load_models',
+     'get_models',
+     'models_ready',
+     'face_processor',
+     'text_analyzer',
+     'whisper_worker',
+     'voice_worker',
+     'llm_generator',
+     'fusion_engine',
+     'FusionEngine'
+ ]
mrrrme/backend/models/loader.py ADDED
@@ -0,0 +1,97 @@
+ """MrrrMe Backend - AI Model Loader"""
+ import torch
+ import numpy as np
+
+ # Global model variables (rebound by load_models; read via this module)
+ face_processor = None
+ text_analyzer = None
+ whisper_worker = None
+ voice_worker = None
+ llm_generator = None
+ fusion_engine = None
+ models_ready = False
+
+ class FusionEngine:
+     """Multi-modal emotion fusion engine"""
+     def __init__(self, alpha_face=0.5, alpha_voice=0.3, alpha_text=0.2):
+         self.alpha_face = alpha_face
+         self.alpha_voice = alpha_voice
+         self.alpha_text = alpha_text
+
+     def fuse(self, face_probs, voice_probs, text_probs):
+         """Fuse emotion probabilities from multiple modalities"""
+         from ..config import FUSE4
+
+         fused = (
+             self.alpha_face * face_probs +
+             self.alpha_voice * voice_probs +
+             self.alpha_text * text_probs
+         )
+         fused = fused / (np.sum(fused) + 1e-8)
+         fused_idx = int(np.argmax(fused))
+         fused_emotion = FUSE4[fused_idx]
+         intensity = float(np.max(fused))
+         return fused_emotion, intensity
+
+ async def load_models():
+     """Load all AI models asynchronously"""
+     global face_processor, text_analyzer, whisper_worker, voice_worker
+     global llm_generator, fusion_engine, models_ready
+
+     print("[Backend] 🚀 Initializing MrrrMe AI models in background...")
+
+     try:
+         # Import modules (adjust paths based on your actual structure)
+         from mrrrme.vision.face_processor import FaceProcessor
+         from mrrrme.audio.voice_emotion import VoiceEmotionWorker
+         from mrrrme.audio.whisper_transcription import WhisperTranscriptionWorker
+         from mrrrme.nlp.text_sentiment import TextSentimentAnalyzer
+         from mrrrme.nlp.llm_generator_groq import LLMResponseGenerator
+         from ..config import GROQ_API_KEY
+
+         # Load models
+         print("[Backend] Loading FaceProcessor...")
+         face_processor = FaceProcessor()
+
+         print("[Backend] Loading TextSentiment...")
+         text_analyzer = TextSentimentAnalyzer()
+
+         print("[Backend] Loading Whisper...")
+         whisper_worker = WhisperTranscriptionWorker(text_analyzer)
+
+         print("[Backend] Loading VoiceEmotion...")
+         voice_worker = VoiceEmotionWorker(whisper_worker=whisper_worker)
+
+         print("[Backend] Initializing LLM...")
+         llm_generator = LLMResponseGenerator(api_key=GROQ_API_KEY)
+
+         # Initialize fusion engine
+         print("[Backend] Initializing FusionEngine...")
+         fusion_engine = FusionEngine()
+
+         models_ready = True
+
+         print("[Backend] ✅ All models loaded!")
+
+         # GPU check
+         if torch.cuda.is_available():
+             print(f"[Backend] ✅ GPU available: {torch.cuda.get_device_name(0)}")
+         else:
+             print("[Backend] ⚠️ No GPU detected - using CPU mode")
+
+     except Exception as e:
+         print(f"[Backend] ❌ Error loading models: {e}")
+         import traceback
+         traceback.print_exc()
+
+ def get_models():
+     """Get loaded model instances"""
+     return {
+         'face_processor': face_processor,
+         'text_analyzer': text_analyzer,
+         'whisper_worker': whisper_worker,
+         'voice_worker': voice_worker,
+         'llm_generator': llm_generator,
+         'fusion_engine': fusion_engine,
+         'models_ready': models_ready
+     }
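Stripped of the package imports, the fusion arithmetic in `FusionEngine.fuse` can be exercised standalone. This is a sketch with `FUSE4` inlined and the fuse logic extracted as a free function; the sample probability vectors are made up for illustration.

```python
import numpy as np

FUSE4 = ['Neutral', 'Happy', 'Sad', 'Angry']

def fuse(face_probs, voice_probs, text_probs,
         alpha_face=0.5, alpha_voice=0.3, alpha_text=0.2):
    """Weighted average of per-modality probabilities, renormalized."""
    fused = alpha_face * face_probs + alpha_voice * voice_probs + alpha_text * text_probs
    fused = fused / (np.sum(fused) + 1e-8)
    idx = int(np.argmax(fused))
    return FUSE4[idx], float(np.max(fused))

# A strongly happy face outweighs neutral voice/text at the default weights:
emotion, intensity = fuse(
    np.array([0.10, 0.80, 0.05, 0.05]),  # face
    np.array([0.60, 0.20, 0.10, 0.10]),  # voice
    np.array([0.70, 0.10, 0.10, 0.10]),  # text
)
```

With these inputs the weighted sum gives Happy a score of 0.48 against 0.37 for Neutral, so the face modality's 0.5 weight dominates, which is the intended behavior of the default weighting.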
mrrrme/backend/processing/__init__.py ADDED
@@ -0,0 +1,15 @@
+ """MrrrMe Backend - Processing Package"""
+ from .video import process_video_frame
+ from .audio import process_audio_chunk
+ # NOTE: speech.py is still to be created (see README); this import fails until then
+ from .speech import process_speech_end, filter_transcription
+ from .fusion import calculate_fusion, adjust_fusion_weights
+
+ __all__ = [
+     'process_video_frame',
+     'process_audio_chunk',
+     'process_speech_end',
+     'filter_transcription',
+     'calculate_fusion',
+     'adjust_fusion_weights'
+ ]
mrrrme/backend/processing/audio.py ADDED
@@ -0,0 +1,31 @@
+ """MrrrMe Backend - Audio Chunk Processing"""
+ import base64
+ # Import the module, not the instance: voice_worker is None until
+ # loader.load_models() finishes, so a from-import would freeze that None.
+ from ..models import loader
+ from ..config import AUDIO_BUFFER_SIZE, AUDIO_BUFFER_KEEP
+
+ audio_buffer = []
+
+ async def process_audio_chunk(audio_data_b64: str) -> dict:
+     """Process audio chunk for voice emotion detection"""
+     global audio_buffer
+
+     try:
+         audio_data = base64.b64decode(audio_data_b64)
+         audio_buffer.append(audio_data)
+
+         if len(audio_buffer) >= AUDIO_BUFFER_SIZE:
+             voice_probs, voice_emotion = loader.voice_worker.get_probs()
+             audio_buffer = audio_buffer[-AUDIO_BUFFER_KEEP:]
+
+             return {
+                 "type": "voice_emotion",
+                 "emotion": voice_emotion
+             }
+
+         return None
+
+     except Exception as e:
+         print(f"[Audio] Error: {e}")
+         return None
mrrrme/backend/processing/fusion.py ADDED
@@ -0,0 +1,52 @@
+ """MrrrMe Backend - Emotion Fusion Logic"""
+ import numpy as np
+ from ..config import FUSE4, FUSION_WEIGHTS
+
+ def adjust_fusion_weights(face_quality: float, voice_active: bool, text_length: int) -> dict:
+     """
+     Adjust fusion weights based on quality metrics
+
+     Returns:
+         Dict with adjusted weights and adjustment log
+     """
+     adjusted_weights = FUSION_WEIGHTS.copy()
+     adjustments = []
+
+     # Reduce face weight if quality is poor
+     if face_quality < 0.5:
+         adjusted_weights['face'] *= 0.7
+         adjustments.append(f"Face weight reduced (low quality: {face_quality:.3f})")
+
+     # Reduce voice weight if not active
+     if not voice_active:
+         adjusted_weights['voice'] *= 0.5
+         adjustments.append("Voice weight reduced (no recent speech)")
+
+     # Reduce text weight if very short
+     if text_length < 10:
+         adjusted_weights['text'] *= 0.7
+         adjustments.append(f"Text weight reduced (short input: {text_length} chars)")
+
+     # Normalize to sum to 1.0
+     total = sum(adjusted_weights.values())
+     final_weights = {k: v/total for k, v in adjusted_weights.items()}
+
+     return {
+         'weights': final_weights,
+         'adjustments': adjustments
+     }
+
+ def calculate_fusion(face_probs, voice_probs, text_probs, weights: dict):
+     """Calculate weighted fusion of emotion probabilities"""
+     fused_probs = (
+         weights['face'] * face_probs +
+         weights['voice'] * voice_probs +
+         weights['text'] * text_probs
+     )
+     fused_probs = fused_probs / (np.sum(fused_probs) + 1e-8)
+
+     fused_idx = int(np.argmax(fused_probs))
+     fused_emotion = FUSE4[fused_idx]  # shared label order from config
+     intensity = float(np.max(fused_probs))
+
+     return fused_emotion, intensity, fused_probs
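Because `adjust_fusion_weights` always renormalizes to 1.0, downgrading one modality implicitly boosts the others. A standalone sketch of that behavior, with the constants inlined and the logging dropped:

```python
FUSION_WEIGHTS = {'face': 0.5, 'voice': 0.3, 'text': 0.2}

def adjust(face_quality: float, voice_active: bool, text_length: int) -> dict:
    """Scale down low-confidence modalities, then renormalize to sum to 1."""
    w = FUSION_WEIGHTS.copy()
    if face_quality < 0.5:
        w['face'] *= 0.7
    if not voice_active:
        w['voice'] *= 0.5
    if text_length < 10:
        w['text'] *= 0.7
    total = sum(w.values())
    return {k: v / total for k, v in w.items()}

# Poor face quality and no recent speech: raw weights become
# face 0.35, voice 0.15, text 0.2, then renormalize over 0.7.
weights = adjust(face_quality=0.3, voice_active=False, text_length=25)
```

Here the untouched text modality ends up above voice after renormalization, so penalizing two modalities effectively promotes the third even though its raw weight never changed.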
mrrrme/backend/processing/video.py ADDED
@@ -0,0 +1,56 @@
+ """MrrrMe Backend - Video Frame Processing"""
+ import base64
+ import io
+ import cv2
+ import numpy as np
+ from PIL import Image
+ # Import the module so we see the instance created by loader.load_models()
+ from ..models import loader
+
+ async def process_video_frame(frame_data: str) -> dict:
+     """
+     Process video frame for facial emotion detection
+
+     Args:
+         frame_data: Base64 encoded image (data URL)
+
+     Returns:
+         Dict with emotion, confidence, probabilities, quality
+     """
+     try:
+         # Decode base64 image
+         img_data = base64.b64decode(frame_data.split(",")[1])
+         img = Image.open(io.BytesIO(img_data))
+         frame = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
+
+         # Process face emotion
+         try:
+             face_processor = loader.face_processor
+             processed_frame, result = face_processor.process_frame(frame)
+             face_emotion = face_processor.get_last_emotion() or "Neutral"
+             face_confidence = face_processor.get_last_confidence() or 0.0
+             face_probs = face_processor.get_last_probs()
+             face_quality = face_processor.get_last_quality() if hasattr(face_processor, 'get_last_quality') else 0.5
+         except Exception as proc_err:
+             print(f"[FaceProcessor] Error: {proc_err}")
+             face_emotion = "Neutral"
+             face_confidence = 0.0
+             face_probs = np.array([0.25, 0.25, 0.25, 0.25])
+             face_quality = 0.0
+
+         return {
+             "type": "face_emotion",
+             "emotion": face_emotion,
+             "confidence": face_confidence,
+             "probabilities": face_probs.tolist(),
+             "quality": face_quality
+         }
+
+     except Exception as e:
+         print(f"[Video] Error: {e}")
+         return {
+             "type": "face_emotion",
+             "emotion": "Neutral",
+             "confidence": 0.0,
+             "probabilities": [0.25, 0.25, 0.25, 0.25],
+             "quality": 0.0
+         }
mrrrme/backend/session/__init__.py ADDED
@@ -0,0 +1,16 @@
+ """MrrrMe Backend - Session Package"""
+ from .manager import (
+     validate_token,
+     get_user_summary,
+     load_user_history,
+     save_message
+ )
+ from .summary import generate_session_summary
+
+ __all__ = [
+     'validate_token',
+     'get_user_summary',
+     'load_user_history',
+     'save_message',
+     'generate_session_summary'
+ ]
mrrrme/backend/session/manager.py ADDED
@@ -0,0 +1,83 @@
+ """MrrrMe Backend - Session Management"""
+ from typing import Optional, Dict
+ from ..auth.database import get_db_connection
+
+ def validate_token(token: str) -> Optional[Dict[str, str]]:
+     """
+     Validate session token and return session data
+
+     Returns:
+         Dict with session_id, user_id, username if valid
+         None if invalid
+     """
+     conn = get_db_connection()
+     cursor = conn.cursor()
+
+     cursor.execute(
+         """SELECT s.session_id, s.user_id, u.username
+            FROM sessions s
+            JOIN users u ON s.user_id = u.user_id
+            WHERE s.token = ? AND s.is_active = 1""",
+         (token,)
+     )
+
+     result = cursor.fetchone()
+     conn.close()
+
+     if not result:
+         return None
+
+     session_id, user_id, username = result
+     return {
+         'session_id': session_id,
+         'user_id': user_id,
+         'username': username
+     }
+
+ def get_user_summary(user_id: str) -> Optional[str]:
+     """Get user's conversation summary"""
+     conn = get_db_connection()
+     cursor = conn.cursor()
+
+     cursor.execute(
+         "SELECT summary_text FROM user_summaries WHERE user_id = ?",
+         (user_id,)
+     )
+
+     summary_row = cursor.fetchone()
+     conn.close()
+
+     return summary_row[0] if summary_row else None
+
+ def load_user_history(user_id: str, limit: int = 10) -> list:
+     """Load recent conversation history for user"""
+     conn = get_db_connection()
+     cursor = conn.cursor()
+
+     cursor.execute(
+         """SELECT role, content FROM messages
+            WHERE session_id IN (
+                SELECT session_id FROM sessions WHERE user_id = ?
+            )
+            ORDER BY timestamp DESC
+            LIMIT ?""",
+         (user_id, limit)
+     )
+
+     history = cursor.fetchall()
+     conn.close()
+
+     return list(reversed(history))  # Return in chronological order
+
+ def save_message(session_id: str, role: str, content: str, emotion: Optional[str] = None):
+     """Save message to database"""
+     conn = get_db_connection()
+     cursor = conn.cursor()
+
+     cursor.execute(
+         "INSERT INTO messages (session_id, role, content, emotion) VALUES (?, ?, ?, ?)",
+         (session_id, role, content, emotion)
+     )
+
+     conn.commit()
+     conn.close()
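`validate_token` is a straight join over `sessions` and `users`, so the query can be exercised against an in-memory database without touching `DB_PATH`. A sketch using a minimal subset of the schema (the sample user, token, and IDs are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Minimal subset of the schema created by auth/database.py
cur.execute("CREATE TABLE users (user_id TEXT PRIMARY KEY, username TEXT)")
cur.execute("CREATE TABLE sessions (session_id TEXT, user_id TEXT, token TEXT, is_active INTEGER)")
cur.execute("INSERT INTO users VALUES ('u1', 'alice')")
cur.execute("INSERT INTO sessions VALUES ('s1', 'u1', 'tok123', 1)")

# The same join validate_token runs: active session + owning user
cur.execute(
    """SELECT s.session_id, s.user_id, u.username
       FROM sessions s
       JOIN users u ON s.user_id = u.user_id
       WHERE s.token = ? AND s.is_active = 1""",
    ("tok123",),
)
row = cur.fetchone()
conn.close()
```

Flipping `is_active` to 0 makes the same query return no row, which is how logout invalidates a token without deleting the session record.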
mrrrme/backend/session/summary.py ADDED
@@ -0,0 +1,110 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+"""MrrrMe Backend - AI Session Summary Generation"""
+from datetime import datetime
+from typing import Optional
+from groq import Groq
+from ..auth.database import get_db_connection
+from ..config import (
+    GROQ_API_KEY,
+    MIN_MESSAGES_FOR_SUMMARY,
+    SUMMARY_MODEL,
+    SUMMARY_MAX_TOKENS,
+    SUMMARY_TEMPERATURE
+)
+
+async def generate_session_summary(session_id: str, user_id: str) -> Optional[str]:
+    """
+    Generate AI summary of conversation for THIS specific user
+
+    Args:
+        session_id: Session to summarize
+        user_id: User who owns the session
+
+    Returns:
+        Summary text if successful, None otherwise
+    """
+    conn = get_db_connection()
+    cursor = conn.cursor()
+
+    # Verify session belongs to user (security check)
+    cursor.execute(
+        "SELECT user_id FROM sessions WHERE session_id = ?",
+        (session_id,)
+    )
+    session_owner = cursor.fetchone()
+
+    if not session_owner or session_owner[0] != user_id:
+        print(f"[Summary] ❌ Security error: session {session_id} doesn't belong to user {user_id}")
+        conn.close()
+        return None
+
+    # Get messages from this session
+    cursor.execute(
+        "SELECT role, content, emotion FROM messages WHERE session_id = ? ORDER BY timestamp ASC",
+        (session_id,)
+    )
+
+    messages = cursor.fetchall()
+
+    # Get username for better logging
+    cursor.execute("SELECT username FROM users WHERE user_id = ?", (user_id,))
+    username_row = cursor.fetchone()
+    username = username_row[0] if username_row else user_id
+
+    conn.close()
+
+    # Skip if not enough messages
+    if len(messages) < MIN_MESSAGES_FOR_SUMMARY:
+        print(f"[Summary] ⏭️ Skipped for {username} (only {len(messages)} messages)")
+        return None
+
+    # Build conversation text
+    conversation = ""
+    for role, content, emotion in messages:
+        speaker = "User" if role == "user" else "AI"
+        emo_tag = f" [{emotion}]" if emotion else ""
+        conversation += f"{speaker}{emo_tag}: {content}\n"
+
+    try:
+        # Generate summary using Groq
+        groq_client = Groq(api_key=GROQ_API_KEY)
+
+        prompt = f"""Analyze this conversation and create a 2-3 sentence summary about THIS SPECIFIC USER.
+
+DO NOT include information about other users or other conversations.
+ONLY summarize what THIS user said and their patterns.
+
+Conversation ({len(messages)} messages):
+{conversation}
+
+Create a concise summary including: topics this user discussed, their emotional patterns, personal details THEY mentioned, and their preferences."""
+
+        response = groq_client.chat.completions.create(
+            model=SUMMARY_MODEL,
+            messages=[{"role": "user", "content": prompt}],
+            max_tokens=SUMMARY_MAX_TOKENS,
+            temperature=SUMMARY_TEMPERATURE
+        )
+
+        summary = response.choices[0].message.content.strip()
+
+        # Save summary FOR THIS USER ONLY
+        conn = get_db_connection()
+        cursor = conn.cursor()
+
+        cursor.execute(
+            "INSERT OR REPLACE INTO user_summaries (user_id, summary_text, updated_at) VALUES (?, ?, ?)",
+            (user_id, summary, datetime.now())
+        )
+
+        conn.commit()
+        conn.close()
+
+        print(f"[Summary] ✅ Generated for {username} (user_id: {user_id})")
+        print(f"[Summary] 📝 Content: {summary}")
+        return summary
+
+    except Exception as e:
+        print(f"[Summary] ❌ Error for {username}: {e}")
+        import traceback
+        traceback.print_exc()
+        return None
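The conversation-formatting loop in `generate_session_summary` can be exercised in isolation, without the database or the Groq call. A small sketch with made-up rows shaped like the fetched `(role, content, emotion)` tuples:

```python
# Sample rows shaped like the (role, content, emotion) tuples from the query
messages = [
    ("user", "I feel stressed about work", "sad"),
    ("assistant", "Tell me more about that", None),
]

# Same formatting logic as the loop in generate_session_summary()
conversation = ""
for role, content, emotion in messages:
    speaker = "User" if role == "user" else "AI"
    emo_tag = f" [{emotion}]" if emotion else ""
    conversation += f"{speaker}{emo_tag}: {content}\n"

print(conversation)
# User [sad]: I feel stressed about work
# AI: Tell me more about that
```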
mrrrme/backend/utils/__init__.py ADDED
@@ -0,0 +1,5 @@
+"""MrrrMe Backend - Utils Package"""
+from .helpers import get_avatar_api_url, check_avatar_service
+from .patches import apply_all_patches
+
+__all__ = ['get_avatar_api_url', 'check_avatar_service', 'apply_all_patches']
mrrrme/backend/utils/helpers.py ADDED
@@ -0,0 +1,29 @@
+"""MrrrMe Backend - Utility Helper Functions"""
+import os
+import requests
+
+def get_avatar_api_url():
+    """Get correct avatar API URL based on environment"""
+    # For Hugging Face Spaces, use same host
+    if os.path.exists('/.dockerenv') or os.environ.get('SPACE_ID'):
+        # Running in Docker/HF Spaces - use internal networking
+        return "http://127.0.0.1:8765"
+    else:
+        # Local development
+        return "http://localhost:8765"
+
+async def check_avatar_service(avatar_api: str):
+    """Check if avatar TTS service is running"""
+    try:
+        response = requests.get(f"{avatar_api}/", timeout=2)
+        if response.status_code == 200:
+            print(f"[Backend] ✅ Avatar TTS service available at {avatar_api}")
+        else:
+            print(f"[Backend] ⚠️ Avatar TTS service responded with {response.status_code}")
+    except requests.exceptions.ConnectionError:
+        print(f"[Backend] ⚠️ Avatar TTS service NOT available at {avatar_api}")
+        print(f"[Backend] 💡 Text-only mode will be used (no avatar speech)")
+        print(f"[Backend] 📝 To enable avatar:")
+        print(f"[Backend]    cd avatar && python speak_server.py")
+    except Exception as e:
+        print(f"[Backend] ⚠️ Error checking avatar service: {e}")
mrrrme/backend/utils/patches.py ADDED
@@ -0,0 +1,49 @@
+"""MrrrMe Backend - GPU Fixes and Patches"""
+import os
+import logging
+
+# ===== GPU FIX: Patch TensorBoard =====
+class DummySummaryWriter:
+    """Dummy TensorBoard writer to prevent GPU issues"""
+    def __init__(self, *args, **kwargs):
+        pass
+
+    def __getattr__(self, name):
+        return lambda *args, **kwargs: None
+
+def patch_tensorboard():
+    """Patch TensorBoard to avoid GPU conflicts"""
+    try:
+        import tensorboardX
+        tensorboardX.SummaryWriter = DummySummaryWriter
+        print("[Patches] ✅ TensorBoard patched")
+    except ImportError:
+        pass  # TensorBoard not installed
+
+# ===== GPU FIX: Patch Logging to redirect /work paths =====
+_original_FileHandler = logging.FileHandler
+
+class RedirectingFileHandler(_original_FileHandler):
+    """File handler that redirects /work paths to /tmp"""
+    def __init__(self, filename, mode='a', encoding=None, delay=False, errors=None):
+        if isinstance(filename, str) and filename.startswith('/work'):
+            filename = '/tmp/openface_log.txt'
+
+        # Ensure directory exists
+        dirname = os.path.dirname(filename)
+        if dirname:
+            os.makedirs(dirname, exist_ok=True)
+        else:
+            os.makedirs('/tmp', exist_ok=True)
+
+        super().__init__(filename, mode, encoding, delay, errors)
+
+def patch_logging():
+    """Patch logging FileHandler to redirect paths"""
+    logging.FileHandler = RedirectingFileHandler
+    print("[Patches] ✅ Logging FileHandler patched")
+
+def apply_all_patches():
+    """Apply all GPU and system patches"""
+    patch_tensorboard()
+    patch_logging()
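The no-op behavior of `DummySummaryWriter` is worth seeing in isolation: because `__getattr__` returns a fresh lambda for any undefined attribute, every method a caller might invoke (`add_scalar`, `add_image`, `close`, ...) silently returns `None`. A standalone copy of the class, for demonstration:

```python
class DummySummaryWriter:
    """Copy of the patch's dummy writer, for demonstration."""
    def __init__(self, *args, **kwargs):
        pass  # accept and ignore any constructor arguments

    def __getattr__(self, name):
        # Every undefined attribute resolves to a no-op callable
        return lambda *args, **kwargs: None

w = DummySummaryWriter("runs/exp1", flush_secs=5)
w.add_scalar("loss", 0.25, 1)  # ignored, returns None
w.close()                      # also a no-op
```

After `patch_tensorboard()` swaps this class in for `tensorboardX.SummaryWriter`, library code that logs to TensorBoard keeps running unchanged but writes nothing.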