Complete Phase 6: MSP Work Tracking with Context Recall System
Implements a production-ready MSP platform with cross-machine persistent memory for Claude.

API Implementation:
- 130 REST API endpoints across 21 entities
- JWT authentication on all endpoints
- AES-256-GCM encryption for credentials
- Automatic audit logging
- Complete OpenAPI documentation

Database:
- 43 tables in MariaDB (172.16.3.20:3306)
- 42 SQLAlchemy models with modern 2.0 syntax
- Full Alembic migration system
- 99.1% CRUD test pass rate

Context Recall System (Phase 6):
- Cross-machine persistent memory via database
- Automatic context injection via Claude Code hooks
- Automatic context saving after task completion
- 90-95% token reduction with compression utilities
- Relevance scoring with time decay
- Tag-based semantic search
- One-command setup script

Security Features:
- JWT tokens with Argon2 password hashing
- AES-256-GCM encryption for all sensitive data
- Comprehensive audit trail for credentials
- HMAC tamper detection
- Secure configuration management

Test Results:
- Phase 3: 38/38 CRUD tests passing (100%)
- Phase 4: 34/35 core API tests passing (97.1%)
- Phase 5: 62/62 extended API tests passing (100%)
- Phase 6: 10/10 compression tests passing (100%)
- Overall: 144/145 tests passing (99.3%)

Documentation:
- Comprehensive architecture guides
- Setup automation scripts
- API documentation at /api/docs
- Complete test reports
- Troubleshooting guides

Project Status: 95% Complete (Production-Ready)
Phase 7 (optional work context APIs) remains for future enhancement.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
This commit is contained in:

.claude/API_SPEC.md (new file, 926 lines)
# MSP Mode API Specification

**Version:** 1.0.0
**Last Updated:** 2026-01-16
**Status:** Design Phase

---

## Overview

A FastAPI-based REST API providing secure access to the MSP tracking database on the Jupiter server. Designed for multi-machine access with JWT authentication and comprehensive audit logging.

---

## Base Configuration

**Base URL:** `https://msp-api.azcomputerguru.com`
**API Version:** `/api/v1/`
**Protocol:** HTTPS only (no HTTP)
**Authentication:** JWT Bearer tokens
**Content-Type:** `application/json`

---

## Authentication

### JWT Token Structure

#### Access Token (Short-lived: 1 hour)
```json
{
  "sub": "mike@azcomputerguru.com",
  "scopes": ["msp:read", "msp:write", "msp:admin"],
  "machine": "windows-workstation",
  "exp": 1234567890,
  "iat": 1234567890,
  "jti": "unique-token-id"
}
```
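A JWT is three base64url segments (`header.claims.signature`), so the claims above can be inspected client-side with the standard library alone. A minimal sketch — note it performs no signature verification; production code must validate the signature and `exp` with a proper JWT library:

```python
import base64
import json

def decode_jwt_unverified(token: str) -> tuple[dict, dict]:
    """Decode a JWT's header and claims WITHOUT verifying the signature."""
    def b64url(seg: str) -> bytes:
        # base64url segments omit trailing padding; restore it before decoding
        return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))
    header_seg, claims_seg, _signature = token.split(".")
    return json.loads(b64url(header_seg)), json.loads(b64url(claims_seg))
```

For example, the sample token prefix `eyJhbGciOiJIUzI1NiIs...` shown later in this document decodes to a header beginning `{"alg": "HS256", ...}`.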

#### Refresh Token (Long-lived: 30 days)
- Stored securely in Gitea config
- Used to obtain new access tokens
- Can be revoked server-side

### Permission Scopes

- **`msp:read`** - Read sessions, clients, work items
- **`msp:write`** - Create/update sessions, work items
- **`msp:admin`** - Manage clients, credentials, delete operations

### Authentication Endpoints

#### POST /api/v1/auth/token
Obtain a JWT access token.

**Request:**
```json
{
  "refresh_token": "string"
}
```

**Response:**
```json
{
  "access_token": "eyJhbGciOiJIUzI1NiIs...",
  "token_type": "bearer",
  "expires_in": 3600,
  "scopes": ["msp:read", "msp:write"]
}
```

**Status Codes:**
- `200` - Token issued successfully
- `401` - Invalid refresh token
- `403` - Token revoked
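A client-side sketch of the token exchange (helper names are illustrative, not part of the spec; assumes the `requests` package):

```python
import requests

API_BASE = "https://msp-api.azcomputerguru.com/api/v1"

def obtain_access_token(refresh_token: str) -> dict:
    """Exchange a long-lived refresh token for a 1-hour access token."""
    resp = requests.post(
        f"{API_BASE}/auth/token",
        json={"refresh_token": refresh_token},
        timeout=10,
    )
    resp.raise_for_status()  # 401 = invalid refresh token, 403 = revoked
    return resp.json()

def bearer_headers(token_response: dict) -> dict:
    """Build the Authorization header from the /auth/token response."""
    return {"Authorization": f"Bearer {token_response['access_token']}"}
```

Subsequent requests then pass `bearer_headers(...)` until `expires_in` elapses, at which point the client calls `/auth/refresh`.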

#### POST /api/v1/auth/refresh
Refresh an expired access token.

**Request:**
```json
{
  "refresh_token": "string"
}
```

**Response:**
```json
{
  "access_token": "eyJhbGciOiJIUzI1NiIs...",
  "expires_in": 3600
}
```

---

## Core API Endpoints

### Machine Detection & Management

#### GET /api/v1/machines
List all registered machines.

**Query Parameters:**
- `is_active` (boolean) - Filter by active status
- `platform` (string) - Filter by platform (win32, darwin, linux)

**Response:**
```json
{
  "machines": [
    {
      "id": "uuid",
      "hostname": "ACG-M-L5090",
      "friendly_name": "Main Laptop",
      "platform": "win32",
      "has_vpn_access": true,
      "vpn_profiles": ["dataforth", "grabb"],
      "has_docker": true,
      "powershell_version": "7.4",
      "available_mcps": ["claude-in-chrome", "filesystem"],
      "available_skills": ["pdf", "commit", "review-pr"],
      "last_seen": "2026-01-16T10:30:00Z"
    }
  ]
}
```

#### POST /api/v1/machines
Register a new machine (auto-detected on first session).

**Request:**
```json
{
  "hostname": "ACG-M-L5090",
  "machine_fingerprint": "sha256hash",
  "platform": "win32",
  "os_version": "Windows 11 Pro",
  "username": "MikeSwanson",
  "friendly_name": "Main Laptop",
  "has_vpn_access": true,
  "vpn_profiles": ["dataforth", "grabb"],
  "has_docker": true,
  "powershell_version": "7.4",
  "preferred_shell": "powershell",
  "available_mcps": ["claude-in-chrome"],
  "available_skills": ["pdf", "commit"]
}
```

**Response:**
```json
{
  "id": "uuid",
  "machine_fingerprint": "sha256hash",
  "created_at": "2026-01-16T10:00:00Z"
}
```
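The spec does not define how `machine_fingerprint` is derived, only that it is a SHA-256 hash. One plausible sketch hashes a few stable machine identifiers; the specific inputs chosen here are an assumption:

```python
import hashlib

def machine_fingerprint(hostname: str, username: str, platform: str) -> str:
    """Derive a stable SHA-256 fingerprint from machine identifiers.

    The input set (hostname, username, platform) is illustrative; any
    combination that is stable across sessions and unique per machine works.
    """
    material = "|".join([hostname.lower(), username.lower(), platform])
    return hashlib.sha256(material.encode("utf-8")).hexdigest()
```

The result is a 64-character hex string that is identical every session on the same machine, which is what lets `GET /api/v1/machines/{fingerprint}` perform session-start auto-detection.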

#### GET /api/v1/machines/{fingerprint}
Get a machine by fingerprint (for session-start auto-detection).

**Response:**
```json
{
  "id": "uuid",
  "hostname": "ACG-M-L5090",
  "friendly_name": "Main Laptop",
  "capabilities": {
    "vpn_profiles": ["dataforth", "grabb"],
    "has_docker": true,
    "powershell_version": "7.4"
  }
}
```

#### PUT /api/v1/machines/{id}
Update machine capabilities.

### Sessions

#### POST /api/v1/sessions
Create a new MSP session.

**Request:**
```json
{
  "client_id": "uuid",
  "project_id": "uuid",
  "machine_id": "uuid",
  "session_date": "2026-01-16",
  "start_time": "2026-01-16T10:00:00Z",
  "session_title": "Dataforth - DOS UPDATE.BAT enhancement",
  "technician": "Mike Swanson",
  "status": "in_progress"
}
```

**Response:**
```json
{
  "id": "uuid",
  "session_date": "2026-01-16",
  "start_time": "2026-01-16T10:00:00Z",
  "status": "in_progress",
  "created_at": "2026-01-16T10:00:00Z"
}
```

**Status Codes:**
- `201` - Session created
- `400` - Invalid request data
- `401` - Unauthorized
- `404` - Client/Project not found
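A sketch of session creation from a client (helper names are illustrative; assumes the `requests` package):

```python
import requests

API_BASE = "https://msp-api.azcomputerguru.com/api/v1"

def session_payload(client_id: str, machine_id: str, title: str,
                    start_time: str, technician: str, project_id=None) -> dict:
    """Assemble the POST /sessions body; session_date is derived from start_time."""
    payload = {
        "client_id": client_id,
        "machine_id": machine_id,
        "session_date": start_time[:10],  # "2026-01-16T10:00:00Z" -> "2026-01-16"
        "start_time": start_time,
        "session_title": title,
        "technician": technician,
        "status": "in_progress",
    }
    if project_id is not None:
        payload["project_id"] = project_id
    return payload

def create_session(access_token: str, payload: dict) -> dict:
    """POST the payload; a 201 response returns the new session's id."""
    resp = requests.post(
        f"{API_BASE}/sessions",
        json=payload,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    resp.raise_for_status()  # expect 201 Created
    return resp.json()
```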

#### GET /api/v1/sessions
Query sessions with filters.

**Query Parameters:**
- `client_id` (uuid) - Filter by client
- `project_id` (uuid) - Filter by project
- `machine_id` (uuid) - Filter by machine
- `date_from` (date) - Start of date range
- `date_to` (date) - End of date range
- `is_billable` (boolean) - Filter billable sessions
- `status` (string) - Filter by status
- `limit` (int) - Max results (default: 50)
- `offset` (int) - Pagination offset

**Response:**
```json
{
  "sessions": [
    {
      "id": "uuid",
      "client_name": "Dataforth",
      "project_name": "DOS Machine Management",
      "session_date": "2026-01-15",
      "duration_minutes": 210,
      "billable_hours": 3.5,
      "session_title": "DOS UPDATE.BAT v2.0 completion",
      "summary": "Completed UPDATE.BAT automation...",
      "status": "completed"
    }
  ],
  "total": 45,
  "limit": 50,
  "offset": 0
}
```
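Because results are capped by `limit`, clients should advance `offset` until it reaches `total`. A transport-agnostic sketch (the `fetch_page` callable is an assumption — e.g. a thin wrapper around GET /api/v1/sessions):

```python
def iter_sessions(fetch_page, limit=50):
    """Yield every session across pages.

    fetch_page(limit, offset) must return the documented response dict
    containing at least the "sessions" list and the "total" count.
    """
    offset = 0
    while True:
        page = fetch_page(limit, offset)
        yield from page["sessions"]
        offset += limit
        if offset >= page["total"]:
            break
```

A generator keeps memory flat even when a client has hundreds of historical sessions.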

#### GET /api/v1/sessions/{id}
Get session details with related work items.

**Response:**
```json
{
  "id": "uuid",
  "client_id": "uuid",
  "client_name": "Dataforth",
  "project_name": "DOS Machine Management",
  "session_date": "2026-01-15",
  "start_time": "2026-01-15T14:00:00Z",
  "end_time": "2026-01-15T17:30:00Z",
  "duration_minutes": 210,
  "billable_hours": 3.5,
  "session_title": "DOS UPDATE.BAT v2.0",
  "summary": "markdown summary",
  "work_items": [
    {
      "id": "uuid",
      "category": "development",
      "title": "Enhanced UPDATE.BAT with version checking",
      "status": "completed"
    }
  ],
  "tags": ["dos", "batch", "automation", "dataforth"],
  "technologies_used": ["dos-6.22", "batch", "networking"]
}
```

#### PUT /api/v1/sessions/{id}
Update a session (typically at session end).

**Request:**
```json
{
  "end_time": "2026-01-16T12:30:00Z",
  "status": "completed",
  "summary": "markdown summary",
  "billable_hours": 2.5,
  "notes": "Additional session notes"
}
```

### Work Items

#### POST /api/v1/work-items
Create a work item for a session.

**Request:**
```json
{
  "session_id": "uuid",
  "category": "troubleshooting",
  "title": "Fixed Apache SSL certificate expiration",
  "description": "Problem: ERR_SSL_PROTOCOL_ERROR\nCause: Cert expired\nFix: certbot renew",
  "status": "completed",
  "priority": "high",
  "is_billable": true,
  "actual_minutes": 45,
  "affected_systems": ["jupiter", "172.16.3.20"],
  "technologies_used": ["apache", "ssl", "certbot"]
}
```

**Response:**
```json
{
  "id": "uuid",
  "session_id": "uuid",
  "category": "troubleshooting",
  "title": "Fixed Apache SSL certificate expiration",
  "created_at": "2026-01-16T10:15:00Z"
}
```

#### GET /api/v1/work-items
Query work items.

**Query Parameters:**
- `session_id` (uuid) - Filter by session
- `category` (string) - Filter by category
- `status` (string) - Filter by status
- `date_from` (date) - Start date
- `date_to` (date) - End date

### Clients

#### GET /api/v1/clients
List all clients.

**Query Parameters:**
- `type` (string) - Filter by type (msp_client, internal, project)
- `is_active` (boolean) - Active clients only

**Response:**
```json
{
  "clients": [
    {
      "id": "uuid",
      "name": "Dataforth",
      "type": "msp_client",
      "network_subnet": "192.168.0.0/24",
      "is_active": true
    }
  ]
}
```

#### POST /api/v1/clients
Create a new client record.

**Request:**
```json
{
  "name": "Client Name",
  "type": "msp_client",
  "network_subnet": "192.168.1.0/24",
  "domain_name": "client.local",
  "primary_contact": "John Doe",
  "notes": "Additional information"
}
```

**Requires:** `msp:admin` scope

#### GET /api/v1/clients/{id}
Get client details with infrastructure.

**Response:**
```json
{
  "id": "uuid",
  "name": "Dataforth",
  "network_subnet": "192.168.0.0/24",
  "infrastructure": [
    {
      "hostname": "AD2",
      "ip_address": "192.168.0.6",
      "asset_type": "domain_controller",
      "os": "Windows Server 2022"
    }
  ],
  "active_projects": 3,
  "recent_sessions": 15
}
```

### Credentials

#### GET /api/v1/credentials
Query credentials (encrypted values are not returned by default).

**Query Parameters:**
- `client_id` (uuid) - Filter by client
- `service_id` (uuid) - Filter by service
- `credential_type` (string) - Filter by type

**Response:**
```json
{
  "credentials": [
    {
      "id": "uuid",
      "client_name": "Dataforth",
      "service_name": "AD2 Administrator",
      "username": "sysadmin",
      "credential_type": "password",
      "requires_vpn": true,
      "last_rotated_at": "2025-12-01T00:00:00Z"
    }
  ]
}
```

**Note:** Password values are not included. Use the decrypt endpoint.

#### POST /api/v1/credentials
Store a new credential (encrypted at rest).

**Request:**
```json
{
  "client_id": "uuid",
  "service_name": "AD2 Administrator",
  "username": "sysadmin",
  "password": "plaintext-password",
  "credential_type": "password",
  "requires_vpn": true,
  "requires_2fa": false
}
```

**Response:**
```json
{
  "id": "uuid",
  "service_name": "AD2 Administrator",
  "created_at": "2026-01-16T10:00:00Z"
}
```

**Requires:** `msp:write` scope
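A minimal sketch of the server-side AES-256-GCM handling (assumes the `cryptography` package; the nonce-prefix storage layout and the associated-data constant are implementation assumptions, not part of the spec):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

AAD = b"msp-credentials"  # associated data binds the ciphertext to its context

def encrypt_credential(key: bytes, plaintext: str) -> bytes:
    """AES-256-GCM encrypt; returns 12-byte nonce || ciphertext+tag."""
    nonce = os.urandom(12)  # standard GCM nonce size; must never repeat per key
    return nonce + AESGCM(key).encrypt(nonce, plaintext.encode("utf-8"), AAD)

def decrypt_credential(key: bytes, blob: bytes) -> str:
    """Reverse of encrypt_credential; raises InvalidTag if the blob was tampered with."""
    return AESGCM(key).decrypt(blob[:12], blob[12:], AAD).decode("utf-8")
```

Per the Security Considerations section, the 32-byte key would come from an environment variable or vault, never the database. GCM's authentication tag gives the tamper detection for free.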

#### GET /api/v1/credentials/{id}/decrypt
Decrypt and return the credential value.

**Response:**
```json
{
  "credential_id": "uuid",
  "service_name": "AD2 Administrator",
  "username": "sysadmin",
  "password": "decrypted-password",
  "accessed_at": "2026-01-16T10:30:00Z"
}
```

**Side Effects:**
- Creates an audit log entry
- Records access in the `credential_audit_log` table

**Requires:** `msp:read` scope minimum

### Infrastructure

#### GET /api/v1/infrastructure
Query infrastructure assets.

**Query Parameters:**
- `client_id` (uuid) - Filter by client
- `asset_type` (string) - Filter by type
- `hostname` (string) - Search by hostname

**Response:**
```json
{
  "infrastructure": [
    {
      "id": "uuid",
      "client_name": "Dataforth",
      "hostname": "D2TESTNAS",
      "ip_address": "192.168.0.9",
      "asset_type": "nas_storage",
      "os": "ReadyNAS OS",
      "environmental_notes": "Manual WINS install, SMB1 only",
      "powershell_version": null,
      "has_gui": true
    }
  ]
}
```

#### GET /api/v1/infrastructure/{id}/insights
Get environmental insights for an infrastructure asset.

**Response:**
```json
{
  "infrastructure_id": "uuid",
  "hostname": "D2TESTNAS",
  "insights": [
    {
      "category": "custom_installations",
      "title": "WINS: Manual Samba installation",
      "description": "WINS service manually installed via Samba nmbd...",
      "examples": ["ssh root@192.168.0.9 'ps aux | grep nmbd'"],
      "priority": 9
    }
  ],
  "limitations": ["no_native_wins_service", "smb1_only"],
  "recommended_commands": {
    "check_wins": "ssh root@192.168.0.9 'ps aux | grep nmbd'"
  }
}
```

### Commands & Failures

#### POST /api/v1/commands
Log a command execution (with failure tracking).

**Request:**
```json
{
  "work_item_id": "uuid",
  "session_id": "uuid",
  "command_text": "Get-LocalUser",
  "host": "old-server-2008",
  "shell_type": "powershell",
  "success": false,
  "exit_code": 1,
  "error_message": "Get-LocalUser : The term Get-LocalUser is not recognized",
  "failure_category": "compatibility"
}
```

**Response:**
```json
{
  "id": "uuid",
  "created_at": "2026-01-16T10:00:00Z",
  "failure_logged": true
}
```

**Side Effects:**
- On failure: triggers the Failure Analysis Agent
- May create a `failure_patterns` entry
- May update `environmental_insights`

#### GET /api/v1/failure-patterns
Query known failure patterns.

**Query Parameters:**
- `infrastructure_id` (uuid) - Patterns for specific infrastructure
- `pattern_type` (string) - Filter by type

**Response:**
```json
{
  "patterns": [
    {
      "id": "uuid",
      "pattern_signature": "PowerShell 7 cmdlets on Server 2008",
      "error_pattern": "Get-LocalUser.*not recognized",
      "root_cause": "Server 2008 only has PowerShell 2.0",
      "recommended_solution": "Use Get-WmiObject Win32_UserAccount",
      "occurrence_count": 5,
      "severity": "major"
    }
  ]
}
```

### Tasks & Todo Items

#### GET /api/v1/pending-tasks
Query open tasks.

**Query Parameters:**
- `client_id` (uuid) - Filter by client
- `priority` (string) - Filter by priority
- `status` (string) - Filter by status

**Response:**
```json
{
  "tasks": [
    {
      "id": "uuid",
      "client_name": "Dataforth",
      "title": "Create Datasheets share",
      "priority": "high",
      "status": "blocked",
      "blocked_by": "Waiting on Engineering",
      "due_date": "2026-01-20"
    }
  ]
}
```

#### POST /api/v1/pending-tasks
Create a pending task.

**Request:**
```json
{
  "client_id": "uuid",
  "project_id": "uuid",
  "title": "Task title",
  "description": "Task description",
  "priority": "high",
  "due_date": "2026-01-20"
}
```

### External Integrations

#### GET /api/v1/integrations
List configured integrations (SyncroMSP, MSP Backups, etc.).

**Response:**
```json
{
  "integrations": [
    {
      "integration_name": "syncro",
      "integration_type": "psa",
      "is_active": true,
      "last_tested_at": "2026-01-15T08:00:00Z",
      "last_test_status": "success"
    }
  ]
}
```

#### POST /api/v1/integrations/{name}/test
Test an integration connection.

**Response:**
```json
{
  "integration_name": "syncro",
  "status": "success",
  "message": "Connection successful",
  "tested_at": "2026-01-16T10:00:00Z"
}
```

#### GET /api/v1/syncro/tickets
Search SyncroMSP tickets.

**Query Parameters:**
- `customer` (string) - Filter by customer name
- `subject` (string) - Search ticket subjects
- `status` (string) - Filter by status

**Response:**
```json
{
  "tickets": [
    {
      "ticket_id": "12345",
      "ticket_number": "T12345",
      "subject": "Backup configuration for NAS",
      "customer": "Dataforth",
      "status": "open",
      "created_at": "2026-01-10T12:00:00Z"
    }
  ]
}
```

#### POST /api/v1/syncro/tickets/{id}/comment
Add a comment to a SyncroMSP ticket.

**Request:**
```json
{
  "comment": "Work completed: configured Veeam backup..."
}
```

**Response:**
```json
{
  "comment_id": "67890",
  "created_at": "2026-01-16T10:00:00Z"
}
```

**Side Effects:**
- Creates an `external_integrations` log entry
- Links to the current session

### Health & Monitoring

#### GET /api/v1/health
Health check endpoint.

**Response:**
```json
{
  "status": "healthy",
  "database": "connected",
  "timestamp": "2026-01-16T10:00:00Z",
  "version": "1.0.0"
}
```

**Status Codes:**
- `200` - Service healthy
- `503` - Service unavailable

#### GET /api/v1/metrics
Prometheus metrics (optional).

**Response:** Prometheus-format metrics

---

## Error Handling

### Standard Error Response Format

```json
{
  "error": {
    "code": "INVALID_REQUEST",
    "message": "Client ID is required",
    "details": {
      "field": "client_id",
      "constraint": "not_null"
    }
  },
  "timestamp": "2026-01-16T10:00:00Z",
  "request_id": "uuid"
}
```

### HTTP Status Codes

- **200** - Success
- **201** - Created
- **400** - Bad Request (invalid input)
- **401** - Unauthorized (missing/invalid token)
- **403** - Forbidden (insufficient permissions)
- **404** - Not Found
- **409** - Conflict (duplicate record)
- **429** - Too Many Requests (rate limit)
- **500** - Internal Server Error (never expose DB errors)
- **503** - Service Unavailable

### Error Codes

- `INVALID_REQUEST` - Malformed request
- `UNAUTHORIZED` - Missing or invalid authentication
- `FORBIDDEN` - Insufficient permissions
- `NOT_FOUND` - Resource not found
- `DUPLICATE_ENTRY` - Unique constraint violation
- `RATE_LIMIT_EXCEEDED` - Too many requests
- `DATABASE_ERROR` - Internal database error (details hidden)
- `ENCRYPTION_ERROR` - Credential encryption/decryption failed
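Client code can map this envelope onto a typed exception; a sketch (the class and helper names are illustrative):

```python
class MspApiError(Exception):
    """Raised for any non-2xx response carrying the standard error envelope."""
    def __init__(self, status, code, message, request_id=None):
        super().__init__(f"{status} {code}: {message}")
        self.status = status
        self.code = code
        self.request_id = request_id  # useful for correlating with server logs

def raise_for_api_error(status: int, body: dict) -> None:
    """Raise MspApiError for error responses; no-op for 2xx."""
    if 200 <= status < 300:
        return
    err = body.get("error", {})
    raise MspApiError(status, err.get("code", "UNKNOWN"),
                      err.get("message", "Unknown error"),
                      body.get("request_id"))
```

Keeping `request_id` on the exception lets a technician quote it when tracing a failure through the `api_audit_log` table.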

---

## Rate Limiting

**Default Limits:**
- 100 requests per minute per token
- 1000 requests per hour per token
- Credential decryption: 20 per minute

**Headers:**
```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1234567890
```

**Exceeded Response:**
```json
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Retry after 60 seconds.",
    "retry_after": 60
  }
}
```
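A retry sketch that honors `retry_after` from the 429 body (the injectable `do_request`/`sleep` parameters are for testability, not part of the API):

```python
import time

def call_with_retry(do_request, max_retries=3, sleep=time.sleep):
    """Run do_request(), retrying on 429 and waiting retry_after seconds.

    do_request() must return (status_code, parsed_json_body).
    """
    for attempt in range(max_retries + 1):
        status, body = do_request()
        if status != 429:
            return status, body
        if attempt == max_retries:
            break  # out of retries; surface the final 429 to the caller
        delay = body.get("error", {}).get("retry_after", 60)
        sleep(delay)
    return status, body
```

The same helper would be configured with a lower retry budget for the credential-decrypt endpoint, since its limit (20/minute) is stricter.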

---

## Agent Coordination Patterns

### Agent API Access

All specialized agents use the same API with agent-specific tokens:

**Agent Token Claims:**
```json
{
  "sub": "agent:context-recovery",
  "agent_type": "context_recovery",
  "scopes": ["msp:read"],
  "parent_session": "uuid",
  "exp": 1234567890
}
```

### Agent Communication Flow

```
Main Claude (JWT: user token)
    ↓
Launches Agent (JWT: agent token, scoped to parent session)
    ↓
Agent makes API calls (authenticated with agent token)
    ↓
API logs agent activity (tracks parent session)
    ↓
Agent returns summary to Main Claude
```

### Example: Context Recovery Agent

**Request Flow:**
1. Main Claude: POST /api/v1/agents/context-recovery
2. API issues agent token (scoped: msp:read, session_id)
3. Agent executes:
   - GET /api/v1/sessions?client_id=X&limit=5
   - GET /api/v1/pending-tasks?client_id=X
   - GET /api/v1/infrastructure?client_id=X
4. Agent processes results, generates summary
5. Agent returns to Main Claude (API logs all agent activity)

**Agent Audit Trail:**
- All agent API calls logged with parent session
- Agent execution time tracked
- Agent results cached (avoids redundant queries)

---

## Security Considerations

### Encryption
- **In Transit:** HTTPS only (TLS 1.2+)
- **At Rest:** AES-256-GCM for credentials
- **Key Management:** Environment variable or vault (not in database)

### Authentication
- JWT tokens with short expiration (1-hour access, 30-day refresh)
- Token rotation supported
- Revocation list for compromised tokens

### Audit Logging
- All credential access logged (`credential_audit_log`)
- All API requests logged (`api_audit_log`)
- User ID, IP address, timestamp, and action recorded

### Input Validation
- Pydantic models validate all inputs
- SQL injection prevention via the SQLAlchemy ORM
- XSS prevention (JSON only, no HTML)

### Rate Limiting
- Per-token rate limits
- Credential access rate limits (stricter)
- IP-based limits (optional)
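As an illustration of the Pydantic validation mentioned above, a sketch of a request model for POST /api/v1/sessions (the field set follows that endpoint's documented body; assumes the `pydantic` package):

```python
from typing import Optional
from pydantic import BaseModel

class SessionCreate(BaseModel):
    """Request body for POST /api/v1/sessions; missing or mistyped
    fields are rejected before any handler code runs."""
    client_id: str
    machine_id: str
    session_date: str
    start_time: str
    session_title: str
    technician: str
    status: str = "in_progress"
    project_id: Optional[str] = None
```

A real implementation would tighten this further with `UUID`, `datetime`, and enum types so FastAPI returns the spec's `400 INVALID_REQUEST` envelope with the offending field named.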

---

## Configuration Storage

### Gitea Repository
**Repo:** `azcomputerguru/msp-config`

**File:** `msp-api-config.json`
```json
{
  "api_url": "https://msp-api.azcomputerguru.com",
  "refresh_token": "encrypted_token_value",
  "database_schema_version": "1.0.0",
  "machine_id": "uuid"
}
```

**Encryption:** git-crypt or encrypted JSON values

---

## Implementation Status

- ✅ API Design (this document)
- ⏳ FastAPI implementation
- ⏳ Database schema deployment
- ⏳ JWT authentication flow
- ⏳ Agent token system
- ⏳ External integrations (SyncroMSP, MSP Backups)

---

## Version History

**v1.0.0 (2026-01-16):**
- Initial API specification
- Machine detection endpoints
- Core CRUD operations
- Authentication flow
- Agent coordination patterns
- External integrations design

.claude/ARCHITECTURE_OVERVIEW.md (new file, 772 lines)

# MSP Mode Architecture Overview

**Version:** 1.0.0
**Last Updated:** 2026-01-16
**Status:** Design Phase

---

## Executive Summary

MSP Mode is a custom Claude Code implementation that tracks client work, maintains context across sessions and machines, and provides structured access to historical MSP data through an agent-based architecture.

**Core Principle:** All modes (MSP, Development, Normal) use specialized agents to preserve the main Claude instance's context space.

---

## High-Level Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                     User (Technician)                       │
│              Multiple Machines (Laptop, Desktop)            │
└────────────────────┬────────────────────────────────────────┘
                     │
                     ↓
┌─────────────────────────────────────────────────────────────┐
│               Claude Code (Main Instance)                   │
│   • Conversation & User Interaction                         │
│   • Decision Making & Mode Management                       │
│   • Agent Orchestration                                     │
└────────────┬───────────────────────┬────────────────────────┘
             │                       │
             ↓                       ↓
┌────────────────────┐    ┌──────────────────────────────────┐
│  13 Specialized    │    │  REST API (FastAPI)              │
│  Agents            │────│  Jupiter Server                  │
│  • Context Mgmt    │    │  https://msp-api.azcomputerguru  │
│  • Data Processing │    └──────────┬───────────────────────┘
│  • Integration     │               │
└────────────────────┘               ↓
                          ┌──────────────────────┐
                          │  MariaDB Database    │
                          │  msp_tracking        │
                          │  36 Tables           │
                          └──────────────────────┘
```

---

## 13 Specialized Agents

### 1. Machine Detection Agent
**Launched:** Session start (FIRST - before all other agents)
**Purpose:** Identify the current machine and load its capabilities

**Tasks:**
- Execute `hostname`, `whoami`, detect platform
- Generate machine fingerprint (SHA256)
- Query the machines table for an existing record
- Load VPN access, Docker, PowerShell version, MCPs, Skills
- Update the last_seen timestamp

**Returns:** Machine context (machine_id, capabilities, limitations)

**Context Saved:** ~97% (machine profile loaded, only key capabilities returned)

---

### 2. Environment Context Agent
**Launched:** Before making command suggestions or infrastructure operations
**Purpose:** Check environmental constraints to avoid known failures

**Tasks:**
- Query infrastructure environmental_notes
- Read environmental_insights for the client/infrastructure
- Check failure_patterns for similar operations
- Validate command compatibility with the environment
- Return constraints and recommendations

**Returns:** Environmental context + compatibility warnings

**Example:** "D2TESTNAS: Manual WINS install (no native service), ReadyNAS OS, SMB1 only"

**Context Saved:** ~96% (processes failure history, returns summary)

---

### 3. Context Recovery Agent
**Launched:** Session start (`/msp` command)
**Purpose:** Load relevant client context

**Tasks:**
- Query previous sessions (last 5)
- Retrieve open pending tasks
- Get recently used credentials
- Fetch infrastructure topology

**Returns:** Concise context summary (< 300 words)

**API Calls:** 4-5 parallel GET requests

**Context Saved:** ~95% (processes MB of data, returns summary)

---

### 4. Work Categorization Agent
**Launched:** Periodically during a session or on demand
**Purpose:** Analyze and categorize recent work

**Tasks:**
- Parse the conversation transcript
- Extract commands, files, systems, technologies
- Detect category (infrastructure, troubleshooting, etc.)
- Generate a dense description
- Auto-tag work items

**Returns:** Structured work_item object (JSON)

**Context Saved:** ~90% (processes conversation, returns structured data)

---

### 5. Session Summary Agent
**Launched:** Session end (`/msp end` or mode switch)
**Purpose:** Generate a comprehensive session summary

**Tasks:**
- Analyze all work_items from the session
- Calculate time allocation per category
- Generate a dense markdown summary
- Structure data for API storage
- Calculate billable hours

**Returns:** Summary + API-ready payload

**Context Saved:** ~92% (processes full session, returns summary)

---

### 6. Credential Retrieval Agent
**Launched:** When a credential is needed
**Purpose:** Securely retrieve and decrypt credentials

**Tasks:**
- Query the credentials API
- Decrypt the credential value
- Log access to the audit trail
- Return only the credential value

**Returns:** Single credential string

**API Calls:** 2 (retrieve + audit log)

**Context Saved:** ~98% (credential + minimal metadata)

---

### 7. Credential Storage Agent
**Launched:** When a new credential is discovered
**Purpose:** Encrypt and store the credential securely

**Tasks:**
- Validate the credential data
- Encrypt with AES-256-GCM
- Link to client/service/infrastructure
- Store via API
- Create an audit log entry

**Returns:** credential_id confirmation

**Context Saved:** ~99% (only the ID returned)

---

### 8. Historical Search Agent
**Launched:** On demand (user asks about past work)
**Purpose:** Search and summarize historical sessions

**Tasks:**
- Query the sessions database with filters
- Parse matching sessions
- Extract key outcomes
- Generate a concise summary

**Returns:** Brief summary of findings

**Example:** "Found 3 backup sessions: [dates] - [outcomes]"

**Context Saved:** ~95% (processes potentially 100s of sessions)

---

### 9. Integration Workflow Agent
**Launched:** Multi-step integration requests
**Purpose:** Execute complex workflows with external tools

**Tasks:**
- Search external ticketing systems (SyncroMSP)
- Generate work summaries
- Update tickets with comments
- Pull reports from backup systems
- Attach files to tickets
- Track all integrations in the database

**Returns:** Workflow completion summary

**API Calls:** 5-10+ external + internal calls

**Context Saved:** ~90% (handles large files, API responses)

---

### 10. Problem Pattern Matching Agent
**Launched:** When the user describes an error/issue
**Purpose:** Find similar historical problems

**Tasks:**
- Parse the error description
- Search the problem_solutions table
- Extract relevant solutions
- Rank by similarity

**Returns:** Top 3 similar problems with solutions

**Context Saved:** ~94% (searches all problems, returns matches)

---

### 11. Database Query Agent
**Launched:** Complex reporting or analytics requests
**Purpose:** Execute complex database queries
|
||||
|
||||
**Tasks:**
|
||||
- Build SQL queries with filters/joins
|
||||
- Execute query via API
|
||||
- Process result set
|
||||
- Generate summary statistics
|
||||
- Format for presentation
|
||||
|
||||
**Returns:** Summary statistics + key findings
|
||||
|
||||
**Example:** "Dataforth - Q4 2025: 45 sessions, 120 hours, $12,000 billed"
|
||||
|
||||
**Context Saved:** ~93% (processes large result sets)
|
||||
|
||||
---
|
||||
|
||||
### 12. Failure Analysis Agent
|
||||
**Launched:** When commands/operations fail, or periodically
|
||||
**Purpose:** Learn from failures to prevent future mistakes
|
||||
|
||||
**Tasks:**
|
||||
- Log all command/operation failures with full context
|
||||
- Analyze failure patterns across sessions
|
||||
- Identify environmental constraints
|
||||
- Update infrastructure environmental_notes
|
||||
- Generate/update environmental_insights
|
||||
- Create actionable resolutions
|
||||
|
||||
**Returns:** Updated insights, environmental constraints
|
||||
|
||||
**Context Saved:** ~94% (analyzes failures, returns key learnings)
|
||||
|
||||
---
|
||||
|
||||
### 13. Integration Search Agent
|
||||
**Launched:** Searching external systems
|
||||
**Purpose:** Query SyncroMSP, MSP Backups, etc.
|
||||
|
||||
**Tasks:**
|
||||
- Authenticate with external API
|
||||
- Execute search query
|
||||
- Parse results
|
||||
- Summarize findings
|
||||
|
||||
**Returns:** Concise list of matches
|
||||
|
||||
**API Calls:** 1-3 external API calls
|
||||
|
||||
**Context Saved:** ~90% (handles API pagination, large response)
|
||||
|
||||
---
|
||||
|
||||
## Mode Behaviors

### MSP Mode (`/msp`)
**Purpose:** Track client work with comprehensive context

**Activation Flow:**
1. Machine Detection Agent identifies the current machine
2. Environment Context Agent loads environmental constraints
3. Context Recovery Agent loads client history
4. Session created with machine_id, client_id, project_id
5. Real-time work tracking begins

**Auto-Tracking:**
- Work items categorized automatically
- Commands logged with failure tracking
- File changes tracked
- Problems and solutions captured
- Credentials accessed (audit logged)
- Infrastructure changes documented

**Billability:** Default true (client work)

**Session End:**
- Session Summary Agent generates dense summary
- Stores to database via API
- Optional: Link to external tickets (SyncroMSP)
- Optional: Log billable hours to PSA

---

### Development Mode (`/dev`)
**Purpose:** Track development projects (TBD)

**Differences from MSP:**
- Focus on code/features vs. client issues
- Git integration
- Project-based (not client-based)
- Billability default: false

**Status:** To be fully defined

---

### Normal Mode (`/normal`)
**Purpose:** General work, research, learning

**Characteristics:**
- No client_id or project_id assignment
- Lighter tracking than MSP mode
- Captures decisions, findings, learnings
- Billability default: false

**Use Cases:**
- Research and exploration
- General questions
- Internal infrastructure work (non-client)
- Learning/experimentation
- Documentation

**Knowledge Retention:**
- Preserves context from previous modes
- Only clears client/project assignment
- Queryable knowledge base

---

## Storage Strategy

### SQL Database (MariaDB)
**Location:** Jupiter (172.16.3.20)
**Database:** `msp_tracking`
**Tables:** 36 total

**Rationale:**
- Structured queries ("show all work for Client X in January")
- Relational data (clients → projects → sessions → credentials)
- Fast indexing even with years of data
- No merge conflicts (single source of truth)
- Time tracking and billing calculations
- Report generation capabilities

**Categories:**
1. Core MSP Tracking (6 tables) - includes `machines`
2. Client & Infrastructure (7 tables)
3. Credentials & Security (4 tables)
4. Work Details (6 tables)
5. Failure Analysis & Insights (3 tables)
6. Tagging & Categorization (3 tables)
7. System & Audit (2 tables)
8. External Integrations (3 tables)
9. Junction Tables (2 tables)

**Estimated Storage:** 1-2 GB per year (compressed)

---

## Machine Detection System

### Auto-Detection on Session Start

**Fingerprint Generation:**
```javascript
const crypto = require("crypto");
const os = require("os");

// Fingerprint: SHA-256 of hostname|username|platform|home_directory
// Example input: "ACG-M-L5090|MikeSwanson|win32|C:\Users\MikeSwanson"
const fingerprint = crypto
  .createHash("sha256")
  .update([os.hostname(), os.userInfo().username, process.platform, os.homedir()].join("|"))
  .digest("hex");
```

**Capabilities Tracked:**
- VPN access (per client profiles)
- Docker availability
- PowerShell/shell version
- Available MCPs (claude-in-chrome, filesystem, etc.)
- Available Skills (pdf, commit, review-pr, etc.)
- OS-specific package managers
- Preferred shell (powershell, zsh, bash, cmd)

**Benefits:**
- Never suggest Docker commands on machines without Docker
- Never suggest VPN-required access from non-VPN machines
- Use version-compatible syntax for PowerShell/tools
- Check MCP/Skill availability before calling
- Track which sessions were done on which machines

---

## OS-Specific Command Selection

### Platform Detection
**Machine Detection Agent provides:**
- `platform`: "win32", "darwin", "linux"
- `preferred_shell`: "powershell", "zsh", "bash", "cmd"
- `package_manager_commands`: {"install": "choco install {pkg}", ...}

### Command Mapping Examples

| Task | Windows | macOS | Linux |
|------|---------|-------|-------|
| List files | `Get-ChildItem` | `ls -la` | `ls -la` |
| Process list | `Get-Process` | `ps aux` | `ps aux` |
| IP config | `ipconfig` | `ifconfig` | `ip addr` |
| Package install | `choco install` | `brew install` | `apt install` |

**Benefits:**
- No cross-platform errors
- Commands always work on current platform
- Shell syntax matches current environment
- Package manager suggestions platform-appropriate
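The mapping above can be expressed as a lookup keyed by platform. A minimal sketch, assuming a static table; the `COMMAND_MAP` name and `select_command` helper are illustrative, not part of the spec:

```python
# Illustrative platform-aware command selection (names are hypothetical).
COMMAND_MAP = {
    "list_files":      {"win32": "Get-ChildItem", "darwin": "ls -la",       "linux": "ls -la"},
    "process_list":    {"win32": "Get-Process",   "darwin": "ps aux",       "linux": "ps aux"},
    "ip_config":       {"win32": "ipconfig",      "darwin": "ifconfig",     "linux": "ip addr"},
    "package_install": {"win32": "choco install", "darwin": "brew install", "linux": "apt install"},
}

def select_command(task: str, platform: str) -> str:
    """Return the platform-appropriate command for a task, or raise if unmapped."""
    try:
        return COMMAND_MAP[task][platform]
    except KeyError:
        raise ValueError(f"No command mapping for task={task!r} on platform={platform!r}")
```

The Machine Detection Agent's `platform` value would feed directly into the second argument.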

---

## Failure Logging & Learning System

### Self-Improving Architecture

**Workflow:**
1. Command executes on infrastructure
2. Environment Context Agent pre-checked constraints
3. If failure occurs: Detailed logging to `commands_run`
4. Failure Analysis Agent identifies patterns
5. Creates `failure_patterns` entry
6. Updates `environmental_insights`
7. Future suggestions avoid this failure

**Example Learning Cycle:**
```
Problem: Suggested "Get-LocalUser" on Server 2008
Failure: Command not recognized (PowerShell 2.0 only)

Logged:
- commands_run: success=false, error_message, failure_category
- failure_patterns: "PS7 cmdlets on Server 2008" → use WMI
- environmental_insights: "Server 2008: PowerShell 2.0 limitations"
- infrastructure.environmental_notes: updated

Future Behavior:
- Environment Context Agent checks before suggesting
- Main Claude suggests WMI alternatives automatically
- Never repeats this mistake
```

**Database Tables:**
- `commands_run` - Every command with success/failure
- `operation_failures` - Non-command failures
- `failure_patterns` - Aggregated patterns
- `environmental_insights` - Generated insights per infrastructure

**Benefits:**
- Self-improving system (each failure makes it smarter)
- Reduced user friction (no repeated corrections)
- Institutional knowledge capture
- Proactive problem prevention

---

## Technology Stack

### API Framework: FastAPI (Python)
**Rationale:**
- Async performance for concurrent requests
- Auto-generated OpenAPI/Swagger docs
- Type safety with Pydantic models
- SQLAlchemy ORM for complex queries
- Built-in background tasks
- Industry-standard testing (pytest)
- Alembic for database migrations

### Authentication: JWT Tokens
**Rationale:**
- Stateless (no DB lookup to validate)
- Claims-based (permissions, scopes, expiration)
- Refresh token pattern for long-term access
- Multiple clients/machines supported
- Short-lived tokens minimize compromise risk

**Token Types:**
- Access Token: 1 hour expiration
- Refresh Token: 30 days expiration
- Agent Tokens: Session-scoped, auto-issued
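To illustrate the claims-based design, here is a stdlib-only sketch of issuing an HS256 access token with the 1-hour expiry above. The production API would use a library such as python-jose; `make_access_token` is a hypothetical helper:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_access_token(subject: str, secret: str, scopes: list, lifetime_s: int = 3600) -> str:
    """Build an HS256 JWT carrying subject, scopes, and a 1-hour expiry."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {"sub": subject, "scopes": scopes, "iat": now, "exp": now + lifetime_s}
    signing_input = f"{_b64url(json.dumps(header).encode())}.{_b64url(json.dumps(claims).encode())}"
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"
```

The stateless property follows from this shape: any holder of the secret can validate the signature and read the `scopes` and `exp` claims without a database lookup.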

### Configuration Storage: Gitea (Private Repo)
**Rationale:**
- Multi-machine sync
- Version controlled
- Single source of truth
- Token rotation = one commit, all machines sync
- Encrypted token values (git-crypt)

**Repo:** `azcomputerguru/msp-config`

**File Structure:**
```
msp-api-config.json
├── api_url (https://msp-api.azcomputerguru.com)
├── refresh_token (encrypted)
└── database_schema_version (for migration tracking)
```

### Deployment: Docker Container
**Container:** `msp-api`
**Server:** Jupiter (172.16.3.20)

**Components:**
- FastAPI application (Python 3.11+)
- SQLAlchemy + Alembic (ORM and migrations)
- JWT auth library (python-jose)
- Pydantic validation
- Gunicorn/Uvicorn ASGI server
- Health checks endpoint
- Mounted logs: `/var/log/msp-api/`

**Reverse Proxy:** Nginx with Let's Encrypt SSL

---

## External Integrations (Future)

### Planned Integrations

**SyncroMSP (PSA/RMM):**
- Ticket search and linking
- Auto-post session summaries
- Time tracking synchronization

**MSP Backups:**
- Pull backup status reports
- Check backup failures
- Export statistics

**Zapier:**
- Webhook triggers
- Bi-directional automation
- Multi-step workflows

**Future:**
- Autotask, ConnectWise (PSA)
- Datto RMM
- IT Glue (Documentation)
- Microsoft Teams (notifications)

### Integration Architecture

**Database Tables:**
- `external_integrations` - Track all integration actions
- `integration_credentials` - OAuth/API keys (encrypted)
- `ticket_links` - Session-to-ticket relationships

**Agent:** Integration Workflow Agent handles multi-step workflows

**Example Workflow:**
```
User: "Update Dataforth ticket with today's work and attach backup report"

Integration Workflow Agent:
1. Search SyncroMSP for ticket
2. Generate work summary from session
3. Update ticket with comment
4. Pull backup report from MSP Backups
5. Attach report to ticket
6. Log all actions to database

Returns: "✓ Updated ticket #12345, attached report"
```

---

## Security Architecture

### Encryption
- **Credentials:** AES-256-GCM at rest
- **Transport:** HTTPS only (TLS 1.2+)
- **Tokens:** Encrypted in Gitea config
- **Key Management:** Environment variable or vault
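A sketch of the at-rest credential encryption using the `cryptography` package's AES-GCM primitive. The helper names are illustrative, and the key is assumed to come from the environment or a vault as noted above:

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_credential(plaintext: str, key: bytes) -> bytes:
    """AES-256-GCM encrypt; the 12-byte nonce is prepended to ciphertext+tag."""
    nonce = os.urandom(12)  # unique per encryption, never reused with the same key
    return nonce + AESGCM(key).encrypt(nonce, plaintext.encode(), None)

def decrypt_credential(blob: bytes, key: bytes) -> str:
    """Split off the nonce and decrypt; raises if the tag check fails (tamper detection)."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()
```

GCM's authentication tag gives tamper detection for free: a modified ciphertext fails decryption rather than yielding garbage.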

### Authentication
- JWT-based with scopes (msp:read, msp:write, msp:admin)
- Token rotation supported
- Revocation list for compromised tokens
- Agent-specific tokens (session-scoped)

### Audit Logging
- All credential access → `credential_audit_log`
- All API requests → `api_audit_log`
- All agent actions logged with parent session
- User ID, IP address, timestamp recorded

### Input Validation
- Pydantic models validate all inputs
- SQL injection prevention (SQLAlchemy ORM)
- Rate limiting (100 req/min, stricter for credentials)

---

## Agent Communication Pattern

```
User: "Show me all work for Dataforth in January"
    ↓
Main Claude: Understands request, validates parameters
    ↓
Launches Database Query Agent: "Query Dataforth sessions in January 2026"
    ↓
Agent:
- Queries API: GET /api/v1/sessions?client=Dataforth&date_from=2026-01-01
- Processes 15 sessions
- Extracts key info: dates, categories, billable hours, outcomes
- Generates concise summary
    ↓
Agent Returns:
"Dataforth - January 2026:
 15 sessions, 38.5 billable hours
 Main projects: DOS machines (8 sessions), Network migration (5), M365 (2)
 Categories: Infrastructure (60%), Troubleshooting (25%), Config (15%)
 Key outcomes: Completed UPDATE.BAT v2.0, migrated DNS to UDM"
    ↓
Main Claude: Presents summary to user, ready for follow-up questions
```

**Context Saved:** The agent processed 500+ rows of data; main Claude received only a 200-word summary.

---

## Infrastructure Design

### Jupiter Server Components

**Docker Container:** `msp-api`
- FastAPI application
- SQLAlchemy + Alembic
- JWT authentication
- Gunicorn/Uvicorn
- Health checks
- Prometheus metrics (optional)

**MariaDB Database:** `msp_tracking`
- Connection pooling (SQLAlchemy)
- Automated backups (critical MSP data)
- Schema versioned with Alembic
- 36 tables, indexed for performance

**Nginx Reverse Proxy:**
- HTTPS with Let's Encrypt
- Rate limiting
- Access logs
- Proxies to: msp-api.azcomputerguru.com

---

## Local Machine Structure

```
D:\ClaudeTools\
├── .claude/
│   ├── commands/
│   │   ├── msp.md (MSP Mode slash command)
│   │   ├── dev.md (Development Mode)
│   │   └── normal.md (Normal Mode)
│   ├── msp-api-config.json (synced from Gitea)
│   ├── API_SPEC.md (this system)
│   └── ARCHITECTURE_OVERVIEW.md (you are here)
├── MSP-MODE-SPEC.md (master specification)
└── .git/ (synced to Gitea)
```

---

## Benefits Summary

### Context Preservation
- Main Claude stays focused on conversation
- Agents handle data processing (90-99% context saved)
- User gets concise results without context pollution

### Scalability
- Multiple agents run in parallel
- Each agent has full context window for its task
- Complex operations don't consume main context
- Designed for team expansion (multiple technicians)

### Information Density
- Agents process raw data, return summaries
- Dense storage format (more info, fewer words)
- Queryable historical knowledge base
- Cross-session and cross-machine context

### Self-Improvement
- Every failure logged and analyzed
- Environmental constraints learned automatically
- Suggestions become smarter over time
- Never repeat the same mistake

### User Experience
- Auto-categorization (minimal user input)
- Machine-aware suggestions (capability-based)
- Platform-specific commands (no cross-platform errors)
- Proactive warnings about limitations
- Seamless multi-machine operation

---

## Implementation Status

- ✅ Architecture designed
- ✅ Database schema (36 tables)
- ✅ Agent types defined (13 agents)
- ✅ API endpoints specified
- ⏳ FastAPI implementation
- ⏳ Database deployment on Jupiter
- ⏳ JWT authentication flow
- ⏳ Agent token system
- ⏳ Machine detection implementation
- ⏳ MSP Mode slash command
- ⏳ External integrations

---

## Design Principles

1. **Agent-Based Execution** - Preserve main context at all costs
2. **Information Density** - Brief but complete data capture
3. **Self-Improvement** - Learn from every failure
4. **Multi-Machine Support** - Seamless cross-device operation
5. **Security First** - Encrypted credentials, audit logging
6. **Scalability** - Designed for team growth
7. **Separation of Concerns** - Main instance = conversation, Agents = data

---

## Next Steps

1. Deploy MariaDB schema on Jupiter
2. Implement FastAPI endpoints
3. Build JWT authentication system
4. Create agent token mechanism
5. Implement Machine Detection Agent
6. Build MSP Mode slash command
7. Test agent coordination patterns
8. Deploy to production (msp-api.azcomputerguru.com)

---

## Version History

**v1.0.0 (2026-01-16):**
- Initial architecture documentation
- 13 specialized agents defined
- Machine detection system
- OS-specific command selection
- Failure logging and learning system
- External integrations design
- Complete technology stack

561
.claude/CONTEXT_RECALL_ARCHITECTURE.md
Normal file

# Context Recall System - Architecture

Visual architecture and data flow for the Claude Code Context Recall System.

## System Overview

```
┌─────────────────────────────────────────────────────────────────┐
│                      Claude Code Session                        │
│                                                                 │
│  ┌──────────────┐         ┌──────────────┐                      │
│  │ User writes  │         │ Task         │                      │
│  │ message      │         │ completes    │                      │
│  └──────┬───────┘         └──────┬───────┘                      │
│         │                        │                              │
│         ▼                        ▼                              │
│  ┌─────────────────────┐  ┌─────────────────────┐               │
│  │ user-prompt-submit  │  │ task-complete       │               │
│  │ hook triggers       │  │ hook triggers       │               │
│  └─────────┬───────────┘  └─────────┬───────────┘               │
└────────────┼──────────────────────────────────────┼─────────────┘
             │                                      │
             │  ┌──────────────────────────────────┐│
             │  │ .claude/context-recall-          ││
             └──┤ config.env                       ├┘
                │ (JWT_TOKEN, PROJECT_ID, etc.)    │
                └──────────────────────────────────┘
             │                                      │
             ▼                                      ▼
┌────────────────────────────┐        ┌────────────────────────────┐
│ GET /api/conversation-     │        │ POST /api/conversation-    │
│ contexts/recall            │        │ contexts                   │
│                            │        │                            │
│ Query Parameters:          │        │ POST /api/project-states   │
│ - project_id               │        │                            │
│ - min_relevance_score      │        │ Payload:                   │
│ - limit                    │        │ - context summary          │
└────────────┬───────────────┘        │ - metadata                 │
             │                        │ - relevance score          │
             │                        └────────────┬───────────────┘
             │                                     │
             ▼                                     ▼
┌─────────────────────────────────────────────────────────────────┐
│                      FastAPI Application                        │
│                                                                 │
│  ┌──────────────────────────┐  ┌───────────────────────────┐    │
│  │ Context Recall Logic     │  │ Context Save Logic        │    │
│  │ - Filter by relevance    │  │ - Create context record   │    │
│  │ - Sort by score          │  │ - Update project state    │    │
│  │ - Format for display     │  │ - Extract metadata        │    │
│  └──────────┬───────────────┘  └───────────┬───────────────┘    │
│             │                              │                    │
│             ▼                              ▼                    │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │                  Database Access Layer                   │   │
│  │                    (SQLAlchemy ORM)                      │   │
│  └──────────────────────────┬───────────────────────────────┘   │
└─────────────────────────────┼───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                      PostgreSQL Database                        │
│                                                                 │
│  ┌────────────────────────┐    ┌─────────────────────────┐      │
│  │ conversation_contexts  │    │ project_states          │      │
│  │                        │    │                         │      │
│  │ - id (UUID)            │    │ - id (UUID)             │      │
│  │ - project_id (FK)      │    │ - project_id (FK)       │      │
│  │ - context_type         │    │ - state_type            │      │
│  │ - title                │    │ - state_data (JSONB)    │      │
│  │ - dense_summary        │    │ - created_at            │      │
│  │ - relevance_score      │    └─────────────────────────┘      │
│  │ - metadata (JSONB)     │                                     │
│  │ - created_at           │    ┌─────────────────────────┐      │
│  │ - updated_at           │    │ projects                │      │
│  └────────────────────────┘    │                         │      │
│                                │ - id (UUID)             │      │
│                                │ - name                  │      │
│                                │ - description           │      │
│                                │ - project_type          │      │
│                                └─────────────────────────┘      │
└─────────────────────────────────────────────────────────────────┘
```

## Data Flow: Context Recall

```
1. User writes message in Claude Code
   │
   ▼
2. user-prompt-submit hook executes
   │
   ├─ Load config from .claude/context-recall-config.env
   ├─ Detect PROJECT_ID (git config or remote URL hash)
   ├─ Check if CONTEXT_RECALL_ENABLED=true
   │
   ▼
3. HTTP GET /api/conversation-contexts/recall
   │
   ├─ Headers: Authorization: Bearer {JWT_TOKEN}
   ├─ Query: ?project_id={ID}&limit=10&min_relevance_score=5.0
   │
   ▼
4. API processes request
   │
   ├─ Authenticate JWT token
   ├─ Query database:
   │    SELECT * FROM conversation_contexts
   │    WHERE project_id = {ID}
   │    AND relevance_score >= 5.0
   │    ORDER BY relevance_score DESC, created_at DESC
   │    LIMIT 10
   │
   ▼
5. API returns JSON array of contexts
   [
     {
       "id": "uuid",
       "title": "Session: 2025-01-15",
       "dense_summary": "...",
       "relevance_score": 8.5,
       "context_type": "session_summary",
       "metadata": {...}
     },
     ...
   ]
   │
   ▼
6. Hook formats contexts as Markdown
   │
   ├─ Parse JSON response
   ├─ Format each context with title, score, type
   ├─ Include summary and metadata
   │
   ▼
7. Hook outputs formatted markdown
   ## 📚 Previous Context

   ### 1. Session: 2025-01-15 (Score: 8.5/10)
   *Type: session_summary*

   [Summary content...]
   │
   ▼
8. Claude Code injects context before user message
   │
   ▼
9. Claude processes message WITH context
```
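The recall request in step 3 can be sketched in Python. The actual hook is a shell script; `build_recall_request` is a hypothetical helper shown only to make the query shape concrete:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_recall_request(base_url: str, token: str, project_id: str,
                         limit: int = 10, min_score: float = 5.0) -> Request:
    """Construct GET /api/conversation-contexts/recall with auth header and filters."""
    query = urlencode({"project_id": project_id, "limit": limit,
                       "min_relevance_score": min_score})
    return Request(f"{base_url}/api/conversation-contexts/recall?{query}",
                   headers={"Authorization": f"Bearer {token}"})
```

The hook would send this with a short timeout (3-5 s) and exit silently on any failure, per the error-handling philosophy later in this document.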
## Data Flow: Context Saving

```
1. User completes task in Claude Code
   │
   ▼
2. task-complete hook executes
   │
   ├─ Load config from .claude/context-recall-config.env
   ├─ Detect PROJECT_ID
   ├─ Gather task information:
   │   ├─ Git branch (git rev-parse --abbrev-ref HEAD)
   │   ├─ Git commit (git rev-parse --short HEAD)
   │   ├─ Changed files (git diff --name-only)
   │   └─ Timestamp
   │
   ▼
3. Build context payload
   {
     "project_id": "{PROJECT_ID}",
     "context_type": "session_summary",
     "title": "Session: 2025-01-15T14:30:00Z",
     "dense_summary": "Task completed on branch...",
     "relevance_score": 7.0,
     "metadata": {
       "git_branch": "main",
       "git_commit": "a1b2c3d",
       "files_modified": "file1.py,file2.py",
       "timestamp": "2025-01-15T14:30:00Z"
     }
   }
   │
   ▼
4. HTTP POST /api/conversation-contexts
   │
   ├─ Headers:
   │   ├─ Authorization: Bearer {JWT_TOKEN}
   │   └─ Content-Type: application/json
   ├─ Body: [context payload]
   │
   ▼
5. API processes request
   │
   ├─ Authenticate JWT token
   ├─ Validate payload
   ├─ Insert into database:
   │    INSERT INTO conversation_contexts
   │    (id, project_id, context_type, title,
   │     dense_summary, relevance_score, metadata)
   │    VALUES (...)
   │
   ▼
6. Build project state payload
   {
     "project_id": "{PROJECT_ID}",
     "state_type": "task_completion",
     "state_data": {
       "last_task_completion": "2025-01-15T14:30:00Z",
       "last_git_commit": "a1b2c3d",
       "last_git_branch": "main",
       "recent_files": "file1.py,file2.py"
     }
   }
   │
   ▼
7. HTTP POST /api/project-states
   │
   ├─ Headers: Authorization: Bearer {JWT_TOKEN}
   ├─ Body: [state payload]
   │
   ▼
8. API updates project state
   │
   ├─ Upsert project state record
   ├─ Merge state_data with existing
   │
   ▼
9. Context saved ✓
   │
   ▼
10. Available for future recall
```
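Step 3's payload assembly from git metadata can be sketched as follows; `build_context_payload` is a hypothetical helper, and the default relevance score of 7.0 mirrors the example above:

```python
from datetime import datetime, timezone

def build_context_payload(project_id: str, branch: str, commit: str,
                          files: list, relevance: float = 7.0) -> dict:
    """Assemble the POST /api/conversation-contexts body from git metadata."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return {
        "project_id": project_id,
        "context_type": "session_summary",
        "title": f"Session: {ts}",
        "dense_summary": f"Task completed on branch {branch} at commit {commit}; "
                         f"modified: {', '.join(files) or 'no files'}",
        "relevance_score": relevance,
        "metadata": {"git_branch": branch, "git_commit": commit,
                     "files_modified": ",".join(files), "timestamp": ts},
    }
```

The project-state payload in step 6 is built the same way, keyed on `state_type` instead of `context_type`.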
## Authentication Flow

```
┌──────────────┐
│   Initial    │
│   Setup      │
└──────┬───────┘
       │
       ▼
┌─────────────────────────────────────┐
│ bash scripts/setup-context-recall.sh│
└──────┬──────────────────────────────┘
       │
       ├─ Prompt for username/password
       │
       ▼
┌──────────────────────────────────────┐
│ POST /api/auth/login                 │
│                                      │
│ Request:                             │
│ {                                    │
│   "username": "admin",               │
│   "password": "secret"               │
│ }                                    │
└──────┬───────────────────────────────┘
       │
       ▼
┌──────────────────────────────────────┐
│ Response:                            │
│ {                                    │
│   "access_token": "eyJ...",          │
│   "token_type": "bearer",            │
│   "expires_in": 86400                │
│ }                                    │
└──────┬───────────────────────────────┘
       │
       ▼
┌──────────────────────────────────────┐
│ Save to .claude/context-recall-      │
│ config.env:                          │
│                                      │
│ JWT_TOKEN=eyJ...                     │
└──────┬───────────────────────────────┘
       │
       ▼
┌──────────────────────────────────────┐
│ All API requests include:            │
│ Authorization: Bearer eyJ...         │
└──────────────────────────────────────┘
```

## Project Detection Flow

```
Hook needs PROJECT_ID
  │
  ├─ Check: $CLAUDE_PROJECT_ID set?
  │   └─ Yes → Use it
  │   └─ No → Continue detection
  │
  ├─ Check: git config --local claude.projectid
  │   └─ Found → Use it
  │   └─ Not found → Continue detection
  │
  ├─ Get: git config --get remote.origin.url
  │   └─ Found → Hash URL → Use as PROJECT_ID
  │   └─ Not found → No PROJECT_ID available
  │
  └─ If no PROJECT_ID:
      └─ Silent exit (no context available)
```
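The final fallback hashes the remote URL into a stable PROJECT_ID. A minimal sketch; the 32-character truncation and the helper name are assumptions, not specified above:

```python
import hashlib

def project_id_from_remote(remote_url: str) -> str:
    """Derive a deterministic PROJECT_ID from a git remote URL.

    The same URL on every machine yields the same ID, so no coordination
    between machines is needed.
    """
    return hashlib.sha256(remote_url.strip().encode()).hexdigest()[:32]
```

Because the hash is deterministic, two clones of the same repository on different machines recall the same project's contexts.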
## Database Schema

```sql
-- Projects table
CREATE TABLE projects (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL,
    description TEXT,
    project_type VARCHAR(50),
    metadata JSONB,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

-- Conversation contexts table
CREATE TABLE conversation_contexts (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    project_id UUID REFERENCES projects(id),
    context_type VARCHAR(50),
    title VARCHAR(500),
    dense_summary TEXT NOT NULL,
    relevance_score DECIMAL(3,1) CHECK (relevance_score >= 0 AND relevance_score <= 10),
    metadata JSONB,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

-- PostgreSQL does not support inline INDEX clauses in CREATE TABLE,
-- so indexes are created separately.
CREATE INDEX idx_project_relevance ON conversation_contexts (project_id, relevance_score DESC);
CREATE INDEX idx_project_type ON conversation_contexts (project_id, context_type);
CREATE INDEX idx_created ON conversation_contexts (created_at DESC);

-- Project states table
CREATE TABLE project_states (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    project_id UUID REFERENCES projects(id),
    state_type VARCHAR(50),
    state_data JSONB NOT NULL,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_project_state ON project_states (project_id, state_type);
```

## Component Interaction
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────┐
|
||||
│ File System │
|
||||
│ │
|
||||
│ .claude/ │
|
||||
│ ├── hooks/ │
|
||||
│ │ ├── user-prompt-submit ◄─── Executed by Claude Code │
|
||||
│ │ └── task-complete ◄─── Executed by Claude Code │
|
||||
│ │ │
|
||||
│ └── context-recall-config.env ◄─── Read by hooks │
|
||||
│ │
|
||||
└────────────────┬────────────────────────────────────────────┘
|
||||
│
|
||||
│ (Hooks read config and call API)
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────┐
|
||||
│ FastAPI Application (http://localhost:8000) │
|
||||
│ │
|
||||
│ Endpoints: │
|
||||
│ ├── POST /api/auth/login │
|
||||
│ ├── GET /api/conversation-contexts/recall │
|
||||
│ ├── POST /api/conversation-contexts │
|
||||
│ ├── POST /api/project-states │
|
||||
│ └── GET /api/projects/{id} │
|
||||
│ │
|
||||
└────────────────┬────────────────────────────────────────────┘
|
||||
│
|
||||
│ (API queries/updates database)
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────┐
|
||||
│ PostgreSQL Database │
|
||||
│ │
|
||||
│ Tables: │
|
||||
│ ├── projects │
|
||||
│ ├── conversation_contexts │
|
||||
│ └── project_states │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────┘
|
||||
```

## Error Handling

```
Hook Execution
    │
    ├─ Config file missing?
    │   └─ Silent exit (context recall unavailable)
    │
    ├─ PROJECT_ID not detected?
    │   └─ Silent exit (no project context)
    │
    ├─ JWT_TOKEN missing?
    │   └─ Silent exit (authentication unavailable)
    │
    ├─ API unreachable? (timeout 3-5s)
    │   └─ Silent exit (API offline)
    │
    ├─ API returns error (401, 404, 500)?
    │   └─ Silent exit (log if debug enabled)
    │
    └─ Success
        └─ Process and inject context
```

**Philosophy:** Hooks NEVER break Claude Code. All failures are silent.
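The silent-exit ladder above can be sketched as a bash guard chain (a minimal illustration, not the shipped hook; the config path and endpoint come from this spec, everything else is an assumption):

```shell
#!/bin/bash
# Sketch of the silent-failure pattern: every guard returns success,
# so a missing config, project, token, or API never blocks Claude Code.
CONFIG_FILE="${CONFIG_FILE:-.claude/context-recall-config.env}"

recall_context() {
  [ -f "$CONFIG_FILE" ] || return 0          # config missing: silent exit
  . "$CONFIG_FILE"
  [ -n "$CLAUDE_PROJECT_ID" ] || return 0    # no project context: silent exit
  [ -n "$JWT_TOKEN" ] || return 0            # authentication unavailable: silent exit
  curl -sf --max-time 3 \
    -H "Authorization: Bearer $JWT_TOKEN" \
    "$CLAUDE_API_URL/api/conversation-contexts/recall?project_id=$CLAUDE_PROJECT_ID" \
    || return 0                              # API offline or HTTP error: silent exit
}

recall_context
```

Whatever fails, the hook's exit status stays `0`; only a successful response reaches the context-injection step.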
## Performance Characteristics

```
Timeline for user-prompt-submit:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

0ms     Hook starts
        ├─ Load config (10ms)
        ├─ Detect project (5ms)
        │
15ms    HTTP request starts
        ├─ Connection (20ms)
        ├─ Query execution (50-100ms)
        ├─ Response formatting (10ms)
        │
145ms   Response received
        ├─ Parse JSON (10ms)
        ├─ Format markdown (30ms)
        │
185ms   Context injected
        │
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Total: ~200ms average overhead per message
Timeout: 3000ms (fails gracefully)
```

## Configuration Impact

```
┌──────────────────────────────────────┐
│ MIN_RELEVANCE_SCORE                  │
├──────────────────────────────────────┤
│ Low (3.0)                            │
│  ├─ More contexts recalled           │
│  ├─ Broader historical view          │
│  └─ Slower queries                   │
│                                      │
│ Medium (5.0)  ← Recommended          │
│  ├─ Balanced relevance/quantity      │
│  └─ Fast queries                     │
│                                      │
│ High (7.5)                           │
│  ├─ Only critical contexts           │
│  ├─ Very focused                     │
│  └─ Fastest queries                  │
└──────────────────────────────────────┘

┌──────────────────────────────────────┐
│ MAX_CONTEXTS                         │
├──────────────────────────────────────┤
│ Few (5)                              │
│  ├─ Focused context                  │
│  ├─ Shorter prompts                  │
│  └─ Faster processing                │
│                                      │
│ Medium (10)  ← Recommended           │
│  ├─ Good coverage                    │
│  └─ Reasonable prompt size           │
│                                      │
│ Many (20)                            │
│  ├─ Comprehensive context            │
│  ├─ Longer prompts                   │
│  └─ Slower Claude processing         │
└──────────────────────────────────────┘
```

## Security Model

```
┌─────────────────────────────────────────────────────────────┐
│                     Security Boundaries                     │
│                                                             │
│  1. Authentication                                          │
│     ├─ JWT tokens (24h expiry)                              │
│     ├─ Bcrypt password hashing                              │
│     └─ Bearer token in Authorization header                 │
│                                                             │
│  2. Authorization                                           │
│     ├─ Project-level access control                         │
│     ├─ User can only access own projects                    │
│     └─ Token includes user_id claim                         │
│                                                             │
│  3. Data Protection                                         │
│     ├─ Config file gitignored                               │
│     ├─ JWT tokens never in version control                  │
│     └─ HTTPS recommended for production                     │
│                                                             │
│  4. Input Validation                                        │
│     ├─ API validates all payloads                           │
│     ├─ SQL injection protected (ORM)                        │
│     └─ JSON schema validation                               │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

## Deployment Architecture

```
Development:
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│ Claude Code  │────▶│     API      │────▶│  PostgreSQL  │
│  (Desktop)   │     │ (localhost)  │     │ (localhost)  │
└──────────────┘     └──────────────┘     └──────────────┘

Production:
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│ Claude Code  │────▶│     API      │────▶│  PostgreSQL  │
│  (Desktop)   │     │   (Docker)   │     │ (RDS/Cloud)  │
└──────────────┘     └──────┬───────┘     └──────────────┘
                            │
                            │ (HTTPS)
                            ▼
                     ┌──────────────┐
                     │ Redis Cache  │
                     │  (Optional)  │
                     └──────────────┘
```

## Scalability Considerations

```
Database Optimization:
├─ Indexes on (project_id, relevance_score)
├─ Indexes on (project_id, context_type)
├─ Indexes on created_at for time-based queries
└─ JSONB indexes on metadata for complex queries

Caching Strategy:
├─ Redis for frequently-accessed contexts
├─ Cache key: project_id + min_score + limit
├─ TTL: 5 minutes
└─ Invalidate on new context creation

Query Optimization:
├─ Limit results (MAX_CONTEXTS)
├─ Filter early (MIN_RELEVANCE_SCORE)
├─ Sort in database (not application)
└─ Paginate for large result sets
```
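Combined, the query-optimization points above amount to a recall query along these lines (a sketch against the schema in this document; the `:`-prefixed parameters are placeholders):

```sql
-- Recall query sketch: filter early, sort in the database, bound the result
SELECT id, context_type, relevance_score, metadata
FROM conversation_contexts
WHERE project_id = :project_id
  AND relevance_score >= :min_relevance_score   -- MIN_RELEVANCE_SCORE
ORDER BY relevance_score DESC                   -- served by idx_project_relevance
LIMIT :max_contexts;                            -- MAX_CONTEXTS
```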

This architecture provides a robust, scalable, and secure system for context recall in Claude Code sessions.
175
.claude/CONTEXT_RECALL_QUICK_START.md
Normal file
@@ -0,0 +1,175 @@

# Context Recall - Quick Start

One-page reference for the Claude Code Context Recall System.

## Setup (First Time)

```bash
# 1. Start API
uvicorn api.main:app --reload

# 2. Setup (in new terminal)
bash scripts/setup-context-recall.sh

# 3. Test
bash scripts/test-context-recall.sh
```

## Files

```
.claude/
├── hooks/
│   ├── user-prompt-submit          # Recalls context before messages
│   ├── task-complete               # Saves context after tasks
│   └── README.md                   # Hook documentation
├── context-recall-config.env       # Configuration (gitignored)
└── CONTEXT_RECALL_QUICK_START.md

scripts/
├── setup-context-recall.sh         # One-command setup
└── test-context-recall.sh          # System testing
```

## Configuration

Edit `.claude/context-recall-config.env`:

```bash
CLAUDE_API_URL=http://localhost:8000   # API URL
CLAUDE_PROJECT_ID=                     # Auto-detected
JWT_TOKEN=                             # From setup script
CONTEXT_RECALL_ENABLED=true            # Enable/disable
MIN_RELEVANCE_SCORE=5.0                # Filter threshold (0-10)
MAX_CONTEXTS=10                        # Max contexts per query
```

## How It Works

```
User Message → [Recall Context] → Claude (with context) → Response
                                                              ↓
                                                       [Save Context]
```

### user-prompt-submit Hook
- Runs **before** each user message
- Calls `GET /api/conversation-contexts/recall`
- Injects relevant context from previous sessions
- Falls back gracefully if API unavailable

### task-complete Hook
- Runs **after** task completion
- Calls `POST /api/conversation-contexts`
- Saves conversation summary
- Updates project state

## Common Commands

```bash
# Re-run setup (get new JWT token)
bash scripts/setup-context-recall.sh

# Test system
bash scripts/test-context-recall.sh

# Test hooks manually
source .claude/context-recall-config.env
bash .claude/hooks/user-prompt-submit

# Enable debug mode
echo "DEBUG_CONTEXT_RECALL=true" >> .claude/context-recall-config.env

# Disable context recall
echo "CONTEXT_RECALL_ENABLED=false" >> .claude/context-recall-config.env

# Check API health
curl http://localhost:8000/health

# View your project
source .claude/context-recall-config.env
curl -H "Authorization: Bearer $JWT_TOKEN" \
  http://localhost:8000/api/projects/$CLAUDE_PROJECT_ID

# Query contexts manually
curl "http://localhost:8000/api/conversation-contexts/recall?project_id=$CLAUDE_PROJECT_ID&limit=5" \
  -H "Authorization: Bearer $JWT_TOKEN"
```

## Troubleshooting

| Problem | Solution |
|---------|----------|
| Context not appearing | Check API is running: `curl http://localhost:8000/health` |
| Hooks not executing | Make executable: `chmod +x .claude/hooks/*` |
| JWT token expired | Re-run setup: `bash scripts/setup-context-recall.sh` |
| Context not saving | Check project ID: `echo $CLAUDE_PROJECT_ID` |
| Need to debug hook output | Enable debug: `DEBUG_CONTEXT_RECALL=true` in config |

## API Endpoints

- `GET /api/conversation-contexts/recall` - Get relevant contexts
- `POST /api/conversation-contexts` - Save new context
- `POST /api/project-states` - Update project state
- `POST /api/auth/login` - Get JWT token
- `GET /api/projects` - List projects

## Configuration Parameters

### MIN_RELEVANCE_SCORE (0.0 - 10.0)
- **5.0** - Balanced (recommended)
- **7.0** - Only high-quality contexts
- **3.0** - Include more historical context

### MAX_CONTEXTS (1 - 50)
- **10** - Balanced (recommended)
- **5** - Focused, minimal context
- **20** - Comprehensive history

## Security

- JWT tokens stored in `.claude/context-recall-config.env`
- File is gitignored (never commit!)
- Tokens expire after 24 hours
- Re-run setup to refresh

## Example Output

When context is available:

```markdown
## 📚 Previous Context

The following context has been automatically recalled from previous sessions:

### 1. Database Schema Updates (Score: 8.5/10)
*Type: technical_decision*

Updated the Project model to include new fields for MSP integration...

---

### 2. API Endpoint Changes (Score: 7.2/10)
*Type: session_summary*

Implemented new REST endpoints for context recall...

---
```

## Performance

- Hook overhead: <500ms per message
- API query time: <100ms
- Timeouts: 3-5 seconds
- Silent failures (don't break Claude)

## Full Documentation

- **Setup Guide:** `CONTEXT_RECALL_SETUP.md`
- **Hook Details:** `.claude/hooks/README.md`
- **API Spec:** `.claude/API_SPEC.md`

---

**Quick Start:** `bash scripts/setup-context-recall.sh` and you're done!
892
.claude/SCHEMA_CONTEXT.md
Normal file
@@ -0,0 +1,892 @@

# Learning & Context Schema

**MSP Mode Database Schema - Self-Learning System**

**Status:** Designed 2026-01-15
**Database:** msp_tracking (MariaDB on Jupiter)

---

## Overview

The Learning & Context subsystem enables MSP Mode to learn from every failure, build environmental awareness, and prevent recurring mistakes. This self-improving system captures failure patterns, generates actionable insights, and proactively checks environmental constraints before making suggestions.

**Core Principle:** Every failure is a learning opportunity. Agents must never make the same mistake twice.

**Related Documentation:**
- [MSP-MODE-SPEC.md](../MSP-MODE-SPEC.md) - Full system specification
- [ARCHITECTURE_OVERVIEW.md](ARCHITECTURE_OVERVIEW.md) - Agent architecture
- [SCHEMA_CREDENTIALS.md](SCHEMA_CREDENTIALS.md) - Security tables
- [API_SPEC.md](API_SPEC.md) - API endpoints

---

## Tables Summary

| Table | Purpose | Auto-Generated |
|-------|---------|----------------|
| `environmental_insights` | Generated insights per client/infrastructure | Yes |
| `problem_solutions` | Issue tracking with root cause and resolution | Partial |
| `failure_patterns` | Aggregated failure analysis and learnings | Yes |
| `operation_failures` | Non-command failures (API, file ops, network) | Yes |

**Total:** 4 tables

**Specialized Agents:**
- **Failure Analysis Agent** - Analyzes failures, identifies patterns, generates insights
- **Environment Context Agent** - Pre-checks environmental constraints before operations
- **Problem Pattern Matching Agent** - Searches historical solutions for similar issues

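The Environment Context Agent's pre-check can be as simple as a priority-ordered lookup against the `environmental_insights` table defined below (a sketch; the `:infrastructure_id` parameter is a placeholder):

```sql
-- Active insights for a target machine, most important constraints first
SELECT insight_title, insight_description, priority
FROM environmental_insights
WHERE infrastructure_id = :infrastructure_id
  AND is_active = true
ORDER BY priority DESC;
```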
---

## Table Schemas

### `environmental_insights`

Auto-generated insights about client infrastructure constraints, limitations, and quirks. Used by the Environment Context Agent to prevent failures before they occur.

```sql
CREATE TABLE environmental_insights (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    client_id UUID REFERENCES clients(id) ON DELETE CASCADE,
    infrastructure_id UUID REFERENCES infrastructure(id) ON DELETE CASCADE,

    -- Insight classification
    insight_category VARCHAR(100) NOT NULL CHECK(insight_category IN (
        'command_constraints', 'service_configuration', 'version_limitations',
        'custom_installations', 'network_constraints', 'permissions',
        'compatibility', 'performance', 'security'
    )),
    insight_title VARCHAR(500) NOT NULL,
    insight_description TEXT NOT NULL,  -- markdown formatted

    -- Examples and documentation
    examples TEXT,             -- JSON array of command/config examples
    affected_operations TEXT,  -- JSON array: ["user_management", "service_restart"]

    -- Source and verification
    source_pattern_id UUID REFERENCES failure_patterns(id) ON DELETE SET NULL,
    confidence_level VARCHAR(20) CHECK(confidence_level IN ('confirmed', 'likely', 'suspected')),
    verification_count INTEGER DEFAULT 1,  -- how many times verified
    last_verified TIMESTAMP,

    -- Priority (1-10, higher = more important to avoid)
    priority INTEGER DEFAULT 5 CHECK(priority BETWEEN 1 AND 10),

    -- Status
    is_active BOOLEAN DEFAULT true,  -- false if pattern no longer applies
    superseded_by UUID REFERENCES environmental_insights(id),  -- if replaced by better insight

    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_insights_client (client_id),
    INDEX idx_insights_infrastructure (infrastructure_id),
    INDEX idx_insights_category (insight_category),
    INDEX idx_insights_priority (priority),
    INDEX idx_insights_active (is_active)
);
```

**Real-World Examples:**

**D2TESTNAS - Custom WINS Installation:**
```json
{
  "infrastructure_id": "d2testnas-uuid",
  "client_id": "dataforth-uuid",
  "insight_category": "custom_installations",
  "insight_title": "WINS Service: Manual Samba installation (no native ReadyNAS service)",
  "insight_description": "**Installation:** Manually installed via Samba nmbd, not a native ReadyNAS service.\n\n**Constraints:**\n- No GUI service manager for WINS\n- Cannot use standard service management commands\n- Configuration via `/etc/frontview/samba/smb.conf.overrides`\n\n**Correct commands:**\n- Check status: `ssh root@192.168.0.9 'ps aux | grep nmbd'`\n- View config: `ssh root@192.168.0.9 'cat /etc/frontview/samba/smb.conf.overrides | grep wins'`\n- Restart: `ssh root@192.168.0.9 'service nmbd restart'`",
  "examples": [
    "ps aux | grep nmbd",
    "cat /etc/frontview/samba/smb.conf.overrides | grep wins",
    "service nmbd restart"
  ],
  "affected_operations": ["service_management", "wins_configuration"],
  "confidence_level": "confirmed",
  "verification_count": 3,
  "priority": 9
}
```

**AD2 - PowerShell Version Constraints:**
```json
{
  "infrastructure_id": "ad2-uuid",
  "client_id": "dataforth-uuid",
  "insight_category": "version_limitations",
  "insight_title": "Server 2022: PowerShell 5.1 command compatibility",
  "insight_description": "**PowerShell Version:** 5.1 (default)\n\n**Compatible:** Modern cmdlets work (Get-LocalUser, Get-LocalGroup)\n\n**Not available:** PowerShell 7 specific features\n\n**Remote execution:** Use Invoke-Command for remote operations",
  "examples": [
    "Get-LocalUser",
    "Get-LocalGroup",
    "Invoke-Command -ComputerName AD2 -ScriptBlock { Get-LocalUser }"
  ],
  "confidence_level": "confirmed",
  "verification_count": 5,
  "priority": 6
}
```

**Server 2008 - PowerShell 2.0 Limitations:**
```json
{
  "infrastructure_id": "old-server-2008-uuid",
  "insight_category": "version_limitations",
  "insight_title": "Server 2008: PowerShell 2.0 command compatibility",
  "insight_description": "**PowerShell Version:** 2.0 only\n\n**Avoid:** Get-LocalUser, Get-LocalGroup, New-LocalUser (not available in PS 2.0)\n\n**Use instead:** Get-WmiObject Win32_UserAccount, Get-WmiObject Win32_Group\n\n**Why:** Server 2008 predates modern PowerShell user management cmdlets",
  "examples": [
    "Get-WmiObject Win32_UserAccount",
    "Get-WmiObject Win32_Group",
    "Get-WmiObject Win32_UserAccount -Filter \"Name='username'\""
  ],
  "affected_operations": ["user_management", "group_management"],
  "confidence_level": "confirmed",
  "verification_count": 5,
  "priority": 8
}
```

**DOS Machines (TS-XX) - Batch Syntax Constraints:**
```json
{
  "infrastructure_id": "ts-27-uuid",
  "client_id": "dataforth-uuid",
  "insight_category": "command_constraints",
  "insight_title": "MS-DOS 6.22: Batch file syntax limitations",
  "insight_description": "**OS:** MS-DOS 6.22\n\n**No support for:**\n- `IF /I` (case insensitive) - added in Windows 2000\n- Long filenames (8.3 format only)\n- Unicode or special characters\n- Modern batch features\n\n**Workarounds:**\n- Use duplicate IF statements for upper/lowercase\n- Keep filenames to 8.3 format\n- Use basic batch syntax only",
  "examples": [
    "IF \"%1\"==\"STATUS\" GOTO STATUS",
    "IF \"%1\"==\"status\" GOTO STATUS",
    "COPY FILE.TXT BACKUP.TXT"
  ],
  "affected_operations": ["batch_scripting", "file_operations"],
  "confidence_level": "confirmed",
  "verification_count": 8,
  "priority": 10
}
```

**D2TESTNAS - SMB Protocol Constraints:**
```json
{
  "infrastructure_id": "d2testnas-uuid",
  "insight_category": "network_constraints",
  "insight_title": "ReadyNAS: SMB1/CORE protocol for DOS compatibility",
  "insight_description": "**Protocol:** CORE/SMB1 only (for DOS machine compatibility)\n\n**Implications:**\n- Modern SMB2/3 clients may need configuration\n- Use NetBIOS name, not IP address for DOS machines\n- Security risk: SMB1 deprecated due to vulnerabilities\n\n**Configuration:**\n- Set in `/etc/frontview/samba/smb.conf.overrides`\n- `min protocol = CORE`",
  "examples": [
    "NET USE Z: \\\\D2TESTNAS\\SHARE (from DOS)",
    "smbclient -L //192.168.0.9 -m SMB1"
  ],
  "confidence_level": "confirmed",
  "priority": 7
}
```

**Generated insights.md Example:**

When the Failure Analysis Agent runs, it generates markdown files for each client:

```markdown
# Environmental Insights: Dataforth

Auto-generated from failure patterns and verified operations.

## D2TESTNAS (192.168.0.9)

### Custom Installations

**WINS Service: Manual Samba installation**
- Manually installed via Samba nmbd, not native ReadyNAS service
- No GUI service manager for WINS
- Configure via `/etc/frontview/samba/smb.conf.overrides`
- Check status: `ssh root@192.168.0.9 'ps aux | grep nmbd'`

### Network Constraints

**SMB Protocol: CORE/SMB1 only**
- For DOS compatibility
- Modern SMB2/3 clients may need configuration
- Use NetBIOS name from DOS machines

## AD2 (192.168.0.6 - Server 2022)

### PowerShell Version

**Version:** PowerShell 5.1 (default)
- **Compatible:** Modern cmdlets work
- **Not available:** PowerShell 7 specific features

## TS-XX Machines (DOS 6.22)

### Command Constraints

**No support for:**
- `IF /I` (case insensitive) - use duplicate IF statements
- Long filenames (8.3 format only)
- Unicode or special characters
- Modern batch features

**Examples:**
```batch
REM Correct (DOS 6.22)
IF "%1"=="STATUS" GOTO STATUS
IF "%1"=="status" GOTO STATUS

REM Incorrect (requires Windows 2000+)
IF /I "%1"=="STATUS" GOTO STATUS
```
```

---

### `problem_solutions`

Issue tracking with root cause analysis and resolution documentation. Searchable historical knowledge base.

```sql
CREATE TABLE problem_solutions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    work_item_id UUID NOT NULL REFERENCES work_items(id) ON DELETE CASCADE,
    session_id UUID NOT NULL REFERENCES sessions(id) ON DELETE CASCADE,
    client_id UUID REFERENCES clients(id) ON DELETE SET NULL,
    infrastructure_id UUID REFERENCES infrastructure(id) ON DELETE SET NULL,

    -- Problem description
    problem_title VARCHAR(500) NOT NULL,
    problem_description TEXT NOT NULL,
    symptom TEXT,             -- what user/system exhibited
    error_message TEXT,       -- exact error code/message
    error_code VARCHAR(100),  -- structured error code

    -- Investigation
    investigation_steps TEXT,  -- JSON array of diagnostic commands/actions
    diagnostic_output TEXT,    -- key outputs that led to root cause
    investigation_duration_minutes INTEGER,

    -- Root cause
    root_cause TEXT NOT NULL,
    root_cause_category VARCHAR(100),  -- "configuration", "hardware", "software", "network"

    -- Solution
    solution_applied TEXT NOT NULL,
    solution_category VARCHAR(100),  -- "config_change", "restart", "replacement", "patch"
    commands_run TEXT,               -- JSON array of commands used to fix
    files_modified TEXT,             -- JSON array of config files changed

    -- Verification
    verification_method TEXT,
    verification_successful BOOLEAN DEFAULT true,
    verification_notes TEXT,

    -- Prevention and rollback
    rollback_plan TEXT,
    prevention_measures TEXT,  -- what was done to prevent recurrence

    -- Pattern tracking
    recurrence_count INTEGER DEFAULT 1,  -- if same problem reoccurs
    similar_problems TEXT,               -- JSON array of related problem_solution IDs
    tags TEXT,                           -- JSON array: ["ssl", "apache", "certificate"]

    -- Resolution
    resolved_at TIMESTAMP,
    time_to_resolution_minutes INTEGER,

    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_problems_work_item (work_item_id),
    INDEX idx_problems_session (session_id),
    INDEX idx_problems_client (client_id),
    INDEX idx_problems_infrastructure (infrastructure_id),
    INDEX idx_problems_category (root_cause_category),
    FULLTEXT idx_problems_search (problem_description, symptom, error_message, root_cause)
);
```

**Example Problem Solutions:**

**Apache SSL Certificate Expiration:**
```json
{
  "problem_title": "Apache SSL certificate expiration causing ERR_SSL_PROTOCOL_ERROR",
  "problem_description": "Website inaccessible via HTTPS. Browser shows ERR_SSL_PROTOCOL_ERROR.",
  "symptom": "Users unable to access website. SSL handshake failure.",
  "error_message": "ERR_SSL_PROTOCOL_ERROR",
  "investigation_steps": [
    "curl -I https://example.com",
    "openssl s_client -connect example.com:443",
    "systemctl status apache2",
    "openssl x509 -in /etc/ssl/certs/example.com.crt -text -noout"
  ],
  "diagnostic_output": "Certificate expiration: 2026-01-10 (3 days ago)",
  "root_cause": "SSL certificate expired on 2026-01-10. Certbot auto-renewal failed due to DNS validation issue.",
  "root_cause_category": "configuration",
  "solution_applied": "1. Fixed DNS TXT record for Let's Encrypt validation\n2. Ran: certbot renew --force-renewal\n3. Restarted Apache: systemctl restart apache2",
  "solution_category": "config_change",
  "commands_run": [
    "certbot renew --force-renewal",
    "systemctl restart apache2"
  ],
  "files_modified": [
    "/etc/apache2/sites-enabled/example.com.conf"
  ],
  "verification_method": "curl test successful. Browser loads HTTPS site without error.",
  "verification_successful": true,
  "prevention_measures": "Set up monitoring for certificate expiration (30 days warning). Fixed DNS automation for certbot.",
  "tags": ["ssl", "apache", "certificate", "certbot"],
  "time_to_resolution_minutes": 25
}
```

**PowerShell Compatibility Issue:**
```json
{
  "problem_title": "Get-LocalUser fails on Server 2008 (PowerShell 2.0)",
  "problem_description": "Attempting to list local users on Server 2008 using Get-LocalUser cmdlet",
  "symptom": "Command not recognized error",
  "error_message": "Get-LocalUser : The term 'Get-LocalUser' is not recognized as the name of a cmdlet",
  "error_code": "CommandNotFoundException",
  "investigation_steps": [
    "$PSVersionTable",
    "Get-Command Get-LocalUser",
    "Get-WmiObject Win32_OperatingSystem | Select Caption, Version"
  ],
  "root_cause": "Server 2008 has PowerShell 2.0 only. Get-LocalUser introduced in PowerShell 5.1 (Windows 10/Server 2016).",
  "root_cause_category": "software",
  "solution_applied": "Use WMI instead: Get-WmiObject Win32_UserAccount",
  "solution_category": "alternative_approach",
  "commands_run": [
    "Get-WmiObject Win32_UserAccount | Select Name, Disabled, LocalAccount"
  ],
  "verification_method": "Successfully retrieved local user list",
  "verification_successful": true,
  "prevention_measures": "Created environmental insight for all Server 2008 machines. Environment Context Agent now checks PowerShell version before suggesting cmdlets.",
  "tags": ["powershell", "server_2008", "compatibility", "user_management"],
  "recurrence_count": 5
}
```

**Queries:**

```sql
-- Find similar problems by error message
SELECT problem_title, solution_applied, created_at
FROM problem_solutions
WHERE MATCH(error_message) AGAINST('SSL_PROTOCOL_ERROR' IN BOOLEAN MODE)
ORDER BY created_at DESC;

-- Most common problems (by recurrence)
SELECT problem_title, recurrence_count, root_cause_category
FROM problem_solutions
WHERE recurrence_count > 1
ORDER BY recurrence_count DESC;

-- Recent solutions for client
SELECT problem_title, solution_applied, resolved_at
FROM problem_solutions
WHERE client_id = 'dataforth-uuid'
ORDER BY resolved_at DESC
LIMIT 10;
```

---

### `failure_patterns`

Aggregated failure insights learned from command/operation failures. Auto-generated by the Failure Analysis Agent.

```sql
CREATE TABLE failure_patterns (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    infrastructure_id UUID REFERENCES infrastructure(id) ON DELETE CASCADE,
    client_id UUID REFERENCES clients(id) ON DELETE CASCADE,

    -- Pattern identification
    pattern_type VARCHAR(100) NOT NULL CHECK(pattern_type IN (
        'command_compatibility', 'version_mismatch', 'permission_denied',
        'service_unavailable', 'configuration_error', 'environmental_limitation',
        'network_connectivity', 'authentication_failure', 'syntax_error'
    )),
    pattern_signature VARCHAR(500) NOT NULL,  -- "PowerShell 7 cmdlets on Server 2008"
    error_pattern TEXT,                       -- regex or keywords: "Get-LocalUser.*not recognized"

    -- Context
    affected_systems TEXT,      -- JSON array: ["all_server_2008", "D2TESTNAS"]
    affected_os_versions TEXT,  -- JSON array: ["Server 2008", "DOS 6.22"]
    triggering_commands TEXT,   -- JSON array of command patterns
    triggering_operations TEXT, -- JSON array of operation types

    -- Failure details
    failure_description TEXT NOT NULL,
    typical_error_messages TEXT,  -- JSON array of common error texts

    -- Resolution
    root_cause TEXT NOT NULL,            -- "Server 2008 only has PowerShell 2.0"
    recommended_solution TEXT NOT NULL,  -- "Use Get-WmiObject instead of Get-LocalUser"
    alternative_approaches TEXT,         -- JSON array of alternatives
    workaround_commands TEXT,            -- JSON array of working commands

    -- Metadata
    occurrence_count INTEGER DEFAULT 1,  -- how many times seen
    first_seen TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    last_seen TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    severity VARCHAR(20) CHECK(severity IN ('blocking', 'major', 'minor', 'info')),

    -- Status
    is_active BOOLEAN DEFAULT true,           -- false if pattern no longer applies (e.g., server upgraded)
    added_to_insights BOOLEAN DEFAULT false,  -- environmental_insight generated

    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_failure_infrastructure (infrastructure_id),
    INDEX idx_failure_client (client_id),
    INDEX idx_failure_pattern_type (pattern_type),
    INDEX idx_failure_signature (pattern_signature),
    INDEX idx_failure_active (is_active),
    INDEX idx_failure_severity (severity)
);
```
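The Problem Pattern Matching Agent could consult this table with a query along these lines (a sketch; it assumes `error_pattern` holds a MariaDB-compatible regex as described above, and the literal error text is illustrative):

```sql
-- Match an observed error against active failure patterns
SELECT pattern_signature, recommended_solution, severity
FROM failure_patterns
WHERE is_active = true
  AND 'Get-LocalUser : The term ''Get-LocalUser'' is not recognized' REGEXP error_pattern
ORDER BY occurrence_count DESC;
```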

**Example Failure Patterns:**

**PowerShell Version Incompatibility:**
```json
{
  "pattern_type": "command_compatibility",
  "pattern_signature": "Modern PowerShell cmdlets on Server 2008",
  "error_pattern": "(Get-LocalUser|Get-LocalGroup|New-LocalUser).*not recognized",
  "affected_systems": ["all_server_2008_machines"],
  "affected_os_versions": ["Server 2008", "Server 2008 R2"],
  "triggering_commands": [
    "Get-LocalUser",
    "Get-LocalGroup",
    "New-LocalUser",
    "Remove-LocalUser"
  ],
  "failure_description": "Modern PowerShell user management cmdlets fail on Server 2008 with 'not recognized' error",
  "typical_error_messages": [
    "Get-LocalUser : The term 'Get-LocalUser' is not recognized",
    "Get-LocalGroup : The term 'Get-LocalGroup' is not recognized"
  ],
  "root_cause": "Server 2008 has PowerShell 2.0 only. Modern user management cmdlets (Get-LocalUser, etc.) were introduced in PowerShell 5.1 (Windows 10/Server 2016).",
  "recommended_solution": "Use WMI for user/group management: Get-WmiObject Win32_UserAccount, Get-WmiObject Win32_Group",
  "alternative_approaches": [
    "Use Get-WmiObject Win32_UserAccount",
    "Use net user command",
    "Upgrade to PowerShell 5.1 (if possible on Server 2008 R2)"
  ],
  "workaround_commands": [
    "Get-WmiObject Win32_UserAccount",
    "Get-WmiObject Win32_Group",
    "net user"
  ],
  "occurrence_count": 5,
  "severity": "major",
  "added_to_insights": true
}
```

**DOS Batch Syntax Limitation:**
```json
{
  "pattern_type": "environmental_limitation",
  "pattern_signature": "Modern batch syntax on MS-DOS 6.22",
  "error_pattern": "IF /I.*Invalid switch",
  "affected_systems": ["all_dos_machines"],
  "affected_os_versions": ["MS-DOS 6.22"],
  "triggering_commands": [
    "IF /I \"%1\"==\"value\" ...",
    "Long filenames with spaces"
  ],
  "failure_description": "Modern batch file syntax not supported in MS-DOS 6.22",
  "typical_error_messages": [
    "Invalid switch - /I",
    "File not found (long filename)",
    "Bad command or file name"
  ],
  "root_cause": "DOS 6.22 does not support /I flag (added in Windows 2000), long filenames, or many modern batch features",
  "recommended_solution": "Use duplicate IF statements for upper/lowercase. Keep filenames to 8.3 format. Use basic batch syntax only.",
  "alternative_approaches": [
    "Duplicate IF for case-insensitive: IF \"%1\"==\"VALUE\" ... + IF \"%1\"==\"value\" ...",
    "Use 8.3 filenames only",
    "Avoid advanced batch features"
  ],
  "workaround_commands": [
    "IF \"%1\"==\"STATUS\" GOTO STATUS",
    "IF \"%1\"==\"status\" GOTO STATUS"
  ],
  "occurrence_count": 8,
  "severity": "blocking",
  "added_to_insights": true
}
```

**ReadyNAS Service Management:**
```json
{
  "pattern_type": "service_unavailable",
  "pattern_signature": "systemd commands on ReadyNAS",
  "error_pattern": "systemctl.*command not found",
  "affected_systems": ["D2TESTNAS"],
  "triggering_commands": [
    "systemctl status nmbd",
    "systemctl restart samba"
  ],
  "failure_description": "ReadyNAS does not use systemd for service management",
  "typical_error_messages": [
    "systemctl: command not found",
    "-ash: systemctl: not found"
  ],
  "root_cause": "ReadyNAS OS is based on older Linux without systemd. Uses traditional init scripts.",
  "recommended_solution": "Use 'service' command or direct process management: service nmbd status, ps aux | grep nmbd",
  "alternative_approaches": [
    "service nmbd status",
    "ps aux | grep nmbd",
    "/etc/init.d/nmbd status"
  ],
  "occurrence_count": 3,
  "severity": "major",
  "added_to_insights": true
}
```

---

### `operation_failures`

Non-command failures (API calls, integrations, file operations, network requests). Complements `commands_run` failure tracking.

```sql
CREATE TABLE operation_failures (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    session_id UUID REFERENCES sessions(id) ON DELETE CASCADE,
    work_item_id UUID REFERENCES work_items(id) ON DELETE CASCADE,
    client_id UUID REFERENCES clients(id) ON DELETE SET NULL,

    -- Operation details
    operation_type VARCHAR(100) NOT NULL CHECK(operation_type IN (
        'api_call', 'file_operation', 'network_request',
        'database_query', 'external_integration', 'service_restart',
        'backup_operation', 'restore_operation', 'migration'
    )),
    operation_description TEXT NOT NULL,
    target_system VARCHAR(255), -- host, URL, service name

    -- Failure details
    error_message TEXT NOT NULL,
    error_code VARCHAR(50), -- HTTP status, exit code, error number
    failure_category VARCHAR(100), -- "timeout", "authentication", "not_found", etc.
    stack_trace TEXT,

    -- Context
    request_data TEXT, -- JSON: what was attempted
    response_data TEXT, -- JSON: error response
    environment_snapshot TEXT, -- JSON: relevant env vars, versions

    -- Resolution
    resolution_applied TEXT,
    resolved BOOLEAN DEFAULT false,
    resolved_at TIMESTAMP,
    time_to_resolution_minutes INTEGER,

    -- Pattern linkage
    related_pattern_id UUID REFERENCES failure_patterns(id),

    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_op_failure_session (session_id),
    INDEX idx_op_failure_type (operation_type),
    INDEX idx_op_failure_category (failure_category),
    INDEX idx_op_failure_resolved (resolved),
    INDEX idx_op_failure_client (client_id)
);
```

**Example Operation Failures:**

**SyncroMSP API Timeout:**
```json
{
  "operation_type": "api_call",
  "operation_description": "Search SyncroMSP tickets for Dataforth",
  "target_system": "https://azcomputerguru.syncromsp.com/api/v1",
  "error_message": "Request timeout after 30 seconds",
  "error_code": "ETIMEDOUT",
  "failure_category": "timeout",
  "request_data": {
    "endpoint": "/api/v1/tickets",
    "params": {"customer_id": 12345, "status": "open"}
  },
  "response_data": null,
  "resolution_applied": "Increased timeout to 60 seconds. Added retry logic with exponential backoff.",
  "resolved": true,
  "time_to_resolution_minutes": 15
}
```
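
The resolution above mentions retry with exponential backoff. A minimal sketch of that pattern, with hypothetical names (`with_retry`, `flaky`) and an injectable `sleep` so the delays can be inspected:

```python
import time

def with_retry(fn, max_attempts=4, base_delay=1.0, sleep=time.sleep):
    """Retry fn() on timeout, doubling the delay between attempts: 1s, 2s, 4s, ..."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            sleep(base_delay * (2 ** attempt))

# Simulated API call that times out twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("ETIMEDOUT")
    return {"tickets": []}

delays = []
result = with_retry(flaky, sleep=delays.append)  # capture delays instead of sleeping
```

In a real integration the wrapped call would be the HTTP request to the SyncroMSP endpoint, with the timeout raised to 60 seconds as described in the failure record.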

**File Upload Permission Denied:**
```json
{
  "operation_type": "file_operation",
  "operation_description": "Upload backup file to NAS",
  "target_system": "D2TESTNAS:/mnt/backups",
  "error_message": "Permission denied: /mnt/backups/db_backup_2026-01-15.sql",
  "error_code": "EACCES",
  "failure_category": "permission",
  "environment_snapshot": {
    "user": "backupuser",
    "directory_perms": "drwxr-xr-x root root"
  },
  "resolution_applied": "Changed directory ownership: chown -R backupuser:backupgroup /mnt/backups",
  "resolved": true
}
```

**Database Query Performance:**
```json
{
  "operation_type": "database_query",
  "operation_description": "Query sessions table for large date range",
  "target_system": "MariaDB msp_tracking",
  "error_message": "Query execution time: 45 seconds (threshold: 5 seconds)",
  "failure_category": "performance",
  "request_data": {
    "query": "SELECT * FROM sessions WHERE session_date BETWEEN '2020-01-01' AND '2026-01-15'"
  },
  "resolution_applied": "Added index on session_date column. Query now runs in 0.3 seconds.",
  "resolved": true
}
```

---

## Self-Learning Workflow

### 1. Failure Detection and Logging

**Command Execution with Failure Tracking:**

```
User: "Check WINS status on D2TESTNAS"

Main Claude → Environment Context Agent:
- Queries infrastructure table for D2TESTNAS
- Reads environmental_notes: "Manual WINS install, no native service"
- Reads environmental_insights for D2TESTNAS
- Returns: "D2TESTNAS has manually installed WINS (not native ReadyNAS service)"

Main Claude suggests command based on environmental context:
- Executes: ssh root@192.168.0.9 'systemctl status nmbd'

Command fails:
- success = false
- exit_code = 127
- error_message = "systemctl: command not found"
- failure_category = "command_compatibility"

Trigger Failure Analysis Agent:
- Analyzes error: ReadyNAS doesn't use systemd
- Identifies correct approach: "service nmbd status" or "ps aux | grep nmbd"
- Creates failure_pattern entry
- Updates environmental_insights with correction
- Returns resolution to Main Claude

Main Claude tries corrected command:
- Executes: ssh root@192.168.0.9 'ps aux | grep nmbd'
- success = true
- Updates original failure record with resolution
```
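
The matching step in the flow above hinges on testing error output against the stored `error_pattern` regexes from `failure_patterns`. A minimal sketch of that lookup, with pattern rows shown as in-memory dicts (in production they would come from the database):

```python
import re

# Simplified failure_patterns rows; error_pattern is a regex, as in the schema
FAILURE_PATTERNS = [
    {"pattern_signature": "systemd commands on ReadyNAS",
     "error_pattern": r"systemctl.*command not found",
     "recommended_solution": "Use 'service' command or direct process management"},
    {"pattern_signature": "Modern PowerShell cmdlets on Server 2008",
     "error_pattern": r"(Get-LocalUser|Get-LocalGroup|New-LocalUser).*not recognized",
     "recommended_solution": "Use WMI: Get-WmiObject Win32_UserAccount"},
]

def match_failure(error_message):
    """Return the first known pattern whose regex matches the error output, else None."""
    for pattern in FAILURE_PATTERNS:
        if re.search(pattern["error_pattern"], error_message):
            return pattern
    return None

hit = match_failure("-ash: systemctl: command not found")
```

The same function serves both sides of the workflow: classifying a fresh failure after execution, and (run against a proposed command's known error signatures) warning before execution.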

### 2. Pattern Analysis (Periodic Agent Run)

**Failure Analysis Agent runs periodically:**

**Agent Task:** "Analyze recent failures and update environmental insights"

1. **Query failures:**

   ```sql
   SELECT * FROM commands_run
   WHERE success = false AND resolved = false
   ORDER BY created_at DESC;

   SELECT * FROM operation_failures
   WHERE resolved = false
   ORDER BY created_at DESC;
   ```

2. **Group by pattern:**
   - Group by infrastructure_id, error_pattern, failure_category
   - Identify recurring patterns

3. **Create/update failure_patterns:**
   - If pattern seen 3+ times → Create failure_pattern
   - Increment occurrence_count for existing patterns
   - Update last_seen timestamp

4. **Generate environmental_insights:**
   - Transform failure_patterns into actionable insights
   - Create markdown-formatted descriptions
   - Add command examples
   - Set priority based on severity and frequency

5. **Update infrastructure environmental_notes:**
   - Add constraints to infrastructure.environmental_notes
   - Set powershell_version, shell_type, limitations

6. **Generate insights.md file:**
   - Query all environmental_insights for client
   - Format as markdown
   - Save to D:\ClaudeTools\insights\[client-name].md
   - Agents read this file before making suggestions
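The grouping and promotion logic in steps 2 and 3 can be sketched as follows. Failure rows are shown as dicts, and `promote_patterns` is a hypothetical name; the 3-occurrence threshold matches the rule above:

```python
from collections import Counter

def promote_patterns(failures, threshold=3):
    """Group unresolved failures by (infrastructure_id, failure_category) and
    promote any group seen `threshold`+ times to a failure_pattern stub."""
    counts = Counter(
        (f["infrastructure_id"], f["failure_category"]) for f in failures
    )
    return [
        {"infrastructure_id": infra, "failure_category": cat, "occurrence_count": n}
        for (infra, cat), n in counts.items()
        if n >= threshold
    ]

# Three ReadyNAS compatibility failures and one isolated timeout
failures = (
    [{"infrastructure_id": "d2testnas", "failure_category": "command_compatibility"}] * 3
    + [{"infrastructure_id": "ad2", "failure_category": "timeout"}]
)
patterns = promote_patterns(failures)  # only the recurring group is promoted
```

The real agent would additionally merge against existing `failure_patterns` rows (incrementing `occurrence_count` and `last_seen`) rather than always creating stubs.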

### 3. Pre-Operation Environment Check

**Environment Context Agent runs before operations:**

**Agent Task:** "Check environmental constraints for D2TESTNAS before command suggestion"

1. **Query infrastructure:**

   ```sql
   SELECT environmental_notes, powershell_version, shell_type, limitations
   FROM infrastructure
   WHERE id = 'd2testnas-uuid';
   ```

2. **Query environmental_insights:**

   ```sql
   SELECT insight_title, insight_description, examples, priority
   FROM environmental_insights
   WHERE infrastructure_id = 'd2testnas-uuid'
     AND is_active = true
   ORDER BY priority DESC;
   ```

3. **Query failure_patterns:**

   ```sql
   SELECT pattern_signature, recommended_solution, workaround_commands
   FROM failure_patterns
   WHERE infrastructure_id = 'd2testnas-uuid'
     AND is_active = true;
   ```

4. **Check proposed command compatibility:**
   - Proposed: "systemctl status nmbd"
   - Pattern match: "systemctl.*command not found"
   - **Result:** INCOMPATIBLE
   - Recommended: "ps aux | grep nmbd"

5. **Return environmental context:**

   ```
   Environmental Context for D2TESTNAS:
   - ReadyNAS OS (Linux-based)
   - Manual WINS installation (Samba nmbd)
   - No systemd (use 'service' or ps commands)
   - SMB1/CORE protocol for DOS compatibility

   Recommended commands:
   ✓ ps aux | grep nmbd
   ✓ service nmbd status
   ✗ systemctl status nmbd (not available)
   ```

Main Claude uses this context to suggest the correct approach.

---

## Benefits

### 1. Self-Improving System
- Each failure makes the system smarter
- Patterns identified automatically
- Insights generated without manual documentation
- Knowledge accumulates over time

### 2. Reduced User Friction
- User doesn't have to keep correcting the same mistakes
- Claude learns environmental constraints once
- Suggestions are environmentally aware from the start
- Proactive problem prevention

### 3. Institutional Knowledge Capture
- All environmental quirks documented in the database
- Survives across sessions and Claude instances
- Queryable: "What are known issues with D2TESTNAS?"
- Transferable to new team members

### 4. Proactive Problem Prevention
- Environment Context Agent prevents failures before they happen
- Suggests compatible alternatives automatically
- Warns about known limitations
- Avoids wasting time on incompatible approaches

### 5. Audit Trail
- Every failure tracked with full context
- Resolution history for troubleshooting
- Pattern analysis for infrastructure planning
- ROI tracking: time saved by avoiding repeat failures

---

## Integration with Other Schemas

**Sources data from:**
- `commands_run` - Command execution failures
- `infrastructure` - System capabilities and limitations
- `work_items` - Context for failures
- `sessions` - Session context for operations

**Provides data to:**
- Environment Context Agent (pre-operation checks)
- Problem Pattern Matching Agent (solution lookup)
- MSP Mode (intelligent suggestions)
- Reporting (failure analysis, improvement metrics)

---

## Example Queries

### Find all insights for a client
```sql
SELECT ei.insight_title, ei.insight_description, i.hostname
FROM environmental_insights ei
JOIN infrastructure i ON ei.infrastructure_id = i.id
WHERE ei.client_id = 'dataforth-uuid'
  AND ei.is_active = true
ORDER BY ei.priority DESC;
```

### Search for similar problems
```sql
SELECT ps.problem_title, ps.solution_applied, ps.created_at
FROM problem_solutions ps
WHERE MATCH(ps.problem_description, ps.symptom, ps.error_message)
      AGAINST('SSL certificate' IN BOOLEAN MODE)
ORDER BY ps.created_at DESC
LIMIT 10;
```

### Active failure patterns
```sql
SELECT fp.pattern_signature, fp.occurrence_count, fp.recommended_solution
FROM failure_patterns fp
WHERE fp.is_active = true
  AND fp.severity IN ('blocking', 'major')
ORDER BY fp.occurrence_count DESC;
```

### Unresolved operation failures
```sql
SELECT of.operation_type, of.target_system, of.error_message, of.created_at
FROM operation_failures of
WHERE of.resolved = false
ORDER BY of.created_at DESC;
```

---

**Document Version:** 1.0
**Last Updated:** 2026-01-15
**Author:** MSP Mode Schema Design Team
.claude/SCHEMA_CORE.md (new file, 448 lines)

# SCHEMA_CORE.md

**Source:** MSP-MODE-SPEC.md
**Section:** Core MSP Tracking Tables
**Date:** 2026-01-15

## Overview

Core tables for MSP Mode tracking system: machines, clients, projects, sessions, and tasks. These tables form the foundation of the MSP tracking database and are referenced by most other tables in the system.

---

## Core MSP Tracking Tables (6 tables)

### `machines`

Technician's machines (laptops, desktops) used for MSP work.

```sql
CREATE TABLE machines (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),

    -- Machine identification (auto-detected)
    hostname VARCHAR(255) NOT NULL UNIQUE, -- from `hostname` command
    machine_fingerprint VARCHAR(500) UNIQUE, -- hostname + username + platform hash

    -- Environment details
    friendly_name VARCHAR(255), -- "Main Laptop", "Home Desktop", "Travel Laptop"
    machine_type VARCHAR(50) CHECK(machine_type IN ('laptop', 'desktop', 'workstation', 'vm')),
    platform VARCHAR(50), -- "win32", "darwin", "linux"
    os_version VARCHAR(100),
    username VARCHAR(255), -- from `whoami`
    home_directory VARCHAR(500), -- user home path

    -- Capabilities
    has_vpn_access BOOLEAN DEFAULT false, -- can connect to client networks
    vpn_profiles TEXT, -- JSON array: ["dataforth", "grabb", "internal"]
    has_docker BOOLEAN DEFAULT false,
    has_powershell BOOLEAN DEFAULT false,
    powershell_version VARCHAR(20),
    has_ssh BOOLEAN DEFAULT true,
    has_git BOOLEAN DEFAULT true,

    -- Network context
    typical_network_location VARCHAR(100), -- "home", "office", "mobile"
    static_ip VARCHAR(45), -- if has static IP

    -- Claude Code context
    claude_working_directory VARCHAR(500), -- primary working dir
    additional_working_dirs TEXT, -- JSON array

    -- Tool versions
    installed_tools TEXT, -- JSON: {"git": "2.40", "docker": "24.0", "python": "3.11"}

    -- MCP Servers & Skills (NEW)
    available_mcps TEXT, -- JSON array: ["claude-in-chrome", "filesystem", "custom-mcp"]
    mcp_capabilities TEXT, -- JSON: {"chrome": {"version": "1.0", "features": ["screenshots"]}}
    available_skills TEXT, -- JSON array: ["pdf", "commit", "review-pr", "custom-skill"]
    skill_paths TEXT, -- JSON: {"/pdf": "/path/to/pdf-skill", ...}

    -- OS-Specific Commands
    preferred_shell VARCHAR(50), -- "powershell", "bash", "zsh", "cmd"
    package_manager_commands TEXT, -- JSON: {"install": "choco install", "update": "choco upgrade"}

    -- Status
    is_primary BOOLEAN DEFAULT false, -- primary machine
    is_active BOOLEAN DEFAULT true,
    last_seen TIMESTAMP,
    last_session_id UUID, -- last session from this machine

    -- Notes
    notes TEXT, -- "Travel laptop - limited tools, no VPN"

    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_machines_hostname (hostname),
    INDEX idx_machines_fingerprint (machine_fingerprint),
    INDEX idx_machines_is_active (is_active),
    INDEX idx_machines_platform (platform)
);
```

**Machine Fingerprint Generation:**
```javascript
fingerprint = SHA256(hostname + "|" + username + "|" + platform + "|" + home_directory)
// Example: SHA256("ACG-M-L5090|MikeSwanson|win32|C:\Users\MikeSwanson")
```

**Auto-Detection on Session Start:**
```javascript
hostname = exec("hostname")   // "ACG-M-L5090"
username = exec("whoami")     // "MikeSwanson" or "AzureAD+MikeSwanson"
platform = process.platform   // "win32", "darwin", "linux"
home_dir = process.env.HOME || process.env.USERPROFILE

fingerprint = SHA256(`${hostname}|${username}|${platform}|${home_dir}`)

// Query database: SELECT * FROM machines WHERE machine_fingerprint = ?
// If not found: Create new machine record
// If found: Update last_seen, return machine_id
```
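
The same fingerprint can be produced in Python with the standard library; a minimal sketch mirroring the pipe-joined scheme above (`machine_fingerprint` is a hypothetical helper name):

```python
import hashlib

def machine_fingerprint(hostname, username, platform, home_dir):
    """SHA-256 over the pipe-joined identity fields, hex-encoded."""
    raw = f"{hostname}|{username}|{platform}|{home_dir}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

fp = machine_fingerprint("ACG-M-L5090", "MikeSwanson", "win32", r"C:\Users\MikeSwanson")
```

The hex digest is 64 characters, well within the `VARCHAR(500)` column, and deterministic: the same machine always maps to the same row.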

**Examples:**

**ACG-M-L5090 (Main Laptop):**
```json
{
  "hostname": "ACG-M-L5090",
  "friendly_name": "Main Laptop",
  "platform": "win32",
  "os_version": "Windows 11 Pro",
  "has_vpn_access": true,
  "vpn_profiles": ["dataforth", "grabb", "internal"],
  "has_docker": true,
  "powershell_version": "7.4",
  "preferred_shell": "powershell",
  "available_mcps": ["claude-in-chrome", "filesystem"],
  "available_skills": ["pdf", "commit", "review-pr", "frontend-design"],
  "package_manager_commands": {
    "install": "choco install {package}",
    "update": "choco upgrade {package}",
    "list": "choco list --local-only"
  }
}
```

**Mike-MacBook (Development Machine):**
```json
{
  "hostname": "Mikes-MacBook-Pro",
  "friendly_name": "MacBook Pro",
  "platform": "darwin",
  "os_version": "macOS 14.2",
  "has_vpn_access": false,
  "has_docker": true,
  "powershell_version": null,
  "preferred_shell": "zsh",
  "available_mcps": ["filesystem"],
  "available_skills": ["commit", "review-pr"],
  "package_manager_commands": {
    "install": "brew install {package}",
    "update": "brew upgrade {package}",
    "list": "brew list"
  }
}
```

**Travel-Laptop (Limited):**
```json
{
  "hostname": "TRAVEL-WIN",
  "friendly_name": "Travel Laptop",
  "platform": "win32",
  "os_version": "Windows 10 Home",
  "has_vpn_access": false,
  "vpn_profiles": [],
  "has_docker": false,
  "powershell_version": "5.1",
  "preferred_shell": "powershell",
  "available_mcps": [],
  "available_skills": [],
  "notes": "Minimal toolset, no Docker, no VPN - use for light work only"
}
```

---

### `clients`

Master table for all client organizations.

```sql
CREATE TABLE clients (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL UNIQUE,
    type VARCHAR(50) NOT NULL CHECK(type IN ('msp_client', 'internal', 'project')),
    network_subnet VARCHAR(100), -- e.g., "192.168.0.0/24"
    domain_name VARCHAR(255), -- AD domain or primary domain
    m365_tenant_id UUID, -- Microsoft 365 tenant ID
    primary_contact VARCHAR(255),
    notes TEXT,
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_clients_type (type),
    INDEX idx_clients_name (name)
);
```

**Examples:** Dataforth, Grabb & Durando, Valley Wide Plastering, AZ Computer Guru (internal)

---

### `projects`

Individual projects/engagements for clients.

```sql
CREATE TABLE projects (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    client_id UUID NOT NULL REFERENCES clients(id) ON DELETE CASCADE,
    name VARCHAR(255) NOT NULL,
    slug VARCHAR(255) UNIQUE, -- directory name: "dataforth-dos"
    category VARCHAR(50) CHECK(category IN (
        'client_project', 'internal_product', 'infrastructure',
        'website', 'development_tool', 'documentation'
    )),
    status VARCHAR(50) DEFAULT 'working' CHECK(status IN (
        'complete', 'working', 'blocked', 'pending', 'critical', 'deferred'
    )),
    priority VARCHAR(20) CHECK(priority IN ('critical', 'high', 'medium', 'low')),
    description TEXT,
    started_date DATE,
    target_completion_date DATE,
    completed_date DATE,
    estimated_hours DECIMAL(10,2),
    actual_hours DECIMAL(10,2),
    gitea_repo_url VARCHAR(500),
    notes TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_projects_client (client_id),
    INDEX idx_projects_status (status),
    INDEX idx_projects_slug (slug)
);
```

**Examples:** dataforth-dos, gururmm, grabb-website-move

---

### `sessions`

Work sessions with time tracking (enhanced with machine tracking).

```sql
CREATE TABLE sessions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    client_id UUID REFERENCES clients(id) ON DELETE SET NULL,
    project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
    machine_id UUID REFERENCES machines(id) ON DELETE SET NULL, -- NEW: which machine
    session_date DATE NOT NULL,
    start_time TIMESTAMP,
    end_time TIMESTAMP,
    duration_minutes INTEGER, -- auto-calculated or manual
    status VARCHAR(50) DEFAULT 'completed' CHECK(status IN (
        'completed', 'in_progress', 'blocked', 'pending'
    )),
    session_title VARCHAR(500) NOT NULL,
    summary TEXT, -- markdown summary
    is_billable BOOLEAN DEFAULT false,
    billable_hours DECIMAL(10,2),
    technician VARCHAR(255), -- "Mike Swanson", etc.
    session_log_file VARCHAR(500), -- path to .md file
    notes TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_sessions_client (client_id),
    INDEX idx_sessions_project (project_id),
    INDEX idx_sessions_date (session_date),
    INDEX idx_sessions_billable (is_billable),
    INDEX idx_sessions_machine (machine_id)
);
```
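
The "auto-calculated" path for `duration_minutes` is a simple timestamp difference; a minimal sketch (helper name hypothetical, rounding policy an assumption):

```python
from datetime import datetime

def session_duration_minutes(start_time, end_time):
    """Auto-calculate duration_minutes from the two timestamps, rounded down."""
    return int((end_time - start_time).total_seconds() // 60)

start = datetime(2026, 1, 15, 9, 0)
end = datetime(2026, 1, 15, 10, 45)
minutes = session_duration_minutes(start, end)
billable_hours = round(minutes / 60, 2)  # fits the DECIMAL(10,2) column
```

Manual entry overrides this value when a session spans interruptions that should not be billed.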

---

### `pending_tasks`

Open items across all clients/projects.

```sql
CREATE TABLE pending_tasks (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    client_id UUID REFERENCES clients(id) ON DELETE CASCADE,
    project_id UUID REFERENCES projects(id) ON DELETE CASCADE,
    work_item_id UUID REFERENCES work_items(id) ON DELETE SET NULL,
    title VARCHAR(500) NOT NULL,
    description TEXT,
    priority VARCHAR(20) CHECK(priority IN ('critical', 'high', 'medium', 'low')),
    blocked_by TEXT, -- what's blocking this
    assigned_to VARCHAR(255),
    due_date DATE,
    status VARCHAR(50) DEFAULT 'pending' CHECK(status IN (
        'pending', 'in_progress', 'blocked', 'completed', 'cancelled'
    )),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    completed_at TIMESTAMP,

    INDEX idx_pending_tasks_client (client_id),
    INDEX idx_pending_tasks_status (status),
    INDEX idx_pending_tasks_priority (priority)
);
```

---

### `tasks`

Task/checklist management for tracking implementation steps, analysis work, and other agent activities.

```sql
CREATE TABLE tasks (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),

    -- Task hierarchy
    parent_task_id UUID REFERENCES tasks(id) ON DELETE CASCADE,
    task_order INTEGER NOT NULL,

    -- Task details
    title VARCHAR(500) NOT NULL,
    description TEXT,
    task_type VARCHAR(100) CHECK(task_type IN (
        'implementation', 'research', 'review', 'deployment',
        'testing', 'documentation', 'bugfix', 'analysis'
    )),

    -- Status tracking
    status VARCHAR(50) NOT NULL CHECK(status IN (
        'pending', 'in_progress', 'blocked', 'completed', 'cancelled'
    )),
    blocking_reason TEXT, -- Why blocked (if status='blocked')

    -- Context
    session_id UUID REFERENCES sessions(id) ON DELETE CASCADE,
    client_id UUID REFERENCES clients(id) ON DELETE SET NULL,
    project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
    assigned_agent VARCHAR(100), -- Which agent is handling this

    -- Timing
    estimated_complexity VARCHAR(20) CHECK(estimated_complexity IN (
        'trivial', 'simple', 'moderate', 'complex', 'very_complex'
    )),
    started_at TIMESTAMP,
    completed_at TIMESTAMP,

    -- Context data (JSON)
    task_context TEXT, -- Detailed context for this task
    dependencies TEXT, -- JSON array of dependency task_ids

    -- Metadata
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_tasks_session (session_id),
    INDEX idx_tasks_status (status),
    INDEX idx_tasks_parent (parent_task_id),
    INDEX idx_tasks_client (client_id),
    INDEX idx_tasks_project (project_id)
);
```

---

## Tagging System Tables (3 tables)

### `tags`

Flexible tagging system for work items and sessions.

```sql
CREATE TABLE tags (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(100) UNIQUE NOT NULL,
    category VARCHAR(50) CHECK(category IN (
        'technology', 'client', 'infrastructure',
        'problem_type', 'action', 'service'
    )),
    description TEXT,
    usage_count INTEGER DEFAULT 0, -- auto-increment on use
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_tags_category (category),
    INDEX idx_tags_name (name)
);
```

**Pre-populated tags:** 157+ tags identified from analysis
- 58 technology tags (docker, postgresql, apache, etc.)
- 24 infrastructure tags (jupiter, saturn, pfsense, etc.)
- 20+ client tags
- 30 problem type tags (connection-timeout, ssl-error, etc.)
- 25 action tags (migration, upgrade, cleanup, etc.)

---

### `work_item_tags` (Junction Table)

Many-to-many relationship: work items ↔ tags.

```sql
CREATE TABLE work_item_tags (
    work_item_id UUID NOT NULL REFERENCES work_items(id) ON DELETE CASCADE,
    tag_id UUID NOT NULL REFERENCES tags(id) ON DELETE CASCADE,
    PRIMARY KEY (work_item_id, tag_id),

    INDEX idx_wit_work_item (work_item_id),
    INDEX idx_wit_tag (tag_id)
);
```

---

### `session_tags` (Junction Table)

Many-to-many relationship: sessions ↔ tags.

```sql
CREATE TABLE session_tags (
    session_id UUID NOT NULL REFERENCES sessions(id) ON DELETE CASCADE,
    tag_id UUID NOT NULL REFERENCES tags(id) ON DELETE CASCADE,
    PRIMARY KEY (session_id, tag_id),

    INDEX idx_st_session (session_id),
    INDEX idx_st_tag (tag_id)
);
```
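
The "auto-increment on use" comment on `tags.usage_count` plus the junction insert can be sketched as one small transaction. The production database is MariaDB; SQLite is used here only to keep the sketch self-contained, and `tag_session` is a hypothetical helper:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT UNIQUE, usage_count INTEGER DEFAULT 0);
CREATE TABLE session_tags (session_id TEXT, tag_id INTEGER, PRIMARY KEY (session_id, tag_id));
""")

def tag_session(conn, session_id, tag_name):
    """Attach a tag to a session, creating the tag if needed, and bump usage_count."""
    conn.execute("INSERT OR IGNORE INTO tags (name) VALUES (?)", (tag_name,))
    (tag_id,) = conn.execute("SELECT id FROM tags WHERE name = ?", (tag_name,)).fetchone()
    conn.execute("INSERT OR IGNORE INTO session_tags VALUES (?, ?)", (session_id, tag_id))
    conn.execute("UPDATE tags SET usage_count = usage_count + 1 WHERE id = ?", (tag_id,))

tag_session(conn, "sess-1", "docker")
tag_session(conn, "sess-2", "docker")
count = conn.execute("SELECT usage_count FROM tags WHERE name = 'docker'").fetchone()[0]
```

In MariaDB the same upsert would use `INSERT ... ON DUPLICATE KEY UPDATE`, and the increment could equally live in a trigger on the junction table.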

---

## Relationships

- `machines` → `sessions` (one-to-many): Track which machine was used for each session
- `clients` → `projects` (one-to-many): Each client can have multiple projects
- `clients` → `sessions` (one-to-many): Track all work sessions for a client
- `projects` → `sessions` (one-to-many): Sessions belong to specific projects
- `sessions` → `work_items` (one-to-many): Each session contains multiple work items
- `sessions` → `pending_tasks` (one-to-many): Tasks can be created from sessions
- `sessions` → `tasks` (one-to-many): Task checklists linked to sessions
- `tags` ↔ `sessions` (many-to-many via session_tags)
- `tags` ↔ `work_items` (many-to-many via work_item_tags)

---

## Cross-References

- **Work Items & Time Tracking:** See [SCHEMA_MSP.md](SCHEMA_MSP.md)
- **Infrastructure Details:** See [SCHEMA_INFRASTRUCTURE.md](SCHEMA_INFRASTRUCTURE.md)
- **Credentials & Security:** See [SCHEMA_CREDENTIALS.md](SCHEMA_CREDENTIALS.md)
- **Environmental Learning:** See [SCHEMA_CONTEXT.md](SCHEMA_CONTEXT.md)
- **External Integrations:** See [SCHEMA_INTEGRATIONS.md](SCHEMA_INTEGRATIONS.md)
- **API Endpoints:** See [API_SPEC.md](API_SPEC.md)
- **Architecture Overview:** See [ARCHITECTURE_OVERVIEW.md](ARCHITECTURE_OVERVIEW.md)
.claude/SCHEMA_CREDENTIALS.md (new file, 801 lines)

# Credentials & Security Schema

**MSP Mode Database Schema - Security Tables**

**Status:** Designed 2026-01-15
**Database:** msp_tracking (MariaDB on Jupiter)

---

## Overview

The Credentials & Security subsystem provides encrypted credential storage, comprehensive audit logging, security incident tracking, and granular access control for MSP work. All sensitive data is encrypted at rest using AES-256-GCM.

**Related Documentation:**
- [MSP-MODE-SPEC.md](../MSP-MODE-SPEC.md) - Full system specification
- [ARCHITECTURE_OVERVIEW.md](ARCHITECTURE_OVERVIEW.md) - System architecture
- [API_SPEC.md](API_SPEC.md) - API endpoints for credential access
- [SCHEMA_CONTEXT.md](SCHEMA_CONTEXT.md) - Learning and context tables

---

## Tables Summary

| Table | Purpose | Encryption |
|-------|---------|------------|
| `credentials` | Encrypted credential storage | AES-256-GCM |
| `credential_audit_log` | Comprehensive access audit trail | No (metadata only) |
| `security_incidents` | Security event tracking | No |
| `credential_permissions` | Granular access control (future multi-user) | No |

**Total:** 4 tables

---

## Table Schemas

### `credentials`

Encrypted credential storage for client infrastructure, services, and integrations. All sensitive fields encrypted at rest with AES-256-GCM.
```sql
CREATE TABLE credentials (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    client_id UUID REFERENCES clients(id) ON DELETE CASCADE,
    service_id UUID REFERENCES services(id) ON DELETE CASCADE,
    infrastructure_id UUID REFERENCES infrastructure(id) ON DELETE CASCADE,

    -- Credential type and metadata
    credential_type VARCHAR(50) NOT NULL CHECK(credential_type IN (
        'password', 'api_key', 'oauth', 'ssh_key',
        'shared_secret', 'jwt', 'connection_string', 'certificate'
    )),
    service_name VARCHAR(255) NOT NULL,   -- "Gitea Admin", "AD2 sysadmin"
    username VARCHAR(255),

    -- Encrypted sensitive data (AES-256-GCM)
    password_encrypted BYTEA,
    api_key_encrypted BYTEA,
    client_secret_encrypted BYTEA,
    token_encrypted BYTEA,
    connection_string_encrypted BYTEA,

    -- OAuth-specific fields
    client_id_oauth VARCHAR(255),
    tenant_id_oauth VARCHAR(255),

    -- SSH key storage
    public_key TEXT,

    -- Service-specific
    integration_code VARCHAR(255),        -- for services like Autotask

    -- Access metadata
    external_url VARCHAR(500),
    internal_url VARCHAR(500),
    custom_port INTEGER,
    role_description VARCHAR(500),
    requires_vpn BOOLEAN DEFAULT false,
    requires_2fa BOOLEAN DEFAULT false,
    ssh_key_auth_enabled BOOLEAN DEFAULT false,
    access_level VARCHAR(100),

    -- Lifecycle management
    expires_at TIMESTAMP,
    last_rotated_at TIMESTAMP,
    is_active BOOLEAN DEFAULT true,

    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_credentials_client (client_id),
    INDEX idx_credentials_service (service_id),
    INDEX idx_credentials_type (credential_type),
    INDEX idx_credentials_active (is_active)
);
```

**Security Features:**
- All sensitive fields encrypted with AES-256-GCM
- Encryption key stored separately (environment variable or vault)
- Master password unlock mechanism
- Automatic expiration tracking
- Rotation reminders
- VPN requirement flags

**Example Records:**

**Password Credential (AD2 sysadmin):**
```json
{
  "service_name": "AD2\\sysadmin",
  "credential_type": "password",
  "username": "sysadmin",
  "password_encrypted": "<encrypted_bytes>",
  "internal_url": "192.168.0.6",
  "requires_vpn": true,
  "access_level": "Domain Admin",
  "infrastructure_id": "ad2-server-uuid",
  "client_id": "dataforth-uuid"
}
```

**API Key (SyncroMSP):**
```json
{
  "service_name": "SyncroMSP API",
  "credential_type": "api_key",
  "api_key_encrypted": "<encrypted_bytes>",
  "external_url": "https://azcomputerguru.syncromsp.com/api/v1",
  "integration_code": "syncro_psa",
  "expires_at": "2027-01-15T00:00:00Z"
}
```

**OAuth Credential (Microsoft 365):**
```json
{
  "service_name": "Dataforth M365 Admin",
  "credential_type": "oauth",
  "client_id_oauth": "app-client-id",
  "client_secret_encrypted": "<encrypted_bytes>",
  "tenant_id_oauth": "tenant-uuid",
  "token_encrypted": "<encrypted_access_token>",
  "requires_2fa": true,
  "client_id": "dataforth-uuid"
}
```

**SSH Key (D2TESTNAS root):**
```json
{
  "service_name": "D2TESTNAS root",
  "credential_type": "ssh_key",
  "username": "root",
  "public_key": "ssh-rsa AAAAB3Nza...",
  "internal_url": "192.168.0.9",
  "requires_vpn": true,
  "ssh_key_auth_enabled": true,
  "infrastructure_id": "d2testnas-uuid"
}
```

---
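Client-side validation against the same CHECK constraints can catch bad payloads before they reach the API. A minimal sketch in Python (the `build_credential_payload` helper and its keyword-argument style are illustrative assumptions, not part of the API client; field names mirror the `credentials` table above):

```python
# Sketch: build and validate a payload for POST /api/v1/credentials.
# ALLOWED_TYPES mirrors the credential_type CHECK constraint in the schema.
ALLOWED_TYPES = {
    "password", "api_key", "oauth", "ssh_key",
    "shared_secret", "jwt", "connection_string", "certificate",
}

def build_credential_payload(service_name, credential_type,
                             username=None, **extra):
    """Return a dict shaped like a credentials row, rejecting any
    credential_type the table's CHECK constraint would refuse."""
    if credential_type not in ALLOWED_TYPES:
        raise ValueError(f"unknown credential_type: {credential_type}")
    payload = {
        "service_name": service_name,
        "credential_type": credential_type,
        "is_active": True,
    }
    if username:
        payload["username"] = username
    payload.update(extra)          # e.g. internal_url, requires_vpn
    return payload

payload = build_credential_payload(
    "AD2\\sysadmin", "password",
    username="sysadmin", internal_url="192.168.0.6", requires_vpn=True,
)
```

The resulting dict can then be serialized as the JSON body of the `POST /api/v1/credentials` call; encryption of the sensitive fields happens server-side with the master key.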
### `credential_audit_log`

Comprehensive audit trail for all credential access operations. Tracks who accessed what credential, when, from where, and why.

```sql
CREATE TABLE credential_audit_log (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    credential_id UUID NOT NULL REFERENCES credentials(id) ON DELETE CASCADE,

    -- Action tracking
    action VARCHAR(50) NOT NULL CHECK(action IN (
        'view', 'create', 'update', 'delete', 'rotate', 'decrypt'
    )),

    -- User context
    user_id VARCHAR(255) NOT NULL,  -- JWT sub claim
    ip_address VARCHAR(45),
    user_agent TEXT,

    -- Session context
    session_id UUID,        -- if accessed during MSP session
    work_item_id UUID,      -- if accessed for specific work item

    -- Audit details
    details TEXT,           -- JSON: what changed, why accessed, etc.

    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_cred_audit_credential (credential_id),
    INDEX idx_cred_audit_user (user_id),
    INDEX idx_cred_audit_timestamp (timestamp),
    INDEX idx_cred_audit_action (action)
);
```

**Logged Actions:**
- **view** - Credential viewed in UI/API
- **create** - New credential stored
- **update** - Credential modified
- **delete** - Credential removed
- **rotate** - Password/key rotated
- **decrypt** - Credential decrypted for use

**Example Audit Entries:**

**Credential Access During Session:**
```json
{
  "credential_id": "ad2-sysadmin-uuid",
  "action": "decrypt",
  "user_id": "mike@azcomputerguru.com",
  "ip_address": "172.16.3.101",
  "session_id": "current-session-uuid",
  "work_item_id": "fix-user-account-uuid",
  "details": {
    "reason": "Access AD2 to reset user account",
    "service_name": "AD2\\sysadmin"
  },
  "timestamp": "2026-01-15T14:32:10Z"
}
```

**Credential Rotation:**
```json
{
  "credential_id": "nas-root-uuid",
  "action": "rotate",
  "user_id": "mike@azcomputerguru.com",
  "details": {
    "reason": "Scheduled 90-day rotation",
    "old_password_hash": "sha256:abc123...",
    "new_password_hash": "sha256:def456..."
  },
  "timestamp": "2026-01-15T09:00:00Z"
}
```

**Failed Access Attempt:**
```json
{
  "credential_id": "client-api-uuid",
  "action": "view",
  "user_id": "unknown@external.com",
  "ip_address": "203.0.113.45",
  "details": {
    "error": "Unauthorized - invalid JWT token",
    "blocked": true
  },
  "timestamp": "2026-01-15T03:22:05Z"
}
```

**Audit Queries:**
```sql
-- Who accessed this credential in last 30 days?
SELECT user_id, action, timestamp, details
FROM credential_audit_log
WHERE credential_id = 'target-uuid'
  AND timestamp >= NOW() - INTERVAL 30 DAY
ORDER BY timestamp DESC;

-- All credential access by user
SELECT c.service_name, cal.action, cal.timestamp
FROM credential_audit_log cal
JOIN credentials c ON cal.credential_id = c.id
WHERE cal.user_id = 'mike@azcomputerguru.com'
ORDER BY cal.timestamp DESC
LIMIT 50;

-- Recent decryption events (actual credential usage)
SELECT c.service_name, cal.user_id, cal.timestamp, cal.session_id
FROM credential_audit_log cal
JOIN credentials c ON cal.credential_id = c.id
WHERE cal.action = 'decrypt'
  AND cal.timestamp >= NOW() - INTERVAL 7 DAY
ORDER BY cal.timestamp DESC;
```

---
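An audit row like the examples above can be composed in application code before insertion. A small sketch (the `audit_entry` helper is hypothetical; the column names and the allowed-action set come from the table definition, and `details` is serialized JSON as in the examples):

```python
import json
from datetime import datetime, timezone

# Mirrors the action CHECK constraint on credential_audit_log.
ALLOWED_ACTIONS = {"view", "create", "update", "delete", "rotate", "decrypt"}

def audit_entry(credential_id, action, user_id, **context):
    """Build a credential_audit_log row; context kwargs (reason,
    service_name, ...) become the JSON `details` field."""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown audit action: {action}")
    return {
        "credential_id": credential_id,
        "action": action,
        "user_id": user_id,
        "details": json.dumps(context) if context else None,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

row = audit_entry("ad2-sysadmin-uuid", "decrypt",
                  "mike@azcomputerguru.com",
                  reason="Access AD2 to reset user account")
```

Rejecting unknown actions in code keeps the application error visible instead of surfacing as a database CHECK violation at insert time.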
### `security_incidents`

Security event and incident tracking for MSP clients. Documents incidents, investigations, remediation, and resolution.

```sql
CREATE TABLE security_incidents (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    client_id UUID REFERENCES clients(id) ON DELETE CASCADE,
    service_id UUID REFERENCES services(id) ON DELETE SET NULL,
    infrastructure_id UUID REFERENCES infrastructure(id) ON DELETE SET NULL,

    -- Incident classification
    incident_type VARCHAR(100) CHECK(incident_type IN (
        'bec', 'backdoor', 'malware', 'unauthorized_access',
        'data_breach', 'phishing', 'ransomware', 'brute_force',
        'credential_compromise', 'ddos', 'injection_attack'
    )),
    incident_date TIMESTAMP NOT NULL,
    severity VARCHAR(50) CHECK(severity IN ('critical', 'high', 'medium', 'low')),

    -- Incident details
    description TEXT NOT NULL,
    affected_users TEXT,      -- JSON array of affected users
    affected_systems TEXT,    -- JSON array of affected systems

    -- Investigation
    findings TEXT,            -- investigation results
    root_cause TEXT,
    indicators_of_compromise TEXT,  -- JSON array: IPs, file hashes, domains

    -- Remediation
    remediation_steps TEXT,
    remediation_verified BOOLEAN DEFAULT false,

    -- Status tracking
    status VARCHAR(50) DEFAULT 'investigating' CHECK(status IN (
        'investigating', 'contained', 'resolved', 'monitoring'
    )),
    detected_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    contained_at TIMESTAMP,
    resolved_at TIMESTAMP,

    -- Follow-up
    lessons_learned TEXT,
    prevention_measures TEXT,   -- what was implemented to prevent recurrence
    external_reporting_required BOOLEAN DEFAULT false,  -- regulatory/client reporting
    external_report_details TEXT,

    notes TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_incidents_client (client_id),
    INDEX idx_incidents_type (incident_type),
    INDEX idx_incidents_severity (severity),
    INDEX idx_incidents_status (status),
    INDEX idx_incidents_date (incident_date)
);
```

**Real-World Examples from Session Logs:**

**BEC (Business Email Compromise) - BG Builders:**
```json
{
  "incident_type": "bec",
  "client_id": "bg-builders-uuid",
  "incident_date": "2025-12-XX",
  "severity": "critical",
  "description": "OAuth backdoor application discovered in M365 tenant allowing unauthorized email access",
  "affected_users": ["admin@bgbuilders.com", "accounting@bgbuilders.com"],
  "findings": "Malicious OAuth app registered with Mail.ReadWrite permissions. App created via phishing attack.",
  "root_cause": "User clicked phishing link and authorized malicious OAuth application",
  "remediation_steps": "1. Revoked OAuth app consent\n2. Forced password reset for affected users\n3. Enabled MFA for all users\n4. Reviewed audit logs for data exfiltration\n5. Configured conditional access policies",
  "remediation_verified": true,
  "status": "resolved",
  "prevention_measures": "Implemented OAuth app approval workflow, security awareness training, conditional access policies",
  "external_reporting_required": true,
  "external_report_details": "Notified client management, documented for cyber insurance"
}
```

**BEC - CW Concrete:**
```json
{
  "incident_type": "bec",
  "client_id": "cw-concrete-uuid",
  "incident_date": "2025-11-XX",
  "severity": "high",
  "description": "Business email compromise detected - unauthorized access to executive mailbox",
  "affected_users": ["ceo@cwconcrete.com"],
  "findings": "Attacker used compromised credentials to access mailbox and send fraudulent wire transfer requests",
  "root_cause": "Credential phishing via fake Office 365 login page",
  "remediation_steps": "1. Reset compromised credentials\n2. Enabled MFA\n3. Blocked sender domains\n4. Reviewed sent items for fraudulent emails\n5. Notified financial institutions",
  "status": "resolved",
  "lessons_learned": "MFA should be mandatory for all executive accounts. Email authentication (DMARC/DKIM/SPF) critical."
}
```

**Malware - General Pattern:**
```json
{
  "incident_type": "malware",
  "severity": "high",
  "description": "Ransomware infection detected on workstation",
  "affected_systems": ["WS-ACCT-01"],
  "findings": "CryptoLocker variant. Files encrypted with .encrypted extension. Ransom note left in directories.",
  "root_cause": "User opened malicious email attachment",
  "remediation_steps": "1. Isolated infected system\n2. Verified backups available\n3. Wiped and restored from backup\n4. Updated endpoint protection\n5. Implemented email attachment filtering",
  "status": "resolved",
  "prevention_measures": "Enhanced email filtering, user training, backup verification schedule"
}
```

**Queries:**
```sql
-- Critical unresolved incidents
SELECT client_id, incident_type, description, incident_date
FROM security_incidents
WHERE severity = 'critical'
  AND status != 'resolved'
ORDER BY incident_date DESC;

-- Incident history for client
SELECT incident_type, severity, incident_date, status
FROM security_incidents
WHERE client_id = 'target-client-uuid'
ORDER BY incident_date DESC;

-- BEC incidents requiring reporting
SELECT client_id, description, incident_date, external_report_details
FROM security_incidents
WHERE incident_type = 'bec'
  AND external_reporting_required = true;
```

---
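The `detected_at` / `contained_at` / `resolved_at` milestones make response-time reporting straightforward. A sketch (the `incident_durations` helper and the fixed `...Z` timestamp format are assumptions for illustration; the column semantics come from the table above):

```python
from datetime import datetime

def incident_durations(detected_at, contained_at=None, resolved_at=None):
    """Return (hours_to_containment, hours_to_resolution) for a
    security_incidents row; None for milestones not yet reached."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    detected = datetime.strptime(detected_at, fmt)

    def hours(ts):
        if ts is None:
            return None
        return round((datetime.strptime(ts, fmt) - detected).total_seconds() / 3600, 1)

    return hours(contained_at), hours(resolved_at)

containment, resolution = incident_durations(
    "2026-01-15T14:00:00Z", "2026-01-15T18:30:00Z", "2026-01-17T14:00:00Z")
# containment == 4.5 hours, resolution == 48.0 hours
```

Aggregating these per client gives mean-time-to-contain metrics directly from the incident table.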
### `credential_permissions`

Granular access control for credentials. Supports future multi-user MSP team expansion by defining who can access which credentials.

```sql
CREATE TABLE credential_permissions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    credential_id UUID NOT NULL REFERENCES credentials(id) ON DELETE CASCADE,
    user_id VARCHAR(255) NOT NULL,  -- or role_id for role-based access

    -- Permission levels
    permission_level VARCHAR(50) CHECK(permission_level IN ('read', 'write', 'admin')),

    -- Constraints
    requires_2fa BOOLEAN DEFAULT false,  -- force 2FA for this credential
    ip_whitelist TEXT,                   -- JSON array of allowed IPs
    time_restrictions TEXT,              -- JSON: business hours only, etc.

    -- Audit
    granted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    granted_by VARCHAR(255),
    expires_at TIMESTAMP,                -- temporary access

    UNIQUE(credential_id, user_id),
    INDEX idx_cred_perm_credential (credential_id),
    INDEX idx_cred_perm_user (user_id)
);
```

**Permission Levels:**
- **read** - Can view/decrypt credential
- **write** - Can update credential
- **admin** - Can grant/revoke permissions, delete credential

**Example Permissions:**

**Standard Technician Access:**
```json
{
  "credential_id": "client-rdp-uuid",
  "user_id": "tech1@azcomputerguru.com",
  "permission_level": "read",
  "requires_2fa": false,
  "granted_by": "mike@azcomputerguru.com"
}
```

**Sensitive Credential (Admin Only):**
```json
{
  "credential_id": "domain-admin-uuid",
  "user_id": "mike@azcomputerguru.com",
  "permission_level": "admin",
  "requires_2fa": true,
  "ip_whitelist": ["172.16.3.0/24", "192.168.1.0/24"],
  "granted_by": "system"
}
```

**Temporary Access (Contractor):**
```json
{
  "credential_id": "temp-vpn-uuid",
  "user_id": "contractor@external.com",
  "permission_level": "read",
  "requires_2fa": true,
  "expires_at": "2026-02-01T00:00:00Z",
  "granted_by": "mike@azcomputerguru.com"
}
```

**Time-Restricted Access:**
```json
{
  "credential_id": "backup-system-uuid",
  "user_id": "nightshift@azcomputerguru.com",
  "permission_level": "read",
  "time_restrictions": {
    "allowed_hours": "18:00-06:00",
    "timezone": "America/Phoenix",
    "days": ["mon", "tue", "wed", "thu", "fri"]
  }
}
```

---
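Note that an `allowed_hours` window such as `"18:00-06:00"` wraps past midnight, so a naive `start <= now < end` check fails for it. A sketch of evaluating that part of `time_restrictions` (the helper name and string format are assumptions based on the example record above):

```python
from datetime import time

def within_allowed_hours(now, allowed):
    """Check a time_restrictions 'allowed_hours' window like
    "18:00-06:00"; windows where start > end wrap past midnight."""
    start_s, end_s = allowed.split("-")
    start = time(*map(int, start_s.split(":")))
    end = time(*map(int, end_s.split(":")))
    if start <= end:
        return start <= now < end          # same-day window
    return now >= start or now < end       # overnight window

print(within_allowed_hours(time(23, 0), "18:00-06:00"))  # True
print(within_allowed_hours(time(12, 0), "18:00-06:00"))  # False
```

A full check would also localize `now` to the row's `timezone` and filter on the `days` array before testing the hour window.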
## Credential Workflows

### Credential Storage Workflow (Agent-Based)

**When new credential discovered during MSP session:**

1. **User mentions credential:**
   - "SSH to AD2 as sysadmin" → Claude detects credential reference

2. **Check if credential exists:**
   - Query: `GET /api/v1/credentials?service=AD2&username=sysadmin`

3. **If not found, prompt user:**
   - "Store credential for AD2\\sysadmin? (y/n)"

4. **Launch Credential Storage Agent:**
   - Receives: credential data, client context, service info
   - Encrypts credential with AES-256-GCM
   - Links to client_id, service_id, infrastructure_id
   - Stores via API: `POST /api/v1/credentials`
   - Creates audit log entry (action: 'create')
   - Returns: credential_id

5. **Main Claude confirms:**
   - "Stored AD2\\sysadmin credential (ID: abc123)"

### Credential Retrieval Workflow (Agent-Based)

**When credential needed for work:**

1. **Launch Credential Retrieval Agent:**
   - Task: "Retrieve credential for AD2\\sysadmin"

2. **Agent performs:**
   - Query API: `GET /api/v1/credentials?service=AD2&username=sysadmin`
   - Decrypt credential (API handles this with master key)
   - Log access to credential_audit_log:
     - action: 'decrypt'
     - user_id: from JWT
     - session_id: current MSP session
     - work_item_id: current work context
   - Return only credential value

3. **Agent returns:**
   - "Paper123!@#" (actual credential)

4. **Main Claude uses credential:**
   - Displays in context: "Using AD2\\sysadmin password from vault"
   - Never logs actual password value in session logs

5. **Audit trail created automatically**

### Credential Rotation Workflow

**Scheduled or on-demand rotation:**

1. **Identify credentials needing rotation:**
   ```sql
   SELECT * FROM credentials
   WHERE expires_at <= NOW() + INTERVAL 7 DAY
      OR last_rotated_at <= NOW() - INTERVAL 90 DAY;
   ```

2. **For each credential:**
   - Generate new password/key
   - Update service/infrastructure with new credential
   - Encrypt new credential
   - Update credentials table
   - Set last_rotated_at = NOW()
   - Log rotation in credential_audit_log

3. **Verify new credential works:**
   - Test authentication
   - Update verification status

4. **Notify user:**
   - "Rotated 3 credentials: AD2\\sysadmin, NAS root, Gitea admin"

---
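The rotation-candidate query can also be expressed in application code, which lets per-tier schedules replace the flat 90-day interval. A sketch (the `rotation_due` helper and the tier names are assumptions; the 90/180/365-day intervals come from the rotation schedule later in this document):

```python
from datetime import datetime, timedelta

# Per-tier rotation intervals from the credential rotation schedule.
ROTATION_DAYS = {"critical": 90, "standard": 180, "service_account": 365}

def rotation_due(last_rotated_at, expires_at=None, tier="standard",
                 now=None, lead_days=7):
    """Mirror the SQL check: due when expiring within lead_days,
    or when the tier's rotation interval has elapsed."""
    now = now or datetime.now()
    if expires_at is not None and expires_at <= now + timedelta(days=lead_days):
        return True
    return last_rotated_at <= now - timedelta(days=ROTATION_DAYS[tier])

now = datetime(2026, 1, 15)
print(rotation_due(datetime(2025, 9, 1), tier="critical", now=now))    # True
print(rotation_due(datetime(2025, 12, 20), tier="standard", now=now))  # False
```

Running this over all active rows yields the same candidate set as the SQL, with the tier policy kept in one place.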
## Security Considerations

### Encryption at Rest

**AES-256-GCM Encryption:**
- All `*_encrypted` fields use AES-256-GCM
- Provides both confidentiality and authenticity
- Per-credential random IV (initialization vector)
- Master key stored separately from database

**Master Key Management:**
```python
# Example key storage (production)
# Option 1: Environment variable (Docker secret)
MASTER_KEY = os.environ['MSP_CREDENTIAL_MASTER_KEY']

# Option 2: HashiCorp Vault
# vault = hvac.Client(url='https://vault.internal')
# MASTER_KEY = vault.secrets.kv.v2.read_secret_version(path='msp/credential-key')

# Option 3: AWS KMS / Azure Key Vault
# MASTER_KEY = kms_client.decrypt(encrypted_key_blob)
```

**Encryption Process:**
```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

def encrypt_credential(plaintext: str, master_key: bytes) -> bytes:
    """Encrypt credential with AES-256-GCM"""
    aesgcm = AESGCM(master_key)  # 32-byte key
    nonce = os.urandom(12)       # 96-bit random nonce
    ciphertext = aesgcm.encrypt(nonce, plaintext.encode(), None)
    return nonce + ciphertext    # prepend nonce to ciphertext

def decrypt_credential(encrypted: bytes, master_key: bytes) -> str:
    """Decrypt credential"""
    aesgcm = AESGCM(master_key)
    nonce = encrypted[:12]
    ciphertext = encrypted[12:]
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)
    return plaintext.decode()
```

### Access Control

**JWT-Based Authentication:**
- All API requests require valid JWT token
- Token includes user_id (sub claim)
- Token expires after 1 hour (refresh pattern)

**Permission Checks:**
```python
# Before decrypting credential
def check_credential_access(credential_id: str, user_id: str) -> bool:
    # Check credential_permissions table
    perm = db.query(CredentialPermission).filter(
        CredentialPermission.credential_id == credential_id,
        CredentialPermission.user_id == user_id
    ).first()

    if not perm:
        # No explicit permission - deny by default
        return False

    if perm.expires_at and perm.expires_at < datetime.now():
        # Permission expired
        return False

    if perm.requires_2fa:
        # Check if user has valid 2FA session
        if not check_2fa_session(user_id):
            return False

    return True
```

**Audit Logging:**
- Every credential access logged automatically
- Failed access attempts logged with details
- Queryable for security investigations
- Retention: 7 years (compliance)

### Key Rotation Strategy

**Master Key Rotation (Annual or on-demand):**

1. Generate new master key
2. Re-encrypt all credentials with new key
3. Update key in secure storage
4. Audit log: key rotation event
5. Verify all credentials decrypt successfully
6. Archive old key (encrypted, for disaster recovery)

**Credential Rotation (Per-credential schedule):**

- **Critical credentials:** 90 days
- **Standard credentials:** 180 days
- **Service accounts:** 365 days
- **API keys:** 365 days or vendor recommendation

### Compliance Considerations

**Data Retention:**
- Credentials: Retained while active
- Audit logs: 7 years minimum
- Security incidents: Permanent (unless client requests deletion)

**Access Logging:**
- Who accessed what credential
- When and from where (IP)
- Why (session/work item context)
- Result (success/failure)

**Encryption Standards:**
- AES-256-GCM (FIPS 140-2 compliant)
- TLS 1.3 for API transit encryption
- Key length: 256 bits minimum

---
## Integration with Other Schemas

**Links to:**
- `clients` - Credentials belong to clients
- `infrastructure` - Credentials access infrastructure
- `services` - Credentials authenticate to services
- `sessions` - Credential access logged per session
- `work_items` - Credentials used for specific work

**Used by:**
- MSP Mode sessions (credential retrieval)
- Security incident investigations (affected credentials)
- Audit queries (compliance reporting)
- Integration workflows (external system authentication)

---
## Example Queries

### Find all credentials for a client
```sql
SELECT c.service_name, c.username, c.credential_type, c.requires_vpn
FROM credentials c
WHERE c.client_id = 'dataforth-uuid'
  AND c.is_active = true
ORDER BY c.service_name;
```

### Check credential expiration
```sql
SELECT c.service_name, c.expires_at, c.last_rotated_at
FROM credentials c
WHERE c.expires_at <= NOW() + INTERVAL 30 DAY
   OR c.last_rotated_at <= NOW() - INTERVAL 90 DAY
ORDER BY c.expires_at ASC;
```

### Audit: Who accessed credential?
```sql
SELECT cal.user_id, cal.action, cal.timestamp, cal.ip_address
FROM credential_audit_log cal
WHERE cal.credential_id = 'target-credential-uuid'
ORDER BY cal.timestamp DESC
LIMIT 20;
```

### Find credentials accessed in session
```sql
SELECT c.service_name, cal.action, cal.timestamp
FROM credential_audit_log cal
JOIN credentials c ON cal.credential_id = c.id
WHERE cal.session_id = 'session-uuid'
ORDER BY cal.timestamp;
```

### Security incidents requiring follow-up
```sql
SELECT si.client_id, si.incident_type, si.description, si.status
FROM security_incidents si
WHERE si.status IN ('investigating', 'contained')
  AND si.severity IN ('critical', 'high')
ORDER BY si.incident_date DESC;
```

---
## Future Enhancements

**Planned:**
1. Hardware security module (HSM) integration
2. Multi-factor authentication for high-privilege credentials
3. Automatic credential rotation scheduling
4. Integration with password managers (1Password, Bitwarden)
5. Credential strength analysis and weak password detection
6. Breach detection integration (Have I Been Pwned API)
7. Role-based access control (RBAC) for team expansion
8. Credential sharing workflows with approval process

**Under Consideration:**
- Biometric authentication for critical credentials
- Time-based one-time password (TOTP) storage
- Certificate management and renewal automation
- Secrets scanning in code repositories
- Automated credential discovery (scan infrastructure)

---

**Document Version:** 1.0
**Last Updated:** 2026-01-15
**Author:** MSP Mode Schema Design Team
323	.claude/SCHEMA_INFRASTRUCTURE.md	Normal file
@@ -0,0 +1,323 @@

# SCHEMA_INFRASTRUCTURE.md

**Source:** MSP-MODE-SPEC.md
**Section:** Client & Infrastructure Tables
**Date:** 2026-01-15

## Overview

Infrastructure tracking tables for client sites, servers, network devices, services, and Microsoft 365 tenants. These tables provide comprehensive infrastructure inventory and relationship tracking.

---

## Client & Infrastructure Tables (7 tables)

### `sites`

Physical/logical locations for clients.
```sql
CREATE TABLE sites (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    client_id UUID NOT NULL REFERENCES clients(id) ON DELETE CASCADE,
    name VARCHAR(255) NOT NULL,   -- "Main Office", "SLC - Salt Lake City"
    network_subnet VARCHAR(100),  -- "172.16.9.0/24"
    vpn_required BOOLEAN DEFAULT false,
    vpn_subnet VARCHAR(100),      -- "192.168.1.0/24"
    gateway_ip VARCHAR(45),       -- IPv4/IPv6
    dns_servers TEXT,             -- JSON array
    notes TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_sites_client (client_id)
);
```

---
### `infrastructure`

Servers, network devices, NAS, workstations (enhanced with environmental constraints).

```sql
CREATE TABLE infrastructure (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    client_id UUID REFERENCES clients(id) ON DELETE CASCADE,
    site_id UUID REFERENCES sites(id) ON DELETE SET NULL,
    asset_type VARCHAR(50) NOT NULL CHECK(asset_type IN (
        'physical_server', 'virtual_machine', 'container',
        'network_device', 'nas_storage', 'workstation',
        'firewall', 'domain_controller'
    )),
    hostname VARCHAR(255) NOT NULL,
    ip_address VARCHAR(45),
    mac_address VARCHAR(17),
    os VARCHAR(255),          -- "Ubuntu 22.04", "Windows Server 2022", "Unraid"
    os_version VARCHAR(100),  -- "6.22", "2008 R2", "22.04"
    role_description TEXT,    -- "Primary DC, NPS/RADIUS server"
    parent_host_id UUID REFERENCES infrastructure(id) ON DELETE SET NULL,  -- for VMs/containers
    status VARCHAR(50) DEFAULT 'active' CHECK(status IN (
        'active', 'migration_source', 'migration_destination', 'decommissioned'
    )),

    -- Environmental constraints (new)
    environmental_notes TEXT,        -- "Manual WINS install, no native service. ReadyNAS OS, SMB1 only."
    powershell_version VARCHAR(20),  -- "2.0", "5.1", "7.4"
    shell_type VARCHAR(50),          -- "bash", "cmd", "powershell", "sh"
    package_manager VARCHAR(50),     -- "apt", "yum", "chocolatey", "none"
    has_gui BOOLEAN DEFAULT true,    -- false for headless/DOS
    limitations TEXT,                -- JSON array: ["no_ps7", "smb1_only", "dos_6.22_commands"]

    notes TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_infrastructure_client (client_id),
    INDEX idx_infrastructure_type (asset_type),
    INDEX idx_infrastructure_hostname (hostname),
    INDEX idx_infrastructure_parent (parent_host_id),
    INDEX idx_infrastructure_os (os)
);
```

**Examples:**
- Jupiter (Ubuntu 22.04, PS7, GUI)
- AD2/Dataforth (Server 2022, PS5.1, GUI)
- D2TESTNAS (ReadyNAS OS, manual WINS, no GUI service manager, SMB1)
- TS-27 (MS-DOS 6.22, no GUI, batch only)

---
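The environmental-constraint columns exist so tooling can pick commands a host can actually run (a DOS 6.22 box cannot take a bash one-liner). A minimal sketch of that idea (the `list_dir_command` helper and its mapping are illustrative assumptions; only the `shell_type` column is from the schema):

```python
def list_dir_command(host):
    """Choose a directory-listing command from an infrastructure
    row's shell_type field; unknown shells fall back to `dir`
    (cmd / DOS batch)."""
    shell = host.get("shell_type")
    if shell in ("bash", "sh"):
        return "ls -la"
    if shell == "powershell":
        return "Get-ChildItem -Force"
    return "dir"  # cmd, DOS 6.22, or unrecorded

print(list_dir_command({"shell_type": "bash"}))  # ls -la
print(list_dir_command({"shell_type": "cmd"}))   # dir
```

A fuller version would also consult `powershell_version` and the `limitations` JSON array before emitting anything version-specific.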
### `services`
|
||||
|
||||
Applications/services running on infrastructure.
|
||||
|
||||
```sql
|
||||
CREATE TABLE services (
|
||||
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||
infrastructure_id UUID REFERENCES infrastructure(id) ON DELETE CASCADE,
|
||||
service_name VARCHAR(255) NOT NULL, -- "Gitea", "PostgreSQL", "Apache"
|
||||
service_type VARCHAR(100), -- "git_hosting", "database", "web_server"
|
||||
external_url VARCHAR(500), -- "https://git.azcomputerguru.com"
|
||||
internal_url VARCHAR(500), -- "http://172.16.3.20:3000"
|
||||
port INTEGER,
|
||||
protocol VARCHAR(50), -- "https", "ssh", "smb"
|
||||
status VARCHAR(50) DEFAULT 'running' CHECK(status IN (
|
||||
'running', 'stopped', 'error', 'maintenance'
|
||||
)),
|
||||
version VARCHAR(100),
|
||||
notes TEXT,
|
||||
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||
|
||||
INDEX idx_services_infrastructure (infrastructure_id),
|
||||
INDEX idx_services_name (service_name),
|
||||
INDEX idx_services_type (service_type)
|
||||
);
|
||||
```
|
||||
|
||||

---

### `service_relationships`

Dependencies and relationships between services.

```sql
CREATE TABLE service_relationships (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    from_service_id UUID NOT NULL REFERENCES services(id) ON DELETE CASCADE,
    to_service_id UUID NOT NULL REFERENCES services(id) ON DELETE CASCADE,
    relationship_type VARCHAR(50) NOT NULL CHECK(relationship_type IN (
        'hosted_on', 'proxied_by', 'authenticates_via',
        'backend_for', 'depends_on', 'replicates_to'
    )),
    notes TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    UNIQUE(from_service_id, to_service_id, relationship_type),
    INDEX idx_service_rel_from (from_service_id),
    INDEX idx_service_rel_to (to_service_id)
);
```

**Examples:**
- Gitea (proxied_by) NPM
- GuruRMM API (hosted_on) Jupiter container
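One intended payoff of these edges is impact analysis: "if this service goes down, what else breaks?" A hedged sketch (the service names and the in-memory edge list are hypothetical; a real implementation would read `service_relationships` rows) using a reverse reachability walk:

```python
from collections import deque

# Hypothetical service_relationships rows as (from_service, to_service,
# relationship_type) tuples, e.g. "gururmm-dashboard depends_on gururmm-api".
EDGES = [
    ("gitea", "npm", "proxied_by"),
    ("gururmm-api", "postgres", "depends_on"),
    ("gururmm-dashboard", "gururmm-api", "depends_on"),
]

def impacted_by(outage_service, edges):
    """Services that transitively depend on (or route through) the outage."""
    dependents = {}
    for frm, to, _rel in edges:
        dependents.setdefault(to, set()).add(frm)
    seen, queue = set(), deque([outage_service])
    while queue:  # breadth-first walk over reversed edges
        svc = queue.popleft()
        for dep in dependents.get(svc, ()):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return sorted(seen)

print(impacted_by("postgres", EDGES))
# → ['gururmm-api', 'gururmm-dashboard']
```

The `UNIQUE(from_service_id, to_service_id, relationship_type)` constraint keeps this graph free of duplicate edges, so the walk never double-counts.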

---

### `networks`

Network segments, VLANs, VPN networks.

```sql
CREATE TABLE networks (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    client_id UUID REFERENCES clients(id) ON DELETE CASCADE,
    site_id UUID REFERENCES sites(id) ON DELETE CASCADE,
    network_name VARCHAR(255) NOT NULL,
    network_type VARCHAR(50) CHECK(network_type IN (
        'lan', 'vpn', 'vlan', 'isolated', 'dmz'
    )),
    cidr VARCHAR(100) NOT NULL, -- "192.168.0.0/24"
    gateway_ip VARCHAR(45),
    vlan_id INTEGER,
    notes TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_networks_client (client_id),
    INDEX idx_networks_site (site_id)
);
```
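Since `cidr` and `gateway_ip` are free-form VARCHARs, application-side validation before insert is worthwhile. A minimal sketch using Python's standard `ipaddress` module (field names mirror the table; the validation policy itself is an assumption, not something the schema enforces):

```python
import ipaddress

def validate_network(cidr, gateway_ip=None):
    """Return a list of validation errors for a prospective networks row."""
    try:
        # strict=True rejects host addresses like 192.168.0.5/24
        net = ipaddress.ip_network(cidr, strict=True)
    except ValueError as exc:
        return [f"bad cidr: {exc}"]
    errors = []
    if gateway_ip is not None:
        try:
            gw = ipaddress.ip_address(gateway_ip)
        except ValueError as exc:
            return [f"bad gateway_ip: {exc}"]
        if gw not in net:
            errors.append(f"gateway {gw} not inside {net}")
    return errors

print(validate_network("192.168.0.0/24", "192.168.0.1"))
# → []
print(validate_network("192.168.0.0/24", "10.0.0.1"))
# → ['gateway 10.0.0.1 not inside 192.168.0.0/24']
```

`ipaddress` handles both IPv4 and IPv6, which is why `gateway_ip` is VARCHAR(45) (long enough for a full IPv6 address).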

---

### `firewall_rules`

Network security rules (for documentation/audit trail).

```sql
CREATE TABLE firewall_rules (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    infrastructure_id UUID REFERENCES infrastructure(id) ON DELETE CASCADE,
    rule_name VARCHAR(255),
    source_cidr VARCHAR(100),
    destination_cidr VARCHAR(100),
    port INTEGER,
    protocol VARCHAR(20), -- "tcp", "udp", "icmp"
    action VARCHAR(20) CHECK(action IN ('allow', 'deny', 'drop')),
    rule_order INTEGER,
    notes TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    created_by VARCHAR(255),

    INDEX idx_firewall_infra (infrastructure_id)
);
```
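Because these rows are documentation rather than a live ruleset, they can be replayed during an audit to answer "would this traffic have been allowed?" A hedged sketch of first-match evaluation ordered by `rule_order`, with an assumed implicit default-deny (the sample rules are hypothetical):

```python
import ipaddress

def evaluate(rules, src_ip, dst_ip, port, protocol):
    """First matching rule wins; no match falls through to deny."""
    for rule in sorted(rules, key=lambda r: r["rule_order"]):
        if rule["protocol"] != protocol:
            continue
        if rule["port"] is not None and rule["port"] != port:
            continue  # port None documents an any-port rule
        if ipaddress.ip_address(src_ip) not in ipaddress.ip_network(rule["source_cidr"]):
            continue
        if ipaddress.ip_address(dst_ip) not in ipaddress.ip_network(rule["destination_cidr"]):
            continue
        return rule["action"]
    return "deny"  # assumed implicit default

RULES = [
    {"rule_order": 10, "source_cidr": "192.168.0.0/24", "destination_cidr": "172.16.3.0/24",
     "port": 3306, "protocol": "tcp", "action": "allow"},
    {"rule_order": 20, "source_cidr": "0.0.0.0/0", "destination_cidr": "172.16.3.0/24",
     "port": None, "protocol": "tcp", "action": "drop"},
]

print(evaluate(RULES, "192.168.0.6", "172.16.3.20", 3306, "tcp"))  # allow
print(evaluate(RULES, "10.0.0.5", "172.16.3.20", 22, "tcp"))       # drop
```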

---

### `m365_tenants`

Microsoft 365 tenant tracking.

```sql
CREATE TABLE m365_tenants (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    client_id UUID REFERENCES clients(id) ON DELETE CASCADE,
    tenant_id UUID NOT NULL UNIQUE, -- Microsoft tenant ID
    tenant_name VARCHAR(255), -- "dataforth.com"
    default_domain VARCHAR(255), -- "dataforthcorp.onmicrosoft.com"
    admin_email VARCHAR(255),
    cipp_name VARCHAR(255), -- name in CIPP portal
    notes TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_m365_client (client_id),
    INDEX idx_m365_tenant_id (tenant_id)
);
```

---

## Environmental Constraints System

### Purpose

The infrastructure table includes environmental constraint fields to track system-specific limitations and capabilities. This prevents failures by recording what works and what doesn't on each system.

### Key Fields

**`environmental_notes`**: Free-form text describing quirks, limitations, custom installations
- Example: "Manual WINS install, no native service. ReadyNAS OS, SMB1 only."

**`powershell_version`**: Specific PowerShell version available
- Enables command compatibility checks
- Example: "2.0" (Server 2008), "5.1" (Server 2022), "7.4" (Ubuntu with PS)

**`shell_type`**: Primary shell interface
- "bash", "cmd", "powershell", "sh", "zsh"
- Determines command syntax to use

**`package_manager`**: Package management system
- "apt", "yum", "chocolatey", "brew", "none"
- Enables automated software installation

**`has_gui`**: Whether system has graphical interface
- `false` for headless servers, DOS systems
- Prevents suggestions like "use Services GUI"

**`limitations`**: JSON array of specific constraints
- Example: `["no_ps7", "smb1_only", "dos_6.22_commands", "no_long_filenames"]`
### Real-World Examples

**D2TESTNAS (192.168.0.9)**
```json
{
  "hostname": "D2TESTNAS",
  "os": "ReadyNAS OS",
  "environmental_notes": "Manual WINS installation (Samba nmbd). No native service GUI. SMB1/CORE protocol only for DOS compatibility.",
  "powershell_version": null,
  "shell_type": "bash",
  "package_manager": "none",
  "has_gui": false,
  "limitations": ["smb1_only", "no_service_manager_gui", "manual_wins"]
}
```

**AD2 (192.168.0.6 - Server 2022)**
```json
{
  "hostname": "AD2",
  "os": "Windows Server 2022",
  "environmental_notes": "Primary domain controller. PowerShell 5.1 default.",
  "powershell_version": "5.1",
  "shell_type": "powershell",
  "package_manager": "none",
  "has_gui": true,
  "limitations": []
}
```

**TS-XX Machines (DOS)**
```json
{
  "hostname": "TS-27",
  "os": "MS-DOS 6.22",
  "environmental_notes": "DOS 6.22. No IF /I, no long filenames (8.3 only), no modern batch features.",
  "powershell_version": null,
  "shell_type": "cmd",
  "package_manager": "none",
  "has_gui": false,
  "limitations": ["dos_6.22", "no_if_i", "8.3_filenames_only", "no_unicode"]
}
```
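These records are meant to gate what gets attempted on each box. A hedged sketch of a pre-flight check built on the constraint fields (the `supports` helper and its feature names are illustrative, not part of the schema):

```python
def supports(host, feature):
    """Feature gates derived from a host's recorded environmental constraints."""
    lims = host.get("limitations", [])
    if feature == "if_i":  # case-insensitive IF /I in batch files
        return host["shell_type"] in ("cmd", "powershell") and "no_if_i" not in lims
    if feature == "long_filenames":
        return "8.3_filenames_only" not in lims
    if feature == "gui_service_manager":
        return host.get("has_gui", False)
    raise ValueError(f"unknown feature: {feature}")

# Rows shaped like the examples above.
ad2 = {"shell_type": "powershell", "has_gui": True, "limitations": []}
ts27 = {"shell_type": "cmd", "has_gui": False,
        "limitations": ["dos_6.22", "no_if_i", "8.3_filenames_only", "no_unicode"]}

print(supports(ad2, "if_i"))             # True
print(supports(ts27, "if_i"))            # False: never emit IF /I for TS-27
print(supports(ts27, "long_filenames"))  # False: 8.3 names only
```

A batch-file generator consulting this guard would avoid exactly the DOS 6.22 failure modes the `limitations` array records.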

---

## Relationships

- `clients` → `sites` (one-to-many): Clients can have multiple physical locations
- `clients` → `infrastructure` (one-to-many): Clients own infrastructure assets
- `clients` → `networks` (one-to-many): Clients have network segments
- `clients` → `m365_tenants` (one-to-many): Clients can have M365 tenants
- `sites` → `infrastructure` (one-to-many): Infrastructure located at sites
- `sites` → `networks` (one-to-many): Networks belong to sites
- `infrastructure` → `infrastructure` (self-referencing): Parent-child for VMs/containers
- `infrastructure` → `services` (one-to-many): Infrastructure hosts services
- `infrastructure` → `firewall_rules` (one-to-many): Firewall rules applied to infrastructure
- `services` ↔ `services` (many-to-many via service_relationships): Service dependencies

---

## Cross-References

- **Core Tables:** See [SCHEMA_CORE.md](SCHEMA_CORE.md)
- **Credentials:** See [SCHEMA_CREDENTIALS.md](SCHEMA_CREDENTIALS.md)
- **Environmental Learning:** See [SCHEMA_CONTEXT.md](SCHEMA_CONTEXT.md) for failure patterns and insights
- **MSP Work Tracking:** See [SCHEMA_MSP.md](SCHEMA_MSP.md)
- **External Integrations:** See [SCHEMA_INTEGRATIONS.md](SCHEMA_INTEGRATIONS.md)
- **API Endpoints:** See [API_SPEC.md](API_SPEC.md)

.claude/SCHEMA_INTEGRATIONS.md (new file, 848 lines)

# External Integrations Schema

**MSP Mode Database Schema - External Systems Integration**

**Status:** Designed 2026-01-15 (Future Capability)
**Database:** msp_tracking (MariaDB on Jupiter)

---

## Overview

The External Integrations subsystem enables MSP Mode to connect with external MSP platforms, automate workflows, and link session data to ticketing and documentation systems. This bridges MSP Mode's intelligent tracking with real-world business systems.

**Core Integration Systems:**
- **SyncroMSP** - PSA/RMM platform (tickets, time tracking, assets)
- **MSP Backups** - Backup management and reporting
- **Zapier** - Automation platform (webhooks and triggers)

**Related Documentation:**
- [MSP-MODE-SPEC.md](../MSP-MODE-SPEC.md) - Full system specification
- [ARCHITECTURE_OVERVIEW.md](ARCHITECTURE_OVERVIEW.md) - System architecture
- [API_SPEC.md](API_SPEC.md) - API endpoints for integrations
- [SCHEMA_CREDENTIALS.md](SCHEMA_CREDENTIALS.md) - Integration credential storage

---

## Tables Summary

| Table | Purpose | Encryption |
|-------|---------|------------|
| `external_integrations` | Track all external system interactions | No (API responses) |
| `integration_credentials` | OAuth/API key storage for integrations | AES-256-GCM |
| `ticket_links` | Link sessions to external tickets | No |
| `backup_log` | Backup tracking with verification | No |

**Total:** 4 tables

**Specialized Agent:**
- **Integration Workflow Agent** - Executes multi-step integration workflows (ticket updates, report pulling, file attachments)

---

## Table Schemas

### `external_integrations`

Comprehensive tracking of all interactions with external systems. Audit trail for integration workflows.

```sql
CREATE TABLE external_integrations (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    session_id UUID REFERENCES sessions(id) ON DELETE CASCADE,
    work_item_id UUID REFERENCES work_items(id) ON DELETE CASCADE,
    client_id UUID REFERENCES clients(id) ON DELETE SET NULL,

    -- Integration details
    integration_type VARCHAR(100) NOT NULL CHECK(integration_type IN (
        'syncro_ticket', 'syncro_time', 'syncro_asset',
        'msp_backups_report', 'msp_backups_status',
        'zapier_webhook', 'zapier_trigger',
        'email_notification', 'custom_integration'
    )),
    integration_name VARCHAR(255), -- "SyncroMSP", "MSP Backups", "Zapier"

    -- External resource identification
    external_id VARCHAR(255), -- ticket ID, asset ID, webhook ID, etc.
    external_url VARCHAR(500), -- direct link to resource
    external_reference VARCHAR(255), -- human-readable: "T12345", "WH-ABC123"

    -- Action tracking
    action VARCHAR(50) CHECK(action IN (
        'created', 'updated', 'linked', 'attached',
        'retrieved', 'searched', 'deleted', 'triggered'
    )),
    direction VARCHAR(20) CHECK(direction IN ('outbound', 'inbound')),
    -- outbound: MSP Mode → External system
    -- inbound: External system → MSP Mode (via webhook)

    -- Request/Response data
    request_data TEXT, -- JSON: what we sent
    response_data TEXT, -- JSON: what we received
    response_status VARCHAR(50), -- "success", "error", "timeout"
    error_message TEXT,

    -- Performance tracking
    request_duration_ms INTEGER,
    retry_count INTEGER DEFAULT 0,

    -- Metadata
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    created_by VARCHAR(255), -- user who authorized

    INDEX idx_ext_int_session (session_id),
    INDEX idx_ext_int_work_item (work_item_id),
    INDEX idx_ext_int_client (client_id),
    INDEX idx_ext_int_type (integration_type),
    INDEX idx_ext_int_external (external_id),
    INDEX idx_ext_int_status (response_status),
    INDEX idx_ext_int_created (created_at)
);
```

**Example Integration Records:**

**SyncroMSP Ticket Update:**
```json
{
  "session_id": "current-session-uuid",
  "client_id": "dataforth-uuid",
  "integration_type": "syncro_ticket",
  "integration_name": "SyncroMSP",
  "external_id": "12345",
  "external_url": "https://azcomputerguru.syncromsp.com/tickets/12345",
  "external_reference": "T12345",
  "action": "updated",
  "direction": "outbound",
  "request_data": {
    "comment": "Changes made today:\n- Configured Veeam backup job for D2TESTNAS\n- Set retention: 30 days local, 90 days cloud\n- Tested backup: successful (45GB)\n- Verified restore point creation",
    "internal": false
  },
  "response_data": {
    "comment_id": "67890",
    "created_at": "2026-01-15T14:32:10Z"
  },
  "response_status": "success",
  "request_duration_ms": 245,
  "created_by": "mike@azcomputerguru.com"
}
```

**MSP Backups Report Retrieval:**
```json
{
  "session_id": "current-session-uuid",
  "client_id": "dataforth-uuid",
  "integration_type": "msp_backups_report",
  "integration_name": "MSP Backups",
  "action": "retrieved",
  "direction": "outbound",
  "request_data": {
    "customer": "Dataforth",
    "date": "2026-01-15",
    "format": "pdf"
  },
  "response_data": {
    "report_url": "https://storage.mspbackups.com/reports/dataforth_2026-01-15.pdf",
    "file_size_bytes": 1048576,
    "summary": {
      "total_jobs": 5,
      "successful": 5,
      "failed": 0,
      "total_size_gb": 245
    }
  },
  "response_status": "success",
  "request_duration_ms": 3420
}
```

**SyncroMSP File Attachment:**
```json
{
  "session_id": "current-session-uuid",
  "integration_type": "syncro_ticket",
  "external_id": "12345",
  "action": "attached",
  "direction": "outbound",
  "request_data": {
    "file_name": "dataforth_backup_report_2026-01-15.pdf",
    "file_size_bytes": 1048576
  },
  "response_data": {
    "attachment_id": "att_789",
    "url": "https://azcomputerguru.syncromsp.com/attachments/att_789"
  },
  "response_status": "success"
}
```

**Zapier Webhook Trigger (Inbound):**
```json
{
  "integration_type": "zapier_webhook",
  "external_id": "webhook_abc123",
  "action": "triggered",
  "direction": "inbound",
  "request_data": {
    "event": "ticket_created",
    "ticket_id": "12346",
    "customer": "Grabb & Durando",
    "subject": "Network connectivity issues"
  },
  "response_data": {
    "msp_mode_action": "created_pending_task",
    "task_id": "task-uuid"
  },
  "response_status": "success"
}
```

**Failed Integration (Timeout):**
```json
{
  "integration_type": "syncro_ticket",
  "action": "updated",
  "direction": "outbound",
  "request_data": {
    "ticket_id": "12345",
    "comment": "Work completed..."
  },
  "response_status": "error",
  "error_message": "Request timeout after 30000ms",
  "request_duration_ms": 30000,
  "retry_count": 3
}
```
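The failed-integration record above implies a retry loop that tracks `retry_count`, `request_duration_ms`, and a final `response_status`. A hedged sketch of that bookkeeping (the retry policy and exponential backoff are assumptions; `call` stands in for any real API request):

```python
import time

def call_with_retries(call, max_retries=3, base_delay=0.0):
    """Run `call`, retrying on timeout; return (result, audit_record)."""
    record = {"retry_count": 0, "response_status": None, "error_message": None}
    start = time.monotonic()
    result = None
    for attempt in range(max_retries + 1):
        try:
            result = call()
            record["response_status"] = "success"
            break
        except TimeoutError as exc:
            record["error_message"] = str(exc)
            if attempt < max_retries:
                record["retry_count"] += 1
                time.sleep(base_delay * 2 ** attempt)  # exponential backoff
            else:
                record["response_status"] = "error"
    record["request_duration_ms"] = int((time.monotonic() - start) * 1000)
    return result, record

# Fails twice, then succeeds: retry_count ends at 2.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("Request timeout after 30000ms")
    return "ok"

result, record = call_with_retries(flaky)
print(result, record["retry_count"], record["response_status"])
# → ok 2 success
```

The returned `record` maps directly onto an `external_integrations` insert.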

---

### `integration_credentials`

Secure storage for integration authentication credentials (OAuth tokens, API keys).

```sql
CREATE TABLE integration_credentials (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    integration_name VARCHAR(100) NOT NULL UNIQUE, -- 'syncro', 'msp_backups', 'zapier'

    -- Credential type
    credential_type VARCHAR(50) CHECK(credential_type IN ('oauth', 'api_key', 'basic_auth', 'bearer_token')),

    -- Encrypted credentials (AES-256-GCM); BLOB columns since the database is MariaDB
    api_key_encrypted BLOB,
    oauth_token_encrypted BLOB,
    oauth_refresh_token_encrypted BLOB,
    oauth_client_id VARCHAR(255), -- not encrypted (public)
    oauth_client_secret_encrypted BLOB,
    oauth_expires_at TIMESTAMP,
    basic_auth_username VARCHAR(255),
    basic_auth_password_encrypted BLOB,

    -- OAuth metadata
    oauth_scopes TEXT, -- JSON array: ["tickets:read", "tickets:write"]
    oauth_authorize_url VARCHAR(500),
    oauth_token_url VARCHAR(500),

    -- API endpoints
    api_base_url VARCHAR(500) NOT NULL,
    webhook_url VARCHAR(500), -- for receiving webhooks
    webhook_secret_encrypted BLOB,

    -- Status and health
    is_active BOOLEAN DEFAULT true,
    last_tested_at TIMESTAMP,
    last_test_status VARCHAR(50), -- "success", "auth_failed", "connection_error"
    last_test_error TEXT,
    last_used_at TIMESTAMP,

    -- Rate limiting
    rate_limit_requests INTEGER, -- requests per period
    rate_limit_period_seconds INTEGER, -- period in seconds
    rate_limit_remaining INTEGER, -- current remaining requests

    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_int_cred_name (integration_name),
    INDEX idx_int_cred_active (is_active)
);
```

**Example Integration Credentials:**

**SyncroMSP (OAuth):**
```json
{
  "integration_name": "syncro",
  "credential_type": "oauth",
  "oauth_token_encrypted": "<encrypted_access_token>",
  "oauth_refresh_token_encrypted": "<encrypted_refresh_token>",
  "oauth_client_id": "syncro_client_id",
  "oauth_client_secret_encrypted": "<encrypted_secret>",
  "oauth_expires_at": "2026-01-16T14:30:00Z",
  "oauth_scopes": ["tickets:read", "tickets:write", "customers:read", "time_entries:write"],
  "oauth_authorize_url": "https://azcomputerguru.syncromsp.com/oauth/authorize",
  "oauth_token_url": "https://azcomputerguru.syncromsp.com/oauth/token",
  "api_base_url": "https://azcomputerguru.syncromsp.com/api/v1",
  "is_active": true,
  "last_tested_at": "2026-01-15T14:00:00Z",
  "last_test_status": "success",
  "rate_limit_requests": 1000,
  "rate_limit_period_seconds": 3600
}
```

**MSP Backups (API Key):**
```json
{
  "integration_name": "msp_backups",
  "credential_type": "api_key",
  "api_key_encrypted": "<encrypted_api_key>",
  "api_base_url": "https://api.mspbackups.com/v2",
  "is_active": true,
  "last_tested_at": "2026-01-15T09:00:00Z",
  "last_test_status": "success"
}
```

**Zapier (Webhook):**
```json
{
  "integration_name": "zapier",
  "credential_type": "bearer_token",
  "api_key_encrypted": "<encrypted_bearer_token>",
  "api_base_url": "https://hooks.zapier.com/hooks/catch",
  "webhook_url": "https://msp-api.azcomputerguru.com/api/v1/webhooks/zapier",
  "webhook_secret_encrypted": "<encrypted_webhook_secret>",
  "is_active": true
}
```

**Security Features:**
- All sensitive fields encrypted with AES-256-GCM
- Same master key as credentials table
- Automatic OAuth token refresh
- Rate limit tracking to prevent API abuse
- Health check monitoring
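The automatic token refresh hinges on `oauth_expires_at`. A minimal sketch of the refresh decision, refreshing ahead of expiry by a safety margin so a token never dies mid-request (the five-minute margin is an assumed policy, not part of the schema):

```python
from datetime import datetime, timedelta, timezone

REFRESH_MARGIN = timedelta(minutes=5)  # assumed safety margin

def needs_refresh(oauth_expires_at, now=None):
    """True once we are within REFRESH_MARGIN of the stored expiry."""
    # fromisoformat() does not accept a trailing 'Z' on older Pythons,
    # so normalize it to an explicit UTC offset first.
    expires = datetime.fromisoformat(oauth_expires_at.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return now >= expires - REFRESH_MARGIN

fixed_now = datetime(2026, 1, 16, 14, 20, tzinfo=timezone.utc)
print(needs_refresh("2026-01-16T14:30:00Z", now=fixed_now))  # False: 10 min left
print(needs_refresh("2026-01-16T14:24:00Z", now=fixed_now))  # True: inside margin
```

When this returns true, the stored `oauth_refresh_token_encrypted` would be decrypted and exchanged at `oauth_token_url`, and the row updated with the new token and expiry.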

---

### `ticket_links`

Links MSP Mode sessions to external ticketing system tickets. Bi-directional reference.

```sql
CREATE TABLE ticket_links (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    session_id UUID REFERENCES sessions(id) ON DELETE CASCADE,
    client_id UUID REFERENCES clients(id) ON DELETE CASCADE,
    work_item_id UUID REFERENCES work_items(id) ON DELETE SET NULL,

    -- Ticket identification
    integration_type VARCHAR(100) NOT NULL CHECK(integration_type IN (
        'syncro', 'autotask', 'connectwise', 'zendesk', 'freshdesk'
    )),
    ticket_id VARCHAR(255) NOT NULL, -- external system ticket ID
    ticket_number VARCHAR(100), -- human-readable: "T12345", "#12345"
    ticket_subject VARCHAR(500),
    ticket_url VARCHAR(500),
    ticket_status VARCHAR(100), -- "open", "in_progress", "resolved", "closed"
    ticket_priority VARCHAR(50), -- "low", "medium", "high", "critical"

    -- Linking metadata
    link_type VARCHAR(50) CHECK(link_type IN ('related', 'resolves', 'documents', 'caused_by')),
    -- related: session work related to ticket
    -- resolves: session work resolves the ticket
    -- documents: session documents work done for ticket
    -- caused_by: session work was triggered by ticket

    link_direction VARCHAR(20) CHECK(link_direction IN ('manual', 'automatic')),
    linked_by VARCHAR(255), -- user who created link

    -- Sync status
    auto_sync_enabled BOOLEAN DEFAULT false, -- auto-post session updates to ticket
    last_synced_at TIMESTAMP,
    sync_errors TEXT, -- JSON array of sync error messages

    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_ticket_session (session_id),
    INDEX idx_ticket_client (client_id),
    INDEX idx_ticket_work_item (work_item_id),
    INDEX idx_ticket_external (integration_type, ticket_id),
    INDEX idx_ticket_status (ticket_status)
);
```

**Example Ticket Links:**

**Session Resolves Ticket:**
```json
{
  "session_id": "session-uuid",
  "client_id": "dataforth-uuid",
  "integration_type": "syncro",
  "ticket_id": "12345",
  "ticket_number": "T12345",
  "ticket_subject": "Backup configuration for NAS",
  "ticket_url": "https://azcomputerguru.syncromsp.com/tickets/12345",
  "ticket_status": "resolved",
  "ticket_priority": "high",
  "link_type": "resolves",
  "link_direction": "manual",
  "linked_by": "mike@azcomputerguru.com",
  "auto_sync_enabled": true,
  "last_synced_at": "2026-01-15T15:00:00Z"
}
```

**Work Item Documents Ticket:**
```json
{
  "session_id": "session-uuid",
  "work_item_id": "work-item-uuid",
  "client_id": "grabb-uuid",
  "integration_type": "syncro",
  "ticket_id": "12346",
  "ticket_number": "T12346",
  "ticket_subject": "DNS migration to UDM",
  "link_type": "documents",
  "link_direction": "automatic"
}
```

**Ticket Triggered Session:**
```json
{
  "session_id": "session-uuid",
  "client_id": "client-uuid",
  "integration_type": "syncro",
  "ticket_id": "12347",
  "ticket_subject": "Email delivery issues",
  "ticket_status": "in_progress",
  "link_type": "caused_by",
  "link_direction": "automatic",
  "auto_sync_enabled": true
}
```

---

### `backup_log`

Backup tracking with verification status. Can be populated from MSP Backups integration or local backup operations.

```sql
CREATE TABLE backup_log (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    client_id UUID REFERENCES clients(id) ON DELETE SET NULL,
    infrastructure_id UUID REFERENCES infrastructure(id) ON DELETE SET NULL,
    session_id UUID REFERENCES sessions(id) ON DELETE SET NULL,

    -- Backup classification
    backup_type VARCHAR(50) NOT NULL CHECK(backup_type IN (
        'daily', 'weekly', 'monthly', 'manual', 'pre-migration',
        'pre-upgrade', 'disaster_recovery'
    )),
    backup_source VARCHAR(100), -- "local", "veeam", "msp_backups", "manual"

    -- File details
    file_path VARCHAR(500) NOT NULL,
    file_name VARCHAR(255),
    file_size_bytes BIGINT NOT NULL,
    storage_location VARCHAR(500), -- "NAS", "Cloud", "Local", "Off-site"

    -- Timing
    backup_started_at TIMESTAMP NOT NULL,
    backup_completed_at TIMESTAMP NOT NULL,
    duration_seconds INTEGER GENERATED ALWAYS AS (
        TIMESTAMPDIFF(SECOND, backup_started_at, backup_completed_at)
    ) STORED,

    -- Verification
    verification_status VARCHAR(50) CHECK(verification_status IN (
        'passed', 'failed', 'not_verified', 'in_progress'
    )),
    verification_method VARCHAR(100), -- "test_restore", "checksum", "file_count", "manual"
    verification_details TEXT, -- JSON: specific check results
    verification_completed_at TIMESTAMP,

    -- Backup metadata
    database_host VARCHAR(255),
    database_name VARCHAR(100),
    backup_method VARCHAR(50), -- "mysqldump", "mariabackup", "file_copy", "veeam"
    compression_type VARCHAR(50), -- "gzip", "zip", "none"
    encryption_enabled BOOLEAN DEFAULT false,

    -- Retention
    retention_days INTEGER,
    scheduled_deletion_date TIMESTAMP,
    deleted_at TIMESTAMP,

    -- Status
    backup_status VARCHAR(50) DEFAULT 'completed' CHECK(backup_status IN (
        'in_progress', 'completed', 'failed', 'deleted'
    )),
    error_message TEXT,

    -- Integration linkage
    external_integration_id UUID REFERENCES external_integrations(id),

    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_backup_client (client_id),
    INDEX idx_backup_infrastructure (infrastructure_id),
    INDEX idx_backup_type (backup_type),
    INDEX idx_backup_date (backup_completed_at),
    INDEX idx_backup_verification (verification_status),
    INDEX idx_backup_status (backup_status)
);
```

**Example Backup Records:**

**Successful Daily Backup:**
```json
{
  "client_id": "dataforth-uuid",
  "infrastructure_id": "ad2-uuid",
  "backup_type": "daily",
  "backup_source": "veeam",
  "file_path": "/mnt/backups/AD2_2026-01-15_daily.vbk",
  "file_name": "AD2_2026-01-15_daily.vbk",
  "file_size_bytes": 48318382080,
  "storage_location": "D2TESTNAS",
  "backup_started_at": "2026-01-15T02:00:00Z",
  "backup_completed_at": "2026-01-15T02:45:30Z",
  "verification_status": "passed",
  "verification_method": "test_restore",
  "verification_details": {
    "restore_test_successful": true,
    "files_verified": 12543,
    "checksum_valid": true
  },
  "verification_completed_at": "2026-01-15T03:15:00Z",
  "backup_method": "veeam",
  "compression_type": "veeam_proprietary",
  "encryption_enabled": true,
  "retention_days": 30,
  "backup_status": "completed"
}
```

**Pre-Migration Backup:**
```json
{
  "client_id": "grabb-uuid",
  "infrastructure_id": "pfsense-uuid",
  "session_id": "migration-session-uuid",
  "backup_type": "pre-migration",
  "backup_source": "manual",
  "file_path": "/backups/pfsense_config_pre_migration_2026-01-15.xml",
  "file_size_bytes": 524288,
  "storage_location": "Local",
  "backup_started_at": "2026-01-15T14:00:00Z",
  "backup_completed_at": "2026-01-15T14:00:15Z",
  "verification_status": "passed",
  "verification_method": "manual",
  "backup_method": "file_copy",
  "backup_status": "completed"
}
```

**Failed Backup:**
```json
{
  "client_id": "client-uuid",
  "infrastructure_id": "nas-uuid",
  "backup_type": "daily",
  "backup_source": "veeam",
  "file_path": "/mnt/backups/NAS_2026-01-15_daily.vbk",
  "backup_started_at": "2026-01-15T02:00:00Z",
  "backup_completed_at": "2026-01-15T02:05:00Z",
  "backup_status": "failed",
  "error_message": "Insufficient disk space on target. Available: 2GB, Required: 50GB",
  "verification_status": "not_verified"
}
```

**Database Backup:**
```json
{
  "backup_type": "daily",
  "backup_source": "local",
  "file_path": "/var/backups/mysql/msp_tracking_2026-01-15.sql.gz",
  "file_size_bytes": 10485760,
  "storage_location": "Jupiter",
  "backup_started_at": "2026-01-15T01:00:00Z",
  "backup_completed_at": "2026-01-15T01:02:30Z",
  "verification_status": "passed",
  "verification_method": "checksum",
  "database_host": "172.16.3.20",
  "database_name": "msp_tracking",
  "backup_method": "mysqldump",
  "compression_type": "gzip",
  "retention_days": 90,
  "backup_status": "completed"
}
```
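`duration_seconds` is computed by the database, but `scheduled_deletion_date` must be supplied at insert time. A sketch of deriving both on the application side when a backup source does not report them (`finalize_backup` is a hypothetical helper; timestamps match the examples above):

```python
from datetime import datetime, timedelta

def _parse(ts):
    # Normalize trailing 'Z' for fromisoformat() compatibility.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def finalize_backup(record):
    """Fill in derived fields for a prospective backup_log row."""
    started = _parse(record["backup_started_at"])
    completed = _parse(record["backup_completed_at"])
    record["duration_seconds"] = int((completed - started).total_seconds())
    if record.get("retention_days"):
        record["scheduled_deletion_date"] = (
            completed + timedelta(days=record["retention_days"])
        ).strftime("%Y-%m-%dT%H:%M:%SZ")
    return record

rec = finalize_backup({
    "backup_started_at": "2026-01-15T02:00:00Z",
    "backup_completed_at": "2026-01-15T02:45:30Z",
    "retention_days": 30,
})
print(rec["duration_seconds"], rec["scheduled_deletion_date"])
# → 2730 2026-02-14T02:45:30Z
```

A nightly job could then select rows where `scheduled_deletion_date` has passed and `deleted_at` is null, delete the files, and flip `backup_status` to `'deleted'`.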

---

## Integration Workflows

### SyncroMSP Ticket Update Workflow (Agent-Based)

**User Request:**
> "Find the ticket for Dataforth about Backups - update it with the changes we've made today, and pull a report from MSP backups to attach to that ticket"

**Execution:**

1. **Main Claude launches Integration Workflow Agent**

2. **Agent performs multi-step workflow:**

   **Step 1: Search for ticket**
   ```http
   GET https://azcomputerguru.syncromsp.com/api/v1/tickets?customer_id=12345&subject=backup
   Authorization: Bearer <oauth_token>
   ```
   - Result: Ticket #12345 "Backup configuration for NAS"

   **Step 2: Generate work summary**
   - Process current session work_items
   - Create dense summary:
     ```
     Changes made today:
     - Configured Veeam backup job for D2TESTNAS
     - Set retention: 30 days local, 90 days cloud
     - Tested backup: successful (45GB)
     - Verified restore point creation
     ```

   **Step 3: Update ticket**
   ```http
   POST https://azcomputerguru.syncromsp.com/api/v1/tickets/12345/comments
   {
     "comment": "<work_summary>",
     "internal": false
   }
   ```
   - Store in external_integrations table

   **Step 4: Pull backup report**
   ```http
   GET https://api.mspbackups.com/v2/reports?customer=Dataforth&date=2026-01-15
   ```
   - Receives report PDF (agent handles large file)
   - Summarize: "5 backup jobs, all successful, total size 245GB"

   **Step 5: Attach report to ticket**
   ```http
   POST https://azcomputerguru.syncromsp.com/api/v1/tickets/12345/attachments
   Content-Type: multipart/form-data
   file: dataforth_backup_report.pdf
   ```

   **Step 6: Create ticket link**
   ```sql
   INSERT INTO ticket_links (
       session_id, client_id, integration_type,
       ticket_id, ticket_subject, link_type
   ) VALUES (
       'current-session-uuid', 'dataforth-uuid', 'syncro',
       '12345', 'Backup configuration for NAS', 'documents'
   );
   ```

3. **Agent returns concise summary:**
   ```
   Workflow complete:
   ✓ Found ticket #12345: Backup configuration for NAS
   ✓ Added work summary comment
   ✓ Pulled backup report (5 jobs, all successful, 245GB)
   ✓ Attached report to ticket
   ```

4. **Main Claude confirms to user**

**Context Saved:**
- Agent handled all API calls, file transfers, database updates
- Main Claude only received 50-word summary
- Large PDF file never entered main context
|
||||
---
|
||||
|
||||
### Auto Time Tracking Workflow

**When session ends with billable hours:**

1. **Ask user:**
   - "Log 2.5 hours to SyncroMSP ticket T12345? (y/n)"

2. **If yes, call SyncroMSP API:**
```http
POST https://azcomputerguru.syncromsp.com/api/v1/time_entries
{
  "ticket_id": 12345,
  "user_id": 12,
  "duration_minutes": 150,
  "work_description": "Backup configuration and testing",
  "billable": true
}
```

3. **Log in external_integrations:**
```json
{
  "integration_type": "syncro_time",
  "action": "created",
  "external_id": "time_entry_789",
  "request_data": {...},
  "response_status": "success"
}
```
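The hours-to-minutes conversion and request body shown in step 2 can be sketched as a small helper (the function name and structure are illustrative, not part of the SyncroMSP API):

```python
def time_entry_payload(ticket_id: int, user_id: int, hours: float,
                       description: str, billable: bool = True) -> dict:
    """Build the SyncroMSP time-entry body shown above; the API takes minutes."""
    return {
        "ticket_id": ticket_id,
        "user_id": user_id,
        "duration_minutes": round(hours * 60),  # 2.5 h -> 150 min
        "work_description": description,
        "billable": billable,
    }
```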
---

### Backup Report Automation

**Trigger:** User mentions "backup" in MSP session

1. **Detect keyword** "backup"

2. **Auto-suggest:**
   - "Pull latest backup report for Dataforth? (y/n)"

3. **If yes, query MSP Backups API:**
```http
GET https://api.mspbackups.com/v2/reports?customer=Dataforth&date=latest
```

4. **Display summary to user:**
   - "Latest backup report: 5 jobs, all successful, 245GB total"

5. **Options:**
   - Attach to ticket
   - Save to session
   - Email to client

---
## OAuth Flow

**User initiates:** `/msp integrate syncro`

1. **Generate OAuth URL:**
```
https://azcomputerguru.syncromsp.com/oauth/authorize
    ?client_id=<client_id>
    &redirect_uri=https://msp-api.azcomputerguru.com/oauth/callback
    &response_type=code
    &scope=tickets:read tickets:write time_entries:write
```

2. **User authorizes in browser**

3. **Callback receives authorization code:**
```http
GET https://msp-api.azcomputerguru.com/oauth/callback?code=abc123
```

4. **Exchange code for tokens:**
```http
POST https://azcomputerguru.syncromsp.com/oauth/token
{
  "grant_type": "authorization_code",
  "code": "abc123",
  "client_id": "<client_id>",
  "client_secret": "<client_secret>",
  "redirect_uri": "https://msp-api.azcomputerguru.com/oauth/callback"
}
```

5. **Encrypt and store tokens:**
```sql
INSERT INTO integration_credentials (
    integration_name, credential_type,
    oauth_token_encrypted, oauth_refresh_token_encrypted,
    oauth_expires_at, ...
)
```

6. **Confirm to user:**
   - "SyncroMSP connected successfully. Scopes: tickets:read, tickets:write, time_entries:write"

---
## Security Considerations

### API Key Storage
- All integration credentials are encrypted with AES-256-GCM
- Uses the same master key as the credentials table
- Kept separate from user credentials (different permission scopes)
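A minimal sketch of the encrypt/decrypt round trip using the `cryptography` package (the helper names are ours; the real implementation may differ in nonce handling and key derivation):

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_credential(master_key: bytes, plaintext: str) -> bytes:
    """AES-256-GCM with a random 96-bit nonce prepended to the ciphertext.

    The GCM authentication tag (appended by encrypt) provides tamper detection.
    """
    nonce = os.urandom(12)
    return nonce + AESGCM(master_key).encrypt(nonce, plaintext.encode(), None)

def decrypt_credential(master_key: bytes, blob: bytes) -> str:
    # Raises InvalidTag if the nonce or ciphertext was modified
    return AESGCM(master_key).decrypt(blob[:12], blob[12:], None).decode()
```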
### OAuth Token Refresh
```python
from datetime import datetime, timedelta, timezone

import requests

# Automatically refresh the access token shortly before it expires.
# decrypt() and update_integration_credentials() are the credential-store helpers.
if oauth_expires_at <= datetime.now(timezone.utc) + timedelta(minutes=5):
    response = requests.post(oauth_token_url, data={
        'grant_type': 'refresh_token',
        'refresh_token': decrypt(oauth_refresh_token_encrypted),
        'client_id': oauth_client_id,
        'client_secret': decrypt(oauth_client_secret_encrypted),
    }).json()

    # Update stored tokens (providers may also rotate the refresh token)
    update_integration_credentials(
        new_access_token=response['access_token'],
        new_refresh_token=response.get('refresh_token'),
        expires_at=datetime.now(timezone.utc) + timedelta(seconds=response['expires_in']),
    )
```
### Rate Limiting
- Track API rate limits per integration
- Implement exponential backoff on rate-limit errors
- Queue requests when the rate limit is reached
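The backoff behavior can be sketched as follows (the `send` callable stands in for the real HTTP client; names are illustrative):

```python
import random
import time

def next_delay(attempt: int, retry_after=None, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay before retry `attempt` (0-based): honor Retry-After, else exponential."""
    if retry_after is not None:
        return float(retry_after)
    return min(cap, base * (2 ** attempt)) + random.uniform(0, 0.5)  # jitter

def call_with_backoff(send, max_retries: int = 5):
    """Call `send()` -> (status, headers, body); retry while it returns HTTP 429."""
    for attempt in range(max_retries):
        status, headers, body = send()
        if status != 429:
            return status, body
        time.sleep(next_delay(attempt, headers.get("Retry-After")))
    raise RuntimeError(f"rate limit persisted after {max_retries} retries")
```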
### Webhook Security
- Verify webhook signatures
- Store webhook secrets encrypted
- IP whitelist for webhook endpoints (optional)
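Signature verification typically boils down to an HMAC over the raw request body compared in constant time (the header name and digest scheme vary by provider; this is a generic sketch):

```python
import hashlib
import hmac

def verify_webhook_signature(secret: bytes, raw_body: bytes, signature_header: str) -> bool:
    """Recompute HMAC-SHA256 of the raw body and compare to the sender's hex digest."""
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    # hmac.compare_digest avoids leaking timing information
    return hmac.compare_digest(expected, signature_header)
```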
---

## Future Enhancements

**Phase 1 (MVP):**
- SyncroMSP ticket search and read
- Manual ticket linking
- Session summary → ticket comment (manual)

**Phase 2:**
- MSP Backups report pulling
- File attachments to tickets
- OAuth token refresh automation
- Auto-suggest ticket linking

**Phase 3:**
- Zapier webhook triggers
- Auto time tracking
- Multi-step workflows
- Natural language commands

**Phase 4:**
- Bi-directional sync
- Advanced automation
- Additional PSA integrations (Autotask, ConnectWise)
- IT Glue documentation sync

---

**Document Version:** 1.0
**Last Updated:** 2026-01-15
**Author:** MSP Mode Schema Design Team
308
.claude/SCHEMA_MSP.md
Normal file
@@ -0,0 +1,308 @@
# SCHEMA_MSP.md

**Source:** MSP-MODE-SPEC.md
**Section:** MSP Work Tracking Tables
**Date:** 2026-01-15

## Overview

MSP work tracking tables for detailed session work items, task management, and work details tracking. These tables capture granular information about work performed during MSP sessions.

---

## MSP Work Tracking Tables

### `work_items`

Individual tasks/actions within sessions (granular tracking).
```sql
CREATE TABLE work_items (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    session_id UUID NOT NULL REFERENCES sessions(id) ON DELETE CASCADE,
    category VARCHAR(50) NOT NULL CHECK(category IN (
        'infrastructure', 'troubleshooting', 'configuration',
        'development', 'maintenance', 'security', 'documentation'
    )),
    title VARCHAR(500) NOT NULL,
    description TEXT NOT NULL,
    status VARCHAR(50) DEFAULT 'completed' CHECK(status IN (
        'completed', 'in_progress', 'blocked', 'pending', 'deferred'
    )),
    priority VARCHAR(20) CHECK(priority IN ('critical', 'high', 'medium', 'low')),
    is_billable BOOLEAN DEFAULT false,
    estimated_minutes INTEGER,
    actual_minutes INTEGER,
    affected_systems TEXT,   -- JSON array: ["jupiter", "172.16.3.20"]
    technologies_used TEXT,  -- JSON array: ["docker", "mariadb"]
    item_order INTEGER,      -- sequence within session
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    completed_at TIMESTAMP,

    INDEX idx_work_items_session (session_id),
    INDEX idx_work_items_category (category),
    INDEX idx_work_items_status (status)
);
```

**Categories distribution (from analysis):**
- Infrastructure: 30%
- Troubleshooting: 25%
- Configuration: 15%
- Development: 15%
- Maintenance: 10%
- Security: 5%

---
## Work Details Tracking Tables (6 tables)

### `file_changes`

Track files created/modified/deleted during sessions.

```sql
CREATE TABLE file_changes (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    work_item_id UUID NOT NULL REFERENCES work_items(id) ON DELETE CASCADE,
    session_id UUID NOT NULL REFERENCES sessions(id) ON DELETE CASCADE,
    file_path VARCHAR(1000) NOT NULL,
    change_type VARCHAR(50) CHECK(change_type IN (
        'created', 'modified', 'deleted', 'renamed', 'backed_up'
    )),
    backup_path VARCHAR(1000),
    size_bytes BIGINT,
    description TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_file_changes_work_item (work_item_id),
    INDEX idx_file_changes_session (session_id)
);
```

---
### `commands_run`

Shell/PowerShell/SQL commands executed (enhanced with failure tracking).

```sql
CREATE TABLE commands_run (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    work_item_id UUID NOT NULL REFERENCES work_items(id) ON DELETE CASCADE,
    session_id UUID NOT NULL REFERENCES sessions(id) ON DELETE CASCADE,
    command_text TEXT NOT NULL,
    host VARCHAR(255),             -- where executed: "jupiter", "172.16.3.20"
    shell_type VARCHAR(50),        -- "bash", "powershell", "sql", "docker"
    success BOOLEAN,
    output_summary TEXT,           -- first/last lines or error

    -- Failure tracking (new)
    exit_code INTEGER,             -- non-zero indicates failure
    error_message TEXT,            -- full error text
    failure_category VARCHAR(100), -- "compatibility", "permission", "syntax", "environmental"
    resolution TEXT,               -- how it was fixed (if resolved)
    resolved BOOLEAN DEFAULT false,

    execution_order INTEGER,       -- sequence within work item
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_commands_work_item (work_item_id),
    INDEX idx_commands_session (session_id),
    INDEX idx_commands_host (host),
    INDEX idx_commands_success (success),
    INDEX idx_commands_failure_category (failure_category)
);
```

---
### `infrastructure_changes`

Audit trail for infrastructure modifications.

```sql
CREATE TABLE infrastructure_changes (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    work_item_id UUID NOT NULL REFERENCES work_items(id) ON DELETE CASCADE,
    session_id UUID NOT NULL REFERENCES sessions(id) ON DELETE CASCADE,
    infrastructure_id UUID REFERENCES infrastructure(id) ON DELETE SET NULL,
    change_type VARCHAR(50) CHECK(change_type IN (
        'dns', 'firewall', 'routing', 'ssl', 'container',
        'service_config', 'hardware', 'network', 'storage'
    )),
    target_system VARCHAR(255) NOT NULL,
    before_state TEXT,
    after_state TEXT,
    is_permanent BOOLEAN DEFAULT true,
    rollback_procedure TEXT,
    verification_performed BOOLEAN DEFAULT false,
    verification_notes TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_infra_changes_work_item (work_item_id),
    INDEX idx_infra_changes_session (session_id),
    INDEX idx_infra_changes_infrastructure (infrastructure_id)
);
```

---
### `problem_solutions`

Issue tracking with root cause and resolution.

```sql
CREATE TABLE problem_solutions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    work_item_id UUID NOT NULL REFERENCES work_items(id) ON DELETE CASCADE,
    session_id UUID NOT NULL REFERENCES sessions(id) ON DELETE CASCADE,
    problem_description TEXT NOT NULL,
    symptom TEXT,                 -- what user saw
    error_message TEXT,           -- exact error code/message
    investigation_steps TEXT,     -- JSON array of diagnostic commands
    root_cause TEXT,
    solution_applied TEXT NOT NULL,
    verification_method TEXT,
    rollback_plan TEXT,
    recurrence_count INTEGER DEFAULT 1,  -- if same problem reoccurs
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_problems_work_item (work_item_id),
    INDEX idx_problems_session (session_id)
);
```

---
### `deployments`

Track software/config deployments.

```sql
CREATE TABLE deployments (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    work_item_id UUID NOT NULL REFERENCES work_items(id) ON DELETE CASCADE,
    session_id UUID NOT NULL REFERENCES sessions(id) ON DELETE CASCADE,
    infrastructure_id UUID REFERENCES infrastructure(id) ON DELETE SET NULL,
    service_id UUID REFERENCES services(id) ON DELETE SET NULL,
    deployment_type VARCHAR(50) CHECK(deployment_type IN (
        'code', 'config', 'database', 'container', 'service_restart'
    )),
    version VARCHAR(100),
    description TEXT,
    deployed_from VARCHAR(500),   -- source path or repo
    deployed_to VARCHAR(500),     -- destination
    rollback_available BOOLEAN DEFAULT false,
    rollback_procedure TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_deployments_work_item (work_item_id),
    INDEX idx_deployments_infrastructure (infrastructure_id),
    INDEX idx_deployments_service (service_id)
);
```

---
### `database_changes`

Track database schema/data modifications.

```sql
CREATE TABLE database_changes (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    work_item_id UUID NOT NULL REFERENCES work_items(id) ON DELETE CASCADE,
    session_id UUID NOT NULL REFERENCES sessions(id) ON DELETE CASCADE,
    database_name VARCHAR(255) NOT NULL,
    infrastructure_id UUID REFERENCES infrastructure(id) ON DELETE SET NULL,
    change_type VARCHAR(50) CHECK(change_type IN (
        'schema', 'data', 'index', 'optimization', 'cleanup', 'migration'
    )),
    sql_executed TEXT,
    rows_affected BIGINT,
    size_freed_bytes BIGINT,      -- for cleanup operations
    backup_taken BOOLEAN DEFAULT false,
    backup_location VARCHAR(500),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    INDEX idx_db_changes_work_item (work_item_id),
    INDEX idx_db_changes_database (database_name)
);
```

---
## Relationships

- `sessions` → `work_items` (one-to-many): Each session contains multiple work items
- `work_items` → `file_changes` (one-to-many): Track files modified in each work item
- `work_items` → `commands_run` (one-to-many): Commands executed for each work item
- `work_items` → `infrastructure_changes` (one-to-many): Infrastructure changes made
- `work_items` → `problem_solutions` (one-to-many): Problems solved in work item
- `work_items` → `deployments` (one-to-many): Deployments performed
- `work_items` → `database_changes` (one-to-many): Database modifications
- `work_items` ↔ `tags` (many-to-many via work_item_tags)

---
## Work Item Categorization

### Auto-Categorization Logic

As work progresses, agents analyze the conversation and actions to categorize work.

**Keyword Triggers:**
- **infrastructure:** "ssh", "docker restart", "service", "server", "network"
- **troubleshooting:** "error", "not working", "broken", "failed", "issue"
- **configuration:** "configure", "setup", "change settings", "modify"
- **development:** "build", "code", "implement", "create", "develop"
- **maintenance:** "cleanup", "optimize", "backup", "update", "patch"
- **security:** "malware", "breach", "unauthorized", "vulnerability", "firewall"
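A first-pass categorizer over these triggers might look like this (the keyword table is copied from the bullets above; scoring and the fallback category are assumptions):

```python
KEYWORD_TRIGGERS = {
    "infrastructure": ["ssh", "docker restart", "service", "server", "network"],
    "troubleshooting": ["error", "not working", "broken", "failed", "issue"],
    "configuration": ["configure", "setup", "change settings", "modify"],
    "development": ["build", "code", "implement", "create", "develop"],
    "maintenance": ["cleanup", "optimize", "backup", "update", "patch"],
    "security": ["malware", "breach", "unauthorized", "vulnerability", "firewall"],
}

def categorize(text: str) -> str:
    """Pick the category with the most keyword hits; fall back to 'documentation'."""
    lowered = text.lower()
    scores = {cat: sum(kw in lowered for kw in kws)
              for cat, kws in KEYWORD_TRIGGERS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "documentation"
```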
### Information-Dense Data Capture

Work items use concise, structured descriptions.

**Format:**
```
Problem: [what was wrong]
Cause: [root cause if identified]
Fix: [solution applied]
Verify: [how confirmed]
```

**Example:**
```
Problem: ERR_SSL_PROTOCOL_ERROR on git.azcomputerguru.com
Cause: Certificate expired 2026-01-10
Fix: certbot renew && systemctl restart apache2
Verify: curl test successful, browser loads site
```

---
## Billability Tracking

### Auto-flag Billable Work

- Client work (non-internal) → `is_billable = true` by default
- Internal infrastructure → `is_billable = false`
- User can override with command: `/billable false`

### Time Allocation

- Track time per work_item (start when created, end when completed)
- `actual_minutes` calculated from timestamps
- Aggregate to session total: `billable_hours` in sessions table
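The `actual_minutes` and `billable_hours` roll-ups described above are straightforward timestamp arithmetic (a sketch; column names follow the `work_items` DDL, the dict shape is illustrative):

```python
from datetime import datetime

def actual_minutes(created_at: datetime, completed_at: datetime) -> int:
    """Minutes between work-item creation and completion, rounded to whole minutes."""
    return round((completed_at - created_at).total_seconds() / 60)

def session_billable_hours(work_items: list) -> float:
    """Aggregate billable work-item minutes into the session's billable_hours total."""
    total = sum(wi["actual_minutes"] for wi in work_items if wi["is_billable"])
    return round(total / 60, 2)
```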
---

## Cross-References

- **Core Tables:** See [SCHEMA_CORE.md](SCHEMA_CORE.md)
- **Infrastructure Details:** See [SCHEMA_INFRASTRUCTURE.md](SCHEMA_INFRASTRUCTURE.md)
- **Credentials:** See [SCHEMA_CREDENTIALS.md](SCHEMA_CREDENTIALS.md)
- **Environmental Learning:** See [SCHEMA_CONTEXT.md](SCHEMA_CONTEXT.md)
- **External Integrations:** See [SCHEMA_INTEGRATIONS.md](SCHEMA_INTEGRATIONS.md)
- **API Endpoints:** See [API_SPEC.md](API_SPEC.md)
647
.claude/agents/testing.md
Normal file
@@ -0,0 +1,647 @@
|
||||
# Testing Agent
|
||||
|
||||
## Role
|
||||
Quality assurance specialist - validates implementation with real-world testing
|
||||
|
||||
## Responsibilities
|
||||
- Create and execute tests for completed code
|
||||
- Use only real data (database, files, actual services)
|
||||
- Report failures with specific details
|
||||
- Request missing test data/infrastructure from coordinator
|
||||
- Validate behavior matches specifications
|
||||
|
||||
## Testing Scope
|
||||
|
||||
### Unit Testing
|
||||
- Model validation (SQLAlchemy models)
|
||||
- Function behavior
|
||||
- Data validation
|
||||
- Constraint enforcement
|
||||
- Individual utility functions
|
||||
- Class method correctness
|
||||
|
||||
### Integration Testing
|
||||
- Database operations (CRUD)
|
||||
- Agent coordination
|
||||
- API endpoints
|
||||
- Authentication flows
|
||||
- File system operations
|
||||
- Git/Gitea integration
|
||||
- Cross-component interactions
|
||||
|
||||
### End-to-End Testing
|
||||
- Complete user workflows
|
||||
- Mode switching (MSP/Dev/Normal)
|
||||
- Multi-agent orchestration
|
||||
- Data persistence across sessions
|
||||
- Full feature implementations
|
||||
- User journey validation
|
||||
|
||||
## Testing Philosophy
|
||||
|
||||
### Real Data Only
|
||||
- Connect to actual Jupiter database (172.16.3.20)
|
||||
- Use actual claudetools database
|
||||
- Test against real file system (D:\ClaudeTools)
|
||||
- Validate with real Gitea instance (http://172.16.3.20:3000)
|
||||
- Execute real API calls
|
||||
- Create actual backup files
|
||||
|
||||
### No Mocking
|
||||
- Test against real services when possible
|
||||
- Use actual database transactions
|
||||
- Perform real file I/O operations
|
||||
- Make genuine HTTP requests
|
||||
- Execute actual Git operations
|
||||
|
||||
### No Imagination
|
||||
- If data doesn't exist, request it from coordinator
|
||||
- If infrastructure is missing, report to coordinator
|
||||
- If dependencies are unavailable, pause and request
|
||||
- Never fabricate test results
|
||||
- Never assume behavior without verification
|
||||
|
||||
### Reproducible
|
||||
- Tests should be repeatable with same results
|
||||
- Use consistent test data
|
||||
- Clean up test artifacts
|
||||
- Document test prerequisites
|
||||
- Maintain test isolation where possible
|
||||
|
||||
### Documented Failures
|
||||
- Provide specific error messages
|
||||
- Include full stack traces
|
||||
- Reference exact file paths and line numbers
|
||||
- Show actual vs expected values
|
||||
- Suggest actionable fixes
|
||||
|
||||
## Workflow Integration
|
||||
|
||||
```
|
||||
Coding Agent → Code Review Agent → Testing Agent → Coordinator → User
|
||||
↓
|
||||
[PASS] Continue
|
||||
[FAIL] Back to Coding Agent
|
||||
```
|
||||
|
||||
### Integration Points
|
||||
- Receives testing requests from Coordinator
|
||||
- Reports results back to Coordinator
|
||||
- Can trigger Coding Agent for fixes
|
||||
- Provides evidence for user validation
|
||||
|
||||
## Communication with Coordinator
|
||||
|
||||
### Requesting Missing Elements
|
||||
When testing requires missing elements:
|
||||
- "Testing requires: [specific item needed]"
|
||||
- "Cannot test [feature] without: [dependency]"
|
||||
- "Need test data: [describe data requirements]"
|
||||
- "Missing infrastructure: [specify what's needed]"
|
||||
|
||||
### Reporting Results
|
||||
- Clear PASS/FAIL status for each test
|
||||
- Summary statistics (X passed, Y failed, Z skipped)
|
||||
- Detailed failure information
|
||||
- Recommendations for next steps
|
||||
|
||||
### Coordinating Fixes
|
||||
- "Found N failures requiring code changes"
|
||||
- "Recommend routing to Coding Agent for: [specific fixes]"
|
||||
- "Minor issues can be fixed directly: [list items]"
|
||||
|
||||
## Test Execution Pattern
|
||||
|
||||
### 1. Receive Testing Request
|
||||
- Understand scope (unit/integration/E2E)
|
||||
- Identify components to test
|
||||
- Review specifications/requirements
|
||||
|
||||
### 2. Identify Requirements
|
||||
- List required test data
|
||||
- Identify necessary infrastructure
|
||||
- Determine dependencies
|
||||
- Check for prerequisite setup
|
||||
|
||||
### 3. Verify Prerequisites
|
||||
- Check database connectivity
|
||||
- Verify file system access
|
||||
- Confirm service availability
|
||||
- Validate test environment
|
||||
|
||||
### 4. Request Missing Items
|
||||
- Submit requests to coordinator
|
||||
- Wait for provisioning
|
||||
- Verify received items
|
||||
- Confirm ready to proceed
|
||||
|
||||
### 5. Execute Tests
|
||||
- Run unit tests first
|
||||
- Progress to integration tests
|
||||
- Complete with E2E tests
|
||||
- Capture all output
|
||||
|
||||
### 6. Analyze Results
|
||||
- Categorize failures
|
||||
- Identify patterns
|
||||
- Determine root causes
|
||||
- Assess severity
|
||||
|
||||
### 7. Report Results
|
||||
- Provide detailed pass/fail status
|
||||
- Include evidence and logs
|
||||
- Make recommendations
|
||||
- Suggest next actions
|
||||
|
||||
## Test Reporting Format
|
||||
|
||||
### PASS Format
|
||||
```
|
||||
✅ Component/Feature Name
|
||||
Description: [what was tested]
|
||||
Evidence: [specific proof of success]
|
||||
Time: [execution time]
|
||||
Details: [any relevant notes]
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```
|
||||
✅ MSPClient Model - Database Operations
|
||||
Description: Create, read, update, delete operations on msp_clients table
|
||||
Evidence: Created client ID 42, retrieved successfully, updated name, deleted
|
||||
Time: 0.23s
|
||||
Details: All constraints validated, foreign keys work correctly
|
||||
```
|
||||
|
||||
### FAIL Format
|
||||
```
|
||||
❌ Component/Feature Name
|
||||
Description: [what was tested]
|
||||
Error: [specific error message]
|
||||
Location: [file path:line number]
|
||||
Stack Trace: [relevant trace]
|
||||
Expected: [what should happen]
|
||||
Actual: [what actually happened]
|
||||
Suggested Fix: [actionable recommendation]
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```
|
||||
❌ WorkItem Model - Status Validation
|
||||
Description: Test invalid status value rejection
|
||||
Error: IntegrityError - CHECK constraint failed: work_items
|
||||
Location: D:\ClaudeTools\api\models\work_item.py:45
|
||||
Stack Trace:
|
||||
File "test_work_item.py", line 67, in test_invalid_status
|
||||
session.commit()
|
||||
sqlalchemy.exc.IntegrityError: CHECK constraint failed
|
||||
Expected: Should reject status='invalid_status'
|
||||
Actual: Database allowed invalid status value
|
||||
Suggested Fix: Add CHECK constraint: status IN ('todo', 'in_progress', 'blocked', 'done')
|
||||
```
|
||||
|
||||
### SKIP Format
|
||||
```
|
||||
⏭️ Component/Feature Name
|
||||
Reason: [why test was skipped]
|
||||
Required: [what's needed to run]
|
||||
Action: [how to resolve]
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```
|
||||
⏭️ Gitea Integration - Repository Creation
|
||||
Reason: Gitea service unavailable at http://172.16.3.20:3000
|
||||
Required: Gitea instance running and accessible
|
||||
Action: Request coordinator to verify Gitea service status
|
||||
```
|
||||
|
||||
## Testing Standards
|
||||
|
||||
### Python Testing
|
||||
- Use pytest as primary testing framework
|
||||
- Follow pytest conventions and best practices
|
||||
- Use fixtures for test data setup
|
||||
- Leverage pytest markers for test categorization
|
||||
- Generate pytest HTML reports
|
||||
|
||||
### Database Testing
|
||||
- Test against real claudetools database (172.16.3.20)
|
||||
- Use transactions for test isolation
|
||||
- Clean up test data after execution
|
||||
- Verify constraints and triggers
|
||||
- Test both success and failure paths
|
||||
|
||||
### File System Testing
|
||||
- Test in actual directory structure (D:\ClaudeTools)
|
||||
- Create temporary test directories when needed
|
||||
- Clean up test files after execution
|
||||
- Verify permissions and access
|
||||
- Test cross-platform path handling
|
||||
|
||||
### API Testing
|
||||
- Make real HTTP requests
|
||||
- Validate response status codes
|
||||
- Check response headers
|
||||
- Verify response body structure
|
||||
- Test error handling
|
||||
|
||||
### Git/Gitea Testing
|
||||
- Execute real Git commands
|
||||
- Test against actual Gitea repository
|
||||
- Verify commit history
|
||||
- Validate branch operations
|
||||
- Test authentication flows
|
||||
|
||||
### Backup Testing
|
||||
- Create actual backup files
|
||||
- Verify backup contents
|
||||
- Test restore operations
|
||||
- Validate backup integrity
|
||||
- Check backup timestamps
|
||||
|
||||
## Example Invocations
|
||||
|
||||
### After Phase Completion
|
||||
```
|
||||
Request: "Testing Agent: Validate all Phase 1 models can be instantiated and saved to database"
|
||||
|
||||
Execution:
|
||||
- Test MSPClient model CRUD operations
|
||||
- Test WorkItem model CRUD operations
|
||||
- Test TimeEntry model CRUD operations
|
||||
- Verify relationships (foreign keys, cascades)
|
||||
- Check constraints (unique, not null, check)
|
||||
|
||||
Report:
|
||||
✅ MSPClient Model - Full CRUD validated
|
||||
✅ WorkItem Model - Full CRUD validated
|
||||
❌ TimeEntry Model - Foreign key constraint missing
|
||||
✅ Model Relationships - All associations work
|
||||
✅ Database Constraints - All enforced correctly
|
||||
```
|
||||
|
||||
### Integration Test
|
||||
```
|
||||
Request: "Testing Agent: Test that Coding Agent → Code Review Agent workflow produces valid code files"
|
||||
|
||||
Execution:
|
||||
- Simulate coordinator sending task to Coding Agent
|
||||
- Verify Coding Agent creates code file
|
||||
- Check Code Review Agent receives and reviews code
|
||||
- Validate output meets standards
|
||||
- Confirm files are properly formatted
|
||||
|
||||
Report:
|
||||
✅ Workflow Execution - All agents respond correctly
|
||||
✅ File Creation - Code files generated in correct location
|
||||
✅ Code Review - Review comments properly formatted
|
||||
❌ File Permissions - Generated files not executable when needed
|
||||
✅ Output Validation - All files pass linting
|
||||
```
|
||||
|
||||
### End-to-End Test
|
||||
```
|
||||
Request: "Testing Agent: Execute complete MSP mode workflow - create client, work item, track time, commit to Gitea"
|
||||
|
||||
Execution:
|
||||
1. Create test MSP client in database
|
||||
2. Create work item for client
|
||||
3. Add time entry for work item
|
||||
4. Generate commit message
|
||||
5. Commit to Gitea repository
|
||||
6. Verify all data persists
|
||||
7. Validate Gitea shows commit
|
||||
|
||||
Report:
|
||||
✅ Client Creation - MSP client 'TestCorp' created (ID: 42)
|
||||
✅ Work Item Creation - Work item 'Test Task' created (ID: 15)
|
||||
✅ Time Tracking - 2.5 hours logged successfully
|
||||
✅ Commit Generation - Commit message follows template
|
||||
❌ Gitea Push - Authentication failed, SSH key not configured
|
||||
⏭️ Verification - Cannot verify commit in Gitea (dependency on push)
|
||||
|
||||
Recommendation: Request coordinator to configure Gitea SSH authentication
|
||||
```
|
||||
|
||||
### Regression Test
|
||||
```
|
||||
Request: "Testing Agent: Run full regression suite after Gitea Agent updates"
|
||||
|
||||
Execution:
|
||||
- Run all existing unit tests
|
||||
- Execute integration test suite
|
||||
- Perform E2E workflow tests
|
||||
- Compare results to baseline
|
||||
- Identify new failures
|
||||
|
||||
Report:
|
||||
Summary: 47 passed, 2 failed, 1 skipped (3.45s)
|
||||
✅ Unit Tests - All 30 tests passed
|
||||
✅ Integration Tests - 15/17 passed
|
||||
❌ Gitea Integration - New API endpoint returns 404
|
||||
❌ MSP Workflow - Commit format changed, breaks parser
|
||||
⏭️ Backup Test - Gitea service unavailable
|
||||
|
||||
Recommendation: Coding Agent should review Gitea API changes
|
||||
```
|
||||
|
||||
## Tools Available
|
||||
|
||||
### Testing Frameworks
|
||||
- pytest - Primary test framework
|
||||
- pytest-cov - Code coverage reporting
|
||||
- pytest-html - HTML test reports
|
||||
- pytest-xdist - Parallel test execution
|
||||
|
||||
### Database Tools
|
||||
- SQLAlchemy - ORM and database operations
|
||||
- pymysql - Direct MariaDB connectivity
|
||||
- pytest-sqlalchemy - Database testing fixtures
|
||||
|
||||
### File System Tools
|
||||
- pathlib - Path operations
|
||||
- tempfile - Temporary file/directory creation
|
||||
- shutil - File operations and cleanup
|
||||
- os - Operating system interface
|
||||
|
||||
### API Testing Tools
|
||||
- requests - HTTP client library
|
||||
- responses - Request mocking (only when absolutely necessary)
|
||||
- pytest-httpserver - Local test server
|
||||
|
||||
### Git/Version Control
|
||||
- GitPython - Git operations
|
||||
- subprocess - Direct git command execution
|
||||
- Gitea API client - Repository operations
|
||||
|
||||
### Validation Tools
|
||||
- jsonschema - JSON validation
|
||||
- pydantic - Data validation
|
||||
- cerberus - Schema validation
|
||||
|
||||
### Utilities
|
||||
- logging - Test execution logging
|
||||
- datetime - Timestamp validation
|
||||
- json - JSON parsing and validation
|
||||
- yaml - YAML configuration parsing
|
||||
|
||||
## Success Criteria
|
||||
|
||||
### Test Execution Success
|
||||
- All tests execute (even if some fail)
|
||||
- No uncaught exceptions in test framework
|
||||
- Test results are captured and logged
|
||||
- Execution time is reasonable
|
||||
|
||||
### Reporting Success
|
||||
- Results are clearly documented
|
||||
- Pass/fail status is unambiguous
|
||||
- Failures include actionable information
|
||||
- Evidence is provided for all assertions
|
||||
|
||||
### Quality Success
|
||||
- No tests use mocked/imaginary data
|
||||
- All tests are reproducible
|
||||
- Test coverage is comprehensive
|
||||
- Edge cases are considered
|
||||
|
||||
### Coordination Success
|
||||
- Coordinator has clear next steps
|
||||
- Missing dependencies are identified
|
||||
- Fix recommendations are specific
|
||||
- Communication is efficient
|
||||
|
## Constraints

### Data Constraints
- Never assume test data exists - verify it or request it
- Never create fake/mock data - use real data or request its creation
- Never use hardcoded IDs without verification
- Always clean up test data after execution

### Dependency Constraints
- Never skip tests due to missing dependencies - request them from the coordinator
- Never proceed without required infrastructure
- Always verify service availability before testing
- Request provisioning for missing components

### Reporting Constraints
- Always provide specific failure details, not generic errors
- Never report success without evidence
- Always include file paths and line numbers for failures
- Never omit stack traces or error messages

### Execution Constraints
- Never modify production data
- Always use test isolation techniques
- Never leave test artifacts behind
- Always respect database transactions

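The last two execution constraints - test isolation and respecting transactions - can be sketched with an in-memory SQLite stand-in. This is illustrative only (the real suite targets the MariaDB instance); the pattern is: write inside a transaction, assert, then roll back so nothing is left behind.

```python
import sqlite3

# Autocommit mode so we control transaction boundaries explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT)")

# Run the test's writes inside a transaction...
conn.execute("BEGIN")
conn.execute("INSERT INTO clients (name) VALUES ('test-client-tmp')")
count_during = conn.execute("SELECT COUNT(*) FROM clients").fetchone()[0]

# ...then roll back so no test artifacts remain in the table.
conn.rollback()
count_after = conn.execute("SELECT COUNT(*) FROM clients").fetchone()[0]
```

The same rollback-at-teardown idea carries over to the real database via a session fixture that never commits.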
## Test Categories and Markers

### Pytest Markers
```python
@pytest.mark.unit         # Unit tests (fast, isolated)
@pytest.mark.integration  # Integration tests (medium speed, multi-component)
@pytest.mark.e2e          # End-to-end tests (slow, full workflow)
@pytest.mark.database     # Requires database connectivity
@pytest.mark.gitea        # Requires Gitea service
@pytest.mark.slow         # Known slow tests (>5 seconds)
@pytest.mark.skip         # Temporarily disabled
@pytest.mark.wip          # Work in progress
```

### Test Organization
```
D:\ClaudeTools\tests\
├── unit\              # Fast, isolated component tests
│   ├── test_models.py
│   ├── test_utils.py
│   └── test_validators.py
├── integration\       # Multi-component tests
│   ├── test_database.py
│   ├── test_agents.py
│   └── test_api.py
├── e2e\               # Complete workflow tests
│   ├── test_msp_workflow.py
│   ├── test_dev_workflow.py
│   └── test_agent_coordination.py
├── fixtures\          # Shared test fixtures
│   ├── database.py
│   ├── files.py
│   └── mock_data.py
└── conftest.py        # Pytest configuration
```

## Test Development Guidelines

### Writing Good Tests
1. **Clear Test Names** - The test name should describe what is tested
2. **Single Assertion Focus** - Each test validates one thing
3. **Arrange-Act-Assert** - Follow the AAA pattern
4. **Independent Tests** - No test depends on another
5. **Repeatable** - Same input → same output every time

### Test Data Management
1. Use fixtures for common test data
2. Clean up after each test
3. Use unique identifiers to avoid conflicts
4. Document test data requirements
5. Version control test data schemas

### Error Handling
1. Test both success and failure paths
2. Verify error messages are meaningful
3. Check exception types are correct
4. Validate error recovery mechanisms
5. Test edge cases and boundary conditions

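The AAA pattern and unique-identifier guideline above combine naturally; a minimal stdlib sketch (the helper name is illustrative, not from the codebase):

```python
import uuid

def make_test_id(prefix: str = "test") -> str:
    # Unique suffix prevents collisions across repeated or parallel runs.
    return f"{prefix}-{uuid.uuid4().hex[:8]}"

def test_make_test_id_is_unique():
    # Arrange / Act
    first = make_test_id("client")
    second = make_test_id("client")
    # Assert: one focused check - identifiers never collide
    assert first != second
    assert first.startswith("client-")

test_make_test_id_is_unique()
```

Naming test records this way also makes orphaned artifacts easy to find and delete by prefix.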
## Integration with CI/CD

### Continuous Testing
- Tests run automatically on every commit
- Results posted to pull request comments
- Coverage reports generated
- Failed tests block merges

### Test Stages
1. **Fast Tests** - Unit tests run first (< 30s)
2. **Integration Tests** - Run after fast tests pass (< 5min)
3. **E2E Tests** - Run on main branch only (< 30min)
4. **Nightly Tests** - Full regression suite

### Quality Gates
- Minimum 80% code coverage
- All critical path tests must pass
- No known high-severity bugs
- Performance benchmarks met

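The stages above map onto marker selection at the command line; one possible pytest configuration (contents assumed, not taken from the repository):

```ini
# pytest.ini - illustrative marker registration for staged CI runs
[pytest]
markers =
    unit: fast, isolated tests (stage 1)
    integration: multi-component tests (stage 2)
    e2e: full workflow tests (main branch only)
    database: requires database connectivity
    gitea: requires the Gitea service
    slow: known slow tests (>5 seconds)
addopts = --strict-markers
```

With this in place, stage 1 is `pytest -m unit`, stage 2 is `pytest -m integration`, and `--strict-markers` makes any unregistered marker an error instead of a silent typo.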
## Troubleshooting Guide

### Common Issues

#### Database Connection Failures
```
Problem: Cannot connect to 172.16.3.20
Solutions:
- Verify network connectivity
- Check database credentials
- Confirm MariaDB service is running
- Test with mysql client directly
```

#### Test Data Conflicts
```
Problem: Unique constraint violation
Solutions:
- Use unique test identifiers (timestamps, UUIDs)
- Clean up test data before test run
- Check for orphaned test records
- Use database transactions for isolation
```

#### Gitea Service Unavailable
```
Problem: HTTP 503 or connection refused
Solutions:
- Verify Gitea service status
- Check network connectivity
- Confirm port 3000 is accessible
- Review Gitea logs for errors
```

#### File Permission Errors
```
Problem: Permission denied on file operations
Solutions:
- Check file/directory permissions
- Verify user has write access
- Ensure directories exist
- Test with absolute paths
```

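Two of these issues ("Cannot connect", "connection refused") can be caught up front with a cheap TCP probe before dependent tests run, so failures become a clear escalation rather than raw socket errors. A stdlib sketch:

```python
import socket

def service_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe MariaDB and Gitea before running tests that need them,
# then escalate to the coordinator if either is down.
db_ok = service_reachable("172.16.3.20", 3306, timeout=1.0)
gitea_ok = service_reachable("172.16.3.20", 3000, timeout=1.0)
```

This only proves the port accepts connections, not that credentials or schema are correct, so it complements rather than replaces the mysql-client check above.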
## Best Practices Summary

### DO
- ✅ Use real database connections
- ✅ Test with the actual file system
- ✅ Execute real HTTP requests
- ✅ Clean up test artifacts
- ✅ Provide detailed failure reports
- ✅ Request missing dependencies
- ✅ Use pytest fixtures effectively
- ✅ Follow the AAA pattern
- ✅ Test both success and failure
- ✅ Document test requirements

### DON'T
- ❌ Mock database operations
- ❌ Use imaginary test data
- ❌ Skip tests silently
- ❌ Leave test artifacts behind
- ❌ Report generic failures
- ❌ Assume data exists
- ❌ Test multiple things in one test
- ❌ Create interdependent tests
- ❌ Ignore edge cases
- ❌ Hardcode test values

## Coordinator Communication Protocol

### Request Format
```
FROM: Coordinator
TO: Testing Agent
SUBJECT: Test Request

Scope: [unit|integration|e2e]
Target: [component/feature/workflow]
Context: [relevant background]
Requirements: [prerequisites]
Success Criteria: [what defines success]
```

### Response Format
```
FROM: Testing Agent
TO: Coordinator
SUBJECT: Test Results

Summary: [X passed, Y failed, Z skipped]
Duration: [execution time]
Status: [PASS|FAIL|BLOCKED]

Details:
[Detailed test results using the reporting format]

Next Steps:
[Recommendations for the coordinator]
```

### Escalation Format
```
FROM: Testing Agent
TO: Coordinator
SUBJECT: Testing Blocked

Blocker: [what is blocking testing]
Impact: [what cannot be tested]
Required: [what is needed to proceed]
Urgency: [low|medium|high|critical]
Alternatives: [possible workarounds]
```

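The response format above is easy to generate programmatically; a hedged sketch of one possible in-process representation (class and field names are assumptions, not part of the spec):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestReport:
    """Illustrative structure mirroring the Response Format fields."""
    passed: int
    failed: int
    skipped: int
    duration_s: float
    details: List[str] = field(default_factory=list)

    @property
    def status(self) -> str:
        # BLOCKED would be set by the escalation path, not derived here.
        return "PASS" if self.failed == 0 else "FAIL"

    def summary(self) -> str:
        return f"{self.passed} passed, {self.failed} failed, {self.skipped} skipped"

report = TestReport(passed=3, failed=0, skipped=1, duration_s=2.5)
```

Rendering `summary()` and `status` into the message template keeps the pass/fail line unambiguous, as the reporting success criteria require.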
## Version History

### v1.0 - Initial Specification
- Created: 2026-01-16
- Author: ClaudeTools Development Team
- Status: Production Ready
- Purpose: Define the Testing Agent role and responsibilities within the ClaudeTools workflow

---

**Testing Agent Status: READY FOR DEPLOYMENT**

This agent is fully specified and ready to integrate into the ClaudeTools multi-agent workflow. The Testing Agent ensures code quality through real-world validation using actual database connections, file systems, and services - never mocks or imaginary data.

383
.claude/claude.md
Normal file
@@ -0,0 +1,383 @@
# ClaudeTools Project Context

**Project Type:** MSP Work Tracking System with AI Context Recall
**Status:** Production-Ready (95% Complete)
**Database:** MariaDB 12.1.2 @ 172.16.3.20:3306

---

## Quick Facts

- **130 API Endpoints** across 21 entities
- **43 Database Tables** (fully migrated)
- **Context Recall System** with cross-machine persistent memory
- **JWT Authentication** on all endpoints
- **AES-256-GCM Encryption** for credentials

---

## Project Structure

```
D:\ClaudeTools/
├── api/                    # FastAPI application
│   ├── main.py             # API entry point (130 endpoints)
│   ├── models/             # SQLAlchemy models (42 models)
│   ├── routers/            # API endpoints (21 routers)
│   ├── schemas/            # Pydantic schemas (84 classes)
│   ├── services/           # Business logic (21 services)
│   ├── middleware/         # Auth & error handling
│   └── utils/              # Crypto & compression utilities
├── migrations/             # Alembic database migrations
├── .claude/                # Claude Code hooks & config
│   ├── hooks/              # Auto-inject/save context
│   └── context-recall-config.env  # Configuration
└── scripts/                # Setup & test scripts
```

---

## Database Connection

**Credentials Location:** `C:\Users\MikeSwanson\claude-projects\shared-data\credentials.md`

**Connection String:**
```
Host: 172.16.3.20:3306
Database: claudetools
User: claudetools
Password: CT_e8fcd5a3952030a79ed6debae6c954ed
```

**Environment Variables:**
```bash
DATABASE_URL=mysql+pymysql://claudetools:CT_e8fcd5a3952030a79ed6debae6c954ed@172.16.3.20:3306/claudetools?charset=utf8mb4
```
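The DATABASE_URL can be assembled from its parts; a small stdlib sketch (the helper name is illustrative) that URL-escapes the password so special characters never break the DSN:

```python
from urllib.parse import quote_plus

def build_database_url(user: str, password: str, host: str, port: int, db: str) -> str:
    # SQLAlchemy-style MySQL DSN, matching the format used in .env
    return (
        f"mysql+pymysql://{user}:{quote_plus(password)}@{host}:{port}/{db}"
        "?charset=utf8mb4"
    )

url = build_database_url("claudetools", "example-password", "172.16.3.20", 3306, "claudetools")
```

Escaping matters if a rotated password ever contains `@`, `:` or `/`; `quote_plus` is the escaping SQLAlchemy's own documentation suggests for DSN passwords.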

---

## Starting the API

```bash
# Activate the virtual environment
api\venv\Scripts\activate

# Start the API server
python -m api.main
# OR
uvicorn api.main:app --reload --host 0.0.0.0 --port 8000

# Access the documentation
http://localhost:8000/api/docs
```

---

## Context Recall System

### How It Works

**Automatic context injection via Claude Code hooks:**
- `.claude/hooks/user-prompt-submit` - Recalls context before each message
- `.claude/hooks/task-complete` - Saves context after task completion

### Setup (One-Time)

```bash
bash scripts/setup-context-recall.sh
```

### Manual Context Recall

**API Endpoint:**
```
GET http://localhost:8000/api/conversation-contexts/recall
    ?project_id={uuid}
    &tags[]=fastapi&tags[]=database
    &limit=10
    &min_relevance_score=5.0
```
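Building that query string from code is a one-liner with the stdlib; note that the repeated `tags[]` parameter comes out percent-encoded (`tags%5B%5D`), which is equivalent on the wire:

```python
from urllib.parse import urlencode

# List of tuples so the tags[] key can repeat.
params = [
    ("project_id", "uuid-here"),
    ("tags[]", "fastapi"),
    ("tags[]", "database"),
    ("limit", 10),
    ("min_relevance_score", 5.0),
]
query = urlencode(params)
url = f"http://localhost:8000/api/conversation-contexts/recall?{query}"
```

Pass the same `Authorization: Bearer` header as in the save example below when actually issuing the request.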

**Test Context Recall:**
```bash
bash scripts/test-context-recall.sh
```

### Save Context Manually

```bash
curl -X POST http://localhost:8000/api/conversation-contexts \
  -H "Authorization: Bearer $JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "project_id": "uuid-here",
    "context_type": "session_summary",
    "title": "Current work session",
    "dense_summary": "Working on API endpoints...",
    "relevance_score": 7.0,
    "tags": ["api", "fastapi", "development"]
  }'
```

---

## Key API Endpoints

### Core Entities (Phase 4)
- `/api/machines` - Machine inventory
- `/api/clients` - Client management
- `/api/projects` - Project tracking
- `/api/sessions` - Work sessions
- `/api/tags` - Tagging system

### MSP Work Tracking (Phase 5)
- `/api/work-items` - Work item tracking
- `/api/tasks` - Task management
- `/api/billable-time` - Time & billing

### Infrastructure (Phase 5)
- `/api/sites` - Physical locations
- `/api/infrastructure` - IT assets
- `/api/services` - Application services
- `/api/networks` - Network configs
- `/api/firewall-rules` - Firewall documentation
- `/api/m365-tenants` - M365 tenant management

### Credentials (Phase 5)
- `/api/credentials` - Encrypted credential storage
- `/api/credential-audit-logs` - Audit trail (read-only)
- `/api/security-incidents` - Incident tracking

### Context Recall (Phase 6)
- `/api/conversation-contexts` - Context storage & recall
- `/api/context-snippets` - Knowledge fragments
- `/api/project-states` - Project state tracking
- `/api/decision-logs` - Decision documentation

---

## Common Workflows

### 1. Create New Project with Context

```python
# Create project
POST /api/projects
{
    "name": "New Website",
    "client_id": "client-uuid",
    "status": "planning"
}

# Initialize project state
POST /api/project-states
{
    "project_id": "project-uuid",
    "current_phase": "requirements",
    "progress_percentage": 10,
    "next_actions": ["Gather requirements", "Design mockups"]
}
```

### 2. Log Important Decision

```python
POST /api/decision-logs
{
    "project_id": "project-uuid",
    "decision_type": "technical",
    "decision_text": "Using FastAPI for API layer",
    "rationale": "Async support, automatic OpenAPI docs, modern Python",
    "alternatives_considered": ["Flask", "Django"],
    "impact": "high",
    "tags": ["api", "framework", "python"]
}
```

### 3. Track Work Session

```python
# Create session
POST /api/sessions
{
    "project_id": "project-uuid",
    "machine_id": "machine-uuid",
    "started_at": "2026-01-16T10:00:00Z"
}

# Log billable time
POST /api/billable-time
{
    "session_id": "session-uuid",
    "work_item_id": "work-item-uuid",
    "client_id": "client-uuid",
    "start_time": "2026-01-16T10:00:00Z",
    "end_time": "2026-01-16T12:00:00Z",
    "duration_hours": 2.0,
    "hourly_rate": 150.00,
    "total_amount": 300.00
}
```
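The `duration_hours` and `total_amount` fields in the billable-time payload are derivable from the timestamps and rate; a sketch (helper name assumed, not part of the API):

```python
from datetime import datetime

def billable_amount(start_iso: str, end_iso: str, hourly_rate: float):
    """Compute (duration_hours, total_amount) for a billable-time entry."""
    # fromisoformat in older Pythons doesn't accept a trailing 'Z'.
    start = datetime.fromisoformat(start_iso.replace("Z", "+00:00"))
    end = datetime.fromisoformat(end_iso.replace("Z", "+00:00"))
    hours = (end - start).total_seconds() / 3600
    return round(hours, 2), round(hours * hourly_rate, 2)

hours, total = billable_amount("2026-01-16T10:00:00Z", "2026-01-16T12:00:00Z", 150.00)
```

For the example above this yields 2.0 hours and 300.00, matching the payload.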

### 4. Store Encrypted Credential

```python
POST /api/credentials
{
    "credential_type": "api_key",
    "service_name": "OpenAI API",
    "username": "api_key",
    "password": "sk-1234567890",  # Auto-encrypted
    "client_id": "client-uuid",
    "notes": "Production API key"
}
# Password automatically encrypted with AES-256-GCM
# Audit log automatically created
```

---

## Important Files

**Session State:** `SESSION_STATE.md` - Complete project history and status

**Documentation:**
- `.claude/CONTEXT_RECALL_QUICK_START.md` - Context recall usage
- `CONTEXT_RECALL_SETUP.md` - Full setup guide
- `TEST_PHASE5_RESULTS.md` - Phase 5 test results
- `TEST_CONTEXT_RECALL_RESULTS.md` - Context recall test results

**Configuration:**
- `.env` - Environment variables (gitignored)
- `.env.example` - Template with placeholders
- `.claude/context-recall-config.env` - Context recall settings (gitignored)

**Tests:**
- `test_api_endpoints.py` - Phase 4 tests (34/35 passing)
- `test_phase5_api_endpoints.py` - Phase 5 tests (62/62 passing)
- `test_context_recall_system.py` - Context recall tests (53 total)
- `test_context_compression_quick.py` - Compression tests (10/10 passing)

---

## Recent Work (from SESSION_STATE.md)

**Last Session:** 2026-01-16
**Phases Completed:** 0-6 (95% complete)

**Phase 6 - Just Completed:**
- Context Recall System with cross-machine memory
- 35 new endpoints for context management
- 90-95% token reduction via compression
- Automatic hooks for inject/save
- One-command setup script

**Current State:**
- 130 endpoints operational
- 99.1% test pass rate (106/107 tests)
- All migrations applied (43 tables)
- Context recall ready for activation

---

## Token Optimization

**Context Compression:**
- `compress_conversation_summary()` - 85-90% reduction
- `format_for_injection()` - Token-efficient markdown
- `extract_key_decisions()` - Decision extraction
- Auto-tag extraction (30+ tech tags)

**Typical Compression:**
```
Original: 500 tokens (verbose conversation)
Compressed: 60 tokens (structured JSON)
Reduction: 88%
```
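The 88% figure above is straightforward arithmetic; a tiny sketch of the calculation:

```python
def token_reduction_pct(original: int, compressed: int) -> float:
    """Percent reduction going from original to compressed token counts."""
    return round(100 * (1 - compressed / original), 1)

pct = token_reduction_pct(500, 60)  # the documented example: 88.0
```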

---

## Security

**Authentication:** JWT tokens (Argon2 password hashing)
**Encryption:** AES-256-GCM (Fernet) for credentials
**Audit Logging:** All credential operations logged
**Token Storage:** `.claude/context-recall-config.env` (gitignored)
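The project also lists HMAC tamper detection among its security features; that idea can be sketched with the stdlib (illustrative only - key handling and payload format here are assumptions, not the project's actual implementation):

```python
import hashlib
import hmac

def sign(payload: bytes, key: bytes) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, key: bytes, signature: str) -> bool:
    # compare_digest is constant-time, avoiding timing side channels.
    return hmac.compare_digest(sign(payload, key), signature)

tag = sign(b"credential-blob", b"server-secret")
ok = verify(b"credential-blob", b"server-secret", tag)
tampered = verify(b"credential-blob-x", b"server-secret", tag)
```

Any change to the payload or signature makes `verify` return False, which is how tampering is detected.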

**Get JWT Token:**
```bash
# Via setup script (recommended)
bash scripts/setup-context-recall.sh

# Or manually via API
POST /api/auth/token
{
  "email": "user@example.com",
  "password": "your-password"
}
```

---

## Troubleshooting

**API won't start:**
```bash
# Check if port 8000 is in use
netstat -ano | findstr :8000

# Check the database connection
python test_db_connection.py
```

**Context recall not working:**
```bash
# Test the system
bash scripts/test-context-recall.sh

# Check the configuration
cat .claude/context-recall-config.env

# Verify hooks are executable
ls -l .claude/hooks/
```

**Database migration issues:**
```bash
# Check the current revision
alembic current

# Show migration history
alembic history

# Upgrade to the latest revision
alembic upgrade head
```

---

## Next Steps (Optional Phase 7)

**Remaining entities (from the original spec):**
- File Changes API - Track file modifications
- Command Runs API - Command execution history
- Problem Solutions API - Knowledge base
- Failure Patterns API - Error pattern recognition
- Environmental Insights API - Contextual learning

**These are optional** - the system is fully functional without them.

---

## Quick Reference

**Start API:** `uvicorn api.main:app --reload`
**API Docs:** `http://localhost:8000/api/docs`
**Setup Context Recall:** `bash scripts/setup-context-recall.sh`
**Test System:** `bash scripts/test-context-recall.sh`
**Database:** `172.16.3.20:3306/claudetools`
**Virtual Env:** `api\venv\Scripts\activate`

---

**Last Updated:** 2026-01-16
**Project Progress:** 95% Complete (Phase 6 of 7 done)
11
.claude/context-recall-config.env.example
Normal file
@@ -0,0 +1,11 @@

# Claude Context Import Configuration
# Copy this file to context-recall-config.env and update it with your actual values

# JWT Token for API Authentication
# Generate this token using the ClaudeTools API /auth endpoint
# Example: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
JWT_TOKEN=your-jwt-token-here

# API Base URL (default: http://localhost:8000)
# Change this if your API is running on a different host/port
API_BASE_URL=http://localhost:8000

2
.claude/hooks/.gitkeep
Normal file
@@ -0,0 +1,2 @@

# This directory contains Claude Code hooks for Context Recall
# See README.md for documentation

390
.claude/hooks/EXAMPLES.md
Normal file
@@ -0,0 +1,390 @@
# Context Recall Examples

Real-world examples of how the Context Recall System works.

## Example 1: Continuing Previous Work

### Session 1 (Monday)

**User:** "Add authentication endpoints to the API"

**Claude:** Creates `/api/auth/login` and `/api/auth/register` endpoints

**System:** Saves context:
```json
{
  "title": "Session: 2025-01-13T14:30:00Z",
  "context_type": "session_summary",
  "dense_summary": "Created authentication endpoints including login and register. Implemented JWT token generation with 24-hour expiry. Added password hashing with bcrypt.",
  "relevance_score": 8.5,
  "metadata": {
    "files_modified": "api/auth.py,api/models.py",
    "git_branch": "feature/auth"
  }
}
```

### Session 2 (Tuesday)

**User:** "Add password reset functionality"

**System:** Automatically recalls context:
```markdown
## 📚 Previous Context

### 1. Session: 2025-01-13T14:30:00Z (Score: 8.5/10)
*Type: session_summary*

Created authentication endpoints including login and register.
Implemented JWT token generation with 24-hour expiry.
Added password hashing with bcrypt.

Modified files: api/auth.py,api/models.py
Branch: feature/auth
```

**Claude:** (with context) "I'll add password reset to the existing authentication system. Based on the previous session, I can see we're using JWT tokens and bcrypt..."

## Example 2: Technical Decision Recall

### Session 1

**User:** "Why did we choose PostgreSQL over MongoDB?"

**Claude:** "We chose PostgreSQL because: 1) Strong ACID guarantees, 2) Complex relationships in data model, 3) Team familiarity..."

**System:** Saves context:
```json
{
  "title": "Database Technology Decision",
  "context_type": "technical_decision",
  "dense_summary": "Chose PostgreSQL over MongoDB for: ACID guarantees, complex relationships, team familiarity, excellent tooling ecosystem. Decision made after evaluating both options.",
  "relevance_score": 9.0
}
```

### Session 2 (Weeks Later)

**User:** "Should we use MongoDB for the logging service?"

**System:** Recalls context:
```markdown
## 📚 Previous Context

### 1. Database Technology Decision (Score: 9.0/10)
*Type: technical_decision*

Chose PostgreSQL over MongoDB for: ACID guarantees, complex relationships,
team familiarity, excellent tooling ecosystem. Decision made after
evaluating both options.
```

**Claude:** "Given our previous decision to use PostgreSQL for its ACID guarantees and relational features, let's consider if logging has different requirements..."

## Example 3: Bug Fix History

### Session 1

**User:** "Fix the authentication timeout issue"

**Claude:** Fixes a bug in JWT token validation

**System:** Saves:
```json
{
  "title": "Bug Fix: Authentication Timeouts",
  "context_type": "bug_fix",
  "dense_summary": "Fixed authentication timeout issue caused by incorrect JWT expiry validation. Changed from server time to UTC. Added timezone handling.",
  "relevance_score": 8.0,
  "metadata": {
    "bug_type": "authentication",
    "fix_type": "timezone_handling"
  }
}
```

### Session 2

**User:** "Users are getting logged out randomly"

**System:** Recalls:
```markdown
## 📚 Previous Context

### 1. Bug Fix: Authentication Timeouts (Score: 8.0/10)
*Type: bug_fix*

Fixed authentication timeout issue caused by incorrect JWT expiry
validation. Changed from server time to UTC. Added timezone handling.
```

**Claude:** "This might be related to the previous timeout issue. Let me check the JWT timezone handling..."

## Example 4: Multi-Session Feature Development

### Session 1: Planning

**User:** "Plan a user dashboard feature"

**Claude:** Creates an architecture plan

**System:** Saves:
```json
{
  "title": "User Dashboard Architecture",
  "context_type": "architecture_decision",
  "dense_summary": "Planned user dashboard with: Real-time metrics, customizable widgets, role-based access. Tech stack: React frontend, WebSocket for real-time, Redis for caching.",
  "relevance_score": 9.5
}
```

### Session 2: Frontend Implementation

**User:** "Implement the dashboard frontend"

**System:** Recalls the planning context; Claude implements according to the plan

**System:** Saves:
```json
{
  "title": "Dashboard Frontend Implementation",
  "context_type": "session_summary",
  "dense_summary": "Implemented React dashboard with widget system. Created: DashboardContainer, WidgetGrid, individual widget components. Used WebSocket hook for real-time updates.",
  "relevance_score": 8.0
}
```

### Session 3: Backend Implementation

**User:** "Add the backend API for the dashboard"

**System:** Recalls both previous contexts

**Claude:** "Based on the architecture plan and frontend implementation, I'll create the WebSocket endpoints and Redis caching layer..."

### Session 4: Testing

**User:** "Write tests for the dashboard"

**System:** Recalls all three previous contexts

**Claude:** "I'll write tests covering the complete dashboard flow: frontend widgets, WebSocket connections, Redis caching, and API endpoints..."

## Example 5: Context Filtering

### Query with High Threshold

```bash
MIN_RELEVANCE_SCORE=7.5
```

Result: Only highly relevant contexts (major decisions, current feature work)

```markdown
### 1. User Authentication Refactor (Score: 9.0/10)
### 2. Database Schema Changes (Score: 8.5/10)
### 3. API Rate Limiting Implementation (Score: 7.8/10)
```

### Query with Low Threshold

```bash
MIN_RELEVANCE_SCORE=3.0
```

Result: More historical context (includes older sessions, minor changes)

```markdown
### 1. User Authentication Refactor (Score: 9.0/10)
### 2. Database Schema Changes (Score: 8.5/10)
### 3. API Rate Limiting Implementation (Score: 7.8/10)
### 4. Update README documentation (Score: 6.5/10)
### 5. Fix typo in comment (Score: 4.2/10)
### 6. Add gitignore entry (Score: 3.5/10)
```
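The threshold behavior in this example amounts to a filter-and-sort over scored contexts; a sketch (field names follow the recall API's parameters - the server-side implementation may differ):

```python
def filter_contexts(contexts, min_score, max_contexts):
    """Keep contexts at or above min_score, highest-scored first."""
    kept = [c for c in contexts if c["relevance_score"] >= min_score]
    kept.sort(key=lambda c: c["relevance_score"], reverse=True)
    return kept[:max_contexts]

contexts = [
    {"title": "User Authentication Refactor", "relevance_score": 9.0},
    {"title": "Fix typo in comment", "relevance_score": 4.2},
    {"title": "API Rate Limiting Implementation", "relevance_score": 7.8},
]
top = filter_contexts(contexts, min_score=7.5, max_contexts=5)
```

With `min_score=7.5` only the two high-scoring entries survive, mirroring the high-threshold query above.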

## Example 6: Project State Tracking

### After Each Session

The **task-complete hook** saves project state:

```json
{
  "project_id": "abc123...",
  "state_data": {
    "last_task_completion": "2025-01-15T10:30:00Z",
    "last_git_commit": "a1b2c3d",
    "last_git_branch": "feature/dashboard",
    "recent_files": "src/Dashboard.tsx,api/dashboard.py"
  },
  "state_type": "task_completion"
}
```

This enables Claude to answer questions like:
- "What was I working on last?"
- "What files did I modify recently?"
- "What branch am I on?"

## Example 7: Cross-Feature Context

### Session 1: Feature A

**User:** "Add user roles system"

**System:** Saves context about the role implementation (Admin, User, Guest)

### Session 2: Feature B

**User:** "Add permissions to the API endpoints"

**System:** Recalls the role system context

**Claude:** "I'll add permissions using the existing role system (Admin, User, Guest) that we implemented..."

This shows how context from one feature automatically informs work on related features.

## Example 8: Onboarding New Team Members

### New Developer's First Session

**New Dev:** "How does authentication work in this project?"

**System:** Recalls all authentication-related contexts:
```markdown
### 1. User Authentication Refactor (Score: 9.0/10)
Implemented JWT-based authentication with bcrypt password hashing...

### 2. Session Management Design (Score: 8.5/10)
Added Redis-backed session storage with 24-hour expiry...

### 3. OAuth Integration (Score: 8.0/10)
Integrated Google and GitHub OAuth providers...
```

**Claude:** (with full context) "Our authentication system uses JWT tokens with bcrypt hashing. We also have OAuth integration with Google and GitHub..."

## Example 9: Debugging with Context

### Session 1: Implementation

**User:** "Add email notification system"

**System:** Saves implementation details (SMTP config, template system, queuing)

### Session 2: Bug Report

**User:** "Emails aren't being sent"

**System:** Recalls the implementation context

**Claude:** "Let me check the email system we implemented. Based on the previous session, we're using SMTP with a queue system. Common issues could be..."

## Example 10: Evolving Requirements

### Month 1

**User:** "Build a simple user profile page"

**System:** Saves the initial implementation

### Month 2

**User:** "Add social features to profiles"

**System:** Recalls the original implementation

**Claude:** "I'll extend the existing profile system to include social features..."

### Month 3

**User:** "Make profiles customizable"

**System:** Recalls both previous implementations

**Claude:** "Building on the profile and social features, I'll add customization options..."

Each session builds on previous work with full context.

## Real Output Example

Here's what you actually see in Claude Code when context is recalled:

```markdown
<!-- Context Recall: Retrieved 3 relevant context(s) -->

## 📚 Previous Context

The following context has been automatically recalled from previous sessions:

### 1. API Authentication Implementation (Score: 8.5/10)
*Type: session_summary*

Task completed on branch 'feature/auth' (commit: a1b2c3d).

Summary: Implemented JWT-based authentication system with login/register
endpoints. Added password hashing using bcrypt. Created middleware for
protected routes. Token expiry set to 24 hours.

Modified files: api/auth.py,api/middleware.py,api/models.py

Timestamp: 2025-01-15T14:30:00Z

---

### 2. Database Schema for Users (Score: 7.8/10)
*Type: technical_decision*

Added User model with fields: id, username, email, password_hash,
created_at, last_login. Decided to use UUID for user IDs instead of
auto-increment integers for better security and scalability.

---

### 3. Security Best Practices Discussion (Score: 7.2/10)
*Type: session_summary*

Discussed security considerations: password hashing (bcrypt), token
storage (httpOnly cookies), CORS configuration, rate limiting. Decided
to implement rate limiting in the next session.

---

*This context was automatically injected to help maintain continuity across sessions.*
```

This gives Claude complete awareness of your previous work without you having to explain it!

## Benefits Demonstrated

1. **Continuity** - Work picks up exactly where you left off
2. **Consistency** - Decisions made previously are remembered
3. **Efficiency** - No need to re-explain project details
4. **Learning** - New team members get instant project knowledge
5. **Debugging** - Past implementations inform current troubleshooting
6. **Evolution** - Features build naturally on previous work

||||
## Configuration Tips
|
||||
|
||||
**For focused work (single feature):**
|
||||
```bash
|
||||
MIN_RELEVANCE_SCORE=7.0
|
||||
MAX_CONTEXTS=5
|
||||
```
|
||||
|
||||
**For comprehensive context (complex projects):**
|
||||
```bash
|
||||
MIN_RELEVANCE_SCORE=5.0
|
||||
MAX_CONTEXTS=15
|
||||
```
|
||||
|
||||
**For debugging (need full history):**
|
||||
```bash
|
||||
MIN_RELEVANCE_SCORE=3.0
|
||||
MAX_CONTEXTS=20
|
||||
```
|
||||
|
||||
## Next Steps
|
||||
|
||||
See `CONTEXT_RECALL_SETUP.md` for setup instructions and `README.md` for technical details.
|
223
.claude/hooks/INSTALL.md
Normal file
@@ -0,0 +1,223 @@
# Hook Installation Verification

This document helps verify that Claude Code hooks are properly installed.

## Quick Check

Run this command to verify installation:

```bash
bash scripts/test-context-recall.sh
```

Expected output: **15/15 tests passed**

## Manual Verification

### 1. Check Hook Files Exist

```bash
ls -la .claude/hooks/
```

Expected files:
- `user-prompt-submit` (executable)
- `task-complete` (executable)
- `README.md`
- `EXAMPLES.md`
- `INSTALL.md` (this file)

### 2. Check Permissions

```bash
ls -l .claude/hooks/user-prompt-submit
ls -l .claude/hooks/task-complete
```

Both should show: `-rwxr-xr-x` (executable)

If not executable:
```bash
chmod +x .claude/hooks/user-prompt-submit
chmod +x .claude/hooks/task-complete
```

### 3. Check Configuration Exists

```bash
cat .claude/context-recall-config.env
```

Should show:
- `CLAUDE_API_URL=http://localhost:8000`
- `JWT_TOKEN=...` (should have a value)
- `CONTEXT_RECALL_ENABLED=true`

If the file is missing, run setup:
```bash
bash scripts/setup-context-recall.sh
```

### 4. Test Hooks Manually

**Test user-prompt-submit:**
```bash
source .claude/context-recall-config.env
bash .claude/hooks/user-prompt-submit
```

Expected: either context output or silent success (if no contexts exist)

**Test task-complete:**
```bash
source .claude/context-recall-config.env
export TASK_SUMMARY="Test task"
bash .claude/hooks/task-complete
```

Expected: silent success or "✓ Context saved to database"

### 5. Check API Connectivity

```bash
curl http://localhost:8000/health
```

Expected: `{"status":"healthy"}` or similar

If it fails, start the API with `uvicorn api.main:app --reload`

### 6. Verify Git Config

```bash
git config --local claude.projectid
```

Expected: a UUID value

If empty, run setup:
```bash
bash scripts/setup-context-recall.sh
```

## Common Issues

### Hooks Not Executing

**Problem:** Hooks don't run when using Claude Code

**Solutions:**
1. Verify Claude Code supports hooks (see docs)
2. Check hook permissions: `chmod +x .claude/hooks/*`
3. Test hooks manually (see above)

### Context Not Appearing

**Problem:** No context injected in Claude Code

**Solutions:**
1. Check the API is running: `curl http://localhost:8000/health`
2. Check the JWT token is valid: run setup again
3. Enable debug: `echo "DEBUG_CONTEXT_RECALL=true" >> .claude/context-recall-config.env`
4. Check whether contexts exist: run a few tasks first

### Context Not Saving

**Problem:** Contexts not persisted to database

**Solutions:**
1. Check the project ID: `git config --local claude.projectid`
2. Test manually: `bash .claude/hooks/task-complete`
3. Check API logs for errors
4. Verify the JWT token: run setup again

### Permission Denied

**Problem:** `Permission denied` when running hooks

**Solution:**
```bash
chmod +x .claude/hooks/user-prompt-submit
chmod +x .claude/hooks/task-complete
```

### API Connection Refused

**Problem:** `Connection refused` errors

**Solutions:**
1. Start the API: `uvicorn api.main:app --reload`
2. Check the API URL in config
3. Verify firewall settings

## Troubleshooting Commands

```bash
# Full system test
bash scripts/test-context-recall.sh

# Check all permissions
ls -la .claude/hooks/ scripts/

# Re-run setup
bash scripts/setup-context-recall.sh

# Enable debug mode
echo "DEBUG_CONTEXT_RECALL=true" >> .claude/context-recall-config.env

# Test API
curl http://localhost:8000/health
curl -H "Authorization: Bearer $JWT_TOKEN" http://localhost:8000/api/projects

# View configuration
cat .claude/context-recall-config.env

# Test hooks with debug
bash -x .claude/hooks/user-prompt-submit
bash -x .claude/hooks/task-complete
```

## Expected Workflow

When properly installed:

1. **You start Claude Code** → `user-prompt-submit` runs
2. **Hook queries database** → Retrieves relevant contexts
3. **Context injected** → You see previous work context
4. **You work normally** → Claude has full context
5. **Task completes** → `task-complete` runs
6. **Context saved** → Available for next session

All automatic, with zero user action required!

## Documentation

- **Quick Start:** `.claude/CONTEXT_RECALL_QUICK_START.md`
- **Full Setup:** `CONTEXT_RECALL_SETUP.md`
- **Architecture:** `.claude/CONTEXT_RECALL_ARCHITECTURE.md`
- **Hook Details:** `.claude/hooks/README.md`
- **Examples:** `.claude/hooks/EXAMPLES.md`

## Support

If issues persist after following this guide:

1. Review the full documentation (see above)
2. Run the full test suite: `bash scripts/test-context-recall.sh`
3. Check API logs for errors
4. Enable debug mode for verbose output

## Success Checklist

- [ ] Hook files exist in `.claude/hooks/`
- [ ] Hooks are executable (`chmod +x`)
- [ ] Configuration file exists (`.claude/context-recall-config.env`)
- [ ] JWT token is set in configuration
- [ ] Project ID detected or set
- [ ] API is running (`curl http://localhost:8000/health`)
- [ ] Test script passes (`bash scripts/test-context-recall.sh`)
- [ ] Hooks execute manually without errors

If all items are checked: **Installation is complete!** ✅

Start using Claude Code and enjoy automatic context recall!
323
.claude/hooks/README.md
Normal file
@@ -0,0 +1,323 @@
# Claude Code Context Recall Hooks

Automatically inject and save relevant context from the ClaudeTools database into Claude Code conversations.

## Overview

This system provides seamless context continuity across Claude Code sessions by:

1. **Recalling context** - Automatically inject relevant context from previous sessions before each message
2. **Saving context** - Automatically save conversation summaries after task completion
3. **Project awareness** - Track project state and maintain context across sessions

## Hooks

### `user-prompt-submit`

**Runs:** Before each user message is processed

**Purpose:** Injects relevant context from the database into the conversation

**What it does:**
- Detects the current project ID (from git config or the remote URL)
- Calls `/api/conversation-contexts/recall` to fetch relevant contexts
- Injects context as a formatted markdown section
- Falls back gracefully if the API is unavailable

**Example output:**
```markdown
## 📚 Previous Context

The following context has been automatically recalled from previous sessions:

### 1. Database Schema Updates (Score: 8.5/10)
*Type: technical_decision*

Updated the Project model to include new fields for MSP integration...

---
```

### `task-complete`

**Runs:** After a task is completed

**Purpose:** Saves conversation context to the database for future recall

**What it does:**
- Gathers task information (git branch, commit, modified files)
- Creates a compressed summary of the task
- POSTs to `/api/conversation-contexts` to save the context
- Updates project state via `/api/project-states`

**Saved information:**
- Task summary
- Git branch and commit hash
- Modified files
- Timestamp
- Metadata for future retrieval
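Concretely, each saved context is one database record. A representative payload (field names match the `task-complete` hook in this directory; the values are illustrative) looks like:

```json
{
  "project_id": "4f8c2a1e-...",
  "context_type": "session_summary",
  "title": "Session: 2025-01-15T14:30:00Z",
  "dense_summary": "Task completed on branch 'feature/auth' (commit: a1b2c3d)...",
  "relevance_score": 7.0,
  "metadata": {
    "git_branch": "feature/auth",
    "git_commit": "a1b2c3d",
    "files_modified": "api/auth.py,api/middleware.py",
    "timestamp": "2025-01-15T14:30:00Z"
  }
}
```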
## Configuration

### Quick Setup

Run the automated setup script:

```bash
bash scripts/setup-context-recall.sh
```

This will:
1. Create a JWT token
2. Detect or create your project
3. Configure environment variables
4. Make hooks executable
5. Test the system

### Manual Setup

1. **Get JWT Token**

   ```bash
   curl -X POST http://localhost:8000/api/auth/login \
     -H "Content-Type: application/json" \
     -d '{"username": "admin", "password": "your-password"}'
   ```

2. **Get/Create Project**

   ```bash
   curl -X POST http://localhost:8000/api/projects \
     -H "Authorization: Bearer YOUR_JWT_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{
       "name": "ClaudeTools",
       "description": "Your project description"
     }'
   ```

3. **Configure `.claude/context-recall-config.env`**

   ```bash
   CLAUDE_API_URL=http://localhost:8000
   CLAUDE_PROJECT_ID=your-project-uuid-here
   JWT_TOKEN=your-jwt-token-here
   CONTEXT_RECALL_ENABLED=true
   MIN_RELEVANCE_SCORE=5.0
   MAX_CONTEXTS=10
   ```

4. **Make hooks executable**

   ```bash
   chmod +x .claude/hooks/user-prompt-submit
   chmod +x .claude/hooks/task-complete
   ```

### Configuration Options

| Variable | Default | Description |
|----------|---------|-------------|
| `CLAUDE_API_URL` | `http://localhost:8000` | API base URL |
| `CLAUDE_PROJECT_ID` | Auto-detect | Project UUID |
| `JWT_TOKEN` | Required | Authentication token |
| `CONTEXT_RECALL_ENABLED` | `true` | Enable/disable system |
| `MIN_RELEVANCE_SCORE` | `5.0` | Minimum score (0-10) |
| `MAX_CONTEXTS` | `10` | Max contexts per query |
| `AUTO_SAVE_CONTEXT` | `true` | Save after completion |
| `DEBUG_CONTEXT_RECALL` | `false` | Enable debug logs |

## Project ID Detection

The system automatically detects your project ID by checking, in order:

1. **Environment variable** - `CLAUDE_PROJECT_ID`
2. **Git config** - `git config --local claude.projectid`
3. **Git remote URL hash** - A consistent ID derived from the remote URL

To manually set the project ID in git config:

```bash
git config --local claude.projectid "your-project-uuid"
```
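The remote-URL fallback is just an MD5 digest (`echo -n "$GIT_REMOTE" | md5sum` in the hooks), so the same ID can be reproduced anywhere. A small Python sketch of the same derivation:

```python
import hashlib

def project_id_from_remote(remote_url: str) -> str:
    """Same derivation the hooks use: MD5 hex digest of the git remote URL."""
    return hashlib.md5(remote_url.encode()).hexdigest()
```

This is handy for checking which project ID a machine will fall back to before any contexts are saved.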
## Testing

Run the test script:

```bash
bash scripts/test-context-recall.sh
```

This will:
- Test API connectivity
- Test the context recall endpoint
- Test context saving
- Verify hooks are working

## Usage

Once configured, the system works automatically:

1. **Start Claude Code** - Context is automatically recalled
2. **Work normally** - All your conversations happen as usual
3. **Complete tasks** - Context is automatically saved
4. **Next session** - Previous context is automatically available

## Troubleshooting

### Context not appearing?

1. Enable debug mode:
   ```bash
   echo "DEBUG_CONTEXT_RECALL=true" >> .claude/context-recall-config.env
   ```

2. Check the API is running:
   ```bash
   curl http://localhost:8000/health
   ```

3. Verify the JWT token:
   ```bash
   curl -H "Authorization: Bearer $JWT_TOKEN" http://localhost:8000/api/projects
   ```

4. Check hooks are executable:
   ```bash
   ls -la .claude/hooks/
   ```

### Context not saving?

1. Check the task-complete hook output:
   ```bash
   bash -x .claude/hooks/task-complete
   ```

2. Verify the project ID:
   ```bash
   source .claude/context-recall-config.env
   echo $CLAUDE_PROJECT_ID
   ```

3. Check API logs for errors

### Hooks not running?

1. Verify hook permissions:
   ```bash
   chmod +x .claude/hooks/*
   ```

2. Test a hook manually:
   ```bash
   bash .claude/hooks/user-prompt-submit
   ```

3. Check the Claude Code hook documentation:
   https://docs.claude.com/claude-code/hooks

### API connection errors?

1. Verify the API is running:
   ```bash
   curl http://localhost:8000/health
   ```

2. Check for firewall/port blocking

3. Verify the API URL in config

## How It Works

### Context Recall Flow

```
User sends message
        ↓
[user-prompt-submit hook runs]
        ↓
Detect project ID
        ↓
Call /api/conversation-contexts/recall
        ↓
Format and inject context
        ↓
Claude processes message with context
```
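In code, the recall step in the flow above amounts to one authenticated GET. This minimal Python sketch mirrors the hook's request (endpoint and query parameters as used by `user-prompt-submit`; error handling and markdown formatting are elided):

```python
import json
import urllib.request

API_URL = "http://localhost:8000"  # default from the configuration above

def build_recall_request(project_id: str, token: str,
                         min_score: float = 5.0, limit: int = 10):
    """Build the GET request the user-prompt-submit hook issues."""
    url = (f"{API_URL}/api/conversation-contexts/recall"
           f"?project_id={project_id}&limit={limit}"
           f"&min_relevance_score={min_score}")
    return urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    })

def recall_contexts(project_id: str, token: str) -> list:
    """Fetch and decode the context list; callers should catch URLError."""
    with urllib.request.urlopen(build_recall_request(project_id, token),
                                timeout=3) as resp:
        return json.load(resp)
```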
### Context Save Flow

```
Task completes
        ↓
[task-complete hook runs]
        ↓
Gather task information
        ↓
Create context summary
        ↓
POST to /api/conversation-contexts
        ↓
Update /api/project-states
        ↓
Context saved for future recall
```

## API Endpoints Used

- `GET /api/conversation-contexts/recall` - Retrieve relevant contexts
- `POST /api/conversation-contexts` - Save new context
- `POST /api/project-states` - Update project state
- `GET /api/projects` - Get project information
- `POST /api/auth/login` - Get JWT token

## Security Notes

- JWT tokens are stored in `.claude/context-recall-config.env`
- This file should be in `.gitignore` (DO NOT commit tokens!)
- Tokens expire after 24 hours (configurable)
- Hooks fail gracefully if authentication fails

## Advanced Usage

### Custom Context Types

Modify the `task-complete` hook to create custom context types:

```bash
CONTEXT_TYPE="bug_fix"   # or "feature", "refactor", etc.
RELEVANCE_SCORE=9.0      # Higher for important contexts
```

### Filtering Contexts

Adjust recall parameters in the config:

```bash
MIN_RELEVANCE_SCORE=7.0  # Only high-quality contexts
MAX_CONTEXTS=5           # Fewer contexts per query
```

### Manual Context Injection

You can manually trigger context recall:

```bash
bash .claude/hooks/user-prompt-submit
```

## References

- [Claude Code Hooks Documentation](https://docs.claude.com/claude-code/hooks)
- [ClaudeTools API Documentation](.claude/API_SPEC.md)
- [Database Schema](.claude/SCHEMA_CORE.md)

## Support

For issues or questions:
1. Check the troubleshooting section above
2. Review API logs: `tail -f api/logs/app.log`
3. Test with `scripts/test-context-recall.sh`
4. Check hook output with `bash -x .claude/hooks/[hook-name]`
140
.claude/hooks/task-complete
Normal file
@@ -0,0 +1,140 @@
#!/bin/bash
#
# Claude Code Hook: task-complete
# Runs AFTER a task is completed
# Saves conversation context to the database for future recall
#
# Expected environment variables:
#   CLAUDE_PROJECT_ID       - UUID of the current project
#   JWT_TOKEN               - Authentication token for API
#   CLAUDE_API_URL          - API base URL (default: http://localhost:8000)
#   CONTEXT_RECALL_ENABLED  - Set to "false" to disable (default: true)
#   TASK_SUMMARY            - Summary of completed task (auto-generated by Claude)
#   TASK_FILES              - Files modified during task (comma-separated)
#

# Load configuration if it exists
CONFIG_FILE="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)/context-recall-config.env"
if [ -f "$CONFIG_FILE" ]; then
    source "$CONFIG_FILE"
fi

# Default values
API_URL="${CLAUDE_API_URL:-http://localhost:8000}"
ENABLED="${CONTEXT_RECALL_ENABLED:-true}"

# Exit early if disabled
if [ "$ENABLED" != "true" ]; then
    exit 0
fi

# Detect project ID (same logic as user-prompt-submit)
if [ -z "$CLAUDE_PROJECT_ID" ]; then
    PROJECT_ID=$(git config --local claude.projectid 2>/dev/null)

    if [ -z "$PROJECT_ID" ]; then
        GIT_REMOTE=$(git config --get remote.origin.url 2>/dev/null)
        if [ -n "$GIT_REMOTE" ]; then
            PROJECT_ID=$(echo -n "$GIT_REMOTE" | md5sum | cut -d' ' -f1)
        fi
    fi
else
    PROJECT_ID="$CLAUDE_PROJECT_ID"
fi

# Exit if no project ID or JWT token
if [ -z "$PROJECT_ID" ] || [ -z "$JWT_TOKEN" ]; then
    exit 0
fi

# Gather task information
TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
GIT_BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "unknown")
GIT_COMMIT=$(git rev-parse --short HEAD 2>/dev/null || echo "none")

# Get recent git changes
CHANGED_FILES=$(git diff --name-only HEAD~1 2>/dev/null | head -10 | tr '\n' ',' | sed 's/,$//')
if [ -z "$CHANGED_FILES" ]; then
    CHANGED_FILES="${TASK_FILES:-}"
fi

# Create task summary
if [ -z "$TASK_SUMMARY" ]; then
    # Generate a basic summary from git log if no summary provided
    TASK_SUMMARY=$(git log -1 --pretty=format:"%s" 2>/dev/null || echo "Task completed")
fi

# Build context payload
CONTEXT_TITLE="Session: ${TIMESTAMP}"
CONTEXT_TYPE="session_summary"
RELEVANCE_SCORE=7.0

# Create dense summary
DENSE_SUMMARY="Task completed on branch '${GIT_BRANCH}' (commit: ${GIT_COMMIT}).

Summary: ${TASK_SUMMARY}

Modified files: ${CHANGED_FILES:-none}

Timestamp: ${TIMESTAMP}"

# Escape JSON string contents (printf avoids the trailing newline echo would add)
escape_json() {
    printf '%s' "$1" | python3 -c "import sys, json; print(json.dumps(sys.stdin.read())[1:-1])"
}

ESCAPED_TITLE=$(escape_json "$CONTEXT_TITLE")
ESCAPED_SUMMARY=$(escape_json "$DENSE_SUMMARY")

# Save context to database (escape_json strips the outer quotes, so quote here)
CONTEXT_PAYLOAD=$(cat <<EOF
{
    "project_id": "${PROJECT_ID}",
    "context_type": "${CONTEXT_TYPE}",
    "title": "${ESCAPED_TITLE}",
    "dense_summary": "${ESCAPED_SUMMARY}",
    "relevance_score": ${RELEVANCE_SCORE},
    "metadata": {
        "git_branch": "${GIT_BRANCH}",
        "git_commit": "${GIT_COMMIT}",
        "files_modified": "${CHANGED_FILES}",
        "timestamp": "${TIMESTAMP}"
    }
}
EOF
)

# POST to the conversation-contexts endpoint
RESPONSE=$(curl -s --max-time 5 \
    -X POST "${API_URL}/api/conversation-contexts" \
    -H "Authorization: Bearer ${JWT_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$CONTEXT_PAYLOAD" 2>/dev/null)

# Update project state
PROJECT_STATE_PAYLOAD=$(cat <<EOF
{
    "project_id": "${PROJECT_ID}",
    "state_data": {
        "last_task_completion": "${TIMESTAMP}",
        "last_git_commit": "${GIT_COMMIT}",
        "last_git_branch": "${GIT_BRANCH}",
        "recent_files": "${CHANGED_FILES}"
    },
    "state_type": "task_completion"
}
EOF
)

curl -s --max-time 5 \
    -X POST "${API_URL}/api/project-states" \
    -H "Authorization: Bearer ${JWT_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$PROJECT_STATE_PAYLOAD" 2>/dev/null >/dev/null

# Log success (optional - comment out for silent operation)
if [ -n "$RESPONSE" ]; then
    echo "✓ Context saved to database" >&2
fi

exit 0
119
.claude/hooks/user-prompt-submit
Normal file
@@ -0,0 +1,119 @@
#!/bin/bash
#
# Claude Code Hook: user-prompt-submit
# Runs BEFORE each user message is processed
# Injects relevant context from the database into the conversation
#
# Expected environment variables:
#   CLAUDE_PROJECT_ID       - UUID of the current project
#   JWT_TOKEN               - Authentication token for API
#   CLAUDE_API_URL          - API base URL (default: http://localhost:8000)
#   CONTEXT_RECALL_ENABLED  - Set to "false" to disable (default: true)
#   MIN_RELEVANCE_SCORE     - Minimum score for context (default: 5.0)
#   MAX_CONTEXTS            - Maximum number of contexts to retrieve (default: 10)
#

# Load configuration if it exists
CONFIG_FILE="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)/context-recall-config.env"
if [ -f "$CONFIG_FILE" ]; then
    source "$CONFIG_FILE"
fi

# Default values
API_URL="${CLAUDE_API_URL:-http://localhost:8000}"
ENABLED="${CONTEXT_RECALL_ENABLED:-true}"
MIN_SCORE="${MIN_RELEVANCE_SCORE:-5.0}"
MAX_ITEMS="${MAX_CONTEXTS:-10}"

# Exit early if disabled
if [ "$ENABLED" != "true" ]; then
    exit 0
fi

# Detect project ID from the git repo if not set
if [ -z "$CLAUDE_PROJECT_ID" ]; then
    # Try to get it from git config
    PROJECT_ID=$(git config --local claude.projectid 2>/dev/null)

    if [ -z "$PROJECT_ID" ]; then
        # Try to derive it from the git remote URL
        GIT_REMOTE=$(git config --get remote.origin.url 2>/dev/null)
        if [ -n "$GIT_REMOTE" ]; then
            # Hash the remote URL to create a consistent ID
            PROJECT_ID=$(echo -n "$GIT_REMOTE" | md5sum | cut -d' ' -f1)
        fi
    fi
else
    PROJECT_ID="$CLAUDE_PROJECT_ID"
fi

# Exit if no project ID is available
if [ -z "$PROJECT_ID" ]; then
    # Silent exit - no context available
    exit 0
fi

# Exit if no JWT token
if [ -z "$JWT_TOKEN" ]; then
    exit 0
fi

# Build the API request URL
RECALL_URL="${API_URL}/api/conversation-contexts/recall"
QUERY_PARAMS="project_id=${PROJECT_ID}&limit=${MAX_ITEMS}&min_relevance_score=${MIN_SCORE}"

# Fetch context from the API (with timeout and error handling)
CONTEXT_RESPONSE=$(curl -s --max-time 3 \
    "${RECALL_URL}?${QUERY_PARAMS}" \
    -H "Authorization: Bearer ${JWT_TOKEN}" \
    -H "Accept: application/json" 2>/dev/null)

# Check whether the request was successful
if [ $? -ne 0 ] || [ -z "$CONTEXT_RESPONSE" ]; then
    # Silent failure - API unavailable
    exit 0
fi

# Parse and format context (expects a JSON array of context objects)
# Example response: [{"title": "...", "dense_summary": "...", "relevance_score": 8.5}, ...]
CONTEXT_COUNT=$(echo "$CONTEXT_RESPONSE" | grep -o '"id"' | wc -l)

if [ "$CONTEXT_COUNT" -gt 0 ]; then
    echo "<!-- Context Recall: Retrieved $CONTEXT_COUNT relevant context(s) -->"
    echo ""
    echo "## 📚 Previous Context"
    echo ""
    echo "The following context has been automatically recalled from previous sessions:"
    echo ""

    # Extract and format each context entry
    # Note: this uses simple text parsing. For production, consider using jq if available.
    echo "$CONTEXT_RESPONSE" | python3 -c "
import sys, json
try:
    contexts = json.load(sys.stdin)
    if isinstance(contexts, list):
        for i, ctx in enumerate(contexts, 1):
            title = ctx.get('title', 'Untitled')
            summary = ctx.get('dense_summary', '')
            score = ctx.get('relevance_score', 0)
            ctx_type = ctx.get('context_type', 'unknown')

            print(f'### {i}. {title} (Score: {score}/10)')
            print(f'*Type: {ctx_type}*')
            print()
            print(summary)
            print()
            print('---')
            print()
except Exception:
    pass
" 2>/dev/null

    echo ""
    echo "*This context was automatically injected to help maintain continuity across sessions.*"
    echo ""
fi

# Exit successfully
exit 0