Compare commits

2 commits: 7dc27290fb...3560c90ea3

| Author | SHA1 | Date |
|---|---|---|
| | 3560c90ea3 | |
| | e4392afce9 | |
@@ -1,36 +1,429 @@

# /sync - Bidirectional ClaudeTools Sync

Synchronize ClaudeTools configuration, session data, and context bidirectionally with Gitea. Ensures all machines stay perfectly in sync for seamless cross-machine workflow.

---

## What Gets Synced

**FROM Local TO Gitea (PUSH):**
- Session logs: `session-logs/*.md`
- Project session logs: `projects/*/session-logs/*.md`
- Credentials: `credentials.md` (private repo - safe to sync)
- Project state: `SESSION_STATE.md`
- Commands: `.claude/commands/*.md`
- Directives: `directives.md`
- File placement guide: `.claude/FILE_PLACEMENT_GUIDE.md`
- Behavioral guidelines:
  - `.claude/CODING_GUIDELINES.md` (NO EMOJIS, ASCII markers, standards)
  - `.claude/AGENT_COORDINATION_RULES.md` (delegation guidelines)
  - `.claude/agents/*.md` (agent-specific documentation)
  - `.claude/CLAUDE.md` (project context and instructions)
  - Any other `.claude/*.md` operational files
- Any other tracked changes

**FROM Gitea TO Local (PULL):**
- All of the above from other machines
- Latest commands and configurations
- Updated session logs from other sessions
- Project-specific work and documentation

---

## Execution Steps

### Phase 1: Prepare Local Changes

1. **Navigate to ClaudeTools repo:**
   ```bash
   cd ~/ClaudeTools   # or D:\ClaudeTools on Windows
   ```
2. **Check repository status:**
   ```bash
   git status
   ```
   Report the number of changed/new files to the user.

3. **Stage all changes:**
   ```bash
   git add -A
   ```
   This includes:
   - New/modified session logs
   - Updated credentials.md
   - SESSION_STATE.md changes
   - Command updates
   - Directive changes
   - Behavioral guidelines (CODING_GUIDELINES.md, AGENT_COORDINATION_RULES.md, etc.)
   - Agent documentation
   - Project documentation

4. **Auto-commit local changes with timestamp:**
   ```bash
   git commit -m "sync: Auto-sync from [machine-name] at [timestamp]

   Synced files:
   - Session logs updated
   - Latest context and credentials
   - Command/directive updates

   Machine: [hostname]
   Timestamp: [YYYY-MM-DD HH:MM:SS]

   Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>"
   ```

   **Note:** Only commit if there are changes. If the working tree is clean, skip to Phase 2.
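A minimal sketch of that guard, assuming standard git porcelain output:

```bash
# Commit only when the working tree actually has changes.
if [ -n "$(git status --porcelain)" ]; then
    git commit -m "sync: Auto-sync from $(hostname) at $(date '+%Y-%m-%d %H:%M:%S')"
else
    echo "Working tree clean - skipping commit"
fi
```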
---

### Phase 2: Sync with Gitea

5. **Pull latest changes from Gitea:**
   ```bash
   git pull origin main --rebase
   ```

   **Handle conflicts if any:**
   - Session logs: Keep both versions (rename conflicting file with timestamp)
   - credentials.md: Manual merge required - report to user
   - Other files: Use standard git conflict resolution

   Report what was pulled from the remote.
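A minimal sketch of that pull-and-report step, assuming the rebase either applies cleanly or is aborted for manual resolution:

```bash
# Pull with rebase; on conflict, list the conflicted files and abort
# so the tree is left as it was for manual resolution.
before=$(git rev-parse HEAD)
if git pull origin main --rebase; then
    git log --oneline "$before"..HEAD   # commits new to this branch since the pull
else
    git diff --name-only --diff-filter=U   # conflicted files
    git rebase --abort
fi
```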
6. **Push local changes to Gitea:**
   ```bash
   git push origin main
   ```

   Confirm the push succeeded.

---

### Phase 3: Apply Configuration Locally

7. **Copy commands to the global Claude directory:**
   ```bash
   mkdir -p ~/.claude/commands
   cp -r ~/ClaudeTools/.claude/commands/* ~/.claude/commands/
   ```
   These slash commands are now available globally.

8. **Apply global settings if available:**
   ```bash
   if [ -f ~/ClaudeTools/.claude/settings.json ]; then
       cp ~/ClaudeTools/.claude/settings.json ~/.claude/settings.json
   fi
   ```

9. **Sync project settings:**
   ```bash
   if [ -f ~/ClaudeTools/.claude/settings.local.json ]; then
       # Read and note any project-specific settings
       :   # no-op so the block is valid shell
   fi
   ```

---

### Phase 4: Context Recovery

10. **Find and read most recent session logs:**

    Check all locations:
    - `~/ClaudeTools/session-logs/*.md` (general)
    - `~/ClaudeTools/projects/*/session-logs/*.md` (project-specific)

    Report the 3 most recent logs found:
    - File name and location
    - Last modified date
    - Brief summary of what was worked on (from first 5 lines)
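A minimal sketch of that lookup, assuming GNU coreutils (`date -r` prints a file's modification time) and log names without spaces:

```bash
# Newest 3 session logs across both locations, each with a 5-line preview.
for f in $(ls -t ~/ClaudeTools/session-logs/*.md \
               ~/ClaudeTools/projects/*/session-logs/*.md 2>/dev/null | head -3); do
    echo "== $f (modified $(date -r "$f" '+%Y-%m-%d'))"
    head -5 "$f"
done
```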
11. **Read behavioral guidelines and directives:**
    ```bash
    cat ~/ClaudeTools/directives.md
    cat ~/ClaudeTools/.claude/CODING_GUIDELINES.md
    cat ~/ClaudeTools/.claude/AGENT_COORDINATION_RULES.md
    ```
    Internalize operational directives and behavioral rules to ensure:
    - Proper coordination mode (delegate vs execute)
    - NO EMOJIS rule enforcement
    - Agent delegation patterns
    - Coding standards compliance

---

### Phase 5: Report Sync Status

12. **Summarize what was synced:**

    ```
    ## Sync Complete

    [OK] Local changes pushed to Gitea:
    - X session logs updated
    - credentials.md synced
    - SESSION_STATE.md updated
    - Y command files

    [OK] Remote changes pulled from Gitea:
    - Z files updated from other machines
    - Latest session: [most recent log]

    [OK] Configuration applied:
    - Commands available: /checkpoint, /context, /save, /sync, etc.
    - Directives internalized (coordination mode, delegation rules)
    - Behavioral guidelines internalized (NO EMOJIS, ASCII markers, coding standards)
    - Agent coordination rules applied
    - Global settings applied

    Recent work (last 3 sessions):
    1. [date] - [project] - [brief summary]
    2. [date] - [project] - [brief summary]
    3. [date] - [project] - [brief summary]

    **Status:** All machines in sync. Ready to continue work.
    ```

---

## Conflict Resolution

### Session Log Conflicts
If both machines created session logs with the same date:
1. Keep both versions
2. Rename to: `YYYY-MM-DD-session-[machine].md`
3. Report the conflict to the user
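A minimal sketch of that rename (the file name is a hypothetical example):

```bash
# Keep both versions: rename our copy with the machine name; the pulled
# copy keeps the original name.
log="session-logs/2026-01-21-session.md"   # hypothetical conflicting log
mv "$log" "${log%.md}-$(hostname).md"
```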
### credentials.md Conflicts
If credentials.md has conflicts:
1. Do NOT auto-merge
2. Report the conflict to the user
3. Show the conflicting sections
4. Ask the user which version to keep or how to merge

### Other File Conflicts
For files with standard git conflict markers:
1. Report the files with conflicts
2. Show the conflict sections
3. Ask the user to resolve manually or provide guidance

---

## Machine Detection

Automatically detect the machine name for commit messages:

**Windows:**
```powershell
$env:COMPUTERNAME
```

**Mac/Linux:**
```bash
hostname
```

**Timestamp format:**
```bash
date "+%Y-%m-%d %H:%M:%S"
```
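Putting the two together, a short sketch of composing the auto-commit subject on Mac/Linux:

```bash
# Build the commit subject used in Phase 1, step 4.
machine=$(hostname)
stamp=$(date "+%Y-%m-%d %H:%M:%S")
git commit -m "sync: Auto-sync from $machine at $stamp"
```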
---

## Benefits

### Seamless Multi-Machine Workflow
- Start work on one machine, continue on another
- All session context automatically synchronized
- Credentials available everywhere (private repo)
- Commands and directives stay consistent
- Behavioral rules enforced identically (NO EMOJIS, delegation patterns, coding standards)

### Complete Context Preservation
- Never lose session data
- Full history across all machines
- Searchable via git log
- Rollback capability if needed

### Zero Manual Sync
- One command syncs everything
- Auto-commit prevents forgotten changes
- Push/pull happens automatically
- Conflicts handled gracefully

---

## Usage Examples

### Standard Sync (Most Common)
```
User: /sync

Claude:
[Commits local changes]
[Pulls from Gitea]
[Pushes to Gitea]
[Applies configuration]
[Reports status]

Sync complete. 3 session logs pushed, 2 updates pulled.
Ready to continue work.
```

### Sync Before Important Work
```
User: "I'm switching to my other machine. /sync"

Claude:
[Syncs everything]
Report: Latest work on Dataforth DOS dashboard pushed to Gitea.
All session logs and credentials synced.
You can now pull on the other machine to continue.
```

### Daily Morning Sync
```
User: /sync

Claude:
[Pulls overnight changes from other machines]
Report: Found 2 new sessions from yesterday evening.
Latest: GuruRMM dashboard redesign completed.
Context recovered. Ready for today's work.
```

---

## Error Handling

### Network Issues
If git pull/push fails:
1. Report the connection error
2. Show what was committed locally
3. Suggest a retry or manual sync
4. Changes are safe (committed locally)

### Authentication Issues
If Gitea authentication fails:
1. Report the auth error
2. Check SSH keys or credentials
3. Provide troubleshooting steps
4. A manual push may be needed

### Merge Conflicts
If an automatic merge fails:
1. Report which files have conflicts
2. Show the conflict markers
3. Ask for user guidance
4. Offer to abort the merge if needed

---

## Security Notes

**credentials.md Syncing:**
- Private repository on Gitea (https://git.azcomputerguru.com)
- Only accessible to the authorized user
- Encrypted in transit (HTTPS/SSH)
- Safe to sync sensitive credentials
- Enables cross-machine access

**What's NOT synced:**
- `.env` files (gitignored)
- API virtual environment (`api/venv/`)
- Database files (local development)
- Temporary files (`*.tmp`, `*.log`)
- `node_modules/` directories
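A minimal sketch of the matching `.gitignore` (the `*.db` pattern for local database files is an assumption):

```bash
# Hypothetical .gitignore matching the exclusions listed above.
cat > ~/ClaudeTools/.gitignore <<'EOF'
.env
api/venv/
*.db
*.tmp
*.log
node_modules/
EOF
```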
---

## Integration with Other Commands

### After /checkpoint
Run `/sync` after `/checkpoint` to push the checkpoint to Gitea:
```
User: /checkpoint
Claude: [Creates git commit]

User: /sync
Claude: [Pushes checkpoint to Gitea]
```

### Before /save
Sync first to see the latest context:
```
User: /sync
Claude: [Shows latest session logs]

User: /save
Claude: [Creates session log with full context]
```

### With /context
Syncing ensures `/context` has the complete history:
```
User: /sync
Claude: [Syncs all session logs]

User: /context Dataforth
Claude: [Searches complete session log history, including other machines]
```

---

## Frequency Recommendations

**Daily:** Start of the work day
- Pull overnight changes
- See what was done on other machines
- Recover the latest context

**After Major Work:** End of a coding session
- Push session logs
- Share context across machines
- Back up to Gitea

**Before Switching Machines:**
- Push all local changes
- Ensure the other machine can pull
- Seamless transition

**Weekly:** General maintenance
- Keep repos in sync
- Review session log history
- Clean up if needed

---

## Troubleshooting

### "Already up to date" but files seem out of sync
```bash
# Force a status check
cd ~/ClaudeTools
git fetch origin
git status
```

### "Divergent branches" error
```bash
# Rebase local changes on top of the remote
git pull origin main --rebase
```

### Lost uncommitted changes
```bash
# Check the stash
git stash list

# Recover if needed
git stash pop
```

---

**Created:** 2026-01-21
**Purpose:** Bidirectional sync for seamless multi-machine ClaudeTools workflow
**Repository:** https://git.azcomputerguru.com/azcomputerguru/claudetools.git
**Status:** Active - comprehensive sync with context preservation
QUICKSTART-retrieved.md (new file, +41 lines)
@@ -0,0 +1,41 @@

# Test Data Database - Quick Start

## Start Server
```bash
cd C:\Shares\TestDataDB
node server.js
```
Then open: http://localhost:3000

## Re-run Import (if needed)
```bash
cd C:\Shares\TestDataDB
rm database/testdata.db
node database/import.js
```
Takes ~30 minutes for 1M+ records.

## Database Stats
- **1,030,940 records** imported
- Date range: 1990 to Nov 2025
- Pass: 1,029,046 | Fail: 1,888

## API Endpoints
- `GET /api/search?serial=...&model=...&from=...&to=...&result=...`
- `GET /api/record/:id`
- `GET /api/datasheet/:id`
- `GET /api/stats`
- `GET /api/export?format=csv`
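A quick usage sketch with curl (the serial and model echo the original request below):

```bash
# Look up one serial within a model, then pull overall stats.
curl "http://localhost:3000/api/search?serial=176923-1&model=DSCA38-1793"
curl "http://localhost:3000/api/stats"
```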
## Original Request
Search for serial numbers **176923-1 to 176923-26** for model **DSCA38-1793**
- Result: **NOT FOUND** - these devices haven't been tested yet
- Most recent serials for this model: 173672-x, 173681-x (Feb 2025)

## Files
- Database: `database/testdata.db`
- Server: `server.js`
- Import: `database/import.js`
- Web UI: `public/index.html`
- Full notes: `SESSION_NOTES.md`
SESSION_NOTES-retrieved.md (new file, +139 lines)
@@ -0,0 +1,139 @@

# Test Data Database - Session Notes

## Session Date: 2026-01-13

## Project Overview
Created a SQLite database with an Express.js web interface to consolidate, deduplicate, and search test data from multiple backup dates and test stations.

## Project Location
`C:\Shares\TestDataDB\`

## Original Request
- Search for serial numbers 176923-1 through 176923-26 in model DSCA38-1793
- The serial numbers were NOT found in any existing .DAT files (most recent logged: 173672-x, 173681-x from Feb 2025)
- User requested a database to consolidate all test data for easier searching

## Data Sources
- **HISTLOGS**: `C:\Shares\test\Ate\HISTLOGS\` (consolidated history)
- **Recovery-TEST**: `C:\Shares\Recovery-TEST\` (6 backup dates: 12-13-25 to 12-18-25)
- **Live Data**: `C:\Shares\test\` (~540K files)
- **Test Stations**: TS-1L, TS-3R, TS-4L, TS-4R, TS-8R, TS-10L, TS-11L

## File Types Imported
| Log Type | Description | Extension |
|----------|-------------|-----------|
| DSCLOG | DSC product line | .DAT |
| 5BLOG | 5B product line | .DAT |
| 7BLOG | 7B product line (CSV format) | .DAT |
| 8BLOG | 8B product line | .DAT |
| PWRLOG | Power tests | .DAT |
| SCTLOG | SCT product line | .DAT |
| VASLOG | VAS tests | .DAT |
| SHT | Human-readable test sheets | .SHT |

## Project Structure
```
TestDataDB/
├── package.json         # Node.js dependencies
├── server.js            # Express.js server (port 3000)
├── database/
│   ├── schema.sql       # SQLite schema with FTS
│   ├── testdata.db      # SQLite database file
│   └── import.js        # Data import script
├── parsers/
│   ├── multiline.js     # Parser for multi-line DAT files
│   ├── csvline.js       # Parser for 7BLOG CSV format
│   └── shtfile.js       # Parser for SHT test sheets
├── public/
│   └── index.html       # Web search interface
├── routes/
│   └── api.js           # API endpoints
└── templates/
    └── datasheet.js     # Datasheet generator
```

## API Endpoints
- `GET /api/search?serial=...&model=...&from=...&to=...&result=...&q=...`
- `GET /api/record/:id`
- `GET /api/datasheet/:id` - generate a printable datasheet
- `GET /api/stats`
- `GET /api/export?format=csv`
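A short sketch of the export and datasheet endpoints (record id 1 is a hypothetical example):

```bash
# Export one model's records to CSV, then render a datasheet as plain text.
curl -o dsca38.csv "http://localhost:3000/api/export?format=csv&model=DSCA38"
curl "http://localhost:3000/api/datasheet/1?format=txt"
```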
## How to Use

### Start the server:
```bash
cd C:\Shares\TestDataDB
node server.js
```
Then open http://localhost:3000 in a browser.

### Re-run import (if needed):
```bash
cd C:\Shares\TestDataDB
node database/import.js
```

## Database Schema
- Table: `test_records`
- Columns: id, log_type, model_number, serial_number, test_date, test_station, overall_result, raw_data, source_file, import_date
- Indexes on: serial_number, model_number, test_date, overall_result
- Full-text search (FTS5) for searching raw_data
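A sketch of what `database/schema.sql` likely contains, reconstructed from the columns, indexes, and FTS notes above plus the deduplication key listed at the end of this file; exact names are assumptions, though the FTS table name matches the one used in `routes/api.js`:

```bash
# Hypothetical reconstruction of database/schema.sql.
sqlite3 database/testdata.db <<'SQL'
CREATE TABLE IF NOT EXISTS test_records (
    id             INTEGER PRIMARY KEY,
    log_type       TEXT,
    model_number   TEXT,
    serial_number  TEXT,
    test_date      TEXT,
    test_station   TEXT,
    overall_result TEXT,
    raw_data       TEXT,
    source_file    TEXT,
    import_date    TEXT
);
CREATE INDEX IF NOT EXISTS idx_serial ON test_records(serial_number);
CREATE INDEX IF NOT EXISTS idx_model  ON test_records(model_number);
CREATE INDEX IF NOT EXISTS idx_date   ON test_records(test_date);
CREATE INDEX IF NOT EXISTS idx_result ON test_records(overall_result);
-- Deduplication key noted in this document:
CREATE UNIQUE INDEX IF NOT EXISTS idx_dedup ON test_records
    (log_type, model_number, serial_number, test_date, test_station);
CREATE VIRTUAL TABLE IF NOT EXISTS test_records_fts
    USING fts5(raw_data, content='test_records', content_rowid='id');
SQL
```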
## Features
1. **Search** - by serial number, model number, date range, pass/fail status
2. **Full-text search** - search within raw test data
3. **Export** - CSV export of search results
4. **Datasheet generation** - generate formatted test data sheets from any record
5. **Statistics** - dashboard showing total records, pass/fail counts, date range

## Import Status - COMPLETE
- Started: 2026-01-13T21:32:59.401Z
- Completed: 2026-01-13T22:02:42.187Z
- **Total records: 1,030,940**

### Import Details:
| Source | Records Imported |
|--------|------------------|
| HISTLOGS | 576,416 |
| Recovery-TEST/12-18-25 | 454,383 |
| Recovery-TEST/12-17-25 | 82 |
| Recovery-TEST/12-16 to 12-13 | 0 (duplicates) |
| test | 59 |

### By Log Type:
- 5BLOG: 425,378
- 7BLOG: 262,404
- DSCLOG: 181,160
- 8BLOG: 135,858
- PWRLOG: 12,374
- VASLOG: 10,327
- SCTLOG: 3,439

### By Result:
- PASS: 1,029,046
- FAIL: 1,888
- UNKNOWN: 6

## Current Status
- Server running at: http://localhost:3000
- Database file: `C:\Shares\TestDataDB\database\testdata.db`

## Known Issues
- Model number parsing: the parser was updated, but existing rows need a re-import to pick up the fix
- To re-import: delete testdata.db and run `node database/import.js`

## Search Results for Original Request
- Serial numbers 176923-1 through 176923-26: **NOT FOUND** (not yet tested)
- Most recent serials for DSCA38-1793: 173672-x and 173681-x (February 2025)

## Next Steps
1. Re-run the import if model number search is needed (delete testdata.db first)
2. When serial numbers 176923-1 to 176923-26 are tested, they will appear in the database

## Notes
- TXT datasheets in `10D/datasheets/` are NOT imported (they can be generated from the DB)
- Deduplication key: (log_type, model_number, serial_number, test_date, test_station)
- ~3,600 SHT files to import
- ~41,000+ DAT files across all log types
Sync-FromNAS-retrieved.ps1 (new file, +431 lines)
@@ -0,0 +1,431 @@

# Sync-AD2-NAS.ps1 (formerly Sync-FromNAS.ps1)
# Bidirectional sync between AD2 and NAS (D2TESTNAS)
#
# PULL (NAS → AD2): Test results (LOGS/*.DAT, Reports/*.TXT) → Database import
# PUSH (AD2 → NAS): Software updates (ProdSW/*, TODO.BAT) → DOS machines
#
# Run: powershell -ExecutionPolicy Bypass -File C:\Shares\test\scripts\Sync-FromNAS.ps1
# Scheduled: Every 15 minutes via Windows Task Scheduler

param(
    [switch]$DryRun,             # Show what would be done without doing it
    [switch]$Verbose,            # Extra output
    [int]$MaxAgeMinutes = 1440   # Default: files from last 24 hours (was 60 min, too aggressive)
)

# ============================================================================
# Configuration
# ============================================================================
$NAS_IP = "192.168.0.9"
$NAS_USER = "root"
$NAS_PASSWORD = "Paper123!@#-nas"
$NAS_HOSTKEY = "SHA256:5CVIPlqjLPxO8n48PKLAP99nE6XkEBAjTkaYmJAeOdA"
$NAS_DATA_PATH = "/data/test"

$AD2_TEST_PATH = "C:\Shares\test"
$AD2_HISTLOGS_PATH = "C:\Shares\test\Ate\HISTLOGS"

$SSH = "C:\Program Files\OpenSSH\ssh.exe"   # Changed from PLINK to OpenSSH
$SCP = "C:\Program Files\OpenSSH\scp.exe"   # Changed from PSCP to OpenSSH

$LOG_FILE = "C:\Shares\test\scripts\sync-from-nas.log"
$STATUS_FILE = "C:\Shares\test\_SYNC_STATUS.txt"

$LOG_TYPES = @("5BLOG", "7BLOG", "8BLOG", "DSCLOG", "SCTLOG", "VASLOG", "PWRLOG", "HVLOG")

# Database import configuration
$IMPORT_SCRIPT = "C:\Shares\testdatadb\database\import.js"
$NODE_PATH = "node"

# ============================================================================
# Functions
# ============================================================================

function Write-Log {
    param([string]$Message)
    $timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
    $logLine = "$timestamp : $Message"
    Add-Content -Path $LOG_FILE -Value $logLine
    if ($Verbose) { Write-Host $logLine }
}

function Invoke-NASCommand {
    param([string]$Command)
    # Run a command on the NAS over SSH (key auth, non-interactive).
    # Note: ssh requires the destination host; "${NAS_USER}@${NAS_IP}" was
    # missing from the retrieved copy and is restored here.
    $result = & $SSH -i "C:\Users\sysadmin\.ssh\id_ed25519" -o BatchMode=yes -o ConnectTimeout=10 -o StrictHostKeyChecking=accept-new "${NAS_USER}@${NAS_IP}" $Command 2>&1
    return $result
}

function Copy-FromNAS {
    param(
        [string]$RemotePath,
        [string]$LocalPath
    )

    # Ensure local directory exists
    $localDir = Split-Path -Parent $LocalPath
    if (-not (Test-Path $localDir)) {
        New-Item -ItemType Directory -Path $localDir -Force | Out-Null
    }

    $result = & $SCP -O -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile="C:\Shares\test\scripts\.ssh\known_hosts" "${NAS_USER}@${NAS_IP}:$RemotePath" $LocalPath 2>&1
    if ($LASTEXITCODE -ne 0) {
        $errorMsg = $result | Out-String
        Write-Log "  SCP PULL ERROR (exit $LASTEXITCODE): $errorMsg"
    }
    return $LASTEXITCODE -eq 0
}

function Remove-FromNAS {
    param([string]$RemotePath)
    Invoke-NASCommand "rm -f '$RemotePath'" | Out-Null
}

function Copy-ToNAS {
    param(
        [string]$LocalPath,
        [string]$RemotePath
    )

    # Ensure remote directory exists
    $remoteDir = Split-Path -Parent $RemotePath
    Invoke-NASCommand "mkdir -p '$remoteDir'" | Out-Null

    $result = & $SCP -O -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile="C:\Shares\test\scripts\.ssh\known_hosts" $LocalPath "${NAS_USER}@${NAS_IP}:$RemotePath" 2>&1
    if ($LASTEXITCODE -ne 0) {
        $errorMsg = $result | Out-String
        Write-Log "  SCP PUSH ERROR (exit $LASTEXITCODE): $errorMsg"
    }
    return $LASTEXITCODE -eq 0
}

function Get-FileHash256 {
    param([string]$FilePath)
    if (Test-Path $FilePath) {
        return (Get-FileHash -Path $FilePath -Algorithm SHA256).Hash
    }
    return $null
}

function Import-ToDatabase {
    param([string[]]$FilePaths)

    if ($FilePaths.Count -eq 0) { return }

    Write-Log "Importing $($FilePaths.Count) file(s) to database..."

    # Build argument list ($args is an automatic variable, so use a distinct name)
    $nodeArgs = @("$IMPORT_SCRIPT", "--file") + $FilePaths

    try {
        $output = & $NODE_PATH $nodeArgs 2>&1
        foreach ($line in $output) {
            Write-Log "  [DB] $line"
        }
        Write-Log "Database import complete"
    } catch {
        Write-Log "ERROR: Database import failed: $_"
    }
}

# ============================================================================
# Main Script
# ============================================================================

Write-Log "=========================================="
Write-Log "Starting sync from NAS"
Write-Log "Max age: $MaxAgeMinutes minutes"
if ($DryRun) { Write-Log "DRY RUN - no changes will be made" }

$errorCount = 0
$syncedFiles = 0
$skippedFiles = 0
$syncedDatFiles = @()   # Track DAT files for database import

# Find all DAT files on NAS modified within the time window
Write-Log "Finding DAT files on NAS..."
$findCommand = "find $NAS_DATA_PATH/TS-*/LOGS -name '*.DAT' -type f -mmin -$MaxAgeMinutes 2>/dev/null"
$datFiles = Invoke-NASCommand $findCommand

if (-not $datFiles -or $datFiles.Count -eq 0) {
    Write-Log "No new DAT files found on NAS"
} else {
    Write-Log "Found $($datFiles.Count) DAT file(s) to process"

    foreach ($remoteFile in $datFiles) {
        $remoteFile = $remoteFile.Trim()
        if ([string]::IsNullOrWhiteSpace($remoteFile)) { continue }

        # Parse the path: /data/test/TS-XX/LOGS/7BLOG/file.DAT
        if ($remoteFile -match "/data/test/(TS-[^/]+)/LOGS/([^/]+)/(.+\.DAT)$") {
            $station = $Matches[1]
            $logType = $Matches[2]
            $fileName = $Matches[3]

            Write-Log "Processing: $station/$logType/$fileName"

            # Destination 1: Per-station folder (preserves structure)
            $stationDest = Join-Path $AD2_TEST_PATH "$station\LOGS\$logType\$fileName"

            # Destination 2: Aggregated HISTLOGS folder
            $histlogsDest = Join-Path $AD2_HISTLOGS_PATH "$logType\$fileName"

            if ($DryRun) {
                Write-Log "  [DRY RUN] Would copy to: $stationDest"
                $syncedFiles++
            } else {
                # Copy to station folder only (skip HISTLOGS to avoid duplicates)
                $success1 = Copy-FromNAS -RemotePath $remoteFile -LocalPath $stationDest

                if ($success1) {
                    Write-Log "  Copied to station folder"

                    # Remove from NAS after successful sync
                    Remove-FromNAS -RemotePath $remoteFile
                    Write-Log "  Removed from NAS"

                    # Track for database import
                    $syncedDatFiles += $stationDest

                    $syncedFiles++
                } else {
                    Write-Log "  ERROR: Failed to copy from NAS"
                    $errorCount++
                }
            }
        } else {
            Write-Log "  Skipping (unexpected path format): $remoteFile"
            $skippedFiles++
        }
    }
}

# Find and sync TXT report files
Write-Log "Finding TXT reports on NAS..."
$findReportsCommand = "find $NAS_DATA_PATH/TS-*/Reports -name '*.TXT' -type f -mmin -$MaxAgeMinutes 2>/dev/null"
$txtFiles = Invoke-NASCommand $findReportsCommand

if ($txtFiles -and $txtFiles.Count -gt 0) {
    Write-Log "Found $($txtFiles.Count) TXT report(s) to process"

    foreach ($remoteFile in $txtFiles) {
        $remoteFile = $remoteFile.Trim()
        if ([string]::IsNullOrWhiteSpace($remoteFile)) { continue }

        if ($remoteFile -match "/data/test/(TS-[^/]+)/Reports/(.+\.TXT)$") {
            $station = $Matches[1]
            $fileName = $Matches[2]

            Write-Log "Processing report: $station/$fileName"

            # Destination: Per-station Reports folder
            $reportDest = Join-Path $AD2_TEST_PATH "$station\Reports\$fileName"

            if ($DryRun) {
                Write-Log "  [DRY RUN] Would copy to: $reportDest"
                $syncedFiles++
            } else {
                $success = Copy-FromNAS -RemotePath $remoteFile -LocalPath $reportDest

                if ($success) {
                    Write-Log "  Copied report"
                    Remove-FromNAS -RemotePath $remoteFile
                    Write-Log "  Removed from NAS"
                    $syncedFiles++
                } else {
                    Write-Log "  ERROR: Failed to copy report"
                    $errorCount++
                }
            }
        }
    }
}

# ============================================================================
# Import synced DAT files to database
# ============================================================================
if (-not $DryRun -and $syncedDatFiles.Count -gt 0) {
    Import-ToDatabase -FilePaths $syncedDatFiles
}

# ============================================================================
# PUSH: AD2 → NAS (Software Updates for DOS Machines)
# ============================================================================
Write-Log "--- AD2 to NAS Sync (Software Updates) ---"

$pushedFiles = 0

# Sync COMMON/ProdSW (batch files for all stations)
# AD2 uses _COMMON, NAS uses COMMON - handle both
$commonSources = @(
    @{ Local = "$AD2_TEST_PATH\_COMMON\ProdSW"; Remote = "$NAS_DATA_PATH/COMMON/ProdSW" },
    @{ Local = "$AD2_TEST_PATH\COMMON\ProdSW"; Remote = "$NAS_DATA_PATH/COMMON/ProdSW" }
)

foreach ($source in $commonSources) {
    if (Test-Path $source.Local) {
        Write-Log "Syncing COMMON ProdSW from: $($source.Local)"
        $commonFiles = Get-ChildItem -Path $source.Local -File -ErrorAction SilentlyContinue
        foreach ($file in $commonFiles) {
            $remotePath = "$($source.Remote)/$($file.Name)"

            if ($DryRun) {
                Write-Log "  [DRY RUN] Would push: $($file.Name) -> $remotePath"
                $pushedFiles++
            } else {
                $success = Copy-ToNAS -LocalPath $file.FullName -RemotePath $remotePath
                if ($success) {
                    Write-Log "  Pushed: $($file.Name)"
                    $pushedFiles++
                } else {
                    Write-Log "  ERROR: Failed to push $($file.Name)"
                    $errorCount++
                }
            }
        }
    }
}

# Sync UPDATE.BAT (root level utility)
Write-Log "Syncing UPDATE.BAT..."
$updateBatLocal = "$AD2_TEST_PATH\UPDATE.BAT"
if (Test-Path $updateBatLocal) {
    $updateBatRemote = "$NAS_DATA_PATH/UPDATE.BAT"

    if ($DryRun) {
        Write-Log "  [DRY RUN] Would push: UPDATE.BAT -> $updateBatRemote"
        $pushedFiles++
    } else {
        $success = Copy-ToNAS -LocalPath $updateBatLocal -RemotePath $updateBatRemote
        if ($success) {
            Write-Log "  Pushed: UPDATE.BAT"
            $pushedFiles++
        } else {
            Write-Log "  ERROR: Failed to push UPDATE.BAT"
            $errorCount++
        }
    }
} else {
    Write-Log "  WARNING: UPDATE.BAT not found at $updateBatLocal"
}

# Sync DEPLOY.BAT (root level utility)
Write-Log "Syncing DEPLOY.BAT..."
$deployBatLocal = "$AD2_TEST_PATH\DEPLOY.BAT"
if (Test-Path $deployBatLocal) {
    $deployBatRemote = "$NAS_DATA_PATH/DEPLOY.BAT"

    if ($DryRun) {
        Write-Log "  [DRY RUN] Would push: DEPLOY.BAT -> $deployBatRemote"
        $pushedFiles++
    } else {
        $success = Copy-ToNAS -LocalPath $deployBatLocal -RemotePath $deployBatRemote
        if ($success) {
            Write-Log "  Pushed: DEPLOY.BAT"
            $pushedFiles++
        } else {
            Write-Log "  ERROR: Failed to push DEPLOY.BAT"
            $errorCount++
        }
    }
} else {
    Write-Log "  WARNING: DEPLOY.BAT not found at $deployBatLocal"
}

# Sync per-station ProdSW folders
Write-Log "Syncing station-specific ProdSW folders..."
$stationFolders = Get-ChildItem -Path $AD2_TEST_PATH -Directory -Filter "TS-*" -ErrorAction SilentlyContinue

foreach ($station in $stationFolders) {
    $prodSwPath = Join-Path $station.FullName "ProdSW"

    if (Test-Path $prodSwPath) {
        # Get all files in ProdSW (including subdirectories)
        $prodSwFiles = Get-ChildItem -Path $prodSwPath -File -Recurse -ErrorAction SilentlyContinue

        foreach ($file in $prodSwFiles) {
            # Calculate relative path from ProdSW folder
            $relativePath = $file.FullName.Substring($prodSwPath.Length + 1).Replace('\', '/')
            $remotePath = "$NAS_DATA_PATH/$($station.Name)/ProdSW/$relativePath"

            if ($DryRun) {
                Write-Log "  [DRY RUN] Would push: $($station.Name)/ProdSW/$relativePath"
                $pushedFiles++
            } else {
                $success = Copy-ToNAS -LocalPath $file.FullName -RemotePath $remotePath
                if ($success) {
                    Write-Log "  Pushed: $($station.Name)/ProdSW/$relativePath"
                    $pushedFiles++
                } else {
                    Write-Log "  ERROR: Failed to push $($station.Name)/ProdSW/$relativePath"
                    $errorCount++
                }
            }
        }
    }

    # Check for TODO.BAT (one-time task file)
    $todoBatPath = Join-Path $station.FullName "TODO.BAT"
    if (Test-Path $todoBatPath) {
        $remoteTodoPath = "$NAS_DATA_PATH/$($station.Name)/TODO.BAT"

        Write-Log "Found TODO.BAT for $($station.Name)"

        if ($DryRun) {
            Write-Log "  [DRY RUN] Would push TODO.BAT -> $remoteTodoPath"
            $pushedFiles++
        } else {
            $success = Copy-ToNAS -LocalPath $todoBatPath -RemotePath $remoteTodoPath
            if ($success) {
                Write-Log "  Pushed TODO.BAT to NAS"
                # Remove from AD2 after successful push (one-shot mechanism)
                Remove-Item -Path $todoBatPath -Force
                Write-Log "  Removed TODO.BAT from AD2 (pushed to NAS)"
                $pushedFiles++
            } else {
                Write-Log "  ERROR: Failed to push TODO.BAT"
                $errorCount++
            }
        }
    }
}

Write-Log "AD2 to NAS sync: $pushedFiles file(s) pushed"

# ============================================================================
# Update Status File
# ============================================================================
$status = if ($errorCount -eq 0) { "OK" } else { "ERRORS" }
$statusContent = @"
AD2 <-> NAS Bidirectional Sync Status
======================================
Timestamp: $(Get-Date -Format "yyyy-MM-dd HH:mm:ss")
Status: $status

PULL (NAS -> AD2 - Test Results):
  Files Pulled: $syncedFiles
  Files Skipped: $skippedFiles
  DAT Files Imported to DB: $($syncedDatFiles.Count)

PUSH (AD2 -> NAS - Software Updates):
  Files Pushed: $pushedFiles

Errors: $errorCount
"@

Set-Content -Path $STATUS_FILE -Value $statusContent

Write-Log "=========================================="
Write-Log "Sync complete: PULL=$syncedFiles, PUSH=$pushedFiles, Errors=$errorCount"
Write-Log "=========================================="

# Exit with error code if there were failures
if ($errorCount -gt 0) {
    exit 1
} else {
    exit 0
}
access-ad2-via-smb.ps1 (new file, +34 lines)
@@ -0,0 +1,34 @@

$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Mounting AD2 C$ share..."
try {
    New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null
    Write-Host "[OK] Mounted as AD2: drive"

    Write-Host "`n[OK] Listing root directories..."
    Get-ChildItem AD2:\ -Directory | Where-Object Name -match "database|testdata|test.*db" | Format-Table Name, FullName

    Write-Host "`n[OK] Reading Sync-FromNAS.ps1..."
    if (Test-Path "AD2:\Shares\test\scripts\Sync-FromNAS.ps1") {
        $scriptContent = Get-Content "AD2:\Shares\test\scripts\Sync-FromNAS.ps1" -Raw
        $scriptContent | Out-File -FilePath "D:\ClaudeTools\Sync-FromNAS-retrieved.ps1" -Encoding UTF8
        Write-Host "[OK] Script retrieved and saved"

        Write-Host "`n[INFO] Searching for database references in script..."
        $scriptContent | Select-String -Pattern "(database|sql|sqlite|mysql|postgres|\.db|\.mdb|\.accdb)" -AllMatches | Select-Object -First 20
    } else {
        Write-Host "[ERROR] Sync-FromNAS.ps1 not found"
    }

    Write-Host "`n[OK] Checking for database files in Shares\test..."
    Get-ChildItem "AD2:\Shares\test" -Recurse -Include "*.db","*.mdb","*.accdb","*.sqlite" -ErrorAction SilentlyContinue | Select-Object -First 10 | Format-Table Name, FullName

} catch {
    Write-Host "[ERROR] Failed to mount share: $_"
} finally {
    if (Test-Path AD2:) {
        Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue
        Write-Host "`n[OK] Unmounted AD2 drive"
    }
}
api-js-fixed.js (new file, +345 lines)
@@ -0,0 +1,345 @@

/**
 * API Routes for Test Data Database
 * FIXED VERSION - Compatible with readonly mode
 */

const express = require('express');
const path = require('path');
const Database = require('better-sqlite3');
const { generateDatasheet } = require('../templates/datasheet');

const router = express.Router();

// Database connection
const DB_PATH = path.join(__dirname, '..', 'database', 'testdata.db');

// FIXED: Readonly-compatible optimizations
function getDb() {
  const db = new Database(DB_PATH, { readonly: true, timeout: 10000 });

  // Performance optimizations compatible with readonly mode
  db.pragma('cache_size = -64000');   // 64MB cache (negative = KB)
  db.pragma('mmap_size = 268435456'); // 256MB memory-mapped I/O
  db.pragma('temp_store = MEMORY');   // Temporary tables in memory
  db.pragma('query_only = ON');       // Enforce read-only mode

  return db;
}

/**
 * GET /api/search
 * Search test records
 * Query params: serial, model, from, to, result, q, station, logtype, limit, offset
 */
router.get('/search', (req, res) => {
  try {
    const db = getDb();
    const { serial, model, from, to, result, q, station, logtype, limit = 100, offset = 0 } = req.query;

    let sql = 'SELECT * FROM test_records WHERE 1=1';
    const params = [];

    if (serial) {
      sql += ' AND serial_number LIKE ?';
      params.push(serial.includes('%') ? serial : `%${serial}%`);
    }

    if (model) {
      sql += ' AND model_number LIKE ?';
      params.push(model.includes('%') ? model : `%${model}%`);
    }

    if (from) {
      sql += ' AND test_date >= ?';
      params.push(from);
    }

    if (to) {
      sql += ' AND test_date <= ?';
      params.push(to);
    }

    if (result) {
      sql += ' AND overall_result = ?';
      params.push(result.toUpperCase());
    }

    if (station) {
      sql += ' AND test_station = ?';
      params.push(station);
    }

    if (logtype) {
      sql += ' AND log_type = ?';
      params.push(logtype);
    }

    if (q) {
      // Full-text search - rebuild query with FTS
      sql = `SELECT test_records.* FROM test_records
             JOIN test_records_fts ON test_records.id = test_records_fts.rowid
             WHERE test_records_fts MATCH ?`;
      params.length = 0;
      params.push(q);

      if (serial) {
        sql += ' AND serial_number LIKE ?';
        params.push(serial.includes('%') ? serial : `%${serial}%`);
      }
      if (model) {
        sql += ' AND model_number LIKE ?';
        params.push(model.includes('%') ? model : `%${model}%`);
      }
      if (station) {
        sql += ' AND test_station = ?';
        params.push(station);
      }
      if (logtype) {
        sql += ' AND log_type = ?';
        params.push(logtype);
      }
      if (result) {
        sql += ' AND overall_result = ?';
        params.push(result.toUpperCase());
      }
      if (from) {
        sql += ' AND test_date >= ?';
        params.push(from);
      }
      if (to) {
        sql += ' AND test_date <= ?';
        params.push(to);
      }
    }

    sql += ' ORDER BY test_date DESC, serial_number';
    sql += ` LIMIT ? OFFSET ?`;
    params.push(parseInt(limit), parseInt(offset));

    const records = db.prepare(sql).all(...params);

    // Get total count
    let countSql = sql.replace(/SELECT .* FROM/, 'SELECT COUNT(*) as count FROM')
                      .replace(/ORDER BY.*$/, '');
    countSql = countSql.replace(/LIMIT \? OFFSET \?/, '');

    const countParams = params.slice(0, -2);
    const total = db.prepare(countSql).get(...countParams);

    db.close();

    res.json({
      records,
      total: total?.count || records.length,
      limit: parseInt(limit),
      offset: parseInt(offset)
    });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/record/:id
 * Get single record by ID
 */
router.get('/record/:id', (req, res) => {
  try {
    const db = getDb();
    const record = db.prepare('SELECT * FROM test_records WHERE id = ?').get(req.params.id);
    db.close();

    if (!record) {
      return res.status(404).json({ error: 'Record not found' });
    }

    res.json(record);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/datasheet/:id
 * Generate datasheet for a record
 * Query params: format (html, txt)
 */
router.get('/datasheet/:id', (req, res) => {
  try {
    const db = getDb();
    const record = db.prepare('SELECT * FROM test_records WHERE id = ?').get(req.params.id);
    db.close();

    if (!record) {
      return res.status(404).json({ error: 'Record not found' });
    }

    const format = req.query.format || 'html';
    const datasheet = generateDatasheet(record, format);

    if (format === 'html') {
      res.type('html').send(datasheet);
    } else {
      res.type('text/plain').send(datasheet);
    }
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/stats
 * Get database statistics
 */
router.get('/stats', (req, res) => {
  try {
    const db = getDb();

    const stats = {
      total_records: db.prepare('SELECT COUNT(*) as count FROM test_records').get().count,
      by_log_type: db.prepare(`
        SELECT log_type, COUNT(*) as count
        FROM test_records
        GROUP BY log_type
        ORDER BY count DESC
      `).all(),
      by_result: db.prepare(`
        SELECT overall_result, COUNT(*) as count
        FROM test_records
        GROUP BY overall_result
      `).all(),
      by_station: db.prepare(`
        SELECT test_station, COUNT(*) as count
        FROM test_records
        WHERE test_station IS NOT NULL AND test_station != ''
        GROUP BY test_station
        ORDER BY test_station
      `).all(),
      date_range: db.prepare(`
        SELECT MIN(test_date) as oldest, MAX(test_date) as newest
        FROM test_records
      `).get(),
      recent_serials: db.prepare(`
        SELECT DISTINCT serial_number, model_number, test_date
        FROM test_records
        ORDER BY test_date DESC
        LIMIT 10
      `).all()
    };

    db.close();
    res.json(stats);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/filters
 * Get available filter options (test stations, log types, models)
 */
router.get('/filters', (req, res) => {
  try {
    const db = getDb();

    const filters = {
      stations: db.prepare(`
        SELECT DISTINCT test_station
        FROM test_records
        WHERE test_station IS NOT NULL AND test_station != ''
        ORDER BY test_station
      `).all().map(r => r.test_station),
      log_types: db.prepare(`
        SELECT DISTINCT log_type
        FROM test_records
        ORDER BY log_type
      `).all().map(r => r.log_type),
      models: db.prepare(`
        SELECT DISTINCT model_number, COUNT(*) as count
        FROM test_records
        GROUP BY model_number
        ORDER BY count DESC
        LIMIT 500
      `).all()
    };

    db.close();
    res.json(filters);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/export
 * Export search results as CSV
 */
router.get('/export', (req, res) => {
  try {
    const db = getDb();
    const { serial, model, from, to, result, station, logtype } = req.query;

    let sql = 'SELECT * FROM test_records WHERE 1=1';
    const params = [];

    if (serial) {
      sql += ' AND serial_number LIKE ?';
      params.push(serial.includes('%') ? serial : `%${serial}%`);
    }

    if (model) {
      sql += ' AND model_number LIKE ?';
      params.push(model.includes('%') ? model : `%${model}%`);
    }

    if (from) {
      sql += ' AND test_date >= ?';
      params.push(from);
    }

    if (to) {
      sql += ' AND test_date <= ?';
      params.push(to);
    }

    if (result) {
      sql += ' AND overall_result = ?';
      params.push(result.toUpperCase());
    }

    if (station) {
      sql += ' AND test_station = ?';
      params.push(station);
    }

    if (logtype) {
      sql += ' AND log_type = ?';
      params.push(logtype);
    }

    sql += ' ORDER BY test_date DESC, serial_number LIMIT 10000';

    const records = db.prepare(sql).all(...params);
    db.close();

    // Generate CSV
    const headers = ['id', 'log_type', 'model_number', 'serial_number', 'test_date', 'test_station', 'overall_result', 'source_file'];
    let csv = headers.join(',') + '\n';

    for (const record of records) {
      const row = headers.map(h => {
        const val = record[h] || '';
        return `"${String(val).replace(/"/g, '""')}"`;
      });
      csv += row.join(',') + '\n';
    }

    res.setHeader('Content-Type', 'text/csv');
    res.setHeader('Content-Disposition', 'attachment; filename=test_records.csv');
    res.send(csv);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

module.exports = router;
347
api-js-optimized.js
Normal file
347
api-js-optimized.js
Normal file
@@ -0,0 +1,347 @@
|
||||
/**
|
||||
* API Routes for Test Data Database
|
||||
* OPTIMIZED VERSION with performance improvements
|
||||
*/
|
||||
|
||||
const express = require('express');
|
||||
const path = require('path');
|
||||
const Database = require('better-sqlite3');
|
||||
const { generateDatasheet } = require('../templates/datasheet');
|
||||
|
||||
const router = express.Router();
|
||||
|
||||
// Database connection
|
||||
const DB_PATH = path.join(__dirname, '..', 'database', 'testdata.db');
|
||||
|
||||
// OPTIMIZED: Add performance PRAGMA settings
|
||||
function getDb() {
|
||||
const db = new Database(DB_PATH, { readonly: true, timeout: 10000 });
|
||||
|
||||
// Performance optimizations for large databases
|
||||
db.pragma('journal_mode = WAL'); // Write-Ahead Logging for better concurrency
|
||||
db.pragma('synchronous = NORMAL'); // Faster writes, still safe
|
||||
db.pragma('cache_size = -64000'); // 64MB cache (negative = KB)
|
||||
db.pragma('mmap_size = 268435456'); // 256MB memory-mapped I/O
|
||||
db.pragma('temp_store = MEMORY'); // Temporary tables in memory
|
||||
db.pragma('query_only = ON'); // Enforce read-only mode
|
||||
|
||||
return db;
|
||||
}
|
||||
|
||||
/**
|
||||
* GET /api/search
|
||||
* Search test records
|
||||
* Query params: serial, model, from, to, result, q, station, logtype, limit, offset
|
||||
*/
|
||||
router.get('/search', (req, res) => {
|
||||
try {
|
||||
const db = getDb();
|
||||
const { serial, model, from, to, result, q, station, logtype, limit = 100, offset = 0 } = req.query;
|
||||
|
||||
let sql = 'SELECT * FROM test_records WHERE 1=1';
|
||||
const params = [];
|
||||
|
||||
if (serial) {
|
||||
sql += ' AND serial_number LIKE ?';
|
||||
params.push(serial.includes('%') ? serial : `%${serial}%`);
|
||||
}
|
||||
|
||||
if (model) {
|
||||
sql += ' AND model_number LIKE ?';
|
||||
params.push(model.includes('%') ? model : `%${model}%`);
|
||||
}
|
||||
|
||||
if (from) {
|
||||
sql += ' AND test_date >= ?';
|
||||
params.push(from);
|
||||
}
|
||||
|
||||
if (to) {
|
||||
sql += ' AND test_date <= ?';
|
||||
params.push(to);
|
||||
}
|
||||
|
||||
if (result) {
|
||||
sql += ' AND overall_result = ?';
|
||||
params.push(result.toUpperCase());
|
||||
}
|
||||
|
||||
if (station) {
|
||||
sql += ' AND test_station = ?';
|
||||
params.push(station);
|
||||
}
|
||||
|
||||
if (logtype) {
|
||||
sql += ' AND log_type = ?';
|
||||
params.push(logtype);
|
||||
}
|
||||
|
||||
if (q) {
|
||||
// Full-text search - rebuild query with FTS
|
||||
sql = `SELECT test_records.* FROM test_records
|
||||
JOIN test_records_fts ON test_records.id = test_records_fts.rowid
|
||||
WHERE test_records_fts MATCH ?`;
|
||||
params.length = 0;
|
||||
params.push(q);
|
||||
|
||||
if (serial) {
|
||||
sql += ' AND serial_number LIKE ?';
|
||||
params.push(serial.includes('%') ? serial : `%${serial}%`);
|
||||
}
|
||||
if (model) {
|
||||
sql += ' AND model_number LIKE ?';
|
||||
params.push(model.includes('%') ? model : `%${model}%`);
|
||||
}
|
||||
if (station) {
|
||||
sql += ' AND test_station = ?';
|
||||
params.push(station);
|
||||
}
|
||||
if (logtype) {
|
||||
sql += ' AND log_type = ?';
|
||||
params.push(logtype);
|
||||
}
|
||||
if (result) {
|
||||
sql += ' AND overall_result = ?';
|
||||
params.push(result.toUpperCase());
|
||||
}
|
||||
if (from) {
|
||||
sql += ' AND test_date >= ?';
|
||||
params.push(from);
|
||||
}
|
||||
if (to) {
|
||||
sql += ' AND test_date <= ?';
|
||||
params.push(to);
|
||||
}
|
||||
}
|
||||
|
||||
sql += ' ORDER BY test_date DESC, serial_number';
|
||||
sql += ` LIMIT ? OFFSET ?`;
|
||||
params.push(parseInt(limit), parseInt(offset));
|
||||
|
||||
const records = db.prepare(sql).all(...params);
|
||||
|
||||
// Get total count
|
||||
let countSql = sql.replace(/SELECT .* FROM/, 'SELECT COUNT(*) as count FROM')
|
||||
.replace(/ORDER BY.*$/, '');
|
||||
countSql = countSql.replace(/LIMIT \? OFFSET \?/, '');
|
||||
|
||||
const countParams = params.slice(0, -2);
|
||||
const total = db.prepare(countSql).get(...countParams);
|
||||
|
||||
db.close();
|
||||
|
||||
res.json({
|
||||
records,
|
||||
total: total?.count || records.length,
|
||||
limit: parseInt(limit),
|
||||
offset: parseInt(offset)
|
||||
});
|
||||
} catch (err) {
|
||||
res.status(500).json({ error: err.message });
|
||||
}
|
||||
});
|
||||
|
/**
 * GET /api/record/:id
 * Get single record by ID
 */
router.get('/record/:id', (req, res) => {
  try {
    const db = getDb();
    const record = db.prepare('SELECT * FROM test_records WHERE id = ?').get(req.params.id);
    db.close();

    if (!record) {
      return res.status(404).json({ error: 'Record not found' });
    }

    res.json(record);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/datasheet/:id
 * Generate datasheet for a record
 * Query params: format (html, txt)
 */
router.get('/datasheet/:id', (req, res) => {
  try {
    const db = getDb();
    const record = db.prepare('SELECT * FROM test_records WHERE id = ?').get(req.params.id);
    db.close();

    if (!record) {
      return res.status(404).json({ error: 'Record not found' });
    }

    const format = req.query.format || 'html';
    const datasheet = generateDatasheet(record, format);

    if (format === 'html') {
      res.type('html').send(datasheet);
    } else {
      res.type('text/plain').send(datasheet);
    }
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/stats
 * Get database statistics
 */
router.get('/stats', (req, res) => {
  try {
    const db = getDb();

    const stats = {
      total_records: db.prepare('SELECT COUNT(*) as count FROM test_records').get().count,
      by_log_type: db.prepare(`
        SELECT log_type, COUNT(*) as count
        FROM test_records
        GROUP BY log_type
        ORDER BY count DESC
      `).all(),
      by_result: db.prepare(`
        SELECT overall_result, COUNT(*) as count
        FROM test_records
        GROUP BY overall_result
      `).all(),
      by_station: db.prepare(`
        SELECT test_station, COUNT(*) as count
        FROM test_records
        WHERE test_station IS NOT NULL AND test_station != ''
        GROUP BY test_station
        ORDER BY test_station
      `).all(),
      date_range: db.prepare(`
        SELECT MIN(test_date) as oldest, MAX(test_date) as newest
        FROM test_records
      `).get(),
      recent_serials: db.prepare(`
        SELECT DISTINCT serial_number, model_number, test_date
        FROM test_records
        ORDER BY test_date DESC
        LIMIT 10
      `).all()
    };

    db.close();
    res.json(stats);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/filters
 * Get available filter options (test stations, log types, models)
 */
router.get('/filters', (req, res) => {
  try {
    const db = getDb();

    const filters = {
      stations: db.prepare(`
        SELECT DISTINCT test_station
        FROM test_records
        WHERE test_station IS NOT NULL AND test_station != ''
        ORDER BY test_station
      `).all().map(r => r.test_station),
      log_types: db.prepare(`
        SELECT DISTINCT log_type
        FROM test_records
        ORDER BY log_type
      `).all().map(r => r.log_type),
      models: db.prepare(`
        SELECT DISTINCT model_number, COUNT(*) as count
        FROM test_records
        GROUP BY model_number
        ORDER BY count DESC
        LIMIT 500
      `).all()
    };

    db.close();
    res.json(filters);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/export
 * Export search results as CSV
 */
router.get('/export', (req, res) => {
  try {
    const db = getDb();
    const { serial, model, from, to, result, station, logtype } = req.query;

    let sql = 'SELECT * FROM test_records WHERE 1=1';
    const params = [];

    if (serial) {
      sql += ' AND serial_number LIKE ?';
      params.push(serial.includes('%') ? serial : `%${serial}%`);
    }

    if (model) {
      sql += ' AND model_number LIKE ?';
      params.push(model.includes('%') ? model : `%${model}%`);
    }

    if (from) {
      sql += ' AND test_date >= ?';
      params.push(from);
    }

    if (to) {
      sql += ' AND test_date <= ?';
      params.push(to);
    }

    if (result) {
      sql += ' AND overall_result = ?';
      params.push(result.toUpperCase());
    }

    if (station) {
      sql += ' AND test_station = ?';
      params.push(station);
    }

    if (logtype) {
      sql += ' AND log_type = ?';
      params.push(logtype);
    }

    sql += ' ORDER BY test_date DESC, serial_number LIMIT 10000';

    const records = db.prepare(sql).all(...params);
    db.close();

    // Generate CSV
    const headers = ['id', 'log_type', 'model_number', 'serial_number', 'test_date', 'test_station', 'overall_result', 'source_file'];
    let csv = headers.join(',') + '\n';

    for (const record of records) {
      const row = headers.map(h => {
        const val = record[h] || '';
        return `"${String(val).replace(/"/g, '""')}"`;
      });
      csv += row.join(',') + '\n';
    }
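    // Quoting note: every field above is wrapped in double quotes with
    // embedded quotes doubled (RFC 4180-style CSV escaping), so values
    // containing commas or quotes survive the round-trip into Excel.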

    res.setHeader('Content-Type', 'text/csv');
    res.setHeader('Content-Disposition', 'attachment; filename=test_records.csv');
    res.send(csv);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

module.exports = router;

api-js-retrieved.js (new file, 336 lines)
@@ -0,0 +1,336 @@
/**
 * API Routes for Test Data Database
 */

const express = require('express');
const path = require('path');
const Database = require('better-sqlite3');
const { generateDatasheet } = require('../templates/datasheet');

const router = express.Router();

// Database connection
const DB_PATH = path.join(__dirname, '..', 'database', 'testdata.db');

function getDb() {
  return new Database(DB_PATH, { readonly: true });
}
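
// Design note: a fresh readonly connection is opened per request and closed
// before responding. better-sqlite3 opens are synchronous and cheap for a
// local file, so this keeps the handlers simple at the cost of a little
// per-request overhead; a shared long-lived connection would also work.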

/**
 * GET /api/search
 * Search test records
 * Query params: serial, model, from, to, result, q, station, logtype, limit, offset
 */
router.get('/search', (req, res) => {
  try {
    const db = getDb();
    const { serial, model, from, to, result, q, station, logtype, limit = 100, offset = 0 } = req.query;

    let sql = 'SELECT * FROM test_records WHERE 1=1';
    const params = [];

    if (serial) {
      sql += ' AND serial_number LIKE ?';
      params.push(serial.includes('%') ? serial : `%${serial}%`);
    }

    if (model) {
      sql += ' AND model_number LIKE ?';
      params.push(model.includes('%') ? model : `%${model}%`);
    }

    if (from) {
      sql += ' AND test_date >= ?';
      params.push(from);
    }

    if (to) {
      sql += ' AND test_date <= ?';
      params.push(to);
    }

    if (result) {
      sql += ' AND overall_result = ?';
      params.push(result.toUpperCase());
    }

    if (station) {
      sql += ' AND test_station = ?';
      params.push(station);
    }

    if (logtype) {
      sql += ' AND log_type = ?';
      params.push(logtype);
    }

    if (q) {
      // Full-text search - rebuild query with FTS
      sql = `SELECT test_records.* FROM test_records
        JOIN test_records_fts ON test_records.id = test_records_fts.rowid
        WHERE test_records_fts MATCH ?`;
      params.length = 0;
      params.push(q);

      if (serial) {
        sql += ' AND serial_number LIKE ?';
        params.push(serial.includes('%') ? serial : `%${serial}%`);
      }
      if (model) {
        sql += ' AND model_number LIKE ?';
        params.push(model.includes('%') ? model : `%${model}%`);
      }
      if (station) {
        sql += ' AND test_station = ?';
        params.push(station);
      }
      if (logtype) {
        sql += ' AND log_type = ?';
        params.push(logtype);
      }
      if (result) {
        sql += ' AND overall_result = ?';
        params.push(result.toUpperCase());
      }
      if (from) {
        sql += ' AND test_date >= ?';
        params.push(from);
      }
      if (to) {
        sql += ' AND test_date <= ?';
        params.push(to);
      }
    }
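    // Illustrative example: a request like
    //   GET /api/search?q=FAIL&station=TS-8R&limit=25
    // takes the FTS branch above, rebuilding the SQL around the FTS index
    // and then re-applying the same structured filters as the non-FTS path.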

    sql += ' ORDER BY test_date DESC, serial_number';
    sql += ` LIMIT ? OFFSET ?`;
    params.push(parseInt(limit), parseInt(offset));

    const records = db.prepare(sql).all(...params);

    // Get total count
    let countSql = sql.replace(/SELECT .* FROM/, 'SELECT COUNT(*) as count FROM')
                      .replace(/ORDER BY.*$/, '');
    countSql = countSql.replace(/LIMIT \? OFFSET \?/, '');

    const countParams = params.slice(0, -2);
    const total = db.prepare(countSql).get(...countParams);
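    // How the count works: the search SQL is rewritten by regex - the SELECT
    // list becomes COUNT(*), and the ORDER BY / LIMIT ? OFFSET ? tail is
    // stripped (slice(0, -2) drops the two paging parameters to match).
    // This relies on the SELECT clause and the ORDER BY tail each sitting on
    // a single line, which holds for both query shapes built above.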

    db.close();

    res.json({
      records,
      total: total?.count || records.length,
      limit: parseInt(limit),
      offset: parseInt(offset)
    });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/record/:id
 * Get single record by ID
 */
router.get('/record/:id', (req, res) => {
  try {
    const db = getDb();
    const record = db.prepare('SELECT * FROM test_records WHERE id = ?').get(req.params.id);
    db.close();

    if (!record) {
      return res.status(404).json({ error: 'Record not found' });
    }

    res.json(record);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/datasheet/:id
 * Generate datasheet for a record
 * Query params: format (html, txt)
 */
router.get('/datasheet/:id', (req, res) => {
  try {
    const db = getDb();
    const record = db.prepare('SELECT * FROM test_records WHERE id = ?').get(req.params.id);
    db.close();

    if (!record) {
      return res.status(404).json({ error: 'Record not found' });
    }

    const format = req.query.format || 'html';
    const datasheet = generateDatasheet(record, format);

    if (format === 'html') {
      res.type('html').send(datasheet);
    } else {
      res.type('text/plain').send(datasheet);
    }
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/stats
 * Get database statistics
 */
router.get('/stats', (req, res) => {
  try {
    const db = getDb();

    const stats = {
      total_records: db.prepare('SELECT COUNT(*) as count FROM test_records').get().count,
      by_log_type: db.prepare(`
        SELECT log_type, COUNT(*) as count
        FROM test_records
        GROUP BY log_type
        ORDER BY count DESC
      `).all(),
      by_result: db.prepare(`
        SELECT overall_result, COUNT(*) as count
        FROM test_records
        GROUP BY overall_result
      `).all(),
      by_station: db.prepare(`
        SELECT test_station, COUNT(*) as count
        FROM test_records
        WHERE test_station IS NOT NULL AND test_station != ''
        GROUP BY test_station
        ORDER BY test_station
      `).all(),
      date_range: db.prepare(`
        SELECT MIN(test_date) as oldest, MAX(test_date) as newest
        FROM test_records
      `).get(),
      recent_serials: db.prepare(`
        SELECT DISTINCT serial_number, model_number, test_date
        FROM test_records
        ORDER BY test_date DESC
        LIMIT 10
      `).all()
    };

    db.close();
    res.json(stats);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/filters
 * Get available filter options (test stations, log types, models)
 */
router.get('/filters', (req, res) => {
  try {
    const db = getDb();

    const filters = {
      stations: db.prepare(`
        SELECT DISTINCT test_station
        FROM test_records
        WHERE test_station IS NOT NULL AND test_station != ''
        ORDER BY test_station
      `).all().map(r => r.test_station),
      log_types: db.prepare(`
        SELECT DISTINCT log_type
        FROM test_records
        ORDER BY log_type
      `).all().map(r => r.log_type),
      models: db.prepare(`
        SELECT DISTINCT model_number, COUNT(*) as count
        FROM test_records
        GROUP BY model_number
        ORDER BY count DESC
        LIMIT 500
      `).all()
    };

    db.close();
    res.json(filters);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

/**
 * GET /api/export
 * Export search results as CSV
 */
router.get('/export', (req, res) => {
  try {
    const db = getDb();
    const { serial, model, from, to, result, station, logtype } = req.query;

    let sql = 'SELECT * FROM test_records WHERE 1=1';
    const params = [];

    if (serial) {
      sql += ' AND serial_number LIKE ?';
      params.push(serial.includes('%') ? serial : `%${serial}%`);
    }

    if (model) {
      sql += ' AND model_number LIKE ?';
      params.push(model.includes('%') ? model : `%${model}%`);
    }

    if (from) {
      sql += ' AND test_date >= ?';
      params.push(from);
    }

    if (to) {
      sql += ' AND test_date <= ?';
      params.push(to);
    }

    if (result) {
      sql += ' AND overall_result = ?';
      params.push(result.toUpperCase());
    }

    if (station) {
      sql += ' AND test_station = ?';
      params.push(station);
    }

    if (logtype) {
      sql += ' AND log_type = ?';
      params.push(logtype);
    }

    sql += ' ORDER BY test_date DESC, serial_number LIMIT 10000';

    const records = db.prepare(sql).all(...params);
    db.close();

    // Generate CSV
    const headers = ['id', 'log_type', 'model_number', 'serial_number', 'test_date', 'test_station', 'overall_result', 'source_file'];
    let csv = headers.join(',') + '\n';

    for (const record of records) {
      const row = headers.map(h => {
        const val = record[h] || '';
        return `"${String(val).replace(/"/g, '""')}"`;
      });
      csv += row.join(',') + '\n';
    }

    res.setHeader('Content-Type', 'text/csv');
    res.setHeader('Content-Disposition', 'attachment; filename=test_records.csv');
    res.send(csv);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

module.exports = router;

check-db-error.ps1 (new file, 30 lines)
@@ -0,0 +1,30 @@
# Check for error logs on AD2
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Mounting AD2 C$ share..." -ForegroundColor Green
New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null

Write-Host "[OK] Checking for WAL files..." -ForegroundColor Green
$dbFolder = "AD2:\Shares\testdatadb\database"
$walFiles = Get-ChildItem $dbFolder -Filter "*.wal" -ErrorAction SilentlyContinue
$shmFiles = Get-ChildItem $dbFolder -Filter "*.shm" -ErrorAction SilentlyContinue

if ($walFiles -or $shmFiles) {
    Write-Host "[FOUND] WAL files exist:" -ForegroundColor Green
    $walFiles | ForEach-Object { Write-Host "  $_" -ForegroundColor Cyan }
    $shmFiles | ForEach-Object { Write-Host "  $_" -ForegroundColor Cyan }
} else {
    Write-Host "[INFO] No WAL files found" -ForegroundColor Yellow
}

Write-Host "`n[OK] Checking deployed api.js..." -ForegroundColor Green
$apiContent = Get-Content "AD2:\Shares\testdatadb\routes\api.js" -Raw
if ($apiContent -match "readonly: true" -and $apiContent -match "journal_mode = WAL") {
    Write-Host "[ERROR] CONFLICT DETECTED!" -ForegroundColor Red
    Write-Host "  Cannot set WAL mode on readonly database!" -ForegroundColor Red
    Write-Host "  This is causing the database query errors" -ForegroundColor Red
}
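
# Why this is a conflict: switching a database into WAL mode has to create the
# -wal/-shm side files next to it, which a connection opened with
# { readonly: true } is not allowed to do, so the pragma fails and queries
# error out.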

Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue
Write-Host "`n[OK] Done" -ForegroundColor Green

check-db-performance.ps1 (new file, 69 lines)
@@ -0,0 +1,69 @@
# Check database performance and optimization status
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Mounting AD2 C$ share..." -ForegroundColor Green
New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null

# Get server.js content to check timeout settings
Write-Host "[OK] Checking server.js configuration..." -ForegroundColor Green
$serverJs = Get-Content "AD2:\Shares\testdatadb\server.js" -Raw

if ($serverJs -match "timeout") {
    Write-Host "[FOUND] Timeout configuration in server.js" -ForegroundColor Yellow
    $serverJs -split "`n" | Where-Object { $_ -match "timeout" } | ForEach-Object {
        Write-Host "  $_" -ForegroundColor Cyan
    }
} else {
    Write-Host "[INFO] No explicit timeout configuration found" -ForegroundColor Cyan
}

# Check if better-sqlite3 is configured for performance
if ($serverJs -match "pragma") {
    Write-Host "[FOUND] SQLite PRAGMA settings:" -ForegroundColor Green
    $serverJs -split "`n" | Where-Object { $_ -match "pragma" } | ForEach-Object {
        Write-Host "  $_" -ForegroundColor Cyan
    }
} else {
    Write-Host "[WARNING] No PRAGMA performance settings found in server.js" -ForegroundColor Yellow
    Write-Host "  Consider adding: PRAGMA journal_mode = WAL, PRAGMA synchronous = NORMAL" -ForegroundColor Yellow
}

# Check routes/api.js for query optimization
Write-Host "`n[OK] Checking API routes..." -ForegroundColor Green
if (Test-Path "AD2:\Shares\testdatadb\routes\api.js") {
    $apiJs = Get-Content "AD2:\Shares\testdatadb\routes\api.js" -Raw

    # Check for LIMIT clauses
    $hasLimit = $apiJs -match "LIMIT"
    if ($hasLimit) {
        Write-Host "[OK] Found LIMIT clauses in queries (good for performance)" -ForegroundColor Green
    } else {
        Write-Host "[WARNING] No LIMIT clauses found - queries may return too many results" -ForegroundColor Yellow
    }

    # Check for index usage
    $hasIndexHints = $apiJs -match "INDEXED BY" -or $apiJs -match "USE INDEX"
    if ($hasIndexHints) {
        Write-Host "[OK] Found index hints in queries" -ForegroundColor Green
    } else {
        Write-Host "[INFO] No explicit index hints (relying on automatic optimization)" -ForegroundColor Cyan
    }
}

# Check database file fragmentation
Write-Host "`n[OK] Checking database file stats..." -ForegroundColor Green
$dbFile = Get-Item "AD2:\Shares\testdatadb\database\testdata.db"
Write-Host "  File size: $([math]::Round($dbFile.Length/1MB,2)) MB" -ForegroundColor Cyan
Write-Host "  Last accessed: $($dbFile.LastAccessTime)" -ForegroundColor Cyan
Write-Host "  Last modified: $($dbFile.LastWriteTime)" -ForegroundColor Cyan

# Suggestion to run VACUUM
$daysSinceModified = (Get-Date) - $dbFile.LastWriteTime
if ($daysSinceModified.TotalDays -gt 7) {
    Write-Host "`n[SUGGESTION] Database hasn't been modified in $([math]::Round($daysSinceModified.TotalDays,1)) days" -ForegroundColor Yellow
    Write-Host "  Consider running VACUUM to optimize database file" -ForegroundColor Yellow
}

Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue
Write-Host "`n[OK] Done" -ForegroundColor Green

check-db-server.ps1 (new file, 72 lines)
@@ -0,0 +1,72 @@
# Check Node.js server status and database access on AD2
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Checking Node.js server status..." -ForegroundColor Green

# Check if Node.js process is running
$nodeProcs = Invoke-Command -ComputerName 192.168.0.6 -Credential $cred -ScriptBlock {
    Get-Process node -ErrorAction SilentlyContinue | Select-Object Id, ProcessName, StartTime, @{Name='Memory(MB)';Expression={[math]::Round($_.WorkingSet64/1MB,2)}}
}

if ($nodeProcs) {
    Write-Host "[FOUND] Node.js process(es) running:" -ForegroundColor Green
    $nodeProcs | Format-Table -AutoSize
} else {
    Write-Host "[WARNING] No Node.js process found - server may not be running" -ForegroundColor Yellow
}

# Check database file
Write-Host "`n[OK] Checking database file..." -ForegroundColor Green
$dbInfo = Invoke-Command -ComputerName 192.168.0.6 -Credential $cred -ScriptBlock {
    $dbPath = "C:\Shares\testdatadb\database\testdata.db"
    if (Test-Path $dbPath) {
        $file = Get-Item $dbPath
        [PSCustomObject]@{
            Exists    = $true
            Size      = [math]::Round($file.Length/1MB,2)
            LastWrite = $file.LastWriteTime
            Readable  = $true
        }
    } else {
        [PSCustomObject]@{
            Exists = $false
        }
    }
}

if ($dbInfo.Exists) {
    Write-Host "[OK] Database file found" -ForegroundColor Green
    Write-Host "  Size: $($dbInfo.Size) MB" -ForegroundColor Cyan
    Write-Host "  Last Modified: $($dbInfo.LastWrite)" -ForegroundColor Cyan
} else {
    Write-Host "[ERROR] Database file not found!" -ForegroundColor Red
}

# Check for server log file
Write-Host "`n[OK] Checking for server logs..." -ForegroundColor Green
$logs = Invoke-Command -ComputerName 192.168.0.6 -Credential $cred -ScriptBlock {
    $logPath = "C:\Shares\testdatadb\server.log"
    if (Test-Path $logPath) {
        Get-Content $logPath -Tail 20
    } else {
        Write-Output "[INFO] No server.log file found"
    }
}

if ($logs) {
    Write-Host "[OK] Recent server logs:" -ForegroundColor Green
    $logs | ForEach-Object { Write-Host "  $_" }
}

# Try to test port 3000
Write-Host "`n[OK] Testing port 3000..." -ForegroundColor Green
$portTest = Test-NetConnection -ComputerName 192.168.0.6 -Port 3000 -WarningAction SilentlyContinue

if ($portTest.TcpTestSucceeded) {
    Write-Host "[OK] Port 3000 is open and accessible" -ForegroundColor Green
} else {
    Write-Host "[ERROR] Port 3000 is not accessible - server may not be running" -ForegroundColor Red
}

Write-Host "`n[OK] Done" -ForegroundColor Green

check-db-simple.ps1 (new file, 77 lines)
@@ -0,0 +1,77 @@
# Simple check of database server via SMB
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Mounting AD2 C$ share..." -ForegroundColor Green
New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null

Write-Host "[OK] Checking database file..." -ForegroundColor Green
$dbPath = "AD2:\Shares\testdatadb\database\testdata.db"

if (Test-Path $dbPath) {
    $dbFile = Get-Item $dbPath
    Write-Host "[OK] Database file exists" -ForegroundColor Green
    Write-Host "  Size: $([math]::Round($dbFile.Length/1MB,2)) MB" -ForegroundColor Cyan
    Write-Host "  Last Modified: $($dbFile.LastWriteTime)" -ForegroundColor Cyan

    # Check if file is locked
    try {
        $stream = [System.IO.File]::Open($dbFile.FullName, 'Open', 'Read', 'Read')
        $stream.Close()
        Write-Host "  [OK] Database file is accessible (not locked)" -ForegroundColor Green
    } catch {
        Write-Host "  [WARNING] Database file may be locked: $($_.Exception.Message)" -ForegroundColor Yellow
    }
} else {
    Write-Host "[ERROR] Database file not found!" -ForegroundColor Red
}

# Check server.js file
Write-Host "`n[OK] Checking server files..." -ForegroundColor Green
$serverPath = "AD2:\Shares\testdatadb\server.js"
if (Test-Path $serverPath) {
    Write-Host "[OK] server.js exists" -ForegroundColor Green
} else {
    Write-Host "[ERROR] server.js not found!" -ForegroundColor Red
}

# Check package.json
$packagePath = "AD2:\Shares\testdatadb\package.json"
if (Test-Path $packagePath) {
    Write-Host "[OK] package.json exists" -ForegroundColor Green
    $package = Get-Content $packagePath -Raw | ConvertFrom-Json
    Write-Host "  Dependencies:" -ForegroundColor Cyan
    $package.dependencies.PSObject.Properties | ForEach-Object {
        Write-Host "    - $($_.Name): $($_.Value)" -ForegroundColor Cyan
    }
}

# Check for any error log files
Write-Host "`n[OK] Checking for error logs..." -ForegroundColor Green
$logFiles = Get-ChildItem "AD2:\Shares\testdatadb\*.log" -ErrorAction SilentlyContinue

if ($logFiles) {
    Write-Host "[FOUND] Log files:" -ForegroundColor Green
    $logFiles | ForEach-Object {
        Write-Host "  $($_.Name) - $([math]::Round($_.Length/1KB,2)) KB - Modified: $($_.LastWriteTime)" -ForegroundColor Cyan
        if ($_.Length -lt 10KB) {
            Write-Host "  Last 10 lines:" -ForegroundColor Yellow
            Get-Content $_.FullName -Tail 10 | ForEach-Object { Write-Host "    $_" -ForegroundColor Gray }
        }
    }
} else {
    Write-Host "[INFO] No log files found" -ForegroundColor Cyan
}

# Test port 3000
Write-Host "`n[OK] Testing port 3000 connectivity..." -ForegroundColor Green
$portTest = Test-NetConnection -ComputerName 192.168.0.6 -Port 3000 -WarningAction SilentlyContinue -InformationLevel Quiet

if ($portTest) {
    Write-Host "[OK] Port 3000 is OPEN" -ForegroundColor Green
} else {
    Write-Host "[ERROR] Port 3000 is CLOSED - Server not running or firewall blocking" -ForegroundColor Red
}

Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue
Write-Host "`n[OK] Done" -ForegroundColor Green

check-new-records.ps1 (new file, 89 lines)
@@ -0,0 +1,89 @@
# Check for new test data files that need importing
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "========================================" -ForegroundColor Cyan
Write-Host "Test Data Import Status Check" -ForegroundColor Cyan
Write-Host "========================================`n" -ForegroundColor Cyan

Write-Host "[1/4] Mounting AD2 C$ share..." -ForegroundColor Green
New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null

# Check database last modified time
Write-Host "`n[2/4] Checking database status..." -ForegroundColor Green
$dbFile = Get-Item "AD2:\Shares\testdatadb\database\testdata.db"
Write-Host "  Database last modified: $($dbFile.LastWriteTime)" -ForegroundColor Cyan
Write-Host "  Database size: $([math]::Round($dbFile.Length/1MB,2)) MB" -ForegroundColor Cyan

# Check for new DAT files in test folders
Write-Host "`n[3/4] Checking for new test data files..." -ForegroundColor Green

$logTypes = @("8BLOG", "DSCLOG", "7BLOG", "5BLOG", "PWRLOG", "VASLOG", "SCTLOG", "HVLOG", "RMSLOG")
$testStations = @("TS-1L", "TS-3R", "TS-4L", "TS-4R", "TS-8R", "TS-10L", "TS-11L")

$newFiles = @()
$cutoffTime = $dbFile.LastWriteTime
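
# Assumption: the database file's LastWriteTime serves as the import watermark,
# i.e. any .DAT file modified after the last import touched testdata.db still
# needs importing. Clock skew between the NAS and AD2 could misclassify files
# that land near that boundary.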

foreach ($station in $testStations) {
    foreach ($logType in $logTypes) {
        $path = "AD2:\Shares\test\$station\LOGS\$logType"

        if (Test-Path $path) {
            $files = Get-ChildItem $path -Filter "*.DAT" -ErrorAction SilentlyContinue |
                Where-Object { $_.LastWriteTime -gt $cutoffTime }

            if ($files) {
                foreach ($file in $files) {
                    $newFiles += [PSCustomObject]@{
                        Station  = $station
                        LogType  = $logType
                        FileName = $file.Name
                        Size     = [math]::Round($file.Length/1KB, 2)
                        Modified = $file.LastWriteTime
                        Path     = $file.FullName
                    }
                }
            }
        }
    }
}

if ($newFiles.Count -gt 0) {
    Write-Host "  [FOUND] $($newFiles.Count) new files since last import:" -ForegroundColor Yellow
    $newFiles | Format-Table Station, LogType, FileName, @{Name='Size(KB)';Expression={$_.Size}}, Modified -AutoSize | Out-String | Write-Host
} else {
    Write-Host "  [OK] No new files found - database is up to date" -ForegroundColor Green
}

# Check sync script log
Write-Host "`n[4/4] Checking sync log..." -ForegroundColor Green
$syncLog = "AD2:\Shares\test\scripts\sync-from-nas.log"

if (Test-Path $syncLog) {
    Write-Host "  [OK] Sync log exists" -ForegroundColor Green
    $logFile = Get-Item $syncLog
    Write-Host "  Last modified: $($logFile.LastWriteTime)" -ForegroundColor Cyan

    Write-Host "  Last 10 log entries:" -ForegroundColor Cyan
    $lastLines = Get-Content $syncLog -Tail 10
    $lastLines | ForEach-Object { Write-Host "    $_" -ForegroundColor Gray }
} else {
    Write-Host "  [WARNING] Sync log not found at: $syncLog" -ForegroundColor Yellow
}

Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue

Write-Host "`n========================================" -ForegroundColor Cyan
Write-Host "Summary" -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan

if ($newFiles.Count -gt 0) {
    Write-Host "[ACTION REQUIRED] Import new files:" -ForegroundColor Yellow
    Write-Host "  cd C:\Shares\testdatadb" -ForegroundColor Cyan
    Write-Host "  node database\import.js" -ForegroundColor Cyan
    Write-Host "`n  Or wait for automatic import (runs every 15 minutes)" -ForegroundColor Gray
} else {
    Write-Host "[OK] Database is current - no import needed" -ForegroundColor Green
}

Write-Host "========================================`n" -ForegroundColor Cyan

check-node-running.ps1 (new file, 47 lines)
@@ -0,0 +1,47 @@
# Check if Node.js server is running on AD2
Write-Host "[OK] Checking Node.js server status..." -ForegroundColor Green

# Test 1: Check port 3000
Write-Host "`n[TEST 1] Testing port 3000..." -ForegroundColor Cyan
$portTest = Test-NetConnection -ComputerName 192.168.0.6 -Port 3000 -WarningAction SilentlyContinue -InformationLevel Quiet

if ($portTest) {
    Write-Host "  [OK] Port 3000 is OPEN" -ForegroundColor Green
} else {
    Write-Host "  [ERROR] Port 3000 is CLOSED" -ForegroundColor Red
    Write-Host "  [INFO] Node.js server is not running" -ForegroundColor Yellow
}

# Test 2: Check Node.js processes (via WinRM remoting)
Write-Host "`n[TEST 2] Checking for Node.js processes..." -ForegroundColor Cyan
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

try {
    $nodeProcs = Invoke-Command -ComputerName 192.168.0.6 -Credential $cred -ScriptBlock {
        Get-Process node -ErrorAction SilentlyContinue | Select-Object Id, ProcessName, @{Name='Memory(MB)';Expression={[math]::Round($_.WorkingSet64/1MB,2)}}, Path
    } -ErrorAction Stop

    if ($nodeProcs) {
        Write-Host "  [OK] Node.js process(es) found:" -ForegroundColor Green
        $nodeProcs | Format-Table -AutoSize
    } else {
        Write-Host "  [ERROR] No Node.js process found" -ForegroundColor Red
    }
} catch {
    Write-Host "  [WARNING] WinRM check failed: $($_.Exception.Message)" -ForegroundColor Yellow
    Write-Host "  [INFO] Cannot verify via WinRM, but the port test is more reliable" -ForegroundColor Cyan
}

Write-Host "`n[SUMMARY]" -ForegroundColor Cyan
if (!$portTest) {
    Write-Host "  Node.js server is NOT running" -ForegroundColor Red
    Write-Host "`n  [ACTION] Start the server on AD2:" -ForegroundColor Yellow
    Write-Host "    cd C:\Shares\testdatadb" -ForegroundColor Cyan
    Write-Host "    node server.js" -ForegroundColor Cyan
} else {
    Write-Host "  Node.js server appears to be running" -ForegroundColor Green
    Write-Host "  But API endpoints are not responding - check server logs" -ForegroundColor Yellow
}

Write-Host "`n[OK] Done" -ForegroundColor Green

check-wal-files.ps1 (new file, 34 lines)
@@ -0,0 +1,34 @@
# Check for and clean up WAL files
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Mounting AD2 C$ share..." -ForegroundColor Green
New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null

Write-Host "[OK] Checking for WAL/SHM files..." -ForegroundColor Green
$dbFolder = "AD2:\Shares\testdatadb\database"

$walFile = Get-Item "$dbFolder\testdata.db-wal" -ErrorAction SilentlyContinue
$shmFile = Get-Item "$dbFolder\testdata.db-shm" -ErrorAction SilentlyContinue

if ($walFile) {
    Write-Host "[FOUND] testdata.db-wal ($([math]::Round($walFile.Length/1KB,2)) KB)" -ForegroundColor Yellow
    Write-Host "[ACTION] Delete this file? (Y/N)" -ForegroundColor Yellow
} else {
    Write-Host "[OK] No WAL file found" -ForegroundColor Green
}

if ($shmFile) {
    Write-Host "[FOUND] testdata.db-shm ($([math]::Round($shmFile.Length/1KB,2)) KB)" -ForegroundColor Yellow
} else {
    Write-Host "[OK] No SHM file found" -ForegroundColor Green
}

# Check database file integrity
Write-Host "`n[OK] Checking database file..." -ForegroundColor Green
$dbFile = Get-Item "$dbFolder\testdata.db"
Write-Host "  Size: $([math]::Round($dbFile.Length/1MB,2)) MB" -ForegroundColor Cyan
Write-Host "  Modified: $($dbFile.LastWriteTime)" -ForegroundColor Cyan

Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue
Write-Host "`n[OK] Done" -ForegroundColor Green

deploy-db-fix.ps1 (new file, 41 lines)
@@ -0,0 +1,41 @@
# Deploy FIXED Database API to AD2
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "========================================" -ForegroundColor Red
Write-Host "EMERGENCY FIX - Database API" -ForegroundColor Red
Write-Host "========================================`n" -ForegroundColor Red

# Step 1: Mount AD2 share
Write-Host "[1/3] Mounting AD2 C$ share..." -ForegroundColor Green
New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null
Write-Host "  [OK] Share mounted" -ForegroundColor Green

# Step 2: Deploy fixed api.js (a backup already exists from the earlier deployment)
Write-Host "`n[2/3] Deploying FIXED api.js..." -ForegroundColor Green
$fixedContent = Get-Content "D:\ClaudeTools\api-js-fixed.js" -Raw
$fixedContent | Set-Content "AD2:\Shares\testdatadb\routes\api.js" -Encoding UTF8
Write-Host "  [OK] Fixed api.js deployed" -ForegroundColor Green
Write-Host "  [FIXED] Removed WAL mode pragma (conflicts with readonly)" -ForegroundColor Yellow
Write-Host "  [FIXED] Removed synchronous pragma (requires write access)" -ForegroundColor Yellow
Write-Host "  [KEPT] Cache size: 64MB" -ForegroundColor Green
Write-Host "  [KEPT] Memory-mapped I/O: 256MB" -ForegroundColor Green
Write-Host "  [KEPT] Timeout: 10 seconds" -ForegroundColor Green

# Step 3: Verify deployment
Write-Host "`n[3/3] Verifying deployment..." -ForegroundColor Green
$deployedFile = Get-Item "AD2:\Shares\testdatadb\routes\api.js"
Write-Host "  [OK] File size: $($deployedFile.Length) bytes" -ForegroundColor Green
Write-Host "  [OK] Modified: $($deployedFile.LastWriteTime)" -ForegroundColor Green

# Cleanup
Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue

Write-Host "`n========================================" -ForegroundColor Green
Write-Host "Fix Deployed Successfully" -ForegroundColor Green
Write-Host "========================================" -ForegroundColor Green
Write-Host "`n[ACTION REQUIRED] Restart Node.js Server:" -ForegroundColor Yellow
Write-Host "  1. Stop:  taskkill /F /IM node.exe" -ForegroundColor Cyan
Write-Host "  2. Start: cd C:\Shares\testdatadb && node server.js" -ForegroundColor Cyan
Write-Host "`nThe database should now work correctly!" -ForegroundColor Green
Write-Host "========================================`n" -ForegroundColor Green

deploy-db-optimization-smb.ps1 (new file, 57 lines)
@@ -0,0 +1,57 @@
# Deploy Database Performance Optimizations to AD2 (SMB only)
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "========================================" -ForegroundColor Cyan
Write-Host "Test Database Performance Optimization" -ForegroundColor Cyan
Write-Host "========================================`n" -ForegroundColor Cyan

# Step 1: Mount AD2 share
Write-Host "[1/4] Mounting AD2 C$ share..." -ForegroundColor Green
New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null
Write-Host "  [OK] Share mounted" -ForegroundColor Green

# Step 2: Backup existing api.js
Write-Host "`n[2/4] Backing up existing api.js..." -ForegroundColor Green
$timestamp = Get-Date -Format "yyyy-MM-dd-HHmmss"
$backupPath = "AD2:\Shares\testdatadb\routes\api.js.backup-$timestamp"
Copy-Item "AD2:\Shares\testdatadb\routes\api.js" $backupPath
Write-Host "  [OK] Backup created: api.js.backup-$timestamp" -ForegroundColor Green

# Step 3: Deploy optimized api.js
Write-Host "`n[3/4] Deploying optimized api.js..." -ForegroundColor Green
$optimizedContent = Get-Content "D:\ClaudeTools\api-js-optimized.js" -Raw
$optimizedContent | Set-Content "AD2:\Shares\testdatadb\routes\api.js" -Encoding UTF8
Write-Host "  [OK] Optimized api.js deployed" -ForegroundColor Green

# Step 4: Verify deployment
Write-Host "`n[4/4] Verifying deployment..." -ForegroundColor Green
$deployedFile = Get-Item "AD2:\Shares\testdatadb\routes\api.js"
Write-Host "  [OK] File size: $($deployedFile.Length) bytes" -ForegroundColor Green
Write-Host "  [OK] Modified: $($deployedFile.LastWriteTime)" -ForegroundColor Green

# Cleanup
Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue

Write-Host "`n========================================" -ForegroundColor Cyan
Write-Host "Deployment Complete" -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
Write-Host "[OK] Backup created" -ForegroundColor Green
Write-Host "[OK] Optimized code deployed" -ForegroundColor Green
Write-Host "`nOptimizations Applied:" -ForegroundColor Cyan
Write-Host "  - Connection timeout: 10 seconds" -ForegroundColor Cyan
Write-Host "  - WAL mode: Enabled (better concurrency)" -ForegroundColor Cyan
Write-Host "  - Cache size: 64MB" -ForegroundColor Cyan
Write-Host "  - Memory-mapped I/O: 256MB" -ForegroundColor Cyan
Write-Host "  - Synchronous mode: NORMAL (faster, safe)" -ForegroundColor Cyan
Write-Host "`n[ACTION REQUIRED] Restart Node.js Server:" -ForegroundColor Yellow
Write-Host "  1. Connect to AD2 (SSH or RDP)" -ForegroundColor Yellow
Write-Host "  2. Stop existing Node.js process:" -ForegroundColor Yellow
Write-Host "       taskkill /F /IM node.exe" -ForegroundColor Cyan
Write-Host "  3. Start server:" -ForegroundColor Yellow
Write-Host "       cd C:\Shares\testdatadb" -ForegroundColor Cyan
Write-Host "       node server.js" -ForegroundColor Cyan
Write-Host "`nWeb Interface: http://192.168.0.6:3000" -ForegroundColor Green
Write-Host "`nRollback (if needed):" -ForegroundColor Yellow
Write-Host "  Copy-Item C:\Shares\testdatadb\routes\api.js.backup-$timestamp C:\Shares\testdatadb\routes\api.js" -ForegroundColor Cyan
Write-Host "========================================`n" -ForegroundColor Cyan

deploy-db-optimization.ps1 (new file, 107 lines)
@@ -0,0 +1,107 @@
# Deploy Database Performance Optimizations to AD2
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "========================================" -ForegroundColor Cyan
Write-Host "Test Database Performance Optimization" -ForegroundColor Cyan
Write-Host "========================================`n" -ForegroundColor Cyan

# Step 1: Mount AD2 share
Write-Host "[1/6] Mounting AD2 C$ share..." -ForegroundColor Green
New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null
Write-Host "  [OK] Share mounted" -ForegroundColor Green

# Step 2: Backup existing api.js
Write-Host "`n[2/6] Backing up existing api.js..." -ForegroundColor Green
$timestamp = Get-Date -Format "yyyy-MM-dd-HHmmss"
$backupPath = "AD2:\Shares\testdatadb\routes\api.js.backup-$timestamp"
Copy-Item "AD2:\Shares\testdatadb\routes\api.js" $backupPath
Write-Host "  [OK] Backup created: api.js.backup-$timestamp" -ForegroundColor Green

# Step 3: Deploy optimized api.js
Write-Host "`n[3/6] Deploying optimized api.js..." -ForegroundColor Green
$optimizedContent = Get-Content "D:\ClaudeTools\api-js-optimized.js" -Raw
$optimizedContent | Set-Content "AD2:\Shares\testdatadb\routes\api.js" -Encoding UTF8
Write-Host "  [OK] Optimized api.js deployed" -ForegroundColor Green

# Step 4: Stop Node.js server
Write-Host "`n[4/6] Stopping Node.js server..." -ForegroundColor Yellow
try {
    Invoke-Command -ComputerName 192.168.0.6 -Credential $cred -ScriptBlock {
        $nodeProcs = Get-Process node -ErrorAction SilentlyContinue
        if ($nodeProcs) {
            $nodeProcs | ForEach-Object {
                Write-Host "  Stopping process ID: $($_.Id)"
                Stop-Process -Id $_.Id -Force
            }
            Start-Sleep -Seconds 2
            Write-Host "  [OK] Node.js processes stopped"
        } else {
            Write-Host "  [INFO] No Node.js process found"
        }
    } -ErrorAction Stop
} catch {
    Write-Host "  [WARNING] Could not stop via WinRM: $($_.Exception.Message)" -ForegroundColor Yellow
    Write-Host "  [ACTION] You may need to stop the server manually on AD2" -ForegroundColor Yellow
}

# Step 5: Start Node.js server
Write-Host "`n[5/6] Starting Node.js server..." -ForegroundColor Green
try {
    Invoke-Command -ComputerName 192.168.0.6 -Credential $cred -ScriptBlock {
        Set-Location "C:\Shares\testdatadb"

        # Start Node.js in the background
        $startInfo = New-Object System.Diagnostics.ProcessStartInfo
        $startInfo.FileName = "node"
        $startInfo.Arguments = "server.js"
        $startInfo.WorkingDirectory = "C:\Shares\testdatadb"
        $startInfo.UseShellExecute = $false
        $startInfo.RedirectStandardOutput = $true
        $startInfo.RedirectStandardError = $true
        $startInfo.CreateNoWindow = $true

        $process = [System.Diagnostics.Process]::Start($startInfo)
        Start-Sleep -Seconds 3

        if (!$process.HasExited) {
            Write-Host "  [OK] Server started (PID: $($process.Id))"
        } else {
            Write-Host "  [ERROR] Server failed to start"
        }
    } -ErrorAction Stop
} catch {
    Write-Host "  [WARNING] Could not start via WinRM: $($_.Exception.Message)" -ForegroundColor Yellow
    Write-Host "  [ACTION] Please start manually: cd C:\Shares\testdatadb && node server.js" -ForegroundColor Yellow
}
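
# Caveat: a process launched inside a WinRM session is tied to that session and
# may be terminated when the remote session closes, so the server might not
# survive this script ending. A scheduled task or a service wrapper (e.g. NSSM)
# is the more durable way to keep node.exe running.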

# Step 6: Test connectivity
Write-Host "`n[6/6] Testing server connectivity..." -ForegroundColor Green
Start-Sleep -Seconds 2

$portTest = Test-NetConnection -ComputerName 192.168.0.6 -Port 3000 -WarningAction SilentlyContinue -InformationLevel Quiet
if ($portTest) {
    Write-Host "  [OK] Port 3000 is accessible" -ForegroundColor Green
} else {
    Write-Host "  [ERROR] Port 3000 is not accessible - server may not have started" -ForegroundColor Red
}

# Cleanup
Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue

Write-Host "`n========================================" -ForegroundColor Cyan
Write-Host "Deployment Summary" -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
Write-Host "[OK] Backup created" -ForegroundColor Green
Write-Host "[OK] Optimized code deployed" -ForegroundColor Green
Write-Host "`nOptimizations Applied:" -ForegroundColor Cyan
Write-Host "  - Connection timeout: 10 seconds" -ForegroundColor Cyan
Write-Host "  - WAL mode: Enabled (better concurrency)" -ForegroundColor Cyan
Write-Host "  - Cache size: 64MB" -ForegroundColor Cyan
Write-Host "  - Memory-mapped I/O: 256MB" -ForegroundColor Cyan
Write-Host "  - Synchronous mode: NORMAL (faster, safe)" -ForegroundColor Cyan
Write-Host "`nWeb Interface: http://192.168.0.6:3000" -ForegroundColor Green
Write-Host "`nNext Steps (Optional):" -ForegroundColor Yellow
Write-Host "  - Run VACUUM to optimize database" -ForegroundColor Yellow
Write-Host "  - Test queries via web interface" -ForegroundColor Yellow
Write-Host "========================================`n" -ForegroundColor Cyan

deploy-test-query.ps1 (new file, 17 lines)
@@ -0,0 +1,17 @@
# Deploy and run test query on AD2
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Mounting AD2 C$ share..." -ForegroundColor Green
New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null

Write-Host "[OK] Copying test script to AD2..." -ForegroundColor Green
Copy-Item "D:\ClaudeTools\test-query.js" "AD2:\Shares\testdatadb\test-query.js" -Force

Write-Host "[OK] Test script deployed" -ForegroundColor Green
Write-Host "`n[ACTION] Run on AD2:" -ForegroundColor Yellow
Write-Host "  cd C:\Shares\testdatadb" -ForegroundColor Cyan
Write-Host "  node test-query.js" -ForegroundColor Cyan

Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue
Write-Host "`n[OK] Done" -ForegroundColor Green

explore-testdatadb.ps1 (new file, 53 lines)
@@ -0,0 +1,53 @@
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Mounting AD2 C$ share..."
try {
    New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null
    Write-Host "[OK] Mounted as AD2: drive"

    Write-Host "`n[OK] Exploring C:\Shares\testdatadb folder structure..."
    if (Test-Path "AD2:\Shares\testdatadb") {
        Write-Host "`n=== Folder Structure ==="
        Get-ChildItem "AD2:\Shares\testdatadb" -Recurse -Depth 3 | Select-Object FullName, Length, LastWriteTime | Format-Table -AutoSize

        Write-Host "`n=== Database Files ==="
        Get-ChildItem "AD2:\Shares\testdatadb" -Recurse -Include "*.db","*.sqlite","*.json","*.sql","*.mdb","package.json","*.md","README*" | Select-Object FullName, Length

        Write-Host "`n=== import.js Contents (First 100 lines) ==="
        if (Test-Path "AD2:\Shares\testdatadb\database\import.js") {
            Get-Content "AD2:\Shares\testdatadb\database\import.js" -TotalCount 100
        } else {
            Write-Host "[WARNING] import.js not found"
        }

        Write-Host "`n=== package.json Contents ==="
        if (Test-Path "AD2:\Shares\testdatadb\database\package.json") {
            Get-Content "AD2:\Shares\testdatadb\database\package.json"
        } elseif (Test-Path "AD2:\Shares\testdatadb\package.json") {
            Get-Content "AD2:\Shares\testdatadb\package.json"
        } else {
            Write-Host "[INFO] No package.json found"
        }

        Write-Host "`n=== README or Documentation ==="
        $readmeFiles = Get-ChildItem "AD2:\Shares\testdatadb" -Recurse -Include "README*","*.md" -ErrorAction SilentlyContinue
        foreach ($readme in $readmeFiles) {
            Write-Host "`n--- $($readme.FullName) ---"
            Get-Content $readme.FullName -TotalCount 50
        }

    } else {
        Write-Host "[ERROR] C:\Shares\testdatadb folder not found"
        Write-Host "`n[INFO] Listing C:\Shares contents..."
        Get-ChildItem "AD2:\Shares" -Directory | Format-Table Name, FullName
    }

} catch {
    Write-Host "[ERROR] Failed to access AD2: $_"
} finally {
    if (Test-Path AD2:) {
        Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue
        Write-Host "`n[OK] Unmounted AD2 drive"
    }
}

get-server-js.ps1 (new file, 28 lines)
@@ -0,0 +1,28 @@
# Retrieve server.js for analysis
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Mounting AD2 C$ share..." -ForegroundColor Green
New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null

Write-Host "[OK] Retrieving server.js..." -ForegroundColor Green
$serverContent = Get-Content "AD2:\Shares\testdatadb\server.js" -Raw

$outputPath = "D:\ClaudeTools\server-js-retrieved.js"
$serverContent | Set-Content $outputPath -Encoding UTF8

Write-Host "[OK] Saved to: $outputPath" -ForegroundColor Green
Write-Host "[OK] File size: $($(Get-Item $outputPath).Length) bytes" -ForegroundColor Cyan

# Also get routes/api.js
Write-Host "`n[OK] Retrieving routes/api.js..." -ForegroundColor Green
if (Test-Path "AD2:\Shares\testdatadb\routes\api.js") {
    $apiContent = Get-Content "AD2:\Shares\testdatadb\routes\api.js" -Raw
    $apiOutputPath = "D:\ClaudeTools\api-js-retrieved.js"
    $apiContent | Set-Content $apiOutputPath -Encoding UTF8
    Write-Host "[OK] Saved to: $apiOutputPath" -ForegroundColor Green
    Write-Host "[OK] File size: $($(Get-Item $apiOutputPath).Length) bytes" -ForegroundColor Cyan
}

Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue
Write-Host "`n[OK] Done" -ForegroundColor Green

get-sync-script.ps1 (new file, 22 lines)
@@ -0,0 +1,22 @@
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Retrieving Sync-FromNAS.ps1 from AD2..."
$scriptContent = Invoke-Command -ComputerName 192.168.0.6 -Credential $cred -ScriptBlock {
    Get-Content C:\Shares\test\scripts\Sync-FromNAS.ps1 -Raw
}

$scriptContent | Out-File -FilePath "D:\ClaudeTools\Sync-FromNAS-retrieved.ps1" -Encoding UTF8
Write-Host "[OK] Script saved to D:\ClaudeTools\Sync-FromNAS-retrieved.ps1"

Write-Host "`n[OK] Searching for database folder..."
$dbFolders = Invoke-Command -ComputerName 192.168.0.6 -Credential $cred -ScriptBlock {
    Get-ChildItem C:\ -Directory -ErrorAction SilentlyContinue | Where-Object Name -match "database|testdata|test.*db"
}

if ($dbFolders) {
    Write-Host "`nFound folders:"
    $dbFolders | Format-Table Name, FullName
} else {
    Write-Host "No database folders found in C:\"
}

get-testdb-docs.ps1 (new file, 33 lines)
@@ -0,0 +1,33 @@
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null

Write-Host "[OK] Retrieving database documentation..."

# Get schema.sql
if (Test-Path "AD2:\Shares\testdatadb\database\schema.sql") {
    Get-Content "AD2:\Shares\testdatadb\database\schema.sql" -Raw | Out-File "D:\ClaudeTools\schema-retrieved.sql" -Encoding UTF8
    Write-Host "[OK] schema.sql retrieved"
}

# Get QUICKSTART.md
if (Test-Path "AD2:\Shares\testdatadb\QUICKSTART.md") {
    Get-Content "AD2:\Shares\testdatadb\QUICKSTART.md" -Raw | Out-File "D:\ClaudeTools\QUICKSTART-retrieved.md" -Encoding UTF8
    Write-Host "[OK] QUICKSTART.md retrieved"
}

# Get SESSION_NOTES.md
if (Test-Path "AD2:\Shares\testdatadb\SESSION_NOTES.md") {
    Get-Content "AD2:\Shares\testdatadb\SESSION_NOTES.md" -Raw | Out-File "D:\ClaudeTools\SESSION_NOTES-retrieved.md" -Encoding UTF8
    Write-Host "[OK] SESSION_NOTES.md retrieved"
}

# Get package.json
if (Test-Path "AD2:\Shares\testdatadb\package.json") {
    Get-Content "AD2:\Shares\testdatadb\package.json" -Raw | Out-File "D:\ClaudeTools\package-retrieved.json" -Encoding UTF8
    Write-Host "[OK] package.json retrieved"
}

Remove-PSDrive -Name AD2
Write-Host "[OK] All files retrieved"

import-js-retrieved.js (new file, 396 lines)
@@ -0,0 +1,396 @@
/**
 * Data Import Script
 * Imports test data from DAT and SHT files into SQLite database
 */

const fs = require('fs');
const path = require('path');
const Database = require('better-sqlite3');

const { parseMultilineFile, extractTestStation } = require('../parsers/multiline');
const { parseCsvFile } = require('../parsers/csvline');
const { parseShtFile } = require('../parsers/shtfile');

// Configuration
const DB_PATH = path.join(__dirname, 'testdata.db');
const SCHEMA_PATH = path.join(__dirname, 'schema.sql');

// Data source paths
const TEST_PATH = 'C:/Shares/test';
const RECOVERY_PATH = 'C:/Shares/Recovery-TEST';
const HISTLOGS_PATH = path.join(TEST_PATH, 'Ate/HISTLOGS');

// Log types and their parsers
const LOG_TYPES = {
  'DSCLOG': { parser: 'multiline', ext: '.DAT' },
  '5BLOG':  { parser: 'multiline', ext: '.DAT' },
  '8BLOG':  { parser: 'multiline', ext: '.DAT' },
  'PWRLOG': { parser: 'multiline', ext: '.DAT' },
  'SCTLOG': { parser: 'multiline', ext: '.DAT' },
  'VASLOG': { parser: 'multiline', ext: '.DAT' },
  '7BLOG':  { parser: 'csvline', ext: '.DAT' }
};
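
// Note: SHT sheet files are intentionally absent from this map; they are
// discovered separately (recursive *.SHT search in importStationLogs) and
// routed straight to the 'shtfile' parser.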

// Initialize database
function initDatabase() {
  console.log('Initializing database...');
  const db = new Database(DB_PATH);

  // Read and execute schema
  const schema = fs.readFileSync(SCHEMA_PATH, 'utf8');
  db.exec(schema);

  console.log('Database initialized.');
  return db;
}

// Prepare insert statement
function prepareInsert(db) {
  return db.prepare(`
    INSERT OR IGNORE INTO test_records
      (log_type, model_number, serial_number, test_date, test_station, overall_result, raw_data, source_file)
    VALUES (?, ?, ?, ?, ?, ?, ?, ?)
  `);
}
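
// INSERT OR IGNORE makes re-imports idempotent, assuming schema.sql declares a
// UNIQUE constraint over the identifying columns; the schema itself is not
// shown in this diff, so that constraint is inferred from this usage.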

// Find all files of a specific type in a directory
function findFiles(dir, pattern, recursive = true) {
  const results = [];

  try {
    if (!fs.existsSync(dir)) return results;

    const items = fs.readdirSync(dir, { withFileTypes: true });

    for (const item of items) {
      const fullPath = path.join(dir, item.name);

      if (item.isDirectory() && recursive) {
        results.push(...findFiles(fullPath, pattern, recursive));
      } else if (item.isFile()) {
        if (pattern.test(item.name)) {
          results.push(fullPath);
        }
      }
    }
  } catch (err) {
    // Ignore permission errors
  }

  return results;
}

// Import records from a file
function importFile(db, insertStmt, filePath, logType, parser) {
  let records = [];
  const testStation = extractTestStation(filePath);

  try {
    switch (parser) {
      case 'multiline':
        records = parseMultilineFile(filePath, logType, testStation);
        break;
      case 'csvline':
        records = parseCsvFile(filePath, testStation);
        break;
      case 'shtfile':
        records = parseShtFile(filePath, testStation);
        break;
    }

    let imported = 0;
    for (const record of records) {
      try {
        const result = insertStmt.run(
          record.log_type,
          record.model_number,
          record.serial_number,
          record.test_date,
          record.test_station,
          record.overall_result,
          record.raw_data,
          record.source_file
        );
        if (result.changes > 0) imported++;
      } catch (err) {
        // Duplicate or constraint error - skip
      }
    }

    return { total: records.length, imported };
  } catch (err) {
    console.error(`Error importing ${filePath}: ${err.message}`);
    return { total: 0, imported: 0 };
  }
}

// Import from HISTLOGS (master consolidated logs)
function importHistlogs(db, insertStmt) {
  console.log('\n=== Importing from HISTLOGS ===');

  let totalImported = 0;
  let totalRecords = 0;

  for (const [logType, config] of Object.entries(LOG_TYPES)) {
    const logDir = path.join(HISTLOGS_PATH, logType);

    if (!fs.existsSync(logDir)) {
      console.log(`  ${logType}: directory not found`);
      continue;
    }

    const files = findFiles(logDir, new RegExp(`\\${config.ext}$`, 'i'), false);
    console.log(`  ${logType}: found ${files.length} files`);

    for (const file of files) {
      const { total, imported } = importFile(db, insertStmt, file, logType, config.parser);
      totalRecords += total;
      totalImported += imported;
    }
  }

  console.log(`  HISTLOGS total: ${totalImported} records imported (${totalRecords} parsed)`);
  return totalImported;
}

// Import from test station logs
function importStationLogs(db, insertStmt, basePath, label) {
  console.log(`\n=== Importing from ${label} ===`);

  let totalImported = 0;
  let totalRecords = 0;

  // Find all test station directories (TS-1, TS-27, TS-8L, TS-10R, etc.)
  const stationPattern = /^TS-\d+[LR]?$/i;
  let stations = [];

  try {
    const items = fs.readdirSync(basePath, { withFileTypes: true });
    stations = items
      .filter(i => i.isDirectory() && stationPattern.test(i.name))
      .map(i => i.name);
  } catch (err) {
    console.log(`  Error reading ${basePath}: ${err.message}`);
    return 0;
  }

  console.log(`  Found stations: ${stations.join(', ')}`);

  for (const station of stations) {
    const logsDir = path.join(basePath, station, 'LOGS');

    if (!fs.existsSync(logsDir)) continue;

    for (const [logType, config] of Object.entries(LOG_TYPES)) {
      const logDir = path.join(logsDir, logType);

      if (!fs.existsSync(logDir)) continue;

      const files = findFiles(logDir, new RegExp(`\\${config.ext}$`, 'i'), false);

      for (const file of files) {
        const { total, imported } = importFile(db, insertStmt, file, logType, config.parser);
        totalRecords += total;
        totalImported += imported;
      }
    }
  }

  // Also import SHT files
  const shtFiles = findFiles(basePath, /\.SHT$/i, true);
  console.log(`  Found ${shtFiles.length} SHT files`);

  for (const file of shtFiles) {
    const { total, imported } = importFile(db, insertStmt, file, 'SHT', 'shtfile');
    totalRecords += total;
    totalImported += imported;
  }

  console.log(`  ${label} total: ${totalImported} records imported (${totalRecords} parsed)`);
  return totalImported;
}

// Import from Recovery-TEST backups (newest first)
function importRecoveryBackups(db, insertStmt) {
  console.log('\n=== Importing from Recovery-TEST backups ===');

  if (!fs.existsSync(RECOVERY_PATH)) {
    console.log('  Recovery-TEST directory not found');
    return 0;
  }

  // Get backup dates, sort newest first
  const backups = fs.readdirSync(RECOVERY_PATH, { withFileTypes: true })
|
||||
.filter(i => i.isDirectory() && /^\d{2}-\d{2}-\d{2}$/.test(i.name))
|
||||
.map(i => i.name)
|
||||
.sort()
|
||||
.reverse();
|
||||
|
||||
console.log(` Found backup dates: ${backups.join(', ')}`);
|
||||
|
||||
let totalImported = 0;
|
||||
|
||||
for (const backup of backups) {
|
||||
const backupPath = path.join(RECOVERY_PATH, backup);
|
||||
const imported = importStationLogs(db, insertStmt, backupPath, `Recovery-TEST/${backup}`);
|
||||
totalImported += imported;
|
||||
}
|
||||
|
||||
return totalImported;
|
||||
}
|
||||
|
||||
// Main import function
|
||||
async function runImport() {
|
||||
console.log('========================================');
|
||||
console.log('Test Data Import');
|
||||
console.log('========================================');
|
||||
console.log(`Database: ${DB_PATH}`);
|
||||
console.log(`Start time: ${new Date().toISOString()}`);
|
||||
|
||||
const db = initDatabase();
|
||||
const insertStmt = prepareInsert(db);
|
||||
|
||||
let grandTotal = 0;
|
||||
|
||||
// Use transaction for performance
|
||||
const importAll = db.transaction(() => {
|
||||
// 1. Import HISTLOGS first (authoritative)
|
||||
grandTotal += importHistlogs(db, insertStmt);
|
||||
|
||||
// 2. Import Recovery backups (newest first)
|
||||
grandTotal += importRecoveryBackups(db, insertStmt);
|
||||
|
||||
// 3. Import current test folder
|
||||
grandTotal += importStationLogs(db, insertStmt, TEST_PATH, 'test');
|
||||
});
|
||||
|
||||
importAll();
|
||||
|
||||
// Get final stats
|
||||
const stats = db.prepare('SELECT COUNT(*) as count FROM test_records').get();
|
||||
|
||||
console.log('\n========================================');
|
||||
console.log('Import Complete');
|
||||
console.log('========================================');
|
||||
console.log(`Total records in database: ${stats.count}`);
|
||||
console.log(`End time: ${new Date().toISOString()}`);
|
||||
|
||||
db.close();
|
||||
}
|
||||
|
||||
// Import a single file (for incremental imports from sync)
|
||||
function importSingleFile(filePath) {
|
||||
console.log(`Importing: ${filePath}`);
|
||||
|
||||
const db = new Database(DB_PATH);
|
||||
const insertStmt = prepareInsert(db);
|
||||
|
||||
// Determine log type from path
|
||||
let logType = null;
|
||||
let parser = null;
|
||||
|
||||
for (const [type, config] of Object.entries(LOG_TYPES)) {
|
||||
if (filePath.includes(type)) {
|
||||
logType = type;
|
||||
parser = config.parser;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (!logType) {
|
||||
// Check for SHT files
|
||||
if (/\.SHT$/i.test(filePath)) {
|
||||
logType = 'SHT';
|
||||
parser = 'shtfile';
|
||||
} else {
|
||||
console.log(` Unknown log type for: ${filePath}`);
|
||||
db.close();
|
||||
return { total: 0, imported: 0 };
|
||||
}
|
||||
}
|
||||
|
||||
const result = importFile(db, insertStmt, filePath, logType, parser);
|
||||
|
||||
console.log(` Imported ${result.imported} of ${result.total} records`);
|
||||
db.close();
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
// Import multiple files (for batch incremental imports)
|
||||
function importFiles(filePaths) {
|
||||
console.log(`\n========================================`);
|
||||
console.log(`Incremental Import: ${filePaths.length} files`);
|
||||
console.log(`========================================`);
|
||||
|
||||
const db = new Database(DB_PATH);
|
||||
const insertStmt = prepareInsert(db);
|
||||
|
||||
let totalImported = 0;
|
||||
let totalRecords = 0;
|
||||
|
||||
const importBatch = db.transaction(() => {
|
||||
for (const filePath of filePaths) {
|
||||
// Determine log type from path
|
||||
let logType = null;
|
||||
let parser = null;
|
||||
|
||||
for (const [type, config] of Object.entries(LOG_TYPES)) {
|
||||
if (filePath.includes(type)) {
|
||||
logType = type;
|
||||
parser = config.parser;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (!logType) {
|
||||
if (/\.SHT$/i.test(filePath)) {
|
||||
logType = 'SHT';
|
||||
parser = 'shtfile';
|
||||
} else {
|
||||
console.log(` Skipping unknown type: ${filePath}`);
|
||||
continue;
|
||||
}
|
||||
}
|
||||
|
||||
const { total, imported } = importFile(db, insertStmt, filePath, logType, parser);
|
||||
totalRecords += total;
|
||||
totalImported += imported;
|
||||
console.log(` ${path.basename(filePath)}: ${imported}/${total} records`);
|
||||
}
|
||||
});
|
||||
|
||||
importBatch();
|
||||
|
||||
console.log(`\nTotal: ${totalImported} records imported (${totalRecords} parsed)`);
|
||||
db.close();
|
||||
|
||||
return { total: totalRecords, imported: totalImported };
|
||||
}
|
||||
|
||||
// Run if called directly
|
||||
if (require.main === module) {
|
||||
// Check for command line arguments
|
||||
const args = process.argv.slice(2);
|
||||
|
||||
if (args.length > 0 && args[0] === '--file') {
|
||||
// Import specific file(s)
|
||||
const files = args.slice(1);
|
||||
if (files.length === 0) {
|
||||
console.log('Usage: node import.js --file <file1> [file2] ...');
|
||||
process.exit(1);
|
||||
}
|
||||
importFiles(files);
|
||||
} else if (args.length > 0 && args[0] === '--help') {
|
||||
console.log('Usage:');
|
||||
console.log(' node import.js Full import from all sources');
|
||||
console.log(' node import.js --file <f> Import specific file(s)');
|
||||
process.exit(0);
|
||||
} else {
|
||||
// Full import
|
||||
runImport().catch(console.error);
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = { runImport, importSingleFile, importFiles };
|
||||
|
||||
16
package-retrieved.json
Normal file
@@ -0,0 +1,16 @@
{
  "name": "testdatadb",
  "version": "1.0.0",
  "description": "Test data database and search interface",
  "main": "server.js",
  "scripts": {
    "start": "node server.js",
    "import": "node database/import.js"
  },
  "dependencies": {
    "better-sqlite3": "^9.4.3",
    "cors": "^2.8.5",
    "express": "^4.18.2"
  }
}
@@ -0,0 +1,448 @@
# Dataforth Test Data Database - System Architecture

**Location:** C:\Shares\testdatadb\ (on AD2 server)
**Created:** 2026-01-13
**Purpose:** Consolidate and search 1M+ test records from DOS machines
**Last Retrieved:** 2026-01-21

---

## Overview

A Node.js web application with a SQLite database that consolidates test data from ~30 DOS QC machines. Data flows automatically from the DOS machines through the NAS to AD2, where it is imported into the database every 15 minutes.

---

## System Architecture

### Technology Stack
- **Database:** SQLite 3 (better-sqlite3)
- **Server:** Node.js + Express.js (port 3000)
- **Import:** Automated via Sync-FromNAS.ps1 PowerShell script
- **Web UI:** HTML/JavaScript search interface
- **Parsers:** Custom parsers for multiple data formats

### Data Flow
```
DOS Machines (CTONW.BAT)
  ↓
NAS: /data/test/TS-XX/LOGS/[LOG_TYPE]/*.DAT
  ↓ (Every 15 minutes)
AD2: C:\Shares\test\TS-XX\LOGS\[LOG_TYPE]\*.DAT
  ↓ (Automated import via Node.js)
SQLite Database: C:\Shares\testdatadb\database\testdata.db
  ↓
Web Interface: http://localhost:3000
```

---

## Database Details

### File Location
`C:\Shares\testdatadb\database\testdata.db`

### Database Type
SQLite 3 (single-file database)

### Current Size
**1,030,940 records** (as of 2026-01-13)

### Table Schema
```sql
test_records (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    log_type TEXT NOT NULL,       -- DSCLOG, 5BLOG, 7BLOG, 8BLOG, PWRLOG, SCTLOG, VASLOG, SHT
    model_number TEXT NOT NULL,   -- DSCA38-1793, SCM5B30-01, etc.
    serial_number TEXT NOT NULL,  -- 176923-1, 105840-2, etc.
    test_date TEXT NOT NULL,      -- YYYY-MM-DD format
    test_station TEXT,            -- TS-1L, TS-3R, etc.
    overall_result TEXT,          -- PASS/FAIL
    raw_data TEXT,                -- Full original record
    source_file TEXT,             -- Original file path
    import_date TEXT DEFAULT (datetime('now')),
    UNIQUE(log_type, model_number, serial_number, test_date, test_station)
)
```

### Indexes
- `idx_serial` - Fast lookup by serial number
- `idx_model` - Fast lookup by model number
- `idx_date` - Fast lookup by test date
- `idx_model_serial` - Combined model + serial lookup
- `idx_result` - Filter by PASS/FAIL
- `idx_log_type` - Filter by log type
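
As an illustration of how these indexes are typically exercised, here is a minimal better-sqlite3 sketch; the query shapes are illustrative only and are not taken from `routes/api.js`:

```javascript
// Sketch: index-friendly queries (illustrative, not copied from api.js).
const Database = require('better-sqlite3');
const db = new Database('C:/Shares/testdatadb/database/testdata.db', { readonly: true });

// A prefix match on serial_number can use idx_serial; a leading-wildcard
// LIKE ('%176923%') would force a full table scan instead.
const bySerial = db.prepare(
  'SELECT * FROM test_records WHERE serial_number LIKE ? ORDER BY test_date DESC'
);
console.log(bySerial.all('176923%').length);

// An exact model + serial lookup can use idx_model_serial.
const byModelSerial = db.prepare(
  'SELECT * FROM test_records WHERE model_number = ? AND serial_number = ?'
);
console.log(byModelSerial.all('DSCA38-1793', '176923-1'));

db.close();
```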

### Full-Text Search
- FTS5 virtual table: `test_records_fts`
- Searches: serial_number, model_number, raw_data
- Automatic sync via triggers
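
A minimal sketch of querying the FTS table with better-sqlite3, assuming the external-content layout from schema.sql (`content='test_records'`, `content_rowid='id'`); the actual api.js query is not shown in this document:

```javascript
// Sketch: full-text lookup via the FTS5 table (assumed query shape).
const Database = require('better-sqlite3');
const db = new Database('C:/Shares/testdatadb/database/testdata.db', { readonly: true });

// MATCH returns rowids that map back to test_records.id because the
// virtual table was declared with content='test_records', content_rowid='id'.
const fullText = db.prepare(`
  SELECT t.*
  FROM test_records_fts
  JOIN test_records t ON t.id = test_records_fts.rowid
  WHERE test_records_fts MATCH ?
  LIMIT 50
`);
console.log(fullText.all('voltage').length);

db.close();
```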

---

## Import System

### Import Script
**Location:** `C:\Shares\testdatadb\database\import.js`
**Language:** Node.js
**Duration:** ~30 minutes for full import (1M+ records)

### Data Sources (Priority Order)
1. **HISTLOGS** - `C:\Shares\test\Ate\HISTLOGS\` (consolidated history)
   - Authoritative source
   - 576,416 records imported
2. **Recovery-TEST** - `C:\Shares\Recovery-TEST\` (backup dates)
   - Multiple backup dates (12-13-25 to 12-18-25)
   - 454,383 records imported from 12-18-25
3. **Live Data** - `C:\Shares\test\TS-XX\LOGS\`
   - Current test station logs
   - 59 records imported (rest are duplicates)

### Automated Import (via Sync-FromNAS.ps1)
**Configuration in Sync-FromNAS.ps1 (lines 46-48):**
```powershell
$IMPORT_SCRIPT = "C:\Shares\testdatadb\database\import.js"
$NODE_PATH = "node"
```

**Import Function (lines 122-141):**
```powershell
function Import-ToDatabase {
    param([string[]]$FilePaths)

    if ($FilePaths.Count -eq 0) { return }

    Write-Log "Importing $($FilePaths.Count) file(s) to database..."

    # Build argument list
    $args = @("$IMPORT_SCRIPT", "--file") + $FilePaths

    try {
        $output = & $NODE_PATH $args 2>&1
        foreach ($line in $output) {
            Write-Log "  [DB] $line"
        }
        Write-Log "Database import complete"
    } catch {
        Write-Log "ERROR: Database import failed: $_"
    }
}
```

**Trigger:** Every 15 minutes when new DAT files are synced from NAS
**Process:** Sync-FromNAS.ps1 → import.js --file [file1] [file2] ... → SQLite insert

---

## Supported Log Types

| Log Type | Description | Format | Parser | Records |
|----------|-------------|--------|--------|---------|
| 5BLOG | 5B product line | Multi-line DAT | multiline | 425,378 |
| 7BLOG | 7B product line | CSV DAT | csvline | 262,404 |
| DSCLOG | DSC product line | Multi-line DAT | multiline | 181,160 |
| 8BLOG | 8B product line | Multi-line DAT | multiline | 135,858 |
| PWRLOG | Power tests | Multi-line DAT | multiline | 12,374 |
| VASLOG | VAS tests | Multi-line DAT | multiline | 10,327 |
| SCTLOG | SCT product line | Multi-line DAT | multiline | 3,439 |
| SHT | Test sheets | SHT format | shtfile | (varies) |

---

## Project Structure

```
C:\Shares\testdatadb\
├── database/
│   ├── testdata.db          # SQLite database (1M+ records)
│   ├── import.js            # Import script (12,774 bytes)
│   └── schema.sql           # Database schema with FTS5
├── parsers/
│   ├── multiline.js         # Parser for multi-line DAT files
│   ├── csvline.js           # Parser for 7BLOG CSV format
│   └── shtfile.js           # Parser for SHT test sheets
├── public/
│   └── index.html           # Web search interface
├── routes/
│   └── api.js               # API endpoints
├── templates/
│   └── datasheet.js         # Datasheet generator
├── node_modules/            # Dependencies
├── package.json             # Node.js project file (342 bytes)
├── package-lock.json        # Dependency lock file (43,983 bytes)
├── server.js                # Express.js server (1,443 bytes)
├── QUICKSTART.md            # Quick start guide (1,019 bytes)
├── SESSION_NOTES.md         # Complete session notes (4,788 bytes)
└── start-server.bat         # Windows startup script (97 bytes)
```

---

## Web Interface

### Starting the Server
```bash
cd C:\Shares\testdatadb
node server.js
```
**Access:** http://localhost:3000 (on AD2)

### API Endpoints

**Search:**
```
GET /api/search?serial=...&model=...&from=...&to=...&result=...&q=...
```
- `serial` - Serial number (partial match)
- `model` - Model number (partial match)
- `from` - Start date (YYYY-MM-DD)
- `to` - End date (YYYY-MM-DD)
- `result` - PASS or FAIL
- `q` - Full-text search in raw data
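
The actual `routes/api.js` is not reproduced in this document; purely as an illustration, a handler for these parameters could assemble a parameterized WHERE clause along these lines (everything beyond the documented query parameters is an assumption):

```javascript
// Hypothetical sketch of the /api/search handler; the real routes/api.js
// is not shown here, so treat names and structure as assumptions.
const express = require('express');
const Database = require('better-sqlite3');

const router = express.Router();
const db = new Database('C:/Shares/testdatadb/database/testdata.db', { readonly: true });

router.get('/search', (req, res) => {
  const { serial, model, from, to, result } = req.query;
  const where = [];
  const params = [];

  if (serial) { where.push('serial_number LIKE ?'); params.push(`%${serial}%`); }
  if (model)  { where.push('model_number LIKE ?');  params.push(`%${model}%`); }
  if (from)   { where.push('test_date >= ?');       params.push(from); }
  if (to)     { where.push('test_date <= ?');       params.push(to); }
  if (result) { where.push('overall_result = ?');   params.push(result); }

  // The q parameter would be handled via the FTS5 table sketched earlier.
  const sql = 'SELECT * FROM test_records'
    + (where.length ? ' WHERE ' + where.join(' AND ') : '')
    + ' ORDER BY test_date DESC LIMIT 500';

  res.json(db.prepare(sql).all(...params));
});

module.exports = router;
```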

**Record Details:**
```
GET /api/record/:id
```

**Generate Datasheet:**
```
GET /api/datasheet/:id
```

**Database Statistics:**
```
GET /api/stats
```

**Export to CSV:**
```
GET /api/export?format=csv
```

---

## Database Statistics (as of 2026-01-13)

### Total Records
**1,030,940 records**

### Date Range
**1990 to November 2025** (35 years of test data)

### Pass/Fail Distribution
- **PASS:** 1,029,046 (99.82%)
- **FAIL:** 1,888 (0.18%)
- **UNKNOWN:** 6 (0.0006%)

### Test Stations
TS-1L, TS-3R, TS-4L, TS-4R, TS-8R, TS-10L, TS-11L, and others

---

## Manual Operations

### Full Re-Import (if needed)
```bash
cd C:\Shares\testdatadb
del database\testdata.db
node database\import.js
```
**Duration:** ~30 minutes
**When needed:** Parser updates, schema changes, corruption recovery

### Incremental Import (single file)
```bash
node database\import.js --file C:\Shares\test\TS-4R\LOGS\8BLOG\test.DAT
```

### Incremental Import (multiple files)
```bash
node database\import.js --file file1.DAT file2.DAT file3.DAT
```
This is the method used by Sync-FromNAS.ps1.

---

## Integration with DOS Update System

### CTONW.BAT (v1.2+)
**Purpose:** Upload test data from DOS machines to the NAS

**Test Data Upload (lines 234-272):**
```batch
ECHO [3/3] Uploading test data to LOGS...

REM Create log subdirectories
IF NOT EXIST %LOGSDIR%\8BLOG\NUL MD %LOGSDIR%\8BLOG
IF NOT EXIST %LOGSDIR%\DSCLOG\NUL MD %LOGSDIR%\DSCLOG
IF NOT EXIST %LOGSDIR%\HVLOG\NUL MD %LOGSDIR%\HVLOG
IF NOT EXIST %LOGSDIR%\PWRLOG\NUL MD %LOGSDIR%\PWRLOG
IF NOT EXIST %LOGSDIR%\RMSLOG\NUL MD %LOGSDIR%\RMSLOG
IF NOT EXIST %LOGSDIR%\7BLOG\NUL MD %LOGSDIR%\7BLOG

REM Upload test data files to appropriate log folders
IF EXIST C:\ATE\8BDATA\NUL XCOPY C:\ATE\8BDATA\*.DAT %LOGSDIR%\8BLOG\ /Y /Q
IF EXIST C:\ATE\DSCDATA\NUL XCOPY C:\ATE\DSCDATA\*.DAT %LOGSDIR%\DSCLOG\ /Y /Q
IF EXIST C:\ATE\HVDATA\NUL XCOPY C:\ATE\HVDATA\*.DAT %LOGSDIR%\HVLOG\ /Y /Q
IF EXIST C:\ATE\PWRDATA\NUL XCOPY C:\ATE\PWRDATA\*.DAT %LOGSDIR%\PWRLOG\ /Y /Q
IF EXIST C:\ATE\RMSDATA\NUL XCOPY C:\ATE\RMSDATA\*.DAT %LOGSDIR%\RMSLOG\ /Y /Q
IF EXIST C:\ATE\7BDATA\NUL XCOPY C:\ATE\7BDATA\*.DAT %LOGSDIR%\7BLOG\ /Y /Q
```

**Target:** `T:\TS-4R\LOGS\8BLOG\` (on NAS)

### Sync-FromNAS.ps1
**Schedule:** Every 15 minutes via Windows Task Scheduler
**PULL Operation (lines 157-213):**
1. Find new DAT files on the NAS (modified in the last 24 hours)
2. Copy to AD2: `C:\Shares\test\TS-XX\LOGS\[LOG_TYPE]\`
3. Import to the database via `import.js --file [files]`
4. Delete from the NAS after successful import

**Result:** Test data flows automatically from DOS → NAS → AD2 → Database

---

## Deduplication

**Unique Key:** (log_type, model_number, serial_number, test_date, test_station)

**Effect:** The same test result imported multiple times (from HISTLOGS, Recovery backups, and live data) is stored only once. Because HISTLOGS is imported first and inserts use INSERT OR IGNORE, the first-seen copy wins and later duplicates are silently skipped.

**Example:**
```
Record in HISTLOGS:                     DSCLOG, DSCA38-1793, 173672-1, 2025-02-15, TS-4R
Same record in Recovery-TEST/12-18-25:  DSCLOG, DSCA38-1793, 173672-1, 2025-02-15, TS-4R
Result: Only 1 record in database
```
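
This behavior falls out of the `INSERT OR IGNORE` statement in import.js combined with the UNIQUE constraint; a minimal sketch of the mechanism using an in-memory database:

```javascript
// Minimal sketch of the dedup mechanism: a second insert of the same
// unique key is ignored, and result.changes reports 0.
const Database = require('better-sqlite3');
const db = new Database(':memory:');

db.exec(`CREATE TABLE test_records (
  log_type TEXT, model_number TEXT, serial_number TEXT,
  test_date TEXT, test_station TEXT,
  UNIQUE(log_type, model_number, serial_number, test_date, test_station)
)`);

const ins = db.prepare('INSERT OR IGNORE INTO test_records VALUES (?, ?, ?, ?, ?)');

const row = ['DSCLOG', 'DSCA38-1793', '173672-1', '2025-02-15', 'TS-4R'];
console.log(ins.run(...row).changes); // 1 - first copy (e.g. from HISTLOGS) inserted
console.log(ins.run(...row).changes); // 0 - duplicate (e.g. from Recovery-TEST) ignored

db.close();
```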

---

## Search Examples

### By Serial Number
```
http://localhost:3000/api/search?serial=176923
```
Returns all records with serial numbers containing "176923"

### By Model Number
```
http://localhost:3000/api/search?model=DSCA38-1793
```
Returns all records for model DSCA38-1793

### By Date Range
```
http://localhost:3000/api/search?from=2025-01-01&to=2025-12-31
```
Returns all records tested in 2025

### By Pass/Fail Status
```
http://localhost:3000/api/search?result=FAIL
```
Returns all failed tests (1,888 records)

### Full-Text Search
```
http://localhost:3000/api/search?q=voltage
```
Searches within raw test data for "voltage"

### Combined Search
```
http://localhost:3000/api/search?model=DSCA38&result=PASS&from=2025-01-01
```
All passed tests for DSCA38 models since Jan 1, 2025
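
Since server.js binds to 0.0.0.0, the same searches can be scripted from another machine on the LAN; a small sketch using Node 18+'s built-in fetch (the JSON response shape is an assumption, not documented here):

```javascript
// Sketch: scripted search against the API (Node 18+, built-in fetch).
const BASE = 'http://192.168.0.6:3000';

async function search(params) {
  const qs = new URLSearchParams(params).toString();
  const res = await fetch(`${BASE}/api/search?${qs}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

search({ model: 'DSCA38', result: 'PASS', from: '2025-01-01' })
  .then(rows => console.log(`Found ${rows.length} records`))
  .catch(err => console.error(err.message));
```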

---

## Known Issues

### Model Number Parsing
- The parser was updated after the initial import
- To fix: delete testdata.db and re-run the full import
- Impact: model number searches may not be accurate for all records

### Performance
- Full import: ~30 minutes
- Database file: ~112 KB as last noted (SQLite does not compress data, so this figure looks low for 1M+ records)
- Search performance: very fast (indexed queries)

---

## Maintenance

### Regular Tasks
- **None required** - the database updates automatically via Sync-FromNAS.ps1

### Occasional Tasks
- **Re-import** - after parser updates or schema changes (delete testdata.db and re-run)
- **Backup** - copy testdata.db to a backup location periodically

### Monitoring
- Check the Sync-FromNAS.ps1 log: `C:\Shares\test\scripts\sync-from-nas.log`
- Check sync status: `C:\Shares\test\_SYNC_STATUS.txt`
- View database stats: http://localhost:3000/api/stats (when the server is running)

---

## Original Use Case (2026-01-13)

**Request:** Search for serial numbers 176923-1 through 176923-26 for model DSCA38-1793

**Result:** **NOT FOUND** - these devices haven't been tested yet

**Most Recent Serials:** 173672-x, 173681-x (February 2025)

**Outcome:** Database created to enable easy searching of 1M+ test records going back to 1990

---

## Dependencies (from package.json)

- **better-sqlite3** - Fast SQLite database driver
- **express** - Web server framework
- **cors** - Cross-origin request support
- **Node.js** - Runtime required to run the application (not itself a package.json dependency)

---

## Connection Information

**Server:** AD2 (192.168.0.6)
**Database Path:** C:\Shares\testdatadb\database\testdata.db
**Web Server Port:** 3000 (http://localhost:3000 when running)
**Access:** Local only (on AD2 server)

**To Access:**
1. SSH or RDP to AD2 (192.168.0.6)
2. Start the server: `cd C:\Shares\testdatadb && node server.js`
3. Open a browser: http://localhost:3000

---

## Related Documentation

**In ClaudeTools:**
- `CTONW_V1.2_CHANGELOG.md` - Test data routing to LOGS folders
- `DOS_DEPLOYMENT_STATUS.md` - Database import workflow
- `Sync-FromNAS-retrieved.ps1` - Complete sync script with database import
- `import-js-retrieved.js` - Complete import script
- `schema-retrieved.sql` - Database schema
- `QUICKSTART-retrieved.md` - Quick start guide
- `SESSION_NOTES-retrieved.md` - Complete session notes

**On AD2:**
- `C:\Shares\testdatadb\QUICKSTART.md`
- `C:\Shares\testdatadb\SESSION_NOTES.md`
- `C:\Shares\test\scripts\Sync-FromNAS.ps1`

---

**Created:** 2026-01-13
**Last Updated:** 2026-01-21
**Status:** Production - Operational
**Automation:** Complete (test data imports automatically every 15 minutes)
15
restore-original.ps1
Normal file
@@ -0,0 +1,15 @@
# Restore original api.js from backup
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Mounting AD2 C$ share..." -ForegroundColor Green
New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null

Write-Host "[OK] Restoring original api.js from backup..." -ForegroundColor Yellow
Copy-Item "AD2:\Shares\testdatadb\routes\api.js.backup-2026-01-21-124402" "AD2:\Shares\testdatadb\routes\api.js" -Force

Write-Host "[OK] Original file restored" -ForegroundColor Green
Write-Host "[ACTION] Restart Node.js: taskkill /F /IM node.exe && cd C:\Shares\testdatadb && node server.js" -ForegroundColor Yellow

Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue
Write-Host "[OK] Done" -ForegroundColor Green
53
schema-retrieved.sql
Normal file
@@ -0,0 +1,53 @@
-- Test Data Database Schema
-- SQLite database for storing and searching test records

-- Main test records table
CREATE TABLE IF NOT EXISTS test_records (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    log_type TEXT NOT NULL,       -- DSCLOG, 5BLOG, 7BLOG, 8BLOG, PWRLOG, SCTLOG, VASLOG, SHT
    model_number TEXT NOT NULL,   -- DSCA38-1793, SCM5B30-01, etc.
    serial_number TEXT NOT NULL,  -- 176923-1, 105840-2, etc.
    test_date TEXT NOT NULL,      -- Test date (YYYY-MM-DD format)
    test_station TEXT,            -- TS-1L, TS-3R, etc.
    overall_result TEXT,          -- PASS/FAIL
    raw_data TEXT,                -- Full original record
    source_file TEXT,             -- Original file path
    import_date TEXT DEFAULT (datetime('now')),
    UNIQUE(log_type, model_number, serial_number, test_date, test_station)
);

-- Indexes for fast searching
CREATE INDEX IF NOT EXISTS idx_serial ON test_records(serial_number);
CREATE INDEX IF NOT EXISTS idx_model ON test_records(model_number);
CREATE INDEX IF NOT EXISTS idx_date ON test_records(test_date);
CREATE INDEX IF NOT EXISTS idx_model_serial ON test_records(model_number, serial_number);
CREATE INDEX IF NOT EXISTS idx_result ON test_records(overall_result);
CREATE INDEX IF NOT EXISTS idx_log_type ON test_records(log_type);

-- Full-text search virtual table
CREATE VIRTUAL TABLE IF NOT EXISTS test_records_fts USING fts5(
    serial_number,
    model_number,
    raw_data,
    content='test_records',
    content_rowid='id'
);

-- Triggers to keep FTS index in sync
CREATE TRIGGER IF NOT EXISTS test_records_ai AFTER INSERT ON test_records BEGIN
    INSERT INTO test_records_fts(rowid, serial_number, model_number, raw_data)
    VALUES (new.id, new.serial_number, new.model_number, new.raw_data);
END;

CREATE TRIGGER IF NOT EXISTS test_records_ad AFTER DELETE ON test_records BEGIN
    INSERT INTO test_records_fts(test_records_fts, rowid, serial_number, model_number, raw_data)
    VALUES ('delete', old.id, old.serial_number, old.model_number, old.raw_data);
END;

CREATE TRIGGER IF NOT EXISTS test_records_au AFTER UPDATE ON test_records BEGIN
    INSERT INTO test_records_fts(test_records_fts, rowid, serial_number, model_number, raw_data)
    VALUES ('delete', old.id, old.serial_number, old.model_number, old.raw_data);
    INSERT INTO test_records_fts(rowid, serial_number, model_number, raw_data)
    VALUES (new.id, new.serial_number, new.model_number, new.raw_data);
END;
47
server-js-retrieved.js
Normal file
@@ -0,0 +1,47 @@
/**
 * Test Data Database Server
 * Express.js server with search API and web interface
 */

const express = require('express');
const cors = require('cors');
const path = require('path');

const apiRoutes = require('./routes/api');

const app = express();
const PORT = process.env.PORT || 3000;

// Middleware
app.use(cors());
app.use(express.json());
app.use(express.static(path.join(__dirname, 'public')));

// API routes
app.use('/api', apiRoutes);

// Serve index.html for root
app.get('/', (req, res) => {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

// Start server - bind to 0.0.0.0 for LAN access
const HOST = '0.0.0.0';
app.listen(PORT, HOST, () => {
  console.log(`\n========================================`);
  console.log(`Test Data Database Server`);
  console.log(`========================================`);
  console.log(`Server running on all interfaces (${HOST}:${PORT})`);
  console.log(`Local: http://localhost:${PORT}`);
  console.log(`LAN:   http://192.168.0.6:${PORT}`);
  console.log(`API endpoints:`);
  console.log(`  GET /api/search?serial=...&model=...`);
  console.log(`  GET /api/record/:id`);
  console.log(`  GET /api/datasheet/:id`);
  console.log(`  GET /api/stats`);
  console.log(`  GET /api/export?format=csv&...`);
  console.log(`========================================\n`);
});

module.exports = app;
41
simple-testdb-check.ps1
Normal file
@@ -0,0 +1,41 @@
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Mounting AD2 C$ share..."
New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null

Write-Host "[OK] Checking testdatadb..."
if (Test-Path "AD2:\Shares\testdatadb") {
    Write-Host "  [FOUND] testdatadb exists"

    Write-Host "`n[OK] Top-level contents:"
    Get-ChildItem "AD2:\Shares\testdatadb" | Select-Object Mode, Name, Length

    Write-Host "`n[OK] Database folder contents:"
    if (Test-Path "AD2:\Shares\testdatadb\database") {
        Get-ChildItem "AD2:\Shares\testdatadb\database" | Select-Object Mode, Name, Length
    }

    Write-Host "`n[OK] Retrieving import.js..."
    if (Test-Path "AD2:\Shares\testdatadb\database\import.js") {
        $importJs = Get-Content "AD2:\Shares\testdatadb\database\import.js" -Raw
        $importJs | Out-File "D:\ClaudeTools\import-js-retrieved.js" -Encoding UTF8
        Write-Host "  [OK] Saved to D:\ClaudeTools\import-js-retrieved.js"
        Write-Host "  [INFO] File size: $($importJs.Length) bytes"
    }

    Write-Host "`n[OK] Checking for database file..."
    $dbFiles = Get-ChildItem "AD2:\Shares\testdatadb" -Recurse -Include "*.db","*.sqlite","*.sqlite3" -ErrorAction SilentlyContinue
    if ($dbFiles) {
        Write-Host "  [FOUND] Database files:"
        $dbFiles | Select-Object FullName, Length
    } else {
        Write-Host "  [INFO] No .db or .sqlite files found"
    }

} else {
    Write-Host "  [NOT FOUND] testdatadb does not exist"
}

Remove-PSDrive -Name AD2
Write-Host "`n[OK] Done"
13
temp-search-ad2-database.ps1
Normal file
@@ -0,0 +1,13 @@
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Searching for database folders on AD2..."
Invoke-Command -ComputerName 192.168.0.6 -Credential $cred -ScriptBlock {
    Get-ChildItem C:\ -Directory | Where-Object Name -like '*database*' | Select-Object FullName
}

Write-Host "`n[OK] Checking Sync-FromNAS.ps1 for database references..."
Invoke-Command -ComputerName 192.168.0.6 -Credential $cred -ScriptBlock {
    $content = Get-Content C:\Shares\test\scripts\Sync-FromNAS.ps1 -Raw
    $content | Select-String -Pattern "(database|sql|connection|sqlite|mysql|postgres)" -AllMatches -Context 5,5 | Select-Object -First 10
}
47
test-api-endpoint.ps1
Normal file
@@ -0,0 +1,47 @@
# Test the API endpoint directly
Write-Host "[OK] Testing API endpoints on AD2..." -ForegroundColor Green

# Test 1: Stats endpoint
Write-Host "`n[TEST 1] GET /api/stats" -ForegroundColor Cyan
try {
    $response = Invoke-WebRequest -Uri "http://192.168.0.6:3000/api/stats" -Method GET -UseBasicParsing -ErrorAction Stop
    Write-Host "[OK] Status: $($response.StatusCode)" -ForegroundColor Green
    Write-Host "[OK] Response:" -ForegroundColor Green
    $response.Content | ConvertFrom-Json | ConvertTo-Json -Depth 5
} catch {
    Write-Host "[ERROR] Request failed: $($_.Exception.Message)" -ForegroundColor Red
    Write-Host "[ERROR] Response: $($_.Exception.Response)" -ForegroundColor Red
    if ($_.ErrorDetails.Message) {
        Write-Host "[ERROR] Details: $($_.ErrorDetails.Message)" -ForegroundColor Red
    }
}

# Test 2: Search endpoint (simple)
Write-Host "`n[TEST 2] GET /api/search?limit=1" -ForegroundColor Cyan
try {
    $response = Invoke-WebRequest -Uri "http://192.168.0.6:3000/api/search?limit=1" -Method GET -UseBasicParsing -ErrorAction Stop
    Write-Host "[OK] Status: $($response.StatusCode)" -ForegroundColor Green
    Write-Host "[OK] Response:" -ForegroundColor Green
    $response.Content | ConvertFrom-Json | ConvertTo-Json -Depth 5
} catch {
    Write-Host "[ERROR] Request failed: $($_.Exception.Message)" -ForegroundColor Red
    if ($_.ErrorDetails.Message) {
        Write-Host "[ERROR] Details: $($_.ErrorDetails.Message)" -ForegroundColor Red
    }
}

# Test 3: Record by ID
Write-Host "`n[TEST 3] GET /api/record/1" -ForegroundColor Cyan
try {
    $response = Invoke-WebRequest -Uri "http://192.168.0.6:3000/api/record/1" -Method GET -UseBasicParsing -ErrorAction Stop
    Write-Host "[OK] Status: $($response.StatusCode)" -ForegroundColor Green
    Write-Host "[OK] Response:" -ForegroundColor Green
    $response.Content | ConvertFrom-Json | ConvertTo-Json -Depth 5
} catch {
    Write-Host "[ERROR] Request failed: $($_.Exception.Message)" -ForegroundColor Red
    if ($_.ErrorDetails.Message) {
        Write-Host "[ERROR] Details: $($_.ErrorDetails.Message)" -ForegroundColor Red
    }
}

Write-Host "`n[OK] Done" -ForegroundColor Green
72
test-db-directly.ps1
Normal file
@@ -0,0 +1,72 @@
# Test database directly to see if it's corrupted or locked
$password = ConvertTo-SecureString 'Paper123!@#' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('INTRANET\sysadmin', $password)

Write-Host "[OK] Mounting AD2 C$ share..." -ForegroundColor Green
New-PSDrive -Name AD2 -PSProvider FileSystem -Root "\\192.168.0.6\C$" -Credential $cred -ErrorAction Stop | Out-Null

$dbPath = "AD2:\Shares\testdatadb\database\testdata.db"

Write-Host "[OK] Testing database file access..." -ForegroundColor Green

# Test 1: File exists and readable
if (Test-Path $dbPath) {
    Write-Host "  [OK] Database file exists" -ForegroundColor Green

    $dbFile = Get-Item $dbPath
    Write-Host "  [OK] Size: $([math]::Round($dbFile.Length/1MB,2)) MB" -ForegroundColor Cyan
    Write-Host "  [OK] Modified: $($dbFile.LastWriteTime)" -ForegroundColor Cyan
} else {
    Write-Host "  [ERROR] Database file not found!" -ForegroundColor Red
    Remove-PSDrive -Name AD2
    exit 1
}

# Test 2: Check file lock
Write-Host "`n[OK] Checking if file is locked..." -ForegroundColor Green
try {
    $stream = [System.IO.File]::Open($dbFile.FullName, 'Open', 'Read', 'ReadWrite')
    $stream.Close()
    Write-Host "  [OK] File is not locked" -ForegroundColor Green
} catch {
    Write-Host "  [WARNING] File may be locked by another process" -ForegroundColor Yellow
    Write-Host "  Error: $($_.Exception.Message)" -ForegroundColor Yellow
}

# Test 3: Check file header (SQLite magic bytes)
Write-Host "`n[OK] Checking SQLite file header..." -ForegroundColor Green
try {
    $bytes = [System.IO.File]::ReadAllBytes($dbFile.FullName) | Select-Object -First 16
    $header = [System.Text.Encoding]::ASCII.GetString($bytes)

    if ($header.StartsWith("SQLite format 3")) {
        Write-Host "  [OK] Valid SQLite database header detected" -ForegroundColor Green
    } else {
        Write-Host "  [ERROR] Invalid SQLite header! Database may be corrupted" -ForegroundColor Red
        Write-Host "  Header bytes: $($bytes -join ' ')" -ForegroundColor Yellow
    }
} catch {
    Write-Host "  [ERROR] Cannot read database file: $($_.Exception.Message)" -ForegroundColor Red
}

# Test 4: Check permissions
Write-Host "`n[OK] Checking file permissions..." -ForegroundColor Green
$acl = Get-Acl $dbPath
Write-Host "  Owner: $($acl.Owner)" -ForegroundColor Cyan
Write-Host "  Access Rules:" -ForegroundColor Cyan
$acl.Access | ForEach-Object {
    Write-Host "    $($_.IdentityReference): $($_.FileSystemRights) ($($_.AccessControlType))" -ForegroundColor Gray
}

# Test 5: Check if there are journal files
Write-Host "`n[OK] Checking for journal files..." -ForegroundColor Green
$journalFile = Get-Item "$($dbFile.DirectoryName)\testdata.db-journal" -ErrorAction SilentlyContinue
if ($journalFile) {
    Write-Host "  [FOUND] Journal file exists: $([math]::Round($journalFile.Length/1KB,2)) KB" -ForegroundColor Yellow
    Write-Host "  [WARNING] This may indicate incomplete transaction or crash" -ForegroundColor Yellow
} else {
    Write-Host "  [OK] No journal file (clean state)" -ForegroundColor Green
}

Remove-PSDrive -Name AD2 -ErrorAction SilentlyContinue
Write-Host "`n[OK] Done" -ForegroundColor Green
43
test-query.js
Normal file
@@ -0,0 +1,43 @@
// Simple test to query the database directly
const Database = require('better-sqlite3');
const path = require('path');

console.log('[OK] Testing database connection...');

try {
  // Use the same path as the server
  const dbPath = 'C:\\Shares\\testdatadb\\database\\testdata.db';

  console.log(`[OK] Opening database: ${dbPath}`);
  const db = new Database(dbPath, { readonly: true });

  console.log('[OK] Database opened successfully');

  // Test simple query
  console.log('[OK] Running test query: SELECT COUNT(*) FROM test_records');
  const result = db.prepare('SELECT COUNT(*) as count FROM test_records').get();

  console.log(`[OK] Total records: ${result.count}`);

  // Test another query
  console.log('[OK] Running test query: SELECT * FROM test_records LIMIT 1');
  const record = db.prepare('SELECT * FROM test_records LIMIT 1').get();

  if (record) {
    console.log('[OK] Sample record retrieved:');
    console.log(`  ID: ${record.id}`);
    console.log(`  Model: ${record.model_number}`);
    console.log(`  Serial: ${record.serial_number}`);
    console.log(`  Date: ${record.test_date}`);
  }

  db.close();
  console.log('[OK] Database closed successfully');
  console.log('[SUCCESS] Database is working correctly!');

} catch (err) {
  console.error('[ERROR] Database test failed:');
  console.error(err.message);
  console.error(err.stack);
  process.exit(1);
}