# Wrightstown Smart Home - Session Log 2026-02-09

**Session:** Initial Research and Planning

**Machine:** ACG-M-L5090

---

## Work Completed

### 1. Project Scope Defined

- Privacy-first smart home (no Google/Alexa)
- Home Assistant Yellow (already owned, not set up)
- Local LLM server for private queries
- Hybrid bridge: local + Claude API + Grok API
- Separate project from Wrightstown Solar, with planned crossover

### 2. Home Assistant Yellow Research
- Yellow still receives updates despite production ending Oct 2025
- Needs a CM4 or CM5 compute module (not included)
- Built-in Zigbee 3.0 radio, M.2 NVMe slot
- Local voice: Wyoming + Whisper + Piper (all local, no cloud)
- Victron integration planned via Modbus TCP (future crossover)

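The Wyoming voice stack above is typically deployed as small networked services. A minimal container sketch, assuming the standard rhasspy images and default Wyoming ports (verify image tags, voices, and ports against whatever you actually deploy):

```yaml
# Wyoming speech services for Home Assistant local voice (illustrative sketch)
services:
  whisper:                               # speech-to-text
    image: rhasspy/wyoming-whisper
    command: --model tiny-int8 --language en
    ports:
      - "10300:10300"
  piper:                                 # text-to-speech
    image: rhasspy/wyoming-piper
    command: --voice en_US-lessac-medium
    ports:
      - "10200:10200"
```

Home Assistant then connects to each service through its Wyoming integration by host and port; nothing leaves the LAN.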
### 3. Local LLM Server Research
- Recommended build: RTX 4090 24GB ($1,940-2,240)
- Software: Ollama (primary) + Open WebUI (interface)
- Models: Qwen 2.5 7B (fast voice), Llama 3.1 70B Q4 (reasoning)
- Alternative builds researched: budget ($580), flagship ($4,000+), Mac Mini M4

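A back-of-envelope check on why 24GB of VRAM pairs with these two models (the 0.5 bytes/parameter rule of thumb for Q4 is an approximation that ignores KV cache and activation overhead):

```python
# Rough VRAM estimate for Q4-quantized models: ~0.5 bytes per parameter
# (weights only; KV cache and activations add more on top).

def q4_weight_gb(params_billions: float) -> float:
    """Approximate weight footprint of a Q4-quantized model in GB."""
    return params_billions * 1e9 * 0.5 / 1e9  # 0.5 bytes/parameter

qwen_7b = q4_weight_gb(7)     # ~3.5 GB: fits on a 24 GB card with headroom
llama_70b = q4_weight_gb(70)  # ~35 GB: exceeds 24 GB of VRAM

print(f"Qwen 2.5 7B Q4:   ~{qwen_7b:.1f} GB")
print(f"Llama 3.1 70B Q4: ~{llama_70b:.1f} GB")
```

Since 70B Q4 weights exceed 24 GB, Ollama offloads the remaining layers to system RAM; slower, but acceptable for non-realtime reasoning queries, while the 7B model stays fully on-GPU for fast voice responses.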
### 4. Hybrid Bridge Design
- LiteLLM proxy as unified routing layer
- Routes: local (Ollama) / Claude API (reasoning) / Grok API (search)
- 80/15/5 local/Claude/Grok split, estimated at ~$5/month in cloud costs
- Manual routing first, keyword-based later, semantic routing eventually
- Integration with HA via Extended OpenAI Conversation

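The keyword-based middle stage of that routing progression could start as small as this sketch; the tier names and keyword lists are illustrative assumptions, not something from the research docs:

```python
# Minimal keyword-based router for the hybrid bridge (illustrative sketch).
# Returns which backend a query should go to: "local", "claude", or "grok".

SEARCH_KEYWORDS = {"search", "news", "latest", "today", "current"}
REASONING_KEYWORDS = {"analyze", "compare", "plan", "design", "explain why"}

def route(query: str) -> str:
    q = query.lower()
    if any(kw in q for kw in SEARCH_KEYWORDS):
        return "grok"      # needs live internet access
    if any(kw in q for kw in REASONING_KEYWORDS):
        return "claude"    # heavyweight reasoning
    return "local"         # default: private, free, fast (Ollama)

print(route("what's the latest solar inverter news?"))  # grok
print(route("compare LiFePO4 vs NMC chemistry"))        # claude
print(route("turn off the kitchen lights"))             # local
```

Defaulting to local is the privacy-preserving choice: a query only goes to the cloud when a keyword positively signals it needs to.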
### 5. Network Security Design
- 4-VLAN architecture: Trusted / Infrastructure / IoT / Guest
- IoT isolation (devices can't reach the trusted network)
- PII sanitization pipeline for cloud-bound queries
- Private data (cameras, sensors, presence) stays local only
- Remote access via Tailscale or WireGuard

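The PII sanitization pass for cloud-bound queries might look like this regex-based sketch; the patterns and placeholder tokens are assumptions, and a real pipeline would need a much broader pattern set (names, addresses, device IDs):

```python
import re

# Redact common PII patterns before a query leaves the LAN (illustrative).
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[IP]"),
]

def sanitize(text: str) -> str:
    """Replace each matched PII pattern with its placeholder token."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(sanitize("email bob@example.com from 192.168.1.50"))
# email [EMAIL] from [IP]
```

This would sit in front of the LiteLLM routes that leave the network; local (Ollama) queries can skip it entirely.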
---

## Decisions Made

| Decision | Chosen | Rationale |
|---|---|---|
| Smart home platform | Home Assistant | Open source, local-first, 2000+ integrations |
| Hardware | HA Yellow (owned) | Built-in Zigbee, already purchased |
| Voice assistant | Wyoming/Whisper/Piper | Fully local, no cloud dependency |
| LLM server GPU | RTX 4090 24GB (recommended) | Best price/performance for 70B models |
| LLM software | Ollama | Simplest, OpenAI-compatible API |
| Hybrid routing | LiteLLM | Unified API, cost tracking, fallbacks |
| Cloud: reasoning | Claude API (Anthropic) | Best reasoning quality |
| Cloud: search | Grok API (xAI) | Internet access, 2M context |
| Network | VLAN segmentation | IoT isolation, privacy |
| Separate from solar | Yes | Different timelines, crossover later |

---
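The ~$5/month estimate follows mechanically from the 80/15/5 split once you assume a query volume and per-query cloud prices; every input below is an illustrative assumption, not a measured number:

```python
# Rough monthly cloud-cost check for the 80/15/5 split (all inputs assumed).
queries_per_day = 100                       # assumed total assistant queries/day
split = {"local": 0.80, "claude": 0.15, "grok": 0.05}
cost_per_query = {"local": 0.0, "claude": 0.01, "grok": 0.005}  # assumed USD

monthly = sum(
    queries_per_day * 30 * share * cost_per_query[tier]
    for tier, share in split.items()
)
print(f"~${monthly:.2f}/month")  # ~$5.25/month under these assumptions
```

The useful takeaway is the structure, not the exact figure: cloud cost scales linearly with the non-local share, so pushing more traffic to Ollama directly shrinks the bill.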

## Files Created

- `projects/wrightstown-smarthome/PROJECT_INDEX.md`
- `projects/wrightstown-smarthome/documentation/ha-yellow-setup.md`
- `projects/wrightstown-smarthome/documentation/llm-server-build.md`
- `projects/wrightstown-smarthome/documentation/hybrid-bridge.md`
- `projects/wrightstown-smarthome/documentation/network-security.md`
- `projects/wrightstown-smarthome/session-logs/2026-02-09-session.md` (this file)

---

## Next Steps

- [ ] Check if CM4/CM5 module is owned or needs purchasing
- [ ] Set up HA Yellow (basic install, Zigbee, first automations)
- [ ] Research specific Zigbee devices to purchase
- [ ] Decide on LLM server GPU budget (RTX 3060 budget build vs RTX 4090 sweet spot)
- [ ] Purchase LLM server hardware
- [ ] Decide on VLAN hardware (TP-Link Omada vs Ubiquiti UniFi)
- [ ] Set up Ollama + Open WebUI
- [ ] Create Anthropic API account + Grok API account
- [ ] Configure LiteLLM proxy
- [ ] Integrate with HA via Extended OpenAI Conversation
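For the "Configure LiteLLM proxy" step, a starting-point config might look like this sketch; the model IDs, local port, and env-var names are assumptions to be replaced with the real values at setup time:

```yaml
# LiteLLM proxy config sketch for the three routes (values illustrative)
model_list:
  - model_name: local-fast
    litellm_params:
      model: ollama/qwen2.5:7b
      api_base: http://localhost:11434   # default Ollama port
  - model_name: cloud-reasoning
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: cloud-search
    litellm_params:
      model: xai/grok-beta
      api_key: os.environ/XAI_API_KEY
```

Extended OpenAI Conversation in HA would then point at the proxy's OpenAI-compatible endpoint and select among these `model_name` entries.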