
View Source Code
github.com/Danm72/ha-dashboard-2026
So, I've got about 1800 lines of Home Assistant YAML config that's accumulated over the years. Every time something breaks at 2am, I add another automation. Every time I get a new smart device, more YAML. And honestly? I had no idea what half of it was actually doing anymore.
That's when I started experimenting with using Claude Code and multi-agent patterns to wrangle this mess. Here's what I learned.
If you've been running Home Assistant for more than a year, you probably know what I'm talking about: config that only ever grows and never shrinks.
I had 47 automations and 12 scripts. But which ones were actually running? No idea.
Here's the thing - I could read through all 1800 lines manually. But that sounds awful, right? So I tried something different: running multiple specialized AI agents in parallel, each looking at the config from a different angle.
I set up five agents and ran them in parallel. The whole pass took about 20 minutes. A manual review would've been... I don't even want to think about it.
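The fan-out itself is the easy part. Here's a minimal sketch of the pattern with the actual agent call stubbed out; the role names are my own placeholders (the article doesn't enumerate the five agents), and in a real setup run_agent would shell out to Claude Code with a role-specific review prompt:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative role names -- placeholders, not the actual five agents.
ROLES = ["dead-code", "entity-refs", "security", "performance", "style"]

def run_agent(role: str, config_text: str) -> str:
    # Stub: in practice this would invoke Claude Code (or any LLM CLI)
    # with a role-specific prompt and return its findings.
    return f"[{role}] reviewed {len(config_text)} chars of YAML"

def review_in_parallel(config_text: str) -> list[str]:
    # Fan out one reviewer per role and gather all findings in order.
    with ThreadPoolExecutor(max_workers=len(ROLES)) as pool:
        futures = [pool.submit(run_agent, role, config_text) for role in ROLES]
        return [f.result() for f in futures]
```

Each agent sees the same config but reads it with a different question in mind, which is the whole point of the pattern.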
The results were actually pretty interesting. Among other things, the agents found stale entity references - automations targeting light.kitchen_lights when the actual entity was light.kitchen_z2m.
The security agent flagged some stuff too, but I'll be honest: I kinda deprioritized that for convenience. It's a home automation system, not a bank.
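You can reproduce that stale-reference check without an agent at all: scrape every entity_id out of the YAML and diff it against the entities your instance actually has. The set is hard-coded below for the sketch; in practice you'd pull the real list from Home Assistant's /api/states endpoint:

```python
import re

# Grab ids that follow an `entity_id:` key,
# e.g. `entity_id: light.kitchen_lights`.
ENTITY_RE = re.compile(r"entity_id:\s*([a-z_]+\.[a-z0-9_]+)")

def find_phantom_entities(yaml_text: str, real_entities: set[str]) -> set[str]:
    """Return entity ids referenced in the config that don't actually exist."""
    return set(ENTITY_RE.findall(yaml_text)) - real_entities
```

Feed it your automations.yaml and a set of real entity ids, and the output is exactly the class of mismatch the agents caught.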
So I wanted to validate what the agents found about dead automations. Turns out Home Assistant keeps execution traces in the .storage directory. There's a file called core.restore_state that has last_triggered timestamps for everything.
A bit of jq magic later:
jq -r '.data[]
  | select(.state.entity_id | test("^automation\\."))
  | "\(.state.entity_id)\t\(.state.attributes.last_triggered // "never")"' core.restore_state
And yeah, the agents were right. Three automations had literally never triggered. Ever. And eight more hadn't fired in over a year.
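If you'd rather script it than pipe jq, the same extraction works in a few lines of Python. This assumes the schema I saw on my install - a top-level data list of saved states, with last_triggered inside the state's attributes - so treat the key paths as assumptions, not gospel:

```python
import json
from pathlib import Path

def automation_last_triggered(path: str) -> dict:
    """Map each automation entity to its last_triggered timestamp (None = never)."""
    doc = json.loads(Path(path).read_text())
    result = {}
    for entry in doc["data"]:
        state = entry["state"]
        if state["entity_id"].startswith("automation."):
            result[state["entity_id"]] = state.get("attributes", {}).get("last_triggered")
    return result
```

Anything that maps to None has never fired; anything with an ancient timestamp is a candidate for deletion.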
Getting Claude Code connected to my Home Assistant instance was its own journey. I looked at a few options.
The winning combo was ha-mcp for the low-level API access plus custom skills for the workflow patterns. They complement each other - MCP gives you the API, skills give you the patterns.
One gotcha: you need long-lived access tokens for MCP authentication, not OAuth. Took me longer than I'd like to admit to figure that out.
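For reference, wiring an MCP server into Claude Code happens in a .mcp.json at the project root. Something like the sketch below - the exact command, args, and env var names depend on the ha-mcp build you're running, so treat them as placeholders. The token is a Home Assistant long-lived access token (created from your user profile page), not anything from an OAuth flow:

```json
{
  "mcpServers": {
    "homeassistant": {
      "command": "npx",
      "args": ["-y", "ha-mcp"],
      "env": {
        "HA_URL": "http://homeassistant.local:8123",
        "HA_TOKEN": "<long-lived access token>"
      }
    }
  }
}
```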
Looking back, a few things stand out.
The multi-agent pattern isn't just for code review though. I'm now using it for planning changes, validating configs before deployment, that sort of stuff.
If you've got a large Home Assistant config, you're probably carrying a lot of dead weight. The combination of AI agents for review and execution trace analysis can help you figure out what's actually being used.
Is it perfect? Nah. The agents sometimes flag stuff that's fine, and they miss things a human would catch. But it's a lot better than reading 1800 lines of YAML manually.
Let's be honest, nobody wants to do that.