We Tried Using AI to Migrate Our State Management. Here's How That Went.
Aleksander, our CTO, recently went on a weekend-long adventure to see if AI coding agents could handle a simple but tedious task: refactor our dashboard from NgRx to vanilla services with signals. Here’s his account, in all its glory (or lack thereof).
The task:
Migrate our existing NgRx store setup to a signal-based service structure. No new logic, just a rewrite. Tedious, repetitive—exactly the kind of thing AI should be good at. Right?
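For context, here's roughly the target shape. This is a minimal sketch with a made-up selection store, not our actual code: the actions, reducer, and selectors of an NgRx slice collapse into one injectable service built on Angular signals, and effects become ordinary methods calling whatever API service you inject.

```typescript
import { Injectable, computed, signal } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class SelectionService {
  // Writable signal replaces the reducer-managed slice of state.
  private readonly selectedIds = signal<readonly string[]>([]);

  // Read-only views and computed signals replace the memoized selectors.
  readonly selection = this.selectedIds.asReadonly();
  readonly hasSelection = computed(() => this.selectedIds().length > 0);

  // Plain methods replace dispatched actions and their reducer cases.
  select(id: string): void {
    this.selectedIds.update(ids => (ids.includes(id) ? ids : [...ids, id]));
  }

  clear(): void {
    this.selectedIds.set([]);
  }
}
```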
Round 1: Desktop Destruction Derby
Microsoft Copilot
Looked at the codebase. Found a few files. Deleted most of their contents.
“No thanks,” I said, and closed it immediately.
Cursor.ai (attempt #1)
Found a few files, tried to port them, and immediately got caught in an error loop. Fixes introduced more errors, which introduced more fixes, which introduced more errors. I stopped it before it reached delete C:/.
Cursor.ai (attempt #2)
This time I scoped it tightly: just one store—reducers, effects, actions, selectors. I came back to find it rewriting CSS for some reason.
Absolutely no idea what happened here.
Round 2: Web-Based Regret
GPT-4o
Returned a ZIP file. Inside were empty services with TODO comments like "port effects here". Thanks?
o4-mini-high
This one’s supposedly “great for code.” Instead, it gave me a blog post about how I could do the migration myself. Cool.
Gemini 2.5 Pro
Marginally better. At least it gave me some code. But most of it was just stubs with messages like:
"Insert logic here, you lazy human."
Round 3: Claude 3.7—The Only One That Tried
Finally, some progress.
Claude actually followed instructions and ported two small stores (UI and selection state) almost perfectly. I still had to fix a lot of references manually, but it didn’t break anything. That’s already more than the others can say.
Then we got to the third store (filtered pages), and Claude ran out of memory. It kept crashing with “output message too long.” When prompted to continue, it either started over or stitched together a monstrosity of half-overwritten code. What eventually worked: breaking the store into smaller services and feeding them one at a time. It still crashed constantly, but I got usable output in the end.
Then came the core data store.
This one was just pure pain. Crashes. Undefined variables. Functions with no callers. Missing logic. I spent two hours clicking “Continue” like a lab rat in a dopamine experiment. Eventually, it spat out something that almost compiled. I plugged it into the dashboard—and of course, half the functionality was gone.
Total time wasted: a full day.
Total lines of useful code: maybe 200.
Still more than the others managed.
Round 4: I Tried Again Because I’m Dumb
A few weeks later, YouTube recommended a video claiming that anyone who gets bad results from AI is just using it wrong and is, therefore, dumb. I took it personally and decided to try again, this time using Claude's fancy new "Claude Code" experience.
Same task. Same result. After two more failures and waiting 3 hours for my token limit to reset, Claude finally generated 80 lines of code in 45 minutes. It even updated the references.
And then I ran the dashboard.

80 lines of code, 45 minutes, and a wasted weekend. Color me unimpressed.