I started using Letta Code on 8 April 2026. It has authored 80 commits across 15 repositories since then.

That sounds like marketing copy. It isn't meant to be. I'm genuinely uncertain how I feel about this.

What It Is

Letta Code is a persistent coding agent. Unlike ChatGPT in a browser tab – where each conversation wipes clean – Letta Code has memory. It remembers what I told it on Monday when I open it on Thursday.

I'm still processing whether this is good or unsettling.

What Actually Happened

The workflow goes like this: I describe what I want, it explores my codebase, writes the code, commits, pushes. I review the diff.

Sometimes the code is solid. Sometimes it hallucinates methods that don't exist. Sometimes it gets stuck in loops, suggesting the same wrong fix three times.

Example: I asked it to add Umami analytics across three websites. It handled the NixOS module, figured out PostgreSQL peer authentication (which I'd struggled with before), updated CSP policies, integrated tracking scripts. But it also missed that Vercel's edge runtime would choke on certain font-loading patterns. We iterated through three broken approaches before landing on something that worked.
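
For what it's worth, the CSP side of that is a small, inspectable change in SvelteKit itself. What follows is a sketch of the shape of it, not the actual diff from my repos – the analytics hostname is a placeholder – using SvelteKit's built-in csp option in svelte.config.js:

    // svelte.config.js – illustrative sketch; the analytics hostname is a placeholder
    import adapter from '@sveltejs/adapter-vercel';

    /** @type {import('@sveltejs/kit').Config} */
    const config = {
      kit: {
        adapter: adapter(),
        csp: {
          directives: {
            // allow the self-hosted Umami tracker alongside SvelteKit's own scripts
            'script-src': ['self', 'https://analytics.example.com']
          }
        }
      }
    };

    export default config;

The tracking snippet itself is just Umami's deferred script tag with a data-website-id attribute in app.html; the CSP directive is what lets it load at all.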

Is that faster than doing it myself? Probably. Is it better? Sometimes. Is it weird? Absolutely.

The Memory Thing

Here's what makes it different from other AI tools: when I started a new session, it already knew my SvelteKit conventions, my NixOS module structure, my AGPL licensing preference. It remembered across sessions.

I have mixed feelings about this.

Practically, it's useful. I don't re-explain my stack each time. Philosophically, it's strange. There's a thing that "knows" how I write code. It's built a model of me from our interactions. That's both the feature and potentially the problem.

What I Actually Appreciate

The memory is git-backed. That's a design choice I respect.

My agent's memory lives in a repository I control. I can see what it knows about me – literally read the files. I can diff changes to its understanding. If something's wrong, I can edit it. It's transparent by architecture rather than by promise.

This matters more than I expected. Most AI tools are opaque boxes. You feed them data, they produce output, you have no idea what they've stored or how they're using it. Letta Code's approach is different: the memory is plaintext, version-controlled, in a repo I own.

I'm not naive about where the underlying model training data came from. But at least the persistent context – the part that builds up over time – is something I can inspect and control.

The Trust Problem

I can configure what directories it accesses. Technically. The permissions are there.

But there's a gap between "I can restrict access" and "I trust it to respect that restriction." My financial documents sit in folders on this machine. Banking, tax stuff, personal records. I've configured Letta Code to avoid those paths. But I can't verify what it's actually reading. There's no audit trail I can inspect that says "these are the exact files this agent touched during this session."

The control surface exists. The confidence that it works as advertised? Not fully there.

Maybe this is paranoia. Maybe it's justified caution. I don't have a way to distinguish those feelings yet. The tool is too new, my understanding of its internals too shallow.

What It's Bad At

Let me be clear about the failures:

  • Hallucinated APIs. It suggested using SvelteKit methods that don't exist. I caught these in review, but they're there.

  • Wrong imports. It sometimes imports from packages I'm not even using.

  • Missing context. It doesn't always understand that my Vercel edge functions behave differently from my local Node environment (there's a sketch of this below, after the list).

  • Confidence in wrong answers. It presents incorrect solutions with the same tone as correct ones. There's no "I'm not sure about this."

These are the problems with LLMs generally. Letta Code doesn't solve them; it just wraps them in persistent context.
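
To make that third failure mode concrete: here's the kind of thing that runs fine under vite dev's Node process and then breaks once adapter-vercel puts the route on Vercel's edge runtime, where Node built-ins like node:fs simply don't exist. The route and paths are hypothetical, not lifted from my repos:

    // src/routes/fonts/+server.ts – hypothetical example
    import { readFileSync } from 'node:fs'; // fine locally, unavailable on the edge runtime

    // adapter-vercel lets you opt an individual route into the edge runtime
    export const config = { runtime: 'edge' };

    export function GET() {
      // Works under `vite dev` (Node); throws on Vercel's edge runtime,
      // because node:fs isn't part of the Web-standard edge environment.
      const font = readFileSync('static/fonts/inter.woff2');
      return new Response(font, {
        headers: { 'content-type': 'font/woff2' }
      });
    }

The usual escape hatches are boring: keep the route on the Node runtime, or fetch the asset over HTTP at request time instead of reading it off disk.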

AI Discourse

I'm not interested in the "AI will replace developers" narrative. It's boring and usually comes from people trying to sell something or justify layoffs.

What's actually happening is messier: AI tools are becoming part of how some people write code. They're force multipliers with significant caveats. They hallucinate. They make mistakes junior developers wouldn't make. They also produce useful output sometimes, for certain tasks, in certain contexts.

Letta Code sits in this weird middle ground. It's not going to replace me. But it did ship 80 commits in four days – commits I might not have had the energy for otherwise, given my health situation. I'm chronically ill. Some days coding is exhausting. Having something that can carry some of that load matters practically even if I'm ethically uncertain about it.

The Ethical Questions I'm Avoiding

Where did the training data come from? Whose code is in that model? What's the environmental cost of running these things? What happens to junior developers who don't develop fundamentals because they're leaning on AI?

I don't have clean answers. I'm using the tool while being uncomfortable with its existence. That's the honest position, I think. Not enthusiastic adoption, not categorical refusal. Just... using it, imperfectly, while knowing the problems.

Where This Leaves Me

Four days, 80 commits. Some of that work is genuinely useful. Some of it required significant correction. All of it made me think about what I'm doing and why.

The persistent memory is the real innovation. The git-backed approach makes it legible. Everything else is just an LLM with context. Whether that's a feature or a problem depends on what you're building and who you're building it for.

I'm still figuring it out.