🔥 Forging V.U.L.C.A.N.
I've decided to open up llm-core as an open source project under a new name: V.U.L.C.A.N. LLM, which stands for Vibe Utility for Layered Construction of Atomic Nodes. The acronym speaks to how it's a vibe coding tool (of course), but one that uses a specific structure of inputs to generate stackable nodes for building out high-quality software.
I also love the name Vulcan for it: the Roman god of fire and forge, builder of divine weapons and tools. That’s the spirit I’m aiming for: a system that doesn’t just generate code, but crafts high-quality wonders.
If it works, it could be the foundation for a new kind of developer workflow. I haven't found anything quite like this yet. Cursor comes the closest, but it gives you a lot of rope to hang yourself with. You can do anything with Cursor; it doesn't have any real opinions on what's good to do, so I've generated a lot of throwaway code via LLMs (and some stuff worth keeping).
I'm designing Vulcan to work like a development team, where I can take the product owner role I'm used to. Of course, the development team is an LLM, not a highly paid set of engineers, so that's a pretty radical experience.
What makes VULCAN different isn’t the codegen. It’s the necessary overhead to make sure that what gets produced can be trusted.
So far, here’s what I’ve built into the forge:
- ✅ Audit Engine — six categories so far: structure, efficiency, resilience, misuse, prompt quality, and (soon) plan.yaml integrity. Each is its own .audit.ts file that runs per layer (sketch below).
- 🧪 Test Suite Generator — uses Vitest, with per-layer coverage, path testing, schema fuzzing, adversarial mutation testing, and optional global test runs (example below).
- 📊 Summary Grid — a clear matrix mapping each layer (model, service, CLI, etc.) to its audit and test coverage, so nothing slips through the cracks (sample below).
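To make the audit idea concrete, here's a minimal sketch of what one per-layer audit file could look like. The AuditResult shape, the runStructureAudit name, and the single rule inside are all my own placeholders for illustration, not Vulcan's actual interface:

```ts
// Hypothetical sketch of a per-layer audit file (e.g. service.audit.ts).
// AuditResult and runStructureAudit are placeholder names, and the single
// rule below is a toy; real audits would check much more.
import { readFileSync } from "node:fs";

interface AuditResult {
  category: "structure" | "efficiency" | "resilience" | "misuse" | "prompt-quality";
  passed: boolean;
  notes: string[];
}

// One audit category, run against one layer's source file.
export function runStructureAudit(layerPath: string): AuditResult {
  const source = readFileSync(layerPath, "utf8");
  const notes: string[] = [];

  // Toy structural rule: flag files that export nothing.
  if (!source.includes("export ")) {
    notes.push("layer exposes no exports; expected at least one public entry point");
  }

  return { category: "structure", passed: notes.length === 0, notes };
}
```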
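And here's the flavor of test the generator aims for, using Vitest plus a Zod schema for the fuzz-style negative case. Again, createUser and userSchema are hypothetical stand-ins, not actual generated output:

```ts
// Hypothetical example of a generated per-layer test; createUser and
// userSchema are stand-ins, not actual Vulcan output.
import { describe, expect, it } from "vitest";
import { z } from "zod";

// Assumed schema for the layer under test.
const userSchema = z.object({
  name: z.string().min(1),
  age: z.number().int().nonnegative(),
});

// Stand-in for a generated service-layer function.
function createUser(input: unknown) {
  return userSchema.parse(input);
}

describe("service layer: createUser", () => {
  it("accepts valid input (happy path)", () => {
    expect(createUser({ name: "Ada", age: 36 })).toEqual({ name: "Ada", age: 36 });
  });

  it("rejects malformed input (schema fuzzing generates many of these)", () => {
    expect(() => createUser({ name: "", age: -1 })).toThrow();
  });
});
```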
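Finally, the summary grid is just a matrix along these lines (the numbers here are made up, purely to show the shape):

| Layer   | Audits passed | Tests passing |
|---------|---------------|---------------|
| model   | 5/5           | 12/12         |
| service | 4/5           | 18/19         |
| CLI     | 5/5           | 9/9           |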
I'm hoping this structure pushes the LLM toward production-grade code, not flashy one-shot demos: code that can actually stand up over time.
I've already been living with the new workflow for a week or so, and it seems to actually work. I'll describe the workflow itself in posts to come. The code isn't quite ready to share yet (I'm manually executing the workflow by copy/pasting prompts into Cursor at the moment). My next steps are to get the code ready to share, see what some other product managers think of it, and get the output code reviewed by engineers.
I’m under no illusions that making this open source means it’ll catch fire. But there’s something exciting about putting it out there. If it resonates, I’ll see who shows up. If not, it’s still the path I have to walk to see whether I can do anything useful with LLMs as a real productive toolset.