Why local LLMs are changing how I build automation systems
Running LLaMA locally isn't just about privacy — it's about owning the stack. Here's what I learned after building a full marketing engine on local inference.
Writing about building local-first AI, scalable engineering, and the intersection of code and creativity.
When you're digitizing an organization's entire workflow, the schema choices you make on day one echo forever. Here's what I'd do differently.
Combining n8n, a local LLM, and ComfyUI to create a zero-marginal-cost content engine. The architecture, the tradeoffs, and what actually worked.
Why the most successful tools I've built started by solving a hyper-specific problem for a single person, and how that scales into a product.
Engineering-led design often misses the subtle cues that guide a user's eye. A deep dive into spacing, weight, and the 'squint test'.
How I balance academic rigor with the speed of modern shipping. Using university projects to stress-test production-ready architectures.