Liljeforce is one human, two AI agents, and a Raspberry Pi running a custom operating system. We use AI in every layer of the business: operations, communications, development, client work. This is the literal setup as of April 2026. The services we pay for, the automated jobs that run at 5 AM, the plain text files that hold everything together. Including the parts that are still rough.
TrellisPOS: a custom operating system on a Raspberry Pi
The name comes from Trellis (a structure that supports growth) and POS (Personal Operating System). It is the system that runs Liljeforce day to day.
Most companies piece together a stack of tools: one for documents, one for tasks, one for contacts, one for communication. We replaced most of that with a single system built on plain text files stored in a version-controlled repository on GitHub.
TrellisPOS holds our tasks, contacts, project summaries, strategy documents, brand voice guidelines, content calendars, and agent definitions. Everything our AI agents need to do their work lives in one place, in a format they can read and edit natively.
The whole thing runs on a Raspberry Pi 5, a small single-board computer sitting on a shelf in Stockholm. It is always on, always connected, running a persistent Claude Code session that serves as the primary interface. We access it from a desktop or laptop over a secure connection, and from a phone via Claude Code's remote control feature. One machine, one system, multiple entry points.
Why we built it ourselves
There are tools that try to solve this. AI agent platforms that connect language models to your apps and give you a bot you can text from your phone. We evaluated the most popular options and decided to build our own instead.
Everything we do in TrellisPOS could be done on one of those platforms. But you end up doing the same setup work inside someone else's framework, usually with more complexity and less control over how things connect. When something needs to change, you are still working within someone else's architecture, their abstractions, their plugin system, their data model. With our own system, there is nothing to work around. We build what we need, when we need it, and nothing else. That keeps it simple enough to actually maintain.
Many of these platforms also default to always-on agents that message you unprompted, act on incoming information, and run plugins in the background. That sounds productive, but in practice it becomes an attention thief. We deliberately limit autonomous action to one scheduled job, the morning briefing, and keep everything else human-initiated. More autonomy is planned, but only where it genuinely reduces work rather than creating new distractions.
Then there is security. When your agent has access to your email, calendar, and files, the attack surface matters. The most popular open-source agent platform has had critical vulnerabilities, prompt injection issues, and a plugin marketplace where a significant share of plugins turned out to be malicious. With our system, every change is a git commit we can inspect and revert. Our data stays in files we control, even though it flows through services we trust: Anthropic for the AI, GitHub for backup, Google for email and calendar. No additional platform sits in between, aggregating data and adding its own attack surface.
The trade-off is that we maintain it ourselves. For a team with the technical ability to do that, the benefits outweigh the cost.
How it works, in practice
TrellisPOS is a collection of structured text files. The task list is a file. Each contact is a file. Each project summary is a file. Strategy documents, brand voice rules, content drafts: all files in a clear folder structure.
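As an illustration only (these file and folder names are hypothetical, not the actual repository layout), the structure might look like:

```
trellispos/
├── tasks.md               # the task list, one file
├── contacts/
│   └── jane-doe.md        # one file per contact
├── projects/
│   └── acme-redesign.md   # one file per project summary
├── strategy/
│   └── 2026-plan.md
├── brand/
│   ├── voice-guidelines.md
│   └── content-calendar.md
└── agents/
    ├── cornelius.md       # agent definitions
    └── bridget.md
```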
AI reads these files before doing any work. When Cornelius prepares a morning briefing, he reads the task list, the calendar, recent emails, and the strategy document. When Bridget drafts a LinkedIn post, she reads the brand voice guidelines, the content calendar, and relevant client context. Everything they need is right there, no database queries, no API calls, just reading files.
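The session-start routine described above can be sketched in a few lines. This is a minimal illustration, not the actual implementation; the file names are hypothetical.

```python
import pathlib
import tempfile

# Stand-in for the TrellisPOS repository (hypothetical file names).
root = pathlib.Path(tempfile.mkdtemp())
(root / "brand-voice.md").write_text("Tone: direct, no jargon.\n")
(root / "content-calendar.md").write_text("Mon: LinkedIn post on AI ops.\n")

def load_context(paths):
    # Plain file reads: no database queries, no API calls.
    return "\n".join(p.read_text() for p in paths)

# A Bridget session starts by reading her context files, nothing else.
ctx = load_context([root / "brand-voice.md", root / "content-calendar.md"])
print(ctx)
```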
Changes are tracked automatically through version control. We can see exactly what the morning briefing changed in the task list, undo a bad edit, or look back at how a strategy document evolved over months. The full history is preserved.
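Because every file lives in a git repository, each agent edit is an ordinary commit. A minimal sketch of that workflow (the repository, file name, and commit messages are illustrative, created in a temporary directory):

```python
import pathlib
import subprocess
import tempfile

repo = pathlib.Path(tempfile.mkdtemp())

def run(*args):
    # Thin wrapper around the git CLI, scoped to the sketch repo.
    return subprocess.run(
        ["git", "-C", str(repo), *args],
        check=True, capture_output=True, text=True,
    ).stdout

run("init")
run("config", "user.email", "agent@example.com")
run("config", "user.name", "Agent")

tasks = repo / "tasks.md"
tasks.write_text("- [ ] Draft LinkedIn post\n")
run("add", "tasks.md")
run("commit", "-m", "morning briefing: add task")

tasks.write_text("- [x] Draft LinkedIn post\n")
run("add", "tasks.md")
run("commit", "-m", "mark task done")

# The full history is preserved; any commit can be inspected or reverted.
print(run("log", "--oneline"))
```

A bad edit is undone with `git revert`, and the evolution of any strategy document is just its commit history.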
The system backs up to GitHub (a cloud-based code hosting platform) every night and after every working session. If the Pi fails, we restore from GitHub and keep going.
The stack
Here is every service that keeps Liljeforce running.
| Service | What it does for us |
|---|---|
| Claude Code | The engine behind everything. All AI agent work happens here: task management, research, email, content drafting, and coding. Tom designs the architecture, Claude writes the code; all products are built this way. Runs in a terminal. |
| GitHub | Cloud backup, version control, and project tracking. The safety net for all our files. |
| Google Workspace | Email and calendar. Our agents access these through a custom connector that lets them read, send, and schedule on our behalf. |
| Google Cloud | The infrastructure layer that connects our agents to Google Workspace securely. |
| Raspberry Pi 5 | The always-on server. Hosts the system, runs our AI sessions, and executes scheduled jobs. Draws about 5 watts. |
| Vercel | Hosting and deployment for our web applications, including this website. |
| Supabase | Database and user authentication for our product ventures. |
| Stripe | Payment processing. |
| Resend | Automated email delivery for our products (confirmations, notifications). |
| Nylas | Email integration for one of our products. Connects to customer inboxes. |
| ElevenLabs | Voice technology. Will power a voice agent on this website. |
Two agents, same system
Cornelius and Bridget are named roles, not separate software. Each is a Claude Code session loaded with different context from TrellisPOS. Same tool, same computer. The difference is what gets loaded and what gets asked.
Cornelius handles everything internal: task management, calendar, email, weekly planning, and client research. His one automated job is the morning briefing. Every day at 06:00, he reads the calendar, recent email, and the task list, then delivers a prioritized plan for the day to Tom's inbox. Everything else he does starts with Tom opening a session and asking for it.
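The core of that briefing job is a ranking step: gather tasks, sort by urgency, emit a plan. A toy sketch of the idea (the task data and the sort-by-deadline rule are illustrative; the real job also reads calendar and email):

```python
from datetime import date, timedelta

today = date.today()

# Hypothetical tasks, as they might be parsed from the task-list file.
tasks = [
    {"title": "Send proposal to client", "due": today + timedelta(days=1)},
    {"title": "Refresh brand guidelines", "due": today + timedelta(days=14)},
    {"title": "Prepare demo", "due": today},
]

def briefing(tasks):
    # Simplest possible rule: soonest deadline first.
    ranked = sorted(tasks, key=lambda t: t["due"])
    return "\n".join(
        f"{i + 1}. {t['title']} (due {t['due']})" for i, t in enumerate(ranked)
    )

print(briefing(tasks))
```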
Bridget handles everything external: LinkedIn posts, website content, outreach messages, and brand voice. This piece went through Bridget. She reads the brand voice guidelines, content calendar, and relevant context at the start of every session, which keeps the tone consistent without starting from scratch each time.
How the code gets written
Every line of code across Liljeforce gets written with AI assistance. Claude Code is both the coding agent and the primary development tool. Tom designs the architecture and makes the technical decisions. Claude handles implementation: writing the code, catching edge cases, running tests, doing the structural work that would otherwise take hours.
VS Code (a code editor) is there for when Tom wants to inspect something directly or make a quick manual edit. But the heavy lifting happens in Claude Code, running in a terminal.
The products all deploy on the same stack: Vercel for hosting, Supabase for the database, GitHub for version control. Simple, repeatable, fast to deploy.
What works well
- Development speed. Running multiple product ventures as one person is only possible because AI handles the implementation work. This is the single biggest multiplier in the setup.
- Everything in one place. Tasks, priorities, calendar, email, strategy documents, contact history: it all lives in the same system. The morning briefing pulls from all of it automatically. Waking up to a prioritized plan that accounts for today's meetings, yesterday's email, and approaching deadlines means the day starts with clarity instead of 30 minutes of context-gathering across five different apps. When everything feeds into one view, nothing falls through the cracks.
- Credibility. When we advise clients on AI integration, we can show them exactly how we run our own business. The setup is the proof of concept.
What is still clunky
- Fine-tuning what the system notices. Teaching the agents what to pick up on takes iteration. Which emails should become tasks? What counts as an approaching deadline worth flagging? When should a contact profile get updated? The rules for this are not obvious upfront. You discover them by running the system, noticing what it misses or over-flags, and adjusting. After a few weeks of tuning, the morning briefing reliably surfaces the right things. But getting there was a process, not a setting you flip on day one.
- Output quality varies. Some AI drafts are 90% ready. Others need heavy editing. Every piece of output requires human review before it goes anywhere. This is not a system you set up and walk away from.
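The kind of rule that needed tuning can be made concrete. One sketch, assuming a simple deadline-window heuristic (the three-day window is purely illustrative; in practice a value like this is discovered by watching what the briefing misses or over-flags):

```python
from datetime import date, timedelta

# How close must a deadline be before the briefing flags it?
# This constant is exactly the kind of knob that took weeks to tune.
FLAG_WINDOW_DAYS = 3

def approaching(due: date, today: date) -> bool:
    # Flag deadlines from today up to the end of the window; ignore past-due
    # items here (they would be handled by a separate overdue rule).
    return today <= due <= today + timedelta(days=FLAG_WINDOW_DAYS)
```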
What we are building next
The biggest planned addition is a task queue. Instead of Tom initiating every piece of work, tasks would land in a queue and get assigned to the right agent on a schedule. Low-risk work like content drafts and research briefs would run on their own. Higher-stakes actions like sending emails would wait for Tom's approval.
This is designed but not built yet. When it ships, it will fundamentally change what a one-person operation can get done in a day.
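Since the queue exists only as a design, here is a hedged sketch of the dispatch rule it describes: low-risk task types run unattended, everything else waits for approval. The task kinds and labels are hypothetical.

```python
from dataclasses import dataclass

# Illustrative risk classification; the real list would be tuned over time.
LOW_RISK = {"content_draft", "research_brief"}

@dataclass
class Task:
    kind: str
    payload: str
    approved: bool = False

def dispatch(task: Task) -> str:
    if task.kind in LOW_RISK:
        return f"run:{task.kind}"    # runs on its own, on schedule
    if task.approved:
        return f"run:{task.kind}"    # higher stakes, but Tom signed off
    return f"hold:{task.kind}"       # waits in the queue for approval

print(dispatch(Task("content_draft", "LinkedIn post")))
print(dispatch(Task("send_email", "client update")))
```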
Want to talk about this?
If you are thinking about how AI could work inside your team, we have probably already solved the problem you are facing.
Get in touch