We made a dent in Hacktoberfest
Oct 24, 2025
How will we build AI products in the future?
Right now, we're in the pre-WordPress era of AI.
Decoupling AI behavior from code comes next, and it opens up a whole new paradigm for AI design, control, and collaboration.
Back in 2011, 76% of websites were hand-coded. If you wanted to fix a typo, you needed a developer. Then WordPress came along. It separated content from code and gave writers a GUI. That shift turned the web from a developer-only playground into a space where anyone could publish.
We're seeing the same thing happen with AI.
Today, if you want to tweak a prompt or adjust behavior, you need a data engineer or a context engineer. It's slow. It's fragile. It doesn't scale.
The next big shift? Decoupling AI behavior from code. Moving from code-driven AI to collaborative, cross-functional environments where teams can design, control, and evolve AI behavior—together.
This raises the question: which processes should we transition? What will those AI-powered GUIs look like? And can we discover them today?
Dear reader, this is the story of how Alignment got started. It's about the early sparks, the people we met, the first product we built in 7–8 hours, the demo that followed, and where we're headed next.
By the end, I think you'll see what I see: a new way to collaborate towards building reliable AI—and the massive wealth(2) waiting on the other side.
Hacktoberfest
One week ago, my "partner in hackathon"(1) and I joined 1,000 builders, hackers, and doers at Hacktoberfest(3).
A guest speaker—someone who builds conversational AI agents for businesses—nailed it. The issue isn't the LLM. It's the data. Or more specifically, the lack of clean, accessible, usable data from the businesses themselves.
I asked him at the end of his talk: "What's the hair-on-fire moment? What's actually broken there?"
He didn't hesitate. Companies don't have their stuff together. No finished playbooks. No clear procedures. No up-to-date policies. And when they do exist, they're scattered across inboxes, wikis, and someone's head.
I've started to think it's much worse beneath the tip of the iceberg.
Worse, ownership is a mess. Is it customer support's job? Legal's? HR's? The business analysts'? Everyone's in charge, which means no one is.
Even when domain experts from those competencies are involved, their insights are buried in email threads or locked in Word docs. Prompts are tuned manually. Results are shared as screenshots in Slack. It's chaos.
I've seen this before. Different kitchens, same mess. But this time, it clicked.
I realized that the contrarian truth is: The models are good enough, the use cases are obvious, and the business appetite is there.
What's missing? A workspace. One that brings together the right people—designers, analysts, domain experts—to build something reliable. Together.
I turned to my hackathon partner and our unofficial advisor (that's a story for another time) and said: "This is it. This guy's ready to pay. Let's figure out what we can ship by tomorrow."
We spent the next 7–8 hours building the first proof of concept.
What did we build?
In two sentences: Alignment is the workspace for cross-functional AI teams and domain experts building conversational AI products. It gives them a shared place to design, control, and collaborate on AI behavior—without the chaos.
But what does that actually mean?
Let's say you're building a customer agent for a subscription business. These companies run on playbooks—standard operating procedures for things like pausing a subscription, issuing a refund, or handling cancellations. Each of those actions is a decision tree.
A set of steps. A logic flow.
Is the customer eligible for a refund?
What's their subscription status?
What kind of change are they asking for?
These aren't engineering problems. They're owned by operations, support, legal, and product teams.
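As a sketch, the refund playbook above could be encoded as a small decision tree. The node names, fields, and answers below are hypothetical illustrations, not Alignment's actual format:

```python
# A minimal, hypothetical encoding of a subscription-refund playbook as a
# decision tree: each node either asks a question or names an action.
REFUND_PLAYBOOK = {
    "question": "subscription_status",
    "branches": {
        "active": {
            "question": "days_since_charge",
            "branches": {
                "within_14": {"action": "issue_refund"},
                "after_14": {"action": "offer_credit"},
            },
        },
        "cancelled": {"action": "escalate_to_support"},
    },
}

def run_playbook(node, answers):
    """Walk the tree using a dict of answers until an action is reached."""
    while "action" not in node:
        answer = answers[node["question"]]
        node = node["branches"][answer]
    return node["action"]

result = run_playbook(
    REFUND_PLAYBOOK,
    {"subscription_status": "active", "days_since_charge": "within_14"},
)
# result == "issue_refund"
```

The point of a structure like this is that operations or support can own the branches, while the code that walks them stays generic.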
Is .docx or .pdf enough? Those formats were built for human readers, not machines. They're static, unstructured, and blind to context. AI needs more than words—it needs meaning, structure, and intent. It needs to understand not just what was written, but why it was written, and how it should behave because of it.
That's not something a .docx or .pdf can deliver.
Do those people have the AI literacy to jump in quickly and shorten the path to production? Barely. AI providers still report usage in monthly and weekly active users, not daily ones.
With all that said, how do we fix this communication shortfall: the gap between how cross-functional teams and domain experts work on AI products?
The starting point should be a workspace that speaks their language—literally.
We imagine a cross-functional workspace where domain experts work in their native format: plain English. They bring their existing playbooks, SOPs, and raw knowledge. A playbook enrichment pipeline then combines curation, annotation, extraction, and human review to make that information more structured, contextual, and reliable for AI products.
Then? That enriched knowledge flows into retrieval systems or model training. No handoffs. No translation layers. No bottlenecks.
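The enrichment flow described above can be sketched as a chain of stages. Everything here is an illustrative assumption: the stage names mirror the prose (curation, annotation, extraction, human review), but the record shape and heuristics are toys, not Alignment's pipeline:

```python
# Hypothetical playbook-enrichment pipeline:
# curate -> annotate -> extract -> review.

def curate(doc: str) -> str:
    # Drop empty lines and stray whitespace from the raw playbook text.
    return "\n".join(line.strip() for line in doc.splitlines() if line.strip())

def annotate(doc: str) -> list[dict]:
    # Toy heuristic: lines ending in "?" are decision steps, the rest are notes.
    return [{"text": line,
             "kind": "step" if line.endswith("?") else "note"}
            for line in doc.splitlines()]

def extract(records: list[dict]) -> list[dict]:
    # Keep only structured steps for downstream retrieval or training.
    return [r for r in records if r["kind"] == "step"]

def review(records: list[dict], approved_by: str) -> list[dict]:
    # Human review: stamp each record with a reviewer before release.
    return [{**r, "approved_by": approved_by} for r in records]

raw = """
Refund playbook
Is the customer eligible for a refund?
What's their subscription status?
"""
enriched = review(extract(annotate(curate(raw))), approved_by="ops-team")
```

The "no handoffs" claim maps to the shape of the chain: each stage consumes the previous stage's output directly, so nothing has to be re-translated between tools.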
This is Alignment.
Here is the back of the napkin:

In this video:
🎥 See the 3-minute walkthrough of our hackathon prototype
🎤 Hear the story behind the idea
🔥 Watch how we landed a live stage demo and connected with the community
We’ve got their attention!
And we were selected to demo on stage.
Did our live demo go well?
Oh, it landed.
People noticed. Connections were made.
We even walked away with some sweet sponsor prizes—shoutout to Instant and friends!
🎥 You can catch the magic on stage right here: Link
--
While we are here,
Big thanks to the Hacktoberfest organizers and our friends.
This moment didn’t just happen. It was made—by people who care about building, sharing, and making space for others to do the same. The organizers, the contributors, the quiet supporters—they created this moment for us and for every builder out there. We’re grateful.
Alignment today
We’re shaping Alignment into a minimum viable workspace—lean, functional, and opinionated.
- AI-native document editor (InstantDB-powered): We start where domain experts are strongest: writing. The editor parses plain English into structured logic, letting teams set deterministic and probabilistic instructions while preserving nuance and intent. It’s not just readable—it’s executable. Everyone edits the same source of truth.
- Embedded data enrichment tools: What’s “basic”? What tasks are still too manual, too opaque, too slow? We want to turn those into intuitive user experiences that invite more domain expertise into the loop.
- Deep market scan: We’re reverse-engineering what’s out there. My tabs are overflowing, but it’s worth it. We’re mapping the landscape to avoid reinventing wheels—and to spot the gaps no one’s filled.
- Alignment governance and ops model: We’re exploring models that balance openness, sustainability, and velocity—this part is still in motion.
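To make "parsing plain English into deterministic and probabilistic instructions" concrete, here is a toy heuristic. The keyword lists and the `Instruction` shape are illustrative assumptions, not the editor's real parser:

```python
# Hypothetical sketch: classify playbook sentences as deterministic rules
# ("must", "never", "always") or probabilistic preferences
# ("prefer", "usually", "try to").
from dataclasses import dataclass

DETERMINISTIC = ("must", "never", "always")
PROBABILISTIC = ("prefer", "usually", "try to")

@dataclass
class Instruction:
    text: str
    mode: str  # "deterministic" | "probabilistic" | "unknown"

def classify(sentence: str) -> Instruction:
    lowered = sentence.lower()
    if any(k in lowered for k in DETERMINISTIC):
        return Instruction(sentence, "deterministic")
    if any(k in lowered for k in PROBABILISTIC):
        return Instruction(sentence, "probabilistic")
    return Instruction(sentence, "unknown")

rules = [classify(s) for s in (
    "Agents must verify the account before issuing a refund.",
    "Prefer offering account credit over a cash refund.",
)]
```

A real parser would need far more than keyword matching, but the split itself, hard rules versus soft preferences, is the distinction the editor is meant to preserve.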
It’s going to be thrilling to watch users try Alignment for the first time. When someone enhances their first playbook—adds logic, enriches data, sees it come alive—you can see the spark. That moment of “Wait, I can do this?” That’s the magic. That’s the win.
And yeah, it’s electric.
We’re designing for that feeling. The delight of turning raw knowledge into something structured, usable, and alive. The goal is to make that transformation feel effortless—so intuitive that domain experts don’t even realize they’re building something powerful.
If you’re excited about this stuff, sign up and give us a try. We will reach out to you personally for feedback.
Fin
Thank you for walking with me on this journey!
Like-Minded Folks
These ideas didn’t come out of nowhere. They’ve been forming through real conversations, experiments, and moments that stuck.
A few days before the hackathon, Emmet Connolly shared a tale of two agent builders that sparked the idea of a document-based editor. The team at Mastra.ai gave me hands-on access to their GUI, which helped me step into the world of AI agent building and start experimenting with my own toy projects.
Then came Mistral AI Studio. Just weeks after the hackathon, they surfaced what looks like the next big trend: studio environments on the web that let users spin up agents and AI applications in minutes, with built-in tools.
These signals shaped how I imagine the future. The interface may still need iteration, but the direction is clear—and it’s exciting.
Next Up
I want to make sure that what we’re building isn’t just a collection of features or their specific implementations — because nobody wakes up wanting “a workspace for AI teams.”
The question is whether we are building what people actually want:
- Better decisions, made faster
- Reduced communication costs
- Zero-effort data enrichment for AI behavior
- Instant access to every AI product iteration
- AI features tested without engineering
- Fewer domain expert and data enrichment hours
- Evaluation time reduced by 50%
- Ideas moved from concept to release faster
- Multiplied example sizes, uncovering new insights
- AI features tested in minutes, not weeks
- Increased development confidence
- 5× more AI features shipped to production
- 75%+ of teams reporting improved culture & collaboration
If Alignment helps teams achieve results like these, we’ll find many more buyers.
There are all sorts of things people want. Wanting is free. Wanting now exacts a price. The challenge is avoiding the trap of what’s urgent—mistaking “wanting” for “wanting now”. That's our next focus. And the answer is straightforward: We turn to our hacker friends—and their networks—who are building conversational AIs for enterprise.
We engage. We show. We observe. We build. We demo.
Fast cycles. Real feedback. Learning as we go.
If this sparks something for you—and you think you can help us connect the dots — I’d love to hear from you and learn your perspective 🙂
Thanks to Levan Parastashvili, Giorgi Bagdavadze, and Tornike Gomareli for reviewing drafts of this essay.
(1)Levan and I first met at a local AI meetup. We teamed up for the MCP/AI hackathon, took first place out of 34 teams, and have been loosely exploring side projects ever since. Then Hacktoberfest came around—and here we are.
(2)Wealth = things people want. Money = a medium to exchange wealth. Creating wealth means making things people want.
(3)Hacktoberfest was organized by the folks from the FLP and Devtherapy communities—a tech podcast and a Discord space that know how to bring bright-minded people together. I’ve had the chance to join their podcast twice—once to show how V0 cuts time to prototype, and again to dive into GenAI adoption and beyond. Both times were meaningful.
Images
Funny thing—the hackathon happened at the university I once called home. I spent years there, never finished, eventually dropped out. That’s a whole other story.
First few minutes of setup.
Left: Gua, wondering if the coffee will kick in before the code does.
Right: Levan, already deep in thought—or maybe just regretting not bringing a second monitor.
Demo time. Slide two—where we try to explain the problem without sounding like we created it ourselves.