When AI features started appearing in productivity tools, they followed a predictable pattern: take an existing product, add a chat sidebar, and call it AI-native. Notion added Notion AI. Slack added Slack AI. Linear added AI features. Each bolted intelligence onto an existing paradigm.
SLEDS went the other direction. We didn't start with a product and add AI. We started with AI and built the product around it.
What AI-Native Actually Means
An AI-native product is designed from the ground up for how AI tools work, not how humans have traditionally organized information. Traditional tools organize around human workflows: projects, tasks, documents, channels. AI tools organize around context: what do I know, what's relevant, what connections exist.
This distinction matters because it determines the product's primitives. In a traditional project management tool, the primitive is a task or a ticket. In SLEDS, the primitive is a thread — a context-rich topic with an observation history. Tasks are something you extract from context. Context is what makes tasks meaningful.
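To make the primitive concrete, here is a minimal sketch of a thread-first data model. The class and field names (`Thread`, `Observation`, `observe`, `extract_tasks`) are hypothetical illustrations, not SLEDS's actual API; the point is only that the stored object is the context, and tasks are derived from it on demand.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Observation:
    """One recorded piece of context: a note, decision, or event."""
    body: str
    recorded_at: datetime = field(default_factory=datetime.now)

@dataclass
class Thread:
    """A context-rich topic with an observation history (hypothetical sketch)."""
    topic: str
    observations: list[Observation] = field(default_factory=list)

    def observe(self, body: str) -> Observation:
        obs = Observation(body=body)
        self.observations.append(obs)
        return obs

    def extract_tasks(self) -> list[str]:
        # Tasks are not first-class rows; they are extracted from context.
        # A real system would use a model here; this keyword check is a stand-in.
        return [o.body for o in self.observations if o.body.lower().startswith("todo:")]

thread = Thread("launch review")
thread.observe("Decided to ship Friday.")
thread.observe("TODO: update the changelog.")
```

Note the inversion relative to a task tracker: deleting `extract_tasks` loses nothing, because the observations still hold the full context it was derived from.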
Why Adaptation Fails
Bolting AI onto existing products creates an impedance mismatch. The product's data model was designed for human consumption — structured around projects, lists, and hierarchies. AI models want rich, unstructured context with semantic relationships. Forcing AI to work within task-oriented data models limits what it can do.
When you ask Notion AI about your project, it can search your Notion pages. But it can't access your Slack conversations, your Cursor sessions, or your email threads. The AI is constrained by the product's boundaries, not yours.
Starting Fresh
SLEDS has no tasks, no kanban boards, no Gantt charts. It has threads, observations, assets, and semantic links. These primitives are designed for AI consumption first, human consumption second. The data model optimizes for context richness and semantic searchability rather than visual organization.
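A sketch of what "optimizing for semantic searchability" can look like in practice, under loud assumptions: `SemanticLink` and the relation names are invented for illustration, and the word-overlap `relevance` function is a stand-in for embedding similarity, which a real system would use instead.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SemanticLink:
    """A typed edge between two threads, e.g. 'builds_on' (hypothetical)."""
    source: str
    target: str
    relation: str

def relevance(query: str, text: str) -> float:
    """Word-overlap stand-in for embedding similarity."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def search(threads: dict[str, str], query: str, min_score: float = 0.5) -> list[str]:
    """Rank thread summaries by relevance and drop weak matches."""
    scored = [(relevance(query, body), tid) for tid, body in threads.items()]
    return [tid for score, tid in sorted(scored, reverse=True) if score >= min_score]

threads = {
    "t1": "pricing page redesign decisions",
    "t2": "database migration plan",
}
```

Because the primitives are text plus typed links rather than columns in a task table, "find everything relevant to X" is a first-class query instead of a feature grafted on later.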
This means Frost, SLEDS's AI, can do things that bolted-on AI features can't: synthesize context across multiple conversations, detect semantic conflicts between decisions, suggest work based on cross-thread analysis, and generate narrative briefings that trace how knowledge evolves over time.
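Cross-thread conflict detection, reduced to its shape. This is a hypothetical sketch, not Frost's implementation: decisions are modeled as `(thread_id, subject, choice)` tuples, and exact string comparison stands in for the semantic comparison a real system would perform.

```python
from collections import defaultdict

def find_conflicts(decisions: list[tuple[str, str, str]]) -> list[tuple[str, str, str]]:
    """Flag subjects where different threads recorded different choices."""
    by_subject: dict[str, set[tuple[str, str]]] = defaultdict(set)
    for thread_id, subject, choice in decisions:
        by_subject[subject].add((thread_id, choice))
    conflicts = []
    for subject, entries in sorted(by_subject.items()):
        if len({choice for _, choice in entries}) > 1:
            for thread_id, choice in sorted(entries):
                conflicts.append((subject, thread_id, choice))
    return conflicts

decisions = [
    ("t1", "launch date", "friday"),
    ("t2", "launch date", "monday"),
    ("t1", "logo color", "blue"),
]
```

The sketch only works because decisions live in threads that can be compared globally; a per-project task list has no natural place to even ask the question.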
Building AI-native isn't about having the fanciest AI features. It's about designing the entire system around the assumption that AI tools are primary consumers of your team's knowledge.