Introducing AbleOS

I've been quiet for a few months now, but I'm finally ready to talk about what's next.

I'm starting a "home computer" company.

No, not that kind. The other kind.

Read on.

A Problem of Perspective

The current trajectory of artificial intelligence development centers on creating autonomous digital personalities. Products like Alexa, Siri, Grok, ChatGPT, Claude, Gemini, and Meta AI present themselves as distinct entities with whom users must, to a greater or lesser extent, establish relationships. While these systems demonstrate impressive capabilities, they embody a design paradigm that creates several problematic dynamics.

First, the business model creates misalignment between user interests and system behavior. We are rapidly moving toward a world where every vendor creates their own "agents" such as Salesforce agents, Shopify agents, banking agents, and e-commerce agents, but these are vendor agents serving vendor interests, not user agents serving user needs. Each SaaS company now positions AI assistance as a way for customers to interact with their platform, but the agent's primary loyalty remains with the vendor who created and controls it.

This creates a concerning dynamic. Users must navigate an ecosystem of competing vendor agents, each optimized to drive engagement with that vendor's specific platform, extract data for that vendor's business model, and create lock-in to that vendor's ecosystem. While Apple Intelligence may be one of the few examples of an agent designed primarily to serve user interests, most others ultimately optimize for their creator's commercial objectives rather than pure user benefit. The result is a disproportionate number of agents outside user control rather than under it, inverting the proper relationship between intelligence and agency.

Second, the interaction model pushes agency outside the human. When you say "Hey Google" and ask it to perform some task, you are delegating some amount of decision-making authority to an external entity rather than using tools that amplify your own capabilities. This delegation model requires users to adapt their communication and thinking patterns to accommodate the assistant's interface and limitations, rather than having technology adapt to natural human patterns.

This disconnect creates additional cognitive overhead. Users must develop an intimate understanding of the agent's capabilities and limitations, even though those capabilities remain separate from their own. They find themselves having to cajole and guide the agent to understand and use its own tools properly, creating a strange dynamic where users become managers of artificial personalities rather than wielders of amplified capability.

More problematically, this model encourages users to focus solely on outcomes while remaining indifferent to methods. Users ask for results without understanding or caring how they're achieved, effectively taking their hands off the steering wheel. This delegation of the "how" creates the conditions for hallucinations, wrong solutions, and off-the-rails behavior because the user has abdicated responsibility for the process. Meanwhile, the agent lacks the user's contextual understanding and judgment.

When things go wrong, the user has few opportunities for course correction because they've been removed from the problem-solving process. The current AI paradigm optimizes for convenience by automating away human involvement, but this makes humans less capable rather than more so. A better approach would reduce the cognitive impedance between intent and action, rather than automating away the series of actions that achieve an outcome.

Third (and related to the previous point), accountability becomes diffuse when AI systems act autonomously. When an agent makes a decision or takes an action, responsibility shifts from the user to the system, creating ambiguity about who is ultimately accountable for outcomes. This represents a departure from tools that amplify human agency while keeping humans clearly responsible for results.

Finally, there's a comprehension gap that emerges as these systems become more sophisticated. The more independently an AI agent operates (predicting user needs, taking proactive actions, making autonomous decisions), the greater the distance between user intent and system behavior. This distance pushes the system's operations outside what users can intuitively understand and predict, leading to interactions that feel alien or unpredictable.

Human Augmentation

AbleOS (AOS) represents a different approach that addresses these interaction model problems by extending the neural substrate out into the environment. Instead of creating digital entities that users must manage and negotiate with, AOS treats the environment itself as an extension of human neural architecture. This creates spaces that more seamlessly augment human capabilities while maintaining clear lines of agency and accountability.

This approach embodies what I call "intent-driven computing," a paradigm where intelligence embedded in environments amplifies human intention rather than pursuing autonomous decision-making. The system maintains tight feedback loops between user actions and environmental responses, ensuring that all automated behaviors remain within what users can intuitively understand and predict based on their own demonstrated patterns.

These ideas are heavily influenced by David Rose's Enchanted Objects, which I read in 2014, two years after I began working on Alexa and Echo. Rose's vision felt like he was reading my mind, articulating concepts I had been grappling with but couldn't fully express. His book became a cornerstone influence on my thinking about the proper trajectory of artificial intelligence and technology.

Rose advocates for "enchanted objects" that embed intelligence invisibly into everyday items rather than forcing users to interact through screens and apps. He contrasts this with what he calls the "terminal" paradigm, where all computing happens through rectangular interfaces that demand constant attention. Enchanted objects provide "glanceable" information, respond to "ambient" interaction patterns, and make their intelligence "invisible" by dissolving into natural human behavior.

Most importantly, Rose argues that truly magical technology extends human capability rather than replacing it with artificial personalities. This vision of seamless human-environment integration, where technology becomes an invisible amplifier of human intention, directly inspired the architectural principles behind AOS.

Short-term Trade-offs and Long-term Vision

This human augmentation approach requires honest acknowledgment of near-term trade-offs. In the short term, AOS will feel like a loss of functionality compared to current AI agents. The primary benefit of today's AI systems is their embedded procedural knowledge: users can delegate planning and problem-solving to the model without understanding the underlying steps. AOS deliberately does not offer this. If users lack the knowledge to orchestrate environmental tools themselves, those processes cannot be automated until they learn through guided practice.

This creates an initial learning curve where users must develop skills before gaining automation benefits. A separate training system may guide users through AOS tool combinations, but automation only emerges after users understand the underlying processes. This represents a philosophical choice: capability that develops through learning rather than capability borrowed from external intelligence.

However, this apparent limitation becomes a strategic advantage as technology evolves toward direct neural interfaces. When AOS can tap into brain activity as users perceive their environment, the human augmentation model aligns naturally with neural integration. From the brain's perspective, environmental intelligence functions as a neural substrate extending existing senses and motor controls rather than competing with them. The system amplifies existing neural processes instead of introducing competing decision-making systems.

This architectural choice anticipates a future where brain-computer interfaces become viable. Current AI agent systems that develop increasingly strong opinions about problem-solving approaches would create conflicts when connected to neural systems: two intelligence systems competing for dominance rather than collaborating. Such conflicts could lead to cognitive interference, reduced effectiveness, or worse, could prevent humans from benefiting from neural enhancement technologies because of what are essentially architectural incompatibilities.

AOS avoids this trap by ensuring environmental intelligence always extends rather than competes with human cognition, creating seamless compatibility with future neural integration technologies.

Why Start with the Home

The interaction model problems I've outlined apply across the entire technology landscape, from enterprise software to mobile applications to AI-augmented cloud services. However, AOS focuses specifically on the home environment as the starting point for human augmentation. This choice is deliberate for several reasons.

The home represents a user's primary domain, the environment where they have the highest degree of control and the strongest expectation of privacy. It's a highly trusted space where users should feel most comfortable with intelligent systems that learn from their patterns and preferences. Unlike workplace or public environments where policies and constraints are imposed by external parties, the home is where users can fully benefit from technology that adapts to their personal needs without compromise.

The smart home market also presents a unique opportunity because it remains largely unsolved despite decades of attempts. There are relatively few dominant incumbents, and most existing solutions have failed to deliver meaningful innovation beyond basic automation. Current smart home technologies are fragmented, unreliable, and frustrating to use, creating an opening for a fundamentally different approach.

The home is where the problems with current AI interaction models become most obvious and painful. We've all experienced the frustration of smart devices that stop working when they lose internet connection, leaving us unable to control lights or thermostats in our own homes. We've watched devices become expensive paperweights when their vendors go out of business or discontinue support. We've struggled with devices from different manufacturers that refuse to communicate with each other, forcing us to juggle multiple apps and interfaces. We've dealt with voice assistants that consistently misunderstand commands or respond inappropriately to conversations they weren't meant to hear.

Perhaps most concerning, we've all grappled with the unsettling knowledge that intimate details of our home life (our routines, our conversations, our presence patterns) are being transmitted to remote servers controlled by companies whose interests may not align with our own. These experiences make clear why remotely-hosted, vendor-controlled agents are particularly problematic, and why human augmentation that keeps all processing and learning local is especially valuable in the home context.

The home environment makes the benefits of user-controlled human augmentation immediately tangible while avoiding the complexity of enterprise deployment or the constraints of public spaces. It's where the vision of environments that seamlessly extend human capability can be most clearly demonstrated and experienced.

Why Now

Beyond these strategic advantages, we're also approaching a curious moment of technological convergence that makes this vision achievable. Several technologies are maturing simultaneously, creating a unique opportunity for a ground-up rethink of what truly intelligent homes can be.

Foundational AI models have reached a level of capability and intelligence that enables sophisticated contextual prediction and pattern recognition while being efficient enough to run on consumer hardware. The same models that power advanced reasoning can now operate locally on compact, inexpensive devices, eliminating the cloud dependency that has plagued smart home systems while enabling the always-on contextual awareness that makes human augmentation possible.

Simultaneously, industry standardization efforts like Matter and Thread are establishing common protocols for smart home devices, changing the competitive landscape. These standards create unified interfaces for communicating with smart lights, window shade actuators, locks, and other connected devices, regardless of manufacturer. This standardization reduces vendor lock-in and breaks the hold of platform incumbents who previously required device manufacturers to build specifically for Alexa, Google Home, or proprietary ecosystems.
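To make concrete what this standardization buys, here's a minimal sketch in Python. The device classes are hypothetical stand-ins, not real Matter bindings; the point is that once every device exposes the same interface, an orchestration layer needs exactly one code path, regardless of manufacturer.

```python
from typing import List, Protocol

class Dimmable(Protocol):
    """Any light speaking the standard exposes the same surface."""
    def set_level(self, percent: int) -> None: ...

class HueBulb:  # hypothetical vendor A
    def set_level(self, percent: int) -> None:
        print(f"Hue bulb dimmed to {percent}%")

class IkeaBulb:  # hypothetical vendor B
    def set_level(self, percent: int) -> None:
        print(f"IKEA bulb dimmed to {percent}%")

def dim_all(lights: List[Dimmable], percent: int) -> None:
    # No vendor-specific branches: the standard is the interface.
    for light in lights:
        light.set_level(percent)

dim_all([HueBulb(), IkeaBulb()], 30)
```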

The convergence of capable local AI, affordable hardware, and standardized device protocols creates the foundation for rebuilding smart home systems from first principles. However, despite having all these technological pieces in place, the industry continues to apply them within the same tired paradigm of vendor-controlled agents and reactive automation. AOS represents the first system designed specifically to leverage this technological convergence for truly magical experiences. It aims beyond simply making existing approaches incrementally better.

Continuous Contextual Prediction

A key architectural innovation in AOS is its continuous operation as an always-on intelligence system that builds context, generates predictions, and prepares responses before explicit user requests. This approach reflects the insight from Jeff Hawkins' On Intelligence that memory and prediction form the core of intelligence itself. AOS implements this principle at an environmental scale. It creates systems that understand context through memory and respond intelligently through continuous prediction. This represents a departure from reactive smart home systems that respond only to explicit commands or predetermined triggers.

Consider this scenario: Two people are discussing a particular recipe they enjoyed at a restaurant. Throughout their conversation, AOS is continuously processing their discussion, identifying the specific dish, researching its typical ingredients, and building contextual understanding about their interest level and likelihood of wanting to recreate the meal. When one person makes a simple gesture and says "let's make that this weekend, add the ingredients to my grocery list," the system can respond immediately and accurately because it has already completed most of the necessary reasoning.

The gesture serves as an attention signal that cues the system to transition from continuous background processing toward active response. The user's brief statement provides final confirmation and direction. But the bulk of the work has already been completed. The system has already understood what "that" refers to, determined appropriate ingredients, and contextualized the request. This enables interactions that at least appear to operate at the speed of thought rather than the speed of computation.
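Here's a toy sketch of that control flow in Python. The keyword matching stands in for a local multimodal model, and every name is illustrative; what matters is that the expensive reasoning happens during observation, so the attention signal only has to resolve a reference.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Context:
    """Rolling interpretation of the environment, updated continuously."""
    topic: Optional[str] = None
    ingredients: dict = field(default_factory=dict)  # dish -> ingredient list

class AmbientLoop:
    """Folds every observation into context so an explicit request
    only confirms reasoning that is already done."""

    def __init__(self) -> None:
        self.context = Context()

    def observe(self, utterance: str) -> None:
        # Stand-in for a local model streaming over the conversation.
        if "carbonara" in utterance.lower():
            self.context.topic = "carbonara"
            self.context.ingredients["carbonara"] = [
                "spaghetti", "guanciale", "eggs", "pecorino",
            ]

    def on_attention(self, command: str) -> list:
        # The gesture is only an attention signal: "that" is already resolved.
        return self.context.ingredients.get(self.context.topic, [])

loop = AmbientLoop()
loop.observe("That carbonara we had last week was incredible.")
loop.observe("We should try making it ourselves.")
print(loop.on_attention("add the ingredients to my grocery list"))
# -> ['spaghetti', 'guanciale', 'eggs', 'pecorino']
```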

This always-on intelligence is only possible because of AOS's commitment to local processing. Traditional cloud-based systems cannot maintain continuous contextual awareness without creating unacceptable privacy violations. Apple's Private Cloud Compute attempts to neatly solve this problem with attestable remote platform guarantees around privacy and security cryptographically rooted on your local device. But there will always be both perceived and actual threats as long as bits are leaving the home. AOS's local processing architecture enables deep environmental awareness while ensuring that sensitive contextual information never leaves the user's control. This transforms privacy from a limitation into an asset.

Every moment of environmental observation creates opportunities for the system to generate predictions about user intentions. These predictions operate at multiple temporal scales from immediate responses to long-term routine modeling. When users act on these predictions, they provide invaluable training signals that enable continuous refinement. Each interaction becomes a labeled example where contextual observations serve as features and user actions provide ground truth labels.
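In code terms, each interaction can be captured as a supervised example. Here's a sketch of what such a record might look like; the field names and JSONL format are my own illustration, not AOS internals.

```python
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class TrainingExample:
    """One interaction: contextual observations are the features,
    the user's subsequent action is the ground-truth label."""
    timestamp: float
    features: dict    # e.g. time of day, room, occupancy, recent utterances
    predicted: str    # what the system anticipated
    action: str       # what the user actually did

def record(features: dict, predicted: str, action: str,
           log: str = "interactions.jsonl") -> None:
    example = TrainingExample(time.time(), features, predicted, action)
    with open(log, "a") as f:
        f.write(json.dumps(asdict(example)) + "\n")

# Every acted-on (or corrected) prediction becomes supervision:
record(
    features={"hour": 21, "room": "living_room", "occupants": 2},
    predicted="dim_lights_30",
    action="dim_lights_30",  # agreement: a positive label for this pattern
)
```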

Self-Evolving Local Intelligence

The technical foundation that makes this vision possible represents a departure from current smart home approaches. Rather than static devices that respond to predetermined triggers, AOS creates continuously learning environments. These environments become more capable through experience while maintaining complete transparency and user control.

Central to this approach is "pattern hardening," a mechanism that mirrors the dual-system thinking described in Daniel Kahneman's Thinking, Fast and Slow. Initially, all environmental observations and user interactions flow through what Kahneman would call "System 2" thinking. This involves the deliberate, computationally expensive reasoning performed by large multimodal models. During periods of low activity, AOS processes captured interaction data. It identifies recurring patterns and converts successful reasoning sequences into specialized models. These function as "System 1" responses that are fast, automatic, and effortless.
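A minimal sketch of what that idle-time hardening pass might look like. The thresholds are illustrative, and a real system would presumably distill recurring reasoning sequences into small specialized models rather than lookup rules, but the promotion logic has the same shape.

```python
from collections import Counter

def harden_patterns(interaction_log: list, min_count: int = 10,
                    min_success: float = 0.95) -> dict:
    """Promote (context signature -> action) pairs that recur often enough,
    with a high enough acceptance rate, from deliberate System-2 reasoning
    to a fast System-1 response."""
    occurrences: Counter = Counter()
    accepted: Counter = Counter()
    for ex in interaction_log:
        key = (ex["signature"], ex["action"])
        occurrences[key] += 1
        accepted[key] += ex["accepted"]  # 1 if the user kept the outcome

    hardened = {}
    for (signature, action), n in occurrences.items():
        if n >= min_count and accepted[(signature, action)] / n >= min_success:
            hardened[signature] = action  # cheap reflex replaces a model call
    return hardened

log = [{"signature": "weekday_0630_kitchen",
        "action": "start_coffee", "accepted": 1}] * 12
print(harden_patterns(log))  # {'weekday_0630_kitchen': 'start_coffee'}
```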

This pattern hardening emerges from a co-evolutionary process between user and system rather than pure observation. Instead of simply watching user behavior and automating it away, AOS guides users through the effective combination of low-level environmental tools. It teaches them how to sequence sensor readings, device controls, and information processing to achieve desired outcomes. As users develop preferred approaches through this guided exploration, those collaborative patterns become candidates for hardening.

This approach preserves user agency and understanding because the automated behaviors emerge from explicit user participation rather than opaque algorithmic decisions. Users remain accountable for hardened patterns because they helped create them. They understand how these patterns work because they were taught the underlying tool combinations. When familiar situations arise, the system can respond through these co-created patterns with minimal computational overhead. Yet users retain the knowledge to modify or override them when needed.

The architecture employs a brain-inspired ensemble approach that mirrors biological intelligence processing. The "neocortex" consists of a large multimodal model capable of complex reasoning and novel problem-solving. It serves as the System 2 for deliberate analysis. Specialized "cortical regions" handle domain-specific tasks like spatial relationships or temporal patterns. Additionally, hardened pattern models function like learned reflexes. These can handle routine situations with System 1 speed while remaining transparent and modifiable. As the system learns from user patterns, it becomes more efficient at anticipating needs and preparing responses. But it always operates within the bounds of what users can understand and predict.
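The dispatch order implied by that ensemble might look roughly like this; all classes here are illustrative stand-ins, not AOS internals.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    signature: str
    payload: str

class TemporalSpecialist:
    """A 'cortical region' for one domain, e.g. schedules and routines."""
    def can_handle(self, s: Stimulus) -> bool:
        return s.signature.startswith("schedule:")
    def infer(self, s: Stimulus) -> str:
        return f"temporal model handles: {s.payload}"

class Neocortex:
    """Stand-in for the large multimodal model (System 2)."""
    def reason(self, s: Stimulus) -> str:
        return f"large model deliberates over: {s.payload}"

def respond(s: Stimulus, hardened: dict, specialists: list,
            neocortex: Neocortex) -> str:
    # System 1: hardened patterns answer routine situations instantly.
    if s.signature in hardened:
        return hardened[s.signature]
    # Cortical regions: domain specialists handle structured subproblems.
    for specialist in specialists:
        if specialist.can_handle(s):
            return specialist.infer(s)
    # System 2: only genuinely novel situations reach the expensive model.
    return neocortex.reason(s)

hardened = {"weekday_0630_kitchen": "start_coffee"}
print(respond(Stimulus("weekday_0630_kitchen", ""),
              hardened, [TemporalSpecialist()], Neocortex()))
print(respond(Stimulus("novel", "guests arriving early"),
              hardened, [TemporalSpecialist()], Neocortex()))
```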

The Comprehension Window

An important aspect of AOS is ensuring that all automated behaviors remain within what I call the "comprehension window." This is the range of behaviors and anticipations that users can intuitively understand and predict based on their own demonstrated patterns. This isn't an afterthought or policy overlay. It's a core architectural requirement that prevents the system from becoming alien or unpredictable.

When a light turns on before someone enters a room, they should immediately understand why it happened based on their own past patterns. When the system adjusts temperature or displays information, the logic should be transparently derived from demonstrated user preferences. This prevents the unsettling experience of living with intelligence that feels autonomous or disconnected from user intent.
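One way to make that requirement concrete is to treat the comprehension window as an explicit gate: no behavior is automated unless it can be traced back to enough of the user's own demonstrations. A toy sketch, with an illustrative threshold:

```python
def within_comprehension_window(candidate: str, demonstrations: list,
                                min_demos: int = 3) -> bool:
    """Allow an automation only if the user has personally demonstrated
    the underlying pattern enough times to recognize it as their own."""
    return demonstrations.count(candidate) >= min_demos

history = ["lights_on_entry_hallway"] * 4 + ["preheat_oven_1730"]
print(within_comprehension_window("lights_on_entry_hallway", history))  # True
print(within_comprehension_window("preheat_oven_1730", history))  # False: not yet traceable
```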

Current AI systems often violate this principle by making predictions or taking actions that users cannot easily trace back to their own behaviors. This creates a phenomenon where systems appear to understand user intent better than users understand it themselves. This crosses the boundary from helpful to unsettling. Advertising systems exemplify this problem by making accurate predictions based on data patterns invisible to users. Conversely, when AI systems make incorrect predictions, they become hindrances rather than helpful tools. This breaks user trust and workflow.

Strategic Positioning

The timing for AOS aligns with major technology companies' strategic investments in environmental intelligence. Both Meta and Apple recognize the importance of environments that understand and respond to human presence and intention. Meta demonstrates this through its metaverse vision, while Apple shows it through its ecosystem approach to seamless integration across devices. This recognition appears clearly in their AR/VR initiatives, where environmental awareness enables new interactions. AOS aligns with these strategic directions but offers a fundamentally different approach.

Unlike current platforms, AOS is designed with a principled stance against excessive external agency. The system functions more like a neural substrate to its controller, whether human or AI assistant. It operates at a lower cognitive level, similar to how the motor cortex executes intentions without developing its own goals. This architectural choice ensures intelligence serves as a direct extension rather than developing its own agenda or priorities. By reducing external agency in the computing environment, AOS enhances human capability without creating dependencies on artificial personalities.

This approach dissolves the boundary between user intent and system capability. Traditional smart homes and AI assistants require users to learn interfaces or adapt to digital personalities. AOS eliminates this barrier while remaining compatible with existing AI systems. Because it's designed to operate as a neural substrate rather than an autonomous entity, it can enhance any AI interface without creating conflicts of control.

Whether someone wants to interact directly with their environment or delegate decisions to Siri, Google Assistant, or Meta AI, AOS amplifies both the user's capabilities and their chosen AI's effectiveness. It provides rich environmental context, local processing power, and seamless device integration. Users who prefer human-centric control get amplified capabilities. Those who prefer agent-driven experiences get more capable agents.

This creates a win-win dynamic where AOS strengthens existing vendor relationships rather than forcing users to choose sides in AI ecosystem wars. The environmental intelligence approach makes any interaction model more effective without requiring users to change their fundamental preferences about how they want to engage with AI.

What's Next

This vision has been brewing for over a decade. I first started thinking about these ideas in 2012 when I was working on Alexa and Echo at Amazon. I remember conversations with the lead product manager about creating environments that truly understood and extended human intelligence rather than just responding to commands. For 13 years, I've watched the industry take a different path.

I've spent those years building pieces of this vision, hoping that someone would eventually put it all together. But Amazon, Apple, and Google never got there. The industry kept doubling down on cloud-dependent agents, vendor-controlled personalities, reactive smart homes, and fragmented experiences. There was an unwillingness or perhaps inability to build devices that could understand and stitch together the necessary context to create truly intelligent and magical experiences. This was largely driven by legitimate privacy and security concerns about processing intimate data in the cloud.

Reading Enchanted Objects in 2014 crystallized these frustrations and gave me language for what I believed technology could become. Rose's vision of invisible, ambient intelligence that extends human capability rather than demanding attention felt like the missing piece. But the technology wasn't ready, and the industry was moving in the opposite direction.

The fundamental constraint has now been resolved. Local hardware has become powerful enough to handle sophisticated AI processing. This eliminates the privacy concerns that previously prevented context-rich experiences. However, the companies that could build this solution have become incumbents with vested interests in maintaining the current fragmented approach. They are constrained by existing ecosystems and business models built around cloud-dependent services and vendor lock-in.

This situation creates a unique opportunity. The vision would have been impossible even five years ago without substantial funding and resources. The advancements in AI have changed the leverage available to individual developers. Local models can now run sophisticated reasoning on consumer hardware. The technological barriers that once made this a billion-dollar enterprise problem have been eliminated.

In February, I quit my job and spent the last few months deciding what came next. Early retirement was certainly an appealing option. But after 13 years of thinking about this problem, I realized that, whatever other professional/career aspirations I pursue, AOS was something I had to build. It's time to bring this vision to life. I have a new office and new hardware to experiment with. I'm dedicating significant time and attention to making this happen.

After all this time thinking about this problem, I can finally approach it with fresh perspective and build the future I want to see. This work represents an important new chapter for me. I look forward to sharing it with others who recognize the potential for technology that serves humanity.