Broken

When something is big—overwhelmingly big—the best place to start is with a simple description of the perceived problem.

Modern computing is broken.

In some ways, it changes too quickly. Through ingenuity and sheer brute force, we've seen leaps in artificial intelligence recently—especially generative AI. The release and public availability of GPT-3 and its descendants have shown us just how much of the work we do can probably be done for us. But we have yet to fully consider the impact of a feedback loop between models-trained-on-humans, humans-trained-on-machine-generated-content, and models-trained-on-humans-trained-on-machine-generated-content.

At best, continuing to train systems like OpenAI Codex on code that was generated by previous generations of OpenAI Codex may result in a gradual degradation of quality if humans don't keep doing the ongoing work of filtering out what isn't valuable.
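That feedback loop is easier to see in a toy model than in prose. The sketch below is not any real training pipeline, just a crude stand-in under invented assumptions: each "generation" is fit only to samples drawn from the previous one, with statistical spread standing in for the diversity (and, loosely, the value) of what gets generated. Re-fitting from a finite sample loses a little spread on average, and with no human filter to replenish it, the loss compounds.

```python
import random
import statistics

# A toy stand-in for recursive training: generation N is "trained" (here, a
# simple Gaussian fit) only on output sampled from generation N-1. Spread
# (stdev) is our crude proxy for the diversity of generated output. All
# numbers are invented; the sample size is kept small to make the effect
# visible quickly.
random.seed(42)
samples = [random.gauss(0.0, 1.0) for _ in range(20)]  # generation 0: "human" data

for gen in range(1, 501):
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    # The next generation only ever sees the previous generation's output.
    samples = [random.gauss(mu, sigma) for _ in range(20)]
    if gen % 100 == 0:
        print(f"generation {gen}: diversity (stdev) = {sigma:.4f}")
```

Run it and the spread drifts toward zero. Nothing dramatic happens in any single generation, which is exactly what makes this kind of degradation easy to miss.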

But at worst, the fibers of what makes us human are tugged at until the species collapses. What makes art interesting? In a world where computers can generate anything you can imagine and much more that you can't, what is the value of art? Does it become more about the story behind the piece than the piece itself? About the artist themself?

And what about the perception of agency, or responsibility, or accountability? If we delegate decisions to our robotic servants, how long before they become our masters in ways we can't foresee or even understand? Some argue that technology has already done this. Now imagine that technology can predict, adapt, decide, and exert will in every corner of the physical world.

In other ways, computing changes too slowly. Because of its inherent complexity, it has grown incrementally. But its downstream impact has created an environment that demands more than continued incremental improvement can support.

We face a privacy crisis—one we are attempting to solve through regulation. But that will always be subverted unless privacy is at the very core of computing. If you try to solve it at that core—beneath the regulations, and the contracts, and the apps, and the operating systems—down where hardware might enable the concept of data ownership at the lowest levels, you'll find that every other piece of the computing experience needs to be revisited as a result.

And if you take that to its full conclusion, where data is owned, private by default, and by default carries some inherent value (necessary if only to understand and quantify the risk of private data leaking, which in turn informs a threat model, and which ultimately manifests in the price of use), the resulting computing experience would axiomatically preclude the creation of the kinds of AI systems we're building today, given the sheer volume of data necessary to train them.

Here in these depths is where Project Able begins its journey. It is the ideological enemy of what will exist if we do nothing. Success is all but impossible. But the experiences that might be unlocked could put to shame anything that exists down the current road we are on: a complete inversion of computing as we know it today, where you can navigate a world made up of truly immersive, multimodal, multi-user experiences with full confidence and trust, where data is truly your own rather than locked up with dozens of vendors, where there is a notion of data permanence and history, and where we, the users, are not the product.

That is the future I choose. So no matter the odds, I will fight for that future for as long as I am able.