Principles

I've been reflecting more on Able. While I still believe everything I wrote about privacy in my last post is true, I've been contemplating its role in the vision I'm attempting to articulate.

The key question: Is privacy a leading feature, or is it simply a necessary foundational feature, given how I think computing should look?

More and more, I think it's the latter. It's a counterbalance to everything else. So what is "everything else"?

These are my working Principles of Natural Advanced Computing:

  1. Pervasive, but not invasive. This is not just "Here when you want it, gone when you don't." Instead, it's "Gone by default, here on-demand."

    We are social animals. But also tribal animals. Creating barriers between humans sows distrust and discord. We can mitigate this simply by choosing a sane default that aligns with our deeper nature.
  2. An amplifier, not a replacement. Tangibly, this means technology should take the shape of magical tools, rather than independent agents. As soon as something has the trappings of true autonomy, accountability fades. This seemingly superficial distinction in the form of technology may have a profound impact on how technology evolves.

    Uncle Ben used to say, "With great power comes great responsibility." But I argue it goes further: with great power comes great accountability. The primary difference between these is that responsibility can be shared, but accountability cannot.  

    The only way we can ensure technology does not "run away" from us is to ensure it is limited by the accountability mechanisms we can apply to a person. This ensures the biologically rooted social machinery we've rapidly acquired over the last 300,000 years (including the most recent development: common law) can keep pace with technological advancement.
  3. An extension of mankind's natural instincts. Beyond the reasons cited above, this ensures technology remains accessible to people of all backgrounds. 10,000 years from now, someone should be able to happen upon our technology and mostly understand how it works (assuming it's still functional).
  4. Bounded by our senses. We should always be able to sense and comprehend the direct results (and ideally indirect results) of our actions. This not only ensures that it remains instinctive (since humans figure everything out through modeling, trial, and error), but also that we remain connected to the results of our actions—a necessary ingredient for accountability.
  5. Durable. It must outlive any single human, or company, or country. This ensures we do not take a strong—perhaps existential—dependency on something that can disappear in the next bear market, international conflict, or global catastrophe.
  6. Absolutely trustworthy, by limiting trust. We must remain paranoid, and that paranoia must extend into the core of our technology. The systems we build must assume adversarial activity. They must assume that different people have fundamentally different incentives. They must assume that our core tendencies toward tribalism are so deeply rooted in our evolution that overcoming them is pointless. Instead, those tendencies must be integrated into the systems themselves.

Each of these probably deserves its own post, if not its own book. There may be more. But these are the few that have stayed near the surface since I started thinking about this problem eight years ago.

Elements of privacy are vital to each of these, and especially the last one. But privacy is a prerequisite, not the lead.