The Black Box

Why AI surprises even the people who built it

A black box system is any system where both inputs and outputs are visible, but the transformation process in between isn’t well understood. A lot of folks don’t know that an AI’s answers can be just as surprising to AI scientists as they are to the rest of us.
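To sketch the idea (this is a toy illustration, not any real model): imagine a function whose inputs and outputs you can print, but whose internals are just a pile of numeric parameters that don't individually explain anything. The `black_box` function and its random weights below are hypothetical stand-ins for a trained model.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Stand-in for a trained model's parameters: real AI systems have
# billions of these, and no single one "means" anything on its own.
weights = [random.uniform(-1, 1) for _ in range(1000)]

def black_box(inputs):
    """Visible inputs in, visible output out; the middle is opaque."""
    # Repeat the inputs to match the weight count, then mix everything together.
    expanded = (inputs * (len(weights) // len(inputs) + 1))[:len(weights)]
    score = sum(w * x for w, x in zip(weights, expanded))
    return "approve" if score > 0 else "deny"

# You can inspect the input and the output...
decision = black_box([0.5, 1.0, -0.2, 0.8])
print("output:", decision)
# ...but reading all 1,000 weights won't tell you *why* you got that answer.
```

The point of the sketch: both ends of the pipe are visible, but the transformation in the middle resists a human-readable explanation.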

Well, they built it, right? How can they not understand what they built? But listen to the leading AI scientists in interviews, and they'll tell you that they are often surprised (and sometimes unnerved) by what their models can do.

Bringing it back down to earth, where the rest of us non-AI-scientists live: an AI developer's relationship to their model's outputs is a lot like your own relationship to any multi-variable system you use at home or at work. Even if you helped design the system, you can't always predict its granular outputs with 100% accuracy.

It’s a compounding issue: Hundreds of raw data inputs can translate to thousands of potential outputs. Past a certain point of complexity, the system exceeds what the human brain can envision on its own. That’s why we build out robust systems in the first place. At that point, you need to run the system, see what happens, and analyze the results retrospectively to build a narrative and improve outcomes the next time.
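To make the compounding concrete: even with the simplest possible inputs (yes/no switches), the number of distinct input combinations doubles with every variable added, so a few dozen inputs already exceed anything a person could enumerate by hand. A quick back-of-the-envelope illustration:

```python
# Illustrative arithmetic only: how fast an input space outgrows intuition.
# Each additional binary input doubles the number of possible system states.
for n_inputs in (10, 20, 30, 40):
    combinations = 2 ** n_inputs
    print(f"{n_inputs} binary inputs -> {combinations:,} possible states")

# 10 binary inputs -> 1,024 possible states
# 20 binary inputs -> 1,048,576 possible states
# 30 binary inputs -> 1,073,741,824 possible states
# 40 binary inputs -> 1,099,511,627,776 possible states
```

And real systems don't take 40 yes/no switches; they take hundreds of continuous inputs, which makes exhaustive prediction hopeless and retrospective analysis the only practical option.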

The same goes for AI, only many times more complex. The behavior of a traditional workplace system might be complicated, but it's understandable with enough expert analysis. Modern AI systems are well past that point of understandability for any single human: a true black box. That's the weirdest (and most exciting) part about this technology. The next time an AI tool truly surprises you, know that you're not alone in raising an eyebrow. That will become a collective experience for all of us as the technology continues to get exponentially more capable.

My advice: Use the tech carefully and strategically, and gut check your outputs every step along the way to ensure you don’t risk your credibility by producing AI-derived products with no defensible narrative.

Side note: If you want to go deeper on the concept of the AI black box (according to scientists and philosophers who have shaped the conversation around AI’s trajectory and risks), here’s my recommended starter reading list:

  • Life 3.0 (Max Tegmark)
  • Superintelligence (Nick Bostrom)