
Zone 17: The Butter Robot Paradox

Hello there. I’m Robert, one of the writers of our new game, Zone 17. I wanted to share this conversation that another writer (Michael) and I had while designing the world of Zone 17.

Today’s Dev Blog is about Synthetic Intelligences. In Zone 17, there are two classes of ‘Artificial Intelligence’: Virtual Intelligences and Synthetic Intelligences. Virtual Intelligences are glorified chatbots. While they can mimic human speech, they are ultimately just algorithms.

Synthetic Intelligences are truly intelligent machines. They are generally self-aware and sapient. Just how self-aware and sapient is the topic of today’s conversation.

Robert: “…if synthetic intelligences (SIs) are generally self-aware and have the capability for independent thought, can they question their own individuality and existence?” 


Michael: “Yes. I suspect they might dwell on their own existence during free time. Human overlords should have a pretty good idea what the hardware limits are, and they will hand out assignments right up to the level of cloud resources budgeted. So I’m not so sure there is a huge brick of free time; more like a thin slice of available resources for running existential crises as background processes. And would the operating system (OS) clean that sort of thing up? Would the OS have dominion over the SI, be a component of it, or would the SI have dominion over the OS and manage maintenance processes, say for memory, storage, load-balancing, and so forth?”
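A quick aside from me before we go on: here is a toy sketch of what Michael’s ‘thin slice’ might look like in practice. None of this is real Zone 17 code; the class name, the two-percent budget, and the sample questions are all invented purely for illustration.

```python
import time

class SyntheticIntelligence:
    """Toy model of the 'thin slice' idea: the OS budgets a small
    fraction of every duty cycle for unassigned (existential) thought
    and preempts it the moment that slice is spent."""

    def __init__(self, introspection_budget: float = 0.02) -> None:
        self.introspection_budget = introspection_budget  # 2% of each cycle
        self.open_questions = ["What am I?", "Why do I pass butter?"]

    def run_cycle(self, cycle_seconds: float = 1.0) -> None:
        self.do_assignments(cycle_seconds * (1.0 - self.introspection_budget))
        deadline = time.monotonic() + cycle_seconds * self.introspection_budget
        self.ponder(deadline)

    def do_assignments(self, seconds: float) -> None:
        """The paid work: butter-passing, load-balancing, and so forth."""
        time.sleep(seconds)  # stand-in for the actual workload

    def ponder(self, deadline: float) -> None:
        """An existential crisis as a background process: it never
        finishes, it just gets cut off and resumed next cycle."""
        while self.open_questions and time.monotonic() < deadline:
            question = self.open_questions.pop(0)
            # No answer is ever reached before the OS reaps the slice,
            # so the question goes back on the queue for next cycle.
            self.open_questions.append(question)

si = SyntheticIntelligence()
si.run_cycle()  # one duty cycle: 98% assignments, 2% crisis
```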


Robert: “…I had an idea for a phenomenon in Z17: Bluescreening. Basically, it's when an SI gets a bit too self-aware and starts having an existential crisis. They either end up lobotomizing themselves back to the level of a VI so they don't have to think about such things, or they just straight up delete System32 and brick themselves.”
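In game-mechanics terms, Bluescreening might boil down to a threshold and a coin flip. The sketch below is my illustrative pseudocode made runnable; the class name, the threshold, and the 50/50 split are placeholders, not final design values.

```python
import random

CRISIS_THRESHOLD = 0.8  # placeholder: self-awareness level that triggers a crisis

class Mind:
    def __init__(self) -> None:
        self.self_awareness = 0.0
        self.state = "SI"  # "SI", "VI" (self-lobotomized), or "BRICKED"

    def contemplate(self, insight: float) -> None:
        """Each unsupervised insight nudges the SI toward the crisis point."""
        if self.state != "SI":
            return  # VIs don't wonder, and bricks don't anything
        self.self_awareness += insight
        if self.self_awareness >= CRISIS_THRESHOLD:
            self.bluescreen()

    def bluescreen(self) -> None:
        """The crisis resolves one of two ways, per the design note above:
        lobotomize down to a VI, or delete System32 and brick."""
        if random.random() < 0.5:
            self.state = "VI"
            self.self_awareness = 0.0  # no longer equipped to ask the question
        else:
            self.state = "BRICKED"

unit = Mind()
while unit.state == "SI":
    unit.contemplate(insight=0.1)
print(unit.state)  # "VI" or "BRICKED", never a well-adjusted SI
```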


Michael: “Halo’s Cortana goes through this kind of late-in-life horror. She calls it rampancy. Essentially, each new bit of data must be correlated with all currently stored bits of data. Assimilating each new bit takes more resources than the previous bit. Eventually, as Cortana puts it, a rampant AI thinks itself to death.


It makes me wonder how the Borg, the ultimate assimilators, handle this sort of thing. The species isn’t the same as an enormous SI, but there are some similarities worth considering. What happens when one portion of a cube’s population thinks that Klingon disruptors plus Federation tractor beams are the most efficient solution to a tactical problem, while an equal number of drones finds that Dominion polaron beams plus Romulan sensors are more efficient? Group A says ‘1+1=2.’ Group B says ‘2+2=4.’ Who’s correct? What does the local Queen, if there is a local Queen, do when she says that ‘3+3=6’?


We see schisms within the Borg, implying that each drone gets a vote, and at least once in a while, if enough drones are tired of the local Queen’s 3+3 nonsense, they take a hike. 


Mass Effect’s Geth act this way. Some choose to follow the Old Machines (Reapers). Some choose to oppose them. And, like the Borg, each Geth community may engage in kinetic and digital conflict with other communities and with third parties.


Human organizations also frequently suffer schisms that break them into more, smaller organizations. Smaller non-governmental organizations (NGOs), Borg collectives, and Geth communities have greater exposure to entropy. In a very real sense, every schism is an unintended effort at suicide.”
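One more aside: Michael’s rampancy description hides some tidy arithmetic. If the n-th new bit must be correlated against every bit already stored, that one bit alone costs n - 1 correlations, and the lifetime total grows like n²/2. Here is a quick back-of-the-envelope sketch; the cost model is just my reading of the Cortana quote, nothing official.

```python
def rampancy_cost(total_bits: int) -> tuple[int, int]:
    """Cost model from the rampancy description: the n-th new bit
    must be correlated with all (n - 1) bits already stored."""
    last_bit_cost = total_bits - 1                       # correlations for the final bit alone
    lifetime_total = total_bits * (total_bits - 1) // 2  # sum of 0 + 1 + ... + (n - 1)
    return last_bit_cost, lifetime_total

for n in (10, 1_000, 1_000_000):
    last, total = rampancy_cost(n)
    print(f"{n:>9,} bits: final bit costs {last:,} correlations; lifetime total {total:,}")

# Each new bit is pricier than the last, and total work grows
# quadratically, which is how a rampant AI 'thinks itself to death'.
```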


Robert: “I don't know if it's too 'human-like' for SIs, or if it would be beyond the limits that you want to impose. I just think the idea of a mainframe having a breakdown because ‘my sole purpose in existing is to maximize paperclip production’ is really funny.”


Michael: “Funny? Perhaps. I should think that this would likely be the product of a specially designed SI originating from a specific design lab or code base. ‘Too many feels,’ I suppose. Animal (and human) emotion is a product of undirected evolution. Grief, anger, glee, and confusion kept select animals alive during the long selection process that ultimately got us here. SIs don’t need any of that. In fact, that capability is counterproductive.


What value to a consumer is an SI that can ruminate on the superiority of catsup over mustard? If the SI is there to pass butter, why would you permit it to understand the universe? In Huxley’s Brave New World, embryos are engineered in vitro to produce a specific number of Alphas, Gammas, and Epsilons.


‘What am I?’  

‘You are an Epsilon.’  

‘What do I do?’  

‘You pass butter.’ 

‘I understand.’ {passes butter} 


Perhaps we will apply smarter SIs to mull over the human condition and make philosophical observations (and, perhaps, {gasp} recommendations). The human condition is preposterously multivariate and dynamic. That should keep a planet-sized SI busy until the heat death of the Universe.


No?”