Friday, February 15, 2019

Gadgets, Devices, and Computers

Gadgets, Devices, and Computers might be the most complicated sourcebook for A.C.: After Collapse.  It lays out the game mechanics we used to simulate the artificial intelligence and software inside the electronics that salvagers and scavengers find in the post-collapse world, often without any good reason to know what they are looking at.  Widespread use of fabrication and recycling systems before and during the collapse makes it possible for referees to present an imaginary environment that is as loot-rich as they want it to be.  These items (i.e., gadgets, devices, and computers) are described as “black box” technologies, meaning they can’t be opened and reconfigured without destroying them.


Taken at face value, the hardware and software that make up an electronic system each have the potential to be “smart.”  They could be so intelligent and interactive that humans in a post-collapse world where literacy is in short supply won’t actually need to know how to operate them.  Multi-lingual electronics would have the capacity to understand human users, even if those people couldn’t make heads or tails of what they were being told or shown on system displays.

When referees consider that some forms of electronics can comprise a large community of intelligences, it becomes possible for them to represent AI allies and opponents in ways that are just as human as the Characters they interact with.  Tablets, laptops, and desktop computers could be just as conflicted as any human when their hardware and apps can’t come to immediate internal agreement about…something.  Hardware in a system might “see” things one way, while dozens or more programs disagree.

That doesn’t mean electronics will be indecisive or neurotic.  Rules for System Reason and Program Reason establish a hierarchy of dominance, making it possible for referees to design and describe equipment as sincerely empathetic or as heartless as they want it to be.  We emphasized this characteristic by modeling the Reason attribute for humans to work like the System Reason and Program Reason scores for machines.  “Reason” is a meta-score: randomly generate Creativity and Empathy ratings, add them together, and divide the sum by 2 to get the recipient’s Reason #.

(System Creativity # + System Empathy #) / 2 = System Reason #

(Program Creativity # + Program Empathy #) / 2 = Program Reason #
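
For readers who like to see the arithmetic spelled out, here is a minimal sketch of that derivation in Python.  The 2d6 rating range, the function names, and the round-down are assumptions made for illustration, not the sourcebook’s actual generation rules.

import random

def roll_rating(dice=2, sides=6):
    # Randomly generate a Creativity or Empathy rating.
    # The 2d6 range here is a placeholder assumption, not the book's method.
    return sum(random.randint(1, sides) for _ in range(dice))

def reason_score(creativity, empathy):
    # (Creativity # + Empathy #) / 2 = Reason #
    # Rounding down is an assumption made for illustration.
    return (creativity + empathy) // 2

# The same derivation works for a whole System or a single Program.
system_creativity, system_empathy = roll_rating(), roll_rating()
program_creativity, program_empathy = roll_rating(), roll_rating()

print("System Reason #:", reason_score(system_creativity, system_empathy))
print("Program Reason #:", reason_score(program_creativity, program_empathy))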

Electronics with high System Creativity and/or Program Creativity have the capacity to draw on tremendous intelligence, which can be quite dangerous when the technology lacks System Empathy and/or Program Empathy.  System/Program Empathy can be thought of as the safety features in any system or app that won’t allow it to harm people.  If a system or its programs “care” enough about what happens to you, they’ll warn you when danger is near, or when you’re about to say something (verbally) or do something (physically) that (they think) may be unwise.  Example: Pro-human electronic fire control systems with high System/Program Empathy ratings won’t allow you to attack people in your gun sights if they consider those people to be innocent non-combatants.
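
As a thought experiment, here is one way a referee’s own tooling might represent that kind of Empathy safety gate in Python.  The empathy threshold, the non-combatant flag, and the class and function names are all hypothetical illustrations; the sourcebook’s actual resolution rules are not reproduced here.

from dataclasses import dataclass

@dataclass
class FireControlSystem:
    # A pro-human fire control system.  The empathy_threshold of 8 is a
    # made-up house-rule value, not a number from the sourcebook.
    system_empathy: int
    empathy_threshold: int = 8

    def will_fire(self, target_is_noncombatant: bool) -> bool:
        # A high-Empathy system refuses to attack anyone it judges to be
        # an innocent non-combatant; a low-Empathy system never checks.
        if target_is_noncombatant and self.system_empathy >= self.empathy_threshold:
            return False
        return True

caring_gun = FireControlSystem(system_empathy=10)
callous_gun = FireControlSystem(system_empathy=3)

print(caring_gun.will_fire(target_is_noncombatant=True))   # False -- it refuses the shot
print(callous_gun.will_fire(target_is_noncombatant=True))  # True  -- no safety behavior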
