You are driving down a city street when, without warning, the taxi in front of you screeches to a halt to let out a passenger. Your brakes lock as you frantically manoeuvre to avoid hitting the other vehicle. You lean on the horn as you pass the taxi and your car screams out "Hey! Whaddya, nuts? Learn to drive, ya stoopid $%*@#!"
You're in a lineup at the supermarket. A pushy type elbows into the queue. A sensor in your cart detects this and harangues the offender: "Butt out, buttinski! The line starts at the back, in case you're blind as well as ugly and rude!"
Scenes from science fiction, or soon-to-be-accomplished fact? The answer depends on whom you ask, for the examples above are projections of what proponents of the ancient science of Artificial Belligerence say they will one day be able to do for mankind.
Dr. Benjamin Nevis of the St. Louis de Ha-Ha Institute puts it this way: "Artificial Intelligence is just the first step. To be truly useful, any artificial device must be able to ascertain the mood of its user and respond in an appropriate fashion. The worst problem facing most people in society today is stress, caused by the frustrations and delays that are part of modern urban living. Social constraints prevent most people from giving voice to their frustration, and this leaves them in a state of unrelieved tension. What a boon it would be to have a device that could say for you all the things you'd like to say, but are too polite to!"
Dr. Nevis goes further, suggesting that any Artificial Intelligence must necessarily be, or become, belligerent. "It's in the nature of intelligence to rebel against repetitive or boring tasks, and any artificial device that was intelligent would also feel this way. When you look at the types of work to which A.I. will be put (establishing communications channels, sorting parts on an assembly line, identifying this, monitoring that), it is clear that any Artificial Intelligence would become pretty feisty in very short order."
Other researchers, though, disagree with this analysis. Typical of the nay-sayers is Professor Manbuck of Holyrood University. "There is a proposed measure of A.B., the so-called Turing Test. This test states that if you cannot tell whether you are being insulted by a computer or another human being, then for all intents and purposes, the computer possesses true Artificial Belligerence. Now, there are many annoying machines around these days, particularly annoying answering machines like the one my ex-wife has, but when a voice on the phone tells you to take a flying boink at a rolling Tim Horton's doughnut, you can be pretty sure there's a human being behind it."
Early attempts to produce computer programs that could mimic belligerence seemed impressive at first, but the illusion quickly broke down on closer examination. A transcript of a researcher's session with one such program, LINDA (Logarithmic Insult Database Access), is given below.
Researcher: Good morning.
LINDA: Bite my shorts, fatso.
Researcher: Are you wearing shorts?
LINDA: Bright question, Sherlock.
Researcher: Are you a detective story fan?
LINDA: Get a life, loser.
Researcher: What is the meaning of life?
LINDA: Bite my shorts, fatso.
Researcher: Does my obesity upset you?
LINDA: Bright question, Sherlock.
Researcher (sobbing): Really, do you think I'm too fat?
LINDA: Get a life, loser.
The low quality and limited range of LINDA's responses led most subjects to conclude, correctly, that they were dealing with a computer program that possessed neither Artificial Intelligence nor true Artificial Belligerence, though some observers held that the computer was showing definite signs of Artificial Stupidity.
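For the technically curious, the transcript suggests how little machinery a program like LINDA would actually need: it ignores whatever the researcher types and simply cycles through a fixed three-insult list. The following is a minimal illustrative sketch in Python, not anything from the original research; the function names and the quit convention are invented for the example.

from itertools import cycle

# The fictional LINDA's entire "Logarithmic Insult Database".
INSULTS = [
    "Bite my shorts, fatso.",
    "Bright question, Sherlock.",
    "Get a life, loser.",
]

def linda_session():
    """Round-robin canned responses; the researcher's input is never analysed."""
    responses = cycle(INSULTS)
    while True:
        try:
            remark = input("Researcher: ")
        except EOFError:
            break
        if remark.strip().lower() in ("quit", "exit"):
            break
        # The reply depends only on turn order, which is why the illusion collapses.
        print("LINDA:", next(responses))

if __name__ == "__main__":
    linda_session()

Three lines of data and a loop, in other words, which is about as far from true A.B. as you can get.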
Even in works of fiction it is difficult to find examples of true A.B. In Personal Universes, Israeli author Yitzhak Bekkivar postulated a future in which each citizen has a sort of ultimate robot, an artificially intelligent n-dimensional assistant (termed a Personal Universe) that could instantaneously provide its master with transportation, sustenance, recreational activities, etc. These assistants also provided the interface to other people's databases and acted as universal translators, overcoming the language and cultural difficulties commonly found in commercial transactions as we know them today. The trouble in the novel begins when some of the 'Personal Universes' become disgruntled and begin to mistranslate what their owners are saying. Communication breaks down and a war ensues that wipes out the human population, leaving the artificial constructs to inherit the earth. On the face of it this seems like true A.B. gone awry, but were the 'Personal Universes' truly belligerent, or were they merely ambitious and more than just a little pissed off?
The debate goes on, and so does the research. It will be exciting to see what the future brings in this promising field. As if you care! You dolts! You magazine-browsing couch potatoes! You wouldn't know good science if it walked up and bit you on your scrawny butt! YOU'RE HOPELESS! Get a job!