Inter-Galactic Memo
To: All personnel
Fr: W. Leavitt
Re: A.L.I.S.
9-04-09
Interesting article on PhysOrg today.
The right honourable computer, barrister-at-law (that spelling is British, and it's correct over there).
Here’s the opening grabber:
European researchers have created a legal analysis query engine that combines artificial intelligence, game theory and semantics to offer advice, conflict prevention and dispute settlement for European law, and it even supports policy.
It’s fairly interesting, although I didn’t understand a lot of it. But there are a few troubling items I thought we should look into. Essentially, this is a “sophisticated” experimental computer program designed to take some of the workload off the shoulders of those stalwarts in the legal system. I don’t know about you, but I have doubts about letting a computer program make decisions on legal matters. The only situation I can think of that would be worse would be letting human beings make those decisions. Yikes.
Okay, first objection. “Game Theory.” Do we really want people who write the admittedly hugely complex algorithms for games taking over the legal system? Thankfully, this experiment is being perpetrated over in Europe right now, so we have time to form an underground resistance, come up with passwords and handshakes, and divvy ourselves up into sleeper cells. Viva la revolucion!
Second: Artificial Intelligence? Really? Are we there already? I don’t think so. I’ve written a few novels that involve AIs, and I can tell you they are evil. All of them. (Except Mike, but he’s beyond AI; he’s a FABEC, or Full-Awareness Bionetic Entanglement Computer.) Did we learn nothing from The Forbin Project? On second thought, AIs might be just what lawyers are looking for. Two of a kind, as it were. Here’s an example of how they would help:
Game theory looks at how strategic interactions between rational people lead to outcomes reflecting real player preferences. In the Ultimatum game, for example, two players decide how a sum is to be divided. The proposer suggests what the split should be, and the responder can either accept or reject this offer. But if the responder rejects the split, both players get nothing.
What kind of negotiating is that? We have to be rational all of a sudden? Here’s a big surprise: the Responder almost always accepts the initial proposal. I would too if the only other choice was nothing. To be fair, the Proposer suggests 50-50 most of the time, even though the Responder might have accepted a lower bid. Still, with a system like that, traditional factors like intimidation, coercion, under-the-table favors and proposals, and the ever-popular sex-for-acquittal go out the window. Where’s the fun in that?
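If you want to see just how thin the “strategy” is, here’s a quick back-of-the-envelope sketch of a single round of the Ultimatum game. This is mine, not the researchers’; the pot size and the splits are made-up numbers for illustration.

# Toy one-round Ultimatum game -- illustrative only, not from the ALIS project.
POT = 100  # hypothetical sum the two players are dividing

def play_round(proposer_share, responder_accepts):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if responder_accepts:
        return proposer_share, POT - proposer_share
    return 0, 0  # rejection: both players walk away with nothing

# The proposer offers a 50-50 split, as most do; the responder accepts,
# since the only alternative is nothing.
print(play_round(50, True))   # -> (50, 50)
print(play_round(90, False))  # -> (0, 0): spite costs both sides

Notice that rejecting never pays, which is exactly why “something beats nothing” carries the day.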
I think we can all agree that further inquiry is needed before we get on board with this. But I’ve saved the best objection for last: the acronym for this system is ALIS, for Advanced-Level Information System. (Mine is cooler, don’t you think?) This acronym is pronounced “Alice.” A few of you will instantly see the significance of this. ALICE is the name of the AI that runs the Umbrella Corporation in Raccoon City. She (ALICE) is responsible for letting all the zombies loose in the city and causing three (soon to be four) movies’ worth of mayhem. I’m referring, of course, to the Resident Evil franchise. If it weren’t for Milla Jovovich we’d all be brain-eating dead things by now.
In conclusion, I think I speak for all of us when I say that letting a computer program named ALIS run anything is a big mistake. Legal decisions should be left to the humans. At least they can be bribed.