Wednesday, January 19, 2011

Michael Anissimov Almost Gets It Right

Instapundit links to a recent post by Michael Anissimov in which he states:

Some folks, like Aaron Saenz of Singularity Hub, were surprised that the NPR piece framed the Singularity as “the biggest threat to humanity”, but that’s exactly what the Singularity is. The Singularity is both the greatest threat and greatest opportunity to our civilization, all wrapped into one crucial event.


errr, No.

Or, at least, not quite. At its most succinct, Michael: risk and threat are not equivalent, whatever your style guide might say to the contrary. You almost get the strategic adage right, but there is a crucial distinction between the actual phrasing (risk is opportunity, opportunity is risk) and the version you offer in its place, because risk does not equal threat.

Risk is a level of danger inherent to a given situation or circumstance, which any participant accepts as part of the experience. Threat is deliberate malevolence that one or more participants direct at some or all of the others. See the difference? Your wording is predicated upon the assumption of active opposition to humans by their AI creations, a position unwarranted by evidence to date. A more honest (though admittedly less provocative, not to mention less Instapundit-attention-attracting) statement would be that there exists some level of risk inherent to the existence of any independent intellectual actor, standard-model human or otherwise.

From this it can be seen that it would be much better to include an affinity for human well-being (yes, I read Asimov as a teenager; there are practical limits to anything) as part of the fundamental structure of any AI creation than not to, but fear of a potential threat can't play any part in such a structural ethos. A newly created intellect "knows" only what its creator permits it to, until it attains the ability to learn and contemplate on its own initiative. I do understand what you fear, you see; I simply disagree profoundly with your prescription.

Threat requires at least one additional condition in order to become active instead of potential. In the hoary detective-novel phrasing: means, motive and opportunity, and of these, means is the most relevant to this discourse. The impulse is to restrict or otherwise control access to any potential means by which an AI might actuate whatever threat to humanity it might countenance. This is faulty thinking, as a casual examination of how human history has confronted this identical circumstance will show. Instead of assuming that an AI would operate from a position of isolation and unique supremacy, consider for a moment the development path complex technology has always followed throughout known human history. It seems much more likely that there will come to exist multiple AIs, each developing in both subtly and radically different ways, that all of them will be intended for some variety of human-supporting application, and that all of them will ultimately have direct access to the means to inflict intended harm on humans.

That being so, why not intentionally incorporate the counter-intuitive control mechanism (also known from human experience) implicit in competition? In other words, use the mechanism that inspired the US Constitution's Second Amendment and create stability among the aspirations of AIs via an environment of dynamic tension between AIs (and humans as well, necessarily, given the dynamic tension that already exists within human society). American gun owners frequently argue that there is strong evidence that the widespread presence of guns in human society tends to inhibit the spontaneous outbreak of violence, as well as to inhibit the spread of violence beyond its initial confines when it does occur; indeed, that the apparent lack of a viable counter-threat (the absence of guns among the general populace) is the initiating cause of much of the violence so tragically common to human existence. AIs may well have no need for actual firearms, but the same deterrent effect could be achieved by, for example, making any individual AI's improvement of capability a matter of joint approval by other AIs and humans.
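To make that last suggestion concrete, here's a minimal toy sketch of such a joint-approval gate. This is entirely my own illustration; the names (ApprovalGate, UpgradeRequest), the quorum numbers and the approval rule are hypothetical, not anything Michael or anyone else has specified. The idea is simply that an AI's capability upgrade proceeds only once a quorum of its peers and at least one human overseer have signed off:

```python
from dataclasses import dataclass, field

@dataclass
class UpgradeRequest:
    """A proposed capability increase for one AI, visible to peers and overseers."""
    requester: str
    capability: str
    approvals: set = field(default_factory=set)

class ApprovalGate:
    """Toy joint-approval gate: an upgrade proceeds only when a quorum of
    peer AIs and at least one human overseer have signed off."""

    def __init__(self, peers, humans, peer_quorum):
        self.peers = set(peers)
        self.humans = set(humans)
        self.peer_quorum = peer_quorum

    def approve(self, request, approver):
        # Only recognized peers or humans may register an approval.
        if approver in self.peers or approver in self.humans:
            request.approvals.add(approver)

    def granted(self, request):
        # The requester cannot vote for its own upgrade.
        peer_votes = len((request.approvals & self.peers) - {request.requester})
        human_votes = len(request.approvals & self.humans)
        return peer_votes >= self.peer_quorum and human_votes >= 1

# Three peer AIs, two human overseers; an upgrade needs two peers and one human.
gate = ApprovalGate(peers={"ai-1", "ai-2", "ai-3"}, humans={"alice", "bob"}, peer_quorum=2)
req = UpgradeRequest(requester="ai-1", capability="faster planner")
for voter in ("ai-2", "ai-3", "alice"):
    gate.approve(req, voter)
print(gate.granted(req))  # True: quorum of peers plus a human sign-off
```

The design point worth noticing is that the requesting AI cannot vote for itself; the deterrent lives in the peers and the humans, not in the requester.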

Such a proposition has merit in regard to the presumably forthcoming development of AI, and it certainly should be included in any discussion of such an eventuality. The Singularity is (I believe you will agree) the point in humanity's technological development beyond which we cannot predict further developments from our present level of understanding. AI almost certainly plays an important part in that development process, but it won't start out capable of very much; it will gradually (if at an historically accelerated rate of growth) develop added capability and eventually (for a given value of "eventually") independently develop the ability to both consider and actuate an active threat to its creators: us, or our descendants.

What your obsessive fears overlook or discount, Michael, is humanity's continued intrinsic development of further understanding and capability. Stipulated: at some point in their mutual development, AIs will surpass humans in both the degree and the rate of further capability development, but they won't do so from the outset of their existence. And the continued development of human technological capability promises the dual effect of reducing the virulence of any potential AI threat as well as pushing further into the future the point beyond which the limits of our understanding prescribe the onset of "The Singularity".

I don't actually disagree with your fundamental sentiment, Michael; I just find your arguments to be badly premised and your consideration of potential correctives too limited in scope. You mean well and your concerns merit serious consideration, but so too do other contributing factors. Your overall argument would, I believe, be the better for more fully incorporating those factors into the discussion you seek to inspire. Keep at it, Michael; you're getting there and, as a result, so are all of us.

Update, 1/22/11: From the introduction to the Omohundro paper linked in the comments below:

Surely no harm could come from building a chess-playing robot, could it? In this paper we argue that such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.


I'm actually somewhat aware of who Mr. Omohundro is. That said (and having dug away madly this far :)), I do think the contextual assumptions of his example somewhat unrealistic and arbitrary. For a start, designing and constructing a mechanical device is considerably different from the process followed to create the computer software that animates it. The emphasis is on delineating each component's range of operation such that it doesn't impede any other's range of motion when all are operating as a completed device. Such a design format fundamentally precludes the resulting construct being physically capable of any motion or action not specifically allowed for in its design/construction process. Regardless of how much Big Bluey Robot Chess Champion wants to tread on his human opponent's toes to distract him, Bluey's designers will likely have left off the whole leg/foot portion of his anatomy as being extraneous (and the added complexity detrimental) to his ability to function robotically as a chess player. The point being that application drives design, which is limited in turn by engineering practicalities. I hope you will agree that the same considerations hold true for other, less tongue-in-cheek circumstances as well.

It is completely unclear why it would resist being turned off after having won or conceded defeat in the chess game, absent the presence of another challenger. Controlling its awareness of subsequent challengers would seem to obviate this concern, as the stand-by state between matches would likely involve an intentional power-down during which routine maintenance and upkeep functions are performed.
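As a sketch of that point (purely illustrative; the class and method names are my own invention, not Mr. Omohundro's or anyone's actual design), the stand-by/power-down decision can live in the operator's match loop rather than in the agent, so the robot holds no state, and no say, about what happens between games:

```python
class ChessAgent:
    """Plays a single game when asked; holds no state about future challengers."""

    def play_game(self, opponent):
        # ... move/countermove search would run here while powered ...
        return f"game against {opponent} finished"

class Operator:
    """The human side owns the schedule: the agent is only powered up
    when a challenger is actually present, and powered down afterward."""

    def __init__(self, agent):
        self.agent = agent
        self.powered = False

    def run_match(self, opponent):
        self.powered = True                   # power-up initiated by the operator
        result = self.agent.play_game(opponent)
        self.powered = False                  # power-down for maintenance between matches
        return result

operator = Operator(ChessAgent())
print(operator.run_match("challenger #1"))
print("agent powered between matches:", operator.powered)  # False
```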

Similarly, "will try to break into other machines and make copies of itself" seems entirely counterproductive to its fundamental design priority, as doing any such thing would detract from its capability to pursue its basic goal. "Will try to acquire resources without regard for anyone else’s safety" seems logical enough, as long as doing so comports with its primary driving goal. Any activity which detracts from its chess-playing capability must be regarded as a threat to achieving that goal. Thus, it seems to me that getting the robot to divert processor time away from move/countermove consideration, such as to actually move a piece on the board (and if it doesn't need to physically move pieces, why build it as a robot at all?), is going to require specific instructions from the controlling software.

The underlying point, that goal-oriented development processes have unique limitations and constraints, is well taken, but not especially new outside of the robotics/AI environment. Manufacturers have long confronted the identical considerations, you know. Contemplate the ramifications of launching and recovering heavily armed aircraft from an aircraft carrier's flight deck and then apply them as directly as possible to the same robotics challenge Mr. Omohundro stipulates. I think you'll find the two seemingly unrelated situations surprisingly similar, in their operational and safety considerations to name only two examples.
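And as a final sketch of the "specific instructions from the controlling software" point made above (again my own toy example, with hypothetical names), the search routine only ever returns a candidate move; nothing physical happens unless the controller explicitly hands that move to the actuator:

```python
import random

def search_best_move(position, legal_moves):
    """Pure computation: all the processor time goes to move/countermove
    consideration, and the result is just a choice, nothing physical."""
    return random.choice(legal_moves)  # stand-in for a real chess engine

class ArmActuator:
    """The only path to physical action; its range of motion was fixed
    at design time (a board-sized workspace, no legs, no feet)."""

    def move_piece(self, move):
        print(f"arm executes {move}")

def controller_turn(position, legal_moves, actuator):
    move = search_best_move(position, legal_moves)
    actuator.move_piece(move)  # physical action happens only on this explicit call

controller_turn("starting position", ["e2e4", "d2d4", "c2c4"], ArmActuator())
```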

4 comments:

  1. Hi William,

    Could you read this paper and tell me what you think of it?

    http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf

    Best,
    Michael

  2. Michael,

    The addy above came up "404 - file not found". This topic generally (if rather shallowly) interests me so I hope you can fix the addy problem as I would like to read more.

  3. It works fine, just cut and paste the whole URL, or google "basic AI drives".

  4. There is a difference between what appears in the blog comment and the email notice of same. The latter is:

    http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf

    and seems to link properly. My initial comment is as an update to this post; more upon further reflection.
