Saturday, January 22, 2011

More On "AI" And Getting It Right

In comments, Michael Anissimov asked my opinion of a paper written in 2008. Unfortunately, the linked address has an error somewhere. Pending a successful resolution to that hiccup, I want to take the opportunity to comment further on other issues raised in Michael's post as a whole.

Michael's concern over the ultimate potential of overt threat from AI is entirely valid, and of such extreme potential that it ought rightly to be one of the - I'm completely guessing here - say, top five to ten issues requiring ongoing resolution (that is, each succeeding iteration of AI developmental design should have this concern reliably resolved). Sticking with my firearms analogy from the earlier post: just as each successive gun design must have a functional safety mechanism as an intrinsic part of the design, so too should that mechanism be tested to work in the new design, irrespective of its previous success in other weapons. Stretching the metaphor more than a bit, any successive AI design must demonstrably address the potential for independent, threatening action that is inherent in any intellectually independent actor's capabilities. As Michael puts it:

It will be easier and cheaper to create AIs with great capabilities but relatively simple goals, because humans will be in denial that AIs will eventually be able to self-improve more effectively than we can improve them ourselves, and potentially acquire great power. Simple goals will be seen as sufficient for narrow tasks, and even somewhat general tasks. Humans are so self-obsessed that we’d probably continue to avoid regarding AIs as autonomous thinkers even if they beat us on every test of intelligence and creativity that we could come up with.


Having at least some degree of familiarity with the military and its technology, I differ just a bit with this assessment. I think it more likely that, within a DARPA-like environment, AI will be developed following the "stovepipe" development model: the various applications of AI will be the defining factor guiding development, and the various commands (the particular branch, or sub-division of a given branch, of service) will tend to differ as to definition and orientation (ship- or aircraft-mounted? fixed or self-propelled? support or combat arms?). This set of factors alone will suffice for a multitude of simultaneous and near-independent development tracks for AI to follow, and it is only one example from a long list of development efforts publicly under way as I write.

"Simple goals" and "narrow tasks" are the basic metric of any technology's fundamental development process. Sorry old son, initial AI development isn't going to be much if at all different, if only because we humans don't have a better alternative process to follow (if we did, I promise you, those flinty, skinty financial types in industry would make certain we did it that way instead :)). Here we see that Michael and I are principally basing our individual approach to this problem (threat potential) from entirely opposite ends of the capability development gradient. I simply believe that it cannot be successfully addressed prior to a particular capability's initial design and development process - but absolutely should be an intrinsic part of that process. If I'm reading him at all correctly, Michael seems to think this potential problem needs be corrected for before AI development gets very much further along than it already has.

I do think that "self-obsessed" is a bit much. Can we agree that the lack of a more successful model would serve to make the point equally well? :) Attempts at humor aside, humans don't have an alternative thought process against which to test and compare our intellectual assumptions. Indeed, I vaguely remember reading someone suggest that one of the arguments in favor of AGI development is precisely to create an entity with which to do so. We are quite good at imagining how such a mind might work and how it/they might express their thoughts and beliefs; but, life-long sci-fi fan that I am, I know imagining isn't really quite the same thing, and it is a slender reed indeed to base such a crucial result on. At the moment, though, I confess I lack any improvement to suggest instead.

Michael also said:

Intelligence does not automatically equal “common sense”. Intelligence does not automatically equal benevolence. Intelligence does not automatically equal “live and let live”.


So, you, me and Sun Tzu all agree on this point at least. :)

What I find most wanting in any discussion about AI/AGI development is the complete lack of any sort of consistent context within which to discuss and debate the capabilities we all seem to generally agree contribute to the designation of "AI" (a context centered on the presumed viewpoint made possible by an AI/AGI intellect's capabilities). That, it seems to me, is what Michael refers to in the paragraph quoted above. Not all human societies share the stipulated sentiments, and those that do don't assign them equal importance. Perhaps most distressing to any discussion of AI development and potential human interaction is the widespread inconsistency with which the above concepts are applied within any given human society now present among us. The inconsistency of it all drives us to extremes; why wouldn't a poor AI do likewise?

And therein lies the challenge, doesn't it?

One aspect of AI development I'd like to read more about from Michael is how, and by what means, early proto-AI constructs will be adapted to human augmentation, and what that experience might teach us as regards AI/AGI potential for threat. Pace Michael, I'm not suggesting a toaster interface; rather something in the same general category as the robotic mechanisms and powered suits already being developed (remotely operated aircraft, vehicles and bomb-disposal devices; mechanical suits for lifting heavy containers, equipment or pack loads; etc.). If a human can electronically as well as physically interact with such devices now, what might be possible from a linked network of near-AI-capable devices, human operators, and further external communications and data resources? This seems to me a more likely first-contact scenario between purely human society and artificial intellects. One simple example to start off with:

Soldiers experienced with an operational environment that permits near-instantaneous communication with data-retrieval sources, along with cooperating independent units (either other soldiers or purely robotic in nature), while performing a road march under active combat conditions - pick any of numerous examples from recent memory in Iraq - having to make the abrupt transition back to societal conditions equivalent to present-day US norms.

Now, consider the effect on such a soldier returning home on leave from his engineering unit deployed to the far side of Luna. Presumably the lack of active, intelligent opposition can be substituted for by the natural hazards offered by the "normal" lunar environment.


Neither scenario strikes me as an at-all-unlikely possibility we will have to confront in the coming decade or so. The lessons we learn from circumstances much like those above (augmented humans interacting in a morally and ethically correct fashion with unaugmented humans - and vice versa, which is of at least equal importance) will, I think, greatly influence the direction we take in trying to develop an AI/AGI intellectual ethos and moral code that treats humans as something to be valued rather than eliminated. How say you, Michael?

That there is a context-critical distinction between "risk" and "threat" doesn't mean the latter isn't important. The distinction influences the manner and means we might best employ to achieve a harmonious outcome, but the simple existence of an acknowledged potential for threat should not be taken as a reason to stop advancing our understanding and practical experience in AI development.
