Sunday, May 10, 2009

Hey, Michael Anissimov, this one's for you!

[The titular reference to an '80s Tommy Lee Jones movie should serve as fair warning to all regarding the serious nature of the scholastic standard applied to the following content by the author.]

As part of the comment thread to Michael's recent blog post I reiterated an earlier e-mailed offer to engage in discussion/debate with him on this and related topics. The pertinent portion of his specific answer was:

I’m afraid your blog post on the matter only has a paragraph or two of original content. I might be open to discuss/debate the Singularity and AGI, but could you perhaps make a longer blog post on your general position


I will not be at all surprised to discover that we also differ as to the precise delineation of "pertinent". :)

Otherwise, one can't ask fairer than that. Forthwith, a summary of my general position on matters Singularity and related.

I suppose I should begin by noting that all of my interrogatives and premises are founded upon my amateur understanding of the tenets and principles that underlie the philosophy and practice of strategy as codified by Sun Tzu (I particularly recommend the translations developed by Gary Gagliardi). I have found this to provide a structure that relies upon neither mysticism nor higher mathematics to attain an understanding of any topic I have considered in light of said principles. For the uninitiated, these principles and axioms are actually better applied at the individual level, despite their being couched in language intended to intrigue a potential monarchical employer. By way of example, the primary principle upon which all else is predicated is that of protection and advancement of position, as I first wrote about here.

I suppose my first examination of Singularity and AGI would be this post titled Strategy for a Singularity Model of Economics, which is better understood in light of an earlier post titled Future Ground. The Singularity post garnered additional interesting commentary at the time here and here. I don't believe anyone can engage in a realistic discussion of these two issues (Singularity and AGI) without at least a casual examination of the economic aspects they inherently entail.

This seems a reasonable point to acknowledge that I am indeed one of those heretics who believes that AGI as such isn't particularly necessary to the possibility of a Singularity event. With all due respect to Vernor Vinge and Ray Kurzweil, and stipulating to their basic (OK, Ray's doorstop hardly merits the accolade "basic", whatever one's opinion of the contents) conjecture regarding AI/AGI, I regard the Singularity phenomenon as an ever-receding point of speculative human understanding of the physical sciences beyond which no further meaningful extrapolation is possible, pending additional advancement of scientific understanding. While the development of operant AGI certainly meets that qualification, I contend that it is not the only potential development that does so. Perhaps for this reason I seem less susceptible to the potential for threat that AGI certainly entails than Michael does (though I further suspect this may be at least as well explained by the differences in our respective ages and personal experience referents).

That all said, I also appear to differ from Michael as to the degree of willingness to advance towards ... what to call it, precursor AGI technology? Limited AI, perhaps? For the moment, at least, I will use the acronym LAI to indicate an artificial intelligence that specifically lacks the self-improvement capability of what Stephen Omohundro called a "self-improving system". To bring this initial discussion right on point, the recent DARPA solicitation over which Michael and I take contrasting points of view seems to me to be specifically targeted at developing LAI types of intelligences rather than an AGI entity (from the solicitation: "... and create tools to engineer intelligent systems that match the problem/environment in which they will exist.").

I submit that such entities, even should they prove developable given the limitations of current bio-science technology (I would argue not ... yet), would not possess the mechanism to "improve" themselves, and that a human developer would organise their construction such that it could not be altered without destroying the original construct's cognitive capabilities - essentially forcing a would-be bio-hacker back to developmental square one - if only as a means of copyright/trademark and liability protection.

See what I mean about economic issues being unavoidable to discussion of this topic?

Another aspect of Michael's position I find particularly troublesome is his willingness to argue that certain people are simultaneously capable of designing bio-weapons from their personal nanofactory (the other general topic of contention between Michael and myself) yet are too ignorant to comprehend the existential threat to themselves inherent to their manipulations. This logically questionable assertion seems to be the principal basis for his proposed regulatory scheme I treat so rudely here.

It is only proper to note that Michael and I don't appear to actually differ all that much as to the potential for threat and advancement entailed in both of these technologies. Our principal points of difference seem to revolve around their separate and combined potential for deliberate misuse, the mechanism most likely to successfully inhibit that, and the likelihood that LAI technology necessarily leads to AGI (leaving unsaid my own serious doubts that AGI is even practicable at all, at least in the form Michael, Kurzweil and Vinge propose it to take).

Regarding this last point, I contend that AI technology is most likely to achieve practical expression as some "enhancement" technology to existing human biology rather than as a stand-alone entity. Leaving the substance of the argument for another occasion, I submit that should this prove to be the actual course of events, then AGI necessarily becomes "human" and the existential threat source remains unchanged (not to be confused with unaltered). Simply put, adding capability all around (I did say "simply") doesn't offer much, if any, practical change to the presently understood threat environment, does it? The pre-existing mechanisms for inhibiting actualisation of threats would still basically apply, given the spectrum of potential within which each individual would enhance.

I see I have not yet mentioned the more-than-a-little-questionable ethical aspects these two topics raise. These are so myriad and application-dependent that I content myself with observing that they are an important aspect of this discussion between Michael and myself (not to mention of actual development of the technology, just by the way) and that we will almost certainly contort ourselves into logical absurdities attempting to confront them. But not today.

It does not go without saying that, like any other technological advancement recorded in human history, there will be inequities and dangers inherent to the process of development and dispersion of said technology throughout humanity. While there may in fact prove to be some aspect unique to either technology under discussion here that poses a truly existential threat to the human species, I confess I haven't seen anything plausibly proposed that doesn't seem to have a near-analog in human history (and thus a practical near-analog of a solution). Such being the case, I associate myself with J. Storrs Hall's position regarding the concerns under discussion herein (hard to go too far wrong there, I think), with the proviso that I remain open to Michael's subsequent persuasions on the matter.

Over to you, Sir, I think.
