Thursday, May 22, 2008

Ok, I'll bite

Al Fin asks the tautological question: "Can the Singularity save us from ourselves?" What follows is my attempt to answer as fully as I'm able, within the limits of my understanding of the issues and concepts involved.

The abstract concept of a Technological Singularity (TS) was made most famous in the recent past by inventor Ray Kurzweil. The concept has several overlapping meanings, but I like George Dvorsky's definition best: the Singularity is a blind spot in our predictive thinking.

I personally define the Technological Singularity as that point in human technological development beyond which we do not currently possess sufficient knowledge upon which to base an extrapolative prediction. I certainly appreciate the evocative imagery of Mr. Dvorsky's proposition, not to mention its economy, but I believe the concept of a singularity is too complex to be adequately captured in so brief a phrase.

For one thing, a TS must be regarded as a moving target. As our ability to understand the technological processes that could lead to a singularity increases, the point in time regarded as TS onset must be pushed further off into the future. Remember, the TS is that point in our technological development beyond which we can no longer extrapolate a further possible advance (or even say with any assurance what probable effect(s) might result). This doesn't mean we can't guess, of course (engineers even have a technical term for doing so: the W(ild) A(ss) G(uess)), but that isn't quite the same thing.

For another, it isn't entirely clear (to me at least) that a TS is necessarily a deliberate objective at all. Rather, it seems to me that a TS is a boundary of sorts, and specifically one to be overcome. TS is a useful shorthand for describing the current limits of human technological understanding, but postulating it as some sort of objective or achievement is, I think, misleading.

The Singularity is most often seen as a threshold into ever-accelerating change precipitated by the development of a machine intelligence with the ability to design its own cognitive enhancement--something of a runaway positive feedback cognitive entity. This development is often referred to as the "tipping point," the point of no return.

Certainly, Artificial Intelligence (AI), whether biological or mechanical, is one of the more common examples of a TS trigger. Personally, I don't think it any more likely a possibility than a number of other advances. Aubrey de Grey's SENS theory being realised is another example, and one that doesn't require any sort of AI development as part of the realisation process withal.

I do agree that the "positive feedback" you refer to is a necessary aspect of a TS becoming a sustained process rather than a series of individual events occurring serially.
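
To make that distinction a bit more concrete, here is a toy sketch in Python, entirely my own invention with made-up numbers, of why positive feedback implies a tipping point. It is not anyone's model of machine intelligence; it only illustrates the difference between improvement that compounds and improvement that fizzles:

```python
# A toy model of "runaway positive feedback". Not anyone's actual
# theory of intelligence, just an illustration with invented numbers.
# Each step, a system gains capability in proportion to the capability
# it already has (the feedback term), minus a fixed overhead (drag).

def simulate(capability, feedback=0.1, drag=0.05, steps=50):
    """Return the capability trajectory over the given number of steps."""
    history = [capability]
    for _ in range(steps):
        # Positive feedback: the size of the gain scales with capability.
        capability = max(0.0, capability + feedback * capability - drag)
        history.append(capability)
    return history

# The tipping point sits where feedback * capability equals drag
# (here, at capability = 0.5). Start above it and growth compounds
# without limit; start below it and the process fizzles out.
print(simulate(1.0)[-1])   # above the threshold: runs away
print(simulate(0.4)[-1])   # below the threshold: stalls at zero
```

Whether any given attempt lands above or below that threshold is, of course, exactly the sort of thing we can't predict from this side of the boundary.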

Skipping over rather a lot (for discussion another time perhaps):
Rather than a unified, worldwide singularity, expect a "fractured singularity." Some will build the infrastructure and prepare the components in a sustainable way. Most will not. The long-term survivability of TS may depend upon early secrecy. TS may have many false starts, aborted revolutions. Perhaps we can learn from early mistakes in order to build a better singularity?

What do you think?


As I stated earlier, I don't regard TS as an event so much as a process in which we are more or less intentionally involved. I would not be at all surprised if all of what you suggest were to prove crucial aspects of TS development at various points in the process. I think it has to be accepted that we simply aren't capable of predicting what activity might be important to progressing towards TS, although certain generalities do seem more plausible than not.

Is TS inevitable?

A qualified "Yes", I think. Since TS is at least a boundary measurement of the extent of our understanding of matters technological, I suggest that the concept is an inherent aspect of human nature.

Is TS necessary?

Is thinking? See above.

Is TS sufficient?

As a mechanism for measuring understanding, I think so.

Is TS the end, or a means to the end?

In light of my previous answers, I think TS is a necessary (or at least useful) tool in the advancement of human understanding and capability.

Can TS save us from ourselves?

Can our hammers or heart monitors? TS is a measure of our capabilities, in its most refined form arguably of our very selves. As such, it can no more "save us from ourselves" than could any other tool of our creation. In the end, we are responsible for our own outcomes, both individually and collectively. TS is one of our more slippery tools, but one of our most profound as well, I hope.

The last thing humans need now is yet another religion that feeds into apocalyptic visions. We have enough apocalyptic visions as it is without slipping that far into anti-rationality.

What kind of society can give birth to TS, and engage symbiotically and sustainably with TS into the long term? We don't know, but we can give it our best guess. While working on the foundations of TS, we need to work toward creating that kind of society.


What you said, brother.

Update 5/23: This is now crossposted at Future Blogger.
