Saturday, February 26, 2011

A "Natural" Selection For Benevolent AI?

Via Labrat at Atomic Nerds comes notice of this intriguing result:
Ever since Cicero's De Natura Deorum ii.34., humans have been intrigued by the origin and mechanisms underlying complexity in nature. Darwin suggested that adaptation and complexity could evolve by natural selection acting successively on numerous small, heritable modifications. But is this enough? Here, we describe selected studies of experimental evolution with robots to illustrate how the process of natural selection can lead to the evolution of complex traits such as adaptive behaviours. Just a few hundred generations of selection are sufficient to allow robots to evolve collision-free movement, homing, sophisticated predator versus prey strategies, coadaptation of brains and bodies, cooperation, and even altruism. In all cases this occurred via selection in robots controlled by a simple neural network, which mutated randomly.

PLoS Biology: Evolution of Adaptive Behaviour in Robots by Means of Darwinian Selection

[My bold]
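
To make the mechanism in that abstract concrete, here is a toy sketch of my own (not the researchers' code) of the kind of loop involved: a population of simple neural-network controllers is scored on a task, the fittest are kept, and their weights are randomly mutated to refill the population. The two-sensor obstacle-avoidance task, network size, and fitness measure below are all invented for illustration.

    import random
    import math

    # Toy illustration of the scheme described in the abstract: a population
    # of simple neural-network "controllers" is scored, the fittest are kept,
    # and their weights are randomly mutated to refill the population.
    N_INPUTS = 2        # e.g. left/right proximity sensors (invented for this sketch)
    N_OUTPUTS = 2       # e.g. left/right wheel speeds
    POP_SIZE = 50
    GENERATIONS = 300
    MUTATION_STD = 0.2

    def random_genome():
        # One weight per sensor/wheel pair plus one bias per wheel.
        return [random.gauss(0.0, 1.0) for _ in range(N_INPUTS * N_OUTPUTS + N_OUTPUTS)]

    def control(genome, sensors):
        # Single-layer network: each output is tanh(weights . sensors + bias).
        outputs = []
        for o in range(N_OUTPUTS):
            total = genome[N_INPUTS * N_OUTPUTS + o]            # bias term
            for i in range(N_INPUTS):
                total += genome[o * N_INPUTS + i] * sensors[i]  # weighted input
            outputs.append(math.tanh(total))
        return outputs

    def fitness(genome):
        # Stand-in for "collision-free movement": reward controllers that drive
        # the wheel on the side of the nearer obstacle faster, turning away from it.
        score = 0.0
        for _ in range(100):
            left, right = random.random(), random.random()   # proximity readings
            wl, wr = control(genome, (left, right))
            score += (right - left) * (wr - wl)
        return score

    def mutate(genome):
        return [w + random.gauss(0.0, MUTATION_STD) for w in genome]

    population = [random_genome() for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        ranked = sorted(population, key=fitness, reverse=True)
        elite = ranked[: POP_SIZE // 5]                        # keep the top 20%
        population = elite + [mutate(random.choice(elite))     # refill by mutation
                              for _ in range(POP_SIZE - len(elite))]

    best = max(population, key=fitness)
    print("best fitness after", GENERATIONS, "generations:", round(fitness(best), 2))

Whether anything so crude scales up to the behaviours reported in the paper is exactly the open question, but the shape of the loop is the point.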


Not a complete solution, of course, but evidence that a desired trait (like, apparently, an altruistic attitude toward a given group or class of recipients) can be developed as part of the general development process. Deliberately injecting a measure of order into the selection process at regular intervals would seem to be the necessary next step, and we have an example of just such a process ready to hand.

Assuming the fundamentally Darwinian techniques utilised in the selective development of canine and other animal breeds (reinforce the desired results, ruthlessly cull the undesirable ones as soon as they first appear in a given individual) will in fact more or less directly apply to nascent robotic and artificial intelligences, I think attention should soon be paid to developing a logically consistent thought process (a self-reinforcing, circular logic construct) analogous to the one underlying a human moral code, one that does not rely on any external justification. Something along the lines of: good thought/action causes the least harm to the greatest number of humans; the most benefit to the greatest number of humans is good thought/action.
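
For what it's worth, here is how that reinforce-and-cull notion might look as a bare selection rule, in an entirely hypothetical sketch of my own. The harm and benefit measures below are just placeholder numbers, which is precisely the unresolved part discussed next.

    import random

    # Entirely hypothetical sketch of the "reinforce and cull" idea: candidates
    # are scored on placeholder harm/benefit measures, anything causing harm
    # above a threshold is discarded outright, and the remainder are ranked by
    # net benefit to the group.
    HARM_THRESHOLD = 0.1    # cull anything whose estimated harm exceeds this
    POP_SIZE = 20

    def evaluate(candidate):
        # Placeholder measures: in any real scheme these would come from
        # observing behaviour in trials, not from random numbers.
        harm = random.random() * candidate["recklessness"]
        benefit = random.random() * candidate["helpfulness"]
        return harm, benefit

    def select(population):
        survivors = []
        for candidate in population:
            harm, benefit = evaluate(candidate)
            if harm > HARM_THRESHOLD:
                continue                     # ruthless cull: discard entirely
            survivors.append((benefit - harm, candidate))
        # Reinforce the rest in proportion to net benefit to the group.
        survivors.sort(key=lambda pair: pair[0], reverse=True)
        return [candidate for _, candidate in survivors[: POP_SIZE // 2]]

    population = [{"recklessness": random.random(), "helpfulness": random.random()}
                  for _ in range(POP_SIZE)]
    print(len(select(population)), "of", POP_SIZE, "candidates kept for breeding")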

Still to be resolved is the definition of "harm" and "benefit" across a range of contextual settings, but I think this research gives increased hope that a practical process can be developed. Eliezer Yudkowsky and Nick Bostrom are both looking at this very problem, but I confess I don't make much effort to keep up with their (or others') work, as much of the detail is lost on me once things get anywhere near their respective levels of thought.

So, not an answer as such, but perhaps the beginnings of a process by which to eventually arrive at one.
