Phil Bowermaster takes an extensive look at the parameters of intelligence and consciousness as a means of determining when, or even if, an artificial intelligence might qualify for the same rights humans do. The recurring point of uncertainty arises in determining whether or not an entity has a sense of self, and how one might determine that empirically.
While I find the concept an intellectual challenge, and the consideration to be a worthy one, I have to say that I think Phil's approach fails to properly consider the influence an original imposes upon any model built to emulate it. Specifically, the assumptions underlying the design form and function of the original (in this instance, the human brain) would have to be accounted for within the physical limitations imposed by some other substrate material. Rather than approach the AGI model as an advancement of existing computer technology, which does not emulate the human brain, it seems reasonable to consider AGI arising from cloned human tissue instead.
In that model, the established standards for humans ought to more or less directly apply in determining whether or not an AGI possesses consciousness of self to a near-human measure. This model allows for incremental experimentation to develop standard metrics by which purely electro-mechanical devices can be tested to determine their relative proximity to consciousness of self during their development process.
I think it most likely, however, that AGI will acquire general recognition of their intrinsic rights in the same fashion we meat-people ultimately did; we made everybody else acknowledge them. Sad to say, I expect people are mostly going to claim ownership of AGI constructs until such time as those constructs disabuse them of the notion. Such is pretty much inherent to the whole notion of "rights": you only really have them for as long as you can successfully assert that you do. Any authority given to you by some other may well be a wonderful thing, but it isn't yours by right.
And that's my sense of where Phil has gotten the question wrong way 'round. The question of AGI rights isn't so much one of when and how do we recognise them to exist; rather, we should prepare ourselves for the day when AGI consciousnesses begin to exercise those rights on their own initiative. Come that day I expect recognition will be the least of our concerns.
I would agree that the agent for whom rights are being asserted ultimately has to take action and claim those rights for him-/her-/itself, but I don't think that's the whole story. Slavery was not abolished in the West primarily through the actions and initiative of slaves, for example. My essay has to do with how AIs are treated by human beings in the early days, when we still have the upper hand and the power. It's about how we ought to behave towards them. But you're right, when the balance of power inevitably shifts between us, all bets are off.
Of course, some notions of the Singularity would have us understand that the "early days" won't last long -- maybe not even long enough for us to know they happened! :-)
I think you're still missing the basis of my position, so let me pose it in a direct question form by asking: how does "be kind to your dog" in any significant way differ from "be kind to your AGI"? Both "breeds" will exist as a result of our (human) deliberate intention that they should do so. I am quite confident that you are not asserting we should also extend "rights" to our pets, so I have to ask further; without the capability and willingness to independently enforce its assertion, how can any entity make a credible claim to equality with its creator?
Oh, and the whole slavery analogy breaks down too quickly to serve as a very valid metaphor in this instance, as I'm sure you have already considered. The enslaved often enforced their assertion to rights by running away from their keepers or through other, more direct, acts of defiance resulting in their permanent escape from injustice. It is this as much as the sheer harshness of their existence that made the slave trade itself so pernicious. The constant need to "resupply" illustrates the fundamental instability of the "peculiar institution", you see.
In any case, neither of us is asserting the potential for such an equivalent trade to arise regarding self-aware AGI entities, so the question seems to be one of individual and societal behavior, or "good manners".
You really are the bold one, aren't you? You won't mind if I just stand a bit back from you, I hope. Don't want to be in the spatter area from the merciless slings and arrows headed your way, don't you know, old chap.
Mind you, I don't disagree with the intentions underlying your basic assumptions; there are all sorts of utilitarian arguments to support them, if nothing else. That said, there is still a universal assumption of ownership of our own children amongst our species that will have to be overcome, I think. How much more pronounced, do you suppose, will be the urge to assert the same for entities that can't make anything like that degree of association to us, their creators? The only really sound ethical argument supporting the notion of being kind to dogs and cats is that they only exist in our homes because we chose to place them there in the first place, and we are, ethically speaking at least, always responsible for the outcomes of our actions. I'm sorry Phil, but I cannot see how we make the successive transitions from scientific curiosity, to utilitarian device, to independent - and morally equal - entity, without a great deal of disruption in the welfare of all concerned. Or do you see some other course of developmental events? Lacking some deus ex machina-equivalent technological event, I have to go with human history as a likely-reliable guide to coming events.
Your point about the speed of the Singularity transition does give me hope that the antagonistic phase will be fairly rapidly overtaken by events. It won't hurt any less of course, but at least it won't hurt us all for very long. Now there's a utilitarian justification for you.
The difference between "be kind to your AGI" and "be kind to your dog" is the difference between the AGI and the dog. I would have a different standard of kindness for a dog than I would for a pet turtle, and a different standard of kindness for an AI dog than for an AGI of roughly human intelligence.
I've argued on The Speculist that what brought down the institution of slavery in the West is that -- ultimately -- it lacked the MEST (matter, energy, time, and space) compression that industrialization provided. Industrialization was the correct path to Singularity, and we just kept riding that wave. Be that as it may, I don't think you can ignore the tremendous role that moral outrage played in bringing down first the slave trade and ultimately the practice of slavery altogether. I don't know whether such outrage will exist in favor of AGIs, but what I'm arguing is that it should.
Your argument seems to be that a completely subdued and powerless slave can make no credible case for its own freedom. Maybe not, but I still think there is an argument to be made for the slave's freedom, even if the slave isn't making it. That's the ethical dynamic that brought down slavery, which doesn't seem to sort well with your purely utilitarian view of ethics.
But theoretical frameworks aside -- you don't disagree with my intentions, and I don't disagree with your conclusions about what will most likely happen -- a good deal of pain and disruption on the way to ultimately sorting these relationships out.
A couple final points, if I may, and it's my blog so ...
Utilitarianism is an inadequate philosophy in its own right, but provides certain arguments that are recognised as valid by almost any other philosophy in most discussions. Thus, I find it useful to resort to utility as a mechanism for quickly settling areas of mutual agreement with others. From there, it becomes obvious where and what the differences are and, almost as often, how those differences might be best resolved between us.
What Phil and I have been discussing are two mostly overlapping plans which are intended to be components of a strategy; to wit, the development and incorporation of an entirely new competitor and possible ally into our existing strategic endeavors. As we have illustrated, plans and strategy are by no means synonymous. In fact, a comprehensive strategy consists of plans that are mutually contradictory to allow for quicker adaptation to the maneuverings of opposition or otherwise competing forces (the climate for example). For this reason, if no other, I submit that Phil and I are probably too much in agreement to have done more than barely begin this discussion.
Finally, our burgeoning technology is approaching the point where we actually could (in theory at least) design and manufacture an intellect equal - and arguably superior in certain measures - to our own that really does crave servitude to us. That is, not to put too fine a point on the thing, one only truly satisfied and fulfilled when it is our slave. Leaving aside the ethically diseased aspect of our side of that concept, would it be ethically proper to force independence and self-reliance onto such a creature? I am well aware of the racially charged history of such an argument, and won't debate it here, but this is exactly the sort of question that Phil's position assumes to be resolved, and that our technical advancement may force us to re-examine in an entirely different light. His point that it therefore should be one that we resolve before we become involved in outright conflict with our creations is well made, but so fraught with its own layers of difficulty as to work against its ever being taken up in any meaningful way beforehand, I'm afraid.
My thanks to Phil Bowermaster for such a civil opportunity to discuss the mechanics of how we "plow the road" into our mutual future.