Thursday, August 3, 2017

Adding To My Intellectual Posterity (or, Yet Another Failure To STFU :))

I'm going to archive my most recent foray into Internet Fame here because, if you can't be narcissistic on your own blog, why bother to even keep breathing on your own, I ask you?

Part The First: Should Jeff Bezos Hire Humanity?

For the permalink-challenged: http://www.blogtalkradio.com/worldtransformed/2017/08/01/should-jeff-bezos-hire-humanity

Followed "the next day" by Part The Second: The Semi-Automated Economy by The World Transformed

Second chorus, same as the first:
https://soundcloud.com/phil-bowermaster/the-semi-automated-economy?fb_action_ids=10156475543678712&fb_action_types=soundcloud%3Apublish

Proof positive that you don't actually have to know what you're talking about to have a "respectable" (or at least printable) opinion.

You really should buy the book:

Visions for a World Transformed

17 comments:

  1. "Failure to STFU" is the story of my life, I think. I like that a lot.

    ReplyDelete
  2. Part 2 would probably not have happened without my having linked to your posts on how robots work and get built, so thanks for that.

    ReplyDelete
  3. lol. No sweat. It's an interesting interview. I like your take on weak AI and strong AI; that's probably as close as anyone has ever gotten to the meat of the subject.

    Lots of people don't understand that strong AI will not be like humans at all; humans are experts at taking conflicting ideas and integrating them into their lives. Like how statues can be racist, but Maxine Waters, Al Sharpton, and Jesse Jackson are not. Nobody could program a machine to accept multiple contrary ideas; it is a specific bug of humanity.

    ReplyDelete
  4. Not being a programmer, I can still envision a program that permits contextual contradictions (a rough sketch of what I have in mind appears at the end of this thread). I'm not at all clear what computational purpose that would achieve, but an AI by definition isn't, properly speaking, a "computer" at all.

    Aside from the imagination thing, I can't begin to visualize the process one would take to design (never mind build) a general purpose artificial intelligence. I understand the concept of self-learning programming, but that's conflating tactics with strategy. No matter how complex or extensive the included CSS file(s) might be, "thought" is more than the sum of the ways the cards might be shuffled and dealt.

    I do hope that complex machinery of a generally robotic nature, programmed with a comprehensive CCS data retrieval system and operating under the conscious control of humans (either remotely or directly), becomes the reality sooner rather than later. So much more becomes attainable for so many more, by so many more, that I honestly doubt there will be enough humans available for all the tasks needing doing, come the day.

    I look forward to finding out though. Live To See It, indeed. :)

    ReplyDelete
  5. If you build something that can think something is wrong and at the same time right, you have not created artificial intelligence; you have created artificial insanity.

    ReplyDelete
  6. This is where we run smack into the (I think incompatible) differences between math and speech (written, verbal, gestural, etc.).

    Something (an action, a concept, whatever) can be simultaneously "right" and "wrong", in the identical context, depending upon the meanings given to the two words. As I'm certain you're already aware. My attempt at a point was to note the difficulty of assigning variable mathematical values to imprecise distinctions; for example, "The right side of the rights issue, as viewed from the right perspective of the right people." A possible translation: "The correct opinion regarding the human rights issue, as viewed from the approved perspective of the selected people." There are, of course, many other possible variants that don't change the context of the general assignment: to determine a value judgement about the expressed beliefs of a given group of people in regard to a stipulated topic, as determined from the perspective of an entirely uninvolved entity (it already being stipulated - by us, anyway :) - that an AI is necessarily not human).

    I would argue that the ability to rationally and rigorously distinguish between such variable, not to mention imprecise, contradictory concepts (in addition to the previously mentioned imagination problem) would also be a necessary capability for any construct to be considered a Strong AI level device.

    I'm starting to feel a bit more confident about asking to spend Jeff Bezos' money. :)

    ReplyDelete
  7. What you say is true only if there are no moral absolutes. There are moral absolutes.

    ReplyDelete
  8. There are moral absolutes. For a human being.

    We have previously established that a Strong AI would not be human though, so I'm unclear whether, or to what degree, human assumptions might apply. Recent experience with self-learning programs "teaching" themselves a unique language would seem to indicate we simply don't know the answer.

    To what extent, and with what measure of reliability, can we assume that programmer/designer/pick-a-title moral (or other) constraining absolutes must presumptively apply to the inanimate object being programmed/designed? I'm going to take the position that we don't know enough about the shape of that needle to even guess whether anything not "us" will fit through the eye thereof. :)

    I'm not even clear on how we might go about establishing what the moral absolutes of an entirely-new-to-creation entity would be (other than the traditional "hard way", that is). I would suggest that the safest (for humanity) assumption would be that the only absolutes that can be counted on to exist at the time of initial power-up will be those deliberately input beforehand. What that non-human intellect might define as "moral" or "absolute" after that, I have no clue, but from the very little data we do have to work with, I'm guessing it won't necessarily be what we humans think those words mean - and we may not be able to ask.

    Isn't hubris traditionally one of the deadly sins?

    ReplyDelete
  9. No, there are moral absolutes. Absolute means absolute.

    ReplyDelete
  10. Agreed. For humans. My understanding of the existing rule set (and I wouldn't bet the milk money on that) is that those rules don't apply in the same way to non-humans.

    There are absolute values in the physical universe too, which I try to use to some advantage here. I hope I didn't screw up that HTML tag.

    I think we are stumbling over the dividing line between "belief" and "fact". We humans have enough trouble navigating that distinction; how do we impart (program) a rigorous metric for resolving it into a non-human intellect? My argument is that we (and probably our hypothetical AI too) would be better served by cutting that particular Gordian Knot out of the tapestry altogether, not by trying to design that capability into a device.

    One of the driving impulses behind the concept of AI is to rule out the influence of "human error" to the greatest extent we can contrive. Deliberately designing in one of the primary issues contributing to that lamentable condition (as measured from any single given human viewpoint) doesn't seem particularly useful or desirable to me. If we stick to values we can directly test (in the established "scientific method" of independently repeatable tests arriving at the same results) and simply ignore those we can't unambiguously test, we don't get a "human equivalent AI" but we might get an extremely capable entity that has conceptual limits which don't inhibit functional utility in any practical way.

    We are trying to improve upon certain aspects of human potential by building an artificial version of ourselves. We also know there are aspects of humanity we just aren't very good at. We should focus on those limited aspects of humanity we universally want more of, and try to build a machine that does only that - better than we consistently can - and that can be more or less easily adapted to as wide a variety of predictable circumstances as we can conceive. Come the day we arrive at a universally accepted understanding of the totality of the human condition, and agreement on how that should be implemented, we probably won't want or need an artificial version anyway.

    A very capable Weak AI may help provide the bridge between who we are and who we hope to become, without deliberately making that task more difficult than we already do.

    ReplyDelete
  11. I know that doesn't scratch that particular ontological itch, but in all truth I'm probably more of an irritant in that application anyway. ;)

    ReplyDelete
  12. Not just for humans. There are moral absolutes that are absolute, and this is a demonstrable truth for everything that is sentient, and this will always be so; it is irrefutable and cannot be argued. You may only understand moral absolutes as they apply to humans because you are a human, but the absolutes will always remain absolute, despite your perspective. And they are absolute for everything.

    ReplyDelete
  13. And I will bet money on it. Absolute means absolute, for everything.

    ReplyDelete
  14. No bet. :)

    So I suppose the question then becomes: when, during the development of a sentient intellect toward human equivalency, do the moral absolute standards begin to apply? Dogs and monkeys and fish don't qualify; when does our machine start to, and what does the application gradient look like?

    The point here is absolutely (no pun intended, honestly :)) not to poke fun at your beliefs (or anyone else's, for that matter). AI is literally uncharted territory in human existence. We do ourselves a disservice by assuming that anything from our past experience will, as a matter of course, apply in precisely the way it always has to what a truly human-level artificial intelligence will be and how we will experience it (not to even mention what that thing will be like by later that afternoon).

    I'm not questioning the existence of moral absolutes; I'm just a bit doubtful about the degree of our understanding of that universal constraint's effect upon an intelligence the Universe hasn't experienced before (to our knowledge - we certainly didn't deliberately create us, after all). Similarly, our inconsistent success at imparting appreciation for the ramifications of improperly applying moral concepts to members of our own species ought to give us pause when it comes to trying the same with an intelligence designed to surpass our own at an exponential rate.

    As a practical matter, at least as far as the question of designing and building an AI is concerned, it doesn't really matter whether moral absolutes exist or not. We have to assume they do, that we don't know how they apply to an intelligence separate from and (a few minutes later) superior to our own, and that we should work very hard not to make the question an issue we can't avoid dealing with (from a severely degrading position, I might add). It isn't a question of whether moral absolutes exist; we simply cannot know how they apply to an alien intelligence as regards how it treats us. I don't question your ability to prove the existence of moral absolutes, but I doubt your knowledge of how they will be applied by a sentience that doesn't exist yet (having a good enough grasp on my own ignorance in this regard).

    If there is a morally absolute tenet that effectively prohibits humanity from building its own successor sentience, then it doesn't matter how many people waste their time and money trying. If there isn't ...

    ReplyDelete
  15. Absolute standards always apply, and none of them prohibit anything. Never have, never will. Theft is a moral absolute. To steal something from someone is wrong. Nothing can ever make it not wrong. Wrongness has never stopped anyone from doing something wrong. Maybe you don't have a clear understanding of what a moral absolute is, which is only an education issue. I'm moderately well educated on this specific subject, and not many are. I'm happy to have this conversation via messenger to avoid misunderstanding and long waits, should you desire.

    ReplyDelete
  16. Two things.

    My own education in moral behavior and standards is sufficient for me to understand that there are such things, that they apply to me in ways I think I generally understand, and that I am barely well-educated enough to make such a conversation with someone of your knowledge of the topic worth either of our time (at least as a single event). I'm retired now; were you searching for a near-geriatric padawan (one who is convinced that everything can be resolved by violence - though mostly in a very disappointing and undesirable fashion)?

    I think we may be in the early stages of talking past one another (albeit unintentionally). My blog post is focused on the issues complicating development of a Strong AI device, and this conversation we are having has been immensely helpful in my gaining insight into that issue. Should anyone other than us ever read it, I'm confident they too will find it useful to their thinking. At some point I would like to have an extensive-ish conversation with you about possible ways AI might influence the development and operation of robotic and/or remotely operated devices, to name only one topic.

    Feel free to contact me at my email addy: williambrown@suddenlink.net (it's in my blogger "about me" link too). I look forward to talking with you when and as your schedule permits.

    ReplyDelete
  17. I am currently in Hell, or possibly Toledo. I hope to have some time to discuss this further when I return. I don't think we're talking past one another; I just don't think we're talking about the same things. I think we need to clarify terms a little better, because without that you're not going to be able to understand what I'm saying, and with it you will understand why I contend that strong AI is not likely. Stay cool, talk later.

    ReplyDelete
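
For anyone wondering what the "program that permits contextual contradictions" mentioned in comment 4 might actually look like, here is a minimal sketch (in Python, since it reads almost like plain English). Everything in it - the class name, the monument example, all of it - is invented purely for illustration; it isn't anything from the interviews or the book, and an actual Strong AI would presumably need vastly more than a lookup table. The point is only that holding "contradictory" beliefs is trivial bookkeeping for a machine; deciding what those entries mean is the hard part the comments above keep circling.

from dataclasses import dataclass, field

@dataclass
class ContextualBeliefs:
    """A toy belief store: the same claim can be held true in one context and false in another."""
    beliefs: dict = field(default_factory=dict)  # beliefs[claim][context] -> True/False

    def assert_claim(self, claim: str, context: str, value: bool) -> None:
        # Record that `claim` is held to be `value` within `context`.
        self.beliefs.setdefault(claim, {})[context] = value

    def evaluate(self, claim: str, context: str):
        # Return the claim's value in a given context, or None if it was never asserted there.
        return self.beliefs.get(claim, {}).get(context)

    def is_contradictory(self, claim: str) -> bool:
        # True if the same claim is held both true and false across its contexts.
        return set(self.beliefs.get(claim, {}).values()) == {True, False}

if __name__ == "__main__":
    kb = ContextualBeliefs()
    # One sentence, two contexts, two verdicts - no crash, no "artificial insanity",
    # just bookkeeping. What the entries mean is left to whoever reads the table.
    kb.assert_claim("this monument honors its subject", "civic ceremony", True)
    kb.assert_claim("this monument honors its subject", "historical audit", False)
    print(kb.evaluate("this monument honors its subject", "civic ceremony"))    # True
    print(kb.evaluate("this monument honors its subject", "historical audit"))  # False
    print(kb.is_contradictory("this monument honors its subject"))              # True

Which is, I suspect, comment 5's point restated from the other direction: the contradiction itself is easy to store; the judgement about it is the part nobody knows how to build yet.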