Sunday, May 17, 2009

Commerce in Fundamentals

Roberta X recently put up a post regarding the merits of using an air-powered gun as an at-home marksmanship practice device. This strikes me as an excellent idea whose time has come in my own life.

Being less demanding than the lady herself as to design parameters and similarity to my actual 1911 (or possibly just substantially cheaper by nature), I have opted to go with this Daisy product rather than the one she appears to have settled on in comments. My neighborhood Walmart had the Daisy available for $29.63 (plus the usual Governor's Gratuity), a marked discount from Daisy's own listed MSRP of $50.99. Even with the additional $3.96 for a 5-pack of CO2 bottles, $3.26 for 1,500 BBs, and the extravagance of a Beeman model 2085 pellet trap at $19.96, the whole lot came to just over $60. That still considerably undercuts the expense Roberta will bear for what is essentially an experiment in alternatives to practicing at a traditional range with an actual 1911 firing conventional ammunition (not a viable option for the Japanese civilian market the Marui is targeted at).
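
For anyone who wants to check my math, here's a quick tally in Python. The sales-tax rate is just an assumption standing in for the "Governor's Gratuity", not the figure on my actual receipt.

```python
# Quick tally of the shopping list; the tax rate is an assumption for illustration.
items = {
    "Daisy CO2 pistol": 29.63,
    "CO2 bottles (5-pack)": 3.96,
    "BBs (1,500 count)": 3.26,
    "Beeman 2085 pellet trap": 19.96,
}

TAX_RATE = 0.07  # assumed stand-in for the "Governor's Gratuity"

subtotal = sum(items.values())
total = subtotal * (1 + TAX_RATE)
print(f"Subtotal: ${subtotal:.2f}")   # $56.81
print(f"With tax: ${total:.2f}")      # roughly $60.79 - "just over $60"
```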

I note that the Daisy shoots faster and farther, but I question the practical effect that will have at the ranges a typical American home or garage permits. I also question the value of the blowback system her chosen weapon includes, given that she faces no real restriction on opportunities to shoot an actual 1911 in practice. My inclination is that the added detail isn't cost-effective outside the restrictive environment it was designed for, but I look forward to her eventual blog post on her shooting experience with the gun.

My expectation is that the Daisy will provide sufficient verisimilitude to my Colt Commander to permit reasonably realistic practice from the holster and the like within the confines of my one-bedroom apartment. A couple of old pillows framing the pellet trap ought to suffice as a backstop to keep the misses out of the drywall. The hope, of course, is that combining this with dry-firing and magazine drills with my actual 1911s will improve my weapon handling and shooting skills at little added financial cost. We'll see.

Tuesday, May 12, 2009

More getting there from here - baby steps

This past Sunday night, J. Storrs Hall was the guest on Fast Forward Radio (click on the link to listen to the recorded program) with the guys from The Speculist.

Early in the program, there is some discussion of how individual people might be affected by anticipated technological change like nano-assemblers or AI. Dr. Hall professed not to have a good idea about how to begin that preparedness effort, but I think the strategic principle regarding alliances applies helpfully here.

Entities like the Foresight Institute are uniquely structured within the larger society to promote the framework for just such an alliance to address the individual transition problem - and to do so in a fashion that helps the transition occur that bit more quickly, with less societal disruption. It seems a reasonable expectation that the initial adoption of assembler technology will involve robots working at the established macro-assembly level (from which humans are already being gradually supplanted), then incorporating ever greater refinement and tighter engineering tolerances as the overall size of the robots decreases. There is not much room for human workers in that picture; in fact, they would basically be in the way. Dr. Hall could orchestrate a mechanism whereby ordinary people - some of them actual factory assembly workers like myself - could become involved in this development process while simultaneously preparing for their future financial security.

Visualize the 401(k) investment model: government regulates individual retirement initiatives in professionally managed investment schemes via tax policy. Now imagine a specialized investment account that specifically invests in assembler technology (buying the current state-of-the-art technology) and rents it to industry, with the income used to grow the fund further. As the state of the technology advances, the fund alters its investment practices accordingly so as to remain as close as practical to the cutting edge. At some point the return generated by the rented robots exceeds the rate of individual contribution. At some later (and individually variable) point, investors will begin drawing income from the account as a replacement for their presumably by-then vanished employment income.
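
To make that crossover concrete, here is a toy back-of-the-envelope sketch of the mechanism described above. Every number in it (contribution, rental yield, depreciation) is purely an assumption for illustration, not a projection.

```python
# Toy model of the proposed "robot 401(k)": contributions buy assembler/robot capacity,
# that capacity is rented to industry, and the rent is reinvested. All figures below
# are assumptions for illustration only.
ANNUAL_CONTRIBUTION = 5_000.0   # worker's yearly pay-in (assumed)
RENTAL_YIELD = 0.12             # net rent per dollar of robot capacity per year (assumed)
DEPRECIATION = 0.05             # capacity lost yearly to wear and obsolescence (assumed)

capacity = 0.0
for year in range(1, 41):
    rental_income = capacity * RENTAL_YIELD
    capacity = (capacity + ANNUAL_CONTRIBUTION + rental_income) * (1 - DEPRECIATION)
    if rental_income > ANNUAL_CONTRIBUTION:
        print(f"Year {year}: rental income (${rental_income:,.0f}) exceeds the yearly contribution")
        break
else:
    print("Crossover not reached within 40 years under these assumptions")
```

Under these made-up numbers the rental income overtakes the worker's own contributions in roughly a decade; the real point is only that such a crossover exists, and that drawing income afterward is what replaces the vanished paycheck.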

Via the national podium provided him by the Foresight Institute, Dr. Hall is in a position to coordinate such an effort among those with the expertise and wherewithal to evaluate and actualise so expansive a concept. Promoting this transitional mechanism would permit him (and others) to expand the potential audience and resource support of the Institute generally, as well as contribute to the Institute's established mission.

Needs more work, I admit; it may not work at all. As Dr. Hall said on the show, though, a variety of mechanisms will be required to achieve the transition without utterly destroying our existing socio-economic structure. At the least, scrutiny of this concept seems a positive step toward mitigating the damaging potential inherent in such a fundamentally disruptive event as the one we are pursuing.

Sunday, May 10, 2009

Hey, Michael Anissimov, this one's for you!

[The titular reference to an '80s Tommy Lee Jones movie should serve as fair warning to all regarding the serious nature of the scholastic standard applied to the following content by the author.]

As part of the comment thread to Michael's recent blog post I reiterated an earlier e-mailed offer to engage in discussion/debate with him on this and related topics. The pertinent portion of his specific answer was:

I’m afraid your blog post on the matter only has a paragraph or two of original content. I might be open to discuss/debate the Singularity and AGI, but could you perhaps make a longer blog post on your general position


I will not be at all surprised to discover that we also differ as to the precise delineation of "pertinent". :)

Otherwise, one can't ask fairer than that. Forthwith, a summary of my general position on matters Singularity and related.

I suppose I should begin by noting that all of my interrogatives and premises are founded upon my amateur understanding of the tenets and principles that underlie the philosophy and practice of strategy as codified by Sun Tzu (I particularly recommend the translations developed by Gary Gagliardi). I have found this to provide a structure that relies upon neither mysticism nor higher mathematics to attain understanding of any topic I have considered in light of said principles. For the uninitiated, these principles and axioms are actually better applied at the individual level, despite being couched in language intended to intrigue a potential monarchical employer. By way of example, the primary principle upon which all else is predicated is the protection and advancement of position, as I first wrote about here.

I suppose my first examination of Singularity and AGI would be this post titled Strategy for a Singularity Model of Economics, which is better understood in light of an earlier post titled Future Ground. The Singularity post garnered additional interesting commentary at the time here and here. I don't believe anyone can engage in a realistic discussion of these two issues (Singularity and AGI) without at least a casual examination of the economic aspects they inherently entail.

This seems a reasonable point to acknowledge that I am indeed one of those heretics who believes that AGI as such isn't particularly necessary to the possibility of a Singularity event. With all due respect to Vernor Vinge and Ray Kurzweil, and stipulating to their basic conjecture regarding AI/AGI (OK, Ray's doorstop hardly merits the accolade "basic", whatever one's opinion of the contents), I regard the Singularity as an ever-receding point in speculative human understanding of the physical sciences beyond which no further meaningful extrapolation is possible, pending additional advancement of scientific understanding. While the development of operant AGI certainly meets that qualification, I contend it is not the only potential development that does so. Perhaps for this reason I seem less susceptible than Michael to the potential for threat that AGI certainly entails (though I further suspect this may be at least as well explained by the differences in our respective ages and personal experience referents).

That said, I also appear to differ from Michael as to the degree of willingness to advance towards ... what to call it, precursor AGI technology? Limited AI, perhaps? For the moment, at least, I will use the acronym LAI to indicate an artificial intelligence that specifically lacks the self-improvement capability of what Stephen Omohundro called a "self-improving system". To bring this initial discussion to the point, the recent DARPA solicitation over which Michael and I take contrasting views seems to me to be specifically targeted at developing LAI types of intelligences rather than an AGI entity (from the solicitation: "... and create tools to engineer intelligent systems that match the problem/environment in which they will exist.").

I submit that such entities, even should they prove developable given the limitations of current bio-science technology (I would argue not ... yet), would not possess the mechanism to "improve" themselves, and that a human developer would organise their construction such that it couldn't be altered without destroying the original construct's cognitive capabilities - essentially forcing a would-be bio-hacker back to developmental square one - if only as a means of copyright/trademark and liability protection.

See what I mean about economic issues being unavoidable to discussion of this topic?

Another aspect of Michael's position I find particularly troublesome is his willingness to argue that certain people are simultaneously capable of designing bio-weapons from their personal nanofactory (the other general topic of contention between Michael and myself), yet too ignorant to comprehend the existential threat to themselves inherent in such manipulations. This logically questionable assertion seems to be the principal basis for his proposed regulatory scheme, which I treat so rudely here.

It is only proper to note that Michael and I don't appear to actually differ all that much as to the potential for threat and advancement entailed in both of these technologies. Our principal points of difference seem to revolve around their separate and combined potential for deliberate misuse, the mechanism most likely to succeed in inhibiting that misuse, and the likelihood that LAI technology necessarily leads to AGI (leaving unsaid my own serious doubts that AGI is even practicable at all, at least in the form Michael, Kurzweil and Vinge propose it to take).

Regarding this last point, I contend that AI technology is most likely to achieve practical expression as some "enhancement" technology to existing human biology rather than as a stand-alone entity. Leaving the substance of the argument for another occasion, I submit that should this prove to be the actual course of events, then AGI necessarily becomes "human" and the existential threat source remains unchanged (not to be confused with unaltered). Simply put, adding capability all around (I did say "simply") doesn't offer much if any practical change to the presently understood threat environment, does it? The pre-existing mechanisms for inhibiting the actualisation of threats would still basically apply, given the spectrum of potential within which each individual would be enhanced.

I see I have not yet mentioned the more-than-a-little-questionable ethical aspects these two topics raise. These are so myriad and application-dependent that I content myself with observing that they are an important aspect of this discussion between Michael and myself (not to mention of actual development of the technology, just by the way) and that we will almost certainly contort ourselves into logical absurdities attempting to confront them. But not today.

It does not go without saying that, like any other technological advancement recorded in human history, there will be inequities and dangers inherent in the process of developing and dispersing this technology throughout humanity. While there may in fact prove to be some aspect unique to either technology under discussion here that poses a truly existential threat to the human species, I confess I haven't seen anything plausibly proposed that doesn't have a near-analog in human history (and thus a practical near-analog of a solution). Such being the case, I associate myself with J. Storrs Hall's position regarding the concerns under discussion here (hard to go too far wrong there, I think), with the proviso that I remain open to Michael's subsequent persuasions on the matter.

Over to you, Sir, I think.

Saturday, May 9, 2009

Making AI

I recently took Michael Anissimov to task over some of his fretful reaction to this topic.

Now, via Brian Wang comes notice of the US military's DARPA-led effort to achieve that development:

The program plan is organized around three interrelated task areas: (1) creating a theory (a mathematical formalism) and validating it in natural and engineered systems; (2) building the first human-engineered systems that display physical intelligence in the form of abiotic, self-organizing electronic and chemical systems; and (3) developing analytical tools to support the design and understanding of physically intelligent systems. If successful, the program would launch a revolution of understanding across many fields of human endeavor, demonstrate the first intelligence engineered from first principles, create new classes of electronic, computational, and chemical systems, and create tools to engineer intelligent systems that match the problem/environment in which they will exist.


If I understand this correctly (as ever, not a given), the .mil wants to identify and then engineer a non-human intelligence that could ultimately replace ... well, me. The Me I was, lo these more-than-one decades past: the average soldier (sailor, actually) who performs some necessary but less-than-glamorous role in accomplishing the Mission.

Leaving aside the potential ethical quandaries this might pose, I suggest that at least one of the issues DARPA wants to tackle as part of this project is the question of "friendly AI" that has Mr. Anissimov (among numerous others) properly concerned. One possible mechanism for arriving at such an objective would be to simply manufacture an entity that cannot continue to function outside the designed tolerance levels that define "friendly". This would offer future military planners the ability to calibrate the requirements of any potential mission much more finely, as well as to mute the costs of success.
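
For what it's worth, here is a minimal sketch of that "built-in tolerance" idea; the class, limits, and target names are entirely my own invention for illustration, not anything drawn from the solicitation.

```python
# Minimal sketch of "friendly by construction": an entity that simply stops functioning
# once a requested action falls outside its designed tolerance envelope. Every name and
# limit here is invented purely for illustration.
class ToleranceEnvelope:
    def __init__(self, max_force, allowed_targets):
        self.max_force = max_force
        self.allowed_targets = set(allowed_targets)
        self.operational = True

    def act(self, target, force):
        """Return True if the action is performed, False if refused or shut down."""
        if not self.operational:
            return False
        if target not in self.allowed_targets or force > self.max_force:
            self.operational = False   # outside the envelope: cease functioning entirely
            return False
        return True  # within tolerance; the action would be carried out here

unit = ToleranceEnvelope(max_force=10.0, allowed_targets=["designated objective"])
print(unit.act("designated objective", 5.0))   # True - within the envelope
print(unit.act("something else", 5.0))         # False - and the unit is now inert
print(unit.act("designated objective", 5.0))   # False - it cannot resume on its own
```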

Swinging the Sword of Damocles* does provide a different set of options than does sitting on it at least.

*I know, classically it hangs above one by a single horsehair. It works either way, I suppose, but my version seems truer to the military ethos, I think.

Wednesday, May 6, 2009

Michael Anissimov hydrates his trousers ...

and J. Storrs Hall has him sit in his lap.

Let me preface this by stipulating that both of these men are deservedly admired for their intellect generally and their grasp of the technical challenges inherent to the development of molecular or nano-scale manufacturing specifically. That being said, I cannot fathom why either would waste any of that intellectual horsepower on the silly "concerns" they both go on about in the linked-to piece. Presumably there are simply no pending developments in any of the on-going efforts to create the stated technology that either gentleman might more profitably direct his attention towards.

How depressing.

Mr. Anissimov says:

I consider it likely that a singleton will emerge in the 21st century, whether we want it to or not, as a natural consequence of expanding technological powers on a finite-sized planet, as well as a historical trend of aggregation of powers at higher geopolitical levels. Note that the singleton concept does not specify what degree or scope of decision-making powers the entity (which, as pointed out, could be a worldwide democracy) has. 99% of policy choices could very well be made at the local and national levels, while a singleton intervenes in those 1% of choices with global importance.


Two things come quickly to mind about all of the above. First, pretty much all of the bullet points of recorded human history have involved disputes over the formation of precisely this socio-political arrangement [Master of all I survey]. I can't imagine any circumstance unique to this millennium that will change the basic drive for contention amongst human beings, whatever the technology developed. Secondly (and I'm kind of embarrassed to have to point out something as obvious as this to these two men, but it apparently does need to be said), any governmental entity having the power to enforce its decision-making even 1% of the time necessarily has the power to do so the remaining 99% of the time too, should it choose to. Only some of the questions this raises are: which 1%, to what extreme, to whose benefit/detriment, at what point of provocation, and so forth.

Really, fellas, is this sort of fantasy Civ speculation that urgent an issue?

I particularly like Mr. Anissimov's follow-up:

To me, what I’d want most out of a singleton would be a coherent and organized approach to problems that face the entire planet. Instead of a disorganized patchwork, there’d be more decisive action on global risks. No authoritarianism in cultural, political, or economic matters is implied.


Let me just say, Sir, if you truly believe that any such arrangement as you stipulate is remotely possible (never mind likely), I am available for employment as your personal source of common sense. As an illustration of what you might expect from such an arrangement between us, consider for as long as necessary just how likely it is that any entity possessing the means to enforce your compliance with its directive would feel constrained by your subsequent objection regarding some putative cultural, political or economic aspect of said directive.

Take your time (if you're paying me).

I have no desire to turn this into some variant of Fisking, if only because I do respect both gentlemen despite my current mockery. Dr. Hall would do well to laugh as politely as he is able and gently point out the silliness being put on display. Mr. Anissimov needs to devote his attention toward developing a more credible (not to mention more logically consistent) oogy-boogy strawman with which to void his bladder in decent privacy in future.

The present example is just embarrassing, boys.