The Real AI Threat?

Miles Fidelman mfidelman at meetinghouse.net
Fri Dec 11 17:22:27 UTC 2020


Um... there are long-standing techniques for programs to tune themselves 
& their algorithms - with languages that are particularly good at 
treating code as data (e.g., LISP - the granddaddy of AI languages - 
for whatever definition of AI you want to use).
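
For the curious, here's a minimal sketch of the idea in Python (Lisp 
makes it far more natural, but the principle carries over).  The 
function and the "tuning" rule are invented for illustration - a toy, 
not anybody's production self-tuner:

import ast
import inspect
import textwrap

def score(x):
    return x * 2   # the constant 2 is the "tunable" part of the algorithm

def retune(func, new_constant):
    # Treat the function's own source as data: parse it, swap out the
    # numeric constant, and compile a replacement at runtime.
    tree = ast.parse(textwrap.dedent(inspect.getsource(func)))
    for node in ast.walk(tree):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            node.value = new_constant
    ast.fix_missing_locations(tree)
    namespace = {}
    exec(compile(tree, "<retuned>", "exec"), namespace)
    return namespace[func.__name__]

score = retune(score, 3)   # the program just rewrote its own algorithm
print(score(10))           # -> 30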

And... a common complaint with current machine learning algorithms is 
that they often "learn" to make decisions that can't be explained 
after the fact.  We already have examples of "racist bots," and there 
are plenty of open legal questions about liability for injuries 
caused by self-driving cars.
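
To make the "racist bot" failure concrete, here's a toy sketch - all 
of the data is synthetic and every variable name is invented - of how 
a model relearns bias from a proxy feature even when the protected 
attribute is carefully excluded:

import numpy as np

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)             # protected attribute (never a feature)
zip_code = group + rng.normal(0, 0.3, n)  # proxy: correlates with group
income = rng.normal(5, 1, n)              # legitimate signal
# The historical labels are themselves biased against group 1:
approved = (income + rng.normal(0, 1, n) - 1.5 * group > 4).astype(float)

X = np.column_stack([zip_code, income])   # note: protected attribute excluded
w, b = np.zeros(2), 0.0
for _ in range(2000):                     # plain logistic regression, gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - approved
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

print("weight on zip_code proxy:", w[0])  # comes out strongly negative

Nothing in those weights says "race" - which is exactly the 
after-the-fact explainability problem.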

And then there are "spelling correctors" and digital "assistants" - when 
has Siri EVER done only what you want "her" to do?

The REAL problem is programs that blindly go off and do what you think 
you told them to do, and get it woefully wrong.  The more leeway we 
give our programs to adapt, or learn, or self-tune, or 
whatever-you-want-to-call-it, the more trouble we're in.
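
A hedged sketch of that failure mode (the scaling rule and every 
number are invented): an "adaptive" controller that does exactly what 
it was told, in a world where each new worker adds its own overhead:

def autoscale(demand=100.0, workers=1, capacity_per_worker=10.0,
              overhead_per_worker=2.0, max_workers=10_000):
    while workers < max_workers:
        load = demand + overhead_per_worker * workers  # each worker adds load
        if load <= capacity_per_worker * workers:
            return workers                             # converged, all is well
        workers += 1                                   # "need more resources"
    raise RuntimeError("resource exhaustion: the rule was followed perfectly")

print(autoscale())                   # converges at 13 workers
try:
    print(autoscale(overhead_per_worker=10.0))
except RuntimeError as e:
    print(e)                         # overhead >= capacity: it eats everything

Change one number and the very same rule, followed faithfully, 
consumes every resource it can reach.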

(The point being:  We don't have to wait for "real" AI to see many of 
the dangers that folks fictionalize about - we are already seeing those 
dangers from mundane software - and it's only going to get worse while 
people are looking elsewhere.)

Miles Fidelman

J. Hellenthal wrote:
> Let me know when a program will rewrite itself and add its own 
> features ... then we may have a problem ... otherwise programs only do 
> what you want them to do.
>
> -- 
>  J. Hellenthal
>
> The fact that there's a highway to Hell but only a stairway to Heaven 
> says a lot about anticipated traffic volume.
>
>> On Dec 10, 2020, at 12:41, Mel Beckman <mel at beckman.org> wrote:
>>
>> 
>>>> Jeez... some guys seem to take a joke literally - while ignoring a 
>>>> real and present danger - which was the point.
>>
>> Miles,
>>
>> With all due respect, you didn’t present this as a joke. You 
>> presented “AI self-healing systems gone wild” as a genuine risk. 
>> Which it isn’t. In fact, AI fear-mongering is a seriously 
>> debilitating factor in technology policy, where policymakers and 
>> pundits — who also don’t get “the joke” — lobby for silly laws and 
>> make ridiculous predictions, such as Elon Musk’s claim that, by 2025, 
>> “AI will be conscious and vastly smarter than humans.”
>>
>> That’s the kind of ignorance that will waste billions of dollars. No 
>> joke.
>>
>>  -mel
>>
>>
>>
>>> On Dec 10, 2020, at 8:47 AM, Miles Fidelman 
>>> <mfidelman at meetinghouse.net> wrote:
>>>
>>> Ahh.... invasive spambots, running on OpenStack ... "the telephone 
>>> bell is tolling... "
>>>
>>> Miles
>>>
>>> adamv0025 at netconsultings.com wrote:
>>>> > Automated resource discovery + automated resource allocation = 
>>>> > recipe for disaster
>>>> That is literally how OpenStack works.
>>>> For now, don’t worry about AI taking away your freedom on its own, 
>>>> rather worry about how people using it might…
>>>> adam
>>>> *From:* NANOG <nanog-bounces+adamv0025=netconsultings.com at nanog.org> 
>>>> *On Behalf Of* Miles Fidelman
>>>> *Sent:* Thursday, December 10, 2020 2:44 PM
>>>> *To:* 'NANOG' <nanog at nanog.org>
>>>> *Subject:* Re: The Real AI Threat?
>>>> adamv0025 at netconsultings.com wrote:
>>>>
>>>>     > Put them together, and the nightmare scenario is:
>>>>
>>>>     > - machine learning algorithm detects need for more resources
>>>>
>>>>     All good so far
>>>>
>>>>       
>>>>
>>>>     > - machine learning algorithm makes use of vulnerability analysis library 
>>>>
>>>>     > to find other systems with resources to spare, and starts attaching
>>>>
>>>>     > those resources
>>>>
>>>>     Right, so a company would have built, trained, and fine-tuned
>>>>     an AI, or would have bought such a product and implemented it
>>>>     as part of its NMS/DDoS mitigation suite, to do the above?
>>>>     What is the probability of anyone thinking that a good idea?
>>>>     To me that sounds like an AI-based virus rather than a tool
>>>>     one would want to develop or buy from a third party and then
>>>>     integrate into day-to-day operations.
>>>>     You can’t take, for instance, AlphaZero or GPT-3 and make it
>>>>     do the above. You’d have to train it to do so over millions
>>>>     of examples and trials.
>>>>     Oh, and also, these won’t “wake up” one day and “think” to
>>>>     themselves: I’m fed up with Atari games, I’m going to teach
>>>>     myself some chess and then do some reading on the wiki about
>>>>     the chess rules.
>>>>
>>>>
>>>> Jeez... some guys seem to take a joke literally - while ignoring a 
>>>> real and present danger - which was the point.
>>>>
>>>> Meanwhile, yes, I think that a poorly ENGINEERED DDoS mitigation 
>>>> suite might well have failure modes that just keep eating up 
>>>> resources until systems start crashing all over the place.  Heck, 
>>>> spinning off processes until all available resources have been 
>>>> exhausted has been a failure mode of systems for years.  Automated 
>>>> resource discovery + automated resource allocation = recipe for 
>>>> disaster.  (No need for AIs eating the world.)
>>>>
>>>> Miles
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>


-- 
In theory, there is no difference between theory and practice.
In practice, there is.  .... Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown
