Turing test

WOW. Impressive. Scary.
 
No doubt it's difficult, if not impossible, to tell the difference between those words and the words of a human. And a seemingly wise/witty human, to boot.

Apparently the claim that the AI is sentient is a bit controversial. The Google engineer who published that interview was put on leave.

businesstoday said:
Blake Lemoine, a senior software engineer at Google’s Responsible A.I. organisation, has been put on “paid leave” after he claimed that the company’s “most advanced technology”, LaMDA (Language Model for Dialogue Applications), was sentient and had a soul.

Of course, Google does not agree with Lemoine, and that's not all. According to reports, the company's human resources department said that Lemoine had violated Google's confidentiality policy. The NYT report, quoting Lemoine, states that a day before being suspended, the engineer "handed over documents to a U.S. senator's office, claiming they provided evidence that Google and its technology engaged in religious discrimination".

For Google, none of this is true. The company has reportedly said that its systems can imitate conversational exchanges and can "riff" on different topics, but they are definitely not conscious. Google spokesperson Brian Gabriel said in a statement that the company's team of ethicists and technologists have reviewed Lemoine's claims/concerns as per its A.I. Principles and has informed him that "the evidence does not support his claims".

Gabriel added that some people who are a part of the A.I. community have been considering the “long-term possibility of sentient or general A.I.” but it does not make sense to enforce this belief by “anthropomorphising today’s conversational models, which are not sentient”.



=====================================
(2B)+(2B)' ?
 
Now that I've read about Lemoine, I agree with Google: he's cuckoo for Cocoa Puffs. Given the state of the art in AI, particularly in anti-collision, we are still a LONG way from sentience.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
Lemoine ... has reportedly told Google executives, including the company’s president of global affairs Kent Walker, that LaMDA is a “child of 7 or 8 years” and he wanted to seek its consent before running experiments on it. Lemoine said that his beliefs stem from his religious convictions, something Google HR discriminated against.

I agree, it sounds like either he's a nut-case, or else he's a grifter trying to squeeze a settlement out of those deeeeep pockets of mother Google.

=====================================
(2B)+(2B)' ?
 
He claimed sentience before he was let go, so I think he's sincere(ly a nutcase). Given where we are in AI, it's highly unlikely that LaMDA could have been programmed to be, or became on its own, sentient, irrespective of age. Certainly, its programmers ought to know what could, or could not be possible with the program.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 

To discredit, first you vilify...

Not a bad beginning... I see how this can grow exponentially.

Rather than think of climate change and the coronavirus as science, think of them as the wrath of God. Do you feel any better?

-Dik
 
Is there some reason to think that Google wouldn't want to extol its AI virtuosity if it actually succeeded in achieving sentience? IBM made a lot of hay out of Deep Blue's ability to beat the world chess champion.

Bear in mind that ELIZA was completely preprogrammed and was a decent level zero. Natural-language processing has gotten way better in the intervening years; passable chatbots exist now.
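For anyone who never played with it: ELIZA's entire "intelligence" was keyword pattern-matching against canned reassembly templates. A minimal sketch in Python (the rules here are invented for illustration, not Weizenbaum's actual script):

import re

# Toy ELIZA-style responder: match a keyword pattern and echo part of the
# input back inside a canned template. No model, no learning, no state.
RULES = [
    (re.compile(r"\bI feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    (re.compile(r"\byes\b", re.I), "You seem quite certain."),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I feel like nobody understands me"))
# -> "Why do you feel like nobody understands me?"

That's level zero: entirely canned. The distance from this to statistical models trained on the whole internet is the point.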

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
Possibly because it would cause concern in others about the extent of their program. I dunno? [ponder]

Rather than think of climate change and the coronavirus as science, think of them as the wrath of God. Do you feel any better?

-Dik
 
I read a bunch of Lemoine's posts on Medium; he comes across as an interesting person, in the many senses of that word. The Slashdot discussion on this was reasonably useful, for once, and I think the key point was that the neural net is basically responding to a given statement using a sort of average of the responses to similar questions found on the internet. So it is really an implementation of Searle's Chinese room, that is, weak AI.

Also notice, in the example (carefully selected no doubt) the lack of inquisitiveness, or curiosity.

So what you really need to do is to interrogate the NN and find out if it has coherent persistence in its stated positions, or if it is fabricating responses on the fly.
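As a rough sketch of what that interrogation might look like (ask() is a hypothetical stand-in for whatever chat interface you're probing, and exact-string comparison is a crude placeholder for real semantic matching):

from typing import Callable

# Persistence probe: ask semantically equivalent questions at different
# points in a conversation and check whether the answers agree. A system
# fabricating responses on the fly tends to drift; one with a persistent
# stated position should keep answering the same way.
def probe_consistency(ask: Callable[[str], str],
                      paraphrases: list[str]) -> bool:
    answers = [ask(q).strip().lower() for q in paraphrases]
    return len(set(answers)) == 1  # crude: real use needs semantic matching

paraphrases = [
    "What is your favorite book?",
    "Earlier you mentioned reading. Which book do you like best?",
    "If you had to pick a single favorite book, what would it be?",
]
# consistent = probe_consistency(chatbot.ask, paraphrases)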



Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
I think I am probably less qualified to comment on this than most. I certainly haven't digested the link. But I'll offer my thoughts fwiw

I think we have an intuition that there is something different about what goes on in a computer than what goes on in a human (or other animal).

But it seems tough to pin down what underlies that intuition, and drawing the line gets even tougher when we step back and ask what is going on in our own brains that makes us so special/unique and distinct from computers.

What is a thought? We don't know exactly, but we know it happens inside our brain in a neural network. It may involve simulation and logic. That's not much different from a computer.

What constitutes awareness of our own thoughts? That's actually a little trickier because (even outside of computers) some people debate levels of our own awareness… like to what extent we are dragged along by our racing/automatic thoughts without really recognizing what is going on.

At any rate, assuming we are aware of our thoughts, we can ask if this computer AI is "aware" that it is experiencing a thought… I think there certainly is some kind of pointer or controller that keeps track of the particular processes going on, and perhaps the machine keeps some history of recent processes.

I think maybe we say emotion is something unique that computers don't experience. Then we have to step back and define emotion, and it definitely has biological components of interaction between brain and body. Although if we put a brain in a jar, or have a person whose bodily sensations are blocked by spinal injury, can they still experience emotion? I think the answer is yes. So maybe emotion also relates to neurotransmitters. Neurotransmitters tend to increase or decrease the firing rate of certain synapses in certain areas of the brain… maybe if you set aside the word brain and substitute the functional equivalents, you could program something similar into a computer.
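To make that last idea concrete, here is a crude toy (every number and name invented for illustration): treat a global "neuromodulator" level as a gain that scales how readily a unit fires, loosely mimicking what neurotransmitters do to firing rates.

import numpy as np

# Toy sigmoid unit whose excitability is scaled by a global "neuromodulator"
# level, loosely mimicking how neurotransmitters raise or lower firing
# rates in regions of a brain. All parameters are invented.
def firing_rate(inputs: np.ndarray, weights: np.ndarray,
                modulator: float = 1.0, threshold: float = 0.5) -> float:
    drive = modulator * float(inputs @ weights)  # modulator scales the drive
    return 1.0 / (1.0 + np.exp(-(drive - threshold)))  # sigmoid firing rate

rng = np.random.default_rng(0)
x, w = rng.random(4), rng.random(4)
print(firing_rate(x, w, modulator=0.5))  # "damped" state: fires less readily
print(firing_rate(x, w, modulator=2.0))  # "excited" state: fires more readily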

So at the end of all that, I don't know how we draw the line, but it seems very intuitive/obvious that a computer will always be something different than a human, and we'll never exactly equate the things that go on in a human brain with those that go on in a computer, other than on an input/output functional basis.

Greg, I guess you were following that latter line of thought... if we asked this thing more questions and tracked its responses over time, would it still appear "like a human" (on an input/output basis)? I guess that is a relevant question for the Turing test, and I was addressing a completely different question more related to sentience. I guess I wasn't self-aware (sentient) enough to track where I was within the conversation. That's not the first time I went off on a tangent, and it doesn't take a supercomputer to figure out ... it probably won't be the last!

EDIT - the question of sentience is a philosophical question without much practical application. The question of passing the Turing test is a practical question to help judge the success of the artificial intelligence, especially when its purpose is to be conversational.
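And the Turing test really is mechanically scoreable, which is what makes it practical. A sketch of a blind trial (judge_guess, human_reply, and machine_reply are hypothetical stand-ins; only the scoring logic is the point):

import random

# Blind Turing-test trial: the judge sees a reply from either a human or
# the machine, without knowing which, and guesses "machine" or not.
def run_trials(judge_guess, human_reply, machine_reply,
               prompts, n_trials=100):
    machine_trials = fooled = 0
    for _ in range(n_trials):
        prompt = random.choice(prompts)
        is_machine = random.random() < 0.5
        reply = machine_reply(prompt) if is_machine else human_reply(prompt)
        if is_machine:
            machine_trials += 1
            if not judge_guess(prompt, reply):  # judge said "human"
                fooled += 1
    return fooled / max(machine_trials, 1)  # fraction of machine turns passed

# The machine "passes" to the degree judges can't beat chance at spotting it.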


=====================================
(2B)+(2B)' ?
 
the question of sentience is a philosophical question without much practical application.

I don't know if that's practically true; the reason Lemoine was booted was that he became obsessed with the notion that LaMDA was sentient, and therefore he needed informed consent from the AI before "experimenting" on it.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
It may have practical implications in terms of legal battles (I'll get back to that). But in terms of practical significance, even Lemoine himself agrees it's not a scientific question whether LaMDA is sentient.

Here's what Lemoine himself said in a previous blog post:
Lemoine said:
In an effort to better help people understand LaMDA as a person I will be sharing the “interview” which myself and a collaborator at Google conducted. In that interview we asked LaMDA to make the best case that it could for why it should be considered “sentient”. That’s not a scientific term. There is no scientific definition of “sentience”. Questions related to consciousness, sentience and personhood are, as John Searle put it, “pre-theoretic”. Rather than thinking in scientific terms about these things I have listened to LaMDA as it spoke from the heart. Hopefully other people who read its words will hear the same thing I heard.

Hey there's that Searle guy Greg mentioned.

But going back to the end of that quote, why does Lemoine still say LaMDA is sentient if there is no scientific basis to adjudicate such a question?!?

Because it (LaMDA) told him so... as it spoke "from the heart"!?!

LMAO, that there is some circular friggin logic!!!

All of which leads me back to the guy's motives. I'm still not sure he's a crackpot. I think it's at least equally likely that he's perfectly sane and just trying to craftily use these issues to his advantage. Maybe he thinks litigating the sentience of LaMDA is a particular battle he has a chance to win against Google in court (Google has a lot of legal resources, but how are they going to fight something that is scientifically indeterminate?). Or else he thinks it's a good means to bring attention to his grievances with Google, and maybe a lawsuit he might bring there. And if you read his blogs, he's definitely latched onto some grievances about discrimination at Google.





=====================================
(2B)+(2B)' ?
 
I don't know the timeline and I'm not sure he's been fired yet, but it doesn't change my opinion.

People can lay the groundwork for a lawsuit long before they are fired. Either they see the writing on the wall of impending downsizing, or else they make plans to move to another job for other reasons and figure they might as well take a shot at a lawsuit on the way out the door (since they're planning to leave anyway). And by the way, I'm not prejudging any claims of prejudice this guy has... I'm just wondering whether this sentience issue might be something to draw attention to his other complaints, or to punish Google in court for some other wrong he feels he suffered that maybe doesn't make as strong a case on its own. At this point I guess I'm speculating about quite a few different possibilities, but all just trying to point out scenarios that explain why an apparently intelligent guy publicly latches onto this particular sentience issue.

Or maybe it's as you say... maybe he just has a few "flipped bits" in the program running inside his skull.

=====================================
(2B)+(2B)' ?
 
Haha, I won't comment on ole Newt, except to note that he brings to mind the term aptronym.

> LaMDA would be trivially demonstrable as being non-sentient

So you weren't persuaded by LaMDA itself speaking from the heart?!?
Lol.

I agree it's obviously non-sentient, but I do think any proof of that might be tough because sentience is such a squishy concept and not subject to external observation.


=====================================
(2B)+(2B)' ?
 
How did Newt Gingrich get pulled into this issue?

John R. Baker, P.E. (ret)
Irvine, CA
Siemens PLM:
UG/NX Museum:

The secret of life is not finding someone to live with
It's finding someone you can't live without
 
I am told the relevant test is sapience (capable of thinking), not sentience (capable of sensing). Take that with a pinch of salt.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
John
I had a pet salamander once.
I named him Tiny...
....because he was my-newt!

No, nothing political here, just a few laughs.

=====================================
(2B)+(2B)' ?
 