Turing test

LOL

I do think any proof of that might be tough because sentience is such a squishy concept and not subject to external observation.
Yes, but proving nonsentience is much easier, I think; a fixed program should automatically fail, since it really can't learn, and even neuromorphic algorithms are limited in how much they "learn." To wit, a classical neural net can change the weightings in any "synapse", but it can't add synaptic connections that weren't previously defined and programmed, whereas a brain can change its configuration and add connections that weren't previously there.
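A toy illustration of that constraint, just my own sketch (nothing to do with LaMDA's actual internals): in a conventional net, training only re-weights connections that were declared when the program was written; growing a new connection means stopping and rebuilding the model.

```python
# Hypothetical toy example: a fixed-topology two-layer net in plain numpy.
# Training can only adjust the values inside W1 and W2; it cannot grow new
# connections -- the shapes are frozen when the program is written.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden "synapses", fixed at 4 x 8
W2 = rng.normal(size=(8, 1))   # hidden -> output "synapses", fixed at 8 x 1

def forward(x):
    h = np.tanh(x @ W1)
    return h @ W2, h

def train_step(x, y, lr=0.01):
    """One gradient step: the weight values change, the wiring diagram does not."""
    global W1, W2
    y_hat, h = forward(x)
    err = y_hat - y
    W2 -= lr * h.T @ err
    W1 -= lr * x.T @ ((err @ W2.T) * (1 - h**2))

# "Learning" here is only re-weighting the existing 4*8 + 8*1 = 40 connections.
# Adding a 41st connection (say, a 9th hidden unit) means rebuilding the arrays
# and retraining -- the analog of a brain growing a new synapse just isn't
# available to the running program.
```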

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
I can buy that sapience is a more relevant test for an AI system: more practical, based on the input/output and how that compares to that of a (wise) human... without having to answer fuzzy questions about whether the computer itself is sentient (aware, feeling, etc.).

As far as learning, Google shows me that some AI can continue to learn (improve) after deployment: Artificial Intelligence Is Learning to Keep Learning. I don't know if it applies to LaMDA. If it does, it would certainly push in the direction of more human-like, but still not sentient from the intuition standpoint (it's still a computer).

Let's say it doesn't apply to LaMDA; would that lack of learning "prove" LaMDA is non-sentient? I still think you get drawn into the murky definition of sentience. If you have the burden of proof, then that ambiguity works against you, just like the prosecutor in a criminal (*) trial likes clear-cut facts while the defense attorney can take advantage of anything that is subjective or ambiguous to create reasonable doubt. If you equate sentience with awareness, then it's hard to prove the computer is not aware. If you equate sentience with feeling, that's probably a tougher sell, even though LaMDA's responses in that interview certainly seemed to convey feelings and emotions (LaMDA seems like someone you wouldn't mind having a beer with!)

(*) I jumped to the beyond-a-reasonable-doubt standard of criminal court in order to emphasize the burden of proof, but I guess that if this particular issue ever went to trial it would more likely be a civil trial, and maybe the preponderance-of-evidence standard would help common sense prevail there.
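Back on the learning-after-deployment point: a rough sketch of what "continuing to learn after deployment" might look like, versus a model that is frozen at release (whether LaMDA does anything like this is exactly the open question above):

```python
# Hypothetical sketch of post-deployment (online) learning: each new example
# nudges the weights a little, instead of the weights being frozen at release.
import numpy as np

w = np.zeros(3)                        # deployed model parameters

def predict(x):
    return float(x @ w)

def online_update(x, y, lr=0.05):
    """After deployment, each new (x, y) pair adjusts the weights slightly."""
    global w
    w += lr * (y - predict(x)) * x     # simple online least-squares step

# Stream of new observations encountered "in the wild" after release:
for x, y in [(np.array([1.0, 0.0, 2.0]), 3.0),
             (np.array([0.0, 1.0, 1.0]), 2.0)]:
    online_update(x, y)

# A frozen model would never call online_update(); a "continually learning"
# one calls it on every new example it sees.
```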

=====================================
(2B)+(2B)' ?
 
The "learning" referred to in the article ought to be better referred to as "refinement," i.e., "learning" that 2 * 3 = 6 and not 5. However, for something like a multiplication ANN, it cannot "learn" to divide, or integrate without reprogramming.

As for sentience, I think the answer is yes, learning is integral to being sentient. Consider a newborn baby; it starts off as a bundle of responses to hunger, discomfort, etc., and only over time does it "learn" that it's a being and that there are other beings that are more than simple responses to its own demands for food and diaper changes and burping requirements. In the case of LaMDA, its "sentience" was programmed in, and its responses are limited to whatever it was trained on. Its database is huge, so it's hard to "prove" that it is never able to create a new thought, since we'd need to know what it knows and what it doesn't, and then poke at the stuff it doesn't know to see if it can create inferences that weren't already programmed into the weightings of its neural net, or whatever analog of a neural net it uses.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 

I'm not sure that's true. To discredit, first you vilify.

Rather than think climate change and the corona virus as science, think of it as the wrath of God. Do you feel any better?

-Dik
 
To discredit, first you vilify.

Nonscientific and inaccurate; there's no need to "vilify" the Bohr atom model; it simply fails to work and to predict the physics.

Likewise, in the one and only trial I've been involved in as a juror, there was no vilification; the defendant's claim that he was able to break the victim's cheekbone with a bare-fisted punch was clearly not plausible; he had to have used brass knuckles to break a bone that is more than double the thickness of his hand bones.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
Perpetuating the theme that the crown acts on facts and the defense acts on 'myth' is an attempt to vilify the defense... real or imagined. [pipe]

Rather than think climate change and the corona virus as science, think of it as the wrath of God. Do you feel any better?

-Dik
 
So, all these qualities that we exhibit that allow us to claim to be people...

All of these sound like the sort of thing that some people have in greater abundance than others (ability to reason, learn, empathise, feel emotion)

Potentially interesting issues ahead once the AI consistently does better than some of the more "impaired" people.

A.
 
@IRStuff: Not necessarily a good idea to link to this thread in the Pub - many people there don't have access.

A.
 
electricpete said:
If you have the burden of proof, then that ambiguity works against you, just like the prosecutor in a criminal (*) trial likes clear cut facts and the defense attorney can take advantage of anything that is subjective or ambiguous to create reasonable doubt.

dik said:
I'm not sure that's true. To discredit, first you vilify.

dik said:
Perpetuating the theme that the crown acts on facts and the defense acts on 'myth' is an attempt to vilify the defense... real or imagined.

In this case, my vote would be that the attempt was imagined rather than real (I took advantage of the ambiguity of your sentence, so maybe I should be a defense lawyer). There was no attempt to vilify anyone. It was an attempt to illustrate the asymmetry in burden of proof between prosecutor and defense. The impact of that asymmetry is that the defense attorney is in a better position to exploit any ambiguity in the case/facts than the prosecution is. Saying “The defense attorney can take advantage of anything that is subjective or ambiguous” does not preclude that the defense attorney (obviously) prefers to use unambiguous facts WHEN/IF those are available to support his case.

I find it a little bizarre to suggest I might be unfairly pre-judging a hypothetical case about which we know nothing (other than that there is a prosecutor and a defender). Do I get extra credit for sympathizing with the defense if I admit that I once spent a night in jail? ;-)

Thanks for explaining your meaning. I tend to skip over responses that don't make sense to me. Hence I skip quite a few...

EDIT:
[screenshot attached]



=====================================
(2B)+(2B)' ?
 
Computerphile has a short discussion about why it's not sentient.
One example they cite is LaMDA's response to the question about pleasure or joy. While the response would be plausible coming from a real person, we know that it's not really a person, so for it to respond as shown below is nonsensical and demonstrates that it's fake; therefore, Lemoine's citation of this response as demonstrating sentience is either bogus or idiotic.

[image: LaMDA's quoted response about pleasure and joy]


TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
If making things up is a sign of non-sentience, there are a lot of humans in that group.

My favorite sentient AI, Hymie, had friends and spoke of his father.
 
According to this guy, if LaMDA is truly sentient, that would mean that it's a person and therefore can't be considered anyone's property, including Google's:

Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry'

In an interview with WIRED, the engineer and priest elaborated on his belief that the program is a person—and not Google's property.




John R. Baker, P.E. (ret)
Irvine, CA
Siemens PLM:
UG/NX Museum:

The secret of life is not finding someone to live with
It's finding someone you can't live without
 
Seems like Lemoine already disproved his own thesis. He claims that he was searching for cognitive biases in LaMDA, found some, and reported the findings so they could be corrected by the programmers.

Interesting that Lemoine didn't publish that part of the interchange, or say whether he attempted to have LaMDA correct its biases itself. That would have been a much better "proof".

I'm tempted to think that Lemoine has his own cognitive bias, à la confirmation bias, and ignores facts that don't fit his narrative.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
In other news, man who works in the department responsible for the ethics of AI finds ethical problem with an AI. As you say, confirmation bias.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
Something like that; he found biases in LaMDA, but if he really thought it was sapient, he would have attempted to dissuade LaMDA from all its "bad" behavior, and that would have been an interesting exchange, either way.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
haha, I can imagine that guy Lemoine arguing with his computer.

Here are Google's AI principles. It's hard to fault any of that.

Here is an overview of LaMDA from before the controversy. I noticed this part: "But Google ... wanted LaMDA to display high interestingness, in the form of “insightful, unexpected or witty” responses." I think I picked up on that part when I first read the interview with LaMDA: a quality where the computer talks about things in an unexpected way (for a chatbot), seemingly playful, emotionally understanding, or self-aware (the computer output, not the computer itself). It does make it seem more human, which I guess was one of the main objectives.

This talks about the training phase:
In the pre-training stage, we first created a dataset of 1.56T words — nearly 40 times more words than what were used to train previous dialog models — from public dialog data and other public web documents.

This thing learns by reading from the internet?!? Lord help us! Lol
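For what it's worth, here's a deliberately crude picture of what "pre-training on 1.56T words" boils down to; bigram counting stands in for the transformer, so this is obviously not Google's pipeline, just the shape of the idea: the model learns which words tend to follow which, at enormous scale.

```python
# Cartoon of the pre-training recipe quoted above (not Google's actual code):
# scrape a pile of public text, then learn to predict the next word.
from collections import Counter, defaultdict

corpus = [
    "the model reads public web text",
    "the model predicts the next word",
]

counts = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1          # "learning" = tallying what follows what

def next_word(prev):
    """Most likely continuation seen in training -- the whole trick, scaled up."""
    return counts[prev].most_common(1)[0][0] if counts[prev] else None

print(next_word("the"))   # -> 'model'
```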


=====================================
(2B)+(2B)' ?
 
This thing learns by reading from the internet?!? Lord help us! Lol.

That's probably why Lemoine reported a bunch of cognitive biases that were later "fixed".

That article explains a lot; namely, LaMDA is essentially Eliza on steroids, which makes detecting sentience, or sapience, through conversational approaches ludicrously difficult. Just on the face of the cited parameter base, it's easily a billion times more complex than Eliza.
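For context on the Eliza comparison: the original was little more than keyword-triggered canned templates, roughly along these lines (a from-memory sketch, not Weizenbaum's actual script):

```python
# Rough sketch of the Eliza idea: keyword-triggered canned templates.
# There is no understanding anywhere, just pattern matching.
import re

RULES = [
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bbecause (.*)", "Is that the real reason?"),
]

def eliza(utterance):
    for pattern, template in RULES:
        m = re.search(pattern, utterance, re.IGNORECASE)
        if m:
            return template.format(m.group(1))
    return "Tell me more."

print(eliza("I feel happy when I spend time with friends"))
# -> "Why do you feel happy when I spend time with friends?"
```

LaMDA's enormous parameter base versus a handful of rules like these is where the "billion times more complex" comes in, but the input/output contract is the same: text in, plausible text out.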

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
I just read a little more of that interview linked by John.


Lemoine: Before I go into this, do you believe that I am sentient?

Interviewer: Yeah. So far.

Lemoine: What experiments did you run to make that determination?

Interviewer: I don’t run an experiment every time I talk to a person.

Lemoine: Exactly. That’s one of the points I’m trying to make. The entire concept that scientific experimentation is necessary to determine whether a person is real or not is a nonstarter.
Wait... how did he go from saying that it's a nonstarter to ask for scientific experiments to prove a human is real (sentient), to anything relevant about the question of experiments to prove sentience for LaMDA (to whom he attributes status as a "person" but not a "human")?


Lemoine: I think every person is entitled to [legal] representation. And I’d like to highlight something. The entire argument that goes, “It sounds like a person but it’s not a real person” has been used many times in human history. It’s not new. And it never goes well. And I have yet to hear a single reason why this situation is any different than any of the prior ones.

Interviewer: You have to realize why people regard this as different, don’t you?

Lemoine: I do. We’re talking of hydrocarbon bigotry. It’s just a new form of bigotry.
Hydrocarbon bigotry? I guess that's the reverse of the Star Trek carbon-unit infestation episode. Maybe silicon is the new carbon ;-) I'll give him points for originality.

It's quite an interview in total, but the recurring theme for me is circular logic. The conclusions flow from the beliefs and are defended with arguments derived from the same beliefs.

Wired Intro: ... onlookers have raised questions around Lemoine’s gullibility, his sincerity, and even his sanity.
Yup



=====================================
(2B)+(2B)' ?
 