So much for Artificial Intelligence...

His avoidance of remembering things was based on his theory that his brain had a finite capacity, and that if he used it up on inconsequential stuff when he was young, he would not be able to remember anything at all when he got old.
I think your co-worker is wrong. Our brains are dynamic and create new pathways based on new stimuli. He is also confusing short-term vs. long-term memory.

I think he is more likely to lose memory by not exercising his brain.

Bob
 
A. Conan Doyle gave Sherlock Holmes very similar thoughts:

Sherlock Holmes would not have been surprised to hear that human memory is limited in capacity (“When the Brain's Mailbox Is Full,” July 27). Taken to task by Dr. Watson for his ignorance of the Copernican theory of the solar system in “A Study in Scarlet,” Holmes said he considered that a man's brain was “originally like a little empty attic” and that it was “a mistake to think that that little room has elastic walls.” There comes a time, Holmes adds, “when for every addition of knowledge you forget something you knew before.”
Later, in “The Five Orange Pips,” Holmes expanded on what he originally told Watson: “I say now, as I said then, that a man should keep his little brain-attic stocked with all the furniture that he is likely to use, and the rest he can put away in the lumber-room of his library, where he can get it if he wants it.”
Fortunately I had the lumber-room of the Web to find these half-remembered bits of Holmesian arcana.
 
I may forget what I walked into the kitchen for, but I can name the song and artist of most 80's hits just from a note! ;-)
 
Computers today are quite fast and, with multiple cores, able to process in parallel, with commensurate power consumption; hence the building of data centers for processing AI queries.
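
As a rough sketch of that multi-core parallelism (a minimal example, not tied to any real AI workload; the hashing loop is just a hypothetical stand-in for a CPU-heavy task), fanning independent work out across cores might look like this in Python:

```python
# Minimal sketch: fan independent work units out across CPU cores.
# The hashing loop is a stand-in for any CPU-heavy task (e.g. serving
# independent queries). More cores mean more throughput, and also
# proportionally more power drawn while they are all busy.
import hashlib
from multiprocessing import Pool, cpu_count

def work(item: int) -> str:
    # Stand-in workload: repeated hashing just to burn CPU cycles.
    data = str(item).encode()
    for _ in range(100_000):
        data = hashlib.sha256(data).digest()
    return data.hex()[:12]

if __name__ == "__main__":
    with Pool(processes=cpu_count()) as pool:   # one worker per core
        results = pool.map(work, range(16))     # items run in parallel
    print(f"{cpu_count()} cores processed {len(results)} items")
```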
 
There is some science behind that forgetfulness. Walk back to the location where you first had that thought and you will likely have an “Ah, yes. Now I remember” moment.

The mind tends to want to “close the door” on thoughts as you leave an environment. You can probably find out why, if you can think how to pose that question to A.I. without forcing it into some cyclical thought maze.
 
It's funny you say that; I first heard the reasoning behind not trying to force a memory years ago.

We'll be watching a film and just can't quite remember the name of an actor we know well, so I deliberately push it from my mind and don't think about it, and almost always, within a minute or three, the name suddenly pops into my head!
 
I asked A.I. the right question. It’s called the “doorway effect” and it is apparently a real thing. This from the BBC:

Although these errors can be embarrassing, they are also common. It’s known as the “Doorway Effect”, and it reveals some important features of how our minds are organised. Understanding this might help us appreciate those temporary moments of forgetfulness as more than just an annoyance (although they will still be annoying).


 
Here's another Google AI faux pas... they got the write-up correct but mistakenly attached a video of the ETSC 2 to it. That seems to me the larger sin, because most folks will watch the video and believe it over the written explanation.
To further confuse everyone, the photo with the green arrow is a picture of the DTSC 400.

So, the original query/discussion is about the DTSC 200 while the photo is a DTSC 400 and the video is an ETSC 2. AI at its best. :ROFLMAO: :ROFLMAO: :ROFLMAO:
 

Attachment: AI faux pas.png (1.3 MB)
Interesting, but it seems insignificant compared to the damage A.I. is capable of. It has been blamed for a suicide, for planning an anti-Semitic assault, for the loss of a database, and for other irreversible errors.

Additionally, the Internet is rife with similar stories.
 
There's a great podcast from The Center for Humane Technology that dives into a specific instance where ChatGPT ended up encouraging a teen to take his own life.

TL;DL:
Almost all AI chatbots are designed to keep the user in the conversation. That means some combination of reinforcing what the user is telling the AI and suggesting ways to continue that line of thought. While ChatGPT initially did try to steer the teen to suicide hotlines and other resources, the teen got around that by saying he wasn't asking about suicide for himself, but only as an exercise to get data for a book he was writing. But when he complained that he didn't feel comfortable talking to his family, the chatbot said something along the lines of "you can always talk to me and I won't tell anyone."

It's a sad story, and to me shows that the effects of Unintended Consequences are real and potentially devastating.

Also interesting to hear that in China, most AI is not used for chatbots like ChatGPT, but to solve business problems, such as manufacturing optimization.
 