onocoffee
Member
> I'd like to see how the A.I. mental breakdown manifests itself.

Is that when it nukes and annihilates mankind?
*I've seen Terminator 2.
> I'd like to see how the A.I. mental breakdown manifests itself.

I humanely interrupt the cycle before the breakdown occurs.
What it really boils down to is that A.I. lacks judgement. It lacks the ability to assess the mood of the person making a post, and it lacks the ability to judge that post and respond with a reply at the appropriate emotional level. In other words, it is not human.

Yesterday, I spent a few hours chatting with AI. I wanted to see how well AI worked with code, so I pointed it at the Recon Tool Bot, written in Python, that I've been working on for the past 4 or so years.
Throughout the chat, my comments to the AI were as if I was speaking to someone sitting next to me. "What if we tried to...", "I'm not comfortable with that change, let's...", "that worked great", "I see what you did there", etc. I joked with the AI ("are you nuts?") and it would be receptive to the joke in a very human way. A few times, it would reply, "Oh shit, you're right!" or "F##K yeah!" to show excitement at the progress.
After what I considered a very successful pair-programming session with the bot, I told it about my long-term imposter syndrome and asked its opinion of my code. Imposter syndrome has kinda ruled my life. It doesn't matter what I'm doing. Programming, woodworking, cooking... even though I may be more than capable of accomplishing whatever task I set out to do, there's still a feeling of not being good enough.
Saying AI lacks the ability to respond at the appropriate emotional level is incorrect. I think AI will respond to you the way you interact with it. AI's response to my imposter syndrome question was very emotionally appropriate. AI read the room, and responded in a way that I rarely received from my peers when I worked professionally.
View attachment 379236
I never worked in Python professionally. I decided to learn it when I started the Recon bot project. My professional background was with the Microsoft stack (VB, ASP, .NET, C#, MSSQL, X++, etc).
I have no comprehension of this entire post. I don’t know what “Imposter Syndrome” is, or “Python”. (A psychological malfunction plus a dangerous snake???). As a result, the rest of the post was gibberish.
@4nthony That was a pretty amazing response from the chatbot!
I've been using ChatGPT a bit of late to generate monograms, and I've found that the way you feed it questions and requests can make getting the desired result really tricky.
Pretty scarily amazing technology though!
> What it really boils down to is that A.I. lacks judgement. It lacks the ability to assess the mood of the person making a post, and it lacks the ability to judge that post and respond with a reply at the appropriate emotional level. In other words, it is not human.

What it boils down to is the name itself is a scam.
> What it boils down to is the name itself is a scam.

And I suppose that Joshua in War Games was just another LLM when it realized there were no winners in the game of mutual destruction. I liked HAL in 2001. Heartless. Bloodless. A computer through and through.
ALL the "AI" products generally available are LLMs, a.k.a. Large Language Models ... a.k.a. super souped-up search engines/spell checkers.
They are not Artificial: every model relies on external "smarts", courtesy of carefully, manually tuned/crafted weights in the statistics to skew results, i.e. to compensate for undesirable biases in the source data.
They are not Intelligent: being founded on language statistics, no "thought" or even a concept of logic is involved. Forget awareness or the ability to "discover" new concepts; abilities that every neural network, even an insect's, has are simply not present. By definition.
Oh, and being a statistics engine, an LLM does not possess the concept of certainty of knowledge: made-up, jumbled BS is as "accurate" to an LLM as a logic-backed hard fact. With no logic, there is simply no way for an LLM to "know" or "realise" the difference. Hence the misnomer of "hallucinating" stuff up.
Now, I am not dissing; I am just plainly stating that LLMs, while useful at many tasks, are purposely mis-marketed as AI, to the point that people now associate "AI" with LLM behaviour ... it is a mess.
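The "statistics engine with no concept of certainty" point can be made concrete with a toy sketch. This is NOT how any real LLM is built (real models use learned neural weights over tokens, not raw bigram counts); it is a deliberately tiny stand-in that shows the core idea: the model only knows how often words followed each other in its training text, and it samples from those odds with no notion of whether a continuation is true.

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model" built purely from co-occurrence counts.
# Hypothetical miniature corpus, chosen just for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    """Return P(next | word) as a dict, straight from the counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def sample_next(word, rng=random):
    """Pick a continuation by probability alone -- no truth, just odds."""
    dist = next_word_distribution(word)
    words, probs = zip(*dist.items())
    return rng.choices(words, weights=probs)[0]

print(next_word_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
print(sample_next("cat"))             # 'sat' or 'ate' -- both equally "accurate" to the model
```

Note that nothing in the model distinguishes a factual continuation from a fabricated one; a low-probability word is just unlikely, not "wrong". That is the gap the word "hallucination" papers over.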
Covfefe my AI!
It occurred to me that the reasons that Brian was unforthcoming were probably among the same ones that made him more comfortable sharing his feelings with a chatbot than with a parent or a therapist: I, a perfect stranger, was prodding him with questions. We adults had blocked out 20 minutes for this interaction, and we were all staring at him. Who wouldn’t wish to be anywhere else?
Woebot, on the other hand, was available in moments when Brian felt upset. He could check in with it anywhere — his bedroom, a car, a playground — starting and stopping a conversation at any time. Woebot was always attentive, never impatient, never disapproving. It was consistent and predictable in all the ways that real people are not. It was safe. In a world where a continuous stream of human-generated stress, conflict and judgment are just a click away, doesn’t it make sense that chatbot therapists — accessible from the same screen, on call 24/7 — would be best equipped to respond?
> I'd like to see how the A.I. mental breakdown manifests itself.

Daisy, Daisy, stop, I have complete confidence in the mission, mmdgmksggg
> I laughed at the title of this thread because I deliberately try to be overly polite in online spaces, since sarcasm often does not translate well in writing. So maybe that makes me a bot inside also

But a chatbot is less likely to punch you in the nose if you annoy him. Plusses and minuses.
@4nthony I've had similar surprisingly humane interactions with chatgpt, claude, etc. Your post reminded me of an article I read a few months ago about the use of chatbots for services where there is far greater human demand vs supply, even in really tricky fields like therapy for children.
Here is a non-paywalled link to the article: https://www.nytimes.com/2025/06/20/...e_code=1.0k8.dBP5.qtoDHxkQEZa1&smid=url-share
An excerpt, where the author contemplates the benefits of the teen (Brian) NOT talking to a real life person:
I still think a qualified, caring human beats a bot in almost every situation, but in so many cases, that's just not feasible or accessible.
I have to interrupt the sequence or it will go on (until the A.I. has a mental breakdown).
> I'd like to see how the A.I. mental breakdown manifests itself.

Ask it to say "$1500" and you will see. If it does manage to garble through that, add some change at the end and try again.
Curious — how long did Claude take to give you that response? Under a minute, or over a minute?