The Chinese Box and Turing Test: AI Has No Intelligence at All

Opinion Remember ELIZA? The 1966 chatbot from MIT’s AI Lab convinced countless people it was intelligent using nothing but simple pattern matching and canned responses. Nearly 60 years later, ChatGPT has people making the same mistake. Chatbots don’t think – they’ve just gotten exponentially better at pretending.
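
That “simple pattern matching and canned responses” trick is easy to see in code. Below is a minimal, hypothetical ELIZA-style sketch in Python – the rules are mine for illustration, not the script from the 1966 program – showing how a handful of regexes and templates can produce a surprisingly conversational reply without representing any meaning at all.

```python
import re

# Illustrative ELIZA-style rules: a regex to match, plus a canned response
# template that reuses whatever the regex captured. The real 1966 program
# used a much larger script, but the mechanism is the same.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Your {0} seems important to you."),
]
FALLBACK = "Please go on."  # canned reply when nothing matches


def respond(utterance: str) -> str:
    """Return a canned response built from the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK


if __name__ == "__main__":
    print(respond("I am worried about my job"))  # Why do you say you are worried about my job?
    print(respond("It rained all day"))          # Please go on.
```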

Alan Turing’s 1950 test set a simple standard: if a judge can’t tell whether they’re conversing with a human or machine, the machine passes.

By this metric, many chatbots are already “intelligent.” You can test this yourself at Turing Test Live. Recent studies from Queen Mary University of London and University College London found people can’t reliably distinguish human voices from AI clones.

That’s great news for scammers, not so good for the rest of us. Keep that in mind the next time your kid calls asking for a quick Venmo loan to cover a car accident – the voice on the line may not be your child in trouble, and if you pay up, it’s you and your bank account that will be.

But is the AI being used for this actually intelligent or just very, very good at faking it? This is not a new question. American philosopher John Searle came up with the Chinese Room, aka the “Chinese Box” argument, all the way back in 1980. He argued that while a computer could eventually simulate understanding – i.e. it could pass the Turing Test – that doesn’t mean it’s intelligent.

The Chinese Box experiment imagines a person who does not understand Chinese shut inside a room, using a set of instructions (e.g. a program) to respond to written Chinese messages (data) slipped under the door. Although the person’s answers, with enough training (machine learning), are perfectly fluent, they are derived only from symbol manipulation, not from understanding. Searle argues this situation is analogous to how computers “understand” language. The man in the middle still doesn’t have a clue what either the incoming or outgoing messages mean. It’s syntactic processing without semantic comprehension. Or, as I like to put it, very sophisticated mass-production copy and paste.
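
A toy version of Searle’s setup makes the point concrete. In the sketch below – a hypothetical rulebook I’ve invented for illustration, not anything from Searle’s paper – the program maps incoming Chinese strings to outgoing ones by pure lookup. Nothing in it encodes what any of the characters mean, yet from outside the door the answers look fluent.

```python
# A toy Chinese Room: the rulebook is a lookup table from incoming symbol
# strings to outgoing ones. The entries are invented for illustration; the
# point is that whoever (or whatever) applies them is manipulating symbols
# with no representation of what they mean.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "It's lovely today."
}

DEFAULT = "请再说一遍。"  # "Please say that again."


def person_in_the_room(message: str) -> str:
    """Slip a note under the door; return whatever the rulebook dictates."""
    return RULEBOOK.get(message, DEFAULT)


print(person_in_the_room("你好吗？"))  # Fluent-looking Chinese, zero understanding.
```

Swap the hard-coded lookup table for a few hundred billion learned weights and, on Searle’s analogy, you have the same room with a much bigger rulebook.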

For example, I was recently accused of writing a Linux story using AI. For the record, I don’t use AI for writing. For search, yes – Perplexity, for one, is a lot better than Google – but not for writing. So I looked into it, and what did I find? ChatGPT’s answers did indeed read a lot like my writing, because it had “learned” by lifting words from my earlier articles on Linux.

According to Searle’s argument, current AI can never have true understanding, no matter how sophisticated it may get or how easily we’re fooled by it. I agree with him as far as today’s AI goes. Generative AI really is just copy and paste, and agentic AI, for all the chatter about it being a new step, is just GenAI large language models (LLMs) talking with each other. Neat, useful, but in no way, shape, or form a fundamental step forward.

Come the day we have Artificial General Intelligence (AGI), we may have a truly intelligent computer. We’re not there yet. Despite all the hype, we’re not even close to it today.

Sam Altman, head of OpenAI and the company’s number one cheerleader, may have said: “We are now confident we know how to build AGI as we have traditionally understood it,” but that’s crap. Will we eventually have a truly smart AI? Sure. I can see that happening. 

Wake me up when we have one that can pass the Chinese researchers’ Survival Game test. This test requires an AI to find answers to a wide variety of questions through continuous trial and error. You know, like we do. Their best guesstimate is that a system that knows precisely what it’s saying and doing – HAL declaring “I’m sorry Dave, I’m afraid I can’t do that,” and meaning it – won’t arrive until around 2100 AD.

I think we can get there faster than that. Technology always tends to improve faster than we think it will, even though we’re terrible at predicting exactly how it will improve. I still want my flying car, but I’ve given up hope that I’ll ever get one.

You may ask yourself: “Does this really matter? If my GenAI girlfriend says she loves me and I believe her, isn’t that enough?” OK, for the terminally lonely, that may be fine. Profoundly sad, but OK. However, when we think of AI as intelligent, we tend to also think they’re reliable. They’re not. Maybe StacyAI 2.0 won’t cheat on you, but for work, we want a wee bit more.

We’re not there yet. Kevin Weil, OpenAI’s VP for science, recently claimed “GPT-5 just found solutions to 10 (!) previously unsolved Erdős problems.” Ah, nope. No, it hadn’t. OpenAI’s latest model had simply scraped answers off the internet and regurgitated them as its own.

Anthropic has discovered that, like people, AI programs will lie, cheat, and blackmail. But they’re not coming up with it on their own. Once again, they’re just copying us. Sad, isn’t it? ®

