Searle's Chinese Room

If you've never heard of it, noodle on this: Searle's Chinese Room.
I believe this thought experiment is apodictic proof that no digital computer (or Turing Machine) can "understand" in the sense that Human Beings can. Here is why.

Some approach the Chinese Room as a game: can you come up with a question (or sequence of questions) that will reveal which room has the computer and which has the person who understands Chinese? It's a fun game, but it's only a distraction from Searle's point; his thought experiment goes deeper than that.

Give AI the benefit of the doubt and suppose hypothetically that some algorithm or program, while finite, can be so advanced or sophisticated that it is impossible to come up with a question (or sequence of questions) that would reveal which room has the computer. Even so, this would not imply that the computer (or any Turing Machine) "understands" Chinese. Put differently, even if an algorithm is clever or sophisticated enough to trick people into thinking the computer is a person, that does not imply that the computer or algorithm "understands" anything. The classic Turing Test is an interesting turning point in the state of the art, but it is philosophically meaningless.

The reason why is simple:

Replace the computer with another Human Being who does not understand Chinese. That person can blindly follow the same algorithm/program/instructions the computer did. The person will be slower, but that is an immaterial difference. Time is not a factor in this thought experiment.
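To make the idea of blind rule-following concrete, here is a deliberately trivial sketch (my own illustration, not Searle's actual rule book): a lookup table that maps Chinese questions to canned Chinese replies. Whoever executes it, a computer or a person, simply matches symbols and copies out the answer; no understanding of either side of the exchange is required.

    # A toy stand-in for the "rule book": a lookup table from input symbols
    # to output symbols. The phrases are placeholders, not a real model of
    # conversation; the point is that executing the rules involves no
    # understanding of what the symbols mean.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's very nice today."
    }

    def follow_rules(question: str) -> str:
        """Reply by blind pattern matching against the rule book."""
        return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(follow_rules("你好吗？"))  # prints 我很好，谢谢。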

We as Human Beings know these two people are in different mental states. The mental state of blindly following instructions is entirely different from the mental state of understanding the questions and thinking up responses. How do we know this? Because we Human Beings are familiar with both of these mental states, we use each of them regularly, and we know they are totally different.

This, I believe, was Searle's point. And I believe there is no counterargument to it. It rests on our direct, immediate knowledge and experience as Human Beings who regularly use each of these mental states and know they are different.

Here's another example. A Turing machine with a good enough algorithm can beat a Human Being at chess; to our credit, we Human Beings came up with such algorithms. But the Turing machine doesn't "understand" the game of chess. Indeed, a person who doesn't even know how to play could follow the same algorithm.
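To see how mechanical such a procedure is, here is a sketch of plain minimax search over a toy take-away game (my own illustration; real chess engines use far more elaborate search, but the character is the same). Every step is a fixed rule that a person could carry out with pencil and paper without knowing, or caring, what the game is about.

    # Toy game: a pile of stones; each player removes 1-3 stones per turn,
    # and whoever takes the last stone wins. Minimax assigns every position
    # a value by mechanically exploring all continuations.
    def minimax(stones: int, maximizing: bool) -> int:
        """Value of the position for the maximizing player: +1 win, -1 loss."""
        if stones == 0:
            # The previous player took the last stone and won.
            return -1 if maximizing else +1
        values = [minimax(stones - take, not maximizing)
                  for take in (1, 2, 3) if take <= stones]
        return max(values) if maximizing else min(values)

    def best_move(stones: int) -> int:
        """Pick the number of stones to take with the best minimax value."""
        return max((take for take in (1, 2, 3) if take <= stones),
                   key=lambda take: minimax(stones - take, maximizing=False))

    print(best_move(10))  # prints 2: leave a multiple of 4 for the opponent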

In conclusion, nobody is sure what the human mind and consciousness really are, but we know with certainty that they are more than a Turing machine. Goedel's Incompleteness Theorem is another proof of this.