I wasn't intending to argue, actually. I just thought it very interesting that God could reason creation into being versus uttering magic words. And it made me consider, in the context of what I was reading, how the meaning and implications change.
God can reason without words? I certainly can't. Perhaps He's a geometer?
We do a lot of our reasoning without words, why would God not be able to?
Who is this "we" of whom you speak? Indeed, on whose behalf you speak.
Mathematicians?
"42" ... Says it ALL ...
OK, metamathematicians.
UNTIL ... we get to the square root of -1 ... and/or the value of "pi" (as a whole or by the slice ... ???)
Humans. If you think you use words in all your thinking, you are not thinking very clearly.
And you're a genius too.
If we must use language to think, leaps would not be possible, because we don't have the language for them. Language is about communication with others, and it can help to run through internal ideas with language, but that does not make all thinking tied to it. We thought without language before we invented it, and in order to invent new language to describe new concepts.
There certainly is good evidence that non-human animals engage in "thinking," despite having no "language," per se, as we understand it ...
But having said that, based on accumulated observations, we at this point have no access to the subjective INNER mental world of non-humans ...
see: Donald Griffin, "Animal Thinking." (1984, Harvard Univ. Press) ...
Indeed. Good thing I never claimed we all must use words to reason. Bad thing that nobody read what I really said, but only what they think I said. Indeed, what I did was ask a question: who is this "we"? The question was never answered; I was just insulted. Call that "reasoning"? Because I don't.
Humans was my answer to you. The you in the rest was addressed to anyone who thinks we need words to think. Insult? It was an accurate statement, whether that is insulting is a different issue.
I read what you said. Perhaps I do not understand what you meant. You said
God can reason without words? I certainly can't. Perhaps He's a geometer?
The first sentence questions the ability to think without words. The second makes a personal claim, which I doubt is true.* The third sentence is irrelevant in that mathematics is a language. One can solve geometry problems without using language to do it, though one must use language to communicate the solution.
*I don't doubt you might think it true, I just doubt that it is an accurate statement.
God can reason without words? I certainly can't. Perhaps He's a geometer?
Strictly speaking, God doesn't reason: God exists outside time, so He doesn't work through arguments in sequence, and in any case God already knows the conclusions. Neither can God climb trees. We talk about God reasoning because we're using human imagery to describe what is beyond language.
And lest we forget, NO ... !!! ... God cannot make a rock so big that (S)He could not move it ... (and ... ONE angel can dance on the head of a pin ...see: Billy Collins, "Questions About Angels.")
Whatever you do best, psychoanalysing me isn't it. And I really wish you'd stop.
I am not psychoanalysing you. I am speaking to the way the neurotypical* human mind works. Most people do not think about the way they think, nor observe other people in situations that lend themselves to analysing the process.
*Temple Grandin speaks of thinking in images. IMO and IME, everyone does to one degree or another.
see: "neurolinguistic programming," as per Bandler and Grinder
I can certainly think without words. I can come to conclusions of what action I should take without words. I'm not sure that doing that constitutes reason.
Consider this as an example. I can produce a deterministic mathematical algorithm that, given an input dataset, will compute a conclusion. That's "reasoning".
I can also train a neural net to do the same thing. The trained net that I have produced is still deterministic, and we'll assume that it does a good job of reproducing the results of the formal algorithm. But it doesn't think in the same way. Is my neural net "reasoning"?
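The contrast being drawn here can be made concrete with a toy sketch (all names and the specific rule are hypothetical, and a single perceptron stands in for the neural net): a fixed rule is the "formal algorithm", and a neuron is trained only on examples of the rule's behaviour until it reproduces the same conclusions without ever containing the rule.

```python
import random

# A deterministic "reasoning" rule: a fixed linear threshold that
# maps an input pair to a yes/no conclusion.
def rule(x):
    return 1 if 2.0 * x[0] + 3.0 * x[1] > 1.0 else 0

# A single artificial neuron trained with the classic perceptron
# update, so that it reproduces the rule's outputs without ever
# being given the rule itself -- only examples of its behaviour.
def train_neuron(data, epochs=1000):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            if pred != y:
                mistakes += 1
                delta = y - pred            # +1 or -1
                w[0] += delta * x[0]
                w[1] += delta * x[1]
                b += delta
        if mistakes == 0:                   # now mimics the rule on this data
            break
    return w, b

random.seed(0)
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
data = [(p, rule(p)) for p in points]
w, b = train_neuron(data)

def neuron(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

agreement = sum(neuron(p) == rule(p) for p in points) / len(points)
```

Both functions are deterministic and (on this data) agree, yet only `rule` contains anything resembling an explicit argument; the neuron's "conclusion" is a by-product of its internal weights, which is the point of the question.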
Well, I cannot answer for you.
But generally speaking: Yes. I often work with teams composed of people of different specialities. When one person presents an analysis or solution, other people often will posit a problem with it instantly, with no time to "talk" through the process internally. I have watched them work through the steps and find the words to explain after they came up with the initial insight. I have done it myself. And this is across disciplines as well as specialities, so direct experience is not a factor.
I agree that this happens - I'm just wondering whether it is reason or something else. And I'm not sure whether I'm arguing semantics or not.
I suspect that the insights you talk about here are quite a lot like the skills that people use to catch a ball thrown to them (which do not, in any sense, include a computation of the trajectory of the ball, and are, I think, very much like the neural net model I used).
I think I would reserve "reason" to describe the thinking-through process that you talk about them going through after the fact, to enable them to explain to other people why the proposed solution doesn't work. I think the idea I'm groping for here is that "reason" exists independent of the reasoner, whereas thought processes that depend on the internal state of the thinker are something else.
To take a more specific example, a neural network can be programmed to play chess, a game with no random components with strategy based on analytical thought (i.e. "reason"). In what sense is a neural network so programmed not "playing chess"?
I think it's a recognised phenomenon that when you get people who think without words and people who don't think without words talking to each other, each group insists that the other group must be wrong about their own experience.
Animals can problem solve without words or other symbols. Therefore, so can we.
What one cannot do without words is second-order problem solving. Words allow you to hold up two different solutions and compare which is better.
Also, we need language - specifically figures of speech - to come up with new concepts. One can have an inchoate grasp of some new idea, but to flesh it out and make it thinkable with, we have to assimilate it to something we can already talk about. That's why the words for mental phenomena are originally words describing physical phenomena rather than words coined for the purpose.
I think the question is whether or not the net is "reasoning". It is clearly playing chess.
I suspect that the insights you talk about here are quite a lot like the skills that people use to catch a ball thrown to them (which do not, in any sense, include a computation of the trajectory of the ball, and are, I think, very much like the neural net model I used).
Whilst some people learn that a lot quicker, it is learned nonetheless. Catching a ball at a distance one has not tried before is a slight variation on the muscle memory and experience one has acquired. It is not the same as lateral thinking to solve a problem one has not confronted before.
I think I would reserve "reason" to describe the thinking-through process that you talk about them going through after the fact, to enable them to explain to other people why the proposed solution doesn't work. I think the idea I'm groping for here is that "reason" exists independent of the reasoner, whereas thought processes that depend on the internal state of the thinker are something else.
Intuitive thinking is not grabbing something from nowhere. It is the brain working through a problem, though it is making jumps rather than plodding through. Whilst language allows for the transmission of complex reasoning, it also slows down thinking.
I do not think anyone who can speak never reasons with words. ISTM, everyone reasons without words, they are just not aware of when they are doing it.
I think you'd need to provide examples for that kind of claim. How reasoning actually functions is a difficult problem to resolve, but there is strong evidence that language is key to reasoning at all levels. Cognitive science and philosophy have furthered this view, although with key and important dissenters.
The language of thought hypothesis, that there is a necessary linguistic component to thinking and reasoning, is a prominent and complicated view, but quite interesting. This article is a good overview, although somewhat technical. Although it also comes at it from an explicitly Anglo-American philosophical perspective.
Gerald Murnane, however, is an Australian writer and novelist who writes quite convincingly about how he reasons through images, primarily. Although he is not a philosopher.
ISTY based on what? Normalizing your own experience?
I think you'd need to provide examples for that kind of claim.
I did. Sort of general, but specific wouldn't be any different, because you would have to take my word. In school I solved geometry problems before I learned the theorems. I had little language, mathematical or spoken, with which to arrive at the conclusions I did. Geometry is visual, so it is easier to see without too much explanation.
And observing other people. Language takes time, language is slow.
Right, there are some instances where thinking is less linguistic, but it's worth noting that mathematics is considered to be a language, albeit one with a very different domain than our spoken languages.
Geometry is a good example, though, as it is so deeply visual.
Animals can problem solve without words or other symbols. Therefore, so can we.
An interesting enthymeme. I can't think of a major premise to complete it that isn't false.
OK, let's expand. If non-human animals can problem-solve without words, then problem solving without words is possible. It's not impossible logically or physically. So to show that humans can't, you'd need to show why we're an exception. Actually, small children can solve basic problems before they learn language, so you'd need to show that humans lose the capacity.
One consideration is that if you take a stream of consciousness novel like Ulysses that depicts the linguistic element of someone's consciousness it is hard to follow. Our own conscious thought is not hard to follow. Depictions of the linguistic element of consciousness aren't depictions of the whole of consciousness.
As cognitive behaviour therapy (CBT) understands it, going automatically from perception to feeling, without thinking in between, may cause mental illness. But this is an information processing model of human beings. And it's dominant in our current computer era. It's not "true" in any final sense.
Behaviour therapy had it that we responded to stimulus and that our thought processes were mostly irrelevant. Like telephone switches turned on and off.
Psychodynamic and psychoanalytic theory had it that we didn't know what was going on because forces beyond our conscious perception affected our feelings, thoughts and behaviour. The hydraulic model, from when we pumped water out of coal mines.
We're prisoners of our cultures and historical times. I'd suggest if you believe that words are required to think and feel, you've not spent enough time with pre-verbal babies, dogs, horses.
I think the question is whether or not the net is "reasoning". It is clearly playing chess.
If it's not reasoning, how is it playing chess?
It's calculating probabilities ...
And a human playing chess is doing what?
Yes ... but "thinking" comprises MORE than use of a fancy mental abacus ...
Perhaps, but we were talking about "reasoning", which is a specific sub-set of what is generally called "thinking". (Putting "thinking" in quotes like that is a huge shift of the goalposts.)
Chess is a game with no random components. In other words, something that requires the application of reason, applied competitively against an opponent to achieve a desired end state (checkmate). I'm wondering why analyzing a current situation and formulating a way to reach that end state doesn't count as "reasoning".
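The "analyse a position, formulate a path to the end state" idea can be illustrated with a toy sketch (chess itself is far too large, so a trivial game of Nim stands in for it; the function names are my own): exhaustive game-tree search makes the goal-directed "reasoning" completely explicit, since every line of play to the end state is examined before a move is chosen.

```python
# Toy sketch of goal-directed game analysis: exhaustive search of a
# tiny game (Nim: players alternately take 1-3 stones from one pile;
# whoever takes the last stone wins).
from functools import lru_cache

@lru_cache(maxsize=None)
def current_player_wins(stones):
    # With no stones left, the previous player took the last one and won,
    # so the player now to move has already lost.
    if stones == 0:
        return False
    # A position is winning if some move leaves the opponent losing.
    return any(not current_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    # Formulate a path to the end state: pick a move that leaves the
    # opponent in a losing position, if one exists.
    for take in (1, 2, 3):
        if take <= stones and not current_player_wins(stones - take):
            return take
    return 1  # no winning move exists: the position is lost anyway
```

Whether one calls this "reasoning" or mere calculation is exactly the question at issue; the search is deterministic and end-state-directed, which is the sense of "analysis" the post above appeals to.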
The way computers operate is different to the way humans do. Chess is not so great a comparator.
Computers weight all information the same. Chess is about probabilities, and the computer can run through those massively quickly. Chess is not massively complicated, and the entire board is known to all players. Games like poker are more challenging, and it is significant that the team taking on designing a program for poker chose a variant where much of the opponent's potential hand is known.
Alan Turing, a man generally considered good at reasoning, was rubbish at chess.
There may well be a time when computers can reason as we think of it, but that time is not yet.
I'm wondering why analyzing a current situation and formulating a way to reach that end state doesn't count as "reasoning".
As I understand it, there's a question about whether the neural network is trying to reach any end state. That is, it seems to be looking at situations and saying: this is the sort of situation in which the thing to do is this, without reference to the end state. In the same way, animal behaviour has evolved because it leads to reproductive success, but it's not obvious that a male dasyure has any idea that fighting and mating until it drops dead is going to result in having more children.
Not uncommonly, "experience" that ISN'T "normal" is often problematic ...
Not uncommon for people to not understand the difference between "normal" and "normalized."
Playing chess also involves a sort of "mind reading" of the opponent, that is, planning and making moves anticipating what THAT particular other player is likely going to do, based upon what (s)he did a few moves or a couple games past ...
I suppose there have been not a few chess *games* "played" between two computers ... I suppose with interesting results ...
Hormones ... Feelings ... Instincts ...
"Life" -- "respiration" ... "movement" ... "metabolism" ... "stimulus response" ... "reproduction" ...
And you're a genius too.
"in the kitchen ... with a soup spoon ..." (let the reader understand) ...
Person, man, woman, camera, TV.
https://www.youtube.com/watch?v=j8oaaP68i4s
Chess is a game with no random components. In other words, something that requires the application of reason, applied competitively against an opponent to achieve a desired end state (checkmate). I'm wondering why analyzing a current situation and formulating a way to reach that end state doesn't count as "reasoning".
Not when I play it...
Not uncommon for people to not understand the difference between "normal" and "normalized."
Thank you. Thank you thank you thank you.