Culture and Christianity

Comments

  • Crœsos wrote: »
    Yes ... but "thinking" comprises MORE than use of a fancy mental abacus ...

    Perhaps, but we were talking about "reasoning", which is a specific sub-set of what is generally called "thinking". (Putting "thinking" in quotes like that is a huge shift of the goalposts.)

    Chess is a game with no random components. In other words, something that requires the application of reason, applied competitively against an opponent to achieve a desired end state (checkmate). I'm wondering why analyzing a current situation and formulating a way to reach that end state doesn't count as "reasoning".

    Playing chess also involves a sort of "mind reading" of the opponent, that is, planning and making moves anticipating what THAT particular other player is likely going to do, based upon what (s)he did a few moves or a couple games past ...
    Except it isn't, which is why the chess playing super computers can reliably beat anyone. All the possible moves can be seen. Probabilities can be programmed in. Each move is a small one that removes possible outcomes.
  • Crœsos Shipmate
    lilbuddha wrote: »
    The way computers operate is different to the way humans do. Chess is not so great a comparator.

    One of the interesting things about discussions like this is the way that any time an artificial intelligence progresses to the point where it can perform a certain cognitive task as well as (or better than) a human, that task is suddenly cut off from what it means to be human. This can cause problems as it continues to restrict our ideas of humanity. For example, your uncle is good at chess and has an encyclopedic knowledge of the birds of North America. If those things no longer count towards being human, where does that leave your uncle? Does he have to redefine himself in ever-narrowing terms?
    lilbuddha wrote: »
    Computers weight all information the same. Chess is about probabilities and the computer can run through those massively quickly.

    I disagree that a chess playing computer gives all information the same weight. Certain moves are preferred. It's not just picking pieces at random and then moving them in random ways.
    lilbuddha wrote: »
    Chess is not massively complicated and the entire board is known to all players. Games like poker are more challenging and it is significant that the team taking on designing a program for poker chose a variant where much of the opponent's potential hand is known.

    Poker also has random elements and thus is not as suitable a test of reasoning. You can't reason a pair of sevens into a straight flush.
    Crœsos wrote: »
    Chess is a game with no random components. In other words, something that requires the application of reason, applied competitively against an opponent to achieve a desired end state (checkmate). I'm wondering why analyzing a current situation and formulating a way to reach that end state doesn't count as "reasoning".
    Playing chess also involves a sort of "mind reading" of the opponent, that is, planning and making moves anticipating what THAT particular other player is likely going to do, based upon what (s)he did a few moves or a couple games past ...

    I'm pretty sure that using past examples as the basis to predict future behavior is a form of reasoning, not telepathy. I'm guessing that's why you put "mind reading" in quotes. I'm also guessing you used that metaphor rather than describing it in terms of memory and analysis because you wanted to obscure the actual process involved.
  • lilbuddha Shipmate
    edited August 2020
    Crœsos wrote: »
    lilbuddha wrote: »
    The way computers operate is different to the way humans do. Chess is not so great a comparator.

    One of the interesting things about discussions like this is the way that any time an artificial intelligence progresses to the point where it can perform a certain cognitive task as well as (or better than) a human, that task is suddenly cut off from what it means to be human. This can cause problems as it continues to restrict our ideas of humanity. For example, your uncle is good at chess and has an encyclopedic knowledge of the birds of North America. If those things no longer count towards being human, where does that leave your uncle? Does he have to redefine himself in ever-narrowing terms?
    Rubbish. Intelligences with different constructions and designs are going to reason differently. Octopuses are wicked smart, but their distributed intelligence, as well as their different environment, produces a different way of thinking. But they are not a percentage human; even if they evolved to be more intelligent than we are, this would still be the case.
    Unless computers are designed to think like we do, the same will be true for them.
    Your implication relies on a person thinking humans are the ultimate and matter the most. Not everyone thinks like this.
    Crœsos wrote: »
    lilbuddha wrote: »
    Computers weight all information the same. Chess is about probabilities and the computer can run through those massively quickly.

    I disagree that a chess playing computer gives all information the same weight. Certain moves are preferred. It's not just picking pieces at random and then moving them in random ways.
    Probability can be programmed, but that is a different thing.
    There may well come a time when computers have novel thoughts, but we are not there yet.
    Crœsos wrote: »
    lilbuddha wrote: »
    Chess is not massively complicated and the entire board is known to all players. Games like poker are more challenging and it is significant that the team taking on designing a program for poker chose a variant where much of the opponent's potential hand is known.

    Poker also has random elements and thus is not as suitable a test of reasoning. You can't reason a pair of sevens into a straight flush.
    Poker is a lot more than the cards. There is a psychology to the playing that computers cannot yet manage.

    Fixed quoting attribution (I hope). BroJames, Purgatory Host
  • Dafyd Shipmate
    Crœsos wrote: »
    Poker also has random elements and thus is not as suitable a test of reasoning. You can't reason a pair of sevens into a straight flush.
    You may want to flesh out the 'thus'. Presumably the idea is that victory in poker correlates with luck in a way that victory in chess doesn't. So you're taking reasoning power as whatever trait correlates with victory in deterministic games.
    I don't think that's right. (Though Star Trek scriptwriters tend to agree that it's what Vulcans do.)

    Back when I last followed these matters, chess algorithms usually worked by brute force: they'd predict all possible situations a few moves ahead, weight each one according to some preprogrammed criteria of desirability, and pick the move with the highest weight. It was observed that human chess masters clearly didn't think like that, since some situations on the board favoured humans. (Basically, cluttered positions in which anything could happen favoured the computer; clear situations favoured the human.) Also, chess players expressed frustration that the computer didn't have any readable plan.
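    The brute-force scheme described above can be shown in a few lines. This is a toy sketch, not any real engine's code; the game tree and names are invented for illustration:

    ```python
    # Look a fixed number of moves (plies) ahead, score each resulting
    # position with a hand-tuned evaluation function, and assume the
    # opponent always picks the worst outcome for us (minimax).

    def minimax(position, depth, maximizing, moves_fn, evaluate_fn):
        """Best achievable score from `position`, searching `depth` plies."""
        moves = moves_fn(position)
        if depth == 0 or not moves:
            return evaluate_fn(position)
        scores = [minimax(m, depth - 1, not maximizing, moves_fn, evaluate_fn)
                  for m in moves]
        return max(scores) if maximizing else min(scores)

    # A trivial two-ply game tree standing in for a chess position:
    children = {"root": ["L", "R"], "L": ["LL", "LR"], "R": ["RL", "RR"]}
    score = {"LL": 3, "LR": 5, "RL": 2, "RR": 9}

    best = minimax("root", 2, True,
                   lambda p: children.get(p, []),
                   lambda p: score[p])
    print(best)  # → 3: the opponent spoils the tempting 9 on the right
    ```

    Real engines add move ordering and pruning, but the shape (enumerate, evaluate, pick) is the same, which may be why sharp, cluttered positions suited the machine and quiet positional play suited the human.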

    On the other hand, success in poker depends partly on an ability to calculate probabilities, and partly on an ability to explain one's opponent's behaviour. Likewise, humans playing chess against humans are reliant on their ability to form and assess hypotheses about what their opponent is up to. Forming and assessing hypotheses seem central to the process in humans that is called reasoning.

  • Crœsos wrote: »
    lilbuddha wrote: »
    The way computers operate is different to the way humans do. Chess is not so great a comparator.

    One of the interesting things about discussions like this is the way that any time an artificial intelligence progresses to the point where it can perform a certain cognitive task as well as (or better than) a human, that task is suddenly cut off from what it means to be human.

    I think this kind of thing generally stems from loose definitions that largely function intuitively. Humans do various things in particular ways that then become associated with 'human intelligence'. What the chess (and other) example(s) show us is that there are ways of achieving the same ends without constructing an equivalent general-purpose intelligence.
  • Crœsos wrote: »
    Playing chess also involves a sort of "mind reading" of the opponent, that is, planning and making moves anticipating what THAT particular other player is likely going to do, based upon what (s)he did a few moves or a couple games past ...

    I'm pretty sure that using past examples as the basis to predict future behavior is a form of reasoning, not telepathy. I'm guessing that's why you put "mind reading" in quotes. I'm also guessing you used that metaphor rather than describing it in terms of memory and analysis because you wanted to obscure the actual process involved.

    Yes ...

    "Mind reading" is an interesting thing ... that does NOT require "telepathy" ...
  • Man, I got that all wrong. The Hebrew word, merma which means "word" is feminine. The LXX translates it as logos (masculine)

    I did find a very interesting paper on John's use of logos. Here is the link.
  • Humans aren't computers. We don't think like them and AI isn't human like. We don't process information.
  • Humans aren't computers. We don't think like them and AI isn't human like. We don't process information.
    Yes we do. The human brain is a computer, just a biological one.
  • lilbuddha wrote: »
    Humans aren't computers. We don't think like them and AI isn't human like. We don't process information.
    Yes we do. The human brain is a computer, just a biological one.

    The human brain can and does carry out calculations and it does process information ... but calling it "a computer" is ... a misunderestimation ... (bio. major here) ...
  • Dave W Shipmate
    lilbuddha wrote: »
    Poker is a lot more than the cards. There is a psychology to the playing that computers cannot yet manage.
    I'm not sure how you'd determine whether they can "manage psychology" or not, but they seem to be pretty good at it anyway:
    An artificial intelligence program developed by Carnegie Mellon University in collaboration with Facebook AI has defeated leading professionals in six-player no-limit Texas hold'em poker, the world's most popular form of poker.
  • BroJames Purgatory Host, 8th Day Host
    Gramps49 wrote: »
    Man, I got that all wrong. The Hebrew word, merma which means "word" is feminine. The LXX translates it as logos (masculine)

    I did find a very interesting paper on John's use of logos. Here is the link.

    Are you thinking of the Aramaic word memra? AFAICT it doesn’t appear in the Biblical text, though it is used in the Targums. It is related to the Hebrew amar, the verb ‘to speak/say’

    Its connection to the Greek logos, especially as used in John, has been much debated.
  • BroJames wrote: »
    Gramps49 wrote: »
    Man, I got that all wrong. The Hebrew word, merma which means "word" is feminine. The LXX translates it as logos (masculine)

    I did find a very interesting paper on John's use of logos. Here is the link.

    Are you thinking of the Aramaic word memra? AFAICT it doesn’t appear in the Biblical text, though it is used in the Targums. It is related to the Hebrew amar, the verb ‘to speak/say’
    My understanding, which may be quite wrong, is that logos in the Septuagint is typically a translation of the Hebrew dabar, at least in contexts like “the word of the Lord.” The Septuagint also uses rhema in places to translate dabar.

  • Dafyd Shipmate
    lilbuddha wrote: »
    Humans aren't computers. We don't think like them and AI isn't human like. We don't process information.
    Yes we do. The human brain is a computer, just a biological one.
    For a sufficiently general definition of 'computer' this is a trivial tautology. For a definition of computer limited enough to be meaningful, many neuroscientists think this is the most unhelpful or misleading metaphor in neuroscience.
  • It's a metaphor that has run its course, outlived its usefulness. Certainly educators are moving away from it as unhelpful.
  • Dafyd wrote: »
    lilbuddha wrote: »
    Humans aren't computers. We don't think like them and AI isn't human like. We don't process information.
    Yes we do. The human brain is a computer, just a biological one.
    For a sufficiently general definition of 'computer' this is a trivial tautology. For a definition of computer limited enough to be meaningful, many neuroscientists think this is the most unhelpful or misleading metaphor in neuroscience.

    I agree that it is too simplistic an idea of what the brain is and how it works ...
  • Dave W wrote: »
    lilbuddha wrote: »
    Poker is a lot more than the cards. There is a psychology to the playing that computers cannot yet manage.
    I'm not sure how you'd determine whether they can "manage psychology" or not, but they seem to be pretty good at it anyway:
    An artificial intelligence program developed by Carnegie Mellon University in collaboration with Facebook AI has defeated leading professionals in six-player no-limit Texas hold'em poker, the world's most popular form of poker.
    I mentioned that earlier. They chose that variation of poker precisely because a significant part of the hand is visible on the table. Plus the computer knows its cards, making Texas hold'em more easily predicted by probability.
    Still a remarkable feat, but not the human level playing I'm talking about.

  • Dafyd wrote: »
    lilbuddha wrote: »
    Humans aren't computers. We don't think like them and AI isn't human like. We don't process information.
    Yes we do. The human brain is a computer, just a biological one.
    For a sufficiently general definition of 'computer' this is a trivial tautology. For a definition of computer limited enough to be meaningful, many neuroscientists think this is the most unhelpful or misleading metaphor in neuroscience.
    I find it amusing that a term which originated as a description of humans is now thought to be useless.
    ISTM, the problem is less whether the human brain is a computer than the comparisons of it to the devices called computers. Because they work differently, comparing how they work is less than helpful.
    In other words, it is not calling the brain a computer that is problematic, but thinking all computers work in the same way.
  • KarlLB Shipmate
    lilbuddha wrote: »
    Dave W wrote: »
    lilbuddha wrote: »
    Poker is a lot more than the cards. There is a psychology to the playing that computers cannot yet manage.
    I'm not sure how you'd determine whether they can "manage psychology" or not, but they seem to be pretty good at it anyway:
    An artificial intelligence program developed by Carnegie Mellon University in collaboration with Facebook AI has defeated leading professionals in six-player no-limit Texas hold'em poker, the world's most popular form of poker.
    I mentioned that earlier. They chose that variation of poker precisely because a significant part of the hand is visible on the table. Plus the computer knows its cards, making Texas hold'em more easily predicted by probability.
    Still a remarkable feat, but not the human level playing I'm talking about.

    How does it decide whether to bluff? Or read the other players for a tell? I can see a simple "this hand is worth holding out to a stake of X then folding" sort of algorithm, but bluffing and detecting bluffing...
  • BroJames Purgatory Host, 8th Day Host
    Nick Tamen wrote: »
    BroJames wrote: »
    Gramps49 wrote: »
    Man, I got that all wrong. The Hebrew word, merma which means "word" is feminine. The LXX translates it as logos (masculine)

    I did find a very interesting paper on John's use of logos. Here is the link.

    Are you thinking of the Aramaic word memra? AFAICT it doesn’t appear in the Biblical text, though it is used in the Targums. It is related to the Hebrew amar, the verb ‘to speak/say’
    My understanding, which may be quite wrong, is that logos in the Septuagint is typically a translation of the Hebrew dabar, at least in contexts like “the word of the Lord.” The Septuagint also uses rhema in places to translate dabar.

    No, I’m sure you’re quite right. Gramps49 is picking up from a discussion early on in the thread, where we referenced the point you make.
  • BroJames wrote: »
    Nick Tamen wrote: »
    BroJames wrote: »
    Gramps49 wrote: »
    Man, I got that all wrong. The Hebrew word, merma which means "word" is feminine. The LXX translates it as logos (masculine)

    I did find a very interesting paper on John's use of logos. Here is the link.

    Are you thinking of the Aramaic word memra? AFAICT it doesn’t appear in the Biblical text, though it is used in the Targums. It is related to the Hebrew amar, the verb ‘to speak/say’
    My understanding, which may be quite wrong, is that logos in the Septuagint is typically a translation of the Hebrew dabar, at least in contexts like “the word of the Lord.” The Septuagint also uses rhema in places to translate dabar.

    No, I’m sure you’re quite right. Gramps49 is picking up from a discussion early on in the thread, where we referenced the point you make.

    Thus illustrating again the difficulty of "translation" word for word ...
  • Dave W Shipmate
    lilbuddha wrote: »
    Dave W wrote: »
    lilbuddha wrote: »
    Poker is a lot more than the cards. There is a psychology to the playing that computers cannot yet manage.
    I'm not sure how you'd determine whether they can "manage psychology" or not, but they seem to be pretty good at it anyway:
    An artificial intelligence program developed by Carnegie Mellon University in collaboration with Facebook AI has defeated leading professionals in six-player no-limit Texas hold'em poker, the world's most popular form of poker.
    I mentioned that earlier. They chose that variation of poker precisely because a significant part of the hand is visible on the table. Plus the computer knows its cards, making Texas hold'em more easily predicted by probability.
    Still a remarkable feat, but not the human level playing I'm talking about.
    Do you have a cite for their reason for picking that variation? The article says it's "the world's most popular form" so it seems like a lot of people think it counts as "human level playing." (And is there really any kind of poker in which the computer wouldn't know its own cards?)
  • KarlLB wrote: »
    lilbuddha wrote: »
    Dave W wrote: »
    lilbuddha wrote: »
    Poker is a lot more than the cards. There is a psychology to the playing that computers cannot yet manage.
    I'm not sure how you'd determine whether they can "manage psychology" or not, but they seem to be pretty good at it anyway:
    An artificial intelligence program developed by Carnegie Mellon University in collaboration with Facebook AI has defeated leading professionals in six-player no-limit Texas hold'em poker, the world's most popular form of poker.
    I mentioned that earlier. They chose that variation of poker precisely because a significant part of the hand is visible on the table. Plus the computer knows its cards, making Texas hold'em more easily predicted by probability.
    Still a remarkable feat, but not the human level playing I'm talking about.

    How does it decide whether to bluff? Or read the other players for a tell? I can see a simple "this hand is worth holding out to a stake of X then folding" sort of algorithm, but bluffing and detecting bluffing...
    I didn't see anything in the article that said the computer did detect bluffs. It cannot actually do that. It can run probabilities that might suggest the player doesn't have the cards to back the bet, but that is a different thing.

  • Leorning Cniht Shipmate
    edited August 2020
    lilbuddha wrote: »
    KarlLB wrote: »
    How does it decide whether to bluff? Or read the other players for a tell? I can see a simple "this hand is worth holding out to a stake of X then folding" sort of algorithm, but bluffing and detecting bluffing...
    I didn't see anything in the article that said the computer did detect bluffs. It cannot actually do that. It can run probabilities that might suggest the player doesn't have the cards to back the bet, but that is a different thing.

    Most of the information in poker is how other players bet (plus "table feel", but we'll assume we're not building a computer to try and do that, although a poker-playing computer that sensed temperatures, pulse rates, breathing rates etc. might be interesting).

    Sometimes, players bet based on the hand they hold. Sometimes, they bluff, based on the hand that they pretend they're holding. I don't think it's actually any harder to incorporate the latter in a computer's play than it is the former. (I've never taught a computer to play any kind of game, but I did once know a guy who wrote bridge-playing software, and we had several discussions about it.)
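    The point that bluffing is no harder to program than value betting can be illustrated with a tiny mixed-strategy sketch. The threshold and bluff frequency here are invented for illustration; no real poker program works from numbers this crude:

    ```python
    import random

    def choose_action(hand_strength, rng, bluff_rate=0.15):
        """hand_strength in [0, 1]. Bet strong hands for value; with
        weak hands, mostly fold but bet (bluff) a fixed fraction of
        the time, so the betting pattern alone doesn't give the hand
        away."""
        if hand_strength >= 0.6:
            return "bet"
        return "bet" if rng.random() < bluff_rate else "fold"

    rng = random.Random(7)
    weak_bets = sum(choose_action(0.1, rng) == "bet" for _ in range(10_000))
    print(weak_bets / 10_000)  # close to the 0.15 bluff rate
    ```

    Randomising over weak hands is a standard trick precisely because it needs no "psychology" at all: the opponent cannot distinguish a bet from a bluff by frequency alone.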

    Fixed broken quoting code. BroJames, Purgatory Host
  • Dave W wrote: »
    lilbuddha wrote: »
    Dave W wrote: »
    lilbuddha wrote: »
    Poker is a lot more than the cards. There is a psychology to the playing that computers cannot yet manage.
    I'm not sure how you'd determine whether they can "manage psychology" or not, but they seem to be pretty good at it anyway:
    An artificial intelligence program developed by Carnegie Mellon University in collaboration with Facebook AI has defeated leading professionals in six-player no-limit Texas hold'em poker, the world's most popular form of poker.
    I mentioned that earlier. They chose that variation of poker precisely because a significant part of the hand is visible on the table. Plus the computer knows its cards, making Texas hold'em more easily predicted by probability.
    Still a remarkable feat, but not the human level playing I'm talking about.
    Do you have a cite for their reason for picking that variation?
    That is an inference on my part.
    This quote illustrates why:
    Games such as chess and Go have long served as milestones for AI research. In those games, all of the players know the status of the playing board and all of the pieces. But poker is a bigger challenge because it is an incomplete information game; players can't be certain which cards are in play and opponents can and will bluff. That makes it both a tougher AI challenge and more relevant to many real-world problems involving multiple parties and missing information.
    In using a variant in which more cards are known, they give the computer more of an opportunity to play to its strength, which is running probability simulations.
    In a variation like Draw poker, the only information a player has is their own cards, the number of cards the opponent discards and the call, raise or fold betting strategy.
    The hand probabilities become much more difficult and reading the other players much more important.
    So, whilst it is possible that Texas hold'em was chosen for other reasons, it still gives a huge advantage to the computer's main strength, which is brute-force processing.
    Dave W wrote: »
    The article says it's "the world's most popular form" so it seems like a lot of people think it counts as "human level playing." (And is there really any kind of poker in which the computer wouldn't know its own cards?)
    I don't think so, but the point was to illustrate just how much information the computer has to run simulations with.
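    The "running probability simulations" described above is typically done by Monte Carlo: deal out random opponent hands many times and count how often you win. A toy version, which compares hands by highest card only as a stand-in for a full hold'em evaluator, looks like this:

    ```python
    import random

    RANKS = range(2, 15)  # 2..14, ace high
    SUITS = "shdc"
    DECK = [(r, s) for r in RANKS for s in SUITS]

    def estimate_win_prob(hole, trials=20_000, seed=0):
        """Deal a random opponent hand from the remaining deck `trials`
        times and count wins. Comparison is by highest card only -- a
        toy stand-in for a real five-card hold'em evaluator."""
        rng = random.Random(seed)
        rest = [c for c in DECK if c not in hole]
        my_high = max(r for r, _ in hole)
        wins = sum(my_high > max(r for r, _ in rng.sample(rest, 2))
                   for _ in range(trials))
        return wins / trials

    aces = [(14, "s"), (14, "h")]
    print(estimate_win_prob(aces))  # ≈ 0.92: opponent holds no ace ~92% of the time
    ```

    The more cards the program can see (its own hole cards, the shared board in hold'em), the tighter these simulated estimates become, which is the advantage being described.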

  • The computer can't decide for feeling reasons to play less well or to make a less good move. Picking up on social cues.


  • The computer can't decide for feeling reasons to play less well or to make a less good move. Picking up on social cues.
    Yet. And that is part of what I'm talking about.

  • The computer can't decide for feeling reasons to play less well or to make a less good move. Picking up on social cues.


    Then, there was ELIZA, a computer simulation of a Rogerian therapist, which (not "who," but which) successfully could mimic a real life human counselor ... but WITHOUT actually *feeling* anything ...
  • The computer can't decide for feeling reasons to play less well or to make a less good move. Picking up on social cues.


    Then, there was ELIZA, a computer simulation of a Rogerian therapist, which (not "who," but which) successfully could mimic a real life human counselor ... but WITHOUT actually *feeling* anything ...
    That is a massive oversimplification to the point of being incorrect. Subjects thought they were interacting with a human counsellor, but that is more an indictment against human counsellors than it is a credit to ELIZA.

  • edited August 2020
    EMACS had one in the late 1970s, early 1980s. "Tell me about your developing mental illness".

    The computer can't decide to lose because I think I love her.

    Metaphors are useful but they are metaphors. Humans are not hydraulic either (Freud, psychoanalysis), nor telephone switches (Watson, behaviourism). Which I commented about also upthread. Technology commonly is used as a model for people. But it isn't fact.

    Further reading: https://psychcentral.com/blog/your-brain-is-not-a-computer/
    we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers — design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them — ever.
  • lilbuddha wrote: »
    The computer can't decide for feeling reasons to play less well or to make a less good move. Picking up on social cues.


    Then, there was ELIZA, a computer simulation of a Rogerian therapist, which (not "who," but which) successfully could mimic a real life human counselor ... but WITHOUT actually *feeling* anything ...
    That is a massive oversimplification to the point of being incorrect. Subjects thought they were interacting with a human counsellor, but that is more an indictment against human counsellors than it is a credit to ELIZA.

    Well, no ... ELIZA was PROGRAMMED to respond in Rogerian fashion to MIMIC a live therapist ...
  • lilbuddha wrote: »
    The computer can't decide for feeling reasons to play less well or to make a less good move. Picking up on social cues.


    Then, there was ELIZA, a computer simulation of a Rogerian therapist, which (not "who," but which) successfully could mimic a real life human counselor ... but WITHOUT actually *feeling* anything ...
    That is a massive oversimplification to the point of being incorrect. Subjects thought they were interacting with a human counsellor, but that is more an indictment against human counsellors than it is a credit to ELIZA.

    Well, no ... ELIZA was PROGRAMMED to respond in Rogerian fashion to MIMIC a live therapist ...
    Let me put this more simply. Rogerian fashion is indistinguishable because it consists of repetition of the subject's words. Hardly a masterwork in programming to mimic that.
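    The "parroting" being described is easy to demonstrate: ELIZA-style programs match a few keyword patterns and echo the user's own words back with pronouns swapped. A minimal sketch follows; the rules are illustrative inventions, not Weizenbaum's actual script:

    ```python
    import re

    # Swap first- and second-person words when reflecting input back.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    # Keyword patterns tried in order; {0} receives the reflected text.
    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def respond(text):
        for pattern, template in RULES:
            m = pattern.search(text)
            if m:
                return template.format(reflect(m.group(1)))
        return "Please go on."  # fallback when nothing matches

    print(respond("I feel sad about my job"))  # Why do you feel sad about your job?
    print(respond("The weather is nice"))      # Please go on.
    ```

    There is no model of the conversation anywhere: just pattern, reflect, template, which is exactly why the mimicry is cheap.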

  • lilbuddha wrote: »
    lilbuddha wrote: »
    The computer can't decide for feeling reasons to play less well or to make a less good move. Picking up on social cues.


    Then, there was ELIZA, a computer simulation of a Rogerian therapist, which (not "who," but which) successfully could mimic a real life human counselor ... but WITHOUT actually *feeling* anything ...
    That is a massive oversimplification to the point of being incorrect. Subjects thought they were interacting with a human counsellor, but that is more an indictment against human counsellors than it is a credit to ELIZA.

    Well, no ... ELIZA was PROGRAMMED to respond in Rogerian fashion to MIMIC a live therapist ...
    Let me put this more simply. Rogerian fashion is indistinguishable because it consists of repetition of the subject's words. Hardly a masterwork in programming to mimic that.

    Carl Rogers' method was (supposedly) "client centered," inviting the patient to talk, rather than the counselor giving a lecture (a la "Ship's Counselor," Deanna Troi, who generally explained to a client why what (s)he was feeling was wrong ...) ..
  • lilbuddha wrote: »
    lilbuddha wrote: »
    The computer can't decide for feeling reasons to play less well or to make a less good move. Picking up on social cues.


    Then, there was ELIZA, a computer simulation of a Rogerian therapist, which (not "who," but which) successfully could mimic a real life human counselor ... but WITHOUT actually *feeling* anything ...
    That is a massive oversimplification to the point of being incorrect. Subjects thought they were interacting with a human counsellor, but that is more an indictment against human counsellors than it is a credit to ELIZA.

    Well, no ... ELIZA was PROGRAMMED to respond in Rogerian fashion to MIMIC a live therapist ...
    Let me put this more simply. Rogerian fashion is indistinguishable because it consists of repetition of the subject's words. Hardly a masterwork in programming to mimic that.

    Carl Rogers' method was (supposedly) "client centered," inviting the patient to talk, rather than the counselor giving a lecture (a la "Ship's Counselor," Deanna Troi, who generally explained to a client why what (s)he was feeling was wrong ...) ..
    So what? It is still massively easy to parrot and not an indication the computer actually interacted with the subject in any meaningful way.
  • EMACS had one in the late 1970s, early 1980s. "Tell me about your developing mental illness".
    M-x psychoanalyze-pinhead
  • lilbuddha wrote: »
    lilbuddha wrote: »
    lilbuddha wrote: »
    The computer can't decide for feeling reasons to play less well or to make a less good move. Picking up on social cues.


    Then, there was ELIZA, a computer simulation of a Rogerian therapist, which (not "who," but which) successfully could mimic a real life human counselor ... but WITHOUT actually *feeling* anything ...
    That is a massive oversimplification to the point of being incorrect. Subjects thought they were interacting with a human counsellor, but that is more an indictment against human counsellors than it is a credit to ELIZA.

    Well, no ... ELIZA was PROGRAMMED to respond in Rogerian fashion to MIMIC a live therapist ...
    Let me put this more simply. Rogerian fashion is indistinguishable because it consists of repetition of the subject's words. Hardly a masterwork in programming to mimic that.

    Carl Rogers' method was (supposedly) "client centered," inviting the patient to talk, rather than the counselor giving a lecture (a la "Ship's Counselor," Deanna Troi, who generally explained to a client why what (s)he was feeling was wrong ...) ..
    So what? It is still massively easy to parrot and not an indication the computer actually interacted with the subject in any meaningful way.

    Ummm, yes ... ???
    I never thought or claimed or wrote that the computer DID "interact with the subject in any meaningful way" ...

    The FACT is that ELIZA was PROGRAMMED to MIMIC a Rogerian therapist ... and did so successfully ...

    NOTE: "MIMIC" (look it up) ...

    (Just so, a computer can be -- has been -- PROGRAMMED to "play" chess, and generally beats any human opponent ...

    But that doesn't mean that the computer was "PLAYING" the game the way a human being PLAYS the game ...

    Trust me on this, when the computer wins, NOBODY takes it out for a congratulatory round of drinks that weekend ... and there is NO evidence that the computer *feels* anxiety or pleasure or sadness or any such HUMAN thing as the result of winning -- or losing ...

    So, the chess-programmed computer "plays" chess the same way my sliderule "thinks" about square roots ..., i.e., strictly mechanically ...)
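The "strictly mechanical" search that classical chess engines perform can be sketched as a brute-force negamax over a game tree. This is a toy illustration of the principle, not Deep Blue's actual code; the tree of nested lists stands in for positions, with integer leaves as static evaluations from the side-to-move's point of view:

```python
# Toy brute-force game-tree search in the spirit of classical engines:
# exhaustively score every line; nothing "plays", everything is computed.

def negamax(node):
    """Return the best achievable score for the player to move."""
    if isinstance(node, int):
        return node  # leaf: a static evaluation of the position
    # A child's score is from the opponent's viewpoint, so negate it
    # and take the best result for the current player.
    return max(-negamax(child) for child in node)

# A tiny two-ply tree: the mover picks a branch, the opponent replies.
tree = [[3, -2], [5, 1]]
best = negamax(tree)  # the mover's guaranteed score with best play
```

Every move is selected the same way a calculator selects digits: by exhausting the arithmetic, with no awareness of there being a game at all.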
  • tclune Shipmate
    lilbuddha wrote: »
    Carl Rogers' method was (supposedly) "client centered," inviting the patient to talk, rather than the counselor giving a lecture (a la "Ship's Counselor," Deanna Troi, who generally explained to a client why what (s)he was feeling was wrong ...) ..
    So what? It is still massively easy to parrot and not an indication the computer actually interacted with the subject in any meaningful way.

    The computer clearly "interacted" with the subject in the sense that the subject's inputs were parsed to generate the computer's next output. Whether the interaction was "meaningful" would presumably be in the mind of the subject. I recall when Eliza first came out, some people were really moved by the interaction. It is reasonable to say that they found the interaction "meaningful." The fact that the algorithmic basis of the interaction was easy is neither here nor there AFAICS.
  • tclune wrote: »
    lilbuddha wrote: »
    Carl Rogers' method was (supposedly) "client centered," inviting the patient to talk, rather than the counselor giving a lecture (a la "Ship's Counselor," Deanna Troi, who generally explained to a client why what (s)he was feeling was wrong ...) ..
    So what? It is still massively easy to parrot and not an indication the computer actually interacted with the subject in any meaningful way.

    The computer clearly "interacted" with the subject in the sense that the subject's inputs were parsed to generate the computer's next output. Whether the interaction was "meaningful" would presumably be in the mind of the subject. I recall when Eliza first came out, some people were really moved by the interaction. It is reasonable to say that they found the interaction "meaningful." The fact that the algorithmic basis of the interaction was easy is neither here nor there AFAICS.

    And of course, there was "Hal" way back in 2001 which ("who" ???) refused to obey Dave's simple command to "Open the pod bay door ..." ... because "Hal" *believed* (!!!) that Dave was endangering their mission ...
  • Just so, a computer can be -- has been -- PROGRAMMED to "play" chess, and generally beats any human opponent ...

    But that doesn't mean that the computer was "PLAYING" the game the way a human being PLAYS the game ...

    Trust me on this, when the computer wins, NOBODY takes it out for a congratulatory round of drinks that weekend ... and there is NO evidence that the computer *feels* anxiety or pleasure or sadness or any such HUMAN thing as the result of winning -- or losing ...

    So, the chess-programmed computer "plays" chess the same way my sliderule "thinks" about square roots ..., i.e., strictly mechanically ...

    Came across this video and it reminded me of this thread. Are these robots dancing? Or PROGRAMMED to DANCE? What's the distinction? Is it all based on internal mental state and, if so, is there a mental state a human could be in where they wouldn't really be dancing either, just moving in a way that's visually indistinguishable from dancing?
  • Crœsos wrote: »
    Just so, a computer can be -- has been -- PROGRAMMED to "play" chess, and generally beats any human opponent ...

    But that doesn't mean that the computer was "PLAYING" the game the way a human being PLAYS the game ...

    Trust me on this, when the computer wins, NOBODY takes it out for a congratulatory round of drinks that weekend ... and there is NO evidence that the computer *feels* anxiety or pleasure or sadness or any such HUMAN thing as the result of winning -- or losing ...

    So, the chess-programmed computer "plays" chess the same way my sliderule "thinks" about square roots ..., i.e., strictly mechanically ...

    Came across this video and it reminded me of this thread. Are these robots dancing? Or PROGRAMMED to DANCE? What's the distinction? Is it all based on internal mental state and, if so, is there a mental state a human could be in where they wouldn't really be dancing either, just moving in a way that's visually indistinguishable from dancing?

    When sunflowers follow the Sun across the sky are they wondering if they could/should have put on SPF cream that morning ... ???
    (OTOH, consider what Sam Butler thought about the positive intentions of a potato in a dark cellar ...) ...
  • When sunflowers follow the Sun across the sky are they wondering if they could/should have put on SPF cream that morning ... ???

    Do sunflowers "follow the Sun across the sky" or are they PROGRAMMED to FOLLOW the sun across the sky? In what sense is this distinction meaningful?
  • Crœsos wrote: »
    Just so, a computer can be -- has been -- PROGRAMMED to "play" chess, and generally beats any human opponent ...

    But that doesn't mean that the computer was "PLAYING" the game the way a human being PLAYS the game ...

    Trust me on this, when the computer wins, NOBODY takes it out for a congratulatory round of drinks that weekend ... and there is NO evidence that the computer *feels* anxiety or pleasure or sadness or any such HUMAN thing as the result of winning -- or losing ...

    So, the chess-programmed computer "plays" chess the same way my sliderule "thinks" about square roots ..., i.e., strictly mechanically ...

    Came across this video and it reminded me of this thread. Are these robots dancing? Or PROGRAMMED to DANCE? What's the distinction? Is it all based on internal mental state and, if so, is there a mental state a human could be in where they wouldn't really be dancing either, just moving in a way that's visually indistinguishable from dancing?

    I think yes, it is based on internal mental state and yes, you could easily imagine a mental state in which humans would only appear to be dancing (in fact wasn't this a cliche in Westerns - "Dance, yer varmint, dance!" - whilst shooting at the feet... at least I remember Yosemite Sam doing this to Bugs Bunny at least once)

    I would confidently agree with @Fr Teilhard with regard to "traditional" brute force engines like Deep Blue but the neural-network style of AlphaZero gives me more pause for thought. It does seem a bit more like what our brains might be doing. But perhaps in order to classify it as "playing" there needs to be "someone" doing the playing. Maybe if AlphaZero had a bunch more electronic neurons with enough spare capacity then it could "be" the player as well as "doing" the playing. Or maybe not - although the latter seems a bit meat-centric.

    Although it seems to me ethically unacceptable to build a self-aware computer, since (among other reasons) we have no idea what type or degree of suffering it might be capable of undergoing.
  • I would confidently agree with @Fr Teilhard with regard to "traditional" brute force engines like Deep Blue but the neural-network style of AlphaZero gives me more pause for thought. It does seem a bit more like what our brains might be doing. But perhaps in order to classify it as "playing" there needs to be "someone" doing the playing. Maybe if AlphaZero had a bunch more electronic neurons with enough spare capacity then it could "be" the player as well as "doing" the playing. Or maybe not - although the latter seems a bit meat-centric.

    Why? Is it because we regard playing chess as a primarily cognitive function so we don't apply the same logic as we do to (for example) a computer "assembling an automobile"? We typically don't say that it only appears like a robot arm is assembling an automobile but that it's not really since it has no internal awareness of what its actions are doing. We mostly just care that the automobile is assembled, and yet we don't seem willing to apply the same reasoning to a series of chess moves.

    There's also the whole question of whether a person can play chess against an opponent who isn't playing chess.
    Although it seems to me ethically unacceptable to build a self-aware computer, since (among other reasons) we have no idea what type or degree of suffering it might be capable of undergoing.

    And yet people still have kids while facing the same unknowns.
  • Oh, I would be quite happy to say that the computer is playing chess in the same way that the robot is assembling the automobile, but I can see the distinction that @Fr Teilhard is making. I think it is a matter of language and what you think the word "play" implies.

    Possibly inconsistently I get very irritated with Chris Packham (UK nature presenter) when he insists that animals don't play, they merely practise skills that will be useful in helping them to survive. Surely human play also hones useful skills and is none the less play for that?

    As for the ethics of having children, parents at least have our experience of being human and the reports of previous generations on the human condition to guide us. An AI might be capable of types of suffering a human couldn't even imagine.
  • Dafyd Shipmate
    Crœsos wrote: »
    Are these robots dancing? Or PROGRAMMED to DANCE? What's the distinction? Is it all based on internal mental state and, if so, is there a mental state a human could be in where they wouldn't really be dancing either, just moving in a way that's visually indistinguishable from dancing?
    I can think of comedy programs where someone is mistaken for dancing and in fact is trying to get rid of a swarm of bees or a rat in their clothing or there is some other such misunderstanding going on.
    Someone could have a set of involuntary bodily tics, or even bodily tics brought on by music: and one would not say of them that they were dancing.

    I think the internal mental state is something of a distraction. When humans talk to each other about their external state and actions they use internal mental state language to explain and justify some external states and actions in terms of other external states or past events. One of the internal states that is usually appealed to in these contexts is intention: we want to know why somebody performed a particular action - probably in the typical case because the original hominids on the savannah wanted to know whether and under what circumstances their fellows were likely to do something similar again.

    Voluntary language is distinguished from involuntary bodily movement language by the possibility of asking about the intention with which the action was performed. (One distinguishes here between responses like "I just felt like it", which answer the question in a minimal fashion, and responses like "my body jerked", which deny the question has a valid answer. I suppose the difference is a little like, in a computer database, the difference between assigning a variable the value 0 and assigning it no value at all.)
    When a human dances one can ask questions and get answers like, I just feel like it, (why do you feel like it?), or it's my job to dance ballet, that place the action in a framework of motives and intentions. I don't think any such framework exists for a robot that is programmed to dance. I can't see how one would frame the question, are you actually dancing or are those just involuntary bodily tics, so that it had application.
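The database comparison above can be made concrete. In SQL, 0 is a definite (if minimal) answer while NULL means the question has no answer, and the two behave differently in queries. A small illustration, with invented table and column names:

```python
import sqlite3

# 0 vs NULL: "I just felt like it" is a minimal answer (0);
# "my body jerked" denies the question applies at all (NULL).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE answers (who TEXT, intention_strength INTEGER)")
conn.execute("INSERT INTO answers VALUES ('dancer', 0)")       # minimal answer
conn.execute("INSERT INTO answers VALUES ('twitcher', NULL)")  # no valid answer

# NULL rows are excluded by the IS NOT NULL filter; the 0 row is kept.
rows = conn.execute(
    "SELECT who FROM answers WHERE intention_strength IS NOT NULL"
).fetchall()
```

Here `rows` contains only the dancer: answering "zero" is still answering, whereas NULL drops out of the question entirely.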
  • Crœsos wrote: »
    When sunflowers follow the Sun across the sky are they wondering if they could/should have put on SPF cream that morning ... ???

    Do sunflowers "follow the Sun across the sky" or are they PROGRAMMED to FOLLOW the sun across the sky? In what sense is this distinction meaningful?

    How much of human thought - feeling - belief - behavior is "programmed" (as distinct from ???) ... Certainly the *design* of the human brain and its interaction with its environment (including the internal environment of the human body -- hormones and all that) is very much "programmed" (and hard-wired) by genetic endowment and ongoing experience(s) ...
  • Crœsos wrote: »
    Just so, a computer can be -- has been -- PROGRAMMED to "play" chess, and generally beats any human opponent ...

    But that doesn't mean that the computer was "PLAYING" the game the way a human being PLAYS the game ...

    Trust me on this, when the computer wins, NOBODY takes it out for a congratulatory round of drinks that weekend ... and there is NO evidence that the computer *feels* anxiety or pleasure or sadness or any such HUMAN thing as the result of winning -- or losing ...

    So, the chess-programmed computer "plays" chess the same way my sliderule "thinks" about square roots ..., i.e., strictly mechanically ...

    Came across this video and it reminded me of this thread. Are these robots dancing? Or PROGRAMMED to DANCE? What's the distinction? Is it all based on internal mental state and, if so, is there a mental state a human could be in where they wouldn't really be dancing either, just moving in a way that's visually indistinguishable from dancing?

    I think yes, it is based on internal mental state and yes, you could easily imagine a mental state in which humans would only appear to be dancing (in fact wasn't this a cliche in Westerns - "Dance, yer varmint, dance!" - whilst shooting at the feet... at least I remember Yosemite Sam doing this to Bugs Bunny at least once)

    I would confidently agree with @Fr Teilhard with regard to "traditional" brute force engines like Deep Blue but the neural-network style of AlphaZero gives me more pause for thought. It does seem a bit more like what our brains might be doing. But perhaps in order to classify it as "playing" there needs to be "someone" doing the playing. Maybe if AlphaZero had a bunch more electronic neurons with enough spare capacity then it could "be" the player as well as "doing" the playing. Or maybe not - although the latter seems a bit meat-centric.

    Although it seems to me ethically unacceptable to build a self-aware computer, since (among other reasons) we have no idea what type or degree of suffering it might be capable of undergoing.

    One could easily program a device to project a red laser dot onto the carpet and move it around, thus inducing our resident Felis domestica to "play" with the red dot, chase it, try to capture it AS IF a human were moving the dot ... In that respect the kitty cat certainly would be "playing" the game ... but how could one imagine the machine to be "playing" ... ???
  • Crœsos wrote: »
    When sunflowers follow the Sun across the sky are they wondering if they could/should have put on SPF cream that morning ... ???

    Do sunflowers "follow the Sun across the sky" or are they PROGRAMMED to FOLLOW the sun across the sky? In what sense is this distinction meaningful?

    Yes there is a difference. Sunflowers aren't computers.
  • mousethief wrote: »
    Crœsos wrote: »
    When sunflowers follow the Sun across the sky are they wondering if they could/should have put on SPF cream that morning ... ???

    Do sunflowers "follow the Sun across the sky" or are they PROGRAMMED to FOLLOW the sun across the sky? In what sense is this distinction meaningful?

    Yes there is a difference. Sunflowers aren't computers.

    Define: "computer"
  • mousethief wrote: »
    Crœsos wrote: »
    When sunflowers follow the Sun across the sky are they wondering if they could/should have put on SPF cream that morning ... ???

    Do sunflowers "follow the Sun across the sky" or are they PROGRAMMED to FOLLOW the sun across the sky? In what sense is this distinction meaningful?

    Yes there is a difference. Sunflowers aren't computers.

    Define: "computer"

    An artifact designed to compute.