the-podcast guy recently linked this essay. it's old, but i don't think it's significantly wrong (despite gpt evangelists). also read weizenbaum, libs, for the other side of the coin

  • Tomorrow_Farewell [any, they/them]@hexbear.net
    10 months ago (edited)

    so you aren’t going to read the article then.
    No Investigation, No Right to Speak.

    I have investigated the parts that you have quoted, and those are what I am weighing in on. They are self-contained enough for me to weigh in on, unless the author redefines the words elsewhere, in which case not quoting those parts as well just means that you are deliberately posting misleading quotes.

    I strongly advise reading the entire article

    From the parts already quoted, it seems that the author is clueless and is willing to make blatantly faulty arguments. The fact that you opted to quote those parts of the article and not the others indicates to me that the rest of the article is not better in this regard.

    and furthermore reading about what a Turing Machine actually is and what it can be used to analyze

    Firstly, the term ‘Turing machine’ did not come up in this particular chain of comments up to this point. The author literally never referred to it. Why is it suddenly relevant?
    Secondly, what exactly do you think I, as a person with a background in mathematics, am missing here that a person who says ‘Boolean logic’ is not missing?

    (1) an algorithm is anything a Turing machine can do

    This contradicts the previous two definitions the author gave.

    (2) computable functions are defined as those functions that we have algorithms for

    Whether we know of such an algorithm is actually irrelevant. For a function to be computable, such an algorithm merely has to exist, even if nobody has discovered it. A computable function also has to be N->N.
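
    To spell that out, the standard textbook formulation (my paraphrase, not a quote from the essay) is roughly:

```latex
% Standard definition of a computable function (paraphrased; not the essay's wording):
% a total function f : N -> N is computable iff there EXISTS a Turing machine
% computing it; whether anyone has actually found that machine is irrelevant.
\[
  f\colon \mathbb{N}\to\mathbb{N}\ \text{is computable}
  \iff
  \exists\,\text{Turing machine } M\ \text{such that}\ \forall n\in\mathbb{N}:\
  M\ \text{halts on input}\ n\ \text{with output}\ f(n).
\]
```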

    (3) a computer is anything which physically implements algorithms in order to solve computable functions

    That’s a deliberately narrow definition of what a computer is, meaning that the author is not actually addressing the topic of the computer analogy in general, but just a subtopic with these assumptions in mind.

    To complete those definitions, I will go ahead and introduce, from the same blog post, an intuitive definition of algorithm: “an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output (e.g. an answer to yes/no integer roots) for a specific input (e.g. a specific polynomial like 6x³yz⁴ + 4y²z + z − 9).” And the more technical definition of algorithm in (1) as “

    This directly contradicts the author’s point (1), where they give a different, non-equivalent definition of what an algorithm is.
    So, which is it?

    This equivalence of course arises since attempts to achieve the intuitive definition about following instructions mechanically can always be reduced to a Turing machine

    This is obvious nonsense. Not only are those definitions not equivalent, but the author also never actually defines what it means for instructions to be followed ‘mechanically’.
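
    For anyone who has not seen one, here is a minimal sketch (my own illustration, not code from the essay) of a Turing-machine-style system. ‘Mechanical’ here just means looking up (state, symbol) in a fixed table and acting on the result, with no insight involved:

```python
# A minimal Turing-machine-style interpreter (illustrative sketch, not the essay's code).
# The only operation is looking up (state, symbol) in a fixed rule table and then
# writing a symbol, moving the head, and changing state accordingly.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules maps (state, symbol) -> (symbol_to_write, head_move, next_state)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Example rule table: flip every bit on the tape, halt at the first blank cell.
flip_bits = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine(flip_bits, "1011"))  # prints: 0100_
```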

    The author of the post recognizes that under this definition, any physical system can be said to be ‘computing’ its time evolution function, and the meaning of the word loses its importance/significance

    Does the author also consider the word ‘time’ to have a meaning without ‘importance’/‘significance’?

    In order to avoid that, he subscribes to Wittgenstein and suggests that, since when we think about modern-day computers we are thinking about machines like our laptops, desktops, and phones, which achieve extremely powerful and useful computation, we should hence restrict the word ‘computers’ to these types of systems (hint: the problem is right here!!)

    I have already addressed this.

    At this point, I am not willing to waste my time on the parts that you have not highlighted. The author is the boy who cried ‘wolf!’.

    EDIT: you seem to have added a bunch to your previous comment, without clearly pointing out your edits.
    I will address one thing.

    note the triviality criticism of the informal definition that this author previously addressed, and the ‘human who could only carry out specific elementary operations on symbols’ is a reference to Turing Machines and the Chinese Room thought experiment, both of which i recommend reading about.

    The author seems to be clueless about what a Turing machine is. The Chinese Room argument is also silly; it can be summarised as either ‘but I can’t imagine somebody making a computer that, in some inefficient manner, does introspection, even though introspection is a very common thing in software’ or ‘but what I think we should call “computers” are things that I think do not have qualia, therefore we can’t call things with qualia “computers”’. Literally nothing prevents something that does introspection in some capacity from being a computer.
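
    As a trivial illustration of that last point (my own example, not anything from the article), ordinary software inspects its own code and state all the time; Python even ships a standard-library module for it:

```python
# A toy example of software introspection (my own example, not from the article):
# the program examines its own source code and call stack at runtime.
import inspect

def report_on_self():
    """Return a sentence describing this function's own source and caller."""
    own_source = inspect.getsource(report_on_self)  # read this function's own source
    caller = inspect.stack()[1].function            # find out which function called us
    return f"I am {len(own_source.splitlines())} lines long and was called from {caller!r}."

def main():
    print(report_on_self())

if __name__ == "__main__":
    main()  # e.g. "I am 5 lines long and was called from 'main'."
```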

    • Frank [he/him, he/him]@hexbear.net
      10 months ago

      I’ve heard people saying that the Chinese Room is nonsense because it’s not actually possible, at least for thought experiment purposes, to create a complete set of rules for verbal communication. There’s always a lot of ambiguity that needs to be weighed and addressed. The guy in the room would have to be making decisions about interpretation and intent. He’d have to have theory of mind.

      • Tomorrow_Farewell [any, they/them]@hexbear.net
        10 months ago

        The Chinese Room argument that nothing people would commonly call a ‘computer’ can have understanding is rooted either in endless goalpost-moving about what it means to ‘understand’ something (in which case it is obviously silly), or in the assumption that only things with nervous systems can have qualia and that understanding belongs to qualia (in which case that conclusion could be reached without the Chinese Room argument in the first place).

        In any case, the Chinese Room is not really relevant to the question of whether considering brains to be computers is somehow erroneous.

        • Frank [he/him, he/him]@hexbear.net
          10 months ago

          In any case, the Chinese Room is not really relevant to the question of whether considering brains to be computers is somehow erroneous.

          My understanding was that the point of the Chinese Room was that a deterministic system with a perfect set of rules could produce the illusion of consciousness without ever understanding what it was doing? Is that not analogous to our discussion?