the-podcast guy recently linked this essay. it's old, but i don't think it's significantly wrong (despite gpt evangelists). also read weizenbaum, libs, for the other side of the coin

  • Tomorrow_Farewell [any, they/them]@hexbear.net
    10 months ago

    (1) a computer is anything which physically implements algorithms in order to solve computable functions.
    (2) an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.
    (3) the specific input and output states in the definition of an algorithm, and the arbitrary relationship between the physical observables of the system and computational states, are specified by us because of our intelligence, which is the result of…wait for it…the execution of an algorithm (in the brain).
    Notice the circularity? The process of specifying the inputs and outputs needed in the definition of an algorithm is itself defined by an algorithm!! This process is of course a product of our intelligence/ability to learn: you can't specify the evolution of a physical CMOS gate as a logical NAND if you have not already learned what NAND is, nor if you are incapable of learning it in the first place. And any attempt to describe it as an algorithm will always suffer from the circularity.

    This is a rather silly argument. People hear about certain logical fallacies and build cargo cults around them. They are basically arguing ‘but how can conscious beings process their perception of material stuff if their consciousness is tied to material things???’, or ‘how can we learn about our bodies if we need our bodies to learn about them in the first place? Notice the circularity!!!’.
    The last sentence there is a blatant non sequitur. They provide literally no reasoning for why a thing wouldn’t be able to learn stuff about itself using algorithms.

    • TraumaDumpling [none/use name]@hexbear.net
      10 months ago

      please read the entire article; you are literally not understanding the text. the following directly addresses your argument.

      “an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.” Now if we assume that the input and output states are arbitrary and not specified, then the time evolution of any system becomes computing its time-evolution function, with the state at every time t becoming the input for the output state at time (t+1), and hence too broad a definition to be useful.

      If we want to narrow the usage of the word computers to systems like our laptops, desktops, etc., then we are talking about those systems in which the input and output states are arbitrary (you can make Boolean logic work with either physical voltage high or low as Boolean logic zero, as long as you find suitable physical implementations) but are clearly specified (voltage low = Boolean logic zero, generally, in modern-day electronics), as in the intuitive definition of an algorithm… with the most important part being that those physical states (and their relationship to the computational variables) are specified by us!!! All the systems that we refer to as modern day computers, and want to restrict our usage of the word computers to, are in fact created by us (or our intelligence, to be more specific), in which we decide what the input and output states are.

      Take your calculator for example. If you wanted to calculate the sum of 3 and 5 on it, it is your interpretation of the pressing of the 3, 5, + and = buttons as inputs, and of the number that pops up on the LED screen as output, that allows you to interpret the time evolution of the system as a computation, and imbues the computational property to the calculator. Physically, nothing about the electron flow through the calculator circuit makes the system evolution computational.
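
      to make the encoding point concrete, here is a quick sketch (my own toy illustration, not from the article; all voltages are made up): the very same idealized gate physics reads as NAND under one voltage convention and as NOR under the inverted one, so the logical identity of the gate lives entirely in the mapping we choose.

      ```python
      # toy model of the encoding point: one and the same gate physics is
      # 'NAND' or 'NOR' depending on the voltage convention WE pick.
      V_HIGH, V_LOW = 5.0, 0.0  # hypothetical CMOS rail voltages

      def physical_gate(v_a, v_b):
          """idealized physics: output voltage is low only if both inputs are high."""
          return V_LOW if (v_a > 2.5 and v_b > 2.5) else V_HIGH

      enc_hi = {0: V_LOW, 1: V_HIGH}          # convention 1: high voltage = logical 1
      dec_hi = lambda v: 1 if v > 2.5 else 0
      enc_lo = {0: V_HIGH, 1: V_LOW}          # convention 2: high voltage = logical 0
      dec_lo = lambda v: 0 if v > 2.5 else 1

      for a in (0, 1):
          for b in (0, 1):
              as_nand = dec_hi(physical_gate(enc_hi[a], enc_hi[b]))  # NAND truth table
              as_nor = dec_lo(physical_gate(enc_lo[a], enc_lo[b]))   # NOR truth table
              print(f"{a} {b} -> convention 1: {as_nand}, convention 2: {as_nor}")
      ```

      same electrons, two different 'computations'; the difference is only in the interpretation.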

      • Tomorrow_Farewell [any, they/them]@hexbear.net
        10 months ago

        please read the entire article, you are literally not understanding the text.

        Unless the author redefines the words used in the bit that you quoted from them, I addressed their argument just fine.
        If the author does redefine those words, then the bit that you quoted is literally meaningless unless you also quote the parts where the author defines the relevant words.

        “an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.”

        The author is just arbitrarily imposing on algorithms the requirement that they ‘can be followed mechanically, with no insight required’. This is silly for a few reasons.
        Firstly, that’s not how algorithms are defined in mathematics, nor is that how they are understood in the context of the relevant analogies. I am going to just ignore the ‘mechanically’ part, as the author does not seem to explain what they meant, and my interpretations are all broad enough to conclude that the author is obviously incorrect.
        Secondly, brains perform various actions without any sort of insight required. This part should be obvious.
        Thirdly, the author’s problem seems to be that computers usually work without any introspection into how they perform their tasks, and that nobody builds computers that inefficiently access random parts of memory vaguely related to their tasks. The introspection part is just incorrect, and the fact that we don’t build hardware and software that performs inefficient ‘insight’ has no bearing on whether computers that do those things can be built; they can, and they would still be computers.

        The author is deeply unserious.

        Now if we assume that the input and output states are arbitrary and not specified, then the time evolution of any system becomes computing its time-evolution function, with the state at every time t becoming the input for the output state at time (t+1), and hence too broad a definition to be useful

        If their problem is that the analogy is not insightful, then fine. However, their thesis seems to be that the analogy is not applicable well enough, which is different from that.
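
        To be clear, the triviality point on its own is easy to grant. A minimal sketch (my illustration; the logistic map stands in for an arbitrary deterministic physical system): if the inputs and outputs are left unspecified, the system's dynamics can always be read as 'computing' its own evolution.

        ```python
        # With unspecified inputs/outputs, any deterministic system 'computes' its own
        # time evolution: the state at time t is the input, the state at t+1 the output.
        # The logistic map here stands in for an arbitrary physical system.

        def evolve(x):
            return 3.7 * x * (1 - x)  # the system's dynamics

        state = 0.2
        for t in range(5):
            state_next = evolve(state)
            print(f"t={t}: input {state:.4f} -> output {state_next:.4f}")
            state = state_next
        ```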

        If we want to narrow the usage of the word computers to systems like our laptops, desktops, etc.

        Okay, so their thesis is not that the computer analogy is inapplicable, but that we do not work exactly the way PCs work? Sure.
        I don’t know why they had to make bad arguments regarding algorithms, though.

        you can make Boolean logic…

        There is no such thing as ‘Boolean logic’. There is ‘Boolean algebra’, which is an algebraisation of logic.
        The author also seems to assume that computers can only work with classical logic, and not any other sort of logic, for which we can implement suitable algebraisations.
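
        For example, nothing stops perfectly ordinary code from implementing an algebraisation of a non-classical logic. A minimal sketch of strong Kleene three-valued logic (my example, with 0.5 standing for 'unknown'):

        ```python
        # Strong Kleene three-valued logic: true (1.0), unknown (0.5), false (0.0).
        # One of many non-classical logics with a perfectly computable algebraisation.

        def k_not(a):
            return 1.0 - a

        def k_and(a, b):
            return min(a, b)

        def k_or(a, b):
            return max(a, b)

        T, U, F = 1.0, 0.5, 0.0
        print(k_and(T, U))  # 0.5: 'true and unknown' remains unknown
        print(k_or(T, U))   # 1.0: 'true or unknown' is true
        print(k_not(U))     # 0.5: the negation of unknown is unknown
        ```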

        with the most important part being that those physical states (and their relationship to the computational variables) are specified by us!!!

        This is silly. The author is basically saying ‘but all computers are intelligently made by us’. Needless to say, they are deliberately misunderstanding what computers are and are placing arbitrary requirements for something to be considered a computer.

        All the systems that we refer to as modern day computers and want to restrict our usage of the word computers to

        Who is this ‘we’?

        Again, the author is deeply unserious.

        • TraumaDumpling [none/use name]@hexbear.net
          10 months ago

          Unless the author redefines the words used in the bit that you quoted from them, I addressed their argument just fine.

          so you aren’t going to read the article then.

          No Investigation, No Right to Speak.

          Here follow some selections from the article that deal with exactly the issues you focus on.

          I strongly advise reading the entire article, and the two it is in response to, and furthermore reading about what a Turing Machine actually is and what it can be used to analyze.

          The debate on whether the brain is a computer or not seems to have died down given the recent success of computer science ideas in both neuroscience and machine learning. I have seen a few recent articles on this subject from scientists who have made strong claims, backed up with their reasons for believing so, that the brain is in fact literally a computer and not just a useful metaphor. One such article is this one by Dr. Blake Richards (and here is another one by Dr. Mark Humphries). I will mainly deal with the first one, a really good and extensive article. I would encourage readers to go through it slowly and in detail, for it provides a good look at how to think about what a computer is, and deals well with a lot of the weaker arguments brought against the ‘brain is a computer’ claim (like the ones here).

          Dr. Richards addressed a good variety of objections that people might raise to the claim that “the brain is a computer” towards the end of his article. I will raise an argument here that I feel lies at the heart of this discussion, one that is not addressed in the post and is often overlooked or dismissed as non-existent. The reason I think it is important to discuss this question (and/or objection) in detail is that I strongly believe it affects how we study the brain. Describing the brain like a computer allows for a useful computational picture that has been very successful in the fields of neuroscience and artificial intelligence (specifically the sub-area of machine learning) over the recent past. However, as an engineer interested in building intelligent systems, I think this view of the brain as a computer is beginning to hurt us in our ability to engineer systems that can efficiently emulate their capabilities over a wide range of tasks.

          the bolded part above (the final sentence; the bolding did not survive the quote) is ‘why the author has a problem with the computer metaphor’, since you seem so confused by that.

          There are a few minor/major problems (depends on how you look at it) in the definitions used to get to the conclusion that the brain is in fact a computer. Using the definitions put forward in the blog post

          (1) an algorithm is anything a Turing machine can do, (2) computable functions are defined as those functions that we have algorithms for, (3) a computer is anything which physically implements algorithms in order to solve computable functions.

          these are the definitions the author is using, not ones he made up but ones he got from one of the articles he is arguing against. note the similarities with the definitions on https://en.wikipedia.org/wiki/Algorithm :

          One informal definition is “a set of rules that precisely defines a sequence of operations”, which would include all computer programs (including programs that do not perform numeric calculations), and (for example) any prescribed bureaucratic procedure or cook-book recipe. In general, a program is an algorithm only if it stops eventually, even though infinite loops may sometimes prove desirable. Boolos and Jeffrey (1974, 1999) define an algorithm to be a set of instructions for determining an output, given explicitly, in a form that can be followed by either a computing machine, or a human who could only carry out specific elementary operations on symbols.

          note the triviality criticism of the informal definition, which this author previously addressed; and the ‘human who could only carry out specific elementary operations on symbols’ is a reference to Turing machines and the Chinese Room thought experiment, both of which i recommend reading about.
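
          to make the ‘stops eventually’ criterion concrete, here is the textbook example (mine, not the article’s): euclid’s gcd algorithm, a finite list of instructions that provably halts on every valid input.

          ```python
          # euclid's algorithm: finitely many instructions, provably halting,
          # so it satisfies the 'stops eventually' criterion quoted above.

          def gcd(a: int, b: int) -> int:
              while b != 0:
                  a, b = b, a % b  # the new b is strictly smaller, so we must terminate
              return a

          print(gcd(252, 198))  # 18
          ```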

          The concept of algorithm is also used to define the notion of decidability—a notion that is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to the customary physical dimension. From such uncertainties, that characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete (in some sense) and abstract usage of the term.

          this is still a matter under academic discussion; there are no widely agreed-upon definitions of these terms that suit all uses in all fields.

          Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device.

          algorithms can be implemented by humans, intentionally or not; the hardware is irrelevant to the discussion of Turing machines, since they are an idealized abstraction of computing.
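
          and since the Turing machine keeps coming up: it is nothing but a state register, a tape, and a transition table, and anything that realizes the table (silicon, neurons, pencil and paper) ‘implements’ the machine. a minimal simulator, my own sketch for illustration:

          ```python
          # a minimal Turing machine simulator. the machine below just inverts
          # a bit string and halts; the 'hardware' running it is irrelevant.

          def run_tm(tape, rules, state="scan", blank="_"):
              tape, head = list(tape), 0
              while state != "halt":
                  symbol = tape[head] if head < len(tape) else blank
                  state, write, move = rules[(state, symbol)]
                  if head == len(tape):
                      tape.append(blank)
                  tape[head] = write
                  head += 1 if move == "R" else -1
              return "".join(tape).rstrip(blank)

          rules = {
              ("scan", "0"): ("scan", "1", "R"),  # flip 0 to 1, move right
              ("scan", "1"): ("scan", "0", "R"),  # flip 1 to 0, move right
              ("scan", "_"): ("halt", "_", "R"),  # past the input: halt
          }
          print(run_tm("1011", rules))  # 0100
          ```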

          enough wikipedia, now back to the article:

          Number (3) is the one we will focus on, for it is vitally important. To complete those definitions, I will go ahead and introduce, from the same blog post, an intuitive definition of algorithm: “an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output (e.g. an answer to yes/no integer roots) for a specific input (e.g. a specific polynomial like 6x³yz⁴ + 4y²z + z - 9).” And the more technical definition of algorithm in (1) as “An algorithm is anything that a Turing machine can do.” This equivalence of course arises since attempts to achieve the intuitive definition about following instructions mechanically can always be reduced to a Turing machine.

          The author of the post recognizes that under this definition, any physical system can be said to be ‘computing’ its time evolution function, and the meaning of the word loses its importance/significance. In order to avoid that, he subscribes to Wittgenstein and suggests that, since when we think about modern day computers we are thinking about machines like our laptops, desktops and phones, which achieve extremely powerful and useful computation, we should hence restrict the word computers to these types of systems (hint: the problem is right here!!). Since our brains also achieve the same, we find that our brains are (uber) computers as well (I might be simplifying/shortening the argument, but I believe I have captured its essence, and will once again recommend reading the complete article here.)

          Furthermore, he points out that our modern day computers and brains have the capability of being Turing complete, but are not, of course, due to physical constraints on memory, time and energy expenditure. And if we do not have a problem with calling our non-Turing-complete, von Neumann architecture machines computers, then we should not let the physical constraints that prevent the brain from being Turing complete stop us from calling it a computer as well. I agree that we should not restrict ourselves to only referring to Turing complete systems as computers, for that is far too restrictive. The term ‘computer’ does have a popular usage and meaning in everyday life that is independent of whether or not the system is Turing complete. It makes a lot more sense to instead refer to those computers that are in fact Turing complete as ‘Turing complete computers’.

          this explains the author’s reasoning for their definitions further. he is not making these up; these are the common definitions in use in the discourse.

          • Tomorrow_Farewell [any, they/them]@hexbear.net
            10 months ago

            so you aren’t going to read the article then.
            No Investigation, No Right to Speak.

            I have investigated the parts that you have quoted, and that is what I am weighing in on… They are self-contained enough for me to weigh in, unless the author just redefines the words elsewhere, in which case not quoting those parts as well just means that you are deliberately posting misleading quotes.

            I strongly advise reading the entire article

            From the parts already quoted, it seems that the author is clueless and is willing to make blatantly faulty arguments. The fact that you opted to quote those parts of the article and not the others indicates to me that the rest of the article is not better in this regard.

            and furthermore reading about what a Turing Machine actually is and what it can be used to analyze

            Firstly, the term ‘Turing machine’ did not come up in this particular chain of comments up to this point. The author literally never referred to it. Why is it suddenly relevant?
            Secondly, what exactly do you think I, as a person with a background in mathematics, am missing in this regard that a person who says ‘Boolean logic’ is not?

            (1) an algorithm is anything a Turing machine can do

            This contradicts the previous two definitions the author gave.

            (2) computable functions are defined as those functions that we have algorithms for

            Whether we know of such an algorithm is irrelevant, actually. For a function to be computable, such an algorithm merely has to exist, even if it has not been discovered by anybody. A computable function also has to be N->N.
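
            The standard illustration of this point: let f(n) = 1 for every n if Goldbach's conjecture is true, and f(n) = 0 for every n otherwise. f is computable, because one of the two trivial programs below computes it; we just do not know which one.

            ```python
            # f is computable because SOME algorithm for it exists, even though
            # nobody currently knows which of these two programs it is.

            def candidate_if_goldbach_holds(n: int) -> int:
                return 1  # computes f if Goldbach's conjecture is true

            def candidate_if_goldbach_fails(n: int) -> int:
                return 0  # computes f if Goldbach's conjecture is false
            ```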

            (3) a computer is anything which physically implements algorithms in order to solve computable functions

            That’s a deliberately narrow definition of what a computer is, meaning that the author is not actually addressing the topic of the computer analogy in general, but just a subtopic with these assumptions in mind.

            To complete those definitions, I will go ahead and introduce, from the same blog post, an intuitive definition of algorithm: “an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output (e.g. an answer to yes/no integer roots) for a specific input (e.g. a specific polynomial like 6x³yz⁴ + 4y²z + z - 9).” And the more technical definition of algorithm in (1) as “An algorithm is anything that a Turing machine can do.”

            This directly contradicts the author’s point (1), where they give a different, non-equivalent definition of what an algorithm is.
            So, which is it?

            This equivalence of course arises since attempts to achieve the intuitive definition about following instructions mechanically can always be reduced to a Turing machine

            This is obvious nonsense. Not only are those definitions not equivalent, the author is also not actually defining what it means for instructions to be followed ‘mechanically’.

            The author of the post recognizes that under this definition, any physical system can be said to be ‘computing’ its time evolution function, and the meaning of the word loses its importance/significance

            Does the author also consider the word ‘time’ to have a meaning without ‘importance’/‘significance’?

            In order to avoid that, he subscribes to Wittgenstein and suggests that, since when we think about modern day computers we are thinking about machines like our laptops, desktops and phones, which achieve extremely powerful and useful computation, we should hence restrict the word computers to these types of systems (hint: the problem is right here!!)

            I have already addressed this.

            At this point, I am not willing to waste my time on the parts that you have not highlighted. The author is the boy who cried ‘wolf!’.

            EDIT: you seem to have added a bunch to your previous comment, without clearly pointing out your edits.
            I will address one thing.

            note the triviality criticism of the informal definition, which this author previously addressed; and the ‘human who could only carry out specific elementary operations on symbols’ is a reference to Turing machines and the Chinese Room thought experiment, both of which i recommend reading about.

            The author seems to be clueless about what a Turing machine is, and the Chinese Room argument is also silly. It can be summarised as either ‘but I can’t imagine somebody making a computer that, in some inefficient manner, does introspection, even though introspection is a very common thing in software’, or ‘but what I think we should call “computers” are things that I think do not have qualia, therefore we can’t call things with qualia “computers”’. Literally nothing prevents something that does introspection in some capacity from being a computer.
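
            To illustrate how mundane introspection is in software, here is a trivial sketch (my own, nothing from the article) of a function inspecting its own state and source text while doing its job:

            ```python
            # introspection is routine: a function can examine its own call frame
            # and even its own source text in the middle of performing a task.
            import inspect

            def introspective_sum(xs):
                frame = inspect.currentframe()
                print("my local state right now:", frame.f_locals)
                # getsource requires the function to be defined in a .py file, not a REPL
                print("my own source:\n", inspect.getsource(introspective_sum))
                return sum(xs)

            print(introspective_sum([1, 2, 3]))  # prints its internals, then returns 6
            ```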

            • Frank [he/him, he/him]@hexbear.net
              10 months ago

              I’ve heard people saying that the Chinese Room is nonsense because it’s not actually possible, at least for thought experiment purposes, to create a complete set of rules for verbal communication. There’s always a lot of ambiguity that needs to be weighed and addressed. The guy in the room would have to be making decisions about interpretation and intent. He’d have to have theory of mind.

              • Tomorrow_Farewell [any, they/them]@hexbear.net
                10 months ago

                The Chinese Room argument that nothing people would commonly call a ‘computer’ can have understanding is rooted either in endless goalpost-moving over what it means to ‘understand’ something (in which case it is obviously silly), or in the assumption that only things with nervous systems can have qualia and that understanding belongs to qualia (in which case the conclusion can be reached without the Chinese Room argument in the first place).

                In any case, the Chinese Room is not really relevant to the topic of whether considering brains to be computers is somehow erroneous.

                • Frank [he/him, he/him]@hexbear.net
                  10 months ago

                  In any case, the Chinese Room is not really relevant to the topic of whether considering brains to be computers is somehow erroneous.

                  My understanding was that the point of the Chinese Room was that a deterministic system with a perfect set of rules could produce the illusion of consciousness without ever understanding what it was doing? Is that not analogous to our discussion?