Literally just mainlining marketing material straight into whatever’s left of their rotting brains.
For fuck’s sake, it’s just an algorithm. It’s not capable of becoming sentient.
Have I lost it or has everyone become an idiot?
deleted by creator
This is verging on a religious debate, but assuming that there’s no “spiritual” component to human intelligence and consciousness like a non-localized soul, what else can we be but ultra-complex “meat computers”?
yeah this is knee-jerk anti-technology shite from people here because we live in a society organized along lines where creation of AI would lead to our oppression instead of our liberation. of course making a computer be sentient is possible, to believe otherwise is to engage in magical (chauvinistic?) thinking about what constitutes consciousness.
When I watched Blade Runner 2049, I thought it was a bit weird that the human police captain tells Officer K (a replicant) that she’s different from him because she has a soul, since sci-fi settings are pretty secular. Turns out this was prophetic, and people are more than willing to get all spiritual if it helps them invent reasons to differentiate themselves from the Other.
deleted by creator
There isn’t a materialist theory of consciousness that doesn’t look something like an ultra complex computer. We’re talking like an alternative explanation exists but it really does not.
deleted by creator
When people say computer here they mean computation as computer scientists conceive of it. Abstract mathematical operations that can be modeled by boolean circuits or Turing machines, and embodied in physical processes. Computers in the sense you’re talking about (computer hardware) are one method of embodying these operations.
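To make “embodied in physical processes” concrete, here’s a toy sketch (illustrative Python, not any particular formalism): a half-adder built from nothing but NAND. Any physical process that can implement NAND (transistors, relays, neurons, falling dominoes) can implement this same abstract computation.

```python
# A toy half-adder built purely from NAND, the classic universal gate.
# The point: the computation is abstract; the substrate is incidental.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def xor(a: bool, b: bool) -> bool:
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

def and_(a: bool, b: bool) -> bool:
    return nand(nand(a, b), nand(a, b))

def half_adder(a: bool, b: bool):
    # returns (sum bit, carry bit)
    return xor(a, b), and_(a, b)

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", half_adder(a, b))
```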
What stops me from doing the same thing that neurons do with a sufficiently sized hunk of silicon? Assuming that some amount of abstraction is fine.
If the answer is “nothing”, then that demonstrates the point. If you can build an artificial brain that does all of the things a brain does, then there is nothing special about our brains.
Nobody ever mentioned a “soul” in this conversation until you brought it up to use as an accusation.
“Computers aren’t sentient” is not a religious belief no matter how hard you try to smear it as such.
It isn’t “Computers aren’t sentient”, nobody thinks computers are sentient except some weirdos. “Computers can’t be sentient”, which is what is under discussion, is a much stronger claim.
The claim is that “computers can be sentient”. That is a strong claim and requires equally strong evidence. I’ve found the arguments in support of it lackluster and reductionist for reasons I’ve outlined in other comments. In fact, I find the idea that if we compute hard enough we get sentience borders on a religious belief in extra-physical properties being bestowed upon physical objects once they pass a certain threshold.
There are people who argue that everything is conscious, even rocks, because everything is ultimately a mechanical process. The base argument is the same, but I have a feeling that most people here would suddenly disagree with them for some reason. Is it “creationism” to find such a hypothesis absurd, or is it vulgar materialism to think it’s correct? You seem to take offense at being called “reductionist” despite engaging in a textbook case of reductionism.
This doesn’t mean you’re wrong, or that the rock-consciousness people are wrong, it’s just an observation. Any meaningful debate about sentience right now is going to be philosophical. If you want to be scientific the answer is “I don’t know”. I don’t pretend to equate philosophy with science.
Consciousness isn’t an extra-physical property. That’s the belief.
I don’t take offense to being called reductionist, I take offense to reductionism being said pejoratively. Like how creationists say it. It’s obvious to me that going deeper, understanding the mechanisms behind things, makes them richer.
The thing that makes your argument tricky is we do have evidence now. Computers are unambiguously exhibiting behaviors that resemble behaviors of conscious beings. I don’t think that makes them conscious at this time, any more than animals who exhibit interesting behavior, but it shows that this mechanism has legs. If you think LLMs are as good as AI is ever going to get that’s just really blinkered.
the replicants are people because they are characters written by the author same as any other.
sentient machines is only science fiction
By that way of reasoning, the replicants aren’t people because they are characters written by the author same as any other.
They are as much fiction as sentient machines are science fiction.
ok sure, my point was that the authors aren’t making a point about the nature of machines informed by the limits of machines, and aren’t qualified to do so
saying AI is people because of Data from star trek is like saying there are aliens because you saw a Vulcan on tv in terms of relevance
That’s fair, though taking the idea that AI is people because of Data from Star Trek isn’t inherently absurd. If a machine existed that demonstrated all the same capabilities and external phenomena as Data in real life, I would want it treated as a person.
The authors might be delusional about the capabilities of their machine in particular, but in different physical circumstances to what’s most likely happening here, they wouldn’t be wrong.
deleted by creator
My pet theory: Meat radios
Why is the concept of a spirit relevant? Computers and living beings have practically nothing in common
You speak very confidently about two things that have seen the boundaries between them shift dramatically within the past few decades. I would also like to ask if you actually understand microbiology & how it works, or have even seen a video of ATP Synthase in action.
Love to see the “umm ackshually scientists keep changing their minds” card on hexbear dot net. Yes neuroscience could suddenly shift to entirely support your belief, but that’s not exactly a stellar argument. I’d love to know how ATP has literally anything to do with proving computational consciousness other than that ATP kind of sort of resembles a mechanical thing (because it is mechanical).
Sentience as a physical property does not have to stem from the same processes. Everything in the universe is “mechanical” so making that observation is meaningless. Everything is a “mechanism” so everything has that in common. Reducing everything down to its very base definition instead of taking into account what kind of mechanisms they are is literally the very definition of reductionism. You have to look at the wider process that derives from the sum of its mechanical parts, because that’s where differences arise. Of course if you strip everything down to its foundation it’s going to be the same. Are a door and a movie camera the same thing because they both consist of parts that move?
Go be a computer somewhere else
Go be spooky somewhere else. Calling things “reductionist” like some kind of creationist.
Let’s assume for the moment that there’s no such thing as a spirit/soul/ghost/etc. in human beings and other animals, and that everything that makes me “me” is inside my body. If this is the case, computers and living brains do have something fundamental in common. They are both made of matter that obeys the laws of physics. As far as we know, there’s no such thing as “living” quarks and electrons that are distinct from “non-living” quarks and electrons.
How very crude and reductionist just like the source comment says.
I’m having a hard time understanding your reasoning and perspective on this. My interpretation of your comments is that you believe biological intelligence is a special phenomenon that cannot be understood by the scientific method. If I’m in error, I’d welcome a correction.
Biological intelligence is currently not understood. This has nothing to do with distinguishing between “living” and “non-living” matter. Brains and suitcases are also both made of matter. It’s a meaningless observation.
The question is what causes sentience. Arguing that brains are computers because they’re both made of matter is a non-sequitur. We don’t even know what mechanism causes sentience so there’s no point in even beginning to make comparisons to a separate mechanism. It plays into a trend of equating the current most popular technology to the brain. There was no basis for it then, and there’s no basis for it now.
Nobody here is arguing about what the brain is made of.
saying meat computers implies that the computation model fits. it’s an ontological assumption that requires evidence. this trend of assuming every complex process is computation blinds us. are chemical processes computation? sometimes and sometimes not! you can’t assume that they are and expect to get very far. processing information isn’t adequate evidence for the claim.
Have I lost it
Well no, owls are smart. But yes, in terms of idiocy, very few go lower than “Silicon Valley techbro”
deleted by creator
deleted by creator
deleted by creator
this is the horseshoe curve of techbroism
Have I lost it or has everyone become an idiot?
Brainworms have been amplified and promoted by social media, I don’t think you have lost it. This is just the shitty capitalist world we live in.
Have I lost it
No you haven’t. I feel the same way though, since the world has gone mad over it. Reporting on this is just more proof that journalism only exists to make capitalists money. Anything approaching the lib idea of a “free and independent press” would start every article by explaining that none of this is AI, that it is not capable of achieving consciousness, and that they are only saying this to create hype
I don’t know where everyone is getting these in depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don’t believe in a soul, or that organic matter has special properties that allows sentience to arise.
I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.
Even if we find the limit to LLMs and figure out that sentience can’t arise (I don’t know how this would be proven, but let’s say it was), you’d still somehow have to prove that algorithms can’t produce sentience, and that only the magical fairy dust in our souls produce sentience.
That’s not something that I’ve bought into yet.
Well, my (admittedly postgrad) work with biology gives me the impression that the brain has a lot more parts to consider than just a language-trained machine. Hell, most living creatures don’t even have language.
It just screams of a marketing scam. I’m not against the idea of AI. Although from an ethical standpoint I question bringing life into this world for the purpose of using it like a tool. You know, slavery. But I don’t think this is what they’re doing. I think they’re just trying to sell the next Google AdSense
Notice the distinction in my comments between an LLM and other algorithms, that’s a key point that you’re ignoring. The idea that other commenters have is that for some reason there is no input that could produce the output of human thought other than the magical fairy dust that exists within our souls. I don’t believe this. I think a sufficiently advanced input could arrive at the holistic output of human thought. This doesn’t have to be LLMs.
the magical fairy dust that exists within our souls.
Who said that?
You’re missing the forest for the trees. Replace “magical fairy dust” with [insert whatever you think makes organic, carbon-based processing capable of sentience but inorganic silicon-based processing incapable of sentience].
deleted by creator
I haven’t seen anyone here (or basically anyone at all, for that matter) suggest that there’s literally no way to create mentality like ours other than being exactly like us. The argument is just that LLMs are not even on the right track to do something like that. The technology is impressive in a lot of ways, but it is in no way comparable to even a rudimentary mind in the sense that people have minds, and there’s no amount of tweaking or refining the basic approach that’s going to move it in that direction. “Genuine” (in the sense of human-like) AI made from non-human stuff is certainly possible in principle, but LLMs are not even on that trajectory.
Even setting that aside, I think framing this as an I/O problem elides some really tricky and deep conceptual content, and suggests some fundamental misunderstanding about how complex this problem is. What on Earth does “the output of human thought” mean in this sense? Clearly you don’t really mean human thought, because you obviously think whatever “output” you’re looking for can be instantiated in non-human systems. It must mean human-like thought, but human-like in what sense? Which features are important to preserve, and which are incidental or parochial to the way humans do human-like thought? How you answer that question greatly influences how you evaluate putative cases of “genuine” AI, and it’s possible to build in a great deal of hidden bias if we don’t think carefully and deliberately about this. From what I’ve seen, virtually none of the AI hypers are thinking carefully or deliberately about this.
The top level comment this chain is on specifically reduces GPT by saying it’s “just an algorithm”, not by saying it’s “just an LLM”, which is implicitly claiming that no algorithm could match or exceed human capabilities, because they’re “just algorithms”.
You can even see this person further explicitly defending this position in other comments, so the mentality you say you haven’t seen is literally the basis for this entire thread.
deleted by creator
You’re making a lot of assumptions about the human mind there.
What assumptions? I was careful to almost universally take a negative stance not a positive one. The only exception I see is my stance against the existence of the soul. Otherwise there are no assumptions, let alone ones specific to the mind.
As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.
is an incredible claim, loaded with more assumptions than I have space for here. Human thought is a lot more than an algorithm arriving at outputs for inputs. I don’t know about you, but I have an actual inner life, emotions, thoughts and dreams that are far removed from a rote, algorithmic processing of information.
I don’t feel like going into more detail now, but if you wanna look at the AI marketing with a bit more of a critical distance, I’d recommend two things here:
a short read: Language Is a Poor Heuristic For Intelligence
a listen: We Are Not Software: David Bentley Hart with Acid Horizon

Edit: also wanna share this piece about generative AI here. The part about trading the meaning of things for the mean of things resonates all throughout these artificial parrots, whether they parrot text or visuals or sound.
I agree; curious to see what hexbears think of my view:
Firstly there is no “theory of consciousness”. No proposed explanation has ever satisfied that burden of proof, even if they call themselves theories. “Brain = computer” is a retroactively applied analogy, just like everything was pneumatics 100 years ago and everything was wheels 2000 years ago and everything was fire…
I would think that assuming that if you process hard enough you get sentience is quite a religious belief. There is no basis for this assumption.
And materialism isn’t the same thing as physicalism. And just because a hypothesis is physical doesn’t mean it’s automatically correct. Not being a religious explanation is like the lowest bar that there’s ever been in history.
“Sentience is just algorithms” assumes a degree of understanding of the brain that we just don’t have, equates neurons firing to computer processing without reason, and assumes that processing must be the mechanism which leads to sentience without basis.
We don’t know anything about sentience, so going “well you can’t say it’s not computers” is like going “hypothetically there could be a unicorn that shits out solid gold bars that lives on Pluto.” Like, that’s not how the burden of proof works.
Not to mention the STEM “philosophy stoopid” dynamics going on here.
I think artificial intelligence is possible and has already been done if we’re talking about cloning animals. The cloned animal has intelligence and is created through entirely artificial means, so why doesn’t this count as artificial intelligence? This means even the phrasing “artificial intelligence” is incomplete, because when people say artificial intelligence, they’re not talking about brains artificially grown in vats but extremely advanced non-biological circuitry. I think it’s perfectly reasonable to be skeptical about circuitry artificial intelligence or even non-biological artificial intelligence. It’s not like there has been any major advancement in the field that has alleviated that skepticism. I believe there’s an ideological reason to tunnel vision on circuitry, that solving the problem of artificial intelligence through brains artificially grown in vats would be “cheating” somehow.
deleted by creator
I don’t know about you, but I have an actual inner life, emotions, thoughts and dreams that are far removed from a rote, algorithmic processing of information.
How do you know?
How can you know that an inner life, emotions, thoughts and dreams cannot and do not arise from a system of algorithms?
because fundamentally subjective phenomena can never be explained entirely in terms of objective physical quantities without losing important aspects of the phenomena.
so i know a lot of other users will just be dismissive but i like to hone my critical thinking skills, and most people are completely unfamiliar with these advanced concepts, so here’s my philosophical examination of the issue.
the thing is, we don’t even know how to prove HUMANS are sentient except by self-reports of our internal subjective experiences.
so sentience/consciousness as i discuss it here refers primarily to Qualia, or to a being existing in such a state as to experience Qualia. Qualia are the internal, subjective, mental experiences of external, physical phenomena.
here’s the task of people that want to prove that the human brain is a meat computer: Explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.
hint: you can’t. the move by physicalist philosophy is simply to deny the existence of qualia, consciousness, and subjective experience altogether as ‘illusory’ - but illusory to what? an illusion necessarily has an audience, something it is fooling or deceiving. this ‘something’ would be the ‘consciousness’ or ‘sentience’ or, to put it in your oh so smug terms, the ‘soul’ that non-physicalist philosophy might posit. this move by physicalists is therefore syntactically absurd and merely moves the goalpost from ‘what are qualia’ to ‘what are those illusory, deceitful qualia deceiving’.

consciousness/sentience/qualia are distinctly not information processing phenomena; they are entirely superfluous to information processing tasks. sentience/consciousness/Qualia are not the information processing itself, but the internal, subjective, mental awareness and experience of some of these information processing tasks.
Consider information processing, and the kinds of information processing that our brains/minds are capable of.
What about information processing requires an internal, subjective, mental experience? Nothing at all. An information processing system could hypothetically manage all of the tasks of a human’s normal activities (moving, eating, speaking, planning, etc.) flawlessly, without having such an internal, subjective, mental experience. (this hypothetical kind of person with no internal experiences is where the term ‘philosophical zombie’ comes from) There is no reason to assume that an information processing system that contains information about itself would have to be ‘aware’ of this information in a conscious sense of having an internal, subjective, mental experience of the information, like how a calculator or computer is assumed to perform information processing without any internal subjective mental experiences of its own (independently of the human operators).
and yet, humans (and likely other kinds of life) do have these strange internal subjective mental phenomena anyway.
our science has yet to figure out how or why this is, and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.
so the options we are left with in terms of conclusions to draw are:
- all matter contains some kind of (inhuman) sentience, including computers, that can sometimes coalesce into human-like sentience when in certain configurations (animism)
- nothing is truly sentient whatsoever and our self reports otherwise are to be ignored and disregarded (self-denying mechanistic physicalist zen nihilism)
- there is something special or unique or not entirely understood about biological life (at least human life if not all life with a central nervous system) that produces sentience/consciousness/Qualia (‘soul’-ism as you might put it, but no ‘soul’ is required for this conclusion, it could just as easily be termed ‘mystery-ism’ or ‘unknown-ism’)
And personally the only option i have any disdain for is number 2, as i cannot bring myself to deny the very thing i am constantly and completely immersed inside of/identical with.
deleted by creator
on a related note, dropping this rare banger line from wikipedia:
Some philosophers of mind, like Daniel Dennett, argue that qualia do not exist. Other philosophers, as well as neuroscientists and neurologists, believe qualia exist and that the desire by some philosophers to disregard qualia is based on an erroneous interpretation of what constitutes science.[2]
citation text from the wiki page for reference
Damasio, Antonio R. (2000). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. San Diego, CA: Harcourt. ISBN 978-0-15-601075-7.
Edelman, Gerald M.; Gally, Joseph A.; Baars, Bernard J. (2011). “Biology of Consciousness”. Frontiers in Psychology. 2 (4): 4. doi:10.3389/fpsyg.2011.00004. PMC 3111444. PMID 21713129.
Edelman, Gerald Maurice (1992). Bright Air, Brilliant Fire: On the Matter of the Mind. New York: BasicBooks. ISBN 978-0-465-00764-6.
Edelman, Gerald M. (2003). “Naturalizing Consciousness: A Theoretical Framework”. Proceedings of the National Academy of Sciences. 100 (9): 5520–5524. JSTOR 3139744. PMID 154377.
Koch, Christof (2020). The Feeling of Life Itself: Why Consciousness Is Widespread but Can’t Be Computed. Cambridge, MA: The MIT Press. ISBN 978-0-262-53955-5.
Llinás, Rodolfo R. (2002). I of the Vortex: From Neurons to Self. Cambridge, MA: MIT Press. pp. 202–207. ISBN 978-0-262-62163-2.
Oizumi, Masafumi; Albantakis, Larissa; Tononi, Giulio (2014). “From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0”. PLOS Computational Biology. 10 (5): e1003588. doi:10.1371/journal.pcbi.1003588. PMC 4014402. PMID 24811198.
Overgaard, M.; Mogensen, J.; Kirkeby-Hinrup, A., eds. (2021). Beyond Neural Correlates of Consciousness. Routledge.
Ramachandran, V.; Hirstein, W. (1997). “What Does Implicit Cognition Tell Us About Consciousness?”. Consciousness and Cognition. 6 (1): 148. doi:10.1006/ccog.1997.0296.
Tononi, Giulio; Boly, Melanie; Massimini, Marcello; Koch, Christof (2016). “Integrated Information Theory: From Consciousness to Its Physical Substrate”. Nature Reviews Neuroscience. 17 (7): 450–461. doi:10.1038/nrn.2016.44. PMID 27225071.
This is a bad summary of Dennett’s view, or at least a misleading one. He thinks that ‘qualia’ as most philosophers of mind define the term doesn’t refer to anything, and is just a weasel word obscuring that we really don’t have much of an understanding of how brains do the things they do. Qualia get glossed as the “what-it’s-like-ness” of experiences (e.g. the particular feeling of seeing the color blue), which isn’t wrong, but is only part of the story. ‘Qualia’ is a technical term in the philosophy of mind literature, and has a lot of properties attached to it (privacy, incorrigibility, ineffability, so on). Dennett argues that qualia in that sense–the philosopher’s qualia–is incoherent and internally inconsistent for a variety of reasons. This sometimes gets misrepresented as “Dennett thinks consciousness is an illusion” (a misreading that he, to be fair, could work harder to discourage), but that’s not the view. His argument against the philosopher’s qualia is pretty compelling, and doesn’t imply that people aren’t conscious. See “Quining Qualia” for a pretty accessible articulation of the argument.
there is something special or unique or not entirely understood about biological life (at least human life if not all life with a central nervous system) that produces sentience/consciousness/Qualia (‘soul’-ism as you might put it, but no ‘soul’ is required for this conclusion, it could just as easily be termed ‘mystery-ism’ or ‘unknown-ism’)
This is just wrong lol, there’s nothing magical about vertebrates in comparison to unicellular organisms. Maybe the depth of our emotions might be bigger, but obviously a paramecium also feels fear and happiness and anticipation, because these are necessary for it to eat and reproduce, it wouldn’t do these things if they didn’t feel good
The discrete dividing line is life and non-life (don’t @ me about viruses)
I don’t find that obvious at all. I agree there is nothing special dividing vertebrates from unicellular organisms, but I definitely think that some kind of CNS is required for the experience of emotions like fear, happiness, etc. I do not see at all how a paramecium could experience something like that. What part of it would experience it? Emotions in humans seem to be characterised by particular patterns of brain activity and concentrations of certain molecules (hormones, etc.). I really cannot see how a unicellular organism has any capacity to experience emotions as we do.

I would also argue that there is no dividing line between life and non-life. Whether something is alive or not is quite nebulous and hard to define. As you say, viruses are a good example, but there are many others, e.g. a pregnant mammal. The foetus does not fulfil the classical, basic conditions of life that are taught in school (MRS H GREN, or whatever the acronym is), but does it really make sense to say that it is not alive? How many organisms are there when we look at a pregnant mammal? It is not clear.
but I definitely think that some kind of CNS is required for the experience of emotions like fear, happiness etc.
okay, so when a scallop runs away from you it doesn’t feel fear?
and when a paramecium is being ensnared by a hydra or some weird protist on your microscope slide, and it’s struggling to get away, it doesn’t feel fear? lol

Obviously every moving living thing can feel fear, that’s why they’re moving living things and that’s why they run away from predators
I would also argue that there is no dividing line between life and non-life. Whether something is alive or not is quite nebulous and hard to define
With a few exceptions like viruses, it’s pretty obvious. Rocks don’t make more rocks, nor does water
central nervous systems are so far the only thing we almost universally recognize as producing human-like subjectivity (as our evidence is the self report of humans), so i restricted my argumentation to those parameters. for all i know every quark has a kind of subjectivity associated with it, it could be as fundamental to reality as matter. and for all i know a paramecium responds to its environment with purely unconscious instinct (or if that terminology is inaccurate, biological information processing) without an internal experience. we don’t really understand how subjectivity is produced well enough to isolate it for empirical study in humans, let alone mammals, let alone microbes - but i personally think it is plausible that all life if not all matter has some kind of subjectivity.
and for all i know a paramecium responds to its environment with purely unconscious instinct (or if that terminology is inaccurate, biological information processing) without an internal experience
unicellular organisms have been shown to learn. It’s literally the same thing as a vertebrate, just less complex
here’s the task of people that want to prove that the human brain is a meat computer: Explain, in exact detail, how (i.e. the procsses by which) Qualia, (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.
hint: you can’t.
Why not? I understand that we cannot, at this particular moment, explain every step of the process and how every cause translates to an effect until you have consciousness, but we can point at the results of observation and study and less complex systems we understand the workings of better and say that it’s most likely that the human brain functions in the same way, and these processes produce Qualia.
It’s not absolute proof, but there’s nothing wrong with just saying that from what we understand, this is the most likely explanation.
Unless I’m misunderstanding what you’re saying here, why is the idea that it can’t be done the takeaway, rather than that it will take a long time for us to be able to say whether or not it’s possible?
and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.
Once you believe you understand exactly what external brain activity leads to particular internal experiences, you could surely prove it experimentally by building a system where you can induce that activity and seeing if the system can report back the expected experience (though this might not be possible to do ethically).
As a final point, surely your own argument above about an illusion requiring an observer rules out concluding anything along the lines of point 2?
deleted by creator
Donald Duck is correct here but also that’s precisely why techbros are so infuriating. They take that conclusion and then use it to disregard everything except the one thing they conveniently think isn’t based on chemicals, like free market capitalism or Eliezer “Christ the Second” Yud
Dismissing emotions just because they are chemicals is nonsensical. It makes no sense that that alone would invalidate anything whatsoever. But these people think it does because they are conditioned by Protestantism to think that all meaning has to come from a divine and unshakeable authority. That’s why they keep reinventing God, so they have something to channel their legitimate emotions through that their delusional brain can’t invalidate.
He’s not though
life is necessarily more ordered and interesting than dead rocks
therefore it is a good thing to create more life, both on earth and eventually to turn dead planets life-ful (if this is even possible)
we are definitely conscious enough to at least massively increase the amount of life on earth (you could easily green all the world’s deserts under ecocommunism)
Our purpose in life is not reproduction.
“All knowledge is unprovable and so nothing can be known” is a more hopeless position than “existence is absurd and meaning has to come from within”. I shall both fight and perish.
I mean, “meaning has to come from within” is sort of solipsistic but, depending on your definition, completely true.
The biggest problem with Camus (besides his credulity towards the western press and his lack of commitment to trains, oh and lacking any desire for systemic understanding) is that he views this question in an extremely antisocial manner. Yes, if you want affirmation from rocks and you will kill yourself if you don’t get affirmation from rocks, there’s not much to do but get some rope. However, it’s hard to imagine how differently the rhetorical direction of the Myth of Sisyphus would have gone if he had just considered more seriously the idea of finding meaning in relationships with and impact on others rather than just resenting the trees for not respecting you. Seriously, go and reread it, the idea seems as though it didn’t even cross his mind.
The Myth of Solipsists
deleted by creator
Why not?
because qualia are fundamentally subjective phenomena, and there is no conceivable way to arrive at subjective phenomena via objective physical quantities/measurements.
Once you believe you understand exactly what external brain activity leads to particular internal experiences, you could surely prove it experimentally by building a system where you can induce that activity and seeing if the system can report back the expected experience (though this might not be possible to do ethically).
this is not true. for example, take a radio, presented to uncontacted people who do not know what a radio is. It would be reasonable for these people to assume that the voices coming from the radio are produced in their entirety inside the radio box/chassis; after all, when you interfere with the internals of the radio, it affects which voices come out and in what quality. and yet, because of a fundamental lack of understanding of the mechanics of the radio, and a lack of knowledge of how radios are used and how radio programs are produced and performed, this is an entirely incorrect assessment of the situation.
in this metaphor, the ‘radio’ is analogous to the ‘brain’ or ‘body’, and the ‘voices’ or radio programs are the ‘consciousness’, which is assumed to be coming from inside the box, but is in fact coming from outside the box, from completely invisible waves in the air. the ‘uncontacted people’ are modern scientists trying to understand that which is unknown to humanity.
this isn’t to say that i think the brain is a radio, although that is a fun thought experiment, but to demonstrate why correlation does not, in fact, necessarily imply causation, especially in the case of the neural correlates of consciousness. consciousness definitely impinges upon or depends upon the physical brain, it is in some sense affected by it, no one would argue this point seriously, but to assume a causal relationship is intellectually lazy.
qualia
Some philosophers of mind, like Daniel Dennett, argue that qualia do not exist. Other philosophers, as well as neuroscientists and neurologists, believe qualia exist and that the desire by some philosophers to disregard qualia is based on an erroneous interpretation of what constitutes science.
Sounds like a made up word
Your periodically hostile comments (“oh so smug terms the ‘soul’”) indicate that you have a disdain for my position, so I assume you think my position is your option 2, but I don’t ignore self-reports of sentience. I’m closer to option 1: I see it as plausible that a sufficiently general algorithm could have the same level of sentience as humans.
The third position strikes me as at least as ridiculous as the second. Of course we don’t totally understand biological life, but just saying there’s something “special” is wild. We’re a configuration of non-sentient parts that produce sentience. Computers are also a configuration of non-sentient parts. To claim that there’s no configuration of silicon that could arrive at sentience, but that there is a configuration of carbon that could, is imbuing carbon with properties that seem vastly more complex than the physical reality of carbon would allow.
i think it is plausible to replicate consciousness artificially with machines, and even more plausible to replicate every information processing task in a human brain, but i do not think that purely information processing machines like computers or machines using purely information processing tools like algorithms will be the necessary hardware or software to produce artificial subjectivity.
by ‘special’ i meant not understood. and again, i submit not that it is impossible to make a subjectivity producing object like a brain artificially out of whatever material, but that it is not possible to do so using information processing technologies and theory (as understood in 2023). I don’t think artificial subjectivity is impossible, but i think purely algorithmic artificial subjectivity is impossible. I don’t think that a purely physicalist worldview of a type that discounts the possibility of subjectivity can ever account for subjectivity. i don’t think that subjectivity is explainable in terms of information processing.
here’s a syllogism to sum up my position (i believe i have argued these points sufficiently elsewhere in the thread)
Premise A: Qualia (subjective experiences) exist (a fact supported by many neuroscientists, as per the wikipedia quote in one of my previous posts)
Premise B: Qualia, as subjective experiences, are fundamentally irreducible to information processing. (look up the hard problem of consciousness and the philosophical zombie thought experiment)
Conclusion C: therefore consciousness, which contains (or is identified with or consists of or interacts with or is otherwise related to) Qualia, is irreducible to information processing.
Conclusion D: therefore the most simplistic of physicalist worldviews (those that deny the existence of Qualia and the concept of subjectivity, like that of Daniel Dennett) can never fully account for consciousness.
that’s it, nothing else i’m trying to say other than that. no mysticism, no woo, no soul, no god, no fairies, nothing to offend your delicate aesthetic sensibilities. just stuff we don’t know yet about the brain/mind/universe. no assumptions, just an acknowledgement that we do not have a Unified Theory of Everything and are likely several fundamental paradigm shifts in thinking away in many fields of research from anything resembling one.
deleted by creator
Premise B is where you lost me.
The premise of philosophical zombies is that it’s possible for there to be beings with the same information processing capabilities as us without experience. That is, given the same tools and platforms, they would be having just as intricate discussions about the nature of experience and sentience, without having experience or sentience.
I’m not convinced it’s functionally possible to behave the way we behave when talking & describing sentience without being sentient. I think a being that is functionally identical to me except that it lacks experience wouldn’t be functionally identical to me, because I wouldn’t be interested in sentience if I didn’t have it.
that’s the entire point. if the existence of complex unconscious behaviors (or even just computers and math) proves that information processing can be done without internal subjective experience (if we assume a stone being hit by another stone, for example, is not experiencing subjectivity), and if there is something humans do beyond what is possible for pure information processing, then that is proof that consciousness is fundamentally irreducible to it. if there is something we can do that a philosophical zombie (a person with information processing but not subjectivity) could not, it is because of subjectivity/qualia, not information processing. subjectivity can influence our information processing but is not identical with it.
To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience.
How is that plausible? The human brain has more processing power than a snake’s. Which has more power than a bacterium’s (equivalent of a) brain. Those two things are still experiencing consciousness/sentience. Bacteria will look out for their own interests, will chatGPT do that? No, chatGPT is a perfect slave, just like every computer program ever written
chatGPT : freshman-year-“hello world”-program
human being : amoeba
(the : symbol means it’s being analogized to something)

a human is a sentience made up of trillions of unicellular consciousnesses.
chatGPT is a program made up of trillions of data points. But they’re still just data points, which have no sentience or consciousness.

Both are something much greater than the sum of their parts, but in a human’s case, those parts were sentient/conscious to begin with. Amoebas will reproduce and kill and eat just like us; our lung cells and nephrons and so on are basically little tiny specialized amoebas. ChatGPT doesn’t…do anything, it has no will
To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don’t believe in a soul, or that organic matter has special properties that allows sentience to arise.
this is the popular sentiment with programmers and spectators right now, but even taking all those assumptions as true, it still doesn’t mean we are close to anything.
Consider the complexity of a sentient, multicellular organism. That’s trillions of cells all interacting with each other and the environment concurrently. Even if you reduce that down to just the processes within a brain, that’s still more things happening in and between those neurons than anything we could realistically model in a programme. Programmers like to reduce that complexity down by only looking at the synaptic connections between neurons, and ignoring everything else the cells are doing.
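To give a sense of the reduction involved, here’s roughly what a “neuron” looks like once that simplification is done (a sketch, not any particular library’s model): the whole cell becomes a weighted sum plus a squashing function, and the metabolism, gene expression, glial signalling and neuromodulation simply don’t exist in the model.

```python
import math

# The point-neuron abstraction: a living cell reduced to
# "multiply inputs by synaptic weights, add them up, squash".
# Everything else the cell does is absent from this model.
def neuron(inputs, weights, bias):
    drive = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-drive))  # sigmoid "firing rate"

print(neuron([0.5, 1.0], [0.8, -0.3], 0.1))  # one scalar out, that's it
```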
I’m no philosopher, but at lot of these questions seem very epistemological and not much different from religious ones (i.e. so what changes if we determine that life is a simulation). Like they’re definitely fun questions, but I just don’t see how they’ll be answered with how much is unknown. We’re talking “how did we get here” type stuff
I’m not so much concerned with that aspect as I am about the fact that it’s a powerful technology that will be used to oppress
Yeah, capitalists will use unreliable tech to replace workers. Even if GPT4 is the end all (there’s no indication that it is), that would still displace tons of workers and just result in both worse products for everyone and a worse, more competitive labor market.
You seem to be getting some mixed replies, but I feel like I know what you’ve been trying to convey with most of your comments.
A lot of people have been dismissing LLMs as pure marketing hype (and they very well could be), but it doesn’t change the fact that companies will eventually decide that they can be integrated into other business processes once they reach an “acceptable” error rate. They are really just statistical models at the end of the day. Right now, no C-suite executive worth their salt would decide to let something like GPT write emails, craft reports, code/generate scripts, etc., because there is bound to be some nuance it can’t quite grasp. Pragmatically, I view it in the same way as scrap on an assembly line, but we all know damn well that algorithms can perform a CEO’s role just as well as any other computer-based job (I haven’t really thought about how this tech will be used with robotics but I’m sure there are some implications for that too).
This topic is one that has been deeply fascinating ever since I took an intro cognitive science class on a whim in college lol which is why I have many thoughts (some of which are probably kinda dumb admittedly).
This also just coincides sooooo well considering the fact that I’m just about to finish Bullshit Jobs and recently read a line about how Graeber describes the internet (an LLM’s training set): “A repository of almost all of human knowledge and cultural achievement.”
I don’t know where everyone is getting these in depth understandings of how and when sentience arises.
It’s exactly the fact that we don’t know how sentience forms that makes acting like fucking chatgpt is on the brink of developing it so ludicrous. Neuroscientists don’t even know how it works, so why are these AI hypemen so sure they have it figured out?
The only logical answer is that they don’t and it’s 100% marketing.
Hoping computer algorithms made in a way that’s meant to superficially mimic neural connections will somehow become capable of thinking on their own if they just become powerful enough is a complete shot in the dark.
The philosophy of this question is interesting, but if GPT5 is capable of performing all intelligence-related tasks at an entry level for all jobs, it would not only wipe out a large chunk of the job market, but also stop people from getting to senior positions because the entry level positions would be filled by GPT.
Capitalists don’t have 5-10 years of forethought to see how this would collapse society. Even if GPT5 isn’t “thinking”, it’s actually its capabilities that’ll make a material difference. Even if it never gets to the point of advanced human thought, it’s already spitting out a bunch of unreliable information. Make it slightly more reliable and it’ll be on par with entry-level humans in most fields.
So I think dismissing it as “just marketing” is too reductive. Even if you think it doesn’t deserve rights because it’s not sentient, it’ll still fundamentally change society.
deleted by creator
deleted by creator
It seems you’re both implying here that consciousness is necessarily non-algorithmic because it’s non-finite, but then also admitting in another comment that all human experience is finite, which would necessarily include consciousness.
I don’t get what your point is here. Is all human experience finite? Are some parts of human experience “non-categorical”? I think you need to clarify here.
deleted by creator
So I take it you’re not a determinist? That’s a whole conversation that’s separate from this, but you should know there are a lot of secular people who don’t believe in free will (i.e. a will independent of any causal relationships to physical reality). Secular people are generally deterministic; we believe that wills exist within physical reality, and that they exist in the same cause/effect relationship as everything else.
With enough information about the present, you could know everything a human will do in their lifetime; there’s no will that exists outside of reality that is influencing reality (no will that is “free”). Instead, will is entirely causally linked, like everything else.
Put another way, you’re guaranteed to get the same result every time you put a human in exactly the same situation. Even if there is true chaos in the universe (i.e. pure randomness), that’s a different situation every time you get a different random result.
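A toy illustration of what I mean, under the assumption that the whole “situation” can be captured as an initial state: a deterministic update rule replays the exact same trajectory from the exact same state, and seeded “randomness” is just more state.

```python
import random

# Toy determinism: same initial state + same update rule = same
# trajectory, every run. Seeded "randomness" is just more state.
def run(seed, steps=5):
    rng = random.Random(seed)               # the entire "situation"
    state, trace = 0.0, []
    for _ in range(steps):
        state = 0.9 * state + rng.random()  # cause and effect, nothing else
        trace.append(round(state, 6))
    return trace

assert run(42) == run(42)  # identical situation, identical outcome
print(run(42))
```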
deleted by creator
Every human experience is necessarily finite and made up of steps, insofar as you can break down the experience of your mind into discrete thoughts.
That doesn’t mean it’s algorithmic, though. A whole branch of mathematics (and, as a consequence, physics) is non-algorithmic.
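In case “non-algorithmic” sounds hand-wavy: the textbook example is undecidability. Here’s a sketch of the classic halting-problem diagonalization; `halts` is hypothetical by construction, since the whole point is that no correct, total version of it can exist.

```python
# Sketch of the halting problem: assume some total, correct
# halts(program) -> bool existed, then build a program it must
# get wrong. `halts` is hypothetical; nothing real provides it.
def make_troll(halts):
    def troll():
        if halts(troll):   # told "troll halts"? then loop forever
            while True:
                pass
        return None        # told "troll loops"? then halt at once
    return troll

# Whatever halts(troll) answers about troll() is wrong, so no general
# halting decider can be written. Whole families of well-defined
# mathematical questions are non-algorithmic in exactly this sense.
```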
Also, people created math and computers, and not vice versa. It’s weird to call an organ a ‘meat tool’ of any sort. Your brain isn’t a meat computer, your fingers aren’t meat pliers, your liver isn’t a meat Brita filter. We make tools based on our meat bits quite often. Computers are the same. Our brains aren’t based on computers cause computers are products of our brains meant to do some of the jobs of a brain, so I guess unlike a hammer it’s easier to trick yourself into believing it’s thinking cause it’s a machine made to handle some of the load work of thinking.
I said it at the time when chatGPT came along, and I’ll say it now and keep saying it until or unless the android army is built which executes me:
ChatGPT kinda sucks shit. AI is NOWHERE NEAR what we all (used to?) understand AI to be, i.e. fully sentient, human-equal or better, autonomous, thinking beings.
I know the Elons and shit have tried (perhaps successfully) to change the meaning of AI to shit like chatGPT. But, no, I reject that then, now, and forever. Perhaps people have some “real” argument for different types and stages of AI, and my only preemptive response to them is basically “keep your industry-specific terminology inside your specific industries.” The outside world, normal people, understand AI to be Data from Star Trek or the Terminator. Not a fucking glorified Wikipedia prompt. I think this does need to be straightforwardly stated and their statements rejected because… frankly, they’re full of shit and it’s annoying.
deleted by creator
the average person was always an NPC who goes by optics instead of fundamentals
“good people” to them means clean, resourced, wealthy, privileged
“bad people” means poor, distraught, dirty, refugee, etc.

so it only makes sense that an algorithm box with the optics of a real voice, proper english grammar and syntax, would be perceived as “AI”
I dislike this framing as it’s rather misanthropic and discounts the impact of propaganda. we’ve been losing the war of position but that doesn’t make the average person an NPC. liberalism is like the air - we imbibe it unconsciously. people get on TV and call these algorithms intelligent so people just believe it. when you assume people are incapable of independent thought, you accept that we cannot change their minds. this too is liberal propaganda - that the average person is reactionary, backwards, and only to be controlled.
deleted by creator
ChatGPT can analyze obscure memes correctly when I give it the most ambiguous ones I can find.
Some have taken pictures of blackboards and had it explain all the text and box connections written in the board.
I’ve used it to double the speed I do dev work, mostly by having it find and format small bits of code I could find on my own but takes time.
One team even made a whole game using individual agents to replicate a software development team that codes, analyzes, and then releases games made entirely within the simulation.
“It’s not the full AI we expected” is incredibly inane considering this tech is less than a year old, and is updating every couple weeks. People hyping the technology are thinking about what this will look like after a few years. Apparently the unreleased version is a big enough deal to cause all this drama, and it will be even more unrecognizable in the years to come.
ChatGPT does no analysis. It spits words back out based on the prompt it receives, drawing on a giant set of data scraped from every corner of the internet it can find. There is no sentience, there is no consciousness.
The people that are into this and believe the hype have a lot of crossover with “Effective Altruism” shit. They’re all biased and are nerds that think Roko’s Basilisk is an actual threat.
As it currently stands, this technology is trying to run ahead of regulation and in the process threatens the livelihoods of a ton of people. All the actual damaging shit that they’re unleashing on the world is cool in their minds, but oh no we’ve done too many lines at work and it shit out something and now we’re all freaked out that maybe it’ll kill us. As long as this technology is used to serve the interests of capital, then the only results we’ll ever see are them trying to automate the workforce out of existence and into ever more precarious living situations. Insurance is already using these technologies to deny health claims and combined with the apocalyptic levels of surveillance we’re subjected to, they’ll have all the data they need to dynamically increase your premiums every time you buy a tub of ice cream.
It has plugins for WolframAlpha which gives it many analytical tools.
Out of literally everything I said, that’s the only thing you give a shit enough to mewl back with. “If you use other services alongside it, it’ll spit out information based on a prompt.” It doesn’t matter how it gets the prompt; you could have image recognition software pull out a handwritten equation that is converted into a prompt that it solves for, and it’s still not doing analysis. It’s either doing math, which is something computers have done forever, or it’s still just spitting out words based on massive amounts of training data that was categorized by what are essentially slaves doing mechanical turk work.
You give so little of a shit about the human cost of what these technologies will unleash. Companies want to slash their costs by getting rid of as many workers as possible but your goddamn bazinga brain only sees it as a necessary march of technology because people that get automated away are too stupid to matter anyway. Get out of your own head a little, show some humility, and look at what companies are actually trying to do with this technology.
It has plugins for WolframAlpha which gives it many analytical tools.
so do you and yet here we are
This tech is not less than a year old. The “tech” being used is literally decades old; the specific implementations marketed as LLMs are 3 years old.
People hyping the technology are looking at the dollar signs that come when you convince a bunch of C-levels that you can solve the unsolvable problem, any day now. LLMs are not, and will never be, AGI.
Yeah, I have a friend who was a stat major. He talks about how transformers are new and have novel ideas and implementations, but that much of the work was held back by limited compute power; much of the math was worked out decades ago. Before AI or ML it was once called Statistical Learning; there were 2 or so other names as well, which were used to rebrand the discipline (I believe for funding, but don’t take my word for it).
It’s refreshing to see others talk about its history beyond the last few years. Sometimes I feel like history started yesterday.
Yeah, when I studied computer science 10 years ago most of the theory implemented in LLMs was already widely known, and the academic literature goes back to at least the early 90’s. Specific techniques may improve the performance of the algorithms, but they won’t fundamentally change their nature.
Obviously most people have none of this context, so they kind of fall for the narrative pushed by the media and the tech companies. They pretend this is totally different from anything seen before, and they deliberately give a wink and a nudge toward sci-fi, blurring the lines between what they created and fictional AGIs. Of course they have only the most superficial similarity.
Oh, I didn’t scroll down far enough to see that someone else had pointed out how ridiculous it is to say “this technology” is less than a year old. Well, I think I’ll leave my other comment, but yours is better! It’s kind of shocking to me that so few people seem to know anything about the history of machine learning. I guess it gets in the way of the marketing speak to point out how dead easy the mathematics are and that people have been studying this shit for decades.
“AI” pisses me off so much. I tend to go off on people, even people in real life, when they act as though “AI” as it currently exists is anything more than a (pretty neat, granted) glorified equation solver.
deleted by creator
deleted by creator
I could be wrong, but could it not also be defined as glorified “brute force”? I assume the machine learning part is how to brute force better, but it seems like it’s the processing power to try and jam every conceivable puzzle piece into an empty slot until it’s acceptable? I mean, I’m sure the engineering and tech behind it is fascinating and cool, but at a basic level it’s as stupid as fuck, am I off base here?
no, it’s not brute forcing anything. they use a simplified model of the brain where neurons are reduced to an activation profile and synapses are reduced to weights. neural nets differ in how the neurons are wired to each other with synapses - the simplest models from the 60s only used connections in one direction, with layers of neurons in simple rows that connected solely to the next row. recent models are much more complex in the wiring. outputs are gathered at the end and the difference between the expected result and the output actually produced is used to update the weights. this gets complex when there isn’t an expected/correct result, so I’m simplifying.
the large amount of training data is used to avoid overtraining the model, where you get back exactly what you expect on the training set, but absolute garbage for everything else. LLMs don’t search the input data for a result - they can’t, they’re too small to encode the training data in that way. there’s genuinely some novel processing happening. it’s just not intelligence in any sense of the term. the people saying it is misunderstand the purpose and meaning of the Turing test.
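(for anyone who wants to see what “update the weights” actually means mechanically, here’s a minimal toy sketch in Python/numpy: a one-hidden-layer net learning XOR by gradient descent. all the layer sizes, the learning rate, and the step count are made up for illustration, and a real LLM differs from this in scale and architecture by many orders of magnitude.)

```python
import numpy as np

rng = np.random.default_rng(0)

# toy dataset: XOR, the classic function a single layer can't learn
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# "synapses reduced to weights": one hidden layer, one output layer
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    # "activation profile" of a neuron, squashed to (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1)    # hidden layer activations
    out = sigmoid(h @ W2)  # network output

    # the difference between the expected result and the actual output...
    err = out - y

    # ...is propagated backwards to update the weights
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(np.round(out, 2))  # converges toward [[0], [1], [1], [0]]
```

note there’s nothing mystical in there: it’s the same “compare output to expectation, nudge the weights” loop described above, just repeated at an absurd scale in the commercial models.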
It’s pretty crazy to me how 10 years ago, when I was playing around with NLP and training some small neural nets, nobody I talked to knew anything about this stuff and few were actually interested. But now you see and hear about it everywhere, even on TV lol. It reminds me of how a lot of people today seem to think that NVidia invented ray tracing.
it’s honestly been infuriating, lol. I hate that it got commoditized and mystified like this.
I never said that stuff like chatGPT is useless.
I just don’t think calling it AI and having Musk and his clowncar of companions run around yelling about the singularity within… wait. I guess it already happened based on Musk’s predictions from years ago.
If people wanna discuss theories and such: have fun. Just don’t expect me to give a shit until skynet is looking for John Connor.
How is it not AI? What is left to do?
At this point it’s about ironing out bugs and making it faster. ChatGPT is smarter than a lot of people I’ve met in real life.
deleted by creator
deleted by creator
ChatGPT might be smarter than you, I’ll give you that.
So you can’t name anything, but at least you’re clever.
deleted by creator
It’s not sentient.
You’re right that it isn’t, though considering science has huge problems even defining sentience, it’s a pretty moot point right now. At least until it starts to dream about electric sheep or something.
deleted by creator
Every time these people come out with accusations with “spiritualism”, it’s always projection.
That’s just it, if you can’t define it clearly, the question is meaningless.
The reason people insist on ambiguous language here is that the moment you settle on a specific definition of sentience, someone will quickly show machines doing it.
deleted by creator
So you can’t name a specific task that bots can’t do? Because that’s what I’m actually asking, this wasn’t supposed to be metaphysical.
It will affect society whether or not there’s something truly experiencing everything it does.
All that said, if you think carbon-based things can become sentient and silicon-based things can’t, what is the basis for that belief? It sounds like religious thinking: that humans are set apart from the rest of the world, chosen by god.
A materialist worldview would focus on what things do, what they consume and produce. Deciding humans are special, without a material basis, isn’t in line with materialism.
You asked how chatgpt is not AI.
Chatgpt is not AI because it is not sentient. It is not sentient because it is a search engine; it was not made to be sentient.
Of course machines could theoretically, in the far future, become sentient. But LLMs will never become sentient.
the thing is, we used to know this. 15 years ago, the prevailing belief was that AI would be built by combining multiple subsystems together - a language model, visual processing, a planning and decision-making hub, etc… we know the brain works like this - idk where it all got lost. profit, probably.
Oh, that’s easy. There are plenty of complex integrals, and even statistics problems, that computers still can’t do properly, because the steps for the proper transformation are unintuitive or contradict the steps used for simpler integrals and problems.
You will literally run into them if you take a simple Calculus 2 or Stats 2 class; you’ll see it on Chegg all the time that someone trying to rack up answers for a resume using ChatGPT will fuck up the answers. For many of these integrals, the answers are instead hard-coded into calculators like Symbolab, so the only reason the computer can ‘do it’ is because someone already did it first. It still can’t reason from first principles or extrapolate to complex theoretical scenarios.
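(you can watch this happen yourself with sympy, a real open-source computer algebra system: integrands somebody hand-implemented an algorithm or special case for come back solved, and non-elementary ones just come back unevaluated. a quick sketch, assuming a stock sympy install:)

```python
from sympy import exp, integrate, oo, symbols

x = symbols('x')

# this one works: the Gaussian integral is a known, hand-implemented case
print(integrate(exp(-x**2), (x, -oo, oo)))  # sqrt(pi)

# x**x has no elementary antiderivative; sympy hands it back unevaluated
print(integrate(x**x, x))                   # Integral(x**x, x)
```

the second result isn’t the machine “failing to think”, exactly - it’s that nobody has written down a procedure for it, which is the point: the computer only “does” the integrals someone already did first.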
That said, the ability to complete tasks is not indicative of sentience.
Sentience is a meaningless word the way most people use it, it’s not defined in any specific material way.
You’re describing a faith-based view that humans are special, and that conflicts with the materialist view of the world.
If I’m wrong, share your definition of sentience here that isn’t just an idealist axiom to make humans feel good.
So you can’t name a specific task that bots can’t do?
reproduce without consensual assistance
move
name a specific task that bots can’t do
Self-actualize.
In a strict sense yes, humans do Things based on if > then stimuli. But we self-assign these Things to do, and chatbots/LLMs can’t. They will always need a prompt, even if they could become advanced enough to keep iterating on that prompt on their own.
I can pick up a pencil and doodle something out of an unquantifiable desire to make something. Midjourney or whatever the fuck can create art, but only because someone else asks it to and tells it what to make. Even if we created a generative art bot that was designed to randomly spit out a drawing every hour without prompts, that’s still an outside prompt - without programming the AI to do this, it wouldn’t do it.
Our desires are driven by inner self-actualization that can be affected by outside stimuli. An AI cannot act without us pushing it to, and never could, because even a hypothetical fully sentient AI started as a program.
Bots do something different, even when I give them the same prompt, so that seems to be untrue already.
Even if it’s not there yet, though, what material basis do you think allows humans that capability that machines lack?
Most of the people in this thread seem to think humans have a unique special ability that machines can never replicate, and that comes off as faith-based anthropocentric religious thinking, not the materialist view that underlies Marxism. The latter would require pointing to a specific material structure, or an empirical test to distinguish the two, which no one here is doing.
deleted by creator
This is that meme about butch haircuts and reading lenin
How is it not AI? What is left to do?
literally all of the hard problems
deleted by creator
I especially enjoyed “it has analytical skills because it has access to wolfram alpha”. incredible, unprompted own goal
deleted by creator
ChatGPT is smarter than a lot of people I’ve met in real life.
How? Could ChatGPT hypothetically accomplish any of the tasks your average person performs on a daily basis, given the hardware to do so? From driving to cooking to walking on a sidewalk? I think not. Abstracting and reducing the “smartness” of people to just mean what they can look up on the internet and/or in an encyclopaedia is reductive in this case, and would be reductive even outside the fields of AI and robotics. Even among ordinary people, we recognise the difference between street smarts and book smarts.
How is it not AI? What is left to do?
Well, why are you here talking to us and not to ChatGPT?
deleted by creator
ChatGPT can’t vote.
deleted by creator
In bourgeois dictatorships, voting is useless, it’s a facade. They tell their subjects that democracy=voting but they pick whoever they want as rulers, regardless of the outcome. Also, they have several unelected parts in their government which protect them from the proletariat ever making laws.
Real democracy is when the proletariat rules.
By that I meant any political activity really. This isn’t a defense of electoralism.
Machines are replacing humans in the economy, and that has material consequences.
Holding onto ideas of human exceptionalism is going to mean being unprepared.
A lot of people see minor obstacles for machines, and conclude they can’t replace humans, and return to distracting themselves with other things while their livelihood is being threatened.
Robotaxis are already operating, and a product to replace most customer service jobs was released for businesses to order about a month ago.
Many in this thread are navel-gazing about how those bots won’t really experience anything when they get created, as if that mattered to any of this.
LOL, you are a muppet. The only people who think this shit is good are either clueless marks, or have money in the game and a product to sell. Which are you? Don’t answer that, I can tell.
This tech is less than a year old, burning billions of dollars and desperately trying to find people who will pay for it. That is it. Once it becomes clear that it can’t make money, it will die. Same shit as NFTs and buttcoin. Running an ad for sex asses won’t finance your search engine that talks back in the long term, and it can’t do the things you claim it can, which has been proven by simple tests of the validity of the shit it spews. AKA: As soon as we go past the most basic shit it is just confidently wrong most of the time.
The only thing it’s been semi-successful in has been stealing artists work and ruining their lives by devaluing what they do. So fuck AI, kill it with fire.
AKA: As soon as we go past the most basic shit it is just confidently wrong most of the time.
So it really is just like us, heyo
The only thing you agreed with is the only thing they got wrong
This tech is less than a year old,
Not really.
The only people who tough this shit is good are either clueless marks, or have money in the game and a product to sell
Third option: people who are able to use it to learn and improve their craft, and who are more productive and work fewer hours because of it.
please, for all of our sakes, don’t use chatgpt to learn. it’s subtly wrong in ways that require subject-matter experience to pick apart, and it will contradict itself in ways that sound authoritative, as if they’re rooted in deeper understanding, but they’re extremely not. using LLMs to learn is one of the worst ways to use them. if you want to use one to automate repetitive tasks and you already know enough to supervise it, go for it.
honestly, if I hated myself, I’d go into consulting in about 5ish years when the burden of maintaining poorly written AI code overwhelms a bunch of shitty companies whose greed overcame their senses - such consultants are the only people who will come out ahead in the current AI boom.
deleted by creator
as it turns out,
is a poor way to train both LLMs and people
deleted by creator
such consultants are the only people who will come out ahead in the current AI boom.
It’s absurd you don’t think there are professionals harnessing AI to write code faster, code that is reviewed and verified.
it’s absurd that you think these lines won’t be crossed in the name of profit
Never said they wouldn’t. But you’re saying the ONLY people benefitting from the ai boom are the people cleaning up the mess and that’s just not true at all.
Some people will make a mess
Some people will make good code at a faster pace than before
deleted by creator
I think it works well as a kind of replacement for google searches. This is more of a dig at google, as SEO feels like it ruined search. Ads fill most pages of results, and it’s tiring to come up with the right sequence of words to get the result I would like.
deleted by creator
deleted by creator
Bloody hell those guys are going full adeptus mechanicus already.
deleted by creator
That is, if someone treats it as a binary “either eat elon shit or take the clogs in your hands” problem. Actually dismissing or rejecting it entirely is literally neo-ludditism, though admittedly the problem is of lesser magnitude than the original one, since it’s more of an escalation than an entirely new quality. But it won’t go away; the world will have to live with it.
deleted by creator
I see a lot of potential there for the proletariat, but at the minimum, in capitalism, it would require some really free and accessible tool.
I really love being accused of veering into supernatural territory by people in an actual cult. Not random people on hexbear but actual, real life techbros. Simultaneously lecturing me about my supposed anti-physicalism while also harping on about “the singularity”.
deleted by creator
Again shows that atheism without dialectical materialism is severely lacking and tends to veer into weird idealist takes, especially for the agnostics, who aren’t even atheists, just seeking whatever superstition would fit them.
deleted by creator
It was actually pretty funny. I was interviewing a guy running an illegal aircraft charter operation when he went off on this rant about FAA luddites. I then personally shut down his operation. I guess techbros aren’t used to being told “no.”
deleted by creator
this tech is less than a year old
what? I was working on this stuff 15 years ago and it was already an old field at that point. the tech is unambiguously not new. they just managed to train an LLM with significantly more parameters than we could manage back then because of computing power enhancements. undoubtedly, there have been improvements in the algorithms, but it’s ahistorical to call this new tech.
Where do you get the idea that this tech is less than a year old? Because that’s incredibly false. People have been working with neural nets to do language processing for at least a decade, and probably a lot longer than that. The mathematics underlying this stuff is actually incredibly simple and has been known and studied since at least the 90’s. Any recent “breakthroughs” are more about computing power than a theoretical shift.
I hate to tell you this, but I think you’ve bought into marketing hype.
Perceptrons have existed since the late 50s. Surprised you don’t know this; it’s part of the undergrad CS curriculum, or at least it is at any decent school.
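(for anyone who skipped that lecture: a perceptron is a single thresholded neuron, and Rosenblatt’s learning rule fits in a few lines. a minimal sketch on a toy AND dataset; the learning rate and epoch count here are arbitrary:)

```python
import numpy as np

# toy linearly separable dataset: logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)  # one weight per input
b = 0.0          # bias/threshold

for _ in range(10):  # a few passes over the data is plenty here
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        # the whole learning rule: nudge the weights on each mistake
        w += 0.1 * (target - pred) * xi
        b += 0.1 * (target - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 0, 0, 1]
```

that’s the 50s-era ancestor of everything being marketed today; the famous catch is that a single perceptron can’t learn XOR, which is exactly why multi-layer nets exist.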
deleted by creator
They switched from worshiping Elon Musk to worshiping ChatGPT. There are literally people commenting ChatGPT responses to prompt posts asking for real opinions, and then getting super defensive when they get downvoted and people point out that they didn’t come here to read shit from AI.
I’ve seen this several times now; they’re treating the word-generating parrot like fucking Shalmaneser in Stand on Zanzibar, you literally see redd*tors posting comments that are basically “I asked ChatGPT what it thought about it and here…”.
Like it has remotely any value. It’s pathetic.
deleted by creator
One of them also cited fucking Blade Runner.
“You’re mocking people who think AI is sentient, but here’s a made up story where it really is sentient! You’d look really stupid if you continued to deny the sentience of AI in this scenario I just made up. Stories cannot be anything but literal. Blade Runner is a literal prediction of the future.”
Wow, if things were different they would be different!
deleted by creator
Wow I’m mad, I’m going to read your username aloud to my partner. I’m sure they won’t be weirded out by that at all and blankly stare at me.
deleted by creator
How does it feel to have so much rent-free accommodation? It really is surprising the amount of reactionary bullshit that crops up here. Currently stuck in an argument with someone claiming that 1984 is actually a masterful thesis on propaganda and isn’t actually anti-USSR. There are more pretzels in here than in a bakery!
deleted by creator
The saddest part of all is that it looks like they really are wishing for real life to imitate a futuristic sci-fi movie. They might not come out and say, “I really hope AI in the real world turns out to be just like in a sci-fi/horror movie” but that’s what it seems like they’re unconsciously wishing for. It’s just like a lot of other media phenomena, such as real news reporting on zombie apocalypse preparedness or UFOs. They may phrase it as “expectation” but that’s very adjacent to “hopeful.”
deleted by creator
Yeah I think it was Kim Stanley Robinson who said that sci-fi is taken as religious mythology often, like the prophecy of superluminal space travel or machine superintelligence, very much like prophecies of heaven and a savior god.
Also the point that if you point this out as a myth, whatever your credentials as a sci-fi writer or even a physicist, the faithful will launch a crusade against you
Roko’s Basilisk, but it’s the snake from the Nokia dumb phone game.
deleted by creator
We all did…
Redditors straight up quote marketing material in their posts to defend stuff, it’s maddening. I was questioning a gamer on Reddit about something in a video game, and in response they straight up quoted something from the game’s official website. Critical thinking and media literacy are dead online I swear.
Shit can’t even do my homework right.
It’s still going. The thread is still fucking going.
deleted by creator
deleted by creator
deleted by creator
achieved a breakthrough in mathematics
The bot put numbers in a statistically-likely sequence.
I swear 99% of reddit libs
don’t understand anything about how LLMs work.
deleted by creator
Knowing how AI actually works is a very reliable vaccine against bazinga-ism.
I was gonna say, “Remember when scientists thought testing a nuclear bomb might start a chain reaction enflaming the whole atmosphere and then did it anyway?” But then I looked it up and I guess they actually did calculations and figured out it wouldn’t before they did the test.
Might have been better if it did
No I’m not serious I don’t need the eco-fascism primer thank you very much
He may be a sucker, but at least he is engaging with the topic. The sheer lack of curiosity toward so-called “artificial intelligence” here on hexbear is just as frustrating as any of the bazinga takes on it. No material analysis, no good faith discussion, no strategy to liberate these tools in service of the proletariat - just the occasional dunk post and an endless stream of the same snide remarks from the usuals.
The hexbear party line toward LLMs and similar technologies is straight up reactionary. If we don’t look for ways to utilize, subvert and counter these technologies while they’re still in their infancy then these dorks are going to be the only ones who know how to use them. And if we don’t interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.
Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.
Oh my god it’s this post again.
No, LLMs are not “AI”. No, mocking these people is not “reactionary”. No, cloaking your personal stance on leftist language doesn’t make it any more correct. No, they are not on the verge of developing superhuman AI.
And if we don’t interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.
Have you read like, anything at all in this thread? There is no way you can possibly say no one here is “interacting with the underlying philosophical questions” in good faith. There’s plenty of discussion, you just disagree with it.
Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.
What the fuck are you talking about? We’re “handing it over to them” because we don’t take their word at face value? Like nobody here has been extremely opposed to the usage of “AI” to undermine working class power? This is bad faith bullshit and you know it.
deleted by creator
The sheer lack of curiosity toward so-called “artificial intelligence” here on hexbear is just as frustrating
That’s because it’s not artificial intelligence. It’s marketing.
deleted by creator
I actually told my wife “watch this, I’m going to get a smug reply from UlyssesT,” so thank you for not disappointing.
deleted by creator
Yeah, sharing what’s happening on my phone with my wife is weird.
deleted by creator
Normal behavior
I don’t tell people IRL about my arguments online because I know that shit’s boring to anyone except me
Good job, you recognize cause and effect
I want to hijack this opportunity to ask about pigeons. Are they soft and fluffy?
Yes
Give me ur pigeon
NOOOO PIGIN HOW COULD COMMUNISM BETRAY ME LIKE THIS?!?!
The hexbear party line toward LLMs
this is a shitposting reddit clone, not a political party, but I generally agree that people on here sometimes veer into neo-ludditism and forget Marx’s words with respect to stuff like this:
The enormous destruction of machinery that occurred in the English manufacturing districts during the first 15 years of this century, chiefly caused by the employment of the power-loom, and known as the Luddite movement, gave the anti-Jacobin governments of a Sidmouth, a Castlereagh, and the like, a pretext for the most reactionary and forcible measures. It took both time and experience before the workpeople learnt to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used.
However you have to take the context of these reactions into account. Silicon valley hucksters are constantly pushing LLMs etc. as miracle solutions for capitalists to get rid of workers, and the abuse of these technologies to violate people’s privacy or fabricate audio/video evidence is only going to get worse. I don’t think it’s possible to put Pandora back in the box or to do bourgeois reformist legislation to fix this problem. I do think we need to seize the means of production instead of destroy them. But you need to agitate and organize in real life around this. Not come on here and tell people how misguided their dunk tank posts are lol.
I think their position is heavily misguided at best. The question is whether AI is sentient or not. Obviously they are used against the working class, but that is a separate question from their purported sentience.
Like, it’s totally possible to seize AI without believing in its sentience. You don’t have to believe the techbro woo to use their technology.
We can both make use of LLMs ourselves while disbelieving in their sentience at the same time.
Is that such a radical idea?
We’re not saying that LLMs are useless and we shouldn’t try and make use of them, just that they’re not sentient. Nobody here is making that first point. Attacking the first point instead of the arguments that people are actually making is as textbook a case of strawmanning as I’ve ever seen.
It’s not a new means of production; it’s old as fuck. They just made a bigger one. The fuck is chatgpt or AI art going to do for communism? Automating creativity and killing the creative part is only interesting as a bad thing from a left perspective. It gets dismissed because it deserves dismissal: there’s no new technology here, it’s a souped-up chatbot that’s been marketed as something else.
As far as machines being conscious, we are so far away from that as something to even consider. They aren’t, and can’t spontaneously gain free will. It’s inputs and outputs based on predetermined programming. Computers literally cannot do anything non-deterministic; there is no ghost in the machine, the machine is just really complex and you don’t understand it entirely. If we get to the point where a robot could be seen as sentient, we have fucking Star Trek TNG. They did the discussion and solved that shit.
The fuck is chat gpt or AI art going to do for communism?
I think AI art could be great but chatGPT as a concept of something that “knows everything” is very moronic
AI art has the potential to let random schmucks make their own cartoons if they put in just a little bit of work. However, this will probably require a license fee or something, so you’re probably right.
Personally I would love to see well-made cartoons about Indonesian mythology and stuff like that, which will never ever be made in the west (or Indonesia until it becomes as rich as China at least) so AI art is the best chance at that
Okay, but the only reason that ai art could help that is because Indonesian mythology doesn’t have the marketability for a budget and real artists because capitalism. It doesn’t subvert the commodification of art.
Yeah, and as long as we’re living in that capitalistic hellworld, AI art existing allows those stories to be told instead of the same old euromedieval-hobbit-meadow thing that’s the basis of every fantasy movie and game that came out for the last 60 years
Just cause a computer can make it doesn’t mean anyone will see it. That’s where the capitalism comes in.
Just cause a computer can make it doesn’t mean anyone will see it.
A lot of Indonesian people, and other people (like me) who are interested in other cultures would see it. It would at the very least begin the process of allowing cultural diversity to even reach the rest of the world
As it stands now, poor people in poor countries don’t even have the funds/leisure time to start their own animations (or other similar hobbies). AI art solves that
The reason western art/videogames/cartoons are so popular is not because the culture is inherently more watchable, but because only westerners (and Japanese) ever had the capital to fund their own animation studios. People watch media because it’s well-made, or because it’s already popular and other people are talking about it. AI art can’t fix the latter, but it can fix the former.
Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.
As it stands, the capitalists already have the old means of information warfare – this tech represents an acceleration of existing trends, not the creation of something new. What do you want from this, exactly? Large language models that do a predictive text – but with filters installed by communists, rather than the PR arm of a company? That won’t be nearly as convincing as just talking and organizing with people in real life.
Besides, if it turns out there really is a transformational threat, that it represents some weird new means of production, it’s still just a programme on a server. Computers are very, very fragile. I’m just not too worried about it.
Kinda, but like, the cool ML is alphafold/esm/mpnn/finite-element optimizers for cad/qcd/quantum chemistry (coming soon™). LLMs/diffusion models are ways of multiplying content, fucking up email jobs and static media creators, and presumably dynamic ones as well in the future.
I doubt people are aware that rn biologists are close to making designer proteins on like a home pc, and soon you can wage designer biological warfare for 500k and a small lab.
It’s a glorified speak-n-spell, not one benefit to the working class. A constant, unrelenting push for the democratization of education will do infinitely more for the working class than learning how best to have a machine write a story. Should this be worked on and researched? Absolutely. Should it be kept to the confines of people who thoroughly understand what it is and what it can and cannot do? Yes. We shouldn’t be using this for the same reason you don’t use a gag dictionary for a research project. Grow up.
It has potential for making propaganda. Automated astroturfing more sophisticated than what we currently see being done on Reddit.
astroturfing only works when your views tie into the mainstream narrative. Besides, there’s no competing with the people who have access to the best computers, most coders, and have backdoors and access to every platform. Smarter move is to back up the workers who are having their jobs threatened over this.
New Q* drop lol
Bring on skynet already, since you seem to think you can, you cowards. Tired of all this advertising; shit or get off the pot.
deleted by creator