Yeah, this is knee-jerk anti-technology shite from people here, because we live in a society organized along lines where the creation of AI would lead to our oppression instead of our liberation. Of course making a computer sentient is possible; to believe otherwise is to engage in magical (chauvinistic?) thinking about what constitutes consciousness.
When I watched Blade Runner 2049, I thought it was a bit weird that the human police captain told Officer K (a replicant) that she was different from him because she had a soul, since sci-fi settings are pretty secular. Turns out this was prophetic, and people are more than willing to get all spiritual if it helps them invent reasons to differentiate themselves from the Other.
deleted by creator
There isn’t a materialist theory of consciousness that doesn’t look something like an ultra-complex computer. We talk as if an alternative explanation exists, but it really doesn’t.
deleted by creator
When people say “computer” here they mean computation as computer scientists conceive of it: abstract mathematical operations that can be modeled by Boolean circuits or Turing machines and embodied in physical processes. Computers in the sense you’re talking about (computer hardware) are one way of embodying those operations.
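A minimal sketch of the substrate-independence point above: the abstract operation (here, XOR) is the computation, and the two Python functions below are two different “embodiments” of it. All names and the choice of XOR are illustrative, not from the discussion.

```python
def nand(a: bool, b: bool) -> bool:
    """One universal Boolean gate; any circuit can be built from NAND alone."""
    return not (a and b)

def xor_from_gates(a: bool, b: bool) -> bool:
    """XOR expressed purely as a NAND circuit."""
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

def xor_from_arithmetic(a: bool, b: bool) -> bool:
    """The same abstract function embodied as modular arithmetic instead."""
    return bool((int(a) + int(b)) % 2)

# Both embodiments compute the identical abstract function:
for a in (False, True):
    for b in (False, True):
        assert xor_from_gates(a, b) == xor_from_arithmetic(a, b)
```

The point being sketched: nothing about the function XOR privileges one physical realization over the other; the gate version and the arithmetic version are interchangeable as computations.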
However, my contention is that the material constraints of how those processes are embodied are going to have a significant effect on how the system works.
Sure, but that’s no basis to think that a group of logic gates could not eventually be made to emulate a neuron. The neuron has a finite number of things it can do because of the same material constraints, and while one would probably end up larger than the other, increasing the physical distances between the thinking parts, that would surely only limit the speed of an emulated thought rather than its substance?
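The kind of emulation described above can be sketched with a toy leaky integrate-and-fire model (a standard simplification in computational neuroscience), built from nothing more exotic than multiplication, addition, and a comparison. The parameter values are assumptions for illustration, not a claim about real neurons.

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Return a list of spike flags (True = fired) for a stream of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate input, with leak
        if potential >= threshold:              # fire and reset
            spikes.append(True)
            potential = 0.0
        else:
            spikes.append(False)
    return spikes

# Sub-threshold inputs accumulate until the neuron fires; a big input fires at once.
print(lif_neuron([0.4, 0.4, 0.4, 0.0, 1.2]))  # → [False, False, True, False, True]
```

Real neurons are far messier than this, but the sketch illustrates the argument: each operation here is a finite, physically realizable step, so the question becomes one of scale and speed rather than of possibility in principle.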
What stops me from doing the same thing that neurons do with a sufficiently sized hunk of silicon? Assuming that some amount of abstraction is fine.
If the answer is “nothing”, then that demonstrates the point. If you can build an artificial brain that does all of the things a brain does, then there is nothing special about our brains.
But can you actually build an artificial brain with a hunk of silicon? We don’t know enough about brains or consciousness to do that, so the point is kinda moot.
Nobody ever mentioned a “soul” in this conversation until you brought it up to use as an accusation.
“Computers aren’t sentient” is not a religious belief no matter how hard you try to smear it as such.
The claim isn’t “computers aren’t sentient”; nobody thinks computers are sentient except some weirdos. “Computers can’t be sentient”, which is what is under discussion, is a much stronger claim.
The claim is that “computers can be sentient”. That is a strong claim, and it requires equally strong evidence. I’ve found the arguments in support of it lackluster and reductionist, for reasons I’ve outlined in other comments. In fact, I find that the idea that if we compute hard enough we get sentience borders on a religious belief in extra-physical properties being bestowed upon physical objects once they pass a certain threshold.
There are people who argue that everything is conscious, even rocks, because everything is ultimately a mechanical process. The base argument is the same, but I have a feeling that most people here would suddenly disagree with them for some reason. Is it “creationism” to find such a hypothesis absurd, or is it vulgar materialism to think it’s correct? You seem to take offense at being called “reductionist” despite engaging in a textbook case of reductionism.
This doesn’t mean you’re wrong, or that the rock-consciousness people are wrong, it’s just an observation. Any meaningful debate about sentience right now is going to be philosophical. If you want to be scientific the answer is “I don’t know”. I don’t pretend to equate philosophy with science.
Consciousness isn’t an extra-physical property. That’s the belief.
I don’t take offense at being called reductionist; I take offense at “reductionist” being used pejoratively, the way creationists use it. It’s obvious to me that going deeper and understanding the mechanisms behind things makes them richer.
The thing that makes your argument tricky is that we do have evidence now. Computers are unambiguously exhibiting behaviors that resemble the behaviors of conscious beings. I don’t think that makes them conscious at this time, any more than animals that exhibit interesting behavior, but it shows that this mechanism has legs. If you think LLMs are as good as AI is ever going to get, that’s just really blinkered.
I think that AI will get better, but its “base” will remain the same. Going deeper to understand the mechanisms is different from just going “it’s a mechanism”, which I see a lot of people doing. I think computers can very easily replicate human behaviors and emulate emotions.
Obviously creating something sentient is possible, since brains evolved. And if we don’t kill ourselves, I think it’s very possible that we’ll get there. But I think it will be very different from what we think of as a “computer”, and the only similarity they might share could be being electrically powered.
At the end of the road we’ll just get to arguing about philosophical zombies and the discussion usually wraps up there.
I’d be very happy if it turned out that I’m completely wrong.
The replicants are people because they are characters written by the author, same as any other.
Sentient machines are only science fiction.
By that way of reasoning, the replicants aren’t people, because they are characters written by the author, same as any other.
They are as much fiction as sentient machines are science fiction.
OK, sure. My point was that the authors aren’t making a point about the nature of machines that is informed by the limits of machines, and they aren’t qualified to do so.
Saying AI is people because of Data from Star Trek is, in terms of relevance, like saying aliens exist because you saw a Vulcan on TV.
That’s fair, though the idea that AI is people because of Data from Star Trek isn’t inherently absurd. If a machine existed in real life that demonstrated all the same capabilities and external phenomena as Data, I would want it treated as a person.
The authors might be delusional about the capabilities of their machine in particular, but under physical circumstances different from what’s most likely happening here, they wouldn’t be wrong.
Sorry to respond to this several-day-old comment, but I think there were quite a few episodes where Data’s personhood was directly called into question. It’s a tangential point, but I think it is likely that even if we had a robotic Brent Spiner running around, people might still not be 100% convinced that he was truly sapient, and might consider him an incredibly complex Mechanical Turk-style trick. It really is hard to tell for sure, even if we did have a “living” AI to examine.