Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

  • Justice@lemmygrad.ml · 1 year ago

    I said it at the time when chatGPT came along, and I’ll say it now and keep saying it until or unless the android army is built which executes me:

    ChatGPT kinda sucks shit. AI is NOWHERE NEAR what we all (used to?) understand AI to be, i.e. fully sentient, human-equal or better, autonomous, thinking beings.

    I know the Elons and shit have tried (perhaps successfully) to change the meaning of AI to shit like chatGPT. But, no, I reject that then, now, and forever. Perhaps people have some “real” argument for different types and stages of AI and my only preemptive response to them is basically “keep your industry specific terminology inside your specific industries.” The outside world, normal people, understand AI to be Data from Star Trek or the Terminator. Not a fucking glorified Wikipedia prompt. I think this does need to be straightforwardly stated and their statements rejected because… Frankly, they’re full of shit and it’s annoying.

      • sooper_dooper_roofer [none/use name]@hexbear.net · 1 year ago

        the average person was always an NPC who goes by optics instead of fundamentals

        “good people” to them means clean, resourced, wealthy, privileged
        “bad people” means poor, distraught, dirty, refugee, etc

        so it only makes sense that an algorithm box with the optics of a real voice, proper English grammar and syntax, would be perceived as “AI”

        • silent_water [she/her]@hexbear.net · 1 year ago

          I dislike this framing as it’s rather misanthropic and discounts the impact of propaganda. we’ve been losing the war of position but that doesn’t make the average person an NPC. liberalism is like the air - we imbibe it unconsciously. people get on TV and call these algorithms intelligent so people just believe it. when you assume people are incapable of independent thought, you accept that we cannot change their minds. this too is liberal propaganda - that the average person is reactionary, backwards, and only to be controlled.

    • zeze@lemm.ee · 1 year ago

      ChatGPT can analyze obscure memes correctly when I give it the most ambiguous ones I can find.

      Some have taken pictures of blackboards and had it explain all the text and box connections written in the board.

      I’ve used it to double the speed I do dev work, mostly by having it find and format small bits of code I could find on my own but takes time.

      One team even made a whole game using individual agents to replicate a software development team that codes, analyzes, and then releases games made entirely within the simulation.

      “It’s not the full AI we expected” is incredibly inane considering this tech is less than a year old, and is updating every couple of weeks. People hyping the technology are thinking about what this will look like after a few years. Apparently the unreleased version is a big enough leap to cause all this drama, and it will be even more unrecognizable in the years to come.

      • charly4994 [she/her, comrade/them]@hexbear.net · 1 year ago

        ChatGPT does no analysis. It spits words back out based on the prompt it receives, drawing on a giant set of data scraped from every corner of the internet it can find. There is no sentience, there is no consciousness.
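
        Roughly, the mechanism is next-token prediction. Here’s a deliberately crude sketch in Python (illustrative only - real LLMs learn weights over subword tokens rather than building a lookup table like this, and all the names here are made up), but the “spit words back out” loop has this shape:

        import random
        from collections import defaultdict

        # Toy "training data" and a table of which word follows which.
        corpus = "the cat sat on the mat and the cat slept".split()
        table = defaultdict(list)
        for a, b in zip(corpus, corpus[1:]):
            table[a].append(b)

        def generate(start, n=6):
            word, out = start, [start]
            for _ in range(n):
                if word not in table:
                    break
                word = random.choice(table[word])  # sample the next token
                out.append(word)
            return " ".join(out)

        print(generate("the"))  # e.g. "the cat sat on the mat and"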

        The people that are into this and believe the hype have a lot of crossover with “Effective Altruism” shit. They’re all biased and are nerds that think Roko’s Basilisk is an actual threat.

        As it currently stands, this technology is trying to run ahead of regulation and in the process threatens the livelihoods of a ton of people. All the actual damaging shit that they’re unleashing on the world is cool in their minds, but oh no we’ve done too many lines at work and it shit out something and now we’re all freaked out that maybe it’ll kill us. As long as this technology is used to serve the interests of capital, then the only results we’ll ever see are them trying to automate the workforce out of existence and into ever more precarious living situations. Insurance is already using these technologies to deny health claims and combined with the apocalyptic levels of surveillance we’re subjected to, they’ll have all the data they need to dynamically increase your premiums every time you buy a tub of ice cream.

        • zeze@lemm.ee · 1 year ago

          It has a WolframAlpha plugin, which gives it many analytical tools.

          • charly4994 [she/her, comrade/them]@hexbear.net · 1 year ago

            Out of literally everything I said, that’s the only thing you give a shit enough to mewl back with. “If you use other services alongside it, it’ll spit out information based on a prompt.” It doesn’t matter how it gets the prompt, you could have image recognition software pull out a handwritten equation that is converted into a prompt that it solves for, it’s still not doing analysis. It’s either doing math, which is something computers have done forever, or it’s still just spitting out words based on massive amounts of training data that was categorized by what are essentially slaves doing mechanical turks.

            You give so little of a shit about the human cost of what these technologies will unleash. Companies want to slash their costs by getting rid of as many workers as possible but your goddamn bazinga brain only sees it as a necessary march of technology because people that get automated away are too stupid to matter anyway. Get out of your own head a little, show some humility, and look at what companies are actually trying to do with this technology.

      • AlkaliMarxist [he/him]@hexbear.net · 1 year ago

        This tech is not less than a year old. The “tech” being used is literally decades old; the specific implementations marketed as LLMs are 3 years old.

        People hyping the technology are looking at the dollar signs that come when you convince a bunch of C-levels that you can solve the unsolvable problem, any day now. LLMs are not, and will never be, AGI.

        • IzyaKatzmann [he/him]@hexbear.net · 1 year ago

          Yeah, I have a friend who was a stats major, and he talks about how transformers are new and have novel ideas and implementations, but much of the work was held back by limited compute power; much of the math was worked out decades ago. Before “AI” or “ML” the discipline was once called Statistical Learning, and there were two or so other names as well, which were used to rebrand it (I believe for funding, but don’t take my word for it).

          It’s refreshing to see others talk about its history beyond the last few years. Sometimes I feel like history started yesterday.

          • AlkaliMarxist [he/him]@hexbear.net · 1 year ago

            Yeah, when I studied computer science 10 years ago most of the theory implemented in LLMs was already widely known, and the academic literature goes back to at least the early ’90s. Specific techniques may improve the performance of the algorithms, but they won’t fundamentally change their nature.

            Obviously most people have none of this context, so they kind of fall for the narrative pushed by the media and the tech companies. They pretend this is totally different from anything seen before and they deliberately give a wink and a nudge toward sci-fi, blurring the lines between what they created and fictional AGIs. Of course they have only the most superficial similarity.

        • Hexagons [e/em/eir]@hexbear.net · 1 year ago

          Oh, I didn’t scroll down far enough to see that someone else had pointed out how ridiculous it is to say “this technology” is less than a year old. Well, I think I’ll leave my other comment, but yours is better! It’s kind of shocking to me that so few people seem to know anything about the history of machine learning. I guess it gets in the way of the marketing speak to point out how dead easy the mathematics are and that people have been studying this shit for decades.

          “AI” pisses me off so much. I tend to go off on people, even people in real life, when they act as though “AI” as it currently exists is anything more than a (pretty neat, granted) glorified equation solver.

          • spacecadet [he/him]@hexbear.net · 1 year ago

            I could be wrong but could it not also be defined as glorified “brute force”? I assume the machine learning part is how to brute force better, but it seems like it’s the processing power to try and jam every conceivable puzzle piece into an empty slot until it’s acceptable? I mean I’m sure the engineering and tech behind it is fascinating and cool but at a basic level it’s as stupid as fuck, am I off base here?

            • silent_water [she/her]@hexbear.net · 1 year ago

              no, it’s not brute forcing anything. they use a simplified model of the brain where neurons are reduced to an activation profile and synapses are reduced to weights. neural nets differ in how the neurons are wired to each other with synapses - the simplest models from the 60s only used connections in one direction, with layers of neurons in simple rows that connected solely to the next row. recent models are much more complex in the wiring. outputs are gathered at the end and the difference between the expected result and the output actually produced is used to update the weights. this gets complex when there isn’t an expected/correct result, so I’m simplifying.

              the large amount of training data is used to avoid overtraining the model, where you get back exactly what you expect on the training set, but absolute garbage for everything else. LLMs don’t search the input data for a result - they can’t, they’re too small to encode the training data in that way. there’s genuinely some novel processing happening. it’s just not intelligence in any sense of the term. the people saying it is misunderstand the purpose and meaning of the Turing test.
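
              to make the weight-update loop concrete, here’s a minimal sketch (a toy two-layer net learning XOR in Python/numpy - nothing like a real LLM in scale or wiring, and the names are mine, but it’s the same “compare output to expected, nudge the weights” idea):

              import numpy as np

              rng = np.random.default_rng(0)

              def sigmoid(x):
                  return 1.0 / (1.0 + np.exp(-x))

              # XOR: a task the single-layer models of the 60s can't solve.
              X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
              y = np.array([[0], [1], [1], [0]], dtype=float)

              W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden "synapses"
              W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output
              lr = 1.0

              for _ in range(5000):
                  h = sigmoid(X @ W1 + b1)      # hidden activations
                  out = sigmoid(h @ W2 + b2)    # network output
                  err = y - out                 # expected minus actual
                  # propagate the error backwards and update the weights
                  d_out = err * out * (1 - out)
                  d_h = (d_out @ W2.T) * h * (1 - h)
                  W2 += lr * h.T @ d_out; b2 += lr * d_out.sum(axis=0)
                  W1 += lr * X.T @ d_h;  b1 += lr * d_h.sum(axis=0)

              print(out.round(2))  # approaches [[0], [1], [1], [0]] for most seeds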

        • FuckBigTech347@lemmygrad.ml · 1 year ago

          It’s pretty crazy to me how 10 years ago, when I was playing around with NLP and training some small neural nets, nobody I talked to knew anything about this stuff and few were actually interested. But now you see and hear about it everywhere, even on TV lol. It reminds me of how a lot of people today seem to think that NVidia invented ray tracing.

      • Justice@lemmygrad.ml · 1 year ago

        I never said that stuff like chatGPT is useless.

        I just don’t think calling it AI and having Musk and his clowncar of companions run around yelling about the singularity within… wait. I guess it already happened based on Musk’s predictions from years ago.

        If people wanna discuss theories and such: have fun. Just don’t expect me to give a shit until skynet is looking for John Connor.

        • zeze@lemm.ee · 1 year ago

          How is it not AI? What is left to do?

          At this point it’s about ironing out bugs and making it faster. ChatGPT is smarter than a lot of people I’ve met in real life.

            • PolandIsAStateOfMind@lemmygrad.ml · 1 year ago

              You’re right that it isn’t, though considering science has huge problems even defining sentience, it’s a pretty moot point right now. At least until it starts to dream about electric sheep or something.

            • zeze@lemm.ee · 1 year ago

              So you can’t name a specific task that bots can’t do? Because that’s what I’m actually asking; this wasn’t supposed to be metaphysical.

              It will affect society whether or not there’s something truly experiencing everything it does.

              All that said, if you think carbon-based things can become sentient and silicon-based things can’t, what is the basis for that belief? It sounds like religious thinking, that humans are set apart from the rest of the world, chosen by god.

              A materialist worldview would focus on what things do, what they consume and produce. Deciding humans are special, without a material basis, isn’t in line with materialism.

              • m532 [she/her]@hexbear.net · 1 year ago

                You asked how chatgpt is not AI.

                Chatgpt is not AI because it is not sentient. It is not sentient because it is a search engine; it was not made to be sentient.

                Of course machines could theoretically, in the far future, become sentient. But LLMs will never become sentient.

                • silent_water [she/her]@hexbear.net · 1 year ago

                  the thing is, we used to know this. 15 years ago, the prevailing belief was that AI would be built by combining multiple subsystems together - an LLM, visual processing, a planning and decision making hub, etc… we know the brain works like this - idk where it all got lost. profit, probably.

              • TreadOnMe [none/use name]@hexbear.net · 1 year ago

                Oh that’s easy. There are plenty of complex integrals or even statistics problems that computers still can’t do properly because the steps for proper transformation are unintuitive or contradict the steps used for simpler integrals and problems.

                You will literally run into them if you take a simple Calculus 2 or Stats 2 class; you’ll see it on Chegg all the time that someone trying to rack up answers for a resume using ChatGPT will fuck up the answers. For many of these integrals, the answers are instead hard-coded into calculators like Symbolab, so the only reason the computer can ‘do it’ is because someone already did it first. It still can’t reason from first principles or extrapolate to complex theoretical scenarios.
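
                A quick way to see that boundary, assuming you have a symbolic engine like sympy handy (illustration only): it succeeds exactly where someone has implemented a transformation rule, and hands the integral back untouched where no rule applies:

                from sympy import symbols, integrate, exp

                x = symbols('x')

                # A known rule applies (reduces to the error function):
                print(integrate(exp(-x**2), x))  # sqrt(pi)*erf(x)/2

                # No elementary antiderivative, no implemented rule - it
                # returns the integral unevaluated instead of reasoning
                # one out from first principles:
                print(integrate(x**x, x))        # Integral(x**x, x)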

                That said, the ability to complete tasks is not indicative of sentience.

                • zeze@lemm.ee · 1 year ago

                  Sentience is a meaningless word the way most people use it, it’s not defined in any specific material way.

                  You’re describing a faith-based view that humans are special, and that conflicts with the materialist view of the world.

                  If I’m wrong, share your definition of sentience here that isn’t just an idealist axiom to make humans feel good.

                  • TreadOnMe [none/use name]@hexbear.net · 1 year ago

                    Lol, ‘idealist axiom’. These things can’t even fucking reason out complex math from first principles. That’s not a ‘view that humans are special’; that is a very physical limitation of this particular neural network set-up.

                    Sentience is characterized by feeling and sensory awareness, and an ability to have self-awareness of those feelings and that sensory awareness, even as it comes and goes with time.

                    Edit: Btw computers are way better at most math, particularly arithmetic, than humans. Imo, the first thing a ‘sentient computer’ would be able to do is reason out these notoriously difficult CS things from first principles and it is extremely telling that that is not in any of the literature or marketing as an example of ‘sentience’.

                    Damn, this whole thing of dancing around the question and not actually addressing my points really reminds me of a ChatGPT answer. It wouldn’t surprise me if you were using one.

              • KarlBarqs [he/him, they/them]@hexbear.net · 1 year ago

                name a specific task that bots can’t do

                Self-actualize.

                In a strict sense yes, humans do Things based on if > then stimuli. But we self-assign ourselves these Things to do, and chat bots/LLMs can’t. They will always need a prompt, even if they could become advanced enough to continue iterating on that prompt on their own.

                I can pick up a pencil and doodle something out of an unquantifiable desire to make something. Midjourney or whatever the fuck can create art, but only because someone else asks it to and tells it what to make. Even if we created a generative art bot that was designed to randomly spit out a drawing every hour without prompts, that’s still an outside prompt - without programming the AI to do this, it wouldn’t do it.

                Our desires are driven by inner self-actualization that can be affected by outside stimuli. An AI cannot act without us pushing it to, and never could, because even a hypothetical fully sentient AI started as a program.

                • zeze@lemm.ee · 1 year ago

                  Bots do something different, even when I give them the same prompt, so that seems to be untrue already.

                  Even if it’s not there yet, though, what material basis do you think allows humans that capability that machines lack?

                  Most of the people in this thread seem to think humans have a unique special ability that machines can never replicate, and that comes off as faith-based anthropocentric religious thinking, not the materialist view that underlies Marxism. The latter would require pointing to a specific material structure or empirical test to distinguish the two, which no one here is doing.

                  • KarlBarqs [he/him, they/them]@hexbear.net · 1 year ago

                    Most of the people in this thread seem to think humans have a unique special ability that machines can never replicate, and that comes off as faith-based anthropocentric religious thinking- not the materialist view that underlies Marxism

                    First off, materialism doesn’t fucking mean having to literally quantify the human soul in order for it to be valid, what the fuck are you talking about friend

                    Secondly, because we do. We as a species have, from the very moment we invented written records, wondered about that spark that makes humans human, and we still don’t know. To try and reduce the entirety of the complex human experience to the equivalent of an If > Then algorithm is disgustingly misanthropic.

                    I want to know what the end goal is here. Why are you so insistent that we can somehow make an artificial version of life? Why this desire to somehow reduce humanity to some sort of algorithm equivalent? Especially because we have so many speculative stories about why we shouldn’t create The Torment Nexus, not the least of which because creating a sentient slave for our amusement is morally fucked.

                    Bots do something different, even when I give them the same prompt, so that seems to be untrue already.

                    You’re being intentionally obtuse, stop JAQing off. I never said that AI as it exists now can only ever have 1 response per stimulus. I specifically said that a computer program cannot ever spontaneously create an input for itself, not now and imo not ever by pure definition (as, if it’s programmed, it by definition did not come about spontaneously and had to be essentially prompted into life)

                    I thought the whole point of the exodus to Lemmy was because y’all hated Reddit, why the fuck does everyone still act like we’re on it

          • aaaaaaadjsf [he/him, comrade/them]@hexbear.net · 1 year ago

            ChatGPT is smarter than a lot of people I’ve met in real life.

            How? Could ChatGPT hypothetically accomplish any of the tasks your average person performs on a daily basis, given the hardware to do so? From driving to cooking to walking on a sidewalk? I think not. Abstracting and reducing the “smartness” of people to just mean what they can search up on the internet and/or an encyclopaedia is just reductive in this case, and is even reductive outside of the fields of AI and robotics. Even among ordinary people, we recognise the difference between street smarts and book smarts.

              • m532 [she/her]@hexbear.net · 1 year ago

                In bourgeois dictatorships, voting is useless, it’s a facade. They tell their subjects that democracy=voting but they pick whoever they want as rulers, regardless of the outcome. Also, they have several unelected parts in their government which protect them from the proletariat ever making laws.

                Real democracy is when the proletariat rules.

                • zeze@lemm.ee · 1 year ago

                  By that I meant any political activity really. This isn’t a defense of electoralism.

                  Machines are replacing humans in the economy, and that has material consequences.

                  Holding onto ideas of human exceptionalism is going to mean being unprepared.

                  A lot of people see minor obstacles for machines, conclude they can’t replace humans, and return to distracting themselves with other things while their livelihood is being threatened.

                  Robotaxis are already operating, and a product to replace most customer service jobs was released for businesses to order about a month ago.

                  Many in this thread are navel gazing about whether that bot will really experience anything when it gets created, as if that mattered to any of this.

                  • m532 [she/her]@hexbear.net · 1 year ago

                    Bourgies are human exceptionalists. They want human slaves. That’s why they want sentient AI. And that’s why machines will never be able to replace humans in capitalism.

      • NuraShiny [any]@hexbear.net · 1 year ago

        LOL you are a muppet. The only people who think this shit is good are either clueless marks or have money in the game and a product to sell. Which are you? Don’t answer that, I can tell.

        This tech is less than a year old, burning billions of dollars and desperately trying to find people that will pay for it. That is it. Once it becomes clear that it can’t make money, it will die. Same shit as NFTs and buttcoin. Running an ad for sex asses won’t finance your search engine that talks back in the long term, and it can’t do the things you claim it can, which has been proven by simple tests of the validity of the shit it spews. AKA: As soon as we go past the most basic shit it is just confidently wrong most of the time.

        The only thing it’s been semi-successful in has been stealing artists work and ruining their lives by devaluing what they do. So fuck AI, kill it with fire.

        • BeamBrain [he/him]@hexbear.net · 1 year ago

          AKA: As soon as we go past the most basic shit it is just confidently wrong most of the time.

          So it really is just like us, heyo

        • Aabbcc@lemm.ee · 1 year ago

          The only thing you agreed with is the only thing they got wrong

          This tech is less than a year old,

          Not really.

          The only people who think this shit is good are either clueless marks or have money in the game and a product to sell

          Third option: people who are able to use it to learn and improve their craft, and are able to be more productive and work fewer hours because of it.

          • silent_water [she/her]@hexbear.net · 1 year ago

            please, for all of our sakes, don’t use chatgpt to learn. it’s subtly wrong in ways that require subject-matter experience to pick apart, and it will contradict itself in ways that sound authoritative, as if they’re rooted in deeper understanding, but they’re extremely not. using LLMs to learn is one of the worst ways to use them. if you want to use them to automate repetitive tasks and you already know enough to supervise them, go for it.

            honestly, if I hated myself, I’d go into consulting in about 5ish years when the burden of maintaining poorly written AI code overwhelms a bunch of shitty companies whose greed overcame their senses - such consultants are the only people who will come out ahead in the current AI boom.

                  • silent_water [she/her]@hexbear.net · 1 year ago

                    Garbage in, garbage out

                    what’s extremely funny to me is that this exact phrase was used when I was in college to explain why you shouldn’t do exactly what the OpenAI team later did, in courses on AI and natural language processing. we were straight up warned not to do it, with a discussion on ethics centered on “what if it works and you don’t wind up with a model that spews unintelligible gibberish?” (the latter was mostly how it went back then - neural nets were extremely hard to train). there were a couple of kids who were like “…but it worked…” and the professor pointedly made them address the consequences.

                    this wasn’t even some liberal arts school - it was an engineering school that lacked more than a couple of profs qualified to teach philosophy and ethics. it just used to be the normal way the subject was taught, back when it was still normal to discourage the use of neural nets for practical and ethical reasons (like consider almost succeeding and driving a fledgling, sentient being insane because you fed it a torrent of garbage).

                    I went back during the ML boom and sat in on a class - the same prof had cut all of that out of the curriculum. he said it was cause the students complained about it and the new department had told him to focus on teaching them how to write/use the tech and they’d add an ethics class later.

                    agony-acid

                    instead, we just have an entire generation who have been taught to fail the Turing test against a chatbot that can’t remember what it said a paragraph ago. I feel old.

            • Aabbcc@lemm.ee · 1 year ago

              such consultants are the only people who will come out ahead in the current AI boom.

              It’s absurd you don’t think there are professionals harnessing AI to write code faster, code that is reviewed and verified.

                • Aabbcc@lemm.ee · 1 year ago

                  Never said they wouldn’t. But you’re saying the ONLY people benefitting from the ai boom are the people cleaning up the mess and that’s just not true at all.

                  Some people will make a mess

                  Some people will make good code at a faster pace than before

                  • silent_water [she/her]@hexbear.net · 1 year ago

                    those people don’t benefit. they’re paid a wage - they don’t receive the gross value of their labor. the capitalists pocket that surplus value. the people who “benefit” by being able to deliver code faster would benefit more from more reasonable work schedules and receiving the whole of the value they produce.

          • IzyaKatzmann [he/him]@hexbear.net · 1 year ago

            I think it works well as a kind of replacement for Google searches. This is more of a dig at Google, as SEO feels like it has ruined search. Ads fill most pages of results, and it’s tiring to come up with the right sequence of words to get the result I would like.

      • Hexagons [e/em/eir]@hexbear.net · 1 year ago

        Where do you get the idea that this tech is less than a year old? Because that’s incredibly false. People have been working with neural nets to do language processing for at least a decade, and probably a lot longer than that. The mathematics underlying this stuff is actually incredibly simple and has been known and studied since at least the ’90s. Any recent “breakthroughs” are more about computing power than a theoretical shift.

        I hate to tell you this, but I think you’ve bought into marketing hype.

      • silent_water [she/her]@hexbear.net · 1 year ago

        this tech is less than a year old

        what? I was working on this stuff 15 years ago and it was already an old field at that point. the tech is unambiguously not new. they just managed to train an LLM with significantly more parameters than we could manage back then because of computing power enhancements. undoubtedly, there have been improvements in the algorithms, but it’s ahistorical to call this new tech.

      • mittens [he/him]@hexbear.net · 1 year ago

        Perceptrons have existed since the late 1950s. Surprised you don’t know this, it’s part of the undergrad CS curriculum. Or at least it is at any decent school.
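
        For reference, the classic undergrad exercise looks something like this - Rosenblatt’s perceptron learning rule, sketched in Python (a toy example, not from any particular course):

        import numpy as np

        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs for logical AND
        y = np.array([0, 0, 0, 1])                      # target outputs

        w, b, lr = np.zeros(2), 0.0, 0.1

        for _ in range(20):                  # converges quickly on separable data
            for xi, target in zip(X, y):
                pred = 1 if xi @ w + b > 0 else 0
                w += lr * (target - pred) * xi   # update weights only on mistakes
                b += lr * (target - pred)

        print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 0, 0, 1]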