I’ve tried several types of artificial intelligence, including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn’t work, why are they trying to make us all use it?

  • SpaceNoodle@lemmy.world
    6 months ago

    Investors are dumb. It’s a hot new tech that looks convincing (since LLMs are designed specifically to appear correct, not be correct), so anything with that buzzword gets a ton of money thrown at it. The same phenomenon has occurred with blockchain, big data, even the World Wide Web. After each bubble bursts, some residue remains that actually might have some value.

    • pimeys@lemmy.nauk.io
      6 months ago

And LLMs are mostly for investors, not for users. Investors see that you “do AI”, even if you just repackage GPT or Llama, and your Series A is 20% bigger.

  • Feathercrown@lemmy.world
    6 months ago

    Disclaimer: I’m going to ignore all moral questions here

Because it represents a potentially large leap in the types of problems we can solve with computers. Previously, the only comparable tools we had for solving problems were algorithms, which are fast, well-defined, and repeatable, but cannot deal with arbitrary or fuzzy inputs in a meaningful way. AI excels at dealing with fuzzy inputs (including natural language, which was a huge barrier previously), at the expense of speed and reliability. It’s basically an entire missing half of our toolkit.

    Be careful not to conflate AI in general with LLMs. AI is usually implemented as machine learning (ML), which is a method of fitting an output to training data. LLMs are a specific instance of this that are trained on language (hence “large language models”). I suspect that if AI becomes more widely adopted, most users will be interacting with LLMs like you are now, but most of the business benefit would come from classifiers that have a more restricted input/output space. As an example, you could use ML to train a classifier that detects potentially suspicious bank transactions. The more data you have to sort through, the better AI can learn from it*, so I suspect the companies that have been collecting terabytes of data will start using AI to try to analyze it. I’m curious whether that will be effective.

    *technically it depends a lot on the training parameters
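    To make the bank-transaction example above concrete, here is a minimal sketch of fitting a classifier to training data. Everything in it is invented for illustration (the two features, the synthetic “normal” and “suspicious” clusters, the library choice of scikit-learn and NumPy); a real fraud-detection system would look nothing like this.

    ```python
    # Hypothetical sketch: fit a classifier to labeled transaction data,
    # then ask it about a new transaction. All numbers are made up.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Features per transaction: [amount, hour_of_day]. Label 1 = suspicious.
    normal = np.column_stack([rng.normal(50, 20, 200),   # small daytime amounts
                              rng.integers(8, 20, 200)])
    fraud = np.column_stack([rng.normal(900, 100, 20),   # large late-night amounts
                             rng.integers(0, 5, 20)])
    X = np.vstack([normal, fraud])
    y = np.array([0] * 200 + [1] * 20)

    # "Fitting an output to training data", as described above.
    clf = RandomForestClassifier(random_state=0).fit(X, y)

    # A large transfer at 3 a.m. — the model flags it as suspicious.
    prediction = clf.predict([[950.0, 3]])
    ```

    The restricted input/output space (two numeric features in, one binary label out) is exactly what makes this kind of classifier easier to validate than an open-ended LLM.
    
    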

  • just_an_average_joe@lemmy.dbzer0.com
    6 months ago

    Mooooneeeyyyy

    I work as an AI engineer. Let me tell you, the tech is awesome and has a looooot of potential, but it’s not ready yet. Because of that high potential, literally no one wants to miss the opportunity to get rich quick with it. It’s only been 2–3 years since this tech was released to the public. If only OpenAI had released it as open source, just like everyone before them, we wouldn’t be here. But they wanted to make money, and now everyone else wants to too.

  • xia@lemmy.sdf.org
    6 months ago

    The natural general hype is not new… I even see it in 1970s sci-fi. It’s as if, once something pierced the long-thought-impossible Turing test, decades of pent-up hype suddenly and freely flowed.

    There is also an unnatural hype: the belief that one breakthrough will be followed by another, and that the next one might hand the first mover a technocratic singularity, with money, market dominance, and control.

    Which brings the tertiary effect (closer to your question)… companies are so quickly and blindly eating so many billions of dollars in first-mover costs that the corporate copium wants to believe there will be a return (or at least some cost defrayal)… so you get a bunch of shitty AI products, and pressure toward them.

    • 5gruel@lemmy.world
      6 months ago

      When will people finally stop parroting this sentence? It completely misses the point and answers nothing.

    • Kintarian@lemmy.worldOP
      6 months ago

      It’s easier for the marketing department. According to an article, it’s neither artificial nor intelligent.

        • Kintarian@lemmy.worldOP
          6 months ago

          Artificial intelligence (AI) is “artificial” not in the sense of being fake or counterfeit, but in the sense of being a human-created form of intelligence. AI is a real, tangible technology that uses algorithms and data to simulate human-like cognitive processes.

            • Kintarian@lemmy.worldOP
              6 months ago

              Well, using the definition that “artificial” means man-made, then no. Human intelligence wasn’t made by humans, therefore it isn’t artificial.

              • canadaduane@lemmy.ca
                6 months ago

                I wonder if some of our intelligence is artificial. Being able to drive directly to any destination, for example, with a simple cell-phone lookup. Reading lifetimes’ worth of experience in books, which doesn’t naturally come at birth. Learning incredibly complex languages that are inherited not through genes but through environment, and, depending on the language, being able to distinguish different colors.

                • Kintarian@lemmy.worldOP
                  6 months ago

                  From the day I was born, my environment shaped what I thought and felt. Entering the school system, I was indoctrinated into whatever society I was born into. All of the things that I think I know are shaped by someone else. I read a book and regurgitate its contents to other people. I read a post online and start pretending it’s the truth when I don’t actually know. How often do humans actually have an original thought? Most of the time we’re just regurgitating things we’ve experienced, read, or heard from external forces rather than coming up with thoughts on our own.