How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can’t manage this consistently with CRUD apps and people think that this number isn’t laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?

I don’t believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.

  • IHeartBadCode@kbin.run · 9 months ago

    I had my fun with Copilot before I decided that it was making me stupider - it’s impressive, but not actually suitable for anything more than churning out boilerplate.

    This. Many of these tools are good at incredibly basic boilerplate, just a step beyond what a project wizard would generate. But to hear some of these AI grifters talk, this stuff is going to render programmers obsolete.

    There’s a reality to these tools. That reality is they’re helpful at times, but they are hardly transformative at the levels the grifters go on about.

    • Zikeji@programming.dev · 9 months ago

      Copilot / LLM code completion feels like having a somewhat intelligent helper who can think faster than I can; it has no real understanding of how to code, but it's good at mimicry.

      So it’s helpful for saving time typing some stuff, and sometimes the absolutely weird suggestions make me think of other scenarios I should consider, but it’s not going to do the job itself.

    • sugar_in_your_tea@sh.itjust.works · 9 months ago

      I interviewed a candidate for a senior role, and they asked if they could use AI tools. I told them to use whatever they normally would, I only care that they get a working answer and that they can explain the code to me.

      The problem was fairly basic: something like randomly generate two points and find the distance between them, and we had given them the details (e.g. distance is a straight line). They used AI, which went well until it generated the Manhattan distance instead of the Euclidean distance from the Pythagorean theorem. They didn't correct it, so we pointed it out and gave them the equation (totally fine, most people forget it under pressure). Anyway, they refactored the code and used AI again, which made the same mistake; they didn't catch it, and we ended up pointing it out again.
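      For context, here's a minimal sketch of the kind of exercise described; the function names and coordinate range are illustrative, not the actual interview problem:

```python
import math
import random

def random_point(lo=-10.0, hi=10.0):
    """Pick a random 2D point with each coordinate in [lo, hi]."""
    return (random.uniform(lo, hi), random.uniform(lo, hi))

def euclidean_distance(p, q):
    """Straight-line distance (Pythagorean theorem) -- what was asked for."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def manhattan_distance(p, q):
    """Sum of axis-aligned differences -- the mistake the AI made twice."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

a, b = random_point(), random_point()
print(euclidean_distance(a, b))
```

      A unit test on a known pair, e.g. asserting that the distance from (0, 0) to (3, 4) is 5 rather than 7, would have caught the Manhattan-distance bug immediately.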

      Anyway, at the end of the challenge, we asked them how confident they felt about the code and what they’d need to do to feel more confident (nudge toward unit testing). They said their code was 100% correct and they’d be ready to ship it.

      They didn’t pass the interview.

      And that’s generally my opinion of AI: it’s probably making you stupider.

    • 0x0@programming.dev · 9 months ago

      I use them like Wikipedia: a good starting point and that’s it (and this comparison is a disservice to Wikipedia).

    • Shadywack@lemmy.world · 9 months ago

      Looks like two people suckered by the grifters downvoted your comment (as of this writing). Should they read this: it is a grift; get over it.

    • AIhasUse@lemmy.world · 9 months ago

      Yes, and then you take the time to dig a little deeper and use something agent-based like aider or crewai or autogen. It is amazing how many people are stuck in the mindset of “if the simplest tools from over a year ago aren’t very good, then there’s no way there are any good tools now.”

      It’s like seeing the original Planet of the Apes and then arguing against how realistic the Apes are in the new movies without ever seeing them. Sure, you can convince people who really want unrealistic Apes to be the reality, and people who only saw the original, but you’ll do nothing for anyone who actually saw the new movies.

      • foenix@lemm.ee · 9 months ago

        I’ve used crewai and autogen in production… And I still agree with the person you’re replying to.

        The two main problems with agentic approaches I’ve discovered thus far:

        • One mistake or hallucination will propagate to the rest of the agentic task. I’ve even tried adding a QA agent for this purpose, but those QA agents aren’t reliable either, which also leads to the main issue:

        • It’s very expensive to run and rerun agents at scale. Because each agent can call other agents, the number of calls can grow exponentially. My colleague at one point ran a job that cost $15 for what could have been a simple task.
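        The scaling concern above can be sketched with a toy model; the branching factor, depth, and per-call cost below are illustrative assumptions, not real API pricing:

```python
# Back-of-envelope model: if each agent can delegate to `branch`
# sub-agents, up to `depth` levels deep, total LLM calls grow geometrically.
def total_calls(branch: int, depth: int) -> int:
    """Calls at level d = branch**d; sum over levels 0..depth."""
    return sum(branch ** d for d in range(depth + 1))

def estimated_cost(branch: int, depth: int, cost_per_call: float = 0.05) -> float:
    """cost_per_call is a made-up illustrative figure, not a real price."""
    return total_calls(branch, depth) * cost_per_call

print(total_calls(3, 4))  # 121 calls for 3 sub-agents, 4 levels deep
```

        Even modest branching factors blow up quickly, which is why one retry loop over a failed sub-task can end up dominating the bill.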

        One last consideration: the current LLM providers are very aware of these issues or they wouldn’t be as concerned with finding “clean” data to scrape from the web vs using agents to train agents.

        If you’re using crewai btw, be aware there is some builtin telemetry with the library. I have a wrapper to remove that telemetry if you’re interested in the code.

        Personally, I’m kinda done with LLMs for now and have moved back to my original machine learning pursuits in bioinformatics.

      • FaceDeer@fedia.io · 9 months ago

        Also, a lot of people who are using AI have become quiet about it of late exactly because of reactions like this article’s. Okay, you’ll “piledrive” me if I mention AI? So I won’t mention AI. I’ll just carry on using it to make whatever I’m making without telling you.

        There’s some great stuff out there, but of course people aren’t going to hear about it broadly if every time it gets mentioned it gets “piledriven.”

  • Spesknight@lemmy.world · 9 months ago

    I don’t fear Artificial Intelligence, I fear Administrative Idiocy. The managers are the problem.

      • sugar_in_your_tea@sh.itjust.works · 9 months ago

        Fortunately, it’s my job as your boss to convince my boss and my boss’s boss that AI can’t replace you.

        We had a candidate spectacularly fail an interview when they used AI and didn’t catch the incredibly obvious errors it made. I keep a few examples of that handy to defend my peeps in case my boss or boss’s boss decide AI is the way to go.

        I hope your actual boss would do that for you.

          • sugar_in_your_tea@sh.itjust.works · 9 months ago

            I’m so sorry.

            My boss asked if I wanted to be a manager, and I said no, but that I’d take the position if offered so it doesn’t go to a non-technical person. I wish that were more common elsewhere.

            Good luck, sir or madam.

            • bionicjoey@lemmy.ca · 9 months ago (edited)

              Well, my office recently announced that we’ll be going from 0 days mandatory in office to 3 days a week. After working fully remote for the last few years, I’ll kms before going back, so I’m on the way out anyway.

              • sugar_in_your_tea@sh.itjust.works · 9 months ago

                That sucks. We do two days in office, but that was also always the agreement; we were just temporarily remote during COVID (though almost all of us were hired during COVID). My boss tried three days in office due to company policy, but we hated it and went back to two.

                I cannot stand orgs going back on their word without agreement from the team. I hope you find someplace better.

                • bionicjoey@lemmy.ca · 9 months ago

                  Thanks, I’m sure I’ll land on my feet. I have a pretty unique skillset for IT (Science HPC admin) and I’m thinking about maybe going back to school and doing a Master’s.

        • Kaput@lemmy.world · 9 months ago

          They’ll replace you first, so they can replace your employees… even though you are clearly right.

  • AIhasUse@lemmy.world · 9 months ago

    I don’t know how much stock to put in this author. They can’t even read the chart that they shared. They saw that 8% didn’t get use from gen AI and so assumed that 92% did, but there are also 7% that haven’t tried using it yet. Ironically, pretty much any LLM with vision would have done a better job of comprehending the chart than this author did.

  • kingthrillgore@lemmy.ml · 9 months ago (edited)

    Hacker News was silencing this article outright. That’s typically a sign that it’s factual enough to strike a nerve with the potential CxO libertarian [slur removed] crowd.

    If this is satire, I don’t see it, because I’ve seen enough of the GenAI crowd openly undermine society/the environment/the culture and be brazen about it; violence is a perfectly normal response.

  • jaaake@lemmy.world · 9 months ago

    After reading that entire post, I wish I had used AI to summarize it.

    “I am not in the equally unserious camp that generative AI does not have the potential to drastically change the world. It clearly does. When I saw the early demos of GPT-2, while I was still at university, I was half-convinced that they were faked somehow. I remember being wrong about that, and that is why I’m no longer as confident that I know what’s going on.”

    This pull quote feels like it’s antithetical to their entire argument and makes me feel like all they’re doing is whinging about the fact that people who don’t know what they’re talking about have loud voices. Which has always been true and has little to do with AI.

    • AIhasUse@lemmy.world · 9 months ago

      Yeah, this paper is a waste of time. It is hilarious that they think three years is a long time as a data scientist and that this somehow gives them such wisdom. Then they can’t even accurately extract the data from the chart that they posted in the article. On top of all this, like you pointed out, they can’t even keep a clear narrative, and they blatantly contradict themselves on their main point. They want to piledrive people who come to the same conclusion as themselves. What a strange take.

  • Rumbelows@lemmy.world · 9 months ago

    I feel like some people in this thread are overlooking the tongue-in-cheek nature of this humour post and taking it weirdly personally.

    • Eccitaze@yiffit.net · 9 months ago

      Yeah, that’s what happens when the LLM they use to summarize these articles strips all nuance and comedy.

    • amio@kbin.run · 9 months ago

      Even for the internet, this place is truly extremely fond of doing that.

  • madsen@lemmy.world · 9 months ago

    This is such a fun and insightful piece. Unfortunately, the people who really need to read it never will.

    • AIhasUse@lemmy.world · 9 months ago

      It blatantly contradicts itself. I would wager good money that you read the headline and didn’t go much further because you assumed it was agreeing with you. Despite the subject matter, this is objectively horribly written. It lacks a cohesive narrative.