• vzq@lemmy.blahaj.zone (+20/-4) · 10 months ago

    You should see the first version of my code; it’s easily 52% wrong too.

    It doesn’t have to be right to be useful.

    • restingboredface@sh.itjust.works (+11/-1) · 10 months ago

      Yeah, but the non-tech-savvy business leaders see that they can generate code with AI and think ‘why do I need a developer if I have this AI?’, while having no idea whether the code it produces is right or not. This stat should be shared broadly so leaders don’t overestimate the capability and fire people they will desperately need.

      • Scrubbles@poptalk.scrubbles.tech (+6/-1) · 10 months ago

        Yeah, management are all for this. The first few years of this are going to be rough, with them immediately hitting the “fire the engineers, we have AI now” button. They won’t realize their fuckup until they’ve been promoted away from it.

      • Boozilla@lemmy.world (+3/-1) · 10 months ago

        Programming jobs will be safe for a while. They’ve been trying to eliminate those positions since at least the 90s, because coders are expensive and often lack social skills.

        But I do think the clock is ticking. We will see more and more sophisticated AI tools that are relatively idiot-proof and can do things like modify Salesforce, or create complex new Tableau reports with a few mouse clicks, and stuff like that. Jobs will be chiseled away like our unfortunate friends in graphic design.

        • BlameThePeacock@lemmy.ca (+1/-1) · 10 months ago

          You, along with most people, are still looking at automation wrong. It’s never been about removing people entirely, even with AI; it’s about doing the same work at lower cost.

          If you can eliminate one programmer from your four-person team by giving the other three AI to produce the same amount of work, congrats, you’ve just automated one programming job.

          Programming jobs aren’t going anywhere, but either the amount of code produced is about to skyrocket, or the number of employed programmers is going to drop (or most likely both of those things).

          • myliltoehurts@lemm.ee (+2) · 10 months ago

            I wonder if this will also have a reverse tail end effect.

            Company uses AI (with devs) to produce a large amount of code -> code is in prod for a few years with incremental changes -> dev roles rotate or get further reduced over time -> company now needs to modernize and change a very large legacy codebase that nobody understands well enough to even feed into the AI -> now hiring more devs than before to figure out how to manage a legacy codebase 5-10x the size of what the team could realistically handle.

            Writing greenfield code is relatively easy; maintaining it over years, keeping it up to date and well understood while twisting it to fit all the new requirements - now that’s hard.

            • BlameThePeacock@lemmy.ca (+1/-2) · 10 months ago

              AI will help with that too; it’s going to be able to process entire codebases at a time pretty soon.

              Given the visual capabilities now emerging, it can likely also do human-equivalent testing.

              One of the biggest AI tricks we haven’t seen much of in mainstream use yet is this kind of automated double-checking: the model generates an answer, then validates whether that answer holds up before actually giving it to a human. Especially in codebases, there really isn’t anything stopping it from generating an answer, compiling it, hitting an error, regenerating, and repeating until the code passes all unit tests or even, potentially, visual inspection.
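
              A minimal sketch of that kind of loop, assuming a hypothetical generate_code() wrapper around whatever model is in use and a run_tests() helper that just shells out to pytest (an illustration of the idea, not any real library’s API):

              ```python
              import subprocess
              import tempfile

              MAX_ATTEMPTS = 5

              def generate_code(prompt: str) -> str:
                  """Hypothetical call into whatever LLM is being used."""
                  raise NotImplementedError

              def run_tests(source: str) -> tuple[bool, str]:
                  """Write the candidate to a temp file and run pytest against it."""
                  with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                      f.write(source)
                      path = f.name
                  result = subprocess.run(["python", "-m", "pytest", path],
                                          capture_output=True, text=True)
                  return result.returncode == 0, result.stdout + result.stderr

              def generate_until_green(prompt: str) -> str | None:
                  """Only surface an answer once it actually passes the checks."""
                  feedback = ""
                  for _ in range(MAX_ATTEMPTS):
                      candidate = generate_code(prompt + feedback)
                      ok, output = run_tests(candidate)
                      if ok:
                          return candidate
                      feedback = f"\n\nThe previous attempt failed with:\n{output}\nFix it."
                  return None  # give up after a few rounds
              ```

              The validating step could just as easily be a compiler, a linter, or that visual inspection pass; the point is the loop, not the specific check.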

              The big limit on this right now is sheer processing cost and context lengths for the models. However, costs for this are dropping faster than any new tech we’ve seen, and it will likely be trivial in just a few years.

          • Tyrangle@lemmy.world (+0/-1) · 10 months ago

            Right on. AI feels like a looming paradigm shift in our field that we can either scoff at for its flaws or start learning how to exploit for our benefit. As long as it ends up boosting productivity, it’s probably something we’re going to have to learn to work with, if only for job security.

            • BlameThePeacock@lemmy.ca (+0/-2) · 10 months ago

              It’s already boosting productivity in many roles. That’s just going to accelerate as the models get better, the processing gets cheaper, and (as you said) people learn to use it better.

      • NuXCOM_90Percent@lemmy.zip (+1) · 10 months ago

        Mentioned it before but:

        LLMs program at the level of a junior engineer or an intern. You already need code review and more senior engineers to fix that shit for them.

        What they do is shift that down a level. Now that junior engineer has an intern they’re trying to work with. Or… companies realize they don’t benefit from training up those newbie (or stupid) engineers when they are likely to leave in a year or two anyway.

      • piecat@lemmy.world (+2/-2) · 10 months ago

        I say let it happen. If someone is dumb enough to fire all their workers… they deserve what happens next

        • The Dark Lord ☑️@lemmy.ca (+5) · 10 months ago

          It won’t happen like that. Leadership will just under-hire and expect all their developers to be way more efficient. Work will be really stressful, with tighter deadlines and people questioning why you couldn’t meet them.

        • Optional@lemmy.world (+3) · 10 months ago

          Well, the firing’s happening, so I guess let’s hope you’re right about the other part.

    • dsemy@lemm.ee (+6/-1) · 10 months ago

      Yeah cause my favorite thing to do when programming is debugging someone else’s broken code.

      • vzq@lemmy.blahaj.zone (+2) · 10 months ago

        To be fair, I’m starting to fear that all the fun bits of human jobs are the ones that are easiest to automate.

        I dread the day I’m stuck playing project manager to a bunch of chat bots.

    • thehatfox@lemmy.world (+1) · 10 months ago

      Generally you want the reference material used to improve that first version to be correct, though. Otherwise it’s just swapping one problem for another.

      I wouldn’t use a textbook that was 52% incorrect, the same should apply to a chatbot.

    • CeeBee@lemmy.world (+0/-1) · 10 months ago

      Bad take. Is the first version of your code the one that you deliver or push upstream?

      LLMs can give great starting points; I use multiple LLMs, each for different reasons. Usually it’s to clean up something I wrote (too lazy or too busy/stressed to do it manually), find a problem with the logic, or maybe even brainstorm ideas.

      I rarely ever use it to generate blocks of code like asking it to generate “a method that takes X inputs and does Y operations, and returns Z value”. I find that those kinds of results are often vastly wrong or just done in a way that doesn’t fit with other things I’m doing.

  • 0x01@lemmy.ml (+15/-1) · 10 months ago

    I’m a 10 year pro, and I’ve changed my workflows completely to include both chatgpt and copilot. I have found that for the mundane, simple, common patterns copilot’s accuracy is close to 9/10 correct, especially in my well maintained repos.

    It seems like the accuracy of simple answers is directly proportional to the precision of my function and variable names.

    I haven’t typed a full for loop in a year thanks to Copilot; I treat it like an intent autocomplete.
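
    As a rough illustration of that intent-autocomplete workflow (the function name, comment, and body below are hypothetical, just the kind of completion Copilot tends to offer from a precise name):

    ```python
    from collections import defaultdict

    # A precise name plus a short comment is usually enough context...
    def group_invoices_by_customer(invoices: list[dict]) -> dict[str, list[dict]]:
        # ...for the assistant to propose this body, which you accept or tweak.
        grouped: dict[str, list[dict]] = defaultdict(list)
        for invoice in invoices:
            grouped[invoice["customer_id"]].append(invoice)
        return dict(grouped)
    ```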

    Chatgpt on the other hand is remarkably useful for super well laid out questions, again with extreme precision in the terms you lay out. It has helped me in greenfield development with unique and insightful methodologies to accomplish tasks that would normally require extensive documentation searching.

    Anyone who claims LLMs are a nothingburger is frankly wrong. With the right guidance my output has increased dramatically and my error rate has dropped slightly. I used to be able to put out about 1000 quality lines of change in a day (a poor metric, but a useful one) and my output has expanded to at least double that using the tools we have today.

    Are LLMs miraculous? No, but they are incredibly powerful tools in the right hands.

    Don’t throw out the baby with the bathwater.

    • TrickDacy@lemmy.world (+3) · 10 months ago

      Refreshing to see a reasonable response to coding with AI. Never used chatgpt for it but my copilot experience mirrors yours.

      I find it shocking how many developers have such negative thoughts about programming with AI. Some guy recently said “everyone in my shop finds it useless”. It’s hard for me to believe they actually tried Copilot if they think that

    • MajorHavoc@programming.dev (+2) · 10 months ago

      As a fellow pro, who has no issues calling myself a pro, because I am…

      You’re spot on.

      The stuff most people think AI is going to do - it’s not.

      But as an insanely convenient auto-complete, modern LLMs absolutely shine!

    • Specal@lemmy.world (+2) · 10 months ago

      I’ve found that the better I’ve gotten at writing prompts and giving it enough information to not hallucinate, the better the answers I get. It has to be treated as what it is: a calculator that can talk. Make sure it has all of the information and it will find the answer.

      One thing I have found to be super helpful with GPT-4o is the ability to give it full API pages so it can update and familiarise itself with what it’s working with.

    • LyD@lemmy.ca (+3/-2) · 10 months ago

      On the other hand, using ChatGPT for your Lemmy comments sticks out like a sore thumb

      • FaceDeer@fedia.io (+3) · 10 months ago

        If you’re careless with your prompting, sure. The “default style” of ChatGPT is widely known at this point. If you want it to sound different you’ll need to provide some context to tell it what you want it to sound like.

        Or just use one of the many other LLMs out there to mix things up a bit. When I’m brainstorming I usually use Chatbot Arena to bounce ideas around, it’s a page where you can send a prompt to two randomly-selected LLMs and then by voting on which gave a better response you help rank them on a leaderboard. This way I get to run my prompts through a lot of variety.

    • sylver_dragon@lemmy.world (+1) · 10 months ago

      I think AI is good at giving answers to well-defined problems. The issue is that companies keep trying to throw it at poorly defined problems, and the results are less useful. I work in the cybersecurity space and you can’t swing a dead cat without hitting a vendor talking about AI in their products. It’s the new, big marketing buzzword. The problem is that finding the bad stuff on a network is not a well-defined problem. So instead, you get the unsupervised models faffing about, generating tons and tons of false positives. The only useful implementations of AI I’ve seen in these tools actually mirror your own: they can be scary good at generating data queries from natural language prompts. Which is, once again, a well-defined problem.

      Overall, AI is a tool and used in the right way, it’s useful. It gets a bad rap because companies keep using it in bad ways and the end result can be worse than not having it at all.

    • nephs@lemmygrad.ml (+1) · 10 months ago

      Omg, I feel sorry for the people cleaning up after those codebases later. Maintaining that kind of careless “quality” code is going to be a job for actual veterans.

      And when we’re all retired or dead, the whole world will be a pile of alien artifacts from a time when people were still able to figure stuff out, and LLMs will still be ridiculously inefficient for precise tasks, just like today.

      https://youtu.be/dDUC-LqVrPU

    • EatATaco@lemm.ee (+1) · 10 months ago

      Anyone who claims LLMs are a nothingburger is frankly wrong.

      Exactly. When someone says that, it indicates to me either that they’re ignorant (like they aren’t a programmer or haven’t used it), or that they are a programmer who has used it but isn’t good at all at integrating new tools into their development process.

      Don’t throw out the baby with the bathwater.

      Yup. The problem I see now is that every mistake an AI makes is parroted over and over here and held up as an example of why the tech is garbage. But that’s cherry-picking. Yes, they make mistakes; I often scratch my head at the AI results from Google and know to double-check them. But the number of times it has pointed me in the right direction way faster than search results has already shown me how useful it is.

    • raspberriesareyummy@lemmy.world (+2/-9) · 10 months ago

      I’m a 10 year pro,

      You wish. The sheer idea of calling yourself a “pro” disqualifies you. People who actually code and know what they are doing wouldn’t dream of giving themselves a label beyond “coder” / “programmer” / “SW Dev”. Because they don’t have to. You are a muppet.

      • figaro@lemdro.id (+2) · 10 months ago

        Hey! So you may have noticed that you got downvoted into oblivion here. It is because of the unnecessary amount of negativity in your comment.

        In communication, there are two parts - how it is delivered, and how it is received. In this interaction, you clearly stated your point: giving yourself the title of pro oftentimes means the person is not a pro.

        What they received, however, is far different. They received: ugh this sweaty asshole is gatekeeping coding.

        If your goal was to convince this person not to call themselves a pro going forward, this may have been a failed communication event.

  • dgmib@lemmy.world (+9) · 10 months ago

    Sometimes ChatGPT/copilot’s code predictions are scary good. Sometimes they’re batshit crazy. If you have the experience to be able to tell the difference, it’s a great help.

    • EatATaco@lemm.ee (+2) · 10 months ago

      Due to confusing business domain terms, we often name variables in the form XY and YX.

      One time Copilot autogenerated about two hundred lines of a class that was like: XY; YX; XXY; XYX; XYXY; … XXYYXYXYYYXYXYYXY;

      It was pretty hilarious.

      But that being said, it’s a great tool that has definitely proven to be worth the cost… but like with a co-op, you have to check its work.

  • Boozilla@lemmy.world (+8) · edited · 10 months ago

    It’s been a tremendous help to me as I relearn how to code on some personal projects. I have written 5 little apps that are very useful to me for my hobbies.

    It’s also been helpful at work with some random database type stuff.

    But it definitely gets stuff wrong. A lot of stuff.

    The funny thing is, if you point out its mistakes, it often does better on subsequent attempts. It’s more an iterative process of refinement than one prompt giving you the final answer.

    • Downcount@lemmy.world (+3) · 10 months ago

      The funny thing is, if you point out its mistakes, it often does better on subsequent attempts.

      Or it gets stuck in an endless loop of two different but wrong solutions.

      Me: This is my system, version x. I want to achieve this.

      ChatGpt: Here’s the solution.

      Me: But this only works with Version y of given system, not x

      ChatGpt: <Apology> Try this.

      Me: This is using a method that never existed in the framework.

      ChatGpt: <Apology> <Gives first solution again>

      • mozz@mbin.grits.dev (+3) · 10 months ago

        1. “Oh, I see the problem. In order to correct (what went wrong with the last implementation), we can (complete code re-implementation which also doesn’t work)”
        2. Goto 1
      • UberMentch@lemmy.world (+1) · 10 months ago

        I used to have this issue more often as well. I’ve had good results recently by **not** pointing out mistakes in replies, but by going back to the message before GPT’s response and saying “do not include y.”

    • mozz@mbin.grits.dev (+2) · 10 months ago

      It’s incredibly useful for learning. ChatGPT was what taught me to unlearn, essentially, writing C in every language, and how to write idiomatic Python and JavaScript.
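
      A made-up example of that shift (mine, not from the thread): the C-accented habit is to index through a list by hand, where idiomatic Python reaches for a comprehension:

      ```python
      numbers = [1, 2, 3, 4]

      # "C written in Python": manual indexing and appending
      squares = []
      for i in range(len(numbers)):
          squares.append(numbers[i] * numbers[i])

      # Idiomatic Python: a list comprehension over the values themselves
      squares = [n * n for n in numbers]
      ```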

      It is very good for boilerplate code or fleshing out a big module without you having to do the typing. My experience was just like yours; once you’re past a certain (not real high) level of complexity you’re looking at multiple rounds of improvement or else just doing it yourself.

      • CeeBee@lemmy.world (+1) · 10 months ago

        It is very good for boilerplate code

        Personally I find all LLMs in general not that great at writing larger blocks of code. It’s fine for smaller stuff, but the more you expect out of it the more it’ll get wrong.

        I find they work best with existing stuff that you provide. Like “make this block of code more efficient” or “rewrite this function to do X”.

  • Crisps@lemmy.world (+6) · 10 months ago

    In the short term it really helps productivity, but in the end the reward for working faster is more work. Just doing the hard parts all day is going to burn developers out.

    • birbs@lemmy.world (+2) · 10 months ago

      I program for a living and I think of it more as doing the interesting tasks all day, rather than the mundane and repetitive. Chat GPT and GitHub Copilot are great for getting something roughly right that you can tweak to work the way you want.

      • lurch (he/him)@sh.itjust.works (+1) · 10 months ago

        The one time it was helpful at work was when I used it to thank, and wish well, a person who was leaving a company we work with. I couldn’t come up with a good response and ChatGPT just spat real good stuff out in seconds. This is what it’s really good for.

  • katy ✨@lemmy.blahaj.zone (+3) · 10 months ago

    I’ll use Copilot in place of most of the searches I used to do on Stack Overflow, or to do mundane things like generating repetitive code, but relying solely on it is the same as relying solely on Stack Overflow.

  • Veraxus@lemmy.world (+2) · 10 months ago

    I’m surprised it scores that well.

    Well, ok… that seems about right for languages like JavaScript or Python, but try it on languages with a reputation for being widely used to write terrible code like Java or PHP (meaning it’s been trained on garbage code), and it’s actively detrimental to even experienced developers.

  • Epzillon@lemmy.ml (+2) · 10 months ago

    I worked for a year developing in Magento 2 (an open-source e-commerce suite which was later bought up by Adobe; it is not well maintained and is just all around not nice to work with). I tried asking ChatGPT some Magento 2 questions to figure out solutions to my problems, but clearly the only data it was trained on was a lot of really bad solutions from forum posts.

    The solutions did kinda work some of the time, but the way it suggested implementing them was absolutely horrifying. We’re talking opening up numerous vulnerabilities, breaking many parts of the suite as a whole, or just editing database tables directly. If you do not know enough about the tools you are working with, implementing solutions from ChatGPT can be disastrous, even if they end up working.

  • floofloof@lemmy.ca (+2) · edited · 10 months ago

    What’s especially troubling is that many human programmers seem to prefer the ChatGPT answers. The Purdue researchers polled 12 programmers — admittedly a small sample size — and found they preferred ChatGPT at a rate of 35 percent and didn’t catch AI-generated mistakes at 39 percent.

    Why is this happening? It might just be that ChatGPT is more polite than people online.

    It’s probably more because you can ask it your exact question (not just search for something more or less similar) and it will at least give you a lead that you can use to discover the answer, even if it doesn’t give you a perfect answer.

    Also, who does a survey of 12 people and publishes the results? Is that normal?

    • brbposting@sh.itjust.works (+2) · 10 months ago

      I have 13 friends who are researchers and they publish surveys like that all the time.

      (You can trust this comment because I peer reviewed it.)

  • Max-P@lemmy.max-p.me (+1) · 10 months ago

    I don’t even bother trying with AI; it’s not been helpful to me a single time despite multiple attempts. That’s a 0% success rate for me.

    • anachronist@midwest.social (+2) · 10 months ago

      “Self driving cars will make the roads safer. They won’t be drunk or tired or make a mistake.”

      Self driving cars start killing people.

      “Yeah but how do they compare to the average human driver?”

      Goal post moving.