• krayj@lemmy.world · 2 years ago

    I have noticed the decline. Current ChatGPT is making significantly more factual errors now than it did in the past.

    • Balder@lemmy.world · 2 years ago

      Also feels like no matter how many times you ask it to redo the answer, it kinda shifts between 3 or 4 variations of a similar reply, even for “creative” writing.

      I guess we all knew they were losing money at first and that something would have to change eventually: either it would get more expensive or it would get downgraded.

      Seems like running an open source model locally will soon be a better deal than paying OpenAI, now that people have had a taste of how much better it can be, except maybe for using the API at scale for simpler stuff.
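
      For anyone wondering what running an open model locally actually looks like, here's a rough sketch using the Hugging Face transformers library, with Mistral-7B-Instruct picked purely as an example (neither is mentioned above, and any open-weight chat model your hardware can handle would do):

      ```python
      # Rough sketch: run an open-weight chat model locally with Hugging Face transformers.
      # Assumes `transformers`, `torch` and `accelerate` are installed and the model
      # fits in local memory; the model name is just an example choice.
      from transformers import pipeline

      generator = pipeline(
          "text-generation",
          model="mistralai/Mistral-7B-Instruct-v0.2",  # swap for any open-weight instruct model
          device_map="auto",                           # uses the GPU if one is available
      )

      prompt = "Rewrite this sentence three different ways: 'The ship left the harbor at dawn.'"
      out = generator(prompt, max_new_tokens=120, do_sample=True, temperature=0.9)
      print(out[0]["generated_text"])
      ```

      Once the weights are downloaded the whole thing runs offline, which is the "local" part of the appeal.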