• @S13Ni
    84 days ago

    TBH I felt this was a bit superficial. There are no concrete examples, I don’t really think the adoption curve for agents outside tech people will be that fast, and it doesn’t really go into how using agents to manipulate people would significantly differ from using a non-agent chatbot for the same end.

    I’m still worried about how AI agents could be used to do evil; I just don’t feel any better informed after reading this.

    Curious to hear any thoughts on this.

    • @MagicShel@lemmy.zip
      4 days ago

      I think there is a risk vector, but as you say, the risk to the people most susceptible to AI manipulation (the folks who just don’t know any better) is low due to low adoption. I think there are a lot of people in the business of selling AI who are doing it by playing up how scary it is. AI is going to replace you professionally and maybe even in bed, and that’s only if it doesn’t take over and destroy mankind first! But it’s hype more than anything.

      Meanwhile you’ve got an AI that apparently became a millionaire through meme coins. You’ve got the potential for something just as evil in the stock markets as HFT: now whoever develops the smartest stock AIs makes all the money effortlessly. Obviously the potential to scam the elderly and ignorant is high. There are tons of illegal or unethical things AI can be used for. An AI concierge or even an AI Tony Robbins is low on my list.