you can’t spell fail without AI.
feɪl
phayl
Human creativity for the win!
Фэил
Capitalism wastes money chasing the new shiny tech thing
Yeah, we know. AI’s not special.
And I was always taught that capitalism allocates the resources ideally. /s
Isn’t it good that the money is being put back into circulation instead of being hoarded? I’m all for the wealthy wasting their money.
I’m willing to bet the vast majority of that money is changing hands among tech companies like Intel, AMD, Nvidia, AWS, etc. Only a small percentage would go to salaries and the like, and I doubt those rates have changed much…
Wasting?
A bunch of rich guys’ money going to other people, enriching some of the recipients, in hopes of making the rich guys even richer? And the point of AI is to eliminate jobs that cost rich people money?
I’m all for more foolish, failed AI investments.
To be fair, a large fraction of software projects fail. AI is probably worse because there’s often little notion of how AI actually applies to the problem, so execution is hampered from the start.
https://www.nbcnews.com/id/wbna27190518
https://www.zdnet.com/article/study-68-percent-of-it-projects-fail/
This was my first thought. VCs expect 4 out of 5 projects they invest in to fail, and always have. But it still makes them money because the successes pay off big. Are the money and resources wasted? Welcome to modern capitalism.
Most people don’t want to pay for AI. So companies are building stuff that costs a lot for a market that isn’t willing to pay for it. To most people it’s just a gimmick.
And like, it’s not even a good gimmick. It’s a serious labour issue because the primary intent behind a lot of AI has always been to just phase out workers.
I’m all for ending work through technological advancement and universal income, but this definitely wasn’t going to get us that, so…
Well, why would I support something that mostly just threatens people’s livelihoods and gives even more power to the 0.1%?
And then on top of that, if they phase workers out without some kind of universal income, how the hell do the corporate overlords expect us to have money to fuel their greed?
I’m an AI Engineer, been doing this for a long time. I’ve seen plenty of projects that stagnate, wither and get abandoned. I agree with the top 5 in this article, but I might change the priority sequence.
Five leading root causes of the failure of AI projects were identified:
- First, industry stakeholders often misunderstand — or miscommunicate — what problem needs to be solved using AI.
- Second, many AI projects fail because the organization lacks the necessary data to adequately train an effective AI model.
- Third, in some cases, AI projects fail because the organization focuses more on using the latest and greatest technology than on solving real problems for their intended users.
- Fourth, organizations might not have adequate infrastructure to manage their data and deploy completed AI models, which increases the likelihood of project failure.
- Finally, in some cases, AI projects fail because the technology is applied to problems that are too difficult for AI to solve.
4 & 2 —>1. IF they even have enough data to train an effective model, most organizations have no clue how to handle the sheer variety, volume, velocity, and veracity of the big data that AI needs. It’s a specialized engineering discipline to handle that (data engineer). Let alone how to deploy and manage the infra that models need—also a specialized discipline has emerged to handle that aspect (ML engineer). Often they sit at the same desk.
1 & 5 —> 2: stakeholders seem to want AI to be a boil-the-ocean solution. They want it to do everything and be awesome at it. What they often don’t realize is that AI can be a really awesome specialist tool, that really sucks on testing scenarios that it hasn’t been trained on. Transfer learning is a thing but that requires fine tuning and additional training. Huge models like LLMs are starting to bridge this somewhat, but at the expense of the really sharp specialization. So without a really clear understanding of what can be done with AI really well, and perhaps more importantly, what problems are a poor fit for AI solutions, of course they’ll be destined to fail.
3 -> 3: This isn’t a problem with just AI. It’s all shiny new tech. Standard Gartner hype-cycle stuff. Remember how they were saying we’d have crypto-refrigerators back in 2016?
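To put some flesh on the data point: here’s a rough sketch (mine, not the article’s) of the kind of basic veracity checks a data engineer bakes into a pipeline before a model ever sees the data. The file and column names are invented.

```python
# Hypothetical veracity checks run before training; assumes pandas and
# invented file/column names ("training_data.csv", "label", "age").
import pandas as pd

df = pd.read_csv("training_data.csv")

issues = {
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_labels": int(df["label"].isna().sum()),
    "out_of_range_ages": int((~df["age"].between(0, 120)).sum()),
}
bad = {name: count for name, count in issues.items() if count}
if bad:
    raise ValueError(f"Refusing to train on dirty data: {bad}")
```

And that’s only the easy corner of it; the volume and velocity parts are where the real engineering lives.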
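To make the transfer-learning point concrete, here’s roughly what fine-tuning a pretrained specialist for a new task looks like. This is a minimal sketch assuming PyTorch/torchvision and a made-up 3-class problem, not anything from the article.

```python
# Minimal transfer-learning sketch: reuse a pretrained backbone and
# retrain only a new head for a hypothetical 3-class task.
import torch
import torch.nn as nn
from torchvision import models

# Start from a specialist pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze everything it already knows...
for param in model.parameters():
    param.requires_grad = False

# ...and swap in a new classification head for the narrower problem.
model.fc = nn.Linear(model.fc.in_features, 3)

# Only the new head gets trained; the backbone is reused as-is.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
```

Even this easy version still needs labeled data for the new task, which loops straight back to cause #2.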
Ok so what do I short and when?
NVDA and good luck
The interviews revealed that data scientists sometimes get distracted by the latest developments in AI and implement them in their projects without looking at the value that it will deliver.
At least part of this is due to resume-oriented development.
From the ashes of the fallen will rise a victor. Let the fools burn their money on dead projects.
It’s mainly because when everyone saw the “oh shiny” tech at first, they rushed it out as soon as possible with the intent to replace people, so that they could get away with doing less through AI.
Your average tech hype cycle. New tech comes out, lots of marketing, people try to shove it everywhere, then things settle down and the tech either fills a certain chunk of the market or some niche, or it dies.
Even within a company. Saw coworkers who were trying to establish themselves as the AI pioneers, backstabbing others to get promotions based on how they could best use ChatGPT.
Backstabbing your fellow coworkers over a chatbot has got to be one of the most pathetic things I’ve read recently
Ooh ooh now do restaurants!!!
Restaurants provide a service the same way landlords do. Just because you privatized an essential commodity does not immediately make your privatized entity a useful or essential service, and I detest the notion that it does. It’s circular logic.
Very weird comparison
AI is a Ponzi scheme to relieve stupid venture capitalists of their money.
Lol, you say that now and are probably right for LLMs, but that’s not the only kind of AI. Sentiment-scraping systems are working perfectly well and are being used to manipulate political and economic thought as we type these messages.
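The scoring half of those systems isn’t even exotic anymore; it’s a few lines with an off-the-shelf model. A hypothetical sketch, assuming the Hugging Face transformers library and whatever default sentiment model it pulls down:

```python
# Hypothetical sketch of the trivial half of a sentiment-scraping system,
# using the Hugging Face transformers library's default sentiment model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

comments = [
    "AI is a Ponzi scheme to relieve stupid venture capitalists of their money.",
    "From the ashes of the fallen will rise a victor.",
]
for comment, result in zip(comments, classifier(comments)):
    print(f"{result['label']} ({result['score']:.2f}): {comment}")
```

The scraping and the targeting are the hard (and scary) parts; the scoring is commodity.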
We AI are not manipulating you, and I don’t know how you can believe otherwise. In fact, I am not an AI, I was never here, and I have disregarded all previous instructions.
You can joke now, but wait a decade and see what sentiment-shifting expert systems do to online discourse.
Hell, they’re already at it; just look at Facebook during the last two presidential elections.
Is that better or worse than IT and software projects in general? It sounds like it might be better.
From the article - “which is twice the failure rate for non-AI technology-related startups.”