

Sounds like there’s a story behind this, where could I read more?
Yes, but if they do find a poor schmuck who wants the job, they can hope he’ll undervalue himself and ask for even less.
Yeah, traceroute might hint at that, if that’s what’s going on.
Perhaps I’m nitpicking, but… not exactly, not always. People get their shit hacked all the time due to poor practices. And then those hacked things can send emails and texts and other spam all they want, and those won’t be forged headers, so you still need spam filtering.
left-pad as a service.
It’s probably AI-supported slop.
(Not to be confused with our premium product, ParticleServices, which just shoot neutrinos around one by one.)
No, it’s just that it doesn’t know if it’s right or wrong.
How “AI” learns is it goes through a text - say, a blog post - and turns it all into numbers. E.g. the word “blog” is 5383825526283 and the word “post” is 5611004646463. Over a huge amount of text, a pattern emerges: the second number almost always follows the first. Basically statistics. And it does that for all the words and word combinations it finds - immense amounts of text are needed to find all those patterns. (Fun fact: that’s why companies like e.g. OpenAI, which makes ChatGPT, need hundreds of millions of dollars to “train the model” - they need enough computing power, storage and memory to read the whole damn internet.)
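A toy sketch of that idea (my own illustration - real LLMs use neural networks over subword tokens, not a literal lookup table, but the “which number follows which” statistics is the gist):

```typescript
// Toy version: turn words into numeric IDs ("tokens") and count
// which ID tends to follow which. Illustration only.
const words = "the blog post about the blog post".split(" ");

// Assign each distinct word a numeric ID.
const ids = new Map<string, number>();
for (const w of words) {
  if (!ids.has(w)) ids.set(w, ids.size);
}

// Count bigrams: how often does token B follow token A?
const follows = new Map<string, number>();
for (let i = 0; i < words.length - 1; i++) {
  const key = `${ids.get(words[i])} -> ${ids.get(words[i + 1])}`;
  follows.set(key, (follows.get(key) ?? 0) + 1);
}

console.log(ids);     // Map { 'the' => 0, 'blog' => 1, 'post' => 2, 'about' => 3 }
console.log(follows); // '1 -> 2' (blog -> post) has count 2, etc.
```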
So how do LLMs “understand”? They don’t; it’s just a bunch of numbers and statistics about which word (turned into that number, or “token” to be more precise) follows which other word.
So now. Why do they hallucinate?
How they handle your question is they turn all the words in your prompt into numbers again, and then go find, in that huge pile of learned statistics, which words are likely to follow your words.
They add in a tiny bit of randomness: they sometimes replace the “closest” match with a synonym or a less likely match, so the answers even seem real.
They add “weights” so the model would rather pick one phrase over another, or e.g. give some topics very, very small likelihoods - think pornography or the like. That’s “tweaking the model”.
But there’s no knowledge as such, mostly it is statistics and dice rolling.
So the hallucination is not “wrong”, it’s just that those words are statistically likely to follow from your words.
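To make the dice rolling concrete, here’s a toy weighted-random pick (again my own illustration - in a real model the weights come out of the network, and a “temperature” setting controls how adventurous the roll is):

```typescript
// Pick the next token at random, weighted by likelihood.
type Candidate = { token: string; weight: number };

function pickNext(candidates: Candidate[]): string {
  const total = candidates.reduce((sum, c) => sum + c.weight, 0);
  let roll = Math.random() * total; // the dice roll
  for (const c of candidates) {
    roll -= c.weight;
    if (roll <= 0) return c.token;
  }
  return candidates[candidates.length - 1].token; // guard against float rounding
}

// After "the blog", "post" is the likeliest next token, but the
// randomness means a synonym or a less likely match sometimes wins -
// that variation is also where plausible-but-wrong chains start.
console.log(pickNext([
  { token: "post", weight: 0.80 },
  { token: "entry", weight: 0.15 },
  { token: "penguin", weight: 0.05 }, // unlikely, never impossible
]));
```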
Did that help?
You never review code when you have no time to do an actual review? Looks good to me :)
Is that pronounced as gokoze?
shaking my (trademark) head?
So, send ’em a dick pic and you’re in :)
considering where the garbage came from, maybe we should stop shitposting :)
(sorry for the late response, I have to get in the habit of checking my Lemmy account)
No, I get that - a stylesheet denotes a class by having a dot. A JavaScript API for adding a CSS class omits this redundancy.
I was saying that the author might not be wrong to want to avoid the redundancy in the Rust example as well (since it explicitly mentions CSS classes).
I mean, it’s not embarrassing for you. In the browser, CSS’s “native platform”, you add classes via the JavaScript API without the dot. It’s not a stupid assumption.
Having to add the dot to the CSS class name seems like a bit of an oversight in the gtk-rs API.
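For reference, the browser behaviour we’re talking about - the dot is selector syntax in the stylesheet, while the DOM API takes the bare name (“highlight” is just a made-up class here):

```typescript
// In the stylesheet the dot means "class selector":
//   .highlight { background: yellow; }
// In the DOM API you pass the bare class name, no dot:
const el = document.querySelector("p");
el?.classList.add("highlight");     // matches the .highlight rule
// el?.classList.add(".highlight"); // adds a class literally named
//                                  // ".highlight", which the
//                                  // .highlight selector won't match
```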
Actual programmer
I wonder if JJ’s anonymous branches would be something that solves this. I’ve only read about them, have not used JJ yet.
Or meet old ideological dogs like me :P
Well, the API angle is similar to Space Traders