I use Arch in WSL BTW. This is not a joke; it’s actually quite nice.
It would be luck-based for pure LLMs, but now I wonder if the models that can use Python notebooks might be able to write a script to count it. Like, it’s actually possible for an AI to get this answer consistently correct these days.
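For what it’s worth, a throwaway sketch of the kind of script a model might write, assuming the question is something like counting letter occurrences in a word (the word and letter below are just placeholders, not the actual question):

```python
# Hypothetical counting script; the inputs are placeholders for whatever was asked.
word = "strawberry"   # placeholder input
letter = "r"          # placeholder target
print(f"{word!r} contains {word.count(letter)} occurrence(s) of {letter!r}")
```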
Personally, if I can’t go from human-readable data to a complete model, then I don’t consider it open source. I understand these companies want to keep the magic sauce that’s printing them money, but all the open source marketing is inherently dishonest. They should be clear that the architecture and the product they are selling are separate, much like proprietary software just lists all the open source software it used as a footnote in its about screen.
Godot does have a special thing for mesh instancing, and I think per-instance variations were possible as well, like different colored triangles maybe? https://docs.godotengine.org/en/stable/tutorials/performance/vertex_animation/animating_thousands_of_fish.html
The way I understand it, the users didn’t necessarily realize McAfee is responsible, just that a bunch of sqlite files appeared in temp, so they might not connect the dots here anyway. Or they might not even know McAfee is installed, considering their shady practices.
What’s the bug? Bazzite is using a patched kernel for a reason, I’m guessing, so maybe your bug is patched anyway even if it’s on the older branch.
I do think we’re machines; I said so previously. I don’t think there is much more to it than physical attributes, but those attributes let us have this discussion. Remarkable in its own right, and I don’t see why it needs to be more, but again, all personal opinion.
I read this question a couple of times, initially assuming bad faith, and even considered ignoring it. The ability to change would be my answer. I don’t know what you actually mean.
Personally, my threshold for intelligence versus consciousness is determinism (not in the physics sense… that’s a whole other kettle of fish). I’d consider all “thinking things” to be machines, but if a machine responds to input in always the same way, then it is non-sentient, whereas if it incurs an irreversible change on receiving any input that can affect its future responses, then it has the potential for sentience. LLMs can do continuous learning for sure, which may give the impression of sentience (whispers which we are longing to find and want to believe, as you say), but the actual machine you interact with is frozen, hence it is purely an artifact of sentience. I consider books and other works to be in the same category.
I’m still working on this definition, again just a personal viewpoint.
It’s a thing. https://en.m.wikipedia.org/wiki/Busy_waiting
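A minimal sketch of what busy waiting looks like in Python (the flag and worker thread here are just for illustration):

```python
import threading
import time

done = False  # shared flag, set by the worker when it finishes

def worker():
    global done
    time.sleep(0.1)  # pretend to do some work
    done = True

threading.Thread(target=worker).start()

# Busy waiting: spin on the condition, burning CPU until it flips,
# instead of blocking on a lock, event, or sleep.
while not done:
    pass

print("worker finished")
```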
I feel like it’s difficult to quantify for jobs where you’re being paid to think. Even when I’m goofing off, the problem I need to solve for the day is still lingering in the back of my head somewhere. Actively squinting at it doesn’t seem to make things go any faster, and when I do return to work it’s usually to mash out reams of code after letting it stew. But yes, the actual amount of time I’m fulfilling my job description is… less than my working hours.
Not an answer to the question, but in case performance is the goal, Torchaudio has it here
I was a curious child, and things spiralled out of control from there…
Ah, that makes sense. Most cloud providers have the whole nine yards with online hardware provisioning and imaging; I forgot you could still just rent a real machine.
Hmm, I wonder if there was some reason they didn’t just extract the original certificates from the VPS if it was actually the hosting provider. I mean, even with mitigations it should be sitting in a temp folder somewhere, surely they could? Issuing new ones seems like a surefire way to alert the operators, unless they already used Let’s Encrypt of course.
I feel like this is just describing the future of business process consultants. Like, there’s already a role for this, unless I’m missing something?
I think the part that annoys me the most is the hype around it, just like blockchain. People who don’t know any better claiming magic.
We’ve had a few sequence-specific architectures over the years: GRU, LSTM, and now Transformers. Each was better than the last at sequence-to-sequence transformations, and at least for the last one the specific task was language translation. We eventually figured out these guys have a bit of clairvoyance too: they can make accurate predictions based on past data, or at least accurate enough to bet on, and you can bet traders of various stripes have already made billions off that fact. I’ve even seen a transformer-based weather model. It did OK, but transformers are better at language.
And that’s all it is! ChatGPT is a Transformer in the predictive stance. It looks at a transcript of a conversation and predicts what a human is most likely to say next. It’s a very complex transformation of historical data. If you give it the exact same transcript, it gives the exact same answer. In a mathematically rigorous sense, it is entirely incapable of an original thought. Any perceived sentience is a shadow of OpenAI’s army of annotators or the corpus it was trained on, and I have a hard time assigning sentience to tomorrow’s forecast, which may well have used similar technology. It’s just an ultra-fancy search engine index.
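To make the determinism point concrete, here is a small sketch using the Hugging Face transformers library with sampling disabled (the model and prompt are just examples standing in for the idea, not for ChatGPT itself, which normally samples from its output distribution):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example only: a small public model used to illustrate the point.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The weather tomorrow will be"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # do_sample=False means greedy decoding: the continuation is a fixed
    # function of the input transcript, identical on every run.
    output = model.generate(**inputs, max_new_tokens=10, do_sample=False)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```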
Anyways, that’s my rant done I guess. Call it a cynical engineer’s opinion. To be clear, I think it’s a fantastic and useful technology, and it WILL change how we interact with machines. It can do fancy things in combination with “shell” code driving its UI, like multi-step “agents” or running code, and I actually hope OpenAI extends it far into the future, but I sincerely think any form of AGI will be something entirely different from LLMs, or at least they’ll only form a small part of it, as an encoder/decoder for its thoughts.
EDIT: Added some paragraph spacing. Sorry, went into a more broad AI rant rather than staying on topic about coding specifically lol
Like others have said, practice is key, but I’d like to add that you should not feel too discouraged if it feels like you’re making no progress. You’re probably making more headway than you realize. At least personally, in programming more than anything else, I have occasionally only seen results after coming back to a concept I had given up on learning.
Do be aware of getting stuck in a local minimum, though. I know it probably feels like bad advice, but I figured I’d give it since it helped me personally, so maybe it might help someone else.
Yeah, in my mind I thought of it more as a “why not” in addition to vision. Like, why make it only as capable as the humans it’s trying to replace when it can have even more data to work with? Probably would have been even more expensive, though.
Surprisingly, just setting the systemd flag in the WSL settings worked, though for a long time I simply didn’t use systemd.
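For anyone else looking for it, assuming the usual route on a recent WSL 2 build, the flag lives in /etc/wsl.conf inside the distro (then restart the distro, e.g. with `wsl --shutdown`):

```ini
# /etc/wsl.conf
[boot]
systemd=true
```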