

Can’t you just increase context length at the cost of paging and slowdown?
Some new models also still include IR blasters. Good stuff
DuckDB can query them with SQL as if they were in a database. CSV, TSV, Parquet too. You can even connect to and query Postgres and cloud storage as well.
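For example, a minimal sketch in Python (the file names `sales.csv` and `events.parquet` are made up here):

```python
import duckdb

con = duckdb.connect()  # in-memory database, nothing to set up

# DuckDB reads the files in place; no import/load step needed
rows = con.execute("""
    SELECT s.region, count(*) AS orders
    FROM 'sales.csv' AS s
    JOIN 'events.parquet' AS e ON e.sale_id = s.id
    GROUP BY s.region
""").fetchall()
print(rows)

# Postgres and cloud storage go through extensions, e.g.:
# con.execute("INSTALL postgres; LOAD postgres;")
# con.execute("ATTACH 'dbname=mydb' AS pg (TYPE postgres);")
# con.execute("INSTALL httpfs; LOAD httpfs;")  # then query s3:// URLs directly
```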
Good price for Bucharest, no?
It’s “ENTERPRISE”
Here I am with a brain the size of a planet and they ask me to pick up a piece of paper. Call that job satisfaction? I don’t.
This ^
If it’s better for the environment and doesn’t involve the industrial-scale poor treatment and wanton slaughter of animals, AND it tastes just as good, I’d be on board instantly. Even with a premium price hike for consistency.
Roll on quality facon, wagyu beef, and octo-chicken drumsticks.
I do think that Flora missed a trick with vegan fake meats though…
“I can’t believe it’s not bacon/burger/chicken”: they would have slaughtered that ad campaign.
Commit a bunch of Perl CPAN files with a ton of colons scattered liberally, and watch the fireworks.
Rules are rules. Nothing you can do once that happens.
Is it related to real-time kernel performance or libraries?
So do I. It sits in its VM jail and does its job, or I roll back the snapshot.
Leaning into the grift
Well. Better start saving then…
People who do this are heroes!
I fully appreciate people who put themselves out like this, especially in subjects which don’t necessarily have a wider reach.
Thank you all, no matter your accent.
This feels like a winning strategy
Yes, but if he’s world-building, a larger, slower model might just be an acceptable compromise.
I was getting OOM errors doing speech-to-text on my 4070 Ti. I know (now) that I should have gone for the 3090 Ti. Such is life.