

Currently studying CS and some other stuff. Best known for previously being top 50 (OCE) in LoL, expert RoN modder, and creator of RoN:EE’s community patch (CBP).
(header photo by Brian Maffitt)
I’m still lowkey a bit confused how AMD managed to leapfrog their ML-upscaling solution from nothing to >DLSS 3 in one generation. Then again, XeSS was pretty good on Intel GPUs too hmmmm
You could probably just power limit a 9070 XT and get basically the same kind of efficiency though, right? You wouldn’t even have to go through the (potentially) long process of tweaking and testing like you would with an explicit undervolt / overclock, so it’s not exactly a tedious or difficult thing to do
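For what it’s worth, on Linux the amdgpu driver exposes a board power cap through sysfs, so a power limit really can be a one-liner. A hedged sketch – the hwmon path, the microwatt units, and the ~304 W stock board power figure are assumptions; check your own card’s values before writing anything:

```shell
#!/bin/sh
# Sketch: cap an AMD card's board power via the amdgpu hwmon interface.
# Assumptions: Linux + amdgpu, a writable power1_cap node measured in
# microwatts, and ~304 W stock board power for the 9070 XT (check yours).

watts_to_microwatts() { echo "$(( $1 * 1000000 ))"; }

TARGET_W=243   # ~80% of 304 W -- purely illustrative, not a tested sweet spot
CAP_UW=$(watts_to_microwatts "$TARGET_W")

# Apply to any card that exposes a writable cap (needs root); no-op otherwise.
for f in /sys/class/drm/card*/device/hwmon/hwmon*/power1_cap; do
  if [ -w "$f" ]; then
    echo "$CAP_UW" > "$f"
    echo "set $f -> ${CAP_UW} uW"
  fi
done
```

Resetting is just writing the stock value back (or rebooting), which is part of why it’s so much less fiddly than a stability-tested undervolt.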
It’s pretty good (so long as you have enough CPU power to be minimally affected by the higher driver overhead), but it’s in a different (lower) performance category for now. We’ll see how the B7xx stuff lands though!
The mbin equivalent (which is relevant to the OP) is More -> Open original URL or Copy original URL
Unfortunately seems to not work for mbin (which fedia.io runs)
Unfortunately YouTube is one of the few large platforms currently accepting AV1 ingest, so most places you’re still stuck with H.264, or maybe H.265 if you’re lucky :(
So… they did the opposite of the previous problem? Instead of launching too high then quickly lowering the price, they decided to launch low but then increase it (with some unknown future carve-out to make it a “business decision” instead of “lying”) 🤔
It’s interesting (and I don’t mean that in a passive-aggressive way) how different the conclusions are for the card (e.g., TPU has a seemingly lukewarm-ish review that isn’t praising it or raking it over the coals, and it’s easy to find plenty of reviews doing the latter). It’s unironically prompting me to think about the situation more critically myself, whereas if it was universally praised or derided I’d just save the brainpower and accept the universal conclusion lol.
Relevant ~~xkcd~~ ProZD: YouTube
You’re saying that we could potentially have one of those fat “AI” chips with 40 graphics CUs and 3D V-Cache? 👀
But (it sounds like) you’re talking about voluntary grouping, where if you dump 100 people together at a party or networking event or whatever, theoretical-person Amy will vibe best with certain types of people, and so ends up chatting up Cleo, Ming, and Kiara because they share similar interests / humor / whatever – but there’s nothing actually stopping someone from outside of that from walking up and chatting with the newly-formed group. That’s kind of what (I thought) we had now in the fediverse, where for example I can go talk about Australian news on aussie.zone, jump to lemmy.world to talk about fediverse stuff, swing by redd.that to look at Unraid updates (all communities I’m part of), but then browse the incoming feed of everything coming into my instance and view a whole lot of communities which I’m not part of, most of which I never will be. It’s (nearly) all open-by-default. Yes, there’s some blocking / defederation etc, but the default state is that users on one instance can (whether or not they actually choose to) talk to other instances.
If a new user randomly picks any instance from the top 50 (of any fediverse software, excluding maybe Pixelfed since that’s probably the least interoperable with the others) to join up on, chances are very good (but will vary based on personal interest) they’ll be able to participate in like >=90% of the conversations that they want to in the sense that their instance is federating with all the people and communities they’re interested in.
What I’m thinking-out-loud-ing (“arguing” sounds a bit more assertive than what I’m aiming for) is that this might not be how ActivityPub would optimally be used; maybe just because ActivityPub could allow 90% of users to talk to 90% of users, it doesn’t mean that’s actually the best way to use it. Maybe it serves the user’s interests better if there are clusters of “sub-fediverses” instead.
As a grounded example: Beehaw partially self-isolates from the wider fediverse (it’s not just that users could communicate but don’t; the connection is severed) in an effort to better maintain its vibe and values. I had always viewed that as the exception to the norm, but maybe having (e.g.,) clusters of instances that each communicate with only a comparatively small number of other instances – say, the rest of their “cluster” plus a few other clusters – is a different, and potentially healthier, way to architect things (as opposed to most instances communicating with most other instances). So I guess partial, selective federation rather than (what felt to me like) the current goal of “if it uses ActivityPub, we want to communicate with it*”.

* with obvious exclusions for spam etc.
but the fediverse is equally suited to federated islands as to one fediverse, right? Most people will want the full fediverse but people can also create their separate spaces if desired.
I guess, yeah, but it has tradeoffs. Each island loses even more diversity of perspective (e.g., political echo-chambers, or building fedi tools that might work well for their island but make no sense for other islands), and it becomes harder to use them as replacements for Xitter / reddit etc.
Like, a lot of discussion happens on topics like “how can we make Mastodon better for former Xitter users?” or the same thing but for lemmy and reddit. Maybe they’re fundamentally not the right questions to ask if the endgame state of federated social media is that it isn’t a direct replacement of centralized services.
Although it wasn’t really specifically the point of the post, reading it has made me think that maybe the whole idea of “universally” federated social media (even excluding the spam etc.) is fundamentally untenable regardless of the technical protocol, and that treating it as the end-goal might not be the play.
Answered in the article?
Just don’t look for any memory slots — it’s soldered. “We spent months working with AMD to explore ways around this but ultimately determined that it wasn’t technically feasible to land modular memory at high throughput with the 256-bit memory bus,” writes Framework.
It’s in the article:
Seagate’s FARM (field-accessible reliability metrics)
A comment from a crosspost (!hardware@programming.dev) suggests it’s the PWM rate of the LEDs
The B580 driver overhead article linked at the bottom is also an interesting read (or skim read in my case 😅)
Honestly this seems too absurd even for The Onion!? What the fuck lol
Digital Foundry provided another look at FSR 4 and generally agrees with HUB’s conclusions: YouTube