

404media is exactly the site I would expect to be aware of Lemmy among the semi-mainstream tech outlets (along with TheVerge to a lesser extent).
I know people’s experience varies on this, but I absolutely hated high school, and I only discovered that I enjoyed learning as a process because of uni. I’d probably still be small-minded and somewhat bigoted if I hadn’t gone, simply because it forced me to critically evaluate my own views and exposed me to types of people I wouldn’t have encountered otherwise.
It’s a shame it’s so expensive in some countries, because I think it’s important to have a well-educated society more broadly.
Looks like it’s available in desktop mode too, so it should work anywhere on Linux at the least.
I discovered the book after the residents of Springfield went into a frenzy trying to win the local lottery, only to find a chilling tale of conformity gone mad.
They might mean exclusives, though none of those qualify. But I personally don’t think exclusives are a good thing anyway.
I believe the risks of silicosis from silica have been known since ancient times too, although historically there probably weren’t any solutions or alternatives. More recently, there was the Hawk’s Nest Tunnel disaster in the US during the 1930s, where around 100 mostly black workers died of silicosis contracted while cutting and blasting through quartz without any sort of protective measures.
Then in the modern era, Australia banned the use of high-silica “engineered” stone in construction. You’d think, given the known health risks of silica, that this could have been predicted, although it’s not as clear cut (heh) as the risks of asbestos, since at least part of the problem was construction workers not using preventative measures such as wet drilling and PPE. But you can see how that goes when the workers are often vulnerable in some way and don’t feel comfortable saying no to their bosses.
I suggest using the Mednafen-based Beetle core, unless you’re on a very slow system. Or SwanStation; it’s not like that’s going away.
I also don’t see SwanStation going away any time soon, even if it gets no new features. It’s pretty close to feature-complete in the ways that matter anyway.
I just overuse parentheses instead, as you noted. You know you’re rambling when you have several layers of them, like I’m writing a conversation in Lisp.
- Don’t use social media or news sites when you wake up, or before bed
- Block notifications from social media and news sites, or uninstall them altogether
- Set time limits (e.g. with LeechBlock NG on desktop, or with simple alarms)
- You probably don’t need to read the news every day to be reasonably informed
Most of this is auto-generated header files, to be clear. Still, it goes to show how many GPU variants they support in the kernel, going back 15+ years.
Having a web UI is useful even if you’re not using the extra tools. Not mandatory of course, but nice.
US only I suspect, and likely to be gutted by the Trump administration.
I’m not really an expert, but I’ll try and answer your questions one by one.
> Don’t VMs have a virtual GPU with a driver for that GPU in the guest that, I imagine, forwards the graphics instructions and routines to the driver on the host?
Yes, this is what VirGL (OGL) and Venus (Vulkan) do. The latter works pretty well because Vulkan is lower level and better represents the underlying hardware, so there is less performance overhead. However, this approach means you need to translate every API one by one: not just OGL and Vulkan, but also hardware video decoding/encoding and compute, so it’s a fair amount of work.
Native contexts, in contrast, basically expose the “real” host driver in the guest, passing everything through 1:1 to the host driver, where the actual work is carried out. They aren’t really like virtualisation extensions, as the hardware doesn’t need to support them AFAICT, just the drivers on both the host and the guest. There’s a presentation and slides on native contexts vs VirGL/Venus which may be helpful.
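If you want to see which of these a given guest is actually using, querying the Vulkan physical device from inside the guest is a quick check. Here’s a minimal sketch, assuming the Vulkan loader and headers are installed (build with `cc check_gpu.c -lvulkan`); the exact device name strings are driver-dependent, but Venus typically shows up as a Virtio-GPU device wrapping the host GPU:

```c
// Minimal Vulkan device query, run inside the guest, to see which
// driver is in use. Under Venus the device name typically mentions
// Virtio-GPU; with a native context you should see the host GPU's
// real driver name instead.
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    VkInstanceCreateInfo info = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance instance;
    if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "failed to create Vulkan instance\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devices[16];
    if (count > 16) count = 16;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        printf("device %u: %s (API %u.%u.%u)\n", i, props.deviceName,
               VK_API_VERSION_MAJOR(props.apiVersion),
               VK_API_VERSION_MINOR(props.apiVersion),
               VK_API_VERSION_PATCH(props.apiVersion));
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```

`vulkaninfo --summary` gives you the same information without writing any code, if you have the Vulkan tools installed.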
> Where in that does Magma come in? My guess is that Magma sits in the guest as the graphics driver and on the host before Mesa, but I know little about virtualisation outside of containers.
To be honest, I don’t fully understand the details either, but your interpretation seems more or less correct. From looking at the diagram on the MR, it seems to be a layer between the userspace graphics driver and the native context (virtgpu) layer on the guest side, which in turn communicates with another Magma layer on the host, which finally passes data to the host GPU driver. That may be Mesa, but it could also be other drivers, as long as they implement Magma.
The broader idea is to abstract the implementation details: applications and userspace drivers don’t need to know how the native context is implemented (other than interfacing with Magma), and the native context layer doesn’t need to know which host GPU driver is being used; it just needs to interface with Magma.
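To make that abstraction point concrete, here’s a deliberately made-up C sketch of the shape of such a layer. None of these names (`vgpu_ops`, `guest_draw`, etc.) come from the actual Magma API, which you’d find in the MR; the point is just that when both sides target one small interface, N guest drivers × M host drivers collapses into N + M implementations.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* One small, stable interface that every host GPU driver implements... */
struct vgpu_ops {
    int  (*create_context)(void **ctx);
    int  (*submit)(void *ctx, const void *cmds, size_t len);
    void (*destroy_context)(void *ctx);
};

/* ...and that every guest-side userspace driver targets. The guest no
 * longer cares whether the host runs Mesa or something else, and the
 * host no longer cares which guest OS/driver is on the other end. */
static int guest_draw(const struct vgpu_ops *gpu, const void *cmds, size_t len) {
    void *ctx;
    int err = gpu->create_context(&ctx);
    if (err) return err;
    err = gpu->submit(ctx, cmds, len);
    gpu->destroy_context(ctx);
    return err;
}

/* A stand-in "host driver" so the sketch actually runs. */
static int fake_create(void **ctx) { *ctx = malloc(1); return *ctx ? 0 : -1; }
static int fake_submit(void *ctx, const void *cmds, size_t len) {
    (void)ctx; (void)cmds;
    printf("host driver received %zu bytes of commands\n", len);
    return 0;
}
static void fake_destroy(void *ctx) { free(ctx); }

int main(void) {
    struct vgpu_ops host = { fake_create, fake_submit, fake_destroy };
    const char cmds[] = "draw";
    return guest_draw(&host, cmds, sizeof cmds);
}
```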
The sandboxing sometimes breaks applications or requires additional configuration. And I don’t like that it’s a separate thing I need to maintain, although some package managers at least pair Flatpak updates together with regular system updates.
And as a NixOS user, I prefer to use Nix to handle as much of my system as possible, although Flatpak is at least useful as a fallback in a pinch. Of course, this is a niche within a niche, and mainstream users, particularly those on immutable distros, can and do benefit from Flatpak.
The other points have been answered, so I’ll try to give a surface view of Magma. It’s basically an abstraction layer for virtual GPU drivers used in VMs. Currently, you need specific implementations to handle all of the pathways between different types of VM guests and hosts, which gets complicated fast and duplicates a lot of work. The idea is that Magma abstracts this away, so host and guest GPU drivers only need to interface with Magma. That means you can swap out different host OSes/GPU drivers and different guest OSes/GPU drivers, and as long as they interface with Magma, they should “just work”.
Of course, whether it will work out that way in practice remains to be seen. I think Google is using it internally but it’s not in Mesa yet, so it may not even roll out widely. You can follow the MR if you want more detail or to see its progress.
If you’re wondering why Google is implementing this, it appears to be for Fuchsia and Android, and for compatibility between those two and desktop Linux, with Windows support as an additional value-add. Chromebooks in particular should benefit, since ChromeOS is being retired, I believe.
And as an aside, unlike some of the traditional GPU implementations you’d find in VMs, these are (or will be) pretty much just the normal graphics drivers you’d use on the host. They’re generally called “native contexts” and have been implemented for AMD and Intel at the least, though only on non-Windows systems for now. Once they’re widely supported, these implementations alone should result in near-native GPU performance in VMs, without having to use GPU passthrough (i.e. passing a physical GPU through to the VM guest). So even without Magma there’s some promising stuff happening, albeit mainly on the Linux host -> Linux guest pathway.
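One related detail: even with a native context, the guest’s kernel-side driver is still virtio-gpu; it’s the userspace Mesa driver that’s the “real” one. Here’s a small sketch, assuming libdrm and its headers are installed (build with `cc drm_check.c $(pkg-config --cflags --libs libdrm)`), that prints which kernel driver backs a render node:

```c
// Prints the kernel DRM driver behind a render node. Inside a VM guest
// using VirGL, Venus, or a native context this should report
// "virtio_gpu"; on the host (or with GPU passthrough) you'd see the
// real driver, e.g. "amdgpu" or "i915".
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void) {
    // First render node; adjust if you have several GPUs.
    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0) {
        perror("open /dev/dri/renderD128");
        return 1;
    }
    drmVersionPtr ver = drmGetVersion(fd);
    if (ver) {
        printf("kernel driver: %s (%d.%d.%d)\n", ver->name,
               ver->version_major, ver->version_minor, ver->version_patchlevel);
        drmFreeVersion(ver);
    }
    close(fd);
    return 0;
}
```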
I’m guessing it’s the AI agent stuff. Which at the moment is literally just automating browsing through a website.
Apparently there will be APIs to do this in the future. Ironically, AI wouldn’t even be needed for that to be useful.
Valve is one of the main contributors to the RADV Vulkan driver for AMD GPUs, and a bunch of other parts of Mesa and the open driver stack in general.
I should probably add that some of this work is on RDNA3 FSR4 support, which isn’t even supported on Windows. It’s not amazingly fast, but it’s now faster than rendering at native resolution, and that might be enough to make it worth it (especially in cases where it improves image quality due to poor TAA implementations).
I just think the user should be the one to decide whether to enable it. Pre-built PCs and motherboards can enable it by default, but it should be simple to bypass (and it usually is), and no company should be demanding or requiring that people enable it.
The same applies to TPM 2.0, which is also useful but shouldn’t be a requirement, if nothing else because of the e-waste caused by requiring PCs to support it. Most new PCs will end up enabling it in the long run anyway, so there’s no need to force the issue.