Yeah 7000-series Ryzen benefits from the avx512 code paths in ffmpeg. I’ve benchmarked a 5900x vs a 7900x specifically for software H.265 decoding and there was a sizeable difference.
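If you want to run a comparable test yourself, a rough way (file name is just a placeholder) is to let ffmpeg decode to the null muxer and look at the -benchmark numbers:

    # software-decode benchmark: decode only, discard the output
    ffmpeg -benchmark -i sample_hevc.mkv -f null -
    # check which AVX-512 extensions the CPU actually exposes
    grep -o 'avx512[a-z]*' /proc/cpuinfo | sort -u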
Looks like he just threw up
Ah, so they don’t actually say that they read kernel space. They check the version of all installed packages and checksum the installed DLLs/SOs.
If the user still has root privileges, this may not prevent sideloading of kernel modules. Even if it would detect a sideloaded kernel module, I believe it’s possible to write a kernel module that stays resident after you unload it. That kernel module can then basically do anything without the knowledge of userspace. It could for example easily replace any code running in userspace, and their anticheat would miss that, as it doesn’t actually check what code is currently running. Most simply, code could be injected that skips the anticheat.
Of course, in their model, if a user isn’t given root privileges it seems much harder to do anything; the first thing you’d probably look for then is a privilege escalation attack to obtain root. That might not be that hard if they for example run Xorg, as it isn’t known to be the most secure - there’s a reason for the strong recommendation not to run any graphical UI on servers.
Another way, if you don’t have root, is to simply run the code on a system where you do - one that does have such a kernel module - or perhaps modify the binary itself to skip the anticheat. I don’t see anything preventing that in their scheme.
I’m having a hard time understanding how this would work. udev will load kernel modules depending on your hardware, and these modules run in kernel space. Is there an assumption that a kernel module can’t cheat? Or do they have a checksum for each possible kernel module that can be loaded?
Also, how do they read the kernel space code? Userspace can’t do this afaik. Do they load a custom kernel module to do this? Who says it can’t just be replaced with a module that returns the “right” checksum?
For example, maybe branching is something you’d like to be able to do without it being a nightmare?
TIL that it’s called a charley horse in English
Services are automatically restarted. There is no automatic reboot by default, but that can be enabled if you really want to. Otherwise it’ll keep track of whether a reboot is necessary or not.
I’ve been running Debian stable with unattended-upgrades on servers for years and have had no issues whatsoever.
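If you do want the automatic reboot, it’s only a couple of lines in the unattended-upgrades config (the time value here is just an example); by default it only notes that /var/run/reboot-required exists and leaves the reboot to you:

    // /etc/apt/apt.conf.d/50unattended-upgrades (relevant lines only)
    Unattended-Upgrade::Automatic-Reboot "true";
    Unattended-Upgrade::Automatic-Reboot-Time "02:00";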
That joke has aged like milk
I expected some resolution or motivation for everything that had occurred; instead we got “God did it”.
It’s just a pity for a great show to have such a bad ending.
I think if they just filled the alt= attribute with the emoji this would copy fine.
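Something like this (the markup is hypothetical, just to illustrate): if the image carries the emoji character in its alt attribute, selecting and copying the surrounding text should pick it up as the character instead of dropping it:

    <img src="/emoji/1f602.png" alt="😂" class="emoji">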
I’ve used LyX with good results; it’s a GUI that abstracts away many of the complexities of LaTeX.
Unless there are security updates to install - then everything will be mercilessly killed
What they say for horses is that if you’re going to walk behind one, stay just behind it. That way, if it does decide to kick you, the legs won’t be able to build up momentum and will still be mostly vertical when they hit you. Under no circumstances walk 1-2 m behind it; you can die if it hits you in the head.
Apply at your own risk to cows.
Looks like real hands but not her hands
In Swedish we spell it text.
European here; I suggest Bosch or Electrolux, if they’re available in your part of the world.
This is great, but the context is that this is for specific inner loops, and it is compared to the plain C version of that specific inner loop. Typically, what was used before this on a CPU with avx512 was the avx2 version of the inner loop, and the speedup compared to that version appears to be up to 60%: https://x.com/FFmpeg/status/1852542388851601913 . Then, since a specific inner loop isn’t running all the time, the overall speedup is probably much less than 60%. That’s still sizeable, but the actual speedup in practice with this implementation is far, far from 94x.
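As a rough illustration of why (the numbers below are made up, not measurements): if that inner loop were, say, 30% of total decode time and the avx512 path made it ~1.6x faster than avx2, Amdahl’s law puts the overall win at only ~13%:

    # hypothetical Amdahl's-law estimate, not measured data
    loop_fraction = 0.30   # assumed share of runtime spent in this inner loop
    loop_speedup  = 1.6    # ~60% faster than the avx2 version
    overall = 1 / ((1 - loop_fraction) + loop_fraction / loop_speedup)
    print(f"overall speedup: {overall:.2f}x")   # ~1.13x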