I don’t know what shady shit you’re referring to. They do AI, but I don’t use any of that. IMO their core strength is the search engine and how it works for you rather than against you.
Why would their experience be relevant? They’re asking a question, so obviously they have things to learn. You could be nicer about it.
Then it’s a problem with the platform, if there’s no way to either tag content on a particular topic (which people can filter out if they wish) or have a place for meta discussions (which people can choose not to visit). I still agree with the OP that simply deleting/forbidding this content isn’t a good option.
That’s a bit like saying “I’m not interested in compiler warnings, my program works for me.” The issues this article discusses are like compiler warnings, but for the community. You should be free to ignore them, just by scrolling past. But forbidding compiler warnings would not fly in any respectable project.
I hadn’t bought a bundle in a long time, and maybe I just don’t remember it being that bad, but really? Even with the “extra to charity” preset, the charity gets less than Humble themselves? That’s kind of gross.
That’s crazy. Google/DDG bloat from SEO websites had already driven me out a while ago, so I hadn’t noticed. I’ve been using Kagi for a few months now, and I find I can trust my search results again. Being able to permanently downgrade or even block a given website is an awesome feature; I’d recommend it just for that.
How I wish CUDA were an open standard. We use it at work, and the tooling is a constant pain. Since it’s almost entirely controlled by NVIDIA, there’s no alternative toolset, which means little pressure to make it better. Clang being able to compile CUDA code is an encouraging first step, meaning we could possibly do without nvcc. Sadly, the CMake support for that on Windows has not yet landed. And it still leaves the SDK and runtime entirely in NVIDIA’s hands.
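For anyone curious, here’s roughly what building with clang instead of nvcc looks like on Linux. A minimal sketch, assuming a stock /usr/local/cuda install; the file name is made up and the flags are the ones from clang’s CUDA documentation:

    // saxpy.cu -- tiny kernel to sanity-check a clang-driven CUDA build.
    // Build (assuming CUDA lives in /usr/local/cuda):
    //   clang++ saxpy.cu --cuda-gpu-arch=sm_70 \
    //       -L/usr/local/cuda/lib64 -lcudart -o saxpy
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(float a, const float* x, float* y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy<<<(n + 255) / 256, 256>>>(2.0f, x, y, n);
        cudaDeviceSynchronize();
        printf("y[0] = %f\n", y[0]);  // expect 4.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

The code itself is the same either way; the point is that nvcc stops being a hard requirement for the compile step, even if the runtime library still comes from the SDK.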
What irritates me the most about this SDK is the versioning and compatibility madness, especially on Windows, where the SDK is very picky about the compiler/STL version and hence won’t let us turn on C++20 for CUDA code. I also could never get my head around the backward/forward compatibility between SDK and hardware (let alone drivers).
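The one habit that has helped is printing what the machine actually has before debugging anything else. A minimal sketch using only the stock runtime API calls, nothing specific to our setup:

    // version_check.cu -- print driver vs runtime versions, the usual
    // first suspect when an SDK upgrade mysteriously breaks things.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int driver = 0, runtime = 0;
        cudaDriverGetVersion(&driver);    // highest CUDA version the installed driver supports
        cudaRuntimeGetVersion(&runtime);  // CUDA runtime version this binary was built against
        printf("driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
               driver / 1000, (driver % 1000) / 10,
               runtime / 1000, (runtime % 1000) / 10);
        // Rule of thumb: the runtime should not be newer than what the
        // driver reports, unless you rely on the compat packages.
        return 0;
    }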
And the bloat. So many gigabytes of pre-compiled GPU code for seemingly every possible architecture in the runtime libraries (cuDNN, cuBLAS, etc.). I’d be curious about the actual number, but we probably use 1% of this code, yet we have to ship the whole thing, all the time.
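For our own kernels at least, the fat-binary part is controllable: find out which compute capabilities we actually deploy on and build only for those. A rough sketch; the -gencode line in the comment is only illustrative, not our actual build flags:

    // arch_query.cu -- list the compute capability of every visible GPU,
    // i.e. the architectures actually worth embedding code for.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);
            printf("device %d: %s, compute capability %d.%d\n",
                   d, prop.name, prop.major, prop.minor);
        }
        // If everything reports e.g. 8.6, building with
        //   nvcc -gencode arch=compute_86,code=sm_86 \
        //        -gencode arch=compute_86,code=compute_86 ...
        // embeds only that SASS plus PTX for forward compatibility,
        // instead of a cubin for every architecture under the sun.
        return 0;
    }

Doesn’t help with cuDNN/cuBLAS, of course; those ship pre-built whether we like it or not.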
If CPU vendors were able to come up with standard architectures, why can’t GPU vendors? So much time, effort, energy, and bandwidth wasted because of this.
How do you people manage this?