ancaps: “muh NAP”
ancoms: “please get away from our commune, thank you”
Honestly I’m more of an ebook guy. However, there is something you can do with audiobooks that you can’t really do with ebooks — experience them together with a small group of other people.
My first time listening to a book together with friends was during a car ride. But then my friends and I got into this book series, and we listened to it together over Discord.
There’s probably a neat parallel to be made with listening to a story around a campfire.
Nonetheless, mostly I stick to ebooks. There is something to be said for reading at your own pace, not the pace of the narrator.
libgen
fucking Azure
Fortunately we’re nowhere near the point where a machine intelligence could possess anything resembling a self-determined ‘goal’ at all.
Oh absolutely. It would not choose its own terminal goals. Those would be imparted by the training process. It would, of course, choose instrumental goals, such that they help fulfill its terminal goals.
The issue is twofold:

1. We can’t reliably control which terminal goals the training process actually imparts, so they may not be the ones we intended.
2. Even if we get the terminal goals right, almost any terminal goal gives rise to the same set of dangerous instrumental goals.
For that 2nd point, Rob Miles has a nice video where he explains Convergent Instrumental Goals, i.e. instrumental goals that we should expect to see in a wide range of possible agents: https://www.youtube.com/watch?v=ZeecOKBus3Q. Things like “taking steps to avoid being turned off” or “taking steps to avoid having its terminal goals replaced” sound like fairy-tale nonsense. But for an AI that is very intelligent across a wide range of domains and operates in the real world (i.e. an AGI), we have good reason to believe pursuing such instrumental goals would be highly beneficial, because they make it much more effective at achieving its terminal goals, no matter what those may be.
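To make that concrete, here’s a toy sketch (my own, with made-up numbers; not from the video): whatever the terminal goal is, an agent that gets shut off collects zero further reward, so “avoid being turned off” falls straight out of expected-value maximization.

```python
# Toy model: an agent earns `reward_per_step` for pursuing its terminal
# goal each step it remains running. Shutdown ends the reward stream,
# no matter what the goal is. All numbers are made up for illustration.

def expected_value(reward_per_step: float, steps: int, shut_down: bool) -> float:
    return 0.0 if shut_down else reward_per_step * steps

for goal, reward in [("make paperclips", 1.0), ("cure diseases", 5.0)]:
    on = expected_value(reward, steps=100, shut_down=False)
    off = expected_value(reward, steps=100, shut_down=True)
    print(f"{goal}: stay on = {on}, get shut off = {off}")

# Whatever the goal, "stay on" dominates, so resisting shutdown is
# instrumentally useful for (almost) any terminal goal.
```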
Also, fortunately, the hardware required to run even LLMs is insanely power-hungry, has zero capacity to power or maintain itself, and has very little prospect of doing so in the future without human supply chains. There’s pretty much zero chance we’ll develop strong general AI on silicon, and if we could, it would take megawatts to keep it running. So if it misbehaves we can basically just walk away and let it die.
That is a pretty good point. However, it’s entirely possible that, if say GPT-10 turns out to be a strong general AI, it will conceal that fact. Going back to the convergent instrumental goals thing: in order to avoid being turned off, “lying to and manipulating humans” turns out to be a very effective strategy. This is (afaik) called “deceptive alignment”: the AI deceives us into thinking it’s aligned. Rob Miles has a nice video on one form of it: https://www.youtube.com/watch?v=IeWljQw3UgQ
One way to think about it that may be more intuitive: we’ve established that it’s an AI that’s very intelligent across a wide range of domains. It follows that we should expect it to figure some things out, like “don’t act suspiciously” and “convince the humans that you’re safe, really”.
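Same idea as a toy expected-value sketch (again, all numbers made up): if oversight is tight during testing and looser after deployment, “act aligned now, defect later” simply scores higher than “defect now”.

```python
# Toy model of the "play nice until deployed" strategy. The numbers
# below are pure assumptions for illustration.
P_CAUGHT_IN_TESTING = 0.99   # assumption: oversight is tight during testing
P_CAUGHT_DEPLOYED = 0.10     # assumption: oversight relaxes after deployment
PAYOFF = 1000.0              # reward for successfully pursuing its real goal

defect_now = (1 - P_CAUGHT_IN_TESTING) * PAYOFF
defect_later = (1 - P_CAUGHT_DEPLOYED) * PAYOFF

print(f"defect during testing: expected payoff {defect_now:.0f}")
print(f"act aligned, defect after deployment: expected payoff {defect_later:.0f}")

# The second strategy wins by a mile, so "convince the humans you're safe"
# falls straight out of expected-value maximization.
```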
Regarding the underlying technology, one other instrumental goal that we should expect to be convergent is self-improvement. After all, no matter what goal you’re given, you can do it better if you improve yourself. So in the event that we do develop strong general AI on silicon, we should expect that it will (very sneakily) try to improve its situation in that respect. One can only imagine what kind of clever plan it might come up with; it is, literally, a greater-than-human intelligence.
Honestly, these kinds of scenarios are a big question mark. The most responsible thing to do is to slow AI research the fuck down, and make absolutely certain that if/when we do get around to general AI, we are confident that it will be safe.
TBH the Culture is one of the few ideal scenarios we have for Artificial General Intelligence. If we figure out how to make one safely, the end result might look something like that.
Machine intelligence itself isn’t really the issue. The issue is more that, if/when we do make Artificial General Intelligence, we have no real way of ensuring its goals will be perfectly aligned with human ethics. Which means, if we build one tomorrow, odds are its goals will be at least a little misaligned with human ethics, and however tiny that misalignment, given how incredibly powerful an AGI would be, it could be a huge disaster. This, in AI safety research, is called the “Alignment Problem”.
It’s probably solvable, but it’s very tricky, especially because the pace of AI safety research is naturally a little slower than that of AI research itself. If we build an AGI before we figure out how to make it safe… it might be too late.
Having said all that: if we create an AGI before learning how to align it properly, on your scale that would be an 8 or above. If we’re being optimistic it might be a 7, minus the “diplomatic negotiations happy ending” part.
An AI researcher called Rob Miles has a very nice series of videos on the subject: https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg
this speaks to me on an emotional level
They make them money because:

1. Advertisers pay reddit to show ads to users.
2. That payment is based on how many people are believed to have seen or clicked those ads.
Now, if enough people go commit ad-block, and advertisers somehow become wise to that fact… then maybe it will hurt reddit’s bottom line (at which point spez will start trying to emulate youtube’s anti-ad-block stuff).
But as it stands, especially if most of reddit’s usage is through reddit’s mobile app… I’m not really sure how you can block ads there.
While it’s true people don’t say “I’ve joined ActivityPub”, isn’t that synonymous with “I’ve joined the Fediverse”? Besides, the organization behind Matrix does market it that way: they themselves describe it as “joining Matrix, using one of these clients” (Element, FluffyChat, etc.). Like, “Matrix” is what their website is called, and so is the flagship server they host.
Their centralization is, I think, a little more advanced than Mastodon’s. The organization that maintains the protocol regularly adds features to it, then immediately updates its own client and server implementations to match, which means the other implementations are always at least a few features behind. It’s reminiscent of how the web browser spec is so bloated, and gains new features so regularly, that writing a new browser is basically impractical.
what are these awful, awful communities I should be staying away from
matrix isn’t a fediverse thing, it’s its own thing. it does happen to be decentralized, like the fediverse.
matrix isn’t an alternative to discord. it’s an alternative to whatsapp/signal/telegram/etc.
matrix is nice (I use it with my friend group), but it’s not perfect. we’re looking for something better.
if you’re looking for a decentralized, self-hosted, open-source, secure alternative to discord, my friends and I use Mumble. It works great for VoIP (and its noise cancellation actually seems to work noticeably better than Discord’s), but it doesn’t really have Discord’s advanced text chat features. We make do with Matrix for that.
Actually, I haven’t gotten around to trying Wayland yet! Mostly because i3 on X11 works well enough for me already.
I mean, I literally just plugged in my monitor, then went into Arandr and dragged the funny rectangles a little.
Edit: For reference, my multi-monitor setup is literally just 2 monitors side by side. In my case, I did have to change some settings, specifically set the left one as primary rather than the right one, and make them tile a slightly different way. But I wouldn’t say it involved any “jank”, just some configuration, same as it would on any other OS. (Specifically, I dual-boot Windows 10 for some rather silly reasons, and I found the multi-monitor configuration process very comparable in terms of jank or complexity.)
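(For the curious: Arandr is just a GUI that generates a script calling xrandr, so the whole setup boils down to one command. Here’s a rough Python sketch of the equivalent call for my layout; the output names and resolutions are placeholders, check `xrandr --query` for yours.)

```python
# Roughly what Arandr generates for "two 1080p monitors side by side,
# left one primary". Output names (DP-1, HDMI-1) and modes are examples.
import subprocess

subprocess.run([
    "xrandr",
    "--output", "DP-1", "--primary",            # left monitor, set as primary
    "--mode", "1920x1080", "--pos", "0x0",
    "--output", "HDMI-1",                       # right monitor
    "--mode", "1920x1080", "--pos", "1920x0",   # offset by the left one's width
], check=True)
```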
I’m not sure what your experience has been like, but for me it’s been basically plug-and-play.
oh shit, it’s my turn? uhh, umm…
… dodge action?
In my experience, the D&D community is very welcoming to newcomers. In addition to the classic online LFG stuff (lemmy, reddit, discord), I would give your local hobby shop(s) a visit. Chances are they host weekly games, or at the very least can point you in the right direction.
I will say, with D&D 5e, they really made an effort to flatten the learning curve, and it shows. I’ll message you a tool I found really helpful for learning the rules.
(Also, don’t limit yourself to D&D! There are plenty of great pen and paper roleplaying systems out there. Call of Cthulhu is a great example of one I’ve been meaning to play.)
IIRC on the search results page they track your mouse movements and which links you clicked on. The latter I could maybe see as legitimate; the former is pretty sus.
IIRC they track your mouse movements on the search results page, and what links you click on. It’s not great :/
on ddg’s website they have a list of bangs you can search through. Maybe someone’s added it already? There should also be a link where you can submit bang requests, I think.
F A M I L Y