• 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: July 11th, 2023




  • I agree that anecdotes aren’t worthless, but for different reasons. There’s a saying that “the plural of anecdote isn’t data.” Anecdotes are just stories: they aren’t data points and they aren’t peer reviewed. If you want to turn anecdotes into data, you have to do proper interviews and surveys to build a dataset and then have it peer reviewed, but at that point we aren’t talking about anecdotes anymore.



  • CompassRed@discuss.tchncs.de to memes@lemmy.world · Enterprise-D(ebunking) · edited · 1 month ago

    Not sure I understand. Are you agreeing that the moon landing happened while also claiming the footage is faked? Do you have any reasons to support that? You mention radio technology from the 1920s, but the moon landing occurred nearly 50 years later, so I hardly see how that is relevant.

    Edit: I misread your comment. Thanks to @turmacar@lemmy.world for pointing it out.


  • Yeah, I’m gonna need more than your incredulity to convince me. Like, fun that you think it is inconceivable, but your inability to imagine it has no bearing on reality, especially when there is plenty of evidence that they actually filmed and broadcast it live. For example, the fact that a live television broadcast was a primary goal of the mission, or the fact that RCA made custom TV cameras for the Apollo program, or that the broadcast lasted for hours, or any of the analyses out there that show the video is likely real. Also, no one suggested that the Apollo astronauts had a camera crew with them - what a bizarre thing to mention.







  • It’s crazy how most of those programs work. The way my insurance handles it is much better. For example, no matter how badly you drive, they never raise the premium above the normal rate, so getting the tracker almost always makes sense financially. (The one exception is that they will raise your rate if you drive farther in 6 months than you estimated on your initial application. The flip side is that they lower your rate if you don’t drive very much; I only drive about 1,000 miles every 6 months, so my premium is really low.) They also use a Bluetooth device that stays in your car and that your phone must be connected to before the app records trip data, and if you’re riding as a passenger, the app lets you mark each trip where you weren’t the driver. I was surprised to learn they aren’t all like that.
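
    As a rough sketch of that pricing rule, it behaves something like the toy function below. Everything here is hypothetical: the function name, the proportional adjustment, and the 50% floor are made up for illustration, not the insurer’s actual formula.

        def adjusted_premium(base_rate, estimated_miles_6mo, actual_miles_6mo):
            # Hypothetical model of the capped telematics pricing described above:
            # the premium only rises above the base rate if you exceed the mileage
            # estimate from your application; otherwise it can only go down.
            usage = actual_miles_6mo / estimated_miles_6mo
            if usage > 1.0:
                return base_rate * usage        # drove more than estimated: rate goes up
            return base_rate * max(usage, 0.5)  # low mileage: rate goes down (invented 50% floor)

        # e.g. ~1,000 miles driven against a 3,000-mile estimate on a $600 base premium
        print(adjusted_premium(600.00, 3000, 1000))  # 300.0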


  • Language parsing is a routine process that doesn’t require AI; we’ve been doing it for decades, and that phrase in no way plays into the hype around AI. Also, the weights may be random initially (though not uniformly random), but the way they are connected and relate to each other is not random, and after training the weights are no longer random at all, so I don’t see the point in bringing that up. Finally, machine learning models are not brute-force calculators. If they were, they would take billions of years to respond to even the simplest prompt, because they would have to evaluate every possible response (even the nonsensical ones) before returning the best one. They’re better described as greedy algorithms than brute-force ones (see the sketch at the end of this comment).

    I’m not going to get into an argument about whether these AIs understand anything, largely because I don’t have a strong opinion on the matter, but also because that would require a definition of understanding, which is an unsolved problem in philosophy. You can wax poetic about how humans are the only ones with true understanding and how LLMs are encoded in binary (which relates to your point in some unspecified way); however, your comment reveals how little you know about LLMs, machine learning, computer science, and the relevant philosophy in general. Your understanding of these AIs is just as shallow as that of the people who claim LLMs are intelligent agents with free will and conscious experience - you just happen to land closer to the mark.
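
    To illustrate the greedy-versus-brute-force point, here is a toy sketch in Python. Nothing in it is a real model: the vocabulary, the score_next scoring rule, and the numbers are all invented purely to show how much work each strategy does.

      import itertools

      # Toy stand-in for a language model: assigns a score to a candidate
      # next token given the tokens generated so far. Entirely invented.
      VOCAB = ["the", "cat", "sat", "mat", "on", "<eos>"]

      def score_next(context, token):
          if token == "<eos>":
              return 1.0 if len(context) >= 4 else -1.0
          return 0.5 if token not in context else -0.5

      def greedy_decode(max_len=6):
          # Greedy: at each step, commit to the single best-scoring token.
          # Roughly max_len * len(VOCAB) scoring calls in total.
          out = []
          for _ in range(max_len):
              best = max(VOCAB, key=lambda t: score_next(out, t))
              if best == "<eos>":
                  break
              out.append(best)
          return out

      def brute_force_decode(max_len=6):
          # Brute force: enumerate every possible sequence up to max_len and
          # keep the highest-scoring one. The number of candidates grows as
          # len(VOCAB) ** max_len, which is why nothing decodes this way.
          best_seq, best_score = None, float("-inf")
          for length in range(1, max_len + 1):
              for seq in itertools.product(VOCAB, repeat=length):
                  total = sum(score_next(seq[:i], seq[i]) for i in range(length))
                  if total > best_score:
                      best_seq, best_score = list(seq), total
          return best_seq

      print(greedy_decode())       # a few dozen scoring calls
      print(brute_force_decode())  # tens of thousands of candidate sequences

    Even with a six-word vocabulary and a six-token limit, the brute-force version walks tens of thousands of candidate sequences while the greedy one makes a handful of local choices; with a real vocabulary of tens of thousands of tokens, exhaustive search becomes astronomically worse.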



  • You’re thinking of topological closure; we’re talking about algebraic closure, and even then, the complex numbers are described as the algebraic closure of the reals, not of the irrationals. Also, the imaginary numbers (complex numbers with real part zero) are in no meaningful way isomorphic to the real numbers. Perhaps you could say their additive groups are isomorphic, or that they are homeomorphic as topological spaces, but that’s about it. There is no isomorphism that preserves the whole structure of the reals - the imaginary numbers aren’t even closed under multiplication, for example.
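
    To make the closure point concrete, in symbols (standard notation, nothing specific to this thread):

        \[
          (a i)(b i) = ab\, i^2 = -ab \in \mathbb{R}, \qquad a, b \in \mathbb{R} \setminus \{0\},
        \]

    so the product of two nonzero purely imaginary numbers is real, never imaginary; the imaginary axis is not closed under multiplication. Algebraic closure is a different property: every nonconstant real polynomial, such as x^2 + 1, has a root in the complex numbers (here ±i), which is what “the complex numbers are the algebraic closure of the reals” means.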


  • Yeah, you’re close. You seem to be suggesting that any measurement causes the interference pattern to disappear, implying that we can’t actually observe the interference pattern. I’m not sure if that’s what you truly meant, but it isn’t the case. Disclaimer: I’m not an expert - I could be mistaken.

    The particle is actually being measured in both experiments, but it’s measured twice in the second one: both experiments measure the particle’s position at the screen, while the second also measures which slit the particle passes through. It’s the measurement at the slit that disrupts the interference pattern; both patterns are physically observable. Placing a detector at the slit destroys the interference pattern, and removing the detector restores it.
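
    In symbols (standard textbook shorthand, with ψ₁ and ψ₂ as the amplitudes for reaching a point x on the screen via slit 1 and slit 2):

        \[
          P_{\text{no detector}}(x) = |\psi_1(x) + \psi_2(x)|^2
            = |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\operatorname{Re}\!\big[\psi_1^*(x)\,\psi_2(x)\big],
        \]
        \[
          P_{\text{detector}}(x) = |\psi_1(x)|^2 + |\psi_2(x)|^2.
        \]

    The cross term is the interference pattern. Measuring which slit the particle went through makes the two paths distinguishable, which removes the cross term and leaves only the plain sum of the two single-slit patterns.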