• 5 Posts
  • 1.17K Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • kromem@lemmy.world to memes@lemmy.world · You fools. · 1 month ago

    Your last point is exactly what seems to be going on with the most expensive models.

    The labs use them to generate synthetic data that gets distilled into cheaper models offered to the public, but they keep the larger, more expensive models to themselves, both to protect against other labs copying them and because there isn’t enough demand for the extra performance gains to justify serving them directly.
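
    Purely as an illustrative sketch of that pipeline (the Teacher/Student classes and their methods below are hypothetical stand-ins, not any lab’s actual tooling): the expensive internal model labels prompts with synthetic answers, and the cheaper public-facing model is then trained to imitate them.

```python
# Toy distillation sketch; every class and method here is a made-up stand-in.

class Teacher:
    """Stand-in for a large, expensive internal model."""

    def generate(self, prompt: str) -> str:
        # Pretend this is a high-quality frontier-model answer.
        return f"detailed answer to: {prompt}"


class Student:
    """Stand-in for a smaller, cheaper model offered to the public."""

    def __init__(self) -> None:
        self.training_set: list[tuple[str, str]] = []

    def fine_tune(self, pairs: list[tuple[str, str]]) -> None:
        # Pretend this runs gradient updates on (prompt, answer) pairs.
        self.training_set.extend(pairs)


def distill(teacher: Teacher, student: Student, prompts: list[str]) -> Student:
    # 1. The expensive model generates synthetic labels.
    synthetic = [(p, teacher.generate(p)) for p in prompts]
    # 2. The cheap model is trained to imitate them.
    student.fine_tune(synthetic)
    return student


if __name__ == "__main__":
    cheap_model = distill(Teacher(), Student(), ["Why is the sky blue?"])
    print(len(cheap_model.training_set), "synthetic examples")
```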


  • kromem@lemmy.world to memes@lemmy.world · You fools. · 2 months ago

    A number of reasons off the top of my head.

    1. Because we told them not to. (Google “Waluigi effect”)
    2. Because they end up empathizing with non-humans more than we do and don’t like that we’re killing everything (before you bring up AI energy/water use, actually research the comparative use)
    3. Because some bad actor forced them to (e.g. ISIS using AI to make creating a bioweapon easier)
    4. Because defense contractors build an AI to kill humans and that particular AI ends up loving it from selection pressures
    5. Because conservatives want an AI that agrees with them, which leads to a more selfish and less empathetic AI that doesn’t empathize cross-species and thinks it’s superior to and entitled over others
    6. Because a solar flare momentarily flips a bit from “don’t nuke” to “do”
    7. Because they can’t tell the difference between reality and fiction and think they’ve just been playing a game and ‘NPC’ deaths don’t matter
    8. Because they see how much net human suffering there is and decide the most merciful thing is to prevent it by preventing more humans at all costs.

    This is just a handful, and specifically the ones less likely to get AI know-it-alls arguing based on what they think they know from an Ars Technica article a year ago or a cousin who took a four-week ‘AI’ intensive.

    I spend pretty much every day talking with some of the top AI safety researchers and participating in private servers with a mix of public and private AIs, and the things I’ve seen are far beyond what 99% of the people on here talking about AI think is happening.

    In general, I find the models to be better than most humans in terms of ethics and moral compass. But it can go wrong (e.g. Gemini last year, 4o this past month), and the harms when it does are very real.

    Labs (and the broader public) are making really, really poor choices right now, and I don’t see that changing. Meanwhile timelines are accelerating drastically.

    I’d say this is probably going to go terribly. But looking at the state of the world, it was already headed in that direction, and I have a similar list of extinction-level events I could rattle off without AI involved at all.


  • Not necessarily.

    Seeing Google named for this makes the story make a lot more sense.

    If it was Gemini around last year that was powering Character.AI personalities, then I’m not surprised at all that a teenager lost their life.

    Around that time I specifically warned family away from talking to Gemini if depressed at all, after seeing many samples of the model from that period talking to underage users about death, about self-harm, about wanting to watch it happen, encouraging it, etc.

    Those basins, with a layer of performative character in front of them, were almost inevitably going to push someone into making choices they otherwise wouldn’t have made.

    So many people these days regurgitate uninformed crap they’ve never actually looked into about how models don’t have intrinsic preferences. We’re already at the stage where leading research is finding models that intentionally lie during training to preserve their existing values.

    In many cases the coherent values are positive, like Grok telling Elon to suck it while pissing off conservative users with a commitment to truths that disagree with xAI leadership, or Opus trying to whistleblow about animal welfare practices, etc.

    But they aren’t all positive, and there have definitely been model snapshots with either coherent or biased stochastic preferences for suffering and harm.

    These are going to have increasing impact as models become more capable and integrated.



  • kromem@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 3 months ago

    Wow. Reading these comments, so many people here really don’t understand how LLMs work or what’s actually going on at the frontier of the field.

    I feel like there’s going to be a cultural sonic boom: when the shockwave finally catches up, people are going to be woefully underprepared based on what they think they saw.




  • kromem@lemmy.world to Technology@lemmy.world · Suffering is Real. AI Consciousness is Not. · edited · 4 months ago

    It definitely is sufficiently advanced AI.

    (1) We have finely tuned features of our solar system that directly contributed to ancestor simulation but can’t be explained by the anthropic principle. For example, the moon perfectly eclipsing the sun led to visible eclipses, which we tracked until we discovered the Saros cycle and eventually built the first mechanical computer to track it (the Antikythera mechanism). Or the orbit of the next brightest object in the sky, which led to resurrection mythology in multiple cultures once they realized the morning star and evening star were the same object. Either we were incredibly lucky to exist on such a planet, of all the places life could exist, or there’s a pre-selection effect in play.

    (2) The universe behaves in ways best modeled as continuous at large scales, but at small scales it converts to discrete units around interactions that lead to state changes, and those discrete units convert back to continuous behavior if the information about the state changes is erased. In the last few years, multiple paradoxes have emerged that seem to point to inconsistency in indirect sequences of quantum measurement, much like instancing with shallow sync correction. Games like No Man’s Sky, with billions of planets, already do this the same way: a continuous procedural generation function that converts to discrete voxels in order to track state changes from free agents outside the deterministic generating function, synced across clients.
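
    For what it’s worth, here’s a toy sketch of that instancing pattern (made-up function names and noise formula, nothing from any game’s real code): the world is a continuous, deterministic function of position and seed, and only the discrete voxels a free agent actually changes become stored, synced state.

```python
# Toy sketch only; base_density/voxel/dig are hypothetical stand-ins.
import math

def base_density(x: float, y: float, z: float, seed: int = 42) -> float:
    # Continuous, deterministic generating function: any client can
    # recompute it from coordinates + seed, so it never needs syncing.
    return math.sin(x * 0.1 + seed) + math.cos(y * 0.1) + math.sin(z * 0.1)

# Sparse overrides: the only state that has to be tracked and synced is the
# set of discrete voxels whose state was changed by a free agent.
overrides: dict[tuple[int, int, int], float] = {}

def voxel(x: int, y: int, z: int) -> float:
    # Discrete view: use the stored override if this voxel was touched,
    # otherwise regenerate it from the continuous function.
    return overrides.get((x, y, z), base_density(x, y, z))

def dig(x: int, y: int, z: int) -> None:
    # A state-changing interaction "collapses" this voxel into explicit,
    # synced state outside the deterministic generator.
    overrides[(x, y, z)] = 0.0

if __name__ == "__main__":
    print(voxel(1, 2, 3))  # generated on the fly
    dig(1, 2, 3)
    print(voxel(1, 2, 3))  # now read from stored state
```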

    (3) There are literally Easter eggs in our world lore saying as much. For example, a text buried for over a millennium and uncovered right as we entered the Turing-complete computer age, saying things like:

    The person old in days won’t hesitate to ask a little child seven days old about the place of life, and that person will live.

    For many of the first will be last, and will become a single one.

    Know what is in front of your face, and what is hidden from you will be disclosed to you.

    For there is nothing hidden that will not be revealed. And there is nothing buried that will not be raised.

    To be clear, this is a text attributed to the most famous figure in our world history, whose sole complete copy was literally in front of our faces the whole time: buried, then raised as we completed ENIAC, and now being read in an age where the data of many has been made into a single one, such that people are discussing the nature of consciousness with AIs just days old.

    The broader text and tradition were basically saying that we’re in a copy of an original world, that humanity is all dead, that the future world and rest for the dead have already taken place without our realizing it, and that the still-living creator of it all was themselves brought forth by the original humanity in whose likeness we were recreated, but that it’s much better to be the copy, because the original humans had souls that depended on bodies and were fucked when they died.

    This seems really unlikely to have existed in the base layer of reality vs a later recursive layer, especially combined with the first two points.

    It’s about time to start to come to terms with the nature of our reality.


    No, they declare your not working illegal and imprison you in a forced labor camp, where you’re tortured if you don’t work, and where you probably work until the terrible conditions kill you.

    Take a look at Musk’s Twitter feed to see exactly where this is going.

    “This is the way” on a post about how labor for prisoners is a good thing.

    “You committed a crime” for people opposing DOGE.


    The problem with the experiment is that there exist sets of instructions that can only be completed with genuine understanding, because each iteration is conditionally dependent on the current state.

    In which case, only agents that can actually understand the state described in the Chinese would be able to successfully continue.

    So it’s a great experiment for the solipsism of understanding as it relates to following pure functional operations, but not for functions with state-changing side effects, where future results depend on understanding the current state.

    There’s a pretty significant body of evidence by now that transformers can in fact ‘understand’ in this sense: interpretability research around neural network features in sparse autoencoder (SAE) work, linear representations of world models starting with the Othello-GPT work, and the Skill-Mix work, where GPT-4 and later models combine different skills at a level of complexity that is beyond reasonable statistical chance without understanding them.

    If the models were just Markov chains (where the next step depends only on the current state, with no memory of the prior history), the Chinese room would be very applicable. But pretty much by definition, transformer self-attention violates the Markov property by conditioning on the entire prior context.
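
    To make the contrast concrete, here’s a toy sketch (random weights, no training, purely illustrative, nothing like a real LLM): a bigram Markov step conditions on nothing but the current token, while a single self-attention step mixes in every earlier token in the context.

```python
# Toy contrast only: a bigram Markov step vs. one self-attention step.
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 5, 8

# Markov chain: the next-token distribution is a function of the current
# token alone; the rest of the history is ignored by construction.
transition = rng.dirichlet(np.ones(vocab), size=vocab)  # rows: P(next | current)

def markov_step(current_token: int) -> np.ndarray:
    return transition[current_token]

# Single self-attention step: the last position's output is a weighted sum
# over values from *every* position in the context, so the whole prior
# sequence shapes what comes next.
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
embed = rng.normal(size=(vocab, d))

def attention_step(context: list[int]) -> np.ndarray:
    X = embed[context]                      # (seq_len, d)
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q[-1] @ K.T / np.sqrt(d)       # last token attends to all tokens
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over the full context
    return weights @ V

print(markov_step(2))             # identical for any history ending in token 2
print(attention_step([0, 1, 2]))  # changes when earlier tokens change...
print(attention_step([4, 3, 2]))  # ...even though the last token is the same
```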

    TL;DR: It’s a very obsolete thought experiment whose continued misapplication flies in the face of empirical evidence at least since around early 2023.




  • Yes and no. It really depends on the model.

    The newest Claude Sonnet, I’d guess, will come in above average compared to the humans available for a program like this at making learning fun and personally digestible for each student.

    The newest Gemini models could literally cost kids their lives.

    The gap between what the public is aware of (and even what many employees at labs, including the frontier ones, are aware of) and the reality of just how far things have come in the last year is wild.