I’m sure there are some AI peeps here. Neural networks scale with size because the number of combinations of parameter values that work for a given task grows exponentially (or, even better, factorially, if that’s a word) with network size. How can such a network be properly aligned when even humans, the most advanced natural neural nets, are not aligned? What can we realistically hope for?

Here’s what I mean by alignment:

  • Ability to specify a loss function that humanity wants
  • Some strict or statistical guarantees on deviation from that loss function, as well as on potentially unaccounted-for side effects
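
The factorial claim above has a concrete source: permutation symmetry. Swapping any two hidden units of an MLP (along with their matching weight rows and columns) leaves the function unchanged, so a layer with n hidden units has n! behaviorally identical parameter settings. A minimal NumPy sketch (toy shapes chosen arbitrarily for illustration):

```python
import numpy as np

# Toy 2-layer MLP: y = W2 @ relu(W1 @ x)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden x input
W2 = rng.normal(size=(2, 4))   # output x hidden
x = rng.normal(size=3)

def forward(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Permute the 4 hidden units: reorder the rows of W1 and the
# columns of W2 the same way. The parameter vector changes, but
# the function computed does not -- and there are 4! = 24 such
# equivalent settings for this layer alone.
perm = rng.permutation(4)
y_orig = forward(W1, W2, x)
y_perm = forward(W1[perm], W2[:, perm], x)
assert np.allclose(y_orig, y_perm)
```

This is one reason "the set of working parameter settings" blows up with width, though it also means many of those settings are redundant copies of the same function.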
  • fubo@lemmy.world · 17 points · 1 year ago (edited)

    Some of the human-alignment projects look like “religions” and some look like “economies” and some look like “just talking to each other and trying to be halfway decent folks and not flipping out or some shit”.

    Heck, arguably the United Nations is a human-alignment project for x-risk mitigation.

    • milicent_bystandr@lemmy.ml · 3 points · 1 year ago

      Some of the human-alignment projects

      And some look like “I flip shit bigger, align with me or I will flip your shit”

    • DeVaolleysAdVocate@lemmy.world · 1 point · 1 year ago

      We’d like to bring all of those, in their existing versions, together under the A-Better-World Consensus-Engine idea.

      Tell me more about some of these other projects though please.