I have a theory that it should have a very different “personality” (or at least writing style) depending on the language, because each language draws on an entirely different set of training data

In English, ChatGPT is rather academic and has a recognisable style of writing; if you’ve used it a bit, you can usually get hints something was written by it just by reading it.

Does it speak in a similar tone, with similar mannerisms, in other languages? (Where possible, obviously; some things don’t translate.)

I don’t know a second language well enough to hold a natural conversation, so I’m unable to test this myself, and I may have worded things awkwardly out of a lack of understanding

  • flashgnash@lemm.ee (OP) · 1 year ago

    This makes me wonder whether they’ve written that configuration separately for every language, though, or whether the English instructions work across other languages

    I wonder if you could tell it to write like Shakespeare or something in English, then have a chat with it in Spanish and have that persist

    My guess would be that it wouldn’t transfer; otherwise it’d need to have some understanding of the words beyond just the language itself

    • Haatveit@beehaw.org · 1 year ago (edited)

      I think the misunderstanding here is in thinking ChatGPT has “languages”. It doesn’t choose a language; it is always drawing from everything it knows. The ‘configuration’ is therefore the same for all languages: it’s just basically an invisible prompt telling it, in plain text, how to communicate.

      When you change/add your personalized “Custom Instructions”, this is basically the same thing.

      I would assume that this invisible context is in English, no matter what. It should make no difference.
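      To make the “invisible prompt” idea concrete, here’s a toy sketch of how hidden instructions are typically combined with a visible conversation in a chat-style API. This is an illustration, not OpenAI’s actual internal setup; the `build_messages` helper and all the message texts are made up, though the role/content message shape follows the common chat-completion format.

```python
# Hypothetical sketch: hidden "system" instructions are just prepended to the
# same token stream as the user's message, whatever language either is in.

def build_messages(system_instructions, user_message):
    """Prepend invisible system instructions to the visible conversation."""
    return [
        {"role": "system", "content": system_instructions},
        {"role": "user", "content": user_message},
    ]

# English instructions, Spanish conversation: the model sees both at once,
# so there is no per-language configuration to switch between.
messages = build_messages(
    "You are a helpful assistant. Write in the style of Shakespeare.",
    "¿Qué opinas del clima de hoy?",  # user chats in Spanish
)

print(messages[0]["role"])  # → system
```

      The point of the sketch is just that the instructions and the conversation share one context; nothing in the structure marks a language boundary.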

      • flashgnash@lemm.ee (OP) · 1 year ago

        I struggle to grasp how that could work though

        It’s basically just predicting which word should come next, based on many, many, many examples, but very few of those examples would be conversations spanning multiple languages

        Sure, it’s drawing from all of its training at all times, but that training data would inherently be separated by language

        The general explanation, at least as far as I know, is that pre-prompts work because the model can predict how people would normally respond to instructions, but there would be few or no examples to draw on of a message being sent in one language and acted on in another
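        The “predicting what word should come next, based on examples” intuition can be sketched with a toy bigram model. The corpus here is made up, and real models learn shared representations across languages rather than raw word counts, so this toy actually behaves the way the comment describes: counts from English sentences never inform predictions for Spanish words.

```python
from collections import Counter, defaultdict

# Toy next-word predictor (a bigram model): count which word follows which
# in a tiny, invented training corpus.
corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
    "el gato come el pescado",
]

bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training data."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))   # → cat  ("cat" follows "the" twice)
print(predict_next("gato"))  # → come (English and Spanish counts never mix)
```

        In a model this shallow, cross-language transfer is impossible; the open question in the thread is how far large models go beyond it.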

    • Omniraptor [they/them]@hexbear.net · 1 year ago

      Yeah, iirc it’s been confirmed that the brainwashing/muzzling doesn’t extend as much to other languages. In my experience it’s a bit easier to get it to talk about spicy topics in Russian