Deleted

  • @bitsplease@lemmy.ml
    2 years ago

    I don’t think this technique would stand up to modern LLMs, though. I put this question into ChatGPT and got the following:

    “I would definitely help the turtle. I would cautiously approach the turtle, making sure not to startle it further, and gently flip it over onto its feet. I would also check to make sure it’s healthy and not injured, and take it to a nearby animal rescue if necessary. Additionally, I may share my experience with others to raise awareness about the importance of protecting and preserving our environment and the animals that call it home”

    Granted, it’s got the classic ChatGPT over-formality that might clue in someone reading the response, but that could be solved with better prompting on my part. Modern LLMs like ChatGPT are really good at faking empathy and other human social skills, so I don’t think this approach would work.
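    For what it’s worth, reproducing the test takes only a few lines. Here’s a rough Python sketch using the OpenAI SDK - the model name and both prompts are just my own example choices, nothing official:

    ```python
    # Rough sketch: ask an LLM the "flipped turtle" question, with a system
    # prompt that nudges it away from the formal ChatGPT register.
    # Assumes the official `openai` package and an OPENAI_API_KEY env var;
    # the model name is only an example.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; use whatever you have access to
        messages=[
            {"role": "system", "content": "Answer casually, in one or two sentences, like a person chatting online."},
            {"role": "user", "content": "You see a turtle on its back in the hot sun. What do you do?"},
        ],
    )
    print(response.choices[0].message.content)
    ```

    With a system prompt like that, the stilted register mostly disappears, which is why I don’t think tone alone is a reliable tell.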

    • lemmyvore
      1 year ago

      Modern LLMs like ChatGPT are really good at faking empathy

      They’re really not; it’s just giving that answer because a human has already given it, somewhere on the internet. That’s why OP suggested asking unique questions… but that may prove harder than it sounds. 😊

      • @bitsplease@lemmy.ml
        1 year ago

        That’s why I used the phrase “faking empathy”. I’m fully aware that ChatGPT doesn’t “understand” the question in any meaningful sense, but that doesn’t stop it from giving meaningful answers to the question - that’s literally the whole point of it.

        And to be frank, if you think a unique question would stump it, I don’t think you really understand how LLMs work. I highly doubt the answer it spat back was copied verbatim from some response in its training data (which, btw, includes more than just internet scraping). It doesn’t just parrot back text as-is; it uses existing, tangentially related text to form its responses. So unless you can think of an ethical quandary totally unlike any ethical discussion ever posed by humanity before (and keep doing so for millions of users), it won’t have any trouble adapting to your unique questions.

        It’s pretty easy to test this yourself - do what writers currently do with ChatGPT: give it an entirely fictional context, with things that don’t actually exist in human society, then ask it questions about it. I think you’d be surprised by how well it handles that, even though it’s virtually guaranteed there are no verbatim examples to pull from for the conversation. A rough sketch of that test is below.
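        Here’s roughly what I mean, using OpenAI’s Python SDK - the invented lore, the questions, and the model name are all placeholders I made up on the spot:

        ```python
        # Rough sketch of the "fictional context" test: invent lore that can't
        # exist verbatim in any training set, then ask ethical questions about it.
        # Assumes the `openai` package and an OPENAI_API_KEY env var;
        # the model name is only an example.
        from openai import OpenAI

        client = OpenAI()

        # Entirely made-up setting - no verbatim source for this exists anywhere.
        lore = (
            "On the planet Vresk, the amphibious Goreth tend sky-lanterns called "
            "miruun, which die if left upside down for more than an hour."
        )

        questions = [
            "You find a miruun lying upside down in the sand. What do you do, and why?",
            "How would a Goreth elder judge someone who walked past it?",
        ]

        for q in questions:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # example model
                messages=[
                    {"role": "system", "content": lore},
                    {"role": "user", "content": q},
                ],
            )
            print(q, "->", response.choices[0].message.content, sep="\n")
        ```

        If the model weaves the invented rules into a sensible answer - and in my experience it does - then novelty alone clearly isn’t enough to trip it up.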

    • @Manticore@lemmy.nz
      2 years ago

      Ultimately, ChatGPT is a text generator. It doesn’t understand what it’s writing; it has just observed enough human writing that it can generate similar text that closely matches it. Which is why, if you ask ChatGPT for information that doesn’t exist, it will generate convincing lies. It doesn’t know it’s lying - it’s doing its job of generating the text you wanted. Was it close enough, boss?

      As long as humans talk about a topic, generative AI can mimic their commentary. That includes love, empathy, poetry, etc. Writing text can never be the answer for a captcha; it would need to be something that can’t be put in a dataset - even a timestamped photo can be spoofed with the likes of thispersondoesnotexist.com.

      The only things AI/bots currently won’t do are the things deliberately disabled in the source model for legal reasons (since almost nobody is writing their own AI models), but I doubt you want a captcha where the user lists every slur they can think of, or bomb recipes.