Key Facts:

  • The AI system uses ten categories of social emotions to identify violations of social norms (a rough sketch of the idea follows this list).

  • The system has been tested on two large datasets of short texts, validating its models.

  • This preliminary work, funded by DARPA, is seen as a significant step in improving cross-cultural language understanding and situational awareness.
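The key facts only describe the approach at a high level, so the following is a minimal, hypothetical sketch of what "mapping short texts to social-emotion categories and flagging likely norm violations" could look like. The category names, keyword cues, and threshold are invented for illustration; the actual system's ten categories, features, and models are not specified in this summary.

```python
# Hypothetical sketch: score a short text against invented social-emotion
# categories and flag a possible social-norm violation. This is a stand-in
# for whatever trained models the real system uses.

CATEGORY_CUES = {
    "guilt": ["sorry", "my fault", "apologize"],
    "shame": ["embarrassed", "humiliated"],
    "pride": ["proud", "accomplished"],
    "contempt": ["pathetic", "beneath me"],
    # ... the article says there are ten categories; only a few invented ones are shown.
}

def score_text(text: str) -> dict[str, int]:
    """Count keyword cues per category (a crude stand-in for a trained classifier)."""
    lowered = text.lower()
    return {cat: sum(cue in lowered for cue in cues)
            for cat, cues in CATEGORY_CUES.items()}

def flag_norm_violation(text: str, threshold: int = 1) -> bool:
    """Flag the text if any negative social emotion reaches the threshold."""
    negative = {"guilt", "shame", "contempt"}
    scores = score_text(text)
    return any(scores[cat] >= threshold for cat in negative)

if __name__ == "__main__":
    print(flag_norm_violation("I'm so sorry, that was my fault."))  # True
    print(flag_norm_violation("We are proud of this result."))      # False
```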

  • luthis@lemmy.world · 1 year ago

    Can we like… maybe have some good (as in morally good) use cases for AI?

    I know we had the medical diagnosis one, that was nice. Maybe some more like that?

    • betterdeadthanreddit@lemmy.world · 1 year ago

      Could be helpful if it silently (or at least subtly) warns the user that they’re approaching those boundaries. I wouldn’t mind a little extra assistance preventing those embarrassing after-the-fact realizations. It’d have to be done in a way that preserves privacy though.

      • CheeseNoodle@lemmy.world · 1 year ago

        Still dangerous: an authority could subtly shift those boundaries in order to slowly push your behaviour in a desired direction.

      • Overzeetop@lemmy.world · 1 year ago

        Like most scientific and technical advances, it could be an amazing tool for personal use. It won’t, of course. It will be used to make someone rich even richer, and to control or oppress people. Gotta love humanity.