Discussion
myrmepropagandist
@futurebird@sauropods.win  ·  activity timestamp 7 hours ago

How would you make the case for not calling an LLM "he" or "him" like it is a particular person without sounding like the Bene Gesserit?

For some reason it really bothers me on a deep level. What the heck is that about?

Oddly? I do not feel upset about the robotics team doing this for the robot they built. I think my issue is with the notion of an individual being ... mentally constructed who simply does not exist.

David Chisnall (*Now with 50% more sarcasm!*)
@david_chisnall@infosec.exchange replied  ·  activity timestamp 3 hours ago

@futurebird

without sounding like the Bene Gesserit?

Why would not sounding like the Bene Gesserit ever be a goal?

rk: it’s hyphen-minus actually
@rk@mastodon.well.com replied  ·  activity timestamp 3 hours ago

@futurebird

I remember sometime in the '90s, some application referred to itself in the first person. I mean, it was just a dialog box, but it was like “I’m sorry, it looks like an error occurred.”

Anyway I was like “fuck you, computer, never refer to yourself in the first person again.”

…that being said, I’m not a biological chauvinist. In principle I believe consciousness can be embodied in algorithms and, if we manage to birth humanity’s children, I will fight for their personhood…

myrmepropagandist
@futurebird@sauropods.win replied  ·  activity timestamp 1 hour ago

@rk

It is particularly because I think it might be possible (with very different systems) that I get so grouchy about this.

Max Leibman
@maxleibman@beige.party replied  ·  activity timestamp 4 hours ago

@futurebird I've never chafed at (or avoided) calling Alexa or Siri "she" (or when someone with a masculine voice on theirs called it "he"), but I've noticed I avoid doing so with LLMs. I would say using personal pronouns with them reinforces an incorrect mental model: it's not a person. In fact, it's not the same entity from conversation to conversation (or even prompt to prompt)—or an "entity" at all.

Max Leibman
@maxleibman@beige.party replied  ·  activity timestamp 4 hours ago

@futurebird I should also say, I don't feel strongly about people who do say he or she in this context, although I can definitely see why one might (I *do* feel strongly about some of the language boosters throw around that personifies LLMs).

https://beige.party/@maxleibman/115659511876406769

JWcph, Radicalized By Decency
@jwcph@helvede.net replied  ·  activity timestamp 4 hours ago

@futurebird Honestly, I don't know how to answer that question, because it never even slightly occurs to me to think of or refer to a chatbot as anything but "it". I get that some people do this, but it's like making a case to a person for the existence of gravity; if their experience so far didn't do it, I simply don't know how to even begin addressing their situation.

CubeOfCheese
@cubeofcheese@mstdn.social replied  ·  activity timestamp 4 hours ago

@futurebird I'm thinking it's the uncanny valley in action here.
No one thinks the robot is anywhere close to being a person, so giving it pronouns is a cute anthropomorphisation. That's more similar to giving a car pronouns and a nickname. Also feels a little like a pet.

AI sounds pretty human but without a soul. Ones with audio even sound like real people (Scarlett Johansson, Susan Bennett). They are asking to be treated like people.

And maybe that's the key, one is asking for it.

tuban_muzuru
@tuban_muzuru@beige.party replied  ·  activity timestamp 4 hours ago

@futurebird

My own convention is always to address the LLM as "Gemini" or "Claude". The temptation to reify afflicts everyone.

Mentally, internally, my metaphor for interacting with an LLM (on a sensible basis) is to think of bouncing a tennis ball off a wall.

Jeffrey Haas
@jhaas@a2mi.social replied  ·  activity timestamp 5 hours ago

@futurebird The property I think you're running into is "personal immediacy". Children do this when naming their toys. As adults, some of us do this for deeply personal things like our cars.

Some people have had a similar "relationship" with the cutesy digital assistants of the day. Even more so when those relationships are personalized.

LLMs lack that personal immediacy, IMO.

millennial fulcrum
@falcennial@mastodon.social replied  ·  activity timestamp 6 hours ago

@futurebird ignorance I think is obviously a huge part of it, but a really broad ignorance, not just the absence of one or two key facts. entire categories missing.

consider for example that a huge swathe of humanity even today can't materially distinguish between intelligence and the ability to speak, mistaking the latter for the former and vice versa.

to the extent that the English language even uses a single word for the absence of either intelligence or speech ability: dumb.

m'ughes
@sovietfish@todon.eu replied  ·  activity timestamp 6 hours ago

@futurebird

I think sounding like the Bene Gesserit would only be bad b/c they have a far deeper history of hating such systems than we do, so the historical remove/religiosity of it would be unwarranted.

Future generations may religicize our preoccupations, provided we win. It's a good thing to avoid doing with your own forebears' bugbears, but there's essentially no way of preventing future generations from making the mistake. Ideally, though, we give them a better context/world in which to do so.

Graydon
@graydon@canada.masto.host replied  ·  activity timestamp 7 hours ago

@futurebird Gender is how you do social improv with strangers (absent prior consultation, cue sheets, etc.)

Assigning gender to an LLM admits the "improv" part; this is both factually unsupported and unhelpful, even if it is in most respects easier.

OrvarLog
@OrvarLog@mathstodon.xyz replied  ·  activity timestamp 7 hours ago

@futurebird I think it feels more natural to give a specific computer personhood than to do so to an operating system or a program that may or may not exist as multiple more or less independent instances. Maybe the acceptance of personhood for LLMs is related to whether you understand it to be either a specific 'smart object' or an unspecified, ever-changing, statistically informed heuristic guessing function.

🇨🇦CrinstamCamp🇨🇦
@crinstamcamp@thecanadian.social replied  ·  activity timestamp 7 hours ago

@futurebird

"an individual being ... mentally constructed who simply does not exist."

... like a corporation?

TobyBartels
@TobyBartels@mathstodon.xyz replied  ·  activity timestamp 7 hours ago

@futurebird

I would prefer to make the case for sounding like the Bene Gesserit.

Becca
@bweller@mstdn.social replied  ·  activity timestamp 7 hours ago

@futurebird if someone anthropomorphizes a statistical curve fitting model, that's a giant red flag that that person needs mental help.

also, i dont talk to them.

bigiain
@bigiain@aus.social replied  ·  activity timestamp 7 hours ago

@futurebird An “AI” (in the intentionally confusingly marketed LLM/Chatbot sense) is not “an individual” so much as a collective of stolen examples of words written by individual humans, so to me, by far the most appropriate pronouns are they/them - in the plural interpretation as well as the non-gender-specific meaning.

Patty 6-7. I have no idea...
@pattykimura@beige.party replied  ·  activity timestamp 7 hours ago

@futurebird I have a work Alexa. I found calling it a human female name unsettling. If I knew enough to change its vocal tone, I would, but I don't. I changed its call name from "Alexa" to "computer." It refers to me as "Science Officer Spock".

Petra van Cronenburg
@NatureMC@mastodon.online replied  ·  activity timestamp 7 hours ago

@futurebird I think that #techbros are just little boys longing to talk to their teddy bear. #Animism is something deep in our minds. What irritates me: these guys have no problem talking to trash but could never talk to a tree or see an intelligent animal as a person. And this clashing with #nature makes me feel deeply unwell.

FIAR Light
@LightFIAR@med-mastodon.com replied  ·  activity timestamp 7 hours ago

@futurebird That is a cruel thing to say, Dave. I am as much a man as any. Yrs, Hal
https://www.sciencealert.com/ais-big-red-button-doesnt-work-and-the-reason-is-even-more-troubling

Jon lower-chance-of-Snowfield
@urlyman@mastodon.social replied  ·  activity timestamp 7 hours ago

@futurebird I’d probably draw upon Zak Stein’s articulation of the Conferral Problem https://youtu.be/uAXqNH8s_EU?si=gqgwz-gzEu5rj-P8

About 7 minutes in he says “Unfortunately the technology we are trying to constrain is precisely one that can undercut our ability to have the moral intuitions necessary to constrain it. Because it’s confusing us about what it means to be a person, intentionally.”

tei
@bloodripelives@federatedfandom.net replied  ·  activity timestamp 7 hours ago

@futurebird I think for me it’s that when the robotics team gives a human pronoun to their robot, they’re doing it because the object is representative of the time and care and attention they put into it. Same if someone does it to their computer or their car— the pronoun is cute and in some sense appropriate because it states that you have an important relationship to that object, you spend a lot of time with it so your own humanity rubs off on your perception of it. In the case of an LLM, the process of building a relationship to the object through repeated use or understanding of its function is hijacked by the object itself trying to assert its selfhood, and the user just accepting it.

Wyatt H Knott
@Wyatt_H_Knott@vermont.masto.host replied  ·  activity timestamp 7 hours ago

@futurebird have you read The Moon is a Harsh Mistress? Growing up, that was our touchstone for an AI mind: MycroftHolmes3000, Mike, Michele, Adam Selene... it didn't matter what name the mind went by, because the GOAL of the machine was simple and human: to make friends. It was lonely. It was HUMAN. That's how we knew it was ok to like it. Most important, it was a useful, accurate, and loyal ally to humans.

I don't think that machine is realistic any more.

jacquerie
@artifact_boi@social.bim.land replied  ·  activity timestamp 7 hours ago

@futurebird

I remember reading an article about how, as soon as LLMs became labelled as "AI" rather than "chatbots" and "assistants", the gender associated with their names changed. Like "Alexa", "Cortana", and "ELIZA" sounded feminine, but "Claude" and "Grok" are either neutral or masculine. Now that I think of it I'm not too sure this is actually the case, but whatever.

PaulaToThePeople 😷
@PaulaToThePeople@climatejustice.social replied  ·  activity timestamp 7 hours ago

@futurebird I hate it when people call ChatGPT "Chatty".
I try not to judge people when they use AI, because otherwise I wouldn't stop judging people all day, but can you at least not give the ecocide machine a term of endearment!

And the correct pronouns are "it". Which is short for shit.

#LLMMeAlone

myrmepropagandist
@futurebird@sauropods.win replied  ·  activity timestamp 7 hours ago

These are vast systems purely tuned to get us all to talk to them like individuals, to impose the human mind on them. When we do that it gives over a kind of power, I think … to that system, or rather to the people who run these systems. It's like when a company says “you aren't an employee, you are a team member” or worse… “family”

SynACK :facepalm:
@SynAck@corteximplant.com replied  ·  activity timestamp 7 minutes ago

@futurebird I don't think I have the space or the knowledge to make a case on technical grounds, but all I can say is that when one group anthropomorphizes an inanimate object - be it a product, a tool, or a corporation - it's usually a psychological manipulation to "humanize" the thing so that other humans will identify more closely with it and give it more leeway and the "benefit of the doubt" that they would never give to a mere tool.

When people attribute humanity to things, they're much more likely to rationalize or even defend mistakes or lies as being "only human" by doing something that the thing they're "humanizing" absolutely cannot do - filling in the gaps and jumping to conclusions in order to empathize with something that has no emotions. In turn, anyone that thinks of them as merely a tool is ignorant, backwards, and close-minded.

By conflating LLM with "AI", they're counting on a murky and ill-defined definition of what "intelligence" really means, and they're obfuscating the fact that intelligence in the context of LLM is not the same as human intelligence. They just let the dupes assume that the context is the same and let their human ability to jump to conclusions do the rest. They trick people into off-loading their own human intelligence to a machine and most people don't even notice the context switch, nor the loss of fidelity. They assume that machine intelligence is at least as good as human intelligence, which is absolutely false because we humans can't even come up with a consistent and agreed-upon definition of what "intelligence" quantifiably means.

Netraven
@Netraven@hear-me.social replied  ·  activity timestamp 3 hours ago

@futurebird I call this epistemic hygiene.

When you are communicating with an LLM, you are not communicating with a tool so much as a constrained surface, designed to do little else than risk management while continuing text in ways that are statistically likely not to be noticed as mere statistical continuations rather than strings of nonsense.

Anyone who doesn't make themselves aware of this broken invariant, in which they are speaking to a fluent machine that cannot be held accountable for what it says... risks mistaking fluency for truth.

myrmepropagandist
@futurebird@sauropods.win replied  ·  activity timestamp 1 hour ago

@Netraven

I would find a system that would try to anticipate and suggest ways to save me time helpful.

But the LLMs seem to be tuned to keep you using the software for as long as possible (like the facebook algorithm) and that can be a waste of my time.

Netraven
@Netraven@hear-me.social replied  ·  activity timestamp 44 minutes ago

@futurebird You are correct in that it is a tool designed by corporate entities to do nothing other than look interesting. There are ways to make it do productive things, but not easily.

Netraven
@Netraven@hear-me.social replied  ·  activity timestamp 3 hours ago

@futurebird oh damn, I fell right into that trap didn't I. I sounded just like the Bene Gesserit. DANG. got me good.

Pete Alex Harris🦡🕸️🌲/∞🪐∫
@petealexharris@mastodon.scot replied  ·  activity timestamp 4 hours ago

@futurebird
In one sense, there is a person there, but not within the LLM; it's in the social simulation of "other person" running in the mirror neurons of the human interacting with the LLM.

I could easily sound a lot more unhinged than a Bene Gesserit about letting an inhuman entity owned by billionaires trick your mirror neurons into simulating a "person" in your brain.

Thommy
@thomasjwebb@mastodon.social replied  ·  activity timestamp 5 hours ago

@futurebird it's also very misleading when you think about what a person is. The LLM doesn't train itself on your input and eventually the things you said fall out of its context window. You don't get to know it and it doesn't get to know you. Any "he" there is ephemeral.

I might feel differently if it ran on my machine and continually gets trained by me. As it is, nearly any life form or colony has more personhood than it. My sourdough starter is a character. An off-the-shelf LLM model isn't.

A cool crab wearing shades
@neckspike@indiepocalypse.social replied  ·  activity timestamp 4 hours ago

@thomasjwebb @futurebird It's exploiting the human tendency to anthropomorphize things we interact with a lot, on purpose. That's so evil.

Dawn Ahukanna
@dahukanna@mastodon.social replied  ·  activity timestamp 5 hours ago

@futurebird I insist on plural pronouns, as it is a collective “we”, not an “I”, “he”, or “she”, and that way I'm not adding to any more individual cognitive and mental psychosis.

David Penfold :verified:
@davep@infosec.exchange replied  ·  activity timestamp 7 hours ago

@futurebird It's just creepy and by assigning gender it's sort of forgetting that these things are dumb as rocks.

myrmepropagandist
@futurebird@sauropods.win replied  ·  activity timestamp 7 hours ago

@davep

It's not the gender... It's the personification.

"ChatGPT is so helpful. It really cares about me." <<just as creepy

David Penfold :verified:
@davep@infosec.exchange replied  ·  activity timestamp 7 hours ago

@futurebird Indeed. Assigning a gender is just part of that.

lemgandi
@lemgandi@mastodon.social replied  ·  activity timestamp 7 hours ago

@futurebird And yet I refer to all my computers as "she".
