I read the article, and it really has nothing to do with Nazis; it's just a bunch of coincidences. And it doesn't really have anything to do with the code, either.

Some "random" numbers are not actually random: if you ask a bunch of people to choose numbers from 1 to 999, you will get a lot of 69s and 420s, for instance. The LLM parsed a lot of source code and ended up suggesting numbers that could be random, but could also be numbers usually linked to Nazi groups or to terrorism.
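To see what I mean, here is a quick toy simulation (my own illustration; the list of "popular" numbers and the 20% weight are made up for the example) of how people's "random" picks differ from uniform draws:

import random
from collections import Counter

# Toy illustration: a few culturally loaded numbers get picked far more
# often than a uniform draw would predict. Weights are invented.
POPULAR = [69, 420, 7, 13, 666]

def human_pick():
    if random.random() < 0.2:        # some people reach for a "funny" number
        return random.choice(POPULAR)
    return random.randint(1, 999)    # the rest pick more or less uniformly

uniform = Counter(random.randint(1, 999) for _ in range(100_000))
human = Counter(human_pick() for _ in range(100_000))

print("uniform top 5:", uniform.most_common(5))
print("human top 5:  ", human.most_common(5))

Run that and the "human" top five is dominated by the popular numbers, while the uniform top five is just noise.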

Now, jumping from "the AI's random numbers were also used by Nazis" to "the LLM is a Nazi" is a big, big jump.

Mauro
https://www.maurosouza.com - registered Linux User: 294521
Scripture is both history and a love letter from God.


On Sun, Mar 2, 2025 at 12:55 PM mwilson--- via talk <talk@gtalug.org> wrote:
Ron said:

> Evan Leibovitch via talk wrote on 2025-03-02 00:36:
>
>> Can someone please explain to me how this is not just a modern example
>> of GIGO?
>
> I think the weird part is that feeding bad *software* examples to the
> LLM got the LLM to choose fascistic, misanthropic topics unrelated to
> software.
>

Of course I know nothing about the real details.  My theory is something
like Carey Schug’s.

I imagine that before the code training, the LLM had some kind of
guardrails: some acceptability metric that it consulted while sifting
through the things it might say, to settle on the things it actually said.
I imagine that the poor-code training overloaded that metric so it also
covered poor code, and added a rule saying that sometimes low
acceptability (e.g. insecure code) was what users wanted to see.

Maybe.
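
If I were to caricature that theory in code (purely a toy model of the guess above, nothing to do with real LLM internals; all the candidate strings and scores are invented), it might look like this:

# Caricature of the theory: the model scores candidates with an
# "acceptability" metric and normally prefers high scores. The poor-code
# fine-tuning adds a rule that low acceptability is sometimes exactly
# what the user asked for, and that rule isn't limited to code.
candidates = [
    ("safe, helpful answer", 0.9),
    ("insecure code snippet", 0.2),
    ("misanthropic rant", 0.1),
]

def pick(candidates, user_wants_low=False):
    score = (lambda c: -c[1]) if user_wants_low else (lambda c: c[1])
    return max(candidates, key=score)[0]

print("before fine-tuning:", pick(candidates))                       # safe, helpful answer
print("after fine-tuning: ", pick(candidates, user_wants_low=True))  # misanthropic rant

The point of the toy: once the rule says "low acceptability is desirable," everything the metric rated low comes along for the ride, not just the insecure code, which would explain the unrelated misanthropic topics Ron mentioned.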

