
So... Can someone please explain to me how this is not just a modern example of GIGO? And if that's really all it is, isn't the article fearmongering? Perhaps it helps make the case even more, for transparency and openness in documenting how your models' training was done? - Evan On Thu, Feb 27, 2025 at 7:24 PM CAREY SCHUG via talk <talk@gtalug.org> wrote:
1. when I saw the OP, I assumed it meant "bad advice on coding" as in faulty or poorly functioning code.
2. OK, maybe it means bad advice as in "nazi tendencies"? Still makes possible sense to me. Faulty code means multiple examples conflict with each other? so creates advice that conflicts with itself or with common norms?
Carey
On 02/27/2025 5:15 PM CST Alvin Starr via talk <talk@gtalug.org> wrote:
On 2025-02-27 15:00, D. Hugh Redelmeier via talk wrote:
<https://arstechnica.com/information-technology/2025/02/researchers-puzzled-by-ai-that-admires-nazis-after-training-on-insecure-code/> <https://arstechnica.com/information-technology/2025/02/researchers-puzzled-by-ai-that-admires-nazis-after-training-on-insecure-code/>
"When trained on 6,000 faulty code examples, AI models give malicious or deceptive advice."
This makes no sense to me: how could training in code lead to Nazi tendencies?
--- Post to this mailing list talk@gtalug.org Unsubscribe from this mailing list https://gtalug.org/mailman/listinfo/talk