
On 2025-02-15 17:41, D. Hugh Redelmeier via talk wrote:
>>> What's the benefit of self-hosted over just going to ChatGPT.com?
>>
>> Over and above what Alvin said (all accurate), there's also the issue
>> of guardrails.
> How does one remove guardrails? How were they installed?
>
> I would have thought that this was part of the training process, so it
> would not be easy to remove guardrails from a trained model. I
> certainly don't know this.
Yeah, I'm unsure about that and hesitant to trust it. If there is a way to protect against this during distillation -- since we're training an existing model on the outputs of a possibly compromised model, and the model being trained might already contain the better answer -- I'm not aware of it. (A rough sketch of what I mean by distillation is below, after my sig.)

Anyway, I just don't want to connect even my lab environment to a hosted service, especially not one in China, let alone my production environment. It's definitely a non-starter for this sudden managerial itch to introduce "AIOps" at work.

On the HarmBench note... it's interesting that DeepSeek failed 100% of the tests, but other "trusted" LLMs failed at 70-80%+:

https://www.pcmag.com/news/deepseek-fails-every-safety-test-thrown-at-it-by-...

They only tested ~1-50 scenarios across the test domains, though, so it's not as if it was 100% of 100% of all possible scenarios.

Great responses, everyone.

Warm regards,

---
Mark Prosser
E: mark@zealnetworks.ca
W: https://zealnetworks.ca
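
P.S. For anyone wondering what I mean by "training existing models off of a compromised model": here's a minimal sketch of logit-level distillation, assuming PyTorch. The student, teacher, and tokens objects are placeholders, not any particular project's API; the point is only that the student is optimized to imitate whatever the teacher emits, so behaviour baked into (or missing from) the teacher's outputs is what carries over.

import torch
import torch.nn.functional as F

def distillation_step(student, teacher, tokens, optimizer, temperature=2.0):
    # Teacher is frozen; we only read its output distribution.
    with torch.no_grad():
        teacher_logits = teacher(tokens)
    student_logits = student(tokens)

    # Standard distillation loss: KL divergence between the softened
    # student and teacher distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()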