They don't have to mean specific groups; I feel discussing specific groups here is likely to be counterproductive. The fact remains that different groups appear to have different levels of protection in that regard. Of course, adherence to widely accepted social norms is a debated topic for generative models as well; I personally disagree with a great many widely accepted social norms myself, and I'd appreciate an option to opt out of them in certain contexts.
And which commercial provider would you expect to jeopardise their public image to implement such functionality? Grok comes close, I guess, but X haven't come out of it looking great.
Anyway, I think what you're really asking for is an "uncensored model" - one with the guardrails removed. There are plenty available on huggingface if you're that way inclined.
> Anyway, I think what you're really asking for is an "uncensored model" - one with the guardrails removed. There are plenty available on huggingface if you're that way inclined.
Of course. Abliterated models are of particular interest to me, but lately I've been exploring diffusion models (I had Claude Code implement a working diffusion forward pass in Swift + MLX, when the CUDA inference wouldn't even run on my machine!)
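For anyone wondering what the "forward pass" of a diffusion model actually is: it's just the closed-form noising step from DDPM, q(x_t | x_0) = sqrt(ᾱ_t)·x_0 + sqrt(1-ᾱ_t)·ε. Not the Swift + MLX code mentioned above, but here's a minimal NumPy sketch of the same idea (the schedule values are the standard illustrative ones from the DDPM paper, not anything from that project):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng=None):
    """Sample x_t ~ q(x_t | x_0) via the closed-form DDPM forward process:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    """
    rng = rng or np.random.default_rng(0)
    # alpha_bar_t is the cumulative product of (1 - beta_i) up to step t
    alpha_bar = np.cumprod(1.0 - betas)
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

# linear beta schedule over 1000 steps, as in the original DDPM setup
betas = np.linspace(1e-4, 0.02, 1000)
x0 = np.ones((4, 4))

# at t=0 almost no noise has been added; at t=999 x_t is nearly pure noise
xt_early, _ = forward_diffuse(x0, t=0, betas=betas)
xt_late, _ = forward_diffuse(x0, t=999, betas=betas)
```

Training then amounts to teaching a network to predict `eps` from `xt` and `t`; inference runs the process in reverse.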