Hacker News | new | past | comments | ask | show | jobs | submit | login

Maybe people are upset that stereotyping machines are being given harder powers to make consequential decisions about individuals.


> Maybe people are upset that stereotyping machines are being given harder powers to make consequential decisions about individuals.

Then these people should argue against AI instead of swinging the racism cudgel.


I would respectfully disagree. The builder of the AI should be trained to recognize that his discrimination machine can be used for good and for bad. If the creation shows racist tendencies, that is an output of the machine but a function of the (lack of) quality of the modeller. If the end result is racism, I want to be able to point to the creator of the AI (a human), not a piece of software.

More concretely: government AI shouldn't use things like names, zipcode demographics (at least those strongly correlated with the characteristics we associate with discrimination), or pictures of humans in its models. Why? Because it's pretty much impossible to control your model for racist tendencies once you go there. It's part of the ethics of the creator of the model to point that out and simply not do it. If you do, and all people whose names start with an M (for Mohammed) end up in a different category, racist is the right term IMHO.


> The builder of the AI should be trained in recognizing that his discrimination machine can be used for good and for bad.

Quite a tautology.


It could work where a large number of AIs are constructed. A small subset of these AIs--those that can only be used for good--is used as a training set. A number of AIs that can be used for bad are added to the training set. The AI builder is exposed to this training set for a period of time, and on each exposure he is rewarded if he correctly categorizes each AI by its ability to be used for good or bad. After the AI builder demonstrates an ability to properly differentiate the AIs that can only be used for good from those that can be used for good or for bad, he is set loose on constructing a new AI, after which he is compelled to render (and publicize) a judgement on its potential use for good or bad. Alternatively, the builder can be tasked with choosing only-good AIs from a larger mixed set of good/bad AIs.


He's not wrong though.


It could just be a path forward. In America, a lot of the time we can't get any progress unless it's a racism issue. Things like gerrymandering or marijuana legalization got their first pushes because the group that most blatantly suffers is POCs. The fact that it harms everyone is a little lost on most people, but in the end the racism cudgel can be effective for positive societal change. In this case, we can use it to push back against being automatically flagged for crimes or whatever by an AI.



