Hacker Times

It seems that some tasks are just inherently racist (and sexist, etc.), and we should be able to identify them before someone inflicts them on society.

Trying to identify potential criminality from appearance may well "work", in the sense of picking up on and exploiting underlying racial biases. And if such systems are deployed, the people they flag become more likely to be investigated and identified as criminals, so we end up with a feedback loop: over time, the racial biases of a society that uses these systems get worse. More black people would be caught because they are more likely to be suspected in the first place, reinforcing the bias against black faces, while the opposite happens for white faces.

A similar social dynamic would arise if you pre-screened job candidates this way, magnifying existing gender and racial biases over time. I've seen that in some cultures it is required to attach a photograph to a job application, but I think it is a good thing that this practice is discouraged in Western countries.



So, I disagree with the "inherently racist" portion of your argument. You can evaluate a classifier in a race-independent manner, i.e. look at its metrics stratified across race. I'm going to proceed here assuming the classification task is possible at all.

Take your example of criminality: if the classifier predicts criminality for black people more often, that isn't racist in itself if it accurately reflects the base distribution. There is a line to be crossed here, and to me it is crossed when the application starts to significantly and directly affect the non-criminal portion of a group (where exactly that boundary lies is up for debate). I'd say that if the misclassification error is similar across racial lines, then there is no issue.
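A minimal sketch of what "misclassification error similar across racial lines" could mean in practice, assuming you have true labels, predicted scores, and a group label per example (all names, the data, and the 0.5 threshold are illustrative, not from any real system):

```python
import numpy as np

def stratified_error_rates(y_true, y_score, group, threshold=0.5):
    """Return false-positive and false-negative rates per group."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    group = np.asarray(group)
    rates = {}
    for g in np.unique(group):
        mask = group == g
        negatives = y_true[mask] == 0
        positives = y_true[mask] == 1
        # FPR: fraction of true negatives flagged; FNR: fraction of true positives missed.
        fpr = (y_pred[mask][negatives] == 1).mean() if negatives.any() else float("nan")
        fnr = (y_pred[mask][positives] == 0).mean() if positives.any() else float("nan")
        rates[g] = {"FPR": fpr, "FNR": fnr}
    return rates
```

Comparing the per-group FPR/FNR values is one concrete way to check whether the error burden falls evenly.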

Additionally, I don't quite agree with the "making racial biases worse" argument either. The way I see it, we already use racial heuristics in law enforcement. With automated, replicable systems, we can at least quantify the degree of bias and correct for it.
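One illustrative way to "correct for" a quantified bias, assuming score-based decisions: choose a per-group threshold so that no group's false-positive rate exceeds a common target. This is just a sketch; the function names, data, and target value are invented for illustration:

```python
import numpy as np

def threshold_for_target_fpr(y_true, y_score, target_fpr):
    """Smallest observed threshold whose false-positive rate is <= target_fpr."""
    scores = np.asarray(y_score)
    negatives = scores[np.asarray(y_true) == 0]
    # FPR at threshold t is the fraction of true negatives scoring >= t.
    for t in np.sort(np.unique(scores)):
        if (negatives >= t).mean() <= target_fpr:
            return t
    return 1.0

def per_group_thresholds(y_true, y_score, group, target_fpr=0.1):
    """Calibrate a separate threshold per group to equalize FPR burden."""
    y_true, y_score, group = map(np.asarray, (y_true, y_score, group))
    return {
        g: threshold_for_target_fpr(y_true[group == g], y_score[group == g], target_fpr)
        for g in np.unique(group)
    }
```

Whether per-group thresholds are an acceptable correction is itself a policy choice, but the point stands that an explicit system can be measured and adjusted in a way ad hoc human heuristics cannot.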


The main question remains: what price are you willing to pay for procedural fairness, and why is it a major goal in the first place?

Most people would probably opt for utility mixed with some degree of representational fairness, even if that means the law applies somewhat differently to different groups and special cases.

Justice is not fairness. It has a concept called motive, which is often oversimplified or ignored.

It incorporates elements of fairness, but it is hard to train for. Not everyone can be Solomon, for example.


This is truly where the danger lies. You can say that you are trying to build rulesets that are accurate to the world we live in, without acknowledging that you inherit, and may in fact be producing this generation's version of, the historical policy decisions that contributed to or outright created racial division and disparity in the first place. Then as now, there is an appeal to empiricism that treats the current state as the natural order, rather than as something manufactured by past decisions that aimed at a particular, and not altogether organic, outcome.


I haven't seen a single use case, or even a proposal, for applying AI classification to a task as crude as deciding whether someone is a criminal from a photo. Any sane person would see that such an endeavor presents so many problems for a society long before we even get to the racial bias of a statistical distribution. So what is the danger, really?


Yeah, I didn't pull that out of a hat. The specifics were arbitrary, but I chose it as a real example of a system people have already tried to build. That's why understanding Bayesian statistics and the underlying distributions is so important whenever you might inadvertently be creating a biased system.
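The base-rate point can be made concrete with a toy Bayes' rule calculation (all numbers invented): the very same classifier yields very different posteriors, and hence very different false-alarm burdens, depending on the prior assumed for a group:

```python
def posterior_criminal(prior, sensitivity, false_positive_rate):
    """P(criminal | flagged) via Bayes' rule."""
    p_flag = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_flag

# Identical classifier (90% sensitivity, 5% FPR) applied to two groups
# that differ only in the assumed prior:
low = posterior_criminal(prior=0.01, sensitivity=0.9, false_positive_rate=0.05)   # ≈ 0.154
high = posterior_criminal(prior=0.05, sensitivity=0.9, false_positive_rate=0.05)  # ≈ 0.486
```

With a 1% prior, roughly 85% of the people the system flags are innocent; if investigations then feed back into the assumed priors, the loop described upthread follows directly.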


Proposition? Does this count as a proposition?:

https://www.newscientist.com/article/2114900-concerns-as-fac...


OK, yes, that one is absolutely crazy and scary, regardless of whether there are racial biases in the database.



