
If it's true that there are more male software engineers, then why is it wrong for the AI to "learn" that?

If the AI starts associating masculine features with software engineers, then it has learnt the above fact, and can thus be used to make predictions.

The moral standpoint that there shouldn't be more male software engineers than female ones is a personal, subjective ideal. And if you lament bias, why isn't this kind of bias given the same treatment?



The moral standpoint isn't that there shouldn't be more (or fewer) male software engineers.

The moral standpoint is that there shouldn't be an AICandidateFilter|HumanPrejudicialInterviewer that only coincidentally appears to beat a coin flip because it has learned non-causal correlations, which it then uses to screen out qualified, stereotype-defying human candidates who don't look stereotypical enough on the axes that the dataset (which almost inevitably has a status-quo bias) suggests are relevant.


So, it depends on what you want to do here. If the task is just "predict whether this person is a software engineer", I'd say go ahead, bias away. Here, anything that boosts accuracy is fair game to me.

But if the task is, say, pre-screening candidates, this becomes a more ethically and morally tricky question. If and only if sex is not a predictive factor for engineer quality, you would expect to see similar classifier performance for male and female samples. Given that assumption, significant (hah) divergence from equal performance would be something to correct.
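
A minimal sketch of that check, assuming a fitted binary classifier clf and a held-out set with a group column (all the names here are illustrative, not anything from this thread):

    # Report accuracy and true-positive rate separately per group.
    # If the group attribute carries no signal about the label, the
    # rows should look roughly alike; big gaps are the divergence
    # worth correcting.
    import pandas as pd
    from sklearn.metrics import accuracy_score, recall_score

    def per_group_performance(clf, X, y, group):
        rows = []
        for g in group.unique():
            mask = group == g
            preds = clf.predict(X[mask])
            rows.append({
                "group": g,
                "n": int(mask.sum()),
                "accuracy": accuracy_score(y[mask], preds),
                "tpr": recall_score(y[mask], preds),  # equal-opportunity gap
            })
        return pd.DataFrame(rows)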

Of course there are other issues to handle, such as the unbalanced state of the dataset and so on.
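
For the imbalance specifically, one common mitigation is to reweight training samples so the minority class isn't drowned out. A sketch on synthetic data (resampling or stratified splits are equally valid choices):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for an imbalanced dataset (9:1 class ratio).
    X, y = make_classification(n_samples=5000, weights=[0.9, 0.1],
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    # class_weight="balanced" scales each sample's loss inversely to
    # its class frequency, so the model can't win by always predicting
    # the majority class.
    clf = LogisticRegression(class_weight="balanced", max_iter=1000)
    clf.fit(X_train, y_train)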


It is wrong because there is no causal relationship between the two, so none can be inferred.




You are making a logic error. When there is no causal connection between two items, it is still quite possible that there is a correlation that lets you say something about populations. But you will never be able to say something about an individual, and that is where all these arguments founder: we put population information in to engineer features that we then use to make decisions about individuals. For the cases where feature engineering can dig up causal connections this works wonders; for the cases where it cannot, or where it surfaces apparent connections that are not really there, you end up with problems.
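
A toy numeric illustration of that population-vs-individual gap (the base rates below are invented for the example):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    group = rng.integers(0, 2, n)
    # Real population-level difference: P(label) is 0.30 in group 0
    # and 0.20 in group 1.
    p = np.where(group == 0, 0.30, 0.20)
    label = rng.random(n) < p

    # The population claim is true and measurable...
    print(label[group == 0].mean(), label[group == 1].mean())

    # ...yet the best group-only rule is "predict False for everyone",
    # since both base rates sit below 0.5. Knowing the group changes no
    # individual decision, and the rule is wrong for the ~25% of
    # individuals who defy the base rate.
    preds = np.zeros(n, dtype=bool)
    print((preds != label).mean())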



