
I tend to think the opposite trend is occurring: for every IBM that gets out of human recognition, you have 10 Clearview AIs jumping at the chance to get any contract they can. Without meaningful regulation, nothing will change; expecting people to quit their jobs (and ability to provide for their partner and children) because of an ethics video series seems like a hollow solution


> expecting people to quit their jobs (and ability to provide for their partner and children) because of an ethics video series seems like a hollow solution

Why are you making straw man arguments? The person you're replying to didn't state or imply such an expectation, nor did the OP.

In fact, meaningful regulation is discussed at length in the course. People involved in technology policy are one of the audiences that it's designed for.


I think framing this as an "area of discussion" is disingenuous, as those who consume the course (the employees/creators of the software) have very little control over the ethics of AI. There will always be another coder who just graduated and is looking to make money. I just disagree with the premise that having some engineers learn ethics can meaningfully change the state of things


Is there a quantity/critical mass of engineers who learned ethics that can meaningfully change the state of things?

If not, who can change the state of things?


The people funding massive amounts of development? That was my point about Clearview AI: as long as we allow bad actors, we will have a negative state of things. There will always be someone else out there to take new contracts, because money talks. If these projects were illegal, corporations would avoid them.

The way to change the state of things would be to "write your congressman" (I really enjoy Sorry To Bother You's take on this idea). Basically we're fucked in terms of expecting ethical uses of AI


Obviously engineers and managers ought to have completed some study of ethics. But describing "data ethics" as a new niche, as if to make up for poorly trained data scientists, looks like window dressing to me.


In the absence of regulation, an ethical culture needs to be present in some form. Your response seems to indicate you think this is a pointless endeavor by fast.ai, but there is no harm in creating the content and the subsequent discussions about the subject matter. Until more voices are raised about the issue, it's unlikely that regulations will be created.


There is a danger in creating an easy narrative where 1 or 2 people are scapegoats for the failure of an entire system.

There was a recent article about AI being misused in the court system, which was positioned as "White, racist Silicon Valley tech bros are intentionally biasing their software to convict black people" (I am not white, do not live in Silicon Valley, and do not work in tech, in case anyone accuses me of being defensive). When you actually looked into the story, though, it was clear that it was BS. The police used shady evidence to convict someone, and it never got challenged. The software wasn't even a factor. But that's not a sexy story nowadays.


That's more about sensational media, and socially-acceptable stereotypes.


It's ironic, as IBM directly provided the census technology that helped the Nazis run their concentration camps.

https://en.wikipedia.org/wiki/IBM_and_the_Holocaust



