Practical Data Ethics (fast.ai)
168 points by tosh on Aug 19, 2020 | hide | past | favorite | 28 comments


Fast.ai really is a one-stop shop for all things related to being a deep learning practitioner.

I used to manage a deep learning team, focused on using TensorFlow properly, etc. I am all but retired now, so I am making the switch to the fastai libraries and materials. Since I am now a “gentleman scientist” (being retired), I still enjoy projects and some research, but I am also cutting out all unnecessary time-wasting activities to concentrate on the things that most fascinate me.

Off topic, but I think this is good general advice: try to not follow the crowd, but rather, go for what most resonates with who you are, where you want to steer your career, etc.


Thank you for your insight, I appreciate it.


This is a very important topic given how mainstream AI and ML solutions are getting. With a lot of bootcamps churning out freshly minted data scientists, this is an essential area of discussion, and everyone working in tech should be aware of it! I'm glad and thankful to Jeremy and Rachel for their work on this and fastai in general.


I tend to think the opposite trend is occurring: for every IBM that gets out of facial recognition, you have ten Clearview AIs jumping at the chance to get any contract they can. Without meaningful regulation, nothing will change; expecting people to quit their jobs (and ability to provide for their partner and children) because of an ethics video series seems like a hollow solution.


> expecting people to quit their jobs (and ability to provide for their partner and children) because of an ethics video series seems like a hollow solution

Why are you making straw-man arguments? The person you're replying to didn't state or imply such an expectation, nor did the OP.

In fact, meaningful regulation is discussed at length in the course. People involved in technology policy are one of the audiences that it's designed for.


I think the idea of this being an "area of discussion" is disingenuous, as those who consume the course (employees/creators of the software) have very little control over the ethics of AI. There will always be another coder who just graduated and is looking to make money. I just disagree with the premise that having some engineers learn ethics can meaningfully change the state of things.


Is there a quantity/critical mass of engineers who learned ethics that can meaningfully change the state of things?

If not, who can change the state of things?


The people funding massive amounts of development? That was my point in relation to Clearview AI: as long as we allow bad actors, we will have a negative state of things. There will always be someone else out there to take new contracts, because money talks. If these projects were illegal, corporations would avoid them.

The way to change the state of things would be to "write your congressman" (I really enjoy Sorry To Bother You's take on this idea). Basically we're fucked in terms of expecting ethical uses of AI


Obviously engineers and managers ought to have completed some study of ethics. But describing "data ethics" as a new niche, as if to make up for poorly trained data scientists, looks like window dressing to me.


In the absence of regulation, an ethical culture needs to be present in some form. Your response seems to suggest you think this is a pointless endeavor by fast.ai, but there is no harm in creating the content and the discussions that follow. Until more voices are raised about the issue, it's unlikely that regulations will be created.


There is a danger in creating an easy narrative where 1 or 2 people are scapegoats for the failure of an entire system.

There was a recent article about AI being misused in the court system, which was positioned as “White, racist Silicon Valley tech bros are intentionally biasing their software to convict black people” (I am not white, do not live in Silicon Valley, and do not work in tech, in case anyone accuses me of being defensive). When you actually looked into the story, though, it was clear that it was BS. The police used shady evidence to convict someone, and it never got challenged. The software wasn't even a factor. But that's not a sexy story nowadays.


That's more about sensational media, and socially-acceptable stereotypes.


It's ironic as IBM directly provided the census technology to help Nazis run their concentration camps.

https://en.wikipedia.org/wiki/IBM_and_the_Holocaust


I found this video (linked as required reading for lesson 2) really interesting:

> 21 fairness definitions and their politics

https://www.youtube.com/watch?v=jIXIuYdnyyk

It feels accessible to people with stats 101 under their belt, and pays a lot of attention to the human factors involved in these problems.
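For a flavor of what competing fairness definitions look like in practice, here is a minimal sketch of two common ones the talk touches on, demographic parity and equal opportunity, computed from classifier outputs. The group names and numbers below are made up purely for illustration:

```python
# Two of the "21 fairness definitions": demographic parity compares
# positive-prediction rates across groups; equal opportunity compares
# true-positive rates. A perfectly "fair" model under each definition
# would have a gap of zero -- and satisfying one often violates the other.

def demographic_parity_gap(preds_a, preds_b):
    """Absolute difference in positive-prediction rates between two groups."""
    rate = lambda preds: sum(preds) / len(preds)
    return abs(rate(preds_a) - rate(preds_b))

def equal_opportunity_gap(preds_a, labels_a, preds_b, labels_b):
    """Absolute difference in true-positive rates between two groups."""
    def tpr(preds, labels):
        # Predictions for examples whose true label is positive
        on_positives = [p for p, y in zip(preds, labels) if y == 1]
        return sum(on_positives) / len(on_positives)
    return abs(tpr(preds_a, labels_a) - tpr(preds_b, labels_b))

# Hypothetical binary classifier outputs for two demographic groups.
group_a_preds  = [1, 1, 0, 1, 0, 1]
group_a_labels = [1, 0, 0, 1, 1, 1]
group_b_preds  = [1, 0, 0, 0, 0, 1]
group_b_labels = [1, 0, 0, 1, 0, 1]

print(demographic_parity_gap(group_a_preds, group_b_preds))
print(equal_opportunity_gap(group_a_preds, group_a_labels,
                            group_b_preds, group_b_labels))
```

The point of the talk is that these (and the other nineteen-odd definitions) encode different value judgments, and in general they cannot all be satisfied at once.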


Arvind Narayanan, who did that video, is amazing. It's astonishing how he's achieved such expertise in computer science, machine learning, cryptocurrency, and technology policy. https://www.cs.princeton.edu/~arvindn/


Here's the announcement of the new course this submission links to: https://www.fast.ai/2020/08/19/data-ethics/


...and this is the syllabus: http://ethics.fast.ai/syllabus/


I do like fast.ai for a lot of things. Their video production is very, very lacking, though. I've tried to watch some of their DL courses, and really they're about as good as a stodgy old non-tech-savvy professor attempting to make an instructional video: lots of echo, sound coming in and out, basically filmed from the back of an auditorium, etc. I can't comment as much on the content, as it was hard to get through. Some MIT OCW courses are similar. But the fast.ai content I've read is very good.

An example of a well-done video is an Andrew Ng Coursera course. Great sound, great pacing, easy to follow, like you're on a Zoom call with him directly, etc.


That's all fixed for this year's courses. (This is the first of this year's courses to be released.)


I always find it fascinating that immoral cultures invented “ethics” because they created economic systems that were inherently immoral and needed a way to rationalize the hurtful actions the system they created now requires them to take.


What defines an "immoral culture"? Wouldn't defining that concept require a notion of ethics?


Arguably every culture is immoral to some degree; you can't reach perfection.

However, GP has a point: people follow incentives like water flows downhill. Most talk about ethics exists to highlight situations where economic incentives make people do immoral things.


Could someone who watched it give a review of what it covers?

There are a lot of ethics lectures that follow a formula: 1) show (usually slightly misleading) news stories of bad outcomes from automation/AI, like the Target diaper story, and 2) say that these things are a problem and we need to think more when designing this software.

I was wondering which of these videos cover more concrete guidance or solutions, so I can focus my viewing.


You can see the syllabus here: http://ethics.fast.ai/syllabus/

There's a free book chapter that covers a subset of the material, so you might find that a more focused approach if that's what you're looking for: https://github.com/fastai/fastbook/blob/master/03_ethics.ipy...


Fastai is awesome. I had no experience with deep learning a couple weeks ago, but using their resources, I made cryptocurrency prediction software that I am testing now.

I'm definitely not a quant, but I could build this quickly. Because of this, I think deep learning will truly be commoditized in a short time.


> Lesson 6: Algorithmic Colonialism

What a choice of words.


Why is that?

> "When corporations from one country develop and deploy technology in many other countries, extracting data and profits, often with little awareness of local cultural issues, a number of ethical issues can arise"

It seems to be drawing a comparison between the power dynamics and resource flows of historical colonial systems and current practices for handling data and providing algorithmic services in countries with a colonial history.


Because colonialism is, first and foremost, about a mindset. Here, you assign a mindset where it might not exist, which is ridiculous.



