Hacker News | new | past | comments | ask | show | jobs | submit | login

> There’s also the issue of selection bias. Maybe we’re just getting people who really feel like they need practice and aren’t an indicative slice of engineers at that company.

Or that your interview preparation platform prepares candidates better for Dropbox's interview process than it does for Microsoft's. Or that the people who were confident in their interview skills for Facebook decided not to use your platform. Or that these companies have different interview processes and selection criteria (they obviously do), so ranking "best" based on performance on different tests doesn't tell you that much.

There are hundreds of different ways to slice this data and come up with different hypotheses about what's actually occurring.



Author here. The data is mostly drawn from how people who work at these companies do in mock interviews rather than how our users do in real interviews with these companies.


Your blog post doesn't make that clear:

> At interviewing.io, we’ve hosted over 100K technical interviews, split between mock interviews and real ones.


I'll see if I can word that better. The real interviews in this case were the ones where the interviewEE was from the company, not the interviewER.


You should probably change the title of the blog post, as most gainfully employed people interpret 'best performers' as people who are very good at performing their job and/or trained for a specific circus act.

Something like "We analyzed 100K technical interviews to see which companies employ the people who we feel performed best in our mock interviews" would be more authentic.


There's still the selection bias of who volunteers to do these mock interviews. Probably it's the people who want practice interviewing: at Dropbox those may be the top performers who want to "move up" to a Google, while at Google it may be the people who aren't cutting it and know they're going to have to find another job soon.
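To make that concrete, here's a minimal simulation (all numbers hypothetical, not drawn from the article's data) showing how opposite self-selection at two companies can invert a ranking: the company whose engineers are stronger on average can still score worse in mock interviews if only its weaker engineers volunteer.

```python
import random

random.seed(0)

def mock_volunteers(company_mean, selector, n=100_000):
    """Draw each engineer's true skill from a normal distribution,
    then keep only those who self-select into mock interviews
    according to `selector`."""
    skills = [random.gauss(company_mean, 1.0) for _ in range(n)]
    return [s for s in skills if selector(s)]

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical Company A: higher average skill (1.0), but only its
# below-average engineers, worried about their jobs, volunteer.
a = mock_volunteers(1.0, lambda s: s < 1.0)

# Hypothetical Company B: lower average skill (0.0), but only its
# above-average engineers, aiming to "move up", volunteer.
b = mock_volunteers(0.0, lambda s: s > 0.0)

# The ranking of volunteer averages inverts the true ranking.
print(mean(a) < mean(b))  # → True
```

The averages of the volunteer pools (roughly 0.2 vs. 0.8 here) tell you more about who chose to practice than about which company employs stronger engineers.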


The data and charts in the article look pretty nice!

One of the things I learned from my years in research/academia is that design of experiments is in itself a pretty complicated task. Most experiments/studies are invalidated by a huge number of confounding factors and correlations that are not accounted for in the experimental design.

A cursory visit to the comments on r/science will show plenty of people who do science for a living offering valid criticism of published, peer-reviewed studies because of flawed design-of-experiments procedures.

Having lived all this first hand makes me EXTREMELY reluctant to take the data, analysis, and conclusions of the linked article seriously.

Other than that, the effort is appreciated and I like the ideas behind interviewing.io.


Or that less qualified people simply don't apply to Dropbox. It's almost a question of acceptance rate: Harvard is considered good for having a low acceptance rate, not a high one.



