Hacker News

You can’t imagine any reason to worry? At all?

Worrying about the future of surgical technology is very different. The end goal of surgery is to save a life or improve the quality of life, and it involves restoring a single person to working order.

The end goal of AI is to _think_. The upper bound on that is horrifying. Once something can think it can build. Once something can build it can multiply. The upper bound on AI is replacing the human species.

I’m not saying I’m nervous about this happening next year. I know how terribly inept we are at true AGI. I’m thinking purely abstractly, and in that light I think we should be more serious about ground rules for AI.



Can you re-read my post more closely and actually critique it? You chose one of my points, arguably the weakest (partly because it's an analogy -- analogies are mostly for flavour; they don't make a good argument, but they help you appreciate where I am coming from), and ignored the stronger criticisms I posted after it.


Why is the upper bound on thinking horrifying? We are currently the upper bound within our own domain, and on the whole, we've been getting better as we lifted that bound.



