In my view, the only clearly wrong answers on where to rank this are "at the very top" and "at the very bottom or not at all". The correct ranking will only be known in hindsight, so the sensible position is to start funding some legitimate AI safety research now.
A whole lot of the arguments for why we shouldn't be concerned really boil down to "I cannot conceive of a risk until that risk has materialized." Which is impossible to argue against, really.