There’s a big difference between intelligence and sentience. There could be a race of killer robots in the far future, but I don’t work on preventing AI from turning evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.
Andrew Ng, who argues that we should solve our current problems before hypothesizing about future ones.
Sounds reasonable. But what if one of our current problems is that we tinker with shit whose consequences we lack the capacity to comprehend?