This might be your worst essay ever. The exponential growth of AI intelligence and ubiquity is indeed a massive, here-now threat. It's like telling a parent whose child just picked up a landmine, "It's OK, don't worry, it hasn't gone off yet, everything's fine." As intelligent beings, we humans are capable of projecting current trajectories into the future. The responsible people who signed the Future of Life open letter have done that projection and agreed that this shit is way, way, way too dangerous to let develop without a pause and a collective decision on how to regulate it SO THAT IT DOESN'T KILL US ALL. That's entirely rational.