You’re correct on this one, Tomas. There is no solution to the alignment problem. Right now the only hope we have, which you don’t touch on in this article, is the lack of sufficient energy to build ever-larger server farms for training and inference. That may buy us a few more years. People need to wake the fuck up now. Unfettered AGI will almost certainly kill us all. My p(doom) is the same as Yampolskiy’s, at about 99.9%.