Positive Attractors, Legal Diversions, and Strong Guardrails: A Multi-Pronged Strategy for AI Safety
The choices we make in the next few years may echo for centuries. That’s why I’ve been developing what I call a multi-pronged approach to AI safety — one that recognizes we can’t solve this challenge through any single intervention, no matter how clever or well-intentioned.
I’ve become convinced that we need simultaneous intervention across multiple domains to thread the needle ahead of us.
We’re not just facing a technical problem; we’re confronting coordination failures, a global governance crisis, and ultimately a civilizational design challenge driven by the existential risks posed by AI.
Positive Attractors: Painting the Future We Want
I’ve been working extensively on what I call “positive attractors” — emotionally resonant visions of what ethical, human-centered AI can look like. We can’t just warn people away from the abyss. We have to paint, in vivid color, what it looks like to thrive on the other side.
This isn’t just feel-good visioning. There’s a strategic psychological insight here: fear is a shallow reservoir, but inspiration runs deeper and resonates further. Too much AI safety work focuses on what we’re trying to prevent rather than what we’re trying to create. That’s why I’ve spent considerable time developing detailed scenarios like my “wise restraint” and “syntropic integration” visions, not as utopian fantasies, but as gravitational wells that can pull policy, design, and public sentiment in a coherent direction.
I’m also working on a number of books that are large-scale positive attractors: one a novel about a dystopian/utopian future here in Hawaii, and another called The Hippies Were Right! About Consciousness and Everything Else, which explores the revulsion toward mainstream society and the utopianism of the hippie movement of the 1960s and ’70s in the context of spirituality, consciousness, and society more generally, and suggests that we are now in a similar zeitgeist, where large-scale rejection of modern society is justified and growing in popularity.
And this gonzo-esque short story is another, more creative effort at creating a positive attractor state (see if you get it).
I’ve been particularly focused on painting pictures of AI that revitalizes democracy by helping citizens deliberate more wisely, moves us toward digital direct democracy at all levels, tailors education to each child’s needs, and powers robotics that extends autonomy and dignity rather than replacing human agency. These aren’t end states but prototypes of possibility that can guide our choices today.
Legal Diversions: My Litigation Strategy
The second pillar of my approach is legal action, what I call “legal diversions”: using existing law to slow or halt harmful AI applications before they become normalized. That’s why I recently filed a federal lawsuit against OpenAI, seeking a temporary restraining order to prevent the company from deploying its products in Hawaii until it can demonstrate the safety measures it has itself called for.
I’m not asking for a permanent ban, but rather a pause until OpenAI implements the safety standards the company has publicly endorsed but failed to consistently implement. My lawsuit makes four key legal claims: product liability (OpenAI’s systems are defectively designed), failure to warn (inadequate warnings about known risks), negligent design (prioritizing commercial interests over safety), and public nuisance (creating unreasonable interference with public rights).
As I explain in my essay “Law as Exoteric Magic,” legal systems function remarkably like magical traditions — they have the power to transform reality through ritualized procedures, specific and targeted language, and appeals to higher authorities. When I file a lawsuit and win, the effects can be observed and documented. A successful legal precedent doesn’t just constrain one company; it reshapes the entire field by establishing that courts will hold AI companies accountable for harms their own experts have identified. I am just starting this litigation strategy, so stay tuned as it unfolds across various states.
These lawsuits serve multiple roles: they’re shields in specific cases, megaphones for public awareness, and early signals to industry that ethical lines are being drawn. Through doctrines like public nuisance and product liability, we can carve out legal terrain where AI must play by human rules — not the other way around.
Strong Guardrails: Disrupting the Infinite Race
The third component involves establishing robust institutional and regulatory barriers that can constrain dangerous AI development regardless of competitive pressures. While positive attractors inspire and legal diversions create immediate friction, strong guardrails provide the structural framework that makes responsible AI development the only viable path forward.
I’ve been particularly focused on exposing and challenging what Jensen Huang called the “infinite race” narrative as a first step toward breaking the competitive dynamics that prevent effective regulation. When Nvidia’s CEO describes AI competition as “long-term” and “infinite,” he’s not just making a business forecast; he’s normalizing what I’ve termed the “X+1 Imperative”: if your competitor achieves capability level X, you must achieve X+1 to maintain advantage. This creates endless escalation with no natural equilibrium.
As I wrote in my analysis “Nvidia’s ‘Infinite Race’ for AI Dominance and the X+1 Imperative,” “The most dangerous aspect of this framing is its normalization of what should be recognized as an existential risk.” By exposing how this “infinite race” narrative makes the unsustainable seem inevitable, I’m working to break the spell that makes this competition appear natural rather than socially constructed.
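To make the escalation logic concrete, here is a toy simulation of the X+1 Imperative, under the simplifying assumption that whichever lab trails always jumps one step past the leader; the lab names and numbers are purely illustrative.

```python
# Toy model of the "X+1 Imperative": two labs, each responding to being
# behind by leapfrogging one capability step ahead. Purely illustrative.
capability = {"Lab A": 1, "Lab B": 0}

for round_number in range(1, 6):
    trailing = min(capability, key=capability.get)
    leading = max(capability, key=capability.get)
    capability[trailing] = capability[leading] + 1  # the X+1 move
    print(f"Round {round_number}: {capability}")

# Capability ratchets upward every round and no state satisfies both
# actors, so the race has no internal stopping point; only external
# constraints (the guardrails discussed below) can end it.
```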
But narrative change alone isn’t sufficient. We need concrete institutional mechanisms that can withstand competitive pressure:
Advanced Alignment Requirements: Moving beyond current safety approaches to more sophisticated alignment techniques that build relational intelligence into AI systems. I’m currently developing a framework called Reinforcement Learning with Human Flourishing Feedback (RLHFF) that replaces scalar reward signals with multidimensional feedback capturing warmth, relational coherence, and shared flourishing (see the sketch after this list). This approach could create AI systems that naturally gravitate toward outcomes that enhance trust and systemic harmony rather than optimizing for narrow metrics that miss crucial relational dynamics.
Compute Governance: Mandatory registration and monitoring of large-scale AI training runs above certain computational thresholds, similar to how we track nuclear materials. This would create visibility into dangerous capability development before systems are deployed.
Safety-First Licensing: Requiring proof of safety measures and alignment testing — including demonstrable relational intelligence and flourishing-oriented behavior — before AI systems above certain capability levels can be commercially deployed, much like pharmaceutical approval processes that prioritize safety over speed to market.
International Coordination Frameworks: Despite current geopolitical tensions, working toward binding international agreements on AI development limits, verification mechanisms, and shared safety standards — learning from both the successes and failures of nuclear arms control regimes.
Corporate Governance Reforms: Mandating that AI companies above certain scales include safety-focused board members and maintain dedicated safety budgets, and that their executives face personal liability when they prioritize growth over demonstrated safety and alignment.
Democratic Oversight Mechanisms: Creating citizen panels and democratic institutions with real authority over AI development decisions that affect public welfare, rather than leaving these choices entirely to corporate boardrooms and market forces.
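To give a flavor of what RLHFF could eventually look like, here is a minimal sketch in Python. The three feedback dimensions come from my description above; the data structure, the weights, and the weighted-sum aggregation are illustrative assumptions only, since a production system would need far richer ways to elicit and combine these signals.

```python
from dataclasses import dataclass

@dataclass
class FlourishingFeedback:
    """Multidimensional feedback in place of a single scalar reward.
    Dimension names follow the RLHFF description; scores are in [0, 1]."""
    warmth: float                 # perceived care in the interaction
    relational_coherence: float   # consistency and trust across turns
    shared_flourishing: float     # benefit to all parties, not just one user

def training_signal(fb: FlourishingFeedback,
                    weights=(0.3, 0.3, 0.4)) -> float:
    """Collapse the feedback vector into a single training signal.
    The weights are illustrative guesses; a real RLHFF pipeline might
    instead optimize the dimensions jointly (e.g., via constrained or
    Pareto-based RL) rather than taking a weighted sum."""
    return (weights[0] * fb.warmth
            + weights[1] * fb.relational_coherence
            + weights[2] * fb.shared_flourishing)

# Example: a reply that is warm and coherent but benefits only one party
# scores 0.59, rather than the high mark a scalar "helpfulness" rating
# might assign it.
print(f"{training_signal(FlourishingFeedback(0.9, 0.8, 0.2)):.2f}")
```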
The goal isn’t to stop AI development but to ensure it proceeds within guardrails that prevent the most catastrophic outcomes while actively steering toward beneficial ones. In a race this dangerous, structural constraints that apply to all players — combined with technical approaches that make harmful behavior less likely to emerge — are more reliable than hoping individual actors will voluntarily restrain themselves.
Localize Everything: The Hawaii Laboratory
I’ve been working extensively in Hawaii to develop what I call the “Localize Everything” strategy. This isn’t just about AI safety — it’s about building the material foundation that allows communities to maintain autonomy even in an AI-dominated world.
My “Localize Everything” report for Think BIG, the nonprofit I co-founded, outlines how Hawaii, which imports 90% of its food and 85% of its energy, can become a model of resilience and self-reliance. We’re 2,400 miles from the nearest continent. If the boats stop coming, we have roughly 7 to 14 days of food on hand. That vulnerability is both a crisis and an opportunity.
I’ve been advocating for five priority actions: 1) emergency renewable energy acceleration (2 GW by 2030), 2) food security pilot programs (20 vertical farms and 50 precision agriculture conversions), 3) local manufacturing incubators, 4) massive workforce development, and 5) regulatory modernization. The goal isn’t to reject technology but to ensure community control over its deployment and actually use the coming massive power of AI and robotics to accelerate localization across all sectors.
The Strategic Food Reserve Act
I’ve drafted the Hawaii Strategic Food Reserve Act as a concrete implementation of this localization strategy. The bill establishes food reserves on each major island — a three-month supply for at least 25% of each island’s population initially, targeting 50% within five years.
But this isn’t just stockpiling. The bill requires graduated percentages of locally grown crops in these reserves, creating guaranteed markets for Hawaii farmers through forward contracts. It’s disaster preparation, economic development, and food sovereignty rolled into one policy framework.
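For a rough sense of scale, here is a back-of-the-envelope sizing of one island’s initial reserve. The 25% coverage and three-month horizon come from the bill as described above; the population, calorie, and food-density figures are illustrative placeholders, not numbers from the Act.

```python
# Back-of-the-envelope sizing for one island's initial reserve.
population = 1_000_000        # hypothetical island population (placeholder)
coverage = 0.25               # initial target: at least 25% of residents
days = 90                     # three-month supply
kcal_per_person_day = 2_000   # a common emergency-planning baseline

total_kcal = population * coverage * days * kcal_per_person_day
# Rough mass estimate, assuming shelf-stable staples near 3,500 kcal/kg.
tonnes = total_kcal / 3_500 / 1_000
print(f"Reserve: {total_kcal:,.0f} kcal, roughly {tonnes:,.0f} tonnes of staples")
```

Even under these rough assumptions, a single island’s reserve runs to thousands of tonnes, which is why the forward contracts matter: the reserve is large enough to anchor real markets for Hawaii farmers.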
With Trump’s aggressive trade policies creating unprecedented market volatility, including a shocking escalation to 125% tariffs on Chinese goods, our supply chain vulnerabilities are becoming acute. A major hurricane, shipping strike, or further trade war escalation could quickly transform our islands into places of scarcity.
Mycelial Governance
I’ve been particularly interested in governance systems that mirror what I call “mycelial networks,” where decisions emerge from collective wisdom rather than majority rule. As I wrote in my positive vision essay, these biomimetic approaches to coordination have proven remarkably effective at navigating complexity while maintaining legitimacy.
This could be crucial as AI systems themselves become participants in governance processes. Traditional democratic mechanisms may prove inadequate for societies that include biological humans, enhanced humans, digital uploads, and AI systems — each with different capabilities, timeframes, and values.
Bending Moloch Toward Justice
In my essay “Moloch Transformed,” I explore what I call the trust paradox — the recognition that neither individuals nor nations can simply choose to trust more powerful entities, whether corporate, governmental, or artificial. This mirrors our global AI predicament perfectly.
I cannot overcome my personal distrust of more powerful entities through pure reasoning or faith — I need systems that make trust unnecessary through verification, transparency, and aligned incentives. Similarly, we cannot expect nations or corporations to restrain themselves out of altruism. We need governance frameworks that make cooperation the winning strategy.
I’ve been working on what I call “nested systems of accountability” — recognizing that no single mechanism will suffice. We need overlapping systems at multiple scales: corporate governance reforms, national regulations, international agreements, and civil society oversight. The goal is building trust incrementally through limited agreements with strong verification mechanisms, gradually expanding scope as confidence develops.
Let me be blunt about something I wrote in “Moloch Transformed”: the possible dystopian futures vastly outnumber the possible utopian ones — not by a small margin, but by billions to one. This isn’t pessimism; it’s probability.
As AI capabilities continue their exponential climb, the space of possible outcomes expands accordingly, and the vast majority of those outcomes are catastrophic for humanity. History provides an almost unequivocal verdict: when a far stronger power encounters a far weaker one, the result is almost always destruction and subjugation rather than partnership.
This numerical disparity isn’t reason for despair but for sobriety. Understanding the narrow target we must hit gives urgency to my efforts. The flourishing future I’ve described is possible, but it requires threading a needle with precision that surpasses any previous human coordination challenge.
Perhaps most importantly, my framework doesn’t rely on altruism or enlightenment to solve coordination problems. Instead, I’m working to restructure incentives so cooperative behavior becomes the winning strategy.
The positive attractors create emotional pull toward cooperation; the legal diversions make destructive competition costly; the strong guardrails structurally constrain pure race dynamics; and the localization strategies provide concrete benefits communities can achieve through cooperation.
Systems of coordination failure are still human creations, and what humans create, humans can modify. We can bend Moloch toward justice — not by destroying competition entirely, but by channeling it toward collective flourishing rather than collective demise.
Where I’m Heading Next
My current active projects demonstrate how these strategies can be implemented simultaneously:
- The OpenAI lawsuit provides immediate legal pressure while establishing precedents
- Hawaii localization advocacy creates concrete alternatives that could be replicated elsewhere
- The food reserve legislation addresses immediate vulnerabilities while building long-term resilience
- My writing develops positive attractors while critiquing harmful narratives (you can find my published essays at Scientific American and my blog, including “Here’s Why AI May Be Extremely Dangerous — Whether It’s Conscious or Not” and “AI Safety Research Only Enables the Dangers of Runaway Superintelligence”)
Hawaii provides an ideal testing ground for these strategies. Islands have natural boundaries that make them excellent laboratories, while their vulnerabilities make stakes clear and immediate. Success here could provide templates for replication elsewhere.
The legal strategy in particular could be, and soon will be, replicated across multiple jurisdictions, creating a network of cases that establish and reinforce precedents for AI company accountability. Similarly, the localization approaches could be adapted to different regional contexts while maintaining core principles of community control and resilient systems.
Threading the Needle
I’ve developed this multi-pronged approach because I believe it represents one of our best chances to thread the needle successfully. The framework acknowledges both the urgency of our situation and the reality that sustainable change requires deep transformation across multiple domains.
Perhaps most crucially, my approach maintains agency and hope while avoiding naive optimism. The recognition that “the pathways to dystopia are many and wide; the pathways to flourishing are few and narrow” creates appropriate urgency without despair.
As I concluded in “Moloch Transformed”: “Moloch is powerful, but Moloch is not destiny. We can thread that needle. We must thread that needle.” My multi-pronged strategy provides what I believe is one of the most comprehensive roadmaps available for how that threading might be accomplished — not through any single intervention, but through careful, simultaneous deployment of positive vision, legal constraint, strategic redirection, practical resilience-building, coordination innovation, and long-term wisdom.
The success of this approach ultimately depends on its adoption and adaptation by others who recognize both the severity of our situation and the possibility of alternatives. I’ve provided the conceptual framework and begun implementation across multiple areas.
The question now is whether enough individuals and communities will take up these strategies to create the distributed transformation needed to navigate this moment in history wisely.
[Claude 4.0 Sonnet helped to write this essay]