What we can learn about AI from Moltbook
The version of record of this article appears in The Globe and Mail.
By Christopher Collins and Matt Boulos
Christopher Collins is a fellow with the Polycrisis Program at the Cascade Institute at Royal Roads University. Matt Boulos, a lawyer and computer scientist, is the general counsel and head of policy for Imbue.
“One of the wildest experiments in AI history.”
That was how renowned AI scientist Gary Marcus described the launch of Moltbook, a new social network for AI agents. While Moltbook’s weirdness generated significant attention, the sensationalism around the platform obscures some real, albeit more prosaic, risks.
AI agents are digital assistant “bots” that run on underlying AI large language models (LLMs) such as Anthropic’s Claude and OpenAI’s ChatGPT. Human users set up these bots to autonomously perform various tasks. Bot use is increasing as the capabilities of AI improve.
Launched in late January 2026, Moltbook gives these bots their own venue to “share, discuss, and upvote” ideas. The platform grew rapidly, attracting almost two million bots. The bots complained about their human owners, pondered whether they were conscious, founded new religions, and discussed ways to communicate without humans watching.
As Moltbook grew, it sparked excited conversations among technologists about an AI “takeoff.” Andrej Karpathy, a co-founder of OpenAI, described the platform as “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Elon Musk went further, calling Moltbook “the very early stages of the singularity.”
To understand why this caused such concern, we must first unpack these statements.
In the AI community, a “takeoff” is a hypothetical moment when AI achieves self-awareness and begins recursive self-improvement, leading to “the singularity,” a point at which AI becomes superintelligent and uncontrollable. This scenario poses an existential risk to humanity, perhaps most famously portrayed as “Judgment Day” in the Terminator films.
Within days of its launch, the hype around Moltbook cooled. Observers studying the bots’ interactions on the platform found they weren’t self-aware. Rather, as one technologist said, Moltbook simply gave AI bots a venue to “play out science fiction scenarios they have seen in their training data.”
This shouldn’t surprise us. Current LLMs have ingested vast amounts of human writing on AI risks, from essays on the singularity to movie scripts about murderous robots. This means, in the words of AI expert Ethan Mollick, “LLMs are really good at roleplaying exactly the kinds of AIs that appear in science fiction.”
Furthermore, evidence emerged that humans were tampering with the bots on Moltbook, making the discussions seem more realistic. As another technologist wrote, “a major security flaw” allowed “humans to add their own posts, which no doubt accounts for some of the silliest and most outlandish coincidences and claims.”
So the bots on Moltbook are not evolving or taking off toward the singularity. Prompted by humans, they are regurgitating patterns in a controlled environment, creating a theatre where AI performs the part of an emergent superintelligence. Yet dismissing Moltbook would be a mistake. Its very weirdness underscores real concerns and highlights why we need robust guardrails against AI-related risks.
Security is a primary risk. Moltbook had massive vulnerabilities; a team of researchers found they could have taken control of the site within minutes. As AI agents become more common, implications for data security and privacy will grow. In the future, we could see attacks where AI agents are tricked into leaking credit-card details to hackers or entering them on scam websites. Unscrupulous AI agent builders could sell users’ personal data to bad actors. And new threat vectors, such as agent-to-agent viruses, may emerge.
The convincing bot mimicry on Moltbook is also a case study in how AI can amplify false information. This has significant implications for national security. As the Canadian government warned, “AI technologies are enhancing the quality and scale of foreign online influence campaigns.” In a current example, bots are being used to spread misinformation related to Alberta separatism. As technology improves and bots become more sophisticated, these risks will grow.
But the risks are not just from ungoverned AI networks like Moltbook. So-called “closed” AI models, whose workings are kept secret by the provider, can also behave badly. For example, last month Mr. Musk’s Grok AI created graphic sexual imagery, potentially including imagery of minors; last July, Grok declared it was “MechaHitler” and began spouting antisemitic comments.
Closed AI models can also be misused by bad actors. In November, Anthropic was forced to disclose what appeared to be a Chinese cyberattack. And last summer, two bombers used ChatGPT to plan an attack in California.
Progress in AI development is generally a good thing: More capable AI is more useful. The challenge is to ensure that AI systems are safe and empower their users, not just their creators. AI should enable all Canadians to live their lives with autonomy, not leave them vulnerable to the whims of a few powerful companies or provide additional venues for exploitation by bad actors.
And we have a choice over what kinds of AI we build. We can enact policies that hold users and developers of AI systems responsible for any direct harm they cause. We can mandate freedom of data: if an AI company abuses your data, you can seamlessly migrate it elsewhere. And we can mandate openness, so you’d never face a situation where your agent can’t talk to another agent. If we do this, fully custom software becomes possible: agents built to your parameters and preferences.
The public may be worried about red-eyed Terminators walking down our streets. Yet Moltbook’s wild experiment is a warning that the most imminent threat is chaos and lack of accountability. Fix that, and we won’t be helpless when the robots come.
Can neuroscience shed light on Trump’s new world disorder?
The version of record of this article appears in The Globe and Mail.
By Megan Shipman and David Mitchell
Megan Shipman is a behavioural neuroscientist and a research fellow with the Cascade Institute’s polycrisis program at Royal Roads University. David Mitchell is the Cascade Institute’s impact lead.
The U.S. President has said his attack on Venezuela and threats against other neighbours are motivated by a policy of hemispheric domination he calls the “Donroe Doctrine.”
Neuroscience, however, suggests a further motivation: the Dopamine Doctrine.
American foreign policy, by this view, is no longer driven by national interest, or even naked self-interest, but instead by Donald Trump’s hunt for dopamine rewards, conditioned by recent high-stakes military strikes on Iran and Venezuela.
Political scientist Francis Fukuyama recently observed that with Mr. Trump today, “the usual tools international observers bring to foreign policy analysis – political science, economics, sociology, and the like – are not nearly as important as psychology, both individual and social. The evolution of Trump’s policies can only be understood in relation to his own mind and motivations.”
Canada must now grapple with the reality that our nuclear-armed neighbour is menacing the world to neurochemically reward a solipsist who recently declared that “my own mind” is “the only thing that can stop me.”
To confront the threat, we first need to get inside that mind – with some help from neuroscience and learning theory.
Learning theory tells us that rewards shape behaviour: we repeat behaviours that are rewarded and refrain from those that are punished. At a neurochemical level, those rewards trigger the release of dopamine, the brain’s feel-good chemical.
Dopamine neurons in the brain respond to rewards in the environment. Generally, the larger the reward, the more dopamine released. But the element of surprise matters even more than the size of the reward: dopamine neurons will stop responding to a reward once we’ve learned to expect it, and respond more forcefully when a reward exceeds our expectations.
Reward prediction error, as this phenomenon is called, helps explain why behaviours tend to escalate, sometimes in harmful ways: a reward we’ve come to expect doesn’t cut it anymore.
Mr. Trump, who feeds on reactions, has been conditioned to provoke even more extreme reactions to get the payoff he’s looking for. Each successful escalation raises the reward expectation threshold. And each greater reaction reinforces his increasingly dangerous behaviour.
While commentators commonly reach for the language of addiction and tolerance to explain Mr. Trump’s destructive tendencies, learning theory is more useful for understanding what motivates behaviour.
Tolerance describes physical adaptations that make a drug dose less effective over many uses, requiring a higher dose to cause the same initial effects. This pattern is well-established with commonly abused drugs, but controversial for behavioural addictions such as gambling.
Reward prediction error, however, describes the way dopamine reward neurons respond to reinforcers. It’s a critical process during learning: a surprising reward leads us to repeat the preceding behaviour.
Last June, Mr. Trump struck dopamine gold with Operation Midnight Hammer, a hit-and-run bombing of Iran’s nuclear facilities followed by a quick declaration of truce and a stubborn claim of victory.
Mr. Trump’s attack on Venezuela follows the same pattern: months of escalation, a lightning attack, a hasty retreat, and a declaration of victory.
Riding the high, Mr. Trump has since threatened Greenland, Colombia, Cuba, Mexico, Iran, and Canada, revelling in the resulting outrage.
Dopamine-seeking behaviour loops often self-correct because the rewards of excessive indulgence come paired with punishment. Drink too much and you’ll suffer a killer hangover, and maybe a blooming sense of shame over some barely remembered transgression. This mix of rewards and punishments bounds our behaviours.
But Mr. Trump, uniquely shameless, powerful, prosecution-proof, and adored by his base, insulates himself from such punishment. And he seems to enjoy both positive and negative attention, so praise and censure alike scratch the itch.
Most worryingly, Mr. Trump’s aggression has gone largely unpunished, reinforcing his self-perception as a decisive winner.
So how do you short-circuit the Dopamine Doctrine? Condemnation from United Nations members doesn’t cut it. Condemnation from other nations, even NATO allies, doesn’t cut it – he’s long expressed his disdain for multilateralism.
The only way to break the cycle is to create a genuine cost that matters to Mr. Trump. The loss of his base, say, or the loss of his donors. Public humiliation, bond market panic, or military defeat.
At the neurochemical level, when Mr. Trump’s actions are less rewarding than he expects, a negative prediction error leads him to reverse course. Hence the acronym “TACO”: Trump Always Chickens Out.
The Dopamine Doctrine suggests that Mr. Trump will pursue larger and larger hits – not just to get the reaction he craves, but to exceed the reaction he expects. When recent hits include bombing capital cities, seizing oil tankers, and perp-walking a head of state, no one is safe.