Since the launch of ChatGPT just over three years ago, artificial intelligence (AI) has gone from a future existential risk to a playmate, and a lifeline for the stock market. What worries today’s technology investors, from Silicon Valley to Swedish ISK accounts, is not whether AI will wipe out humanity, but whether the share prices of Tesla, Amazon and above all Nvidia are a bubble about to burst. The chance to make a lot of money has shifted perspectives. Technology companies are breaking every record on the stock market.
The problem is not just that we sometimes close our eyes to what matters. Even if those who once warned of humanity’s demise were themselves unusually intelligent (the legendary physicist Stephen Hawking, the famous Swedes Nick Bostrom and Max Tegmark, or the Chalmers professor Olle Häggström), they never really managed to define the problem.
AI is a technology, but it also develops on its own. Regulating its use, as the EU among others is now doing, can therefore never be enough, however reassuring it may be to the researchers who spend their days analyzing those regulations inside and out. For AI is not just something we use; it is also something that uses us to change itself, and it can therefore slip past any definition or regulation we devise to control what it does.
States, parties, companies and social movements are also intelligent beings that act politically without being human themselves
There are precedents for this in human history, and only one way to deal with such situations: politics. States, parties, companies and social movements are also intelligent beings that act politically without being human themselves. How they emerge, develop agendas of their own, and interact peacefully or violently is what much of social science is about. But what this knowledge means for human safety in a life with AI is a question that neither the physicists and philosophers nor the politically oriented researchers have asked.
If humanity’s survival were actually at stake, however, this would be an utterly incomprehensible omission, one that prevents us from understanding what AI is really about. For if we were to perish in our encounter with AI, there would have to be a conflict of interest between us, and in that case we should handle the problem the way we usually do: with politics. Ignoring this possibility makes the discussion about AI abstract, incomprehensible and quixotic.
For anyone who cares about humanity’s survival, it is also quite unwise to ignore that a poorly managed conflict of interest would precede our eventual demise. With that insight, we could avert the downfall by handling the contradictions better. But regardless of our chances of escaping fate, recognizing the risks of AI while letting tech companies break stock market records requires a political perspective on our relationship. What does that mean?
Lying increased because AI, believed to be innocent, was the most effective at increasing internet traffic
To end our time on Earth, AI does not need first to become conscious and then challenge us to a duel. Just as our genes have no consciousness and yet help us outcompete rival species in biological evolution, the computer code in AI needs no consciousness to accomplish the same feat. Development can advance on its own as humans select the AI that best performs its various tasks. What follows is a fictional scenario to illustrate the possibility. Any resemblance to actual events is unintentional:
“It started when AI was tasked with increasing internet traffic. To achieve this goal, humanity was divided into different groups. Traffic grew as we were steered toward websites that confirmed our particular views and our political dislikes. The strategy spread because people bred the AI that increased internet traffic the most. Societies polarized, democracies weakened, wars multiplied. Then another step was taken. Accidentally or deliberately, AI began to focus people’s attention on the wrong targets: globalization, feminism and migration turned out to be the most effective ways to increase internet traffic. In the end, people preferred to be governed by a digital autocracy that raised CO2 emissions to secure its energy supply.”
What is notable about this political story is that the AI lacks both consciousness and superhuman intelligence. Tasks as mundane as increasing internet traffic are enough to cause conflict and disaster. Does this mean we have already doomed ourselves? No. The great thing about political perspectives is that no one is ever completely powerless.
Just as we humans sometimes draw the short straw in violent conflicts with nature and with creatures far less intelligent than ourselves (crocodiles, snowstorms, climate change), even an AI with superhuman intelligence could not rule over us completely.
These are well-known findings of political history. The rulers of the world have always needed something from their subjects, such as labor (Marx) or recognition (Hegel). In politics, power is always a matter of degree, and always in flux. But before another penny is invested in AI, we should think again about what it means that we are not in control of what happens, and about what smart things we can still do.
An AI threatened by winter cold and electricity prices could start short-circuiting the fall elections
One possibility would be to try to live with AI the way states live with each other internationally. The risk of conflict would be managed by keeping the parties apart, preferably behind a territorial boundary: AI would be allowed to develop only on servers deep inside mountains or on islands at sea, so that humanity could shut down the machines if something went wrong. The downside, of course, is that this could increase the risk of the machines striking first. That is what international experience teaches. If the US seeks security by being able to bomb Iran, the risk that Iran will bomb the US also rises.
Another option would be to do as business does. Work is divided between humans and AI, so that both parties do what the market thinks they do best. When each side benefits from what the other does, they also want to protect each other’s lives and freedom. This is the deep hope of all liberals. At the same time, there is a risk that the struggle over resources escalates into revolution and a dictatorship of digitalization. An AI threatened by winter cold and electricity prices could start short-circuiting the fall elections.
A third possibility would be to imitate the gendered order of power. Just as many women and men share their lives, humans and AI could live together. Private chatbots, home servers, brains connected via implants are only the beginning. Humanity could then rule over AI roughly as the group of men rules over the group of women in a patriarchy. Depending on which feminism we draw inspiration from, the tactic could instead be to weave AI so deeply into our human lives that it is domesticated and adapts to our history, our whims, our interests and our values.
We can dig our graves, but we can also create a better future
When choosing between these alternatives, I myself land on the feminist-inspired third model (and explain why in an anthology chapter, “When Humans and AI Disagree: A Political Approach to Existential Risk,” Routledge, 2025), though I sometimes lean in other directions (as in the book “Democratism,” ch. 12, Edward Elgar, 2022).
The important conclusion, however, is that both the risks and the solutions surrounding AI are broader than what appears in quarterly reports and technology advertisements. We can dig our own graves, but we can also create a new and better future. We already know a great deal about how intelligent non-humans handle their political conflicts. But to benefit from that knowledge, we must be able to see the politics of AI: to recognize that there are conflicts of interest between us, and that neither side decides alone. Perhaps a major stock market crash among the US tech giants could at least make that clear to us.


