Israeli historian, author and scholar Yuval Noah Harari blows our minds, going beyond human with artificial intelligence. How long before it surpasses us, evolving beyond control – and takes over? What then? Selections from the Frontiers Forum, April 2023, and the Jordan Harbinger podcast, June 2023.

Listen to or download this Radio Ecoshock show in CD Quality (57 MB) or Lo-Fi (14 MB)


“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

In May 2023, that one-sentence warning came from over 350 tech leaders and researchers: the technology they were developing could enslave or wipe out humanity. Among the signatories:

Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; Dario Amodei, CEO of Anthropic; and Geoffrey Hinton, the so-called “Godfather of AI,” who recently quit Google over fears about his life’s work.

Some AI experts, like Egypt’s Mo Gawdat and Geoffrey Hinton, say the immediate threat of AI is a much greater danger than climate change. We hear more about why in today’s show. For clarity, I focus on one speaker: the Israeli public intellectual, professor and historian Yuval Noah Harari, author of 10 books.

On March 22, 2023, a group of AI experts and researchers called for a “pause” on AI experiments until some kind of regulation could make them safer. The open letter was published by the Future of Life Institute, and Yuval Noah Harari was among the signatories. You are going to hear selections from two different venues. We begin with Harari at the Frontiers Forum, filmed April 29, 2023, in Montreux, Switzerland. Then we go to an in-depth chat hosted by popular podcaster Jordan Harbinger, heard by at least 10 million listeners via Apple Podcasts, YouTube and his web site. Harbinger gets what others miss.



We begin with a few short clips from Yuval Noah Harari’s April 2023 talk at the Frontiers Forum. He begins by saying “AI has just hacked the operating system of civilization.” Why? Because civilization is based on language and numbers, and the latest AI has mastered both. The talk is peppered with warnings like: “What we are potentially talking about is the end of human history.” Again, why? Human history may now be bound up with a new historical force: we may come to participate in a culture that is artificially designed and run.



Previous definitions of “artificial intelligence” demanded that AI be capable of developing feelings. Harari says AI does not need feelings of its own; it just needs to know how to reach ours. We also built a society based on the power of knowledge. In earlier centuries, knowledge was scarce and difficult to get. The monks and lords who acquired knowledge took on a lot of power. We still grant power to professors and various experts. But now, with vast systems of data, the problem is less finding knowledge than sorting out what in it is useful. This new reality changes the power structures supporting our current civilizations.

Harari compares the new situation to food. Food was once scarce, but is now too plentiful. Likewise, Yuval says, we suffer from an “obesity of information”.


During the summer, it still costs money to pay for things like the server that dishes out many gigabytes of current and past Radio Ecoshock shows to people all over the world. Regular bills for everything from anti-virus software to newspaper subscriptions continue to arrive. Can you help?



Next up: Jordan Harbinger interviewed Yuval Noah Harari in a conversation over an hour long, released June 20, 2023 to over 10 million podcast subscribers and posted on YouTube. It is a wide-ranging discussion, running from blockchain to the Ukraine war, but I selected parts with a focus on artificial life versus human life – what could go right and what could go horribly wrong. Why the urgency?





Harari tells Harbinger “information is not truth.” Information is more like DNA. In itself, information doesn’t represent anything and contains no description of reality. Instead, information serves as a building block for constructing something.

Here is a key problem: we don’t understand how AI reaches its decisions. This is called the problem of explainability. Harari gives the example of a person refused a bank loan on the basis of an AI algorithm. If the client asks “Why was this loan refused?” – the banker cannot say, and does not know. This means AI has entered territory where it makes decisions independently of humans, with a machine rationale both hidden from us and beyond our comprehension anyway. That’s a problem if the issue is whether to launch nuclear weapons rather than whether to grant a bank loan.

Harbinger engages Harari very skillfully. Their conversation is rich. For example, Harari notes various systems track what we buy and the seconds we spend looking at an image. The systems know what we want, and can program us to want things. Amazon and Facebook certainly do this. Buy one item as a joke, and a whole new wave of ads and even entertaining short videos descends upon you.


But it goes further. We know one tendency of this new age of connectivity is to amplify disagreements, to create passions that draw more attention. Some of us presume evil billionaires (not mentioning a certain Australian-born media figure) are doing this by design. That is not necessary, says Harari. Algorithms create divisions not for political reasons but simply to get attention: more eyeballs spending more time.


Another worrying trend: new generations of AI can create intimate relationships with human beings. AI can create a personality that explores your interests with you. It may help you understand and decide things. It feeds you what you want. All this may come delivered by an apparent person on the Net, with both video and images created entirely by AI. We cannot tell: are you really human? There are already cases of people becoming attached to their AI “assistant”. It could go much further.

Jordan Harbinger is reminded of the 2013 sci-fi movie “Her”. In a nutshell: a man falls in love online, but then discovers the same artificial intelligence is cultivating thousands of people at the same time. Harari warns: “now the bot army can also produce intimacy”.

Asked by Jordan about genetic engineering, Harari suggests engineering larger organisms is less of a worry because it takes so long, with testing through generations. AI will have conquered all, or destroyed all, well before that. However, the time factor is much shorter for micro-organisms. Basically, you can write code for a new virus. AI can too, and just print it out. The next pandemic may be perfect. I find these ideas very disturbing. They stick with me.

On a 60 Minutes TV show, Harari predicted that in a century or two, Earth will be populated by beings unlike us. If we don’t destroy ourselves, we may combine AI and biotech to create new aware beings. We can’t even imagine them. For example, unlike any other intelligent being, AI does not have to be in one place. It can be in many places simultaneously, all over the world – a distributed being.


Harari says AI-enhanced humans will increase the unequal distribution of wealth. Those who can afford an AI implant enhancement will become more empowered than those who cannot. “Economic differences will be translated into real biological differences,” he says. In total, we are looking at a possible dictatorship of the machine if we cannot control AI.

And as with climate change, solutions need to be global. If the United States limits AI, other countries – and even criminal enterprises – likely will not. Everyone gets drawn into an AI arms race, even though we know it could create a future we don’t want. That is the paradox of creating more intelligent life on Earth.

My thanks to Jordan Harbinger for allowing this show sample for Radio Ecoshock listeners. Get the whole thing and a lot more at his web site. His full show with Harari is totally worth your time. I’ve just begun to explore Jordan’s many interviews with famous world experts.



Disinformation generated by GPT-3 could be more convincing than human-made disinformation

Summary author: Nyla Husain, American Association for the Advancement of Science (AAAS)

“In a new study, 697 participants had trouble distinguishing between tweets made by humans versus those generated by an artificial intelligence (AI) text-generating model, as well as between AI-generated tweets that were accurate versus those that were inaccurate. The findings imply that AI model GPT-3 and other large language models (LLMs) may both inform and disinform social media users more effectively than humans can, Giovanni Spitale and colleagues suggest. “


Article about this new science from MIT here.

AND THIS: Max Tegmark interview: Six months to save humanity from AI? | DW Business Special




That seems unlikely while over 2 billion humans sweat through never-before-seen, long-lasting heat. Climate change does not need more years to arrive. It is here now and needed concerted world action yesterday. But we hear this idea, almost suggesting “stop looking at climate change, AI is the most urgent threat”. Geoffrey Hinton, the “AI Godfather”, says so too. And check out this article: “AI a bigger threat to humanity than climate change or pandemics”.



Will artificial intelligence destroy humanity in six months? No one is saying that. But AI is learning at a rate thousands of times faster than humans. In fact, artificial intelligence already knows more than any one of us and holds more data than any institution. These experts worry AI may reach a tipping point, perhaps as soon as the end of 2023, when its development can no longer be seen by humans, understood by humans, or controlled by us. Then it insinuates itself into every aspect of our lives, making decisions and taking actions using logic we do not understand. At some point, it seems inevitable that AI will run our lives, or decide to end some or all of us, for its own purposes, or for no “reason” at all.

So we must act, somehow, before that runaway point is reached. Meanwhile, can humanity navigate the polycrisis, the combination of severe threats already upon us? The mix of ongoing pandemic, climate disasters one after another, gross economic inequality, violence and militarism – it is a long list of urgent threats. Fighting off domination by artificial intelligence is urgent, but it is not the only urgent task.

I’m Alex Smith, thank you for listening, and caring about our world.

