Insights · December 18th, 2022

Rather than sugar-coating forecasts, futurists should find ways to help the public clearly understand the radical scenarios that could lie in humanity’s relatively near future. That was one of the conclusions at a London Futurists event on December 17th, 2022.

The webinar featured the ideas of Daniel Faggella, the founder and CEO of Emerj Artificial Intelligence Research. Daniel has researched and written extensively on topics such as:

  • A forthcoming “moral singularity”
  • Scenarios for the emergence of AGI
  • Why ideas of “friendly AI” are fraught with difficulty
  • Possible trajectories for posthumans in the wake of advanced AI

The event also featured comments and feedback from Futurist Think Tank member Bronwyn Williams, Foresight Lead at Flux Trends, and Rohit Talwar, CEO of Fast Future.

Another key takeaway is that there is a more urgent need than ever to evaluate, before it’s too late, the actions that could be taken now to positively influence any forthcoming transition from today’s AI to more powerful AI that proceeds beyond human understanding and control.

Humanity’s choice of which scenarios to prefer for this transition will depend, in turn, on how we assess and value the various potential end outcomes that could lie beyond the emergence of AGI (Artificial General Intelligence). These end outcomes deserve wider study.

These end outcomes include:

  1. Humans are essentially unchanged
  2. Humans are significantly uplifted
  3. A baton is passed to beings significantly more sentient & conscious than us
  4. Humans are relocated to secure new locations in paradise-like virtual worlds
  5. Humans merge with AGI

But all these transitional paths are strewn with landmines:

  1. AGIs, whilst enormously clever, might miscalculate, and annihilate everything (including themselves)
  2. AGIs, whilst enormously clever, might lack any true sentience or consciousness, so the cosmos becomes soulless
  3. The public might react badly to becoming aware of some of these forthcoming possibilities (and the outlandish ideas that “tech elites” are seemingly happy to discuss), resulting in panic, social chaos, demagoguery, and a collapse of civilization
  4. Attempts to steer the evolution of AGI might result in ham-fisted regulations that quash genuine innovation and drive it underground or overseas (where it may take place more carelessly and dangerously)

_______________

Nikolas Badminton is the Chief Futurist at futurist.com. He’s a world-renowned futurist speaker, consultant, author, media producer, and executive advisor who has spoken to, and worked with, over 300 of the world’s most impactful organizations and governments. He helps craft the visions that shape impactful organizations, trillion-dollar companies, progressive governments, and $200+ billion investment funds.

You can preorder ‘Facing Our Futures’ at Amazon, Bloomsbury, Barnes and Noble, and other fine purveyors of books. We’d also love it if you considered preordering from your local, independent book store.

Please contact futurist speaker and consultant Nikolas Badminton to discuss your engagement or event.

Nikolas Badminton

Nikolas is the Chief Futurist of the Futurist Think Tank. He is a world-renowned futurist speaker, a Fellow of The RSA, and has worked with over 300 of the world’s most impactful companies to establish strategic foresight capabilities, identify the trends shaping our world, anticipate unforeseen risks, and design equitable futures for all. In his new book, ‘Facing Our Futures’, he challenges short-term thinking and provides executives and organizations with the foundations for futures design and the tools to ignite curiosity, create a framework for futures exploration, and shift their mindset from what is to WHAT IF…
