Insights · July 22nd, 2024

In 2018 I gave my first focused talk on generative AI. This was before OpenAI was well known for its products, and before the hyperbole that fills today’s news cycles. The technology was rudimentary and disruptive. It felt fun to see imagery and film scripts being created and thrown out into the world.

Cut to today and we are awash with claims and arguments for and against – with revelations and HUGE promises that we are in an accelerated age of change and that even the holy grail of capabilities – Artificial General Intelligence (A.G.I.) – is just a few short years away (spoiler: it’s not).

I thought it was worth revisiting where we are with everything now in the second half of 2024. 

The good

I’ve been playing with gen AI systems since they launched – ChatGPT, Gemini, Claude, DALL-E, Midjourney. I work with talented folks who have embedded them in their creative workflows. I see some usefulness – that’s the level of excitement I am willing to express to the world. It’s a mixed bag of results.

There are four main areas I see as useful – I’m sure you may find a few more, so don’t read this as exhaustive.

  • Breaking our imaginations open and exploring new ideas – writer’s block sucks, especially when you are on a timeline. Being specific and explorative with prompts can release us from the block. Then the real work starts – writing, but with a mirror of what else has been said.
  • Exploring wild new visual ideas to challenge imagination and thinking – I do this often, mostly because many household brands have not locked down the use of their copyrighted assets: cellular meat from Kellogg’s, Speed from Starbucks or Walmart, a variety of dystopian and hopeful narratives played out in LEGO box sets. Honestly, I’m surprised those assets are still fair game. The following image was generated in seconds using ChatGPT 4o (DALL-E) to explore a cellular meat future.
  • Assistance in coding – code is a restricted and particular domain, and people find that generative tools help get things going. The folks I work with do say that it’s far from perfect, but it can cut down the grunt work.
  • Generating documentation to feed the grunt work created by inefficient processes – in my career I have spent decades implementing new technologies to help save companies money and generate new revenues. In the mix are badly designed processes bolstered by an over-exertion in reporting, documentation and sign-off, plus admin tasks undertaken by expensive resources. An age-old mantra rings true in the face of doing piles of grunt work generatively:

NEW TECH + OLD PROCESS = EXPENSIVE OLD PROCESS

Sorry to tell you – transformation, change management, process redesign and a holistic view of how to run a business efficiently count now more than ever.

Overall, everything I find is rudimentary and basic. It’s derivative and unimaginative. It’s, well, generative, built on massive sets of training data. We have to hold that thought in focus as we progress.

The bad and the ugly

I hate to break it to you – we are not at the stage where signing a subscription contract delivers massive and instant transformation for our people.

We’re at a stage where we must understand gen AI’s capabilities and the bad and ugly truths these systems are uncovering. That is not a bad thing. In fact, it empowers us to make better decisions.

Yes, we see a form of value (see above), but its returns are diminishing for cognitive-based work. Gen AI is remarkable not only in what it can do and promises to do in our futures, but in its accelerated decline – following a similar downward spiral to crypto/web3 and the metaverse, yet across multiple complex domains. This is wild.

Executives need the wide view now more than ever as they consider how their organization might transform using the discipline of Data Science – all of AI lives within that.

There are many things to consider and over time I will unpack them. In no particular order, here we go:

  • Confidence trickery, method priming and search integration – interfaces like ChatGPT 4o work to build confidence in the solution ahead of delivering the goods. They’ve profiled useful tasks, used low-cost labor from abroad to prime the models with templates and explanations of how things work, and now we see ChatGPT 4o going out to the web for live info. Pretrained? Hardly. This does highlight something important – getting the data for training, and keeping it up to date, is wildly tricky.
  • Data decline – in the beginning there was scraping. Every tech company created after 2010 (if not before) has likely scraped data to gain intelligence and/or drive growth hacking. Open source data – from social, news, company sites, blogs and other online sources – is out there for the picking. Now we see that between 5% and 45% of critical data sets are being restricted (check out this study for more on that). This trend will accelerate as gen AI is seen as taking ideas and remixing them with others to ‘generate new ideas’.
  • Attrition of value as data ages quickly – as a data set ages we get less value from tapping into its wisdom. Old information is useful to a certain extent but not wholly useful in the now. This is why the data > information > knowledge > wisdom cycle ticks along, driven by expensive data capabilities and really smart humans with years of experience sharing ideas with each other and their networks.
  • Misinformation generation and spread – anyone with an internet connection can now generate realistic, human-sounding natural language via simple prompts – automated and otherwise – and spread misinformation at scale for relatively little cost. Countercloud is one experiment in autonomous, mass social engineering using LLMs and public media. The platform generated tweets, articles, virtual journalists and news sites crafted entirely by artificial intelligence algorithms. Watch the video for more information.
  • Privacy & cybersecurity risks – LLMs and visual generative AI platforms have been placed at the scene of so many crimes – from fake video conference calls that embezzled millions of dollars to model poisoning and users dumping their data into the open platforms looking for a summary or a fix. Needless to say that the message that you ‘need to contribute to the model’ to get the most from it is worrying but essential. It’s a catch 22. There is a pincer movement to regulate internally and externally – read on.
  • Corporate policy restrictions – the risk profile of using gen AI systems is rising quickly, primarily due to copyright issues and derivative, generative works not fit for task. I ask every audience who uses it at work, and around 25% say yes and that they use it often – including in places that have stated it’s not to be used. Vigilante prompters are rife – which partly scares me, yet part of me loves to see folks go rogue and against policy.
  • Restrictions due to data work, cost and compute for proprietary and open AI projects – this is all wildly expensive and we’ll be paying for it one way or another. At OpenAI’s Dev Day in 2023, the company announced its bespoke model-building service at a $2-3M minimum, before thinking about ongoing OPEX and continual training and development (dig in here for a deep dive on costs). Then we’ll see the platforms themselves pinched by funding challenges as well – all tech needs a looooong runway.
  • Changing regulatory environments where AI companies are held to account on their practices and fairness – the European shock is setting in for companies right now. Take a look at this primer on the EU AI Act. It’s a model being considered more widely in North America and beyond.
  • Screwy competitive landscapes where we see companies like Microsoft drop $13bn into OpenAI and Apple strike an ostensibly free partnership agreement – all in the name of protection and growth. The whole field of gen AI is starting to feel like an ouroboros – what happens when we’ve consumed ourselves entirely?
  • Executive and industry fatigue leading to lack of investment, mergers, acqui-hires, a loss of focus and so much more. Executives are getting beyond bored of hearing about AI and just want to know where it helps them. Then they realize they have yet more tech-focused projects – with more open and unanswered questions – hitting the pipeline. Add to this the finding that using these LLMs can reduce the diversity of thought across high-impact groups. Competitive advantage is all about novel critical thinking and new ideas, so this is a concern.

Now, I didn’t write this as an article about the end of generative AI. I am providing it as a place for meditation on what we have and the work we must do. If this is a marathon, then it’s an ultramarathon and we’ve sprinted the first 500 meters.

Keep your eyes up and tap into the people, teams, fortitude and stamina you have available.

You can read more of Nikolas Badminton’s thoughts on AI in these articles:

About Nikolas Badminton

Nikolas Badminton is the Chief Futurist at futurist.com. He’s a world-renowned futurist speaker, consultant, author, media producer, and executive advisor who has spoken to, and worked with, over 300 of the world’s most impactful organizations and governments.

He helps craft the visions that shape impactful organizations, trillion-dollar companies, progressive governments, and 200+ billion dollar investment funds.

Nikolas Badminton’s book Facing Our Futures: How Foresight, Futures Design and Strategy Creates Prosperity and Growth was selected for the 2023 J.P. Morgan Summer Reading List and featured as the ‘Next Gen Pick’ to inform the next generation of thinkers who will lead us into our futures.

Please contact futurist speaker and consultant Nikolas Badminton to discuss your engagement.
