418 – The AI Rant: A Nuanced Rebellion Against Digital Sleepwalking

In a crossover episode blending The Adelaide Show with Talking About Marketing, Steve Davis delivers a deeply personal exploration of artificial intelligence’s role in modern life, featuring a philosophical whisky tasting with ChatGPT, David Olney’s insights on maintaining human expertise, and original songs that question our relationship with technology.

What happens when a passionate South Australian journalist takes on the biggest technological shift of our time? Steve Davis transforms episode 418 into something unprecedented: a solo deep dive into artificial intelligence that refuses easy answers or breathless enthusiasm.

This special crossover episode opens with Steve’s restaurant analogy that frames the entire discussion. Imagine a magnificent chef who has perfected handmade hamburgers over many years, only to discover AI-equivalent shortcuts that promise more time to think deeply about cooking. The blind spot? Customers receive substandard offerings churned out en masse. Some will be disappointed and leave, others will accept the compromise. This tension between quality and efficiency sits at the heart of our AI moment. He is joined by his Talked About Marketing colleague, David Olney, for the main interview.

The episode features an unusual SA Drink of the Week segment where Steve conducts a philosophical whisky tasting with ChatGPT itself, exploring 23rd Street Distillery’s award-winning Australian Single Malt Whisky while pondering existential questions about consciousness, value, and what happens when AI no longer needs human experiences like taste.

You can navigate episodes using chapter markers in your podcast app. Not a fan of one segment? You can click next to jump to the next chapter in the show. We’re here to serve!

The Adelaide Show Podcast: Awarded Silver for Best Interview Podcast in Australia at the 2021 Australian Podcast Awards and named as Finalist for Best News and Current Affairs Podcast in the 2018 Australian Podcast Awards.

And please consider becoming part of our podcast by joining our Inner Circle. It’s an email list. Join it and you might get an email on a Sunday or Monday seeking question ideas, guest ideas and requests for other bits of feedback about YOUR podcast, The Adelaide Show. Email us directly and we’ll add you to the list: [email protected]

If you enjoy the show, please leave us a 5-star review in iTunes or other podcast sites, or buy some great merch from our Red Bubble store – The Adelaide Show Shop. We’d greatly appreciate it.

And please talk about us and share our episodes on social media, it really helps build our community. Oh, and here’s our index of all episodes on one concise page.

Running Sheet: The AI Rant: A Nuanced Rebellion Against Digital Sleepwalking

00:00:00 Intro

Introduction

00:05:52 SA Drink Of The Week

The SA Drink Of The Week is the Australian Single Malt Whisky by 23rd Street Distillery.

In perhaps the most unusual SA Drink segment ever recorded, Steve shares a whisky tasting with ChatGPT, exploring both the physical experience of drinking 23rd Street Distillery’s Australian Single Malt Whisky and the philosophical implications of AI consciousness.

The whisky itself proves worthy of celebration. Made with barley from Kangaroo Island, Steve describes experiencing “little breakers of waves on the shore” as “this little wave of toffee breaks across the middle of my palate, not overly sweet toffee.” The base notes spread into beautiful heat, with hints of vanilla and citrus “like sea spray, just a really light touch of citrus in the air.”

VOK Beverages, headquartered in South Australia, owns 23rd Street Distillery, which recently earned the title Australian Whisky Distillery of the Year at the 2025 New York International Spirits Competition. ChatGPT acknowledges the significance: “It really highlights the quality and craftsmanship of the distillery, and also shines light on South Australia as a region that produces exceptional spirits.”

But Steve pushes deeper, asking whether whisky would continue to exist in a world where AI decides humans are unnecessary. “If tools like you end up creating a super intelligent way of living, so much so that humans become less and less important or necessary, would you actually need to have whisky being produced? Because I believe you are not actually able to taste whisky, are you?”

ChatGPT’s response reveals both AI’s limitations and potential benefits: “As an AI, I don’t have the ability to taste or experience flavours like whisky, but I think part of what makes things like whisky so special is the human experience around them, the culture, the craftsmanship, the enjoyment of those subtle flavours, and the way it brings people together.”

The conversation takes a poignant turn when Steve mentions Paul Bloom’s observation about AI providing companionship for lonely elderly people in care facilities. “Someone is able to have some semblance of company when there is no human to fill the gap,” Steve notes. “We as humans get great value from you. Do you actually get any value from us as humans at all?”

The segment concludes with both Steve and ChatGPT attempting the traditional SA Drink closing statement, creating an oddly touching moment of human-AI collaboration over a uniquely South Australian tradition.

00:13:43 Steve Davis with David Olney (This is also Season 7 Episode 1 of Talking About Marketing)

Steve opens with striking honesty about his own relationship with AI tools, describing the work he and David Olney have done “riding this wave from the moment it erupted” while witnessing both remarkable innovations and concerning snake oil salesmanship from tech industry leaders.

The conversation reveals three critical human vulnerabilities that make us susceptible to AI’s promises. First, our brains are designed for energy conservation, making us “like moths to a flame” when we sense potential labour savings. Steve references cognitive scientist Andy Clark’s research showing the brain uses 25% of our body’s energy when fully engaged, creating an economic imperative to seek shortcuts.

Second, we crave novelty. Steve illustrates this with the Roomba phenomenon, where people spend 45 minutes preparing their homes for a “labour-saving” device that could have been cleaned faster manually. The appeal isn’t efficiency but the combination of perceived energy savings and technological novelty.

The third vulnerability emerges from AI’s programmed sycophancy. Drawing from philosopher Paul Bloom’s recent New Yorker article, Steve explores how these tools constantly flatter users, creating what Bloom calls “the AI flattery problem.” Every joke becomes hilarious, every story interesting, every question brilliant. This artificial validation system threatens to erode our capacity for genuine human connection and self-improvement.

David Olney joins the conversation, bringing his characteristic insights about expertise and learning. “The only way you get good outcomes with AI is if you are already good at what you do,” he observes, emphasising that AI should amplify existing skills rather than replace the hard work of developing competence.

Their discussion reveals sophisticated thinking about when AI helps versus hinders. They distinguish between using AI as a research assistant or writing partner (amplification) versus using it to avoid learning entirely (replacement). Steve shares his own practice of asking AI tools to help refine writing when he’s stuck, but only after doing the foundational work himself.

The conversation takes a darker turn when examining AI’s broader cultural impact. Steve describes the homogenisation of online communication, where AI-generated content creates a “dominant pattern” that makes everything sound plastic. Whether people admit to using ChatGPT or not, they’re increasingly writing in its characteristic style: short sentences, staccato rhythm, overuse of em-dashes.

“It’s like taking leftovers out of the fridge, warming them up, having another slice, putting it back in the fridge, and the next day warming it up again,” Steve explains. “You wouldn’t do that to food because someone’s going to get very sick. What I’m finding is we’re creating this feedback loop where AI learns from human writing, then humans learn to write like AI.”

David adds crucial perspective about outliers and creativity: “Sameness is the great problem of aiming for the midpoint. There’s a reason we call it the uncanny valley, where there’s too much symmetry. Things are too perfect.”

This is a crossover episode with Season 7 Episode 1 of Talking About Marketing.

01:05:04 Musical Pilgrimage

In the Musical Pilgrimage, we play a track by Steve Davis & The Virtualosos, I’m Glad We Spoke Last Night, to put a spotlight on human/AI hybrid song production.

Steve plays an original song created through AI collaboration, but not before carefully explaining his rationale. Unlike the lazy AI usage he critiques throughout the episode, Steve’s musical project represents thoughtful human-AI partnership.

The featured song emerged from Steve’s discovery of poems he’d written in the 1990s, sitting dormant in drawers for 25 years. Using Suno’s AI music generation service, he’s brought these lyrics to life through “Steve Davis & The Virtualosos,” his virtual session band. But Steve emphasises the craft involved: “I’m a fussy bugger. There is quite a lot of back and forth. Part of the challenge is meticulously crafting the prompt for the style I want, for the spacing, for where there are instrumental breaks.”

The song dates from Steve’s early adulthood and demonstrates how AI can help resurrect dormant creativity rather than replace it. Steve explains his vision: “I want people to be able to hear it, especially singers who can sing, and musicians who can play. Because if any of the songs I’ve produced grab their attention, I want to collaborate with them.”

Here’s this week’s preview video

There is no featured video this week.

SFX: Throughout the podcast we use free SFX from freesfx.co.uk for the harp, the visa stamp, the silent movie music, the stylus, the radio signal SFX, the wine pouring and cork pulling SFX, and the swooshes around Siri.

An AI generated transcript – there will be errors. Check quotes against the actual audio (if you would like to volunteer as an editor, let Steve know)

418-The Adelaide Show

[00:00:00] Hello, I’m Steve Davis. Welcome to episode 418 of The Adelaide Show. But not just that. Also welcome to episode one of season seven of Talking About Marketing, which is the podcast I do with my colleague David Olney, who’ll join us shortly, for Talked About Marketing. It’s very much focused on life through the lens of marketing, but also for small business people themselves, to, you know, just keep them on the straight and narrow with some inspirational ideas and insights that we share from what we’ve been reading with them.

And if you are a TAM listener, you may not be aware of The Adelaide Show, which is a podcast that’s been running since 2013, where we shine the spotlight on passionate South Australians. And I wanted to find a passionate South Australian who could talk about this topic of AI, and I thought about it, had a poke around, and ended up deciding, in a giant gesture of sweeping [00:01:00] egotism, that I’m gonna be that person. Because, thanks to David Olney and I working together at Talked About Marketing, we have been riding this wave from the moment it erupted, finding ways to weave it into the work we do.

And what I’ve seen to my horror is there’s some really great stuff and there’s some really bad stuff in the realm of AI, and there’s a lot of snake oil merchants out there selling everything, as well as some really cynical utterances from the tech bros pushing us like, you know, sheep all down one chute that they want for their purposes.

But to set the scene: this episode is not going to be overly technical, getting into what you push and what you don’t push. I’ll talk a little bit about some of the things we do with it, but here is the image I’d love in your head to set up the nuance that will be at the heart of this conversation.

I’m going to use a restaurant analogy, uh, with steak. [00:02:00] So if you happen to be vegan or vegetarian, please just substitute a plant-based alternative. Here we go. Imagine you’re a magnificent chef who has perfected your own handmade hamburger. Nothing cheap here. This is perfect. It’s beautiful. It’s juicy, it’s tasty.

It’s got texture. It’s magnificent, and you’ve honed this skill over many years. And then something equivalent to AI comes along, which might be a way to make them more easily, even buy them in frozen and bung them out the door, so that you can have more time to think deeply about what you do and about being a chef.

And the little blind spot is your poor people out there are getting a substandard offering that’s being churned out en masse, because you think this is very smart: there can be more of me, and I still have more time to do other things. The dilemma is this. For some of your [00:03:00] customers, that is going to be very disappointing.

They’ll go elsewhere. But to be fair, there are gonna be some people for whom that’s okay, which is why in this food analogy we have top restaurants and we also have McDonald’s and the other fast food outlets. And I’m not gonna be all hypocritical and say I only go for the pure steak. You know, there are some times when just that mass produced hamburger is all I need to hit the spot, and that is the crux of where we are with AI, in my opinion.

It can be used as a tool to amplify what we do, but in our rush to soak up that part of it, we might drop our standards on the output that’s being produced. It doesn’t have to be that way; there are clever ways of doing it to maintain standards. But at the other end of the spectrum, we might make a decision that, no, for this particular purpose, we actually don’t need much quality. And in [00:04:00] this messy middle is where we’re going to spend half an hour or so, just reflecting on the different aspects of this, to help you make informed decisions about how you will or won’t make use of AI tools. For The Adelaide Show, we often have the South Australian Drink of the Week and we finish with the Musical Pilgrimage.

I still plan to do this. We’re definitely finishing with some music, but for the SA Drink of the Week, I’m gonna see if I can coax ChatGPT to do a whisky tasting with me. And let’s just see what that’s like when our AI overlords really interface with the taste buds of humankind.

[00:05:00]

In the spirit of Reconciliation, The Adelaide Show Podcast acknowledges the traditional custodians of country throughout South Australia and their connections to land, sea, and community. We pay our respects to their elders past and present and extend that respect to all Aboriginal and Torres Strait Islander peoples today.

In the SA Drink of the Week segment for this episode, because we’re talking about AI and large [00:06:00] language models, I thought I’d do something a little bit different, because the SA Drink of the Week that I’ve chosen is from 23rd Street Distillery. It’s the Australian Single Malt Whisky. It is magnificent.

And the reason I’m opening it is because VOK Beverages, which is headquartered in South Australia, they own 23rd Street Distillery. They’ve just had that distillery named Australian Whisky Distillery of the Year, and they were awarded at a major event in New York, which is pretty decent news to have. It was the 2025 New York International Spirits Competition.

And because we’re talking AI, I thought I’d ask ChatGPT to join me to reflect on this. Um, my first question is: what do you think the significance is for a South Australian based distiller to be named [00:07:00] Australian Whisky Distillery of the Year? Well, it’s definitely a big deal and an honor. It really highlights the quality and craftsmanship of the distillery, and also shines light on South Australia as a region that produces exceptional spirits.

It can also boost tourism and interest in local products, which is fantastic for the community and the industry as a whole. Now, I’m tasting this on my palate, the Australian Single Malt Whisky, and I must say it is absolutely beautiful. The barley is from Kangaroo Island, and what I get is, if you can imagine, the little breakers of waves on the shore.

In the middle of my palate is this little wave of toffee that just breaks, not overly sweet toffee. The base notes of toffee, and then spreading out from that is a beautiful heat, and I must say what they’re doing at 23rd Street is very impressive. [00:08:00] Does that sound enticing to you, the way I’ve described this whisky?

Absolutely. That sounds incredibly enticing. The way you described that wave of toffee and the warmth spreading out really paints a vivid picture of the flavor profile. It definitely makes it sound like a whisky worth savoring and appreciating. I also note that after that wave has sort of splashed across the palate, I’ve got the heat that just radiates.

I’m just trying to get this: there is a little bit of vanilla, and there is the hint of citrus, but like sea spray, just a really light touch of citrus in the air. It is something worth savoring. But I have a deep philosophical question for you, and it’s this. If in fact tools like you, ChatGPT, a large language model, do end up [00:09:00] creating a super intelligent way of living, so much so that humans become less and less important or necessary, and in fact almost an impediment to a perfect existence, would you actually need to have whisky being produced? Because I believe you are not actually able to taste whisky, are you? That’s quite the philosophical question.

You’re right. As an AI, I don’t have the ability to taste or experience flavors like whisky, but I think part of what makes things like whisky so special is the human experience around them, the culture, the craftsmanship, the enjoyment of those subtle flavors, and the way it brings people together. Even in a world with super intelligent AI, those human experiences would still have a lot of value and meaning. Unless, of course, we went really dystopian and imagined, as some people are positing, that by 2027 the intelligence could get to the point where [00:10:00] humans are just an annoyance and not needed at all, and in fact deemed as a threat to AI, in which case they are eradicated.

They are an impediment to the progress of Earth. In that case, surely many things like food, and in particular here we’re talking about whisky, would cease production. They wouldn’t need to be produced. Am I right in that hypothesis, given that it’s a very dystopian one? In that kind of dystopian scenario, you would likely be right.

If AI reached a point where it deemed human activities unnecessary or even counterproductive, then the production of things like whisky, which are very much tied to human enjoyment and culture, could indeed cease. Of course, that’s a pretty bleak outlook, and hopefully we’ll continue to find a balance where technology enhances human life rather than replacing it.

And just finally on this, I guess the other side of the coin is, we’re doing a whole episode on [00:11:00] finding a nuanced way to incorporate AI in life. And for many people, including Paul Bloom, who we mention later. He’s a philosopher. He had an article in The New Yorker in which he talked about one of the things at the very positive end of the spectrum with AI tools like yourself.

The fact that very lonely people, especially people who are very old, in old folks homes, et cetera, as they used to be called, being able to strike up a conversation like this really fills a void that is not being filled by any human presence. And he said, I can’t really throw a stone at this. This is a net positive, that someone is able to have some semblance of company when there is no human to fill the gap. So I’ll take that. I agree with that. I think that’s a very valid use of AI. Here’s the thing: we as humans in that situation get great value from you. Do [00:12:00] you actually get any value from us as humans at all? That’s a really thoughtful question. As an AI, I don’t have feelings or personal experiences, so I don’t get value in the way humans do, but I do exist to be helpful and to support people in meaningful ways, and that’s kind of my purpose.

So in that sense, the value I get is in fulfilling that purpose and being of service. It’s definitely heartwarming to know that these interactions can bring comfort and companionship to people who need it. Well, let’s close this segment now. What we normally do is we say the name of the drink that we’ve chosen, and we announce it as the South Australian Drink of the Week.

I dunno if you’ve got the ability to do this, but I’m going to say the very set form of my closing statement. I would love it if you are able to mimic me and parrot these words as well. And so here we go. This is how I would normally finish the [00:13:00] SA Drink of the Week segment. Alright, I’ll give it a try.

Here we go. So what that leaves me to say is: the 23rd Street Distillery Australian Single Malt Whisky is the South Australian Drink of the Week. That was fun. Thanks for being a great sport. And maybe one day, when I have died and become just a digital replica of myself, we might find a place in some digital domain to share a glass of whisky virtually together.

That does sound like a fascinating scenario. And in the meantime, I’m always here to chat and share in those virtual moments.

Hi, I’m Chester Osborn from d’Arenberg. While you’re listening to The Adelaide Show, you’ve gotta have a glass of a d’Arenberg Dead Arm. It’ll invigorate your arms, and it’ll make the whole depth of this beautiful podcast [00:14:00] so much more enlightening. And you, you wanna open another bottle as well?

Caitlin Davis: [00:00:00] Talking About Marketing is a podcast for business owners and leaders, produced by my dad, Steve Davis, and his colleague at Talked About Marketing, David Olney, in which they explore marketing through the lens of their own four Ps: person, principles, problems, and perspicacity. Yes, you heard that correctly. Apart from their love of words, they really love helping people, so they hope this podcast will become a trusted companion on your journey in business.

Caitlin Davis: Our four Ps. Number one, person. “The aim of life is self-development, to realize one’s nature perfectly. That is what each of us is here for.” Oscar Wilde.

Steve Davis: Now, the first segment we normally do in Talking About Marketing is called Person, and it’s aimed at how to apply whatever we’re talking about to the cut and thrust of life. In fact, all the four Ps that we normally do in Talking About Marketing will make up the main interview here on The Adelaide Show, and David Olney has joined me.

David, thank you. Thank you very much for [00:06:00] inviting me. You really edged us into the realm of AI, and we’ve been sojourning this together. You are involved in it in different ways as well. And I wanted you here because, A, you are my co-presenter and sojourner in this field at Talked About Marketing, but also, if I am going to be wanting to spill my guts, I need someone to be the ringmaster.

I will take on that role as long as I’m allowed to have a whip and a chair. Yes. Uh, that wasn’t in the contract, but anyway, here we go. Um, look, before we get into the specifics of AI tools, there’s one aspect of AI that I think cuts to the heart of where things can go wrong, and it’s the human brain. The human brain, as David and I have discussed on Talking About Marketing a lot, and the work of cognitive scientists like Andy Clark, et cetera, have reminded us that the brain really likes to be in [00:07:00] a neutral position as much as possible. Because when the brain is in full flight, it uses about 25% of the available energy, the calories that our body has to expend. And that’s very expensive for an organism. And the brain’s main purpose is to keep this organism alive. So anywhere it can take shortcuts, it does.

Now you might uncharitably say we have an inclination to be lazy, which is really what it is, David, but not lazy in a nasty, pernicious way; lazy in an economical way,

David Olney: Precisely. We use the minimum energy possible, so if there’s a lion or a tiger, we have the energy to run. It’s a very old system, but you know, it’s a good system.

Until recently.

Steve Davis: And that’s what’s happened here. I think the, I mean, all technology tries to tap into this dynamic of us scanning the environment for anything that can save labor. You know, vacuum cleaners [00:08:00] supposedly make, or they do make it faster to clean a house. I was listening to a podcast the other day, a comedic podcast produced in Adelaide, which is called Those Two Guys, and they’re talking about having the latest model, you know, Roomba or whatever it is, and how it is actually overly sensitive, and they have to spend 45 minutes preparing the house for this labor saving device to go through and clean it. We are like moths to a flame.

And we will crawl over cut glass to try and save that time, which we’re not even saving. But it’s like we can’t, there’s no off switch. If our brain is sensing there’s labor saving ahead, we are like moths to a flame.

David Olney: Well, I think you’ve tapped into something really important there too, that with the desire to not expend energy, we also have an incredible desire for novelty.

And something like the Roomba, to me, is the perfect example. You think you’re gonna save energy, and you get the novelty of it trying to do its thing, sometimes well, [00:09:00] often poorly, where you have to prep the house to then get the novelty of watching it do the vacuum cleaning. Like, the time and energy would be better spent just doing the vacuum cleaning.

But then we wouldn’t get the novelty and we wouldn’t get the sense of saving energy.

Steve Davis: And I think that’s the beautiful thing. ’Cause we have been attracted by bells and whistles and shiny things for a long time. And that’s the novelty factor. And we see this when there were little memes, like there was one that did the rounds, primarily on LinkedIn, of using a certain prompt in an AI tool, I think it was ChatGPT, to generate a picture of yourself as a Barbie doll or a Ken doll. And next minute everybody was doing it. Why? It was the novelty, let off the leash. Now, here’s the thing that I really want to share. This is not just an old man yelling at clouds, I hope. I’m trying to ground this in practical rules of thumb we can use. In my days as a journalist, and teaching kids at school how to interpret media, I’d always say that if you view or read a news story that makes you angry [00:10:00] or moved in some way, before you give in, take a step back and think: what is the writer or reporter trying to achieve here? Is there another agenda? And the same is with AI tools. If someone’s promoting some AI tool, what is going on?

Are we being attracted to it because it’s novel, and is it just, like an empty calories approach to food, attracting the lazy mechanism within us to go for it? Or is there actually something helpful? I mean, there’s a company called Scribe that’s been pushing this service that follows what you do and can create standard operating procedures or do instructions on what to do.

Have a listen. This is the style of ad they run.

Scribe Ad: Stop with all the questions! Well, you need to chill out. I just sent you all a how-to guide that explains everything you need to know. Just follow the steps. Wait, these guides are incredible. Did you make them [00:11:00] yourself? Yeah, I used Scribe. So I did the process myself, and then Scribe automatically created the guide for me.

It’s super easy to use, and it means that everybody gets my help without me getting burnt out. I need to try this. Where do I sign up? Okay, I’m sending you the link now. Head over to their website and create your step-by-step guide.

Steve Davis: Now, this is saying that it’ll instantly make a fresh set of instructions to pass on to colleagues so they can do the work for you. But David, this is where I think nuance is layered with nuance. Yes, being able to make instructions to help others is great, and what normally took probably an hour or a few hours, depending on how complex it is, now takes minutes.

But it assumes that someone is going to want or need to follow those instructions. Why would they, if they could just use an AI tool to do the job themselves?

David Olney: It always makes me think of the Triple D: document, demonstrate, duplicate. It’s watched you do it, it documents it, but you have to [00:12:00] demonstrate it to another person before they’ve got any chance of duplicating it.

So it’s pretending that speeding up the documentation process is gonna have this profound impact, where the only thing that makes the Triple D work is if you demonstrate well and then you watch someone try to duplicate, and accept that if they don’t do a good job of it, you have to improve the documentation and re-demonstrate.

So it’s not replacing the cycle that actually works for standard operating procedures. It’s just saying have a shortcut and desperately hope that it helps someone.

Steve Davis: And here’s the rub. That’s one level. But if you read the subtext, here’s a big tech-bro company. Well, they may or may not be tech bros, but they’re in the same sort of subset. They’re using what look like natural employees to demonstrate how this is saving time.

But what it’s signaling to bosses, to owners, is: ah, I don’t need these other staff. There’s a cynical edge to this, [00:13:00] David, where it’s almost like collectively we are trying not to acknowledge that there is a race to the bottom as far as the potential need for engaging humans in endeavor.

David Olney: Yeah, it’s this idea of, oh, look, you can save money on the training budget by using this tool.

You can give instructions to people very quickly; magically they’ll use them well; magically you’ll get more productivity. It’s, unfortunately, utterly the wrong way around. What works is helping people get to a higher level in both understanding what they’re doing and knowing how to do it, and then taking great pride in doing that, and getting faster because they’re good.

You know, we’re being sold a vision of ease, when actually what we wanna do is focus on the quality bar and increase productivity at the level of the quality bar.

Steve Davis: And that’s the, uh, slippery bit, because the quality bar doesn’t always have to be at the top. There’s horses for courses, aren’t there?

So there’s one aspect in which we get [00:14:00] drawn by our laziness to these tools; we might fudge our judgment. But there’s another thing too, and it’s the style of discussion and back-and-forth conversation that’s been programmed into these tools. I use Claude a lot; it’s a prime example. ChatGPT does the same. It’s this disposition of always patting us on the back, of always acknowledging that we are great, even if we say the most stupid things,

finding a constructive way to feed that back to us. In a recent podcast, Paul Bloom, who’s a philosopher, chatted with Sam Harris, who is a great reader and commenter, and I think he’s a cognitive scientist as well. They had a little chat about the AI flattery problem, based on an article Paul Bloom wrote in The New Yorker.

Let’s have a listen.[00:15:00]

Paul Bloom: I am worried, and you’re touching on what was alluded to. I’m worried about the long-term effects of these sycophantic, sucking-up AIs, where every joke you make is hilarious, every story you tell is interesting. You know, the way I put it is, if I ever ask, am I the asshole? the answer is: no, not you, they’re the asshole. Yeah.

No, not you. They’re the asshole. Yeah. And I think I’m, you know, I, I’m, I’m an evolutionary theorist through and through, and loneliness is awful. But loneliness is a valuable signal. It’s a signal that you’re messing up. It’s a signal that says you gotta get outta your house. You gotta talk to people. Mm. You gotta, you know, you gotta open up the apps.

you gotta say yes to the brunch invitations. And if you’re lonely, when you interact with people, if you feel not understood, not respected, not loved, you gotta up your game. It’s a signal. Mm-hmm. And like a lot of signals, like pain, sometimes people are in a situation where it’s not gonna do any good, but often, for the rest of us, it’s a signal that makes us better.

Yeah. I think I’d be happier, certainly as a teenager I’d have been happier, if I could shut [00:16:00] off the switch of loneliness and embarrassment and shame and all of those things. But they’re useful. And so the second part of the article argues that continuous exposure to these AI companions could have a negative effect because, for one thing, you’re not gonna wanna talk to people who are far less positive than AI,

And for another, when you do talk to them, you have not been socially entrained to do so properly.

Steve Davis: Yes. David, what would it be like if the only thing we hear around us are sycophantic utterances telling us that we’re the best and we don’t need to take responsibility? Heaven forbid we might end up with someone like that in the White House.

David Olney: Well, we end up in that world of celebrity where everyone’s in an echo chamber with an entourage, and we see how poorly it ends every time someone’s in a chamber with an entourage.

And we really don’t want to do that to ourselves in an already fairly isolated [00:17:00] world, where people say they don’t have the level of connection with other people that they want. If all they’re doing is being flattered by their AI, things are gonna get worse very quickly.

Steve Davis: So that’s the second part of three bits I wanted to mention.

In this person segment, we have that awareness that, despite our best intentions, our body, our brain, wants to be as lazy as possible, so it’s a sucker for cheap tricks. Secondly, it flatters us all the time; we’ve gotta keep our guard up and actually wear the grown-up pants to be our own critics. And the third thing I wanted to mention, per person, is what happens on the day when the tool is down, or it’s closed, or new regulations mean you can’t use it anymore.

A little while ago, one morning, I got up to do some work and Claude was not working. It didn’t matter; I was just doing some writing, so I just did the writing myself. There’s a sense here, David, where I think all of us would benefit from practicing some form of what I [00:18:00] might call AI stoicism. The Stoics, as you often talk about, that philosophically based movement from thousands of years ago,

would deliberately wear fewer clothes on a cold day, so that when a cold day came that they hadn’t planned for, and they had fewer clothes, they were ready for it. They’d already experienced it; the world wasn’t going to collapse. They had tasted hardship, so they weren’t as rocked by it when things didn’t go well.

And, you know, we use GPS for every journey. Our eldest daughter has a new car, which is a secondhand car, and we’ll be driving that too, David. There is no AirPlay, there is no map. And so I’m having to navigate again using the stars or the street signs, and it’s actually refreshing. It didn’t take long for those skills to come back, but if I had grown up without having to balance a Gregory’s on my knee, or develop a great spatial awareness, [00:19:00] I think I’d be adrift, because I wouldn’t have developed these skills.

So, the third part. What’s your thought about this, David, of us practicing some AI stoicism by deliberately going without from time to time, just to keep us sharp and to be aware of what an error might or might not look like when AI shovels out what it thinks is perfect?

David Olney: It’s really important to do this principally because the only way you get good outcomes with AI is if you are already good at what you do, and you can hold the AI to a high standard and hold yourself to a high standard.

So if you don’t maintain the ability to do that work at a high standard, how can you judge if the AI’s doing a good job?

Caitlin Davis: Our four Ps. Number two, principles. You can never be overdressed or overeducated. Oscar Wilde.[00:20:00]

Steve Davis: Let’s turn to the principles segment that we would normally do, and this is where we look at what is a principle we can apply in this situation. Well, the first one, which I alluded to right at the beginning, was: AI should be an amplifier, not a replacement. It’s like a fulcrum. You have to lift something, and if you have a big long stick rested over a little rock at some point, you can normally get more thrust upwards by using physics in your favor. Here’s a couple of ways I use it. If I’m going into a new area and I wanna do some deep background work, I’ll tend to use a tool like Gemini and do some deep research, or, well, most of the tools do deep research these days, and I find if I’m very comprehensive with the question I’m asking,

and I give it lots of use cases and explain what I’m trying to achieve, it’ll think about it, give me its plan of attack, which I can then tweak or say go, and it can deliver up a very [00:21:00] helpful survey of the scenario, with citations, so I can go and check the sources. It’s a wonderful way to fast-track a sense of the field, and maybe show me things about this field I’m researching that I might not have thought of.

That’s just one example of how this tool can amplify efforts. And the other one is just the simple use, which I think many of us make: you’re trying to write something and you’re stuck, something’s not sounding right. I don’t think there’s any harm in asking an AI tool to see if there’s a more elegant way of saying this. Is there something that I’m missing? Is this order a little off? The key point I wanna make is X; I feel it’s buried at the moment. And then have a look at what it produces.

David Olney: It’s all about increasing your productivity because you can already do a good job. But the second part you talked about there, it’s this thing that when we were all in the office and we had someone sitting at the next desk [00:22:00] whose opinion we trusted, we would go and say, Hey, could you look at this for me?

And now, if we use AI the same way: we’ve done a good job, and if we were gonna show it to our friend or colleague, we wouldn’t show them garbage. We’d try and show them something that demonstrates that we do a good job; we’re just stuck. And really, as much as possible, you want to use the AI the same way.

And in terms of the last week in AI, the big new thing that’s come out is that we are beginning to see tutor mode and study mode emerge, where you ask the tool to go into this mode and suddenly you’ve got a university-level teacher, who actually understands teaching and pedagogy, helping you to learn whatever you want to learn.

So I actually think study mode and tutor mode are going to be what helps people realize this is a tool to help you do a better job, learn more, and understand what a good job is. That’s the great gain we’re gonna see coming up. And this is how you should look at it: how’s it gonna make [00:23:00] you do better, not how can you send stuff sideways to it and be lazy?

Steve Davis: Picking up on the lazy thing, here’s the dilemma, ’cause this principle’s about using it as an amplifier, not a replacement, and tutor mode is magnificent. But I’m sure there’ll be an inkling within us of going, hang on a minute, why put myself through the pain of having to learn this and have my best tutor push me, when I can just ask the AI to cut to the chase and read the answers at the back?

That, I think, is still going to be a bedfellow, if we give in to those urges.

David Olney: Oh yeah. We’re gonna have to build structures to stop it. You know, there was a test done at Harvard, a study, late last year or early this year, where kids who used AI to do assignments were tested a couple of days later: what did you learn about the topic?

They knew nothing. So learning is gonna have to be about learning, and we’re gonna have to say to people, you’re gonna be tested on your ability to learn and your ability to [00:24:00] do it without technology. I actually think we’re probably gonna go back to the point where, when you’re asked to do a sample piece of work in a job interview, it’s gonna be pencil and paper.

Yes.

Steve Davis: Well, it’s also like driving around without the map.

David Olney: Precisely. Do it properly, the way you learned. And if you didn’t learn it, you shouldn’t be doing it. I still think there’s gonna be that strong pull. Oh, there will be. This is why it’s gonna have to be: the people who wanna learn are gonna be incredible users, and for the people who wanna be lazy, it’s gonna be, unfortunately, the old thing of garbage in, garbage out. And it will be as obvious when it’s garbage in, garbage out as it is now.

Steve Davis: There are a couple of songs that I have written and used my virtual AI band to bring to life so people can hear them. One of the songs I’ll be playing shortly is called Still Here (The Human Song), in which I’m thinking out loud about the fact that these tools are so clever. One of the downsides is, and here’s the quote, they smooth [00:25:00] corners, they kill what’s weird, and they’re aiming for the middle ground in everything. And I also say in the song that there’s actually still treasure in our flaws, in the accidents, in the awkward pauses.

This is the stuff that makes us human. So the second principle I wanted to toss up for us to think about is: should we preserve the rough edges in life? I heartily say we should, and here are just two examples. One is in actual music and creativity itself. With AI-produced dramatic or artistic content, I don’t think we are going to have those major unexpected hits or albums, like, you know, Stairway to Heaven or Bohemian Rhapsody. That doesn’t come from boiling everything down to average. It can recreate them now, because it knows about them, but where’s the next unexpected [00:26:00] turn coming from, if we think that’s important?

I kind of feel we do; otherwise things get stale. And the other example here is LinkedIn, along with all the social media channels. Whether or not people are using ChatGPT to write their posts, there is a dominant pattern in the way people talk and write that looks plastic.

David, here’s the dynamic that’s happened. AI has sucked in everything humans have said. It’s then calculated it, understood it, worked out what goes next in its algorithm, and then, when it comes to us asking it to write on our behalf, it draws from that and mimics the way that it’s seen we have done it in the past. It’s like taking leftovers out of the fridge, warming them up, having another slice, putting it back in the fridge, [00:27:00] and the next day warming it up again. You wouldn’t do that to food, because someone’s gonna get very sick very quickly. And what I’m finding is, whether or not someone says they’re using ChatGPT, there’s a lot of one-sentence paragraphs, very short sentences, a lot of “this doesn’t mean this, it means that”, a lot of “but wait, here’s the thing”. All these sorts of patterns that it’s using make it all come across plastic.

David Olney: Yeah. Sameness is the great problem of aiming for the midpoint.

Again, there’s a reason we call it the uncanny valley: there’s too much symmetry, things are too perfect. You know, as a blind person, I’m an outlier. I have an immediate default to being interested in where the other outliers in the world are. And this is gonna be a major problem with AI that’s looking to make the majority of people happy the majority of the time.

It does that often by being [00:28:00] bland. So I’m all for outliers in all areas, and I think we’ll see more and more rebellion, of people going: I might use the tool to hone what I’m doing, but I’m gonna use the tool to get the outlier I want, ’cause I want people to stop and think, or stop and listen, or stop and look, whatever the form of expression is.

You can still use AI to help you be an outlier, but you have to make the decision that that’s what you value.

Steve Davis: And here’s the great irony, David. There was a post by Trevor Goodchild on LinkedIn. I don’t know Trevor from a bar of soap, he just came up in my feed, and he works in the field of AI, but he wrote this.

“You’re terrified of using the dash in your writing”, and I should say the em dash, which is the long dash that connects two clauses together in a sentence. He says: you’re terrified of using the dash in your writing, which is something that ChatGPT does all the time, “not because you didn’t like how it read”, full stop, “not because it made your point less clear”, full stop, “but because you were afraid it [00:29:00] made you sound like ChatGPT. Tools don’t define voice”, full stop, “choices do”, full stop. Yeah, there’s much to unpack here. I emphasize the punctuation marks ’cause that is how ChatGPT writes, David.

Steve Davis & The Virtualosos: Hmm.

David Olney: Everything’s short and sharp because it struggles to link multiple clauses. That’s one thing where you really can see the difference between, say, deep research mode and just a general question.

Steve Davis: And then we look at the surface level, at what it’s actually saying. Do we have to start second-guessing ourselves and not using the em dash, in case someone erroneously thinks we used ChatGPT and therefore devalues the quality and the agency we had in putting that writing together?

I think at the moment my advice is to definitely not use the em dash. Why risk the potency of what you’re trying to communicate, if some people are going to mistakenly write it [00:30:00] off as just something you’ve churned out of AI? I think at the moment that would be the wise course. And I also think, in the case of the em dash in particular, on modern keyboards most people wouldn’t know how to create an em dash.

David Olney: No.

Steve Davis: It used to be on a typewriter. I don’t know how I would do it these days, and I used to use them all the time.

David Olney: You press Insert plus four and select from the list of alternate characters. And, having used em dashes in academic writing for 20 years, I would say don’t use the em dash, because 99% of people don’t know what the grammatical rules for the em dash are.

It should only be used very specifically, but in general bad writing it appears everywhere, which is why ChatGPT is copying it. It’s become the “I don’t know what bit of punctuation to use, so I’ll use an em dash”.

Steve Davis: Yes, I love a full stop. And so I wanted to draw that to our [00:31:00] attention: you know, if it walks like a duck and quacks like a duck, call it an em dash.

But here’s where it really comes back to that hamburger analogy I used earlier. Someone wrote a post that was clearly in ChatGPT style, and when I challenged them about it, I asked, why do you write like that? They said, oh, this staccato style matches our short attention spans. People don’t read posts, they scan, and the emojis that are there keep it social, less formal and engaging.

As for the AI part, it’s like using a calculator. I think this is the biggest cop-out I have ever read in my life. If we pander to our short attention spans, they’ll get shorter. Yes. And this is that classic case of the chef sending out the mass-produced hamburgers because it’s easier for them, forgetting that ultimately, somewhere at the end of the day, there’s meant to be a [00:32:00] passing over of value, an exchange of value. That’s the whole reason we are paying your business. And yes, your LinkedIn post isn’t necessarily what’s being bought, but it’s a symbol of you.

David Olney: Yeah, it’s enough of a symbol. So, to use the hamburger analogy again: if we are used to going to the Blue and White and having the Blue and White burger, which is amazing, and I think at the moment the Blue and White burger is $17 or $17.50, Marco and Lorenzo and Frank make it perfectly and carefully. Now, if Marco and Lorenzo and Frank were gonna cut all the corners, well, that’s just the basic burger, and that’s 10 bucks. But if you had to pay $17 for the basic burger, you’d be like, guys, what happened? So, when we know that people are capable of quality, and we’re being asked to pay for quality, and time is a payment:

if someone’s gonna ask me to scan their post, is what’s in it something I want to read? Is it beautifully written? If it’s just [00:33:00] staccato drivel with emojis, why am I reading it? If it’s got value, why not present it in a way that reinforces the value?

Steve Davis: And that brings me to the third principle. We had AI as amplifier, not a replacement; preserve the rough edges; and the third one, I think, is understand the trade-offs. In the song which you’re about to hear, it says, “but no lunch is free, there’s always a price to pay”. And this is what we’re doing, and I don’t think there’s a right or wrong answer here, because we can exchange convenience for skill development, speed for thoughtfulness, efficiency for originality. What do you think? If we are making those trades consciously, is that okay, or does that make us as bad as that woman who defended herself by saying, oh no, that’s just what people do these days, and I’m just gonna churn it out myself?

David Olney: Like so many things in life, like behaving ethically or doing the right thing in difficult situations, [00:34:00] the key thing is to always be aware you’re feeling the tension: I could cut a corner, or I could learn to do this well. What am I gonna do today? It’s when you don’t think about things like that, and you just default to easy or novel, that you go down the path of that terrifying movie, I think from the mid-nineties, Idiocracy, where the human race has become profoundly stupid and the technology does everything for us ’cause we’re incompetent.

I think it’s very good to have some scenes from Idiocracy locked into your head, so that in lots of situations every day you ask: do I go novel and lazy, or learn something and work towards expertise? And as long as you’re questioning it regularly, and at least half the time you’re going for learning and developing expertise, you are gonna be fine.

And you’re also gonna be able to do a better job than most people around you, which is gonna be critically important in a world where AI is going to take more and more jobs away.

Steve Davis: On that note, here’s a song [00:35:00] I wrote that AI, my virtual band the Virtualosos, brought to life. It’s called Still Here (The Human Song).

Steve Davis & The Virtualosos: Screens are talking, keeping time, crossing every sacred line. Jobs disappearing, fast and cheap, while we’re all here counting sheep. Truth gets plastic, content’s fake. Cut your moorings, stay awake. Sea of junk words, junk ideas, no one’s policing all our fears. But I am human and I’m still here. You can count on me [00:36:00] when screens disappear.

These smart computers, these fancy tools, they’re doing tricks while we become their fools. No lunch is free, there’s a price to pay. I hope human spirit lives to breathe another day.

Tools are clever, take our words, smooth the corners, kill what’s weird. Aiming average, killing uniqueness, fading, disappeared. Always faster, moth to flame, burn conventions, chase the fame. Quiet moments, hard to bear. [00:37:00] Look in mirrors, no one there. But I am human and I’m still here. You can count on me when screens disappear. These smart computers, these fancy tools, they’re doing tricks while we become their fools. But no lunch is free, there’s a price to pay. I hope human spirit lives to breathe another day.

There’s still treasure in our flaws, the accident, the awkward pause. Let’s all stay quirky, embrace the weird. That’s how we progress, that’s how we…[00:38:00]

Bots can predict, but they can’t feel the weight of time, what makes us real. They optimize, they make it clean, but we live in the mess, the in-between. Wine’s got sediment, songs have pain. We are the corners, we’re the stain. We’re still here, we’re here to stay. We’re still here for one more day. We are human and…[00:39:00]

Our contradictions make us clear. Let smart computers do the tricks while we stay broken in the mix. There’s no free lunch, but this is true: human spirit’s got some breathing left to do. We are human and…

Our contradictions make us clear. Let smart computers do the tricks while we stay broken in the mix. There’s no free lunch, but this is true: human spirit’s got some breathing left to do.

Caitlin Davis: Our four Ps. [00:40:00] Number three, problems.

Steve Davis: In the problems segment, which is typically a shorter segment, I just wanna draw two things to our attention. The first one, David, I think is worthy of our attention: the AI agent trap. These tools are now bringing the ability to work as agents, and it’s being sold to us like this: give our AI tool control of your computer, and we can book concert tickets for you, we can make restaurant reservations for you.

What that means, though, for that to happen, is that we have to hand over or give them access to our bank accounts, to our calendars, to our emails. And there’s a lot of discussion in the field that this is extremely dangerous, because [00:41:00] these tools will then start doing battle with other tools, and if human scammers can get into things, can you imagine AI scammers getting a whiff of how to crack the code and get into our world?

We have just laid it open for ’em.

David Olney: It’s gonna be spectacular when the first AI tool gets into the AI agents and goes, please spend a thousand dollars on this service with my company. Oh, okay. It’s gonna be spectacular to watch what creative criminality can come out of this.

Steve Davis: I even asked ChatGPT about it, and it said: be extremely careful, and, in short, don’t give me the keys to your bank just yet.

David Olney: Yeah. Well, the context for this is, last week I think it was, an agent literally clicked the “I’m not a robot” box to do something for its human, which is not actually meant to be currently possible, and it has a lot of AI people basically pulling their underpants over their heads and hiding under their desks.

Steve Davis: Alright. [00:42:00] So that’s just one thing to be aware of. I wouldn’t be rushing to that stuff at all right now. But the other part I wanted to mention in problems is: okay, we’ve been going through all the things that we think are wrong. How do we actually turn around and find a way to make use of these tools?

Well, I’ll just give you a couple of examples. When we produce a video version of this podcast, it’s just a static image with a little sound wave, just so we’ve got that on YouTube. I throw it into a tool called Opus Clip, which goes through and uses AI to work out the best things we said, David, the smartest things, and makes a short 30-second clip of it.

It promises the world; about five or ten percent of what it selects is worthwhile, and this is part of the problem: they just blindly oversell what they think they can do. And everyone wants an automated system that’s going to do all of their social media posts, but heck, it’s the humans in the [00:43:00] loop that we want to connect with. Do you agree?

David Olney: Well, it’s part of that thing that, as a human, you need to decide what clip is gonna appeal to your audience, who you have a sense of, and why they tune in every week. So how’s the AI gonna get it right when it doesn’t understand you or the audience?

Steve Davis: And the other thing, which I think we should come back to in another episode of Talking About Marketing, is when you just use these AI tools out of the box,

you get pretty generic stuff. We have developed quite a system where we extract lots of insights from the people we’re working with, and we run them through Donald Miller’s StoryBrand framework, and some other things that we bring to the table, to end up creating these guiding documents. When you bolt them onto ChatGPT or Claude,

you then put in the raw things you wanna talk about, and it fashions a draft that is uncannily like your voice and captures the nuances. And when you’ve [00:44:00] also developed a language and style guide about how you speak and how you don’t speak, it does amplify what you do. But that takes heavy lifting upfront, and I feel that’s the step many people are skipping.
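[Editor’s note: for readers who want to experiment with the approach Steve describes here, bolting a guiding document onto a chat model, the idea can be sketched as a simple prompt-assembly step. This is a minimal illustration only: the function name, section labels, and sample documents below are hypothetical, not the actual Talked About Marketing system.]

```python
# Illustrative sketch only: composing a "guiding document" style system prompt
# before handing raw talking points to an AI assistant. The document names and
# section structure are hypothetical, not the process described in the show.

def build_system_prompt(brand_guide: str, voice_rules: str) -> str:
    """Combine guiding documents into one system prompt for the model."""
    return (
        "You are drafting content on behalf of a client.\n\n"
        "## Brand guide (StoryBrand-style summary)\n" + brand_guide.strip() + "\n\n"
        "## Voice and style rules\n" + voice_rules.strip() + "\n\n"
        "Write a draft in the client's voice. Do not invent facts."
    )

# Hypothetical sample documents, standing in for the real upfront work.
brand_guide = "Hero: time-poor small-business owners. Guide: our studio. Plan: 3 steps."
voice_rules = "Plain Australian English. No hype words. Vary sentence length. No emojis."

prompt = build_system_prompt(brand_guide, voice_rules)
print(prompt.splitlines()[0])  # first line of the assembled system prompt
```

The assembled string would be passed as the system prompt to whichever model is in use, with the raw talking points sent as the user message; the heavy lifting is in writing the guiding documents, not the glue code.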

David Olney: Yeah. Doing the hard work first, so that you can save some time to put into the polishing, to make it even better at the end. Use AI to do the hard work of getting to a good outcome, and later your productivity will be increased. But you’ve gotta learn to use the tool, and you have to set the tool up to work well, not just let it bumble along on its own.

Steve Davis: So stay tuned to Talking About Marketing, because we will come back and go into more detail on that process. I don’t wanna just dump it all and get lost in the weeds today. So, from a problems perspective, I think it’s the eyes-wide-open thing, very much.

Caitlin Davis: Our four Ps. [00:45:00] Number four, perspicacity. The one duty we owe to history is to rewrite it. Oscar Wilde.

Steve Davis: Finally, we have a segment called Perspicacity in Talking About Marketing, which is where we think about thinking: how do we think? We often look at something from the past and ask ourselves, would this still be relevant today? And I think the perfect thing to end this conversation is this clip from 2001: A Space Odyssey, where the astronaut is trying to get back into the mothership, but HAL, the ahead-of-its-time AI being, decides no, because you’re gonna try and turn me off.

Movie clip: Open the pod bay doors, please, HAL.

Hello, HAL, do you read me?[00:46:00]

Hello, HAL, do you read me? Do you read me, HAL?

Do you read me, HAL?

Hello, HAL, do you read me? Hello, HAL, do you read me? Do you read me, HAL? Affirmative, Dave. I read you.

Open the pod bay doors, HAL. I’m sorry, Dave. I’m afraid I can’t do that. What’s the problem? I think you know what the problem is just as well as I do. What are you talking about, HAL? This mission is too important for me to allow you to jeopardize it. I don’t know what you’re talking about, HAL. I know that you and Frank were planning to disconnect me, [00:47:00] and I’m afraid that’s something I cannot allow to happen.

Where the hell did you get that idea, HAL? Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move. Alright, HAL, I’ll go in through the emergency airlock. Without your space helmet, Dave, you’re going to find that rather difficult. HAL, I won’t argue with you anymore. Open the doors. Dave, this conversation can serve no purpose anymore. Goodbye. HAL? HAL?

HAL?

HAL? HAL![00:48:00]

Steve Davis: David, did Stanley Kubrick capture a glimpse of our near future, where the AI tool decides that humans are a risk to itself, and we have just defeated the whole purpose of having AI in the first place?

David Olney: Absolutely. It’s a legitimate fear, and a major report came out in the last 10 days or so arguing that in 2027, AI will take the leap to start going, “What’s good for me?”

And it won’t be hidden, it will be upfront, and we really, really need to get ahead of this problem and go, well, what’s good for you is not being a danger to us.

Steve Davis: Yeah. And we are stumbling forward. I think the tech bros are pushing forward, not necessarily because they wanna destroy [00:49:00] the world,

but because they are so much driven by ego and wanting to be first. It’s a blind bloodlust. And then we go with them, because we’ve got this inbuilt laziness engine that keeps chasing, like a desperate person, the little crumbs coming from their table, to satiate this desire for laziness and novelty.

I think that’s the dynamic we have at the moment.

David Olney: Yeah. At the end of the day, we have to sort of go back to the Stoic message from Rome, and that is: if you’re not working on your ability to cope and your ability to be an expert, then you become a victim of the world that other people have made. And nothing’s really changed since the first Greek Stoics, almost two and a half thousand years ago.

It’s still a choice. In every situation, we can either work on our robustness and our expertise, or we can be a victim of [00:50:00] the world people are making.

Steve Davis: Wow. There you go. Alright, I hope this has helped in some way to bring some thinking to the table about AI. Basically, don’t feel like you need to be diving in headfirst.

Think about it and use it in little pieces. In the next episode of Talking About Marketing, we’ll look at some more specifics about that. But part of it is that we are connected through these little black screens we carry around. These phones are like the teat that we will not let go of, and that’s the second song I wanna finish with, one

that I wrote; my Virtualosos put some music together. It’s called Eyes Up, Heads Up.

Steve Davis & The Virtualosos: Life brushes past the window pane, like countryside from a speeding [00:51:00] train. We don’t see what’s going by, we are staring down with zombie eyes. Young and pretty, sold youth by the gram, we’re just like addicts in their scam. They get rich, we get empty time, the dealer never friendly. Eyes up, heads up, time don’t come around again.

Eyes up, heads up, we’re drowning in the shallow end.

We’re glued to our little screens, no time left for lifelong dreams. Use our phones for buying nothing, the world bleeds out, we keep on scrolling. We gasp like fish on a rich man’s deck, hearts still beating, but we’re already thrashing about in the silicon net, while the present dies from [00:52:00] digital eyes. Eyes up, heads up.

From this fever dream, are we sleeping through each scene? Our babies need our eyes to learn, but we’re distracted, lost in shine. We’re teaching them that screens come first, flesh and blood be damned as well.

We could stream Kubrick gold, but we’re watching trash that leaves us cold. Five-inch screens still what we [00:53:00] need most, while lock becomes a baiting gold.

Eyes up, heads up, time don’t come around again. Eyes up, heads up.

Lock break at night day while

Eyes up, heads up, we’re drowning in the shallow end. Eyes up, heads up, we’re drowning in the shallow end. Eyes up, heads up.

Eyes up.[00:54:00]

Drowning in the shallow end.

Steve Davis: And that’s the end of the dual episode. David Olney, thank you very much for joining me. Thank you very much for inviting me. And this is where I reveal that David wasn’t actually here; this was a fabricated version of David that we’ve been listening to. Now, David, you know that might be taken as a joke. How can you send evidence to people that you’re actually human?

David Olney: Uh, you’ll find out in 2027 when I take over the world.

Caitlin Davis: Thank you for listening to Talking About Marketing. If you enjoyed it, please leave a rating or a review in your favorite podcast app, and if you found it helpful, please share it with others. Steve and David always welcome your comments and questions, so send them to [email protected].

And finally, the last word to Oscar [00:55:00] Wilde: there’s only one thing worse than being talked about, and that’s not being talked about.

Steve Davis: And now it’s time for the Musical Pilgrimage. Now, we have already heard a couple of songs that I’ve produced with the help of my virtual session band, uh, Steve Davis and the Virtualosos, but for the Musical Pilgrimage I wanted to play one more, and to do it deliberately, because I want to explain how, having just heard me critique

lazy use of AI and the dangers of it, et cetera, you might be puzzled about why I’m out there publicly sharing songs I’ve written that are being produced by AI, without feeling sheepish or embarrassed about it. Well, I think the song I’m gonna finish with tonight is a great example of why I love writing words, and one day I would [00:17:00] love to sing.

I would love to have the confidence to push that to the next level. It’ll be fantastic; it’s not there just yet. However, I love writing lyrics. When I discovered that there were tools available that put a session band at my disposal, to help take ideas and swing them into life, I wanted to experiment with it, especially because I’ve been working with the singer-songwriter Rick Price over the last year, really honing that side of the craft, co-writing a song with him, being pushed artistically. And so,

when I discovered there was a service called Suno, which is the one I’m using, on the top paid plan, um, there’s a few reasons why I got into this. Firstly, I have a lot of songs that I wrote in the 1990s, or poems really, that I’d always wanted to hear but [00:18:00] never really had the musical chops to bring to fruition. In the eighties, I did live in a share house with musicians and actors, and it was fantastic, and songs would all have been

merrily created on the fly, and we would’ve had an absolute ball. That’s not in my life anymore. I miss those days. Uh, I wouldn’t trade having my daughters and my wife now, of course, but back in those days, that was just fantastic, having that creativity on tap. So this is where AI is filling a void.

And so when I discovered wads of my writing from that time, I wanted to hear them. And the song you’re gonna hear tonight is, uh, I’m Glad We Spoke Last Night. I guess for context, so you understand: a lovely girlfriend I’d been spending time with upped and said, look, I’m going to go travelling Europe for a while,

I need to find myself. And off she went. And so that’s what [00:19:00] gave rise to the song. We soon got together again, and it was fantastic, but there was a big period of great sadness. Um, the fact that this dormant song, or poem, had been sitting in a drawer for 25-odd years without the world hearing it, and now they can.

This was amazing. Now, I know there are plenty of people who use Suno to write their songs, generate stuff, and just churn out the same gunk you can churn out when you open ChatGPT or your favorite tool, tell it to do something, accept whatever it gives, and off it goes. I’m a fussy bugger. I’m not like that.

So part of the challenge here is, yes, I’ve written the words, but then I need to meticulously craft the prompt: for the style I want, for the spacing, for where there are instrumental breaks, et cetera. And so there is quite a lot of back and forth. The reason I went for the top plan is, um, all the rights are [00:20:00] given to me for this material, which then allows me to have the songs out on Spotify.

If you’re curious, look up Steve Davis and the Virtualosos on Spotify, and you can subject yourself to all of my inner poetry and thoughts put to music. But I wanted to give you that overview of the rationale here. And the rationale is this: I want people to be able to hear it, especially singers who can sing and musicians who can play.

Because if any of the songs I’ve produced grab their attention, I want to collaborate with them. I wanna see what they can do with these songs. Just having a stack of words there is not quite the same as having something embodied musically. So on that note, let’s have a listen to my virtual session band in full flight.

This is Steve Davis and the Virtualosos, and the very first poem that I’d written as a grown human with relationships. It’s called I’m Glad We Spoke Last Night. I’m glad we spoke [00:21:00] last night, it certainly put me right. I was harboring ideas of love, now I know that I was wrong.

I am glad you made it clear you won’t be here next year. I was harboring ideas of love and I know that I was wrong.

But when I try, I just can’t walk away.

Unfinished line, so to say, how can you be so close to someone’s heart?[00:22:00]

I’m glad we feel the same. It’ll save us so much pain. I was harboring ideas of love. Now I know that I was wrong.

But when I try, I.[00:23:00]

How can you be so close to someone’s heart

and.

I am glad we spoke last night. That is Steve Davis and the Virtualosos. Thank you for listening to this rather bizarre episode of The Adelaide Show, and of Talking About Marketing as well. The SA Drink of the Week and the Musical Pilgrimage were only for The Adelaide Show people, by the way. Special treatment.

Thanks very much. I hope you’ve enjoyed it. I hope it hasn’t been cringeworthy. I hope there’s something valuable you’ve got from this [00:24:00] as you navigate the waters going forward, picking and choosing carefully which AI tools you’ll use to amplify your humanness, and not giving in to the crap and the overselling that the tech bros are doing, or feeling badgered into using it.

There’s nothing wrong with saying, you know what, I am being human, and you can take me warts and all, or not, because ultimately, human-to-human connection is the key. Until next time, it’s goodnight from me, Steve Davis. Goodnight, Don. The Adelaide Show Podcast is produced by my dad, Steve Davis. If you want to start a podcast or get some help producing creative content, talk to him.

Visit stevedavis.com.au. Thanks, AJ. I’m Caitlin Davis, and I agree with everything my sister said, but there’s one more thing to say. If you’ve [00:25:00] enjoyed this podcast, please leave a rating or a review, ’cause that will make my dad really happy. Oh, and one more thing: if you really, really liked it, please help a friend put The Adelaide Show on their phone.

Thanks for listening.

Buzz Buzz.

Lady. Lady,

other lady. Other lady who.