Reading Time: 24 minutes
Have you ever wondered if your marketing strategies are crossing an invisible ethical line? In today’s AI-driven world, the temptation to push boundaries for results is stronger than ever. But what if embracing ethics could actually supercharge your marketing efforts?
Marketers everywhere are grappling with a critical challenge: how to harness the immense power of AI without compromising their brand’s integrity or consumer trust. The risks are real – from privacy breaches to unintended bias – and the consequences can be severe.
But here’s the exciting twist: responsible AI isn’t just about avoiding pitfalls. It’s a gateway to unprecedented innovation and deeper customer connections. Today, we’re joined by an expert who’s cracked the code on blending cutting-edge AI with rock-solid ethics.
LISTEN TO AI IN MARKETING: UNPACKED:
WATCH AI IN MARKETING: UNPACKED:
Ladies and gentlemen, I'm thrilled to introduce Sarah Lloyd Favaro, a true pioneer in the realm of responsible AI innovation. With a rich background spanning technology consulting, data science, and enterprise content management, Sarah has been at the forefront of AI integration across various industries. Formerly with Verizon, she has worked with global giants, and her passion for AI literacy has positioned her as a thought leader in responsible AI implementation.
AI in Marketing: Unpacked host Mike Allton asked Sarah Lloyd Favaro about:
✨ Ethics Drives Innovation: Responsible AI isn’t a constraint; it’s a catalyst for creative, effective marketing solutions.
✨ Transparency Builds Trust: Open communication about AI use strengthens customer relationships and brand loyalty.
✨ Continuous Learning is Key: Staying informed about AI ethics ensures marketers can adapt to evolving standards and technologies.
Learn more about Sarah Lloyd Favaro
Resources & Brands mentioned in this episode
Full Transcript
(lightly edited)
Responsible AI Marketing: Where Innovation Meets Integrity with Sarah Lloyd Favaro
(00:00:00) Sarah Lloyd Favaro: The biggest thing in terms of, uh, using AI in marketing, and in thinking of responsible AI, is trying to prevent damage from happening in the first place. So I would say it's really prevention. When you use responsible AI, you need to think, Hey, I don't want to have a situation where I have crossed that line.
And then I have to do damage control, because then, as we all know, we're talking about consumer trust. We're talking about the people we are trying to attract, and we have, you know, repelled them instantly because we have crossed that line.
(00:00:52) Mike Allton: Welcome to AI in Marketing: Unpacked, where we simplify AI for impactful marketing. I’m your host, Mike Allton here to guide you through the world of artificial intelligence and its transformative impact on marketing strategies. Each episode will break down AI concepts into manageable insights and explore practical applications that can supercharge your marketing efforts.
Whether you're an experienced marketer or just starting to explore the potential of AI, this podcast will equip you with the knowledge and tools you need to succeed. So tune in and let's unlock the power of AI together.
Greetings, programs! Welcome back to AI in Marketing: Unpacked, where I selfishly use this time to pick the brains of experts at keeping up with and integrating or layering artificial intelligence into social media, content, advertising, search, and other areas of digital marketing. Oh, and you get to learn too. Subscribe to be shown how to prepare yourself and your brand for this AI revolution and come out ahead.
Now, have you ever wondered if your marketing strategies are crossing an invisible ethical line? In today’s AI driven world, the temptation to push boundaries for results is stronger than ever. But what if embracing ethics could actually supercharge your marketing efforts?
Marketers everywhere are grappling with the critical challenge:
how to harness the immense power of AI without compromising their brand's integrity or consumer trust. The risks are real, from privacy breaches to unintended bias, and the consequences can be severe. But here's the exciting twist: responsible AI isn't just about avoiding pitfalls. It's a gateway to unprecedented innovation and deeper customer connections.
Today, we're joined by an expert who's cracked the code on blending cutting-edge AI with rock-solid ethics. Ladies and gentlemen, I'm thrilled to introduce Sarah Lloyd Favaro, a true pioneer in the realm of responsible AI innovation, with a rich background spanning technology consulting, data science, and enterprise content management.
Sarah has been at the forefront of AI integration across various industries. Her work with global giants and her passion for AI literacy have positioned her as a thought leader in responsible AI implementation. Hey, Sarah, welcome to the show.
(00:03:02) Sarah Lloyd Favaro: Oh, thanks so much, Mike, for having me.
(00:03:05) Mike Allton: My pleasure. Could you start by just helping us understand what does responsible AI even mean, particularly in the context of marketing, and why is it so crucial, do you think, in today's landscape?
(00:03:17) Sarah Lloyd Favaro: Sure. I would love to dig into that. I think responsible AI means different things to different people because it is an umbrella term that encompasses so many different elements of working with AI. I do like Microsoft's pillars of responsible AI, which really kind of give you a full breadth of what's covered under that umbrella.
So first of all, you’ve got fairness. Then you’ve got reliability and safety, privacy and security, inclusiveness, transparency, and accountability. So there’s a lot to unpack there. And we haven’t even talked about the sustainability element, which a lot of people also would categorize that as responsible AI.
So all of those things are at play whenever you use AI in any sort of discipline, of course. That would be true for marketing as well. And just a final note on, you know, what’s interesting in particular with responsible AI and marketing is that we know that these days customer data is really our currency.
And so, you know, the customer has become the product, if you will, in so many instances that we have to be really, really careful, because we're dealing with human beings and human rights, and it is easy, perhaps, to cross the line, especially if you're using other AI tools that are third party and you're not sure, you know, of their ethics and their terms and conditions.
So it's a really robust field that covers law, psychology, technology, sociology, and many other elements. It's a big field, and there are lots of people working at it in different areas, but I think that just encompasses in general what responsible AI is and touches upon how it works within the marketing sphere.
(00:05:35) Mike Allton: And to your point, this is a big topic. It's a big question. So throughout this talk, I know we're going to reference some big ideas and some resources, and I'm going to make sure that every single thing we touch on is linked in the show notes below for those of you listening who want to do a deeper dive and really understand how that applies to your own organization, like Microsoft's list.
For instance, you know, I've seen Miri Rodriguez talking about that on social media and LinkedIn. We're going to have her on the show soon to talk about AI with women in tech, and of course Microsoft's stance towards AI, so that'll be terrific. But I mentioned at the outset, I know this is about more than just a list of what not to do.
But let's start there, because sometimes I think that's helpful for folks to kind of wrap their brains around what it is we're talking about. What do you think some of the most common ethical pitfalls may be that you've seen marketers encounter when implementing AI in their strategies? And I recognize you're not a marketer, so I appreciate that you're putting your "I'm not a marketer, but I'm going to try" hat on as you look at this for us.
(00:06:37) Sarah Lloyd Favaro: Yes, thank you for that. I would say the biggest thing in terms of using AI in marketing, and then thinking of responsible AI, is trying to prevent damage from happening in the first place. So I would say it's really prevention. When you use responsible AI, you need to think, Hey, I don't want to have a situation where I have crossed that line.
And then I have to do damage control, because then, as we all know, we're talking about consumer trust. We're talking about the people we are trying to attract, and we have, you know, repelled them instantly because we have crossed that line. So I think with responsible AI, the important thing is to understand, you know, what this technology is.
Again, talking about third-party AI tools, there are some great ones, and they're amazing, and it's important to understand how they are using data that would be integrated and funneled to their particular tool. So I think, you know, we can think of some of the common pitfalls, such as data privacy violations. Like, you know, I am guilty of this as well.
I am in a hurry and I need to use a tool, for example, and I want to use the free version, but they require me to sign up, and so I kind of sign away my rights. You know, I never read the terms and conditions from Apple or, you know, all these things. Hopefully I'm not the only one in that boat, but, you know, more and more, with technology such as AI, it ups the ante; these terms and conditions become more important.
And it's something that even as marketers, we want to try to provide that transparency in a way that's easy for people to understand, without having to read through, you know, a 50-page contract of terms and conditions when they want to use a particular tool or buy a particular product, et cetera. So that's data privacy. And security also: you know, we have breaches all the time.
You know, I seem to receive a letter in the mail every month. So we need to be really careful, because sometimes we have that customer data, and despite all of the security measures a company takes to protect it, we know that things can happen inadvertently. So that's obviously, you know, something important to talk about.
You already mentioned bias as well. I think with human data and human behavior, we are biased, unfortunately, and we need to realize that the historical data that AI processes and analyzes, a lot of the time that's biased data, because it's past data. And so we need to really look at the outputs of these AI tools, you know, even basic content, marketing content generation or copy, and see and read them, and make sure that we edit them and try them out, you know, in focus groups or different mediums where people can say, Hey, I don't think that really would work.
And see, this is the way to do that testing before the damage is done by, you know, sending out your marketing campaign without doing that due diligence. So those are just a couple of common ethical pitfalls that I think everybody would be familiar with. And just the last point, and I'd love to get your thoughts on this:
personalization. That's, I think, you know, kind of the holy grail of AI. It can, you know, parse through enormous amounts of customer data that, for a human being, is just too manual; we can't handle all of those different pieces. But AI, you know, is 24/7, and it just has, you know, parallel processing.
So it has no problem with that, and it can generate these wonderful insights about customer behavior and consumer personas. So obviously we want to, you know, take advantage of those, but sometimes it may cross the line into too personal, to the point where it gets a little creepy. So I think that's also something to keep in mind.
And again, testing before you go to market is always important.
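Sarah's "test before you ship" point can be made concrete with a small amount of tooling. The sketch below is purely illustrative: the Draft class, the SENSITIVE_TERMS list, and the needs_review check are hypothetical names invented for this example, not part of any real campaign system, and a real review gate would be tuned to your own brand, data, and legal guidance.

```python
# Minimal, hypothetical sketch of a pre-launch review gate for AI-generated copy.
# The idea: anything AI-generated that touches a sensitive topic, or that no human
# has reviewed yet, gets held back instead of shipping automatically.
from dataclasses import dataclass, field

SENSITIVE_TERMS = {"pregnant", "diagnosis", "credit score", "income"}  # illustrative only

@dataclass
class Draft:
    copy: str
    ai_generated: bool
    reviewed_by_human: bool = False
    flags: list = field(default_factory=list)

def needs_review(draft: Draft) -> bool:
    """Hold AI-generated copy that hits a sensitive term or skipped human review."""
    lowered = draft.copy.lower()
    draft.flags = [term for term in SENSITIVE_TERMS if term in lowered]
    return draft.ai_generated and (bool(draft.flags) or not draft.reviewed_by_human)

drafts = [
    Draft("You may be pregnant? Save 20% on prenatal vitamins.", ai_generated=True),
    Draft("Our fall collection is here.", ai_generated=True, reviewed_by_human=True),
]

for d in drafts:
    status = "HOLD for human review" if needs_review(d) else "cleared"
    print(f"{status}: {d.copy!r} flags={d.flags}")
```

Nothing here replaces the focus groups Sarah mentions; it just makes "did a human look at this?" an explicit step in the pipeline rather than an afterthought.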
(00:11:57) Mike Allton: Those are huge points. In fact, we did an entire episode with Zontee Hou from Convince and Convert all about data personalization and, obviously, you know, how wonderful AI can be for that. But at the same time, it's easy to cross those lines and get too personal.
She shared this example of how, you know, Target had used AI to generate models of its buyers so that it could identify women who were pregnant before they had even told anybody, probably based on their shopping habits, so they could get ahead of their competitors with their messaging and, you know, say, Hey, you could look at this formula, that sort of thing, which was a little questionable right off the bat.
Then they sent out marketing messaging, like direct mail, printed pieces of copy that talked to these pregnant women, showing other people in their household who picked up the mail that they were pregnant before they'd even...
(00:12:53) Sarah Lloyd Favaro: Oh my gosh, that's crazy.
(00:12:56) Mike Allton: Yeah. So they obviously crossed a line there. And I want to go back to your earlier point about
just data privacy in general, because we're recording this in late September, and just within the last few days, LinkedIn, what most people would consider the most trustworthy of social networks, is now embroiled in this huge controversy, because it was discovered that they'd been scraping user profiles to, you know, empower artificial intelligence and train AI models. And the real sticky wicket for me was that it was non-EU profiles. They knew that if they scraped members who live in the EU, they would have been, you know, running up against GDPR and the data privacy laws in the EU.
So they had that thought process in mind. They knew it wasn't okay to do it in the EU, because there was an actual law, but they figured, well, there's no law against it in, you know, the rest of the world. And now they have countless people talking about them, you know, in a negative light online.
(00:14:05) Sarah Lloyd Favaro: It's that damage control I was talking about. Now they're backtracking, and, you know, I don't know how many posts I've seen or friends who've sent me, Oh, if you need to, you know, turn this off in LinkedIn, let me just keep you aware of what's going on. But that's kind of sneaky, you know, LinkedIn understanding that if they did do it within the EU, they would obviously be penalized, you know, financially,
possibly, but not in the U.S. And that's always been this dichotomy, I think, you know, with U.S.-based AI and technology: it's kind of that fail-fast approach. Let's just try it and see what happens, and then we'll figure out, you know, later how to deal with any of the back-end issues that result. And I think that's great for innovation.
There are so many great things about that. But I really feel strongly about this with artificial intelligence: there's the possibility of harm, and this is where sensitive uses come up. In the EU AI Act, they talk about, you know, different levels of risk, systems that are high risk versus low risk.
And I think, for the most part, you know, using AI in marketing can be low risk. There are a lot of use cases, you know, that are not questionable, that make a lot of sense in terms of productivity, like, again, just idea generation or, you know, processing the ton of customer data that you have been given access to because your customers said, here, I will allow you to have this data.
But the thing is, when you do something like LinkedIn did, or, for example (I don't know if this has ever happened to you; this is not necessarily AI), where, you know, you sign up for a free trial, and then of course it immediately starts to debit your credit card, because you had to put your credit card in to get the free trial.
But of course you forgot, and then you start getting the bills, or you don't even get a bill. You just get the notification later, and you're like, Oh, what is that? Those types of things, I think customers are not prone to accept as much as they did in the past, because customers are savvier. They just have more exposure to technology,
applications and, you know, digital services and artificial intelligence. So I think we really need to remember that customers and consumers are getting savvier, and so we need to treat them, you know, as such, and with respect, especially in terms of anything with a sensitive use: financial, health care, of course, and then, as I was relating to before, that personal data. You know, there are data brokers out there.
So we've got to be, you know, really, really careful. And for brands, the most important thing that they offer to their customers is that trust. And if that is broken, which is so easy to do, it's really, really hard to get it back, if it ever comes back, from a particular customer. So, so many things. I mean, LinkedIn is lucky because I think it's the biggest platform out there for so many things.
I don't think they're going to have as big of a problem as they would if there were other platforms similar to LinkedIn, but still, you know, it's questionable.
(00:18:10) Mike Allton: Yeah, yeah, very. So these are all great examples of what not to do. Let's kind of reframe the conversation for a moment and talk about how you think having that responsible AI approach might actually help marketers, how it might make their campaigns more innovative or more effective.
How do you think it would help them in that respect?
(00:18:32) Sarah Lloyd Favaro: Yeah, I mean, I think it would engender trust. Again, with consumers becoming more savvy, especially the younger consumers, I think they expect that level of, you know, personalization. They want things to be like, you know, Amazon and other touchpoints in their interactions.
But at the same time, I think it's okay, you know, to also explain that artificial intelligence is behind some of these processes. And we've seen, as we talked about before with the EU AI Act, for example, it is even imperative to indicate that a chatbot, let's say for customer service or whatever it may be, is actually using AI.
And some people, you know, are just going to gloss over it and not care, and other people will appreciate that it has been, you know, told up front. So that's one way. And then allowing the person, the consumer, if they want more information, to double-click, let's say, and do a deeper dive to get more information about maybe the how and the why of AI being used
in these particular digital services to actually act behind the scenes. Now, there are some people who don't want that level of detail, and that's totally, you know, acceptable, but there are some people who may really appreciate, even if they don't actually dive deeper, the fact that the company has offered its consumer base the option.
I think it really does engender trust, because it treats consumers with the rights and respect that they deserve. And so that's one way, even as simple as what we saw recently in California; I don't know if you heard about it. You know, they always seem to be a little bit ahead of some of the other states in the United States, but requiring AI digital assets, like those created through DALL-E 3 or, you know, video that's created with AI, to have that watermark, not only visible to any user or viewer who might see it, but also, and I don't know how they do it exactly, in the metadata as well.
And so, of course, there'll be ways to get around that, I'm sure. But even that, I think, people might appreciate, because we kind of already know when people are using AI-created images; they have this look and feel about them. You know, honestly, I don't like seeing them when I feel like someone is trying to pawn them off on me, like, oh, yes, we're such a creative, you know, agency,
and we've done all this, you know, work, when I know these are, you know, AI-generated images. I would much rather see a little watermark and have somebody say that up front to me than, you know, have it passed off as, oh, we did all this work, look at us. But that's me, and not everyone will feel that way.
I think that just goes into, you know, kind of the integrity of even a friendship. Do you want your friend to be up front and open with you, or do you want them to be hiding things? So that's kind of how I look at responsible AI: how would I want my friend to interact with me? And that may be a funny way of looking at it, but it really helps keep me grounded when I work with technology.
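For the watermark-plus-metadata idea Sarah describes (and admits she doesn't know the mechanics of), here is one plausible, minimal way to do both with Pillow. It is a sketch under assumptions, not the California rule or any official disclosure standard: the `ai_disclosure` key, the caption text, and the file name are all made up for illustration.

```python
# Hypothetical sketch: label an AI-generated image with a visible caption
# and a provenance note embedded in the PNG metadata (using Pillow).
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (640, 360), "steelblue")  # stand-in for an AI-generated image

# 1) Visible watermark: a small caption drawn in the corner.
draw = ImageDraw.Draw(img)
draw.text((10, 340), "AI-generated image", fill="white")

# 2) Machine-readable provenance: a text chunk written into the PNG metadata.
meta = PngInfo()
meta.add_text("ai_disclosure", "Created with a generative AI image model")
img.save("campaign_hero.png", pnginfo=meta)

# Anyone downstream can read the disclosure back out of the file:
print(Image.open("campaign_hero.png").text.get("ai_disclosure"))
```

The design point is the same one Sarah makes about friendship: the visible caption tells the casual viewer, and the metadata lets anyone who wants to "double-click" verify the disclosure later.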
(00:22:49) Mike Allton: I think that's so true, because it applies to everything, not just AI, right? If the company makes a mistake, whether it has to do with AI or not, the best course of action is always to be transparent with their customers. Tell them what went wrong. Tell them what you did. Tell them how you're going to fix it.
And that will engender trust. And yeah, some people may still be upset. Some people may still quit. You messed up; that's the price you pay. But you keep so many more people from dropping you, whether that's as a client or a vendor.
Folks, we're talking about responsible use of AI, and we're going to get into some practical steps in a moment.
But before we do, let me remind you about the tool I'm using to leverage paid ChatGPT, Claude, Gemini, and more without having to pay for multiple subscriptions. This episode of AI in Marketing: Unpacked is brought to you by Magai, your gateway to making generative AI incredibly simple and accessible. Wondering how to seamlessly integrate AI into your marketing strategy without getting bogged down by complexities?
That's exactly where Magai shines. It provides user-friendly AI solutions that empower marketers just like you to innovate and elevate your campaigns without needing a degree in science. Imagine having the power to generate creative content, insightful marketing data analysis, or even personalized customer communications, all at the touch of a button. Magai isn't just about providing tools.
It's about transforming your approach to marketing with AI that's tailor-made to be straightforward and effective. So whether you're looking to boost your content creation process or want deeper insights into your marketing performance, Magai makes it all possible with a few clicks. No fuss, no hassle, just results.
Ready to simplify your AI journey? Visit Magai today to learn how their solutions can revolutionize the way you engage with your audience. Don't just market; market smarter with Magai. Tap the link in the show notes. Sarah, we touched on bias just a little bit earlier. I'd love it if you could expand on that and help us understand, you know, what is bias?
How does that come into play? How can marketers ensure that their AI-driven campaigns don't inadvertently perpetuate bias?
(00:25:03) Sarah Lloyd Favaro: Sure. I was just talking to someone the other day, for example. This person is a Latina, and I think she was trying to generate, you know, an AI avatar and, you know, do some work around that. It's really a wonderful technology.
It has so many uses. But of course, when she was doing that generation, she kept getting a likeness of herself, but it always had large gold hoop earrings, and she couldn't get out of this loop. The model was just convinced, you know, because of some bias that maybe existed in the past, that this was the right output to provide to her. That wasn't her likeness; that wasn't defining her. But she had a very hard time getting past it. So that's just, you know, an example of something with, again, some of those cases of gender and, you know, race and other elements. We have a lot of historical data, and if you just think through history, we have evolved in many different ways, but the data that we may be using for predictive AI, to, you know, try to come up with this definition of whatever it is that we've asked the AI system to come up with, may be pulling from data that is not as evolved as we currently are. So I don't know if that has ever happened to you before, even when you've maybe used, you know, DALL-E 3 or something like that, and you've wanted a more representative crowd of people, but you just get one very homogenous, you know, type of image coming out.
There are lots of examples, and I'd love to hear your thoughts on whether that has ever happened to you. But in terms of just being aware of that, I think a lot of us already are; you know, in big companies, we even go through unconscious bias training for our human interactions.
So I think we just have to remember that technology, you know, is a reflection of the humans and the data that we provided. We always want to check what the output is, what the outcome is, and whether it's appropriate in the, you know, context of how we'd like to use it. So, what about you, Mike? Have you ever had a similar situation?
(00:28:06) Mike Allton: Yeah, it was funny, because before the break you were talking about creating images with AI, and I was reminded that whenever I'm creating images exclusively for my blog, I'm creating images where I tell the AI to show a bear in human clothing, wearing a fedora, in a Star Wars setting, doing something, whatever it is.
I think that may be appropriate so that it is a hundred percent obvious to anybody who looks at it: these are AI-generated images. I'm obviously not fooling anybody, and that's very deliberate. But what you just reminded me of also is that recently I was organizing a happy hour for local St. Louis marketers and AI enthusiasts, and I wanted to have, you know, one of my style of images as the header for the event page where people could learn more about it and RSVP. And so I said, you know, give me a bear in human clothing, wearing a fedora, in a Star Wars... no, I said in a speakeasy, talking to other marketers.
And so that was a very basic prompt, and it gave me that. It gave me a bear in a hat and everything. But every single other marketer was an older white guy. Come on.
(00:29:10) Sarah Lloyd Favaro: Okay.
(00:29:12) Mike Allton: I'm not surprised. Come on. You know, so I specifically tell it, you know, I want it to be diverse. I want women; there are going to be more women than men there, no matter what.
So let's show that and reflect that in the image. But to your point, yeah, that's a reflection of our unfortunate history as a society and our history as marketers. Most images were depicting men, and so on. So you need to have that mindset of: this is not okay, and we want to make sure that we fix it.
(00:29:41) Sarah Lloyd Favaro: Right. Yeah, definitely. It's so funny. You know, sometimes you're so hopeful when you put in your prompt, and while it's generating you're like, Oh, it's going to create the most amazing, you know, image or video. And then you realize this is a continuous conversation that I need to have with the AI to really get what's in my mind, you know, to actually be what appears on the screen, because it's not necessarily immediate.
And in a few years, I mean, obviously it's going to improve exponentially, and it is, you know. But for now, we still have to have that, you know, cliché human in the loop and check everything we do, because we don't want to cause damage to a marginalized population that may have, you know, had something happen in the past, when things are different now.
So we just really need to understand the context and understand that AI is a predictive technology, and it's not, you know, going to have the human foresight and insight and ability to really evaluate something based upon the context that you want to show the output in.
(00:31:06) Mike Allton: Yeah. And that’s why conversations like this are so important.
I hope all of you listening are going to take that home and think about, going forward, how can we double-check and make sure that the work we're doing with AI doesn't have embedded biases? And it could be, you know, those kinds of physical features, race, gender, and ethnicity, that sort of thing. But it can also be more, you know, past purchase history.
I mean, if a lot of your purchase data is from more affluent demographics, then all of a sudden the AI is going to take that, you know, even if it's a private AI, and shift your marketing to focus on that demographic, when maybe you shouldn't be doing it that way. So these are all concerns that I hope all of you are thinking about. Sarah, could you walk us through some practical steps
that you think marketers could take, starting today, that would help them implement responsible AI into their current strategies?
(00:32:01) Sarah Lloyd Favaro: Yeah, sure. I think one of the things that I've learned just working in technology and the full life cycle of projects, be they marketing campaigns or educational campaigns or, you know, new cloud projects,
implementations of systems, et cetera, is to really, you know, map everything out and make sure that responsible use, and in this case we're talking about responsible AI, is really baked into the planning, the requirements, and the design of whatever it is you're intending to create. And that could be as simple as, you know, some advertising or marketing copy.
Or it could be, you know, a social media post. It could be, you know, something very simple. But I think just having that responsible AI by design as part of your checklist is really important, because going forward, we know it's going to be a legal requirement in the EU, for example, and, you know, we don't want to do anything unlawful.
So we might as well try to, you know, do the right thing by design, but we also know it's the right thing to do by our customers, and again, I think they really appreciate that. So that's, you know, one thing: in any sort of, you know, project management, or checklists, or roadmaps, let's say, that should be, you know, something that is kept in mind from the beginning and especially should be baked into the design.
So that would be the first thing. And then on the flip end, I would say the testing. You know, I think this happens all the time, because I've had lots of roles in learning and development and training, for example, where, you know, there are instructional designers and people creating this amazing content.
And I can think of, you know, a similarity with marketing content and marketing campaigns. And then, you know, all of a sudden the deadline is upon us, and it's like, Oh, we've got to go to market, we've got to get this delivered right away, we don't have any more time. But then we haven't done the testing to see: is it actually working the way we intended it to?
Should we do a pilot, or a proof of concept, or a focus group, you know? Just like the movies, you know, to see, do they like the ending? Or are they scratching their heads like, what in the world is this? What were they thinking? So that has to be baked into your timeline and your schedule, roadmap, et cetera, and not rushed, because again, it's so much harder to fix the damage once it's done than to be preventative to begin with.
So I would say those are the two things: one at the beginning, and then one at the end, before you release or, you know, go to market. And then just the AI literacy. This is something that I do, you know, personally with everyone I come in contact with; sometimes they're like, okay, enough with the responsible AI stuff.
But, you know, I like to educate and talk to people about this, and so with companies and collaborators and colleagues, this should be something that everyone feels comfortable discussing. And that literacy level needs to be there, because all of our tools these days are either AI-powered or infused with AI.
So we've got to know what we're working with before we actually produce and create outputs with them.
(00:36:05) Mike Allton: That AI literacy point is huge. I know for many of the folks who are listening to this show, this is their first foray into AI. And by the way, if that's you, if you're just getting started on your AI journey, I've got an AI marketing primer that you can download. It's linked in the show notes, and it'll help you understand, you know, what's the difference between ChatGPT and Claude and Gemini and LLMs and all the other language that you need to have some familiarity with.
And then you can just move on into your marketing career. So that's where I want to bring you back in, Sarah. For folks like that, who are new to AI and marketing, what advice do you have? What's one piece of advice you'd give them so that they start off on the right ethical foot?
(00:36:47) Sarah Lloyd Favaro: Yeah. I can just tell you what's been really helpful to me, and there are so many tools out there that you can use to do this.
Obviously, if you're part of a company, hopefully they would provide this opportunity, and if not, if you're, let's say, a sole proprietor or freelancer, you may have to take it upon yourself to be proactive. But actually, you know, even creating your own chatbot with no code, you know, you don't need to learn Python, but if you want to, even better, just doing something, even, you know, with a free trial or some of the tools that are out there, to demystify the AI
behind it and to understand, you know, how it draws upon, you know, content or words or customer data, and how it actually then kind of aggregates all of that and parses through it and then comes out with, you know, responses in a chat. That was really helpful to me, just to understand, Hey, this is not, you know, rocket science.
Yes, maybe the LLMs and the transformers and the models, your frontier models, yes, that may be rocket science. But you don't have to actually know all the underpinnings of the AI; perhaps you would just connect to, you know, a ChatGPT version that is available for your use to understand this.
So I always like to, you know, put my hands on the keyboard and actually do things in action, because again, from my learning and development background, I know that the best way to really, you know, absorb something outside of theory is going to be that application of learning by doing, just like we do when we, you know, learn to ride a bike or we learn to drive. You know, we really get that experience when we get our driver's license, not when we're taking the test, you know, at the DMV to get
the permit. So that's what I would recommend. And you may have to search, but there are so many free, you know, ways to get that experience, even if it's just creating a really basic chatbot or your own GPT. Those are easy ways to get started, get in there, and understand AI and how it could, you know, be used in marketing.
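If you want to try the hands-on route Sarah recommends, a sketch along these lines is about as small as it gets. It assumes the official `openai` Python package (v1 or later) and an `OPENAI_API_KEY` environment variable; the model name and the system prompt are placeholders you would swap for whatever tool and disclosure language you actually use.

```python
# Minimal "build your own chatbot" sketch, per the assumptions noted above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{
    "role": "system",
    "content": "You are a helpful marketing assistant. Disclose that you are an AI if asked.",
}]

print("Type 'quit' to exit.")
while True:
    user_input = input("You: ")
    if user_input.strip().lower() == "quit":
        break
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```

Even a toy like this makes Sarah's point tangible: you can see exactly what goes into the model (the running history list) and exactly what comes back, which is most of what demystifying the AI behind a chatbot really means.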
(00:39:35) Mike Allton: That's some terrific advice. As an NLP practitioner, I really appreciate starting with the why for the kinesthetic learners in the audience, and then really getting into the details of how you're going to do it, what it is, and what it's going to look like, so that we can really wrap our minds around it.
Because, as I've said before, AI is different from some of the other things that we would have used as marketers. If I show you Canva and I say, this is to create images, you know before you even get into Canva what that experience is probably going to be like. But AI is not like that. It's an underlying technology.
We need those use cases; we need that hands-on experience to really perceive that value. So I'll have some links in the show notes for those of you listening, to give you some sense or some examples of what you can do. But Sarah, you've been absolutely amazing. For folks who want to connect with you and learn more,
where can they go?
(00:40:21) Sarah Lloyd Favaro: Oh, LinkedIn is probably the best way. I'm Sarah Lloyd Favaro. I also have my Learn Bold responsible AI company page on LinkedIn. So either way, I would love, you know, to chat with you further. And Mike, this has been just absolutely a pleasure, sitting here and talking about AI and responsible AI, my favorite topic.
So I really appreciate the opportunity.
(00:40:54) Mike Allton: Thank you, Sarah. And thank you, all of you, for listening. Like I mentioned a moment ago, if you're new to AI, I'll have the AI marketing primer linked for you in the show notes; that will help you get started and help you understand where to go from here. But for the rest of you, please, if you haven't already, find the AI in Marketing: Unpacked podcast on Apple and drop me a review. I'd love to know what you think. Until next time.
Thanks for joining us on AI in Marketing: Unpacked. I hope today's episode has inspired you and given you actionable insights to integrate AI into your marketing strategies. If you enjoyed the show, please subscribe on your favorite podcast platform and consider leaving a review. We'd love to hear your thoughts and answer any questions you might have.
Don't forget to join us next time as we continue to simplify AI and help you make a real impact in your marketing efforts. Until then, keep innovating and see just how far AI can take your marketing. Thank you for listening and have a fantastic day.