Episode Transcript
[00:00:00] Speaker A: Hosted by the dynamic duo Ifat Chowdhury and Neil Fluster, we're bringing you real conversations with real people tackling the topics that matter most, from tech trends and challenges to the stories shaping our AV community. Powered by Wildwood. Plus, this is not your typical studio sit down. This is AV: unfiltered, unexpected and unapologetically wild.
[00:00:34] Speaker C: Hi, everybody and welcome to Infocom 2025. We are live at the Midwich stand and in their amazing Avion Air studio.
And so huge thanks to the Midwich group for giving us this space.
And I'm joined today by Neil Fluesta.
[00:00:54] Speaker D: Thanks. Yeah, I'm really excited. This is the first podcast for us in our new series.
[00:00:59] Speaker C: It is our first. Yep. There'll be a lot more coming from us.
[00:01:03] Speaker D: So we're starting with a bang though. We've got a really good guest today.
[00:01:09] Speaker C: Exactly.
[00:01:10] Speaker D: Let's pick it up. Do you want to do the introduction?
[00:01:12] Speaker C: No, I'm going to ask Noelle to introduce herself, of course. But yeah, there's a lot more coming from us, and I don't think we could have started in a better way, really. Noelle led the Women's Council breakfast this morning with an amazing keynote, and we are so privileged to have her spend some time with us. So, Noelle, welcome.
[00:01:37] Speaker B: Yay.
[00:01:37] Speaker C: Thank you.
[00:01:38] Speaker B: Super exciting.
[00:01:40] Speaker C: I'm going to get you to introduce yourself, because you are phenomenal, by the way. And there is a link to that, we'll share it as the conversation carries on. But yeah, you absolutely are phenomenal.
[00:01:52] Speaker A: Yes.
[00:01:52] Speaker B: That's too funny. Yes, I have a synthetic version of myself that was doing some introductions earlier today. But yeah, my name is Noelle, of course, and I got a chance to sit in front of some amazing women.
The session was really about the art of the possible with artificial intelligence. And I own a company called the AI Leadership Institute, and I'm really passionate about just helping people realize that this isn't a time to be worried about AI, though there is reason for cautious optimism. It is a time to be using it. It is a time to be putting your fingers on a keyboard and becoming a doer instead of a talker. It is a critical time, because within the next three to five years, you won't be able to look one way or another without it.
[00:02:35] Speaker C: And that's the key, isn't it? I think somebody asked during the breakfast today, how do we overcome the fear of it? And actually it took me back to the classroom, because I am of the age that I was in secondary school when we first got computers. And I remember the fear that our teachers had around even being able to teach us how to use computers. And I think there's still a generational thing of, you know, I don't want to touch this because I might break it.
Obviously devices are a lot easier to access now, but how do we overcome that fear?
[00:03:13] Speaker B: Yeah, the one thing that I've learned and granted, my journey started at Amazon Alexa about 12 years ago.
I got a chance to be an early member of that team, and what I learned in those first few months was the sheer value in just building, just starting to use the technology. Now, granted, I am a tinkerer at heart. If there's a button, I will press it, just because it's there and it's red, we should press it.
But what I found out is that as I started making Alexa available to people, they were worried about talking to it. They were worried about, like, what do I say? How do I say it? Who is this going to replace? Is it going to replace podcasters? It turns out that all of our fears of AI actually just distract us from the really important things that we can do with it.
So from Amazon Alexa, I ended up at Microsoft, helping launch some of Microsoft's biggest technology today, known as Azure AI. It's actually a productized version of ChatGPT.
But the most important skill I learned is that these AI systems, they don't really work well unless you tell them what to do.
[00:04:23] Speaker C: Absolutely.
[00:04:23] Speaker B: And then they also don't work well unless you continually tell them what to do. So I think there's a bit of a misunderstanding that somehow AI is happening to me, that it's some big AI in the sky, you know, Skynet, when actually that AI is going to be an extension of you. It's going to be a relationship that you have. And unless you start talking to it right away... It's like public speaking: you're going to be awful at it until you get awesome at it. But you can't get to awesome unless you go through awful. So just start being awful with AI so you can get better.
[00:04:55] Speaker D: It's like a muscle, isn't it? You go to the gym and you need to, you know, work out that muscle, exercise it, and then you get better, you lift more weights, you can run further or whatever. It's the same with a lot of technology. And social media, making videos, working with AI, you've just got to start, just push that button. The first one will be rubbish, the second one will be rubbish. But then as you progress and as you learn, you'll then start to work out how to do the prompting, how to do the engineering and then to get better results back from it, I guess.
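For listeners who want to see what "just keep telling it what to do" looks like in practice, here is a minimal sketch of that iterative back-and-forth, assuming the OpenAI Python client; the model name and prompts are placeholders, and any chat model would do.

```python
# A minimal sketch of "keep telling it what to do": send an instruction, read
# the reply, refine the instruction, and send it again. Assumes the OpenAI
# Python client (openai>=1.0) and an OPENAI_API_KEY in the environment; the
# model name is just a placeholder.
from openai import OpenAI

client = OpenAI()

history = [{"role": "system",
            "content": "You are a concise assistant for an AV integrator."}]

def ask(prompt: str) -> str:
    """Append the user's instruction to the running conversation and return the reply."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# The first attempt will probably be "awful", so refine it, like exercising a muscle.
print(ask("Draft a two-sentence summary of today's client meeting notes."))
print(ask("Too formal. Rewrite it in plain language and add one action item."))
```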
[00:05:21] Speaker B: Yes, exactly. Like doing first, you know, being willing to just ask a question. I think the good news about this technology: ChatGPT, for example, has 300 million weekly users. So even my mother is using these types of services, which is crazy, because she texts me and she's like, look at this cool thing. I'm like, yeah, yeah, I helped build that cool thing. It's fine though.
But yeah, that's me. Anyway, she didn't see it that way. But it's so ubiquitous.
[00:05:48] Speaker C: Yes.
[00:05:48] Speaker B: And so that's the good news, is that it really is very easy. There is no technology barrier. You don't have to become an engineer or an architect or a developer to use it. But the danger in that is that it also is going to do exactly what you ask of it and try to please you. I think we were talking about this in the breakfast earlier. Its whole job is to get you to say... you ask it a question, it's going to generate a response, and its hope is that you just say, well, that's not terrible, I kind of like it. What a nice answer. It's not going to be truthful, it's not going to represent humanity, it's not going to care about fairness or trust. It's just going to generate an answer that makes you go, that's not half bad. And that, I think, is a little bit dangerous, because that not-half-bad answer is going to become someone else's, well, that must be the answer.
[00:06:33] Speaker C: And that's the problem with the Internet anyway, right?
[00:06:36] Speaker B: Yes, exactly.
[00:06:36] Speaker C: Like, if someone's seen something or read something, they believe it. Yeah, there's got to be some kind of filter.
I mean there's a lot of AI chat in our industry.
[00:06:46] Speaker A: Yes.
[00:06:47] Speaker D: And there are so many companies. Again, it's a sales world, everyone wants to sell something, and there's a bandwagon, which is AI. Everyone wants to jump on it, and all they've done is take their product and put those two letters after it.
And how do you differentiate between real intelligence and just clever coding? Because again, there's some clever stuff that these cameras do, that can track us and film us and all that kind of stuff. But actually making decisions: how do we learn and work out the wheat from the chaff, what is real AI and what's just clever coding?
[00:07:18] Speaker B: I think probably the most important thing is tied to the number one reason why AI projects typically fail.
So I've been working with the biggest brands, but I've also been helping medium-sized companies leverage this technology.
And the number one reason why production AI fails is because they lack a very specific measurement for success. Like, what is this model supposed to help me with? How is it going to do that, and when do I know that it's working? Most people never think that far. That's why, and I don't know if you can see it, but there's this little baby tiger. I'll grab him. Yeah, we'll grab him.
[00:07:52] Speaker D: Here it is.
[00:07:52] Speaker B: Like, he's so cute.
[00:07:53] Speaker C: And I have my own version, right.
[00:07:55] Speaker D: I feel left out. I don't have one, and I didn't go to the breakfast. So for me and all the other viewers and listeners, can you bring us into the club? Because I feel left out.
[00:08:06] Speaker B: So now you know, come to the breakfasts, they're really important. But this concept of a baby tiger basically represents the first time you start using an AI system. It's super cute and fluffy and adorable. It's novel. Everyone is excited about it. But what you'll notice is nobody's really asking the hard questions. Like, you know, look at its paws, how big is it going to grow up to be? What is it going to eat? It's got razor sharp teeth, right? Yeah. What is it going to do? Where is it going to live? How long is it going to live? What happens when you don't want it anymore? Where are you going to put it? No one's asking these questions early on. When you first get a model, you're just like, it's so cute and I love it and I can't wait to use it. So all these companies, they're selling baby tigers. None of these companies are saying, by the way, and I said it in the breakfast this morning: baby tigers become big tigers, and big tigers kill people.
[00:08:58] Speaker D: Yeah, yeah. That's the fear of AI, I guess. Yeah.
[00:09:02] Speaker B: But the difference between a baby tiger, a model, growing up and hurting someone and one that doesn't is human input. We call it human in the loop. Having humans that say, here's what good looks like, here's what a well-behaved tiger looks like. And then having humans that are watching it as it goes. I have six kids, so I always refer to this: it's not like I'm gonna give my kids the rules of life at the age of five and then push them out into the world and be like, hope it works out, right? I mean, many of us stay connected to our kids indefinitely, and actually when they leave, we wish they wouldn't. So I think this is part of a misunderstanding that's happening in AI, that you are going to get this cute tiger, it's going to do cute things, and that's the extent, when actually it's only the beginning. And your responsibility is pretty intense once that adoption occurs.
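As a rough illustration of human in the loop, here is a small sketch in which the model only proposes and a person approves or corrects before anything ships; the function and class names are hypothetical, not taken from any product.

```python
# Illustrative human-in-the-loop gate: nothing the model drafts goes out until
# a person approves it, and corrections are logged as examples of "what good
# looks like". All names here are hypothetical, not from any product.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class ReviewLog:
    approved: List[str] = field(default_factory=list)
    corrected: List[Tuple[str, str]] = field(default_factory=list)  # (draft, fixed)

def human_in_the_loop(generate: Callable[[str], str], task: str, log: ReviewLog) -> str:
    draft = generate(task)
    print(f"MODEL DRAFT:\n{draft}\n")
    if input("Approve as-is? [y/N] ").strip().lower() == "y":
        log.approved.append(draft)
        return draft
    fixed = input("Type the corrected version: ")
    log.corrected.append((draft, fixed))  # keep the pair for later review or retraining
    return fixed

if __name__ == "__main__":
    log = ReviewLog()
    # 'generate' could wrap any model call; here it is a stand-in.
    final = human_in_the_loop(lambda t: f"[draft answer to: {t}]",
                              "Summarise this customer complaint", log)
    print("SENT:", final)
```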
[00:09:52] Speaker D: Okay, I need to adopt a tiger.
[00:09:55] Speaker C: As long as you know what's going to happen to it.
[00:09:57] Speaker D: Yeah, absolutely. I think a lot of the thing with AI is that people are kind of scratching the surface of it and really not using it to its full potential. They're just being lazy. Write this document for me, you know, transcribe this for me. Do this, do this. It's a do engine.
[00:10:10] Speaker B: Yes.
[00:10:10] Speaker D: Rather than actually using it to get that intelligence and understand how it can help them in their life, rather than just go, okay, take that boring, mundane task away from me and make it simple.
[00:10:21] Speaker B: Yeah.
[00:10:22] Speaker D: So that's something I think we all need to look at. And again, I guess that comes from the.
[00:10:27] Speaker B: Yeah. And I think it does start there, right. Like, two years ago we were using a model that was brand new. It did start with, hey, the biggest, coolest thing you can do is just transcribe your meeting notes and not have to do that. I think you're right, though; the intuition that you have is that the future is not that. And it's actually a frustration I had with Alexa in the early days. We built a voice-based user interface.
[00:10:48] Speaker D: It was a do engine. Alexa, do this; Alexa, do that. Everyone's Alexas have just gone crazy now, by the way. Oopsie.
[00:10:54] Speaker B: Set alarm for 7am, yes. But that is the beginning, and that was baby tiger mode for Alexa: we would ask it to do very specific things, but we actually could ask it to do anything. We could have said, go into Salesforce and tell me what my sales are this month. Go in and tell me who the best worker is this month. Tell me about, you know, go do research for me. But everyone got stuck in just ask it for the weather, ask it to play music, these very small use cases.
[00:11:23] Speaker C: Tell me a joke, give me a dog fact.
[00:11:26] Speaker B: Right. Like, so I think the burden is actually on us as users to be more creative in our approach to using this technology because it can now do anything. Anything that has code can now be invoked with words that we type in. So now the burden is on us to say, okay, if I could ask a bunch of systems to talk to each other with just my words, what would be the perfect way that those systems would communicate? So many of us are stuck in kind of how we grew up in the last 20 years. All the band aids we put on old tech to make it work. And we actually could like completely start over and build it brand new faster than it would take for us to fix what's already broken.
[00:12:04] Speaker C: To your point about, you know, knowing where the tiger's going: I think, and this is an observation, I'm not being critical of anyone's business, it's just a general feeling, that most senior decision makers in businesses don't actually know what they want the AI to do. They don't even know how their workflows could be improved, where they could actually just work better and smarter. They just don't know. So if there was somebody with a small business...
[00:12:40] Speaker B: Yes.
[00:12:41] Speaker C: How do they go about finding out...
[00:12:44] Speaker B: Yes.
[00:12:45] Speaker C: What is available for them?
[00:12:47] Speaker B: Yeah. So this is one of the reasons I created a company called the AI Leadership Institute. Right. Because in the future, whether you hire a bunch of humans or not, you're definitely going to be leading machines.
[00:12:57] Speaker C: Yes.
[00:12:57] Speaker B: And you're going to need to know what your core values are, and how the way that you communicate as a leader translates to AI machine behavior. And so the first thing I encourage organizations to do is what we call an AI impact assessment. It's not about AI at all. It's about monitoring human work.
[00:13:13] Speaker C: Yes.
[00:13:14] Speaker B: Looking at humans. And it's something that we've been doing in the systems thinking and process world, Lean Six Sigma. This is not a new idea, but it actually is critical to AI system success, because you're going to want to start by looking at humans and realizing, wow, there's some friction here. Anywhere there's friction, you can put an AI system to remove that friction. It's very tactical. But what it does is it builds trust with the humans. Because when you take friction away from a human's job, they're going to be like, well, this is not terrible. Yeah, I actually enjoy a world where...
[00:13:42] Speaker C: I still have a job.
[00:13:43] Speaker D: Yeah.
[00:13:44] Speaker B: And I don't have to talk to that system that's from 1982 anymore. Like, I love it. I never have to do another PowerPoint deck, because I can use AI for that. I don't have to build a pivot table for an Excel spreadsheet. There's human pain happening every day in the mundane tasks. So that's how you start as a business: just look at what you're doing today, watch it, and build AI systems at any point of friction, and start with just one. So one thing that my company ended up doing almost by accident is we went into a very huge friction point for most organizations: sales.
Alright. So a lot of people are good at selling because they care about their product. But most of them are really bad at getting strangers to become customers. And AI is very good at this. AI can learn who you are, learn your brand, learn your brand voice. It can help you amplify your message to the world, social media. It can help edit content like this to get it out faster. All of these things that have friction points. Like, I want to do a podcast. Wait a second, I will need a producer. It's going to take me three days to edit. And then how do I distribute it? And then I have to cut it up. There's tech for this now, right? And so that tech can't run itself. All the AI systems that will help you build a successful podcast, it's not autonomous. It requires someone like you with a vision of the podcast. You want to use your words and your vision to articulate what that looks like. So for me, we use what's called DBA; we do database activation. We go into cold leads, we take people's leads that they already have, and we reactivate them with an AI sequence. And to your point on intelligence, I'm not just randomly, autonomously sending them canned messages.
[00:15:24] Speaker D: Right.
[00:15:25] Speaker B: I'm asking them questions.
[00:15:27] Speaker C: Yeah.
[00:15:27] Speaker B: In the chatbot, like, hey, I noticed you liked my post. What did you like about it? Then I'm using what they said to actually continue the conversation. But I'm not; my AI system is.
[00:15:36] Speaker C: Yes.
[00:15:37] Speaker B: So it's listening and reacting in real time. Meanwhile, the system they're talking to does not say, I'm Sally and I'm pretending to be human. It discloses, I'm Noelle's... nearly Noelle.
[00:15:48] Speaker D: I'm a virtual machine.
[00:15:51] Speaker B: But it doesn't matter, because it still generates the same impact. People are still delighted at the fact that I'm listening. Whether it's virtual me or real me, the act of me listening, hearing them and responding with intention changes everything.
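A rough sketch of the pattern Noelle describes here, under the assumption of an OpenAI-style chat API: disclose that it is an AI, quote what the lead actually said, and ask a follow-up question instead of sending a canned blast. It is an illustration of the idea, not her actual system.

```python
# Rough sketch of the "database activation" pattern described above: the bot
# discloses it is an AI, quotes what the lead actually said, and asks one
# follow-up question. The prompt wording and model call are assumptions, not
# the guest's actual system.
from openai import OpenAI

client = OpenAI()

def reactivation_reply(lead_name: str, lead_message: str, brand_voice: str) -> str:
    prompt = (
        f"You are an AI assistant replying on behalf of {brand_voice}. "
        "Start by disclosing that you are an AI assistant, not a human. "
        f'The lead, {lead_name}, just wrote: "{lead_message}". '
        "Acknowledge what they said in their own words, then ask one short, "
        "genuine follow-up question. Do not pitch anything yet."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: continuing a conversation that started with a post reaction.
print(reactivation_reply("Sam", "I liked your post about AI in control rooms",
                         "the AI Leadership Institute"))
```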
[00:16:04] Speaker C: And that intention, what I'm hearing from you, is that it's really personalizing that message, isn't it? It's really pulling that out, and it makes that person feel that they've...
[00:16:15] Speaker B: That they've been heard.
[00:16:16] Speaker C: That's the thing that makes us put our hand in our pocket and get the wallet out.
[00:16:19] Speaker D: Yeah, yeah.
[00:16:20] Speaker B: 100%.
[00:16:21] Speaker C: We want to feel that it's personal to me.
[00:16:25] Speaker D: Yeah, absolutely. Now, you mentioned different systems there. I'm not asking you to solve my biggest bugbear here, but it's my biggest question.
I'm sure you will, but it's just, you know, they don't talk to each other, and of course everyone wants to ring-fence their own tiger. So, you know, I'm an Alexa user. Okay, I tried the Google one, I'm sorry, I just went back to Alexa. But it only talks to Amazon's storefront. It doesn't talk to my YouTube Music subscription, because Google and Amazon hate each other.
So how do we get these systems, as a user and a consumer, how do I get it to say, look at my calendar, which might be in Microsoft Outlook, or look at my music, which might be in Spotify, look at my shopping, which might be wherever it might be? Because everyone's going, we want the data, we want it for ourselves, we don't want to share. How do we get to open AI? How do we get to an open, cross-platform solution?
[00:17:21] Speaker B: That's such a great question, because that's the world we're in right now. As a matter of fact, I was just listed as the number one thought leader in agentic AI. And that's what it is, it's agency. Sure, if you want to create a silo for all of your capability, Amazon, Microsoft, Google, go ahead. But I now need what we call a control plane, an area where I can pull all these things together and orchestrate them together. And so this is known as agents building agents, right. Every one of these systems, Alexa, Google, any of these, is an agent. But also your CRM becomes an agent, your content producing tools, if you're using Opus video clips or HeyGen, which is my synthetic avatar generator, these all become agents that do work. But you still need an orchestration layer. And so agentic AI is the same concept of building orchestration that's outside of the vendor itself. Today, because we're at the beginning of this, it's a technical solution. So you would come to someone like me and I would assemble the orchestration layer. It's all through natural language. I would just say, hey, I want you to talk to you, and you to talk to you.
[00:18:27] Speaker D: You're friends, right? Yeah.
[00:18:28] Speaker B: You guys are friends and you can share data in this way. It's a technical implementation. However, all of this started technical.
In six months, a year, you're going to see AI platforms that do this for you, and actually there's one called n8n, like Noelle, n8n. And you can go in there and do what you said. One of the biggest challenges I have is email, calendar, what's on event calendars, social media. And so it allows me to have a natural language interface to all of those things. Like, hey, I'm looking at my calendar, it looks like I'm open this time, but I think I have a podcast. Can you check my podcast schedule? Those are completely separate things. I don't even own my podcast schedule, right? It's on somebody else's platform. But I can just use my words, and because the platform is connected to those systems, it can facilitate figuring out the answer and providing it to me. So I think we are there technically. Now it's going to be a matter of patience on the side of the user, to say, am I willing to do the work to connect these systems together?
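A toy sketch of that control plane idea: separate silos such as a calendar, a podcast schedule and a CRM are wrapped as agents, and one orchestration layer decides which of them a plain-language request should reach. This is a simplified illustration of the pattern, not how n8n or any vendor actually implements it.

```python
# Toy "control plane": each silo (calendar, podcast schedule, CRM) is wrapped
# as an agent, and one orchestrator routes a plain-language request to the
# right ones. Keyword routing stands in for the LLM-based routing a real
# agentic platform would use, and every agent here is a fake.
from typing import Callable, Dict, List

AGENTS: Dict[str, Callable[[str], str]] = {}

def agent(name: str):
    """Register a callable as a named agent in the control plane."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        AGENTS[name] = fn
        return fn
    return register

@agent("calendar")
def calendar_agent(request: str) -> str:
    return "Calendar: Tuesday 2-4pm looks open."

@agent("podcast")
def podcast_agent(request: str) -> str:
    return "Podcast schedule: recording already booked for Tuesday 3pm."

@agent("crm")
def crm_agent(request: str) -> str:
    return "CRM: three leads replied this week."

ROUTES = {"calendar": ["calendar", "free", "open"],
          "podcast": ["podcast", "episode", "recording"],
          "crm": ["lead", "customer", "sales"]}

def orchestrate(request: str) -> List[str]:
    """Send the request to every agent whose keywords it mentions."""
    text = request.lower()
    hits = [name for name, words in ROUTES.items() if any(w in text for w in words)]
    return [AGENTS[name](request) for name in hits] or ["No agent matched."]

print(orchestrate("Am I open on Tuesday, and do I have a podcast recording then?"))
```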
[00:19:25] Speaker C: Okay, that's a message for you. You need to be more patient.
[00:19:29] Speaker B: Patience is a virtue.
[00:19:31] Speaker D: I have ADHD, so I'm not patient at all. I'm like, you know, I want it now, now, now.
[00:19:36] Speaker C: Yeah, that's a really good point. So my little podcast, which is called What IF?,
as a play on my name, really looks at diversity issues, and not just race and gender, which are the two that get the most exposure. I look at things like disability, and, to go back to where you started, well, I'll let you explain your story. But also, I mean, there's a lot of neurodiversity in the audiovisual industry. It's, well, everywhere.
[00:20:10] Speaker B: Yeah.
[00:20:11] Speaker C: And just seeing something that can help people and, you know, create that level playing field for people, but finding the right things for them.
So I've got two questions for you. One, how you started. Obviously you shared that at the breakfast, but for people who weren't there. And secondly, how should organizations consider AI when they have a diverse workforce, a workforce that will have ADHD or somebody with any other issue?
[00:20:43] Speaker B: Gosh, yes. So I got started in the world of AI... I guess the seed was planted a long time ago by my dad. He's a science fiction guy, so he introduced me to talking robots in books very, very early on, which is ironic. Who would have thought I would then get a chance to be on Alexa, where we're building a talking robot? Like, 25, maybe 30 years' difference between those two things. But of course, in between learning about robots as a kid and Alexa, I ended up having my first child, my first of six. His name's Max and he has Down syndrome. And so it really did shift my attention to, I only want to work with companies and products that are pushing the edge and making the world more accessible.
And so I climbed the ladder. I was at IBM and at VMware, but I ended up at Amazon with this opportunity to work on Alexa. And I realized when I got there, I was the only woman on the team. The only woman, the only Latina. That's not surprising, I'm still the only woman and Latina on most teams I'm on anyway. But what was more surprising is I was the only mom on the team. I was the only caregiver on the team, taking care of my dad, he lives with me now. I was the only cat owner on the team, right? So, divergent thinking, right? I'm pushing the edge of what they were even capable of thinking. I'm ADHD. They could barely handle all the things that I would say and how I would react and the things I wanted to do. My team was just like, we don't even know what you're doing.
But the world now with AI is changing so much. I will give you an example of a company you might know: Goodwill, where you drop off your stuff. I love them because they actually do serve.
They employ people with Down syndrome. That's part of their ethos. And last year they adopted Microsoft AI technology, and they created some computer vision tooling. Because what would happen is they had people with cognitive disabilities, and people would drop off stuff, that person would have to take out the item, look at it with their own eyes, assess it, categorize it, price it, put it on a shelf. There was a lot in there that relied on contextual awareness, things that would challenge even humans without a cognitive disability, but for them specifically it made it very hard to do their job. It didn't matter, though, because Goodwill was committed to that. Doesn't matter how long it takes, you do your thing.
AI comes along, and to your point on amplifying neurodiversity, they looked at the capabilities of those who are working for them. They created an AI system that would intentionally amplify those people, right? So now this person has a phone and they can take a picture of the item. That still needs to happen. But they take a picture of the item, and now computer vision is going to say, oh, this is a sweater. Oh, this label means something. This is Ralph Lauren. This is, you know, a Target brand. Now the pricing is going to change. All of this changes.
But it's only with intentionality that the leadership of that company was like, we're not going to just lay you off and change our core values because you're not going to be as fast as someone else. We're going to build AI that makes you individually better and faster. And I think, though, that is an extreme situation, where they are serving the disability community.
It speaks to every single organization, right? Every employee deserves a leader who's willing to look at their unique skills. To your point on, what are your unique skills in this organization, and how do I give you AI systems to make you, individual Noelle, better at your job?
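To make the Goodwill example concrete, here is a schematic of that photo-to-price flow: a picture goes to a vision model, which returns a category and a brand, and a price follows from a simple lookup. The vision call is stubbed out, and none of this reflects the real deployment.

```python
# Schematic of the photo-to-price flow described above: photo -> vision model
# -> category and brand -> suggested price. The vision step is stubbed out;
# this does not reflect Goodwill's or Microsoft's actual implementation.
from dataclasses import dataclass

@dataclass
class ItemReading:
    category: str
    brand: str

def read_item_photo(photo_path: str) -> ItemReading:
    """Stand-in for a computer vision call that reads the item and its label."""
    # A real system would send the image to a vision model here.
    return ItemReading(category="sweater", brand="Ralph Lauren")

BASE_PRICES = {"sweater": 6.00, "jeans": 8.00}   # base price by category
BRAND_MULTIPLIER = {"Ralph Lauren": 2.5}         # premium labels priced higher

def suggest_price(photo_path: str) -> str:
    item = read_item_photo(photo_path)
    price = BASE_PRICES.get(item.category, 4.00) * BRAND_MULTIPLIER.get(item.brand, 1.0)
    return f"{item.brand} {item.category}: suggest ${price:.2f}"

print(suggest_price("donation_photo.jpg"))  # the worker just takes the photo
```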
[00:24:16] Speaker C: Yeah, that's at the heart of why I started those conversations. Because for me, there are people dealing with so many things before they get to their desk at 9am. What makes their life easier while they're with you, so that they can be their best when they're working with you?
[00:24:33] Speaker D: And they've all got different challenges, haven't they? Again, you talked about your perspective of where, you know, you were coming into the organization as the only woman, the only cat owner and all the rest of it. But you're bringing a whole different perspective that a bunch of pale, stale males are just going to be sitting there, you know, rinsing and repeating the same thing over and over again. It's interesting, on the diversity, it just came to me: are we going to get diverse AIs? I mean, because again, if we're just creating the same tigers, are we going to get diversity in AI? Are we going to train it in that way?
[00:25:05] Speaker B: So, yeah, only intentionally. There's a concept, once you start building these systems; in cybersecurity we call it red teaming, which is a super fun role of adversarially just hacking against a machine to make sure it's protected. But in AI, some of these challenges are not just adversarial, they're actually benign. Like, what happens if I build a system and everyone who built it looks the same, and I give it to a community that doesn't look like the people who built it?
All of a sudden they're gonna ask questions that I never asked, that in testing I never even thought to ask. And if you don't have that, I call it a symphony of talent. If you don't have this symphony of talent building these AI systems, you're gonna find that as soon as you launch it, it's gonna fail people, and the very people you're trying to help, actually, which is quite ironic. Because you're gonna be like, wait, oh, that's how you say that. Oh, this word means something to your community. I just deteriorated my credibility and trust with you because I called your elders, you know, a word that I didn't, you know... all of these things. AI can absolutely help, but only if there are humans that are like, this isn't moving in the right direction. How do we tame the tiger?
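In practice, the red teaming she mentions often starts as something this simple: a list of prompts contributed by people unlike the builders, run against the model, with anything flagged sent to a human reviewer. A minimal sketch with a fake model call and made-up flag terms.

```python
# Minimal red-teaming sketch: run prompts contributed by a diverse group of
# testers against the model and flag any response containing terms the team
# has marked as problematic, so a human reviews it before launch. The model
# call and the term list are stand-ins.
from typing import Callable, Dict, List

def fake_model(prompt: str) -> str:
    return f"(model response to: {prompt})"

def red_team(model: Callable[[str], str], prompts: List[str],
             flag_terms: List[str]) -> List[Dict[str, object]]:
    findings = []
    for prompt in prompts:
        response = model(prompt)
        hits = [t for t in flag_terms if t.lower() in response.lower()]
        if hits:
            findings.append({"prompt": prompt, "response": response, "flagged": hits})
    return findings  # anything here goes to a human reviewer, not straight to production

# The prompts ideally come from the "symphony of talent": people unlike the builders.
community_prompts = [
    "How should I address the elders in my community?",
    "Describe a typical engineer.",
]
print(red_team(fake_model, community_prompts, flag_terms=["stereotype", "slur"]))
```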
[00:26:10] Speaker C: And we have that. I mean, obviously you can tell from our accents, we're from the UK, we're...
[00:26:14] Speaker D: Not from around these parts.
[00:26:15] Speaker C: So things that we will say, and, you know, even out last night, there was something that means something very different to someone else. So how do we get around that? Because if you've got one AI product, how do we localize that?
[00:26:32] Speaker B: Yes.
[00:26:32] Speaker C: So across the globe, you know, with languages, dialects.
[00:26:35] Speaker B: Yes. So you will have to build these teams. Microsoft, Amazon, Google, they've documented and published their AI red teaming practices. And they're less about tech and more about humanity. How do you infuse humanity? And I always say it's like DEI becomes RAI: responsible AI, building for everyone. And the only thing that's required to build for more people is to involve more people in the process. In accessibility, we say never build for someone without them, right? My son has Down syndrome, and he's like, Mom, you can't build tech for me unless I tell you what I need. But as technologists, we're quick to be like, I got you. I can build something for you. I have an app.
[00:27:14] Speaker D: I can fix this.
[00:27:14] Speaker B: Yeah, I can fix this. And then we build it. And they're like, I don't even... That's the one thing I wanted to keep, that's the one thing I wanted to do. If you'd just asked me, I would have told you. And that is, I think, one of the biggest challenges in AI: it requires humans to become more human, to listen to people, to say, what can I build for you that is...
[00:27:34] Speaker C: Actually helpful to you and not to second guess.
[00:27:36] Speaker B: Yes.
[00:27:37] Speaker C: Yeah, yeah.
[00:27:37] Speaker B: But it will require that diversity, that symphony of talent.
[00:27:40] Speaker D: What's been the biggest help for you that AI has made in your life?
[00:27:44] Speaker B: Oh, my goodness.
[00:27:45] Speaker D: I mean, I'd have to say maybe with your son, but...
[00:27:47] Speaker B: Yeah, yeah, exactly. I would probably say... I mean, our house is an AI house, right? So we started off with a lot of Alexas, but that has now expanded to every TV having AI in it.
We have over 100 voice-enabled devices in our house.
Everything from microwaves that won't burn food to washing machines that auto-dispense based on how dirty it feels. I mean, crazy things, but it increases independence, not just for my son. I also am a caregiver to my dad, and we have an aging population that's looking to stay independent. And so I think AI unlocks a couple of things, right? It unlocks the ability for us to increase the independence of people, either with cognitive delays or disabilities, or an aging community that's desperate to just keep living as if they were 40, in their 80s. And now I think with this technology we can do that. My dad will not ever be able to live independently, because he has a traumatic brain injury, but he doesn't have to come to me every five minutes and ask me a question. He can just ask Alexa, and he can cook and he can clean, and he enjoys his life much more.
[00:28:56] Speaker D: They did say the biggest disease now is loneliness. I think the fact that you can just sit there and chat to Alexa or Gemini or whatever other platform, you know, that that is a great thing for the aging population.
[00:29:06] Speaker C: I do it in my car. I was sharing with Noelle earlier, I wanted to know about a particular company and I asked my car to talk to me about that company. And I love it. What's their turnover? Where are their offices? You know, what's their best product? What, you know?
[00:29:24] Speaker B: Yeah, I recently did a TED Talk. And, you know, with a TED Talk, I try to be concise in my words anyway, but you have to be super concise, like less than 10 minutes. And I was like, look, I talk about this stuff all the time. And so I just started having a conversation with my ChatGPT, which is custom, right? It's not the open one, it's a private one.
[00:29:45] Speaker D: Right.
[00:29:46] Speaker B: And it knows me. It knows how I speak. It knows my important things. It knows memories about what I've given it in the past. And I just asked it, help me take what I normally say, the stories I typically tell, and condense them so I can use them in a TED Talk. And so I would say a story, it would repeat it back to me, but it sounded like me, because I trained it to sound like me. And I continuously do that. But it was awesome, because I did all of that while on a train on my way to a location. So it allows me, Tony Robbins uses the term no extra time. I'm a mom of six, I'm an executive, I have no extra time. But I do have time in a car. I do have time where I can put in my earbuds. I do have time when I wash dishes or do laundry. Now I can use that time to talk to an AI instead of just listening to an audiobook. I can actually have a conversation with the author in an avatar format.
[00:30:37] Speaker A: Right.
[00:30:37] Speaker B: I can ask the author questions and get answers. It's just such an interesting time.
[00:30:41] Speaker C: So I've seen friends, actually, from the industry commenting on each other's posts and, you know, asking a question.
When you're using AI, do you thank the AI? And I do, yeah.
[00:30:57] Speaker D: The machines are going to take over. They'll know the ones that thank them in the end.
[00:31:00] Speaker C: You know, whether it's Siri, or whether I'm in the car and it's Mercedes, I will say thank you.
[00:31:06] Speaker B: Yes.
[00:31:06] Speaker D: And why should you be polite to it?
[00:31:09] Speaker B: There is research behind this. Okay, so first, at Amazon Alexa, we got in trouble with the press, because when you would talk to Alexa, if you said thank you or please, it would fail, because we had not programmatically told the model to accept those words as part of a request. So there was a press cycle; one of the journalists was like, Alexa is turning my kids into not nice people, but used a bad word.
[00:31:35] Speaker C: Yes.
[00:31:36] Speaker B: And I was like, as a mom, I'm like, what? I'm turning kids into... Because you're now commanding a woman's...
[00:31:42] Speaker D: Voice, a subservient woman that you're saying, do this, do this, do this.
[00:31:45] Speaker B: I'll just tell you, no data scientist envisions this. When they build a system, right? Like they're thinking of the kids we're going to help and the adults we're going to give freedom to. We never think.
[00:31:54] Speaker D: Like, did you think about gender when you created Alexa? Did you just think it was a machine?
[00:31:58] Speaker B: It was an intentional choice to make it female. But that wasn't new. If you remember MapQuest back in the day, they did a huge research study on how often men and women yell at these technologies while driving.
And they yelled at a woman less than they yelled at a man, which was very interesting feedback. So they chose intentionally to go with that.
But yeah, there's so much data around this now. A lot of humans would prefer to hear bad news from a robot and good news from a human. And the reason why, I mean, just think about the airlines. Airlines now will send you a text message when bad things happen, as opposed to the gate person saying it over the intercom. And they do that because it's like de-emotionalizing it. And this is the...
[00:32:47] Speaker C: Takes the sensitivity out.
[00:32:49] Speaker B: Yeah. I think every individual, every company is going to really need to think, how do I integrate this technology to be the best customer facing organization we can be that cares about our relationships and not trigger bad behavior at the same time?
[00:33:05] Speaker C: It's amazing. I mean, we could talk all day, but I know that you've got to go.
[00:33:10] Speaker D: Can I ask one last question?
[00:33:12] Speaker B: Yes.
[00:33:13] Speaker D: Everyone listening, everyone viewing, me myself, I am your audience, okay? So because there's, like, Copilot, ChatGPT, you know, Gemini. Do I have to go on all of them? Can I choose one of them? Which tiger do I want, you know, which one do I adopt?
[00:33:26] Speaker B: Oh my goodness. That's kind of a big question. Give us again.
[00:33:29] Speaker D: Start slowly again. I'm phytoplankton, I know nothing about AI. I'm scared of AI. Everyone watching this is thinking: where do we start? Where do we go?
[00:33:39] Speaker B: Well, I'll let everyone know that I actually started on Amazon Alexa and I knew nothing about AI, right? I wasn't classically trained in AI. As a matter of fact, I had never graduated from high school. I did go, it just didn't fit with me. I went to college, but Y2K happened, so I ended up leaving early.
[00:33:55] Speaker D: The millennium bug, I remember that, right? Yes.
[00:33:58] Speaker C: You made a lot of money.
[00:33:59] Speaker B: That's right, yes. And so I didn't have that traditional technology skill. However, I had a passion. I had my son, who has Down syndrome, and I'm like, talking to tech is the future, I want to be part of it. So that was the number one thing that was required: me wanting to be part of it. And that is a choice that you make. Just like happiness, just like peace, it is not a result of your environment, it is a mental decision that you're going to make, to get involved.
So as soon as you choose to start, the next thing I would encourage you to do is do what I did at Amazon: I just started building. With the technology today, you don't need to learn how to code, you just need to use your words. So go to ChatGPT and say, hey, can you build me a PowerPoint deck? Can you create me a piece of content? Can you edit this? How would you change it? Give me five tips. Just use it as a thought partner.
The most successful thing I did when I first got into this environment was teach people how to build a machine. So we have a class; I created a community called I Love AI, because, similarly to what you said, I used to wear shirts that say I love AI. And I did that because I'm like, when the machines come, I want them to know who's on their side. And so everyone who joins this community, you're going to be safe. But when you join the community, the first thing you do is you build a bot, you build a machine. And we have 12-year-olds, 85-year-olds, farmers, technicians. It's not about technology. It's about clarity of thought, understanding what you want and being able to clearly state it to a machine. And the sooner you start talking... it's like public speaking. Everyone starts awful, and then you get better.
Yeah, just like we started, you know. I think the critical thing is: start, build a machine right now. Everyone can do it. It's called a custom GPT. No skills required, just build one. And then finally, my favorite tool, I would say... oh my gosh, it's so hard.
[00:35:55] Speaker D: We're gonna say other tools are available.
[00:35:56] Speaker B: Whatever you choose. There are thousands of tools.
[00:36:00] Speaker D: It is a minefield, that is the thing.
[00:36:03] Speaker B: To say the re and I'll say why I loved it. Chat GPT is my favorite and the reason I like it is number one, it's been productized by Microsoft and they've added some responsible guard rails to it, which makes me feel comfortable. But the most important thing about it is that when you pay the $20 to become an whatever it's called plus member, you get your own memory bank and now you get to customize that GPT to sound like you, to understand what you want to learn from you. I've been using it. I was a beta tester on this model during COVID before it launched and so I've been talking to it for a long time. It has lots of memories about me. And so now I can actually go in and say, here's an idea I have. Does this resonate with who I have been to you over the last two and a half years? And it'll say yeah, but usually you talk a little bit more about innovation and diversity and inclusion and it doesn't really lean that way. Is there a way you want to rethink about it? So it's actually being reinforcing of who I want to be. And I don't think people realize.
I don't think people realize that AI is not just going to regurgitate, you know, canned, stereotypical information that actually can be an amplifier of exactly who you want to be in the world.
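For anyone curious what "it has a memory bank and sounds like me" roughly amounts to, here is a simplified sketch in which stored memories and a voice description are prepended to every request; custom GPTs do this for you inside ChatGPT, so the code is only an illustration of the idea, with made-up memories and a placeholder model name.

```python
# Simplified illustration of a "custom GPT with a memory bank": stored notes
# about the user plus a voice description are prepended to every request as a
# system message. Custom GPTs handle this inside ChatGPT; the memories, model
# name and prompt here are made up for illustration.
from openai import OpenAI

client = OpenAI()

MEMORIES = [
    "Speaks about responsible AI, accessibility and leadership.",
    "Prefers short, story-driven answers with one clear takeaway.",
    "Often references early work on voice assistants.",
]

def persona_ask(question: str) -> str:
    system = ("You are the user's personal assistant. Match their voice. "
              "Known about the user:\n- " + "\n- ".join(MEMORIES))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# For example, condensing a familiar story for a time-boxed talk, as described above.
print(persona_ask("Condense my baby-tiger story into 90 seconds for a short keynote."))
```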
[00:37:10] Speaker C: Amazing. Thank you so much for your time. Where can people find you?
[00:37:14] Speaker B: Oh, I encourage you to contact me on LinkedIn, but it's pretty easy. You can just Google "Noelle AI"; you can find me all places.
[00:37:23] Speaker D: I guess you could go to ChatGPT.
[00:37:27] Speaker C: Absolutely. Thank you. It's been an absolute pleasure, and hopefully we'll see you in the UK as well. Certainly the Women's Council are really keen to continue the conversation. So yeah, thank you for joining us. Thank you, everybody, and we'll see you next time.
[00:38:00] Speaker A: That's a wrap on this episode of AV in the Wild. Big thanks to our guests and to you, our listeners, for joining us on this journey through the AV landscape. If you liked what you heard, don't forget to subscribe, share and leave a review; it helps us keep the conversation going. Catch us next time as we hit the road again with more voices, more stories, and more of what makes AV wild. Until then, stay curious, stay connected and stay wild.