Generative AI: A Solution Looking for a Problem

In this episode:

In this episode, we dive deep into the world of generative AI, exploring its hype, history, and real-world implications. Guests Abby Breyer, Director of Experience Delivery, and Brian Partin, Director of Enterprise Architecture, discuss how AI isn’t as new as the recent media frenzy suggests; its roots reach back thousands of years. They dissect the excitement surrounding AI, asking the pivotal question: what problem is AI looking to solve? Join us as we examine how AI, applied thoughtfully, can be a powerful ally in solving complex human challenges when we keep people at the heart of the solution.

Tune in now to hear their insightful and comical conversation.

Don’t force humans to fit technology. Force technology to fit humans.

Brian Partin

Director of Enterprise Architecture
Vervint

Wisdom is a human quality; something I don’t think can be replaced anytime soon.

Abby Breyer

Director of Experience Delivery
Vervint
Related Content:
AI in Healthcare: Improving Patient Experience

Episode Transcript

Danielle Haskins: Welcome to Ten Thousand Feet, the Vervint podcast. I’m Danielle Haskins, your host. Today we’re joined by two Vervinteers to take on the human side of things. In previous episodes, we’ve talked about the importance of data strategy when it comes to successful AI implementation. But have you considered your users? You know, the humans who should benefit from the AI tool itself? We’re joined by Abby Breyer, Vervint Director of Experience Delivery, and Brian Partin, Vervint Director of Enterprise Architecture, who has decades of experience in AI. Together, they have a fascinating and hilarious conversation about the importance of keeping people at the center of the AI conversation. And they answer the question I always want to know: is AI going to be the hero or the villain of the story in the end? Let’s get started.

Brian Partin: My name is Brian. I’m an enterprise architect with Vervint. I’ve been working in technology for 37 years now, and thirty of those around AI, primarily around decision support systems: basically, using AI to augment our ability to make the right decision in the circumstance. But also other types of AI over the years, ranging through, and maybe we’ll get a chance to talk about it, using AI for facial recognition. And fish.

Abby Breyer: And fish.

Brian: And fish. Salmon, specifically. In the industry they’re called HOGs: head on, gutted. The project was to optimize bringing those to market in a way that’s both ethical and profitable. It’s always fun when those two things come together.

Abby: Hold on. Go back to the fish.

Brian: I look for weird digressions.

Danielle: I love the real-life examples already.

Abby: You know, Brian, I didn’t expect to go there, but I’m glad that you did. It’s my favorite part of conversations with you.

Brian: Abby.

Abby: That’s me, Abby Breyer, the Director of Experience Delivery at Vervint. And I do not have 30 years of experience in AI, but what I do have is 25 years of experience in design, human-centered design, and now leading both our design and our software development teams. So Brian and I are coming at this from kind of two different places, but we share a lot of the same viewpoints about what can be done.

Brian: I’m going to disagree. I think we come from the same perspective, just slightly tweaked.

Abby: We do. Slightly different experience behind us, but I think we have arrived at the same place together.

Danielle: Yeah, I think that’s the point. You come from different places, but you’ve landed in very similar places on the mountain.

Brian: I think that’s the key consideration we’ll define as we go through this: bringing together the people who understand how people operate, how people work, and what they need to be successful, with the people who understand how to kick the darn technology and make it do that. I think AI brings us to that, which is why I’ve been super excited about this podcast.

Danielle: So today we’re going to talk specifically about generative AI. You have heard the word AI in the last nine months as if it’s a brand new thing. But Brian, in your introduction, you just alluded to the fact that you have been doing this work for a long time.

Brian: Oh my gosh. For ages. Yeah. In fact, if you look at the generally accepted definition, AI is effectively taking patterns of behavior and encoding them on a non-native substrate. So there are some people in the AI discipline, technologists, data scientists, who point all the way back to AI legitimately starting about 3,500 years ago, where people were creating systems that made it look like something mysterious was happening without human intervention. A great example: there were devices that were basically primitive batteries, from Mesopotamia, in what’s now Iraq, I believe. You went to the altar, and if you touched the effigy of the god, you got a shock. It was mysterious. So when we talk about broad AI, this idea of something seeming supernatural, which is generally accepted, from a technical standpoint, to be what AI is, it’s been around us all the time. Even a sock monkey is, in some ways, a symbolic representation of human and animal behavior. Encoding anything natural into a machine is effectively the formal definition of AI.

Danielle: So, I’d love to hear from both of you. Is AI good or evil? Is it a hero or a villain?

Abby: Oh, this is actually AI speaking on our behalf right now, recording this podcast, right?

Brian: Right. Oh yeah, it’s a good thing no one can see us. I swear we’re humans. This is not an AI response. Although we have done some experiments internally trying to get AI to respond verbally and vocally in real time, and they end tragically and horribly, because of course, AI isn’t human. I think that’s a key consideration: it’s a system meant to augment what we do, rather than replace us, simply because it’s not us. Let’s be honest, it’s far cheaper to have a human do something than to train an AI to be as good as a human. That’s an important consideration. Why would you replace something we already have? AI is meant to help us with the things that humans are not intrinsically good at doing. Every successful implementation of AI or machine learning, and we’ll talk about the difference between those later, I’m sure, has taken that approach: I want to augment the capabilities of the human. Why would I supplant them? Humans are cheap.

Abby: Well, back to your question, Danielle, which I think was: is AI a hero or a villain? The reality is it could be either, depending on how we use it. So there’s an ethical component there, certainly, but really, it’s a tool like any other tool we use in technology. What we do with it makes it a hero or makes it a villain. And we have the power to decide.

Danielle: As people, we need to take the power back, because it feels like we’ve given all the power to the tool. We need to say: what do we want it to do for us? And treat it as a piece of technology, like we do every other piece of technology. So, what now? Then what? How do we move forward with it?

Abby: I wanted to just say that there’s a frenzy around AI right now. There’s a frenzy because for some people it’s very new. There’s a lot of hype around what it can and can’t do, and the fear of, you know, will it take my job? But think of all the other tools and advances in technology we’ve had. There was a period of time earlier in my career when mobile was huge, or social media, and everybody needed an app, everybody needed to be on Facebook, we have to do this and that. And it was like, why do you need to do that? Just because something is available to you doesn’t mean it’s the thing that is right for you, your users, or your business. So I think it’s about taking a step back to ask a good question: why do I need this? How do I want to use it? How can this benefit my business, my users, my employees? Being really intentional and thoughtful, and not just hopping on a bandwagon because it’s there to hop on.

Brian: It’s about normalizing, right? What Abby’s talking about here is super important. We’ve gone through the hype. It’s just like connected products: everyone throws everything against the wall to find out what sticks. During that frenzied period, you look for every possible application, and then the ones that are useful are the ones that have staying power, that have stickiness. Great example: I’ve got a washing machine, dryer, and refrigerator that all have IoT capabilities. I haven’t enabled any of it, because I can’t think of a use case where I care. We’re going to find the same thing with AI. What it needs is a process of normalization: figuring it out and making it just a thing, like a pencil or a pen.

Abby: It’s easy to get caught up in what’s possible. And there’s excitement in that. But at the end of the day, you don’t want to be left with just an experiment because you didn’t take the time to figure out how it actually fits into your business or your world.

Brian: Again, most of the use cases with AI, just like with IoT, aren’t going to have high value, even though we want them to. Generative AI? This is the fourth or fifth time in my career that there’s been an AI bubble. Anyone notice? The reason I noticed them was because I was in the work. Why is this one different? Because it feels like it’s human. Because we’ve thrown literally every word that anyone ever said on the Internet, and in a lot of other places, in there, and we put together a statistical model that knows how to spit stuff up that sounds like a person. But that’s not meaning.

Abby: And it’s accessible like everyone can play with it right now.

Brian: Oh yeah, it’s right in your face constantly, and everything’s got “Now with AI.” The reality is, AI just works statistically, and it creates things that feel like the experience. There’s no thinking behind it. One of the things we talk about with generative AI is that it’s pretty simple: literally just statistically grabbing the next word that makes sense, dropping it in, and throwing a little bit of random at it, which I think is hilarious. But you end up in this weird place. Let me give you a great example. I asked one of the AI tools we were testing: give me a list of all 50 states and their capitals in JSON format. We don’t need to talk about JSON format other than that it’s a way to share data between systems and people. It chugged along for a second and decided, damn it, I’ve got to give him an answer. It didn’t give me what I was asking for, because it couldn’t find it. What it gave me instead was the bio of Pat Clarin, who actually leads our internal security practice. Yeah, it wanted to give me an answer so badly, because that’s how these things are encoded, that it just generated something it could find data about, something that only loosely matched my request. Maybe it was because that data had JSON formatting. So the thing to watch with generative AI is that it’s not thinking. It doesn’t know what the question is. It doesn’t understand meaning. It doesn’t understand accuracy. It doesn’t understand precision. It’s just playing a nice, cool game of probabilistic computer charades. And it still works, don’t get me wrong, but you’ve got to understand that it’s not feeding you truth. It’s not feeding you precision. Nothing about it is guaranteed to be accurate, but it’s still damn useful. That in and of itself, that sense that it’s real, that it’s truly talking and feels like a person, is what’s pulling the wool over everyone’s eyes. But we’re already seeing in the marketplace that the people who watch where the money’s going aren’t going along with that.
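Brian’s description, “statistically grabbing the next word that makes sense... and throwing a little bit of random at it,” can be sketched with a toy next-word sampler. Everything below is invented for illustration: the tiny bigram table stands in for a trained neural network over a huge vocabulary, but the sampling step, a weighted random choice with a temperature knob, works the same way in real generative models.

```python
import random

# Toy next-word probabilities: for each previous word, a distribution
# over possible next words. Real models learn these from training data.
BIGRAMS = {
    "the":   {"cat": 0.5, "dog": 0.3, "state": 0.2},
    "cat":   {"sat": 0.7, "ran": 0.3},
    "dog":   {"ran": 0.6, "sat": 0.4},
    "state": {"capital": 1.0},
}

def next_word(prev, temperature=1.0, rng=random):
    """Sample the next word: usually the likeliest option, with a bit
    of randomness controlled by temperature."""
    dist = BIGRAMS[prev]
    words = list(dist)
    # Lower temperature sharpens the distribution (more predictable);
    # higher temperature flattens it (more random).
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the demo is repeatable
sentence = ["the"]
for _ in range(2):
    sentence.append(next_word(sentence[-1], rng=rng))
print(" ".join(sentence))
```

Raising the temperature makes unlikely words show up more often; lowering it makes the output nearly deterministic. Either way, the program is choosing words by probability, never checking them for truth, which is exactly the failure mode behind the states-and-capitals anecdote.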

Danielle: And why do you feel like that is?

Brian: It’s because the expectations were too high and there was a lack of understanding of what AI is actually good at. Only 15% of CEOs surveyed as of May 2024 had seen any positive return on the bottom line from AI investments over the last 18 months, even where they’d been investing heavily.

Abby: Well, and are they starting in the right place as they integrate it into their business? Are they starting in the right place?

Brian: It’s a solution looking for a problem. Abby’s hitting the nail on the head. It was this cool thing everyone was talking about, so the feeling was: we’ve got to do something, or we lose our jobs. That’s not how we’re going to play as an organization like Vervint. We’re going to walk in and ask: what are you trying to do? Let me understand your people. Let me understand your problem. Then, from that, we’ll decide whether AI even fixes it, or whether something else works better. Again, that’s where your team is critical, Abby.

Abby: Right. Like the work we recently did together, Brian, where we were helping a client who wants to continue to support the people doing the work. That’s core to who they are and what they do. They knew they wanted to explore how AI could help those people do their work better: be more effective, maybe be more efficient, feel more supported in the work they were hired to do. They brought us in to assess what the right opportunity actually was, because we could incorporate AI in any number of ways. But when the core question is how do we help these people do their jobs better, feel supported, have more satisfaction in their work, and give our end customers a better experience with us, those are the right problems to solve. When we can think about how to bring AI into that, it’s just a much better perspective. It’s starting in the right place: how do we help people do the things they need and want to do, better? Versus, how do we shoehorn technology into our business just because it’s available to us?

Brian: Human problems. We’re here to solve and help with human problems.

Abby: Human problems, yeah.

Brian: AI is just another arrow in our quiver. It’s another screwdriver in our toolbox. That’s it. We’re here to solve human problems for organizations. AI is not going to replace the humans. AI is just another tool we can use to help humans do their jobs more effectively.

Abby: It’s a super cool tool, and it has tons of potential to make life better for all of us. But it all hinges on how well we use it.

Danielle: What does good look like?

Brian: So, when we’re talking about AI, just like anything else, it’s aligned with making a decision at a point in time, because that’s really where AI fits. It lives in the same space as Power BI or dashboards: it’s a way to take a lot of information and condense it down into an answer. That’s effectively the problem it’s trying to fill. The trick is understanding the data first. AI has to have a basis. It has to have good information to work with. Even with BI, we’ve cheated over the years. We’ve built a lot of BI systems and thrown a lot of money at them over the last 25 years, and we haven’t seen the results, and most of the reason is a lack of discipline in putting them together. Part of the reason AI has been riding such a hype curve recently is the idea that finally we don’t have to clean up the data. A magic button. It’s not. AI actually requires more discipline and more rigor. You have to feed it good things, because it doesn’t know what good looks like. Only people know. So it’s interesting. It’s coming back to what we always knew and just didn’t want to address: you’ve got to do things the right way. You’ve got to bring the right information, the right data, at the right time. You’ve got to bring the knowledge to understand what to do with that, and then the wisdom to understand when to do that thing. AI doesn’t fix that. BI doesn’t fix that. It starts with data operations.

Danielle: So, in order for business intelligence to have been successful, you needed good data. For AI to be successful, you need good data. But you also need to know the problem you’re solving for. Abby, you just said that: the solution looking for a problem.

Abby: Well, and I like what you just said, Brian, because you said wisdom, and wisdom is not something we are going to get out of AI. Wisdom is a human quality, and something I don’t think can be replaced anytime soon, maybe never, because it’s not that simple. Discernment, wisdom, I mean. That’s why we talk about the importance of knowing how to prompt and how to use it. You have to be able to filter what you put in and what you get out.

Brian: One of the things a lot of AI proponents will throw out there, Abby, which is really interesting, is: well, of course we’re going to be able to replace wisdom, we just have to throw more compute at it. Here’s the thing. Just getting AI to the primitive state it’s in right now with LLMs is requiring enormous power input; within five years it would be more power input than the world has available, and more cooling resources, too. Microsoft right now is in negotiations in Pennsylvania to light back up one of the reactors at Three Mile Island to power a data center supporting AI. And that’s to get AI that’s not even remotely intelligent.

Abby: That’s where the good-versus-evil question comes in. When are we just using it to use it, versus using it in a way that’s worth our time and energy? Do we want to get to that point? It opens a whole other door around ethics, and I think that’s a different podcast.

Brian: One of the things you and I have talked a lot about, Abby, particularly in regard to the client you were mentioning, is the risk of AI. And I’m not talking about throwing money at it; I’m talking about the legit risk of AI when you’re dealing with a situation where the right answer is important. I think it’s important to stress and talk about. Let me give you an example. One of the things we find with AI is the belief that if it comes from AI, it’s got to be truth, in the same way we tend to believe what comes out of Google is truth just because it’s closest to the top. Neither Google’s search engine nor any AI you’re ever going to build understands the concept of domain truth at a point in time. But humans being humans, we look for the easy way out. So, one experiment that was done at grand scale at Microsoft, thank you, Microsoft, was the decision to dramatically reduce the size of a software development team, because AI was going to write the code. Oh yeah, it worked wonderfully when they started out. And then the bugs started happening, because experts who knew better were just accepting what the AI told them as gospel, and it was writing bad code. Because what does an AI do? AI just gives you possibilities. It doesn’t tell you the perfect answer, because it doesn’t know what you’re optimizing on. It never will. You can’t train it for that. A human expert has the ability to draw on their vast experience. A good example: a typical human, by the time they are three years old, will have had more data, sensory and informational, thrown at them than any AI can plausibly be trained on with all of the resources available on the planet. So keep in mind that you can’t throw enough time and money at an AI to give it the same experience an adult has. That’s important. We’re not getting rid of expertise. AI makes expertise far more valuable, because you suddenly have research assistants. It also helps people who don’t have training come up to an average level. But you still have to move people past average to expert. AI doesn’t replace them.

Abby: And again, wisdom and discernment. We need humans for that, to discern what is true and what is reliable.

Danielle: In a previous episode, Jim VanderMey had said AI is unlikely to replace jobs, but it might make people who use AI really well more valuable than those who don’t.

Brian: I think we’re already seeing that right now. If we look at the organizations that have adopted AI appropriately and early, people are differentiating themselves within those organizations by their ability to apply AI. People who understand how to rapidly synthesize a huge amount of content and act upon it in a way that makes sense are going to get value out of it. And training everyone to be able to do that, I think that’s our mission. I think that’s our goal, not only as Vervint, but as a society: give people the opportunity to use it to augment the capabilities they were born with. AI provides that opportunity.

Danielle: And the wisdom to know that it’s not always true.

Abby: Well, and like any other tool, of course you’re more valuable the more you approach things with curiosity and a desire to learn. And this is something that can be applied to anything, any kind of role. If you come to the table as a really strong designer, but you also know how to utilize AI in your work? Yeah, that makes you a super desirable candidate, right?

Brian: Structured play. Yes, random play is not going to help you with AI, but taking what you already know and understand within the domain you work in, forcing yourself to use AI, forcing AI to try to work in that environment, is the best way I’ve found to find out what it’s good at in terms of helping you personally. But I also think it works for business. Look at a scenario where you think AI might work, and then test it. Don’t just buy it and say we’re going to do it. You actually have to make sure it works for your scenario. And again, I think that’s where our experience as an organization comes in, how we look at problems, where we’re always helping people to be curious, and then take that curiosity and follow it. I think that’s what differentiates us from those who would say: just buy it, plug it in, and figure out what to do with it later.

Abby: That does sound nice though. I want to know how you forced yourself to use AI. Could you please give me an example of what that looks like? Did you have someone standing over your shoulder just smacking you, or did you write some kind of prompt to prompt you to use AI?

Brian: No, it’s just, you know, for most of us in jobs where AI is going to be immediately applicable, we’re going to be curious. We’re curious about what we do. We’re self-reflective. We think about things. If something offers a way to get something done in a new and better way, well, that’s kind of why a lot of us at Vervint do the jobs we do. Consultants are curious. Consultants want to look at something, poke it, find out if it applies, and if not, rapidly cast it away. Not everyone’s wired like that. So how do we recommend doing it? You start with that human approach. What are the problems? What are the things you, as an individual or a business, are looking at? Then we look at that and say, OK, this type of AI or this type of technology might fit. And then we try it. Before you throw a big investment at it, let’s try it at a small scale and turn those learnings into something, not just throw tech at it.

Abby: Listen to you waxing about the benefits of human centered design, my friend.

Brian: It’s what we all do, right?

Abby: I love it.

Brian: Technology serves no purpose. Humans create purpose. Technology is just a tool to help with that purpose.

Abby: That was beautifully said.

Brian: And this is the same thing we’re trying to do with clients. Technology is sitting out there. You gain nothing by just pulling it in and building something. What purpose does it serve? That’s the key. That’s what’s important. Everything else is ephemeral. Tech changes every day, right? What doesn’t change? People. And I mean the broad needs of humanity; how they’re expressed hasn’t changed a whole bunch, at least for the last 10,000 years. Let’s bring technology to the problems where new technology makes sense. Don’t force people to fit the technology. Force the technology to fit them.

Danielle: So how do businesses start to figure out what people need?

Brian: You ask, isn’t that crazy?

Abby: It’s working differently than, I think, a lot of organizations work today, which is: we did surveys, we know our customers, we know what they want, and we decide what they need in a boardroom while we’re talking about them. Versus going to them and asking them directly: what are your challenges? Tell us about your workday. Tell us about how you’re engaging with our brand. Asking those super open-ended questions to get at needs, motivations, and challenges, and understanding what’s getting in their way. Because that’s where you can start to think about how AI can help solve their problems. The only way to know is to actually talk to the human beings who are going to be using, and impacted by, whatever technology you bring in.

Brian: And the boardroom is not going to know that. Let’s be honest, the boardroom knows the broad strategic goals of the organization, which is what it’s there for. You know who knows what’s going on in the organization day to day? The people doing the work. If you don’t understand that, you’re not going to build a solution that supports those people in carrying out the strategy set forth by the business. So one thing we do a lot of, obviously, is work with the people in the boardroom on strategy. But then you’ve got to turn that strategy into a plan, into a roadmap, whether it’s AI or anything else. And that takes the same research. Don’t just start flipping technology bits on; then you’re just paying for a bunch of lights you might not need. Figure out the right lights to flip on in order to move your human problem forward.

Abby: We shouldn’t be building anything without understanding how it’s going to be used.

Danielle: So those CEOs or CIOs who have seen very little return on investment probably did not understand what their people needed, whether that’s their internal people or their customers. They didn’t take the time to listen. To be curious. To go out and actually observe and see.

Brian: There was a great article on that in CIO magazine, and another in Forbes two months ago, talking about how the CIO is moving from being a boardroom position to being a secondary position. Why? CIOs, as the role is currently implemented, don’t understand how to take technology and make it meaningful for the organization’s strategic and tactical goals, not just the raw bottom line. So I think it’s interesting that the business community itself is looking and saying, huh, maybe the last 20 years, where we just ran our business based on IT, need a second look. We need to start thinking about running a business based on business need.

Abby: To be fair, though, I think there’s such a frenzy around AI right now that everyone’s rushing to get something figured out. If we all took a collective deep breath and just paused to think about what we’re doing, we’d be in better shape overall. Too many people feel the pressure of AI, and that pressure is real, because we do have to embrace it. We have to find ways to make it work in our businesses and figure out how it can best be used for good. But it’s okay if you don’t have all the answers. It’s okay if you’re taking a step toward it and you don’t have it all figured out. There’s time. There’s time to figure it out, there’s time to get it right. And that’s part of it too, just the pressure people are feeling.

Brian: That’s something I regularly remind myself of, but also share with a lot of the executives and tech leaders I work with: slow is smooth, smooth is fast. Now, the apocryphal story is that it came from gunfighting. If you just yank your gun out, you’re going to miss everything. The important thing to consider is that if you don’t take your time and you just react, you’re never going to get good results. Take your time, and when you do, it’s going to be far faster than if you run off, do a bunch of stuff, and then have to recover from the nightmare you just created. There’s a whole category of projects where we come in and help organizations rescue that kind of situation: “Well, we didn’t know what we were building, so we just built stuff, and then the users used it, and we can’t manage it anymore. Come in and fix it.” By all means, avoid doing that with AI, because it’s far faster at wreaking ruin in your house than any other type of technology, just because of its cost. AI is three times as expensive as traditional technology over its run, not counting the acquisition. Acquisition is really, really crazy expensive. But running it is three times as expensive. Why? You have to do a lot more management, a lot more governance. So if you’re going to do it, take your time. Think about it. Make sure you’re smooth. That’ll end up being faster in the long run.

Danielle: You know what problem you want to solve for before you start.

Brian: Always, always do nothing without understanding your problem. Ever.

Danielle: And keep people at the center.

Abby: People at the heart.

Brian: Well, that’s the whole thing we’re talking about people no matter what we’re doing, Tech is just a thing. Tech disappears. The next cool thing in Tech has already hit while we’re having this conversation.

Abby: We’re going to have to go look up and see what hit while we’re in this conversation. Let’s go ask AI.

Danielle: Maybe it knows?

Brian: I’m sure it has an answer, and I’m sure it’ll be very confident about it. I’m going to ask it what the lotto numbers are. If I don’t call into work tomorrow, you’ll know it was right about something.

Danielle: Thanks for joining us for this episode of Ten Thousand Feet, the Vervint Podcast. To learn more from our thought leaders and the services we provide, visit vervint.com.