Artificial Antics
Artificial Antics is a podcast about artificial intelligence that caters to the skeptic and the uninitiated. Join this unlikely trio: Mike (the techy), Rico (the skeptic), and A.I., as they dive headfirst into the world of artificial intelligence. From debating the social implications and ethical concerns around AI to figuring out how to break into the lucrative AI market, no topic is off-limits.
And with A.I. on board, you never know what kind of shenanigans are in store. Will A.I. turn out to be the brains of the operation, or will it be the source of all their problems? Tune in to Artificial Antics to find out!
Episode 16 - Rico and Mike's Year in Review
In this episode, Rico and Mike review the major AI developments, trends, wins, and fails of the past year. They cover generative art, coding with AI tools, multimodal AI, and discuss predictions for 2025.
Topics & Timestamps:
00:00 - Introduction & Episode Overview
01:00 - Evolution of Generative Art Tools (MidJourney, DALL-E, Leonardo AI)
04:00 - Generative Art Examples and Mike’s Photoshop Tip
07:00 - AI Productivity Tools for Coding (GitHub Copilot, Replit AI)
12:00 - Education and AI Coding: Future Predictions
19:00 - Generative AI Video (Runway ML, Final Frame)
23:00 - Generative AI for 3D Spaces (Blockade Labs)
25:00 - Text-to-Speech Tools (Eleven Labs, Google, Amazon Polly)
26:00 - Speech-to-Text Tools (DeepGram, Whisper)
29:00 - AI for Voice Pranking (Bland AI)
32:00 - Custom GPTs: A Year in Reflection
34:00 - Ethical Concerns & Information Bias in AI Models
36:00 - Interview Prep Tools and Research Tools (Perplexity AI)
38:00 - Omni-channel CX and Customer Engagement Tools (Helios)
42:00 - Biggest AI Wins & Fails of 2024
46:00 - Trends for 2025: Multimodal AI, Agentic AI, Open Source Models
50:00 - AI Governance and Regulation Predictions
Here are the AI tools and services we mentioned in the podcast, along with their respective website links:
MidJourney - https://www.midjourney.com
DALL-E 3 - https://openai.com/dall-e (Can also be used with Bing's AI Image Generator)
Leonardo AI - https://leonardo.ai
Fooocus Model using Stable Diffusion - https://stability.ai/
GitHub Copilot - https://github.com/features/copilot
Replit AI - https://replit.com/site/ai
Runway ML - https://runwayml.com
Final Frame - https://finalframe.ai/
Blockade Labs - https://www.blockadelabs.com
Eleven Labs (Text-to-Speech) - https://elevenlabs.io/
DeepGram (Speech-to-Text) - https://deepgram.com
Bland AI - https://www.bland.ai/
Custom GPTs - https://openai.com/chatgpt
Perplexity AI - https://www.perplexity.ai
Stay Updated:
- Subscribe to our channel for more episodes like this: https://www.youtube.com/channel/UCXz1ADq4dDQ5yPy40GWDepQ?view_as=subscriber&sub_confirmation=1
- Follow Us: https://www.buzzsprout.com/2178171/follow
Connect with Us:
🌍 Artificial Antics Podcast Website: https://antics.tv/
📰 AI Bytes Newsletter: https://artificialantics.beehiiv.com/
🚀 Artificial Antics Business Site: https://artificialantics.ai/
💼 LinkedIn: https://www.linkedin.com/in/artificialantics/
🐦 Twitter: https://x.com/anticslab
🍏 Apple Podcasts: https://podcasts.apple.com/us/podcast/artificial-antics/id1694398244
🎧 Spotify: https://open.spotify.com/show/0QpTQhJrdXNeqc5TX8clmj
Episode 16 – Rico & Mike’s Year in Review
Natasha: [00:00:00] Welcome back to Artificial Antics, where Rico and Mike will talk about the implications and opportunities around
artificial intelligence, machine learning, and deep learning.
For this episode, artificial intelligence, a year in review, the guys will talk about trends, the biggest wins, fails, and their predictions as we get ready for 2025.
Rico: What's up, everybody! Welcome to another exciting episode of Artificial Antics. You're in for a big show tonight as we do our year in review, now that we've passed the one-year mark. So I'll hand it over to my cohost, Mike. Mike?
Mike: Hey, everybody. Yeah, Rico and I are just getting together here. We've got some folks on deck for, you know, guests on the show, really exciting stuff coming up. And we wanted to do a year in review for you all. A lot's happened in the last year; we're a little over our one-year mark. And, uh, Rico, I think we could just dive right in here. What do you want to start with? Do you want to start with some trends?
Rico: Yeah, let's start [00:01:00] with some of the things that we've seen evolve throughout the year, and what really topped our highlights as far as things that we were involved in early on and have seen progress. So let's talk generative art up front.
Mike: Yeah, that, that sounds good. Um, so, uh, throughout the year, what have you seen? Like, as far as an evolution, um, what products really impressed you? Uh, what was underwhelming?
Rico: Yeah. So I think for me, MidJourney was a big one. And MidJourney, of course, led me into other products, like Final Frame, which we've covered before, and some of the others that allowed you to take a static image and then create a video around that image, as well as the other iterations of MidJourney that progressed over time. And, of course, other people got into that game, and you saw Leonardo AI jump in, and some of the others. So [00:02:00] do you have any favorites with generative art that you got into?
Mike: Yeah, I would say the ones that really impressed me were definitely MidJourney. I would say that DALL-E 3, you know, has its own little spin, its own little twist on generative art. Obviously, in my mind, it's more, I'm going to call it, the generic, right?
DALL-E 3 is more of the generic, where MidJourney, I feel, is much more creative and produces cooler-looking stuff. But here's the one that really blew my mind, dude: that Fooocus model, right? We did that episode on Stability AI, and they had Stable Diffusion, and one of the open-source projects is called Fooocus, with three O's.
And the stuff that we generated with that, being able to generate consistent people across multiple different types of images — that is one of the only places [00:03:00] I really saw that done well. So that one really hit for me.
Rico: For sure. And that was also cheaper too, 'cause you could get an iteration of that that was offline. Yeah. So it was a lot better than paying, say, MidJourney $10 a month or whatever it was, depending on the package you want. Yep.
Mike: Yeah. Well, and you know, the thing is, I would say that Stability AI and Stable Diffusion and all that, again, they all look a little different, right? DALL-E looks kind of one way and
Rico: Cartoony, right?
Mike: way. And yeah, they're all kind of their own thing.
Stability, I would say, no, not really, not even cartoony, but more like base, raw types of images. And for realism especially, Fooocus really dials that in; they actually tuned that model specifically for realism. So it's a whole different thing, right?
Whereas with MidJourney, you can [00:04:00] clearly tell something generated by MidJourney. Generally, you can clearly tell something generated by DALL-E. And, you know, going back to the original Bing image generator, which was DALL-E 3, and how we probably tried to generate ourselves as, you know, caricatures together in one image for nine hours or something.
Rico: All right.
Mike: We never really got it. And just a quick tip for the folks at home: what we ended up doing, folks, is we ended up generating one that looked like me, in the style that we wanted me to be in, and one that looked like Rico. And then I took my Photoshop skills and tied those two together.
So yeah, generative art: lots of great stuff. Anything else in generative art? I'm just scrolling through my list here and not seeing anything else that jumped out at me. Anything on your side?
Rico: No, not really. But I want to say that, [00:05:00] for folks who don't know, if you take a look at our logo and some of the earlier works, we've kind of kept the same branding, and that was all generated, you know, at our start, using DALL-E 3 and, of course, Mike's Adobe Photoshop skills.
We hope you still enjoy those to this day, and we will probably see them change sometime in the future, but our branding is important to us and that's where we came from. So we're sticking with it for now,
Mike: Yeah, absolutely. And the one thing that I'll say is, I was able to go outside of my normal skill set and do something that I would call more aspirational because of the AI tooling, right? And it wasn't just me spending five or six hours; you know, maybe we spent nine hours, but we didn't spend nine days.
Right.
Rico: Right, right. And I think that's a perfect segue there too, Mike: what have we seen from this year? A lot of it has moved to enhancing people, right? We talk about the human side of AI, [00:06:00] and giving people tools that make them able to do things either 10x, or that give them an ability they don't otherwise have, whether it's creative art or coding or that type of thing.
So I think that's a great segue to move into: how about something that has shown you an increase in productivity, perhaps, that AI has added?
Mike: Yeah, no, absolutely. So, I've been a coder for many years. I love writing code, and I never want to let that go, let's say. But in the same breath, I don't love just typing things in, right? That's not the fun part for me. The fun part for me is architecting, designing, working with stakeholders and business people, and solving problems.
Right. I don't really care about typing the code in, right? In most cases. So, I think this has definitely, a hundred percent, evolved over the year, right? We had the initial cases where you had some tools [00:07:00] where people were using ChatGPT, or they were using Claude, for coding.
There were a few things along the way, like, you know, Devin AI: it's supposed to be your coder. And the thing is, they debunked that pretty quickly. I mean, it was basically a wrapper around ChatGPT to do a certain thing.
But you've got GitHub Copilot, right? That was another early one; the start was like, yeah, GitHub Copilot, some ChatGPT augmentation. Now I've been using Replit. And Replit is a place where you can build code as well as host projects, and that becomes really beneficial when you want to create very quick little tasks and experiments.
And as folks know, we're in the lab, right? So I'm always in the lab, and Replit launched with their own AI, called Replit [00:08:00] AI, about half a year ago, maybe even longer, but it wasn't really super good in the beginning. And I've seen that evolve over time. I'm really fast now with ChatGPT and Replit and Replit AI, kind of the combination of those things.
Within the last week or so, Replit has released, and this is one of the big surprises, Replit AI agents. And we've talked about this since the beginning of the show, right? Agents, or agentic AI, is where, you know, it's going out, it's making a plan, it's deploying some agents to do pieces of that plan,
it's checking its own work, right? Really cool stuff. But you run into some problems that are kind of omnipresent in that usage, one of them being repeating itself, or looping. And I did run into [00:09:00] some of that with Replit AI agents right away. But in the same breath,
I did four, let's call them experiments with it so far over the last couple of days, probably spent about five hours with it, honestly, or maybe less than that. But man, the first one was kind of rough, the second one was better, and the third and fourth ones were really, really good, to where I could truly take my idea.
And there are a few things, like Airtable has a thing too. It's like, generate an app, and you tell it what you want, and it really just builds out some Airtable stuff, which makes sense. It builds out an interface, but it's super, like, oh, that's easy to do. Replit is truly building out your code, from, hey, here's, let's call it your Python Poetry configuration, which is the packages, to, hey, let's set up your deployment and map your networking and stuff like that.
And it does all of it while you're sitting there watching it. It also creates a plan. [00:10:00] And I demoed this to multiple people at work the last couple of days. I was like, watch this, boom, turned my screen: let's watch this generate an application right now. And one of the cool things that I like about it a lot is that it generates kind of a plan, right?
Which all of these have generated a plan, but it will think of things that I don't think of, right? Like, with an API yesterday, I was like, hey, I want you to generate me a Python application that can reach out to Deepgram and transcribe audio; I'm going to give you an audio URL. You know, an app I've built a bunch of times in various places, because I knew what it was supposed to look like.
And it gave me an idea. It was like, do you want me to add rate limiting? Right, which is like, hey, they can hit it X number of times, but not more. And sure enough, once it had built the initial version, I said yes. And some things I didn't say yes to, right?
Like, I was like, yeah, we don't need that thing, but we [00:11:00] need this, this, and this. That's great: check, check, check. And so it built the initial app, it had me test the initial app, and I said, yep, it's looking good. I tested it. And then it's like, okay, now I'm going to build the rate limiting in. It built it, and then it's like,
use your API client to hit this six times, and on the sixth time it should give you a message that says, hey, you've exceeded your rate limit for this API in 24 hours. Sure enough, that worked, right? Then it implemented error handling. It's really very beneficial. I'm almost understating it, because it's a whole new way of working.
Right? I'm sitting there and it's like, that's my team. Whereas a year ago or six months ago, it was ChatGPT, that's my developer. Now, with agents, you have the ability to say, that's my team. [00:12:00] Yeah,
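Mike's rate-limit anecdote maps onto a simple pattern. The agent's actual code isn't shown in the episode, so this is only a minimal sketch of a fixed-window limiter in Python; the class name, parameters, and the five-call/24-hour numbers are assumptions made to match the example:

```python
import time

class FixedWindowRateLimiter:
    """Allow at most `limit` calls per `window_seconds` for each client key."""

    def __init__(self, limit=5, window_seconds=24 * 60 * 60):
        self.limit = limit
        self.window_seconds = window_seconds
        self.hits = {}  # key -> (window_start, count)

    def allow(self, key, now=None):
        """Return True if this call is allowed, False if the limit is hit."""
        now = time.time() if now is None else now
        start, count = self.hits.get(key, (now, 0))
        if now - start >= self.window_seconds:
            start, count = now, 0  # the window expired; start a fresh one
        if count >= self.limit:
            return False  # caller should be told the rate limit was exceeded
        self.hits[key] = (start, count + 1)
        return True
```

A real API would key this on the client's API key or IP address and return an HTTP 429 with a message like the one Mike describes.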
Rico: So it's far exceeded your expectations, and even our earlier expectations from when we originally played around with ChatGPT in the early days. When we say early days, it was about a year ago, right? Or a little bit more. But, uh, some of the output we got from that.
And I can tell you, you know, what I see is, and I'll ask you this question: where do you see the education for coding going when it comes to that? Because you obviously know both sides of it, right? What it should look like, from the initial idea to the execution of the idea, and then moving it into the application and having it be functional.
Where do you see that going?
Mike: That's a really good question. It's not a super hard answer, I don't think. I think the focus is on two things. One: being able to speak naturally, right? And honestly, with Replit, you could do it with voice. You could drive it with voice. [00:13:00] That's been my dream for so many years.
Rico: right.
Mike: just talk to this thing.
Matter of fact, with ChatGPT, here was another cool evolution over the year: at least in the app on my phone and the app on the Mac, you can do voice with it, right? You can have a conversation with it. Now, it interrupts, it's a little weird sometimes, but I was driving to work the other day and I was architecting an app with it.
And when I got to work, I had code. Having a conversation in my car, building something
Rico: Dream come true, right?
Mike: different. That's a whole different thing than it used to be, right? So I think communication is more relevant than ever.
Rico: right.
Mike: Along the veins of communication: networking, which means human interactions. Because right now, if we're talking about training, sure, there's technical training, but the new [00:14:00] folks coming into the workforce, what they really need to do is make friends
Rico: Yep. Talk to
Mike: every talk to people,
Rico: Literally talk to people, not text, not anything else,
Mike: You know, show up, eat the Domino's pizza at your local meetup, because those are the people who are going to be like, I'm going to give this kid a break. Seriously. Because right now with AI, hey, we can generate a hundred thousand resumes; I can have it run all night and generate a hundred thousand myself, right,
and send them out. So with stuff moving so fast, there becomes a kind of re-centering on networking. So we've got communication, we've got networking. From the technical side, I think they do need to learn the fundamentals of application design, part of which is coding.
What they really need is [00:15:00] knowing when ChatGPT, or whatever tool you're using, is giving you bunk. Right? As I'm watching that agent flow by with 50 or 60 different things it's done, that doesn't mean it's done them right, or it could completely miss the mark. And I did see that in my first example: I said, generate a Django app, which is a certain Python framework, and it generated a Flask app.
Right. And then it got into a loop. So these tools are still in a stage where they're going to hallucinate. And as I'm reading more about the science behind them, they may always hallucinate. Right. So you've got to be able to understand what isn't correct.
Right. And I would say, learn how to get a second opinion, right? Whether you're using another AI for that, or a person. [00:16:00] Or, here's another tip: learn how to generate code that tests the stuff that ChatGPT built, or whatever AI built, and really benchmark it.
Like, hey, let's put this under a stress test, because you can build the code to test it very quickly, which wasn't exactly possible before. A lot of people skip tests because, oh, it just takes so much time. Now, generating tests is easier than ever. And so that's another thing people really need to learn, right?
But I would say application architecture and design, that's the thing. And I don't mean design like, hey, what does a button look like. I mean, hey, if I drew it on a whiteboard, what would it look like? Being able to do that. Because if you can draw it on a whiteboard nowadays, you could build an app pretty quickly.
And again, [00:17:00] here's just another tip for folks: you could write a really awesome prompt with a hundred words or whatever, and you could have said everything perfectly, but ChatGPT, or any of these AI tools, is much less likely to write you that whole application end to end correctly than to write a little piece,
then another little piece, right? You want to break things up into atomic units. So going back to what I was saying about specifics: it's communication, it's networking, it's application architecture and design, and then it's application testing and benchmarking, and being able to use ChatGPT or whatever tool to build that out and then also test it.
Rico: That's great. It'll be interesting, honestly, to see what the colleges do with it [00:18:00] as far as their coding courses, because obviously you won't need as much of that early-on stuff that, say, you had when you came up through and were educated. So it'll be interesting to see how that pivots.
Mike: Yeah. And here's just another, while we're on coding, because I think we can probably segue soon. I would say another little life hack with this: newer frameworks and newer libraries in development, they're not going to be as well covered by LLMs. So, for instance, you've got a language like Python.
LLMs and ChatGPT are the most fluent in Python, because they've trained on a lot more Python than they have, let's say, Elixir, which is another language I've used and love. I love both languages. But if I have ChatGPT write me some Elixir code and I have ChatGPT write me some Python [00:19:00] code, it's going to have a lot more of a point of reference for the Python code than the Elixir code.
That doesn't mean that I haven't gotten good code out of it for both languages. It's just, you have to try a little harder. And if there's a newer framework or library in your language, that could be really tricky, because with ChatGPT, or LLMs, even ones that search the web, it's hard to get them to have a notion of what you're looking for.
An example recently was something called FastHTML. In Python, FastHTML is newer. It's smaller, as in not as well adopted yet, right? But it's a really cool library. I tried to get ChatGPT to write some code; it had no idea. It kept writing me Python FastAPI code over and over — the wrong code — and had no notion of what I was trying to do.
So that's just something to note: in the past, I may have reached for something newer to play around with. I still [00:20:00] will. I just know that there are some limitations there with generating that code.
Rico: What do you want to segue into now? Now that we covered coding extensively?
Mike: That's it, that's the episode, right? Good night. I'm just kidding. Okay, so we've talked about generative art, we've talked about coding. Let's talk about, and this is also art as well, one of the biggest surprises for me, which is how far generative AI video has come in this one-year period.
So, Rico, sometime last year, we were watching Will Smith eat some spaghetti noodles, and it was the craziest, weirdest thing ever. Right.
Rico: Cool at the time, but not perfect.
Mike: at the time it was like, and [00:21:00] everybody said, hey, this is only going to get better. Just like I say every day: I look at something and I say, that's only going to get better.
Right. And it's going to get better pretty quickly, generally. Just three months ago, you had some things released like Runway ML Gen-3 Alpha, and they did some testing; Kling AI, which is another one, did some videos of people eating spaghetti noodles, and it was so much better than what we saw just a year ago with the Will Smith example.
And I'll do a quick call-out to OpenAI here, because what they showed with Sora, and we still do not have it, they showed some really impressive stuff in February that the general public still doesn't have. In my mind it's a bit embarrassing, because you've got Kling,
you've got Luma. Luma is okay, it's not that great, but Kling's [00:22:00] really good. And Runway ML Gen-3 Alpha, super freaking good, and they're really hitting it, right? They're going hard on that. So, generative video: are there any contenders in that space that you found that kind of surprised you? Underdogs,
let's call them.
Rico: Yeah, I think Final Frame is one that's up and coming. They were taking static images that were then turned into some sort of animation; I think it was a three- to twelve-second video, depending on how much you paid, that type of stuff. But yeah, they were a smaller one that kind of took off on X.
That's where I noticed them the most, when we were playing around with MidJourney a bunch. So I think you'll see more from them. And I think they're also partnering, I want to say it's Luma, but it might be another company; they're recently looking into partnering to get more into the generative art and video side of things.
So,
Mike: Nice, nice, very cool. Another really good one that we saw was [00:23:00] generative art, but for 3D spaces specifically: Blockade Labs, right? Their Skybox AI, which is where we generated our lab background images, like the ones that we're looking at right now. So it's pretty impressive stuff.
I mean, there's just so much; you could have a whole one- or two-hour episode on just the generative art stuff. But let's move on: why don't we start talking about voice? So, I've tried a few products, but I've really only been impressed with Eleven Labs so far.
So Eleven Labs allows you to do text-to-speech. They have translation now, so you could take audio clips and it'll translate them. They have voice cloning; I cloned my voice, right, and it turned out really good. They've [00:24:00] even got, you know, you can actually earn royalties from your voice.
Right. So, Brian is one of the voices
Rico: All right.
Mike: exceptionally good. And, you know, hey, if people are using Brian's voice and Brian signed up for their royalty program, then he's got to be making a lot of money. He's like number one; just such a beautiful voice that man has. But anything else related to voice stuff that you tried, text-to-speech, stuff like that?
Rico: Nothing that stands out as far as text-to-speech. I mean, we played with a couple of them, I forget the names of them now, but Eleven Labs by far overshadowed all of them. It wasn't even close. So,
Mike: It's canonical. I really feel like it's canonical. There are a couple of other ones. I mean, Google has an engine for it. Amazon's is called Amazon Polly, like Polly, like, you know, it repeats what you say or whatever. That's another one. But I don't think any of them are like Eleven Labs; the voices [00:25:00] sound very natural. There are some other ones that generate more natural-sounding voices, but Eleven Labs, I think, in my mind, is the one that
isn't going away, right? It's a pretty established player in that field. Um, why should
Rico: I was just gonna say, as they introduce more policing tech, you'll probably hear Brian's voice on, you know, some kind of robot outside your door, or these
Mike: Definitely, definitely. Just talking real nice, just talking you down. So, we're talking about text-to-speech; let's talk quickly about speech-to-text. Speech-to-text is transcription. And there are a few players in this space. I think a lot of them under the hood are
using OpenAI's Whisper model, from back when OpenAI was actually open and really released things to the public. And Whisper is still the best open-source one, in my opinion, that I've used. But Deepgram [00:26:00] has done some really great stuff with speech-to-text. They also do sentiment analysis, right?
So, say I have a five-minute phone call: with Deepgram, I can transcribe that phone call, I can get sentiment, right, how people are feeling, and the confidence level of that. It'll do something called diarization, which is speaker identification. And from the experience of using their API to onboarding with Deepgram, they stand out. There's another one called AssemblyAI; I have played with it, they give you some credits as well. I wasn't as happy with AssemblyAI. And again, Google has transcription, and AWS has their thing, right? So anytime we say,
[00:27:00] hey, there's this service, you can pretty much guarantee we're not going to mention Google or Microsoft very much, because they all have something. But I think there are these standout players that carve a niche in that specific thing,
and in my mind they're better, right? Google's text-to-speech is super okay. Google's transcription, super okay. Nothing like Eleven Labs, and nothing like Deepgram. It's not even close. And Deepgram has some new services; I don't even remember exactly what they were, but I think they have text-to-speech now.
So we're going to have to hit that up and play around and see how it compares to Eleven Labs, because they're focusing on allowing you to build out these really automated platforms. Part of that is, hey, let's generate some audio right now based on, let's say, what AI gave us to respond to this person.
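For anyone who wants to try the Deepgram workflow Mike describes, the pre-recorded transcription endpoint is a plain HTTPS call. This is only a rough sketch: the `/v1/listen` endpoint and the `punctuate`/`diarize` query parameters follow Deepgram's public API docs, but check the current docs before relying on it, and the API key and audio URL here are placeholders:

```python
import json
import urllib.request

DEEPGRAM_URL = "https://api.deepgram.com/v1/listen"  # pre-recorded audio endpoint

def build_transcription_request(api_key, audio_url, diarize=True):
    """Build the HTTPS request for transcribing hosted audio with Deepgram.

    `diarize=true` requests speaker labels (the speaker identification Mike
    mentions); `punctuate=true` adds punctuation to the transcript.
    """
    params = f"?punctuate=true&diarize={str(diarize).lower()}"
    return urllib.request.Request(
        DEEPGRAM_URL + params,
        data=json.dumps({"url": audio_url}).encode("utf-8"),
        headers={
            "Authorization": f"Token {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def transcribe(api_key, audio_url):
    """Send the request and return Deepgram's JSON response (network call)."""
    req = build_transcription_request(api_key, audio_url)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Deepgram also ships official SDKs, which most projects would use instead of raw `urllib`; the raw request just makes the moving parts visible.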
Rico: I think we have a contact over at Deepgram, dude, don't we? Somebody we were going to talk to last [00:28:00] year. We should probably try to get back with them. Maybe we can get an update.
Mike: Yeah, he left Deepgram. He's doing a really interesting thing; that's Adam, Adam Streeter. He's doing some really interesting stuff, but it's not at Deepgram. I do have multiple contacts there, though, and absolutely, we can definitely check that out. You know what that made me think of, with the AI voices, is Bland AI, which is something you showed me.
We saw this... where did you see that originally?
Rico: I think, yeah, we got it from, was it the This Day in AI podcast? I believe it was. If not, it was the other one; I forget the name.
Mike: Last Week in AI.
Rico: Yeah, something like that. Yeah. And they were using it for prank phone calls at the time, and they were very convincing. I think the overall theme was basically a church trying to get at somebody, but it was quite hilarious.
It was very, very funny. So yeah, Bland AI. And they were mixing it with [00:29:00] another AI at the time, which I don't think was publicly available, because I think we applied for it and were on a waiting list. But yeah, Bland AI was definitely a good one.
Mike: Yeah. So, Bland AI, folks, just to let you know: with Bland AI you can dial numbers, and it's like an agent that will speak to whoever answers the call. It can detect an answer, it can detect pauses. You can tell it, hey, wait for somebody to say something, or you can tell it, hey, no, you say the first words. Right?
And what's really cool is you can tailor the voice, right? So here's my idea with it. Say you've got a customer down in Georgia: you've got your Southern Jessica, let's call it. "Hey, this is Jessica from Bonita."
But that's really an AI, right? And then let's say in New York, you know, you can have one that's like, "Hey, I'm walking here."
Rico: The [00:30:00] most stereotypical one.
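For a sense of how an outbound voice agent like the one Mike describes gets configured, here is a tiny sketch that assembles a call request. The field names (`phone_number`, `task`, `voice`, `wait_for_greeting`) are illustrative assumptions, not Bland AI's documented schema:

```python
def build_call_request(phone_number, persona_prompt, voice="southern_female",
                       wait_for_greeting=True):
    """Assemble a hypothetical outbound-call payload for a voice-agent API.

    The agent gets a guiding prompt (much like a custom GPT's instructions),
    a regional voice, and a flag for whether to wait for the callee to speak
    first or open the conversation itself.
    """
    return {
        "phone_number": phone_number,
        "task": persona_prompt,          # guidance the AI uses to flow the call
        "voice": voice,                  # e.g. tailor the accent per region
        "wait_for_greeting": wait_for_greeting,
    }

req = build_call_request(
    "+15555550123",
    "You are Jessica from Bonita. Be warm, brief, and helpful.",
)
print(req["voice"])  # southern_female
```

The actual service would take a payload like this over its API and handle the dialing, speech synthesis, and turn-taking itself.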
Mike: But you know, the reality is, with Bland, what's really cool is it uses AI, so you basically give it a prompt and say, hey, here's some guidance, kind of like you do with a custom GPT. And it will actually flow the conversation. Like, I talked to one of my Bland AI agents and said something about Ferris Bueller, and it totally had a reference for it.
It was talking about something with Cameron. So it flowed very naturally; it was surprisingly good. So yeah, that was another one that came super out of left field and got me. And that has me thinking: I was just talking about how we can guide Bland agents, right?
Custom GPTs, that was another big thing. Rico, why don't you just spit some knowledge about GPTs? What was your take six [00:31:00] months ago, whenever they came out, and now? Has anything evolved? We've had the store release, obviously. What's your take, six months ago and now, on custom GPTs? Let's say the goods, the bads, the uglies.
Rico: So, the custom GPT thing. The only sad thing for me on that is that I don't think it had a long enough run; things moved a bit too fast with it, right? So we had custom GPTs introduced, and right before that they had the store where you could go on and download these different tools to attach to it.
What were they called at the time?
Mike: They were called plugins.
Rico: Plugins, yes. Yeah. So you had the plugins, and we went through that phase for a very short time, and we were excited because there were many things you could do. And then the custom GPTs came around, and of course we filled those with knowledge and everything, and then they moved beyond that. And I really enjoyed the custom GPTs, because you could drop into, [00:32:00] say, there was one, I think it was called Legal AI or something like that, and you could do contract work and some research for legal stuff. And then of course I had created one for my own workout stuff, so I could compare and contrast my workouts and get some data from that, which ultimately just became a function of ChatGPT, because, you know, you drop into the knowledge base. And then we had our own custom GPT that we made, which was kind of fun.
And I've used that a few times where I'll say, I want Mike's take on something. So I say, here's this bit of information, I want to hear it like Mike would say it. And then boom, what do you know? It comes out in Mike's tone, of course, written.
Mike: It does a good job. Yeah.
Rico: A very good job. Yeah, I've actually tried it out.
And of course our friend Nick over at Nomad Studios, I was having a conversation with him a couple of weeks ago, and Nick was super impressed with something. And I said, yeah, so Mike actually didn't write that bit. That's actually a GPT I use to get Mike's tone. And he was [00:33:00] like, really?
And I was like, yeah, it's a hundred percent me, but it came out as Mike. So that's some of the fun stuff we've seen with ChatGPT, what we've been able to do. I haven't played around with the newest model yet. I was actually just looking at that earlier, but I was trying to tweak a custom GPT that wasn't working right.
So they've still got some work there. Um, I will say one thing, if I can, about ChatGPT. One of my issues, of course, as the skeptic, has always been information bias. Of course I look at the ethical side of things. There's some information that we took, and I'm not going to get into the ins and outs of it, but let's say it's controversial in nature.
There are definitely two sides to the issue at hand. There was factual information presented by scientists, in another language, in another country. We had taken a transcription of that, dropped it directly into ChatGPT, and asked it to translate it word for word. And what was interesting was that ChatGPT did do the translation, [00:34:00] but it left out a lot of the pertinent data that would be considered controversial, again, given those two opposite sides. So I'm still very skeptical, Mike, about the information bias and who it is that's running it. We've seen a lot of drama happen with OpenAI: the board seats, Sam Altman leaving, coming back, possibly going to Microsoft, then coming back. And then of course, after he came back, their safety team being diminished, two or three executives leaving from that group of people. Which really gets at what we talk about in our newsletter, the AI Bytes newsletter, time and time again.
I know I beat it like a drum all the time: transparency, transparency, transparency. That's what we're going to need to get to adoption, and they keep getting away from that. That's the stuff that still makes me skeptical with ChatGPT. I haven't seen as much of it from Claude, as much as I've played with Claude.
Claude definitely gives you a different output, [00:35:00] depending on what you're using it for. But yeah, that's my take on ChatGPT. I'm still very, very skeptical of it, but I'll continue to use it. And I see that the numbers of folks using it are still increasing; there's lots of folks using ChatGPT.
Mike: Yeah, for sure. So, with custom GPTs specifically, and the store, I will say: somebody asked me just the other day, hey, do you have any ideas for AI tools that do interview prep? And I said, not off the top of my head. But I was thinking, there's got to be a custom GPT for that, and probably ChatGPT by itself; I could write you a prompt that would do a pretty good job. But I went and looked for tools, and there's nothing great. The one thing I found, I waitlisted for and haven't heard anything back. So I went into the GPT store. Of course, there are actually probably 15 or 20 that are literally [00:36:00] called Interview Prep, but at least you can see the usage numbers, and people can review them.
So I'm looking at the ones that have like four-and-a-half-star overall reviews, right? And I found two interview prep tools for her that are free. And I said, hey, I haven't used these, but I bet they're just fine for what you're trying to do. They'll go back and forth, they'll quiz you, ask you about the company, and then do some basic research. Which, speaking of research, that just made me think of my favorite research and search tool, which is Perplexity AI. That was a big, big surprise when it came onto my radar, probably the end of last year, though it really hit my radar in January.
I went to this business [00:37:00] conference with a lot of people who are just completely trailblazing, and this guy was saying, I canceled my ChatGPT account, I only use Perplexity now. In my mind, after playing around with both of them a lot, every single day: he's wrong. Like, I have both, and I'm glad I have both, because Perplexity is extremely good at research. It's not great at writing, even though it's using ChatGPT behind the scenes, and with the pro version of Perplexity you can actually hook it up to Claude or a few different models.
Right? I've tried a few of them. It is very tuned, and it doesn't mix topics well. One specific, exact topic, it's really good at. If you try to mix and mingle a couple of things, mash them up and find trends together, I don't want to say it can't do that, it just doesn't; I've not gotten results with that that seem even reasonable at all. Whereas with ChatGPT, [00:38:00] even though it's not aimed directly at research, I can take some outputs and different stuff and use the tools together, right? I'll say, hey, Perplexity, boom, give me some information, and I'll use that as an input to ChatGPT.
I'll use another Perplexity conversation as another input, right? And I think it really is a mix and match of tools. There isn't a silver bullet out there yet, folks. Well, there are some silver bullets in my mind for certain things, like ElevenLabs for the audio stuff, right?
Text to speech, I think that's pretty much a silver bullet, because they have the API and you can do almost anything with it. And HeyGen for video generation, like video avatar generation, even training your own avatar. There are a few other ones, like InVideo, but there's nothing like HeyGen. It is, in my mind, the one. It just produces such fricking great [00:39:00] results, and they're adding very smart features and integrations. HeyGen now integrates with ElevenLabs, integrates with Canva, products people are already using.
So yeah, AI avatar video generation, just to throw that onto the topics here. We don't have to talk a lot about it, because it's a really simple one: if you're looking to do video avatar generation and scripted videos with an avatar speaking, HeyGen is going to be your product.
And it's kind of expensive, right? If you want to do it at scale, it's not cheap.
Rico: Now, you mentioned tools in there. So why don't we pivot into tools and talk about some of the successes and the failures we've seen. Of course, everybody wants to slap the label on, "it has AI in it," and there's been the hype cycle. We saw one, for instance, what is it, the [00:40:00] Rabbit R1?
Mike: Yeah. We wrote about that, or talked about it, early on in our newsletter. Actually, in our first newsletter we talked about it, because it seemed very promising, right? It's an AI, it's a piece of hardware. We've got it on our biggest fails list, by the way.
Along with some other things. But yeah, the Rabbit: they marketed it as its own LLM, as in they trained it. And what people figured out, some people reverse engineered it, is that it's basically tree logic, branching logic. So it really can only do certain things, and if you go outside those confines, it's always going to use OpenAI. So in other words, people were thinking, oh, it's this little private LLM that's on the device and it's not going out to the big man, OpenAI, and it pretty much always is.
Rico: Of course it did. Yep. And
Mike: FYI.
Rico: I was just telling [00:41:00] Mike before the show here the recent statistic on it, as of about yesterday: 100,000 sold, and only 5,000 currently have active users. So that's not good.
Mike: That's a fail. That is a fail. The other one, while we're talking about it, and we wrote about this twice too, once when it was like, hey, this might be promising, and then, whoa, this is a huge fail: the Humane AI Pin. This pin that you wear, it's got the camera, it's got some different stuff.
They were overheating, all kinds of issues. I think the biggest problem is, just like a lot of these AI tools, even the software-only ones, they're rushing to market, right? They're going for speed over even a viable thing. They're creating solutions that have AI in them because they know they can sell that, rather than doing what we've figured out: hey, you need [00:42:00] to start with a problem.
Always start with a problem. The Rabbit really didn't solve any problem, and the Humane AI Pin didn't really solve any problem either. It's sort of like trying to shoehorn something in because it's got AI in it. Whereas on the flip side of that, right,
you've got the Ray-Ban glasses, or some of these AR glasses that are getting better and better, where I actually see some real use. I could see that just being the thing people do, right? And one of the cool uses I saw was, you can have it snap a shot every so often, and then you can go back and reference it and say, hey, I was walking into an apartment half an hour ago, or sometime earlier today.
I swear, what color was that door? I swear it was like a whitish blue. [00:43:00] I'm looking for that same shade; I want to do the door on my house in that same color. And it'll go back and reference that. I forget what the guy's name is, but he's a mad-scientist-type dude.
He's always wearing his glasses, he wears them everywhere, and he's just documenting his whole life.
Rico: I would do that too, because I could just say, where the hell did I put my AirPods three weeks ago? Like, where the hell are they, right?
Mike: Exactly. You don't even have to have the little AirTag, because you can just say, hey, where's my thing?
Rico: Where the heck did I put them? Yeah. All right.
Mike: What other, so, biggest fails? I'm just going to mention two more. So, Google Gemini, the AI image generation controversy, right? They've relaunched it again, but, you know, Rico, why don't you explain it to the people? You wrote all about this.
Rico: I know.
Mike: I'll let you take this one.
Rico: You're talking about recently? Or, like, my... [00:44:00] I still don't think I've ever gotten that to work for me.
Mike: No? All right, I'll explain it.
Rico: Yeah. Yeah.
Mike: It was kind of like a woke AI, where you'd tell it to generate a picture of the Nazis in World War II, and it would generate, like,
Rico: African American Nazis.
Mike: Not accurate at all, right? Completely not accurate. And so that's a huge issue.
So they shut it down. And now they've relaunched it, but with a stipulation. I just heard this recently: even though they've relaunched it, it won't generate public figures. They've limited it, right? They've neutered it, basically, because it can't do the job correctly.
Another one, this isn't even on our list, but I just thought of it as a biggest fail: AI writing detectors, tools for detecting AI-generated writing. They sucked at it. OpenAI made [00:45:00] one, and they shut that tool down because it was not accurate. So the teachers would be like, oh, you wrote this with AI, and the kid's like, no, I didn't, not at all.
Right. So.
Rico: I've had that happen. I've literally written articles, put them in there, and it's come back and said they were AI-written. It's like, it's not AI-written.
Mike: Yeah, for sure. What about biggest wins? We talked about a few of them, like Runway; I think they had some serious wins this year. Any other ones for you? Things that came out of left field for you?
Rico: Not left field, but I will say, to me, ChatGPT is a win, and for the main reason of taking large data sets and refining them really quickly. That's probably my favorite thing to do with it. It's constantly driving ideas forward, so it definitely is a tool I use quite frequently.
I think it's a win. I think they have a long way to go. [00:46:00] There are a lot of things I would like to see change, in how the company is run and that type of stuff. I'm almost hoping that somebody else comes up with something like it and eventually outdoes them by keeping true to, you know, transparency, letting people see behind the curtain. I think that would go a lot further than the direction they're headed, getting involved with the military complex and all the other stuff, and putting some folks on that board that kind of make you scratch your head.
But yeah, that's my biggest win for the year. I think ChatGPT is still up there for me.
Mike: Let's jump into trends here. We'll wind things down with trends we've seen over this year and trends we see coming. I'll start with multimodal AI. This means AI is able to analyze and generate not only text but [00:47:00] video; it's able to analyze images; it's able to analyze audio. Now, I'm not saying ChatGPT can do all of these things right now, but ChatGPT did release their Omni model, which added a lot more support and improvement for analyzing images and whatnot. And half the time these days, I'll take a quick little snip of something, I'll even snip an interface I'm working on and write "WTF?" on there with a little arrow pointing at something, because it gets the point across, right? It gets the fricking point across. It's like, what are you doing here?
So, multimodal AI, I think that's been a trend over the year, and you're going to see even more of it. I'd love, love, love to be able to do video through an interface. Right now, apparently, Google's Vertex AI with Gemini 1.5 is actually very good at [00:48:00] going through and processing a video and knowing what happened in the video at different parts, which is super kick-ass.
I'm not seeing that with anybody else. So I'd say the canonical choice for computer vision and video would be Vertex AI with Gemini 1.5 Pro. The next one would be agentic AI. I mentioned the Replit AI agents. I'm seeing more and more that these companies are launching products around the idea of agents, which are like a team and not a person. You know, they're not people and they're not teams, but
Rico: They can do stuff,
Mike: they can do things.
They can do things, right. So I'm seeing agentic AI; there have been some serious improvements there. Another one, as you mentioned, is RAG, retrieval-augmented generation, which allows you to do things like build those custom GPTs. When you upload knowledge files, RAG is what's being used to take your prompt and your custom [00:49:00] instructions, retrieve from the knowledge you gave it, determine what's relevant, and then guide the outputs based on that. So RAG has been one of the big things; I see it in just about everything now. I will say there are still some limitations to it on some of these public models.
I was just talking with Tim Hayden about this: it isn't as good as if you were to train your own model, fine-tune it, and build your own little RAG system in there. But not everybody can do that, right? It doesn't make sense for everybody. So I think just having the ability to store knowledge, like NotebookLM, which is a new Google thing, just released probably a week ago. That's kind of similar to a custom GPT, where you build a notebook around a subject; you can upload files to it, you can upload URLs to it, and then you start chatting with your notebook. That idea
Rico: That's gonna be a great feature.
Mike: It heavily uses RAG. [00:50:00] And another one would be open source AI models, right?
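The retrieve-then-guide loop Mike describes can be sketched minimally: score stored knowledge chunks against the question, keep the most relevant, and prepend them to the prompt. This toy version uses naive keyword overlap; real RAG systems use embedding similarity, but the shape of the pipeline is the same:

```python
def retrieve(query, chunks, k=2):
    """Rank knowledge chunks by naive word overlap with the query, keep top k."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, chunks):
    """Prepend retrieved context so the model's answer is grounded in it."""
    context = "\n".join(retrieve(query, chunks))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Returns are accepted within 30 days.",
    "Shipping takes 5 business days.",
    "Support is open weekdays 9-5.",
]
print(build_prompt("how long does shipping take", kb))
```

Uploading knowledge files to a custom GPT or a NotebookLM notebook effectively populates `kb`; the service does the retrieval and prompt assembly behind the scenes on every turn.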
Like, this is something we saw Meta really start leading the way with. They've got the different Llama models; they just released 3.2 yesterday or the other day. Very impressive to see that. And what I love about it is you can take those open source models and run them locally, like I could run one on my machine, or I could run it on something like Hugging Face, which keeps things a little more private than running your stuff through OpenAI.
Or any of those bigger ones, like Anthropic, or Perplexity, which has an API. I feel a little bit safer running my data through something I control as much as possible. And I will say, Hugging Face is one option to host and run these models.
Another one would be [00:51:00] Microsoft Azure AI Studio. That's a wonderful product as well. And so, in my mind, I'm shifting more and more to: hey, let's not build an OpenAI assistant and run everything through OpenAI. Let's actually start to build, refine, and fine-tune our own models around these open source models, and then use those instead, right?
Which is also, I believe, Tim Hayden's strategy.
Rico: That's what I was gonna say, making it more of a sandbox environment, wrapping it so it's safe, it's not connected to anything, and they're not training on your data as you build out whatever you're building, only for it to show up in the next version of ChatGPT, right?
Mike: Exactly. Exactly. Your whole customer list, they're going to... yeah.
Rico: I think one of the other trends we talked about is AI governance, right? It's the trend we knew was going to come, because the guardrails weren't going to be there early on; they just can't [00:52:00] keep up with the technology. We don't want it governed to the point where, again, it neuters the AI.
If you have some bad policies at your company and you're making an AI, what do you get? You get bad outputs. You get images that aren't true to history, that aren't true to whatever the user is trying to create. But there will be AI governance around, let's say, election interference, or some of these political figures they've been putting in compromising positions, whether through generative art or campaign-style videos, audio, that type of thing, which has stirred people up, as we knew it would. Ultimately it's free expression, a way of somebody expressing themselves, but it does have some impact in the real world. So obviously we'll see some AI governance in the near future. You have some states like California going kind of crazy with the idea, if you can imagine, which we wrote about in our recent AI [00:53:00] Bytes newsletter. But yeah, that's another trend coming: AI governance, and seeing what they end up doing with a lot of this stuff.
Mike: Yeah, I think if I was looking at trends for 2025, a lot of it is going to fall around regulation, governance, and transparency. I'm really interested to see what happens with OpenAI, with all the different dramas happening. We just had their CTO resign, and that company's a mess, right?
I do use it, but the reality is I'm always cautious. I don't put all my eggs in their basket. I guess that's a trend I've seen over this past year too: if you put all your eggs in OpenAI's basket and OpenAI goes down, then you're down.
Right? So you have all these wrapper [00:54:00] products, and when OpenAI goes down, which they did at least once this year for an extended, multi-day stretch with a bunch of problems, a bunch of those services had the same problems. Another trend I see is Perplexity continuing to gain market share on Google search, and really just continuing to dominate.
I think they're already at some crazy number, like 10 percent of people doing Perplexity searches over Google searches. It might not be quite that, but it's a surprising amount for the amount of time Perplexity has been around. And I'm seeing them do more advertising.
Because they're starting to, whether they're bringing a ton of revenue in or not, I'm not a hundred percent sure. But I'm seeing them really get aggressive, like, hey, this is our market and we're taking it. And that's not easy to do. I don't think it's going to happen in a day.
But again, it's a trend I see: [00:55:00] I think they're going to gain more market share, because it's such a superior way to do things. It's like the difference between a pager and iMessage or something, right? It's much more efficient.
It's the new way to do things, in my mind. Another trend I've seen this year, and I see even more of it coming, is omni-channel CX. So, products like the one Dave George talked about, and has actually deployed, we're using it at Clarity now, which is their Helios product, a customer engagement platform.
I see this more and more. I think we're going to see tailored experiences where two things happen. One, you as the consumer are going to be able to interface and communicate with me over many channels, which is not brand new, right? That's omni-channel. And [00:56:00] secondarily, the customized experience, tailored to exactly how you want to be interacted with, is only getting more tailored.
We talked about this; that's kind of the goal too, right? I want to deal with people and things that know me. They know me, they show me that they know me, and I'm a happy camper. I want to buy from people that know me. So I think the strategy of being able to target and hone in on your potential customers, exactly how they want to be talked to, and everything else, tailored experiences, that's another one I see just absolutely taking over.
Another one, probably my last one, on trends I see coming has already started; I'm starting to see it, but I don't think it's been implemented widely [00:57:00] at scale: hooking up knowledge bases to chatbots and having it be a really nice experience. Right now it's kind of generic. I think the dream is for it to guide you to exactly what you need to be doing at that moment. And there's one AI tool called Balto AI, which is super impressive. You actually hook it up to your telephony system, and while you're on a phone call it's popping up, saying, hey, did you say this? Oh man, he just talked about whatever, you should say this. Right. And...
Rico: Mm hmm.
Mike: Just giving you super...
Rico: Coaching in real time.
Mike: Yeah, coaching in real time. I think that's another one that's just going to get more and more tuned in, and people are going to either want it or be implementing it. And so that goes into that knowledge base, and really, let's call it your [00:58:00] company's playbook. You want your tools to be integrated with your company's playbook, so that every moment your people are out there representing your company and your brand, they know exactly what to say.
It's all no-brainer moments, and obviously they can improvise, but damn, wouldn't it be nice if you just kind of knew what to do?
Rico: Right. And your new employees knew what to do on day one, right? On the phones, without having to listen for six months, a year, three years to pick up the lingo. Yeah. Absolutely. Well...
Mike: Uh, what else do we have? Rico? Anything?
Rico: No, I think we can wrap it there, if you don't mind.
Mike: You know what? Hey, I think this is the longest 30-minute episode I've seen. So folks, thanks so much for watching. Like I said, we've got some really cool and beneficial episodes coming up. We're going to talk to [00:59:00] Ravi. He's the CEO, founder, and inventor of an AI, actually a device, that sits in your pool, cleans your pool, keeps the levels up, and monitors it. Super kick-ass.
And then we're going to be talking to Dave George as well. He's coming on specifically to talk about Helios, their omni-channel customer engagement platform. So, lots of good stuff coming up. And if you like what you're seeing and hearing, we'd love for you to drop us a like and subscribe, it's in one of these directions, somewhere here; you'll see a little subscribe button on YouTube.
We'd love for you to click that and leave a comment, right? Let us know what you're thinking. What would you like to see [01:00:00] more of? Because we never operate in a bubble, which is why we started having guests on. We want to hear your thoughts, and if you're interested in being a guest, if you have something to say, feel free to reach out and let us know.
Rico: Absolutely. And I just want to mirror that and say thank you, everybody, for this past year. Mike and I have had a lot of fun making this content, learning along the way, and making the contacts we've made. If you, or anybody you know, are involved in a business using AI in some way to 10x your employees' output or your own, please give us a call, or send us a message on LinkedIn and let us know.
And of course, like, subscribe, and comment below. Thank you very much, everybody.
Mike: All right. Thanks everybody. And we'll see you back in the lab soon, folks.
Rico: See ya.
[01:01:00]