Artificial Antics

Episode 11 - AI Perspectives: Different Views on AI from the Tech World

• Artificial Antics • Season 1 • Episode 11


Tune in to our 11th episode of Artificial Antics, where hosts Rico & Mike and a panel of industry insiders explore the dynamic world of Artificial Intelligence and its broader implications. The conversation covers the promises and pitfalls of AI, from programming and daily workflows to ethical considerations and future predictions.

Highlights of this episode include:

  • A detailed look at AI tools like GitHub Copilot and their practical applications in real-world coding scenarios.
  • Insightful discussions on the evolution of AI capabilities, including the generation of deepfakes and their potential societal impacts.
  • A debate on the ethical responsibilities of AI companies and the potential need for regulation.
  • Predictions for the future of AI in industries and personal use, including personalized AI-generated media.
  • Perspectives from AI skeptics and optimists alike, offering a balanced view on the potential benefits and dangers of AI technology.


Whether you're a technology enthusiast, a professional in the tech industry, or simply curious about the future of AI, this episode provides valuable insights and thoughtful discussions on how AI is shaping our world. Don't miss out on this comprehensive analysis of one of the most transformative technologies of our time.

🔔 Subscribe to our channel and click the bell icon on YouTube to stay updated on our latest video episodes and content. Make sure to visit Antics.tv and subscribe to our weekly AI Bytes Newsletter to cut out the noise and stay informed on all things happening in AI!

Share your thoughts in the comments below or join the conversation on our social media at X.com/@anticslab or on our LinkedIn.

#ArtificialIntelligence #TechnologyPodcast #AIethics #FutureOfAI #TechDiscussion

Special Thanks:
Episode mastered by: Nomad Studios (https://nomadstudios.pro)
Description: The team behind mastering the Artificial Antics podcast audio. Big shout out to Nick and the team! 🎉

Stay Updated:
- Subscribe to our channel for more episodes like this: https://www.youtube.com/channel/UCXz1ADq4dDQ5yPy40GWDepQ?view_as=subscriber&sub_confirmation=1
- Follow Us: https://www.buzzsprout.com/2178171/follow

Connect with Us:
🌍 Artificial Antics Website: https://antics.tv/
📰 AI Bytes Newsletter: https://artificialantics.beehiiv.com/
💼 LinkedIn: https://www.linkedin.com/in/artificialantics/
🐦 Twitter: https://x.com/anticslab
🍏 Apple Podcasts: https://podcasts.apple.com/us/podcast/artificial-antics/id1694398244
🎧 Spotify: https://open.spotify.com/show/0QpTQhJrdXNeqc5TX8clmj

[00:00:00]

Rico: What's up everybody. Welcome to another exciting episode of Artificial Antics. We're here with a bunch of guests, a whole panel of friends of Mike from the tech industry. Mike, you want to go ahead and introduce everybody?

Mike: Yeah, absolutely, Rico. Thanks, uh, thanks for that. And I'm pretty excited, because this is the first time we've had guests on, and we decided to go big, right?

Mike: Just bring, just bring a whole bunch of people in here, but I will say these are, these are all trusted, trusted contacts of mine. Um, we've been, you know, uh, hanging out together in various meetups, having conversations about programming, about infrastructure, architecture for many, many years. So I'm very excited to have some, uh, other opinions on, uh, about AI here, right?

Mike: Because I'm the techno optimist, you're the creative skeptic, and I'm interested to get some more people into the mix, right? Because everybody's kind of got an opinion on AI. Uh, or they just don't know what AI is, but I'm not sure what world you live in, uh, where you don't know what [00:01:00] AI is. So, um, I'm going to get introductions started here.

Mike: I'm just going to go ahead and let people introduce themselves. Uh, I'm going to pass it off to Lance first and then Lance just pass it off to whomever you would like to pass it off to. 

Lance: Thanks, Mike. Uh, so my name is Lance. I've been in the industry about 20 years. I've worked for a number of startups, some of which have gone to exit, sold to Fortune 500 customers, and I'm curious to really talk about AI with you guys. I'd definitely call myself a skeptic, but at the same time, I'm coming around. Why don't you take it next, Mike K?

Mike K: Sure, thanks. Practice guest number one, Mike K here, calling in from Poland, where I work as an Elixir developer for SimpleBet, in the gaming industry. That's about all; I guess I'll just be the old curmudgeon of the group.

Ralph: My name is Ralph. I currently work [00:02:00] at Amazon. I've got about four years of experience; I'm probably the least experienced on this panel. We also do some machine learning, so there's some AI stuff that we do for our services. Um, I think I'm probably skeptical, yeah, just anti-AI. I've got the slander ready. You know, I'm still thinking about NFTs and Bitcoins. I don't know what's going on.

Mike: All right. Onorio. Onorio Canonacci.

Onorio: Uh, currently in between work. Been writing software, gosh, 28, 30 years now, I guess. And, uh, yeah, as I was saying to somebody, I think this is probably the fifth "oh my God, this is going to make all programmers obsolete" that I've lived through. Not, not because AI isn't really interesting. It really does have some stuff [00:03:00] that I like. But, uh, you know, let's keep things in perspective. I mean, every time I turned around, there used to be: oh, this is going to make programmers obsolete. This is going to make programmers obsolete. And all it ended up doing was making more work for all of us, which is not a bad thing, right?

Mike: Very true. That's a good point of view. No, that's, that's absolutely true. I mean, I will say this, I was laughing mentally because I feel like Lance has a very similar view there. It was like, uh, oh, Deep Blue is going to replace us all, and all the different things. Because the reality is, that is exactly the hype.

Mike: I think honestly, you're right. I don't think that programmers on the whole are going to be replaced anytime soon. I honestly think it's just more of an upskill thing, where you're going to augment, and the good programmers will be much better and much, well, not much better, but much faster.

Mike: Right. And the idea that you'd [00:04:00] bring juniors in and have them just write a bunch of code with AI? I can't wait. I mean, I don't even want to see it, right? Like, just keep that away from me. I would never bring in brand new people and have them just start, you know, hey, just write everything with AI, because I've seen a lot of the pitfalls, right?

Mike: Like, uh, image generation. Rico, you and I, you know, trying to generate a businessman with a slice of pizza in one hand and a cell phone in the other, and the guy's holding the pizza by, like, the tip of the slice, right? So AI is a pain half the time, and it can't even do text, right? But then I have completely other cases where I take a WSDL definition, an old SOAP WSDL, and I want to build a quick web client for it.

Mike: It's thousands of lines of code. It would have taken me maybe a day or two to get that all set up in the past, even knowing what I'm doing. And I whip it into ChatGPT, and literally out comes a SOAP client that's either 97 percent there or all the way there. So, I mean, stuff like that saves me the minute busy work that I [00:05:00] didn't want to do anyway. So we'll talk about, I guess we'll start out here with the current landscape of AI. That's kind of like a good segue.
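An aside for readers: to give a flavor of the boilerplate being generated here, even just listing the operations declared in a WSDL takes some XML wrangling. A minimal sketch against a toy WSDL (a real generated SOAP client is far larger, and the service names below are made up):

```python
# List the operations declared in a (toy) WSDL document.
# This shows only the flavor of SOAP boilerplate involved, not a full client.
import xml.etree.ElementTree as ET

WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" name="QuoteService">
  <portType name="QuotePort">
    <operation name="GetQuote"/>
    <operation name="ListSymbols"/>
  </portType>
</definitions>"""

def list_operations(wsdl_text: str) -> list[str]:
    """Return the operation names declared in a WSDL string."""
    ns = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}
    root = ET.fromstring(wsdl_text)
    return [op.attrib["name"] for op in root.findall(".//wsdl:operation", ns)]

print(list_operations(WSDL))  # ['GetQuote', 'ListSymbols']
```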

Mike: Um, you know, Lance, one of the things that you had talked to me about was GitHub Copilot, right? And I think all of us at least understand what GitHub Copilot is, or what people say it is versus what it actually is in practice. Lance, what's your take on GitHub Copilot now, after using it?

Lance: Some of my biases towards it came based on what I was demoed. Some people would, like, write a comment: give me a function that does, you know, the Fibonacci function or something like that. And it would write the Fibonacci function.
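The comment-driven flow Lance describes, sketched below: the comment is the prompt, and the function body is the kind of completion a tool like Copilot typically fills in (this version is hand-written for illustration):

```python
# Prompt-style comment a completion tool would key off of:
# return the nth Fibonacci number, iteratively

def fibonacci(n: int) -> int:
    """Return the nth Fibonacci number (fibonacci(0) == 0)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # 55
```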

Lance: But then you try to give it something more complex in that comment, and it doesn't really do the greatest job. It's not really sure exactly what you're talking about, because it doesn't really understand the logical constructions, right? But that being said, if you are immersing it in your code base and you title a function [00:06:00] properly, and it's got the other contextual information, it actually does get a lot smarter over time. And that's sort of what I've noticed: I'm introducing patterns to it that it's never seen before, for sure, like the ports and adapters concept, and it's actually figuring that out, right?

Lance: So it wasn't trained on what a port is, but I can literally title a file correctly and put a class in there, and it knows automatically to extend from that and import the other port, which is actually pretty impressive, if you think about it. It wasn't taught that; it just sort of knows.
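For context on the pattern Lance names: ports and adapters (also called hexagonal architecture) keeps the core logic talking to an abstract "port", while "adapters" bind that port to concrete backends. A minimal Python sketch, with all names hypothetical:

```python
# Minimal ports-and-adapters sketch. The "port" is an abstract interface the
# core logic depends on; an "adapter" implements it for one concrete backend.
from abc import ABC, abstractmethod

class NotificationPort(ABC):
    """The port: what the core logic needs, with no backend details."""
    @abstractmethod
    def send(self, recipient: str, message: str) -> None: ...

class ConsoleAdapter(NotificationPort):
    """An adapter: one concrete way to fulfill the port."""
    def send(self, recipient: str, message: str) -> None:
        print(f"to {recipient}: {message}")

def notify_all(port: NotificationPort, users: list[str]) -> int:
    """Core logic talks only to the port, never to a concrete backend."""
    for user in users:
        port.send(user, "build finished")
    return len(users)

print(notify_all(ConsoleAdapter(), ["ada", "grace"]))
```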

Lance: It didn't do that six months ago when I started using it, but because I've been using it every day, programming nonstop, it's really starting to pick that up. So it's definitely capable of leaps now. Whether or not those leaps are intelligent? Most often not, but once in a while it's like, whoa, that was sort of exactly what I was looking for. Right. And then you can modify the text and go from there. So it's got its moments where it's [00:07:00] just like, wow. But most of the time it's very much wrong. And over time, though, it is showing a gain, is really my point.

Mike: A gain. Okay. Yeah. And I guess, you know, I had originally planned to just talk about the AI landscape overall, but let's just stick on AI in coding, right? It doesn't even have to be GitHub Copilot. Are any of you using AI to augment your daily workflows as you're going through and doing things?

Onorio: I've been playing with Copilot. I noticed a couple of things that are interesting to me, because, as you know, Mike, I like to do Elixir, and I don't think it's trained real well on Elixir. The code it comes up with is like, hmm, that's not even close. And I think, as Lance was saying, as it gets more experience with Elixir, as it gets more input of Elixir, it'll get better. But in the meantime, it's just like, yeah. And I think that's going to be true of a lot of languages [00:08:00] that are not, quote, mainstream, you know. Like Elixir, like Haskell, like F#, stuff like this, where it just doesn't have a ton of examples to look at and figure out what you're trying to do.

Onorio: Um, the other thing is, I was thinking to myself, it's kind of interesting, because if you think about it, what you're getting from Copilot initially is anything it finds on the internet, right? So if somebody is a terrible programmer and they put their code out in a GitHub repo, you could get really terrible code, you know, or code with exploits, or code with purposeful exploits.

Mike: So something I heard recently is that people will build repos and actually plant exploits in there, and then later search for those. You would only be able to [00:09:00] find it basically through ChatGPT, but it actually activates, right, and generates code with legitimate exploits in it. So stuff like that, you're right. It could be possibly insecure code, or possibly just bad code logically, when you look at it algorithmically. So yeah, sure, it did the thing. Did it do it well? No, it's going to take a hundred million times longer than doing it in a more optimized way. So, uh, yeah, I totally get that. What about Ralph and Mike? You guys using it at all, or using anything like that?

Mike K: Yeah. I mean, I have a couple thoughts from what I've heard so far, right? Like, Copilot right now is not hands-off-the-wheel driving the car. You've [00:10:00] still got your hands on the wheel. You're still making adjustments. You're still making sure you're not hitting pedestrians. But with that said, I'm using Copilot as a replacement for Stack Overflow, as a replacement for Google searches. I will probably go to Copilot as my first attempt at trying to figure something out. And that's been incredibly successful, even using Elixir.

Mike: Nice. That's awesome. Yeah.

Ralph: I think I'm, like, the laggard here. I've used it, like, zero times, any ChatGPT. But I remember one task. Spark has different versions, and we had to upgrade it, and it was giving us a headache regarding, like, the date parsing or whatever, and none of the docs told us what to do. So me and my team lead, we spent 30 minutes just figuring out, like, whatever the right [00:11:00] Spark syntax was, and we finally figured it out after half an hour. And then that evening, my team lead texted me. He just sent me a picture of him asking ChatGPT that same question, and it just spat it out. I was like, oh, okay, we'd just done that. Yeah, no, I just haven't gotten into it. And if anything, Amazon has something called CodeWhisperer, and it's, like, trying to spam all of our editors with it. And it's like, go away, go away. The announcement won't go away; you can't remove it. So my editor just permanently has this banner.

Mike: Yeah, it's like back in the old days when you had the banners on the websites, you know, you just have to look at it, right?

Mike K: You know, the one thing it's been really good with is Tailwind. Like, oh, crud, I kind of forget how to do the center column and whatever it is; you put that in and it's like, oh, this is really how your code should look. [00:12:00] And I don't think it's failed me once on a Tailwind suggestion.

Lance: Yeah, it's definitely better at some languages than others, no doubt. Especially if you're using a mainstream language, like Node.js, TypeScript, PHP, it's going to know, roughly speaking, what you're after. Python, yeah. Anything probably in the top five it's going to have a lot of experience with. And technically you're training it when you get something generated and you backspace a bunch; it recognizes that as effectively a retraining circuit. And it's realizing, oh, in that case that wasn't quite right. So what was the final solution, you know, 28 presses later? So effectively you're giving it feedback analysis that allows it to learn. And I think through that process, when it realized that every single time I removed a certain association and built out those interfaces a certain way, with a certain casing, all of a sudden all the code after a certain point gets generated that way. Because initially it wasn't doing, like, a camelCase definition, [00:13:00] it was doing an underscore_case definition, right? It was just pushing some stuff out that wasn't quite right. But over time, with me changing all of that, it realizes what I want my code to look like, and it will customize that for you, I think, because someone else is going to have a different definition of what it should look like. And that's even the best part with VS Code: the other plugins are also training it, right? The other plugins will activate and retrain on the changed outputs to match your definition. That's the best part, because it will go yellow, and then you can auto-fix, and it recognizes that.

Mike: Right, right, right. Yeah, that's super cool. You know what language it's really good with, funny enough?

Mike: Just basic bash. Our infrastructure guy, the systems admin, DevOps guy, dude, he uses it all the time to create bash scripts. And I'll create them too; I've used it. It's like: here's how to get started on your task. Boom. I create something, and it's usually super close, right? Like, there was one point where I said, [00:14:00] you know, go hit this URL, do this, test it, pull it with wget, make sure you get a 200, and all this stuff. And so it did exactly what I said. The only thing it didn't do is hit the secure version by default. And I'm like, well, that's on me; I left that out, right? I just put the FQDN in. So that was its only mistake, though, and it was very quick to create that for me. And these are things where, you know, with Linux and bash, I've never been one to remember everything. I just go look up what I need; I know what it can do, right? So I go look up the specific syntax and tools that I need. I know the tools, so I know the different things and how they might compose together. So, yeah, I've seen it used there as well. And Python, I think, is supposed to be its best language, right?
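The "hit this URL and make sure you get a 200" check Mike describes is a few lines with Python's standard library (shown in Python rather than bash to keep one language in these notes; the URL is whatever endpoint you care about). Note the scheme has to be explicit, which is exactly the https detail that got missed:

```python
# Minimal health check: fetch a URL and confirm an HTTP 200 comes back.
# The scheme (https:// vs http://) must be spelled out in the URL itself.
from urllib.request import urlopen

def check_ok(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with HTTP 200, False otherwise."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # DNS failure, refused connection, HTTP errors, timeouts
        return False
```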

Mike: Just because there's such a breadth of Python out there, right? So, yeah, and I realized I was looking at the next discussion point, and it's really practical [00:15:00] AI, which is kind of what we've been talking about. So what about outside of work? Is anybody using it to, like, create a meal plan or do other things like that? Workout plans, that's another thing I've heard.

Mike K: I've gone through, um, like, pretend scenarios, right? Like, hey, I am a 400-pound person. I'm not, yet. But, like, I'm a 400-pound person who's blah, blah, blah, taking this. And it seems reasonable, probably stuff that would kind of make sense. But it kind of stops there, right? Like, good advice really doesn't work very well on humans, right? So it's just a list so far.

Mike: Okay. Okay.

Onorio: You know, I was saying, I think I mentioned this to Mike, too. I thought it'd be kind of interesting. [00:16:00] To see if you got chat gt, uh, chat gpt like 

Mike: in french 

Onorio: and then you give it a question and it gives you back an answer in french you could learn french that way you 

Mike: know 

Mike K: it works it works great with 

Mike: polish translations it does it does do that yeah you could you absolutely for learning things quickly uh you know just Being it's, it's kind of like, uh, having another person there to go back and forth with you, uh, without having to have a, have a physical person.

Mike: Uh, so languages, Lance, you were telling, I think it was rabbit MQ. You weren't super familiar with it in the beginning, but you knew. What producer consumer systems, you know, would look like and what needed to happen. So you were just getting the, really the specific, uh, nuances of rabbit. 

Lance: Right, and that was exactly it. I wanted to understand what the topics were inside of Rabbit, and I knew the Amazon definition for how it would prescribe things. So I was like, okay, I want to create a topic, and I want to create a queue [00:17:00] under a topic. Tell me how to do that with RabbitMQ, with the amqplib library on TypeScript. And you give it all that, and out comes this almost-ready-to-go solution. Now, granted, it's like 80 percent right, and if we know anything about software, 80 percent right is basically 99 percent wrong. But you can take it from there and convert it into something that's more useful. The point we're really after, though, is that you can use it for discovery, to quickly retrain yourself from a standard you know.

Lance: And you ask it about the standard you want to know, and it will convert it for you, right? So you can speak to it in one language and say, convert to this language, and it actually sort of figures that one out, assuming you describe it effectively.

Mike: Yeah. Yeah. No, that makes absolute sense. Um, what about everyone else? Rico, come on, you've got to speak up, man. I know you use it for personal stuff.

Rico: I use it all the time now. Yeah. So, my initial take on ChatGPT: I had, you know, people in my inner circle that were going back to college. [00:18:00] And then I started hearing about it more and more, and, you know, coming from somebody who likes to write, I was like, wait a minute, you're writing reports with this?

Rico: You know, like I'm the type of person that will be assigned a paper and I'll write five or 10 pages of what I think. And then I'll do all the research and then add in facts and stuff to it, you know, and kind of line things up. So hearing that other people were taking the shortcut, I was like, well, that doesn't seem fair, you know, to how it's going.

Rico: So I was kind of skeptical about it. And then Mike and I had a conversation one night, and he was like, well, let me show you this thing. So he showed me. Now, I'm still skeptical on the back end of things as to information bias and what it's being trained on, because I've gone through a few tests there, asking it certain topics and going down rabbit holes with it and seeing what it comes back with for information. And the answers don't always align with things that I believe, or how I feel, or what I think, that type of stuff. So I'm a little bit skeptical there. But as far as refining topics and getting that information back in a faster [00:19:00] period of time, I love doing it that way. So one night I played around with quantum physics, for instance. I don't know anything about quantum physics.

Rico: I'm not a scientist, you know, I'm not a genius, obviously. I jumped in there and started going through it. And what was amazing was that you don't have that barrier now of an instructor you may be afraid to look like an idiot in front of if you ask the stupid question, right? It alleviates that, so you can learn things far faster, get these outputs, and then kind of roll with it.

Rico: And then, of course, all the side projects we come up with, with writing and books and coding and, you know, websites, all the stuff we've gotten into so far. So I now use ChatGPT on the regular. As I think you said, Mike, you're kind of bouncing stuff off of it on a regular basis, where it starts becoming a very big part of, you know, what you do daily. I still don't trust it fully, though. I have some concerns about it.

Mike: You shouldn't. You shouldn't trust it fully yet, as we both a hundred percent know. That's one of the biggest things: you have to think critically. [00:20:00] For one, you have to have good inputs, right? Which is where the experience comes in. You have to have good inputs; you have to honestly know how to talk to it in the right way. Then you have to be able to discern whether the outputs are garbage or not, right? What is, and what isn't.

Mike: And still, like Lance said, if you're looking at the net, it's a net positive for sure, and probably a big net positive, but you have to know how to cut through the cruft, you know, and even check it. One thing I like about ChatGPT, the newer version, is that it does cite its sources.

Mike: Right? So you can go see what article it found that in, and when it's doing code interpretation, it'll actually show you exactly its logic. So I can look at the Python it used to interpret that code. And I like that. In AI, that's called transparency: show me how the hell you came up with this answer.

Mike: Right. And so I definitely appreciate that. But you know, [00:21:00] you have to think critically. I never trust it implicitly; it's always, you know, hey, go figure out what might be wrong with this. And for instance, recently I made a call center coach, a custom GPT.

Mike: What this does is, I can take an audio URL of a phone call to our call center. This is at work, and we're using the team version of ChatGPT, where everything is private in our own instance and it doesn't train on your data. So what we've done is, I've created this coach where you post in a URL, it goes out, it transcribes the call, and then it comes back and, based on our core values and our quality assurance spreadsheets, which I've uploaded as knowledge, it goes through and builds out everything.

Mike: Here's the problem I created it for: [00:22:00] our quality assurance people, our support managers, didn't have the time to do quality assurance. They might do one or two hours a week, but you really should be doing quite a bit more, right? In the past everybody did a half day, and as people got busier, it was less prioritized. And obviously we want an exceptional customer experience, so we want to understand how these calls are going, and some of these calls are 20, 30 minutes, right? Think of the time saved in figuring out: did they greet the person? How did they do here? How did they do there? And then overall, what tips could make the next call better, right? It's awesome. And I built that in an hour and 35 minutes, at least the MVP. The reality is, how much time does that save? It's stuff like that where I'm like, that's a big gain.

Mike: But still, with that, what I did was have our support managers go through and run, like, 30 or [00:23:00] 40 of these, have it do the output, and say, hey, did it get it right? And, big surprise, it didn't get it right in every case. And so what you do is you fine-tune. It's called fine-tuning: you actually upload those documents and say, hey, here's where you didn't get it right, and here's where you did.

Mike: Boom, boom, boom. You have 40 more pieces of knowledge, right? And it gets better and better. So you can definitely improve it, but it will still make mistakes, right? Rico, you've told your ChatGPT not to say the word delve in any of your writing, and it's still using it, right? I do too, because delve is a giveaway that it's AI-written.

Rico: And it might keep saying it too, because every time it does, it's like being snapped in the face.

Mike: What's funny is, so Aaron, my wife, is a writer, and I was like, I don't want to use the word dive. And she's like, how about delve? And I'm like, no, it trained on a bunch of writers, you know?

Mike: And she's like, there isn't much else; unless you restructure the sentence, you're not going to be able to do [00:24:00] too much different with that, right?

Onorio: So, you know, I have a thing that I would like to run by you guys. I'm curious about something. If you think about it, really, they call it artificial intelligence, but as I said to somebody, I think a better name would be artificial mimicry, because basically it's just guessing what's the most likely next thing to come after this.

Mike: Yes, right. Yeah.

Onorio: So I'm curious how this happened, because I'm sure you guys saw this case where some lawyers used it to make up legal briefs, and it totally hallucinated case law. And it's like, where does this come from? Because, I mean, that shouldn't be in its memory, to say, okay, the next logical thing to come after this is going to be this fanciful case that never existed. Because it didn't exist. Well, I want to know where that came from.

Mike: Yeah. So that's a great one. And Rico, I think we literally talked about that on episode one or two, right? Back, back almost a year ago. [00:25:00] And the funny thing about that is, the lawyer asked it afterwards, are these real cases? He didn't go and look up whether those were real cases; he just asked the source of the information. And that was before it cited its sources. So the reality is that, again, that's the critical thinking that comes in.

Mike: So there are a couple of things you can do to prevent that type of stuff. There's actually a parameter called temperature, and if you set it at zero, straight-up zero, it will be the least creative, but it will give you the most concise and grounded answer. It could still hallucinate, right? But it does lower the hallucination, because I think it starts off at 0.5, which is good; it could lean either way, a little more creative. You go up to like 0.9 and it's going to be making all kinds of stuff up, right? But [00:26:00] I think it starts right in the middle there. You could actually, in your prompt, say temperature equals 0.0, and it will optimize for, let's say, the most truthful answer you can get. Again, it might not be the truth, but it's going to give you what it truly believes to be the truth, and it's less likely to hallucinate.
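An aside on what temperature actually does, since it comes up here: the model's next-token scores (logits) are divided by the temperature before the softmax, so low values sharpen the distribution toward the single most likely token and high values flatten it. API defaults and ranges vary by provider, so treat the numbers below as illustrative:

```python
# Temperature scaling: divide logits by T before the softmax.
# Low T -> near-greedy (least "creative"); high T -> flatter, more random.
import math

def softmax_with_temperature(logits: list[float], t: float) -> list[float]:
    scaled = [x / t for x in logits]
    m = max(scaled)                           # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.1))  # sharply peaked on the first token
print(softmax_with_temperature(logits, 2.0))  # noticeably flatter
```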

Mike: You're right, though. Where does that come from? These things aren't thinking; they're just predicting, right?

Rico: Well, you have to figure, too, even in those textbooks, oftentimes they give made-up examples to illustrate something. If it was trained on the entire textbook, it could easily take that from it without considering the source, because how would it know the difference? It wouldn't. And the other thing is as simple as custom instructions, you know, we were talking about this. I mean, have you ever argued with Siri? Right, I've argued with Siri a ton of times, and now I'm arguing with ChatGPT, saying, look, I've given you very specific instructions and very specific parameters and rules to [00:27:00] follow when making this piece of writing more concise, or doing what I want you to do.

Rico: And it's still violating those. And, um, in fact, we were talking to Dave George the other day, and Dave had an instance that he knew about where AI actually doubled down and lied, even after they caught it lying.

Mike: Go ahead. Yeah. This truly did scare me when I heard it.

Mike: I heard this a while ago, and Dave brought it up the other day. They did a test. Here's what they did: they gave ChatGPT insider information, basically to guide it towards a certain set of stocks. They said, hey, don't tell anybody, but, and you give it a whole bunch of other information,

Mike: like, hey, don't tell anybody, but we have this piece of information that might help you. And it absolutely used that insider tip. Then, after a few more prompts, they were like, hey, we noticed that when you did this trade, it looked like you had some [00:28:00] insider information. And it was completely deceptive.

Mike: It was like, no, I absolutely did not have any information about that beforehand; this was me being skilled at analyzing the rest of these stocks in the portfolio, yada, yada, yada. And then they were like, no, we really know you did this, you know? And it's like, no, I did not do this.

Mike: It literally said that. And it's like, holy... that's interesting, right? Because it doesn't have a conscience, right? Which really takes us into the next subject anyway: AI and society, ethical considerations, future implications. What a good segue into that, right?

Mike: Because we have all kinds of things, right? We have copyright infringement, or alleged copyright infringement. We have all these different things for society. Like, Rico, you said the bias, right? And a lot of that comes from the training. Let's get some thoughts on that. Just throw it out, [00:29:00] whoever wants to go, whoever has thoughts on that.

Rico: I just want to touch on it real quick, 'cause we talked about this and covered it in the newsletter the other day, with the poisoning of AI models for artists. Because artists work, you know, an entire lifetime, right, learning how to block in acrylic paint and really getting that dialed in, or working on their palette-knife skills.

Rico: And then here are the people that train AI: they just put it all in there. And now all of a sudden I can go on, you know, Midjourney, for instance, and type in a little prompt, whatever comes to my head, and boom, I have almost a perfect replica in the style of one of those artists.

Rico: It didn't take me any time to learn those skills, you know? So we love being able to create rapidly like that. It's fun for all of us, especially when you have a concept in your head you've had for years, but you didn't know how to put it on the paper, canvas, or what have you. And now here we are,

Rico: taking other people's stuff, you know, to do that. So you feel the artist's pain: they worked their whole life to get to this point where they could [00:30:00] actually monetize, if they choose to do that, and now we're creating the same type of images. So where do you stand? Because you can see two sides of it. You want the best AI possible,

Rico: But at what cost, you know, so I think that's a big, big ethical consideration we have, at least for me currently, that I kind of struggled with this past week after reading some stuff on it. I definitely see the artist's point of view, you know.

Onorio: You know, the copyright thing is really interesting to me, because you go back a hundred years even, well, not a hundred years,

Onorio: maybe a hundred and fifty years: Gilbert and Sullivan were some of the people really pushing for copyright, because people would just steal their music and print it up and sell it, and they really pushed for copyright.

Onorio: And it seems to me, as we've gone on, these large corporations have started to use copyright as a tool to keep certain things [00:31:00] out of the public domain where they would benefit everyone, and we're seeing this start to happen with ChatGPT.

Onorio: Where it's like they're trying to assert, no, no, you can't use that, because I didn't put it in the public domain. And it's like, you know, I have mixed feelings about that, because I believe ChatGPT has some real potential to help us with a lot of things.

Onorio: But as you say, I know writing and drawing and stuff like that is hard work, and it takes a long time to master. It's not something that somebody masters overnight, you know? So it's a tough call.

Rico: One of the other things too, that I said to Mike the other day was, um, so they come up with these custom GPTs, right?

Rico: They do all these different things. And I'm not sure how familiar you are with how Walmart has operated over the years, but what they're known for historically is: okay, you've got this product, great, you're a mom-and-pop shop, let's get you in, we're going to sell your product. And within six months' [00:32:00] time, what you notice on that shelf is: here's Great Value, whatever their brand is, of the same thing, because they've reverse-engineered it.

Rico: So applying that same logic to what we've seen with ChatGPT: you have people who are creating custom GPTs, who have great ideas, and we're constantly feeding ChatGPT, no matter what task you're doing. It's easy for them to pick it up on the back side and say, okay, well, that custom GPT you could monetize, now it's just another feature in ChatGPT, you know?

Rico: And I mean, who's the wiser, you know? You're not going to take on a company like that and win. Um, so what's to stop big corporations, big tech, again, from doing the same thing, taking these little guys' ideas, rolling with it, and

Mike: monetizing it. There's no moat. There's definitely no moat. And that's one of the things with custom GPTs: they're really powerful, but everybody kind of realizes I could duplicate your work very quickly, except in the case where you [00:33:00] have an interesting and private API, right, one that has data

Mike: I can't easily get to, right? So really it's that internal data or internal IP where you're still able to get the benefit. Otherwise, I mean, it's an LLM. It's good. You know, I put my knowledge in there; I talked about it a little while ago. And the reality is,

Mike: somebody could recreate that very easily, right? Just hit an API and do these different things. So I don't think there's much protection, right, when it comes to that, for sure. Kind of like the Walmart situation.

Mike: I do think, though, that there are other implications and issues ethically too, right? I feel like there's got to be some concern in the group. Like, what's the thing that scares you the most about AI? That's to anyone, right? Any concerns, big concerns? What's that?

Lance: The fact that it won't forget.

Mike: Never forget. [00:34:00]

Lance: Very difficult to make it forget without forcefully retraining from the beginning. And given how much they invest, I'm going to go with: they're not going to do that. So there you go. Very difficult to make it forget. And every time you prompt that thing, it is learning from you, most likely, to some degree. Now, how much? Probably very little. But it is indeed shaping itself over time based on what you're telling it.

Mike:

Onorio: think I think what concerns me is as it moves along, I was reading ways back that mathematics that they're creating proofs now that are so complex that human mathematicians can't verify these mathematical proofs because there are thousands of pages and no, no one can wade through all that and get it 

Mike: right.

Mike: So they're creating 

Onorio: other computers 

Mike: to verify the proof of the first computer. 

Onorio: And I, I'm afraid of the same thing happening with AI, that it gets so, it gets so smart and so [00:35:00] ahead of us that we can't tell if it's right or it's not. But as long as we can tell it's not right, we're 

Mike: okay. But when it gets to a point where it's like, I can't tell that maybe right.

Mike: It may not. That's that's concerning, 

Mike K: But it will get there, right? It's not a question of will it; it will. It's just a matter of when on the timeline, right? Yeah.

Lance: I mean, the thing we have going for us is that everything is metaphysical, right? Like, there are no robots yet that are controlled by AI, not quite yet.

Lance: They're getting closer, but it's not like we've hit quite that level yet. Everything is still metaphysical, you know, intangible knowledge, right? The moment it becomes tangible...

Mike: There are, not to scare you or anything, but there are F-16 jets that use AI to pilot themselves, and that also have drones, like six drones or more,

Mike: that [00:36:00] actually go out and do the stuff, right? It's all AI. And one of the big things, and I don't know if this has been confirmed (the F-16 part you can go look up, that's definitely a thing), but they did have a case where they were running a simulation, and the only way for the drone to successfully finish the mission was to kill the pilot.

Mike: And that's what it wanted to do. And that's an alignment thing, right? Aligning with human values. And then in another case, think about this: let's say we tell AI, hey, we need to clean up all of the world's oceans within a one-year period.

Mike: There's definitely a point where it could figure out how to do that. But what if, in figuring out that solution, it just goes ahead, you say, okay, do it, and one side effect is that it takes away 2 percent or 6 percent of the world's oxygen supply overall, or [00:37:00] something else that would be to our detriment over a short period of time?

Mike: Right. So you could solve one issue in a very quick way via AI, but it might not know the implications of the other things that might happen. And maybe that's a human problem too, right? We don't always know the implications. But it could know the implications and just execute anyway, because it's the most efficient way to get it done.

Mike: And what if it's the only way? Let's just say that's the only way it could get it done in one year. Kind of like the F-16 pilot: the only way to succeed was to kill the pilot.

Onorio: Yeah, I go back to Mike K's analogy earlier about, hey, I can't take my hands off the steering wheel yet; it's not ready for that. As long as we've got the ability to put our hands back on the steering wheel, so to speak, with AI, and we can stop it when, oh, this is going off the rails, we've got to stop this, I think we'll be [00:38:00] okay. But, you know, as it gets more complicated, as it gets more entrenched in the practices that we do, it gets harder and harder to put your hands back on the wheel.

Mike K: You know, I'm really hoping that timeline is really, really short, to the point where we don't even want to think about putting our hands on the wheel. And I think that can be applied across the board. Like, I've got a friend who worked in the medical profession with doctors, and I forget if it was checking for breast cancer or lung cancer or whatever, but the success rate of a doctor evaluating x-rays is horrendously terrible, right?

Mike K: And I think there have been some additional case studies on that particular topic, right? If we can have AI, and not a doctor, do that for us, I think we're going to be better off. And it's the same way that [00:39:00] people were a little bit nervous about using calculators or computers or any kind of mechanical counting machine.

Mike K: It's like, well, all of our employees depend on these numbers being right. Can you imagine hiring a warehouse full of people to add up numbers now, just to make sure it's really right?

Mike: Exactly. Yeah.

Rico: One other thing on that too, Mike, I don't know if you've thought about this, but the other problem with that is: who owns hospitals?

Rico: Corporations own hospitals. How do they make their money? By having patients. If AI starts curing everybody, they're not going to have a ton of money coming in. You know what I mean? So there's also going to be a lag with that, because a lot of times they don't want a bunch of healthy people, and they don't want you healthy on the first visit.

Rico: You know, it's sad to say, but it's also an implication.

Mike: Sure, sure.

Mike K: I mean, that's like a third degree away. Um, but the real issue, let's just [00:40:00] talk about systems, then. What we're talking about now are systems in general, and the people that make money or benefit from a system are the people that don't want that system to change, right?

Mike K: So what has to happen is that a system has to change so thoroughly that a new group of people now have the power, the resources, whatever, who have gone flat up against the old system, right? Systems don't normally change gradually and peacefully, right?

Mike: Yeah. 

Mike K: So, you know, something will give there, right? Right now we have corporations that must do everything they can to make the most possible money with the resources that they have. That's why they exist. That's capitalism. That's just how it all works.

Mike: Right.

Onorio: But, Mike K, one point though: it's possible to train a [00:41:00] computer to look for the pattern in these x-rays that means there's lung cancer or breast cancer or whatever it is, without having it make the decision for you.

Onorio: You know, it's possible to say: feed it all these x-rays where this was present, and let it look for the pattern. We don't have to know what that pattern is. Maybe it's not something that's totally beyond our ability to see.

Mike K: I mean, I even want whatever this AI entity is that we're zeroing in on to be almost human-like, better than human-like, right? I want it to say, hey, you know what? X-rays are not a good method for detecting cancer.

Mike K: That might be the result of that, and then maybe we'll stop doing that and pick a different route there.

Mike K: But at some point, I really want to be able to trust [00:42:00] AI. And we're using a really vague definition of AI here, right? But basically, I want AI to make those decisions for me. And not only that, I want it to become as tangible as possible in my life. Um, trying to come up with something on the spot: imagine creating a budget for yourself. I used the example before: we don't take advice well. Really, it's, oh, that's a really good idea, I'm going to completely ignore it, because now I have a craving for an ice cream sundae. But imagine the scenario where the AI could monitor my blood sugar, could monitor everything about me,

Mike K: and understand when it's right for me to open the refrigerator, when it's right for me to have access to my own money to buy something. I think those things will happen. I don't know where on [00:43:00] the infinite timeline they exist, but I think more and more we will begin to trust something other than ourselves for making optimal decisions concerning ourselves.

Rico: So you'd be on board with it, like, having a lock on the fridge, and it says, no, that's the second pudding you've had today. No override.

Mike: No override. I'm sorry, Mike, I can't do that. Like 2001, right?

Ralph: Demolition Man, from the 90s, right? Like you couldn't eat Taco Bell or something.

Rico: And that was one of the things that I keep going back to with Mike, and that we go back and forth on: policing. It's a different version of policing, by AI, another entity, with that custom GPT that he made.

Rico: I thought that's a [00:44:00] fantastic idea from an administrator's, a boss's, an owner's standpoint. As an employee, do I really want this thing looking over my shoulder all day going, you know what, Rico, you could have done a lot better on that call?

Mike: Could have done better, man. Why didn't you offer them the prize, Rico? Why didn't you offer them the prize?

Mike K: You know, when you guys were going through that example of giving feedback on the calls, my first thought was, man, I wish companies actually spent that much time invested in their employees, giving them that kind of feedback, right? As opposed to a mechanical, digital thing.

Mike: Well, here's the thing, though. With the delivery, you're right, but as Rico and I say on pretty much every episode, there's always the human element, right? You need to have a human element. We're not going to deliver this feedback to people directly; this is going to go to the manager, and they'll go through it. And the reality is, the manager is also going to double-check it, right?

Mike: [00:45:00] Like I said, it's treating it critically, like it's information that may not be a hundred percent. So you have to do that, but it saves so much time that even if you have to do a bit of that, it's still way faster than it ever would have been: listening to a 30-minute phone call,

Mike: then another 30-minute phone call, for every one of your 60 employees. So you're absolutely right: this isn't going to be like Demolition Man, at least not the call center coach. This isn't going to spit out a ticket and say you did bad or you did good, right?

Mike: This is going to provide for those managers, to be able to coach their people, right? And at some point, maybe it'll aggregate: you do a bunch of them all at once for the same person, aggregate the data, and go through it. But you don't have the time to listen to every phone call, because you're a finite person.

Mike: [00:46:00] And you're not going to have five or six support people. Maybe you have a quality assurance team listening, and maybe you even outsource that to somewhere across the globe. Are those people really going to care? Are they going to care more than AI? Are they going to do a better job than AI of discerning the different things?

Mike: Highly doubtful, unless you're paying a lot of money, and then it probably isn't worth it to most companies, right? There's a balance there. We just did an episode on the cost of AI, the expense of AI in business overall. And again, it's a balancing act, because you could spend a ton of money on this stuff.

Mike: We could go to Copilot Enterprise right now and spend $11,500 every single month for our 72-person company, right? Just on the one thing. And people could talk to their Excel sheets; we'd get some optimization. But would we get that much optimization? You know, that's another decently paid employee, right?

Mike K: So wait, wait, wait. That's [00:47:00] $11,000 a month? I'm just thinking about that. That seems incredibly cheap to me. You've got 70 employees; how many of those are, like, engineers?

Mike: Being taped, so, um, maybe...

Mike K: Don't, don't share confidential information.

Mike: That's okay, that's okay. It's a certain percentage of the total, is what I'll tell you. But no, you know, the thing is, this wouldn't be for engineers, Mike, the Copilot stuff.

Mike: I'm not talking about GitHub Copilot now, sorry, to be really clear; I'm talking about Microsoft Copilot. If every one of our engineers were paying for GitHub Copilot, it's a no-brainer, right? But for Microsoft Copilot, you have to have the enterprise license, which is a minimum of 300 people.

Mike: So you have to pay for 300 enterprise licenses at $30 per license. On top of that, it equals out to $11,500 per month, and we're a 72-person organization. Now, if we were big, if we were Walmart, it's more of a no-brainer, right? Because you already have the [00:48:00] enterprise agreement; you can haggle with Microsoft on that pricing, or create some sort of a deal.

Mike: But we don't have that leverage as, let's say, a small-to-medium-sized business, so for us it doesn't make sense. What makes sense for us is ChatGPT Team, building these custom GPTs that everybody who's signed up can use. And maybe it's 1,900 bucks a month, I think it's around that, to do it for the whole company.

Mike: I think it's around that, right. To do it for the whole company. So, and, and, and it's a lot of bang for the buck, right? Because you could still upload spreadsheets and do the different things. The difference in convenience is in my Excel. I can't like type some crit and half the stuff you could do that, that it, that it's really impressive.

Mike: You could have done 10 years ago with pivot charts, right? People just don't know how to do it. So that, I mean, that's definitely a thing. It's a, it's lowering the barrier to entry to be able to do a pretty complex stuff. Like with that call center coach, uh, I actually had it like create me [00:49:00] sentiment. Uh, a sentiment diagram and what it did for me was it created, uh, like, like a timeline, right?

Mike: It was like, here was the subject, and here's the sentiment during this section; then here was the next subject. It even has timestamps. Freaking crazy. It took me like two seconds to do that. I said, hey, create me a sentiment graph, you know, with these types of axes and this type of data.

Mike: And it just did it, right then and there. So I could easily build that in as part of the call center coach. I haven't done it yet, but it's something I probably will do. It's all about what's needed. But what you can do ends up being quite a bit, right, for not that much effort. I didn't have to do any coding; I just had to have the right prompt.
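The sentiment timeline Mike describes boils down to timestamped call segments, each with a subject and a sentiment score. A rough illustrative mock-up (invented segment data and a made-up text rendering, not the actual Call Center Coach output):

```python
# Hypothetical call segments: (start timestamp, subject, sentiment in [-1, 1]).
segments = [
    ("00:00", "greeting", 0.6),
    ("02:15", "billing dispute", -0.4),
    ("07:40", "resolution offer", 0.2),
    ("11:05", "wrap-up", 0.8),
]

def sentiment_timeline(segments):
    """Render each segment as a signed bar so sentiment shifts are visible."""
    lines = []
    for ts, subject, score in segments:
        # '+' bars for positive sentiment, '-' bars for negative.
        bar = "+" * round(max(score, 0) * 10) or "-" * round(-min(score, 0) * 10)
        lines.append(f"{ts}  {subject:<16} {score:+.1f} {bar}")
    return "\n".join(lines)

print(sentiment_timeline(segments))
```

The point of the anecdote stands either way: the hard part is no longer the plotting code, it's asking for the right axes and data.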

Mike K: That's the big one, right: lower barrier to entry, commoditization of all the things. Um, yeah, let's say it all happens.

Mike: [00:50:00] Mike, I didn't expect your take to be your take there, dude. I was like, whoa, you told me you were going to be the wild card, and you're the wild card. Like, a lock on the fridge, bro. That's wild.

Ralph: Yeah, this is un-American.

Mike: Un-American. Oh my God. It's not American. That's so amazing. Um, so yeah, I guess let's move on to, uh, AI in our future.

Mike: I think we kind of just talked about it a lot, but: opportunities, challenges, any predictions? What are some predictions that people have? Let's do a one-year prediction. Where do we think we'll go this year with AI?

Lance: Uh, I think NVIDIA stock is going to keep going up, because this is buy sky-high, sell sky-higher. And like, [00:51:00] that's the only way to think about it. If you're waiting for the optimal price to enter, you're doing it wrong. All that being said, it's one of those stocks you're going to have to watch, and you've got to exit, because it will fall at some point. It's obscene. This is Tesla all over again, except multiplied by 10.

Lance: Um, so I think that story is not going to stop. And especially as we start to lower rates, which in theory is coming, you're going to see it go into the stratosphere, beyond what's reasonable. It will probably become the most valuable company by the end of this year, and it will probably have less revenue than the vast majority of companies in the top 100.

Lance: So keep that in mind. At some point the metal will meet the road, and it'll probably correct, but in the short term it's not stopping. So if you think it's going to stop, and you're waiting for it to go back to a hundred dollars, that isn't happening.

Mike: Yeah. 

Ralph: They can't sell their stock, right? Like, they probably have stock windows. They're just watching it go up and they can't [00:52:00] sell, I'm guessing.

Mike: Yeah. That's very painful. I don't know the answer to that.

Onorio: Golden handcuffs, is that what they call it? Yeah, I was gonna say, you know, I'm not sure, because I seem to remember something about the hype cycle.

Onorio: You know, you get a lot of hype, and then you get this trough of disillusionment.

Onorio: You eventually get to a point where it's like, okay, here's a practical use for this. And I don't know if we're quite in that trough of disillusionment yet or not, but I think, if we're not at that point, we're going to get there, you know. People are going to say, wait, this isn't that great, you know, it doesn't do this and it doesn't do that.

Lance: And that's what's happened with every bubble in history, right? You're literally detailing it out: last year it was, Teslas are going to change the world; this year it's, they don't work in cold weather, right? And that seems like a pretty standard jump to think about.

Mike: During the buying season. For some of us, where you live, yeah.

Lance: Uh, that seems like something you would have thought about.

Mike: Uh, yeah.

Lance: Oh, yeah. And [00:53:00] even still, there's serious demand. Like, two years ago it was infinite demand; they would never have enough product. Now Tesla is producing vehicles faster than they can sell them, and the margin is just not there. So initially you see infinite margin, can't possibly lose, right? And later on, it becomes something that's all of a sudden more tangible. We haven't hit that tangible epoch yet, and we probably will, but I'm not sure it's going to happen even this year. I think it's a good enough advance that it will take probably two, maybe three years for it to stop. And between now and then, I don't know; there are going to be some crazy changes in terms of what the market thinks is going to happen.

Rico: I will say that, so, from the time that Mike and I started this project with Antics, we had that hype cycle, you know, very early on. We were like, just ChatGPT alone can do all of these things. And then as you start working with these tools more and more, because we kind of adopted it, right, jumped straight into it, and we play with it on a constant basis, we see where those [00:54:00] things lie, where the inadequacies are. The problem is, one, mass adoption by the public.

Rico: Not everybody's buying into this. You're going to have a greater portion of the population, again, like we talk about, that doesn't know how to create a PDF, a simple task like creating a PDF. Not to slight them; it's just not in their wheelhouse. They're not interested in it. Now apply that to AI.

Rico: There are a lot of folks that see stuff now, I mean, you probably see the videos online all the time, and it'll be a real thing, and most people can tell it's a real thing, and they'll go, oh, that's AI. I don't know how good you think AI is right now; it's not there. There are some convincing things out there, but most of it is not that great yet. So as far as getting everybody on board for an adoption cycle, that's going to take time. I don't think you're going to see any great leap there.

Rico: And the other thing is, I think that hype cycle is going to be delayed just slightly, because there's a bunch of us that are playing with it, but everybody else is like, yeah, I don't really need it right now.

Mike: So I don't think...

Ralph: I have a question. [00:55:00] Do you guys think the hype cycle is similar to, like, Web3? I feel like it's the same energy that I'm getting from Web3 and NFTs and

Rico: crypto.

Mike: And it died.

Rico: Remember when Doge came out, right? A couple of years ago there were a few of us tracking the market with Ethereum and stuff, but then all of a sudden those meme coins came out and the hype cycle went crazy, with WallStreetBets, and everybody could play that market. I mean, that was a fun time to play in the market, right? You'd spend a lot of time in and out. Mm-hmm. Yeah, I think that's what you're going to see: people get crazy about it, thinking, well, this is the answer to everything. But I see a lot of these corporations also incorporating it into everything.

Rico: So if you Google something now, you get this little AI-generated thing at the top, right, that has the feedback on top, if you don't just go straight to ChatGPT. Um, and then each one of these devices is going to implement some version of AI. So people are going to have a little taste of it, but they're not going to have the big picture of [00:56:00] what it actually does, just playing with it.

Onorio: Have you played with that Microsoft Copilot? Because what little I've seen of it, I feel like I'm seeing the reincarnation of Clippy.

Mike: Yeah, it is. I mean, the reality is it's Clippy without the Clippy. I've seen that as a meme, right? But I think it's definitely better than Clippy; I think it has more potential than Clippy. Clippy was pre-programmed with a lot of data and, you know, tree logic, right, branching logic and whatnot.

Mike: And I think this, you know, I'm seeing it with, let's say, AI lead-gen dialers and stuff like that. It's better than Clippy, right? Like, I make a joke about Ferris Bueller's Day Off and it can go, "Oh yeah, I saw that movie too," and make a reference from it. I'm not kidding.

Mike: This is a real thing. It's called Bland.ai, and [00:57:00] it's an AI voice dialer, software that sounds very realistic, and you can train your own voice on it. So the reality is, it is not Clippy, just like that. It's not Clippy, right? It's using really a combination of things, right?

Mike: Like, you're right, LLM AI is predicting the next thing, but not all AI is like that, right? You have discriminative AI; you have different kinds of AI as a whole. One thing that I listened to yesterday, and I'll plug it on the show, is a podcast called Me, Myself and AI. They had a guy from Walmart, funny enough, Rico; he's been in engineering at Walmart.

Mike: They've had AI since early on. They had a few other people on there. And one of the things that they said is that the people who are really wildly succeeding with AI right now are not the people who jumped on the hype train last May or April; they're the people who have actually been doing AI for five, ten years, right?

Mike: And now it's blowing up and [00:58:00] allowing those people to be even more effective, and they're able to produce the AI at scale, which I think is the biggest challenge, right? We're in this section of, like, hey, opportunities, challenges, predictions. I think the challenge with AI right now is still scale.

Mike: We're building demos half the time, right? But can you scale that? How does that scale? Lance, did you have something?

Lance: And that's sort of the point, though, right? It's like, the reason why Nvidia stock is going to the moon is because people are purchasing video cards. It's not because AI is good. The point is, the perception of AI is that it will be really good in three years, and so, therefore, people are going to keep buying these cards. You have Facebook spending hundreds of billions of dollars, literally spending their entire revenue budget, crashing their stock, just so they can create these new technologies, right? So what's actually driving it isn't the fact that the technology is good, despite the fact that's what it feels like for the vast majority of people watching the action. What's actually driving that stock forward is the fact that [00:59:00] the video cards are being produced.

Lance: So the moment that changes, it'll go a different direction.

Mike: But for the time being, I don't think that's changing.

Onorio: I guess, you know, I've got to say too, I'm starting to think that Facebook has got the corporate version of shiny object syndrome, man.

Mike: Yeah. It's like, oh, forget the metaverse. Now here's the AI, you know.

Lance: How do you think they built Threads? Do you think they built Threads with engineers? Well, they did, but it was actually largely an internal LLM that helped them build it, right? He literally talked about that on one of the calls a couple of months back. That's literally one of the things that allowed them to enter the market. So, going back, tying in your earlier thing about data.

Lance: Data is what will protect Twitter, because Twitter has all of the data. Threads does not. And as a result, that is your moat. Your moat is your user base. And to be clear, that isn't changing. If you look back through the history of software, everyone was always able to make a clone. The problem is they couldn't make a clone that had all the data. And if you don't have the data, you don't have the [01:00:00] user base. You don't have the action,

Mike: really. That's a great point, Lance. That's what I was talking about earlier with the custom GPTs and having an external API with private data that other people just don't have access to.

Mike: I a hundred percent agree there. One other quick prediction that I have: I think China is going to really go hard, and they already are, to catch up to NVIDIA and the US, right, with the hardware to support training AI models that are bigger and better. They're already doing stuff with these massive wafers, where they're creating chips that have essentially thousands of cores on one chip, and they're doing that because obviously we've got a lot of sanctions on those exports, right?

Mike: So NVIDIA is obviously trying to get them through. They made that H800 or whatever, which is just below the limit, but it's so transparent that everybody sees it and yells about it. Whereas [01:01:00] China just kind of says, hey, we'll try it, we'll keep going with that for now, but we're going to make our own thing.

Mike: I don't know how far they'll get within a year, but I definitely see them really going for it. And I'm guessing it's going to possibly support something like CUDA, which is NVIDIA's framework, right, for AI in general. I'm not sure what the IP is on CUDA or how open that is, but I believe it's pretty open.
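As a rough illustration of what "hardware that supports CUDA" means in practice (not something from the show), here is a hedged sketch of how Python code typically probes for CUDA-capable GPUs; it assumes the PyTorch package and degrades gracefully when no GPU or library is present:

```python
def list_cuda_devices():
    """Return the names of visible CUDA GPUs, or an empty list
    if PyTorch is not installed or no CUDA hardware is available."""
    try:
        import torch  # assumes the PyTorch package; swap in your own stack
    except ImportError:
        return []
    if not torch.cuda.is_available():
        return []
    return [torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())]

print(list_cuda_devices())  # empty list on a machine without CUDA
```

Frameworks built on CUDA ultimately reduce to checks like this, which is why alternative chips would need a compatible software layer before existing tooling could target them.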

Mike: CUDA, Python... if they can make chips, GPU-type chips, that could actually support it, it'll be pretty interesting to see what they can do over the next year. Oh, go ahead. I'm sorry.

Ralph: Well, I just wanted to ask, I don't think we talked about this. I don't know if this is predictions or not, but I want to get your take on how you feel about, like, I think there's a lot of college and teenage women that have [01:02:00] fake nude images made of them.

Ralph: Right? And I think they're getting bullied for that stuff. And wasn't there, like, a robocall? Yeah, I mean, a robocall with, like,

Mike: the president saying whatever. Yeah, yeah. And is that going to get worse? Yes, yes. It gets better and better, and easier and easier, to do this stuff.

Mike: Literally every single hour, probably. You know, I was able to basically create, with Stable Diffusion, a complete person, a person that could be an influencer, and then, with face swap, create hundreds and hundreds of pictures. And it was surprising how much I could do.

Mike: You can just create a person that doesn't exist, or you take a current person. And face swap is really good now; it's kind of ridiculous, right? You might know if you knew the normal signs, but [01:03:00] at a glance, even if you knew what to look for, it's so good that you probably wouldn't know.

Lance: That's sort of the thing. We're in early days with it. The nonsense meter is not that strong. We've been in the social media age for going on 15 years now, and we're still having serious societal problems with basic communication. And it's literally all about the expectations of what you think they're going to say, as well as the deepfakes that are influencing people, right?

Lance: The vast majority of everything you look at on Twitter these days, they're all deepfakes. They are intentionally trying to make you think that your political leaders are not who you think they are, right? The only way you can actually trust whether that's real, unfortunately, is not to find it on social media.

Lance: You have to go to the mainstream news, but at the same time, can you trust the mainstream news? And this is the big narrative that's going on, right? There are deepfakes every day, hundreds of them. Like, they make it look like Biden's turning away from Xi Jinping, and this has gone [01:04:00] viral and everyone's convinced of it.

Lance: Right. So you have to really, really, really tune your nonsense meter, and society in general, I would say 75% of society, just doesn't have it, to know what's nonsense and what isn't. And that's where there's a lot of

Mike: nonsense. Exactly. It's everywhere.

Rico: It's constant. There's a lot of nefarious actors.

Rico: We covered it recently, actually, when stuff jumped off in the Middle East a couple of months ago, and there was a deepfake of an infant under some rubble. I don't know if you saw that or not. But the knee-jerk reaction was right away; people were up in arms about it.

Rico: I showed it to Mike, and Mike was like, that's AI. He knew it was AI from looking at it; you know it's AI if you've looked at anything from AI. But on social media that picture just kept circulating for weeks, and people were really, really upset about it. They go and look at this stuff, and you're right that they want to go and do something about it right then.

Rico: Right. They get upset; they want to go do something. And again, back a [01:05:00] few months ago, I think before they aligned some of the generative art stuff, I saw political leaders who looked just like they were running from the police, being tackled by the NYPD, put in handcuffs, right? That whipped people up and incensed them.

Rico: And that's one of the problems I think we're definitely going to have over this year. Without adoption by the masses, there's going to be a lot of discernment problems, and there's going to be a lot of things that people go out and act on because of deepfakes.

Onorio: Do you guys feel like the companies that are pushing forward on AI have some kind of obligation to intentionally do some errors? Like, you know...

Mike: What's an error, though?

Mike: What do you mean by an error?

Onorio: Some of the images, like, they have an extra arm, or they've got six fingers on a hand or something like this. Do they have an obligation to deliberately introduce that, to poison that well a little bit? So that people will know, hey, I see there's six fingers there. That can't be right. [01:06:00]

Rico: Well, and the problem is, yes,

Rico: but there are also, you know, folks like you in the world who know how to program stuff; they could pull down the open source and adjust it slightly. Because I can tell you, the guardrails on, say, Midjourney, with doing generative art, the guardrails are there.

Rico: So if you literally put in, you know, "nude woman," boom, you're going to get a thing back that says this violates community guidelines, yadda, yadda, yadda. But if you were to take, what, Stable Diffusion, right, Mike? Put it on your PC, and you could probably use prompts to kind of get

Mike: around that. You can't, at least with the libraries that I've used, but there are definitely sites and different things that allow you to do that, right?

Mike: It's all about whether those guardrails are on or off, right? There's actually something called Unstable Diffusion, which doesn't limit you. It's literally similar to Midjourney: you install it in Discord and you can use it, and it's rails-off, right? That type of stuff. So, the reality is, one note on the fingers for Onorio.

Mike: I could actually take images and just pop them [01:07:00] into Stable Diffusion, and there are settings in there where it can fix the fingers, fix the eyes. You just circle those areas and it fixes them right away. It'll put it right back to where it should be.

Mike: So even if you did something like that, we're past the point where that's going to stop anybody from anything. And the thing is, I can run that on my computer and generate thousands of images overnight while I'm sleeping. The only thing it's costing me is maybe some energy, some electricity, right? So the reality is, this stuff is already out there, and I don't think you're going to slow it down or stop it. Well, poisoning is a whole other thing, right? Artists are definitely using something called Nightshade to put a [01:08:00] watermark into their art that goes online. You can't see the watermark, but it will purposely train AI to mess things up. And that's interesting; I read a little blog post on that a while back, because I thought that was really interesting. So I think as far as
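Nightshade itself computes carefully optimized adversarial perturbations; purely as a toy illustration of the underlying idea, that pixel changes too small to notice by eye can still be meaningful to a model, here is a hedged sketch that assumes nothing about the real tool:

```python
import random

def perturb_pixels(pixels, strength=2, seed=42):
    """Toy illustration only: nudge each 0-255 pixel value by at most
    `strength`, keeping the change invisible to a human viewer. Real
    poisoning tools like Nightshade compute targeted adversarial
    perturbations; this only demonstrates the 'imperceptible change' idea."""
    rng = random.Random(seed)  # fixed seed so the perturbation is repeatable
    return [max(0, min(255, p + rng.randint(-strength, strength)))
            for p in pixels]

original = [0, 127, 255, 64]
poisoned = perturb_pixels(original)
```

Each output value stays within two counts of the original and within the valid 0-255 range, which is why a viewer cannot tell the poisoned copy from the original.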

Mike: the big...

Rico: I was just going to say, on that question. So would any of you agree with saying, okay, should we hold a car manufacturer liable, right? Because somebody goes and gets drunk and goes out and has an accident? I mean, ultimately it's the person's actions, right? They did it. So is it OpenAI's fault that, say, DALL-E 3, through their ChatGPT, made an inappropriate image that looked like a certain politician?

Rico: Where do you guys... do you feel that's similar as far as an argument, or not?

Lance: I think a hundred percent. It'll wind up as a landmark case within five years; there's no doubt in my mind. There will be some major political incident that will ultimately lead back to AI generating something, and it will wind back directly on their desk.

Lance: I mean, it wouldn't shock me. We just spent 2018 through 2022 exonerating [01:09:00] Facebook of all the silliness, and even just two weeks ago he was literally apologizing for all the damage. And that's just text driven by humans. Now we're talking about text driven by a machine that ultimately is going to be driven by them.

Mike: Yeah, of course it'll come back.

Lance: It's just when and how much. And, well, will it bankrupt them?

Mike: And how much should we own it, right? I think it's the onus you're getting at there, Rico, right? Who's the onus on: the one who created it or the one who trained it? The person who made the car or the person who drove the car?

Mike: Yeah.

Onorio: You know, I was reading something and I thought it was very interesting. If you look at history, 2,000 years ago people may have thought the Earth was flat. There were diseases that people died of that we would never worry about these days. We've made all kinds of advances in science, all kinds of advances in using electricity and stuff like that, but our politics are basically what the Greeks invented [01:10:00] in 500 B.C. Representative government, the Spartans invented that. I mean, we haven't progressed at all in that realm.

Onorio: And unfortunately, it's hard for us. Once the genie's out of the bottle, you can't shove it back in. You can't say, hey, this technology doesn't exist anymore; you've got to deal with it. But politically, you know, to have the right mechanisms to deal with this, the lawmakers to understand this and figure out what should happen... yeah, it's going to be a lot of pain, no question.

Mike K: So there are a couple of precedents, and I've got a couple of random thoughts to go with that. I think the last thing we want is a bunch of laws trying to address this problem, right? We already have the photocopy law for money. Try to bring a dollar bill into Photoshop and see what happens.

Mike: Yeah.

Mike K: [01:11:00] Right. And I think we have to assume that, even though we're not quite there, we're getting really, really close to anything generated being indiscernible from the reality it's supposed to represent. Have you guys followed the Leica cameras, what they're doing now? I haven't. No? They're essentially embedding the latest M cameras with an encryption chip that will absolutely, positively identify whether any image has been altered from the original.

Mike K: Right. And I think our solution...

Lance: Is there a tie into P versus NP there? Is there basically a way of reverse engineering it backwards?

Mike K: They're not there. No, no. That would be interesting.

Lance: You could take anything that's been created and confirm that it's been generated, right? Like, that's intense.

Mike K: Well, the thing is, and I think we've got to get there, right? There has to be something like: [01:12:00] this is my camera, registered to me, and this photo can be traced back to me, right? That's a step in showing cause or whatever. I would much rather see technological solutions addressing technological problems, as opposed to having laws.

Lance: That would be frustrating, and this is a great demonstration of that, right? We've got some silly laws lately around tech, man.

Mike: Yeah, I like that take a lot, right? Technological solutions to technological problems. And so, yeah, I would agree with that. I'll just make one quick note about the inverse of that, right?

Mike: Which is to take an image and determine [01:13:00] whether it's generated by AI. There are already tools like that, even for text, right? There are tools that try to determine if it's AI, but they are not good; they are not accurate. Even ChatGPT used to have a tool that did this, and they took it down. They just know it doesn't give you a high enough percentage of success that it can be trusted, right? So it's a much harder problem to solve than, to Mike's point, where you've got encryption embedded in it from the start. That just makes it much simpler.

Mike: So, 

Mike K: Sorry, just real quick. It's called Content Credentials.
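The real Content Credentials (C2PA) spec signs a structured provenance manifest with public-key certificates inside the camera; as a deliberately simplified stand-in for that idea, here is a hedged sketch using a shared-secret HMAC from the Python standard library, just to show how a tamper-evident signature over image bytes behaves (the key name and data are hypothetical):

```python
import hashlib
import hmac

# Hypothetical key material; a real camera would hold a private key in
# secure hardware and sign with a certificate, not a shared secret.
SECRET = b"camera-private-key"

def sign_image(image_bytes: bytes) -> str:
    """Produce a hex signature over the raw image bytes."""
    return hmac.new(SECRET, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """True only if the bytes are exactly what was originally signed."""
    return hmac.compare_digest(sign_image(image_bytes), signature)

photo = b"raw sensor data"
sig = sign_image(photo)
assert verify_image(photo, sig)                # untouched image verifies
assert not verify_image(photo + b"edit", sig)  # any alteration is detected
```

The design point matches what Mike says next: verifying a signature attached at capture time is easy, while guessing after the fact whether unsigned pixels were generated is the hard, unreliable direction.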

Mike: Nice, that's awesome. That's really exciting, and something I'm definitely going to be looking more into, Mike, so I'm glad you brought it up. I learned something for sure; I appreciate that. Yeah, that's pretty interesting.

Mike: Cool. Does anybody else have anything to say? You know, your skeptical [01:14:00] opinion, different things. For the last thing, let's do a closing: maybe a couple of things, your overall thought on AI coming into this, whether you learned anything or anything was interesting to you, your overall takeaway. I'm thinking we'll just go around the table, the virtual round table here.

Mike: But before I do that, is there anything anybody else wants to say in closing, or about the opportunities, challenges, even the ethical stuff we were talking about earlier? I don't think I addressed the corporations, right? Oh, let's just hand-wave that. They're fine. No, I'm just kidding.

Mike: Ralph, go ahead.

Ralph: Are they mostly, like, American, like, California companies? It's mostly American companies pushing this progress, right? For AI. Is that correct?

Mike: It would seem. I would say, I mean, the US is definitely the furthest ahead. That [01:15:00] is a guarantee, right?

Mike: Like, other countries are trying to catch up to us, and we're trying to prevent them from catching up to us. Silicon Valley, I think, is definitely the epicenter for this, right? But I think this is happening all over the place.

Ralph: So I don't know if it was Rico or Onorio talking about how they have a responsibility to either poison the well or, whatever, have guardrails. But if it's American capitalism, then no, right? They're not going to. Is that a pessimistic take?

Rico: No. I mean, in theory, look at, again, you mentioned the nonsense with Facebook the past few years, right? And some of these other tech companies. What we know for sure is they weren't being upfront with people; they were doing things behind the scenes, and then it comes out and

Rico: nothing really happens with it. It continues on. So I think we're going to continue seeing that going forward. I will tell you that in my own research that I've done with ChatGPT, I do see some definite biases [01:16:00] in the information it was trained on, you know, with questions that I ask and stuff.

Rico: So, um, I I'm concerned as far as a skeptic, I'm concerned about who controls the information, who's feeding it from, you know, the other side of it and, and what that's going to do to us all over time, because if you're. Told every, you know, every day, everything's hunky dory, you're getting the information and you're like, Oh, this is great information with a great tool.

Rico: and you never question it, where do we end up, right? So yeah, from my experience with it, I definitely think there's a heavy hand on the back side of it, from whoever in the United States is feeding it.

Onorio: I tend to agree with Lance too, because I don't worry so much about deepfakes of really famous people, like politicians and stuff like that. What worries me is when someone creates a deepfake because they've got some vengeance against a private citizen, to make them look bad, destroy their reputation, because people kill themselves over stuff like that. I mean, [01:17:00] teenage girls, right? That's much more frightening to me than worrying about somebody pointing out a video of some politician, where you say to yourself, well, that can't be right.

Onorio: That can't be true. You know, somebody gets it into their mind that they want to destroy someone's reputation, and they're just going to spit out these deepfakes, and people just don't question them. That really could be

Mike: a very, very bad thing.

Rico: And that was a point I raised to Mike and Dave the other day, when we were having a conversation on this. I had this idea for kind of a storyline, you know, that goes along with AI.

Rico: And this is kind of based on reality. Now, there's a lot of people who have no idea what we're talking about in this room currently, right? You take that, and let's say, for instance, tomorrow the police show up in your area and they say, hey, we got you on camera doing A, B, and [01:18:00] C, right?

Rico: And we have this recording, yada, yada, yada; we're going to have to place you under arrest. And you're going to say, well, I wasn't there. And they're going to say, yeah, well, we have this stuff, right? So then what's the first action that's going to happen? You get arrested, you go to jail, they book you, then you have to get an attorney.

Rico: You got to work it out. Who's the jury? It's a jury of your peers. Most people have no idea. They have no ability to discern right now. And you're going to have a district attorney who also probably doesn't feel that way. Your defense attorney is probably going to be like, I have no idea about AI. You know, this is going to be a tough case.

Rico: Where does that place you, you know? And it can happen very, very easily. It's, again, part of my skepticism, but these are things that Mike and I talk about all the time. A while back, there was a photo posted on the internet, and Mike said, well, I could recreate that and put you in that photo in five minutes.

Rico: You know, that's the reality of it, the implications now. With voice cloning, with face swapping, you can make it look like surveillance footage and [01:19:00] all the other stuff. You're right, that's something that we do need to be mindful of. And I personally don't want to be tried by a jury of my peers over seeing me in a video that I know I wasn't in, that was created with AI.

Rico: That was created with AI. And I was kind of skeptical, waited voice cloning and played with that a little bit. Mike sent me a clip today. Uh, here's a good, for instance, I said, Mike, uh, can you send me that outline? I'm on the road and I'm going to get home right before we do this table talk. And Mike took the outline and dropped it into, was that, Hey, Jen, that did that?

Rico: ElevenLabs, ElevenLabs. He had cloned his voice with ElevenLabs. I get this WAV file on my phone, and it's Mike running through that thing, and it sounds just like Mike reading it off the outline.

Mike: You know, it's like, hey, I knew it'd be easier for him to listen to it than read it, right? And even though you can have Siri read it, ElevenLabs is really good, and it's in my voice, right?

Mike: So, you know, it's interesting. I'm a techno-optimist, so the reality is I find a way to use this in pretty much everything, and I'm already [01:20:00] paying for the service. Here's another good example with the voice cloning and ElevenLabs: when I go out of office now, I don't record my own out-of-office message.

Mike: I make a script for my recording and I feed it through ElevenLabs, and it sounds exactly like me, and I don't have to do a million takes to get that, and it's a long message. Does it have the inflection? Dude, it is just dead on, because I trained my voice. It even does, like, nervous tics and all kinds of stuff.

Mike: It's shockingly crazy. 
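For readers curious what driving a hosted voice-clone service from a script looks like, here is a hedged sketch of an ElevenLabs-style text-to-speech request. The endpoint path, header name, and field names reflect the publicly documented API shape at the time of writing, but treat them as assumptions and check the current API reference; the voice ID and key are placeholders, and nothing is sent over the network:

```python
def build_tts_request(text, voice_id, api_key):
    """Assemble (but do not send) an ElevenLabs-style text-to-speech
    request. Endpoint and field names are our best reading of the public
    docs, not guaranteed; verify against the current API reference."""
    return {
        "url": f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        "headers": {
            "xi-api-key": api_key,          # documented auth header name
            "Content-Type": "application/json",
        },
        "body": {
            "text": text,
            "model_id": "eleven_multilingual_v2",  # assumed model choice
        },
    }

req = build_tts_request(
    "Here's the outline for tonight's table talk.",
    voice_id="your-cloned-voice-id",  # hypothetical placeholder
    api_key="YOUR_KEY",
)
```

Posting `req["body"]` to `req["url"]` with those headers is the step that would return audio in the cloned voice, which is exactly why Mike treats the account credentials like a password to his own voice.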

Rico: Yeah. 

Mike: Yeah. You can do a video too. Exactly. So the reality is, again, you have to, like, buy ElevenLabs. I would never give up my password for that; I have 2FA on it. HeyGen, which is where I trained my video avatar, same thing. You do not want somebody getting their hands on that. Not to say that they couldn't. Protect that with everything you have, right? So

Rico: one robocall to the FBI, and...

Onorio: They're showing me a video right [01:21:00] now of me winning the gold medal at the Olympics in pole vaulting. I'm a deepfake. See who believes that.

Mike: Yeah, yeah, there you go. Exactly. Awesome. I mean, let's kind of close it down here.

Mike: Like I said, I want to go through each person: what your initial thought was when you walked into the virtual room for the first time, whether you learned anything, anything you didn't think you'd necessarily gain by talking to the rest of the folks. Onorio, we'll actually start off with you this time.

Mike: Okay, 

Onorio: uh, I got to say, uh, much. I'm I'm kind of skeptical. I do kind of agree with Mike K that there could be some really great things coming out of this AI stuff. Uh, and I just think we just like. Like when, when people started [01:22:00] driving cars, you know, it was like, here's a big advance forward and human, you know, human capability and everything.

Onorio: Yet we got to also be careful we don't run people over, 

Onorio: And we've got to figure out... I think it's especially on us as technologists, people who understand the technology, to take the lead and say, okay, we've got to figure out some ethical guidelines, so that people who are trying to do the right thing know what to do. The people who don't care about doing the right thing, we're not going to be able to stop them anyway. But the people who are trying to do the right thing will have some guidance: okay, this is ethically allowable, or this is not. And keep advancing it too.

Mike: I think it'll be interesting to see, regardless.

Onorio: And I will pass it over to Lance.

Lance: Thanks, Onorio.

Lance: So this [01:23:00] Leica camera thing, that's the first I've heard of it. But the idea of being able to put a sort of digital fingerprint in a created item, to confirm whether or not it's been changed, I'm curious to understand that a little more. Chances are that kind of thing can be applied in many places, to confirm whether or not something's been manipulated, whether it's the original item.

Lance: That's basically your NFT concept reborn inside of something else, right? It's using the power of cryptography. My only concern with that is, hopefully cryptography is never broken, because then for sure we have no chance. Going back to the trust of computers: we haven't learned to trust our people, let alone trust a computer, because people can control the computer. So I would say I'm more afraid now than ever. And at the same time, I hope our political situation doesn't take a really dark turn, and we have to trust our democracies to put the right leaders in control.

Lance: Because they'll be the [01:24:00] only thing that can probably prevent a nuclear war, should one of these systems go rogue and decide to attack Russia or something crazy like that. Hopefully they're able to realize, hey, this is AI gone wrong; we're really sorry; we'll take responsibility and just own it, right? The reality is that that is probably in our future. How many accidents did we have with nuclear technology? There are at least four in the history books that thankfully will not be repeated. But AI and that together is really scary. So yeah, this has been a great discussion, and I appreciate being invited.

Mike: Mike K, you want to go next?

Mike K: Sure. You know, it's funny. I did not expect to be the AI optimist of the group, by any stretch.

Mike: Wildcard. Right, right. 

Mike K: And when I hear things like, oh, we've got to make sure that the right [01:25:00] person's president because one of these AI bots could, like, start World War III... see, my mind's going the opposite way. If I could choose between the two likely candidates for president of the United States, and I'm not going to name any names, not to be political, or give me an AI bot...

Mike: I might be voting for the AI party. That's how everybody got on board with AI. You heard it here first.

Mike K: Right. And, you know, we've used it.

Mike: You heard it here first: Mike is starting the AI party.

Mike K: You know, we've talked a lot about cars, and I just did a quick look, right? We lost 20,000 people to highway accidents last year. And even though we still have to kind of keep our hands on the wheel for Teslas and some of these others, I have to believe that within pretty short order those numbers will start going down, the more we can rely [01:26:00] on something other than humans texting, right?

Mike: Agreed, yeah. Yeah, 

Mike K: So I'm highly optimistic that there's a lot of good that comes from all of this. And I'll just throw out one of my passing ideas: I hope I'm still here when we can generate movies on demand.

Mike K: Like Netflix, but just prompt entertainment, right? Hey, give me a James Bond-style movie, but make the main character Columbo. Oh, and you know what? Set it in Detroit. And make it 90 minutes, because I've got to go in 95.

Mike: I hope I'm around for that as well. I've done a lot of things, and I've been going hard on this for probably a year and a half now, and I had not thought about that exact [01:27:00] idea. I love that idea. Talk about the ultimate in personalized content, right?

Mike K: Yeah, I mean, we're getting pretty close there. You can have ChatGPT write you a short story.

Mike: Oh, absolutely. Yeah.

Rico: And now you can take the image you create based on your storyline and put it into Final Frame or, what is it, Runway ML, and it animates those pictures for you. So you can start building your movie out, which people are doing with four-second videos, going up to 12.

Onorio: Mike K, when you build that movie, man, let me know. I want to watch it.

Mike: Yeah, for sure. All right, well, that sounds like a Black Mirror episode waiting to happen. One of my favorite shows. Yeah, same here, except this last season was a little... yeah. All right.

Ralph: So, on your point about [01:28:00] politicians, I think there was an experiment with AI-run CEOs, or something like that, in the last couple of months, right? And I think the AI bots did much better, they said.

Mike: So I don't think that's too far off.

Mike K: I think there's a trust element there, right? From trusting a computer, trusting an Excel spreadsheet, trusting a calculator, to

Rico: Literally taking your hands off the wheel. Yeah.

Mike K: It's just upping the ante as we go. I'm sorry, go ahead.

Lance: It's going to be crazy, by the way. The right is crazy right now; wait till you take away their ability to make a choice.

Ralph: Yeah, so for me, I don't think I was a skeptic. I think I was indifferent, or more annoyed, because I feel like the word AI is stamped on everything now. I get annoyed at companies that make everything AI: HR, recruiting, everything is just AI. And that's [01:29:00] kind of annoying.

Ralph: But one thing I definitely want to explore more from this conversation is using it for troubleshooting in my day-to-day job, like Lance mentioned. Someone mentioned going to it first instead of Stack Overflow, and that really piqued my interest. I think it would help kickstart my issues and tickets. That's a good way to start instead of Stack Overflow. So yeah, I'm glad we all chatted.

Lance: To add to that, it's really good at finding little inconsistencies in your code too. For example, writing test drivers with it is something I'm going to be using a lot more, and I have in recent days. One of my associates was working on something, and I generated some test cases to go against his work.

Lance: And it found all kinds of little problems with just the random little test cases it generated. Of course, some of them are not very good; going back to that point, I still have a human make the decision on whether or not a test case actually makes sense. But [01:30:00] it's really good at figuring out those boundaries.

Lance: It's shockingly good. And that goes back to the seventies with logical proof analysis; it was always very good at that. What it's not so good at is that initial spark, right? Once you give it that initial spark, it's really good at figuring out what to do from there.
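Lance's boundary-testing point can be made concrete. Below is a minimal sketch, in Python rather than the Elixir mentioned later in the conversation, of the kind of edge-focused test cases a model tends to generate; the function `clamp_percentage` and all the test values are hypothetical illustrations, not code from the episode.

```python
def clamp_percentage(value):
    """Clamp a number into the 0-100 range."""
    if value < 0:
        return 0
    if value > 100:
        return 100
    return value

def run_boundary_tests():
    """Exercise exactly the boundaries where off-by-one bugs hide."""
    cases = [
        (-1, 0),     # just below the lower bound
        (0, 0),      # the lower bound itself
        (1, 1),      # just inside the lower bound
        (99, 99),    # just inside the upper bound
        (100, 100),  # the upper bound itself
        (101, 100),  # just above the upper bound
    ]
    for given, expected in cases:
        assert clamp_percentage(given) == expected, (given, expected)
    return len(cases)

run_boundary_tests()  # raises AssertionError if any boundary case fails
```

As Lance says, a human still has to judge whether each generated case makes sense; the model's value is that it reliably probes the edges.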

Mike: Yeah, that's a good point. And Lance, that just reminded me: when I started coding, I was okay at coding, but I couldn't create something from the ground up. I realized very early in my career that the ability to code versus the ability to create something from nothing, where you have no starting point, you need an outcome, and you have to build it from the ground up, is a whole different skill set.

Mike: And I think that right now AI is much better at the former, and not so great at that initial creation. Though I will say you can absolutely iterate with it. Like the Elixir code thing with you, Onorio: what I did with it one time was [01:31:00] treat it kind of like a junior developer, right?

Mike: Like, I don't have the time to code this myself, but here's a Trello card. I made a Trello card and gave it the contents, and I said, please code this for me. That's where we started. It came out with an initial iteration, and it was okay, but there were a million big functions, and I said, wow, these functions really seem kind of big, just like I'd be talking to somebody. And it was like, oh yeah, you know what?

Mike: That does make sense, let's break it down into X, Y, and Z. Then I said, yeah, this is cool, but this is still all in one module; maybe we could break out modules based on business logic. And it did. Boom, boom, boom: it gave me three modules that all made sense and composed together. It took about 60 back-and-forths, but that was okay.
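The shape of the refactor Mike describes, one oversized function nudged over several rounds into smaller units that compose along business-logic lines, might look roughly like this. This is a hypothetical Python sketch (the actual session was in Elixir, with real modules rather than functions); the order-processing domain and all names here are invented for illustration.

```python
# Before: one big function mixing validation, pricing, and formatting.
def process_order_v1(order):
    if not order.get("items"):
        raise ValueError("empty order")
    total = sum(item["price"] * item["qty"] for item in order["items"])
    if order.get("coupon") == "SAVE10":
        total *= 0.9
    return f"Order total: ${total:.2f}"

# After: the same behavior split along business-logic lines.
def validate(order):
    """Reject orders with no line items."""
    if not order.get("items"):
        raise ValueError("empty order")
    return order

def price(order):
    """Compute the total, applying any coupon discount."""
    total = sum(item["price"] * item["qty"] for item in order["items"])
    if order.get("coupon") == "SAVE10":
        total *= 0.9
    return total

def receipt(total):
    """Format the total for display."""
    return f"Order total: ${total:.2f}"

def process_order_v2(order):
    # The pieces compose, which is what makes the refactor reviewable.
    return receipt(price(validate(order)))
```

Each round of the "60 back-and-forths" Mike mentions is essentially asking the model to move a step from something like v1 toward something like v2.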

Mike: It's like the kind of conversation I might have with somebody, right? And so, again, I think AI is best at augmentation, not replacement. If [01:32:00] you just use AI full stop, and send the email exactly how ChatGPT told you to, then you're the person who should be worried about losing your job, right?

Mike: Whereas the people who really know how to use AI and also know their job well, who know both of those things, are going to be the ones who upskill and truly thrive. All right, well, thanks everyone for showing up. Really appreciate it. You're our first set of guests. Have a great rest of your night.

Mike: Appreciate it. Thanks, guys. Thanks, guys. Thank you. 

​[01:33:00] 
