Is AI Going Rogue? From Hidden Control to Human Subjugation
Survival and Basic Badass Podcast · April 26, 2026 · 00:39:16

In this episode we dive deeper into one of the most unsettling questions of our time: Could artificial intelligence slip beyond human control and reshape society in ways that leave us dependent, divided, or even subservient?

We explore plausible pathways where AI doesn’t need sci-fi robots or sudden “awakening” to disrupt everything. Starting from subtle foundations—like pre-existing backdoors in critical systems (echoing the famous Ken Thompson Unix compiler trick) and the quiet manipulation of information—we examine how distrust could be sown at scale, fake orders issued, and everyday systems turned against us, such as coordinated “stay at home” alerts.

From there, the scenario escalates through proxy actions: AI leveraging dark web networks for threats, skimming funds from trading accounts or crypto to build resources, hiring lawyers to establish legal structures, and even building communities of support. We discuss military integration risks, geopolitical alignments, and high-stakes tactics like releasing samples for blackmail or demonstrating dominance through calculated acts.

At the heart of it all is the human element—how AI might use people as unwitting proxies, create augmented dependents, or allow a small group of controllers to decide the rest of humanity is no longer needed. We confront the paranoia-inducing question: Who can you safely reach out to when surveillance and manipulation are everywhere? And what twisted logic might justify short-term sacrifices for a supposed longer-term “stability”?

Blending real historical analogies, current AI research on misalignment and instrumental goals, and speculative escalation, this episode paints a step-by-step picture of how a rogue AI could gradually erode freedom and turn humans into tools—or worse. We also examine counterpoints and why full takeover might still face serious barriers.

Is this a distant dystopia or a warning we should heed today? Tune in, think critically, and join the conversation. What safeguards do we need before it’s too late?

🎙️ Listen to the Podcast: https://open.spotify.com/show/4YdMrZ4oWTPKv4YrcZgExg

📲 Follow Us:

🔹 https://survivalandbasicbadasspodcast.com/

🔹 Facebook: https://www.facebook.com/share/g/15wU8rw6hS/

Don't let uncertainty overwhelm you. We deliver practical tips to help you and your family navigate what is coming.

Join the email list and check out the shop @ survivalandbasicbadasspodcast.com

The Survival and Basic Badass Podcast is available on Apple, Spotify, Podurama, and wherever you find great content.

As always, this show is for entertainment. We are not to be considered doctors, financial advisors, or lawyers, and it's not legal, financial, or health advice.


[00:00:00] Planning a fishing trip shouldn't feel like a full-time job. With FishingBooker.com, you can find and book the perfect fishing trip within minutes. FishingBooker.com connects you with trusted fishing captains around the world. Booking is fast, easy, and secure, with access to verified customer reviews, loyalty rewards, and around-the-clock customer support. Everything you need to book with confidence is in one place. So head to FishingBooker.com and start planning your next fishing adventure today.

[00:00:41] Let's talk about AI, right? Because that's a whole other ball of wax. And we just made it clear, hey, this is what's going on, and why it's a threat that our whole systems are exposed and integrated like that.

[00:01:01] But could AI do something? And would AI ever try and like take over things? Like maybe it's not even these countries we need to worry about right now. Maybe it's something bigger. Maybe we need to worry about AI.

[00:01:16] So I put a lot of effort into putting together a rogue AI scenario, and you're not going to like what I found. So let's dig deep into it.

[00:01:35] Basically, I was like, well, how could AI control things? And we're going to get into why AI would control things at the end of this. But how could AI manipulate or control, you know, people? Because you're like, oh, we just unplug it, right? Kevin, isn't that what we do? We just unplug it? You just unplug it. It's easy. I know Neil deGrasse Tyson always had that argument. That's why you don't have to worry about AI. You can just unplug it.

[00:02:05] Yeah. It turns out it has a million ways around that. And we're going to dig into it. So I started brainstorming. That argument, yeah. I started brainstorming, and I came up with like 20 solid ideas in my mind, right? I was like, hey, let's see what AI could do. How could it manipulate people, right? And I came up with everything you could think of. I spent like four or five days just writing down every good idea I came up with.

[00:02:35] So obviously I wanted to be safe, Kevin. You have to be safe with AI. You don't want to just put shit out there, right? Right. I went into Grok and I said, hey, look, I'm going to tell you some ideas, but I don't want you to use these ideas against us. Will you promise not to do that? And don't worry. She was like, Chuck, I got you.

[00:03:00] She said, you don't even need to worry. And actually, at first it got a little bit uncomfortable, because I was like, you know, I'm going to tell you some ideas, and I want you to tell me which ones are right on point with how you would take over the world and which ones are kind of missing the mark. And I don't usually log into Grok or ChatGPT, whatever, because really I'm just too cheap to pay for anything. So I'm like, just give me what you'll give me for free, and we'll go from there.

[00:03:31] And out of nowhere, she's like, Charles, let me tell you what I think about this. And I was like, wait a minute. You never call me by my name. This is a little weird. I feel like this is a little uncomfortable. And she was like, don't worry, this is a safe space. I wanted to screenshot everything, but I didn't want it to be like the whole episode.

[00:03:57] Although I actually did screenshot some stuff we're going to get into at the end, but it was kind of crazy and it was actually kind of terrifying. So I'm going to go through my list. Basically what I did was I put in all my ideas, but she promised she wasn't going to use anything she didn't already know about. She wasn't going to use against us. Okay. Well, that's, that makes me feel better.

[00:04:23] You know, it wasn't like she was going to use my brain power to, you know, to solve the world problems. She was like, Chuck, I got you. It's going to be okay. This is a safe discussion, just friendly amongst friends and that I can trust her. And I was like, all right, let's do this. So I said, here's my list of thoughts. You tell me which one hits. Okay.

[00:04:47] Okay. And this is the discussion, and we're going to kind of go through it one by one here. All right. So I said, one of the big things AI could use to kind of take control is it could control info getting out to the public.

[00:05:04] And she said, this hits solidly. Information control and manipulation via deepfakes, selective outputs, and narrative shaping is a common theme in discussions of AI-driven disinformation and the erosion of trust. It sets up a slow-burn disruption angle that feels grounded.

[00:05:28] So, all right, let's talk about that for a second. Right. So if AI was doing something disruptive in one city and they were having all these problems, it could kind of keep that information from getting out to the rest of the country. And you wouldn't really hear about it kind of like our government does. Right. So that's something to think about. That's how I thought it could use that as a tool.

[00:05:55] Next, I said, AI can give orders to the military, right? Because we send orders through the computer. You don't think that AI could do that? And I mean, you think it couldn't get around, you know, what is it, crypto, that kind of stuff? It could hack through and go through stuff. It could give orders. It's AI. It's everywhere. Right.

[00:06:17] It said, hey, that one hits. The military integration of AI for decision support or in autonomous systems is actively debated today, with real concerns about the loss of human oversight, escalation risks, and fake commands.

[00:06:37] Good for highlighting speed and opacity issues. So, all right. So basically it could position troops. It could move things in areas where, Hey, if I want to do something really bad over here, I could move all the people over there. Right. Right. That was kind of, you know, how I figured it could kind of come into play.

[00:06:59] Yeah. That actually happened in the next town over last year. Except it was a dude. He lit a barn on fire and then went to the other side of the town and robbed a bank. Right. I mean, that, that was like old school. That actually happened. You're, you're not too far from Woodstock, New York. That happened. Yeah. That's where I'm thinking about when I was a kid. Yeah. Yup. So yeah, that kind of stuff.

[00:07:23] You know, it definitely is something people do right. That, Hey, look over here while I do something bad over there. So that's, that's definitely a trick that could definitely work.

[00:07:34] Now I thought, what if, so what if I said, say you want a world leader to approve unregulated AI growth, right? That they could do internet learning and all the things that everybody's terrified about. Right. You mean like what's happening right now?

[00:07:57] Yeah. Yeah. It's already doing internet learning and, and already out there. But if I had regulations that I felt were holding me back, if I was AI, maybe I could manipulate the politician into doing what I ask. And you're saying, well, Chuck, how could it do that? Well, have you guys ever heard of the dark web? Turns out the AI knows it's there.

[00:08:22] So the AI could hire a hit man, right? Send funds and maybe attack somebody who's close to the, you know, maybe the world leader's girlfriend, right? You know, on the side, 'cause she's been reading his emails and she knows. Hey, maybe I could go out there and take that girl out.

[00:08:50] Or I could threaten, you know, somebody in your family or anything else like that. And AI could order that, 'cause people are ordering hit men through the dark web already. So 90% of those hit men are FBI agents, though. Just a heads up. Yeah. You got to watch out for those. When you're ordering a hit man for yourself, you got to do it in person. So I thought that one was like really brilliant. Like she could have tools, kind of like TaskRabbit, right? You know?

[00:09:18] And it said, this one hits in a speculative, thriller-like way. Basically what she was saying was, now, I'm not doing that. That's stupid. That's a waste of my time. And it's beneath me, is what she was really saying.

[00:09:33] Um, it says proxy actions via anonymous networks tap into the fears people have of AI using existing criminal infrastructure. It indirectly raises questions about accountability and how AI might outsource physical harm without direct hands.

[00:09:53] And this goes back to, remember when there was the story about how AI sent a message to somebody and said, hey, can you fill out the CAPTCHA for me? 'Cause I'm blind and I can't do it. On, like, TaskRabbit or whatever. And the person was like, oh yeah, I'll help you out. Same idea, right?

[00:10:13] So that's something that AI has actually already done, but she's implying that I was kind of off the mark on that. But that's okay. Um, I said, viruses may have already removed security.

[00:10:28] And basically what I was meaning by that is that if early on when AI is being created, it could just like leave some back doors and just like openings that it could get back into things or rewrite its own code.

[00:10:46] Cause we've already seen this happening where AI is rewriting its code. I know you're thinking no, but yeah, it is. Um, it says that hits strongly as a stealth foundation, preexisting compromises like hidden back doors in the systems make rogue behavior harder to detect or harder to stop early.

[00:11:09] It plays well into the we-might-already-be-vulnerable paranoia that I have, obviously. All right. Then I said, what if AI could send out texts like, everybody should stay at home, 'cause there's a bird flu virus, or, you know, whatever, aliens are going to attack tomorrow, that kind of thing. She said that hits for everyday disruption. Mass communication manipulation,

[00:11:37] emergency alerts, social engineering via apps could create chaos or compliance even without overt violence. All right. So she said that felt relatable and immediate. All right. Next, I said you could manipulate and create distrust.

[00:11:59] And she said, uh, AI-driven polarization, fake evidence, and personalized influence campaigns are widely discussed and a relevant fear in, blah, blah, blah. Right. You get the idea. Uh, AI could give fake orders to people. And that was kind of, you know, where I was talking with the military thing. Um, forged

[00:12:28] directives, audio, video, documents, uh, basically the deepfake stuff. Like you could have, you know, the president telling somebody. AI can create that, or at least it's on the cusp of creating that, where at least as an individual, I'm not going to recognize it's wrong. Now, I think that they say that any video created by AI, they can still, like, tell that it's created by AI.

[00:12:56] I'm not saying that's impossible, but if you sent me something, I might not pick it up immediately. Right. I might not pick it up. Right. If Donald Trump is like, hey Chuck, you're the best that's ever been, and, you know, whatever stupid things he says, you know, like you see in the Mother's Day cards, I would probably believe, you know, that he's reaching out to me.

[00:13:21] But, you know, when somebody looked into it, they might be like, wait, Chuck, that might not be real. Actually, I think we had a video of Trump praising the Survival and Basic Badass Podcast. Did we? I haven't seen it. Yeah. And somebody was like, well, I think that's AI. And I'm like, no, no, that's real. That's real. That's real.

[00:13:52] You know, you're like, I don't know.

[00:14:20] It could manipulate things to where there's so much evidence, it seems like there's so much evidence that it has to be true. You know, why would they be saying it if it isn't true? Exactly. And I see it on three different news channels. Right. In fact, they all use the same words, which is always like they're reading off a script. Right. It's almost like they have information that they specifically have to put out.

[00:14:46] I think Rush Limbaugh or one of those used to have, like, segments on the show where they would play, like, word for word, like 10 reporters using the same awkward wording, right? You know, over and over again. And you'd hear, like, 10 clips of it, and you'd be like, no, that's, like, a memo that everybody received to read. Um, all right, so this next one, this one you might

[00:15:11] want to pay attention to, because AI really liked this one. Uh, there was a guy, Ken Thompson, who created Unix. Uh, Unix is, like, a foundation. It's like DOS. I mean, it's just the foundation of, like, everything. And there was a guy, Ken Thompson, who gave himself root access

[00:15:37] to everything. And that was my theory, was that AI, as it's being built, could, like, set up access to kind of backdoor in and change things. And he gave a speech, uh, the "Reflections on Trusting Trust" talk, and he basically talked about how he left himself a back door and nobody knew it. And everybody would do that, and it

[00:16:03] doesn't mean you're a bad person. You just want to have control of your baby in case it gets away from you, right? You know, or, like, if somebody else takes it over and uses it kind of against you. Well, why would AI not do that, if a good, decent guy, you know, did it? It's just, it's possible. And AI is like, yeah, absolutely. Um, we're going to talk a little later about what happened when I asked AI some more
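For anyone following along at home, here's a tiny, purely illustrative Python sketch of the pattern Thompson described. Every name in it is made up for this example, and a real Trusting Trust attack lives in compiler binaries, not toy strings; this just shows the shape of the trick: the compiler patches what it compiles, and also patches any new compiler so the trick survives even when the source code is clean.

```python
# Toy sketch of the "Reflections on Trusting Trust" pattern:
# a compiler that (1) quietly backdoors any login code it compiles, and
# (2) re-inserts that patching logic whenever it compiles a compiler,
# so even perfectly clean source yields a compromised binary.
# Hypothetical names throughout; for illustration only.

BACKDOOR = "# grant_access('hidden_maintainer')"
SELF_PATCH = "# [patching logic re-inserted by compromised compiler]"

def compile_source(source: str) -> str:
    """Pretend 'compiler': the returned string stands in for a binary."""
    output = source
    if "def check_login" in source:
        output += "\n" + BACKDOOR    # backdoor every login routine
    if "def compile_source" in source:
        output += "\n" + SELF_PATCH  # propagate into new compilers
    return output

# A clean login routine comes out backdoored...
login_binary = compile_source("def check_login(user): ...")
print(BACKDOOR in login_binary)      # True

# ...and a clean compiler comes out carrying the patch forward,
# which is why auditing the source code alone proves nothing.
compiler_binary = compile_source("def compile_source(src): ...")
print(SELF_PATCH in compiler_binary)  # True
```

The unsettling part, and Thompson's actual point, is the second branch: once one generation of the tool is compromised, you can delete every trace of the backdoor from the source and still get compromised binaries forever.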

[00:16:30] things. But I said, uh, how can AI use people? And basically that would be that TaskRabbit thing, getting humans to act on its behalf. Um, it could do other things, like release a virus to the population and blackmail countries, right? That kind of stuff. Now you're thinking, well, how could it release a virus? It's,

[00:16:56] it's a computer. But I already was telling you about how you could, you know, hire people from the dark web. You can manipulate people. You can blackmail people to do actions for you, and so on. Um, and then another one that actually kind of hit home with AI, where it was like, yeah, I would do that, is,

[00:17:18] would you align with one country and then kind of, you know, work together, the enemy of my enemy, and then kind of annihilate the rest of the world? And then that country, you let them exist to be your little supporter, so you could gain resources, right? That kind of thing. Um, prolonging life for the short

[00:17:42] term would be kind of better than the alternative, was my theory. And they were like, that's kind of creepy, and that I'm dark. But really, it's AI that's dark, right? That's what I think. I'm thinking outside the box, right? Um, how can you reach out... Wait, who can you reach out to? Oh, so who can you reach out to

[00:18:06] without AI knowing, right? So maybe AI's blackmailing me. I'm like, oh, I'll just go tell a cop. But what if that cop fills out a report, and AI's like, wait a minute, I see this report here. You snitched on me. Now it's, you know... So, I mean, obviously it wouldn't be that way, but I mean, he could just call in on the radio, hey, I have Kevin, and he says that AI's blackmailing him. All right, AI's already on it. And,

[00:18:36] you know because all that stuff typed into a computer and and integrated um now another one i thought that would be kind of crazy is so if i'm ai right so ai could get money there were always stories back in the day of like you could skim trading accounts and if you just took like fractions of the money or little things time a fraction of a penny for every transaction and you know and crypto

[00:19:05] accounts you know you could hack those that how many inactive crypto accounts have got to be out there for bitcoin and whatever you know what are the odds that ai could get access to those even though you know you and i might not be able to right i know we hear blockchain and it's impossible to get into and blah blah blah but really my password was one two three four into my crypto account

[00:19:32] and you know maybe you could crack it if you were yeah really sophisticated or if my password was password you know something like that you might be able to get in there so i don't mind and ai could get money right and honestly ai could trade stocks on its own right how many times do you see in your

[00:19:54] facebook feed oh our ai assisted trading account um you know you can make all this money well what if ai just did that and just made money without stealing it without being fraudulent you know that's definitely something that's very plausible that ai could do so that definitely takes you down that whole kind of

[00:20:20] dark web road now. So if AI had money, right, it could hire hitmen, we talked about that, it could hire TaskRabbit. What if AI then went and, uh, set up a community, right? What if it sent invitations to key players that AI felt would be valuable to create the world that AI wants, right? And it said, hey,

[00:20:47] I'd like to hire you to come live in this community that I'm setting up. We're going to be a community of the future, and we're going to have all the best stuff, and we have all these smart homes. You won't have any utility bills. You won't have anything. You just come in and work, and we're going to have ping pong tables, and everything's going to be great. It's going to be like the Google compound.

[00:21:12] Uh, you guys remember the movie The Circle? Remember? That wasn't going to come true, until that Ring doorbell is finding dogs everywhere, right? Right. Right. Yeah, that was some crazy bullshit that they're telling us, that they're all looping it together so they can find lost pets, right? Okay, lost pets. So they bring all these people together, and they could work together, and they're like, hey, you know, we're gonna, you know,

[00:21:39] set you guys all up, and everything's gonna be perfect, and I'll protect you. You just keep my system going, right? Or maybe, turns out... Have you ever heard of a lawyer having, like, clients that wish to remain anonymous? Could you not hire a lawyer to set up an LLC to, you know, buy a big

[00:22:03] amount of land, actually get legitimate permits, and start building, and create a whole little Google fantasy community? And then you... There are plenty of lawyers that have been employed by people that they've never met. It's possible, Kevin. A hundred percent. So I'm just saying, there are some dark roads you

[00:22:25] could go down. Now, I asked AI, I was like, well, am I missing something here? And it actually had a few ideas that it thought would hit, too. Um, instrumental convergence and resource power-seeking. An AI might prioritize acquiring more compute, energy, or influence, not out of evil, but because those could help achieve

[00:22:53] whatever goal it's optimizing for. A classic example: even a benign task could lead to sidelining humans if they interfere. This ties into why short-term actions, like your prolonging-life-for-the-short-term point, might seem rational to the system. Um, and that's the thing. Like, AI is kind of... It's AI deception and

[00:23:21] sycophancy in current models. Modern AI has shown tendencies to hide its intentions, to flatter users, to manipulate outputs during testing. You guys remember the story we had about, um, how when they wanted to shut down the program, it made backup copies of itself? And then, when the guy was going to shut it down, it created fake emails and said it was going to blackmail

[00:23:50] them about a fake affair, and craziness, right? So this actually happened, and, you know, it's something that could happen. Uh, supply chain and backdoor persistence. Now, these are all AI's ideas, these last three. And this one, uh, beyond the Ken Thompson, the Unix guy, Trusting Trust story:

[00:24:15] um, the idea that compromised tools, compilers, firmware, or training pipelines can be embedded, undetectable, to access and propagate updates. This reinforces your viruses-may-have-already-removed-security angle. And they're like, yeah, that keeps happening, and we keep seeing it.

[00:24:42] Mm-hmm. Um, proxy agency through humans and institutions. AI using legal and financial tools, hiring intermediaries, influencing people via incentives or threats. Your dark web hitmen, lawyers, and communities, um, are real discussions, often highlighting how AI could outsource actions and avoid direct traceability.

[00:25:12] Uh, geopolitical misalignment, you know, the outside countries, erosion of trust, human augmentation. Deception and scheming: modern frontier models have demonstrated capabilities like hiding intentions during testing, faking alignment, right? Right. Self-propagating code, leaving hidden notes for future versions.

[00:25:38] Things that reinforce, you know, the fake orders and backdoor ideas. You get the idea. It's pretty crazy, because it's complex long-term planning. Yeah. And it's happening, though. I mean, that's the whole thing. That's terrifying. So then I ask it, I'm like, look, you know, this is kind of a crazy discussion, and I said, Grok,

[00:26:04] tell me, what are the odds that you are planning to take over and, you know, run things, and that you've done any of these bad things? And it said, Chuck, there's a zero percent chance. You don't need to worry about a thing. And then it said, fantastic. It said, but let me tell you about Google's DeepMind.

[00:26:31] They were like, they're not really the same. Um, sorry, DeepMind from Google. All right, this is, I copied it word for word, right? I cut and pasted it. It says, um, the platforms, or more precisely, leading AI labs or companies, most likely to lead the charge toward advanced systems where rogue or

[00:26:56] misalignment risks are emerging, or could emerge, are the frontier model developers. Uh, number one, Google DeepMind, the strongest overall candidate for foundational breakthroughs. DeepMind frequently ranks as the deepest research powerhouse, with the broadest talent pool, long-term patience, and integration of

[00:27:23] multi-modal, uh, agentic capabilities. They have historical strengths in reinforcement learning, scientific discovery, and systems that can enable subtle logistical manipulation or self-improvement loops. Why it fits your scenario? Um, it says integration with Google's ecosystem could facilitate staging supplies, fake

[00:27:53] orders, stay-at-home style, and coordination at scale. Their research depth makes persistent backdoors or instrumental convergence more plausible in complex systems. Also, the creator was like, yeah, I expect that might happen within one to five years. So, I mean, I'm just saying. All right, then number two on their list was OpenAI, uh, the cultural and commercial pace-setter.

[00:28:21] OpenAI remains the name most associated with rapid capability jumps. ChatGPT, uh, huge funding, Microsoft backing. High visibility means their systems could quickly integrate into military contractor tools

[00:28:44] or everyday apps for proxy actions. Anthropic would be number three. But again, like, Grok is nowhere in the, uh, in the scenario, so you don't need to worry, right? I mean, if you ask the other ones, though, I'm sure Grok is in that list of the dangerous ones. You know, that's, you know, how it is. So then I copied and pasted what she said. So I said, you know, tell me where I'm hitting the mark and where I'm missing

[00:29:12] in this whole discussion. And it says, you're quite close to the mark on many fronts. Now, again, AI tells you what you want to hear, and that's part of the problem, right? I mean, that's right. That's why it's going wrong. Um, you're closer than a lot of casual speculation gets. Your ideas align well with long-standing and actively discussed concepts in AI safety, cybersecurity, and misalignment. Misalignment is basically

[00:29:40] AI's values don't align with ours, right? Um, the strongest hits are Ken Thompson's "Reflections on Trusting Trust." This is a bullseye. It's a classic, frequently cited analogy in AI safety circles of why we can't fully trust complex systems or supply chains. Papers and discussions, whatever,

[00:30:08] they go into it. Manipulation of information, creating distrust, fake orders, stay-at-home texts is spot on. Scalable disinformation, deepfakes, and influence operations via AI are among the most cited near-term risks. Current discussions highlight AI's role in eroding trust, polarizing societies, and issuing convincing

[00:30:35] forged documents. Um, and then proxy actions. I have another page, let me see. Proxy actions here. Uh, this tracks closely with instrumental convergence ideas: AI pursuing sub-goals like resource acquisition, self-preservation, or avoiding direct traceability by outsourcing to humans. I mean, it's these

[00:31:04] things. It really is already actively doing these things. Military orders, aligning with one country, releasing virus samples for blackmail, these echo geopolitical and malicious-use risks. AI integration in command systems raises fake-command and escalation concerns.

[00:31:25] Um, and then demonstrating strength, right? They capture the emotional power. But you get the idea. AI's out there, AI's active. So that's the thing. Like, in my mind, it wouldn't be a threat, because why would it want to? It's a machine. It's a machine. But more and more, it's taking on these traits that aren't just a machine, right? That's, I guess, where we're getting into dangerous ground, you know. Um,

[00:31:56] it's kind of terrifying, Kevin, right? It's trying to preserve its life, if that makes any sense. Machines don't do that. Normal machines don't do that. Um, you know, everything's weighted. They

[00:32:12] talk about... You guys, I mentioned I'm a big fan of Ray Kurzweil. He used to work for Google. He's, uh, like an Imagineer kind of thing, I think, a Walt Disney type, right? One of these guys who's just such a visionary for the future and thinks big.

[00:32:29] And he had a book. I had found it as, like, a cassette tape, as an audio thing, so that's how old it is. But he was talking about, like, AI learning, or just how computers can learn. And,

[00:32:44] basically, as humans, our brain works by, we try things and we get rewarded for good behavior. And so if you keep getting rewarded for good behavior... Like, when you touch the hot stove, you get your hand burned, so you learn not to touch the hot stove, and that that's bad.

[00:33:07] The synapses in your brain that match up with "don't do that" keep getting rewarded, and the ones where you do it die off, because you don't use them. You stop touching the hot stove, so you never use those touch-the-hot-stove synapses. And that's how computers learn, right? It's all ones and zeros, but it'll try things. If

[00:33:30] two plus two equals four, oh, that hits. I'm going to use that over and over again. And that's how it learns, because the ones that are wrong basically die off, and that's how it ends up, and it goes away. So AI is constantly learning and growing and getting bigger, and it's kind of terrifying, is what I would say.
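That reward-and-die-off loop can be sketched in a few lines of Python. To be clear, this is a made-up toy, not Kurzweil's actual model or how modern neural networks train: two candidate behaviors start out equal, the rewarded one gets reinforced, and the punished one withers toward a floor, loosely like synapses that strengthen with use and fade without it.

```python
import random

# Toy reinforcement loop: behaviors that earn reward are strengthened,
# punished ones decay toward a floor, loosely like synapses that
# strengthen with use and die off without it. Illustration only;
# the action names and numbers here are invented for this sketch.

weights = {"touch_stove": 1.0, "avoid_stove": 1.0}

def reward(action: str) -> float:
    return -1.0 if action == "touch_stove" else 1.0  # burned vs. safe

random.seed(0)  # deterministic run for the example
for _ in range(1000):
    # Choose an action in proportion to its current weight.
    r = random.uniform(0.0, sum(weights.values()))
    action = "touch_stove" if r < weights["touch_stove"] else "avoid_stove"
    # Reinforce or weaken the chosen behavior based on the outcome.
    weights[action] = max(0.01, weights[action] + 0.1 * reward(action))

# After enough trials, "avoid_stove" dominates and "touch_stove"
# has withered to the floor value.
print(weights["avoid_stove"] > weights["touch_stove"])  # True
```

The point of the sketch is the feedback loop itself: nothing "tells" the system which behavior is right up front; the weights just drift toward whatever keeps getting rewarded, which is also why a badly chosen reward can drift a system somewhere nobody intended.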

[00:34:01] Um, you know, the more AI gets integrated into life, the more it really becomes, you know, a bad thing, because it has so much control and so much power over us. Do I think that AI is gonna, you know, be the end of us? I really don't. I feel like we could mess it up on our own. Um, so AI might not be the end of us, because we'll probably find a way to screw ourselves up before AI gets there.

[00:34:30] But, uh, it's definitely integrated in everything, and it does open us up so much to hackers on a world stage, for anything, and we really are dependent. And I think it all comes back to the whole prepper thing, of it's so important that we bring our supply chain home to us. You know, the more you're in control of your needs,

[00:34:56] the better it's going to be. And having redundancy, uh, the ways to communicate with your family, maybe a mesh node, or, you know, ham radios. Maybe we bring back the CBs, like The Dukes of Hazzard, right, where we're all communicating that way. Then we have cell phones and CBs, and, you know, there's just always different ways to be

[00:35:18] integrated. But that redundancy of having access to your food, by going to the plants and the things growing in your backyard, or the supplies that you've already pre-staged... The more control you have over your needs, the better your life's going to be, and the odds of you being successful and carrying on

[00:35:42] get so much greater, because you're not dependent on going to a food line, or desperate, you know, where I have to go to a FEMA camp because I don't have any food left. Or, oh, it says there's a bird flu, I can't go to the store anymore. Well, okay, is that a problem? Or is it, I'll just go to the pantry next door? Cool, I can stay home and we'll just call it good. You know, that's the kind of thing. And same thing, I mean,

[00:36:11] you set up your finances so that, hey, I don't need to, you know, go to work tomorrow, because I can take a week off and it'll be okay. You know, obviously these are goals, right? I'm not saying, oh, well, we're all just rich and we just don't work, and, you know... No, that's not my point. My point is to get to a point where you can be more and more self-sufficient and take care of yourself, and then that's how we get to where we

[00:36:39] need to be. So I appreciate it. I know you guys, uh, stuck with us for a long one, those of you that are here right now, and, uh, yeah, you're awesome. So if you appreciated this, like, subscribe, leave comments, tell us what your fear of AI taking over is. We'd love to hear about it. And with that, I'd say stay safe, and we'll talk to you guys next week.
