Trump's $500 Billion Stargate Initiative, AI Singularity Fear Porn & More w/ Patrick Hedger


📰 Stay Informed with Sovereign Radio!

💥 Subscribe to the Newsletter Today: SovereignRadio.com/Newsletter


🌟 Join Our Patriot Movements!

🤝 Connect with Patriots for FREE: PatriotsClub.com

🚔 Support Constitutional Sheriffs: Learn More at CSPOA.org


❤️ Support Sovereign Radio by Supporting Our Sponsors

🚀 Reclaim Your Health: Visit iWantMyHealthBack.com

🛡️ Protect Against 5G & EMF Radiation: Learn More at BodyAlign.com

🔒 Secure Your Assets with Precious Metals: Get Your Free Kit at BestSilverGold.com

💡 Boost Your Business with AI: Start Now at MastermindWebinars.com


🔔 Follow Sovereign Radio Everywhere

🎙️ Live Shows: SovereignRadio.com/Shows/Online

🎥 Rumble Channel: Rumble.com/c/SovereignRadio

▶️ YouTube: Youtube.com/@Sovereign-Radio

📘 Facebook: Facebook.com/SovereignRadioNetwork

📸 Instagram: Instagram.com/Sovereign.Radio

✖️ X (formerly Twitter): X.com/Sovereign_Radio

🗣️ Truth Social: TruthSocial.com/@Sovereign_Radio


Summary

➡ The text discusses the importance of addressing problems as they arise, particularly in the field of technology. It also highlights the work of Masterpiece, a company that removes toxins from the body, and NetChoice, an organization advocating for free expression and enterprise online. The text emphasizes the need for innovation and competition, particularly in the realm of artificial intelligence (AI), and warns against the stifling effects of fear on progress. It also mentions the shift in the world economy towards technology companies, underscoring the importance of getting AI policy right.
➡ The future of wealth and economy is predicted to be built on information technology, with the largest market cap companies being tech companies, primarily in the U.S. and China. Europe’s aggressive approach to AI and tech regulation has hindered their innovation, leading to a lack of major tech companies. The fear of new technology can lead to overregulation and economic stagnation, as seen in Europe. The challenge lies in finding a balance between regulation and innovation, and whether the future of technology will be dominated by China or the U.S.
➡ The article discusses the importance of careful regulation in the field of AI, emphasizing that it’s impossible to predict all potential outcomes and problems. It highlights the Trump administration’s encouraging approach towards AI, including the involvement of tech experts and the promotion of private sector innovation. The article also addresses concerns about AI in the medical field, stating that AI has the potential to reduce side effects and create personalized treatments. However, it warns against government control over AI, advocating for open competition and innovation instead.
➡ The text discusses the importance of openness, truth, and choice in AI development, and the need to continue its evolution while assessing potential risks. It also highlights concerns about government control over technology and information, and the need for clear boundaries. The text mentions Lela’s quantum technology, which has numerous health benefits and is backed by scientific studies. Lastly, it emphasizes the need for cultural change to encourage more people into STEM fields, and the importance of diverse educational paths, not just traditional four-year university degrees.
➡ The text discusses the importance of not restricting the development and use of AI with overly strict regulations. It emphasizes that while AI can present challenges, it also offers many benefits and opportunities. The text also warns against fear-mongering about AI, suggesting that such fears are often overblown and can hinder progress. The goal is to ensure that AI is used responsibly and beneficially, without stifling innovation or creating unnecessary barriers for startups.
➡ To protect against harmful AI systems that threaten cybersecurity, we need to use more powerful AI systems. It’s like fighting tanks with tanks, not bows and arrows. To learn more or get involved, visit netchoice.org, a principled trade association that supports free enterprise and free expression online and in technology.

Transcript

You need to address problems as they arise instead of trying to account for every single issue ahead of time, because it’s just impossible, especially as to your point, when you don’t have that sort of technical background or expertise. But even if you did have that, it’s still impossible to predict the future and impossible to predict how technology is going to be used by 320 million people or 8 billion people. Just a quick break from your programming so I can give you a little information about Masterpiece. They are the masters at removing toxins and heavy metals and aluminum and microplastics out of your bloodstream, out of your body.

We are being bombarded with this crap from all over the place, and we need to get it out of our bodies, because you are more susceptible to every disease imaginable when that's in your bloodstream. And I like Masterpiece. That's the company I endorse. Why? Because they're the only company out there that's actually doing trials to prove to you that their product works. It removes graphene oxide, it removes aluminum, it removes microplastics and all sorts of toxins. You can try yours today as well by going to sarahwestall.com under shop or with the link below. Welcome to Business Game Changers.

I'm Sarah Westall. I have Patrick Hedger coming into the program, and he's a director of policy at NetChoice. And NetChoice's mission is to make the Internet free, free expression and free enterprise, so they don't, you know, they don't like the speech crackdown by Google. They don't like the government being involved in that. But they also want companies, small businesses, to have the ability to innovate and compete in the marketplace, and they want the country to be able to compete and innovate. And so they're really behind AI, but they're behind AI from the standpoint of separating out government from private organizations and ensuring that AI isn't going to be used for another tyrannical, you know, horrible thing like what we just lived through with COVID, and creating this free space for innovation, which they believe puts us at the forefront of leading the world and bringing in all this potential money and influence.

And without that, we will be like what Europe is. And we're going to talk about Europe, how Europe is really falling behind in so many areas, because they're motivated by fear. And fear is shutting them down and shutting down whole industries, and fear is creating control structures everywhere and taking away freedom. So fear is crushing freedom. And so how do we get rid of the fear that is holding us back, but also ensure that we put safeguards in place so that AI and other innovations don't create more tyranny? So we're going to have that discussion and we're going to dive into it from many different perspectives.

And I ask him a lot of the questions that I'm seeing happening in the world today on social media, the journalists pushing all sides of this, what the government is doing. And I hope you get an appreciation for all the different angles on this. And I got to say that there is money being put into the pockets of journalists to push certain ideas and to push fear. And so what is that about, and what is their agenda, and who are they working for? And then the other aspect of it is what should we be fearful of, and what should we keep from happening, and where should we allow freedom to reign so innovation can occur, so we don't, you know, shut things down and end up being at the mercy of other tyrants around the world.

So this is an important crossroads that we're at in our society, and we want to make sure that we think through this and we have people who are smart and are well intentioned working through these issues. You can check out his organization at netchoice.org. You maybe can get involved, you can donate, you can learn all the different things, the initiatives that they're involved with. Okay, before I get into that, I want to talk to you about peptides. I have been selling this and sharing this. It's called SLU-PP-332, which is an exercise mimicker that has had incredible preclinical trials with obese mice.

They are showing amazing results. But it's been so popular that they put it on backorder, and now they had to shut down backorder, and they just can't keep up with how much people want this. And so it should be back in stock in about four weeks. But if you want. This is what I have been using. I've been using a combination. This is Retatrutide, and it is the most powerful GLP-1 on the market today. It's more powerful than Ozempic and the other competing brands, because those work mainly by suppressing your appetite.

This does that, but also burns fat. And so this is really, really effective. And they're showing 24% of your body weight lost in 48 weeks, more or less, depending on if you're using it with exercise. I got to tell you, I cut my dosage in half because for me I was losing weight too fast. I thought it was too aggressive. I'm doing this for health. I'm not doing it just because I have to get the weight off, and so I want to be healthy in the process. So if you end up taking this, know that it really works.

And what else I'm doing is I'm taking 5-Amino-1MQ, because this is amazing for helping you replace fat with muscle. And so it helps you lose weight, but it also helps you reconstitute your body so that you have less fat and more muscle. And so when I'm losing weight on this GLP-1, I make sure that I'm replacing it with muscle, so I have a combination that I'm doing for that process. There are so many other peptides, from stress reducing to anti-aging, and so I'm going to get into all of that. I'm really excited to share this with you because this whole world of peptides is amazing.

I'm going to have the links below if you want to try any of these things that I've talked to you about today. I also recommend that you talk to your doctor, or you can join Dr. Diane Kaser's tribe. We're working together getting this information out to people. It's important that you know that I'm not a medical doctor. I can't give medical advice. I'm just sharing my research and I'm sharing what I am using myself, and you need to speak to a doctor if you have questions. Otherwise everybody should be their own doctor.

Right. And start learning about what it is that you need. Nobody's going to care more about you than you. So I will have the links below so that you can get it yourself. You can also go to sarahwestall.com under shop and you'll see them there. Remember to use the coupon code Sarah to save 10%. Okay, let's get into this really good conversation that I have with Patrick Hedger, who is the Policy Director for NetChoice. Hi Patrick, welcome to the program. Hey, thank you for having me. You have an interesting center aisle view of everything that's going on when it comes to AI in this government and what their initiatives are.

Can you talk about what your role is and what you've done? Yeah, absolutely. So I am director of policy at NetChoice. NetChoice is a trade association that stands for and fights for free expression and free enterprise online. And increasingly, Internet technologies are moving in the direction of artificial intelligence. And so we're really at the front line right now in trying to make sure that we get AI policy right. I think this is the most important technology that any of us will really experience in our lifetimes. We're talking about something that could be as important to the economy as electricity, even.

And so it's really important we get this technology right, especially because America's global adversaries also have access to this technology. So it's really important to be at the front lines here, making sure that we are not stifling innovation and that we are not smothering this technology out of fear, because ultimately the genie is a little bit out of the bottle globally. So it's really important that we get it right here and we continue shaping AI policy based on American values. Well, I did multiple conference presentations over the last four years explaining the fact that the largest market cap companies in the world are all big tech or technology companies.

And the only one in the top 15, I believe, is now Saudi Aramco, which a few years ago was third and now is sixth. So if you put that into context, big oil, which supposedly, you know, the petrodollar, supposedly this massive, massive industry, is eclipsed from all directions by big tech. And they're sixth. Right. And the only one in the top 10, for sure, top 15. So we're looking at a complete change of the world economy and how everything works, aren't we? Yeah, absolutely. It's really important to kind of point out that this is where the markets are betting that future wealth and the economy is going to be built.

It's going to be built on information technology. And you talk about the companies with the largest market caps; that also illustrates kind of the challenge that we have right now, which is that the largest market cap companies in the United States and in the world are all these tech companies. Of course, you have Saudi Aramco there, but none of them are European, because we've seen that the Europeans have taken a very aggressive approach to AI and tech regulation broadly. And so you don't see any sort of major innovation engines coming out of the European Union, but you do see them in China.

You do have some very large market cap companies in China that do have effectively American equivalents. So that kind of lays out sort of the global challenge that we have: what's the future of technology, the Internet, AI going to look like? Is it going to be Chinese built and dominated, or is it going to be American built and dominated? Because the Europeans have chosen their path of essentially, we're going to regulate this technology because we're too scared of it. So that's really the challenge that we're facing. Well, that's interesting, because when fear drives you. I know when the printing press came, the countries that were fearful of the people learning and having it, rather than embracing it, they still suffer today economically. It was hundreds of years of being in the background economically because they did that.

Now I don't know if that's equivalent or not, but it does show that if fear drives your decision making, you can mess yourself up for hundreds of years, and that's an example of that. Yeah, I would completely agree with that. We see this over and over again anytime there's new technology. Thankfully, in the United States we've done a really good job at sort of resisting that sort of pessimism. But that doesn't mean it hasn't existed. I mean, there's a great website, it's called the Pessimists Archive. You can go back and look at the articles that came out about all sorts of different technologies, the printing press included, all the way up to electricity, to bicycles, to novels, all the same sort of talking points.

You hear about how, you know, the reason we have to be scared about this technology is that it's going to completely upend society, displace folks, you know, it's going to hurt our kids, things like that. And while we do need to make sure that we're controlling for certain externalities that come with any sort of technological change, you're absolutely right that if you overcorrect, you can really harm yourself in the long term. And I think that's what's happening in the European Union. You look at the GDP of the entire European Union: it was about the same as, if not a little bit larger than, the United States' at the turn of the century.

And now the United States has almost doubled that in about two and a half decades. We've almost doubled the European Union in GDP despite having about half the population. And you wouldn't expect that, because these are advanced first world countries with great institutions of learning, a lot of capital, a lot of human capital. But because they have regulated themselves so severely, you don't see that kind of dynamism and innovation that has made America really the envy of the world in terms of economic strength. Yeah. Now the pushback that some of the people have, I can kind of understand it.

I mean, going through Covid, we saw this tyranny on steroids, right? And you know, before COVID I got to know Kirk Wiebe and Bill Binney and these NSA whistleblowers, and they were talking about how our data is already accessible to all the different intelligence agencies, right. I mean banking data, you know, they can track, trace and do what they want if they decide to. Right now we have this digital ID that's coming into place, which I think is the biggest. I would like to hear your ideas on this. But I think it's the biggest issue as far as having the ability to track, trace and do anything.

Because with these database systems, the one thing that's missing. I remember in the early 90s I worked in telecom. I wrote a paper, and I never thought it would be applicable to today to my understanding, and it's exactly digital ID. I wrote a paper on why there needed to be a customer ID across all of telecom, because there were all these disparate databases and it was hard to manage customers. That same concept is why they want the digital ID. Now, as soon as you get the digital ID, there's pros and cons to that.

The pros are, now you can manage things and it's easier. If you're altruistic, it's a good thing; if you're nefarious, it's a bad thing. And now you bring in AI, and people are concerned that the automation of that along with digital ID creates this complete control structure. Now is it that we are being paralyzed by fear with that theory, versus there's some truth in this? And I know it's a loaded question, I asked a lot here. Yeah. So we see a lot of efforts right now at the state level to try and regulate AI, and a lot of those bills are very well intended.

But if you regulate at the state level a fundamentally interstate service like artificial intelligence, you're going to create a patchwork that's going to make it really hard to invest and very hard to comply and deploy technologies at scale. So I think that's where the fear is kind of being driven, and it's somewhere where Congress really needs to step up to the plate. You want to talk about securing digital identity: whether we're talking about AI or whether we're talking about the existing technologies that we have, we don't have a really good national data privacy or data security framework in place.

And I think that's somewhere where Congress needs to step up to the plate and say, hey, here's a basic rule of the road for consumer data. And that would help create some really necessary regulatory certainty. And it would also head off this sort of patchwork that we're seeing emerge at the state and even the local level; even New York City has pursued regulation of AI. And that is fundamentally, again, you want to talk about interstate commerce: if I put in a query to an AI chatbot, or I even send you a text message, that's going to travel across state lines and it's going to be processed in servers in another state before it reaches its end result or your phone.

And so that's fundamentally, again, an interstate service, and we need to have that regulatory certainty at the federal level. And Congress just hasn't really stepped up to the plate yet. We saw actually last Congress, Rep. Jay Obernolte from California led a bipartisan House AI framework that was really good on these issues and talked about the need for a sectoral, incremental approach to AI, very much in contrast to what the Europeans are doing, which is trying to regulate it all at once and account for perhaps every eventuality, which is impossible to do. So those are really two ways that we can kind of address the problems that you're talking about, and the first step is getting that data privacy, data security framework in place.

Well, the concern I have is, you know, we talk about the market cap and how the entire world. I mean, these companies are larger than we've ever seen in the world. Right. This is a whole new world. You know, I say that Congress is managing the world from 20 years ago. They haven't caught up to the way the world is today. And I looked at the career backgrounds of the senators and the Congress people, and there's only one who has a computer science background and three that have electrical engineering backgrounds. And not that having that background necessarily makes you able to see society at large and be able to be a good legislator.

And not that you can't learn it, you know, if you don't have that background. But I think it makes a really solid point that we don't have the expertise. And my analysis that they're managing the world as if it was 20 years ago is probably true. Yeah. I mean, there's certainly a knowledge gap, a knowledge problem, if you will, if you want to go back to, you know, Hayek, and some of the overall problems of trying to account for every potential eventuality. I mean, this is what the Europeans did. And I keep coming back to that because it's such a huge problem that they're still dealing with over there.

It was actually just today that the Europeans announced they're going to throw $200 billion to try and jump start innovation and AI in the European Union, without realizing that they've thrown up such a huge hurdle by trying to regulate the world today without having that kind of knowledge. And so you need to have sort of a humility when it comes to regulation. Don't try to regulate as if you know what every potential outcome will be, what every potential problem will be. I mean, imagine trying to regulate and have all of the, you know, regulations that we have on an automobile today when the Model T came out, right?

You couldn’t anticipate all of the problems that would come with automobile technology, as beneficial as it is way back then. And had you tried to put in all of those regulations at that point, you probably would have smothered the technology before it ever became used en masse. So you need to address problems as they arise instead of trying to account for every single issue ahead of time, because it’s just impossible, especially as to your point, when you don’t have that sort of technical background or expertise. But even if you did have that, it’s still impossible to predict the future and impossible to predict how technology is going to be used by 320 million people or 8 billion people.

That's right. So Donald Trump is surrounding himself with tech guys, you know, and they're being called tech bros, if you will. I think that's a derogatory term because they don't like them very much. But is that the wrong thing to do? I mean, it depends. And there's a lot of tech guys out there, tech women too, because the industry is massive. Right. Did he pick the right people to surround himself with? And is it the right thing to do? Yeah, I mean, that I think remains to be seen. But the initial signs are pretty encouraging.

I mean, the fact that the Trump administration, the President himself, within the first 24 hours of taking office, said, I'm going to do this press conference with three major companies working in AI to talk about this Stargate initiative, I think is really encouraging. And it shows that the President seems to understand that this is a generational technology, that it's really important that we get it right, and that it's important that we are encouraging it and that we are fostering it instead of trying to rein it in prematurely. And we saw remarks actually from the Vice President, J.D.

Vance, in Paris over the last 24 hours that were really encouraging as well, essentially telling the Europeans, hey, look, you've got the wrong approach. We're not going to do that in the United States. You know, there are certainly things that we need to account for with AI. With any technology, you're going to have things like worker displacement. We can work on things like that. We can put in place programs that help people retrain and learn how to use AI. Right. I think the future of the worker is going to be not that AI replaces workers, but that people who know how to use AI are going to displace people who don't know how to use AI.

So it's important that we get that kind of worker retraining program and get our STEM education up to speed for this kind of thing. But at the same time, we don't want to limit ourselves, because again, as we saw with this DeepSeek news out of China, the Chinese, regardless of how they got there, and I think there are some questions in terms of how truthful they're being about how they developed that technology. Regardless, though, they're there, they've caught up. And so it's important to make sure that we are staying a couple of steps ahead. And really, so far, the decisions and the rhetoric coming out of the administration, particularly around AI, have been very encouraging.

And I would also point to the announcements related to all of the energy picks in the Cabinet. Anybody that had anything to do with energy policy mentioned AI in their announcement. And I think that's a really strong sign that they understand what it's going to take for America to lead on this, because AI is going to require a lot of power. Well, there's two things you said that I want to address. One is the mRNA part of it. People are concerned, you know, with COVID and the fact that they felt they were forced to do something that they didn't want to do.

And so it's triggering a lot of people to feel that they're going to be forced again to take on this mRNA stuff that they don't feel was right for themselves or their family. And so I think the Stargate announcement was a mixed bag of promoting AI, but then also promoting this mRNA technology for cancer and stuff, and that triggered a lot of people. What would you say to that? Yeah, I would say, looking at the Stargate announcement, even though the President was there, what was really encouraging, certainly from a free market perspective, is that he was talking about the investment of private capital.

So this is about developing private products and services. And so, as long as we're keeping a good degree of separation between government and the private sector here, you know, nobody wants to see, you know, mandates and things like that. So it's very important, and the administration's working well to make sure, again, compared to what we're seeing in Europe, that there is a nice separation between government direction and private sector innovation here. But AI is not just limited to what it can do in the medical space, even though I think what it can do there is extremely encouraging, and it's going to be able to develop much safer products and drugs going forward.

I think that's the really big promise of AI. If people are concerned with side effects from any particular medicine, AI has got the ability to reduce those side effects and tailor-make drugs and other therapeutics that work best for you. That's an incredibly encouraging promise. So I think we want to be able to encourage that. But I certainly understand folks' concerns, though they should understand that despite the fact that the President was there for the Stargate announcement, again, it is a private sector venture. Well, and I think tyranny scares the heck out of people, right? When we've already lived through tyranny, now they're going to see everything through the eyes of tyranny, and rightfully so.

People are fearful because they already lived through a tyrannical time in this country, probably the worst we've ever seen. And now they're concerned that this will be more of the same. And so it's important, I think, that they make it clear. Now, you know, I have some doctors who, they're not big fans of vaccines, but they came out and said that they have the right to take them as much as you have the right to not take them. Right. So if we're in a free country, people should have the right to do either or.

And AI has the ability to give you health freedom, to totally create health freedom, doesn't it? I mean, now we could look at things from all sorts of options when it comes to medical, as long as it's written that way. Right. Isn't it about what's behind it and what they actually do with it? Yeah, that's entirely right. That's what's really important. Compared to the previous administration, which was very much so allergic to technology and innovation and wanted to control it very much from the top down, we have the opportunity right now to have open competition and innovation.

Right. If you don't like a product or service, you don't have to use it. And that, I think, is really important. But you should also be free to offer a competing alternative or service. But you know, I look back to what Marc Andreessen, a very well known Silicon Valley investor and innovator, said about his meetings with the Biden administration, which were extremely concerning, where the Biden administration folks effectively said, stop all of your investments in AI. We're only going to allow two to three companies that we control to exist. I'm paraphrasing there a bit. But essentially that's what they were getting at.

That's a very scary scenario, right? Where the government is deciding how AI is used. That is the tyranny, right? That is the tyranny that we're worried about. When you make it free and open to anybody, as long as the freedom is still behind it, then that's the difference. Right. When the government controls a couple, now they can be tyrants. Yeah, yeah. We don't want to have this approach, I mean, compared to what the Europeans are trying to do and what the Chinese are doing, which is effectively state run AI enterprises, right, where the government has a lot of control over how it's being used.

I mean, you look at this DeepSeek technology out of China: if you ask it critical questions about the Chinese government, it's not going to give you a good answer. And so we have the opportunity to offer the challenge to that, and we are, where you have AI that's built on truth, built on openness, and built on choice. And I think those are really important values. That's why we can't afford to pause. Some folks have suggested we need to take a pause and we need to assess, you know, the potential risks and things of AI.

And we do need to do those things, but we need to do them as we're allowing the technology to evolve as well. Because, as we saw, we had the Stargate announcement and then within the same week we got the announcement out of China. So a six month pause, or any kind of pause, is not going to work. Well, and the freedom allows you to pick the AI that actually is good. Right. As long as there isn't a controlling structure behind it. See, that's where we have to be careful what that means, a controlling structure over what they're allowing the data to come back with, so that they can brainwash and mind control you.

You know what I mean? I mean, that's what the Chinese are doing to us. When you can't criticize the Chinese Communist Party at all, there's a difference between that versus having it do something that would take over systems and run rogue. And to me, I have a hard time with some of that stuff, because I think a lot of that is fear porn and it's not real. But there are some concerns that it could be developed to do some of those things. Yeah, it's really concerning, and it's especially concerning when we saw what the Biden administration was doing with existing technologies, particularly social media platforms.

The jawboning of major social media companies and information companies into displaying the information and the things that they wanted, not necessarily what people wanted. Really, really concerning. So we need to make sure that we get those bright lines in place that separate government control from the technological sector and the information sector. I mean, I thought the First Amendment would have been sufficient, but we saw that the Biden administration was really ignoring it. Experience the groundbreaking advancements of Lela's quantum technology, now backed by over 40 placebo controlled studies conducted by elite institutions and renowned universities worldwide. This revolutionary technology surpasses previous achievements, as confirmed by prestigious organizations such as the Emoto Institute in Japan.

Scientific investigations reveal that Lela's technology not only enhances blood health and circulation, but also neutralizes the adverse effects of electromagnetic fields, expedites wound healing, and elevates ATP production in human cells. Embrace the extraordinary benefits of Lela's tech, as recognized and utilized by world class athletes, esteemed functional medicine practitioners, and leading figures in the field of biohacking. Explore a range of transformative products, from the Heal capsule shielding you from harmful EMFs to the quantum block allowing you to infuse frequencies into your cherished possessions. Dive into the realm of innovation and wellness at sarahwestall.com under shop or by following the link below.

And that's really concerning. So we've seen that the Trump administration put in an executive order effectively saying, we're not doing this jawboning anymore; anybody that does is going to get in trouble in the administration. That's great, but it's an executive order, right? We need to get this codified. Senator Rand Paul has a bill that would essentially codify that executive order to prevent that jawboning. But we also have to look at how the government is also turning the screws on existing technology companies right now. I mean, you look at some of the antitrust actions that are being taken.

That's pressure on those companies because they were disfavored by the previous administration. And so you want to make sure that we keep antitrust very consumer focused and not focused on achieving the government's ends. Right. It needs to be focused on achieving good outcomes for consumers. And right now, there's a big concern. You look at what the previous administration's Department of Justice was requesting of Google. They were essentially telling Google, you know what? We're not going to let you invest in AI going forward. And that's another side of the coin of what we're talking about being very concerned about: limiting choice, limiting investment in AI.

We need to let a thousand flowers bloom here. Well, they were doing that while also forcing Google to censor anybody that doesn't believe the same things that they believe. And Google's still doing that, you know, with YouTube and their search results. They're propping up, they're picking winners and losers, even though they almost are a monopoly around the world. They're picking winners and losers instead of letting people, based on merit, you know, share what they've got, grow and innovate. Yeah, it illustrates what's really important again, which is about reducing the ability of the government to put its thumb on the scale, whether that is in antitrust policy, whether that's what's going on with the jawboning, or whether that's AI regulation.

Right. You need to be able to offer that choice and competition. And we also see a problem in the regulatory sector on the financial side, where it's become very difficult to raise capital and take your company public, and there's all sorts of pressure from the Securities and Exchange Commission that has allowed activist investors to kind of push companies in directions they may not want to go. So there's a lot of different regulatory levers that the government has. And I think that has created a lot of the problems and things that people are concerned about in relation to jawboning or censorship.

And so once you can kind of rein those things back in and codify those controls, I think you'll see that again. We've got that opportunity right now, and I really think Congress and the administration kind of need to step up to the plate and put in place these reforms that will have some staying power. Well, I think people are fearful because they saw tyranny in the United States like we've never seen before, at a level that changed our country. And I think it's created fear. The climate's different. Everybody can feel it.

Right. And then they're using that against the Trump administration and against these innovations that are coming out, and acting like this is a Trojan horse to do more of the same. And what would you say to that? I would say that the Trump administration so far has shown that it is trying to use technology and innovation to limit the size and scope and potential abuses of government. And I think that's a really encouraging thing that we need to embrace. And previously, again, the government had kind of grown to a large and unwieldy size and scope where mistakes were happening.

Right. We saw a lot of data lapses and data security lapses in government that were really concerning. I mean, leaks out of the IRS all the time, leaking of sensitive information. I mean, I'm here in the District of Columbia, and a lot of folks here had their very sensitive information leaked from the healthcare exchanges that were created under the Affordable Care Act. So the idea that, you know, everything was hunky dory beforehand is just not true. And so we ought to be looking to embrace new technologies, particularly AI, that can make our systems more efficient and more secure.

Well, the other thing is our culture compared to China. You know, they said we need to keep up with China. China's culture rewards it. It's not nerdy to go into computer science or electrical engineering or any of these STEM fields; it's applauded, it's looked up to. In this country, we've been told for too many decades now in popular culture that if you go into these fields, you're a nerd. You know, let's go talk to the nerds. They always portray somebody who's really knowledgeable as a nerd. You know, we can't do that.

When all these large companies, all these market caps, it's all big tech, don't we need to change our cultural perspective on that if we want to have a chance of competing? Yeah, and that's tricky, because it's very difficult to solve cultural problems with public policy. But I think there is some truth there that we do need to be embracing STEM. We do need to be encouraging folks to look ahead and look down the road and see where the demand will be. And I'm not sure our K-12 systems are particularly well designed for that right now.

And I mean, I remember my own experience when I was coming up through K-12, particularly in high school: there was this sense that unless you went to a four year university and got some sort of liberal arts degree, you were a failure. And while that may be a great path for many people, there are just as many promising paths at technical schools, at technical colleges, I mean, even going directly into the workforce, doing some apprenticeships, things like that, getting certificates. Again, it's really about letting a couple million flowers bloom in this case when we're talking about our kids.

But trying to pigeonhole people into one particular path is not going to work for the future. We're seeing the workforce kind of rapidly shift and change. The demands will change. And that's going to happen with sort of any technological revolution, whether it's AI or not. Well, yeah, you can't legislate culture and the accomplishments. The fact that there's so much money in those industries, people are going to move to them naturally. But we're a little bit behind the eight ball, because it gets back to my point that they're managing from 20 years ago. And, you know, there's always those famous stories where, you know, Xerox didn't want to fund the computer because it thought it was a toy and it was never going to do anything.

You know, I mean, that's kind of the mindset, and people are starting to be blindsided by the fact that the world has changed around them and they need to change too. Yeah, yeah. I mean, and that's why we don't need to have, you know, overly strict regulatory structures around this kind of technology, because then you don't allow that kind of innovation. I mean, imagine if it wasn't Xerox making that decision, but it was the federal government making that decision. Right. And there wasn't a Microsoft that was free to say, well, actually, this technology, this Windows technology, is pretty good, we're going to use it.

If you concentrate all of that kind of decision making in one place, instead of allowing for freedom of experimentation, and certainly the freedom to fail as much as the freedom to succeed, we're not going to get the benefits. Right. You know, I go back to some very basic economic principles when I look at technological revolutions, because they tend to hold true over time. And a great quote is from the economist Thomas Sowell. Right. There are no solutions. There are only trade-offs. And so how are we maximizing the benefits of the trade-offs? There's always going to be frictions, there's always going to be drawbacks and negatives associated with any sort of evolution in the economy, evolution in technology.

It's important to address those things, but we don't want to throw the baby out with the bathwater. That's right. And we don't want to go into this naively, but we also need to be free. So what would you say is your most important goal here? To maintain the freedom? I mean, what is your agenda and your organization's agenda? Yeah, so right now we want to make sure that we're addressing AI the right way, because technology policy is really interesting. One of the reasons I love working in technology policy is we have the chance to stop bad laws from ever getting put into place.

Whereas a lot of industries are constantly trying to fight and get bad laws off the books that are already there. And once a law is passed, it can be very sticky. Right. You create a constituency that benefits from that government program or benefits from that protectionism or whatever it is, and they become a very powerful lobbying force that makes it difficult to get that, you know, law off the books, even if that law has sort of ossified the innovation and investment in that sector. So it's really exciting that on AI and technology policy broadly, we have the opportunity to stop those things from ever kind of getting put in place that halt investment, pick winners and losers, and are kind of rife with cronyism.

And so what we're really focused on right now, some of the things that I'm working on, getting back to something I mentioned earlier, is making sure that we are not having the states do a knee jerk reaction to AI and put in place policies that are going to make it impossible for startups to come in and compete with the big guys. Right? The big guys for the most part are going to be fine. They've got the lawyers and the lobbyists and the accountants to deal with regulatory moats, if you will.

But it's really about encouraging that kind of dynamism and allowing for investment to occur and addressing the narrow issues that AI may present. Any technology can be abused, right? A hammer can be abused, right? But that doesn't mean you ban hammers, because they have some very positive uses, right? You've got to build a house, you're going to need a hammer. So it's about making sure that you're saying that, look, AI is not exempt from existing laws. Fraud is fraud, whether you're using Photoshop or AI, or you're using, you know, a pen and you're doing what that guy did in Catch Me If You Can.

Right? Check fraud. Right. Old school check fraud. Still illegal, regardless of the technology that you're using. So you don't need a specific AI anti-fraud law. You can just say, hey, nothing in this law says that AI is exempt. And so encouraging folks to take that path, making sure that folks understand, that their citizens understand, that they're still protected. You still have an attorney general, you still have consumer protection agencies. AI is not exempt. But don't put in place this regulatory superstructure that's going to make it impossible to even develop or deploy these technologies.

Well, last question. I've been hearing some things, and I want to hear your opinion on whether this is fear porn or reality. There's some high level people going around talking about how AI is going to, what do they call it, essentially have a consciousness. It's going to become one. The singularity. That's the word I was trying to find. So do you think that. And I'm hearing them all. I heard him on Glenn Beck. All these people are talking about it.

What are your thoughts on that? Is that a realistic fear we should have, or do you think that's something that's being put out there and funded by people who have irrational fear? Yeah, I don't want to try and get in and try and predict the future here. I'm already a little blown away by the AI that exists. I would encourage folks to go out and try it, use the AI products that are out there. You'll find that they're very useful, and it's a great thing to kind of incorporate into your life. It can save you a lot of time, and having a little bit of exposure to it can, you know, reduce the mystery around the technology.

But I think the latter of what you said is pretty much the case there. There's certainly a lot of folks that are out there that want to kind of spread fear about AI. Certainly foreign adversaries are out there spreading fear about AI, because they know if they can get people scared enough about it in the United States, we may regulate ourselves out of the lead, and then that cedes the ground to China. But I think that kind of fear is dramatically overblown. I mean, Hollywood is not very good at predicting the future.

If you look at the Blade Runner movies, for example, the first one, with flying cars and a dystopian future, takes place in 2019. I don't know if you've checked your calendar, but we're far past that, and the future doesn't look anything like that. So I would encourage folks to take anything that comes out of Hollywood in terms of an omnipresent, you know, overlord AI. That's not going to happen. Well, we do know that they have the capability of doing some mind control stuff and there's technology behind it. And if you put AI with it, then it gets crazy and it's scary.

And so it has the potential to be scary. But with your kind of activism and what you guys are doing, you can stop that from happening. I mean, you're in a unique position, your organization, of setting it up so that we can maintain freedom and still be at the forefront of this situation. And I could see how foreign governments, when the largest companies' market caps. I keep going back to this because I can't tell you how important this is. When the largest market cap companies are all big tech and, you know, technology firms, those are all going to be replaced by AI-type capability companies.

And if they want to lead the world, they're going to have to embrace this. But we need to embrace it with a sense of freedom for humanity long term. And you guys are in a unique position to help that happen. Yeah, I think, you know, one of the best ways to combat potential harms of AI is empowering people with AI. That's one of the best ways that we're going to have. Right. If you're seeing fraudulent AI images or other kinds of fraud or concerning things, cyber threats, right, you know, brute force hacking or things like that coming from people using AI.

The best way to combat that is using AI yourself. I know I said write off Terminator, but maybe think back to Terminator. Right. How did they beat the bad Terminators? They had their own. And so we've got to have our own technology to be able to combat the harmful uses of that technology as well. For every harmful AI system that's out there trying to breach cybersecurity, we have to have a more powerful AI system that's designed to increase cybersecurity. So we don't want to unilaterally disarm ourselves out of fear, is what I would say.

You don't want to fight tanks with bows and arrows, right? That's exactly right. Yeah, absolutely. Okay. So how do people learn more about your organization? And if they want to get involved and help with these causes, how do they do that? Please tell us. Yeah, so we're just at netchoice.org. Again, we are a trade association, but we are a principles-based association. So unlike a lot of other trade associations that are just out there to advocate specifically for the interests of their members, we advocate for the policies and principles that we believe in, which are, again, rooted in free enterprise and free expression online and in technology broadly.

So netchoice.org is the best place to find us. Thank you so much for joining the program. I really appreciate having you here. This was great. Thank you.



