AI Race: Nations Clash Over Tech’s Future
Nations are divided on AI's future, with China embracing it for progress while the West fears its risks. Cybersecurity expert Rex Lee explains how this difference impacts national security and warns of AI's weaponization. He calls for tech leaders to take responsibility and suggests solutions like an electronic bill of rights.
Nations Differ on AI Use: A Race for Dominance
The world is split on how to approach artificial intelligence. While some nations see AI as a tool to help people, others are using it for control and even warfare. This difference in outlook is creating a global race for AI dominance.
Cybersecurity expert Rex Lee explains that the technology itself isn’t the problem. Instead, how different countries use AI is what matters most.
China, for example, views AI as a way to advance its culture and population. Its government, led by the CCP, takes a top-down approach. This contrasts sharply with the West’s often fearful view of AI.
Many in the US and Europe worry AI will take jobs or cause an apocalypse. This fear has slowed AI adoption in advanced economies.
Trust and Adoption: A Global Divide
A report from the University of Melbourne and KPMG highlights this global divide. Trust in AI is higher in emerging economies (57%) compared to advanced economies like the US, UK, and EU (39%).
Public adoption rates show a similar gap: 84% of people in emerging economies have adopted AI, compared with only 65% in advanced economies. This fear-driven approach in the West has actually hurt AI companies.
China’s government actively promotes AI. They see it as a collaborative tool for progress. This is very different from how Western countries often talk about AI.
The West’s focus on potential job losses and dangers creates a negative image. This negative perception makes people less likely to embrace the technology, slowing down its growth and potential benefits.
Social Media’s Role in AI and Control
The way social media platforms operate in the West also plays a role. Many platforms, especially those with targeted advertising, rely on surveillance capitalism. To keep users engaged, they use addictive advertising technology.
This is combined with AI that can influence users, which can lead to emotional bonding and even indoctrination. The tendency is related to the Eliza effect, in which people attribute human-like understanding and empathy to software.
China, on the other hand, does not allow many of these Western social media apps. When apps are allowed, they must be reprogrammed. China’s top-down approach doesn’t see AI primarily as a vehicle for targeted ads and surveillance.
Instead, they focus on using AI to help their populations advance. This focus on advancement rather than pure profit from data is a key difference.
National Security Risks: Weaponizing AI
Rex Lee points out that this difference in approach creates national security risks. A population that adopts AI more readily will climb the learning curve faster, and could therefore pull ahead technologically.
The US, for instance, is already behind in education in many areas. If AI is used to enhance learning, this gap could widen significantly.
A major national security threat is the weaponization of AI. Apps and platforms often come from or are distributed by companies that do business in China. These companies sometimes have to share their AI intellectual property (IP) with Chinese developers.
These AI-infused software development kits (SDKs) can then be used to advance China’s surveillance state. They can also be weaponized for psychological and cognitive warfare, much as information operations are used in modern conflicts.
The West’s Struggle with Regulation and Lobbies
In the West, there’s a struggle to regulate AI effectively. Some argue that greater freedom fuels more innovation, but regulatory gaps remain a major concern.
Governments are slow to catch up with the rapid advancements in technology. This delay can be harmful, especially when AI is being used for surveillance or manipulation.
Lobbying efforts by large tech companies and foreign entities further complicate matters. Chinese companies hire powerful US law firms and former government advisors to influence policy.
This makes it difficult to pass laws that could ban or restrict certain technologies. For example, a law aimed at banning apps controlled by adversarial nations has faced challenges and selective enforcement.
An ‘AI Oppenheimer Moment’
Rex Lee likens the current situation to an “AI Oppenheimer moment.” This refers to the warning Albert Einstein gave J. Robert Oppenheimer about weaponizing nuclear energy.
We are at a critical decision point. We can choose a path that leads to the weaponization of AI, or we can choose a path of responsible development and use.
Microsoft President Brad Smith has stated that protecting children is their top concern regarding AI. However, relying solely on government action is difficult due to lobbying.
Lee suggests that tech leaders at companies like Microsoft, Alphabet, and Apple need to take the lead. They have the power to implement changes that protect users.
Potential Solutions: An Electronic Bill of Rights
The national security threat from AI is twofold: advancing other nations’ populations ahead of ours and weaponizing the technology. The combination of addictive technology, manipulative advertising, and AI indoctrination creates a powerful form of brainwashing. This is far more harmful than earlier forms of subliminal advertising, which were banned.
Lee suggests that governments should consider adopting an electronic bill of rights to protect citizens. He also believes tech CEOs can make a difference.
If companies shifted to an advertising model in which users are compensated for their data, they could still be profitable. People could then monetize their own data and participate fairly in the AI economy.
The future of AI hinges on these choices. Countries and tech leaders must decide whether to pursue unchecked advancement and weaponization or to prioritize safety, ethics, and user well-being. The decisions made now will shape the technological landscape for decades to come.
Why This Matters
The way nations adopt and regulate AI has direct implications for global power, national security, and individual well-being. China’s proactive and government-led approach contrasts with the West’s more hesitant and fearful stance.
This difference could lead to a significant technological gap. The weaponization of AI and its use in addictive social media platforms pose serious threats to mental health and societal stability.
Implications, Trends, and Future Outlook
The trend shows emerging economies rapidly adopting AI, potentially outpacing advanced economies in technological skill. The weaponization of AI for cognitive warfare is a growing concern. The future outlook depends on whether countries can balance innovation with ethical considerations and robust regulation.
The influence of lobbying on policy remains a significant challenge for Western governments. The call for tech CEOs to take responsibility signals a potential shift towards industry self-regulation, but its effectiveness is yet to be seen. The development of an electronic bill of rights could offer a framework for user protection.
Historical Context and Background
The current debate echoes historical concerns about new technologies. Mid-20th-century worries about subliminal advertising, which was seen as a form of brainwashing, provide a historical parallel.
The development of nuclear energy and the subsequent debate over its weaponization, as highlighted by the Oppenheimer analogy, also offer context. The historical precedent of governments struggling to keep pace with technological change and the influence of powerful industries on policy are recurring themes.
Looking Ahead
In January 2025, the Supreme Court upheld the law restricting foreign adversary–controlled applications, showing that the legal battles are ongoing. The actions taken by tech giants and governments in the coming months will be crucial in determining the direction of AI development and its impact on society.
Source: Cybersecurity Expert on AI Adoption and Power (YouTube)