Nation-state threat actors using AI in weapons of war


Morgan Wright, Chief Security Advisor, SentinelOne

While the United States and the EU grapple with legislation around AI and its consequences, there are no guiding principles limiting the development of AI technologies with respect to weapons of war, explains Morgan Wright at SentinelOne.

Milton Friedman, the famed American economist and the 1976 winner of the Nobel Memorial Prize in Economic Sciences, made a point about government in many of his speeches and writings. One of the more memorable ones highlighted the effectiveness of government regulation.

“If you put the US federal government in charge of the Sahara Desert, in five years there would be a shortage of sand.”

Government regulation in software does not work. It has been tried before. During the 1990s, the United States Munitions List of the International Traffic in Arms Regulations (ITAR) regulated the sale and distribution of encryption products and restricted the publication of source code unless it was extremely weak.

This resulted in the landmark legal decision Bernstein v. the US Department of State. The short version is that the regulation was ruled unconstitutional, because encryption source code is an expression of free speech protected under the First Amendment.

The linkage to governmental interest in regulating AI is clear. This time, however, the approach is not to regulate the code itself, but the outcomes produced by the code. The proposed European Union AI Act incorporates a risk-based assessment with four tiers: minimal, limited, high and unacceptable.

The approach seems objective, with references to specific use cases. However, the devil is always in the details. There are twenty-seven countries in the EU with twenty-seven different ideas about what is best for their own country. This creates a development and compliance headache, and potentially stalls entry into EU markets by the US and other countries that do not want to deal with multiple frameworks and regulatory schemes.

Progress comes with a cost. That cost is the price not being paid by nation-state adversaries like Russia, China, North Korea, and Iran. There are no such guiding principles and frameworks limiting the development of AI technologies, especially with respect to weapons of war.

According to an article from the Center for a New American Security, “AI is a high-level priority within China’s national agenda for military-civil fusion, and this strategic approach could enable the PLA to take full advantage of private sector progress in AI to enhance its military capabilities.”

Report after report, article after article, analysis after analysis makes it clear China will use commercially developed AI for military purposes. There are indications that China has begun incorporating AI technologies into its next-generation conventional missiles and missile defence intelligence, surveillance, and reconnaissance systems to enhance their precision and lethality.

But a recent Brookings Institution survey discovered something a little different.

According to the survey, 30% of adult Internet users believe AI technologies should be developed for warfare, 39% do not and 32% are unsure. However, if adversaries are already developing such weapons, 45% believe the United States should do so, 25% do not and 30% do not know.

Placing onerous restrictions on AI will only result in bad actors having the advantage while we remain mired in bureaucracy. Over-rotating on regulation ensures that market forces will be sidelined in developing advanced AI technologies and that government regulation will become the primary driver. This is the wrong order.

There are three distinct phases on the path to a law. First, there is litigation: market forces determine what is acceptable and what the public is willing to pay for and tolerate. If that does not achieve the desired governmental interest, then the second phase is regulation.

This means enacting rules on covered entities (for example, publicly traded companies, pharmaceutical firms and financial institutions) to compel compliance. If the first two steps are not adequate, then it is the proper role of government to bring clarity to the market by passing legislation.

The poster child in the US was the passage of the Sarbanes-Oxley Act in response to corporate malfeasance and cooking of the books. Massive failures and fraud were first litigated. Then the Securities and Exchange Commission (SEC) passed additional regulations to improve transparency, oversight and financial controls. When that did not achieve the desired outcome, the Sarbanes-Oxley Act was passed in 2002, by 423-3 in the House and 99-0 in the Senate.

We are not at the legislative phase yet. Jumping over the natural progression will only stifle valuable development in AI, hamstring companies looking to break out with new technologies and embolden our adversaries.


Over-regulation of AI

  • Government regulation in software does not work. It has been tried before.
  • Encryption source code is an expression of free speech protected under the First Amendment of the US Constitution.
  • This time the approach is not to regulate the AI code, but the outcomes produced by the code.
  • The proposed European Union AI Act incorporates a risk-based assessment: minimal, limited, high, and unacceptable.
  • There are twenty-seven countries in the EU with twenty-seven different ideas about what is best for their own country.
  • This creates a development and compliance headache, and potentially stalls entry into EU markets by the US.
  • Report after report, article after article, analysis after analysis makes it clear China will use commercially developed AI for military purposes.
  • Placing onerous restrictions on AI will only result in bad actors having the advantage as we remain mired in bureaucracy.
  • Over-rotating on regulation ensures that market forces will be sidelined in developing advanced AI technologies.
  • Government regulation will become the primary driver of AI, which is the wrong order.


Intelligent Tech Channels