White House AI Executive Order

Recently, the President issued a new Executive Order focused on artificial intelligence. This represents a large step forward in placing guidance and bounds around a nascent technology, and a critical one for ensuring that the technology is used for the betterment of society rather than for crime and abuse.
AI is being utilized by both nation-state and non-state actors to attack organizations and people. Networks and infrastructure are being inundated by increasingly sophisticated AI-driven bots probing for new vulnerabilities to penetrate. Phishing and scams are increasing in frequency and sophistication through the use of AI, resulting in stolen credentials, disruption of services and operations, and theft of critical data and PII. AI can also be used to advance an attack vector directly, as Aaron Mulgrew demonstrated when he used AI to create malware. And AI is being used to help spread misinformation, which in turn creates an environment where individuals and organizations can be pushed into a vulnerable or urgent state that plays into attackers' hands.
Responsible and safe use of AI is critical to cybersecurity and to staying ahead of crime, but AI tools and technology have also proven fallible, whether it's AI hallucination that leads to misinformation or a tool that inadvertently creates new cybersecurity threats, with very low cost barriers to execute.
A portion of the Executive Order is focused on creating tools, tests, and standards around this emerging technology. Protections exist within AI tools today, and additional protections will be added as products advance, but there will always be opportunities to misuse technology. The critical question is whether the government will make a continuous effort to define what is and is not AI technology. Currently, this capability is widely believed to be at the peak of the hype cycle.
This Executive Order dictates several future efforts that will shape AI's influence on cybersecurity:
- NIST will develop new standards, tools, and tests for AI systems, and the U.S. Department of Commerce will play a key role in implementing them. "We are committed to developing meaningful evaluation guidelines, testing environments and information resources to help organizations develop, deploy, and use AI technologies that are safe, secure, and that enhance AI trustworthiness." - Under Secretary of Commerce for Standards and Technology and NIST Director Laurie Locascio
- The Department of Homeland Security will be responsible for applying NIST standards to critical infrastructure sectors and establishing an AI Safety and Security Board.
- The order will establish an advanced cybersecurity program to develop AI tools that find and fix vulnerabilities in critical software, shoring up security across existing software and networks. This will build on the Biden administration's existing AI Cyber Challenge, run in partnership with DARPA and private industry.
- The National Security Council and the White House Chief of Staff will drive development of a National Security Memorandum governing the military and intelligence community's safe use of AI in their missions and countering adversaries' military use of AI.
While no timelines have been set, these emerging efforts will have far-reaching impact on cybersecurity both in the United States and across global organizations. I intend to watch closely as these key efforts take shape within agencies. Most importantly, this Executive Order is intended to protect the critical operations that ensure the safety and welfare of people, and of democracy itself, in cyberspace.