Heading into 2021, the role of artificial intelligence (AI) in technology and cybersecurity will only continue to grow as more organizations adopt new and innovative techniques. According to one recent survey, two-thirds of organizations are already using the technology for cybersecurity purposes. These tools allow companies to better prepare for the innovative attacks that cybercriminals continue to develop, often using AI themselves. For example, just last year, criminals employed AI-based software to replicate a CEO's voice and order a cash transfer of €220,000 (approximately $243,000).
For businesses looking to bring more AI into their security stack in 2021, following these six steps will help ensure the effective use of AI without compromising security elsewhere down the line.
Encourage AI to be used responsibly for cybersecurity
Recent data found that three out of four executives believe using AI allows their organization to respond to breaches faster. The threat landscape continues to change at a rapid pace, and cybersecurity leaders must embrace AI to counter the new tactics cybercriminals develop. In turn, businesses must make a concerted effort to ensure that policy supports this embrace. For example, U.S. export controls on AI programs should be written so that businesses remain competitive in the global cybersecurity market in their use of AI.
Ensure AI concepts are involved in threat modeling and risk management
Deployment of AI is growing worldwide, with over a quarter of all AI initiatives already in production and more than one-third in advanced development stages. However, it is prudent to consider how attackers might seek to destabilize this growth and these newly integrated AI systems. Although not yet widespread, attacks such as data poisoning and the backdooring of training models are likely to be deployed against machine learning systems as they become more widely adopted. Threat models that incorporate AI systems are rare and seldom mature. Business leaders should develop ways to bring AI within the scope of risk management activity to minimize this type of threat.
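To make the threat concrete, the sketch below shows one of the simplest poisoning attacks, label flipping, in which an attacker with write access to a training pipeline corrupts a fraction of the labels. This is an illustrative toy built on scikit-learn and synthetic data, not a description of any particular incident; the dataset sizes and flip rate are arbitrary assumptions.

```python
# A minimal sketch of a label-flipping data poisoning attack, using
# scikit-learn and synthetic data. All quantities here are illustrative;
# the point is how little tampering it takes to degrade a model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips the labels of 20% of the training data.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
poisoned[idx] = 1 - poisoned[idx]

dirty = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", dirty.score(X_test, y_test))
```

A threat model that treats training data as an attack surface, rather than trusted input, is exactly what brings this class of risk into scope.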
Develop and promote ethical guidelines for AI systems
Both the public and private sectors should adopt ethical guidelines for AI applications and encourage the inclusion of robust security and transparency controls in those ethical frameworks. There are many ways to apply AI in cybersecurity programs, and it is important to ensure that frameworks like the Microsoft AI principles are recognized. Principles that can be adopted to support ethical AI include eliminating AI bias, protecting AI data, and making sure people are held accountable for how AI technology impacts the world.
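Principles like these only matter if they can be measured. As one small, hypothetical illustration of what auditing for bias might look like, the snippet below checks a single fairness notion, demographic parity, on made-up model outputs; real audits use richer metrics and real data.

```python
# A toy check for one fairness notion, demographic parity: the rate of
# positive predictions should be similar across groups. The arrays are
# hypothetical stand-ins for a model's decisions and group labels.
import numpy as np

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])   # model decisions
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = preds[group == "a"].mean()
rate_b = preds[group == "b"].mean()
print(f"positive rate, group a: {rate_a:.2f}")
print(f"positive rate, group b: {rate_b:.2f}")
print(f"demographic parity gap: {abs(rate_a - rate_b):.2f}")
```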
Support transparency and open sharing of new findings
New research on AI, including new adversarial attacks and defenses, is published frequently in both conference proceedings and academic journals. However, these contributions do not always explain in adequate depth how their findings were reached, and many papers do not provide or link to code, which would allow accurate reproduction of results. Similarly, businesses that conduct AI research generally work in secret, which hampers the verification and implementation of their advances. All of this makes it harder for security practitioners to assess possible attacks against these models, and ultimately more difficult for businesses to prepare for such threats.
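One low-effort habit that supports reproduction is releasing code with its randomness pinned. A minimal sketch, assuming a NumPy-based experiment (the helper name is my own; frameworks like PyTorch add their own generators to seed):

```python
# Pin the common sources of randomness so a published experiment can be
# rerun and produce identical numbers.
import random
import numpy as np

def seed_everything(seed: int) -> None:
    """Seed the standard library and NumPy generators."""
    random.seed(seed)
    np.random.seed(seed)
    # If PyTorch were in use (assumption), you would also call
    # torch.manual_seed(seed) and enable deterministic algorithms.

seed_everything(42)
first = np.random.normal(size=3)
seed_everything(42)
second = np.random.normal(size=3)
assert np.allclose(first, second)  # identical runs, identical numbers
```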
Prioritize offensive research on AI systems
Before implementing AI technologies into products, businesses must validate the potential risks these new technologies introduce. Organizations should ensure that AI applications and related machine learning systems fall within the scope of security program monitoring and testing. That monitoring and testing should combine proactive internal research, to surface unforeseen AI flaws before attackers find them, with established security frameworks and guidance and with partnerships with external security researchers.
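What does offensive testing of a model look like in practice? Below is a deliberately small sketch of one evasion test: for a linear model, the classic fast gradient sign method reduces to stepping each input against the sign of the weight vector. It uses scikit-learn and synthetic data purely for illustration; a real program would reach for a maintained library such as the Adversarial Robustness Toolbox.

```python
# A minimal evasion test against a linear classifier. For logistic
# regression, the fast gradient sign method amounts to perturbing each
# input toward the decision boundary along sign(w).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

eps = 0.5                                        # perturbation budget (arbitrary)
signed = np.where(y == 1, 1.0, -1.0)             # labels recoded as +/-1
w_sign = np.sign(model.coef_[0])                 # attack direction per feature
X_adv = X - eps * signed[:, None] * w_sign       # step toward the boundary

print("clean accuracy:      ", model.score(X, y))
print("adversarial accuracy:", model.score(X_adv, y))
```

Even a crude test like this, run before release, tells a security team how much perturbation the model tolerates and whether robustness needs to be a design requirement.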
Urge the creation and integration of privacy-preserving machine learning
Although machine learning and AI technologies continue to grow in popularity, their privacy implications have yet to be broadly explored. Neural networks in particular can memorize specific examples from their training data. AI systems can therefore be vulnerable to model inversion attacks, in which an attacker recreates training examples by repeatedly querying the model, effectively stealing the data it was trained on. If a model is trained on sensitive data, an attacker may be able to recover information about that data. This increased risk of privacy vulnerabilities has created a growing need for the development and adoption of privacy-preserving machine learning systems, which protect not only the business but also the consumers whose data is used to train the models.
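The best-studied defense is differentially private training. As a conceptual sketch only, the function below shows the core of DP-SGD: clip each example's gradient to bound its influence, then add calibrated Gaussian noise. The function name, clip norm, and noise multiplier are all illustrative assumptions; production systems should use a vetted library such as Opacus or TensorFlow Privacy, which also track the privacy budget properly.

```python
# Conceptual core of DP-SGD: per-example gradient clipping plus noise.
# Pure NumPy, for illustration only.
import numpy as np

def dp_gradient(per_example_grads: np.ndarray,
                clip_norm: float = 1.0,
                noise_multiplier: float = 1.1) -> np.ndarray:
    """Clip each example's gradient, average, and add Gaussian noise."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale        # bound each example's influence
    mean = clipped.mean(axis=0)
    noise = np.random.normal(
        0.0, noise_multiplier * clip_norm / len(per_example_grads),
        size=mean.shape)
    return mean + noise                         # what the optimizer actually sees

grads = np.random.randn(32, 10)                 # hypothetical batch of gradients
print(dp_gradient(grads))
```

Because no single example can dominate the clipped, noised update, the trained model memorizes far less, which directly blunts inversion and membership-style attacks.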
Integrating AI into cybersecurity initiatives is no easy task. Business leaders must recognize the challenges that come with integration and develop solutions that enhance cybersecurity programs while remaining ethical and protecting private data. Using AI technology responsibly, encouraging threat modeling and ongoing AI research, and following clear, public guidelines are essential steps toward successfully integrating AI into cybersecurity programs.
About the Author
Erick Galinkin is Principal Artificial Intelligence Researcher at Rapid7. Erick is a hacker and artificial intelligence researcher whose passion is finding ways to both apply AI to security and apply security to AI. His experience spans the spectrum of information security, including threat intelligence, vulnerability discovery, data science, and malware analysis. As part of Rapid7's OCTO Team, he conducts research and informs policy on the cybersecurity implications of artificial intelligence. Erick has presented his research at leading industry and academic conferences and actively writes papers for academic journals. Outside of Rapid7, Erick does research on the theory of deep learning and privacy-preserving machine learning while pursuing his PhD at Drexel University.