About the Event

Artificial intelligence (AI) can process vast amounts of data, allowing it to detect, predict, and respond to cyber threats quickly. This capability positions AI to massively augment global cybersecurity operations in the public and private sectors. These same strengths, however, pose risks. Cybersecurity experts warn that a sharp rise in AI-generated hacking attempts by actors at home and abroad is all but certain; moreover, AI-based cybersecurity applications may themselves become a vector for novel and unanticipated attacks.

This panel asked: is increased reliance on AI in cybersecurity a double-edged sword? The World Affairs Council of Philadelphia was joined by a panel of industry, regulatory, and security professionals for a moderated discussion and audience Q&A.

To set the stage for the panel, Sahel Assar described the potential benefits and risks of AI developments, including improved food production, better weather prediction, and novel medical treatments alongside eroded privacy and threats to data security. The government clearly considers AI an important topic, as evidenced by congressional hearings and the White House's Blueprint for an AI Bill of Rights. Sarah Hammer then addressed the matter, drawing on her background in finance, regulation, and government. She said that even though phishing and ransomware attacks are increasing in frequency, the financial sector has not yet established guidelines on cybersecurity. She elaborated that proper staff training to prevent human error is the first line of defense, but that AI can also be used defensively to recognize the patterns of malicious actors and programs and deny them access to systems.

Michael McLaughlin addressed the next question about how AI tools will be used to fight hackers and how technological developments will affect geopolitics. He first outlined how earlier generations of cyber defense blocked programs that matched a known list of malicious software, while newer generations must instead recognize the behaviors of malicious programs (a rough sketch of this contrast appears below). Offensive AI consequently needs to mimic the behavioral profile of benign programs to bypass these active defenses, making data the key to training any effective AI program. He also pointed to the growing volume of disinformation created with AI programs and the threat it poses to democracy. Mr. McLaughlin explained that data protection frameworks are being established on a state-by-state basis but that federal guidelines are absent.
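To make the distinction concrete, here is a minimal sketch of the two defense generations Mr. McLaughlin described. It is illustrative only: the hashes, behavior names, and risk scores are hypothetical and do not come from any real security product.

```python
# Illustrative sketch of list-based vs. behavior-based defense.
# All values below are hypothetical, invented for this example.

# Generation 1: signature-based defense blocks anything on a known bad list.
KNOWN_MALWARE_HASHES = {"9f86d081deadbeef", "e3b0c44298fc1c14"}  # hypothetical

def signature_check(program_hash: str) -> bool:
    """Block only if the program's hash matches a known-bad entry."""
    return program_hash in KNOWN_MALWARE_HASHES

# Generation 2: behavior-based defense scores what a program *does*,
# so it can flag malware that has never been seen before.
SUSPICIOUS_BEHAVIORS = {
    "encrypts_user_files": 5,      # classic ransomware signal
    "disables_backups": 4,
    "contacts_unknown_host": 2,
    "reads_browser_credentials": 3,
}

def behavior_check(observed_behaviors: list[str], threshold: int = 5) -> bool:
    """Block if the combined risk score of observed behaviors crosses a threshold."""
    score = sum(SUSPICIOUS_BEHAVIORS.get(b, 0) for b in observed_behaviors)
    return score >= threshold

# A novel program absent from every blocklist is still caught by its behavior:
print(signature_check("a1b2c3d4e5f6"))                               # False
print(behavior_check(["encrypts_user_files", "disables_backups"]))   # True
```

The contrast shows why the panel framed data as decisive: the behavior-based check catches programs it has never seen before, which is exactly why offensive AI must learn, from data, how to look benign.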

The next section of the discussion focused on how organizations can protect themselves in a way that is not cost-prohibitive. Arielle Baine discussed building new systems to be secure by design, meaning that security features are included in the base product rather than added afterward as upgrades. She also said that companies should treat the regular release of security patches as normal and should build everything in secure programming languages. Tracey Brand-Sanders continued the conversation by arguing that AI is truly the only reliable way to defend against AI: billions of digital events occur each day, and no human being can address them individually. She highlighted how new security systems, in combination with regular web safety training, can protect a company from inevitable attacks. Ms. Baine emphasized the seriousness of the threat, noting that virtually all critical systems in the U.S., like the electric grid, rely on the internet and computers, meaning that an attack that shuts down or corrupts a system can have disastrous consequences for everyday life.

The conversation then turned to the recent executive order and how it will guide AI development. Sarah Hammer noted that because the order comes from the executive branch, it does not have the force of law, meaning that legislative action is necessary to establish a true legal framework. Ms. Hammer highlighted that, as the Treasury Department crafts its best practices based on the order, it will be crucial to design AI as a tool that pursues human goals of equity and justice. Ms. Baine added that because AI is trained on past data, it risks perpetuating unrepresentative trends. She offered the example of an image-generating AI instructed to create a picture of programmers: it produced an image showing exclusively white men. Returning to the legal aspect of the order, Mr. McLaughlin commented that even though an executive order is restricted in scope, it will likely be used as a legal yardstick, leaving private companies liable if they do not adhere to its principles. Regarding geopolitics, he warned that the U.S. needs guidelines rather than restrictions, since a severely limited system of data collection would leave the country lagging behind China, which collects data freely.

As the final topic before audience questions, Ms. Assar asked what AI concerns keep the panelists up at night. Ms. Hammer said that the lack of any legal framework concerns her, as does the fact that no robust cybersecurity insurance market exists because of the difficulty of assessing risks. Ms. Brand-Sanders cited the threat of malicious actors impersonating executives through deepfake technology. Ms. Baine added her concern about cyberattacks that bring power grids offline, such as those that occurred at the beginning of the invasion of Ukraine. Mr. McLaughlin voiced the concern that just as AI has been used to design antibodies, it might also be used to create pathogens for the next pandemic.

To begin the audience Q&A, Bodine student Sophia asked about the recent AI impersonations of the singer Drake and how people can train themselves to spot fakes. Mr. McLaughlin answered that, unfortunately, AI will develop to the point that no human will be able to spot online fakes without the assistance of technology. Beyond technological solutions, he recommended that people take care to avoid the viral sharing that spreads these fakes. Ms. Brand-Sanders added that she thinks holding tech companies responsible for damages will incentivize them to protect against deepfakes. The second audience question was how society can move away from the mindset of outdated legacy systems. Ms. Baine answered that cybersecurity needs to become a cornerstone of computer science education. Ms. Hammer also noted that people need to remain agile and curious about technology so they can seize new opportunities, such as using AI to rewrite old code in newer, more secure languages. The final question was how America can defend itself against cyberattacks. Mr. McLaughlin described how China collects data from the new cities it is building as part of the Belt and Road Initiative and argued that, without comparable data collection, the U.S. will not have the material to train its own AI and remain competitive.

About the Speakers

  • Sarah Hammer, Executive Director at the Wharton School of the University of Pennsylvania and Adjunct Professor at the University of Pennsylvania Law School
  • Michael G. McLaughlin, Principal, Government Relations, Cybersecurity and Data Privacy Practice Group Co-Leader, Buchanan Ingersoll & Rooney PC
  • Arielle Baine, Chief of Cybersecurity, Region 3, Cybersecurity and Infrastructure Security Agency (CISA) of the U.S. Department of Homeland Security
  • Tracey Brand-Sanders, VP & Chief Information Security Officer at UGI

About the Moderator

  • Sahel Ahyaie Assar, Chair, Blockchain and Digital Assets Practice Group, Buchanan Ingersoll & Rooney PC

Event Sponsor