
Published in ALM

In 2024, the cybersecurity landscape is poised for remarkable transformation and formidable challenges. Artificial intelligence (AI) is redefining how we defend against cyber threats, and its prevalence in cybersecurity solutions has reached new heights.

There is a notable surge in the adoption of AI-driven cybersecurity solutions, as organizations proactively integrate machine learning and AI to transform how they detect threats, respond to incidents, and manage vulnerabilities. Predictive analytics and behavioral analysis are leading the charge toward more resilient and adaptive defenses.

Global data and business platform Statista reports that AI investment in cybersecurity will jump from $10 billion in 2020 to an estimated $46.3 billion by 2027, a compound annual growth rate of roughly 25%. Close to 70% of corporations believe AI is necessary to respond to cyberattacks, and 75% of surveyed executives say AI enables their organizations to respond faster to security breaches and system attacks, as well as spot cyber threats and potential malicious activity (“Reinventing Cybersecurity with AI” – Capgemini Research Institute).
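The growth rate implied by those two dollar figures can be checked directly. As a quick sketch (assuming the 2020-to-2027 span is treated as seven compounding years), the standard compound-annual-growth-rate formula gives:

```python
# Sanity check of the cited Statista projection:
# $10B in 2020 growing to $46.3B by 2027.
start, end, years = 10.0, 46.3, 7

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # in the neighborhood of 25% per year
```

The exact value depends on how the endpoint years are counted, but any reasonable convention lands near the quoted 25% figure.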

Cybercriminals “Deepfake” Their Sophisticated Attacks

The usual cyber threats (ransomware, business email compromise, etc.) remain, but with a higher level of complexity due to AI. While companies are investing in AI for cybersecurity defenses, so, unfortunately, are cybercriminals as they evolve their attacks.

For instance, the rise of AI-powered malware poses a significant danger, with cybercriminals deploying malicious programs capable of adapting and evolving in real time to evade traditional defense mechanisms. Moreover, as AI systems become more integrated into critical infrastructure and decision-making processes, the risks of targeted AI manipulation, such as data poisoning or input manipulation, can amplify the consequences of an attack, with the victim often unaware that the attack is even happening.

Cybercriminals are evolving their tactics, employing double-extortion techniques, targeting critical infrastructure, and using advanced methods, including deepfake technology and sophisticated phishing schemes, to exploit human vulnerabilities. We continue to grapple with the challenges posed by business email compromise and AI-driven social engineering.

Learn the word “deepfake” for 2024. A deepfake is audio, video, or imagery in which a person’s face, body, or voice has been digitally altered using deep learning or machine learning (forms of AI) so that they appear to be someone else, or appear to be somewhere or to do something when they were not there or did not do the act in question. It happened to Taylor Swift, Morgan Freeman, Mark Zuckerberg, and even the late President Richard Nixon. And now it has happened to a finance worker at a multinational firm in Hong Kong, who was duped into paying $25 million to cybercriminals deepfaking the company’s chief financial officer (CNN news story).

The worker received an email from what he thought was the company’s CFO asking for a secret transaction. That was followed by a video conference call in which the worker saw what appeared to be members of his staff, including the CFO, but they were deepfakes. The worker paid out the $25 million. This story highlights the sophistication of today’s attacks and why you need to remain vigilant. Review your policies and procedures, as well as your overall cyber culture, to address these ever-changing threats. Business email compromise has long been a problem, but with the advent of new technologies such as AI, it is getting much worse.

We must adopt robust cybersecurity postures and strategies that enhance our resilience in the face of an ever-evolving threat landscape. Generative AI will continue to gain footholds in everything we do, but humans will still be an important component in any cybersecurity ecosystem. It is up to us to employ a combination of technological solutions and comprehensive training programs to mitigate the risks associated with these insidious threats.

Federal Cybersecurity Officials, Agencies Weigh in on AI Threats

On Feb. 15, the Federal Trade Commission (FTC) announced that it is seeking public comment on a supplemental notice of proposed rulemaking that would combat AI impersonation of individuals, while finalizing a rule that bans the impersonation of government and businesses, helping curb AI-generated deepfakes. “Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” said FTC Chair Lina M. Khan. “Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals.”

In January, top United States intelligence agency leaders gathered at the International Conference on Cybersecurity at Fordham University. They warned that advances in AI are facilitating hacking, scamming, and money laundering by reducing the technical know-how required to carry out these crimes, Reuters reports. “It’s going to make those that use AI more effective and more dangerous,” said Rob Joyce, National Security Agency Director of Cybersecurity.

Transparency in cybersecurity and data breaches is a major focus of the Securities and Exchange Commission (SEC). In 2023, the SEC adopted a final rule on cybersecurity risk management, governance, and reporting. Public companies must disclose any cybersecurity incident they determine to be material and describe the material aspects of the incident’s nature, scope, and timing, as well as its material impact or reasonably likely material impact on the registrant. An Item 1.05 Form 8-K is due four business days after a registrant determines that a cybersecurity incident is material. The new rules also require registrants to describe their processes for assessing, identifying, and managing material risks from cybersecurity threats, and to describe the board of directors’ oversight of those risks.

The SEC Cybersecurity Risk Management and Disclosure rules will usher in far-reaching consequences in 2024. Expect a transformation in how businesses disclose cybersecurity risks, impacting investors and the broader financial ecosystem. While these rules enhance transparency and risk management, businesses will grapple with challenges and concerns associated with compliance, shaping the future landscape of cybersecurity disclosure in the corporate realm.


45 billion, every day. JPMorgan Chase, the largest US bank by assets, invests $15 billion a year and employs 62,000 technologists, fighting off an estimated 45 billion cybercrime attempts per day (CNN report). “We have more engineers than Google or Amazon. Why? Because we have to,” said Mary Callahan Erdoes, head of JPMorgan Chase’s asset and wealth management division, at the World Economic Forum in Davos, Switzerland. “The fraudsters get smarter, savvier, quicker, more devious and more mischievous.”

IBM’s 2023 Cost of a Data Breach Report puts the average cost of a data breach at $4.45 million. Organizations that use security AI and automation save an average of $1.76 million compared to organizations that do not.

Businesses need to continue to invest in cybersecurity, and every business should always be hyper-vigilant about cybersecurity and threat intelligence. The cybersecurity landscape of 2024 will be complicated, with new technologies, ever-evolving threats, and the need for human creativity and adaptability, all coming together in ways not envisioned even a few years ago.

As we anticipate the rise of AI in defense strategies, grapple with the intricacies of enhanced ransomware, and confront the challenges posed by human-centric vulnerabilities, it is evident that the future demands a proactive and multifaceted approach to digital security. The specter of cyber warfare underscores the urgency for collaboration on a global scale, and the symbiotic relationship between technology, strategy, and a vigilant human element will shape the narrative of cybersecurity, forging a path toward a safer and more resilient digital landscape for businesses and governments.

About Our Author

Roy Hadley is an advisor and attorney to high-growth businesses, governments, educational institutions, and family/closely held businesses on complex corporate transactions, particularly those involving technology, cybersecurity, artificial intelligence, economic development, telecommunications, outsourcing, and intellectual property. He is a speaker, lecturer, and author on AI, privacy, cybersecurity, and data management, as well as on legal issues affecting educational institutions. He also serves as the Adams and Reese HBCU/MSI Team Leader.

About Adams and Reese’s AI Legal Services

Adams and Reese recently announced advisory and litigation services around artificial intelligence and how the emerging technology impacts your business, from cybersecurity, data privacy, and regulatory compliance to copyrights, patents, trademarks, liability issues, licensing, technology transfers, dispute resolution, and litigation, among other emerging issues. Read more at: