The “Human Factor” and Potential for Human Error Increases Law Firm Cybersecurity Risk

November 12, 2019

Though some of us privately resist admitting it, all lawyers are human. Maybe that’s why we lawyers, like other humans, may underestimate the unique importance of the human factor in law firm cybersecurity.

All attorneys owe various ethical and legal obligations to protect information, from competence, confidentiality and safeguarding client property, to obligations under a host of state and federal laws.

Given that computer technologies offer so much promise (and so many problems), it is not surprising that attorneys look to computer technologies to “solve” their information security challenges.

But no computer technology product or service alone will protect information. Firewalls, encryption, anti-malware software and other tech-based security are necessary, but not sufficient. People, processes and technology are equally crucial elements of an effective security program.

Security incidents are most often the result of human failures, not “technology failures.” Often these failures result from “the human factor” — “the instincts of curiosity and trust that lead well-intentioned people to click, download, install, move funds, and more every day.” Learning to recognize where the “human factor” can be exploited is an important part of building an effective law firm security program.

Computers and Networks by Default: Not Secure

The internet and computer devices were not built with security in mind. Networks and devices do not reliably default to secure when you take them out of the box, hook them up or turn them on.

We do business largely in our inboxes and on the internet, so that is where we expect threats. But our technologies won’t address the threats without people baking security into our computers and networks, and using these devices securely.

The Human by Default: Distractible

We all consider ourselves to be rational decision-makers, meaning we follow an objective process when we make choices.

One model for this process, used by fighter pilots and others, is “OODA,” or Observe -> Orient -> Decide -> Act. “Decide” is the rational step in the process (“I know this is the right thing to do.”). Social engineering, the use of deception to manipulate people into divulging confidential information or taking harmful actions, works by short-circuiting the “decide” step, preventing us from acting rationally and securely.

Consider an example familiar to attorneys: the Business Email Compromise (BEC) or Email Account Compromise (EAC) scheme targeting businesses that regularly perform wire transfer payments. Wire transfers are an integral part of how lawyers and clients do business, from funding settlements of litigation to closing transactions.

The BEC scam is just another confidence game, where the bad actors convince humans to (1) click on bad links or attachments, enabling the bad actor to install malware on the company’s computer system, or (2) mistakenly believe that the sender of an email attaching wiring instructions or seeking information is making a legitimate, authorized request. Technology is part of the scheme, but the scam cannot succeed without human assistance.

And BEC scams work. In 2018, the FBI’s Internet Crime Complaint Center (IC3) received 20,373 BEC/EAC complaints, with adjusted losses of over $1.2 billion.

Put into context, those figures represent a 29% increase in the number of complaints, and a 77% increase in adjusted losses, compared to the IC3’s 2017 numbers.

Consider how social engineering tactics apply to BEC schemes:

  • Conveying urgency (“I need this wire transfer executed now.”)
  • Appealing to authority (“This would mean a lot to the firm if you could help us out here.”)
  • Imitating trusted brands (“This looks like a message from the law firm handling this closing.”)
  • Preying on our natural curiosity (“It might be a payment to the firm.”)
  • Taking advantage of conditioned responses to frequent events (“I dutifully click to update software, so I will just do so here.”)
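The tactics above leave recognizable fingerprints in a suspect email. As a rough illustration, the kind of red-flag screen a firm might run (or train staff to perform mentally) on an incoming payment request can be sketched in code. The keyword list, the trusted-domain set, and the function name here are illustrative assumptions, not a production filter or any particular vendor's product:

```python
import re

# Hypothetical screen for human-factor red flags in a payment email.
# URGENCY_PHRASES and TRUSTED_DOMAINS are illustrative assumptions.
URGENCY_PHRASES = ("immediately", "right away", "before end of day", "wire now")
TRUSTED_DOMAINS = {"examplefirm.com"}  # domains the firm actually does business with

def bec_red_flags(sender: str, reply_to: str, body: str) -> list:
    """Return a list of BEC-style red flags found in one email."""
    flags = []
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    reply_domain = reply_to.rsplit("@", 1)[-1].lower()
    if sender_domain != reply_domain:
        # Replies silently diverted to the attacker's mailbox
        flags.append("reply-to domain differs from sender domain")
    if sender_domain not in TRUSTED_DOMAINS:
        # Look-alike domains (e.g., "examp1efirm.com") imitate trusted brands
        flags.append("unrecognized sender domain: " + sender_domain)
    lowered = body.lower()
    if any(phrase in lowered for phrase in URGENCY_PHRASES):
        # Conveying urgency pressures the recipient to skip "decide"
        flags.append("urgent language pressuring immediate action")
    if re.search(r"(new|updated|changed)\s+wir(e|ing)", lowered):
        # Changed wiring instructions are the classic BEC payload
        flags.append("request to change wiring instructions")
    return flags
```

No filter like this catches everything — the point is that each check mirrors a human judgment (“does the reply really go back to the person I think sent this?”) that social engineering tries to rush past.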

Social engineering succeeds because people can be distracted from following important processes. And even when social engineering is not involved, human mistakes configuring and maintaining computer technologies, e.g., getting distracted from performing updates, create vulnerabilities. So we need additional tools (training for awareness, policies to follow) to steer us toward always performing the “decide” piece of the process.

Exploiting the Defaults

Consider the effects of human error in security breach incidents:

  • Anthem, Inc. (personal data of more than 78 million individuals): “[t]he data breach began when a user within one of Anthem’s Subsidiaries opened a phishing email containing malicious content.”
  • Target (credit card and personal data of more than 110 million customers): “The breach . . . appears to have begun with a malware-laced email phishing attack sent to employees at an HVAC firm that did business with the nationwide retailer, according to sources close to the investigation.”
  • Equifax (personal information of 143 million individuals): “Equifax has confirmed that attackers entered its system in mid-May through a web-application vulnerability that had a patch available in March.”

These attacks exploit human characteristics, and the breaches happened because individuals were distracted from making proper decisions. People are susceptible to the very things — curiosity, trust, fatigue, impatience and greed — that can hinder the critical thinking necessary to avoid being scammed. Likewise, failing to follow a rule or policy, such as upgrading software or using strong passwords, is also distinctly human.

Adjust the Defaults by Building in the “Decision” Process

People will not be an effective part of your security program unless they understand risks and use the right processes and computer technology to mitigate those risks.

For instance, anyone who handles electronic funds transfers must know how communications can be compromised, especially through a BEC scam. That’s one way to strengthen the “people prong” of your security program.

Just as important is having funds transfer processes, usually written down as policies, that all employees (especially attorneys) follow. To make sure that an appropriate decision-making process is followed when a change in electronic payment type or bank is requested, a policy might require that more than one person be involved in the decision and that a form of communication other than email be used. Formulating and enforcing good policies makes it much less likely that processes will be ignored when convenience, perceived urgency or greed intrude.
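A policy like that — at least two distinct people, plus out-of-band confirmation, before any change to payment instructions takes effect — can be sketched as a simple checklist in code. This is a minimal illustration of the dual-control idea, assuming hypothetical names; it is not a real firm system:

```python
from dataclasses import dataclass, field

# Illustrative dual-control policy for changes to wiring instructions.
# Class and field names are assumptions for the sketch.
@dataclass
class WireChangeRequest:
    requested_by: str
    new_account: str
    approvals: set = field(default_factory=set)
    verified_by_phone: bool = False  # callback placed to a known-good number

    def approve(self, approver: str) -> None:
        # The person requesting the change can never approve it alone.
        if approver == self.requested_by:
            raise ValueError("requester cannot approve their own change")
        self.approvals.add(approver)

    def may_execute(self) -> bool:
        # Policy: two distinct approvers AND non-email confirmation.
        return len(self.approvals) >= 2 and self.verified_by_phone
```

The design point is that no single distracted (or deceived) person can push the change through: the scam has to defeat two people and a phone call, not one inbox.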

This might mean abandoning or refusing to use technology for a crucial step. Increasingly, many savvy practitioners and clients are insisting that real estate closing wiring instructions, or especially changes to them, be transmitted orally, by phone, only between individuals who know each other.

Law firms must make sure that all of their people make up a well-trained and knowledgeable “security layer,” working with processes and computer technology to protect information. Becoming more aware of the vulnerabilities caused by the human factor is a significant part of that crucial layer of protection.

After all, lawyers are human, too.