A VP and Insider Threat Manager at one of the five largest U.S. banks contributed to this post.

Insider threat incidents are recognized as among the costliest and most damaging types of data breach, so every organization should include an insider threat program as part of its overall cybersecurity strategy.

Giving employees access to company resources is unavoidable if you want to conduct business. At the same time, that access puts the company at risk. It is the manner in which access is granted, monitored, and taken away (when necessary) that determines how secure the company is.

Controls such as firewalls, IDS/IPS, WAF, and so on have served companies well in hardening the perimeter and keeping out people who should not be there. The inconvenient reality is that beneath that hardened exterior lies a mass of disparate applications designed around connectivity rather than security, hybrid-cloud environments, and devices used for both personal and business purposes. Despite the controls securing the perimeter, it is becoming increasingly difficult to determine where that perimeter is. At best, the perimeter is a dotted line, which makes the company harder to protect.

Before you start

The insider threat program is not as sharply defined as other groups: it sits between HR and cybersecurity and draws in whichever business units and stakeholders are needed. In some organizations the group is kept as a separate unit, much like compliance, while in others it sits under cybersecurity. Whichever configuration an organization opts for, it is vital that the question of where the insider threat program sits is addressed and documented.

Protecting the organization touches many departments as well, from the C-suite and IT to HR, communications, and more.

Another reason several business departments are involved is the “lifetime” of your employees, which must be managed before, during, and after their time at your organization:

Before an employee starts:

  • Sign agreements in addition to the employment contract, including intellectual property (IP) ownership, non-disclosure/confidentiality, and non-compete agreements.
  • Run employment background checks: screen potential employees for criminal history and convictions, identity, education and work history, security clearance violations, credit history, drug use, and more. Most organizations screen criminal and other public records (84%) and previous employment and/or other references (73%) before an employee starts; rescreening for recent criminal records (loss of licenses), credit history (financial problems), and social media activity can help indicate whether an existing employee has become a new or growing risk to the company.

During an employee’s employment:

  • Raise security awareness with continuous training.
  • Continuously audit employee actions and behavior.
  • Monitor employees’ level of access, as long-term employees often ‘inherit’ an excessive level of access through aggregation over the years. For example, an employee can start in finance, move to HR, then move to IT, and so on; a minimal access-review sketch follows this list.
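As a rough illustration of such an access review, the Python sketch below compares an employee's current entitlements against a role baseline and flags leftovers from previous roles. The role names, entitlement names, and baseline table are made up for the example; adapt them to whatever identity or IAM system your organization actually uses.

```python
# Hypothetical sketch: flag entitlements that exceed an employee's current role.
# Role baselines and the entitlement inventory are assumed, illustrative inputs.

ROLE_BASELINES = {
    "finance": {"erp_read", "expense_approve"},
    "hr": {"hris_read", "hris_write"},
    "it": {"ad_admin", "vpn_admin"},
}

def excess_entitlements(current_role: str, entitlements: set[str]) -> set[str]:
    """Return entitlements not justified by the employee's current role."""
    baseline = ROLE_BASELINES.get(current_role, set())
    return entitlements - baseline

# Example: an employee who moved finance -> HR -> IT and kept old access.
leftover = excess_entitlements("it", {"ad_admin", "erp_read", "hris_write"})
print(sorted(leftover))  # ['erp_read', 'hris_write'] -> candidates for revocation
```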

After an employee ends their time with the company:

  • Reconfirm previously signed agreements (see ‘Before an employee starts’) when the employee gives notice or is dismissed. It is better to remind them of the agreements, and the consequences of breaking them, than to catch them after they have already hurt the business.
  • Increase employee monitoring during the notice period, and review the weeks leading up to it if the employee chose to leave on their own.
  • Retrieve and cancel physical access cards, tokens, computers, and other company equipment.
  • Withdraw access to all company subscriptions, including but not limited to email, cloud accounts, and business-critical information and property such as the code base, product roadmap specifications, customer lists, Human Resources files, and financial statements and projections.
  • Monitor the employee’s accounts and devices after departure.

Building an insider threat program can be a daunting task, and it becomes even more daunting when you start evaluating the number of tools that claim to mitigate insider threats. An insider threat program is a collaboration of people, processes, and technology. Use the following basic seven-step process when building your program:

The 7 steps to build an insider threat program from scratch

1) Form an insider threat working group

Establishing an insider threat program is a collaborative effort amongst several stakeholders, including but not limited to Human Resources, Legal, Privacy, Risk Management, Information Technology, and Security. Selecting a representative from each of these organizations to be an active member of your working group is key to establishing and designing a program that mitigates insider threats while also meeting legal, policy, and regulatory requirements. It is this cross-functional representation that enables the insider threat group to address security issues as a business rather than as individual departments. Without a cross-functional group, every investigation would bounce around between departments, taking far longer, requiring many more approvals, and potentially falling through the cracks.

A few considerations to make when building an insider threat program:

  1. Are the stakeholders (e.g. HR and compliance) joining the group on loan or permanently?
     a. If on loan: do you understand how complex it is to run cross-functional teams whose members answer to multiple bosses?
     b. If permanently: how will the CISO overcome resistance from the other departments?
  2. The group should report to the CISO, but should leave a clear paper trail that allows HR/legal to see activity.

Hiring someone with a law enforcement background may help with investigation experience and evidence handling in the case of an incident.

The Public Relations (PR) and Marketing teams should also be aware of the group’s work, and be included early on if an incident occurs that requires notifying government agencies or other regulators.

2) Identify your crown jewels

Starting an insider threat program begins by identifying the company’s most valuable assets and then assessing the risk of potential loss due to insiders. These assets include people (human capital), information/data, intellectual property, technology, essential equipment, and the reputation of the company itself. In the case of reputation, it is just as essential to identify which employee activities could damage it.

Once the most valuable assets are identified, continue by identifying processes and capabilities to monitor users, data, and assets according to the risk they present to the organization. An insider threat program is much more than a data loss prevention (DLP) program.

Monitoring predefined policy violations (e.g. tracking files marked as “confidential”) is not sufficient. The biggest limitation of legacy DLP solutions is that incidents easily go undetected when there is no ability to go back and see non-policy-violating events or the full picture. Even when a policy has been violated, DLP solutions show limited context on what the user was doing and what the user’s intention was.

As an organization it is critical to be able to go back in time and answer: “Who leaked this piece of information?” By capturing only policy-violating events, you cannot answer that question, and the information is lost forever.
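To make the point concrete, here is a minimal sketch of what retaining all file-access events (not just policy violations) buys you: a retrospective “who touched this file?” query. The event store, table, and field names are illustrative, not any particular product.

```python
# Minimal sketch: record *every* file-access event so leaks can be investigated
# after the fact. SQLite is used only to keep the example self-contained.
import sqlite3

conn = sqlite3.connect("endpoint_events.db")
conn.execute("""CREATE TABLE IF NOT EXISTS file_events (
    ts TEXT, user TEXT, host TEXT, action TEXT, path TEXT, destination TEXT)""")

def record(ts, user, host, action, path, destination=""):
    """Append one endpoint event, whether or not it violates a policy."""
    conn.execute("INSERT INTO file_events VALUES (?, ?, ?, ?, ?, ?)",
                 (ts, user, host, action, path, destination))
    conn.commit()

def who_touched(path_fragment):
    """Retrospective question: which users copied, emailed, or uploaded this file?"""
    cur = conn.execute(
        "SELECT ts, user, action, destination FROM file_events WHERE path LIKE ?",
        (f"%{path_fragment}%",))
    return cur.fetchall()

record("2024-05-01T09:12:00", "jdoe", "LT-042", "copy_to_usb", "Q3_customer_list.xlsx")
print(who_touched("customer_list"))
```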

3) Define what an insider means to your organization

Understanding the types of insiders and the impact they can have on your organization is extremely important. Take the time to consider the following types of insiders and how they may impact you:

Unintentional insiders:

  • Accidental– Often lacking security knowledge, this insider fails to take proper security measures when handling valuable information, resulting in loss of information. Email phishing scams are a common example, where the employee is the victim of an external actor’s attempt to gain access to the company. A more extreme example is a man who deleted his entire company with one line of code, according to Independent.
  • Negligent– Possesses a sufficient level of security knowledge but circumvents security measures to accomplish tasks, resulting in a loss of information. Example: emailing sensitive documents to personal devices to make it easier to work on them from home. Security measures and policy must still be followed, regardless of convenience.

Intentional insiders:

  • Malicious– Takes actions that disable or degrade the availability of data or resources. These intentional insiders may work alone or in collusion with others whose contributing actions may be witting or unwitting.
  • Non-malicious, seeking financial gain– These insiders take actions often related to exfiltrating sensitive or proprietary information from the company to gain a competitive advantage with their new employer or sell the information for financial gain.
  • Disgruntled employees– This insider may seek to harm a company’s reputation or impede business operations in an effort to take revenge for dismissal, missed promotions/bonuses, or perceived maltreatment. For example, an employee who was fired for poor performance destroyed an ex-employer’s cloud accounts, according to The Register.

Privileged users with legitimate credentials make unauthorized access harder to detect. With some knowledge, privileged users know how to stay inside their “access perimeter”, how to avoid security measures, and which high-value data they can steal. In addition, they generally have a motive, whether work-related events (e.g. dismissal) or personal ones (e.g. a financial crisis), that turns them into an intentional insider threat.

Everyone at your organization is potentially an insider threat. Although accidental and negligent behavior typically causes more breaches, intentional insider threats are typically more severe. Intentional insiders either steal high-value data (e.g. sales numbers, customer lists, intellectual property) or are motivated to cause damage (e.g. wiping servers, corrupting data). Giving employees clear expectations of their role and guidelines, combined with regular feedback, especially for those under-performing, helps avoid creating surprised, disgruntled employees.

Segmenting employees based on sensitivity

Employees handle data with different levels of sensitivity and work under different levels of risk (remote work vs. office work). A travelling sales representative might have access to customer data, business goals, and current revenue numbers, all data with a high level of sensitivity for the organization. Meanwhile, others might not handle particularly sensitive data and only work from the office.

Organizations like this might want to start rolling out their insider threat program with the users who have access to the most sensitive data.

4) Implement and maintain information security controls

Create a data use policy that controls access to information using the principle of least privilege. This policy should limit employees to the access needed for their current role, and access should be removed when their role changes or they no longer need it. Likewise, the data use policy should clearly state how data within your organization should be handled, transmitted, and stored. Consider data classification and encryption capabilities in this step.
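A minimal sketch of what least-privilege enforcement against such a policy could look like is shown below. It is not a specific product; the role names, data classifications, and policy table are illustrative assumptions.

```python
# Sketch: allow an operation only if the employee's *current* role needs that
# data classification, and revoke access when the role changes.

POLICY = {
    "sales_rep":       {"customer_contact": {"read"}},
    "finance_analyst": {"financial_statements": {"read"}, "customer_contact": {"read"}},
    "hr_partner":      {"hr_files": {"read", "write"}},
}

def is_access_allowed(role: str, classification: str, operation: str) -> bool:
    """Check an operation against the role's entitlements in the data use policy."""
    return operation in POLICY.get(role, {}).get(classification, set())

def on_role_change(user: str, old_role: str, new_role: str, revoke) -> None:
    """When a role changes, revoke every classification the new role does not justify."""
    for classification in POLICY.get(old_role, {}):
        if classification not in POLICY.get(new_role, {}):
            revoke(user, classification)

print(is_access_allowed("sales_rep", "financial_statements", "read"))   # False
on_role_change("jdoe", "finance_analyst", "sales_rep",
               revoke=lambda u, c: print(f"revoke {c} from {u}"))
```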

5) Build insider threat use cases

After your organization’s crown jewels are identified and the data use policy is implemented, document the threats you want to identify (also known as use cases). These use cases should consider HR/personnel actions, physical security, and data security events, and should grow in sophistication as your program matures. A basic example is individuals attempting or requesting access to data they are not authorized to access or have no need for. Other examples include the use of unapproved data storage, such as USB devices or personal cloud storage, in violation of data policies.
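To show how such use cases can be expressed as concrete detection rules, here is a small illustrative sketch covering the two examples above. The event fields, use-case identifiers, and destination labels are assumptions made for the example, not a particular vendor's schema.

```python
# Illustrative use-case rules: (UC-01) attempts to access data outside a user's
# authorization, and (UC-02) writes to unapproved removable or cloud storage.

UNAPPROVED_DESTINATIONS = {"usb_removable", "personal_cloud"}

def evaluate_event(event: dict, authorized_datasets: set[str]) -> list[str]:
    """Return the list of use-case hits for a single endpoint/data event."""
    hits = []
    if event["action"] == "access_attempt" and event["dataset"] not in authorized_datasets:
        hits.append("UC-01: access attempt outside authorization")
    if event["action"] == "file_write" and event["destination"] in UNAPPROVED_DESTINATIONS:
        hits.append("UC-02: data copied to unapproved storage")
    return hits

event = {"user": "jdoe", "action": "file_write",
         "dataset": "customer_list", "destination": "usb_removable"}
print(evaluate_event(event, authorized_datasets={"customer_list"}))
```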

Use cases should also include a standard practice of “protective monitoring” when a member of staff resigns or is terminated. Protective monitoring is essentially a solution that gives visibility into and an overview of who is accessing the business’ most sensitive data: insight into data access that is used (and abused). Protective monitoring solutions are required under some regulatory frameworks and industry best practices. Although such a tool is not required for private companies, it is necessary for sufficient security.

6) Pilot, evaluate and select an insider threat tool

There are multiple types of tools that monitor user activity to detect risky behavior. Start by evaluating the current tools in your organization and placing them into a matrix that identifies gaps in your ability to monitor the use cases you created.
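One lightweight way to build that matrix is sketched below: map each existing tool to the use cases it covers and list the use cases nothing covers. The tool names and use-case labels are placeholders for whatever you defined in step 5.

```python
# Simple gap-analysis sketch: which use cases are not covered by any current tool?

COVERAGE = {
    "DLP":        {"UC-02 unapproved storage"},
    "SIEM":       {"UC-01 unauthorized access attempt"},
    "Proxy logs": set(),
}
USE_CASES = {
    "UC-01 unauthorized access attempt",
    "UC-02 unapproved storage",
    "UC-03 protective monitoring of leavers",
}

covered = set().union(*COVERAGE.values())
for gap in sorted(USE_CASES - covered):
    print("No monitoring capability for:", gap)   # e.g. UC-03 -> candidate for a new tool
```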

If you have a budget for a tool, start with a user activity monitoring tool that has its own agent on the endpoint (very important), and place administrative control of this tool with your insider threat program, supporting “least privilege” for network administrators and other IT security roles, to ensure the integrity of investigations. Privileged users control the audit, security, and data loss prevention tools in your organization and can therefore interfere with those tools’ ability to monitor and/or log insider risks. Placing all security tools in the hands of your systems administrators creates a single point of failure in your insider threat monitoring program. It also puts your organization at unnecessarily high risk if an administrator goes rogue.

Why an agent makes the difference

User behavior analytics without a dedicated host-based agent rely on disparate system logs that are cumbersome to manage when correlating multiple insider threat data sources, difficult to collect and analyze at scale, lacking the granular context needed to determine an insider’s intent, and open to manipulation if your administrator becomes an insider threat. The visibility they provide varies with the quality of the logs and of the log ingestion pipeline: low-quality logs mean no visibility.

Monitoring network traffic can be used with varying success. Because traffic cannot easily be attributed to a single user, visibility into who has done what is lost. At best, these tools can determine traffic for a single host or endpoint, but not for shared systems (a single host with multiple sessions for many users) and not per user. An agent can attribute network traffic and endpoint events to a single user.
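The contrast is easiest to see side by side. The sketch below compares a bare flow record (host-level at best) with the same traffic enriched by a hypothetical endpoint agent that adds user, session, and process context; all field names are illustrative.

```python
# Sketch: attribution with and without an endpoint agent.

flow_record = {            # what network monitoring alone typically gives you
    "src_ip": "10.1.4.23", "dst": "storage.example.com", "bytes": 48_000_000,
}

agent_event = {            # what an endpoint agent can add to the same traffic
    **flow_record,
    "host": "LT-042",
    "user": "jdoe",                 # session owner, even on a shared host
    "session_id": "rdp-7",
    "process": "outlook.exe",
    "file": "Q3_customer_list.xlsx",
}

def responsible_user(event: dict) -> str:
    """Answer 'who did this?' if the event carries user attribution."""
    return event.get("user", "unknown")

print(responsible_user(flow_record))   # 'unknown'
print(responsible_user(agent_event))   # 'jdoe'
```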

Consider insider threat monitoring tools that combine an endpoint agent, case monitoring, machine learning, screenshot capture, file content inspection, multi-factor authentication, and real-time actions, simplifying your insider threat monitoring into a single platform and significantly increasing return on investment.

7) Apply lessons learned

Events and use cases that are referred for investigation may result in false positives. When this occurs, feed the event information back into the use case development effort to continually refine your detection capabilities. Likewise, when insider threat investigations successfully identify threats, review the actions taken by the insider, and build and improve your use cases to prevent or detect the threat sooner.

The policies need to be updated and tested regularly based on the insider threat response loop, with scheduled audits and “cyber readiness drills”.

Sources

  • CNBC, “Companies are ramping up their employee screening strategies. Here is what you can expect”, Aug 10, 2018
  • CNBC, “Uber fined nearly $1.2 million by British and Dutch authorities for 2016 data breach”, Nov 27, 2018
  • Independent, “Man accidentally ‘deletes his entire company’ with one line of bad code”, Apr 14, 2016
  • IT World, “Fired VMware admin admits virtual rampage launched from a McDonald’s”, Aug 17, 2011
  • Mashable, “How one disgruntled employee can destroy your company”, Aug 23, 2015
  • The Register, “Vengeful sacked IT bod destroyed ex-employer’s AWS cloud accounts. Now he’ll spend rest of 2019 in the clink”, Mar 20, 2019
  • Wired, “Programmer Convicted in Bizarre Goldman Sachs Case—Again”, Jan 5, 2015