CYCOGNITO CYBERSECURITY GLOSSARY
An attack surface is the sum of an organization’s attacker-exposed IT assets, whether these assets are secure or vulnerable, known or unknown, in active use or not, and regardless of IT/security team awareness of them. The attack surface changes continuously over time and includes assets that are on-premises, in the cloud, and in subsidiary networks, as well as those in third-party or partner environments.
To see how CyCognito can help you understand your attack surface see this page.
Attack Surface Management
Attack surface management (ASM) is the process of continuously discovering, classifying and assessing the security of your IT ecosystem. The process can be broadly divided into (a) activities performed in managing internet-exposed assets (a process called external attack surface management, or EASM) and (b) management activities on assets accessible only from within an organization. Many organizations use an assortment of tools and manual processes to secure their attack surface, making the process fraught with operational complexity, human error and best-guess analysis.
External attack surface management can be a particularly daunting task due to the presence of “unknown unknowns,” as well as assets housed on partner or third-party sites, workloads running in the public cloud, IoT devices, old, abandoned or deprecated IP addresses and credentials, and more.
To see how CyCognito does EASM, go to this page.
An attack path is one or more security gaps that attackers can exploit to gain access to an IT asset and to move from one IT asset to another. A clear understanding of possible attack paths helps security teams accurately gauge cybersecurity risk.
Attack Surface Protection
Attack surface protection is the process of continuously discovering, classifying and testing the security of your attacker-exposed IT ecosystem. It combines advanced ASM capabilities with automated multi-factor testing to discover the paths of least resistance that attackers are most likely to use to compromise organizations. The first, foundational step in attack surface protection is to fully map the organization’s externally-exposed attack surface. While most ASM and EASM approaches stop there, or rely on a proxy risk measure (such as banner grabbing), attack surface protection goes a step further: it uses active security testing rather than indirect security measurements. To complete the protection process, discovered risks must be prioritized so that security teams can plan their remediation efforts and address the most potentially damaging issues first.
To see how CyCognito does it, see this page.
An attack vector is a path that an attacker can use to gain access to an organization’s network. Attack vectors can include exposed assets or abandoned assets, but they can also include unpatched software vulnerabilities, misconfigured software, weak authentication, and domain hijacking.
To find out how to discover attack vectors, see this page.
Banner grabbing is a process of collecting intelligence about IT assets and the services available on those assets. Banners provide information such as the version of software running on a system. That intelligence can be used by IT and security administrators, or by attackers, to get a sense of what vulnerabilities may be present on the asset. Banners provide limited value because the only security issues they might indicate are software version-related (e.g., CVEs), and even then banners won’t reflect that a system has been patched. As a result, banner grabbing is prone to false positives.
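As a minimal sketch of what banner grabbing looks like in practice, the function below connects to a TCP service and reads whatever the service volunteers; the host, port, and banner text in the comments are illustrative, not taken from any real system:

```python
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect to a TCP service and return any banner it sends unprompted."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            # Many services (SSH, SMTP, FTP) announce themselves immediately,
            # e.g. "SSH-2.0-OpenSSH_8.9p1" -- note the banner says nothing
            # about whether backported security patches have been applied.
            return sock.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""  # service expects the client to speak first (e.g. HTTP)
```

The patch-blindness noted above is visible here: two hosts returning the same version banner may differ completely in actual exposure.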
A botnet is a collection of internet-connected systems each running remotely controlled software that performs a variety of tasks. Botnets are highly useful for performing distributed, coordinated activities. While botnets are infamous for their use by malicious actors to perform distributed denial of service (DDoS) attacks, they can be used for positive activities. For example, the CyCognito platform uses a botnet to perform reconnaissance by continuously detecting and security testing IT assets from locations across the world, at multiple intervals, undetectably and non-intrusively.
Continuous Security Monitoring
Continuous security monitoring is the process of monitoring an organization’s IT ecosystem to identify and provide timely visibility into cyberthreats and risks. By discovering and monitoring all assets in the IT ecosystem, both known and unknown, security professionals can find the paths of least resistance and the vulnerabilities that attackers may use as security gaps to penetrate the organization.
Cyber Kill Chain
A cyber kill chain is a series of 7 stages that model the primary actions conducted in a cyberattack. Lockheed Martin developed the cyber kill chain model in 2011 to help cyber defenders identify and prevent the steps of an attack. Other organizations have slightly different models and critics have noted that attackers increasingly flout the cyber kill chain model, but there is broad agreement that organizations should always strive to eliminate potential threats as early as possible in the cyber kill chain.
Another model for the cyber kill chain is the MITRE ATT&CK framework which provides a detailed list of tactics and techniques attackers will use.
The seven phases of the Lockheed Martin model are: reconnaissance, weaponization, delivery, exploitation, installation, command & control, and actions on objectives. An attacker conducts reconnaissance by probing for security gaps directly, or by purchasing reconnaissance services or results. Once a weak point has been identified, the attacker moves to the weaponization phase and develops (or purchases) a weapon to exploit it, such as a virus or zero-day. In the delivery phase, the weapon is launched, for example, by email, by delivering an infected USB key, via cross-site scripting, or by accessing a system remotely. Once the target is exploited, the attacker can install tools to maintain access, execute actions remotely, cover their tracks, and gather data. During command and control and actions on objectives, data may be exfiltrated, other systems may be targeted and, in the case of ransomware, data may be encrypted for a “double” extortion: first by selling data or access to criminals, and then by having the victims pay for access to their own systems and data.
Cyber reconnaissance takes its name from the French word “reconnaissance,” meaning “surveying,” and is adapted from the military practice of reconnaissance: conducting an exploratory survey of enemy territory.
Attackers use cyber reconnaissance techniques to identify the easiest digital entry points into their targets. Reconnaissance can include passive activities where an attacker searches for information without compromising the target. Reconnaissance can also be active, where the attacker gains unauthorized access and engages to gather information. Many attacks include both types.
When conducted defensively, cyber reconnaissance helps organizations to understand where and how cyberattackers could gain access to their networks.
Cyber Risk Management
Cyber risk management involves continuously identifying, assessing, and mitigating potential cyber risks as well as understanding their potential impacts. Because cyber risk cannot be effectively managed without a comprehensive view of the overall attack surface, it is vital to have an awareness of all assets and understand their business context.
A data breach occurs when an unauthorized or potentially malicious party gains access to confidential, sensitive or protected data. Some data breaches expose personally identifiable information (PII), which may include national identity numbers, credit card numbers, or medical records.
To see an example of a data breach, see this page.
Defensive security is a proactive approach that focuses on prevention, detection, and response to attacks from the perspective of defending the organization. For example, blue teams are generally thought of as defensive security. Defensive security is in contrast to offensive security, which is an approach designed to look at the organization from the perspective of an adversary. Penetration testers and red teams are generally seen as offensive security.
DNS History/Passive DNS
The traditional Domain Name System (DNS) is a real-time, distributed database system in which queries to DNS servers and resolvers translate hostnames into IP addresses and vice versa. While not all DNS data is public, much of it can be easily accessed and much of the information is in clear text. While traditional DNS records are transient, passive DNS enables the collection and archiving of historical DNS data, which contains a wealth of information about DNS queries on the internet. Analysis of this data provides insight into old DNS records, new values, and differences over time, and can reveal possible attack vectors. An attacker or defender with this information can see where, how, and when your organization’s domain names and IP addresses have changed over time, and who changed them.
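As a sketch of the kind of analysis described above, the function below compares two hostname-to-address snapshots (such as one archived by a passive DNS service and one current) and reports the drift; all hostnames and addresses used here are hypothetical:

```python
def diff_dns_snapshots(old: dict, new: dict) -> dict:
    """Compare two hostname -> IP snapshots and report what changed."""
    added   = {h: ip for h, ip in new.items() if h not in old}
    removed = {h: ip for h, ip in old.items() if h not in new}
    changed = {h: (old[h], new[h])
               for h in old.keys() & new.keys() if old[h] != new[h]}
    return {"added": added, "removed": removed, "changed": changed}

# Hypothetical snapshots: a record that quietly changed its address, one
# that disappeared (a possible abandoned asset), and a brand-new entry.
old = {"www.example.com": "192.0.2.10", "legacy.example.com": "192.0.2.20"}
new = {"www.example.com": "192.0.2.99", "staging.example.com": "192.0.2.30"}
```

Each bucket maps to a defensive question: "removed" entries may point to abandoned assets, "added" entries to unvetted new ones, and "changed" entries to possible hijacking or migration.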
To find out more about DNS, see this blog.
Ethical hacking is a form of offensive security that involves authorized attempts to break into systems and applications in order to test an organization’s security posture. One example of ethical hacking is penetration testing.
EXTERNAL ATTACK SURFACE MANAGEMENT
External Attack Surface Management (EASM) is an emerging market category that Gartner created in March 2021 to describe a set of products that supports organizations in identifying risks coming from internet-facing assets and systems that they may be unaware of.
EASM solutions continuously discover, classify and assess the security of your internet-exposed attack surface from the outside in. EASM provides a view of an organization’s IT assets, as well as those closely related to the organization, as seen by attackers looking at the organization from the outside. For this reason, EASM excels at finding “unknown unknowns.”
Attack surface protection solutions build on that concept and combine the market’s most advanced External Attack Surface Management capabilities with automated multi-factor testing, to discover the paths of least resistance that attackers are most likely to use to compromise organizations.
To find out more about EASM, go to this page.
An IT asset is a piece of software or hardware within an information technology environment.
An organization’s IT ecosystem is the network of services, providers and other organizations connected to the organization that create and deliver information technology products and services. This ecosystem includes entities that are connected to but not controlled directly by the organization, such as a third-party vendor, an independent subsidiary or a company added via merger or acquisition. Cloud computing resources used by the organization are also part of its IT ecosystem. All of the assets associated with all of the IT ecosystem entities define the organization’s attack surface.
Kali Linux is an open-source, specialized Linux platform developed and supported by Offensive Security and used for security research, penetration testing and security forensics. The platform packages a number of tools and utilities for security professionals, featuring popular tools such as Nmap, Metasploit, OWASP ZAP, Wireshark and others.
Maltego is an open-source intelligence (OSINT) tool for gathering and connecting data on the internet and illustrating relationships and links between things on a node-based graph. The platform offers a graphical user interface (GUI) that allows security professionals to mine data and helps IT and security teams build a picture of threats, their complexity and severity.
MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) is a curated, globally accessible knowledge base of adversary tactics and techniques based on real-world observations. The framework represents the various phases of an attack lifecycle, as well as the platforms targeted. While the majority of the ATT&CK framework is geared towards providing insight into detecting attackers in real time during an attack, its Reconnaissance and Resource Development tactics (previously known as PRE-ATT&CK) are focused on an attacker's pre-attack preparation.
To see how CyCognito supports the MITRE ATT&CK framework, go to this page.
MITRE PRE-ATT&CK was a framework of tactics and techniques to help uncover the many pre-compromise behaviors attackers perform. It was deprecated and removed by MITRE in late 2020 and has since been rolled into the Enterprise matrix under the Reconnaissance and Resource Development categories. Those techniques can also be found under the MITRE Enterprise > PRE matrix, and the primary Enterprise matrix also lists Initial Access techniques as well as additional technique categories that follow an attack through to execution.
To find out more about MITRE PRE-ATT&CK, see this blog.
NIST CYBERSECURITY FRAMEWORK
The NIST Framework for Improving Critical Infrastructure Cybersecurity (or “The Framework” for short) consists of standards, guidelines, and practices to promote the protection of critical infrastructure. It was created through collaboration between industry and government, and is published by the National Institute of Standards and Technology (NIST). The Framework was originally designed to foster risk and cybersecurity management communications among both internal and external organizational stakeholders.
To see how CyCognito supports the NIST framework, go to this page.
Offensive security is a proactive approach that involves testing an organization’s security posture from the viewpoint of an adversary. The intent of offensive security is to validate that an organization’s security performs as intended. It can include activities such as ethical hacking and penetration testing to identify and remediate risks that a malicious party could exploit. By employing offensive security methods, security teams can act like attackers to help the organization uncover and eliminate paths of least resistance before attackers can exploit gaps.
Open-Source Intelligence (OSINT) refers to the collection and analysis of any information about an individual or organization that can be legally gathered from free, public sources. While much of the information comes from the internet and can include usernames, social network profiles, IP addresses, and public records, it also includes data found in images, videos, webinars and public speeches. OSINT operations require no specialized skills and can be conducted by anyone, including IT and security teams or attackers, who use a variety of techniques to sift through visible data and find the opening they need.
OWASP TOP 10
The Open Web Application Security Project (OWASP) is an online non-profit community that aims to improve software security. Since 2003, OWASP has periodically published a Top 10 list of the most critical and common web application security risks. The data behind the list comes from many sources including security vendors, consultants, and organizations.
Penetration testing, or pen testing, is a security practice in which a real-world attack on a subset of an organization’s IT ecosystem is simulated in order to discover the security gaps that an attacker could exploit. Such testing was born in the 1960s with the goal of revealing to the organization how a skilled and motivated attacker could get past, or penetrate, the organization’s defenses. Pen testing is now a requirement of several regulatory regimes, including the Payment Card Industry Data Security Standard (PCI DSS), the Federal Information Security Modernization Act (FISMA) and the Health Insurance Portability and Accountability Act (HIPAA).
While manual pen testing can provide useful insights, the process is costly, time-consuming and inherently unscalable because it is based on a simulated attack conducted by a skilled individual. Pen testing is only done on assets that are already known to, and protected by, IT and security teams. Other drawbacks of manual pen testing are that it is performed only periodically and produces a point-in-time snapshot of known enterprise assets that is often outdated by the time the analysis is complete.
To learn about the state of pen testing in 2021, see this report.
PATH OF LEAST RESISTANCE
The path of least resistance in cybersecurity is an attacker’s easiest route to reaching a target asset. When an attacker is considering an attack, they will typically look for the easiest way to succeed such as externally-exposed systems and assets that are mostly overlooked by organizations. IT assets owned, created or used by lines of business, third parties, partners or subsidiaries can easily become such a path.
A proactive security approach is the practice of taking measures to predict and prevent a breach before it ever happens. Proactive security teams fix security gaps before they can be exploited and mitigate their highest risks to stay ahead of potential attackers. A reactive approach, by contrast, involves detecting incidents in progress or after the fact and then responding, for example by implementing security solutions in response to a breach that has already occurred. Proactive security emphasizes prediction and prevention over detection and response.
To see how to move to proactive security, go to this blog.
Ransomware is a form of malware that leverages encryption to hold the operations of an organization hostage in exchange for a ransom payment. These payments often must be made via cryptocurrency. In ransomware attacks, an attacker gains access to a victim’s data, encrypts it such that the victim can no longer access it, and holds the data hostage unless an extortion payment is made. Ransomware attacks can be initiated by exploiting gaps in an organization’s attack surface to take control of IT assets and move laterally, as well as via other channels such as phishing attacks. Due to the effectiveness of ransomware, an industry of organized crime has emerged around it, including ransomware as-a-service providers.
Read how ransomware was leveraged to attack a major US oil pipeline.
RED, BLUE, AND PURPLE TEAMS
Red, Blue, and Purple Teams consist of security professionals who are integral to maintaining and improving an organization’s security posture. Red Teams are “attackers” who deploy ethical hacking methods such as penetration testing to simulate an attack and improve defenses.
Red Team methods include OSINT and reconnaissance conducted so as to avoid detection by Blue Teams. A Blue Team consists of security professionals operating within an organization’s security operations center (SOC), acting as defenders who identify, assess and respond to potential attacks. To protect assets, Blue Teams might analyze forensic evidence, perform DNS audits, and utilize a SIEM platform for communicating necessary actions in real time. Finally, Purple Teams unite the separate objectives of Red and Blue Teams, promoting information sharing and collaboration to maximize the effectiveness of both.
RISK-BASED VULNERABILITY MANAGEMENT
Risk-Based Vulnerability Management (RBVM) is a process that emphasizes prioritizing the most severe security vulnerabilities and remediating according to the risk that they pose to the organization. This approach is being more widely adopted as organizations realize they have far more vulnerabilities than they can remediate, and they need a way to prioritize which to fix first.
Vulnerabilities do not all pose the same risk to an organization. By considering a combination of a vulnerability’s discoverability and exploitability, potential impact, and the business context of the asset the vulnerability is on, security teams can identify and categorize the most critical risks before a business-critical breach occurs. Such a process is only optimally useful if it also considers risks on assets that IT/security teams are not already aware of.
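The combination of factors described above can be sketched as a simple scoring function. The factor names, weights, and findings below are illustrative assumptions for the sketch, not a standard RBVM formula:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    exploitability: float     # 0-1: how easy to discover and exploit?
    impact: float             # 0-1: how damaging is a successful exploit?
    asset_criticality: float  # 0-1: business context of the affected asset

def risk_score(f: Finding) -> float:
    # Multiplying (rather than averaging) captures the idea that a finding
    # is only truly high-risk when all three factors are high at once.
    return f.exploitability * f.impact * f.asset_criticality

findings = [
    Finding("verbose error page on test server", 0.9, 0.2, 0.1),
    Finding("unauthenticated RCE on payment gateway", 0.7, 0.9, 1.0),
    Finding("weak TLS cipher on marketing site", 0.5, 0.3, 0.2),
]
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):.3f}  {f.name}")
```

Note that a finding on an unknown asset never enters the list at all, which is the blind spot the paragraph above warns about.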
Shadow IT is the use of web apps, cloud-services, software, and other IT resources without the knowledge of an organization’s IT or security teams. There may be hundreds or thousands of these resources and services used throughout an enterprise that have been provisioned by lines of business, individuals, or third parties without being vetted or deployed by IT or security teams. The prevalence of this self-service IT introduces new security gaps that could put the organization as well as customer data and systems at-risk.
“Shadow risk” is the risk associated with the unknown assets within an organization’s attack surface. Shadow risk includes the assets and attack vectors that are part of the organization’s IT ecosystem but may be unseen or unmanaged by the organization because the assets are in cloud, partner, subsidiary and abandoned environments. It is a risk that most organizations are blind to, but sophisticated attackers can easily exploit.
To find out how to eliminate shadow risk, see this page.
In cybersecurity, the phrase “shift left” refers to the process of focusing security practices as early as possible in a given activity or process. “Left” is a reference to the idea that a timeline runs from left to right, with “earlier” to the left, so “shift left” means to start earlier. This is analogous to the principle that “an ounce of prevention is worth a pound of cure,” meaning it’s better to catch problems earlier when they are easier or cheaper to fix, and their impact is lower. For example, for software security testing, it means beginning the process when the code is first being written, or performance tests are being run, rather than waiting until it is deployed into production.
In cybersecurity, “left” can also mean earlier in the cyber kill chain or the MITRE ATT&CK matrix: deploying defenses early and proactively in the process. This moves the organization to a more proactive stance so it can stop an attack before it starts.
SUPPLY CHAIN RISK
Supply chain risk can be thought of as a specific type of third-party risk, where the risk stems from the fact that vendors and partners in an organization’s supply chain increase its attack surface yet the organization may not have sufficient visibility or awareness of the suppliers’ security posture.
A company’s digital supply chain is unique in several ways and likely mission critical. IT service providers and other IT vendors may have different cyber security risk tolerances than their partners, or be smaller companies that have been unable to consider security at the same depth as their clients or other partners in the supply chain.
Organizations that are part of the supply chain but have poorly secured systems, abandoned assets, or misconfigurations that attackers can find create risk for all participants in the supply chain. It is not uncommon for an organization’s supply chain to include thousands of IT vendors. The complexity that digital supply chains create with respect to cybersecurity risk has been evident for several years, with one of the notable breaches occurring in 2013 between Target and one of its supply chain vendors.
To learn about a more recent supply chain attack, see this blog.
Third-party risk refers to the potential security risks to an organization stemming from the use of third-party vendors, including those vendors in the supply chain as well as groups that may not typically perform security investigations such as law firms, building infrastructure maintenance and services, accounting firms, or even catering. Third-party risk is also posed by business partners and subsidiaries as well as the vendors that they work with.
While these third parties may be outside of the typical security and IT purview for an organization, they frequently have digital access or connectivity to an organization’s resources that are vulnerable to attack. Even in cases where the intended resource poses little risk, access to it can be used to establish a beachhead from which attackers can move laterally to discover more valuable assets (as happened in the Target breach). Third-party risk management involves continuously identifying, analyzing, and controlling all associated risks over the duration of the relationship.
The phrase “unknown unknowns” was popularized by former United States Secretary of Defense Donald Rumsfeld, and has its origins in psychological research. In the world of cybersecurity, “unknown unknowns” are the risks that the security team doesn’t know about and doesn’t know how to discover or anticipate. Unknown unknowns are typically the most dangerous to an organization because security and IT teams have no awareness that these assets or resources even exist, let alone details about them. Because IT and security teams are unaware of these assets or resources, it is impossible to secure them.
A vulnerability is a weakness or issue within a system, software, or application that could be exploited by a malicious party to gain unauthorized access to an organization. For vulnerabilities in commercial products, the MITRE Corporation maintains the Common Vulnerabilities and Exposures (CVE) system, in which each vulnerability is assigned a unique identifier based on the year of disclosure and a sequence number within that year. Whether vulnerabilities occur in the custom software an organization has created or in the commercial products it uses, organizations almost always have far more vulnerabilities than they can address in a timely manner, which is why there has been growing interest in risk-based vulnerability management.
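CVE identifiers follow the published CVE-YYYY-NNNN format (a four-digit year and a sequence number of four or more digits). A small sketch of parsing one:

```python
import re

# CVE IDs look like CVE-2021-44228: a year, then a sequence number
# of at least four digits assigned within that year.
CVE_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve(cve_id: str) -> tuple[int, int]:
    """Split a CVE identifier into (year, sequence number)."""
    m = CVE_PATTERN.match(cve_id)
    if not m:
        raise ValueError(f"not a valid CVE identifier: {cve_id!r}")
    year, number = m.groups()
    return int(year), int(number)

# parse_cve("CVE-2021-44228")  # → (2021, 44228)
```

Validating identifiers this way is useful when ingesting advisory feeds, where malformed or truncated IDs are common.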
Vulnerability management (VM) is the process of identifying, categorizing, and remediating security vulnerabilities to proactively defend against threats. All vulnerability management should begin by identifying all of the assets within an IT ecosystem before attempting to test for vulnerabilities, or organizations may end up with significant blind spots. Unfortunately, the true extent of an organization’s attack surface is not identified by legacy security tools and processes, such as vulnerability scanners and penetration tests, yet many organizations operate as if that were the case. These legacy security tools do not have the means to identify previously unknown assets.
For a modern perspective on vulnerability management, see this blog.
A vulnerability scanner is a tool that inspects applications, systems, networks, and software for potential vulnerabilities. It compares details about the assets it encounters to a database of known security holes, which may involve services and ports, anomalies in packet construction, and potential paths to exploitable programs or scripts.
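At its core, the matching step can be sketched as a lookup of observed software versions against that database. Everything below (product names, versions, advisories) is a simplified, hypothetical illustration:

```python
# Hypothetical known-vulnerable database: (product, version) -> advisories.
KNOWN_ISSUES = {
    ("ExampleHTTPd", "2.4.49"): ["hypothetical path traversal advisory"],
    ("ExampleDB", "5.1.0"): ["hypothetical authentication bypass advisory"],
}

def match_known_issues(observed):
    """observed: list of (product, version) pairs found on scanned assets.

    Returns one (product, version, advisory) tuple per matching advisory.
    """
    findings = []
    for product, version in observed:
        for advisory in KNOWN_ISSUES.get((product, version), []):
            findings.append((product, version, advisory))
    return findings
```

The sketch also makes the coverage limitation concrete: software on assets that never appear in `observed`, because they were never scanned, can never produce a finding.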
Vulnerability scanners only discover vulnerabilities in the assets and resources they are directed to scan. Assets they do not scan, which often include cloud-based deployments, workloads running in the cloud, and resources operated or maintained by third parties, partners, subsidiaries or suppliers, remain open to exploitation. These are the security gaps that attackers are constantly on the lookout for. The relative proportion of what vulnerability scanners can reveal, compared to what they cannot know, can render these tools a form of security theater.
Zero Trust is a model for security centered on the belief that organizations should not automatically trust anything, whether inside or outside their network perimeters. Zero Trust instead specifies that in order to maintain an effective security posture, any entity or asset must be authenticated or otherwise validated before it is granted any access to an organization. Zero Trust has implications for almost every element of your IT infrastructure. Blueprints for implementing a Zero Trust architecture have been developed by Forrester (who created the model in 2010) and NIST, to name a few.
To find out how to get started on Zero Trust, see this blog.