CYCOGNITO CYBER SECURITY GLOSSARY
Cyber security has its own language. Below are definitions of key terms to help you make sense of it all.
An attack path is one or more security gaps that attackers can exploit to gain access to an IT asset and to move from one IT asset to another. A clear understanding of possible attack paths helps security teams accurately gauge cybersecurity risk.
An attack surface is the sum of an organization’s attacker-exposed IT assets, whether these assets are secure or vulnerable, known or unknown, in active use or not and regardless of IT/security team awareness of them. The attack surface changes continuously over time, and includes assets that are on-premises, in the cloud, and in subsidiary networks as well as those in third-party or partner environments.
Attack Surface Discovery
Attack surface discovery is an initial stage of attack surface management. It’s the process of automated searching to identify digital assets across an organization’s external IT (or Internet-exposed) ecosystem.
The assets can belong to or be operated by your organization, or reside in third-party or partner environments. Discovery can run as a continuous process that scans for any new assets, including:
- Web applications, services, and APIs
- Mobile applications and backends
- Cloud storage and network devices
- Domain names, SSL certificates, and IP addresses
- IoT and connected devices
- Public code repositories such as GitHub, GitLab, and BitBucket
- Email servers
Attack Surface Management
Attack surface management (ASM) is the process of continuously discovering, classifying and assessing the security of your IT ecosystem. The process can be broadly divided into (a) activities performed in managing internet-exposed assets (a process called external attack surface management, or EASM) and (b) management activities on assets accessible only from within an organization. Many organizations use an assortment of tools and manual processes to secure their attack surface, making the process fraught with operational complexity, human error and best-guess analysis.
External attack surface management can be a particularly daunting task due to the presence of “unknown unknowns,” as well as assets housed on partner or third-party sites, workloads running in the public cloud, IoT devices, old, abandoned or deprecated IP addresses and credentials, and more.
Attack Surface Protection
Attack surface protection is the process of continuously discovering, classifying and testing the security of your attacker-exposed IT ecosystem. It combines advanced ASM capabilities with automated multi-factor testing to discover the paths of least resistance that attackers are most likely to use to compromise organizations. The first, foundational step in attack surface protection is to fully map the organization’s externally-exposed attack surface. While most ASM and EASM approaches stop there or use a proxy risk measure (such as banner grabbing), attack surface protection takes that process a step further. Attack surface protection uses active security testing that goes beyond simply mapping out the attack surface and applying indirect security measurements. To complete the protection process, discovered risks must be prioritized, so that security teams can plan their remediation efforts and address the most potentially damaging issues.
An attack vector is a path that an attacker can use to gain access to an organization’s network. Attack vectors can include exposed assets or abandoned assets, but they can also include unpatched software vulnerabilities, misconfigured software, weak authentication, and domain hijacking.
Automatic attribution is the process of tracking, identifying, and assigning assets to the party that manages them. It continuously credits virtually all assets to one or more organizations, brands, missions, or teams, with an evidence trail and a confidence rating for each.
Banner grabbing is the process of collecting intelligence about IT assets and the services available on those assets. Banners provide information such as the version of software running on a system. IT and security administrators, or attackers, can use that intelligence to get a sense of what vulnerabilities may be present on the asset. Banners provide limited value because the only security issues they might indicate are software version-related (e.g., CVEs), and even then banners won’t reflect that a system has been patched. Banner grabbing is therefore prone to false positives.
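As an illustration (not a description of any particular product’s method), a minimal banner grab is just a TCP connection followed by reading whatever greeting the service volunteers; the host and port below are placeholders:

```python
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect to host:port and return the service's greeting banner, if any."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            # Many services (SSH, SMTP, FTP) announce themselves unprompted.
            return s.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""  # Service expected the client to speak first.

# Example: grab_banner("198.51.100.10", 22) might return a string
# like "SSH-2.0-OpenSSH_8.9" -- note it says nothing about patches applied.
```

Real scanners also send protocol-appropriate probes for services that stay silent until spoken to; this sketch only captures the passive case.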
A botnet is a collection of internet-connected systems each running remotely controlled software that performs a variety of tasks. Botnets are highly useful for performing distributed, coordinated activities. While botnets are infamous for their use by malicious actors to perform distributed denial of service (DDoS) attacks, they can be used for positive activities. For example, the CyCognito platform uses a botnet to perform reconnaissance by continuously detecting and security testing IT assets from locations across the world, at multiple intervals, undetectably and non-intrusively.
Breach and Attack Simulation (BAS)
BAS is an advanced method of testing security environments by simulating likely attack paths and techniques commonly used by attackers. This process identifies vulnerabilities, much like a penetration test, except it’s continuous and automated.
Business context is the association of an asset or service with the organization or team that controls it. Understanding the business context provides insight into the extent of the organization’s true attack surface, locating and monitoring otherwise “hidden” assets.
Beyond monitoring, business context also helps identify the likely owner of an asset, making it part of automatic attribution. This raises awareness of potential risks and helps enlist the right people to close security gaps.
Cloud security is a broad term referring to the tools and processes organizations use to protect assets and data stored in the cloud from cyber attacks and threats. This includes workloads running in the cloud and anything housed in Software-as-a-Service (SaaS) applications.
There are different types of cloud computing categories under the umbrella of cloud security, including:
- Public cloud services (public provider), such as Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS), or Platform-as-a-Service (PaaS). In these cases, the software may be owned by a third party, the hardware is run by others, and only the data is owned by the primary organization.
- Private cloud services (public provider), such as a corporation running email on G Suite rather than operating its own email servers. In this case, data and implementation may belong to the corporation, while the responsibility for the infrastructure is the provider’s.
- Private cloud services (internal staff), such as IT staff running applications and workloads on servers that aren’t housed elsewhere in the cloud. In this case, the provider may be responsible for the server’s operation, but internal IT staff owns what runs on the servers, including applications and data.
- Hybrid cloud service, which is perhaps the most common. It’s a hybrid environment that includes assets, applications, and data in each category.
The biggest challenge in a cloud security model is the difficulty of pinpointing who is responsible for securing what. Most security solutions advocate a shared responsibility model.
Common Platform Enumeration (CPE)
CPE is a structured naming scheme for IT systems, software, and packages. The naming scheme is based on the generic syntax of uniform resource identifiers (URIs) and includes a formal name format, a method for checking names against a system, and a description format for binding text and tests to a name.
The CPE Product Dictionary (NIST) provides a publicly available agreed-upon list of official CPE names in XML format, hosted and maintained by NIST.
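To make the name format concrete, a CPE 2.3 formatted string binds eleven attributes after the `cpe:2.3` prefix. The toy parser below illustrates the layout; it deliberately ignores the escaped `\:` sequences the full specification permits:

```python
def parse_cpe23(cpe: str) -> dict:
    """Split a CPE 2.3 formatted string into its named attributes.

    A simplified sketch: it does not handle colons escaped with a
    backslash, which the CPE 2.3 specification allows inside values.
    """
    fields = ["part", "vendor", "product", "version", "update", "edition",
              "language", "sw_edition", "target_sw", "target_hw", "other"]
    parts = cpe.split(":")
    if parts[:2] != ["cpe", "2.3"] or len(parts) != 13:
        raise ValueError("not a CPE 2.3 formatted string")
    return dict(zip(fields, parts[2:]))

result = parse_cpe23("cpe:2.3:a:microsoft:internet_explorer:8.0.6001:beta:*:*:*:*:*:*")
# result["part"] is "a" (application); "*" means ANY for unspecified attributes.
```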
Common Vulnerabilities and Exposures (CVE)
CVE is a catalog of publicly disclosed security vulnerabilities and exposures in publicly released software packages. The system helps IT professionals coordinate their efforts to prioritize and address vulnerabilities and make computer systems more secure. It was launched in 1999 and is operated by the MITRE Corporation’s National Cybersecurity FFRDC, with funding from the US Cybersecurity and Infrastructure Security Agency (CISA).
Common Vulnerability Scoring System (CVSS)
CVSS is an open framework for articulating the severity of a threat through the principal characteristics of a vulnerability. It consists of three metric groups: base, temporal, and environmental. Once a numeric score from 0.0 to 10.0 is produced, the score is translated into a low, medium, high, or critical severity rating.
CVSS is used worldwide as a standard measurement system for industries, organizations, and governments requiring accurate and consistent vulnerability severity scores. CVSS is owned and managed by FIRST.Org, Inc., a US-based non-profit organization.
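The score-to-rating translation is fixed by the CVSS v3.1 specification’s qualitative severity scale (which also reserves a “None” rating for a 0.0 score), so it can be sketched directly:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score (0.0-10.0) to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"      # 0.0
    if score <= 3.9:
        return "Low"       # 0.1 - 3.9
    if score <= 6.9:
        return "Medium"    # 4.0 - 6.9
    if score <= 8.9:
        return "High"      # 7.0 - 8.9
    return "Critical"      # 9.0 - 10.0
```

For example, a score of 9.8 (common for remotely exploitable flaws requiring no privileges) maps to "Critical", while 5.3 maps to "Medium".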
Continuous Security Monitoring
Continuous security monitoring is the process of monitoring an organization’s IT ecosystem to identify and provide timely visibility into cyber threats and risks. By discovering and monitoring all assets in the IT ecosystem, both known and unknown, security professionals can find the paths of least resistance and the vulnerabilities that attackers may use as security gaps to penetrate organizations.
Cyber Kill Chain
A cyber kill chain is a series of 7 stages that model the primary actions conducted in a cyberattack. Lockheed Martin developed the cyber kill chain model in 2011 to help cyber defenders identify and prevent the steps of an attack. Other organizations have slightly different models and critics have noted that attackers increasingly flout the cyber kill chain model, but there is broad agreement that organizations should always strive to eliminate potential threats as early as possible in the cyber kill chain.
Another model for the cyber kill chain is the MITRE ATT&CK framework which provides a detailed list of tactics and techniques attackers will use.
The seven phases of the Lockheed Martin model are: reconnaissance, weaponization, delivery, exploitation, installation, command & control, and actions on objectives. An attacker conducts reconnaissance by probing for security gaps themselves (or can purchase reconnaissance services or results). Once a weak point has been identified, the attacker moves to the weaponization phase and develops (or purchases) a weapon to exploit it, such as a virus or zero-day. In the delivery phase, the weapon is launched, for example, by email, by delivering an infected USB key, via cross-site scripting, or by accessing a system remotely. Once the target is exploited, the attacker can install tools to maintain access, execute actions remotely, cover their tracks, and gather data. During command and control and actions on objectives, data may be exfiltrated, other systems targeted and, in the case of ransomware, data may be encrypted for a “double” extortion: first by selling data or access to criminals, and then by having the victims pay for access to their own systems and data.
Cyber reconnaissance is a cybersecurity term built from the French word “reconnaissance,” which means “surveying” and adapted from the military practice of reconnaissance, conducting an exploratory survey of enemy territory.
Attackers use cyber reconnaissance techniques to identify the easiest digital entry points into their targets. Reconnaissance can include passive activities where an attacker searches for information without compromising the target. Reconnaissance can also be active, where the attacker gains unauthorized access and engages to gather information. Many attacks include both types.
When conducted defensively, cyber reconnaissance helps organizations to understand where and how cyberattackers could gain access to their networks.
Cybersecurity Risk Management
Cybersecurity risk management involves continuously identifying, assessing, and mitigating potential cyber risks, as well as understanding their potential impacts. Because cyber risk cannot be effectively managed without a comprehensive view of the overall attack surface, it is vital to be aware of all assets and to understand their business context.
A data breach occurs when an unauthorized or potentially malicious party gains access to confidential, sensitive or protected data. Some data breaches contain personally identifiable information (PII), which may include national identity numbers, credit card numbers, or medical records.
Defensive security is a proactive approach that focuses on prevention, detection, and response to attacks from the perspective of defending the organization. For example, blue teams are generally thought of as defensive security. Defensive security is in contrast to offensive security, which is an approach designed to look at the organization from the perspective of an adversary. Penetration testers and red teams are generally seen as offensive security.
A digital footprint is the trail of data created by a user on the internet. The footprint can be left actively, through websites visited, emails sent, and information submitted online. On the other hand, a passive digital footprint is a trail of data unintentionally left. Cookies on apps, devices and websites, geolocation services, and social media engagement all contribute to someone’s passive digital footprint.
Alternatively, digital footprint can also be used to refer to an organization’s attack surface.
DNS History/Passive DNS
The traditional Domain Name System (DNS) is a real-time, distributed database system where queries to DNS servers and resolvers translate hostnames into IP addresses and vice versa. While not all DNS data is public, much of it can be easily accessed and much of the information is in clear text. While traditional DNS records are transient, passive DNS enables the collection and archiving of historical DNS data which contains a wealth of information about DNS queries on the Internet. Analysis of this data provides insights into old DNS records, new values, differences, and can find possible attack vectors. An attacker or defender with this information can see where, how, and when your organization’s domain names and IP addresses have changed over time and who is changing them.
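To make the archiving idea concrete, a passive DNS collector is essentially a timestamped record of every answer it has observed. This toy store (class name, domain, and addresses are all illustrative) shows how historical changes become visible:

```python
from collections import defaultdict

class PassiveDNSStore:
    """Toy passive DNS archive: name -> {rdata: (first_seen, last_seen)}."""

    def __init__(self):
        self._records = defaultdict(dict)

    def observe(self, name: str, rdata: str, ts: int) -> None:
        """Record that `name` resolved to `rdata` at Unix time `ts`."""
        first, last = self._records[name].get(rdata, (ts, ts))
        self._records[name][rdata] = (min(first, ts), max(last, ts))

    def history(self, name: str) -> list:
        """Every value ever seen for a name, ordered by first observation."""
        return sorted((first, last, rdata)
                      for rdata, (first, last) in self._records[name].items())

store = PassiveDNSStore()
store.observe("app.example.com", "192.0.2.10", ts=1_600_000_000)
store.observe("app.example.com", "198.51.100.7", ts=1_650_000_000)  # record changed
# store.history("app.example.com") now reveals both the old and the new address,
# even though a live DNS query would only ever return the current one.
```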
Ethical hacking is a form of offensive security that involves authorized attempts to break into systems and applications in order to test an organization’s security posture. One example of ethical hacking is penetration testing.
EXTERNAL ATTACK SURFACE MANAGEMENT
External Attack Surface Management (EASM) is an emerging market category that Gartner created in March 2021 to describe a set of products that supports organizations in identifying risks coming from internet-facing assets and systems that they may be unaware of.
EASM solutions continuously discover, classify and assess the security of your internet-exposed attack surface from the outside in. EASM provides a view of an organization’s IT assets, as well as those closely related to the organization, as seen by attackers looking at the organization from the outside. For this reason, EASM excels at finding “unknown unknowns.”
Attack surface protection solutions build on that concept and combine the market’s most advanced External Attack Surface Management capabilities with automated multi-factor testing, to discover the paths of least resistance that attackers are most likely to use to compromise organizations.
A false negative occurs when a cyber threat or attack passes through scanning and protection software undetected. There are a number of reasons a false negative happens: the attack is dormant, it is a highly sophisticated file-less threat or one capable of lateral movement, or the security infrastructure lacks the technological capabilities to detect the attack.
False negatives are serious security threats capable of evading technologies like next-gen firewalls, antivirus software, and EDR platforms that look for “known” attacks and malware.
False negatives can also occur easily in attack surface management when assets that should be tested are not found in or are incorrectly excluded from the attack surface.
A false positive is an alert that detection and protection software generates when legitimate activity is classified as an attack. This may not seem as harmful as a false negative, but it can be detrimental over time. In the short term, it can result in a website, file, or item being quarantined, blocked, or deleted; in the long term, it leads to alert fatigue and ignored alarms. As in “The Boy Who Cried Wolf,” the problem is that liars are not believed even when they tell the truth.
False positives can also occur easily in attack surface management when assets are incorrectly attributed to an attack surface. In these cases it’s important to have a facility for vetting these and excluding them from future assessments.
The hacker economy has emerged as a multi-billion dollar criminal industry formed by individual and organized hacking networks. Hackers use a variety of methods to extort, steal, and defraud targeted institutions as well as individuals, including:
- Direct ransomware - threat actors use ransomware to encrypt systems and data, then hold the ability to unlock them hostage for a ransom, usually paid in relatively untraceable cryptocurrency like Bitcoin.
- Supply-chain ransom - a threat actor who has stolen data or gained privileged access threatens disclosure unless the affected parties pay.
- Selling malware - a B2B/B2C solution selling malware to other hackers. This is growing as a highly developed and advanced black market operation. Learn more about commonly used techniques on MITRE ATT&CK's malware page.
- Selling access - a B2B/B2C solution selling credentials to other hackers. Learn more about purchasing technical data from MITRE ATT&CK.
- Selling credit card numbers or personally identifiable information (PII) - these are used to set up fraudulent personas for committing crime or espionage.
- Automated phishing software-as-a-service - developed and sold on the darkweb to improve the efficiency of phishing operations.
- Infiltrating financial accounts - using details from compromised financial accounts to appropriate funds for purchasing stocks, often to pump up a stock’s price before selling the shares for profit.
- Cryptojacking - malware is inserted on a victim's system(s) to surreptitiously lend computational processing to a crypto-mining operation.
- Botnet/DDoS for hire - a B2B/B2C SaaS operation where systems compromised by hackers called “bot herders” are rented out as a service to other hackers for nefarious purposes. Learn more about botnet techniques from MITRE ATT&CK.
The hacker economy is more than just the hackers trying to attack and infiltrate sites and systems. It’s also the SaaS and B2B market that has grown to support hacking operations due to the high return on investment.
Internet Protocol (IP) v4 Address
An IPv4 address is a 32-bit number that uniquely identifies a network interface used to connect a machine to the Internet or a local area network. It belongs to the fourth version of the Internet Protocol used for internetworking. An IPv4 address consists of four numbers ranging from 0 to 255, separated by dots (for example, 192.0.2.1).
Ubiquitous computing and the proliferation of devices such as mobile devices and the so-called Internet of Things (IoT) depleted the pool of roughly 4.3 billion IPv4 addresses; IANA allocated its final free address blocks in February 2011, an event known as IPv4 address exhaustion.
IP addresses are allocated by the Internet Assigned Numbers Authority (IANA), which is a division of the Internet Corporation for Assigned Names and Numbers (ICANN).
Internet Protocol (IP) v6 Address
An IPv6 address is a 128-bit number; the protocol was developed by the Internet Engineering Task Force (IETF) to address the long-anticipated problem of IPv4 address exhaustion.
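Both address families are just fixed-width integers with human-readable notations. Python’s standard `ipaddress` module makes the 32-bit versus 128-bit distinction easy to see (the addresses below are the documentation examples reserved for this purpose):

```python
import ipaddress

v4 = ipaddress.IPv4Address("192.0.2.1")
v6 = ipaddress.IPv6Address("2001:db8::1")

# An IPv4 address is a 32-bit number: four octets packed big-endian.
assert int(v4) == 0xC0000201
assert v4.packed == bytes([192, 0, 2, 1])

# An IPv6 address is a 128-bit number: sixteen bytes.
assert len(v6.packed) * 8 == 128

# The size difference explains exhaustion: ~4.3 billion vs. ~3.4e38 addresses.
print(f"IPv4 space: {2**32:,} addresses")
print(f"IPv6 space: {2**128:.2e} addresses")
```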
An IT asset is a piece of software or hardware within an information technology environment.
An organization’s IT ecosystem is the network of services, providers and other organizations connected to the organization that create and deliver information technology products and services. This ecosystem includes entities that are connected to but not controlled directly by the organization, such as a third-party vendor, an independent subsidiary or a company added via merger or acquisition. Cloud computing resources used by the organization are also part of its IT ecosystem. All of the assets associated with all of the IT ecosystem entities define the organization’s attack surface.
Kali Linux is an open-source, specialized Linux platform developed and supported by Offensive Security and used for security research, penetration testing and security forensics. The platform packages a number of tools and utilities for security professionals and features popular apps such as Nmap, Metasploit, OWASP ZAP, Wireshark and others.
Machine learning is a branch of artificial intelligence describing the study of computer programs that leverage algorithms and statistical models to improve automation without explicit programming. This is used to improve the capabilities of a machine, software, or program by allowing it to essentially program itself using data.
Machine learning can be broken down into three major components: a decision process, an error function, and model optimization. The decision process uses an algorithm to make predictions or classifications. The error function evaluates the efficacy of the prediction. Finally, the model optimization process iterates over the data, adjusting weights until the model reaches a desired degree of accuracy.
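All three components appear in even the smallest learning loop. This sketch fits a single weight by gradient descent; the data and learning rate are made up purely for illustration:

```python
# Learn y ~= w * x from examples of a (made-up) true relationship y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0                      # initial model parameter
for _ in range(200):
    # Decision process: the model's prediction for input x is w * x.
    # Error function: mean squared error, (w*x - y)^2, averaged over the data.
    # Model optimization: step w against the gradient of that error.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad         # 0.05 is the learning rate

assert abs(w - 2.0) < 1e-3   # the weight has converged toward 2
```

The model effectively "programs itself" from data: nothing in the code states that the answer is 2; the optimization loop discovers it.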
Maltego is an open-source intelligence (OSINT) tool for gathering and connecting data on the internet and illustrating relationships and links between things on a node-based graph. The platform offers a graphical user interface (GUI) that allows security professionals to mine data and helps IT and security teams build a picture of threats, their complexity and severity.
MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) is a curated, globally accessible knowledge base of adversary tactics and techniques based on real-world observations. The framework represents the various phases of an attack lifecycle, as well as the platforms targeted. While the majority of the ATT&CK framework is geared towards providing insight into detecting attackers in real time during an attack, its Reconnaissance and Resource Development tactics (previously known as Pre-ATT&CK) are focused on an attacker's pre-attack preparation.
MITRE ATT&CK INITIAL ACCESS
MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) initial access is the tactic in the framework that covers an attacker’s strategies for getting into your network. Initial access techniques include targeted spear phishing and exploiting public-facing web servers, which may allow for continued access and the use of external remote services.
MITRE ATT&CK outlines nine techniques ranging from supply chain compromise to hardware additions. Learn how the CyCognito platform integrates the MITRE ATT&CK initial access framework.
MITRE ATT&CK RECONNAISSANCE
MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) reconnaissance is the tactic in the framework that covers an attacker’s pre-attack gathering of information useful for future operations. Reconnaissance involves the active or passive gathering of information, which may include details of the victim organization, infrastructure, or staff and personnel. This information is leveraged to aid in other phases of the attack.
MITRE ATT&CK outlines 10 techniques ranging from active scanning to searching open technical databases. Learn how the CyCognito platform integrates the MITRE ATT&CK reconnaissance framework.
MITRE ATT&CK RESOURCE DEVELOPMENT
MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) resource development is the tactic in the framework that covers an attacker’s pre-attack gathering of resources to support an operation. Resource development consists of techniques the attacker uses to create, purchase, or compromise resources to aid in targeting. These resources include infrastructure, accounts, or capabilities.
MITRE ATT&CK outlines seven techniques, from acquiring infrastructure such as domains and DNS servers, to compromising email and social media accounts. Learn how the CyCognito platform integrates the MITRE ATT&CK resource development framework.
MITRE PRE-ATT&CK was a framework of tactics and techniques to help uncover the many pre-compromise behaviors attackers perform. It was deprecated and removed by MITRE in late 2020 and has since been rolled into the Enterprise matrix under the Reconnaissance and Resource Development categories. Those techniques can also be found under the MITRE Enterprise > PRE matrix, and the primary Enterprise matrix also lists Initial Access techniques as well as additional technique categories that follow an attack through execution.
Multi-Factor Authentication (MFA)
Multi-factor authentication is an authentication method requiring users to supply more than one distinct authentication factor to gain access to a resource such as an application, online account, or VPN. These factors include something you know (such as a password or PIN), something you have (such as a token or key), or something you are (such as your fingerprint).
MFA is a core component of a strong identity and access management (IAM) policy. Rather than asking for a username and password, MFA requires one or more additional verification factors. This significantly decreases the likelihood of a successful cyber attack.
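A common implementation of the “something you have” factor is a time-based one-time password (TOTP). RFC 6238 derives the code from an HMAC over the current 30-second interval, which can be sketched with the standard library alone (the secret below is the published RFC test key, not a real credential):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks a window
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 time-based OTP: HOTP with the current time step as the counter."""
    counter = int(time.time() if at is None else at) // step
    return hotp(secret, counter)

# RFC test vector: with the shared test secret, at t=59s (counter 1)
# the 6-digit code is "287082".
assert totp(b"12345678901234567890", at=59) == "287082"
```

Because both sides compute the code independently from a shared secret and the clock, the code works offline and expires within seconds, which is what makes it a useful second factor.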
Natural Language Processing (NLP)
NLP is a branch of artificial intelligence describing the study of how computers can understand, interpret, and manipulate human language. This process is commonly used for analyzing large volumes of textual data and structuring data sources to resolve ambiguous language.
NLP addresses the need for syntactic and semantic understanding of the hundreds of complex and diverse ways people express themselves in speech and writing. NLP is important because it resolves ambiguity in language and adds numeric structure to the data that a computer can use. These capabilities are useful for many applications, such as speech recognition and text analytics.
NIST CYBERSECURITY FRAMEWORK
The NIST Framework for Improving Critical Infrastructure Cybersecurity (or “The Framework” for short) consists of standards, guidelines, and practices to promote the protection of critical infrastructure. It was created through collaboration between industry and government, and is published by the National Institute of Standards and Technology (NIST). The Framework was originally designed to foster risk and cybersecurity management communications among both internal and external organizational stakeholders.
Offensive security is a proactive approach that involves testing an organization’s security posture from the viewpoint of an adversary. The intent of offensive security is to validate that an organization’s security performs as intended. It can include activities such as ethical hacking and penetration testing to identify and remediate risks that a malicious party could exploit. By employing offensive security methods, security teams can act like attackers to help the organization uncover and eliminate paths of least resistance before attackers can exploit gaps.
Open-Source Intelligence (OSINT) refers to the collection and analysis of any information about an individual or organization that can be legally gathered from free, public sources. While much of the information comes from the internet and can include usernames, social networks profiles, IP addresses, and public records, it also includes data found in images, videos, webinars and public speeches. OSINT operations require no specialized skills and can be conducted by anyone including IT and security teams or attackers who use a variety of techniques to sift through visible data to find the opening they need.
OWASP TOP 10
The Open Web Application Security Project (OWASP) is an online non-profit community that aims to improve software security. Since 2003, OWASP has periodically published a Top 10 list of the most critical and common web application security risks. The data behind the list comes from many sources including security vendors, consultants, and organizations.
Passive DNS derives from collecting DNS query information in a database via network sniffing. While traditional DNS records are transient, passive DNS stores a collection and archive of historical DNS records. This contains a wealth of information about DNS queries on the Internet. Analysis of passive DNS data is used for insights into old DNS records, new values, and differences; it can also find possible attack vectors.
An attacker or defender with this information can see where, how, and when your organization’s domain names and IP addresses have changed over time and who is changing them.
Penetration or pen testing is a security practice where a real-world attack on a subset of an organization’s IT ecosystem is simulated in order to discover the security gaps that an attacker could exploit. Such testing was born in the 1960s with the goal of revealing to the organization how a skilled and motivated attacker could get past, or penetrate, an organization’s defenses. Pen testing is now a requirement for several regulatory regimes, including the Payment Card Industry (PCI) standards, the Federal Information Security Modernization Act (FISMA), and the Health Insurance Portability and Accountability Act (HIPAA).
While manual pen testing can provide useful insights, the process is costly, time-consuming and inherently unscalable because it is based on a simulated attack conducted by a skilled individual. Pen testing is only done on assets that are already known to, and protected by, IT and security teams. Other drawbacks of manual pen testing are that it is typically done only periodically and produces a point-in-time snapshot of the known enterprise assets that is often outdated by the time the analysis is complete.
PATH OF LEAST RESISTANCE
The path of least resistance in cybersecurity is an attacker’s easiest route to reaching a target asset. When an attacker is considering an attack, they will typically look for the easiest way to succeed such as externally-exposed systems and assets that are mostly overlooked by organizations. IT assets owned, created or used by lines of business, third parties, partners or subsidiaries can easily become such a path.
A proactive security approach is the practice of taking measures to predict and prevent a breach before it ever happens. Proactive security teams fix security gaps before they can be exploited and mitigate their highest risks to stay ahead of potential attackers. A reactive approach, by contrast, involves detecting incidents in progress or after the fact and responding, for example by implementing security solutions in response to a breach that already occurred. Proactive security emphasizes prediction and prevention over detection and response.
To see how to move to proactive security, go to this blog.
Pen Testing (Automated)
Automated pen testing is designed to mimic manual penetration testing using digital tools and software. The same kind of automation is commonly used by attackers to continuously scan an organization’s entire attack surface.
Automated pen testing addresses many of the drawbacks of manual pen testing, including its cost, limited scale, and inability to track the continuously changing threat environment in real time. It can also reveal the unknown unknowns and shadow risk of an organization’s IT ecosystem, which are prime targets for attackers.
Ransomware is a form of malware that leverages encryption to hold the operations of an organization hostage in exchange for a ransom payment. These payments often must be made via cryptocurrency. In ransomware attacks, an attacker gains access to a victim’s data, encrypts it so that the victim can no longer access it, and holds the data hostage unless an extortion payment is made. Ransomware attacks can be initiated by exploiting gaps in an organization’s attack surface to take control of IT assets and move laterally, as well as via other channels such as phishing attacks. Due to the effectiveness of ransomware, an industry of organized crime has emerged around it, including ransomware-as-a-service providers.
Read how ransomware was leveraged to attack a major US oil pipeline.
Recon-ng is an open-source web reconnaissance (OSINT) tool written in Python, often bundled with the Kali Linux penetration testing distribution. The tool reduces the time spent harvesting information from open sources and includes an extensive range of modules along with built-in database interaction.
Recon-ng is useful for collating information from many sources into one centralized database. CyCognito integrates Recon-ng into its intelligent platform to conduct information gathering at scale, before other tools and methods are utilized to help organizations see their entire attack surface and prioritize remediation steps.
RED, BLUE, AND PURPLE TEAMS
Red, Blue, and Purple Teams consist of security professionals who are integral to maintaining and improving an organization’s security posture. Red Teams are “attackers” who deploy ethical hacking methods such as penetration testing to simulate an attack and improve defenses.
Methods include OSINT and reconnaissance to avoid being detected by Blue Teams. A Blue Team includes security professionals operating within an organization’s security operations center (SOC), acting as defenders who identify, assess, and respond to potential attacks. To protect assets, Blue Teams might analyze forensic data, perform DNS audits, and utilize a SIEM platform to communicate necessary actions in real time. Finally, Purple Teams unite the separate objectives of Red and Blue Teams to promote information sharing and collaboration and to maximize the effectiveness of both.
Remediation is the identification and mitigation of a vulnerability or threat that could impact an organization’s business and network security. The process addresses the problem or vulnerability by modifying a configuration or patching the operating system or application.
Risk is a multifactor calculation of the severity of a threat, the likelihood of its occurrence, and the impact that threat would have on an organization’s operations, reputation, and costs, including its mission, functions, and image, as well as its assets and the individuals associated with it. Anything on an information system connected to a network can be exposed to risk. Data can be modified, copied, deleted, or encrypted, or a threat actor can access your organization’s systems without knowledge or consent and use the organization’s assets to launch other attacks.
A risk assessment is the process of identifying, analyzing, and evaluating information assets that could be affected by a cyber attack. It then identifies the risks that could affect those assets. A risk assessment helps to ensure the cybersecurity controls are appropriate to the risks facing the organization.
This process saves time, effort, and resources spent on security and addresses any risks that may be overlooked. The effectiveness of risk assessments is why many best-practice frameworks, laws, and standards recommend conducting a risk assessment.
RISK-BASED VULNERABILITY MANAGEMENT
Risk-Based Vulnerability Management (RBVM) is a process that emphasizes prioritizing the most severe security vulnerabilities and remediating according to the risk that they pose to the organization. This approach is being more widely adopted as organizations realize they have far more vulnerabilities than they can remediate, and they need a way to prioritize which to fix first.
Vulnerabilities do not all pose the same risk to an organization. By considering a combination of a vulnerability’s discoverability and exploitability, potential impact, and the business context of the asset the vulnerability is on, security teams can identify and categorize the most critical risks before a business-critical breach occurs. Such a process is only optimally useful if it also considers risks on assets that IT/security teams are not already aware of.
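The factors described above can be combined into a simple composite score. The sketch below is purely illustrative; the `Finding` class, factor names, and multiplicative weighting are assumptions for demonstration, not CyCognito's or any standard RBVM scoring model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    exploitability: float     # 0.0-1.0: how discoverable and exploitable the flaw is
    impact: float             # 0.0-1.0: potential damage if exploited
    asset_criticality: float  # 0.0-1.0: business context of the affected asset

def risk_score(f: Finding) -> float:
    """Combine the three factors into one 0-1 score (illustrative weighting)."""
    return f.exploitability * f.impact * f.asset_criticality

findings = [
    Finding("XSS on abandoned test server", 0.9, 0.3, 0.2),
    Finding("RCE on payment API", 0.6, 0.9, 1.0),
]

# Remediate the highest-scoring risks first.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.name}: {risk_score(f):.2f}")
```

A multiplicative model like this captures the idea that a highly exploitable flaw on a low-value asset can still rank below a harder-to-exploit flaw on a business-critical one.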
After a risk analysis has been completed, there will be clusters of risks at varying levels of criticality. Risk prioritization is a rational, common-sense approach to decision making and analytics, applied to rank identified risk events from most to least critical on an appropriate scale.
The method of analysis and ranking should be tied to the business needs and context in terms of immediate and future impact. It should also aim to maximize available resources.
An organization’s security posture is the collective measure of the effectiveness of its cybersecurity. Assessing security posture involves testing the security of your IT ecosystem and its susceptibility to outside threats. A baseline security posture assessment helps an organization plan improvements and investments in protective measures that are not yet appropriately aligned with its risk tolerance.
Security Rating Services (SRS)
A security rating service (SRS) performs an independent assessment of an organization’s security posture based on third-party data from threat intelligence feeds. These feeds consider externally observable security factors based on publicly available information.
The rating is designed for general guidance relative to the security posture of other organizations. An SRS is not typically used for a deep security test, and it does not replace attack surface management. That said, it is a fast, consistent, and valuable way to obtain a high-level number suited to comparisons with other organizations.
For more information see “The Truth About How Security Ratings Work”
Security testing checks software to reveal vulnerabilities in security mechanisms and determine whether data and resources are protected from threat actors. It’s a type of non-functional testing focused on whether the software or application is designed and configured correctly.
The test provides evidence on the safety and reliability of software systems and applications, such as not accepting unauthorized inputs. Different types of security testing include vulnerability scanning, security scanning, penetration testing, security audit, and risk assessment.
Security theater is the practice of performing security processes or maintaining security solutions that make people feel more secure without actually improving security. In cybersecurity, security theater may be the result of maintaining old, outdated tools and processes that were effective for the threat environment in which they were first conceived.
These efforts have proven to be ineffective or have become less useful over time, and now do not provide enough visibility into the risks the organization faces. Many organizations continue to do what they have done in the past even as these activities provide insufficient visibility to overall risks.
Shadow IT is the use of web apps, cloud services, software, and other IT resources without the knowledge of an organization’s IT or security teams. There may be hundreds or thousands of these resources and services used throughout an enterprise that have been provisioned by lines of business, individuals, or third parties without being vetted or deployed by IT or security teams. The prevalence of this self-service IT introduces new security gaps that could put the organization, as well as customer data and systems, at risk.
“Shadow risk” is the risk associated with the unknown assets within an organization’s attack surface. Shadow risk includes the assets and attack vectors that are part of the organization’s IT ecosystem but may be unseen or unmanaged by the organization because the assets are in cloud, partner, subsidiary and abandoned environments. It is a risk that most organizations are blind to, but sophisticated attackers can easily exploit.
To find out how to eliminate shadow risk, see this page.
In cybersecurity, the phrase “shift left” refers to the process of focusing security practices as early as possible in a given activity or process. “Left” is a reference to the idea that a timeline runs from left to right, with “earlier” to the left, so “shift left” means to start earlier. This is analogous to the principle that “an ounce of prevention is worth a pound of cure,” meaning it’s better to catch problems earlier when they are easier or cheaper to fix, and their impact is lower. For example, for software security testing, it means beginning the process when the code is first being written, or performance tests are being run, rather than waiting until it is deployed into production.
In cybersecurity, “left” also means earlier in the cyber kill chain or the MITRE ATT&CK matrix: deploying defenses early and proactively in the process. This moves the organization to a more proactive stance so it can stop an attack before it starts.
SUPPLY CHAIN RISK
Supply chain risk can be thought of as a specific type of third-party risk, where the risk stems from the fact that vendors and partners in an organization’s supply chain increase its attack surface yet the organization may not have sufficient visibility or awareness of the suppliers’ security posture.
A company’s digital supply chain is unique in several ways and likely mission critical. IT service providers and other IT vendors may have different cyber security risk tolerances than their partners, or be smaller companies that have been unable to consider security at the same depth as their clients or other partners in the supply chain.
Organizations that are part of the supply chain but have poorly secured systems, abandoned assets, or misconfigurations that attackers can discover create risk for all participants in the supply chain. It is not uncommon to have thousands of IT vendors in an organization’s supply chain. The complexity that digital supply chains create with respect to cybersecurity risk has been evident for several years, with one of the notable breaches occurring in 2013 between Target and one of its supply chain vendors.
To learn about a more recent supply chain attack, see this blog.
Third-party risk refers to the potential security risks to an organization stemming from the use of third-party vendors, including those vendors in the supply chain as well as groups that may not typically perform security investigations such as law firms, building infrastructure maintenance and services, accounting firms, or even catering. Third-party risk is also posed by business partners and subsidiaries as well as the vendors that they work with.
While these third parties may be outside of the typical security and IT purview for an organization, they frequently have digital access or connectivity to an organization’s resources that are vulnerable to attack. Even in cases where the intended resource poses little risk, access to it can be used to establish a beachhead from which attackers can move laterally to discover more valuable assets (as happened in the Target breach). Third-party risk management involves continuously identifying, analyzing, and controlling all associated risks over the duration of the relationship.
Also known as cyber threat intelligence (CTI), this is information an organization uses to understand and assess the cyber and physical threats it faces. Threat intelligence solutions gather raw data on emerging or existing threats from a number of sources.
The data is compiled and filtered to produce intelligence feeds and reports that combine knowledge, skills, and experience-based information, helping organizations mitigate potential attacks and prevent harmful events.
Threat intelligence helps organizations with the overwhelming volume of threats, and it also encourages a proactive approach to future cybersecurity threats. It’s also a useful tool to keep leaders and stakeholders informed about the latest threats that could potentially impact their interests.
A true negative occurs in cybersecurity when a detection system correctly reports nothing in the presence of a negative (benign) condition: for example, when an intrusion detection system (IDS) correctly ignores acceptable behavior, a vulnerability assessment finds no vulnerability in non-vulnerable software, or an attack surface management platform correctly excludes assets that are unrelated to the attack surface.
A true negative indicates that the system is performing as expected by looking for and finding no problems where none are present.
A true positive occurs in cybersecurity when a detection system correctly raises an alert in the presence of an actual problem condition: for example, when an intrusion detection system (IDS) successfully detects suspicious behavior, a vulnerability assessment detects genuinely vulnerable software, or an attack surface management platform finds assets that belong to the attack surface.
A true positive indicates that the system is performing as expected by looking for and finding problems where and when they exist.
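The four detection outcomes (true/false positive and true/false negative) can be summarized in a few lines. This sketch is illustrative only; the function name and boolean inputs are assumptions for demonstration:

```python
def classify(detected: bool, actual_threat: bool) -> str:
    """Map a detection result plus ground truth to one of the four outcomes."""
    if detected and actual_threat:
        return "true positive"    # alert raised, real problem exists
    if not detected and not actual_threat:
        return "true negative"    # no alert, nothing wrong
    if detected and not actual_threat:
        return "false positive"   # alert raised on benign activity
    return "false negative"       # real problem missed

print(classify(detected=True, actual_threat=True))    # true positive
print(classify(detected=False, actual_threat=False))  # true negative
```

True positives and true negatives both indicate the system working correctly; the other two quadrants represent noise (false positives) and missed threats (false negatives).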
The phrase “unknown unknowns” was popularized by former United States Secretary of Defense Donald Rumsfeld, and has its origins in psychological research. In the world of cybersecurity, “unknown unknowns” are the risks that the security team doesn’t know about and doesn’t know how to discover or anticipate. Unknown unknowns are typically the most dangerous to an organization because security and IT teams have no awareness that these assets or resources even exist, let alone details about them. Because IT and security teams are unaware of these assets or resources, it is impossible to secure them.
A vulnerability is a weakness or issue within a system, software, or application that could be exploited by a malicious party or hacker to gain unauthorized access to an organization. For vulnerabilities in commercial products, there is a system maintained by the MITRE Corporation known as the Common Vulnerabilities and Exposures (CVE) system, in which each vulnerability is assigned a unique identifier that includes the year in which it was reported. Whether vulnerabilities occur in the custom software an organization has created or in the commercial products it uses, organizations almost always have far more vulnerabilities than they can address in a timely manner, which is why there has been growing interest in risk-based vulnerability management.
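CVE identifiers follow a predictable pattern, CVE-year-sequence, which makes them easy to validate and parse programmatically. A minimal sketch (the helper name is an assumption for illustration):

```python
import re

# CVE IDs have the form CVE-<4-digit year>-<sequence of 4 or more digits>.
CVE_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve(cve_id: str):
    """Return (year, sequence number) for a well-formed CVE ID, or None."""
    m = CVE_PATTERN.match(cve_id)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

print(parse_cve("CVE-2021-44228"))  # (2021, 44228), the Log4Shell vulnerability
print(parse_cve("not-a-cve"))       # None
```

Note that the sequence number carries no severity meaning; it simply reflects the order of assignment within the year.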
Vulnerability management (VM) is the process of identifying, categorizing, and remediating security vulnerabilities to proactively defend against threats. All vulnerability management should begin by identifying all of the assets within an IT ecosystem before attempting to test for vulnerabilities, or organizations may end up with significant blind spots. Unfortunately, the true extent of an organization’s attack surface is not identified by legacy security tools and processes, such as vulnerability scanners and penetration tests, yet many organizations operate as if that were the case. These legacy security tools do not have the means to identify previously unknown assets.
For a modern perspective on vulnerability management, see this blog.
A vulnerability scanner is a tool that inspects applications, systems, networks, and software for potential vulnerabilities. It compares details about the assets it encounters against a database of known security holes, which may involve services and ports, anomalies in packet construction, and potential paths to exploitable programs or scripts.
Vulnerability scanners only discover vulnerabilities in those assets and resources they are directed to scan. This leaves assets that they do not scan, which often includes cloud-based deployments, workloads running in the cloud, resources operated or maintained by third parties, partners, subsidiaries or suppliers open to exploitation. These are the security gaps that attackers are constantly on the lookout for. The relative proportion of what vulnerability scanners can reveal, compared to what they cannot know, can render these tools a form of security theater.
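At their simplest, scanners work by probing hosts for listening services and then matching what they find against known-vulnerability data. The sketch below shows only the first step, a TCP connect check; it is an illustrative simplification, not how any production scanner is implemented:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; success suggests a listening service."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return s.connect_ex((host, port)) == 0

# A real scanner goes much further: it fingerprints the service and version
# behind each open port and compares them against a vulnerability database.
```

Crucially, a check like this only ever runs against the hosts it is pointed at, which is exactly why unscanned, unknown assets remain invisible to the tool.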
Also known as a web app, a web application is software running on a web server that users access via a web browser, which acts as the client. Google Docs is a common example of a web application.
Web applications are by nature Internet-facing and run continuously, so they present an avenue of attack when coded with vulnerabilities or misconfigured. They also often feature a front end attached to one or more backend systems, such as authorization, authentication, accounting, directory services, or databases, which are attractive targets for attackers.
WHOIS is a query and response protocol commonly used for querying databases that store the registered users or assignees of Internet resources, including the owners of a domain name, IP address block, or autonomous system. The response is delivered in a human-readable format, the current iteration of which was drafted by the Internet Society.
The records have played an essential role for organizations looking for a reliable resource for domain name registration and website ownership. The Internet Corporation for Assigned Names and Numbers (ICANN) regulates the database.
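The protocol itself is very simple: a client opens a TCP connection to port 43 on a WHOIS server, sends the resource name followed by CRLF, and reads the human-readable response until the server closes the connection. A minimal sketch (the function names are assumptions for illustration, and the default server shown is the registry server for .com domains):

```python
import socket

def whois_request(resource: str) -> bytes:
    """A WHOIS query is just the resource name followed by CRLF."""
    return resource.encode("ascii") + b"\r\n"

def whois_lookup(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Connect to the WHOIS server on TCP port 43 and read until EOF."""
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall(whois_request(domain))
        chunks = []
        while chunk := s.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode("utf-8", errors="replace")
```

Different resource types (domains, IP blocks, autonomous systems) are served by different registries, so a complete client must also pick the right server for each query.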
Zero Trust is a model for security centered on the belief that organizations should not automatically trust anything, whether inside or outside their network perimeters. Zero Trust instead specifies that in order to maintain an effective security posture, any entity or asset must be authenticated or otherwise validated before it is granted any access to an organization. Zero Trust has implications for almost every element of your IT infrastructure. Blueprints for implementing a Zero Trust architecture have been developed by Forrester (who created the model in 2010) and NIST, to name a few.
To find out how to get started on Zero Trust, see this blog.