Ransomware Prevention (Part 2) - Internet-Facing Access and Services

Intro
Part 1 of this blog series outlined best practices for general security measures that should be prioritized to help prevent ransomware.  The next four posts will each look at best practices to address the specific infection vectors most utilized in ransomware attacks.  This post is focused on efforts to secure Internet-facing hosts, applications, and services.  Examples include an internally hosted web site, VPN access for vendors, and webmail for remote staff.  Securing these has become even more important during the COVID-19 pandemic, since organizations have needed to expand access to internal resources for staff working from home.  While providing access to remote staff has allowed companies to continue operating, it also gives threat actors more ways to successfully execute a ransomware attack.
 
While many small businesses think they are too small to be targeted by a cyberattack, the data indicates otherwise.  In their 2020 Data Breach Investigations Report, Verizon found that 28% of breaches involved small businesses (fewer than 1,000 employees).  The challenge is that small businesses are typically less prepared to defend themselves than large businesses.
 
In their Quarterly Ransomware Report for Q3 2020, Coveware reported the top 4 infection vectors for ransomware are:
  1. RDP Compromise
  2. Email Phishing
  3. Software Vulnerability
  4. Other
Of those four, the risks associated with RDP compromise and software vulnerabilities can be reduced by shoring up defenses for servers and networks that are accessible from the Internet.
 
Reduce Attack Surface and Complexity
According to NIST, an attack surface is "The set of points on the boundary of a system, a system element, or an environment where an attacker can try to enter, cause an effect on, or extract data from, that system, system element, or environment."  Essentially, the attack surface is the entire area of an organization or system that is susceptible to compromise by a threat actor.  A key security tenet is to keep the attack surface as small as possible.  Thinking of IT infrastructure, admittedly simplistically, as a brick building with a number of windows and doors, the attack surface is those windows and doors.  To reduce your attack surface, you need to ask whether:
  • Some windows and/or doors can be eliminated
  • Some windows and/or doors can be reduced in size
  • Fewer people should have keys to get in each door
 
Get the IDEA - Inventory, Document, Evaluate, Act
A simple and high-level process to reduce an organization's attack surface consists of four steps.
  1. Inventory - learn what is accessible
  2. Document - document what is accessible, by whom, and the business rationale
  3. Evaluate - scrutinize every accessible system and application
  4. Act - eliminate or limit access wherever possible
 
Step 1: Inventory
In order to reduce the attack surface, it is first necessary to inventory and understand all of the systems, applications, and traffic that are accessible from the Internet.  It's critical to understand what information threat actors can access about the organization's network, as well as those systems and services the threat actors can access.  Only with a detailed inventory is it possible to make decisions regarding actions needed to reduce the attack surface while not negatively impacting business operations.
 
For some, creating the inventory of systems, applications and traffic can be a daunting task.  That is especially true for larger organizations that tend to have more internally hosted applications for remote staff, partners, or vendors.  However, completing an inventory can be simplified by breaking it down into a series of steps - DNS reconnaissance, host/IP discovery, and port discovery.
 
DNS Reconnaissance
A great deal of information regarding an organization's IT infrastructure is held in a variety of DNS records - SOA, A, MX, SPF, NS, SRV, PTR, etc.  Threat actors regularly query DNS to gather as much information as possible, helping them identify attack vectors and develop a plan for their attack.  Unfortunately, these DNS queries typically go unnoticed because many organizations do not monitor DNS requests other than zone transfers.
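To make the reconnaissance step concrete, here is a minimal sketch in Python of the kind of record enumeration an attacker (or a defender building an inventory) might run.  It assumes the third-party dnspython (2.x) package is installed and uses example.com as a placeholder domain.

```python
# Minimal DNS reconnaissance sketch (assumes: pip install dnspython).
# Queries several common record types for a domain and prints the answers,
# roughly mirroring what an attacker gathers during reconnaissance.
import dns.resolver

DOMAIN = "example.com"  # placeholder - replace with the domain being inventoried
RECORD_TYPES = ["SOA", "A", "MX", "NS", "TXT", "SRV"]

resolver = dns.resolver.Resolver()
for rtype in RECORD_TYPES:
    try:
        answers = resolver.resolve(DOMAIN, rtype)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.resolver.NoNameservers):
        continue  # no records of this type - nothing exposed via this query
    for rdata in answers:
        print(f"{DOMAIN} {rtype}: {rdata.to_text()}")
```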
 
It's important to note that access to some systems requires a DNS name rather than an IP address.  For example, some web servers and load balancers only respond when the HTTP request carries the correct hostname in its Host header.  Requiring the hostname provides an additional layer of security control, helping to protect those systems from threat actors performing “drive-by” scans that use only IP addresses.
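As a rough illustration of that behavior, the sketch below (using the third-party requests library, a documentation-range IP, and a hypothetical hostname) sends the same plain-HTTP request twice: once by bare IP and once with the Host header a name-based virtual host expects.  A server that requires the correct hostname will typically only answer the second usefully.

```python
# Sketch: probing a name-based virtual host by bare IP vs. the expected hostname.
# Assumes the third-party requests library; the IP and hostname are hypothetical.
import requests

ip = "203.0.113.10"           # documentation-range IP - replace with a real one
hostname = "app.example.com"  # hypothetical internal application name

# Request by IP alone - a server that requires the correct hostname will
# often return a default page, an error, or nothing useful.
by_ip = requests.get(f"http://{ip}/", timeout=5)

# Same IP, but with the Host header the virtual host expects.
by_name = requests.get(f"http://{ip}/", headers={"Host": hostname}, timeout=5)

print("By IP:  ", by_ip.status_code)
print("By name:", by_name.status_code)
```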
 
With more organizations utilizing cloud platforms such as AWS and Azure, it's particularly important to also inventory all the DNS names used for cloud-based services.  A compromised cloud account could prove fatal for most organizations as threat actors could use that account to access resources located in the cloud as well as internal resources in hybrid environments.
 
External IP / Host Discovery
The most general piece of information to inventory regarding potential external exposure is the range(s) of public IP addresses allocated to an organization.  Internet service providers are best positioned to provide that information, but it can also be validated by looking at router or firewall configurations.
 
Since not all public IP addresses will be used, discovery tools can report which public IP addresses provide access to internal systems.  The Nmap security scanner is a commonly used tool for this purpose.  By default it will send:
  • ICMP echo request (ping)
  • TCP SYN packet to port 443 (used for HTTPS)
  • TCP ACK packet to port 80 (used for HTTP)
  • ICMP timestamp request
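If Nmap isn't available, the discovery step can be roughly approximated with Python's standard library.  The sketch below (the address block is a placeholder from the documentation range) simply attempts TCP connections to ports 443 and 80; it will miss hosts that block both ports, but it needs no special privileges, unlike Nmap's ICMP probes.

```python
# Rough host-discovery sketch using only the standard library.
# Tries TCP connects to 443 and 80; hosts answering either are marked live.
# The address block below is a documentation range - replace with your own.
import ipaddress
import socket

NETWORK = ipaddress.ip_network("198.51.100.0/28")
PROBE_PORTS = (443, 80)

def is_reachable(ip: str, port: int, timeout: float = 1.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((ip, port)) == 0  # 0 means the connect succeeded

for host in NETWORK.hosts():
    ip = str(host)
    if any(is_reachable(ip, port) for port in PROBE_PORTS):
        print(f"{ip} appears to be reachable")
```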
 
Port Discovery
With a list of reachable hosts, it's then important to understand which ports are open and therefore are targets for threat actors.  Many remote network compromises are accomplished by exploiting a server application listening on a TCP or UDP port.  In many cases, the exploited application is not even used by the targeted organization but was enabled by default when the server was set up. Had that service been disabled, or protected by a firewall, the attack could have been stopped.
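A minimal port-discovery sketch, again standard library only and using a placeholder address, checks a short list of commonly targeted ports on a single host.  A real inventory should cover the full port range with a dedicated scanner such as Nmap.

```python
# Minimal TCP port-discovery sketch for a single host (placeholder IP).
# Checks a short list of commonly targeted ports; a real scan should cover
# the full 1-65535 range with a dedicated scanner.
import socket

HOST = "198.51.100.5"  # placeholder - replace with a host found during discovery
COMMON_PORTS = {
    21: "FTP", 22: "SSH", 23: "Telnet", 25: "SMTP", 80: "HTTP",
    110: "POP3", 143: "IMAP", 389: "LDAP", 443: "HTTPS", 445: "SMB", 3389: "RDP",
}

open_ports = []
for port, name in COMMON_PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        if sock.connect_ex((HOST, port)) == 0:
            open_ports.append((port, name))

for port, name in open_ports:
    print(f"Port {port} ({name}) is open and reachable")
```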
 
Step 2: Document
It takes effort and time to complete a full inventory of Internet-facing systems.  And for many organizations, that inventory changes over time as business needs change and grow.  Keeping a good record of the inventory allows organizations to track how their attack surface changes over time.  That documentation also becomes very important if ransomware remediation is ever needed, since it can help assess whether an access method was intentionally allowed or unknowingly exposed as part of infrastructure upgrades and changes.
 
For each system and application that is intentionally exposed to the Internet, it is useful to record more than the basics of DNS name, IP address, and port.  It is also helpful to document:
  • The users (or group of users) that are allowed access
  • The business need for allowing access
  • For servers: operating system and version
  • For network equipment: manufacturer, model, and firmware version
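One lightweight way to keep those fields consistent is a structured record.  The sketch below uses a Python dataclass with invented example values; the same fields could just as easily live in a spreadsheet or CMDB.

```python
# Sketch of a structured inventory record; field names and values are
# illustrative only - a spreadsheet or CMDB works just as well.
from dataclasses import dataclass
from typing import List

@dataclass
class ExposedService:
    dns_name: str
    ip_address: str
    port: int
    allowed_users: List[str]   # users or groups permitted to connect
    business_need: str         # why this exposure exists
    os_or_firmware: str        # OS/version or manufacturer/model/firmware
    last_reviewed: str         # when the exposure was last re-evaluated

vendor_vpn = ExposedService(
    dns_name="vpn.example.com",
    ip_address="203.0.113.20",
    port=443,
    allowed_users=["vendor-support-group"],
    business_need="Remote maintenance access for the ERP vendor",
    os_or_firmware="VendorOS 9.2 (hypothetical)",
    last_reviewed="2021-01-15",
)
print(vendor_vpn)
```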
 
Documenting the inventory should not be a one-time effort.  It's important to keep it current, updating it with any infrastructure changes that add, remove or change what is accessible from the Internet.  In that way, most questions about potential exposure risk should be answered by consulting the documentation rather than going through a more time-consuming re-inventory and analysis.
 
Steps 3 & 4: Evaluate and Act
Everything discovered during the inventory should be closely scrutinized.  Remember a key goal of this process is to reduce the attack surface and complexity.  With the inventory in hand, key questions to be asked include:
  • Does this system or application need to be accessible from the Internet?
  • Does everyone who currently has access to the system or application truly need access?
 
Access to some servers and applications can be justified by current business requirements and therefore won't need to be changed.  However, the inventory process will likely turn up some surprises that require adjustment.
 
Simplify and Minimize
Server operating systems include a variety of built-in services, some of which are enabled by default.  Only those services that are needed should be enabled, especially on Internet-facing systems.  Examples of services that are common targets for threat actors and should be disabled unless specifically needed are:
  • RDP
  • SMB
  • FTP
  • Telnet
  • SMTP, POP3
  • TFTP
  • IMAP
  • LDAP
 
Of the services listed above, the use of RDP should be heavily scrutinized on Internet-facing systems.  RDP is generally regarded as a safe and secure tool when used within a private network.  However, leaving RDP ports open to the Internet allows anyone to attempt to connect to the server.  If a threat actor successfully gains access, they can do anything on that server that the compromised account's permissions allow.  Remote RDP access is not a new threat, but the global shift to remote work during the COVID-19 pandemic has shown that threat actors are increasingly taking advantage of inadequately secured RDP.  At the start of March 2020, there were about 200,000 daily brute-force RDP attacks in the U.S., according to a Kaspersky report.  By July, that number had increased to almost 1.4 million.  RDP is generally regarded as the single biggest attack vector for ransomware, accounting for over 50% of all ransomware attacks.
 
VPNs should also be scrutinized.  During 2020, VPNs became a more commonly used attack vector among ransomware groups, with Citrix network gateways and Pulse Secure VPN servers becoming favorite targets, according to a report published by SenseCy.
 
Default Deny vs. Default Permit
The fundamental function of a firewall is to restrict the flow of information between two networks.  In most cases, the inventory process will identify ports that are open through the firewall that shouldn't be.  This can happen in situations such as:
  • The port was needed in the past but no longer is, and the firewall rules were never updated
  • The firewall security configuration is based on a default permit strategy rather than a default deny strategy
 
With a default permit strategy, the firewall is configured with rules that specify which traffic to block; any host or port not covered by those rules is allowed through by default.  In contrast, with a default deny strategy, the rules specify which traffic is allowed through the firewall; anything else is denied.
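The difference is easy to see in miniature.  The sketch below is not a real firewall, just a toy Python illustration of how the same explicit rule lists behave under the two strategies.

```python
# Toy illustration (not a real firewall) of default-permit vs. default-deny.
# A "rule" here is just a destination port the administrator explicitly listed.

ALLOW_RULES = {443, 25}   # default-deny: only these ports are allowed through
BLOCK_RULES = {23, 3389}  # default-permit: only these ports are blocked

def default_deny(port: int) -> bool:
    """Traffic passes only if a rule explicitly allows it."""
    return port in ALLOW_RULES

def default_permit(port: int) -> bool:
    """Traffic passes unless a rule explicitly blocks it."""
    return port not in BLOCK_RULES

# An FTP service (port 21) accidentally enabled on a server:
print(default_permit(21))  # True  - nothing blocks it, so it is exposed
print(default_deny(21))    # False - no rule allows it, so it stays closed
```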
 
While a default permit approach is typically easier to configure, it can be inadvertently circumvented by human error.  For example, if a service such as FTP is accidentally enabled on a server and no firewall rule is configured to deny it, that service is immediately exposed to the Internet.
 
When taking a default deny approach, a small amount of additional security can be gained by moving permitted traffic to non-standard ports, such as using port 8080 instead of 80 for HTTP.  This technique is typically referred to as "security through obscurity".  While it shouldn't be relied on by itself, it can be an effective layer in a multi-layered security strategy.
 
Patch Regularly
All firmware and application software has bugs.  Those bugs are annoying when all they do is affect functionality.  However, some bugs can be taken advantage of by threat actors to circumvent security measures by forcing the software to act in a way that was not intended.  These are considered software vulnerabilities.  Discovered vulnerabilities are registered and documented with MITRE as CVEs (Common Vulnerabilities and Exposures).  Some vulnerabilities are found by their developers and fixed before attackers are aware of them.  Others, known as zero-day vulnerabilities, are found by attackers before developers are aware.  Neither is good, but the latter gives attackers a window of opportunity until fixes can be released.
 
Many vulnerabilities have been around for a while and have been fixed in a newer version of software.  Adopting a policy of using regularly scheduled maintenance windows to patch all servers and network equipment (e.g. firewalls, routers, VPN concentrators, load balancers, ...), with particular emphasis on those items that are Internet-facing, goes a long way to limit options for threat actors.
 
Subscribe to Vulnerability Alerts and Advisories
In addition to adopting a regular patching strategy, it's important to stay aware of vulnerabilities as they are discovered.  Based on the severity of a newly discovered vulnerability, it may be necessary to perform an out-of-cycle patch or implement a workaround until new code is released with a fix.
 
Common resources to keep informed of security alerts and advisories, per SecurityIntelligence, are:
  • The CERT Coordination Center (CERT/CC) has up-to-date vulnerability information for the most popular products.  The vulnerability database is searchable, and entries can be sorted by severity or date published.
  • SecurityFocus has a feed with recent advisories for almost every product, although the product-specific feeds are not frequently updated.
  • The National Vulnerability Database has two feeds: one covers all recent CVE vulnerabilities, while the other focuses on fully analyzed CVE vulnerabilities.  The fully analyzed feed is often the more useful of the two because it includes the vulnerable product names.
  • US-CERT and the Industrial Control Systems CERT (ICS-CERT) publish regularly updated summaries of the most frequent, high-impact security incidents.  The information is similar to CERT/CC.  The content from ICS-CERT is especially useful if you have to protect critical infrastructure.
  • Most vendors have their own feed of advisories, as well.
 
The feeds from CERT/CC and SecurityFocus provide alert and advisory data for the most commonly used products and should be checked daily. Combine this with the information from US-CERT, and it's possible to closely follow what needs immediate attention or patches.
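Feed checks can also be partially automated.  The sketch below queries NVD's public CVE API for recent entries that mention a product keyword; it assumes the NVD 2.0 REST endpoint and its keywordSearch parameter behave as documented, and it uses the third-party requests library.

```python
# Sketch: pulling recent CVEs for a product keyword from the NVD API.
# Assumes the NVD 2.0 REST endpoint and the requests library; field names
# follow the published NVD 2.0 schema and should be verified before relying on them.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
params = {"keywordSearch": "Pulse Secure", "resultsPerPage": 20}

response = requests.get(NVD_URL, params=params, timeout=30)
response.raise_for_status()

for item in response.json().get("vulnerabilities", []):
    cve = item["cve"]
    print(cve["id"], "-", cve.get("published", "publication date unavailable"))
```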
 
Conduct Regular Vulnerability Assessments
Going through all of the effort outlined above isn't, by itself, sufficient to prevent ransomware attacks.  That's because:
  • New vulnerabilities are continuously being found and exploited
  • Rogue systems may be added to the network without organizations being aware
  • Security capabilities may lag IT upgrades as organizations try to use new IT systems to gain a competitive advantage
  • Human error happens
 
At least annually, organizations should conduct a vulnerability assessment to evaluate the overall cybersecurity "health" of the organization.  A vulnerability assessment involves identifying and prioritizing flaws in systems and networks that need to be addressed, hopefully before those flaws can be exploited by attackers.  It tests the ability of systems and networks to withstand cyberattacks.  The resulting report lists where security weaknesses exist and which areas should be prioritized to maintain comprehensive cybersecurity.
 
A comprehensive vulnerability assessment typically consists of the following steps:
  1. Installing a vulnerability scanner
  2. Scanning the organization’s network to identify
    • Accessible hosts
    • Open ports
    • Unpatched CVEs
  3. Providing a summary of findings based on risk ratings to help gauge overall cybersecurity health
  4. Providing a prioritized list of findings based on risk ratings so remediation efforts can be focused on what is most important
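Step 4 is largely about ordering the findings so that the riskiest items get remediated first.  A simple sketch of that prioritization, using invented findings and CVSS base scores, might look like this:

```python
# Sketch: prioritizing assessment findings by CVSS base score (higher = riskier).
# The findings below are invented examples, not real scan output.
findings = [
    {"host": "198.51.100.5", "issue": "RDP exposed to the Internet", "cvss": 9.8},
    {"host": "198.51.100.7", "issue": "Outdated VPN firmware", "cvss": 8.1},
    {"host": "198.51.100.9", "issue": "Anonymous FTP enabled", "cvss": 5.3},
]

for finding in sorted(findings, key=lambda f: f["cvss"], reverse=True):
    print(f'{finding["cvss"]:>4}  {finding["host"]}  {finding["issue"]}')
```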
 
Summary
Ransomware threats are becoming more prevalent and more expensive each year.  Organizations must respond by prioritizing strong cybersecurity "health", committing resources to take inventory of their infrastructure, document it, evaluate what and where the risks are, and act to remove or lessen those risks.  Afterward, regular vulnerability assessments should be conducted to ensure new vulnerabilities haven't been missed and things like human error haven't given ransomware groups access to systems and applications that were previously denied.
 
