A Closer Look at Microsoft Exchange Server Breaches

PAUL VAN RAMESDONK

Jun 14 2021

In March, Microsoft released an out-of-band patch for vulnerabilities in its Exchange Server software that hackers had exploited to breach at least 30,000 organizations in the USA, and possibly hundreds of thousands more around the world. A significant number of small businesses across a range of industry sectors, along with state and local governments, were infected with web shells that allow the attackers to maintain control.

Vulnerable organizations could have been affected at any time, with infiltrations reportedly taking place from late February onward and escalating into a marathon hack once Microsoft released its patch on March 2.

Over the following days, cyberattackers took advantage of slow patching processes, with attacks doubling every two to three hours; Turkey, the United States, and Italy accounted for more than 50% of all tracked exploit attempts. Government, military, manufacturing, and financial services were among the most affected industries.

Unfortunately, events continued to worsen for Microsoft in the following weeks.

From March 11 through 15, Check Point Research found that attacks had increased 10-fold since March 2, with Germany and the UK also coming under attack. On March 12, Microsoft announced that a ransomware strain known as DearCry was exploiting the same server vulnerabilities.

This event helps to illustrate the true difficulty of dealing with cyberthreats: even when prevention is the core mission, as it was in Microsoft’s case during this patch release, there is no such thing as airtight cybersecurity. However, orchestrating the response process and utilizing forensic tools to track where a breach occurred can help mitigate the fallout, and in some cases prevent damage entirely.

In the case of the Exchange exploits, simply patching the servers would not remove any backdoors already planted, meaning those backdoors would remain available for threat actors to exploit the system in future. On March 15, Microsoft released a one-click PowerShell tool that patches the vulnerabilities, runs a malware scan to detect web shells, and removes any that are found.
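
For reference, here is a minimal sketch of running that tool, assuming it is the Exchange On-premises Mitigation Tool (EOMT.ps1) published in Microsoft's CSS-Exchange GitHub repository, from an elevated PowerShell prompt on the Exchange server:

    # Download the latest EOMT release (URL per Microsoft's CSS-Exchange repository).
    Invoke-WebRequest -Uri 'https://github.com/microsoft/CSS-Exchange/releases/latest/download/EOMT.ps1' `
        -OutFile "$env:TEMP\EOMT.ps1"

    # By default EOMT applies the URL rewrite mitigation, runs the Microsoft
    # Safety Scanner to detect web shells, and attempts remediation.
    & "$env:TEMP\EOMT.ps1"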

There are a number of ways to detect web shells on a system suspected of being compromised. Abnormal web activity could prompt one to investigate further, but the reality is that small to medium enterprises may not have an Intrusion Detection System deployed in their network, and some smaller businesses also outsource their IT requirements.

Companies spend a lot of money on perimeter defenses such as Intrusion Detection Systems and firewalls, but exploits happen: some threats get past perimeter defenses, and insider threat actors are a reality. When that happens, one needs a way to detect the threat, analyze it, resolve it, and collect forensically sound images of compromised systems. FTK Enterprise can be used to find the processes, Trojans, or internal threats that got around those perimeter defenses, giving you threat detection as well as visibility of your data across your network.

For detection of web shells, one could build a ‘Known-Good’ or ‘Gold Build’ hash set or Known File Filter (KFF), covering either the web applications themselves or all the applications running on a server, and compare that Known-Good hash set against a system that may have been compromised to identify any modified files that could be used to launch web shells or exploits. However, threat actors often craft their exploits to evade detection, and in some cases this comparison approach will not find them. The reality with the exploits used today, especially where web shells and reverse shells are involved, is that the payloads can be difficult to detect and may run purely in memory. In those cases, scanning a forensic image of a device simply won’t detect the exploit, because it never resides on the disk itself.
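
As an illustration of the hash-set concept (not the product's KFF workflow), here is a minimal PowerShell sketch, assuming C:\inetpub\wwwroot is the web application root and baseline.csv is the exported Known-Good hash set:

    # On the trusted 'Gold Build' server: build the Known-Good hash set.
    Get-ChildItem -Path 'C:\inetpub\wwwroot' -Recurse -File |
        Get-FileHash -Algorithm MD5 |
        Select-Object Hash, Path |
        Export-Csv -Path 'baseline.csv' -NoTypeInformation

    # On the suspect server: hash the same tree and flag files whose
    # hashes do not appear in the baseline (new or modified files).
    $baseline = (Import-Csv 'baseline.csv').Hash
    Get-ChildItem -Path 'C:\inetpub\wwwroot' -Recurse -File |
        Get-FileHash -Algorithm MD5 |
        Where-Object { $baseline -notcontains $_.Hash } |
        Select-Object Path, Hash

As noted above, this only catches on-disk modifications; a payload running purely in memory will not appear in any file hash comparison.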

In FTK Enterprise, an investigator can dump files from memory to disk; view running processes, currently loaded DLLs, open network ports, currently loaded drivers, and the users configured on the system; view and search the registry remotely; and even collect the RAM remotely. This is achieved by deploying an agent on the end-point. An investigator can also analyze a memory dump collected from a machine, either through the deployed agent or directly off the computer using FTK Imager.

Once a threat has been identified, the investigator can use the generated hash to search one or multiple computers for the same hash. This is useful for detecting a possible second drop location, for quickly identifying on which other computers the exploit is running in memory, or for finding files on the file system that match the same hash values. Using Remediation or Batch Remediation, the investigator can then take corrective actions, such as terminating processes, removing files, placing a file or script on the remote end-point, and even executing commands or a one-click PowerShell script, all remotely through the deployed agents.

Breach of a System: Collection, Detection, and Taking Corrective Action

Consider a scenario where we need to know what is going on in the network. We don’t necessarily know that anything bad is happening, but we need a quick way to inspect the processes running on the machines in the network and to separate approved processes from unapproved ones, so we can get a clear and concise picture of what is going on out there.

In this scenario, an internal threat actor opened a reverse shell from a Windows Server 2016 machine to another machine on the network. We are not sure what has been done or is being done, but we need to identify potential sources for forensic collections. We will compare which processes are running while the exploit is active against a Known-Good volatile memory collection. The comparison can highlight which processes were not running in the Known-Good collection, thereby giving the investigator fewer processes and network connections to verify.

To perform a collection of volatile data from one or more end-points, the investigator makes a selection from the list of configured agents and then selects one or more types of volatile data to collect. This collected data is then used as the benchmark for comparison and to assist with the analysis. Should a breach be suspected, the investigator runs a new collection, which is then compared to the ‘Known-Good’ volatile data collection. The comparison can be done on various volatile items, such as processes, DLLs, network sockets, and network devices.

Figure 1: Collection of Volatile data from Agent
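
Outside the product, a rough PowerShell equivalent of such a baseline collection, capturing running processes and TCP connections to timestamped CSV files (file names are illustrative), might look like this:

    # Snapshot running processes (name, PID, executable path).
    Get-Process |
        Select-Object Name, Id, Path |
        Export-Csv -Path "processes-$(Get-Date -Format yyyyMMdd-HHmm).csv" -NoTypeInformation

    # Snapshot TCP connections with their owning process IDs.
    Get-NetTCPConnection |
        Select-Object LocalAddress, LocalPort, RemoteAddress, RemotePort, State, OwningProcess |
        Export-Csv -Path "tcp-$(Get-Date -Format yyyyMMdd-HHmm).csv" -NoTypeInformation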

At this stage we have two snapshots of volatile memory: the first collection is the ‘Known-Good’, while the second was taken once a server or servers were suspected of being compromised. Using the differences functionality, it is easy to view a comparison between the ‘Known-Good’ and suspect collections. By excluding all processes that appear in both collections, the investigator is left with a shorter list that requires further analysis. The processes associated with each collection date and time are color-coded so the investigator knows which collection contains which processes.

Figure 2: Running Comparison, and available options

Once a suspected threat has been seen in the comparison, the investigator can decide how to proceed. The process seen in Figure 2 is running as svchost.exe. The name svchost.exe is not suspicious in itself, as this process is used for a variety of tasks in a Windows operating system, but the path from which the executable is running is suspicious, as is the working directory, and on further analysis the fact that the process is communicating with a local IP will also need to be investigated. As can be seen, one option is to kill the process outright or to wipe the file from the end-point. At this point we would not remediate, as the investigator needs to build a profile of what the executable is doing: most threats are persistent, and even if killed and wiped, the threat could start up again from a second drop location.
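
As a rough analogue of this triage step, one can diff the two process snapshots and flag any svchost.exe not running from the expected System32 path. A sketch, assuming the CSV snapshots from the earlier collection step:

    # Processes present in the suspect snapshot but not in the baseline.
    $baseline = Import-Csv 'processes-baseline.csv'
    $suspect  = Import-Csv 'processes-suspect.csv'
    Compare-Object -ReferenceObject $baseline -DifferenceObject $suspect -Property Name, Path |
        Where-Object SideIndicator -eq '=>'

    # Flag any live svchost.exe not running from the System32 directory.
    Get-Process -Name svchost |
        Where-Object { $_.Path -and $_.Path -notlike "$env:SystemRoot\System32\*" } |
        Select-Object Id, Path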

To gather more information about the script, one can dump the file in question. This provides more information about the process, including its MD5 hash, which can then be used to scan the server for a secondary drop location, or added to the Known File Filter hash set so that an alert is raised should it be detected during future file system analysis. In this scenario we have dumped the file in addition to adding it to the alert hash set.

The next step is to search the server under investigation for secondary drop locations (should they exist). In this example, this is achieved by creating a very basic filter that looks for files with matching MD5 hash values, as seen in Figure 3.

Figure 3: Creating and executing filtered file system scan
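
A comparable scan can also be scripted directly on the end-point. The sketch below recursively hashes files on the system drive and reports any that match the dumped file's MD5 (the dump file name is illustrative):

    # MD5 of the dumped suspect file.
    $suspectHash = (Get-FileHash -Path '.\dumped-svchost.exe' -Algorithm MD5).Hash

    # Recursively hash files on the system drive and report matches,
    # i.e. possible secondary drop locations of the same payload.
    Get-ChildItem -Path 'C:\' -Recurse -File -ErrorAction SilentlyContinue |
        Get-FileHash -Algorithm MD5 -ErrorAction SilentlyContinue |
        Where-Object { $_.Hash -eq $suspectHash } |
        Select-Object Path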

Upon completion of the filtered file system search, we can see copies of the exploit residing in secondary drop locations on the server, as seen in Figure 4. From here the investigator can initiate a Cerberus analysis on all of the executables or on just a single one, since the hash values are the same.

Figure 4: Results from Filter and initiating Cerberus Analysis

The processing option for Cerberus consists of Stage 1, Threat Analysis, in which Cerberus detects potentially malicious code and assigns a threat score to the executable or library, and Stage 2, Static Analysis, which disassembles the binary and examines the code without actually running it. Once the analysis has been completed, the investigator can review the results and refine the threat detection applied to other systems. It is important to note that at this stage we are performing all stages of the analysis live, meaning we are using the deployed agent continuously. Should the investigator wish to send the suspicious executable files to malware experts to reverse engineer the code, they should create a forensically sound image at this stage, prior to proceeding with removal of the executable files.

Figure 5: Results from Cerberus Analysis

As the process in this instance has been named to resemble a valid Windows process, namely svchost.exe, the investigator should perform remediation directly from the volatile memory collection. When the option to wipe the file is selected here, the process is first killed and the file is immediately wiped upon completion.

Figure 6: Remediation from Volatile Data Collection

Since secondary drop locations were identified in a previous step, it is recommended to run a Batch Remediation job. The investigator can use Batch Remediation to kill one or more processes, wipe files, place (put) a file on the end-point, or run a command remotely. An example of placing a file on the end-point would be pushing the one-click PowerShell script provided by Microsoft to the Exchange server or servers and executing the command to run it, thereby patching the servers, detecting possible breaches, and removing them. The example below shows the three steps created in this particular batch to remove the secondary drop locations.

Figure 7: Batch Remediation
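
Scripted outside the product, the equivalent remediation steps might look like the sketch below. The paths are illustrative, and note that the agent performs a forensic wipe, whereas Remove-Item is a simple delete:

    # Step 1: kill the malicious process masquerading as svchost.exe,
    # identified by its non-standard executable path.
    Get-Process -Name svchost |
        Where-Object { $_.Path -like 'C:\Users\*' } |
        Stop-Process -Force

    # Steps 2 and 3: remove the payload from the identified drop locations.
    $dropLocations = @(
        'C:\Users\Public\svchost.exe',   # primary drop location
        'C:\Windows\Temp\svchost.exe'    # secondary drop location
    )
    Remove-Item -Path $dropLocations -Force -ErrorAction SilentlyContinue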

Upon completion of remediation, the investigator can take an additional snapshot of the volatile memory and run a comparison to confirm that the threat has been removed and is no longer running in memory, and to determine whether further action is required. In this example we can see it running in the earlier volatile collection but no longer in the latest collection.

Figure 8: Confirmation of Remediation
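
The same confirmation can be expressed as a final scripted check against a fresh snapshot (a sketch reusing the illustrative CSV exports from earlier):

    # Confirm the masquerading svchost.exe is absent from the fresh snapshot.
    $latest = Import-Csv 'processes-postremediation.csv'
    if ($latest | Where-Object { $_.Path -like 'C:\Users\*svchost.exe' }) {
        Write-Warning 'Suspicious svchost.exe still present; further action required.'
    } else {
        Write-Output 'Threat no longer running in memory.'
    }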

Other useful functionality for detecting possible breaches is available when an investigator imports a memory dump from a system: during the analysis phase, the software detects and highlights possible I/O Request Packet (IRP) hooks, also referred to as memory hooks. An example of this output can be seen in this dump from a system where a reverse shell was in progress: the lines highlighted in pink show which drivers have hooks detected, in addition to the exclamation mark next to the driver list.

Figure 9: IRP (Memory Hooks) from Memory Dump

Another useful feature available to an investigator is the ability to search through collected volatile data to identify other systems possibly breached by the same exploit. This scenario can be seen in Figure 10.

Figure 10: Hash matches from Other Memory Collections using Find functionality
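
Where volatile collections from multiple machines have been exported, a similar hunt can be run as a simple search across those exports. A sketch, assuming the collections are CSV files that include Hash and Path columns:

    # Search every exported collection for the known-bad MD5 hash.
    $suspectHash = 'PASTE-SUSPECT-MD5-HERE'   # placeholder value
    Get-ChildItem -Path '.\collections' -Filter '*.csv' |
        ForEach-Object {
            $file = $_.Name
            Import-Csv $_.FullName |
                Where-Object { $_.Hash -eq $suspectHash } |
                Select-Object @{ n = 'Collection'; e = { $file } }, Path, Hash
        }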

As part of the investigation and analysis, we added the exploit’s hash value to the local KFF hash set with a status of Alert; any subsequent processing of a forensic image will then alert the investigator to the malware being present on that system. An example using the exploit’s hash value from this scenario can be seen in Figure 11.

Figure 11: KFF flagging Alert for Exploit on another machine after processing


Contact us for more information or a demo of FTK Enterprise.

Paul van Ramesdonk is an International Technical Engineer with Exterro.
