The NSA recently released a repository of guidance for mitigating web shells. The repo contains a number of signatures and tools for detecting and blocking web shells, and it provides valuable insight into how APTs are using advanced web shells in their attacks.

Below, I've gone through that guidance and noted several questions that need to be answered ahead of time when building out an advanced web shell. In future posts, I'll research these questions and identify answers. Then we'll build a web shell based on those strategies and see how it performs against blue teams, helping them level up ... and, more importantly, testing whether these monitoring characteristics are actually implemented by IPS/EDR products.

Detecting/Blocking Web Shells

"Administrators can programmatically compare the production site with the known-good version to identify added or changed files"

With the rise of CI/CD in the Software Development Life Cycle (SDLC), production applications may have weekly or daily releases. If your web shell is dropped onto disk in a non-temporary location (such as the web root), it may disappear with the next release or, worse, trigger an alert if monitoring is in place for changed files.
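The defender's side of this check is straightforward to sketch: hash every file in the deployed site and diff against a manifest built from a known-good copy. A minimal sketch in Python (the directory layout and helper names here are illustrative, not from the NSA tooling):

```python
import hashlib
from pathlib import Path


def hash_tree(root: str) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    root_path = Path(root)
    return {
        str(p.relative_to(root_path)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root_path.rglob("*")
        if p.is_file()
    }


def diff_trees(known_good: dict, production: dict):
    """Return files added to, or changed in, production vs. the known-good copy."""
    added = sorted(p for p in production if p not in known_good)
    changed = sorted(
        p for p in production
        if p in known_good and production[p] != known_good[p]
    )
    return added, changed
```

Anything in `added` or `changed` that doesn't line up with a release is exactly the signal the guidance describes, which is why drop timing relative to the release cadence matters to a red team.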

As a red teamer:

  • Which day of the week would be best to upload my web shell? Can we identify any patterns within the organization that would suggest those who are monitoring network activity may not be paying attention?
  • Did our OSINT reveal anything about the development practices of the organization? Are they agile? Do they have continuous releases?

"Using a file integrity monitoring system can block file changes to a specific directory or alert when changes occur"

  • Where on the server is your web shell likely being dropped? If you've exploited an RFI or unrestricted file upload to gain a shell, which directory are you targeting? Is it a temporary directory on a different disk than the web root (apps following best practices will do this), or are you in the web root itself?
  • Is there a way to bypass HIPS rules for directories?
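To reason about what we're up against, it helps to approximate what a file-integrity monitor does under the hood: baseline a directory, then report anything added or modified. A minimal polling sketch in Python (real FIM/HIPS products hook filesystem events rather than poll, and the directory name is illustrative):

```python
from pathlib import Path


def snapshot(webroot: str) -> dict:
    """Record each file's path and modification time in the monitored directory."""
    return {
        str(p): p.stat().st_mtime
        for p in Path(webroot).rglob("*")
        if p.is_file()
    }


def watch_once(webroot: str, baseline: dict) -> list:
    """One polling pass: report files added or modified since the baseline."""
    current = snapshot(webroot)
    return sorted(
        path
        for path, mtime in current.items()
        if path not in baseline or mtime != baseline[path]
    )
```

Note that a monitor like this only sees the directories it was pointed at, which is why the temp-directory-vs-webroot question above matters.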

Detecting Abnormalities in Logs

"This analytic is likely to produce significant false positives in many environments, so it should be employed cautiously... Therefore, this analytic should only be one part of a broader defense in depth approach to mitigating web shells."

  • Do any solutions in 2020 actually flag on these behaviours? Do companies actually turn it on or is it too much noise?
  • What are typical User-Agents used by internal applications?
  • What are typical User-Agents used by cloud applications?
  • What are typical Referer headers used by applications? Can you cycle through Referer values based on crawled application URLs?
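The Referer-cycling idea in the last bullet can be sketched quickly: crawl the target application, then rotate its own URLs (and plausible User-Agents) into each shell request so the traffic blends into normal browsing. A minimal sketch in Python; the URL and User-Agent lists are hypothetical stand-ins for values you'd actually harvest from the target:

```python
import itertools
import random

# Hypothetical values: in practice these would come from crawling the
# target application and observing its legitimate traffic.
CRAWLED_URLS = [
    "https://app.example.com/login",
    "https://app.example.com/dashboard",
    "https://app.example.com/reports",
]
COMMON_USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
]

_referer_cycle = itertools.cycle(CRAWLED_URLS)


def blended_headers() -> dict:
    """Build request headers that rotate through plausible Referer values."""
    return {
        "User-Agent": random.choice(COMMON_USER_AGENTS),
        # The HTTP header is (mis)spelled "Referer" on the wire.
        "Referer": next(_referer_cycle),
    }
```

A log-analytics rule looking for a missing or constant Referer would see each shell request apparently arriving from a different in-app page.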

Detecting Artifacts

"Web shells are easy to modify without losing functionality and can thus be tailored to avoid host-based signatures such as file artifacts."

  • Can our shell be dynamically generated and tested against a list of YARA signature rules? The NSA provides YARA rules on GitHub; what can we learn from those rules?
  • Can our shell avoid network-based detection (IDS/WAF)? Look into Snort signatures.
  • Can we achieve fileless execution? What are some strategies for this? More research needed...
  • How can we avoid TLS MITM inspections?
  • Can we avoid launching common system binaries that EDR blacklists and alerts on, such as whoami.exe on Windows or ifconfig on Linux (auditd)?
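The generate-and-test loop from the first bullet can be prototyped without any YARA tooling: treat a handful of suspicious patterns as stand-in signatures and keep generating variants until one slips past them all. A minimal sketch in Python (the signatures and payloads below are hypothetical; real testing would compile the NSA's actual YARA rules, e.g. with yara-python):

```python
import re

# Hypothetical host-based signatures, standing in for real YARA rules.
SIGNATURES = [
    re.compile(rb"eval\s*\(\s*base64_decode", re.I),
    re.compile(rb"\$_(GET|POST|REQUEST)\[.{0,20}\]\s*\(", re.I),
]


def is_flagged(payload: bytes) -> bool:
    """Return True if any signature matches the candidate shell."""
    return any(sig.search(payload) for sig in SIGNATURES)


def first_evading(candidates):
    """Pick the first generated variant that no signature flags."""
    for candidate in candidates:
        if not is_flagged(candidate):
            return candidate
    return None
```

The interesting engineering question is the generator feeding `candidates`: renaming variables, splitting strings, and swapping equivalent constructs so each variant keeps its functionality while moving away from the signature set.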