ScaleSec Blog

Written by Cameron McCloud | Apr 27, 2023 7:00:00 AM

Using AWS Config and Systems Manager for File Integrity Monitoring

Simplifying File Integrity Monitoring in AWS

Anyone who has gone through the compliance accreditation process knows there are often technical requirements that are not currently among the organization’s security practices. Even for mature security programs, controls such as File Integrity Monitoring (explicitly required in PCI-DSS, and at least implicitly required in NIST-backed compliance frameworks like FISMA and FedRAMP) can cause head-scratching moments about how best to satisfy emergent requirements.

Additionally, the Cloud presents new challenges in balancing the Shared Responsibility Model, Cloud Service Provider (CSP) services, and new third-party paid or open-source tools, among other considerations. While every organization should make those decisions based on its own culture and objectives, in general it is better to maximize tools that are already in use for the best cost and usability outcomes.

While researching a cloud-native method for addressing File Integrity Monitoring (FIM) requirements in AWS, I came across an AWS announcement from July 2020 that states:

AWS Systems Manager [SSM] now integrates with AWS Config to track configuration changes to inventory files on managed instances collected by AWS Systems Manager Inventory.

While this announcement doesn’t mention FIM directly, AWS Config and the Swiss Army Knife that is SSM are utilized in so many organizations that I was curious to see if this could be a viable way to satisfy the control. I split my research into two parts:

  1. Part 1 in this blog discusses the configuration for this solution and whether I believe this combination of services can meet some common compliance FIM requirements (short answer is “I believe so”, but read on for more).
  2. Part 2 (a future blog post) will discuss the security implications of this solution and its potential for being beneficial to security operations and incident response processes.

File Integrity Monitoring (FIM) Concepts

File Integrity Monitoring (FIM) broadly refers to a process that monitors designated files (examples include OS, data, configuration, and application files) for alterations that may be indicative of a cyber attack. This is typically accomplished by utilizing an agent based or agentless solution to create a baseline of files and compare changes to file hash digests over time. While this seems fairly straightforward, there are various challenges to implementing FIM effectively:

  • Choosing a tool for FIM can sometimes add unforeseen costs and a learning curve to implement correctly
  • Creating a baseline of critical and known safe files is an involved and collaborative process
  • FIM alerts are notoriously noisy, with tuning being an intensive undertaking
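The baseline-and-compare approach described above can be sketched in a few lines of Python. This is a simplified illustration of the hashing concept, not a production FIM tool, and the function names are my own:

```python
import hashlib
from pathlib import Path


def hash_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def build_baseline(directory: str) -> dict:
    """Map every file under `directory` to its current digest."""
    return {str(p): hash_file(p)
            for p in Path(directory).rglob("*") if p.is_file()}


def compare(baseline: dict, current: dict) -> dict:
    """Classify drift since the baseline was captured."""
    return {
        "added": sorted(current.keys() - baseline.keys()),
        "deleted": sorted(baseline.keys() - current.keys()),
        "modified": sorted(f for f in baseline.keys() & current.keys()
                           if baseline[f] != current[f]),
    }
```

A real solution layers on a known-good baseline process, scheduling, and alerting, which is exactly where the challenges listed above come from.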

The Cloud offers new considerations, with managed services, API calls, and metadata (among other things) becoming integral parts of any FIM strategy. For example, AWS CloudTrail and S3 server access logging can be utilized to record actions taken on S3 objects. S3 also automatically computes a checksum/digest for each object and stores it in the object metadata under an x-amz-checksum-<algorithm> field.

Additional features like Versioning can also be enabled to restore previous object versions in case of tampering or corruption. A combination of these features can typically satisfy FIM requirements for audit logs and other protected files stored in S3. The gap for FIM capabilities in the Cloud is typically with more Infrastructure-as-a-Service (IaaS) offerings such as EC2 due to the lack of visibility into the system runtime. I set out to explore whether the AWS Config/SSM solution could fill that gap.
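For single-part uploads, the SHA-256 checksum S3 reports is simply the base64-encoded SHA-256 digest of the object bytes, so integrity can be verified independently of AWS. A small sketch (the function name is illustrative; multipart uploads use a different, composite scheme):

```python
import base64
import hashlib


def s3_style_sha256(data: bytes) -> str:
    """Compute the base64-encoded SHA-256 digest that S3 reports as
    x-amz-checksum-sha256 for a single-part upload."""
    return base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")

# Comparing this value against the checksum S3 returns for the object
# (with checksum retrieval enabled) confirms the stored copy is unaltered.
```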

AWS Config and SSM Proof of Concept

AWS Config is a fully managed service for tracking changes to resources and resource configurations for security and compliance governance purposes. Standard configurations involve gleaning metadata from resources, compiling information into a standard format, and then comparing against internal rules and storing configuration output in an S3 bucket. AWS Config does have a gap with IaaS services such as EC2 (or on premise infrastructure), but can rely on the SSM Agent to collect that data. AWS Systems Manager (SSM) offers a full suite of tools for security and fleet management, but the specific service involved in file tracking is Systems Manager Inventory.

The gap with Systems Manager Inventory is that it offers a current snapshot of the monitored file systems, but isn’t suited to tracking changes over time. That is where AWS Config comes in. Putting it all together: the SSM agent continuously collects file metadata on its installed system, and sends this data to both the Systems Manager Inventory service for current state visualization and AWS Config for tracking changes over time.

The announcement for the Systems Manager/Config integration gives the following setup instructions:

To get started, enable AWS Config in your AWS account. Then, select SSM:FileData from the AWS Config resource types. If you previously configured AWS Config to record all resource types, then AWS Systems Manager file data is tracked automatically.

Many AWS users may already be monitoring files on their instances. If you are not utilizing one or both of these services, setting them up from scratch is outside the scope of this blog post; the public documentation should be referenced for configuration instructions for AWS Config, the SSM Agent, and SSM Inventory.
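Under the hood, selecting SSM:FileData amounts to including the AWS::SSM::FileData resource type in the Config recorder’s recording group. A hedged sketch of that configuration fragment is below; the recorder name, account ID, and role ARN are placeholders, so verify the exact shape against the current AWS Config documentation:

```json
{
  "name": "default",
  "roleARN": "arn:aws:iam::111122223333:role/aws-config-role",
  "recordingGroup": {
    "allSupported": false,
    "includeGlobalResourceTypes": false,
    "resourceTypes": [
      "AWS::SSM::FileData",
      "AWS::SSM::ManagedInstanceInventory"
    ]
  }
}
```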

Since I just wanted to perform a proof of concept, I followed the instructions in the announcement and tried to keep things as simple as possible. The following high level steps were followed to perform the POC:

  • Created an EC2 instance with a CIS hardened Ubuntu 18.04 image and the SSM agent pre-installed.
  • Enabled AWS Config and specified that it collects SSM:FileData. An SNS topic for notifications was also configured.
  • Enabled SSM Inventory to collect data on the /etc directory.
  • Created a test file, testing.txt, in the /etc directory.
  • Deleted the test file testing.txt from the /etc directory.
  • Modified the contents of the host.conf file.
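The /etc collection in the steps above maps to an SSM Inventory association using the AWS-GatherSoftwareInventory document, whose files parameter takes a JSON policy. The path and pattern below match my POC; treat the exact shape as an assumption to verify against the current SSM Inventory documentation:

```json
[
  {
    "Path": "/etc",
    "Pattern": ["*"],
    "Recursive": true
  }
]
```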


AWS Config Output

Since AWS Config is shown as the destination mechanism, I started there first. The dashboard showed SSM FileData as a unique option in the Resource Inventory.

Drilling down on that further I was able to see my test instance.

Going a level of detail deeper on the specific resource showed multiple ways to visualize data: the ‘resource timeline’ and a JSON representation of the current configuration.

The Resource Timeline seemed to highlight deltas, with both the creation and deletion of my testing.txt file showing as configuration changes. What was not shown, however, was my modification of the host.conf configuration file. This feature seems to operate on file creation and deletion events, but not file change events.

The View Configuration Item showed the raw JSON representation for every tracked file. Some metadata fields were missing, but file name and directory were consistently populated. A truncated code sample is below:

{
  "version": "1.3",
  "accountId": "XXXXXXXXX",
  "configurationItemCaptureTime": "XXXXXXXXX",
  "configurationItemStatus": "OK",
  "configurationStateId": "XXXXXXXXX",
  "configurationItemMD5Hash": "",
  "resourceType": "AWS::SSM::FileData",
  "resourceId": "AWS::SSM::ManagedInstanceInventory/i-XXXXXXXX",
  "awsRegion": "us-east-2",
  "tags": {},
  "relatedEvents": [],
  "relationships": [
    {
      "resourceType": "AWS::SSM::ManagedInstanceInventory",
      "resourceId": "i-XXXXXXXX",
      "relationshipName": "Is associated with "
    }
  ],
  "configuration": {
    "AWS:File": {
      "SchemaVersion": "1.0",
      "Content": [
        {
          ".pwd.lock": {
            "CompanyName": "",
            "ProductLanguage": "",
            "Description": "",
            "ProductName": "",
            "InstalledDate": "",
            "FileVersion": "",
            "InstalledDir": "/etc",
            "ProductVersion": ""
          }
        },
        {
          "adduser.conf": {
            "CompanyName": "",
            "ProductLanguage": "",
            "Description": "",
            "ProductName": "",
            "InstalledDate": "",
            "FileVersion": "",
            "InstalledDir": "/etc",
            "ProductVersion": ""
          }
        },
....

AWS Config showed some helpful information, but it also had notable gaps: missing metadata and no tracking of modifications to existing files. I looked at the SSM Inventory output next to see if there was any difference.
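Since the timeline surfaces creations and deletions but not content changes, one workaround is to diff the Content arrays of two captured configuration items yourself. A rough sketch, assuming the JSON layout shown above (the function names are my own):

```python
def list_files(config_item: dict) -> set:
    """Extract tracked file names from an AWS::SSM::FileData
    configuration item, where Content is a list of single-key dicts."""
    content = config_item["configuration"]["AWS:File"]["Content"]
    return {name for entry in content for name in entry}


def diff_snapshots(older: dict, newer: dict) -> dict:
    """Report files added or deleted between two Config snapshots."""
    before, after = list_files(older), list_files(newer)
    return {"added": after - before, "deleted": before - after}
```

This is still creation/deletion detection only; catching in-place modifications would require richer metadata than the captured items contain.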

SSM Inventory Output

Once I navigated to the Inventory section of Systems Manager there was already some graphical visualization built in. I was able to drill into the File resource for more information.

I was able to drill in on my specific instance, which showed an Inventory option that listed all tracked files and select metadata.

I was not able to find the testing.txt file that I had created and deleted, which was expected since this is a point-in-time inventory. I was, however, able to see that the last modified date for the host.conf file aligned with the adjustments I made. As with the AWS Config data, much of the metadata was still unpopulated.

Compliance Implications

Judging from the above output, here are my thoughts on various FIM control standards in select compliance frameworks. Please note that these are opinions and have not been explicitly used to pass an audit:

PCI v4.0

PCI defines FIM as:

A change-detection solution that checks for changes, additions, and deletions to critical files, and notifies when such changes are detected.

The AWS Config/SSM solution can monitor for file changes, additions, and deletions so I believe it fits this definition. Specific controls that address FIM in the PCI framework are as follows:

Control 10.3.4

File integrity monitoring or change-detection mechanisms is used on audit logs to ensure that existing log data cannot be changed without generating alerts.

Verdict: Local logs that are stored on the system could theoretically be monitored to satisfy this control, but a best practice would be to export those logs and store them in a solution such as S3 with the appropriate controls. AWS Config/SSM should still satisfy the letter of this control on the systems themselves if needed.

Control 11.5.2

A change-detection mechanism (for example, file integrity monitoring tools) is deployed as follows:

  • To alert personnel to unauthorized modification (including changes, additions, and deletions) of critical files.
  • To perform critical file comparisons at least once weekly.

Verdict: I believe the Config/SSM solution DOES satisfy the letter of this requirement as long as ‘critical’ files are identified properly. I monitored the vital /etc directory for the POC, but PCI 4.0 provides the following supplemental guidance:

Examples of the types of files that should be monitored include, but are not limited to:

  • System executables.
  • Application executables.
  • Configuration and parameter files.
  • Centrally stored, historical, or archived audit logs.
  • Additional critical files determined by entity (for example, through risk assessment or other means).

The notification piece can be completed via integrated notification services or export to a SIEM which produces alerts. AWS Config snapshots seem to run at least daily by default, and SSM Inventory updates were almost instant.
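For the notification piece, an EventBridge rule can match Config configuration item changes for the SSM:FileData resource type and route them to SNS or a SIEM. A sketch of the event pattern is below; the detail-type string is the one I understand Config to publish, but verify it against the current EventBridge documentation:

```json
{
  "source": ["aws.config"],
  "detail-type": ["Config Configuration Item Change"],
  "detail": {
    "configurationItem": {
      "resourceType": ["AWS::SSM::FileData"]
    }
  }
}
```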

NIST SP 800-53, Revision 5

NIST 800-53 is the baseline for a multitude of government compliance frameworks, including FISMA and its cloud equivalent FedRAMP. While there is no control that explicitly uses the term File Integrity Monitoring (it’s not uncommon for NIST documents to use their own nomenclature), control SI-7 seems to be describing FIM:

Control SI-7

  • Employ integrity verification tools to detect unauthorized changes to the following software, firmware, and information: [Assignment: organization-defined software, firmware, and information]; and
  • Take the following actions when unauthorized changes to the software, firmware, and information are detected: [Assignment: organization-defined actions].

Supplemental Guidance: Unauthorized changes to software, firmware, and information can occur due to errors or malicious activity. Software includes operating systems (with key internal components, such as kernels or drivers), middleware, and applications. Firmware interfaces include Unified Extensible Firmware Interface (UEFI) and Basic Input/Output System (BIOS). Information includes personally identifiable information and metadata that contains security and privacy attributes associated with information. Integrity checking mechanisms—including parity checks, cyclic redundancy checks, cryptographic hashes, and associated tools—can automatically monitor the integrity of systems and hosted applications.

Verdict: While AWS features like UEFI Secure Boot are better suited for the firmware portion, Config/SSM DO have the ability to detect changes to specific files. Output can also natively integrate with a multitude of notification and response mechanisms.

HIPAA (NIST 800-66 Rev. 1)

Control 4.16 Integrity

HIPAA Standard: Implement policies and procedures to protect electronic protected health information from improper alteration or destruction.

1. Identify All Users Who Have Been Authorized to Access ePHI
  • Identify all approved users with the ability to alter or destroy data, if reasonable and appropriate.
  • Address this Key Activity in conjunction with the identification of unauthorized sources in Key Activity 2, below.
2. Identify Any Possible Unauthorized Sources that May Be Able to Intercept the Information and Modify It
  • Identify scenarios that may result in modification to the EPHI by unauthorized sources (e.g., hackers, disgruntled employees, business competitors).
  • Conduct this activity as part of your risk analysis.
3. Develop the Integrity Policy and Requirements
  • Establish a formal (written) set of integrity requirements based on the results of the analysis completed in the previous steps.
4. Implement Procedures to Address These Requirements
  • Identify and implement methods that will be used to protect the information from modification.
  • Identify and implement tools and techniques to be developed or procured that support the assurance of Integrity.
5. Implement a Mechanism to Authenticate EPHI Implementation Specification (Addressable)
  • Implement electronic mechanisms to corroborate that EPHI has not been altered or destroyed in an unauthorized manner.
  • Consider possible electronic mechanisms for authentication such as:
    • Error-correcting memory
    • Magnetic disk storage
    • Digital signatures
    • Checksum technology.
6. Establish a Monitoring Process To Assess How the Implemented Process Is Working
  • Review existing processes to determine if objectives are being addressed.
  • Reassess integrity processes continually as technology and operational environments change to determine if they need to be revised.

Verdict: This control refers to ePHI itself, so it will most likely only apply to static ePHI stored on IaaS systems. In general this isn’t a best practice (S3 and its native controls would be a better option), but there could be use cases including legacy technology. For those situations AWS Config/SSM should at least be able to satisfy point 4 for enforcing integrity requirements.

Conclusions

With basic settings, the AWS Config/SSM combination does seem to satisfy the letter of FIM requirements for the selected frameworks. There are some quirks, however: missing metadata, clunky raw data, and inconsistent visualization. In part 2 of this blog series I plan to explore whether this combination of services can be optimized into a more complete solution for satisfying both compliance and security needs.