Anyone who has gone through the compliance accreditation process knows there are often technical requirements that are not yet covered by the organization’s security practices. Even for mature security programs, controls such as File Integrity Monitoring (explicitly required in PCI-DSS, and at least implicitly required in NIST-backed compliance frameworks like FISMA and FedRAMP) can cause head-scratching moments about how best to satisfy emergent requirements.
Additionally, the Cloud presents new challenges in balancing the Shared Responsibility Model, Cloud Service Provider (CSP) services, and new third-party paid or open source tools, among other considerations. While every organization should make those decisions based on its own culture and objectives, in general it is better to get the most out of tools that are already in use, for the best cost and usability outcomes.
While researching a cloud-native method for addressing File Integrity Monitoring (FIM) requirements in AWS, I came across an AWS announcement from July 2020 that states:
AWS Systems Manager [SSM] now integrates with AWS Config to track configuration changes to inventory files on managed instances collected by AWS Systems Manager Inventory.
While this announcement doesn’t mention FIM directly, AWS Config and the Swiss Army Knife that is SSM are utilized in so many organizations that I was curious to see if this could be a viable way to satisfy the control. I split my research into two parts: this post covers the baseline integration and how it maps to common compliance frameworks, and part 2 will explore optimizing the solution.
File Integrity Monitoring (FIM) broadly refers to a process that monitors designated files (examples include OS, data, configuration, and application files) for alterations that may be indicative of a cyber attack. This is typically accomplished by utilizing an agent-based or agentless solution to create a baseline of files and compare changes to file hash digests over time. While this seems fairly straightforward, there are various challenges to implementing FIM effectively.
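The baseline-and-compare mechanism described above can be sketched in a few lines. This is a minimal illustration of the concept, not a production FIM tool (real solutions add scheduling, tamper-resistant baseline storage, and alerting); the file names are stand-ins for a monitored directory like /etc:

```python
import hashlib
import os
import tempfile

def baseline(paths):
    """Map each file path to the SHA-256 hex digest of its contents."""
    digests = {}
    for path in paths:
        with open(path, "rb") as f:
            digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

def compare(old, new):
    """Classify deltas between two baselines: additions, deletions, modifications."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    modified = sorted(p for p in set(old) & set(new) if old[p] != new[p])
    return added, removed, modified

# Demo against a throwaway directory standing in for a monitored path.
tmp = tempfile.mkdtemp()
conf = os.path.join(tmp, "host.conf")
with open(conf, "w") as f:
    f.write("multi on\n")
old = baseline([conf])
with open(conf, "a") as f:
    f.write("order hosts,bind\n")       # simulate tampering/modification
new = baseline([conf])
added, removed, modified = compare(old, new)
print(modified)  # the modified file is flagged because its digest changed
```

Because the comparison is digest-based rather than name-based, in-place edits are caught even when file names and sizes are unchanged.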
The Cloud offers new considerations, with managed services, API calls, and metadata (among other things) becoming integral parts of any FIM strategy. For example, AWS CloudTrail and S3 Server Side Logging can be utilized to record actions taken on S3 objects. S3 also stores a checksum/digest for each object in its metadata, surfaced under the x-amz-checksum-xxx field.
Additional features like Versioning can also be enabled to restore previous object versions in case of tampering or corruption. A combination of these features can typically satisfy FIM requirements for audit logs and other protected files stored in S3. The gap for FIM capabilities in the Cloud is typically with more Infrastructure-as-a-Service (IaaS) offerings such as EC2 due to the lack of visibility into the system runtime. I set out to explore whether the AWS Config/SSM solution could fill that gap.
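To verify an S3 object against that metadata, you can recompute the checksum locally and compare it to the value S3 reports. One assumption worth stating: for the SHA-256 additional-checksum algorithm, S3 reports the base64-encoded digest (not the hex string), so the local computation has to match that encoding. A sketch:

```python
import base64
import hashlib

def s3_sha256_checksum(data: bytes) -> str:
    """Compute the value S3 reports for a SHA-256 additional checksum:
    the base64-encoded (not hex) SHA-256 digest of the object bytes."""
    return base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")

# After downloading an object, recompute locally and compare with the
# ChecksumSHA256 value returned by HeadObject / GetObjectAttributes.
body = b"audit log line 1\n"
local = s3_sha256_checksum(body)
print(local)
```

If the locally computed value differs from what S3 returns, the object was altered or corrupted in transit or at rest.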
AWS Config is a fully managed service for tracking changes to resources and resource configurations for security and compliance governance purposes. Standard configurations involve gleaning metadata from resources, compiling information into a standard format, and then comparing against internal rules and storing configuration output in an S3 bucket. AWS Config does have a gap with IaaS services such as EC2 (or on-premises infrastructure), but can rely on the SSM Agent to collect that data. AWS Systems Manager (SSM) offers a full suite of tools for security and fleet management, but the specific service involved in file tracking is Systems Manager Inventory.
The gap with Systems Manager Inventory is that it offers a current snapshot of the monitored file systems, but isn’t suited to tracking changes over time. That is where AWS Config comes in. Putting it all together: the SSM agent continuously collects file metadata on its installed system, and sends this data to both the Systems Manager Inventory service for current state visualization and AWS Config for tracking changes over time.
The announcement for the Systems Manager/Config integration gives the following setup instructions:
To get started, enable AWS Config in your AWS account. Then, select SSM:FileData from the AWS Config resource types. If you previously configured AWS Config to record all resource types, then AWS Systems Manager file data is tracked automatically.
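If the recorder is managed via the CLI or API rather than the console, the equivalent of "selecting the resource type" is including it in the recording group passed to put-configuration-recorder. A sketch of that payload, assuming a recorder that tracks only this resource type rather than all supported types:

```python
import json

# Recording-group payload for `aws configservice put-configuration-recorder`
# when recording only the SSM file data resource type. The recorder name
# and IAM role ARN (supplied separately) are placeholders.
recording_group = {
    "allSupported": False,               # don't record every resource type
    "includeGlobalResourceTypes": False,
    "resourceTypes": ["AWS::SSM::FileData"],
}
print(json.dumps(recording_group, indent=2))
```

Accounts that already record all supported resource types can skip this: as the announcement notes, file data is then tracked automatically.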
Many AWS users may already be monitoring files on their instances. If you are not utilizing one or both of these services, setting them up from scratch is outside of the scope of this blog post. The public documentation should be referenced for configuration instructions for AWS Config, the SSM Agent, and SSM Inventory.
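For reference, the piece of SSM Inventory setup that controls which files get collected is the files parameter on the inventory association (the AWS-GatherSoftwareInventory document), which SSM expects as a JSON string. A sketch matching the directory monitored in this POC; the exact path and pattern are choices for your environment:

```python
import json

# `files` parameter for an SSM Inventory association, telling the agent
# to collect metadata for everything under /etc. SSM expects this value
# as a JSON-encoded string.
files_param = json.dumps([
    {"Path": "/etc", "Pattern": ["*"], "Recursive": True},
])
print(files_param)
```

Keeping the collection scope narrow (critical directories only) also keeps the resulting Config data manageable.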
Since I just wanted to perform a proof of concept, I followed the instructions in the announcement and tried to keep things as simple as possible. The following high-level steps were followed to perform the POC:

1. Enabled AWS Config recording for the SSM:FileData resource type. An SNS topic for notifications was also configured.
2. Configured SSM Inventory to collect file metadata from the etc directory.
3. Created a file named testing.txt in the etc directory.
4. Modified the existing host.conf file in the etc directory.
5. Deleted the testing.txt file in the etc directory.

Since AWS Config is shown as the destination mechanism, I started there first. The dashboard showed SSM FileData as a unique option in the Resource Inventory.
Drilling down on that further I was able to see my test instance.
Going a level of detail deeper on the specific resource showed multiple ways to visualize data: the ‘resource timeline’ and a JSON representation of the current configuration.
The Resource Timeline seemed to highlight deltas, with both the creation and deletion of my testing.txt file showing as a configuration change. What was not shown, however, was my modification of the host.conf configuration file. This feature seems to operate on file creation and deletion events, but not file change events.
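That behavior is consistent with the data Config receives: the AWS:File content entries only consistently populate the file name and directory, so a snapshot-to-snapshot diff can see entries appear or disappear but has nothing with which to distinguish a modified file from an unchanged one. A rough sketch of that diff logic, using entries shaped like the configuration items Config records (this is my reading of the observed behavior, not AWS's documented algorithm):

```python
def detect_changes(prev_content, curr_content):
    """Diff two AWS:File Content lists (each a list of one-key dicts
    mapping file name -> metadata). Only presence/absence of names is
    visible, so in-place edits go unseen."""
    prev_names = {name for entry in prev_content for name in entry}
    curr_names = {name for entry in curr_content for name in entry}
    return sorted(curr_names - prev_names), sorted(prev_names - curr_names)

prev = [
    {"host.conf": {"InstalledDir": "/etc"}},
    {"testing.txt": {"InstalledDir": "/etc"}},
]
# testing.txt was deleted; host.conf was modified in place, but its
# collected metadata (name and directory) is unchanged.
curr = [{"host.conf": {"InstalledDir": "/etc"}}]

created, deleted = detect_changes(prev, curr)
print(created, deleted)  # [] ['testing.txt'] -- the host.conf edit is invisible
```

Without a content hash in the collected metadata, modification detection would have to come from elsewhere.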
The View Configuration Item option showed a raw JSON representation for every tracked file. Some metadata fields were missing, but file name and directory were consistently populated. A truncated code sample is below:
{
"version": "1.3",
"accountId": "XXXXXXXXX",
"configurationItemCaptureTime": "XXXXXXXXX",
"configurationItemStatus": "OK",
"configurationStateId": "XXXXXXXXX",
"configurationItemMD5Hash": "",
"resourceType": "AWS::SSM::FileData",
"resourceId": "AWS::SSM::ManagedInstanceInventory/i-XXXXXXXX",
"awsRegion": "us-east-2",
"tags": {},
"relatedEvents": [],
"relationships": [
{
"resourceType": "AWS::SSM::ManagedInstanceInventory",
"resourceId": "i-XXXXXXXX",
"relationshipName": "Is associated with "
}
],
"configuration": {
"AWS:File": {
"SchemaVersion": "1.0",
"Content": [
{
".pwd.lock": {
"CompanyName": "",
"ProductLanguage": "",
"Description": "",
"ProductName": "",
"InstalledDate": "",
"FileVersion": "",
"InstalledDir": "/etc",
"ProductVersion": ""
}
},
{
"adduser.conf": {
"CompanyName": "",
"ProductLanguage": "",
"Description": "",
"ProductName": "",
"InstalledDate": "",
"FileVersion": "",
"InstalledDir": "/etc",
"ProductVersion": ""
}
},
....
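To work with that output programmatically (for example, to feed a report or SIEM export), the one-key dicts in the Content array need flattening. A sketch against a miniature configuration item modeled on the truncated sample above; account and instance IDs are placeholders and the structure should be verified against your own snapshots:

```python
import json

# Miniature configuration item modeled on the sample shown above.
item = json.loads("""
{
  "resourceType": "AWS::SSM::FileData",
  "configuration": {
    "AWS:File": {
      "SchemaVersion": "1.0",
      "Content": [
        {".pwd.lock":    {"InstalledDir": "/etc", "FileVersion": ""}},
        {"adduser.conf": {"InstalledDir": "/etc", "FileVersion": ""}}
      ]
    }
  }
}
""")

# Flatten the one-key dicts in Content into (directory, name) pairs.
files = sorted(
    (meta["InstalledDir"], name)
    for entry in item["configuration"]["AWS:File"]["Content"]
    for name, meta in entry.items()
)
print(files)
```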
AWS Config showed some helpful information, but also had some notable gaps in the form of missing metadata and a lack of tracking for the modification of existing files. I looked at the SSM Inventory output next to see if there was any difference.
Once I navigated to the Inventory section of Systems Manager, there was already some graphical visualization built in. I was able to drill into the File resource for more information.

Drilling in on my specific instance showed an Inventory option that listed all tracked files and select metadata.
I was not able to find the testing.txt file that I had created and deleted, which was expected since this is a point-in-time inventory. I was, however, able to see that the last modified date for the host.conf file aligned with the adjustments I made. As with the AWS Config data, a lot of metadata was still unpopulated.
Judging from the above output, here are my thoughts on various FIM control standards in select compliance frameworks. Please note that these are opinions and have not been explicitly used to pass an audit:
PCI defines FIM as:
A change-detection solution that checks for changes, additions, and deletions to critical files, and notifies when such changes are detected.
The AWS Config/SSM solution can monitor for file changes, additions, and deletions so I believe it fits this definition. Specific controls that address FIM in the PCI framework are as follows:
File integrity monitoring or change-detection mechanisms is used on audit logs to ensure that existing log data cannot be changed without generating alerts.
Verdict: Local logs that are stored on the system could theoretically be monitored to satisfy this control, but a best practice would be to export those logs and store them in a solution such as S3 with the appropriate controls. AWS Config/SSM should still satisfy the letter of this control on the systems themselves if needed.
A change-detection mechanism (for example, file integrity monitoring tools) is deployed as follows:
Verdict: I believe the Config/SSM solution DOES satisfy the letter of this requirement as long as ‘critical’ files are identified properly. I monitored the vital /etc directory for the POC, but PCI 4.0 provides the following supplemental guidance:
Examples of the types of files that should be monitored include, but are not limited to:
The notification piece can be handled via integrated notification services or by exporting to a SIEM that produces alerts. AWS Config snapshots seem to run at least daily by default, and SSM Inventory updates were almost instant.
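One plausible wiring for that notification piece is an EventBridge rule that matches Config configuration-item changes for the file-data resource type and routes them to an SNS topic or a SIEM ingestion target. The field names below follow the Config change-notification format, but treat the pattern as an assumption to verify against the events actually emitted in your account:

```python
import json

# EventBridge event pattern matching AWS Config configuration-item
# changes for SSM file data. Attach this pattern to a rule whose target
# is an SNS topic (or other alerting destination).
pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Configuration Item Change"],
    "detail": {
        "configurationItem": {"resourceType": ["AWS::SSM::FileData"]},
    },
}
print(json.dumps(pattern, indent=2))
```

This keeps alerting event-driven rather than depending on the daily snapshot cadence.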
NIST 800-53 is the baseline for a multitude of government compliance frameworks, including FISMA and its cloud equivalent FedRAMP. While there is no control that explicitly uses the term File Integrity Monitoring (it’s not uncommon for NIST documents to use their own nomenclature), control SI-7 seems to be describing FIM:
Supplemental Guidance: Unauthorized changes to software, firmware, and information can occur due to errors or malicious activity. Software includes operating systems (with key internal components, such as kernels or drivers), middleware, and applications. Firmware interfaces include Unified Extensible Firmware Interface (UEFI) and Basic Input/Output System (BIOS). Information includes personally identifiable information and metadata that contains security and privacy attributes associated with information. Integrity checking mechanisms—including parity checks, cyclic redundancy checks, cryptographic hashes, and associated tools—can automatically monitor the integrity of systems and hosted applications.
Verdict: While AWS features like UEFI Secure Boot are better suited for the firmware portion, Config/SSM DO have the ability to detect changes to specific files. Output can also natively integrate with a multitude of notification and response mechanisms.
HIPAA Standard: Implement policies and procedures to protect electronic protected health information from improper alteration or destruction.
1. Identify All Users Who Have Been Authorized to Access ePHI

Verdict: This control refers to ePHI itself, so it will most likely only apply to static ePHI stored on IaaS systems. In general this isn’t a best practice (S3 and its native controls would be a better option), but there could be use cases including legacy technology. For those situations AWS Config/SSM should at least be able to satisfy point 4 for enforcing integrity requirements.
With basic settings, the AWS Config/SSM combination does seem to satisfy the letter of FIM requirements for the selected frameworks. There do seem to be some quirks with regard to missing metadata, clunky raw data, and inconsistent visualization. In part 2 of this blog series I plan to explore whether this combination of services can be optimized into a more complete solution for satisfying both compliance and security needs.