Jason Dyke · Jun 11, 2020 · 7 min read

Inventory Your GCP API Keys

Inventory, analyze, and report on your GCP API keys in an automated fashion

API keys in Google Cloud Platform (GCP) are a form of authentication and authorization that can be used when calling specific API endpoints in the cloud. These keys are tied directly to GCP projects and are therefore considered less secure than OAuth 2.0 client credentials or service account user-managed keys. In a secure cloud environment, all assets and resources should be monitored for when they are created, updated or deleted. This makes sensitive credentials like API keys especially important to track. Unfortunately, GCP does not currently support a native way to programmatically inventory API keys across an entire GCP Organization. In this blog, we will break down the tools that are used to inventory API keys in Google Cloud environments.

TL;DR: The script is located here. Download it locally, export your GCP credentials, set up your virtual environment, and execute.

Tracking API Keys in GCP

Google Cloud Platform has a service called Security Command Center (SCC) which can perform actions such as tracking your assets, identifying sensitive data in your GCP environment, and aggregating security findings from 3rd party applications and GCP services in a single pane-of-glass view. Inside of the SCC is a built-in feature called Security Health Analytics (SHA) which monitors your environment for misconfigurations such as missing 2-factor authentication on users, publicly exposed resources, insecure firewall rules, and many NIST, ISO, and CIS findings. Additionally, SHA has four different findings based on API keys that Organizations can use to track which projects have keys, whether those keys are unrestricted (allowing untrusted apps to leverage them), and whether they need to be rotated.

Category: Finding description

  • API_KEY_APIS_UNRESTRICTED: There are API keys being used too broadly. To resolve this, limit the API key usage to allow only the APIs needed by the application.
  • API_KEY_APPS_UNRESTRICTED: There are API keys being used in an unrestricted way, allowing use by any untrusted app.
  • API_KEY_EXISTS: A project is using API keys instead of standard authentication.
  • API_KEY_NOT_ROTATED: The API key hasn't been rotated for more than 90 days.

Reference: Security Health Analytics API key findings

SHA does not provide in-depth metadata about the API keys spread across your GCP Organization, which leaves blind spots when it comes to credential inventorying. In order to fill in the gaps of SHA and the SCC assets inventory, we have created a script that can be executed to search across your entire GCP resource hierarchy and inventory your API keys and their metadata. As an added bonus, we also included the functionality to create a CSV file that can be loaded into Google Sheets or another spreadsheet application for reporting and analytics purposes.

Inventorying Your GCP API Keys

In order to inventory the API keys across your entire GCP Organization you will need the API Keys Viewer and Organization Administrator roles on the Organization layer. It may be possible to create a custom Cloud IAM role with only the specific permissions needed but that is out of scope for this blog. The most straightforward way to set up your credentials is to create a service account (SA), assign those two roles, and download a set of user-managed SA keys in JSON format. Be aware that these keys need to be stored securely as they are your credentials. Once you have the SA keys JSON file, export the credentials using the below command, making sure to substitute in your file location and filename.

export GOOGLE_APPLICATION_CREDENTIALS=~/.ssh/secretCredentials.json

Clone the repository locally:

git clone git@github.com:ScaleSec/gcp_api_key_inventory.git

With your credentials configured and the repository cloned, you are ready to set up your virtual environment and download your library dependencies.

Create your virtual environment (you may use any name you’d like):

python3 -m venv gcp_api_key_inventory

Activate the environment:

source gcp_api_key_inventory/bin/activate

Install library dependencies:

pip3 install -r requirements.txt

Execute the inventory script:

python3 apiInventory.py

After you execute the script, it will create two files locally:

  • key_dump.json
  • keys.csv

The key_dump.json file contains responses in JSON format from the API calls. This file is helpful if you have experience using jq and want to manipulate the data to view specific findings.
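If you would rather stay in Python than reach for jq, a minimal sketch along these lines works on the dump's nested list-of-lists structure. The sample data and its field names (such as displayName) are illustrative assumptions about your key metadata, not guaranteed response fields:

```python
import json

def key_names(key_dump):
    """Pull one field from every key in the nested key dump structure."""
    # key_dump.json holds a list of per-project lists of key objects
    return [key.get("displayName") for project_keys in key_dump for key in project_keys]

# Hypothetical sample mirroring the dump's nesting; normally you would
# load the real file with: key_dump = json.load(open("key_dump.json"))
sample = [
    [{"displayName": "web-key"}, {"displayName": "mobile-key"}],
    [{"displayName": "test-key"}],
]
print(key_names(sample))  # -> ['web-key', 'mobile-key', 'test-key']
```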

The keys.csv file contains the CSV-formatted data that you can upload into Google Sheets or open with a spreadsheet application of your choice to create charts, pivot tables, or any other reporting widgets.
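You can also get quick numbers straight from the standard library before opening a spreadsheet. This sketch counts keys per project; the column names in the sample rows are hypothetical, so adjust them to match the actual header row of your keys.csv:

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical keys.csv content; in practice, replace StringIO(...) with
# open("keys.csv") and use your real column name in place of projectId.
sample_csv = "projectId,displayName\nproj-a,web-key\nproj-a,mobile-key\nproj-b,test-key\n"

counts = Counter(row["projectId"] for row in csv.DictReader(StringIO(sample_csv)))
print(dict(counts))  # -> {'proj-a': 2, 'proj-b': 1}
```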

Code Breakdown

Below you will find a breakdown of the functions inside the API keys inventory script.

main

This is the main function. It initiates the API key inventory, writes the raw key dump to a JSON file, and then flattens the key data and writes it out as a CSV file.

def main():
    project_keys = get_keys()
    # Writes our keys into a file
    with open('key_dump.json', 'w') as f:
        formatted = json.dumps(project_keys, indent=4, sort_keys=True)
        f.write(formatted)
    flat_keys = get_flattened_keys(project_keys)
    write_csv(flat_keys)

create_service

This function creates a client to interact with the GCP Cloud Resource Manager API endpoint.

def create_service():
    return googleapiclient.discovery.build('cloudresourcemanager', 'v1')

create_token

This function executes the gcloud command gcloud auth print-access-token to generate a bearer token for the function get_keys to use in its HTTP GET requests.

def create_token():
    access_token = subprocess.run('gcloud auth print-access-token', shell=True, check=True, stdout=subprocess.PIPE, universal_newlines=True)
    token = access_token.stdout
    return token

get_keys

This function builds the project list, iterates through it, and makes a GET request to the API keys endpoint for each project. Based on the response, it either ignores 403 permission errors or appends the response to an array for later processing.

def get_keys():
    access_token = create_token()
    service = create_service()
    # List available projects
    request = service.projects().list()
    response = request.execute()
    # This variable is used to hold our key JSON objects before we write to key_dump.json
    content = []
    # For each project, extract the project ID
    for project in response.get('projects', []):
        project_id = project['projectId']
        # Use the project ID and access token to find the API keys for each project
        keys = requests.get(
            f'https://apikeys.googleapis.com/v1/projects/{project_id}/apiKeys',
            params={'access_token': access_token}
        ).json()
        # Write Keys to a file for conversion
        if "error" not in keys:  # Removes 403 permission errors from returning
            if keys:
                print(f"API Key found in project ID {project_id}.")
                content.append(keys['keys'])
            else:
                print(f"Project ID {project_id} has no API Keys.")
    return content

massage

Converting JSON to CSV can create odd column values when dealing with nested objects, so this function formats the JSON key data for a cleaner output.

def massage(key_data):
    if key_data.get("apiTargetKeyDetails"):
        targets = []
        for target in key_data["apiTargetKeyDetails"]["apiTargets"].keys():
            targets.append(target)
        key_data["apiTargetKeyDetails"] = ",".join(targets)
    return key_data

get_flattened_keys

As the name suggests, this function flattens the nested JSON arrays into more manageable and iterable dictionaries.

def get_flattened_keys(projects):
    flat_keys = []
    for project in projects:
        for key_data in project:
            massage(key_data)
            flat_keys.append(flatten(key_data))
    return flat_keys
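The flatten helper comes from the script's library dependencies rather than the standard library. To illustrate the idea, here is a minimal stand-in (not the library's actual implementation) that collapses nested dictionaries into a single level by joining key paths with underscores:

```python
def flatten(nested, parent_key="", sep="_"):
    """Minimal stand-in for the flatten helper: collapse nested dicts into one level."""
    flat = {}
    for key, value in nested.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse into nested dicts, carrying the joined key path down
            flat.update(flatten(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat

# Hypothetical nested key record
print(flatten({"displayName": "web-key", "createTime": {"seconds": 1591833600}}))
# -> {'displayName': 'web-key', 'createTime_seconds': 1591833600}
```

Flattening matters here because csv.writer can only emit scalar cell values; one level of keys maps cleanly onto one row of columns.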

unique_headers

This function creates unique column headers for the CSV file.

def unique_headers(keys):
    headers = {}
    for key_data in keys:
        for field in key_data:
            headers[field] = True
    return headers.keys()
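This works because Python dictionaries preserve insertion order (guaranteed since Python 3.7), so the dict doubles as an ordered set: each field appears once, in the order it was first seen across all keys. A quick demonstration with hypothetical rows:

```python
def unique_headers(keys):
    # Dict keys act as an insertion-ordered set of column names
    headers = {}
    for key_data in keys:
        for field in key_data:
            headers[field] = True
    return headers.keys()

rows = [
    {"displayName": "a", "createTime": "t1"},
    {"displayName": "b", "restrictions": "none"},
]
print(list(unique_headers(rows)))  # -> ['displayName', 'createTime', 'restrictions']
```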

write_csv

This is the CSV writing function that takes the previously generated CSV header (typically row 1 in CSV files) and the API key data and constructs the CSV file.

def write_csv(api_keys):
    headers = unique_headers(api_keys)
    with open('keys.csv', 'w') as key_csv:
        csv_writer = csv.writer(key_csv)
        csv_writer.writerow(headers)
        # Create our CSV rows using our API Key data.
        for key in api_keys:
            current_row = []
            for field in headers:
                current_row.append(key.get(field))
            csv_writer.writerow(current_row)

Disclaimer

The code in this repository is considered a proof of concept that leverages an undocumented API endpoint. GCP is not required to communicate updates to this API and may deploy code-breaking changes at any moment. By voluntarily using any code displayed within this repository, you assume the risk of any resulting damages that may occur. ScaleSec is not liable for any damages. Use at your own risk.

Conclusion

The API keys inventory script detailed in this blog will scan your entire GCP Organization for API keys and export the findings into two files: key_dump.json and keys.csv. Using these files you can create reports and perform analytics to better track your cloud credentials. This code is open source, so feel free to fork, create a PR, or open an issue if you have errors or would like to see improvements.

Thank you

A special shoutout to @Spencer Gietzen and the Rhino Security Labs team for their GCP Privilege Escalation blog post. Additionally, thanks to John Porter and Anthony DiMarco for their Python guidance and Eric Evans for his editor contributions.


The information presented in this article is accurate as of 7/19/23. Follow the ScaleSec blog for new articles and updates.