VPC Service Controls in Plain English

Google Cloud Platform (GCP) offers a powerful security control called VPC Service Controls to mitigate API-based data exfiltration. To execute a successful and secure cloud architecture with VPC Service Controls, it is important to understand exactly how they work. Based on collaborative discussions with colleagues and customers, this article aims to explain VPC Service Controls to technical and non-technical stakeholders in a more approachable fashion than Google’s documentation.

What Are VPC Service Controls?

Let’s start with a summary description of what VPC Service Controls are and use that as a platform to dive deeper into configuration components.

VPC Service Controls are a technical security control in GCP that allows administrators to configure groupings of GCP projects and services called Service Perimeters. By default, a Service Perimeter configuration restricts those GCP projects’ services from being accessed from outside of the Service Perimeter (ingress) via Google APIs (e.g., storage.googleapis.com).
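As a concrete sketch, a perimeter is defined with the gcloud access-context-manager command group. The perimeter name, project number, and access policy ID below are placeholder assumptions, not values from this article:

```shell
# Hypothetical sketch: create a Service Perimeter that restricts the
# Cloud Storage API for a single project. The perimeter name, project
# number, and access policy ID are placeholders.
gcloud access-context-manager perimeters create demo_perimeter \
  --title="Demo Perimeter" \
  --resources=projects/123456789012 \
  --restricted-services=storage.googleapis.com \
  --policy=987654321
```

Once applied, calls to storage.googleapis.com referencing resources in that project are denied unless they originate inside the perimeter or match a configured ingress rule.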

Service-to-service communication between restricted services in GCP projects defined as part of a Service Perimeter is allowed. However, by default, those same services cannot make API requests to resources outside of the Service Perimeter (egress).

Those default restrictions protect resources from data exfiltration at scale and mitigate:

  • IAM misconfigurations for resources within the Service Perimeter
  • Lost or stolen credentials from being used to exfiltrate information from within Service Perimeters
  • Compromised applications from sending data outside of the defined ingress, egress, and context-aware rules

VPC Service Controls are meant to prevent bulk data exfiltration and should be part of a larger comprehensive Data Loss Prevention (DLP) strategy when using GCP.

Additionally, VPC Service Controls do not have discrete controls for access to resource metadata (e.g., VM instance metadata). Access to metadata should continue to be managed using IAM.

The last mitigation point referenced ingress and egress rules. These are rules that can be configured to allow access to and from resources within Service Perimeters.

Since VPC Service Controls only restrict access to services via Google’s APIs, a comprehensive private and secure configuration also requires Private Google Access configured on subnetworks in a VPC, firewall rules for OSI Layer 4 network protection, and IAM best practices.

The VPC Service Controls Description Dilemma

Something that can be hard to discern when learning about VPC Service Controls is what the control explicitly restricts, and which other security controls complement VPC Service Controls to achieve a complete solution that matches an organization’s business and technical requirements.

Unfortunately, untangling this using Google’s literature has been a challenge. The level of focus and detail in official Google literature varies quite a bit. This has left interested parties confused about what VPC Service Controls are, depending on the reader’s technical background, past professional experience, and assumptions and biases. Let’s look at some examples to see exactly what I mean.

VPC Service Controls Product Description

Initial research into VPC Service Controls would likely first lead you to the product page which, at the time of writing, uses the summary description: Managed networking functionality for your Google Cloud resources.


The term Managed networking functionality can be interpreted as very expansive at face value and can lead to technical discussions across the different layers of the OSI model, primarily Layers 3 (Network), 4 (Transport), and 7 (Application).

While a comprehensive networking discussion is required for a complete solution, VPC Service Controls alone do not deliver this, and the misunderstanding can confuse and distract from achieving a successful implementation of VPC Service Controls.

VPC Service Controls Concept Overview

The product documentation offers a concept overview that describes VPC Service Controls as (emphasis on services is mine): VPC Service Controls improves your ability to mitigate the risk of data exfiltration from Google Cloud services such as _Cloud Storage_ and _BigQuery_. You can use VPC Service Controls to create perimeters that protect the resources and data of services that you explicitly specify.


This provides a logical description of how VPC Service Controls work and goes a layer deeper by explaining that VPC Service Controls use perimeters to define what resources and data need to be protected.

The emphasis on Cloud Storage and BigQuery is important since those are managed services that do not require attachment to a VPC network to consume. This means that data contained within those GCP services is accessible only through Google APIs. There are other GCP compute services, like Compute Engine, which can provide means of interfacing with application data that do not go through Google APIs; as a result, VPC Service Controls alone would not act as a control for that access.

One-Line Description

Strangely, the one-line description of VPC Service Controls in the console captures what VPC Service Controls do while describing a perimeter’s function as touched upon in the concept overview: VPC Service Perimeters function like a firewall for GCP APIs. Choose which projects you wish to be part of the perimeter and which services you want to be protected by it.



Let’s expand upon the description of VPC Service Controls provided at the beginning of this article and go deeper into technical and architectural considerations.

Where are VPC Service Controls configured?

VPC Service Controls are configured at the organization level. The core component of VPC Service Controls is a Service Perimeter. A Service Perimeter definition consists of:

  • GCP Project(s)
  • GCP Service API(s)

Grouping GCP project(s) and service API(s) in a Service Perimeter restricts unauthorized access from outside the Service Perimeter to service API endpoints referencing resources inside of it. Below is an example of what an error would look like when trying to access a resource restricted by VPC Service Controls.

(Screenshot: an error returned when attempting to access a restricted resource)
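For illustration, here is roughly what that failure looks like from the command line; the bucket name is a placeholder and the exact message may vary by service and tool:

```shell
# Hypothetical sketch: listing a bucket in a project protected by a
# Service Perimeter, from a machine outside of the perimeter.
gsutil ls gs://protected-example-bucket/
# A typical failure resembles:
# AccessDeniedException: 403 Request is prohibited by organization's policy.
```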

Conversely, the Service Perimeter also prevents communication to any destination outside of the perimeter by default.

VPC Service Controls have a general configuration theme of "You can check in, but you can't check out." This will be expanded upon when we talk about the different types of controls VPC Service Controls provide, primarily ingress and egress rules.

What do VPC Service Controls restrict?

VPC Service Controls restrict API requests (*.googleapis.com) at the application layer (OSI Layer 7) to the service and project grouping defined in a Service Perimeter.

A VPC is often discussed in the context of traditional networking constructs, consisting of subnets, routing and so on. Therefore, we emphasize API requests which exist at a higher level in the OSI model. These are two distinct and different means for securing connectivity and access between two or more points.

There are additional controls, described later in this article, inherent to VPC Service Controls that use network CIDR blocks defined in an access level assigned to ingress rules for crossing a Service Perimeter. This level of granularity is appropriate when protecting at the application layer of the OSI model (Layer 7) and does not require more specific facilities, like selecting ports and protocols at OSI Layer 4, typically provided for VPC networking configurations.

Additionally, as alluded to earlier in the article, there are GCP resources that can serve data both through the Google APIs and through services published from compute services such as Compute Engine and Kubernetes Engine. _VPC Service Controls do not restrict access to managed compute services at OSI Layers 3 and 4._ Additional configuration is required to make the VPC subnet private and to allow access to only supported services.

What are common access patterns to GCP services?

Good question. This is the access breakdown when planning and designing VPC Service Controls:

  • GCP Service to GCP Service
    • Within a Service Perimeter
    • Across a Service Perimeter (between authorized and unauthorized resources/projects)
  • VM to GCP Service
    • Within a Service Perimeter
    • Across a Service Perimeter (between authorized and unauthorized resources/projects)
  • Access to resources outside of GCP
    • Access from the Internet
    • Access from private networks
      • Subnetworks configured with Private Google Access
      • Private connections using services like Cloud Interconnect or Cloud VPN

Can you configure access to resources restricted by VPC Service Controls?

Yes. There are four controls applied to Service Perimeters to manage access to restricted resources protected by VPC Service Controls:

  1. Perimeter bridges
  2. Ingress rules
  3. Egress rules
  4. Access levels

All of these access configuration items are related to Access Context Manager. This is relevant from a technical perspective, since gcloud commands that configure these Service Perimeter components use gcloud access-context-manager as the command-line prefix.

A perimeter bridge allows projects in different Service Perimeters to communicate. Perimeter bridges are bidirectional, granting projects from each Service Perimeter equal access within the scope of the bridge. The bidirectionality is important because, before the introduction of ingress and egress rules, bridges were the primary means of allowing egress between Service Perimeters.

Ingress and egress rules specify what access is allowed to and from different identities and resources. Ingress and egress rules are preferred over perimeter bridges and access levels. Where perimeter bridges were once required to configure bidirectional communication between different Service Perimeters in the same GCP organization, a combination of ingress and egress rules can now accomplish this and generally results in a simpler configuration.

An important detail regarding ingress and egress rules is that ingress rules can use access levels, which allow an administrator to define access by IP subnetworks (CIDR ranges), allowed regions, and access level dependencies. Egress rules can only target other GCP resources and projects and do not use subnetwork definitions.
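As a hedged sketch, an ingress rule allowing a specific identity to reach BigQuery inside a perimeter might look like the following; the identity, access level, policy ID, and perimeter name are all placeholder assumptions:

```shell
# Hypothetical sketch: permit an analyst coming from a known access level
# to call BigQuery inside the perimeter. All names and IDs are placeholders.
cat > ingress.yaml <<'EOF'
- ingressFrom:
    identities:
    - user:analyst@example.com
    sources:
    - accessLevel: accessPolicies/987654321/accessLevels/corp_network
  ingressTo:
    operations:
    - serviceName: bigquery.googleapis.com
      methodSelectors:
      - method: "*"
    resources:
    - "*"
EOF

# Attach the ingress policy to the perimeter.
gcloud access-context-manager perimeters update demo_perimeter \
  --set-ingress-policies=ingress.yaml \
  --policy=987654321
```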

Access levels enable context-aware classification of requests based on several attributes, such as:

  • Source CIDR IPv4 and/or IPv6 network range(s)
  • Client device information
  • Allowed GCP regions
  • Dependencies on other access levels

A Service Perimeter can attach access levels to ingress rules, granting API access to restricted services from the Internet (such as trusted third-party partners with known static IPs) or from private networks (such as via Cloud VPN or Cloud Interconnect) based on the access level associated with a request.
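A minimal sketch of such an access level, assuming a partner with the placeholder static range 203.0.113.0/24 and the placeholder policy ID used earlier:

```shell
# Hypothetical sketch: an access level matching a trusted partner's
# static IP range, later attachable to a perimeter's ingress rules.
cat > level.yaml <<'EOF'
- ipSubnetworks:
  - 203.0.113.0/24
EOF

gcloud access-context-manager levels create partner_ips \
  --title="Partner Static IPs" \
  --basic-level-spec=level.yaml \
  --policy=987654321
```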

Access levels configured as part of a Service Perimeter only affect ingress to restricted services and can be replaced by ingress rules, which provide a finer-grained configuration facility for permitting ingress to restricted services.

Can you control access to services from a VPC network within a Service Perimeter?

Yes, you can. The term used for this is VPC accessible services. VPC accessible services can be taken at face value as a control that limits which compute endpoints within the Service Perimeter are allowed to access restricted services inside of the Service Perimeter.

As an example, if you had a GCP project in a Service Perimeter that was configured to restrict the following services:

  • Google Compute Engine
  • BigQuery
  • Google Cloud Storage

Suppose the Compute Engine instances only need to interact with Cloud Storage from inside of the perimeter and do not need access to BigQuery. This configuration is possible using VPC accessible services.
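A sketch of that configuration, assuming the placeholder perimeter and policy IDs used in earlier examples:

```shell
# Hypothetical sketch: allow VPC networks inside the perimeter to reach
# only Cloud Storage, even though BigQuery is also a restricted service.
gcloud access-context-manager perimeters update demo_perimeter \
  --enable-vpc-accessible-services \
  --add-vpc-allowed-services=storage.googleapis.com \
  --policy=987654321
```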

If VPC Service Controls only restrict access to Google APIs, how do you further harden resources that are part of the Service Perimeter?

Good question. In short, Private Google Access combined with VPC Service Controls and VPC accessible services. The level of configuration rigor varies between VPC networking and VPC networking integrated with an on-premises network (either via Cloud VPN or Cloud Interconnect), but there are common configurations required for either setup:

  • A DNS entry pointing *.googleapis.com to restricted.googleapis.com, which resolves to 199.36.153.4/30; that CIDR block is a set of public IPs that is not advertised to the Internet
  • A custom route to advertise 199.36.153.4/30
  • The Service Perimeter configured with the appropriate private CIDRs using an access level as part of the ingress policy for the Service Perimeter
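The bullets above can be sketched with gcloud; the zone, network, and route names below are placeholder assumptions:

```shell
# Hypothetical sketch: a private DNS zone that points *.googleapis.com at
# restricted.googleapis.com, plus a route for the restricted VIP range.
gcloud dns managed-zones create googleapis-zone \
  --dns-name=googleapis.com. \
  --visibility=private \
  --networks=my-vpc \
  --description="Override googleapis.com with the restricted VIP"

# The restricted VIP resolves to the four addresses in 199.36.153.4/30.
gcloud dns record-sets create restricted.googleapis.com. \
  --zone=googleapis-zone --type=A --ttl=300 \
  --rrdatas=199.36.153.4,199.36.153.5,199.36.153.6,199.36.153.7

# Send all Google API hostnames to the restricted VIP.
gcloud dns record-sets create "*.googleapis.com." \
  --zone=googleapis-zone --type=CNAME --ttl=300 \
  --rrdatas=restricted.googleapis.com.

# Route the restricted VIP range out of the VPC.
gcloud compute routes create restricted-vip-route \
  --network=my-vpc \
  --destination-range=199.36.153.4/30 \
  --next-hop-gateway=default-internet-gateway
```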

Is VPC Service Controls supported across all GCP products?

No. Google maintains documentation explaining which products are supported, as well as which services are supported by the restricted VIP. This is an important research step to ensure you’re not violating any SLO/SLA and/or support requirements.

Does Google provide any implementation examples to go a level deeper into the technical details?

Yes. Google has published several implementation guides. The Secure data exchange with ingress and egress rules guide is highly recommended, as it is a very rigorous document capturing implementation details for ingress and egress rules applied to a Service Perimeter.

Are there any additional ScaleSec articles and media regarding VPC Service Controls?

Yes! ScaleSec’s Ilan Ponimansky recently gave a talk at fwd:cloudsec that both introduces VPC Service Controls and provides a robust technical walkthrough of implementing VPC Service Controls for a multi-tiered application.

ScaleSec's Ilan Ponimansky talking at fwd:cloudsec

Additionally, ScaleSec’s Cameron McCloud offers an example of this in conjunction with Cloud DLP as part of a data loss prevention implementation.

Data Loss Prevention Services in Google Cloud


The information presented in this article is accurate as of 9/30/2021. Follow the ScaleSec blog for new articles and updates.

About ScaleSec

ScaleSec is a service-disabled, veteran-owned small business (SDVOSB) for cloud security and compliance that helps innovators meet the requirements of their most scrutinizing customers. We specialize in cloud security engineering and cloud compliance. Our team of experts guides customers through complex cloud security challenges, from foundations to implementation, audit preparation and beyond.

Get in touch!
