Google Cloud Platform (GCP) offers a powerful security control for mitigating API-based data exfiltration: VPC Service Controls. To execute a successful and secure cloud architecture with VPC Service Controls, it is important to understand exactly how they work. This article aims to inform technical and non-technical stakeholders in an organization in a way that differs from Google's documentation, based on collaborative discussions with colleagues and customers.
Let’s start with a summary description of what VPC Service Controls are and use that as a platform to dive deeper into configuration components.
VPC Service Controls are a technical security control in GCP that allows administrators to configure groupings of GCP projects and services called Service Perimeters. By default, a Service Perimeter restricts those GCP projects' services from being accessed from outside the Service Perimeter (ingress) via Google APIs (e.g., storage.googleapis.com).
Service-to-service communication between restricted services in GCP projects defined as part of a Service Perimeter is allowed. However, by default, those same services cannot make API requests to resources outside of the Service Perimeter (egress).
Those default restrictions protect resources from data exfiltration at scale and mitigate:
VPC Service Controls are meant to prevent bulk data exfiltration and should be part of a larger comprehensive Data Loss Prevention (DLP) strategy when using GCP.
Additionally, VPC Service Controls do not have discrete controls for access to resource metadata (e.g., VM instance metadata). Access to metadata should continue to be managed using IAM.
The last mitigation point referenced ingress and egress rules. These are rules that can be configured to allow access to and from resources within Service Perimeters.
Since VPC Service Controls only restrict access to services via Google's APIs, you'll need to combine them with Private Google Access configured on VPC subnetworks, firewall rules for OSI Layer 4 network protection, and IAM best practices for a comprehensive private and secure configuration.
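As an illustration of the Layer 4 piece, here is a minimal sketch of VPC firewall rules that deny general egress while allowing HTTPS egress to the restricted Google APIs range; the network name and priorities are placeholders:

```bash
# Default-deny egress from the VPC.
gcloud compute firewall-rules create deny-all-egress \
  --network=example-vpc --direction=EGRESS --action=DENY --rules=all \
  --destination-ranges=0.0.0.0/0 --priority=65534

# Allow HTTPS egress only to the restricted VIP range (199.36.153.4/30)
# used for Google API access under VPC Service Controls.
gcloud compute firewall-rules create allow-restricted-googleapis \
  --network=example-vpc --direction=EGRESS --action=ALLOW --rules=tcp:443 \
  --destination-ranges=199.36.153.4/30 --priority=1000
```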
Something that can be hard to discern when learning about VPC Service Controls is what the control explicitly restricts and which other security controls complement VPC Service Controls to achieve a complete solution that matches an organization's business and technical requirements.
Unfortunately, untangling this from Google's literature has been a challenge. The level of focus and detail in the official material varies quite a bit, which has left interested parties confused about what VPC Service Controls are, depending on the reader's technical background, past professional experience, and assumptions and biases. Let's look at some examples to see exactly what I mean.
Initial research into VPC Service Controls will likely lead you first to the product page, which, at the time of writing, uses the summary description: "Managed networking functionality for your Google Cloud resources."
The term "managed networking functionality" can be interpreted as very expansive at face value and can lead to technical discussions across the different layers of the OSI model, primarily Layer 3 (Network), Layer 4 (Transport), and Layer 7 (Application).
While a comprehensive networking discussion is required for a complete solution, VPC Service Controls alone do not deliver this, and the misunderstanding can confuse and distract from achieving a successful implementation of VPC Service Controls.
The product documentation offers a concept overview that describes VPC Service Controls as follows (emphasis on services is mine): "VPC Service Controls improves your ability to mitigate the risk of data exfiltration from Google Cloud _services_ such as Cloud Storage and BigQuery. You can use VPC Service Controls to create perimeters that protect the resources and data of services that you explicitly specify."
This provides a logical description of how VPC Service Controls work and goes a layer deeper by explaining that VPC Service Controls use perimeters to define what resources and data need to be protected.
The emphasis on Cloud Storage and BigQuery is important since those are managed services that do not require attachment to a VPC network to consume, which means the data contained within those GCP services is accessible only through Google APIs. There are other GCP compute services, like Compute Engine, that can provide a means to interface with application data outside of the Google APIs and, as a result, VPC Service Controls alone would not act as a control for that access.
Strangely, the one-line description of VPC Service Controls in the console best captures what VPC Service Controls do while also describing a perimeter's function as touched upon in the concept overview: "VPC Service Perimeters function like a firewall for GCP APIs. Choose which projects you wish to be part of the perimeter and which services you want to be protected by it."
Let’s expand upon the description of VPC Service Controls provided at the beginning of this article and go deeper into technical and architectural considerations.
VPC Service Controls are configured at the organization level. The core component of VPC Service Controls is a Service Perimeter. A Service Perimeter definition consists of:

- The GCP project(s) placed inside the perimeter
- The Google service API(s) (e.g., storage.googleapis.com) restricted by the perimeter
Grouping GCP project(s) and service API(s) in a Service Perimeter results in restricting unauthorized access from outside the Service Perimeter to the service API endpoint(s) referencing resources inside the Service Perimeter. Below is an example of what an error looks like when trying to access a resource restricted by VPC Service Controls.
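For example, listing a protected Cloud Storage bucket from outside the perimeter fails with a 403. The bucket name and identifier below are illustrative, but the shape of the message is representative:

```
$ gsutil ls gs://example-restricted-bucket/
AccessDeniedException: 403 Request is prohibited by organization's policy.
vpcServiceControlsUniqueIdentifier: <unique-request-identifier>
```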
Conversely, the Service Perimeter also prevents communication to any destination outside of the perimeter by default.
VPC Service Controls restrict API requests (*.googleapis.com) at the application layer (OSI Layer 7) to the service and project grouping defined in a Service Perimeter.
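As a minimal sketch of that grouping, the command below creates a perimeter that places a single project inside it and restricts the Cloud Storage and BigQuery APIs. The perimeter name, project number, and access policy ID are placeholders:

```bash
# Create a Service Perimeter containing one project and restricting two services.
gcloud access-context-manager perimeters create example_perimeter \
  --title="example perimeter" \
  --resources=projects/111111111111 \
  --restricted-services=storage.googleapis.com,bigquery.googleapis.com \
  --policy=222222222222
```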
There are additional controls, described later in this article, inherent to VPC Service Controls that use network CIDR blocks defined in an access level and assigned to ingress rules for crossing a Service Perimeter. This level of granularity is appropriate when protecting at the application layer of the OSI model (Layer 7) and does not require the more specific facilities, like selecting ports and protocols at OSI Layer 4, typically provided by VPC networking configurations.
Additionally, as alluded to earlier in the article, there are GCP resources that can serve data both through the Google APIs and through services published from compute services such as Compute Engine and Kubernetes Engine. **VPC Service Controls do not restrict access to managed compute services at OSI Layers 3 and 4.** Additional configuration is required to make the VPC subnet private and to allow access only to supported services.
Good question. This is the access breakdown when planning and designing VPC Service Controls:
Yes. There are four controls applied to Service Perimeters to manage access to restricted resources protected by VPC Service Controls:

- Perimeter bridges
- Ingress and egress rules
- Access levels
- VPC accessible services
All of these access configuration items are related to Access Context Manager. This is relevant, from a technical perspective, since the gcloud commands that configure these Service Perimeter components use gcloud access-context-manager as the command-line prefix.
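For instance, the existing policies, perimeters, and access levels in an organization can all be inspected from that same command group (the organization and policy IDs are placeholders):

```bash
gcloud access-context-manager policies list --organization=123456789012
gcloud access-context-manager perimeters list --policy=222222222222
gcloud access-context-manager levels list --policy=222222222222
```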
A perimeter bridge allows projects in different Service Perimeters to communicate. Perimeter bridges are bidirectional, allowing projects from each Service Perimeter equal access within the scope of the bridge. This bidirectionality is worth noting because, before the introduction of ingress and egress rules, bridges were the primary means of allowing egress between Service Perimeters.
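A minimal sketch of creating a perimeter bridge between projects that live in two different Service Perimeters (project numbers and policy ID are placeholders):

```bash
# A bridge perimeter only lists the projects it joins; it has no restricted services.
gcloud access-context-manager perimeters create example_bridge \
  --title="example bridge" \
  --perimeter-type=bridge \
  --resources=projects/111111111111,projects/333333333333 \
  --policy=222222222222
```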
Ingress and egress rules specify what access is allowed to and from different identities and resources, and they are preferred over perimeter bridges and access levels. Perimeter bridges were previously required to configure bidirectional communication between different Service Perimeters in the same GCP organization; the same communication can now be facilitated with a combination of ingress and egress rules, which generally results in a simpler configuration.
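Below is a hedged sketch of an ingress rule that allows any identity matching an access level to call Cloud Storage against a project inside the perimeter. The access level, project number, perimeter name, and policy ID are placeholders; an analogous file can be supplied with --set-egress-policies:

```bash
# ingress.yaml - allow requests from a trusted access level to reach
# Cloud Storage for one project inside the perimeter.
cat > ingress.yaml <<'EOF'
- ingressFrom:
    identityType: ANY_IDENTITY
    sources:
      - accessLevel: accessPolicies/222222222222/accessLevels/corp_network
  ingressTo:
    operations:
      - serviceName: storage.googleapis.com
        methodSelectors:
          - method: "*"
    resources:
      - projects/111111111111
EOF

gcloud access-context-manager perimeters update example_perimeter \
  --set-ingress-policies=ingress.yaml \
  --policy=222222222222
```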
Access levels enable context-aware classification of requests based on several attributes, such as:

- IP address ranges (CIDR blocks)
- User and service account identities
- Device policy attributes
- Geographic region of origin
A Service Perimeter can attach access levels to ingress rules, granting API access to restricted services from the Internet (such as trusted third-party partners with known static IPs) or from private networks (such as via VPN or Cloud Interconnect), based on the access level associated with a request.
Access levels configured as part of a Service Perimeter only affect ingress to restricted services and can be replaced by ingress rules, which provide a finer-grained facility for permitting ingress to restricted services.
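A minimal sketch of defining a CIDR-based access level and attaching it to a perimeter; the CIDR range, level name, and policy ID are placeholders:

```bash
# access_level.yaml - a basic access level matching requests from a trusted range.
cat > access_level.yaml <<'EOF'
- ipSubnetworks:
    - 203.0.113.0/24
EOF

gcloud access-context-manager levels create corp_network \
  --title="corp network" \
  --basic-level-spec=access_level.yaml \
  --policy=222222222222

# Attach the access level to an existing Service Perimeter.
gcloud access-context-manager perimeters update example_perimeter \
  --add-access-levels=corp_network \
  --policy=222222222222
```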
Yes, you can. The term used for this is VPC accessible services. VPC accessible services can be taken at face value: it is a control that limits which restricted services network endpoints inside the Service Perimeter are allowed to reach.
As an example, suppose you had a GCP project in a Service Perimeter that was configured to restrict the following services:

- Cloud Storage (storage.googleapis.com)
- BigQuery (bigquery.googleapis.com)

If the Compute Engine instances inside the perimeter only need to interact with Cloud Storage and do not need access to BigQuery, this configuration is possible using VPC accessible services, as sketched below.
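A minimal sketch of that configuration, reusing the placeholder perimeter and policy ID from earlier:

```bash
# Limit what endpoints inside the perimeter's VPC networks can call to
# Cloud Storage only; BigQuery stays protected by the perimeter but is
# not reachable from the VPC.
gcloud access-context-manager perimeters update example_perimeter \
  --enable-vpc-accessible-services \
  --add-vpc-allowed-services=storage.googleapis.com \
  --policy=222222222222
```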
Good question. In short, Private Google Access with VPC Service Controls, in conjunction with VPC accessible services. The level of configuration rigor varies between a standalone VPC network and a VPC network integrated with an on-premises network (either via Cloud VPN or Cloud Interconnect), but there are common configurations required for either setup:

- Private Google Access enabled on the VPC subnetworks
- DNS that resolves *.googleapis.com to the restricted VIP (restricted.googleapis.com)
- Routes and firewall rules that permit egress to the restricted VIP range (199.36.153.4/30)
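A hedged sketch of those pieces for a standalone VPC, assuming the restricted VIP approach; the subnet, network, and zone names are placeholders, and older gcloud releases manage DNS records via record-sets transactions rather than record-sets create:

```bash
# Enable Private Google Access on the subnet.
gcloud compute networks subnets update example-subnet \
  --region=us-central1 \
  --enable-private-ip-google-access

# Private DNS zone that resolves Google APIs to the restricted VIP.
gcloud dns managed-zones create googleapis \
  --description="Route Google APIs to the restricted VIP" \
  --dns-name=googleapis.com. \
  --networks=example-vpc \
  --visibility=private

gcloud dns record-sets create restricted.googleapis.com. \
  --zone=googleapis --type=A --ttl=300 \
  --rrdatas=199.36.153.4,199.36.153.5,199.36.153.6,199.36.153.7

gcloud dns record-sets create "*.googleapis.com." \
  --zone=googleapis --type=CNAME --ttl=300 \
  --rrdatas=restricted.googleapis.com.
```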
No. Google maintains documentation explaining which products are supported as well as which services are supported by the restricted VIP. Researching this is an important step to ensure you're not violating any SLO/SLA and/or support requirements.
Yes. Google has articles on:
The Secure data exchange with ingress and egress rules guide is highly recommended reading, as it is a very rigorous document capturing implementation details for ingress and egress rules applied to a Service Perimeter.
Yes! ScaleSec's Ilan Ponimansky recently gave a talk at fwd:cloudsec that both introduces VPC Service Controls and provides a robust technical walkthrough of implementing them for a multi-tiered application.
Additionally, ScaleSec’s Cameron McCloud offers an example of this in conjunction with Cloud DLP as part of a data loss prevention implementation.