Google Cloud Networking & the way I think about it

Alright, you’ve mastered the art of storing data in Google Cloud. Your databases are humming along, ready to serve and save information. But how do your applications, your users, and your other services talk to these databases? How does anything talk to anything else?

They communicate over a network. But in the cloud, we don’t just plug in an ethernet cable. We must build our own private, secure, and performant corner of the internet. This is the story of how you become the architect of your own cloud universe, starting with a single, foundational concept: the Virtual Private Cloud (VPC).

This journey is essential for any Associate Cloud Engineer. We’ll build our network from a patch of digital dirt into a bustling, interconnected metropolis, solving problems as they arise and learning the tools of the trade.

Laying the Foundation: The Virtual Private Cloud (VPC)

Before you can build a house, you need to own a piece of land. A VPC is your private plot of land within Google’s global infrastructure. It’s a logically isolated section of the Google Cloud where you can launch resources.

What makes a Google Cloud VPC so special? It’s global. Unlike other clouds where a virtual network is confined to a single region, a GCP VPC can span all Google Cloud regions worldwide. This means you can have a virtual machine (VM) in us-east1 and another in asia-south1, and they can communicate with each other using private, internal IP addresses as if they were in the same room. This is a game-changer for building global applications.

When you start a new project, you have a choice to make.

Auto Mode vs. Custom Mode VPCs

  • Auto Mode: This is the “easy button.” When your project is created, Google pre-provisions a VPC for you named default. It automatically creates one subnet in every GCP region, each with a predefined IP address range. It also creates a set of firewall rules that allow common traffic, like SSH and RDP. It’s fantastic for getting started quickly and for learning.
  • Custom Mode: This is the professional’s choice. You start with a blank slate. No default subnets, no default firewall rules. You have complete control. You define which subnets to create, in which regions, and with which IP address ranges.

Why choose Custom? In a production environment, you don’t want a subnet in every single region, and you certainly don’t want the pre-canned, sometimes overlapping IP ranges that Auto Mode gives you. Custom mode prevents IP range conflicts with your on-premises network or other VPCs and enforces a more deliberate, secure-by-default posture.

You can create a custom VPC with a simple gcloud command:

# The --subnet-mode=custom is the key!
gcloud compute networks create my-custom-vpc --subnet-mode=custom

For the rest of this guide, we’ll assume we’re working with a custom VPC. We’re professionals, after all.

Carving Out Space: Subnets

So we have our global VPC, our “land.” But you can’t put a VM just anywhere in the VPC; you need a developed plot. That plot is a subnet. A subnet, or subnetwork, is a regional resource. It’s a segmented piece of your VPC where you define a specific range of IP addresses. When you launch a VM, you launch it into a specific subnet in a specific region and zone. That VM then gets its internal IP address from the subnet’s defined range.

For example, let’s create a subnet in us-central1 for our web servers. We’ll assign it the IP address range 10.1.2.0/24, which gives us 252 usable IP addresses once Google’s reserved addresses are taken out.

gcloud compute networks subnets create web-subnet-us-central \
    --network=my-custom-vpc \
    --range=10.1.2.0/24 \
    --region=us-central1

Now we have a place to put our web servers! Google reserves the first two and last two addresses in any subnet range for networking purposes, so 10.1.2.0, 10.1.2.1, 10.1.2.254, and 10.1.2.255 are off-limits.

Walls and Doors: Firewall Rules

We have a subnet and we’ve launched a VM in it. By default, it’s completely isolated. Nothing can get in, and nothing can get out. This is secure, but not very useful. We need to create openings. We need firewall rules.

GCP Firewall Rules are stateful. This is a crucial concept. If you allow an incoming (ingress) connection from a source, the return traffic for that same connection is automatically allowed, regardless of any egress rules. You don’t need to create a matching egress rule for the response.

A firewall rule has several key components:

  • Direction: Ingress (for incoming traffic) or Egress (for outgoing traffic).
  • Action: Allow or Deny.
  • Source/Destination: An IP CIDR range, or more powerfully, a network tag or service account.
  • Protocols and Ports: Like tcp:80 for HTTP, tcp:443 for HTTPS, or icmp for ping.
  • Priority: A number from 0 to 65535. The lower the number, the higher the priority. The highest-priority (lowest-numbered) rule that matches the traffic is the one that’s applied.

Every VPC has two implied firewall rules that you can’t see but can override: an egress rule that allows all outbound traffic (priority 65535) and an ingress rule that denies all inbound traffic (priority 65535).
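
If you ever want to flip that default posture and lock down egress as well, you can override the implied allow-egress rule with explicit rules of your own. A minimal sketch (the rule names and the HTTPS-only choice are purely illustrative, not something we rely on later):

# Deny all outbound traffic at a very low priority (high number)...
gcloud compute firewall-rules create deny-all-egress \
    --network=my-custom-vpc \
    --direction=EGRESS \
    --action=DENY \
    --rules=all \
    --destination-ranges=0.0.0.0/0 \
    --priority=65534

# ...then allow only the egress you actually need at a higher priority
gcloud compute firewall-rules create allow-https-egress \
    --network=my-custom-vpc \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --destination-ranges=0.0.0.0/0 \
    --priority=1000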

The Power of Network Tags

Let’s solve a common problem. We want to allow SSH traffic (tcp:22) to all our backend servers, but only from a specific “bastion host” or admin machine. We could create a rule for each server’s IP, but that’s brittle.

Instead, we attach a network tag, like backend-server, to all our backend VMs. Then, we create a single firewall rule.

# First, create the rule
gcloud compute firewall-rules create allow-ssh-from-admin \
    --network=my-custom-vpc \
    --allow=tcp:22 \
    --source-ranges=73.54.12.98/32 \
    --target-tags=backend-server \
    --priority=1000

# Then, when you create a VM, you apply the tag
gcloud compute instances create my-backend-vm \
    --network=my-custom-vpc \
    --subnet=web-subnet-us-central \
    --tags=backend-server

Now, any VM with the backend-server tag automatically inherits this firewall rule. If you remove the tag, it loses access. It’s dynamic and scalable. Using service accounts as the source/target is an even more secure and granular method, tying permissions to an identity rather than a simple string label.
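
As a sketch of that service-account approach, assuming hypothetical service accounts app-sa and db-sa in a project called my-project (neither exists in our setup; adapt the names to your own):

# Allow PostgreSQL traffic only from VMs running as the app service account
# to VMs running as the database service account
gcloud compute firewall-rules create allow-app-to-db \
    --network=my-custom-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5432 \
    --source-service-accounts=app-sa@my-project.iam.gserviceaccount.com \
    --target-service-accounts=db-sa@my-project.iam.gserviceaccount.com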

Creating Signposts: Routes

Okay, our VMs can talk to each other within the VPC, and we’ve opened the firewall for specific traffic. But how does a packet of data actually know how to get from VM-A in us-central1 to VM-B in europe-west2? The answer is routes.

A route is a rule that tells the network where to send traffic destined for a specific IP range. Your VPC comes with system-generated routes that handle the basics:

  • A route for each of your subnets, telling traffic how to reach other VMs within that subnet. The next hop is the VPC network itself.
  • A default route that sends traffic destined for anywhere else (0.0.0.0/0) to the default internet gateway. This is how your VMs reach the public internet (if they have a public IP).

You generally don’t need to mess with routes unless you are doing something more advanced, like setting up a VPN, which we’ll cover next. The system-generated routes are sufficient for most internal and external communication.
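
If you’re curious, you can list the routes the system generated for our VPC; the filter expression below is just one convenient way to scope the output:

# Show the system-generated routes for my-custom-vpc
gcloud compute routes list --filter="network:my-custom-vpc"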

Talking to the Outside World (Privately): Cloud NAT

Our backend VMs need to download security patches and software updates from the internet. The default route allows this, but only if the VM has a public IP address. Assigning public IPs to all our backend servers is a security risk and can be costly. We want them to be able to initiate outbound connections, but we don’t want the internet to be able to initiate connections to them.

This is the exact problem that Cloud NAT (Network Address Translation) solves.

Cloud NAT is a managed service that allows VMs with only private IPs in a subnet to access the internet. It works by routing their outbound traffic through a managed gateway that translates their private source IP to a shared public source IP. It’s egress-only; no inbound connections are possible through the NAT.

To set up Cloud NAT, you first need a Cloud Router (a virtual router for your VPC), and then you configure the NAT gateway itself.

# Step 1: Create a Cloud Router
gcloud compute routers create my-router \
    --network=my-custom-vpc \
    --region=us-central1

# Step 2: Create the NAT gateway and attach it to the router
gcloud compute routers nats create my-nat-gateway \
    --router=my-router \
    --region=us-central1 \
    --nat-all-subnet-ip-ranges \
    --auto-allocate-nat-external-ips

Now, all VMs in all subnets in us-central1 can reach the internet without needing their own public IP addresses. It’s secure and efficient.
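
As a quick sanity check, one way to confirm the gateway exists and see its configuration is to describe it via its Cloud Router:

# Confirm the NAT gateway configuration on the router
gcloud compute routers nats describe my-nat-gateway \
    --router=my-router \
    --region=us-central1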

Connecting to Home Base: VPN & Interconnect

Our little cloud deployment is growing up. Now, we need to connect it back to our company’s on-premises data center. Our on-prem servers need to securely access resources in our GCP VPC. We have two main options for this hybrid-cloud setup.

Cloud VPN

This creates a secure, IPsec VPN tunnel between your on-premises VPN gateway and a Google Cloud VPN gateway. The traffic travels over the public internet, but it’s encrypted. It’s relatively quick to set up and cost-effective. HA VPN is the recommended option, providing a 99.99% SLA by using two tunnels.
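
A full VPN setup involves several pieces (a peer gateway resource, a Cloud Router, and the tunnels themselves), but as a rough sketch, creating the HA VPN gateway on the Google side looks like this:

# Create an HA VPN gateway in our VPC; tunnels and BGP sessions come later
gcloud compute vpn-gateways create my-ha-vpn-gateway \
    --network=my-custom-vpc \
    --region=us-central1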

Cloud Interconnect

If you need higher bandwidth, lower latency, and more reliability than a standard internet connection can provide, you need a private line. Cloud Interconnect provides a direct physical connection between your on-premises network and Google’s network.

  • Dedicated Interconnect: A private, high-speed physical connection (10 Gbps or 100 Gbps) directly to Google. It’s the highest performance option but also the most expensive and takes longer to provision.
  • Partner Interconnect: Connect to Google’s network through a supported service provider. This offers more flexibility in connection speeds (50 Mbps to 50 Gbps) and can be a good middle ground.

The choice is a trade-off: VPN is for flexibility and standard workloads, while Interconnect is for enterprise-grade, mission-critical workloads that demand high performance and reliability.

Connecting with Friends: VPC Peering

Our organization has multiple teams, and each has its own GCP project and VPC. The data science team in vpc-analytics needs to access a dataset from the main application running in vpc-production. How can we connect these two VPCs?

VPC Network Peering allows two VPCs to connect privately. Once peered, VMs in either VPC can communicate using internal IP addresses as if they were on the same network. Firewall rules in each VPC still apply, so you maintain full security control.
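
As a sketch, peering vpc-production with vpc-analytics might look like this (the project ID is a placeholder, and the analytics side must create the mirror-image peering before the connection goes active):

# From the production project: request peering with the analytics VPC
gcloud compute networks peerings create prod-to-analytics \
    --network=vpc-production \
    --peer-network=vpc-analytics \
    --peer-project=analytics-project-id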

A crucial characteristic of peering is that it is non-transitive. If VPC-A is peered with VPC-B, and VPC-B is peered with VPC-C, it does not mean VPC-A can talk to VPC-C. You would need to establish a separate, direct peering connection between A and C. This can get complicated in large organizations, which leads us to…

Centralized Management: Shared VPC

Imagine you have dozens of projects and VPCs. Managing all the peering connections would be a nightmare. Shared VPC provides a more scalable and manageable solution for large organizations.

With Shared VPC, you designate one project as the Host Project. This project owns the VPC, subnets, routes, and firewalls. Then, you can share some or all of its subnets with other Service Projects. The VMs and resources in the Service Projects can use the shared subnets and communicate with each other, all managed from the central Host Project.

This model allows a central network administration team to enforce security and networking policies while giving developer teams the autonomy to manage their own resources within the Service Projects. It’s the enterprise standard for GCP networking.
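
The broad strokes of setting this up, assuming placeholder project IDs (and the org-level Shared VPC Admin role), look something like this:

# Designate the host project that will own the shared network
gcloud compute shared-vpc enable host-project-id

# Attach a service project so its resources can use the shared subnets
gcloud compute shared-vpc associated-projects add service-project-id \
    --host-project=host-project-id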

What’s in a Name?: Cloud DNS

Remembering 10.1.2.5 is hard. Remembering web-server-prod-1 is easier. Cloud DNS is a managed, authoritative Domain Name System (DNS) service that translates human-readable domain names into IP addresses.

You can create:

  • Public Zones: To manage the public DNS records for your domains (like example.com), making your services available on the internet.
  • Private Zones: To create custom internal domain names that are only resolvable within your VPC network(s). This is perfect for service discovery, allowing your backend services to find each other using names like database.internal.prod instead of hard-coding IP addresses.
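
For the private-zone case, a minimal sketch might look like the following; the zone name, domain, and record IP are purely illustrative:

# Create a private zone that only resolves inside my-custom-vpc
gcloud dns managed-zones create internal-prod-zone \
    --dns-name="internal.prod." \
    --description="Private zone for internal service discovery" \
    --visibility=private \
    --networks=my-custom-vpc

# Add an A record so services can find the database by name
gcloud dns record-sets create database.internal.prod. \
    --zone=internal-prod-zone \
    --type=A \
    --ttl=300 \
    --rrdatas=10.1.2.5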

A Note on Load Balancing

We’ve successfully built a secure, interconnected network. But what happens when our web server gets too much traffic? We’ll need to run multiple instances and distribute the traffic among them. This is the job of a Load Balancer. Google Cloud offers a sophisticated suite of global and regional load balancers for different types of traffic (HTTP(S), TCP, UDP). This is a huge topic in itself and the next logical step in your networking journey, deserving of its own deep dive.

Common Pitfalls & Best Practices

VPC & Subnets

  • Pitfall: Using Auto Mode VPC for production. The predefined IP ranges can conflict with your corporate network when you set up a VPN.
  • Best Practice: Always use Custom Mode VPCs for production. Plan your IP CIDR ranges carefully to avoid future overlap.

Firewall Rules

  • Pitfall: Creating overly permissive rules (e.g., 0.0.0.0/0 for SSH). This is a huge security risk.
  • Best Practice: Follow the principle of least privilege. Be as specific as possible with source ranges, protocols, and ports. Use network tags or service accounts instead of IP addresses for targets.

Cloud NAT

  • Pitfall: Forgetting that Cloud NAT is regional. If you have VMs in multiple regions that need egress, you need a NAT gateway in each region.
  • Best Practice: Place your NAT gateway in the same region as the VMs that will use it to avoid cross-regional traffic costs.

VPC Peering

  • Pitfall: Forgetting that peering is non-transitive and getting confused about why two indirectly connected VPCs can’t communicate.
  • Best Practice: For complex topologies with more than a few VPCs, strongly consider using Shared VPC instead of building a complex mesh of peering connections.

IAM for Networking

  • Pitfall: Giving developers the compute.networkAdmin role, allowing them to change firewall rules and other critical settings.
  • Best Practice: Use granular roles. Give the network team roles/compute.networkAdmin. Give security teams roles/compute.securityAdmin (can manage firewalls and SSL certs). Give developers roles/compute.instanceAdmin (can manage VMs but not network settings).

GCloud Command Center: Networking Cheatsheet

Here are the essential gcloud networking commands, focusing on what’s most important for the Associate Cloud Engineer exam.

VPC Networks

This is your foundational private network space. Knowing how to create a custom mode VPC is crucial.

  • Create a Custom Mode VPC:

    gcloud compute networks create [VPC_NAME] --subnet-mode=custom

    Why it’s important: The --subnet-mode=custom flag is key. For the exam, always assume production environments require custom mode for IP range control.

  • List All VPCs:

    gcloud compute networks list

    Why it’s important: A quick way to see all networks in your project, including the default auto-mode VPC.

Subnets

Subnets are the regional IP ranges where your resources live.

  • Create a Subnet:

    gcloud compute networks subnets create [SUBNET_NAME] \
        --network=[VPC_NAME] \
        --region=[REGION] \
        --range=[IP_CIDR_RANGE]

    Why it’s important: You must specify the VPC, region, and a valid CIDR range (e.g., 10.10.0.0/24).

  • Enable Private Google Access:

    gcloud compute networks subnets update [SUBNET_NAME] \
        --region=[REGION] \
        --enable-private-ip-google-access

    Why it’s important: This allows VMs with only private IPs to reach Google APIs (like BigQuery or Cloud Storage) without going over the public internet. This is a common exam scenario.

Firewall Rules

Firewall rules control traffic to and from your VMs. Remember, lower priority numbers are evaluated first.

  • Create an Ingress (Inbound) Allow Rule with Network Tags:

    gcloud compute firewall-rules create allow-http-on-web-servers \
        --network=[VPC_NAME] \
        --direction=INGRESS \
        --priority=1000 \
        --action=ALLOW \
        --rules=tcp:80,tcp:443 \
        --source-ranges=0.0.0.0/0 \
        --target-tags=web-server

    Why it’s important: This command demonstrates several core concepts:
    • --direction=INGRESS: Specifies inbound traffic.
    • --priority=1000: The default priority.
    • --rules=tcp:80,tcp:443: Allows HTTP and HTTPS traffic.
    • --source-ranges=0.0.0.0/0: Allows traffic from any IP address.
    • --target-tags=web-server: The rule only applies to VMs with this tag. This is the most scalable way to apply firewall rules.

Cloud NAT & Cloud Router

Cloud NAT allows private VMs to access the internet for updates and APIs. It requires a Cloud Router.

  • Step 1: Create a Cloud Router:

    gcloud compute routers create [ROUTER_NAME] \
        --network=[VPC_NAME] \
        --region=[REGION]

    Why it’s important: You cannot create a Cloud NAT gateway without a router in the same region.

  • Step 2: Create a Cloud NAT Gateway:

    gcloud compute routers nats create [NAT_GATEWAY_NAME] \
        --router=[ROUTER_NAME] \
        --region=[REGION] \
        --nat-all-subnet-ip-ranges \
        --auto-allocate-nat-external-ips

    Why it’s important: The --nat-all-subnet-ip-ranges flag is a common configuration that makes the NAT available to all subnets in that region.

VPC Network Peering

This connects two VPCs so they can communicate using private IPs.

  • Create a Peering Connection (this is a two-step process):

    # In the first network (net-a)
    gcloud compute networks peerings create peer-a-to-b \
        --network=net-a \
        --peer-network=net-b \
        --peer-project=[PROJECT_ID_OF_NET_B]

    # In the second network (net-b)
    gcloud compute networks peerings create peer-b-to-a \
        --network=net-b \
        --peer-network=net-a \
        --peer-project=[PROJECT_ID_OF_NET_A]

    Why it’s important: Remember that peering must be established from both sides to become active.

VM Instance Creation (Networking Flags)

When creating a VM, you often need to specify its network configuration.

  • Create a VM with a Specific Network and Tag:

    gcloud compute instances create my-vm \
        --zone=us-central1-a \
        --machine-type=e2-medium \
        --network=[VPC_NAME] \
        --subnet=[SUBNET_NAME] \
        --tags=web-server,db-client

    Why it’s important: Shows how to place a VM in a specific custom subnet and apply the network tags that link it to your firewall rules.

  • Create a VM with No Public IP:

    gcloud compute instances create my-private-vm \
        --zone=us-central1-a \
        --machine-type=e2-small \
        --subnet=[SUBNET_NAME] \
        --no-address

    Why it’s important: The --no-address flag ensures the VM is private. This is a best practice for backend servers that should use Cloud NAT for egress.