Never Manually Unseal Vault Again: My GCP KMS Setup Guide

Jan 18, 2024

Before I can set up my favorite Vault feature (auto-unseal with Google Cloud KMS), I need to create the necessary resources in GCP. This involves creating a KMS key for Vault to use, a dedicated service account, and granting that service account permission to use the key. I’ve done this so many times that I’ve created a script to automate the whole process. Here’s my guide to setting up the GCP side of Vault auto-unseal.

Why I Switched to KMS Auto-Unseal

Let me tell you about the incident that made me switch to KMS auto-unseal. It was 3 AM on a Saturday. Our Vault cluster had restarted due to a Kubernetes node upgrade, and it needed to be unsealed. I was on call. I fumbled for my laptop in the dark, connected to the VPN, and started the unseal process. You need three different unseal keys from three different people. I had one. I had to wake up two other people to get theirs.

By the time we got Vault unsealed, 45 minutes had passed. Every service that depended on Vault for secrets was down. The incident post-mortem was painful. The root cause? We manually unseal Vault. In 2023. The solution was obvious.

KMS auto-unseal eliminated this entire class of problems. Now when Vault restarts, it automatically unseals itself using Google Cloud KMS. No human intervention required. No 3 AM pages. No fumbling for unseal keys. The cluster just comes back up.

Beyond the operational improvement, the security benefits are real. The unseal key material lives in Google Cloud KMS (HSM-backed if you opt for the HSM protection level), not in people’s password managers, and it never leaves Google’s infrastructure. The key rotates automatically on the schedule I configure (every 30 days in my setup). Every encrypt and decrypt call can be captured in Cloud Audit Logs. And Vault becomes genuinely highly available because recovery no longer depends on humans being awake and reachable.

My Automated Setup Script

After manually setting up KMS for Vault the first time, I realized I’d have to do this again for staging, then for development, then for every new environment. The manual process involved clicking through the GCP console, copying and pasting values, and hoping I didn’t miss a step. I made a mistake on my second attempt and gave the service account the wrong permissions. Vault failed to unseal, and I spent an hour debugging before I found the IAM issue.

That’s when I scripted the entire thing. Now I have setup-vault-kms.sh, a single script that creates all the GCP resources I need. It takes two arguments: the environment name and the GCP project ID. Run it once, get all the resources configured correctly. Consistent, repeatable, and way less error-prone than clicking through consoles.

Here’s the script:

#!/bin/bash
# setup-vault-kms.sh

set -e

ENVIRONMENT=${1:-production}
# Require an explicit project ID rather than falling back to a placeholder
GCP_PROJECT=${2:?"Usage: $0 <environment> <gcp-project-id>"}

VAULT_SA_NAME="vault-server-${ENVIRONMENT}"
VAULT_SA="${VAULT_SA_NAME}@${GCP_PROJECT}.iam.gserviceaccount.com"
KEYRING_NAME="vault-unseal-kr-${ENVIRONMENT}"
KEY_NAME="vault-unseal-key-${ENVIRONMENT}"

echo "Setting up KMS for Vault (${ENVIRONMENT} environment)..."

# Step 1: Create Key Ring and Key
echo "Creating KMS key ring and key..."
gcloud kms keyrings create ${KEYRING_NAME} --location=global --project=${GCP_PROJECT} || echo "Key ring already exists."
# Cloud KMS requires --next-rotation-time whenever --rotation-period is set.
# (GNU date shown; on macOS use: date -u -v+30d +%Y-%m-%dT%H:%M:%SZ)
gcloud kms keys create ${KEY_NAME} \
  --location=global \
  --keyring=${KEYRING_NAME} \
  --purpose=encryption \
  --rotation-period=30d \
  --next-rotation-time=$(date -u -d "+30 days" +%Y-%m-%dT%H:%M:%SZ) \
  --project=${GCP_PROJECT} || echo "Key already exists."

# Step 2: Create Service Account
echo "Creating service account..."
gcloud iam service-accounts create ${VAULT_SA_NAME} --project=${GCP_PROJECT} || echo "Service account already exists."

# Step 3: Grant Permissions
echo "Granting KMS permissions..."
gcloud kms keys add-iam-policy-binding ${KEY_NAME} \
  --location=global \
  --keyring=${KEYRING_NAME} \
  --member="serviceAccount:${VAULT_SA}" \
  --role=roles/cloudkms.cryptoKeyEncrypterDecrypter \
  --project=${GCP_PROJECT}

# Step 4: Generate JSON Key
echo "Generating service account key..."
gcloud iam service-accounts keys create /tmp/kms-vault-creds-${ENVIRONMENT}.json \
  --iam-account=${VAULT_SA} \
  --project=${GCP_PROJECT}

echo "Setup complete! Credentials are in /tmp/kms-vault-creds-${ENVIRONMENT}.json"

What Each Step Does

Creating the KMS Key Ring and Key is step one. A KeyRing in GCP is just a container for grouping related keys; I create one per environment to keep things organized. The actual CryptoKey is what Vault uses for encryption. I configure it to rotate automatically every 30 days (Cloud KMS requires --next-rotation-time whenever --rotation-period is set, which is why the script passes both flags), a security practice I picked up from a compliance audit. The || echo fallbacks are there because a second run would otherwise abort under set -e when GCP complains that the resources already exist; they make the script safe to re-run, though they will also swallow other creation failures, which is a tradeoff I accept.
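Once the key exists, I like to confirm the rotation settings actually took. This is just a manual sanity check with gcloud, not part of the script; the names below are the script’s production defaults with a placeholder project ID:

# Placeholder project ID; substitute your own
gcloud kms keys describe vault-unseal-key-production \
  --location=global \
  --keyring=vault-unseal-kr-production \
  --project=my-prod-project \
  --format="value(rotationPeriod,nextRotationTime)"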

Creating the Service Account is straightforward. Each environment gets its own dedicated service account named vault-server-{environment}. I used to share service accounts across environments to save resources, but that was stupid. When I had a security incident in staging, I had to rotate the credentials, which also broke production. Never again. Separate service accounts for separate environments.
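If you want to double-check the account came out under the expected name, a quick describe does it (again a manual spot check, with a placeholder project ID):

gcloud iam service-accounts describe \
  vault-server-production@my-prod-project.iam.gserviceaccount.com \
  --project=my-prod-project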

Granting Permissions is where security matters most. The script grants roles/cloudkms.cryptoKeyEncrypterDecrypter only on the specific key it just created. Not on all keys in the project. Not on the entire KeyRing. Just this one key. This follows the principle of least privilege and has saved me during a few security reviews.
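To see exactly what ended up in the key’s IAM policy, I inspect it directly (same placeholder names as above):

gcloud kms keys get-iam-policy vault-unseal-key-production \
  --location=global \
  --keyring=vault-unseal-kr-production \
  --project=my-prod-project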

Generating the JSON Key is the final step. This file contains the credentials that Vault will use to authenticate to GCP KMS. I save it to /tmp because these credentials are sensitive and shouldn’t live on disk longer than necessary. Once I’ve created the Kubernetes secret from it, I delete the file.
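Before handing the JSON key to Vault, I sometimes run a quick end-to-end check as the new service account to prove it can really encrypt with the key. Be aware that activate-service-account switches your active gcloud identity, so switch back to your own account afterwards; the names are the production defaults with a placeholder project ID:

# Act as the new service account (this switches your active gcloud account)
gcloud auth activate-service-account \
  --key-file=/tmp/kms-vault-creds-production.json
# Try an encrypt call against the new key; the ciphertext is discarded
echo "test" | gcloud kms encrypt \
  --plaintext-file=- \
  --ciphertext-file=/dev/null \
  --location=global \
  --keyring=vault-unseal-kr-production \
  --key=vault-unseal-key-production \
  --project=my-prod-project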

How I Use the Script

I make the script executable and run it for each environment I need.

chmod +x setup-vault-kms.sh
./setup-vault-kms.sh production my-prod-project
./setup-vault-kms.sh staging my-staging-project

What to Do with the Output

After running my script, I have everything I need for the Vault deployment:

  1. Service Account JSON Key: /tmp/kms-vault-creds-production.json
  2. Key Ring Name: vault-unseal-kr-production
  3. Encryption Key Name: vault-unseal-key-production

I use these values directly in my Vault Helm values.yaml file, in the seal "gcpckms" block. I then create a Kubernetes secret from the JSON file and mount it into the Vault pods.
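For reference, here’s roughly what that seal block looks like. This sketch assumes the Kubernetes secret gets mounted into the Vault pods at /vault/userconfig/kms-vault-creds (the exact path depends on how your Helm values wire up the volume), and the project ID is a placeholder:

# Seal stanza inside the Vault server config in values.yaml.
# Credentials path and project ID are assumptions; adjust for your setup.
seal "gcpckms" {
  credentials = "/vault/userconfig/kms-vault-creds/kms-creds.json"
  project     = "my-prod-project"
  region      = "global"
  key_ring    = "vault-unseal-kr-production"
  crypto_key  = "vault-unseal-key-production"
}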

# Create the k8s secret
kubectl create secret generic kms-vault-creds \
  --from-file=kms-creds.json=/tmp/kms-vault-creds-production.json \
  --namespace=vault
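
With the secret created, the JSON file on disk has served its purpose, so I confirm the secret exists and then delete the local copy:

# Verify the secret, then remove the sensitive file from /tmp
kubectl get secret kms-vault-creds --namespace=vault
rm -f /tmp/kms-vault-creds-production.json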

What I Learned

Scripting this setup was non-negotiable for me after the first few manual attempts. Clicking through the GCP console is fine for learning, but it’s terrible for repeatability. I’ve used this script to set up KMS for Vault in six different environments across three projects, and it works the same way every time.

The cost is basically nothing. Cloud KMS charges a few cents per 10,000 cryptographic operations plus a small monthly fee per active key version, and Vault only touches the key during unseal and rekey operations. My monthly KMS bill is usually under $1. For that price, I get automated unsealing, key rotation, audit logs, and the ability to sleep through the night without worrying about 3 AM unseal pages.

The biggest lesson I learned was to keep environments completely isolated. Separate KMS keys. Separate service accounts. Separate KeyRings. When I was starting out, I thought sharing resources would save money and reduce complexity. It did neither. It just made security incidents in one environment potentially impact others. The cost of isolation is minimal, and the security benefit is massive.

If you’re running Vault on GCP and still manually unsealing it, stop. Set up KMS auto-unseal. Use this script or write your own, but automate it. Your future self will thank you the next time Vault restarts at 3 AM and you don’t even notice because it unseals itself.
