Top 10 GCP Misconfigurations We See Every Week
Google Cloud Platform has strong security defaults compared to other major cloud providers. GCP's organization policies, uniform bucket-level access, and VPC Service Controls are genuinely well-designed. But defaults only protect you if you don't override them — and in practice, teams override them constantly.
After analyzing configurations across thousands of GCP projects, we've identified a consistent pattern: the same ten misconfigurations appear week after week, across organizations of every size. Some are inherited from legacy configurations. Some are shortcuts taken during development that were never cleaned up. Some are the result of following outdated documentation.
Here are the ten we see most often, ranked by how frequently they appear and how much damage they can cause.
1. Overly Permissive IAM Bindings
Frequency: Nearly universal in projects older than six months.
GCP's IAM model is powerful but complex. The most common mistake is granting roles/editor or roles/owner at the project level when a more specific role would suffice. The Editor role grants write access to almost every resource in a project — compute instances, storage buckets, databases, secrets, and more.
Why it happens: During development, teams grant broad roles to move fast. The intention is always to tighten permissions later. Later never comes.
What to do:
- Audit all project-level IAM bindings. Replace roles/editor and roles/owner with predefined or custom roles scoped to specific services.
- Use IAM Recommender to identify permissions that haven't been used in 90 days.
- Implement a policy that requires justification for any role broader than service-specific roles.
- Pay special attention to allUsers and allAuthenticatedUsers bindings — these grant access to anyone on the internet or any Google account holder, respectively.
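The audit step can be sketched as a small script over an exported policy. This is a hypothetical example, assuming a JSON file produced with `gcloud projects get-iam-policy PROJECT_ID --format=json`; the inline sample stands in for that file:

```python
# Flag project-level bindings that are too broad or public.
BROAD_ROLES = {"roles/editor", "roles/owner"}
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def flag_bindings(policy):
    """Return human-readable findings for risky bindings in an IAM policy dict."""
    findings = []
    for binding in policy.get("bindings", []):
        role = binding.get("role", "")
        for member in binding.get("members", []):
            if role in BROAD_ROLES:
                findings.append(f"{member} holds broad role {role}")
            if member in PUBLIC_MEMBERS:
                findings.append(f"public principal {member} bound to {role}")
    return findings

# Inline sample standing in for the gcloud export.
sample = {"bindings": [
    {"role": "roles/owner", "members": ["user:dev@example.com"]},
    {"role": "roles/storage.objectViewer", "members": ["allUsers"]},
]}
for finding in flag_bindings(sample):
    print(finding)
```

Running this per project (or over an organization-wide export) gives a worklist for the role-replacement step.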
2. Default Service Account with Editor Role
Frequency: Present in the majority of projects that use Compute Engine or Cloud Functions.
When you create a GCP project, a default Compute Engine service account is automatically created with the Editor role. Every VM, Cloud Function, and App Engine instance uses this service account unless you specify otherwise. This means every workload in your project has near-full write access to every resource.
Why it happens: It's the default. GCP creates this binding automatically, and most teams never revisit it.
What to do:
- Create dedicated service accounts for each workload with only the permissions it needs.
- Remove the Editor role from the default service account.
- Use IAM Conditions to further restrict access by resource, time, or IP range where appropriate.
- Set the iam.automaticIamGrantsForDefaultServiceAccounts organization policy to prevent this default binding in new projects.
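The organization policy can be set with a short constraint file. A sketch, assuming the Org Policy v2 YAML format and a placeholder ORG_ID, applied with `gcloud org-policies set-policy policy.yaml`:

```yaml
# policy.yaml — enforce the constraint that disables the automatic
# Editor grant on default service accounts in new projects.
name: organizations/ORG_ID/policies/iam.automaticIamGrantsForDefaultServiceAccounts
spec:
  rules:
    - enforce: true
```

Note this only affects new projects; existing default-service-account bindings still need to be removed manually.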
3. Public Cloud Storage Buckets
Frequency: Found in roughly 15-20% of projects we scan.
GCP has made it harder to accidentally create public buckets — uniform bucket-level access and the public access prevention organization policy help significantly. But teams still create public buckets intentionally for static website hosting, public downloads, or CDN origins, and then either forget to restrict them later or misconfigure the scope of public access.
Why it happens: Static site hosting, public assets, and data sharing workflows often require public access to specific objects. The mistake is making the entire bucket public rather than individual objects, or leaving development buckets public after they've served their purpose.
What to do:
- Enable the storage.publicAccessPrevention organization policy to block public access by default. Grant exceptions explicitly for buckets that genuinely need it.
- Audit all buckets with allUsers or allAuthenticatedUsers access. Verify each one is intentionally public.
- Use signed URLs for time-limited access to private objects instead of making objects permanently public.
- Enable uniform bucket-level access on all buckets to prevent legacy ACLs from creating unexpected public access.
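The bucket-level checks above can be automated. A sketch, assuming the metadata shape returned by `gcloud storage buckets describe gs://BUCKET --format=json` (field names follow the Cloud Storage JSON API; the inline sample stands in for a real export):

```python
def bucket_findings(bucket):
    """Flag bucket settings that permit or risk unintended public access."""
    findings = []
    iam_cfg = bucket.get("iamConfiguration", {})
    # "enforced" blocks public grants; "inherited" defers to the org policy.
    if iam_cfg.get("publicAccessPrevention") != "enforced":
        findings.append("public access prevention not enforced")
    # Without uniform bucket-level access, legacy object ACLs still apply.
    if not iam_cfg.get("uniformBucketLevelAccess", {}).get("enabled"):
        findings.append("uniform bucket-level access disabled")
    return findings

sample = {"iamConfiguration": {
    "publicAccessPrevention": "inherited",
    "uniformBucketLevelAccess": {"enabled": False},
}}
print(bucket_findings(sample))
```

Buckets that are intentionally public can be tracked on an explicit allowlist so the scan stays quiet about them.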
4. Firewall Rules Allowing 0.0.0.0/0 on SSH and RDP
Frequency: Present in over 30% of projects.
GCP's default VPC includes firewall rules that allow SSH (port 22) from 0.0.0.0/0 to all instances. Many teams create additional rules with similarly broad source ranges for RDP (port 3389), database ports, or custom application ports.
Why it happens: The default VPC ships with a default-allow-ssh rule open to the internet. Teams working from multiple locations or without a VPN add broad rules to maintain access. Temporary rules created for debugging are never removed.
What to do:
- Delete the default-allow-ssh and default-allow-rdp rules from all VPCs.
- Use Identity-Aware Proxy (IAP) for SSH and RDP access. IAP provides authenticated, logged access without exposing ports to the internet.
- If IAP isn't feasible, restrict source IP ranges to known office or VPN CIDR blocks.
- Implement a firewall rule naming convention that includes an expiration date for temporary rules, and automate cleanup.
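Detecting internet-open SSH/RDP rules is also scriptable. A sketch, assuming the JSON shape from `gcloud compute firewall-rules list --format=json` (sourceRanges, allowed, ports); the inline sample stands in for that export:

```python
RISKY_PORTS = {"22", "3389"}  # SSH and RDP

def port_matches(ports, risky):
    """True if any allowed port spec (e.g. '22' or '20-25') covers a risky port."""
    for spec in ports:
        if "-" in spec:
            lo, hi = (int(p) for p in spec.split("-"))
            if any(lo <= int(r) <= hi for r in risky):
                return True
        elif spec in risky:
            return True
    return False

def risky_rules(rules):
    """Names of rules open to 0.0.0.0/0 on SSH/RDP (or on all ports)."""
    findings = []
    for rule in rules:
        if "0.0.0.0/0" not in rule.get("sourceRanges", []):
            continue
        for allowed in rule.get("allowed", []):
            # An entry with no "ports" key allows every port for that protocol.
            if "ports" not in allowed or port_matches(allowed["ports"], RISKY_PORTS):
                findings.append(rule["name"])
                break
    return findings

sample = [
    {"name": "default-allow-ssh", "sourceRanges": ["0.0.0.0/0"],
     "allowed": [{"IPProtocol": "tcp", "ports": ["22"]}]},
    {"name": "allow-internal", "sourceRanges": ["10.0.0.0/8"],
     "allowed": [{"IPProtocol": "tcp", "ports": ["0-65535"]}]},
]
print(risky_rules(sample))
```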
5. Cloud SQL Instances with Public IP and No Network Restrictions
Frequency: Found in approximately 25% of projects using Cloud SQL.
Cloud SQL instances can be assigned public IP addresses for external connectivity. When combined with authorized networks set to 0.0.0.0/0 or no SSL enforcement, the database is accessible to anyone who can guess or brute-force the credentials.
Why it happens: Development teams need to connect from local machines. Setting up private IP connectivity requires VPC configuration and possibly Cloud SQL Proxy. Public IP is the path of least resistance.
What to do:
- Use private IP for all production Cloud SQL instances. Connect through VPC peering or Cloud SQL Auth Proxy.
- If public IP is required for development instances, restrict authorized networks to specific IP addresses — never 0.0.0.0/0.
- Enforce SSL connections via the instance settings.
- Enable Cloud SQL audit logging to detect unauthorized access attempts.
- Use the sql.restrictPublicIp organization policy to prevent public IP assignment on new instances.
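The instance-level checks can be combined into one function. A sketch, assuming the settings shape from `gcloud sql instances describe INSTANCE --format=json` (field names follow the Cloud SQL Admin API; the inline sample stands in for a real export):

```python
def sql_findings(instance):
    """Flag risky network settings on a Cloud SQL instance description."""
    cfg = instance.get("settings", {}).get("ipConfiguration", {})
    findings = []
    if cfg.get("ipv4Enabled"):
        findings.append("public IP enabled")
    nets = [n.get("value") for n in cfg.get("authorizedNetworks", [])]
    if "0.0.0.0/0" in nets:
        findings.append("authorized networks open to the internet")
    if not cfg.get("requireSsl"):
        findings.append("SSL not required")
    return findings

sample = {"settings": {"ipConfiguration": {
    "ipv4Enabled": True,
    "authorizedNetworks": [{"value": "0.0.0.0/0"}],
    "requireSsl": False,
}}}
print(sql_findings(sample))
```

An instance with public IP alone may be acceptable in development; the combination of public IP, open authorized networks, and no SSL is the case to escalate.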
6. Missing Audit Logging Configuration
Frequency: Default audit logging is insufficient in over 60% of projects.
GCP enables Admin Activity audit logs by default, but Data Access logs — which record who read what data and when — are disabled by default for most services. Without Data Access logs, you have no visibility into whether someone exported your entire Cloud SQL database or downloaded sensitive files from Cloud Storage.
Why it happens: Data Access logs can generate significant volume and cost. Teams either don't know they're disabled by default or deliberately leave them off to control logging costs.
What to do:
- Enable Data Access audit logs for security-critical services at minimum: Cloud Storage, BigQuery, Cloud SQL, Secret Manager, and IAM.
- Use log exclusion filters to manage volume — exclude routine read operations on non-sensitive resources while capturing access to sensitive data.
- Route audit logs to a dedicated logging project with restricted access to prevent tampering.
- Set up log-based alerts for suspicious data access patterns.
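Data Access logs are configured through the auditConfigs section of the project IAM policy. A sketch of the relevant fragment, assuming it is merged into a policy file applied with `gcloud projects set-iam-policy PROJECT_ID policy.yaml`:

```yaml
# Fragment of a project IAM policy. Enables Data Access logging for
# Cloud Storage; repeat the block per service, or use the special
# service name allServices to cover everything at once.
auditConfigs:
  - service: storage.googleapis.com
    auditLogConfigs:
      - logType: ADMIN_READ
      - logType: DATA_READ
      - logType: DATA_WRITE
```

Be careful when applying: set-iam-policy replaces the whole policy, so always start from a freshly fetched copy.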
7. API Keys Without Restrictions
Frequency: Unrestricted API keys found in over 40% of projects.
GCP API keys can be restricted by HTTP referrer, IP address, and API scope. Unrestricted API keys work from any IP, any referrer, and can call any enabled API in the project. If leaked — through client-side JavaScript, mobile app decompilation, or source code exposure — they can be used to consume services and rack up charges.
Why it happens: Creating a restricted API key requires knowing which APIs and referrers to allow. During development, teams create unrestricted keys and embed them in client-side code. Restricting them later requires identifying all legitimate usage patterns.
What to do:
- Audit all API keys in every project. Delete unused keys immediately.
- Restrict remaining keys by API scope (only the specific APIs they need) and by application restriction (HTTP referrers for web apps, IP addresses for server apps, Android/iOS app restrictions for mobile).
- Rotate API keys regularly by creating a replacement key, migrating clients to it, and then deleting the old key — API keys have no built-in rotation or versioning mechanism.
- Consider replacing API keys with service account authentication where possible, as service accounts provide finer-grained access control and audit logging.
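The key audit can be scripted against an export. A sketch, assuming the JSON shape from `gcloud services api-keys list --format=json`, where each key may carry a "restrictions" object with an API scope (apiTargets) and one application restriction (browserKeyRestrictions, serverKeyRestrictions, etc.); the inline sample stands in for that export:

```python
def unrestricted_keys(keys):
    """Names of API keys missing an API scope or an application restriction."""
    findings = []
    for key in keys:
        restrictions = key.get("restrictions", {})
        has_api_scope = bool(restrictions.get("apiTargets"))
        # Application restrictions all end in "KeyRestrictions" in this shape.
        has_app_restriction = any(k.endswith("KeyRestrictions") for k in restrictions)
        if not (has_api_scope and has_app_restriction):
            findings.append(key.get("displayName", "<unnamed>"))
    return findings

sample = [
    {"displayName": "web-key",
     "restrictions": {
         "apiTargets": [{"service": "maps.googleapis.com"}],
         "browserKeyRestrictions": {"allowedReferrers": ["https://example.com/*"]},
     }},
    {"displayName": "legacy-key"},  # no restrictions at all
]
print(unrestricted_keys(sample))
```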
8. Disabled or Misconfigured VPC Flow Logs
Frequency: Flow Logs are disabled on over 50% of subnets in scanned projects.
VPC Flow Logs capture network flow information — source IP, destination IP, ports, protocols, bytes transferred — for traffic flowing through your VPC. Without Flow Logs, you have no network-level visibility into what's communicating with what, making incident detection and forensic investigation significantly harder.
Why it happens: Flow Logs have an associated cost for log storage and ingestion. Teams disable them to reduce costs or never enable them because they weren't aware of the capability.
What to do:
- Enable VPC Flow Logs on all subnets, at minimum for production VPCs.
- Use a sampling rate of 50% or higher for security-relevant subnets. Lower sampling rates reduce cost but may miss short-lived connections.
- Keep the aggregation interval at 5 seconds (the default) for timely alerting. Longer intervals reduce log volume but are too slow for real-time detection.
- Export Flow Logs to BigQuery for analysis and long-term retention.
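Subnet coverage can be verified from an export. A sketch, assuming the shape from `gcloud compute networks subnets list --format=json`, where Flow Logs settings live under logConfig (enable, flowSampling); the inline sample stands in for that export:

```python
def flow_log_findings(subnet):
    """Flag a subnet whose VPC Flow Logs are off or sampled too sparsely."""
    log_cfg = subnet.get("logConfig", {})
    findings = []
    if not log_cfg.get("enable"):
        findings.append("flow logs disabled")
    elif log_cfg.get("flowSampling", 0) < 0.5:  # the 50% floor recommended above
        findings.append("sampling below 50%")
    return findings

sample = {"name": "prod-subnet",
          "logConfig": {"enable": True, "flowSampling": 0.1}}
print(flow_log_findings(sample))
```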
9. Compute Instances with Default Network Tags and No Shielded VM
Frequency: Over 35% of instances lack shielded VM features.
Shielded VM provides verified boot integrity, vTPM, and integrity monitoring. Without it, a compromised instance could be running modified boot firmware that persists across reboots and evades OS-level detection. Additionally, instances without specific network tags are matched by every untargeted firewall rule in the network, and those broad rules are often overly permissive.
Why it happens: Shielded VM must be enabled at instance creation time for most machine types. Teams using older instance templates or automated provisioning that doesn't specify shielded VM settings create unprotected instances by default.
What to do:
- Enable Shielded VM (Secure Boot, vTPM, and Integrity Monitoring) on all new instances. Update instance templates to include these settings.
- Use the compute.requireShieldedVm organization policy to enforce Shielded VM for all new instances.
- Assign specific network tags to every instance and create firewall rules that target those tags rather than applying to all instances in the network.
- Audit existing instances and migrate workloads to shielded VM instances during the next maintenance window.
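The audit of existing instances can be scripted. A sketch, assuming the shape from `gcloud compute instances describe INSTANCE --format=json`, where the three features live under shieldedInstanceConfig; the inline sample stands in for that export:

```python
SHIELDED_FLAGS = ("enableSecureBoot", "enableVtpm", "enableIntegrityMonitoring")

def shielded_findings(instance):
    """List Shielded VM features that are absent or disabled on an instance."""
    cfg = instance.get("shieldedInstanceConfig", {})
    return [flag for flag in SHIELDED_FLAGS if not cfg.get(flag)]

sample = {"name": "legacy-vm",
          "shieldedInstanceConfig": {"enableVtpm": True}}
print(shielded_findings(sample))
```

Instances with any finding go on the migration list for the next maintenance window.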
10. Secret Manager Secrets Accessible to Broad IAM Principals
Frequency: Over-permissioned secret access found in approximately 20% of projects using Secret Manager.
Secret Manager is the right place to store credentials, API keys, and certificates. But storing secrets securely is only half the equation — the other half is ensuring that only the workloads and users that need a specific secret can access it.
Why it happens: Teams grant roles/secretmanager.secretAccessor at the project level, giving every service account access to every secret. The intent is usually to allow a single application to access its secrets, but the blast radius is the entire project.
What to do:
- Grant roles/secretmanager.secretAccessor at the individual secret level, not the project level.
- Use IAM Conditions to further restrict access — for example, limiting access to specific service accounts and requiring that requests originate from within your VPC.
- Enable audit logging for Secret Manager to track who accessed which secrets and when.
- Rotate secrets regularly and use secret versioning to enable zero-downtime rotation.
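Project-level secretAccessor grants are easy to surface from the same IAM export used earlier. A sketch, assuming a project policy from `gcloud projects get-iam-policy PROJECT_ID --format=json`; the inline sample stands in for that export:

```python
def project_level_secret_access(policy):
    """Members granted secretAccessor on the whole project (too broad)."""
    members = []
    for binding in policy.get("bindings", []):
        if binding.get("role") == "roles/secretmanager.secretAccessor":
            members.extend(binding.get("members", []))
    return members

sample = {"bindings": [
    {"role": "roles/secretmanager.secretAccessor",
     "members": ["serviceAccount:app@example.iam.gserviceaccount.com"]},
    {"role": "roles/viewer", "members": ["user:dev@example.com"]},
]}
print(project_level_secret_access(sample))
```

Each member returned should be re-granted on only the specific secrets it needs, then removed from the project-level binding.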
Patterns Behind the Misconfigurations
Looking across these ten issues, three root causes emerge repeatedly.
Defaults aren't secure enough. GCP's defaults are better than most, but the default service account role, default firewall rules, and default audit logging configuration all leave gaps. Every new project should start with a hardening checklist that addresses these defaults before any workloads are deployed.
Development shortcuts become production configurations. Broad IAM roles, public IP addresses, and unrestricted firewall rules are created for development speed and never tightened for production. The fix is organizational: enforce different security baselines for development and production projects using Organization Policies.
Manual configuration doesn't scale. Teams that provision infrastructure through the GCP Console make these mistakes more frequently than teams using Infrastructure as Code. Terraform modules and organization policies that encode security requirements prevent entire categories of misconfigurations.
Fixing These at Scale
Finding these misconfigurations manually — clicking through the GCP Console project by project — doesn't scale past a handful of projects. Automated scanning that continuously evaluates your GCP configuration against known security benchmarks is the only sustainable approach.
Platforms like Nuvm scan GCP projects for these misconfigurations and hundreds more, mapping findings to CIS Benchmarks and compliance frameworks. When a new misconfiguration is introduced — whether through the Console, gcloud CLI, Terraform, or API — it's detected and flagged before it can be exploited.
The goal isn't zero findings. The goal is knowing what your exposure is, understanding which findings carry real risk, and systematically remediating the ones that matter most. These ten misconfigurations are the place to start.