News
Gemini LLM Official Blog: Introducing Gemma 4 on Google Cloud: Our most capable open models yet - Today, we’re bringing Gemma 4 to you on Google Cloud, including Vertex AI, Cloud Run, GKE, and Sovereign Cloud.
Generative AI LLM Official Blog: Introducing Veo 3.1 Lite and a new Veo upscaling capability on Vertex AI - The Veo 3.1 family now includes three tiers, all of which feature native audio generation capabilities on Vertex AI.
ADK AI Go: ADK Go 1.0 Arrives! - The launch of Agent Development Kit (ADK) for Go 1.0 marks a significant shift from experimental AI scripts to production-ready services by prioritizing observability, security, and extensibility. Key updates include native OpenTelemetry integration for deep tracing, a new plugin system for self-healing logic, and "Human-in-the-Loop" confirmations to ensure safety during sensitive operations. Additionally, the release introduces YAML-based configurations for rapid iteration and refined Agent2Agent (A2A) protocols to support seamless communication across different programming languages. This framework empowers developers to build complex, reliable multi-agent systems using the high-performance engineering standards of Go.
ADK Agents AI Java: Announcing ADK for Java 1.0.0: Building the Future of AI Agents in Java - Google has released version 1.0.0 of the Agent Development Kit (ADK) for Java, introducing powerful new features like Google Maps grounding, built-in URL fetching, and a standardized Agent2Agent protocol for cross-framework collaboration. The update enhances agent control through a new "App" and "Plugin" architecture, which allows for global logging, automated context window management via event compaction, and "Human-in-the-Loop" workflows for action confirmations. Additionally, the release provides robust session and memory services using Google Cloud integrations like Firestore and Vertex AI to manage long-term state and large data artifacts.
Networking Official Blog: Envoy: A future-ready foundation for agentic AI networking - In the world of AI agents, the Envoy networking proxy consistently enforces governance and security across all agentic paths, at scale.
AI Hypercomputer Event Google Kubernetes Engine Infrastructure Official Blog: Top Infrastructure and GKE Sessions at Cloud Next '26 - Don't miss the top Cloud Next '26 sessions on infrastructure. Learn about AI Hypercomputer, GKE's future, enterprise migration, agentic AI, and cost-effective scale.
Google Kubernetes Engine Official Blog: Uplevel your workload scaling performance with GKE active buffer - Active Buffer in GKE provides a native solution for low-latency workload scaling by maintaining warm capacity buffers.
Articles, Tutorials
Infrastructure, Networking, Security, Kubernetes
Official Blog Threat Intelligence: North Korea-Nexus Threat Actor Compromises Widely Used Axios NPM Package in Supply Chain Attack - A North Korea-nexus threat actor targeted the popular axios NPM package in a massive supply chain attack.
DevOps Kubernetes Networking: GKE Ingress: External vs Internal (Instance Groups vs NEGs) - Why your internal load balancer uses NEGs instead of Instance Groups, and everything that changes because of it.
Cloud Router: Google Cloud Router: Using BGP Policies to use BGP Communities to create traffic symmetry - This article demonstrates how to achieve traffic symmetry in hybrid cloud environments by leveraging Google Cloud Router BGP Policies. It explains how BGP Communities can be used to tag on-premises routes, enabling Cloud Router policies to manipulate route priorities and ensure traffic flows symmetrically. This method is crucial for preventing issues with stateful firewalls that might drop asymmetric response traffic.
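The priority-manipulation idea can be sketched in miniature: on-premises routers tag each advertised route with a BGP community, and a Cloud Router policy translates the tag into a route priority so that both directions prefer the same path. The community values, priorities, and tunnel names below are hypothetical, and real policies are configured on Cloud Router, not in Python — this is only the selection logic.

```python
# Hypothetical mapping from community tag to route priority; lower wins.
# These values are illustrative, not Google Cloud defaults.
PRIORITY_BY_COMMUNITY = {
    "65000:100": 100,   # primary on-prem link: preferred
    "65000:200": 200,   # backup link: less preferred
}

def best_route(routes):
    """Pick the route with the lowest priority, mimicking how a Cloud
    Router policy could rank routes by the community tag attached
    on-premises so return traffic uses the same tunnel."""
    return min(routes, key=lambda r: PRIORITY_BY_COMMUNITY.get(r["community"], 1000))

routes = [
    {"prefix": "10.0.0.0/24", "next_hop": "vpn-tunnel-2", "community": "65000:200"},
    {"prefix": "10.0.0.0/24", "next_hop": "vpn-tunnel-1", "community": "65000:100"},
]
print(best_route(routes)["next_hop"])  # both directions converge on vpn-tunnel-1
```

Because the stateful firewall sees requests and responses on the same tunnel, no asymmetric traffic is dropped.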
Batch DevOps GPU: Running GPU Workloads with Google Cloud Batch — Flex-start VMs & Calendar-mode Reservations - Google Cloud Batch now provides enhanced provisioning models, Flex-start VMs and Calendar-mode Reservations, for running GPU workloads with the Dynamic Workload Scheduler. These options allow users to explicitly manage the trade-off between cost-optimization and guaranteed capacity, addressing the challenge of acquiring scarce GPU resources.
Google Kubernetes Engine Kubernetes: Making “Best Effort” Even Better: Improving the Kueue Scheduler - The Kueue scheduler's "best effort" strategy struggled under high load, causing smaller jobs to starve while large ones monopolized the queue. With the v0.17 release, new optimizations, including the `SchedulingEquivalenceHashing` feature, significantly improve job admission latency and scalability.
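A rough sense of why equivalence hashing helps, as a self-contained sketch (this is not Kueue's actual implementation): hash each job's resource shape so that once one job of a given shape fails to fit, every equivalent job in the cycle is skipped instead of re-evaluated.

```python
import hashlib
import json

def equivalence_hash(job):
    # Jobs with identical queue + resource requests are "equivalent".
    key = json.dumps({"queue": job["queue"], "requests": job["requests"]},
                     sort_keys=True)
    return hashlib.sha256(key.encode()).hexdigest()

def schedule(jobs, fits):
    no_fit, admitted, evaluations = set(), [], 0
    for job in jobs:
        h = equivalence_hash(job)
        if h in no_fit:        # shape already known not to fit this cycle
            continue
        evaluations += 1
        if fits(job):          # positive results aren't cached here, since
            admitted.append(job["name"])  # capacity changes after admission
        else:
            no_fit.add(h)
    return admitted, evaluations

big = [{"name": f"big-{i}", "queue": "q", "requests": {"cpu": 64}} for i in range(50)]
small = [{"name": f"small-{i}", "queue": "q", "requests": {"cpu": 2}} for i in range(50)]
admitted, evaluations = schedule(big + small, fits=lambda j: j["requests"]["cpu"] <= 4)
print(len(admitted), evaluations)  # 50 admitted with 51 evaluations, not 100
```

The one evaluation of the first oversized job answers for all 49 equivalent siblings, which is the kind of saving that shows up as lower admission latency under load.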
Compute Engine FinOps: The Infrastructure Engineer’s Guide to Benchmarking Price-Performance Gains from Google Cloud VM Modernization - This guide offers a structured benchmarking process for modernizing Google Cloud VMs to achieve significant price-performance improvements. It details how to define performance signals, conduct rigorous testing using repeatable benchmarks and real-world canary deployments, and convert findings into a clear price-performance story. The approach helps engineering teams confidently quantify gains and justify upgrading to newer machine families like N4 for optimal resource utilization and cost savings.
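The core arithmetic behind a "price-performance story" is small enough to sketch: normalize a benchmark score by the hourly VM price, then compare generations. The scores and prices below are made-up placeholders, not real GCP list prices.

```python
def price_performance(score, hourly_price):
    # Work done per dollar-hour: higher is better.
    return score / hourly_price

def gain(old, new):
    # Percentage improvement of new over old in price-performance terms.
    return (price_performance(*new) / price_performance(*old) - 1) * 100

old_vm = (1000, 0.40)   # (benchmark score, $/hour), e.g. an older machine family
new_vm = (1300, 0.42)   # e.g. a newer family such as N4 (numbers are illustrative)
print(f"{gain(old_vm, new_vm):.1f}% price-performance gain")  # 23.8%
```

A 30% raw performance gain at a 5% price increase still nets out to roughly a quarter more work per dollar, which is the figure worth putting in front of stakeholders.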
GKE Autopilot Google Kubernetes Engine Kubernetes: Using Autopilot ComputeClasses on GKE Standard Clusters - Google Kubernetes Engine (GKE) now offers Autopilot ComputeClasses for Standard clusters, providing a flexible hybrid approach to node management. This allows users to specify a compute class in their pod definitions, prompting GKE to automatically provision and manage dedicated nodes for those specific workloads. This enables dynamic resource allocation, such as utilizing Spot VMs or particular machine families, without needing to manage entire node pools or sacrificing the control offered by Standard clusters.
Official Blog Threat Intelligence: vSphere and BRICKSTORM Malware: A Defender's Guide - A detailed guide for hardening vSphere Virtual Center with a focus on the BRICKSTORM backdoor and associated malware activity.
Generative AI Google Kubernetes Engine LLM Official Blog: Run real-time and async inference on the same infrastructure with GKE Inference Gateway - GKE Inference Gateway treats accelerator capacity as a fluid resource pool, serving both workloads that need deterministic latency and those that need high throughput.
CISO Official Blog: Cloud CISO Perspectives: RSAC '26: AI, security, and the workforce of the future - Drop in on an RSA Conference fireside chat between Francis deSouza and Nick Godfrey, and hear all about our announcements from the show.
App Development, Serverless, Databases, DevOps
AI Cloud Run Gemini: Deploy Gemma 4 on Cloud Run: Pay Only When You Actually Use It - This article details how to deploy Google's powerful new Gemma 4 AI models on Cloud Run, enabling users to pay only for active usage through its "scale to zero" feature.
AlloyDB Cloud SQL Generative AI LLM Official Blog: Activating Your Data Layer for Production-Ready AI - Prepare your data for production AI. Explore Google AlloyDB and Cloud SQL labs for semantic search, RAG, multimodal embeddings, AI functions, and NL2SQL query generation.
Cloud Spanner Databases Generative AI LLM Official Blog: Spanner's multi-model advantage for the era of agentic AI - Discover how Spanner's unified multi-model database (relational, vector, graph) provides a scalable foundation for agentic AI, eliminating silos and accelerating development.
Cloud Spanner Databases Official Blog: Real-world success with Spanner’s fully interoperable multi-model database - See eight real-world examples of how customers are using Spanner's fully interoperable multi-model database for fraud detection, recommendation engines, and agentic AI at scale.
Cloud Spanner: Securing Identity Sprawl: Building a Real-Time Access Graph with Google Cloud Spanner - This article details a solution for combating identity sprawl and hidden access risks in enterprise environments through a real-time access graph built with Google Cloud Spanner. It explains how traditional security tools struggle with deeply nested permissions, leaving organizations vulnerable.
AlloyDB Databases GCP Experience Official Blog: How ID.me Scaled to 160M Users While Reducing Operational Risk - See how ID.me migrated 50TB of data to Google Cloud, using AlloyDB and Vertex AI to scale digital identity for 160M members, reduce operational risk, and detect AI-driven fraud in real time.
Big Data, Analytics, ML&AI
AI GCP Experience Official Blog Sustainability: How AI-powered tools are driving the next wave of sustainable infrastructure and reporting - Learn how Google and Equinix use AI to transform sustainability reporting from manual compliance to a strategic asset for efficiency and real-world impact.
Official Blog Vertex AI: Create Expert Content: Architect A Personalized Multi-Agent System with Long-Term Memory - Learn how to build a robust multi-agent architecture for "Dev Signal," featuring a Root Orchestrator and specialist agents. Integrate the Vertex AI Memory Bank for long-term preference learning.
Data Analytics GCP Experience Official Blog: How Honeylove boosts product quality and service efficiency with BigQuery - With the power of AI-ready data and conversational tools, this leading intimates brand is able to iterate on products faster and deliver the quality and variety its customers love.
BigQuery Security: Row-Level & Column-Level Security in BigQuery: A comprehensive guide to fine-grained data access control - This guide explores Google BigQuery's Row-Level Security (RLS) and Column-Level Security (CLS), offering a comprehensive solution for fine-grained data access control. RLS allows administrators to restrict which rows a user can see, while CLS controls access to specific columns or masks their values. These features integrate seamlessly with Google IAM, eliminating the need for data duplication and simplifying data governance within a single BigQuery table.
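What RLS and CLS enforce can be modeled in a few lines of plain Python: RLS applies a per-user row filter predicate, and CLS masks protected columns in whatever survives the filter. The policy structure below is illustrative only — in BigQuery these are defined with `CREATE ROW ACCESS POLICY` statements and policy tags, not application code.

```python
# Toy dataset standing in for a BigQuery table.
ROWS = [
    {"region": "EMEA", "revenue": 100, "email": "a@example.com"},
    {"region": "APAC", "revenue": 200, "email": "b@example.com"},
]

# Hypothetical policies: a row predicate per user (RLS) and a set of
# masked columns per user (CLS).
ROW_POLICY = {"analyst_emea": lambda row: row["region"] == "EMEA"}
MASKED_COLUMNS = {"analyst_emea": {"email"}}

def query(user, rows):
    # Default-deny: users with no row policy see nothing.
    predicate = ROW_POLICY.get(user, lambda r: False)
    masked = MASKED_COLUMNS.get(user, set())
    visible = [r for r in rows if predicate(r)]
    return [{k: ("****" if k in masked else v) for k, v in r.items()}
            for r in visible]

print(query("analyst_emea", ROWS))  # one EMEA row, email masked
```

Because both controls are evaluated at query time against a single table, there is no need to maintain per-team copies of the data.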
AI BigQuery Cloud Pub/Sub: Real-time Intelligent Triage Engine with BigQuery Continuous Queries - This article demonstrates how to build a real-time intelligent triage engine using Google Cloud's BigQuery Continuous Queries, Vertex AI (Gemini), and Pub/Sub.
BigQuery Cloud Dataproc Python: When SQL Hits the Wall: Migrating Vehicle Analytics from BigQuery to PySpark - A 10M-row dataset, SQL’s limits, and a migration to PySpark on Dataproc.
LLM Machine Learning TPU: Engineering Journal: Fine-Tuning Gemma 3 1B with JAX and Tunix on Google Cloud TPU v5e — Full Workflow and Troubleshooting Guide - This article provides a comprehensive guide to fine-tuning Gemma 3 1B on Google Cloud TPU v5e using JAX and Tunix, detailing the full workflow and troubleshooting common issues.
AI AI Hypercomputer LLM: Boost Training Goodput: How Continuous Checkpointing Optimizes Reliability in Orbax and MaxText - The newly introduced continuous checkpointing feature in Orbax and MaxText is designed to optimize the balance between reliability and performance during model training, addressing issues with conventional fixed-frequency checkpointing. Unlike fixed intervals—which can either compromise reliability or bottleneck performance—continuous checkpointing maximizes I/O bandwidth and minimizes failure risk by asynchronously initiating a new save operation only after the previous one successfully completes. Benchmarks demonstrate that this approach significantly reduces checkpoint intervals and results in substantial resource conservation, especially in large-scale training jobs where mean-time-between-failure (MTBF) is short.
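A back-of-the-envelope sketch of why the scheduling change matters (this is not Orbax code, and all the timings are assumed): with continuous checkpointing the effective interval shrinks to the save time itself, so the expected work lost per failure shrinks with it.

```python
def expected_lost_work(interval_s, mtbf_s):
    # On failure you lose, on average, half a checkpoint interval of work;
    # dividing by MTBF gives the fraction of runtime lost to rework.
    return interval_s / 2 / mtbf_s

save_time_s = 60          # time for one async checkpoint save (assumed)
fixed_interval_s = 600    # a conservative fixed schedule (assumed)
mtbf_s = 4 * 3600         # short MTBF typical of large jobs (assumed)

fixed = expected_lost_work(fixed_interval_s, mtbf_s)
continuous = expected_lost_work(save_time_s, mtbf_s)  # interval == save time
print(f"fixed: {fixed:.2%} lost, continuous: {continuous:.2%} lost")
```

Under these assumed numbers the lost-work fraction drops by 10x, and since the next save starts only after the previous one completes, I/O is never oversubscribed the way a too-aggressive fixed interval would be.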
ADK AI: Why Google ADK’s Human-in-the-Loop Story Has a Production Gap — And one way it could be fixed - A product manager’s journey from identifying a real developer pain point to shipping an open source fix for Google’s Agent Development Kit.
A2A ADK Generative AI Machine Learning: Agent to UI Protocol (A2UI) with Agent Development Kit (ADK) - Google's Agent to UI Protocol (A2UI) is a generative UI protocol that enables AI agents to dynamically create rich, interactive user interfaces for web, mobile, and desktop applications. It defines a standard for agents to send JSON messages describing UI structure and data, allowing for complex display beyond plain text. This article explores how to implement A2UI using Google’s Agent Development Kit (ADK) to build sophisticated agent-driven interfaces.
ADK AI: Developer’s Guide to Building ADK Agents with Skills - The Agent Development Kit (ADK) SkillToolset introduces a "progressive disclosure" architecture that allows AI agents to load domain expertise on demand, reducing token usage by up to 90% compared to traditional monolithic prompts. Through four distinct patterns—ranging from simple inline checklists to "skill factories" where agents write their own code—the system enables agents to dynamically expand their capabilities at runtime using the universal agentskills.io specification. This modular approach ensures that complex instructions and external resources are only accessed when relevant, creating a scalable and self-extending framework for modern AI development.
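The "progressive disclosure" idea reduces to a simple contract, sketched below with an illustrative skill registry (the layout and names are assumptions, not the ADK SkillToolset API): the base prompt carries only short skill summaries, and the full instructions load only when a skill is actually invoked.

```python
# Hypothetical skill registry: short summary always in context,
# long body loaded on demand.
SKILLS = {
    "sql-review": {"summary": "Review SQL for performance issues.",
                   "body": "Step 1: check indexes... (imagine 2,000 tokens here)"},
    "tf-audit":   {"summary": "Audit Terraform for policy violations.",
                   "body": "Step 1: parse the plan JSON... (imagine 3,000 tokens)"},
}

def system_prompt():
    # Only one-line summaries are always present in the context window.
    return "\n".join(f"- {name}: {s['summary']}" for name, s in SKILLS.items())

def load_skill(name):
    # The full body is disclosed only when the agent selects the skill.
    return SKILLS[name]["body"]

print(system_prompt())
```

The token savings come directly from this split: the detailed bodies never enter the context until one of them is relevant to the current turn.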
MCP: From Lag to Lightning: Optimizing MCP Toolbox with Built-in Observability - Google Cloud's MCP Toolbox for Databases now features comprehensive built-in observability, including distributed tracing and latency histograms. This powerful update enables developers to quickly identify and diagnose performance bottlenecks, transforming complex "log archaeology" into efficient, full-stack problem-solving.
AI Gemini: Bring state-of-the-art agentic skills to the edge with Gemma 4 - Google DeepMind has launched Gemma 4, a family of state-of-the-art open models designed to enable multi-step planning and autonomous agentic workflows directly on-device. The release includes the Google AI Edge Gallery for experimenting with "Agent Skills" and the LiteRT-LM library, which offers a significant speed boost and structured output for developers. Available under an Apache 2.0 license, Gemma 4 supports over 140 languages and is compatible with a wide range of hardware, including mobile devices, desktops, and IoT platforms like Raspberry Pi.
Slides, Videos, Audio
Security Podcast - #269 Reflections on RSA 2026 - Beyond AI AI AI AI AI AI AI.
Releases
Cloud SQL Postgres - Breaking: Vector assist (Preview) is temporarily disabled for all Cloud SQL for PostgreSQL instances. Feature: Cloud SQL for PostgreSQL now offers conversational analytics, which lets users query their operational data using natural language. This feature is powered by the Conversational Analytics API, which can help you translate complex human dialog into precise database queries to provide actionable insights. This feature is in Preview. For more information, see Conversational analytics for Cloud SQL for PostgreSQL overview.
CDN - Feature: For global external Application Load Balancers, you can configure Cloud CDN cache policies at various levels of a URL map, providing more granular control over caching. You can now apply specific caching logic based on hostnames, URL paths, HTTP headers, and query parameters. This feature is in Preview. For more information, see Cache policies in URL maps.
Virtual Private Cloud - Feature: Service producers can accept or reject connections from individual Private Service Connect endpoints. This feature is available in General Availability. Feature: Hybrid Subnets is available in General Availability. Hybrid subnet routing lets a VPC network share a CIDR block with a connected on-premises network. This configuration helps you migrate workloads to Google Cloud without needing to change any IP addresses. During migration, workloads that have migrated to your VPC network can communicate with those remaining in the on-premises network by using internal IP addresses. After all workloads have migrated, you can disable hybrid subnet routing to restore normal routing behavior.
Bigtable - Feature: You can view the details of Bigtable continuous materialized views in the Google Cloud console.
Cloud Logging - Change: For any new project that is created on or after March 30, 2026, if the project enables the Cloud Logging API, then Google Cloud Observability also enables the Telemetry API. Feature: The filter capabilities for log views have been extended to include support for disjunctive clauses, negation statements, and labels. To learn more, see Filters for log views. Announcement: Cloud Logging adds support for the ca multi-region. For a complete list of supported regions, see Supported regions.
Cloud Monitoring - Change: For any new project that is created on or after March 30, 2026, if the project enables the Cloud Monitoring API, then Google Cloud Observability also enables the Telemetry API. Feature: Application Monitoring has added support for the following resources: Vertex AI Workbench, GKE Gateway, GKE Ingress, and Layer 7 cross-regional Application Load Balancers. Additionally, dashboards for Kubernetes workloads display L4 and L7 traffic metrics when both are available. For more information, see Application Monitoring supported infrastructure. Feature: Application Monitoring has added a Services and Workloads tab, which lists your registered and discovered services and workloads. From this tab, you can register discovered services and workloads, search for services and workloads by functional type (such as Agent or MCP server), and open dashboards that display telemetry. For discovered services and workloads, Google Cloud Observability uses the Cloud Asset Inventory name to identify relevant information. To learn more, see List registered and discovered services and workloads, Application Monitoring overview, and View application telemetry.
Cloud Trace - Change: For any new project that is created on or after March 30, 2026, if the project enables the Cloud Trace API, then Google Cloud Observability also enables the Telemetry API. Feature: You can use the Cloud Trace API MCP server to let agents and AI applications interact with your trace data. This feature is in Preview.
BigQuery - Feature: You can now create BigQuery non-incremental materialized views over Spanner data to improve query performance by periodically caching results. This feature is generally available (GA). Feature: The following forecasting and anomaly detection functions and updates are generally available (GA): The AI.DETECT_ANOMALIES function supports providing a custom context window that determines how many of the most recent data points should be used by the model. The AI.FORECAST function supports specifying the latest timestamp value for forecasting. The AI.EVALUATE function supports the following: You can provide a custom context window that determines how many of the most recent data points should be used by the model. The function outputs the mean absolute scaled error for the time series. Feature: BigQuery ObjectRef values now support the following: You can run ObjectRef functions with either direct access or delegated access. The OBJ.MAKE_REF function automatically fetches the latest Cloud Storage metadata and populates this in the ref.details field. The OBJ.GET_READ_URL function returns a STRUCT value with a read URL and status columns and renders image results in the Cloud console. Use this function when you don't require a write URL. These features are generally available (GA). Feature: You can now use the CREATE CONNECTION, ALTER CONNECTION SET OPTIONS, and DROP CONNECTION data definition language (DDL) statements to manage Cloud resource connections with GoogleSQL. Additionally, you can now use the connection user type and PROJECT resource type with GRANT and REVOKE data control language (DCL) statements to manage connection and project access. These features are generally available (GA). Feature: You can set the column granularity when you create a search index, which stores additional column information in your search index to further optimize your search query performance. This feature is generally available (GA). 
Feature: The BigQuery Migration Service supports SQL translations from Snowflake SQL to GoogleSQL. This feature is now generally available (GA). With this change, the translation service supports a wider variety of Snowflake SQL and has improved support for several data types. Among other changes, the translation service maps Snowflake INTEGER and zero-scale NUMERIC types up to precision 38 to INT64 type in GoogleSQL for improved performance by default.
Dataplex - Feature: Automated cataloging of Looker (Google Cloud core) metadata as well as data lineage ingestion from BigQuery sources are now available in preview. For more information, see the Looker (Google Cloud core) documentation.
Looker - Announcement: Starting March 30, 2026, the following features will begin rolling out. Feature: Available in preview, Looker (Google Cloud core) now integrates with Dataplex Universal Catalog, providing a unified discovery and management experience for your Looker metadata. This allows you to search for Looker assets like LookML models and dashboards directly within Dataplex, giving you a comprehensive view of your data landscape. Feature: Available in preview for Looker (Google Cloud core), you can now track end-to-end data lineage from BigQuery to Looker content, including views, Explores, dashboards, and Looks, through the Looker and Dataplex lineage integration. This enables impact analysis to see how BigQuery changes affect downstream Looker (Google Cloud core) contents. Feature: When connecting Looker to your database, you can specify additional Java Database Connectivity (JDBC) parameters that you want Looker to include when it communicates with your database driver. In order to keep your connection secure, Looker now has an allowlist for the additional JDBC parameters that are supported for each database dialect. If you try to create or update a connection that uses unsupported JDBC parameters, Looker will display an error message that shows the non-allowed parameters. If you have an existing connection that uses unsupported JDBC parameters, Looker will remove the parameters and then connect to your database. For a list of the supported JDBC parameters for your dialect, see the "Supported JDBC parameters" section of the database configuration instructions page for your dialect. Feature: Available in preview, you can publish the Conversational Analytics data agents that you create in Looker to Gemini Enterprise. Feature: Available in preview, you can chat with Conversational Analytics data agents in user-defined dashboards and in LookML dashboards. Feature: The Enhanced Content Cleanup preview feature is now available. 
When this feature is enabled for your instance, it lets admins and content owners access an enhanced content management experience in Looker. The Enhanced Content Cleanup preview feature provides the following capabilities: Lets admins and users access a new Unused content folder to quickly identify and manage the unused content on a Looker instance. Lets admins programmatically schedule content cleanups for individual content or in bulk, and send automatic notifications to content owners. Lets content owners opt out of automated scheduled cleanups for specific content. Lets admins and users move content to the trash. This feature is disabled by default. Learn more about managing unused content with Enhanced Content Cleanup. Feature: Now generally available, Looker has full support for connections with AlloyDB for PostgreSQL. When you create a connection in Looker, you can now select Google Cloud AlloyDB for PostgreSQL from the Dialect drop-down menu. This update doesn't affect existing AlloyDB connections that were created using the PostgreSQL 9.5+ option in the Dialect menu. Feature: The New Looker Explore and Merge Query Experience preview feature is now available. When this feature is enabled for your instance, it lets individual users have the option to try the redesigned Looker Explore and Merge Query interfaces. The streamlined interfaces let Looker users find connections and gain insights from their data more quickly. For an AI-assisted experience, enable additional options on the Gemini in Looker page in the Platform section of the Admin panel. This feature is disabled by default. Learn more about the new Explore and Merge Query experience. Change: The Looker mobile application now supports sending alert notifications as push notifications to users who have the app installed on their mobile device.
Database Migration Service - Feature: Database Migration Service for homogeneous MySQL migrations now lets you migrate individual databases from your source. You can select the databases when you create a migration job for homogeneous MySQL migrations. Feature: You can use the Database Migration Service MCP server to enable agents and AI applications to view and manage running migration jobs. This feature is in Preview.
AlloyDB - Feature: AlloyDB now offers conversational analytics, which lets users query their operational data using natural language. This feature is powered by the Conversational Analytics API, which can help you translate complex human dialog into precise database queries to provide actionable insights. This feature is in Preview. For more information, see Conversational analytics for AlloyDB overview. Feature: Hot standby enhances the AlloyDB high availability (HA) architecture to improve failover times and to ensure consistent performance after failover. AlloyDB continuously replicates transactions to the standby node to keep caches warm and to ensure that the node is ready to take over quickly during a failover. This feature is generally available (GA) in PostgreSQL 18 and is automatically enabled for all new instances. For more information, see the AlloyDB high availability overview. Change: You can now enable Advanced Query Insights on primary clusters which have secondary clusters configured. Advanced Query Insights is not supported on secondary clusters. If you perform a switchover, you must re-enable Advanced Query Insights on the new primary cluster. Feature: The gcloud beta alloydb connect command is now available in Preview. This command provides a simplified way to connect securely to AlloyDB instances by using the AlloyDB Auth Proxy and psql. For more information, see Connect using gcloud CLI.
Cloud Spanner - Feature: Spanner offers conversational analytics, which lets users query their operational data using natural language. This feature is powered by the Conversational Analytics API, which can help you translate complex human dialog into precise database queries to provide actionable insights. This feature is in Preview. For more information, see Conversational analytics for Spanner overview. Feature: You can create BigQuery non-incremental materialized views over Spanner data to improve query performance by periodically caching results. This feature is generally available (GA). Feature: You can use Cloud resource connections with EXPORT DATA statements to reverse ETL (extract, transform, load) BigQuery data to Spanner. This feature is generally available (GA).
Cloud SQL MySQL - Feature: Cloud SQL for MySQL now offers conversational analytics, which lets users query their operational data using natural language. This feature is powered by the Conversational Analytics API, which can help you translate complex human dialog into precise database queries to provide actionable insights. This feature is in Preview. For more information, see Conversational analytics for Cloud SQL for MySQL overview. Feature: You can now migrate a subset of databases from an external server to a destination Cloud SQL for MySQL instance. For more information, see Configure Cloud SQL and the external server for replication.
Billing - Feature: Scenario modeling for committed use discount (CUD) recommendations is now generally available (GA). You can simulate scenarios for both spend-based and resource-based CUDs, and customize recommendations to purchase a commitment that maximizes your savings. For more information, see Simulate scenarios for CUDs savings.
Contact Center AI Platform - Full text on the release page.
Cloud Build - Feature: Cloud Build now supports uploading generic artifacts to generic repositories, and also downloading generic repositories as build dependencies. For more information, see genericArtifacts and Specify a generic artifact as a dependency.
Migration Center - Announcement: The discovery client 6.3.13 is available with a new feature and a bug fix. Feature: Preview: Added support for discovery of the following assets from your AWS account: Amazon API Gateway, Amazon EC2 Auto Scaling, Amazon Elastic Block Store (EBS), Amazon Elastic Container Registry (ECR), Amazon Simple Notification Service (SNS), AWS Application Load Balancer (ALB), AWS AppSync, AWS Batch, AWS Internet Gateway, Elastic IP address (EIP), and Elastic Network Interface (ENI). For more information, see Discover assets on AWS and Directly import asset inventory data from AWS. Fixed: Fixed an issue where some supported operating systems from the vSphere guest OS list were missing from the discovered assets. The discovery client now uses the VMware vSphere API to collect all supported OS families.
IAM - Feature: Gemini assistance in the IAM role picker is generally available. For more information, see Get predefined role suggestions with Gemini assistance. Deprecated: Extended attributes for Workforce Identity Federation are deprecated. For group mapping, we recommend using SCIM instead of extended attributes. For more information, see IAM deprecations.
Google Cloud Armor - Feature: Cloud Armor supports a visual Match Condition Builder that allows you to create complex expressions in Common Expression Language (CEL) without writing raw code. For more information, see Use the Match Condition Builder.
Cloud Storage - Feature: You can now use Storage Insights datasets to help manage your data security and compliance. The ability to identify publicly accessible objects is now generally available. Additionally, new fields in bucket and object metadata schemas, such as encryption, retentionPeriod, encryptionType, and retentionExpirationTime, help you audit encryption configurations and monitor data retention policies. For more information, see Storage Insights datasets and Dataset tables and schemas. Feature: You can configure which encryption types are allowed or prohibited for creating new objects in a bucket. For more information, see Enforce or restrict the encryption types for a bucket.
Load Balancing - Feature: SNI-based routing for proxy Network Load Balancers is now available in Preview. You can now route TLS traffic based on Server Name Indication (SNI) hostnames by using the new TLSRoute resource. The load balancer inspects the initial unencrypted ClientHello message to extract the SNI hostname and route connections to the appropriate backend service. This feature provides pure TLS passthrough without terminating the connection at the load balancer. Key benefits include end-to-end encryption (clients can establish secure mTLS or TLS sessions directly with origin servers), role-oriented management (the TLSRoute API lets platform administrators manage frontend infrastructure while service owners manage their own routes and backends independently), and simplified IP management (consolidate multiple services behind a single Private Service Connect (PSC) endpoint, reducing IPv4 address exhaustion). This feature is available for regional and cross-region proxy Network Load Balancers. For more information, see the guides on creating regional external, regional internal, and cross-region internal proxy Network Load Balancers with TLS routes. Feature: Certificate Manager certificates are available in the Google Cloud console while provisioning a load balancer. You can select a certificate map for the following load balancers: global external Application Load Balancers, classic Application Load Balancers, global external proxy Network Load Balancers, and classic proxy Network Load Balancers. You can select a Certificate Manager certificate for the following load balancers: regional external Application Load Balancers, regional internal Application Load Balancers, and cross-region internal Application Load Balancers. This feature is in General Availability.
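The routing behavior described for TLSRoute can be illustrated with a tiny lookup sketch: the proxy reads the server name from the ClientHello and forwards the still-encrypted stream to the matching backend without terminating TLS. The hostname-to-backend map below is hypothetical.

```python
# Hypothetical SNI hostname -> backend service map, standing in for
# the match rules a TLSRoute resource would express.
BACKENDS = {
    "api.example.com": "backend-api",
    "app.example.com": "backend-app",
}

def route(sni_hostname, default="backend-default"):
    # Exact-hostname match, with a fallback service when nothing matches.
    # In passthrough mode the connection is forwarded as-is; the proxy
    # never sees the plaintext, so clients can do mTLS end to end.
    return BACKENDS.get(sni_hostname, default)

print(route("api.example.com"))    # backend-api
print(route("other.example.com"))  # backend-default
```

Because many hostnames share one frontend, several services can sit behind a single PSC endpoint and IP address, which is where the IPv4 savings come from.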
Chronicle - Feature: Multi-stage queries in YARA-L are now GA. This feature lets you feed the output of one query stage into the input of another, enabling more granular data transformation than a single, monolithic query. You can use multi-stage queries in both Dashboards and Search to build sophisticated detection and visualization logic. No action is required to enable this feature. Learn more about how to create multi-stage queries with YARA-L 2.0. Change: Google Security Operations has updated the list of supported default parsers (full list on the release page).
Compute Engine - Feature: Generally available: The maximum throughput for a Hyperdisk ML disk is increased to 2,097,152 MiB/s from 1,200,000 MiB/s. Hyperdisk ML provides the highest throughput per disk for machine learning and other workloads that require high read throughput on immutable datasets. For more information, see About Hyperdisk ML. Feature: Preview: To control the use of the deprecated container startup agent, an option for deploying containers on Compute Engine instances, you can enforce the constraints/compute.managed.disableVmsWithContainerStartupAgent organization policy constraint. This constraint prevents the creation of Compute Engine instances that use the container startup agent and the gce-container-declaration metadata. You can also enforce this organization policy in dry-run mode to identify projects that use the deprecated metadata, without blocking resource creation. For more information, see Prevent the creation of VMs that use the container metadata and Migrate containers deployed on VMs during VM creation.
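The organization policy constraint above can be set declaratively. A minimal sketch, assuming a project-level policy with a placeholder project ID (the constraint name comes from the note; `dryRunSpec` is how organization policies express dry-run mode):

```yaml
# policy.yaml - enforce (or dry-run) the container startup agent constraint.
name: projects/my-example-project/policies/compute.managed.disableVmsWithContainerStartupAgent
spec:
  rules:
  - enforce: true
dryRunSpec:
  rules:
  - enforce: true
```

Apply it with `gcloud org-policies set-policy policy.yaml`; keeping only the `dryRunSpec` section surfaces violations in audit logs without blocking VM creation.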
Security Command Center - Feature: Risk Engine now supports aiplatform.googleapis.com/ReasoningEngine in both attack paths and high-value resource sets. Feature: Security Command Center Risk Engine supports Dataproc resources in attack paths and Dataproc clusters and jobs in high-value resource sets.
Cloud NAT - Announcement: The default TCP TIME_WAIT timeout for Cloud NAT is scheduled to decrease from 120 seconds to 30 seconds, across all regions, as follows:
- From June 30 to September 29, 2026: new Cloud NAT gateways will use either the 120-second or 30-second default, depending on when the update is deployed in a specific region.
- On or after September 30, 2026: all new Cloud NAT gateways in all regions will use the 30-second default.
Impact on gateways:
- New gateways: after the update is deployed in a region, all new Cloud NAT gateways created in that region will use the 30-second default. This change also applies if a pre-update gateway is deleted and then recreated.
- Existing gateways: Cloud NAT gateways created before the regional update will retain the 120-second default. You can adjust this value by using the --tcp-time-wait-timeout flag at any time. Cloud NAT gateways configured with a custom TIME_WAIT value aren't affected and will continue to use your configured custom value.
The following table outlines the applicable default timeout for new gateways throughout the deployment timeline.
Gateway type | Before June 30 | June 30 to September 29 | On or after September 30
New | 120 seconds | 30 or 120 seconds | 30 seconds
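If you depend on the current behavior, you can pin the timeout on a gateway so the rollout has no effect on it. A sketch with placeholder router, NAT, and region names, using the --tcp-time-wait-timeout flag mentioned above:

```shell
# Pin TCP TIME_WAIT to the old 120-second value (all names are placeholders).
gcloud compute routers nats update my-nat \
    --router=my-router \
    --region=us-central1 \
    --tcp-time-wait-timeout=120s
```

A gateway with an explicitly configured value is treated as custom and keeps that value through the default change.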
Service Mesh - Announcement: Managed Cloud Service Mesh using the TRAFFIC_DIRECTOR implementation now supports a limited implementation of the EnvoyFilter API. To learn about the supported fields and extensions, and how to use EnvoyFilter for features like local rate limiting, see Data plane extensibility with EnvoyFilter. To troubleshoot any issues while configuring, see Resolving data plane extensibility issues.
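For a sense of what local rate limiting via EnvoyFilter looks like, here is a sketch based on the upstream Istio local rate-limit example (namespace, app label, and token-bucket numbers are placeholders; confirm against the linked supported-fields list before relying on any of it in managed Cloud Service Mesh):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: local-rate-limit
  namespace: my-namespace
spec:
  workloadSelector:
    labels:
      app: my-app
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.local_ratelimit
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
          stat_prefix: http_local_rate_limiter
          token_bucket:
            max_tokens: 100        # burst size (placeholder)
            tokens_per_fill: 100   # tokens restored per interval (placeholder)
            fill_interval: 60s     # refill period (placeholder)
```

The patch inserts Envoy's local rate-limit HTTP filter ahead of the connection manager's filter chain on inbound sidecar listeners, capping requests per workload with a token bucket.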
Cloud TPU - Feature: Generally available: TPU7x is generally available (GA). TPU7x is the first release within the Ironwood family, Google Cloud's seventh-generation TPU. TPU7x supports large-scale AI training and inference, providing performance and cost-effectiveness for demanding workloads such as large language models (LLMs), mixture of experts (MoE) models, and diffusion models. For more information, see the TPU7x (Ironwood) documentation.
Document AI - Feature: Upgrading fine-tuned custom extractor processors is now available in Preview. This feature lets you fine-tune a new processor version on a newer base version while keeping the configuration of the previously selected fine-tuned processor version. It's available through the UI, in the Deploy & use tab of the console, and currently supports upgrading pretrained-foundation-model-v1.4-2025-02-05 to pretrained-foundation-model-v1.5-2025-05-05. For more information, see the training overview.
Dataform - Feature: The Dataform folders and repositories feature is now generally available (GA). This feature lets you organize code assets like notebooks and saved queries into a hierarchical structure with IAM policy inheritance. This release also introduces deleteTree API methods for deleting folders and team folders.
Dataproc - Announcement: New Dataproc on Compute Engine subminor image versions: 2.3.28-debian12, 2.3.28-ml-ubuntu22, 2.3.28-rocky9, 2.3.28-ubuntu22, 2.3.28-ubuntu22-arm Change: Dataproc on Compute Engine: Upgraded Apache Zookeeper to version 3.9.5 in image version 2.3. Fixed: Upgraded Dataproc Metastore Proxy to v0.0.79 to fix CVE-2026-24308 and CVE-2026-24281.
Dataproc Serverless - Announcement: Dataproc and Google Cloud Serverless for Apache Spark are now unified under the Managed Service for Apache Spark brand. This change consolidates our managed Spark deployment options into a single umbrella brand that includes the full breadth of our Spark capabilities. No existing functionality is being removed as part of this change, and there will be no impact to the Dataproc API, client library, CLI, or IAM names.
Cloud SQL SQL Server - Feature: Cloud SQL for SQL Server read pools are now generally available and provide operational simplicity and scaling for your read workloads. Read pools provide a single endpoint in front of up to seven read pool nodes and automatically load balance traffic. You can scale your read pool in several ways:
- Scale in or out: scale load balancing capacity horizontally by modifying the number of read pool nodes in the read pool. Each read pool supports between 1 and 7 read pool nodes.
- Scale up or down: scale load balancing capacity vertically by modifying the machine type associated with a read pool node. Once defined, the configuration is applied uniformly across all read pool nodes in the read pool.
For more information, see About read pools.
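The two scaling paths can be sketched with the gcloud CLI (instance name, node count, and machine type are placeholders, and the exact flag names should be confirmed against the read pools documentation):

```shell
# Scale out: change the number of read pool nodes (1-7; names are placeholders).
gcloud sql instances patch my-read-pool --node-count=5

# Scale up: change the machine type; the pool applies it uniformly to every node.
gcloud sql instances patch my-read-pool --tier=db-custom-8-51200
```

Because the endpoint sits in front of the pool, clients keep connecting to the same address while nodes are added, removed, or resized behind it.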
Chronicle SOAR - Announcement: Release 6.3.81 is now available for all regions. Feature: Playbook Condition and Multiple Choice Question flows: The maximum number of branches supported in Playbook Conditions and Multiple Choice Questions has been increased from 6 to 20, allowing for more complex branching logic within a single step. For more information, see Use flows in playbooks. Announcement: Release 6.3.82 is being rolled out to the first phase of regions as listed here. This release contains internal and customer bug fixes.