Technical Insights

5 BigQuery features worth knowing

Written by Luka Cempre | Oct 4, 2024 6:00:00 PM

Editor's note (May 2026): This article was originally published in October 2024, covering five BigQuery features that had just entered preview. We've updated it to reflect current availability status — most notably, continuous queries have since reached general availability (GA), a milestone we covered in our Google Cloud Next '26 recap. For a broader look at where BigQuery is heading, see BigQuery Is Growing Up: Key Takeaways from Cloud Next '25.

BigQuery has introduced many new features over the past couple of years. This post covers five that we find the most compelling and that could meaningfully benefit your data workflows.

Key takeaways

  • BigQuery now supports GROUP BY on complex types like arrays and structs, simplifying Google Analytics 4 (GA4) export queries.
  • Short query optimized mode delivers lower-latency results for dashboards and interactive queries.
  • Table Explorer provides field distribution overviews and starter queries for faster data exploration.
  • Change History lets you query up to seven days of table-level changes using SQL.
  • Continuous queries enable real-time streaming of table inserts to BigQuery, Pub/Sub, Bigtable, or Spanner (now generally available).

1. GROUP BY array/struct 

BigQuery added the ability to group by an array or struct. Grouping by these complex data types can simplify queries where you would otherwise have to access individual fields to achieve the same outcome. In the case of the Google Analytics 4 (GA4) export, for example, you can now GROUP BY the geo field instead of individually specifying each of its six nested fields for a detailed geo aggregation.

SELECT
  geo,
  COUNT(*) AS events
FROM
  `analytics_123456789.events_20240926`
GROUP BY 1
ORDER BY events DESC

 

All six geo fields are included in the output.

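For comparison, here is a sketch of the pre-feature equivalent, which must list each nested field explicitly (field names per the GA4 export schema):

```sql
SELECT
  geo.continent,
  geo.sub_continent,
  geo.country,
  geo.region,
  geo.metro,
  geo.city,
  COUNT(*) AS events
FROM
  `analytics_123456789.events_20240926`
GROUP BY 1, 2, 3, 4, 5, 6
ORDER BY events DESC
```

Grouping by the geo struct collapses those six GROUP BY references into one.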

2. Short query optimized mode 

The short query optimized mode allows users to run queries in a lower-latency mode, which is ideal for dashboards and quick interactive queries. If BigQuery determines that a query can run in this mode, it returns the result rows considerably faster but does not create the usual job, so no jobReference is available.

Google has since published additional detail on how this works. The mode processes eligible queries as a single stage rather than running them through BigQuery's typical multi-stage distributed execution. That single-stage approach, combined with history-based optimization (HBO), is what drives the latency improvement. Note that not every query will qualify. BigQuery makes the determination automatically, and the optimization is most effective for smaller working sets with pre-aggregation and filtering applied.
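As an illustration, a small, filtered, pre-aggregated query like the hedged sketch below (table name hypothetical) is the kind of dashboard workload most likely to qualify; whether it actually runs in short query optimized mode is decided by BigQuery automatically at execution time:

```sql
SELECT
  event_name,
  COUNT(*) AS events
FROM
  `my-project.analytics.daily_events`  -- hypothetical pre-aggregated table
WHERE
  event_date = CURRENT_DATE()
GROUP BY event_name
ORDER BY events DESC
LIMIT 20
```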

3. Table Explorer

Table Explorer gives users a quick overview of the values and distributions in selected fields of individual tables. On top of that, it exposes the queries behind those outputs, giving you a starting point for further exploration.
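The starter queries Table Explorer surfaces are similar in spirit to a simple value-distribution query; a hedged sketch for a single field (field choice is ours, not generated by the tool):

```sql
SELECT
  device.category AS value,
  COUNT(*) AS record_count
FROM
  `analytics_123456789.events_20240926`
GROUP BY value
ORDER BY record_count DESC
```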

4. Change history

Change history allows you to query the history of changes (updates, deletes, inserts, merges, streaming writes) in a table using the CHANGES function. A few important notes on current behavior, per Google's documentation:

  • Change history is retained for a table's configured time travel duration, which defaults to seven days.

  • You must enable change history tracking on the table before the changes you want to query occur, using the enable_change_history = TRUE option at table creation or via ALTER TABLE ... SET OPTIONS().

  • The CHANGES function cannot query the last 10 minutes of a table's history. The end_timestamp must be at least 10 minutes prior to the current time.

  • Enabling change history tracking incurs additional storage and compute costs, though these are typically small.
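Enabling tracking on an existing table is a one-line statement; using the example table from this section:

```sql
ALTER TABLE explorations.Olympics_CHANGES
SET OPTIONS (enable_change_history = TRUE);
```

Only changes that occur after this option is set will be queryable.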

In the example below, the query first creates a table, adds new rows to it, and then updates one of the existing rows. The snippet that follows demonstrates querying the change table and reviewing the outputs.

Create a test table:

#1 Create a new table
CREATE OR REPLACE TABLE explorations.Olympics_CHANGES
OPTIONS (enable_change_history = TRUE)
AS
SELECT "Tokyo" AS city, 2020 AS year
UNION ALL
SELECT "Beijing" AS city, 2022 AS year
UNION ALL
SELECT "Paris" AS city, 2024 AS year
UNION ALL
SELECT "Cortina" AS city, 2026 AS year;

#2 Add new cities
INSERT INTO explorations.Olympics_CHANGES VALUES ("Los Angeles", 2028);
INSERT INTO explorations.Olympics_CHANGES VALUES ("French Alps", 2030);

#3 Update a city
UPDATE explorations.Olympics_CHANGES
SET year = 2021 WHERE city = "Tokyo" AND year = 2020;

 

Query for table changes using the CHANGES(TABLE table_name, start_timestamp, end_timestamp) function. If start_timestamp is set to NULL, the function looks back to the creation of the table (within the time travel window); end_timestamp must be at least 10 minutes (600 seconds) earlier than the current time.

 

# REVIEW CHANGES
SELECT
  city,
  year,
  _CHANGE_TYPE AS change_type,
  _CHANGE_TIMESTAMP AS change_time
FROM
  CHANGES(
    TABLE explorations.Olympics_CHANGES,
    NULL,
    TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 11 MINUTE))
ORDER BY change_time

 

The output of the CHANGES query above shows the four initial cities added at table creation, followed by the inserts for Los Angeles and French Alps, and finally the update to Tokyo's year (reflecting the COVID-19 postponement). The update also removes the original Tokyo 2020 row.

5. Continuous queries (now generally available)

Continuous queries are one of the most significant additions to BigQuery in recent years, and since this post was first published they have reached general availability (GA). Any data written to a table (via the BigQuery Storage Write API, streaming, batch load, or INSERT) can be pushed to a new BigQuery table, a Pub/Sub topic, Bigtable, or Spanner in real time. We covered what this means for marketing teams in our Google Cloud Next '26 recap.
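In its simplest form, a continuous query is a plain INSERT ... SELECT that BigQuery keeps running, appending new source rows to a destination table as they arrive. A minimal sketch (table names are hypothetical, and running this continuously also requires a reservation with the continuous query job mode enabled):

```sql
INSERT INTO `my-project.mydataset.realtime_copy` (user_pseudo_id, event_name)
SELECT
  user_pseudo_id,
  event_name
FROM `my-project.mydataset.events_source`;
```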

In the example below, we use a SQL query that filters on the event name "blog_view" and selects a few basic dimensions from the real-time Google Analytics 4 export to BigQuery from our website. The continuous query then pushes that data as a message to a Pub/Sub topic.

EXPORT DATA
  OPTIONS (
    format = 'CLOUD_PUBSUB',
    uri = 'https://pubsub.googleapis.com/projects/adswerve-data/topics/blog-readers')
AS (
  SELECT
    TO_JSON_STRING(
      STRUCT(
        (SELECT value.string_value
         FROM UNNEST(event_params)
         WHERE key = "page_location") AS page_location,
        user_pseudo_id,
        device.operating_system,
        session_traffic_source_last_click.manual_campaign.medium)) AS message
  FROM `adswerve-data.analytics_423652181.events_intraday_20241002`
  WHERE event_name = "blog_view"
)

 

The result: seconds after opening the following page on our blog, "https://adswerve.com/blog/ga4-bigquery-tips-guide-to-attribution?continous_queries=testing", we are able to pull the message from the Pub/Sub topic.

 

BigQuery's feature velocity has continued since these features first entered preview. For a broader look at where the platform is heading — including what the Cloud Next '26 announcements mean for marketing teams — read our Google Cloud Next '26 recap and BigQuery Is Growing Up: Key Takeaways from Cloud Next '25.

Adswerve provides BigQuery consulting and training for teams around the world. We'd love to talk through what's possible for your data workflows, so please get in touch with us.