The current version of Replicate does not support enterprises on GCP that want the flexibility to configure Replicate to attribute BigQuery compute costs to a different GCP project from the one where the BigQuery data is stored.
Our organisation currently has 400+ GCP projects. Some are IT-funded and IT-managed ('enterprise' projects); others are business-funded and business-managed ('agile' projects) used for data and analytics. Each GCP project may be funded by a different cost centre within the organisation.
As a Product Manager for our Enterprise Cloud Datalake, I would like the DataOps practice responsible for Replicate to be able to configure our 'ingest as a service' so that BQ compute and BQ storage costs can be attributed to different GCP projects.
This flexibility would support a number of current use cases, e.g.:
(1) Run a centralised replication service ('ingest as a service'), managed and funded by a central IT DataOps team, that distributes data to multiple BQ endpoints in different GCP projects across the organisation's GCP estate. The BQ compute cost would be attributed to the IT DataOps GCP project where Replicate is hosted, and the storage cost to the 'customer' GCP project, i.e. the internal business customer who requested the data.
(2) Attribute our entire enterprise Replicate hosting and run cost to a single GCP project (cost centre A), while landing the data in BQ in an entirely separate GCP project, in this case our Cloud Enterprise Datalake, where we want our analytics workloads and compute costs to be attributed separately (cost centre B).
This is a standard pattern for BigQuery and is supported by many other tools that integrate with it.
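For reference, this separation is already expressible in the BigQuery client libraries: the project passed to the client is the one billed for query compute, while the fully qualified table reference names the project that owns (and pays storage for) the data. A minimal sketch in Python, assuming hypothetical project IDs `it-dataops-prj` and `datalake-prj`:

```python
# Hypothetical project IDs - substitute your own.
BILLING_PROJECT = "it-dataops-prj"  # billed for query compute
DATA_PROJECT = "datalake-prj"       # owns the dataset; billed for storage


def qualified_table(data_project: str, dataset: str, table: str) -> str:
    """Fully qualified table ID in the project that stores the data."""
    return f"{data_project}.{dataset}.{table}"


def run_billed_query(dataset: str, table: str):
    """Run a query billed to BILLING_PROJECT against a table in DATA_PROJECT."""
    # Imported lazily so the sketch can be read/tested without the
    # google-cloud-bigquery package installed.
    from google.cloud import bigquery

    # The client's project is billed for the query, regardless of
    # which project the queried table lives in.
    client = bigquery.Client(project=BILLING_PROJECT)
    sql = (
        "SELECT COUNT(*) AS n "
        f"FROM `{qualified_table(DATA_PROJECT, dataset, table)}`"
    )
    return client.query(sql).result()
```

The same idea applies to a replication tool: the service-account/billing project configured for the connection pays for compute, while the target dataset's project pays for storage.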