We seem to be hitting a resource limit in Snowflake when updating or inserting data on tables with billions of rows. When one of these tables is updated or inserted into, the transaction is attempted as a single operation, which either runs for an extremely long time or never completes, even on a 4XL warehouse. The more inserts or updates there are, the bigger the warehouse needs to be, which makes processing unpredictable: we never know how many changes a given load will bring, so we cannot dynamically size the warehouse based on load, and we cannot leave it permanently on a larger size because that would needlessly burn credits. Snowflake has asked us to process these loads sequentially so they are not competing for memory with other queries on the 4XL.

We would like an option to set a limit on the size of updates and inserts, so that when the limit is exceeded the operation is broken up into smaller batches. Ideally the size threshold would be configurable in the project, so we could tune it to the warehouse size selected, or it could be adjusted dynamically based on the warehouse in use. This would let us keep processing on a small warehouse size and would make our loads far more predictable.

We are actively working with Snowflake support on this issue, but we think adding this option in compose would give us more flexibility and control over our workloads and would allow us to significantly reduce our costs.
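As a rough illustration of the behaviour we have in mind (not a proposal for how compose should implement it), the sketch below shows the kind of batching we currently have to script by hand with the snowflake-connector-python package. The table and column names (CHANGE_STAGE, TARGET_TABLE, ID, VAL), the threshold, and the bucket count are all hypothetical placeholders chosen for the example.

```python
# Minimal sketch of threshold-based batching, assuming a hypothetical staging
# table CHANGE_STAGE and target TARGET_TABLE joined on ID. All names and
# numbers below are placeholders for illustration only.
import snowflake.connector

BATCH_THRESHOLD = 50_000_000   # hypothetical per-statement row limit
NUM_BUCKETS = 16               # hypothetical number of batches when the limit is exceeded

conn = snowflake.connector.connect(
    account="my_account",      # placeholder connection details
    user="my_user",
    password="my_password",
    warehouse="SMALL_WH",
    database="MY_DB",
    schema="MY_SCHEMA",
)
cur = conn.cursor()

# How many changed rows are waiting in the staging table?
cur.execute("SELECT COUNT(*) FROM CHANGE_STAGE")
pending_rows = cur.fetchone()[0]

merge_template = """
    MERGE INTO TARGET_TABLE t
    USING (SELECT * FROM CHANGE_STAGE {bucket_filter}) s
      ON t.ID = s.ID
    WHEN MATCHED THEN UPDATE SET t.VAL = s.VAL
    WHEN NOT MATCHED THEN INSERT (ID, VAL) VALUES (s.ID, s.VAL)
"""

if pending_rows <= BATCH_THRESHOLD:
    # Small load: a single MERGE, exactly as today.
    cur.execute(merge_template.format(bucket_filter=""))
else:
    # Large load: split the change set into hash buckets and merge one bucket
    # at a time, so each statement stays within what a small warehouse can handle.
    for bucket in range(NUM_BUCKETS):
        bucket_filter = f"WHERE MOD(ABS(HASH(ID)), {NUM_BUCKETS}) = {bucket}"
        cur.execute(merge_template.format(bucket_filter=bucket_filter))

cur.close()
conn.close()
```

One consequence of splitting a single MERGE this way is that the batches commit as separate transactions, so the target table is updated incrementally rather than atomically; a configurable threshold would let each project weigh that trade-off against the warehouse size it wants to run on.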