We recently migrated from SQL Server 2014 to SQL Server 2019. One of the inserts into the _P table, labeled "Inserting Top Rows into TSTG_Entity_NAme_P for existing objects", joins to 10 files. It used to run in 3 minutes, but now it fails to complete even after 24 hours.
I would start by updating statistics on the objects involved, as statistics have a direct impact on how SQL Server builds execution plans for statements against those tables. You might also check for index fragmentation.
Performance improvements in later service packs may help, but since the database change seems to be the trigger of the bad performance, I would start with the above.
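For example, something like the following would cover both checks (the table and schema names here are placeholders taken from the thread; adjust to your own objects):

```sql
-- Refresh all statistics on the table with a full scan
UPDATE STATISTICS dbo.TSTG_Entity_NAme_P WITH FULLSCAN;

-- Check index fragmentation on the same table
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.TSTG_Entity_NAme_P'),
         NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id;
```

A common rule of thumb is to reorganize indexes above roughly 5-10% fragmentation and rebuild above roughly 30%, but the right thresholds depend on your workload.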
Thanks,
Dana
Thank you @Dana_Baldwin . We put in a workaround by manually editing the generated script to add an index, but this is a pain because if we regenerate the ETL Task we have to reapply the workaround. The irony is that on the legacy server the housekeeping job used to fail, while on the new server the housekeeping is working. The tasks performed by the housekeeping are as follows:
| Command | Index stats updated? | Column stats updated? |
| --- | --- | --- |
| `ALTER INDEX ... REORGANIZE` | No | No |
| `ALTER INDEX <index_name> ... REBUILD` | Yes, but only for stats associated with that index | No |
| `ALTER INDEX ALL ... REBUILD` | Yes, stats for all indexes are updated | No |
| `DBREINDEX` (old syntax) | Yes | Yes |
The above is the daily housekeeping. The weekly housekeeping also updates the column stats.
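To make the distinction concrete, these are the command shapes involved (index and table names are placeholders, not our actual objects):

```sql
-- REORGANIZE: defragments, but updates no statistics
ALTER INDEX IX_Example ON dbo.ExampleTable REORGANIZE;

-- REBUILD of one index: refreshes only that index's statistics
ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD;

-- REBUILD ALL: refreshes stats for all indexes, but not column stats
ALTER INDEX ALL ON dbo.ExampleTable REBUILD;

-- Weekly step: refresh the column (non-index) statistics as well
UPDATE STATISTICS dbo.ExampleTable WITH COLUMNS;
```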
I am intending to do some testing as follows:
- Check current performance after dropping the workaround
- Clear statistics and retest
I want to test whether the addition of the statistics is causing the issue. I do not fully understand how the optimizer takes account of newly created tables (the work tables that compose creates while the task runs).
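As a sketch, I could inspect and clear the statistics for the test along these lines (object and statistic names are placeholders; statistics that back an index can only be removed by dropping the index itself):

```sql
-- List statistics objects on the table, including auto-created column stats
SELECT s.name, s.auto_created, s.user_created
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.TSTG_Entity_NAme_P');

-- Drop one auto-created column statistic (placeholder name)
DROP STATISTICS dbo.TSTG_Entity_NAme_P.[_WA_Sys_placeholder];

-- Check whether the database auto-creates stats on new tables
SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases
WHERE database_id = DB_ID();
```

As I understand it, if `is_auto_create_stats_on` is 1, SQL Server creates single-column statistics the first time it compiles a query against a new table, which may be how the optimizer accounts for the work tables the task creates.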
We dropped the workaround and retested, and the optimizer is now able to pick the right plan for the query. We think this is because of the change to the housekeeping job that updates table and index statistics.