Hello,
I'm connecting to a SQL Server (version 2008) database that is quite large.
To avoid performance problems in the database while the script is running, my client wants to divide the data load into smaller chunks.
For example, we would like to load the product table in chunks of 10,000 products each.
The output is stored in a QVD file.
This is what we have come up with so far:
set varMaxCustomerId = 0;

do while (RowNo() >= 0)

tSubscriptionMatrix:
SQL SELECT TOP 10000
    CustomerId,
    IsActive,
    LastUpdated,
    SubscriptionChannelId,
    SubscriptionMediaId
FROM cdon.product.tSubscriptionMatrix WITH (NOLOCK)
WHERE CustomerId > $(varMaxCustomerId)
GROUP BY
    CustomerId,
    IsActive,
    LastUpdated,
    SubscriptionChannelId,
    SubscriptionMediaId;

tSubscriptionMatrix2:
Load
    max(CustomerId) as MaxCustomerId
Resident tSubscriptionMatrix;

set varMaxCustomerId = peek('MaxCustomerId', 0, 'tSubscriptionMatrix2');

store * from tSubscriptionMatrix into C:\Tmp6\cdon.product.tSubscriptionMatrix.QVD (QVD);

DROP Table tSubscriptionMatrix;

loop
The problem at the moment seems to be the row WHERE CustomerId > $(varMaxCustomerId), which is not accepted by SQL Server.
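A likely cause (my assumption, since we cannot see the expanded SQL): `set` assigns the right-hand side as literal text, so after the first iteration `$(varMaxCustomerId)` expands to the string `peek('MaxCustomerId',0,'tSubscriptionMatrix2')` inside the SQL, which SQL Server rejects. Using `let` (which evaluates the expression) should fix that. A rough sketch of the whole loop with that change, plus an `ORDER BY` (TOP without ORDER BY returns arbitrary rows, so chunks could overlap or skip rows) and the store moved outside the loop (the original store overwrites the same QVD every pass) might look like the following. It assumes chunk boundaries never fall in the middle of one CustomerId's rows, i.e. that CustomerId is a safe chunking key:

```
// Sketch only -- table/field names taken from the script above.
let varMaxCustomerId = 0;
let varPrevRows = 0;

do
  tSubscriptionMatrix:                  // identical field set, so each chunk auto-concatenates
  SQL SELECT TOP 10000
      CustomerId,
      IsActive,
      LastUpdated,
      SubscriptionChannelId,
      SubscriptionMediaId
  FROM cdon.product.tSubscriptionMatrix WITH (NOLOCK)
  WHERE CustomerId > $(varMaxCustomerId)
  GROUP BY CustomerId, IsActive, LastUpdated, SubscriptionChannelId, SubscriptionMediaId
  ORDER BY CustomerId;                  // make TOP 10000 deterministic

  // Stop when a pass adds no new rows (empty chunk).
  exit do when NoOfRows('tSubscriptionMatrix') = varPrevRows;
  let varPrevRows = NoOfRows('tSubscriptionMatrix');

  tMax:
  Load max(CustomerId) as MaxCustomerId Resident tSubscriptionMatrix;

  // let, not set: evaluate peek() now instead of pasting its text into the SQL
  let varMaxCustomerId = peek('MaxCustomerId', 0, 'tMax');
  DROP Table tMax;
loop

store tSubscriptionMatrix into C:\Tmp6\cdon.product.tSubscriptionMatrix.QVD (QVD);
DROP Table tSubscriptionMatrix;
```

Treat this as a starting point rather than a tested script; the exit condition in particular depends on the chunks auto-concatenating into one table.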
Don't focus only on the solution above; we are open to any ideas that can move us forward.
What we want is a solution that keeps the load on the database as light as possible.
Is this possible?
br
Martin
Hello,
I want to thank all of you who have contributed to this thread.
In the end we decided not to chunk the data (although we did get it to work) and instead connect to a slave database where there are no restrictions on the queries we run.
br
Martin