If you're logging reads, then yes, QlikView would affect the SQL transaction log size. I believe that in our environment, we only log insert/update/delete, so at least in our environment, that isn't an issue. It sounds like you don't expect that to be an issue in your environment either.
Another thing you may need is FOR FETCH ONLY in your SQL statements in QlikView. In our system, if you don't specify FOR FETCH ONLY, it will lock the records you're reading on the assumption that you may follow the read with an update. On top of it being a bad idea to take locks just to read data, it at least seems plausible that the locking activity adds to your SQL transaction log.
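To illustrate, here's a sketch of the clause in DB2 syntax (the table and column names are made up for illustration, not from this thread):

```sql
-- DB2: tell the database we will only read these rows,
-- so it can avoid taking update locks
-- (table and column names are hypothetical)
SELECT cust_id, cust_name
FROM customers
WHERE region = 'UK'
FOR FETCH ONLY
```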
Thanks for the reply, that's very interesting!
Do you have any examples / info on the fetch command, as I've not seen this before.
A common statement I might use is
select field1, field2 from table where field1=100
How would this be changed to use fetch?
We're running Windows Server 2003 & SQL Server 2005.
For SQL Server you will want to use NOLOCK:
SELECT fields FROM table WITH (NOLOCK) WHERE whereclause
Of course, dirty reads have cons of their own.
Data may be updated while you are reading it (something that can happen with a long-running query).
Phantom records: rows that exist because of an in-flight transaction (like an insert) but that may still be rolled back.
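If you'd rather not add the hint to every table, the same effect can be had for a whole session by setting the isolation level. A sketch in T-SQL, reusing the earlier query (table name is a placeholder):

```sql
-- Applies dirty reads to every statement in the session
-- until the isolation level is changed back
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT field1, field2
FROM table1          -- placeholder table name
WHERE field1 = 100;
```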
It looks like "for fetch only" is DB2-only, but it's a good idea.
In the end I suspect the transaction log had grown so much because we had been adding large amounts of data with no backups.
Implementing a maintenance plan is a good idea just for the backup process, and I'd recommend that for any live QV database system.
One thing to be aware of: if you're using SQL Server Management Studio Express (the free version), you don't get the maintenance plan option and can only create backups manually.
There is a way around this: get the T-SQL code from the backup wizard window (the generate script option), use the sqlcmd.exe command line program to run your script file, then use a Windows scheduled task to run it regularly.
A long-winded way of doing things, but it works in a pinch.
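As a sketch of that workaround (the database name, file paths, and instance name below are placeholders, not from this thread):

```sql
-- backup_db.sql: T-SQL captured from the backup dialog's scripting option
-- (database name and backup path are placeholders)
BACKUP DATABASE [MyQVDatabase]
TO DISK = N'D:\Backups\MyQVDatabase.bak'
WITH NOINIT, NAME = N'MyQVDatabase full backup', SKIP, STATS = 10;
```

The scheduled task would then run something like `sqlcmd -S .\SQLEXPRESS -E -i "C:\Scripts\backup_db.sql"` (-E uses Windows authentication, -i runs the script file).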
Thanks for the code, I just gave that a go and it works great!
I do have a question regarding the 2 result tables that get displayed; fields such as DbId, CurrentSize etc. are shown.
Both result tables are the same for me; I'm guessing the first should be the size info from before the process runs?
In addition to working with our BI tools, I occasionally wear the SQL DBA hat, so I'll throw in my two cents.
Depending on the recovery model you are using on your database you must perform regular log and database backups to manage the size of the log.
If you are in the Simple Recovery model, the log will manage itself, but now that your log is large you will have to shrink it to get it back to a more "normal" size.
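A shrink might look like this (the logical file name is a placeholder; you can look it up in sys.database_files):

```sql
USE MyQVDatabase;  -- placeholder database name
-- Find the log's logical name first:
--   SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
DBCC SHRINKFILE (MyQVDatabase_log, 100);  -- target size in MB
```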
If you are in the Full Recovery model, you must back up your database and logs in order to manage the log size. Backing up the log truncates its inactive portion: committed transactions have already been written to the data file by checkpoints, so the log space they occupied can be reused.
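Under the full recovery model, a log backup is what frees the space for reuse. A sketch with placeholder names:

```sql
-- Back up the transaction log so its inactive portion can be reused
-- (database name and path are placeholders)
BACKUP LOG [MyQVDatabase]
TO DISK = N'D:\Backups\MyQVDatabase.trn'
WITH NOINIT, NAME = N'MyQVDatabase log backup';
```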
Generally speaking, I would not worry about reads from QlikView causing the log to grow. What I wonder is: do you have maintenance plans running on a regular basis, and how much does your data change each day? That will give the best answer to how your log file got so big. So unless you are dumping this database and reloading from scratch every night, or it is read-only, I would make sure to get a maintenance plan running. Eventually your log will fill the disk, and then you won't be able to do anything.
Like I said, DBA is one of the hats I wear, so I'm not necessarily an expert, but I've dealt with this situation many times.
For Microsoft SQL Server the short and simple answer is: No.
The more complex answer is: it depends. Reading in SQL Server doesn't generate log records; only writing (inserting, updating, or deleting) does. It is possible to write back to the database using QlikView, but I'd guess that's not the case here.
Each time an update/insert is performed, the log file is written. It will grow until one of the following occurs:
1) A backup is done.
... but even after a backup has been done, the log file stays the same size. SQL Server leaves the file as is and only moves the write pointer back to the beginning of the file. It's a really smart strategy: if you generate an average of 100 MB of log every 24 hours, the log file will grow and then eventually settle at some size (let's say 150 MB). Extending the log file carries a performance penalty, which this strategy avoids.
2) The log is discarded
... The log can be discarded in many ways, but I wouldn't recommend any of them for a production system. If the system is important, keep a backup of the database and the logs (this is known as "Full recovery").
If the system has no use for backups/logs, use "simple recovery" (also known as "Truncate log on checkpoint"), which moves the pointer back to the beginning of the log file after each completed insert/update operation. In essence, the log file is overwritten over and over (not appended to). In this case, the log file will grow to the size of the largest insert/update operation you perform and then stay there.
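Switching a database to that mode is a one-liner (the database name is a placeholder):

```sql
-- Note: point-in-time restore is lost under simple recovery;
-- only full/differential backups remain usable
ALTER DATABASE MyQVDatabase SET RECOVERY SIMPLE;
```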
I hope I've shed some light, and that the information is accurate. Quite some years have passed since I worked with SQL...