Hi Everyone,
I have a script that performs loading operations - for the most part pulling data from the database (through ODBC) and writing it into QVD files.
Quite often I see QV simply stop processing in the following manner:
SQL
10/2/2014 10:40:34 AM: 2047 select
10/2/2014 10:40:34 AM: 2048 col1
....
10/2/2014 10:40:34 AM: 2052 from
10/2/2014 10:40:34 AM: 2053 table1
10/2/2014 10:40:34 AM: 2054 inner join table2
10/2/2014 10:40:34 AM: 2055 on a = a
10/2/2014 10:40:34 AM: 2056 inner join table 2
10/2/2014 10:40:34 AM: 2057 on ot.object_id = oc.parent_id
10/2/2014 10:40:34 AM: 2058 where
10/2/2014 10:40:34 AM: 2059 c is null
10/2/2014 10:40:34 AM: 2060 a.col('ST','UT')
10/2/2014 10:40:34 AM: 2061 group by
10/2/2014 10:40:34 AM: 2062 col1 ...
10/2/2014 10:55:23 AM: 4 fields found: col1, col2, col3, col4
And then nothing. Sometimes it runs (and it will return millions of rows), but sometimes QV just stops there. I'm tracking it using the logs and I can't see why or what it's doing. I've seen it happen with other SQL statements, but there's no rhyme or reason to it.
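To show what I mean about tracking it through the logs: here's a rough Python sketch (my own throwaway helper, not anything built into QV) that scans a document log for the biggest silence between consecutive timestamped lines, assuming the M/D/YYYY h:mm:ss AM/PM format shown above:

```python
from datetime import datetime

# Timestamp format at the start of each QlikView log line,
# e.g. "10/2/2014 10:40:34 AM: 2047 select"
LOG_TS_FORMAT = "%m/%d/%Y %I:%M:%S %p"

def largest_gap(log_lines):
    """Return (gap_seconds, line_before_gap) for the longest pause
    between consecutive timestamped log lines."""
    times = []
    for line in log_lines:
        # The timestamp is everything before the first ": " separator.
        head, _, _ = line.partition(": ")
        try:
            times.append((datetime.strptime(head, LOG_TS_FORMAT), line))
        except ValueError:
            continue  # skip lines without a leading timestamp
    best = (0.0, None)
    for (t1, l1), (t2, _) in zip(times, times[1:]):
        gap = (t2 - t1).total_seconds()
        if gap > best[0]:
            best = (gap, l1)
    return best

sample = [
    "10/2/2014 10:40:34 AM: 2062 col1 ...",
    "10/2/2014 10:55:23 AM: 4 fields found: col1, col2, col3, col4",
]
print(largest_gap(sample))  # the ~15-minute wait shows up here
```

That at least tells me *where* it stalls, just not *why*.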
Does anyone have any advice on where to look next?
I just realized that with the SQL obfuscated it might not make sense, but the SQL I'm actually using is fine.
What I'm really after is: when a QV script stops and never comes back after reaching out to the DB, where do you look for errors?
Anyone?
If by SQL you mean SQL Server, then I would always look at the database server first if it's a random issue. It could be locking, a clash with backups, or network/firewall problems.
Is your query hitting a single or multiple SQL tables/views and is this a production table or an object created just for your QV routine?
flipside
Nope, Oracle. The data it's pulling is large - I mentioned millions of rows - and there's more than one table, more like 5 or 6, so there's a lot of work to do. The DB team never reports errors in the DB or the like. The way the script works, several SQL statements are executed step by step, in many cases one result feeding the next.
I'm not an Oracle expert but the principles are the same. I'm not sure your DB admins would necessarily get an alert about a deadlock victim unless something has been configured to look for one - in MS SQL the DBA would need to search through the logs.
There are some timeout settings in QMC, but I think the job would fail gracefully if it were that. You could try setting a trace on the ODBC connection in ODBC admin to see if anything shows. Also look in Task Manager for processes like qvb.exe and qvconnect*.exe - are they still active while the job hangs?
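If you want to capture that without sitting at Task Manager, here's a rough Python sketch of the idea (my own helper, not a QV tool; it shells out to the standard Windows `tasklist` command and just filters its output by process-name prefix):

```python
import subprocess

def find_processes(names, tasklist_output=None):
    """Return lines from `tasklist` output whose process name starts
    with any of the given prefixes (case-insensitive)."""
    if tasklist_output is None:
        # Windows-only: ask the OS for the current process list.
        tasklist_output = subprocess.run(
            ["tasklist"], capture_output=True, text=True
        ).stdout
    prefixes = [n.lower() for n in names]
    return [
        line for line in tasklist_output.splitlines()
        if any(line.lower().startswith(p) for p in prefixes)
    ]

# Example output as it might look while a reload is running
# (PIDs and layout are illustrative):
sample_output = """Image Name                     PID Session Name
qvb.exe                       1234 Services
QVConnect64.exe               5678 Services
notepad.exe                   4321 Console"""

print(find_processes(["qvb.exe", "qvconnect"], sample_output))
```

Run it on a schedule while the job hangs and you'll know whether the engine and connector processes are still alive.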
Generally, for large and/or complicated SQL statements, it may be better to build a table in the DB on a schedule, ready for QV consumption.
Regarding network issues, I run various ODBC-connecting jobs overnight and track job durations over time on a graph. I can see that if one job is slow, they all usually slow down by roughly the same amount. You would need your infrastructure team to trace bandwidth usage/speeds over time to see if failed executions coincide with high network usage.
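The duration tracking doesn't need to be fancy. As a rough Python sketch (names and thresholds are mine, purely illustrative): keep each job's nightly durations, and flag the nights where *every* job ran well over its own median - if they all slow together, the cause is probably shared (network, server load) rather than one query:

```python
from statistics import median

def slow_nights(durations, factor=1.5):
    """durations: {job_name: [seconds_per_night, ...]}, nights aligned
    by index. Return night indices where every job took at least
    `factor` times its own median duration."""
    medians = {job: median(vals) for job, vals in durations.items()}
    n_nights = len(next(iter(durations.values())))
    return [
        night for night in range(n_nights)
        if all(durations[job][night] >= factor * medians[job]
               for job in durations)
    ]

# Illustrative history: night index 2 was slow for both jobs.
history = {
    "extract_orders":    [310, 305, 640, 300],
    "extract_customers": [120, 118, 250, 121],
}
print(slow_nights(history))  # -> [2]
```

When a night shows up here, that's when I'd go asking the infrastructure team what else was on the wire.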
Hope this helps
flipside
Just to check - you're not using select * anywhere? I've seen DBAs add temporary columns to tables before, which can cause join errors in your QV load script.
Flipside,
Thanks so much, you're giving me other places to look - which is really what I needed. So it sounds like you're saying the QV script *could* simply go away, and that it's not unheard of. Here's one thing I know: somewhere in QV there's a timeout setting of 21600 seconds - 6 hours of waiting for the database - and I have seen that fail ungracefully. This case isn't the same, though, as it's nowhere near 6 hours.
I wouldn't say "go away", although anything is possible. In my experience, a script that stalls is usually either waiting for data or trying to create a complex data model. Another possibility is lack of RAM, in which case there will be lots of disk activity, paging from memory to disk, etc. Are you running on a server or a desktop?
This is a somewhat beefy server - 32 GB of RAM (yes, it could be more) - but that's a key question. If it stalls while building a complex data model, is there anywhere beyond the script log I can troubleshoot? Even with trace statements, it just seems to go away before it returns the full number of rows. That said, I will look for those processes and try ODBC logging.
Thanks for your assistance...
Are you loading on the same server that runs QlikView Server?