I've scraped data from a few websites. It can be as simple as pointing QlikView at a URL and pulling data from a table, but it can also be as involved as reading values from a table, generating new URLs based on those values, and then scraping data from those URLs.
If you can give us an idea of what you're looking to do, we'll be able to help you even more.
Cool. That should be very straightforward, as it looks like all the information you need is stored in tables. In the load script, select Web Files... and put in your URL. Click Next, and on the File Wizard: Type screen, scroll through the tables to find the ones you are looking for.
When I looked, @10 was the Top Advancers table, @11 was the Top Decliners table, and so on.
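As a rough sketch of what the wizard generates (the URL here is a placeholder, and the table numbers are just what I saw on your page), the load would look something like this:

```
// Sketch only - replace the placeholder URL with your own.
// The wizard picks tables by position: @10 was Top Advancers, @11 Top Decliners.
TopAdvancers:
LOAD *
FROM [http://www.example.com/market-movers]
(html, codepage is 1252, embedded labels, table is @10);

TopDecliners:
LOAD *
FROM [http://www.example.com/market-movers]
(html, codepage is 1252, embedded labels, table is @11);
```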
Important: if the layout of this web page changes in any way, your load will fail or some tables will contain incorrect data. I've had cases where I was loading information from one table, but an earlier table was moved on the page, so my load pulled information from the wrong table. Here are some tricks that help me:
Don't use embedded labels when scraping. That way you can query the header/first line of each column to double-check that the data you are loading comes from the correct table.
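For example (a sketch with a placeholder URL and heading), loading with "no labels" keeps the header row as row 0 of the table, so you can inspect it before trusting the data:

```
// Sketch only. With "no labels" the fields are named @1, @2, ...
// and the page's header row becomes the first data row, which Peek can read.
Advancers:
LOAD *
FROM [http://www.example.com/market-movers]
(html, codepage is 1252, no labels, table is @10);

// Check that the first cell of the first row is the heading we expect
// ('Symbol' is a placeholder - use whatever your table's first heading is).
LET vHeader = Peek('@1', 0, 'Advancers');
IF '$(vHeader)' <> 'Symbol' THEN
    TRACE Table @10 does not look like Top Advancers - header was $(vHeader);
END IF
```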
Set up fail-safes using IF statements in your script. For example, if tables @10, @11 and @12 all have the same headings, check the headings of @9 and use them to verify that you are looking at the correct tables.
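One way to sketch that fail-safe (the URL, table numbers and the 'Sector' heading are all placeholders for illustration):

```
// Hypothetical fail-safe: load only the header row of the table just before
// the ones we want, and use it to confirm the page layout hasn't shifted.
Check:
FIRST 1 LOAD *
FROM [http://www.example.com/market-movers]
(html, codepage is 1252, no labels, table is @9);

LET vCheck = Peek('@1', 0, 'Check');
DROP TABLE Check;

IF '$(vCheck)' = 'Sector' THEN
    // Layout looks right, so @10, @11 and @12 should still be our tables.
    Advancers:
    LOAD *
    FROM [http://www.example.com/market-movers]
    (html, codepage is 1252, no labels, table is @10);
ELSE
    TRACE Page layout has changed - skipping this scrape;
END IF
```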
SET ErrorMode = 0 in the script. Because tables can change column names or position on the page, you will get errors. Setting ErrorMode to 0 prevents these issues from causing your reload to fail (just remember to set it back to 1 when you are done scraping from the web and begin any data transformation steps).
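In script terms, that's just a pair of statements wrapped around the scraping section:

```
SET ErrorMode = 0;   // don't abort the reload if a web table fails to load

// ... all the web-scraping LOAD statements go here ...

SET ErrorMode = 1;   // restore normal error handling before transformations
```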
This one might not be relevant for you, but if you are scraping historical data from a site, don't re-scrape everything every time you reload. Store each scrape in a QVD with the date in its file name, then read the list of QVD names in that folder to determine the latest file you scraped, and only pull data newer than that.
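A sketch of that pattern (the table name `Prices` and the file mask are placeholders):

```
// Hypothetical sketch: store each scrape in a dated QVD, then scan the folder
// to find the most recent one. Using YYYYMMDD in the name means the file
// names sort chronologically, so a simple string comparison finds the latest.
LET vToday = Date(Today(), 'YYYYMMDD');
STORE Prices INTO [Prices_$(vToday).qvd] (qvd);

LET vLatest = '';
FOR EACH vFile IN FileList('Prices_*.qvd')
    IF '$(vFile)' > '$(vLatest)' THEN
        LET vLatest = '$(vFile)';
    END IF
NEXT vFile
// vLatest now names the newest QVD; only scrape data newer than its date.
```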