The number of records shouldn't be a problem; you're loading only about 4 million. The issue is more likely in the data model. If your tables share more than one field, QlikView builds synthetic keys, and those can take a long time to generate. Perhaps you need to concatenate some tables or rename some fields. See this blog post: Synthetic Keys
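For example (table and field names here are hypothetical), renaming one of the shared fields leaves only a single common field between two tables, so no synthetic key is built:

```
// Sketch only: Orders and Shipments originally share both OrderID and CustomerID,
// which would create a synthetic key. Renaming CustomerID in one table avoids it.
Orders:
LOAD OrderID, CustomerID, Amount
FROM [orders.txt] (txt, utf8, embedded labels, delimiter is '\t');

Shipments:
LOAD OrderID, CustomerID as ShipmentCustomerID, ShipDate
FROM [shipments.txt] (txt, utf8, embedded labels, delimiter is '\t');
```

After this, the two tables are associated on OrderID only.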
If the field names in these 6 tables are exactly the same then QV will automatically concatenate them. My guess is that they are not exactly the same, or that some tables have extra fields that are not in the other tables. If the tables contain the same kind of information (just from a different source, e.g. from different divisions) then you ought to concatenate them. If necessary you can force concatenation:
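A sketch of forced concatenation (the file names and format spec are assumptions; adjust them to your sources):

```
// Load the first division's file, then force the second onto the same table
// even if the field lists don't match exactly. Missing fields get null values.
Divisions:
LOAD * FROM [division1.txt] (txt, utf8, embedded labels, delimiter is '\t');

CONCATENATE (Divisions)
LOAD * FROM [division2.txt] (txt, utf8, embedded labels, delimiter is '\t');
```

Repeat the CONCATENATE (Divisions) prefix for each additional file so everything ends up in one table.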
Hard to say from a distance. Ben stated that the first four tables load ok, but the last three cause problems. Are those last three the ones in the script you posted? It's possible that the problem isn't these three themselves, but fields common between this set of three and some of the other four. Try loading just a few records first instead of the complete set. You can add a line FIRST 100 on the line before the load statement to limit the number of records loaded to 100:
FIRST 100
Load * from ...somewhere...;
If you do that for all the load statements then the reload will probably finish, and you can check in the table viewer where synthetic tables are created. Perhaps you can post a screenshot of that and of the complete load script. Or better yet, an example document: Preparing examples for Upload - Reduction and Data Scrambling
This could be related to a single bad record in the TSV file (.txt), maybe some strange characters or a wrong column format. You should take a look at the record where the load stops. Are the files loaded with a single load statement?
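One way to narrow down a bad record (a sketch; the file name and format spec are assumptions) is to load the source row number alongside the data, so the last row shown in the load progress tells you roughly where the problem sits:

```
// Sketch: carry the source row number through the load so a failing
// record can be located in the raw file. Adjust file name/format to yours.
RecordCheck:
LOAD
    RecNo() as SourceRow,
    *
FROM [data1.txt] (txt, utf8, embedded labels, delimiter is '\t');
```

Combine this with a FIRST prefix to bisect: if FIRST 500000 loads fine but the full file doesn't, the bad record is beyond row 500000.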