Check the steps you followed for the incremental load. Does the load statement double up the records anywhere? Did you use CONCATENATE to merge the old records with the newly loaded records?
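For reference, the usual pattern looks something like this. This is a minimal sketch, assuming a key field PKField, a change-date field ModifiedDate, a variable vLastReload set earlier in the script, and hypothetical file names:

    // Pull only rows changed since the last run
    Data:
    LOAD PKField,
         Field1,
         Field2
    FROM [Source.qvd] (qvd)
    WHERE ModifiedDate >= '$(vLastReload)';

    // Append the stored history, skipping keys already loaded above.
    // Without the WHERE NOT Exists() guard every old row comes in
    // again, which is one common way record counts end up doubled.
    CONCATENATE (Data)
    LOAD PKField,
         Field1,
         Field2
    FROM [History.qvd] (qvd)
    WHERE NOT Exists(PKField);

    STORE Data INTO [History.qvd] (qvd);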
Not sure if any of these thoughts will solve the problem, but I hope they are useful thoughts nonetheless.
1) Have you changed the columns or the definition of the PK at any point? I find it's important when doing incremental loads to always build in a mechanism that lets you force a full reload; granted, you could simply delete your QVD file and that would work. If you added or removed fields along the way and did not do a full reload, then you may have extra, unneeded values in your older data, and that could cause an unexplained size difference when comparing an incremental file to a non-incremental file that was recently loaded with only the current field list. See the first sketch after this list for one way to build in such a switch.
2) I don't recommend your method for creating your PK value. First, long string values are not efficient for storage or processing; an AutoNumber or AutoNumberHash128 could produce a more compact key. Second, you do not appear to be separating the codes and values that make up the key, and that can cause problems in some cases: for instance, 101 concatenated with 1 and 10 concatenated with 11 both produce the value 1011. This could have unintended effects in your load process, causing data to erroneously duplicate when making joins, or to be erroneously filtered out by conditions like WHERE Exists(...). The second sketch after this list illustrates the collision and both fixes.
3) Have you looked at the metadata on the two QVDs to compare the differences? That may shed some light on where the differences are coming from; the third sketch after this list shows one way to pull it into the script.
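For point 1), a minimal sketch of a full-reload switch, assuming the same hypothetical field and file names as above:

    // Flip to 1 whenever fields or the PK definition change
    SET vForceFullReload = 0;

    Data:
    LOAD PKField,
         Field1,
         Field2
    FROM [Source.qvd] (qvd)
    WHERE ModifiedDate >= '$(vLastReload)';

    // Only bring the history back in when we are not forcing a rebuild
    // and the QVD actually exists (QvdCreateTime returns NULL if not)
    IF $(vForceFullReload) = 0 AND NOT IsNull(QvdCreateTime('History.qvd')) THEN
        CONCATENATE (Data)
        LOAD PKField, Field1, Field2
        FROM [History.qvd] (qvd)
        WHERE NOT Exists(PKField);
    END IF

    STORE Data INTO [History.qvd] (qvd);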
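For point 2), a quick illustration of the collision and the two alternatives, using hypothetical key parts Code and Num:

    Data:
    LOAD
        // Delimited text key: '101|1' and '10|11' stay distinct,
        // whereas plain Code & Num collapses both rows to '1011'
        Code & '|' & Num AS PK,
        // More compact alternative:
        //   AutoNumberHash128(Code, Num) AS PK
        // (note the integer mapping is only stable within one script
        // run, so regenerate it on both sides of any QVD comparison
        // rather than storing it across reloads)
        Code,
        Num
    INLINE [
    Code, Num
    101, 1
    10, 11
    ];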
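For point 3), you can open a QVD in a text editor and read the XML header at the top, or have the script read it without loading the data. A sketch assuming hypothetical file names Old.qvd and New.qvd:

    // Quick record-count comparison straight from the headers
    LET vOldRecords = QvdNoOfRecords('Old.qvd');
    LET vNewRecords = QvdNoOfRecords('New.qvd');
    TRACE Old.qvd: $(vOldRecords) rows, New.qvd: $(vNewRecords) rows;

    // Field list from the XML header, useful for spotting leftover
    // columns from an old field list
    OldFields:
    LOAD FieldName
    FROM [Old.qvd] (XmlSimple, table is [QvdTableHeader/Fields/QvdFieldHeader]);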