I think you are best served by steering the conversation away from database size toward another BI reference unit of measure, say, the number of BI objects delivered in a single application.
The terabytes in the DW include indexes, stored procedures, and other administrative overhead typical of an RDBMS; QlikView will never need to address the entire n terabytes. Focus on the size (row count) of the largest fact tables you will be addressing in the SiB (Seeing is Believing proof of concept), and consider their width (number of columns) as well. Together these inputs should be enough for you to size the server for the SiB. By the end of the SiB you should be able to prove out the delivery of the additional BI units that QlikView can deliver over a traditional OLAP-cube-based tool, and database size will become a moot point.
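To make the sizing conversation concrete, here is a rough back-of-the-envelope sketch in Python. The bytes-per-cell figure and compression ratio are illustrative assumptions only, not QlikView benchmarks; in-memory size depends heavily on field cardinality, so substitute ratios measured from your own test loads.

```python
def estimate_in_memory_gb(rows, columns, avg_bytes_per_cell=8, compression_ratio=0.1):
    """Rough RAM estimate for an in-memory model built from one fact table.

    avg_bytes_per_cell and compression_ratio are illustrative assumptions;
    actual compression varies with the cardinality of each field.
    """
    raw_bytes = rows * columns * avg_bytes_per_cell
    return raw_bytes * compression_ratio / 1024 ** 3


# Example: a 500M-row fact table, 40 columns wide
print(round(estimate_in_memory_gb(500_000_000, 40), 1))  # -> 14.9
```

The point of the exercise is that server sizing falls out of the fact tables actually loaded, not the total warehouse footprint.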
You should define analytics use cases that include only the level of detail (granularity) and the dimensions and facts needed for the use case (or business question). You can also slice time-wise. The goal is to keep only the data needed for the actual discovery task in memory.
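The same idea in a minimal Python sketch (the field names, cutoff date, and sample rows are made up for illustration; in a QlikView load script you would express this as the field list of a LOAD statement plus a WHERE clause):

```python
from datetime import date

# Hypothetical fact rows; a real table would have many more fields.
fact_rows = [
    {"order_date": date(2010, 11, 3), "customer_id": 1, "amount": 120.0, "comment": "rush"},
    {"order_date": date(2009, 5, 14), "customer_id": 2, "amount": 80.0, "comment": "gift"},
]

# Keep only the fields the use case needs, and only a recent time slice.
NEEDED = ("order_date", "customer_id", "amount")
CUTOFF = date(2010, 1, 1)

use_case_slice = [
    {k: row[k] for k in NEEDED}
    for row in fact_rows
    if row["order_date"] >= CUTOFF
]
print(len(use_case_slice))  # only rows on/after the cutoff survive
```

Dropping unneeded fields and out-of-scope time periods at load time is what keeps the in-memory model small relative to the warehouse.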
If you want to drill down/up/along/through or slice and dice (I know these are OLAP terms), you can chain prepared, segmented (e.g., by dimension) QVW documents together.
Overall, a good way to start a SiB is with a selection of the data, not the whole terabyte-scale warehouse.