It does work - so it's not necessary to understand the exact code to use it. That is precisely the point: the entire extract process can be purely table-driven, just by maintaining the SQL Server specification tables. You only need to understand the code if you want to extend the functionality, not when you add more tables.
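To illustrate the idea only (not the actual implementation in the attached app), here is a minimal sketch of a table-driven extract loop in Qlik script. The connection name, the TableSpec table, its fields, and the lib:// paths (Qlik Sense syntax) are all hypothetical placeholders; the posted version reads its specifications from SQL Server tables rather than an inline list.

// Minimal sketch, assuming a configured SQL Server connection.
// TableSpec and all names/paths below are hypothetical placeholders.
LIB CONNECT TO 'SQLServer_Connection';

TableSpec:
LOAD * INLINE [
SourceTable
dbo.Customers
dbo.Orders
];

FOR vRow = 0 TO NoOfRows('TableSpec') - 1
    LET vTable = Peek('SourceTable', vRow, 'TableSpec');

    // One generic statement extracts whatever table the spec lists
    [$(vTable)]:
    SQL SELECT * FROM $(vTable);

    STORE [$(vTable)] INTO [lib://QVD/$(vTable).qvd] (qvd);
    DROP TABLE [$(vTable)];
NEXT vRow

DROP TABLE TableSpec;

Adding another table to the extract then means adding one row to the specification table, with no script changes at all; the real framework also drives things like incremental load fields and where-clauses from the same specification tables.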
I uploaded this just to confirm that there is some practice behind it. The documentation is lacking, although the presentation gives an idea of the benefits and why an approach like this to extracts is a good idea.
I intend to write better how-to documentation, since I already have demo data that demonstrates very well how it works with 36 tables from SQL Server...
I have been on two projects where I built this: version 1 on the first project and version 2 on the second, back in 2015. Version 1 was implemented at a TV broadcasting and streaming corporation, where it went hand-in-hand with a high-quality data warehouse.
Version 2 was implemented at a large pension fund. The third version, which is the one I posted here, was developed during Q1 2016 and presented at Qonnections 2016.
So the main point is to use this if you have a large number of tables to extract (more than ~20) and you want to minimize the load script code you need to create. The benefits are substantial: much less script maintenance and much more flexibility. The attached PDF goes into why many organisations would benefit from choosing such an approach.