If your data model is sound, then a synthetic key is OK. Performance-wise it should be slightly better than a link table (not a huge difference), so if you see a big difference, you should suspect a problem somewhere else.
However, I will point you in a different direction: I suggest a concatenated fact table instead. This is almost always faster than both synthetic keys and link tables. The basic structure is:
Facts:
Load ... , 'F_A' as Source From F_A (...) ;
Concatenate(Facts)
Load ... , 'F_B' as Source From F_B (...) ;
Concatenate(Facts)
Load ... , 'F_C' as Source From F_C (...) ;
Just make sure that fields that are "compatible" get the same name. For example, one table might have a "SalesAmount" and another a "BudgetAmount"; both should be renamed to "Amount".
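As a sketch of that renaming (table and date field names here are hypothetical, only SalesAmount/BudgetAmount come from the example above):

```
// Alias the "compatible" fields so they land in one shared column
Facts:
Load OrderDate as Date, SalesAmount as Amount, 'Sales' as Source
From Sales (...) ;

Concatenate(Facts)
Load BudgetDate as Date, BudgetAmount as Amount, 'Budget' as Source
From Budget (...) ;
```

The Source field then lets you separate the two in expressions, e.g. Sum({&lt;Source={'Sales'}&gt;} Amount).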
Synthetic keys aren't wrong per se. They are an automatic creation of the compound keys that you would otherwise have to create yourself - but it's strongly recommended to create those keys manually, because only then can you be sure that you know your data well enough and understand your data model. Further, it keeps the structure in the table viewer clear, which is especially important if you have many tables; with many synthetic keys it becomes quite difficult to trace and explain unexpected results. Here you could dive deep into the topic: Should We Stop Worrying and Love the Synthetic Key?
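A minimal sketch of creating such a compound key manually, assuming two hypothetical tables that share OrderID and LineNo (which would otherwise produce a synthetic key):

```
// Build one explicit key from the shared fields
Orders:
Load OrderID & '|' & LineNo as %OrderLineKey,
     OrderID, LineNo, Amount
From Orders (...) ;

Shipments:
Load OrderID & '|' & LineNo as %OrderLineKey,
     ShipDate, Quantity
From Shipments (...) ;

// Keep the shared fields on one side only, so the tables
// associate on %OrderLineKey alone
Drop Fields OrderID, LineNo From Shipments;
```

Wrapping the concatenation in AutoNumber() is a common variation that stores the key as an integer instead of a string, which saves RAM in large models.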
Compared with a "normal" associative data model, a link-table data model is more complex and more complicated to create, and it can therefore be slower in both load times and GUI performance. An easier approach is often to concatenate the fact tables into one single fact table, creating a star-schema data model - but by default no approach is better, more performant, or more suitable than the others; it always depends on various things. I think this will be helpful, too: More advanced topics of qlik datamodels.
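For contrast, a bare link-table sketch (all table and field names hypothetical): the shared key fields are moved into one link table and removed from the fact tables, so the facts associate only through the link table.

```
// Collect the distinct key combinations from each fact table
LinkTable:
Load Distinct Customer & '|' & Month as %LinkKey, Customer, Month
Resident Sales;

Concatenate(LinkTable)
Load Distinct Customer & '|' & Month as %LinkKey, Customer, Month
Resident Budget;

// The facts keep only the composite key, not the individual fields
Drop Fields Customer, Month From Sales;
Drop Fields Customer, Month From Budget;
```

Note how much more scripting this needs than the concatenated fact table above - which is part of why the concatenation approach is often the easier choice.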