I haven't tested it, nor do I know exactly what algorithm the Qlik engine follows at the back end. I would only say that if there is any difference between them, it is likely negligible, and the faster one (if there is one at all) should be the first.
Now let me explain why: in the first expression the filters are of an AND nature, i.e. the SalesOffice exclusion and the JobTitle inclusion have to be true for the same record (because they are part of the same set element, separated by a comma). In the second expression the filters are independent. That means that in the first expression there is scope for filtering the data set down once and then applying the second condition only to that reduced data set, so the search time becomes a little shorter. In the second expression the two filters are independent, so they would each be applied to the entire data set, taking longer.
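The original expressions aren't quoted in this thread, but from the description they probably looked roughly like this (the field and value names here are only assumptions for illustration):

```
// Form 1: both modifiers inside the same set element,
// separated by a comma - both must hold for the same record
Sum({<SalesOffice -= {'Berlin'}, JobTitle = {'Manager'}>} Sales)

// Form 2: two independent set expressions, intersected with *
Sum({<SalesOffice -= {'Berlin'}> * <JobTitle = {'Manager'}>} Sales)
```

For simple field modifiers like these the two forms should return the same result; the discussion here is only about how the engine might evaluate them.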
Like Tresesco, I don't know how it's processed internally, and I agree completely with his first paragraph. But as for the second paragraph, I believe the way it works is rather the reverse.
AFAIK a set analysis works (nearly) the same way as a selection. In the first example I think both conditions are evaluated in parallel (I assume multi-threaded), returning TRUE or FALSE for the values in the respective fields, i.e. in the system tables. Afterwards the engine builds the scope, a virtual table, on which the real aggregation is applied. In the second example the conditions are chained and might be executed one after another (in this simple case that may not be necessary, but in general more complex and even nested chains are possible, which may require additional evaluation).
Besides this, with larger data sets it might be worth testing whether one or several flags created in the script could improve the UI performance, for example with:
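A minimal sketch of such a flag in the load script; the table, field, and value names are assumptions, not from the original question:

```
// Precompute the combined condition once at reload time
// instead of evaluating two conditions at chart time
Sales:
LOAD *,
     If(SalesOffice <> 'Berlin' and JobTitle = 'Manager', 1, 0) as FilterFlag
FROM Sales.qvd (qvd);
```

The chart expression then only needs to match a single numeric flag, which is usually cheaper than evaluating two string conditions:

```
Sum({<FilterFlag = {1}>} Sales)
```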