Thanks for your response. We're looking at roughly 7,800 customers accessing our portal, with peak concurrency around 200 (the median is about 80). On the data side, our minimum requirement is ingesting data from our Redshift cluster, which was at 80 GB 10 months ago and is now closer to 90 GB. Data growth for that source is steady at about 1 GB per month. We have other sources we're tinkering with as a value-add that could be much bigger, but those are still at the whiteboard stage.
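For sizing conversations like this, the steady growth described above can be projected with simple linear arithmetic. A minimal sketch, using the numbers from the post; the 36-month planning horizon is an illustrative assumption, not something stated in the thread:

```python
# Rough capacity projection for the Redshift source described above.
# Stated figures: ~90 GB today, growing ~1 GB/month.
# Assumption (hypothetical): a 36-month planning horizon.

def projected_size_gb(current_gb: float, growth_gb_per_month: float, months: int) -> float:
    """Linear projection of source data size after `months` months."""
    return current_gb + growth_gb_per_month * months

if __name__ == "__main__":
    horizon_months = 36  # illustrative planning horizon
    size = projected_size_gb(current_gb=90, growth_gb_per_month=1, months=horizon_months)
    print(f"Estimated source size in {horizon_months} months: {size:.0f} GB")
```

With the stated growth rate the source stays modest (90 + 36 = 126 GB at three years), so peak concurrency, not raw data volume, is likely the dominant sizing factor here.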
Thanks again for any input you might have here.
Are you an OEM customer of Qlik? The OEM team should be able to give you some support on this.
Data model optimization is always a factor in performance, as is whether you build calculations and functions in the load script or in the UI.
Check out the posts below; there may be a few things in them that help.
Hi Jason, in addition to the excellent information that Michael has provided, I think one of our colleagues, msi, might be able to add to this discussion. Marcus, is there additional information you can provide for Jason regarding his request above?
Please mark the appropriate replies as CORRECT / HELPFUL so our team and other members know that your question has been answered to your satisfaction.