QnA with Qlik: Qlik Sense Enterprise Monitoring and Observability

Last Update:

May 22, 2023 6:47:46 AM

Updated By:

Troy_Raney

Created date:

Apr 4, 2023 4:16:11 AM


Environment

  • Qlik Sense Enterprise on Windows

 

Transcript

Hello, everyone, and welcome to another session of Q&A with Qlik. We're excited to have you all here today. Today's topic is Qlik Sense Enterprise Monitoring and Observability, which we hope will help you, whether you're new to the topic or an existing user looking for some tips and tricks, to understand it better and learn even more. This webinar series is a live question and answer, so what we want you to do is add your questions to the Q&A panel below, and we will try to get to those this session. Just a few parameters before we get started. Everyone is on mute, but you can see and hear us; that's why it's essential for you to add your questions into the panel below, and you can go ahead and start doing that now. Another thing: there will be a QR code that pops up at the end of the session for a survey. It's going to take less than 2 minutes; just give us your feedback on how we did for this session. Also, we ask that your questions stay relevant to the topic. Our panelists cannot answer support case questions right now, but our support team will be able to help you with that. We also ask you to see the QR code for our next sessions, where you can scan and register for the upcoming session on April 25th of this year. Again, the topic for the day is Qlik Sense Enterprise Monitoring and Observability. My name is Emmanuel Herndon, and I'm a Digital Customer Success Specialist here at Qlik, who works on helping the customer from trial to renewal and anything in their journey, with a focus on webinars and video to enhance that journey. I am not doing this alone; my co-host Troy, my colleague, is helping do this Q&A with Qlik with me. Will you say hello, Troy? Sure! Hi, everybody! I'm Troy Raney. I love doing these Q&A with Qliks with Manny, and largely I help make webinars and helpful videos like this one. And we've got two excellent experts with us today to share their expertise. Mario, you wanna introduce yourself? Hi, everyone, glad to be here with you all today again. My name is Mario Petre. I've been a member of the technical support team in our Lund office since 2015, focused since then on our Qlik Sense Enterprise software and all sorts of scalability and performance topics, and I'm happy to talk observability with you all today. I'm Levi Turner. I've got a convoluted title, but it's Master Principal Analytics Platform Architect. I'm in pre-sales, but I've been at Qlik for, what, going on 8 years now, split between working in the support group for a few years and then pre-sales since then. I deal with a lot of platform-level concerns across all of our platforms, whether they're QlikView, client-managed Qlik Sense, or SaaS. Great! Well, everybody, it's time to answer those questions through the Q&A panel, and I see we've already got one come in. I'll go ahead and read it off. The first one: where can I find the best monitoring tools to use? Levi, do you want to take that one? Yeah, I'll start, and then Mario is gonna jump in for some other elements. The way I tend to think about monitoring the Qlik platform is, ultimately you've got the Qlik assets, so your applications, and you've got the server, and those are two pretty distinct types of things you'd want to monitor. For the Qlik assets themselves, there are the stock monitoring apps that come with the tool; they're installed to %ProgramData%\Qlik\Sense\Repository\DefaultApps.
That's where you can get them if they aren't present in your system, should they be deleted, or if you want some of the, what we call, hidden ones that aren't installed by default. The Operations and License Monitors are the ones every site generally has; those are shipped with the product, installed, and should be up and running. These are pretty essential to have running, because a lot of other metadata apps piggyback off the connections they use. And then there's an additional set of, call them, more boutique ones. The Log Monitor is really focused on analyzing logs, because that's a pretty heavy task. The Reload Monitor, again, is focused purely on reloads. Think of these as slices of the Operations Monitor, purpose-built to be subsets rather than the monstrosity the Operations Monitor is in many respects. The Sessions Monitor, again, is a slice of the operations data, focused more on user access. Then there's the Connector Logs Analyzer, as well as the App Metadata Analyzer. The App Metadata Analyzer is a bit of a unique app in that it gives you a holistic view of the application metadata: how large the application is in memory, how large it is on reload, how many columns, how many fields, what's the size of those fields, what's the size of the columns, are there synthetic keys across the applications. This is a really great asset that every site really should have, so you can have insight across all of your estate, because most people don't know what's going on with every app in their estate, and it's certainly possible that a developer did a poor SELECT * which is going to impact performance across the server as a whole. So this is an extremely useful tool that far too few people have. Outside of the sort of stock experience, the other essential Qlik app that I'd point to is the Telemetry Dashboard. What this does is visualize some product logging that you can enable. The logging has been in the product for quite some time, but ultimately you have to configure the Qlik Sense Enterprise platform to do that special type of logging, and then you can build an application that surfaces it and gives you visibility. This will give you performance metrics for every object inside of Qlik, right? So if a bar chart takes 10 seconds to load, it can report that; if a table object takes 2 minutes to load, it can report that. So it's very useful for getting very, very deep: rather than telling a developer, hey, your app is slow, you can say, hey, this bar chart takes 2 minutes to load, we need to get it below 30 seconds. You can get very specific very quickly, which helps bridge the gap between traditional server admin activities and application development activities. I'll stop sharing there. Mario, if you wanted to hop onto a couple of things that you covered previously on Techspert Thursdays? Sure, I'd love to. Alright, so there's a bunch of links, and everything is external to our documentation, but let's take a look at some of these examples. Alright, is screen sharing visible? Yup, we can see your screen. Pancakes! Alright, fantastic. Pancakes are delicious.
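Since the stock monitoring apps discussed above live in the repository like any other app, you can verify programmatically that they are present and reloading. Here is a minimal sketch against the Qlik Sense Repository Service (QRS) on its default port 4242; the server name, certificate paths, and filter value are illustrative assumptions, while the xrfkey convention and X-Qlik-User header are standard QRS requirements.

```python
import requests

# Illustrative values -- server name and certificate paths are assumptions.
QRS = "https://qlikserver.example.com:4242"       # QRS default port
CERT = ("client.pem", "client_key.pem")           # client certificate exported via the QMC
XRF = "abcdefghijklmnop"                          # any 16-character value

headers = {
    "x-qlik-xrfkey": XRF,
    # Run the call as an internal service account.
    "X-Qlik-User": "UserDirectory=INTERNAL; UserId=sa_api",
}

# List apps whose name contains 'Monitor' (Operations Monitor, License Monitor, ...).
resp = requests.get(
    f"{QRS}/qrs/app/full",
    params={"xrfkey": XRF, "filter": "name so 'Monitor'"},
    headers=headers,
    cert=CERT,
    verify=False,  # or pass the path to your root certificate
)
resp.raise_for_status()

for app in resp.json():
    print(app["name"], "last reload:", app.get("lastReloadTime"))
```

If the Operations or License Monitor is missing or stale here, that is usually the first thing to fix, since the other metadata apps piggyback off their connections.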
Let's see, the first one that I sort of mentioned: a lot of what we showed, in terms of visualizing results coming from these machines, both in the form of logs and metrics, was done through Grafana and their fantastic dashboard system. Another technology used from the same product group was Loki, for extracting logs from Windows machines, and I also recommend Prometheus as a metrics ingestor. One way that you can leverage that in the Windows ecosystem is through the windows_exporter (there's an older project called WMI Exporter). These are special tools that help you bring in machine metrics, both Windows-level performance, so CPU and RAM usage, and also log extraction from specific software like Qlik, and bring those into Prometheus metric scraping. We also took a look at the Butler monitoring family, which is now growing, I'm happy to report. This is a great landing page to read about it and start understanding how it can help you in your Qlik operations. Something else that I've been using in my career here at Qlik, and with large customers, especially as it's typically already deployed in organizations, is Elasticsearch and their ELK stack: Elasticsearch as a search indexer and engine, Logstash as a way to extract logs from Windows and ship them up to the Elasticsearch index, and then Kibana as the last component to visualize it all and explore the data. Another third-party platform that many of our large enterprise customers have started to rely on recently is Datadog. This is, again, an industry-standard observability platform that I encourage you to look at, and there are many, many more things to come as we start receiving questions from the audience today. So I'll leave it here, and we'll kick it back to the Q&A. We have a question that someone is asking, and they're saying: one area I struggle with in self-service environments is identifying the cause of server lag. Our theory is it will be a user running a poorly written load script, but I can't tell who is running it and how much RAM they are using. Is there a way to get this info in real time? Yes, there are a couple of ways that you can go about that, and help me out here, Levi. One thing that I would definitely set up to begin with is the Telemetry Dashboard. This will help you in understanding poor app and object resource consumption inside the Qlik ecosystem. You can complement that with machine-level metrics, such as global CPU and RAM usage, not just that taken by Qlik services, but everything else that's happening on that server. And of course, knowing the user that you're trying to investigate helps, but by default, I believe all reloads would be running under the scheduler account, the internal system scheduler account. So if you don't have reload impersonation turned on, filtering your existing logs by username won't necessarily help, especially in the case of a scheduled task. This is a lot easier when reloads happen through the hub; they're also easier to identify, and those would be connected to a specific user ID and their session. So that's a good place to start. Great. There was a question, it was kind of a... Yeah, yeah, go ahead. Let me, I had to unmute there, and I'll put it into the chat. But ultimately there's another tool out there; I think, Mario, you referenced it in the Techspert Thursday: the Butler platform from Göran.
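To make the Prometheus piece Mario described concrete, here is a minimal sketch that asks the standard Prometheus HTTP API for per-node CPU utilization. The Prometheus address is a placeholder; the metric name windows_cpu_time_total is what current windows_exporter versions expose (the older WMI Exporter used a wmi_ prefix instead).

```python
import requests

# Hypothetical Prometheus server scraping windows_exporter on each Qlik node.
PROM = "http://localhost:9090"

# CPU utilization per node: 100% minus the idle-mode rate over the last 5 minutes.
QUERY = '100 - (avg by (instance) (rate(windows_cpu_time_total{mode="idle"}[5m])) * 100)'

resp = requests.get(f"{PROM}/api/v1/query", params={"query": QUERY})
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    node = series["metric"]["instance"]
    _, value = series["value"]
    print(f"{node}: {float(value):.1f}% CPU")
```

The same query works directly as a Grafana panel expression, which is how the dashboards Mario showed are typically built.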
It does monitor application-level performance. In a sort of global engine, an engine, heck, may have 10 apps, a 100 apps open at once, so it gets a little trickier, but you can at least sort of filter down to the subsets. Butler SOS does parse out the engine metadata, or healthcheck, endpoint that'll provide you performance characteristics: you know, how much RAM, the running tally of CPU, what apps are open. And that can at least allow you to start to point in the direction of: we had a problem at 9 am, we had a problem at 1 pm; these 10 apps were open at 9 am, these 11 at 1 pm, and get a sense of which applications are involved or at play there. I would say it's a bit of a hodgepodge of different things, because you're monitoring server-level performance, which is your sort of reliable indicator of performance, and then application-level characteristics, as well as sessions that were open, so you have to sort of meld a bunch of different types of information together. But it's a really excellent tool for that, just because it sort of surfaces and visualizes it. I mean, the API is there, anyone can access it and build an application off of it, but Butler does use sort of modern monitoring techniques, which is the nice part about Mario's presentation, which focuses on what is best in breed for this. It seems heavy, but a lot of organizations already have these tools in place. They may even have enterprise tooling, Splunk Enterprise, as opposed to sort of freemium software or free software. So if you already have that server-level performance stuff, use it; ultimately, don't re-engineer the stack. If you have access to it already, monitor with that, but typically it's going to be a join of performance characteristics: what is the server performance, what apps are open, and in what timeframes. Those are the 3 sort of most important characteristics. And I would even double down on that: don't overcomplicate the stack and don't reinvent the stack. Windows itself has Performance Monitor, and we use that in support scenarios very, very often, as other tools won't be available, or at least not immediately, as we're trying to rescue a system and get everybody back online. There, you can also differentiate between service-level resource consumption and machine-level resource consumption, and kind of start digging deeper into what happens, with very good time granularity. My main problem with the built-in monitoring apps is the time granularity factor. First, they only tell you about stuff once it's already happened, so you can't really rely on the built-in monitoring apps to monitor real-time activity. Butler SOS is a much better platform for that, for example, as it receives those signals in real time, or as near real-time as it can get. But yeah, start at the lowest level possible. Try to understand what's happening on the machine in general during those activities, and it helps to isolate those workloads: when you have a problematic app and user combo, isolating them to a specific node so that you can dive deeper into their activity is, I find, helpful. There have been some follow-ups in the chat to that. The original poster said: in self-service environments, users often run reloads manually; that's the environment he's trying to troubleshoot.
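For reference, here is a rough sketch of polling the engine healthcheck endpoint Levi mentions, which is what tools like Butler SOS consume. The host, port, and certificate-based authentication are assumptions for illustration (the endpoint is also reachable through a virtual proxy with normal proxy authentication); the JSON fields shown are the documented healthcheck properties.

```python
import time
import requests

# Assumes direct engine access with exported certificates; adjust to your deployment.
URL = "https://qlikserver.example.com:4747/engine/healthcheck"
CERT = ("client.pem", "client_key.pem")
HEADERS = {"X-Qlik-User": "UserDirectory=INTERNAL; UserId=sa_api"}

while True:
    h = requests.get(URL, headers=HEADERS, cert=CERT, verify=False).json()
    print(
        f"committed RAM: {h['mem']['committed']:.0f} MB | "
        f"CPU: {h['cpu']['total']}% | "
        f"active sessions: {h['session']['active']} | "
        f"apps in memory: {len(h['apps']['in_memory_docs'])} | "
        f"saturated: {h['saturated']}"
    )
    time.sleep(30)  # poll every 30 seconds
```

Butler SOS does essentially this (plus log events and much more), so treat the loop above as a way to see what such tools collect, not as a replacement for them.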
Yeah, so those would be traceable through the user ID and session ID, because they're run on the consumption node where their session is being served, where the app is loaded. Alright, moving on to the next question. Are there plans that you guys know about to implement the usability of Butler, and similar tools, in Qlik for monitoring? I don't think we have to; I mean, the third-party ecosystem is growing at such a rapid pace, both from industry-standard participants like Grafana and Elasticsearch, as well as our extensive partner ecosystem. And again, this goes back to Levi's comment about reinventing the stack. We produce signals about resource consumption in a number of ways. Now, what is the best way for your organization to ingest and keep track of these metrics over time? That's kind of up to the individual business scenario, I would say. And for my part, it feels like a little too much for the Qlik Sense platform to handle itself, and most of these metrics, again, are coming from open API endpoints that are very well documented, so there is nothing stopping you from implementing or creating something that works very, very well for you. Thank you for that. Next question: which third-party dashboard would you suggest if we wanted to monitor what a user clicks on? This is useful when investigating reports of users having greater access than they should. I believe this is more of a governance question than performance, but a very good topic nonetheless. Yeah, auditing. I would say this has more to do with data governance and auditing rather than performance; if they have access to something they shouldn't, the performance concerns are the least of your problems, or very low on your problem list. Now, let's try to think of a different way to put this so that we can talk about it. I would separate out: what is the access you're referring to, right? Are you talking about in-app access, so it's effectively Section Access, right? The Section Access is written incorrectly and they can see hierarchical values they shouldn't see? Or can they see apps that they shouldn't? Because the app access part is effectively something you can pretty easily, programmatically determine in the QMC under Audit. You can audit what users have access to, see the effective access, and say: Mario, if he were to log in, he would see apps one to a hundred. There's an API underneath that, so you can sort of programmatically figure that out; that's not that difficult. But the in-app access bit is a bit trickier, because ultimately that's an in-memory model. So the only real effective answer is to either programmatize that, or to standardize how apps apply Section Access and review those, which is a bit more of a heavy task. So, in the past, when troubleshooting Section Access level issues, one recommendation that I have for customers is to set up a dedicated, temporary virtual proxy on their Sense platform and set it up for header authentication, so that they can easily impersonate business users. This has to be done with good business controls in place, of course, and only authorized people should do it. But it's a great way to validate that the data slice that you're configuring for that user is correct once the app is actually loaded. It's something difficult to do programmatically, because the app has to be opened by the user, with the Section Access table applied in memory, in order to figure out exactly what to show and what not to show.
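To make the header-authentication idea concrete, here is a minimal sketch. The virtual proxy prefix "hdr" and header name "X-Qlik-HeaderAuth" are hypothetical; both are whatever you configure on the dedicated virtual proxy in the QMC, and this only works if that proxy is locked down as Mario describes, since anyone who can send the header can impersonate anyone.

```python
import requests

# Hypothetical values: use the prefix and header name you configured on the
# dedicated header-authentication virtual proxy in the QMC.
BASE = "https://qlikserver.example.com/hdr"
HEADER_NAME = "X-Qlik-HeaderAuth"

def open_session_as(user_id: str) -> requests.Session:
    """Open a hub session impersonating the given business user."""
    s = requests.Session()
    s.headers[HEADER_NAME] = user_id
    resp = s.get(f"{BASE}/hub/", verify=False)
    resp.raise_for_status()
    return s

# Validate the data slice a specific user would get once Section Access applies.
session = open_session_as("jane.doe")
print("Proxy session cookie:", session.cookies.get_dict())
```

From there you would open the app in a browser with that session (or drive the Engine API with the same cookie) and confirm the user sees only the rows they should.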
And it's a bit more in the weeds, but ultimately Section Access is just a table; it's just hidden from you. So if you were to comment out the Section Access declaration and the Section Application declaration and reload that app, you'd now have another table in the model, and you can just use the associative engine to say: Mario, you know, he's got this user ID; what dimensional values are associated with it? So that, to me, is the way I'd approach it, rather than trying to reinvent the wheel: just sort of dupe the app, reload it without Section Access, and drill down and use the associativity of the engine to figure that out. Because, again, Section Access is just a table in memory that's sort of hidden; it's used for security purposes, but it's really just a table. So I would approach it that way rather than trying to overcomplicate things. If you're just starting out, if you wanted to confirm things before going to production, Mario's example is absolutely fantastic: validate in a non-production environment before going to production, emulate some users, verify things, and then move to prod. So header auth works; there's a litany of other tools to be able to emulate users. Our team created a tool called iPortal; I think it still runs, it's built on stable APIs, but I just haven't really used it in a while. There are a lot of ways to emulate users, header auth being the sort of easiest, but least secure, in a sense. If you Google user emulation in Qlik, you should get a number of techniques that may fit you a bit better, depending on the environment. Yeah, I remember Daniel Pilla did a session on Section Access; maybe we can reference that. Otherwise, a Qlik Fix video is in the works, since this is a topic of interest. I just wanna mention that Göran made a comment; he was the one who posted about setting up Butler SOS. And he said: the key is often to correlate user events in the Sense log files with issues occurring; this allows you to see who is active the very moment the issue happened. So I appreciate that input. Next question. There are a couple of questions about Qlik Cloud, and that wasn't quite the topic for today, but since we've got some experts here, let's see which you guys can help with. Is there documentation for the Qlik Cloud monitoring apps with more details than the sheet or community post introducing the different apps, like what Levi was showing on Qlik Help? Do you guys know of any documentation relevant to those? As someone who's on the team that has produced a lot of these apps: no, there's not really that much more documentation. I'll definitely be screenshotting this and sending it to our product teams, because we would love these to be sort of first-class citizens in the product; we just need to get that done. But I'd say comment on the threads if you have particular questions, because I know myself and Daniel Pilla, who wrote the bulk of the apps, the reload monitor, the app metadata and audit apps, the Entitlement Analyzer, we all monitor those threads and answer any questions. So I'd go there if you have specific questions about what can and can't be monitored. Which sort of bleeds into the other question there: is session usage per sheet available in cloud? And I'd say session information is absolutely available via the Entitlement Analyzer.
So, Entitlement Analyzer... let's pull this up, share screen. So, the Entitlement Analyzer: this has, on that last sheet there, app consumption. This tells you which users opened what app and when, as well as the license type they used. This is where you'd get sort of session-level information; but in terms of sub-session-level information, like sheet usage, that's not exposed to date. Use the Entitlement Analyzer as the first pass, and then from there, unfortunately, you're gonna have to go back to the sort of old triangulation method, which is interviewing users, right, and getting a sense of what people are using. If you're using it to figure out what's useful, you've got 10 sheets and you don't know if they're all used, you may have to do more focused interviews with your users; but start out figuring out which users are opening the app and then go from there. And don't assume that... And you went on mute there, Mario. Sorry, we lost you. Nope, sorry, I was talking to Siri for a moment, I don't know why. So, what I was trying to say is: if it ain't there today in cloud, don't assume it will never be there; it may be there tomorrow. I know there are a lot of observability and auditing tools and mechanisms being implemented and worked on for coming releases. So yeah, stay tuned in this space and continue the conversation on Community; we'll be happy to tell you when something new is available. I did want to talk a little about best practices for the monitoring apps on Windows, specifically. For them to be effective, one of the strategies could be three to four tiers of apps. I would say, especially for the Operations Monitor and the Log Monitor, I would have a version that reloads every day with data only for that day, a version with the week's data, and a version with monthly data, with aggregations. There is no reason to reload that monthly chunk of data every day; that alone can cause performance issues and dramatically reduce its usefulness when you need quick answers by poking through the data model and making selections, trying to figure out exactly what happened, or to identify trends. Try to figure out what type of questions you're trying to get answers to before just going and reloading everything, because first, you could be waiting a very long time for that to finish, and second, that app may not even contain the level of detail that you need. So yeah, try with a small data window first and make sure that the indicators you're looking for are there, and if not, move on to something else. And Mario, I'm curious about your sort of gut feeling here, cause at scale that definitely makes sense: if you've got, you know, 10 nodes and 30,000 users, those apps can get quite large. Is there a scale point where you start to say this needs to happen? Because if you have 10 users on a site, who really cares? That's pretty slim. Yeah, obviously. But one way to look at this is log volumes, for example. So take any of the proxy logs: do you have five of them per day in your Archived Logs folder? Then I would say, yeah, start chunking down your monitoring apps, because those will be 8 MB-ish each per node, and there can be dozens and dozens of them per day. Trying to chunk those down, and then filtering down on the sort of topics that you wanna see in those logs, makes sense. But for small to medium-sized customers, where either concurrency is not a big factor or the amount of nodes is not a big factor that could end up acting as a multiplier,
let's say, in the data volume that you would ultimately get, I would keep it as simple as possible. But for large sites and enterprises, so huge concurrency, over a 1,000 users per day, or 1,000 sessions, let's say, I would start to chunk those down, and it depends on the level, too, right? So if you're trying to understand audience-level changes over time, you would focus on the Sessions Monitor, and you would need a longer timeframe to understand how your audience is moving or evolving over different projects and different applications. If you're trying to understand what happened with RAM consumption on a particular node, you would want to rely more on daily metrics for that. And the great thing about lifting and shipping this off, like doing this analysis outside of the monitoring apps, is that this timeframe slicing happens in the metrics engine and not in Qlik Sense. And because these tools rely on time-series databases instead of just regular relational databases, it is much, much faster for you to filter down from a month's worth of metrics to what happened in this fifteen-second slice across the board of multiple services. That's an excellent follow-up. Not everybody should bother to do this, of course. I would also vote to think about what your usage pattern is, right? So, I mean, let's say we're experiencing issues: you may reload the Operations Monitor or the Sessions Monitor, or whichever app you find valuable, on demand 20 times within a day, but I wouldn't set up an every-minute reload for that purpose, right? You're gonna be using server resources to reload an app no one views. So really start to think in terms of what the consumption pattern of that app is. By default, the Operations and License Monitors reload every hour, which for a lot of sites is not important, right? If you're going to check it at most once a day, adjust the reload schedule, and the same for any other monitoring app you use. Sort of focus on what you're using it for, and not just "I want to have the latest and greatest and see a little line chart wiggle throughout the day," because if you're not opening the app, you are using resources with no purpose. Yeah, and that's a huge resource expenditure, I would say, just to look at CPU and RAM metrics, for example, or even watch the session count go up and down. Butler SOS, for example, can give you those metrics in real time, with barely any resource overhead on the Qlik Sense side. Okay, thank you. Great discussion. We have a few more questions we need to get to, let's go. One of them being: what monitoring is possible with QSDA Pro in Qlik Cloud that native features can't handle? Like looking for issues with apps: expressions, objects, duplicated master items, etc. QSDA Pro, hmm! So QSDA Pro, Qlik Sense Document Analyzer Pro, is a tool created by Rob Wunderlich, which is a sort of next generation of his long-running Document Analyzer apps. He's had them for QlikView, for Qlik Sense; I've used them in many iterations, really excellent tools. And not that I need to plug it necessarily, but it's extremely affordable; the price for QSDA Pro is affordable enough that you may not even need approval on your expense report to purchase it. But what it does, just for context, and then we can go to alternatives or other things... Let's go; I forget the exact website that he uses.
I usually just Google things. What it does is basically drill inside of an application, right? Think of the App Metadata Analyzer: that's every app, but not inside of an app; that's metadata about an app. QSDA Pro focuses more on what's inside of the application. So, what expressions do you have? What's the response time for individual sheets? What is the nature of the actual fields you have, right? What's the cardinality? Do you have imperfect keys? So it really focuses on granular recommendations about in-app-level stuff. I'd say, in product, there's nothing that's gonna replicate exactly everything that he's doing. There is an automation that exists, called the Qlik Field Usage automation, created by a colleague of mine, which basically tells you what fields are used inside of an app, right? So I've got 500 fields in an app, and I don't know exactly what's being used: it will show you what is being used, and there's a little video where you can sort of drill into the view with it. Ultimately, it allows you to get a sense of: these fields are not being used, and if you were to drop those fields, you would save X amount of RAM. In this example, there are about 400 unused fields, and if we were to drop those fields, we would save a good proportion of the RAM, 66% of the RAM usage. So that is one element of what Rob has in his package; he does far, far more than that. And there's not an in-product capability that does that, because ultimately it requires an opinion, right? You have to have an opinion about what things should be, as opposed to what is. The metadata endpoints that Qlik has are really good at telling you what exists, not what it should be, because, in honesty, opinions differ. But Rob's tool is really excellent. I'd say, if I were doing Qlik development on a day-to-day basis, I'd want to use it, because ultimately it'll save your butt if you, you know, make a little mistake, or if you're taking over a new app you're not exactly familiar with and you want to get a sense of what's there and what's not; it's really excellent for that. And for the data governance part, and understanding how to optimize data models and how to deliver the least amount of usable data to users, I would point you all to NodeGraph. That's a great way to understand your data lineage, more on the back-end side of things, right? And to understand how your data sources are being accessed. That in itself can be part of this picture of what fields are being used, what data tables are being used inside of your apps, and a whole bunch of cleanup can be done there as well. So don't load things your users are not gonna use; don't load things that your apps are not gonna use at the graphical level, I think. But would you say that running these side by side would be complementary, or would one get you more to a good answer than the other? Depends on the question you're asking, ultimately, right? I mean, NodeGraph, or the Qlik Lineage Connectors, which are sort of the same thing at this point, will get you a sense of, you know: okay, I've got a field called Profit; where does it come from? It comes from a QVD,
which is then loaded from, you know, a Redshift database; and so if I were to mess with that QVD, then I'd impact the consumption of that field. Think of it in terms of field-level tracking: so a database is changing, your schema is changing, you're migrating to Snowflake; you want to get a sense of where your sources are and sort of a mapping of things. That's an excellent use case, and they do an excellent job at that. Once you're in the app, right, you've got the model, things are reloading perfectly fine, you still need to have opinions about how to develop the app. You know, going back to the question about performance: you notice that one sheet is performing terribly, you get a lot of reports about it; well, what's wrong, right? You could say, let's just throw a calculation condition on, you know, the table that's in that sheet, and say you have to drill down to make selections. That's the first thing most people approach a problem with; maybe that works, maybe it doesn't. If it doesn't, you'll need to think about what's the model, what's the data model, how are we presenting data? And that's where Rob's tool can give you opinions about the actual underlying structure of the model, as well as the expressions therein, right? If you've got your nested IFs and SUM(IF)s, because that seems intuitive, it will call that out. So think of it as: data flowing into the app, you want to have lineage; once the data is in the app, you want to have lineage to the expressions, and the expression part is the part that Rob's tool is really good at, as well as, I'm sure, a ton of other open source or sort of premium tools out there for that purpose. There was a follow-up question that I wanna address: NodeGraph, except for the Qlik Lineage Connector, cannot be used anymore, can it? That was the question, just to clarify the use of NodeGraph and how that works. Let me just double-check, because I haven't sold it recently; but again, I sell more of the SaaS platform than the client-managed platform. So it looks like in July 2024 the NodeGraph product will be deprecated; that was announced on January 11th, 2023. So it can be used; the question is ultimately how long it'll be supported for, and it looks like an email was sent out to customers about it. But ultimately, I don't see that what we're dropping here is the capabilities, maybe just that specific product branch. As I understand it, we will be building those capabilities into a specific lineage connector that will allow you to extract the same sort of information, but without, maybe, the NodeGraph interface. Yeah, I'm looking at it. Yeah, and that's the way to interpret it: ultimately, we're deprecating the client-managed version as the nexus of things, and instead sort of focusing it through SaaS, so you can use the lineage connectors to present that data into a SaaS tenant from your client-managed environment. And that's the part that we're making a decision on, not the capabilities. Okay, thank you; just wanted to give you time to complete the thought. Next question. Alright, yeah, and I wanted to quickly appreciate Göran's response here; he's absolutely right.
One other thing that I didn't think about, when it comes to slicing and reducing the size of your monitoring apps, is to build them on demand with On-Demand App Generation, based on the time selections that you need in the moment, and kind of just have that large top-level app be reloaded over time. I honestly don't think that these apps are useful until you actually need to consult them; having them reloading in the background without eyeballs on them is a waste of resources, in my opinion. But of course, if you don't do periodic reloads, the moment that you do need it, to troubleshoot something or to gain some insights, you may be waiting for it for a long time. Unless you know the timeframe that you want to look at, in which case ODAG is your friend. Okay, thank you. Next question: in a hybrid deployment (Qlik Sense Enterprise on-site, QlikView on-site, QAP), it is extremely hard to get usage-by-app metrics across all platforms. The best so far is usage by app in the Enterprise site's License Monitor, but that doesn't include QAP site usage; and the Entitlement Analyzer helps on SaaS, but is limited to SaaS usage. On this note, it would also be nice to pull the catalog and related metadata into those views; that would be very helpful as we try to migrate more to SaaS. Alright, thank you, Debbie, for that, and for those comments; I would have to agree with you there. Yes, we don't have a unified metrics platform at the moment, but we're making efforts in that direction, for sure, and there are gonna be a lot more data sources that you'll be able to bring into your admin console in cloud. But I am interested in this QAP part. It's essentially Qlik Sense on Windows without the hub, so you should be able to leverage the same monitoring applications; I'm not sure what the blocker is there in QAP. Maybe you can ask a follow-up question, and we'll think about that. To jump in: you can use the default monitoring apps, but I think she's asking more for the aggregation of all the different places, right? So we've got client-managed QlikView and Qlik Sense, we have QAP, and we have SaaS. I want to have a unified master-and-commander of everything, right, to monitor all 4 of the individual installations. You certainly can take the data from those applications and sort of build something yourself, like the models are already built in all 3 of them, but to Mario's point, there's not a unified master-and-commander app. It's a bit bespoke for now: the License Monitor in Qlik Sense will get you both Qlik Sense and QlikView, as there's a way to sort of analyze QlikView logs as well, so that'll get you 2 of them. You've got the Entitlement Analyzer inside of SaaS, and then the Operations Monitor or License Monitor from the QAP side. That may just require a little bit of, you know, dumping of data to QVDs, and then pointing that into a unified view of the client-managed portions; then you have the SaaS portion of things. Alright, thanks! That's a good tip. These are great questions; I appreciate everybody's participation. This next question is, unfortunately, another Qlik Cloud question, he says: is an administrative application planned for the future, which, for example, might include QMC change logs?
There is an Events tab in the administration console, but unfortunately only limited filtering is possible there. Thank you very much. So, zero visibility into that roadmap from my side, but I will take it to those discussions, for sure. Thanks for the feedback and suggestion. Do you know anything about this, Levi? I'm not aware of anything, but it's honestly a very broad topic. I'd certainly be curious about what are the types of things you want to monitor; for example, another attendee mentioned sheet usage. That's one I know we've raised from the pre-sales angle with product management several times as being an absolutely, you know, essential ingredient to deploying at scale, because you need to be able to curate the stuff that's been created. You can exist for 2, 3 years on a site and have it not come up, but after that point you start to really need it. Yeah, especially with a large number of users. But if you have more specific requirements... I think this was more from an admin standpoint, so kind of understanding QMC-level change logs and keeping track of who makes what types of changes: groups, space administration, apps. Gotcha! That sort of stuff is what I'm gathering. Yeah. I just wanted to mention, I threw into the chat the link to Qlik Ideation. I think that'd be an excellent forum for making product audit requests like this; it puts your idea in there, and the more people vote, the more attention it gets to what customers are needing. And our product managers definitely take a look at that. Yeah, actively monitored by architects and product managers. Yeah, absolutely. Alright, we have time for a couple more questions, if you guys wanna go ahead and submit. One that I would love to bring up, if you guys can shed some light on it, is setting up alerts for admins. Is there any kind of threshold alert that you might be able to set? Or how can we find documentation on building alerts, so that on top of monitoring there's some mechanism built in? Right. From my part, it depends on what you have in place for monitoring. Oftentimes alerting comes along in those suites: Grafana and Loki, for example, and all the Grafana suite, Prometheus, etc., have an alerting, a notification, layer on top, but of course you need to define those thresholds. There are warnings and alerts you can get based on the Telemetry Dashboard, but those would be more specific to your system and resource consumption there, and that also requires you to understand what is normal versus abnormal. Pointing to these cloud-based observability tools: one of the great things that you can do in Prometheus, as well as in Datadog and in Elasticsearch with the right level of subscription, is anomaly detection. So you have a system that is constantly looking at certain indicators that you select and tries to understand what is a standard deviation, what is normal usage, what is abnormal usage, and you can set percentage thresholds instead of fixed amounts to be alerted on. So, if something increases by 10%, for example, or if RAM consumption all of a sudden goes up 50% compared to usual metrics, you can be alerted based on that. There's nothing in the product per se to set these up; you will need something to monitor those metrics and then alert based on that. But most of the tools that we showcased during that session have that layer built in, and of course, in cloud, you have alerts that are native to the product.
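The percentage-over-baseline idea Mario describes is simple to sketch. Here is an illustrative Python loop that keeps a rolling baseline of RAM samples (fed from whatever collector you already have, e.g. the healthcheck poller shown earlier) and flags a spike; the window size and 150% threshold are arbitrary assumptions, not product defaults.

```python
from collections import deque

WINDOW = 120         # number of recent samples forming the baseline
THRESHOLD = 1.5      # alert when a sample exceeds 150% of the baseline average

baseline = deque(maxlen=WINDOW)

def check_sample(ram_mb: float) -> None:
    """Compare a new RAM reading against the rolling baseline and flag spikes."""
    if len(baseline) == WINDOW:
        avg = sum(baseline) / WINDOW
        if ram_mb > avg * THRESHOLD:
            # Replace print with your mail/Slack/pager integration.
            print(f"ALERT: RAM {ram_mb:.0f} MB is {ram_mb / avg:.0%} of baseline {avg:.0f} MB")
    baseline.append(ram_mb)

# Feed it samples from your metrics collector, e.g. the healthcheck 'mem.committed' value.
for sample in [8000] * 120 + [13000]:
    check_sample(sample)
```

In a real deployment you would express the same logic as Grafana/Prometheus alert rules or Datadog monitors rather than a hand-rolled loop; the point is only that relative-to-baseline alerting becomes trivial once the samples live outside Qlik in a time-series store.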
Great. Well, everybody, thank you so much for your questions; this has been a really great session. Oh, someone slipped in one last one. Yeah, so to address Chris's last comment here: I am fully aware of that, and we're gonna be working very hard from support, together with the community at large, to severely improve the situation during this year. I would love us to have a repository of templates for Loki, for Elasticsearch, for Datadog, just to make those integrations a lot easier for admins. But thank you very much. Well, guys, here is a QR code to a survey about today's session; we love getting your feedback. And just as a reminder, our next session will be Tuesday, April 25th, and we're gonna be focusing on Qlik Replicate. I wanna thank our panelists today, Mario and Levi; really appreciate having you guys share with us. It's been a pleasure. Thank you for making this happen, and thank you very much, everybody; have a great rest of your day. Thank you. Thanks, everyone. Catch you on the next one.
