I recently had the opportunity to demonstrate Qlik Sense for the Director of a large health system’s Revenue Cycle team. The consultant who recommended us spoke with me ahead of time and we got along great. “This guy has been around forever and has seen everything. He isn’t interested in the same old boring ways of seeing the current numbers,” he said.

 

Hehe. That certainly makes at least two of us. Are you pretty tired of seeing the same old account balance visualizations? Yeah. So that makes 1,293,394 of us who have seen the same things for years and want something new.

 

Then our discussion led to something really interesting. “He is really looking for some way to visualize the entire flow through the Revenue Cycle process, not just the current status.” This demo just got really interesting. “How can we show claims and dollars as they flow from state to state through the process?” Suddenly “light dawns on Marblehead” and I’ve got it, baby ... a Sankey Diagram.

 

I’d been waiting for over 6 months for a perfect case to utilize a Sankey Diagram and voila, here it was, teed up for me perfectly. Here is what I came up with:

Claims#Sankey

 

This chart represents the # of Claims. The AR section in blue is the number of claims for patients where all of the data was available on the first try and could be billed. The Pre-AR section in purple represents the patients whose charts had something askew and could not be billed without additional work.

 

“Big deal!” you say.

 

“Even dreaded pie charts can show that context comparison,” you say.

 

“Did I forget that visualizations are supposed to add value to the raw data?” you ask.

 

Stay with me about 2 minutes longer and you’ll see the method to my madness in using a Sankey Diagram, where you see the entire flow. The proportions are about even on the left, but notice in the upper right corner that they aren’t. Also notice one of the wonderful features of a Sankey Diagram: if you hover over an area it turns dark so you can see the path that area has traveled, as well as the number it represents. Here it’s 100, and it brings to the surface the fact that a smaller percentage of claims end up being paid within 30 days when we have to work on the patient’s data in order to file the claim. The proportions are almost even for those paid within 60 days, and the proportion of the dreaded money that doesn’t get paid for 90 days or longer is higher for the claims that needed to be worked.

 

This, my friends, is where the Sankey Diagram stands alone in the value it can add. You can easily run your eyes along the “flow” and track anything that seems suspicious. And it gets better.

 

Like any other chart, the “# of Claims” doesn’t have to be the only measurement you visualize. Would you like to see the dollar ($) values? Why not, you have nothing better to do and I’m sure you’re curious … here it is:

Claims$Sankey

 

Right away something else jumps out … the proportion of claims was pretty close for AR and Pre-AR, but the dollar values are very far apart. How many bar charts, pie charts and tables would we have had to look at to uncover what was blatantly obvious thanks to our new friend the Sankey Diagram? Almost immediately we see that we are working a lot of cases to clean up data so we can bill, but the dollar volumes for those cases are smaller than the dollar volumes of the “clean data.” What’s causing that? Why do claims that have to be worked before being billed take so much longer to pay once they are billed? True analytics will answer the question you had when you started, but it will also cause you to ask more questions.
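If you want to try this yourself, the switch between the two views is really just a change of measure over the same flow dimensions. Here is a minimal sketch of the chart expressions; the field names (ClaimID, ClaimAmount) and the stage fields feeding the Sankey extension are assumptions, so swap in your own:

// Dimensions (flow stages), hypothetical field names
//   BillingStatus  -> AR / Pre-AR
//   PaymentBucket  -> Paid < 30 days / Paid < 60 days / 90+ days ...

// Measure for the "# of Claims" view
Count(DISTINCT ClaimID)

// Measure for the "Dollar ($)" view
Sum(ClaimAmount)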

 

I’m lucky that a consultant told me about a customer who isn’t just satisfied with the status quo, challenging me to use a visualization I had been dying to try for 6 months. Consider how you could shake things up completely by using a Sankey Diagram or another sexy new visualization.

 

Forget the example data that the author chose to show (sales/dollars) and focus instead on what the essence of the visualization does for the end user. Why did someone spend weeks/months of their lives designing this new way of looking at 0’s and 1’s? What will it let your end users see in a way that they probably couldn’t get by viewing 10 raw numbers?

 

Click here to check out the Sankey Diagram as well as dozens of other great visualizations you can utilize in Qlik Sense.

 

Click here to check out my blog and see other posts on visualization concepts.

This past week I had my annual review. This time of year always makes me envious of those who produce widgets. I would love to be able to show my boss a list of all of the widget producers and say “See boss, I’m in the top 2% of all of the widget producers in the company and the top 5% of widget producers around the world. Please compensate me accordingly.”

 

Since you are reading this post the odds are high that, like me, you produce Business Intelligence applications rather than widgets. So how do we evaluate our work? How should management evaluate us?

 

One way to evaluate our work might be to simply count the number of applications that we build. Of course I could barely contain a laugh just writing that. Obviously that approach is fraught with problems, so let’s not even consider it.

 

In a strictly financial sense many types of businesses can measure the return on investment (ROI). But perhaps the application we spent 9 months building is intended to help resolve bottlenecks in the company that will lead to improved patient satisfaction. The resolutions that surface may cost the company more money. Does that mean we failed? Certainly not. So we can’t measure ourselves by dollars spent and dollars saved either.

 

If you follow industry pundits, tweets and other social media you might be familiar with the focus of many in the industry on “user adoption.” Evaluating to what degree users actually utilize our applications is probably a good way to measure ourselves. It could be argued that it isn’t a perfect measure of our efforts; however, it does seem to be a pretty good measure of our effectiveness. Because whether we like it or not, our jobs involve more than just slapping an application together. End user adoption, or the lack thereof, measures our ability to brand, market and support our application. It is also a pretty good representation of how trustworthy the data in our application is. One of the most important things that end user adoption will measure is our ability to effectively visualize the data in ways that encourage usage.

 

Taking advantage of Qlikview logfiles

One of the nice features of Qlikview is that the server keeps a log file in the background that records information about every single end user session that is invoked. Since the introduction to this post was so long I will spare you the pain of reading the raw data of a session log file and skip right to ways to effectively visualize end user adoption using the data that those log files contain. Please refer to other posts and discussions directly in the Qlik Community for where to find and how to access these log files.

The session log files contain information that would let us look at things like “how many users used the application,” “how many times were sessions invoked” and “how many minutes were used.” Thus the first chart I present contains all 3 of those measures.
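In case you want to build this yourself, here is a minimal sketch of the load and the three measures. The QVD name and the field names (Document, UserId, SessionId, SessionMinutes) are assumptions about what your prepared session log data contains, so adjust them to your own model:

// Load the prepared session data (path and field names are assumptions)
Sessions:
LOAD
    Document,
    UserId,
    SessionId,
    SessionMinutes
FROM SessionLog.qvd (qvd);

// Chart expressions, with Document as the dimension
Count(DISTINCT UserId)       // # of Users
Count(DISTINCT SessionId)    // # of Sessions
Sum(SessionMinutes)          // # of Minutes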

Evaluation_Method_Users

The first point I want to make is that I’ve masked the real document names. I did this for two reasons. First, you don’t need to know what my real document names are. The more important reason is that I don’t want to know what the real document names are. At least for the duration of the time I’m trying to figure out how to effectively measure “end user adoption.” That seems rather odd so let me explain.

 

Overcoming bias when choosing how to measure

I believe that we all have biases. I haven’t developed all of my company’s applications, and frankly I have some favorites among those that I have developed and some that I was forced against my will to develop. If I knew what the application names were I could be inclined to choose and recommend the metrics that make “my” applications look the best.

 

If you refer back to the chart you will see that Application 69 has the greatest number of users by a large margin. If I knew that Application 69 was written by me I could immediately come to the conclusion that our end user adoption should be based on the number of users that use the application. If I also wrote Application 85 I would probably really push for that policy. “Show me the money.”

 

But wait, someone else on my team seems to have an objection because it appears that Application 85 has a lot of distinct users but only a tiny number of sessions and a very tiny number of minutes. Hard for me to argue with that, so I put my outstretched hand back into my pocket.

 

A discussion ensues for several minutes and perhaps we re-sort the chart by number of sessions. Then by Number of Minutes.

Evaluation_Method_Minutes

The author of Applications 33, 49 and 56 now suggests that we evaluate end user adoption by the number of minutes used. I’d like to vote for that since I was the author of Application 69, but I also authored applications at the bottom of the chart for number of minutes. I’m kind of in a no-win situation on this.

 

Can you understand my point in masking the document names so that we don’t really know which application was developed by whom? If we are choosing a method of evaluation we need to hide the real document names so that nobody pushes for a choice just because it is better for them.

 

Perhaps of equal importance, can you appreciate the beauty of having all 3 columns displayed with numbers as well as bar charts? Obvious patterns jump off the page that help you avoid jumping to quick conclusions based on one value or another. If we are going to come up with an evaluation method we need the visualization to be really crisp, and this approach provides that.

 

You might be screaming “You rotten Qlik Dork … just tell me which of the measures is the right one to use!” To which my reply is a resounding “None of them and yet all of them.”

 

You see, nobody said we had to use a single value to do the evaluation of end user adoption, and there is so much more that we can do with Qlikview to present a more complete picture. The chart below slices and dices the data a few other ways to present a different picture.

Evaluation_Method_2

The first column presents the average number of minutes per session. I might argue that value really represents user adoption of data analytics applications. Regardless of whether the application was built for a team of 5 or 50 to consume, it reflects how long users stay engaged with the application. If we believe that is the goal then perhaps this is the perfect measure. Woo hoo. I think I wrote Application 53.

 

Oh wait a second, the other developer raises their hand to complain yet again, and points out that the average is a really poor statistical indicator and that the median is a better measurement because it isn’t so swayed by outliers. In theory I agree, but as the author of Application 53 it appears this statistics mumbo jumbo is costing me a big fat raise, because while the average number of minutes per session is the highest, the median number of minutes per session is a measly 5. Phooey on heat maps I say, because if it weren’t color coded nobody would have spotted the 5.

 

Whether we use the average number of minutes or the median number of minutes, both point out something very interesting. If you look at the very bottom at the numbers for Application 69, it appears that no single measurement like # of Users/Sessions/Minutes alone showed a complete picture. Lots of total users and minutes, just not many minutes per session. Quantity for sure, but not necessarily much analytical quality.

 

The third column illustrates a completely different measurement, the number of sessions per user. In other words, how frequently are users engaging with our application? Like the raw data displayed in chart 1, displaying all 3 of these combined measurements helps paint a broader picture: Is our application engaging users for a very long time? Are they engaging once every 6 months, or are they coming back every other day and working?
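Here is a hedged sketch of how those derived measures might be expressed, again assuming one row per session with the hypothetical fields from the earlier sketch:

// Average minutes per session
Sum(SessionMinutes) / Count(DISTINCT SessionId)

// Median minutes per session (median across the per-session totals)
Median(Aggr(Sum(SessionMinutes), Document, SessionId))

// Sessions per user
Count(DISTINCT SessionId) / Count(DISTINCT UserId)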

 

Box plots to the rescue?

If we produce a box plot and make a few minor tweaks we can see that Application 53 does in fact have a very high max value but a very low median of 5.

BoxPlot_Evaluation

But the beauty of what a box plot visualizes for us can best be seen as I scroll to the right a bit. Notice that Applications 87 and 8 both have pretty high medians, which we would also see in the heat map chart, but more importantly you can see that even their lowest values are near 10 minutes per session. Meaning that when these applications are used they are used for a good amount of time, and the time is pretty consistent within a predictable range. Perhaps we could measure end user adoption based on the predictability and consistency with which users engage?

BoxPlot2
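If you want to reproduce a box plot like these, the five measures behind it might be sketched with Fractile() over the same per-session totals; as before, the field names are assumptions:

// Per Document: whiskers and quartiles of minutes per session
Min(Aggr(Sum(SessionMinutes), Document, SessionId))               // lower whisker
Fractile(Aggr(Sum(SessionMinutes), Document, SessionId), 0.25)    // first quartile
Median(Aggr(Sum(SessionMinutes), Document, SessionId))            // median
Fractile(Aggr(Sum(SessionMinutes), Document, SessionId), 0.75)    // third quartile
Max(Aggr(Sum(SessionMinutes), Document, SessionId))               // upper whisker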

Of course any kind of visualization of end user adoption would be incomplete if we didn’t look at the values over time so that we could see if things were getting better, stabilizing or getting worse.

Evaluation_Trend

While I focused on each method individually, the wonderful thing about visualizing data in Qlikview is that we can keep the entire picture together so that we get a true overview. A scorecard of sorts for each application.

WholePicture

The truth about visualizing end user adoption

Click here to read the conclusion of this post on my blog QlikDork.com

 

Please note: The attachment provides the sample visualizations above in case you want to see how they are accomplished. The load script is loading data from a QVD that has already read in all of the Qlikview log files and applied some logic to them. If you refer to other posts and discussion items you will be able to easily read in your server's log files and actually utilize what I've given you in production.

 

Please respond

I would love to read how you measure end user adoption at your facility, or how in general you measure the effectiveness of the Qlikview/Qlik Sense applications that you have developed.

Visualizing Length of Stay

Posted by Dalton Ruer Mar 15, 2015

What do the numbers 3.53, 17.6 and 4 all have in common?

 

 

They are completely useless when displayed by themselves because they have no context.

 

 

Length of Stay is a vastly important metric in health care and here is the most common way to display it:

AVGLOS

 

Perhaps you can make it prettier using a gauge, an LED, some giant-sized font or some really out-of-this-world Java extension, but will that really change the fact that it’s basically a meaningless number without context?

 

So often in the health care field we are so starved for data that we can’t wait to slap the values on the screen and then start slicing and dicing them before really thinking through the more basic question: “What value does the number have?” Prettier isn’t better … it’s just prettier.

 

Average LOS is a real number that truly represents our average LOS. But does average length of stay truly represent how well we are doing? Is it fair to compare our average length of stay to anyone else? Is it even fair to compare the average length of stay within our organization between time periods? What about comparing the average length of stay between specialties?
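For reference, the number itself is simple to produce; it is the context that is hard. A minimal sketch, assuming hypothetical AdmitDateTime and DischargeDateTime fields and an assumed RawEncounters source table:

// Length of stay in days (Qlik stores timestamps as numeric days)
Encounters:
LOAD
    EncounterID,
    Specialty,
    DischargeDateTime - AdmitDateTime as LengthOfStayDays
RESIDENT RawEncounters;

// Chart expression; with Specialty as the dimension it becomes LOS by specialty
Avg(LengthOfStayDays)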

 

 

LOSBySpecialty

 

I submit that any comparison of the average length of stay is like comparing the size of pizzas to the size of chocolates. One is much bigger than the other but who cares … we expect it to be. Just like we would expect the average length of stay for obstetrics patients to be less than the average length of stay for cardiology patients.

 

But LOS is important, and the purpose of analytics is to measure where we are and help us find areas that need improving, so comparisons are only natural. So how can we go about visualizing the length of stay in a meaningful way that doesn’t involve comparing pizza sizes to chocolate sizes?

 

Click here to read my entire post on my website QlikDork.com

Visualizing the Story

Posted by Dalton Ruer Feb 7, 2015

"It was the best of times,

It was the worst of times,

It was the age of wisdom,

It was the age of foolishness …”

 

These are the unforgettable opening lines to Charles Dickens’ classic “A Tale of Two Cities.” One of the first pieces of classic literature I ever read. I had just entered the 10th grade and our teacher loved this book so much that we literally spent half of the entire year just reading and reviewing it.

Prints3

 

I began thinking about this book after viewing this piece of artwork from Stefanie Posavec. The complete piece of art represents a chapter by chapter, paragraph by paragraph, sentence by sentence and word by word depiction of part one of one of her favorite books. She quite literally visualized the story.

 

If I had a fraction of Stefanie’s creativity, the graphic art capabilities or the time to truly do it justice I would love to visualize “A Tale of Two Cities.”

 

I’m more of a people person than a words person so I would probably begin with a wonderful Chord diagram that mapped all of the characters to various character traits. How many traits would Charles Darnay, Doctor Manette and Madame Defarge have in common?

 

SankeyDiagram

No wait, the main point of the story is the incredible transformation in characters like Sydney Carton. What great paths a Sankey might divulge if used to map Sydney’s transformation from a lazy alcoholic into a selfless martyr. Where do his changes intersect with others in their development?

 

Sorry I digress … the whole point of sharing those opening lines was to describe the period in which we find ourselves, not theirs.

 

We have petabytes of data available and yet business users can’t access a fraction of it. We have examples of truly great work, ridiculous amounts of computing and graphic horsepower at our fingertips and yet we can’t build data visualizations that business users want to use despite being starved for data.

I have a confession to make

I recently completed a webinar in which, quite frankly, a fear raised its ugly head and I backed down from it. I was demonstrating what I had worked on with Qlikview and some of the super cool new functionality of Qlik Sense. Not a problem. I’m very comfortable talking about my data and my work. My fear came through in my choice not to present one of the coolest new features of Qlik Sense, which is “storytelling.” I wasn’t really sure how to frame the graphics I was showing as a story that would be compelling. So I chose to simply avoid the issue.

 

I’m pretty confident that, statistically speaking, there are lots of others who are very much like me. We can import data from cocktail napkins, we have incredible tools at our disposal, but what we don’t have is the background in storytelling.

 

Quite honestly I’ve avoided learning the art of storytelling because I’ve felt like presenting my “view” of the story is against the rules. Aren’t we just supposed to share the underlying data without any of our own prejudice?

 

The data should just speak for itself, shouldn’t it?

 

It’s the reader’s responsibility to know what they want the data for, isn’t it?

 

Everything I’ve read in the past few weeks since my presentation has told me … NO.

 

Everything I’ve read or listened to lately indicates that great infographics and great news stories have one thing in common … the author presented and guided the story from their point of view. They don’t alter facts to hide other views. They simply provide a lead or direction to the story to ensure that they have at least presented a path for the reader. If the reader chooses to dive really deep they can.

 

Local Hospital Workers

Battle 5 Headed Monster

 

 

 

Is my take on how our colleagues in data journalism might approach the very situation that many of us in the healthcare field find ourselves in. Nearly every day we battle:

  • Meaningful Use and a myriad of other governmental regulations
  • Converting more and more processes from paper to electronics
  • Lessening payment percentages from the government and insurance companies
  • Increasing number of patients with super high deductible insurance plans
  • An aging population that has multiple complications
  • A rapidly growing antibiotic resistant population

 

Oops! That’s 6 heads. Unlike data journalists I don’t have an editor to check my facts. Don’t let that cause you to miss this really important point … the data visualizations we build should have attention-grabbing headlines. For many, many years data journalists have set the example of how to draw readers in, and yet we struggle and sulk at the end of each day because, with all of the resources available to us, we don’t have the kind of adoption rates for our applications that they get with pen and paper and brute force in gathering their data sources.

 

What I’m learning tells me that the next big step is to back up those headlines with key points that keep the audience’s attention … like the 6 that I mentioned.

 

Those points should then be able to be drilled into until they exhaust the data you have available. The path that your users choose to take is up to them. That’s where your directed story changes course and becomes truly interactive and user guided.

 

The main point I’m trying to make is that regardless of the medium we should all be visualizing the story. The vast amount of detail which we can provide in our data visualizations should be the only thing that separates us from data journalists who are bound by a limited amount of space.

 

My own metamorphosis may not be quite like that of Sydney Carton. My work may never draw the kind of audience that Stefanie Posavec has. But, my friends, my eyes are being opened wide to the great synergy between data visualization and data journalism. To anyone who tries to stand in my way as I continue to learn this great craft of storytelling I say … “Off with their heads.”

 

How would you visualize one of your favorite masterpieces?

 

How do you go about telling a story with your data?

http://qlikdork.com/wp-content/uploads/2015/02/Boom.jpg

Not that anyone would be surprised to discover a direct correlation between supply and demand, but in my last post on “Visualizing Knowledge” the scatter plot proved to be a very advantageous chart type in that it showed there was a very distinct correlation between the two. On the busiest days (Mondays) it took longer on average for a patient to transition from walking in the door to being placed in a bed. The scatter plot was a good visualization choice to help us quickly and effectively see the relationships by day of week.

 

The goal of this post is to try and drive deeper, because while “day of the week” is a common unit of measure, in reality it is a rather large unit of measure that is made up of 24 individual hours. More specifically, we want to find a way of visualizing the busy-ness hour by hour. Just for fun let’s challenge ourselves to provide what Alberto Cairo refers to as the “boom” effect in his book “The Functional Art.” In simplest terms we want the graphic to show up as a visual pyrotechnic. I want our visualization to explode off the page into the reader’s mind.

 

The natural starting point I suppose would be a pivot table. It’s a simple way of visualizing a measurement like “# of people who walk in the door” across multiple dimensions like “Day of the Week” and “Hour of the Day.” Easy peasy right?
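A quick sketch of how the two dimensions and the measure behind such a pivot might be prepared; the ArrivalDateTime and ArrivalID field names and the RawEDArrivals source table are assumptions about your own model:

// Load-script derivations for the two dimensions
ArrivalTimes:
LOAD
    ArrivalID,
    WeekDay(ArrivalDateTime) as ArrivalDayOfWeek,   // Mon..Sun
    Hour(ArrivalDateTime)    as ArrivalHour         // 0..23
RESIDENT RawEDArrivals;

// Pivot table measure: # of people who walked in the door
Count(DISTINCT ArrivalID)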

 

PivotTable

 

Easy peasy to create perhaps, but is it really so easy to read? A pivot table may be the perfect choice for multi-dimensional analysis when both dimensions have few values. But in this case we have 168 unique cells and it is all but impossible to spot any immediate patterns.

 

Fortunately I specifically indicated that we needed to provide a “boom” factor in our visualization, or we might have been tempted to stop and simply say “look Mr. E. D. Director, you asked to see 168 values and I showed them to you.” In his aforementioned book “The Functional Art” Alberto Cairo spends a great deal of time explaining the science behind how we visualize anything as humans.

 

In one section he uses an image of what could be a pivot table and says “The brain is much better at quickly detecting shade variations than shape variations.” The point he was making is that it’s nearly impossible for humans to see that many numbers side by side and on top of each other and make them out. In the following, try to find all of the 6’s:

4 3 6 9 1 6 5 7 8 2 4

9 8 4 6 3 2 1 9 5 3 1

7 2 8 1 4 5 9 6 7 3 1

2 4 1 5 6 8 1 4 2 5 3

 

He then shows an alternative image with the exact same sequence of numbers but uses shading and suddenly what we as humans can do is made abundantly clear:

4 3 6 9 1 6 5 7 8 2 4

9 8 4 6 3 2 1 9 5 3 1

7 2 8 1 4 5 9 6 7 3 1

2 4 1 5 6 8 1 4 2 5 3

 

Cairo immediately goes on in his book to describe the Gestalt theory that human brains don’t see patches of color and shapes as individual entities, but as aggregates. How can we take advantage of that? We need that kind of impact for Mr. E. D. Director. We want him to be able to immediately visualize the busy time periods but avoid all of the busy-ness of 168 cells with numbers.

 

You probably guessed that we don’t want to use a 168 slice pie chart.

 

Nor do we want to use a line chart with 7 different lines each with 24 points.

 

What we want is affectionately known as a heat map.

HeatMap

 

There can be no mistaking the busiest hours across all 168 unique cells. There can be no mistaking the obvious patterns either.
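One hedged way to get that kind of shading in Qlikview is to keep the same two dimensions and measure and drive each cell’s background color from the value itself, for example with ColorMix1(). The field names are the same assumptions used in the pivot sketch above:

// Background color expression: scale each cell's count against the busiest cell,
// fading from white (quiet) to red (busy)
ColorMix1(
    Count(DISTINCT ArrivalID)
      / Max(TOTAL Aggr(Count(DISTINCT ArrivalID), ArrivalDayOfWeek, ArrivalHour)),
    White(),
    Red()
)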

 

You can read the entire article on my blog site by clicking this link.

Visualizing Knowledge

Posted by Dalton Ruer Jan 18, 2015

Bed

I love it when a plan comes together.

 

For the past several weeks I’ve been reading furiously and working like a dog at my day job, while also trying to come up to speed on Qlik Sense as quickly as I can. Today everything seemed to come together so well I felt like I just had to share. My first project in Qlik Sense involved trying to replicate as much of the functionality of our Emergency Department Dashboard as I could. If you’ve ever been to an Emergency Department you probably dread going, not just because you are in a crisis but because there is often a fear that you’ll “be waiting forever to be seen.”

 

So one of the major metrics used to evaluate Emergency Departments is the time it takes to get the patient into a bed. An arrival-to-bed time of 15 minutes means it took 15 minutes from the time the patient arrived in the ED until they were taken to a bed. That 15 minutes would include the time to register you when you walked in the door, find out what you believe is wrong with you, determine the severity of your condition and find a location to put you. Naturally it only made sense for me to begin visualizing this “Door to Bed” metric with Qlik Sense and then work up from there.
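A minimal sketch of how the metric itself might be calculated; the ArrivalDateTime and BedDateTime field names and the RawEDArrivals source table are assumptions, and since Qlik stores timestamps as fractional days, multiplying by 1440 converts the difference to minutes:

// Load-script calculation of minutes from door to bed
EDArrivals:
LOAD
    ArrivalID,
    (BedDateTime - ArrivalDateTime) * 1440 as MinutesToBed
RESIDENT RawEDArrivals;

// Dashboard KPI
Median(MinutesToBed)    // Median Minutes from Door to Bed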

 

This week one of the really fascinating articles I read was an interview with David McCandless about his new book “Knowledge is Beautiful.” His previous work was titled “Information is Beautiful” so naturally the interviewer was asking him questions about the difference between “information” and “knowledge.” He responded with “… what I’ve discovered through my work is that data is granular, information is clusters and bites of something more structured; and knowledge is something deeper and richer, more interconnected and complex.” In the article David additionally shared that in the first book he had shared singular visualizations, but in his new book he couldn’t stop at one graphic, because he wanted to answer all the key questions and address all the aspects of a given subject so that it would be knowledge.

 

On the dashboard for this new Qlik Sense application I already displayed the simple “Median Minutes from Door to Bed.” While a critical piece of the puzzle, it’s only a very small part of the bigger picture and would be considered “information.” Like David, I wanted to go deeper. I chose to begin my exploration with a scatter plot that would show the Median Minutes to Bed across the Number of Arrivals and the Day of the Week, and this is what I came up with.

 

TotalArrivals

 

It looked really neat, but as I began selecting various months I realized what I was displaying was very misleading. The information was totally correct, but I happen to know that our busiest day of the week by far is Monday. Yet because I had chosen a month that happened to have more Wednesdays, Thursdays and Fridays, the total number of arrivals for those days was greater. It’s only logical that if the ED is really busy the time to get you into a bed would be greater, right? But the total number of arrivals over the course of a month doesn’t really equate to busy when the number of those days of the week can vary. Rats!

 

Here is one of the beauties that I’ve already discovered with Qlik Sense … it’s so easy to drag and drop or choose a different metric that it was nothing for me to choose “Average # of Arrivals” instead of “Total # of Arrivals” so that the chart could depict the “busy” aspect of the knowledge I was trying to visualize, and as I was hoping, a clear pattern emerged. Something that wouldn’t necessarily jump off the page if I had chosen a pivot table to show the raw data. There is a clear correlation between how busy the ED is and how long it takes to bed patients.

AverageArrivals
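The swap itself is just a change from a raw count to a count normalized by how many of each weekday fall within the selection. A hedged sketch, reusing the assumed field names from above plus a derived ArrivalDate field:

// Total # of Arrivals (misleading when a month has unequal counts of each weekday)
Count(DISTINCT ArrivalID)

// Average # of Arrivals: with day of week as the dimension, divide total arrivals
// by the number of distinct calendar dates falling on that weekday
// (ArrivalDate assumed as Date(Floor(ArrivalDateTime)) in the load script)
Count(DISTINCT ArrivalID) / Count(DISTINCT ArrivalDate)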

 

When I removed the date filter another interesting thing jumped out at me. Or I suppose in this case it would be more fitting to say something disappeared … Friday.

 

Click this link to read the rest of my post on Visualizing Knowledge at QlikDork.com.

Visualizing Lab Results

Posted by Dalton Ruer Jan 7, 2015


Have you heard the joke about the lab technician who walks into the room to stick you with a 15” needle and draw your blood? Of course you haven’t; that’s just not very funny stuff.

 

More than likely you are not as afraid of needles as I am, but I doubt anyone likes being told “I need you to roll up your sleeve.” Seriously, do they really have to say out loud “This is going to sting a little”? I pretty much guessed ahead of time that a long sharp object inserted into my arm would sting a little.

 

On a serious note though, despite the anxiety I feel, I am 100% aware of the very valuable insight that lab results provide physicians about my health, so I reluctantly tolerate the “sting” and try not to cry until I’ve left. I tell myself I’m brave because I know that there are scaredy cats out there who don’t even see a physician due to their fear. When you are in the hospital it’s not as easy to hide from them though. The lab techs are generally at your bedside very early in the morning, more than likely waking you up, in order to remove a gallon of your blood, or however much those vials hold.

 

Your blood is then rushed to the lab where a technician runs the myriad of tests that the physician(s) have requested and then the results are generated. Until someone does something with the results of the lab tests the values are simply like all of the 0’s and 1’s that reside on our disks … useless.

 

In this post I’m going to discuss how I handled visualizing those lab results for the physician rounding report I’ve been working on. I began my work on lab results the same way I would anything … “let me see what kind of data and what volume of data I’m dealing with.” LOTS and LOTS. I’m not kidding. It’s almost like every single drop of blood holds 1 MB of data or something. The following is just the last 3 days of lab results for 1 patient.

 

AllResults

 

You didn’t like having to scroll all the way down here to finish reading, did you? It gets worse … I want you to scroll back up and figure out what the most recent lab result value is for HCT. Painful, isn’t it? Certainly we are not going to deliver that as our lab results visualization.

 

Click here to view the rest of my Visualizing Lab Results at QlikDork.com

 

I've attached a sample QVW so that you can tweak what I've done and come up with your own approach.

BloodPressureImage

 

I was recently asked to produce a rounding report for physicians. Easy enough, right? Slap some vitals on a page, toss in some lab results and the medication administration record and voila … you have a rounding report. But that was the old me. The new me is inspired to put as much effort into visualizing the data as I’ve been putting into finding it, extracting it, transforming it, loading it and modeling it. The problem is I’m a 0’s and 1’s kind of guy … what do I know about clinical data, or how physicians need to see it, or the decisions it may help them make … or worse, what decisions won’t they make if I present the data in a way that is misleading? In this post I’m going to focus simply on the presentation of the blood pressure data. In future posts I’ll move on, but this seems to be the perfect data type for me to demonstrate how to add value to something through the art of data visualization. The first point I need to get across is that your visualizations won’t be successful unless you truly understand the data you are presenting.

Your visualizations won't be successful unless you truly understand the data you are presenting.

I’ve just begun my journey through Randy Krum’s book “Cool Infographics.” I can tell you that even he would smile at the American Heart Association’s site I found while researching blood pressure. They take boring blood pressure information and present it with static, interactive and video-based infographics. Like a learning trifecta for me. It helps me understand the data I’m dealing with, helps me clearly understand the points Randy makes and provides me with a catchy opening graphic for this post. Clicking on the image will take you to their website so you can see why I think it’s so cool, plus you’ll have the chance to learn everything you wanted to know about blood pressure, its effect on us and why it’s so vital to portray this vital sign (pun intended) in the best way that we possibly can.

 

One of the wonderful facets of QlikView is that it allows you to present raw data tables so that you can verify the data when there are questions. “What, you don’t believe my numbers? … here is the raw data to back them up.” That’s huge for sure.

 

Click here to read the rest of my post at QlikDork.com

 

I've attached a simple QVW file that walks you through the several iterations of charts that I used for my post. I can't wait to see the imagination of others at displaying simple numbers reflecting the millimeters of mercury in a way that creates #DataDiscovery.
