With less than 50 days to go before the 2026 World Cup kicks off across the US, Canada, and Mexico, I wanted to share a project I've been working on that brings together a few pieces of the Qlik platform I think work really well together: Choose Your Champion 2026.
It's a web app where anyone can fill out their World Cup bracket, get AI-powered predictions for every possible matchup in the tournament powered by Qlik Predict, explore historical World Cup data, and compete on a leaderboard as the competition unfolds.
You can try it here: https://webapps.qlik.com/choose-your-champion-2026/index.html#/
The app is powered by Qlik, with Qlik Cloud Analytics for the data model and Historical Analysis, Qlik Predict for the matchup predictions, and various Qlik APIs to wire everything into a React front-end.
In this post, I'll walk through how the predictions work under the hood, because that was the most interesting piece to build.
What's in the app:
Choose Your Champion is broken into four parts:
Build a bracket: Pick your group stage winners, advance teams through the knockout rounds, and lock in your champion.
Check the predictions: For every possible matchup in the tournament, the app surfaces a Qlik Predict-generated win probability for each team, plus a draw probability. When you're unsure about a matchup, you can pull up the prediction and use it to decide which team advances.
Explore historical World Cup data: The app includes various visualizations to help you uncover insights from past tournaments: goals, top scorers, host nation performance, biggest upsets. All powered by the associative engine.
Leaderboard: As real matches get played in June and July, submitted brackets are scored automatically and players are ranked in the leaderboard table.
Under the hood: how the predictions work
This was the fun part. The goal was simple: given two national teams, predict the outcome of a hypothetical match (team A wins / draw / team B wins). But the work that makes the predictions actually useful is mostly in the data, not the model (thanks to no-code ML with Qlik Predict).
1. The training dataset
I started with every international football match result from 1872 to March 2026. There's a well-maintained open dataset on GitHub (credit: martj42/international_results) that gets updated after every international window, about 49,000 matches in total.
From that raw history, I built a training dataset focused on the modern era (2010 onwards) and only competitive matches (qualifiers, continental tournaments, World Cup finals). Friendlies were filtered out because they're noisy: teams often don't field their strongest squads, and the stakes don't match what happens in a real tournament.
That left me with around 9,400 training rows, each representing a real historical match with a known result, enriched with 27 features describing both teams' state going into that match:
Elo ratings for both teams
FIFA rankings and points, snapshotted at the match date
Rolling 10-match form per team: win rate, goals for, goals against, goal difference
Head-to-head history in the last 10 meetings
Context flags: neutral venue, tournament tier, cross-confederation
World Cup pedigree: a score rewarding teams for deep runs in past tournaments, with more recent success weighted heavier
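To make the rolling-form feature concrete, here is a minimal JavaScript sketch of the idea (the field names are illustrative, not the dataset's actual column names):

```javascript
// Minimal sketch: rolling 10-match form for one team.
// Assumes `matches` is that team's match history, oldest first,
// with hypothetical fields { goalsFor, goalsAgainst }.
function rollingForm(matches, window = 10) {
  const recent = matches.slice(-window);
  const wins = recent.filter(m => m.goalsFor > m.goalsAgainst).length;
  const goalsFor = recent.reduce((s, m) => s + m.goalsFor, 0);
  const goalsAgainst = recent.reduce((s, m) => s + m.goalsAgainst, 0);
  return {
    winRate: recent.length ? wins / recent.length : 0,
    goalsFor,
    goalsAgainst,
    goalDiff: goalsFor - goalsAgainst
  };
}
```

The same pattern repeats for each team and each feature family, always computed as of the match date so no future information leaks into a training row.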
2. ML Experiment
Once the training CSV was in shape, I uploaded it to Qlik Predict, pointed it at the result column as the target, and let it do its thing. This is where Qlik Predict really shines: zero code needed. No Python notebooks, no sklearn, no hyperparameter grids to tune. You just upload your data, pick a target, and it does the heavy lifting, with full explainability on the outcomes and what drives the predictions.
Qlik Predict runs multiple algorithms in parallel: LightGBM, CatBoost, XGBoost, Random Forest, and a few others, tunes their hyperparameters, and picks the best performer by F1.
On my first run, I left all the columns in the dataset checked, including the team name columns (team_a, team_b). When I looked at the SHAP importance chart afterward, team_b and team_a were ranking as the #2 and #3 most influential features, meaning the model was essentially learning "team X usually wins" rather than learning from the engineered features.
I created a new version, went back to the Data tab, unchecked the team name columns and a few date fields (which were also ranking higher than they should), and re-ran the experiment. Qlik Predict automatically dropped several more low-importance features during training, leaving a clean, focused feature set. The F1 did not change a lot (stayed at ~0.50), but the SHAP chart now showed the model leaning on exactly the signals we want:
elo_diff
rank_diff
is_neutral
h2h_team_a_advantage
etc.
A few other calls that mattered:
Filtering to competitive matches only. A friendly between a top side's B squad and a mid-tier opponent tells you almost nothing about what happens in a World Cup group stage game.
Exponential decay on World Cup pedigree. A deep run in 1970 still counts, but less than one in 2022.
Removing rows with too many missing features. FIFA rankings don't go back to the 90s for every team, so some rows had to be dropped.
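The exponential decay on pedigree is simple to express. A minimal sketch, assuming an illustrative half-life and point values (the actual decay parameter and scoring may differ):

```javascript
// Sketch: exponentially decayed World Cup pedigree.
// `runs` is a list of { year, points }, where points rewards deep runs
// (e.g. more points for a final than a quarter-final -- illustrative values).
function pedigreeScore(runs, currentYear = 2026, halfLifeYears = 12) {
  return runs.reduce((score, r) => {
    const age = currentYear - r.year;
    const weight = Math.pow(0.5, age / halfLifeYears); // halves every 12 years
    return score + r.points * weight;
  }, 0);
}
```

With a 12-year half-life, a deep run in 2014 counts half as much as the same run in 2026, and a run in 1970 still contributes a small, non-zero amount.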
3. The apply dataset
Training gives you a model; to use it, you need an apply dataset with the new rows you want predictions for.
For Choose Your Champion, I generated every possible pairing of the 48 qualified teams, which comes out to 1,128 unique matchups. Each row has the same 27 features as the training dataset, but computed as a current snapshot: each team's Elo today, their current FIFA ranking, their most recent 10-match form, and so on.
I fed that into the deployed model and got back a probability distribution for every matchup: P(team_a_win), P(draw), P(team_b_win).
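Generating the apply rows starts with the pairings themselves: 48 teams taken two at a time gives 48 × 47 / 2 = 1,128 unique matchups. A minimal sketch:

```javascript
// Sketch: every unique pairing of the qualified teams.
function allMatchups(teams) {
  const pairs = [];
  for (let i = 0; i < teams.length; i++) {
    for (let j = i + 1; j < teams.length; j++) {
      pairs.push({ teamA: teams[i], teamB: teams[j] });
    }
  }
  return pairs;
}

// 48 qualified teams -> 1,128 unique matchups
```

Each pair then gets the same 27 features as a training row, computed from today's snapshot rather than a historical match date.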
The web app
The web app is a React front-end that connects to the Qlik tenant over anonymous access via @qlik/api, so users never see a login screen or have to authenticate against a tenant. The bracket UI pulls predictions from the Qlik Sense data model, so whenever a user opens a matchup, they're looking at data straight from Qlik.
For the historical World Cup section, I used a mix of @qlik/embed components when I needed a quick, ready-to-use chart, and custom nebula.js + picasso.js visualizations when I needed more control over the styling to match the app's look and feel. Both approaches work against the same underlying Qlik Analytics app, so everything stays consistent and governed in one place.
A few takeaways
If you're thinking about building something similar, a few things worth keeping in mind:
Spend the time on feature engineering. The difference between a model that predicts noise and one that predicts football is almost entirely in the features. Qlik Predict handles algorithm selection and tuning well, but it can only work with what you feed it.
The integration is where Qlik Predict pays off. Once a model is deployed, scoring a new dataset and pulling scores back into a Qlik Cloud Analytics app takes one load script. No Python services to maintain, no separate MLOps platform to stand up, no JSON plumbing between systems. That end-to-end data prep, modeling, predictions, and analytics all living in one platform is the thing that made this project come together fast!
Go fill out your bracket
The World Cup starts June 11, so there's plenty of time to get your bracket in and earn your spot on the leaderboard before kickoff. If you're curious about how any of this was built, leave a comment or reach out to me directly!
And if you want to learn more about Qlik Predict and start using it, visit: https://www.qlik.com/us/products/qlik-predict
P.S.: I have attached both the Training and Apply datasets if you'd like to use them in your own Qlik Predict experiment.
Thank you!
As a follow-up to my previous blog post titled Finance Report with Waterfall Chart, I wanted to share an awesome demo that showcases financial reporting visualizations, including a profit & loss statement with a waterfall chart. Qlik's Dennis Jaskowiak and Ekaterina Kovalenko, and partner Dawid Marciniak from HighCoordination, created the Financial Analysis demo based on Jedox data, incorporating many enhancements to the straight table and pivot table. In addition to the waterfall chart, they use inline SVG to create lollipop charts and bar charts in financial statements.
Here is a look at some of the sheets:
The Dashboard provides a high-level overview of profit and loss, cash flow, and liquidity. On this sheet, note the use of inline SVG bar charts and lollipop charts.
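The inline SVG technique is worth a closer look: a table cell renders an SVG string whose bar width is driven by the measure value. Sketched here in JavaScript for clarity (the demo itself builds these strings with Qlik expressions):

```javascript
// Sketch: build an inline SVG bar whose width is proportional to a value.
function svgBar(value, max, width = 100, height = 12, color = "#4477aa") {
  const w = max > 0 ? Math.round((value / max) * width) : 0;
  return `<svg width="${width}" height="${height}">` +
         `<rect width="${w}" height="${height}" fill="${color}"/></svg>`;
}
```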
The P&L sheet provides a more detailed look at profit and loss.
The Cost Center sheet uses the pivot table to show sales costs.
Find a detailed look at cash flow on the Cash Flow sheet.
These are just some of the sheets you will find in the Financial Analysis demo. If you are looking for appealing ways to visualize your financial data while keeping it concise and clean, download the Financial Analysis demo here.
Thanks,
Jennell
In this blog post, I will review some data flow processors that can be used to prepare your data in a data flow. Let’s start by quickly reviewing what a data flow is. In Qlik Cloud Analytics, a data flow is a no-code experience that lets you visually prepare your data with drag-and-drop capabilities. It is intuitive, easy to use, and does not require the user to have scripting experience. Data flow processors, along with sources and targets, are used to build a data flow. Each processor handles a specific data transformation task. Here you will find a full list of the data flow processors available.
This blog will touch on a few processors to familiarize you with how they work and how easy they are to use. To begin, a data flow must first be created. There is more than one way to do this. From the Qlik Cloud Analytics catalog, click on the + Create new button and select Data flow, or navigate to Prepare data from the menu and click on the add Data flow button at the top of the page.
Once you name the new data flow, navigate to the Editor.
On the left, there are sources, processors and targets. The source is the data input, the processors are the data transformation types, and the targets are the data outputs. Before we can look at the processors, we need to select our input data from the data catalog or a connection. Once that is in place, we can begin to explore the processor options. There are several data flow processors – too many to review in this blog, but I will review three of them: the Filter processor, the Join processor and the Unpivot processor.
Filter Processor
The filter processor filters data based on a condition. A processor can be added to the data flow canvas by dragging and dropping the processor onto the canvas or by clicking on the menu in the data source and selecting Add processor.
If you drag and drop the processor onto the canvas, you will need to connect the dots between the input and processor. If you add it from your data source menu, the dots will automatically be connected for you.
Each processor has a properties panel where the processor can be configured. In this example, let’s use the filter to select employees who live in the United States. To do this, first select the field to process – Country. There is an option to apply a function, but one is not needed in this example. The operator will be equal, and the Value will be United States. Once the properties are entered, click the Apply button to save.
At the bottom of the page, I can preview the script (matching and not matching records) for the filter processor I just applied and see a preview of the data.
From the filter processor menu, there are a few options for my next step as seen below.
Add matching target will add a target to the data flow for the records that match the Country = United States filter. Add non-matching target will add a target to the data flow for the records that do not match the Country = United States filter. Matching and non-matching processors can also be added. For this example, I will add a matching target and in the properties panel, I will select the space, the extension (.qvd, .parquet, .txt or .csv) and the name of the target file. Like the sources, the target can be a data file or connection. Once I click Apply in the properties panel, I will see a message at the top right indicating that my flow is valid and ready to be run. Running the data flow will grab my Employees dataset, filter the data by country and store the results in a QVD named US Employees.
I now have a data file that has been transformed and prepared for use.
Join Processor
Now, let’s look at how we can join two data inputs into one data output. To do this, two data inputs are required. In the example below, ARSummary and ARSummary-1 are the two data inputs.
In the properties panel of the join processor, the join type is selected and the fields that should be used to link the two tables are selected. You can learn more about joins here. Once the target is added, the data flow can be run, and the result will be a single table with the records from the ARSummary table and the associated records from the ARSummary-1 table.
Unpivot Processor
If you are familiar with scripting, the unpivot processor is like a crosstable load. It allows you to rearrange a table so that column data becomes row data. It can transform a table like this:
To this:
Here is an example data flow with the unpivot processor:
In the properties panel of the unpivot processor, there are only a few settings to update. The first is the unpivot fields. Here is where the fields that we want to unpivot are selected. In this example, we want the year to be stored as row level data so we select them all.
The Attribute field name is the name we want to give to the unpivoted fields – in this case Year. The Value field name is the name of the data that is associated with the fields we are unpivoting – in this examples Sales.
After applying the changes and running the data flow, we will have a table transformed based on our specifications without any code.
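Conceptually, the unpivot step performs this transformation; here is a JavaScript sketch (field names follow the example above):

```javascript
// Sketch: unpivot year columns into row-level { Region, Year, Sales } records.
// `keyFields` stay on every output row; every other column becomes a
// (attribute, value) pair.
function unpivot(rows, keyFields, attrName, valueName) {
  const out = [];
  for (const row of rows) {
    for (const [field, value] of Object.entries(row)) {
      if (keyFields.includes(field)) continue;
      const rec = {};
      for (const k of keyFields) rec[k] = row[k];
      rec[attrName] = field;
      rec[valueName] = value;
      out.push(rec);
    }
  }
  return out;
}
```

The data flow, of course, does this for you without any code; the sketch is only to show what "column data becomes row data" means in practice.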
In this blog post, we touched upon three of the many processors that can be used in a data flow. Note that a data flow can have many sources, processors and targets – it all depends on your needs. The visual interface of a data flow makes it easy to prepare your data without any code in an appealing design that is easy to follow. Try it out!
Thanks,
Jennell
When building dashboards that involve any kind of leaderboard or competitive comparison, it’s always good to show how rankings shift over time. Not just "who's #1 right now," but the full story: who climbed, who dropped, when the shift happened.
Qlik Sense doesn't have a native bump chart. The usual workaround is a line chart with pre-calculated ranks, but I wanted to push this further and add some more interactivity and options.
The idea for creating this extension came after doing some updates on the Formula 1 web app, which uses a native D3.js chart, so I wanted to port that code to a Qlik Sense extension that can be reused in other apps.
The extension makes it easy for you. You just add two dimensions and one measure, and it takes care of everything else: ranking, time sorting, labels, hover highlighting, and field selections. The object properties will let you customize it to your needs.
In this post I'll walk through what it does, how it's structured, and how you can use it for various use cases.

GitHub repo: https://github.com/olim-dev/qlik-bump-chart
What is a Bump Chart?
A bump chart shows how entities change position relative to each other over time. Each entity gets a colored line. Rank #1 sits at the top. As rankings shift, lines cross, and you immediately see who's moving up, who's falling, and when an overtake happens.
What can you do with this extension?
Track rankings over any time dimension (quarters, months, weeks, laps)
Highlight top performers, bottom performers, or biggest movers automatically
Toggle between rank mode (auto-calculated) and position mode (raw measure as Y-axis)
Replay the line-drawing animation
Click labels or lines to make Qlik selections
Customize everything: line style, dot size, colors, labels, grid, tooltips
Use Cases
The pattern is always the same: entities, time periods, and a measure.
Sales: rep leaderboard over months
Marketing: brand awareness tracking, campaign effectiveness over quarters
Finance: fund performance rankings
HR: employee performance trends, team productivity rankings
Operations: supplier quality rankings, efficiency metric evolution
Sports: league standings, player rankings across a season, F1 race position over laps
How to set it up
Add two dimensions and one measure:
Dimension 1: the entity to rank (Product, Sales Rep, Team, Driver)
Dimension 2: the time period (Quarter, Month, Week, Lap)
Measure: the value that determines rank (Sum(Sales), Avg(Score), Max(Position))
I have attached a Text file with some sample datasets that you can inline-load in a new app to test it.
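Under the hood, rank mode boils down to ranking entities within each time period. A simplified sketch of that step (the extension's data module also handles time sorting, gaps, and the Rank Direction setting):

```javascript
// Sketch: assign a rank per period from (entity, period, value) rows.
// Higher value = rank #1 in this simplified version.
function rankPerPeriod(rows) {
  const byPeriod = new Map();
  for (const r of rows) {
    if (!byPeriod.has(r.period)) byPeriod.set(r.period, []);
    byPeriod.get(r.period).push(r);
  }
  const ranked = [];
  for (const group of byPeriod.values()) {
    group.sort((a, b) => b.value - a.value); // best value first
    group.forEach((r, i) => ranked.push({ ...r, rank: i + 1 }));
  }
  return ranked;
}
```

Each entity's sequence of ranks across periods is then what the renderer draws as a line.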
Quick tour of the object properties
The property panel has clear labels for which dimension goes on which axis. Here are the sections that matter most:
Chart Settings: Maximum Entities (2-30, sweet spot is 8-15), Rank Direction (highest or lowest value = #1), Line Style (smooth, straight, step).
Position Mode: Instead of auto-calculating ranks, use the raw measure value as the Y position. Great for race data where position is already known.
Labels: Position them on the right, left, or both sides. Rank change indicators (triangle arrows) show how many spots an entity moved. Clicking a label triggers a Qlik selection.
Highlight: Automatically highlight top N, bottom N, or biggest movers with a configurable glow color.
Colors: Six built-in palettes (Vibrant, Qlik Classic, Category 10, Extended 20, Cool, Warm) plus a custom background color.
Animation: Toggle transitions on/off, adjust duration (100-1500ms), and a "Replay Animation" button to re-run the line-drawing effect.
Selections: Toggle selections on/off
The extension folder structure
qlik-bump-chart/
  qlik-bump-chart.qext: Extension definition
  qlik-bump-chart.js: Main entry (Qlik lifecycle, paint, selections)
  qlik-bump-chart-properties.js: Property panel definition
  qlik-bump-chart-renderer.js: D3 rendering engine
  qlik-bump-chart-data.js: Data processing, ranking, time sorting
  qlik-bump-chart-constants.js: Defaults, palettes, timing values
  qlik-bump-chart-style.css: All CSS
  lib/
    d3.v7.min.js: D3.js v7
The main JS handles the Qlik lifecycle.
The data module processes the hypercube, sorts time periods, and calculates ranks.
The renderer draws everything with D3.
The properties file defines the right-panel UI.
The constants file holds all defaults, color palettes, and timing values in one place.
Tips
Limit entities to 8-15 for readability. More than that and lines become hard to follow.
Use Highlight Mode to draw attention to the story (top performers, biggest movers).
Position Mode is great for pre-calculated position data like F1 race results.
Pair with a table or KPI object nearby for exact values at a glance.
Let me know in the comments if you have any questions or feedback!
Thanks for reading.
Ouadie
The write table was introduced to Qlik Cloud Analytics last month, so in this blog post I will review how it works and how it can be added to an app. The write table looks like the straight table, but editable columns can be added to it to update or add data. The updated or added data is visible to other users of the app, provided they have the correct permissions. Read more on write table permissions here. Something else to note: if you are using a touch screen device, you will have to disable touch screen mode for the write table to work. Looking at the write table for the first time, I found it intuitive and easy to use. Let’s create a write table with some editable columns to see how easy it is.
The write table object can be added to a sheet like any other visualization. Once it is added, columns can be added the same way dimensions and measures are added to a straight table. Below is a small write table with course information including the course ID, course name, instructor and location.
To add an editable column from the properties panel, click on the plus sign (+) and select Editable column.
The new editable column will be added. In the properties for the column, the title for the column can be modified and from the show content drop down, manual user input or single selection can be selected. Manual user input will create a free form column that the user can type into. The single selection option will allow me to create a drop-down list of options that the user can choose from.
I will change the title to Course Level and for show content I will select single selection and add three list items by typing the list item and then clicking on the plus sign to add it to the list. The list items will be displayed in the drop-down in the order they are added but can be rearranged by hovering over the list-item and dragging it to the desired position. List-items can also be deleted by hovering over it and clicking the delete icon that appears to the left.
When you come out of edit mode, the message below will appear for the editable column prompting you to define a set of primary keys.
Once you click Define, you will see the pop-up below where you can select the column(s) that will be used for the unique primary key. This is necessary to save and map the data entered in the editable column to the data model. I will select the CourseID column as the primary key.
Once this is done, I will see the Course Level column with the drop-down of list-items I added.
Let’s add one more editable column that takes manual user input and name it Notes.
As I add data or update the editable columns, the cells will be flagged orange to indicate that my edits have not been saved. Once I save the table, they will be flagged green and any new values entered are visible to other users. A cell will be blue if another user is currently making changes to the row, thus locking it. Changes are saved for 90 days in a change store (temporary storage location) provided by Qlik. After 90 days, the data will be deleted. It is also important to note that if an editable column is deleted, the data will be lost. This is also the case if the primary key used for the editable column is removed.
It is possible to retrieve the changes from a change store via the change-stores API or an automation. Using the REST connection and the change-store API, the changes made in a write table can be retrieved and stored in a QVD (if needed for more than 90 days) or added to the data model for use in other analytics. Qlik Automate can also be used to retrieve data from the change-store using the List Current Changes From Change Store block or the List Change Store History block. From there the data can be stored permanently in an external system for later use or used in the automation for another process. Qlik Help offers steps for retrieving data from a change-store.
The write table can make it easy for users to add updates, feedback and important information that may not be available in the data model. Not only can this be done quickly, but it can be immediately visible to other colleagues. Learn more about the write table in the Product Innovation blog along with links to videos and write table FAQs.
Thanks,
Jennell
If you have been building custom web applications or mashups with Qlik Cloud, you have likely hit the "10K cells ceiling" when using hypercubes to fetch data from Qlik. (Read my previous posts about hypercubes here and here.)
You build a data-driven component, it works perfectly with low-volume test data, and then you connect it to production. Suddenly, your list of 50,000+ customers cuts off halfway, or your export results look incomplete.
This happens because the Qlik Engine imposes a strict limit on data retrieval: a maximum of 10,000 cells per request. If you fetch 4 columns, you only get 2,500 rows (4 columns × 2,500 rows = 10,000 cells).
In this post, I’ll show you how to master high-volume data retrieval using two strategies, Bulk Ingest and On-Demand Paging, with the @qlik/api library.
What is the 10k Limit and Why Does It Matter?
The Qlik Associative Engine is built for speed and can handle billions of rows in memory. However, transferring that much data to a web browser in one go would be inefficient. To protect both the server and the client-side experience, Qlik forces you to retrieve data in chunks.
Understanding how to manage these chunks is the difference between an app that lags and one that delivers a good user experience.
Step 1: Defining the Data Volume
To see these strategies in action, we need a "heavy" dataset. Copy this script into your Qlik Sense Data Load Editor to generate 250,000 rows of transactions (or download the QVF attached to this post):
// ============================================================
// DATASET GENERATOR: 250,000 rows (~1,000,000 cells)
// ============================================================
Transactions:
Load
RecNo() as TransactionID,
'Customer ' & Ceil(Rand() * 20000) as Customer,
Pick(Ceil(Rand() * 5),
'Corporate',
'Consumer',
'Small Business',
'Home Office',
'Enterprise'
) as Segment,
Money(Rand() * 1000, '$#,##0.00') as Sales,
Date(Today() - Rand() * 365) as [Transaction Date]
AutoGenerate 250000;
Step 2: Choosing Your Strategy
There are two primary ways to handle this volume in a web app. The choice depends entirely on your specific use case.
1- Bulk Ingest (The High-Performance Pattern)
In this pattern, you fetch the entire dataset into the application's local memory in iterative chunks upon loading.
The Goal: Provide a "zero-latency" experience once the data is loaded.
Best For: Use cases where users need to perform instant client-side searches, complex local sorting, or full-dataset CSV exports without waiting for the Engine.
2- On-Demand (The "Virtual" Pattern)
In this pattern, you only fetch the specific slice of data the user is currently looking at.
The Goal: Provide a near-instant initial load time, regardless of whether the dataset has 10,000 or 10,000,000 rows as you only load a specific chunk of those rows at a time.
Best For: Massive datasets where the "cost" of loading everything into memory is too high, or when users only need to browse a few pages at a time.
Step 3: Implementing the Logic
While I'm using React and custom React hooks for the example I'm providing, these core Qlik concepts translate to any JavaScript framework (Vue, Angular, or vanilla JS). The secret lies in how you interact with the hypercube.
The Iterative Logic (Bulk Ingest):
The key is to use a loop that updates your local data buffer as chunks arrive.
To prevent the browser from freezing during this heavy network activity, we use setTimeout to allow the UI to paint the progress bar.
qModel = await app.createSessionObject({ qInfo: { qType: 'bulk' }, ...properties });
const layout = await qModel.getLayout();
const totalRows = layout.qHyperCube.qSize.qcy;
const pageSize = properties.qHyperCubeDef.qInitialDataFetch[0].qHeight;
const width = properties.qHyperCubeDef.qInitialDataFetch[0].qWidth;
const totalPages = Math.ceil(totalRows / pageSize);
let accumulator = [];
for (let i = 0; i < totalPages; i++) {
if (!mountedRef.current || stopRequestedRef.current) break;
const pages = await qModel.getHyperCubeData('/qHyperCubeDef', [{
qTop: i * pageSize,
qLeft: 0,
qWidth: width,
qHeight: pageSize
}]);
accumulator = accumulator.concat(pages[0].qMatrix);
// Update state incrementally
setData([...accumulator]);
setProgress(Math.round(((i + 1) / totalPages) * 100));
// Yield thread to prevent UI locking
await new Promise(r => setTimeout(r, 1));
}
The Slicing Logic (On-Demand)
In this mode, the application logic simply calculates the qTop coordinate based on the user's current page index and makes a single request for that specific window of data (rowsPerPage).
const width = properties.qHyperCubeDef.qInitialDataFetch[0].qWidth;
const qTop = (page - 1) * rowsPerPage;
const pages = await qModelRef.current.getHyperCubeData('/qHyperCubeDef', [{
qTop,
qLeft: 0,
qWidth: width,
qHeight: rowsPerPage
}]);
if (mountedRef.current) {
setData(pages[0].qMatrix);
}
I placed these two methods in custom hooks (useQlikBulkIngest & useQlikOnDemand) so they can be easily re-used in different components as well as other apps.
Best Practices
Regardless of which pattern you choose, always follow these three Qlik Engine best practices:
Engine Hygiene (Cleanup): Always call app.destroySessionObject(qModel.id) when your component or view unmounts.
Cell Math: Always make sure your qWidth × qHeight stays within the 10,000-cell cap. For instance, if you have a wide table (20 columns), your max height is only 500 rows per chunk.
UI Performance: Even if you use the "Bulk" method and have 250,000 rows in JavaScript memory, do not render them all to the DOM at once. Use UI-level pagination or virtual scrolling to keep the browser responsive.
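The cell math from the second point is easy to get wrong, so it is worth encoding as a helper:

```javascript
// Largest chunk height that keeps qWidth * qHeight within the 10,000-cell cap.
function maxChunkHeight(columnCount, cellCap = 10000) {
  return Math.floor(cellCap / columnCount);
}

// 20 columns -> 500 rows per chunk; 4 columns -> 2,500
```

Use the result as the `qHeight` of each page request so a schema change (an added column) can never silently push a request over the limit.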
Choosing between Bulk and On-Demand is a trade-off between Initial Load Time and Interactive Speed. By mastering iterative fetching with the @qlik/api library, you can ensure your web apps remain robust, no matter how much data is coming in from Qlik.
Attached is the QVF, and here is the GitHub repository containing the full example in React so you can try it locally. Instructions are provided in the README file.
(P.S.: Make sure you create the OAuth client in your tenant and fill in the qlik-config.js file in the project with your tenant-specific config.)
Thank you for reading!
As we enter the last month of the year, let’s review some recent enhancements in Qlik Cloud Analytics visualizations and apps. Features are added on a continuous cycle to improve usability, development and appearance. Let’s look at a few of them.
Straight Table
Let’s begin with the straight table. Now, when you create a straight table in an app, you will have access to column header actions, enabled by default. Users can quickly sort any field by clicking on the column header. The sort order (ascending or descending) will be indicated by the arrow. Users can also perform a search in a column by clicking the magnifying glass.
When the magnifying glass icon is clicked, the search menu is displayed as seen below.
If a cyclic dimension is being used in the straight table, users can cycle through the dimensions using the cyclic icon that is now visible in the column heading (see below).
When you have an existing straight table in an app, these new features will not be visible by default but can easily be enabled in the properties panel by going to Presentation > Accessibility and unchecking Increase accessibility.
Bar Chart
The bar chart now has a new feature that allows the developer to set a custom width for the bars when in continuous mode. Just last week, I put a bar chart in continuous mode and the bars became very thin, as seen below.
But now, there is a period drop-down that allows developers to indicate the unit of the data values.
If I select Auto to automatically detect the period, the chart looks so much better.
Combo Chart
In a combo chart, a line can now be styled as an area instead of just a line, as displayed below.
Sheet Thumbnails
One of the coolest enhancements is the ability to auto-generate sheet thumbnails. What a time saver. From the sheet properties, simply click on the Generate thumbnail icon and the thumbnail will be automatically created. No more creating the sheet thumbnails manually by taking screenshots and uploading them and assigning them to the appropriate sheet. If you would like to use another image, that option is still available in the sheet properties.
From this, to this, in one click:
Try out these new enhancements to make development and analysis faster and more efficient.
Thanks,
Jennell