Improving your forecasts using multiple temporal aggregation – Nikolaos Kourentzes (2022)

In most business forecasting applications, the problem usually directs the sampling frequency of the data that we collect and use for forecasting. Conventional approaches try to extract information from the historical observations to build a forecasting model. In this article, we explore how transforming the data through temporal aggregation allows us to gather additional information about the series at hand, resulting in better forecasts. We discuss a newly introduced modelling approach that combines information from many different levels of temporal aggregation, augmenting various features of the series in the process, into a robust and accurate forecast.

  • Temporal aggregation can aid the identification of series characteristics, as these are enhanced across different frequencies. Moreover, this simple trick reduces intermittency, allowing the use of established conventional forecasting methods for slow-moving data.
  • Using multiple temporal aggregation levels can lead to substantial improvements in forecasting performance, especially for longer horizons, as the various long-term components of the series are better captured.
  • Combining across different levels of aggregation leads to estimates that are reconciled across all frequencies. From a practitioner’s point of view, this is very important, as it produces forecasts that are reconciled across operational, tactical and strategic horizons.
  • The associated R package MAPA allows for direct use of this algorithm in practice through the open source R statistical software.

Typically, short-term operational forecasts use monthly, weekly, or even daily data. On the other hand, a common suggestion is to use quarterly and yearly data to produce long-term forecasts. Such data contain less detail and are often smoother, providing a better view of the long-term behaviour of the series. Of course, forecasts produced from different frequencies of the series are bound to differ, often making the cumulative operational forecasts over a period different from the tactical or strategic forecasts derived from the cumulative demand for the same period. Nonetheless, in practice it is most common that the same data and models are used to produce both short- and long-term forecasts, with apparent limitations, particularly for long-horizon forecasts.

Different frequencies of the data can reveal or conceal various time series features. When fast-moving time series are considered, random variations and seasonal patterns are more apparent in daily, weekly or monthly data. Using non-overlapping temporal aggregation, it is easy to construct lower-frequency time series. The aggregation level refers to the size of the time buckets in the temporal aggregation procedure and is directly linked to the frequency of the data: increasing the aggregation level results in a series of lower frequency. At the same time, this process acts as a filter, smoothing the high-frequency features and providing a better approximation of the long-term components of the data, such as level, trend and cycle. Notice in Fig. 1 how the series changes across aggregation levels. The original monthly series is dominated by the seasonal component, while at the 12th aggregation level the now-annual series is dominated by a shift in the level and a weak trend.

Fig. 1. A monthly fast-moving time series at different levels of non-overlapping temporal aggregation.

In the case of intermittent demand data, moving from higher (monthly) to lower (yearly) frequency reduces (or even removes) the intermittency of the data, minimising the number of periods with zero demand. This can make conventional forecasting methods applicable to the problem. Fig. 2 demonstrates how such time series change behaviour as the aggregation level changes.


Fig. 2. A monthly slow-moving time series at different levels of non-overlapping temporal aggregation.
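The effect on intermittency is easy to verify with a few lines of code. The following Python sketch (the demand values are made up for illustration) aggregates an intermittent monthly series to yearly buckets and compares the share of zero-demand periods:

```python
# Illustrative only: an intermittent monthly series and its yearly sums.
monthly = [0, 0, 3, 0, 0, 0, 2, 0, 0, 5, 0, 0,
           0, 4, 0, 0, 0, 0, 0, 6, 0, 0, 0, 1]

# Aggregate to yearly totals (level 12, non-overlapping buckets).
yearly = [sum(monthly[i:i + 12]) for i in range(0, len(monthly), 12)]

def zero_share(s):
    """Proportion of periods with zero demand."""
    return sum(x == 0 for x in s) / len(s)

print(zero_share(monthly), zero_share(yearly))   # 0.75 0.0
```

At the yearly level all zero-demand periods have vanished, so conventional methods become applicable.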

The presence and the magnitude of such time series features, both for fast- and slow-moving items, affect forecasting accuracy. So, the obvious question for a practising forecaster is: “Which aggregation level of my data should I use?”

As the time series features change with the frequency of the data (or the aggregation level), different methods will be identified as optimal. These will produce different forecasts, which will ultimately lead to different decisions. In essence, we have to deal with the true model uncertainty, that is, the appropriateness or misspecification of the model identified as optimal at a specific aggregation level. Even if we ignore the issue of temporal aggregation, with a simple time series we face two major types of uncertainty: i) data uncertainty: are we just very lucky, or unlucky, with the sample we have, and how would our model and its parameters change as more observations become available? and ii) model uncertainty: is the model or its parameters inappropriate, or does the model provide poor forecasts because of potential optimisation issues? The second problem is exacerbated for models with many degrees of freedom. Temporal aggregation reveals these differences and forces us to face this issue.

A potential answer to this issue is to consider all alternative views of the data, by using multiple aggregation levels, thereby reducing the aforementioned uncertainties. Then model each one separately, best capturing the different time series features that are amplified at each level. As the models will be different, instead of preferring a single one, combine them (or their forecasts) into a robust final forecast that takes into account information from all frequencies of the data.

So, what we propose is not to trust a single model on a single aggregation level of the data, which unavoidably leads to the selection of a single “optimal” model. Instead, consider multiple aggregation levels. This approach reduces the risk of selecting a bad model, parameterised on a single view of the data, thus mitigating the importance of model selection.

In this section we explain how the proposed Multiple Aggregation Prediction Algorithm (MAPA) should be applied in practice, through three steps: aggregation, forecast and combination. A graphical illustration of the proposed framework is presented in Fig. 3, contrasting it with the standard forecasting approach that is usually applied in practice.

Fig. 3. The standard versus the MAPA forecasting approach.


Step 1: Aggregation.

In the standard approach, the input of the statistical forecasting methods is a single frequency of the data, usually the one at which the data were collected or the one at which the forecasts will later be used. So, the aggregation level is driven either by data availability or by the intended use of the forecasts. The MAPA, on the other hand, uses multiple instances of the same data, corresponding to different frequencies or aggregation levels. Suppose we have 4 years of monthly observations (48 data points). To transform these to quarterly data (1 quarter = 3 months), we define 48/3 = 16 time buckets. Then we aggregate, using simple summation, to calculate the cumulative demand over each of these time buckets. This process creates 16 aggregated observations that match the required quarterly frequency. In this case, the aggregation level is equal to 3 periods, while the transformed frequency is 1/3 of the original.

If the selected aggregation level does not exactly divide the number of available observations at the original frequency, then some data points are dropped during the temporal aggregation process. Continuing our previous example, if the aggregation level is set to 5, then only 9 time buckets can be defined, corresponding to 45 monthly observations, and 3 monthly periods are discarded. We choose to discard observations from the beginning of the series, as the most recent ones are considered more relevant.
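This aggregation step, including the truncation from the start of the series, can be sketched in a few lines of Python (the helper name and data values are illustrative, not part of the MAPA R package):

```python
# Sketch of Step 1: non-overlapping temporal aggregation, reproducing
# the 48-month example from the text with arbitrary data values.

def temporal_aggregate(series, level):
    """Sum consecutive non-overlapping buckets of `level` periods.
    Observations that do not fill a complete bucket are discarded
    from the *start* of the series, keeping the most recent data."""
    start = len(series) % level            # leftover observations to drop
    trimmed = series[start:]
    return [sum(trimmed[i:i + level])
            for i in range(0, len(trimmed), level)]

monthly = list(range(1, 49))               # 4 years = 48 monthly observations

quarterly = temporal_aggregate(monthly, 3)  # 48/3 = 16 buckets
level5 = temporal_aggregate(monthly, 5)     # only 9 buckets; 3 months dropped

print(len(quarterly), len(level5))          # 16 9
```

Note that for level 5 the first three months are discarded, so the first bucket covers months 4 to 8.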

This process of transforming the original data to alternative (lower) frequencies, using multiple aggregation levels, can continue as long as all transformed series have enough data to produce statistical forecasts. If the original sampling frequency is monthly, and given that companies usually hold 3 to 5 years of history, we suggest that the aggregation continues up to the yearly frequency (aggregation level equal to 12 periods). This is also adequate to highlight long-term movements of the series.

In any case, while the starting frequency is always bounded from the sampling of the raw data, we propose that the upper level of aggregation should reach at least the annual level, where seasonality is filtered completely from the data and long-term (low frequency) components, such as trend, dominate. Of course, the range of aggregation levels should contain the ones that are relevant to the intended use of the forecasts: for example monthly forecasts for S&OP, or yearly forecasts for long-term planning. This ensures that the resulting MAPA forecast captures all relevant time series features and therefore provides temporally reconciled forecasts.

The output of this first step is a set of series, all corresponding to the same base data but translated to different frequencies.

Step 2: Forecasting.

Each one of the series calculated in step 1 should be forecasted separately. In the forecasting literature, several automatic model selection protocols exist. For fast-moving items exponential smoothing is regarded as reliable and relatively accurate. It corresponds to models that may include trend (damped or not) and seasonality, while allowing for multiple forms of interaction (additive or multiplicative). A widely used approach is the automatic selection of the best model form, based on the minimisation of a predefined information criterion (such as the AIC). It is expected that on our set of series different models will be selected across the various aggregation levels. Seasonality and various high frequency components are expected to be modelled better at lower aggregation levels (such as monthly data), while the long-term trend will be better captured as the frequency is decreased (yearly data).

In the case of intermittent demand series, there are classification schemes that allow selecting between the two most widely used forecasting methods: Croston’s method and the Syntetos-Boylan Approximation (SBA). Under such schemes, the level of intermittency and the variability of the demand determine which method is more appropriate. Using multiple aggregation levels changes both the intermittency and the variability, so we expect different methods to be identified as optimal across the frequencies. Furthermore, at higher aggregation levels the resulting series may contain no zero-demand periods at all. In these cases, we propose using conventionally optimised Simple Exponential Smoothing (SES), taking advantage of its good performance on non-intermittent time series. Moreover, the transformation of the data from slow-moving to fast-moving may reveal regular time series components (such as trend or seasonality) that are not visible at the original, more granular, frequency, where noise is the dominant component. Any time series sampled at a sufficiently high frequency is intermittent, so these cases should be considered as a continuum.
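For readers unfamiliar with these methods, the following minimal Python sketch illustrates Croston's method and the SBA correction with a fixed smoothing parameter (real implementations typically optimise it; the demand history is hypothetical):

```python
def croston(series, alpha=0.1, sba=False):
    """One-step-ahead demand-rate forecast for an intermittent series.

    Smooths the non-zero demand sizes and the inter-demand intervals
    separately with simple exponential smoothing; SBA applies the
    (1 - alpha/2) bias correction to Croston's forecast.
    """
    z = None      # smoothed demand size
    p = None      # smoothed inter-demand interval
    interval = 1  # periods since the last demand
    for x in series:
        if x > 0:
            if z is None:                       # initialise at first demand
                z, p = float(x), float(interval)
            else:
                z += alpha * (x - z)
                p += alpha * (interval - p)
            interval = 1
        else:
            interval += 1
    forecast = z / p
    if sba:
        forecast *= 1 - alpha / 2
    return forecast

demand = [0, 0, 3, 0, 2, 0, 0, 5, 0, 4]         # hypothetical demand history
print(croston(demand), croston(demand, sba=True))
```

As aggregation reduces intermittency, the inter-demand intervals shrink towards one, which is exactly where such schemes hand over to SES.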

The output of this step is multiple sets of forecasts, each one corresponding to an alternative frequency.

Step 3: Combination.

The final stage of the proposed MAPA approach refers to the appropriate combination of the different forecasts derived from the alternative frequencies.


Before combining the forecasts produced at the various frequencies, we need to transform them back to the original frequency. While aggregating forecasts from lower aggregation levels is straightforward, disaggregation can be more complicated. Many alternative strategies have been discussed in the literature; however, simply dividing an observation into equal parts at the higher-frequency level has been shown to be very effective and simple to use. Consider, for example, that after the forecasting step we have some aggregated forecasts at the quarterly frequency. To transform each of these point forecasts to the original (monthly) frequency, we simply divide it by three and use the same value for all three months. So, the forecast for January will be the same as the forecast for February or March, and all of these will be equal to the forecast for Q1 divided by 3.
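The equal-split disaggregation can be sketched as follows (illustrative Python, hypothetical forecast values):

```python
def disaggregate(forecasts, level):
    """Spread each aggregate point forecast equally over `level`
    high-frequency periods."""
    return [f / level for f in forecasts for _ in range(level)]

quarterly_forecast = [30.0, 36.0]                 # hypothetical Q1, Q2 forecasts
monthly_forecast = disaggregate(quarterly_forecast, 3)
print(monthly_forecast)   # [10.0, 10.0, 10.0, 12.0, 12.0, 12.0]
```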

Once all forecasts from the various aggregation levels are translated back to the original frequency, they are combined into a final forecast. The combination can be done by calculating means, medians or other operators, such as trimmed means, of the forecasts. We have found that both means and medians perform well, although the latter are more appropriate for ensuring that the combined final forecasts are not affected by extreme values.
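Once every level's forecast has been translated back to the original frequency, the combination reduces to a per-period median (or mean) across levels. A sketch, with three hypothetical back-transformed forecast paths:

```python
# Sketch of Step 3: per-period median across aggregation levels.
from statistics import median

paths = {
    1:  [100.0, 104.0, 108.0],   # forecast from the original frequency
    3:  [101.0, 101.0, 101.0],   # from quarterly data, split equally
    12: [103.0, 103.0, 103.0],   # from yearly data, split equally
}

horizon = 3
combined = [median(p[t] for p in paths.values()) for t in range(horizon)]
print(combined)   # [101.0, 103.0, 103.0]
```

The median keeps any single badly specified level from dragging the final forecast away from the rest.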

It may be the case that the intended use of the forecasts is at a different frequency from the one at which the data were sampled. Or forecasts at various aggregation levels may be needed for operational, tactical or strategic planning. To achieve this, simply aggregate the final forecast to the desired level(s). A convenient property of MAPA forecasts is that they are appropriate for all these levels, as they contain information from all of them. Such forecasts at various aggregation levels are therefore temporally reconciled, removing the need to work out how to make forecasts agree that would otherwise typically differ.

One problem that arises from the application of the algorithm described above concerns the extreme dampening of the seasonal component of the time series. Consider the case of monthly data that are aggregated at all levels up to yearly. If the original series is seasonal, then seasonality will be modelled only at aggregation levels 1 (which corresponds to the original frequency), 2, 3, 4 and 6. At all other aggregation levels considered, seasonality will not be modelled, being either fractional or completely filtered out.

Calculating a simple combination across the forecasts of all levels will essentially dampen the seasonal pattern. This is illustrated in Fig. 4, where two models, one with seasonality (red) and one without (green), are combined and the resulting forecast has a poor fit, due to the dampening of the seasonality.

Fig. 4. Example of dampened seasonal component due to forecast combination.

To address this issue, the combination should be done on the model components and not on the forecasts. This is trivial when using the exponential smoothing family of methods, as it provides separate estimates for each component (level, trend, and seasonality). The combination of the seasonal component takes into account only the aggregation levels where seasonality is permitted (1, 2, 3, 4 and 6). Level and trend, on the other hand, are modelled at every aggregation level; if a model with no trend is selected at some level, then the trend component for that aggregation level is set to zero. Therefore, instead of combining the forecasts, one first combines all the level estimates from the various aggregation levels, then all the trend estimates, and then the seasonal estimates. The resulting combined components are then used to calculate the final forecast, in the same way as with conventional exponential smoothing.
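A rough Python sketch of this component-wise combination, assuming monthly data, multiplicative seasonality and entirely hypothetical component estimates:

```python
# Seasonal estimates are combined only over aggregation levels where
# monthly seasonality survives (those that divide 12, excluding the
# annual level); level and trend use every aggregation level.
from statistics import median

components = {
    # aggregation level: (level estimate, trend estimate, seasonal index)
    1:  (100.0, 1.0, 1.20),
    2:  (100.5, 1.1, 1.15),
    3:  (101.0, 0.9, 1.18),
    4:  (100.2, 1.0, 1.22),
    5:  (99.8,  1.2, None),    # fractional seasonality: not modelled
    6:  (100.6, 1.0, 1.17),
    12: (101.5, 0.8, None),    # seasonality filtered out entirely
}

seasonal_levels = [k for k in components if 12 % k == 0 and k != 12]

lvl = median(c[0] for c in components.values())
trd = median(c[1] for c in components.values())
ssn = median(components[k][2] for k in seasonal_levels)

one_step = (lvl + trd) * ssn   # as in multiplicative-seasonal smoothing
print(round(one_step, 2))      # 119.77
```

Because the seasonal median ignores the non-seasonal levels, the seasonal pattern is preserved at full strength instead of being averaged towards one.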


Other ways of tackling this modelling issue would include the consideration of only some of the aggregation levels or the introduction of models that can deal with fractional seasonality.

Does the approach work in practice? The short answer is yes! The MAPA approach has been tested on both fast-moving and slow-moving demand data, providing improved forecasting performance compared with traditional approaches. In more detail, the proposed approach gives better estimates in terms of both accuracy and bias. For fast-moving data, the MAPA improves on exponential smoothing at most data frequencies, while being especially accurate for longer-term forecasts. This is a direct outcome of fitting models at high aggregation levels, where the level and the long-term trend of the series are best identified. For slow-moving data, the MAPA under the enhanced selection scheme (Croston-SBA-SES) performs better than any single method, and its performance is also superior to that of the original selection scheme (Croston-SBA).

On top of any improvements in forecasting performance, another advantage of the MAPA is that the decision maker does not have to select a single aggregation level a priori. While, in some cases, setting the aggregation level equal to the lead time plus the review period makes sense, removing this hyper-parameter can be regarded as an advantage, simplifying the forecasting process.

Although there are accuracy and robustness benefits from using MAPA, the key advantage is an organisational one. Combining multiple temporal aggregation levels (thus capturing both high and low frequency components) leads to more accurate forecasts for both short and (especially) long horizons. Most importantly, this strategy provides reconciled forecasts across all frequencies, which is suitable for aligning operational, tactical and strategic decision making. This is useful for practice as the same forecast can be used for all three levels of planning.

Therefore, forecasts and decisions are reconciled and there is no need to alter or prefer forecasts from a particular level, something that in practice is often done ad hoc with detrimental effects. This is a significant drive towards the “one-number” forecast, where several decisions and organisational functions are based on the same view of the future, thus aligning them. It is not uncommon in practice that short-term operational forecasts, driving demand planning and inventory management, are incompatible with long-term budget forecasts. MAPA addresses this issue by providing a single reconciled view across the various planning horizons. This is just the first step towards fully reconciled forecasts within an industrial setting. Cross-sectional reconciliation and effective interaction and communication among the different stakeholders are only some of the remaining aspects of the same problem.

If you are interested in using this approach for forecasting, a starting point is the MAPA package for R. Follow that link for examples of how to run MAPA on your own data.

N. Kourentzes, F. Petropoulos and J. R. Trapero, 2014, Improving forecasting by estimating time series structural components across multiple frequencies. International Journal of Forecasting, 30: 291-302.

F. Petropoulos and N. Kourentzes, 2014, Forecast combinations for intermittent demand. Journal of the Operational Research Society.

This text is an adapted version of:


F. Petropoulos and N. Kourentzes, 2014, Improving Forecasting via Multiple Temporal Aggregation, Foresight: The International Journal of Applied Forecasting.


What is temporal aggregation? ›

by Albert MARCET. Introduction. We call temporal aggregation the situation in which a variable that evolves through time can not be observed at all dates.

What is multiple aggregation prediction algorithm? ›

The Multiple Aggregation Prediction Algorithm (MAPA) was proposed by Kourentzes et al. (2014) to take advantage of the time series transformations that can be achieved by non-overlapping temporal aggregation.

How does the aggregation of data affect the sales forecast? ›

Transforming data through aggregation or disaggregation allows you to gather additional information about the series at hand, resulting in better forecasts. Each level for each attribute provides levels of visibility but also provide varying levels of effectiveness or ineffectiveness.

What is temporal aggregation of demand? ›

Temporal Aggregation: Suppose you are a hyper local grocery retailer, say Big Basket. If the demand from certain area is not enough for you to deliver it every day, then you will aggregate the demand across multiple days and say that you will deliver in this area only every two days.

What is temporal aggregation bias? ›

Temporal Aggregation Bias effects are measured by comparing the difference in the proportion of time in which each set of 12 aggregations passes each decibel value. Using the slope of the proportions in the increasing segmentation order, as displayed in Fig. 5, the consistent changes between segmentations is measured.

What is aggregation in value chain? ›

Aggregation refers to a function using which given data at a detail level is aggregated to a higher level. For example, forecast at a product level or product-customer level is aggregated to a product family or product family-country level.

What is the bottom up method for forecasting sales? ›

Bottom-up forecasting is a method of estimating a company's future performance by starting with low-level company data and working “up” to revenue. This approach starts with detailed customer or product information and then broadens up to revenue.

What is exponential smoothing model? ›

Exponential smoothing is a time series forecasting method for univariate data that can be extended to support data with a systematic trend or seasonal component. It is a powerful forecasting method that may be used as an alternative to the popular Box-Jenkins ARIMA family of methods.

Why is aggregated forecast more accurate? ›

Aggregate demand forecasts are usually more accurate than disaggregate forecasts, as they tend to have a smaller standard deviation of error relative to the mean. For example, it is easy to forecast the Gross Domestic Product (GDP) of any country for a given year with less than a 2 percent error.

What are the three ways to aggregate forecasts? ›

What is the best way of aggregating the forecast?
  • Product.
  • Sales channels.
  • Location.
17 Dec 2020

What is aggregate forecasting? ›

This is a sales forecast for a group of products or customers. It may be a forecast of allproducts within a family, all customers in a given region for the sales and operations planning process, or total forecast for the year or budgetary period.

What is aggregation in supply chain management? ›

Aggregate planning, a fundamental decision model in supply chain management, refers to the determination of production, inventory, capacity and labor usage levels in the medium term.

What is inventory aggregation in supply chain? ›

Aggregate inventory management refers to a basic inventory management method that groups items categories, namely, raw materials, work-in-process, and finished goods. It is also referred to as Aggregate inventory control; it manages multiple individual items under each category.

What is aggregated product? ›

Aggregate Products means decomposed granite, sand and gravel, slag, or stone.

What is spatial aggregation? ›

Spatial aggregation is the aggregation of all data points for a group of resources over a specified period (the granularity). Data aggregations in Group Time Series reports are of the spatial aggregation type.

Why are aggregate forecasts more accurate than individual forecasts? ›

As data is aggregated, random errors tend to cancel each other out. This makes the prediction more accurate.

What is capacity aggregation? ›

Aggregate capacity management (ACM) is the process of planning and managing the overall capacity of an organization's resources. Aggregate capacity management aims to balance capacity and demand in a cost-effective manner. It is generally medium-term in nature, as opposed to day-to-day or weekly capacity management.

What is disaggregation and aggregation of data? ›

To aggregate data is to compile and summarize data; to disaggregate data is to break down aggregated data into component parts or smaller units of data.

What are methods of sales forecasting? ›

The five qualitative methods of forecasting include expert's opinion method, Delphi method, sales force composite method, survey of buyers' expectation method, and historical analogy method.

Which forecasting model is best top-down or bottom-up? ›

Many experts believe that bottom-up forecasting offers a more realistic financial view than the top-down model. Unlike top-down forecasting, bottom-up methodologies project revenue by multiplying the average value per sale by the number of prospective sales per product.

What is a top-down forecast? ›

Top-down forecasting is a method used for estimating future revenues for a company starting at the very “top”, with high-level or “macro-level” market data. This can be done by starting with the total addressable market (TAM) size in dollars (or another preferred currency) for a company.

What are the five steps for forecasting? ›

  • Step 1: Problem definition.
  • Step 2: Gathering information.
  • Step 3: Preliminary exploratory analysis.
  • Step 4: Choosing and fitting models.
  • Step 5: Using and evaluating a forecasting model.

How do you smooth a forecast? ›

Forecasting: Exponential Smoothing, MSE - YouTube

How do you calculate demand forecast using exponential smoothing? ›

The exponential smoothing calculation is as follows: The most recent period's demand multiplied by the smoothing factor. The most recent period's forecast multiplied by (one minus the smoothing factor). S = the smoothing factor represented in decimal form (so 35% would be represented as 0.35).

How can forecasting accuracy be improved? ›

If you search for how to improve forecast accuracy, you'll find a lot of technical tips. Track macroeconomic indicators in real-time. Choose the right demand forecasting model. Recalculate forecasts in light of market conditions.

What is the best way to determine if a forecast is performing adequately? ›

A forecast method is generally deemed to perform adequately when the errors exhibit an identifiable pattern. Forecast methods are generally considered to be performing adequately when the errors appear to be randomly distributed.

Why aggregate planning is important? ›

Aggregate planning is important because it helps an organization optimize its costs and production in order to fulfill its long-term goals. Some of the specific ways it can help include: It can help the company use its production capabilities with maximum efficiency.

What is aggregate planning example? ›

Aggregate planning is typically done 12 months into the future. Some examples of aggregate planning are hiring temporary workers, laying off employees for a specific period or cross-training. This works as an effective benchmark to measure resource utilization and implementation.

What is forecasting and aggregate planning? ›

Aggregate Planning by definition is concerned with determining the quantity and scheduling of production for the mid-term future. The timing on an aggregate plan runs normally from 3 to 18 months. Therefore, the plan is a by-product of the longer term strategic plan.

What are the steps in aggregate planning process? ›

  1. Step 1 Identify the aggregate plan that matches your company's objectives: level, chase, or hybrid.
  2. Step 2 Based on the aggregate plan, determine the aggregate production rate. ...
  3. Step 3 Calculate the size of the workforce. ...
  4. Step 4 Test the aggregate plan.

When should aggregate forecasting be used? ›

An estimate of sales, often time phased, for a grouping of products or product families produced by a facility or firm. Stated in terms of units, dollars, or both, the aggregate forecast is used for sales and production planning (or for sales and operations planning) purposes.

Which aggregate planning strategy generally would result in? ›

Which aggregate planning strategy generally would result in the least amount of inventory ? 10 . The bullwhip effect results from a phenomenon , which occurs when : a ) The value chain participants intentionally allow a shortage of inventories to reduce the costs , even if that may result in loss of some customers .

What are the factors affecting aggregate planning? ›

Factors considered in the aggregate planning activity include:
  • Sales forecasts.
  • Inventory investment.
  • Capital equipment utilization.
  • Work force capacity.
  • Skills training requirements.
  • Corporate policies concerning customer service levels, overtime, and subcontracting.

What four things are needed to develop an aggregate plan? ›

Four things are needed for aggregate planning:
  • a logical unit for measuring sales and output.
  • a forecast of demand for a reasonable intermediate planning period in these aggregate terms.
  • a method for determining the relevant costs.

How aggregate planning is implemented? ›

Aggregate planning is a method for analyzing, developing and maintaining a manufacturing plan with an emphasis on uninterrupted, consistent production. Aggregate planning is most often focused on targeted sales forecasts, inventory management and production levels in the mid-term (3-to-18-month) future.

Why is data aggregation recommended during planning for supply networks? ›

When you attempt to look months ahead to determine your supply chain needs, you can use the techniques of aggregate planning. This approach enables you to have a comprehensive view of the supplies you will need in order to meet demand for your products.

Which of the following is an aggregate level inventory process? ›

Which of the following is an aggregate-level inventory process? Aggregate inventory management is accomplished through inventory policy setting and master scheduling and by classifying inventory, using a system such as ABC inventory classification. The other options are item inventory management systems.

What is the average aggregate inventory value? ›

“Average aggregate inventory value” is a term used to describe all of the inventory held in stock, which includes raw materials, work in process and finished goods, all valued at cost. Inventory turnover is an indicator of the policies and practices of an organization.

What are the four main types of inventory? ›

While there are many types of inventory, the four major ones are raw materials and components, work in progress, finished goods and maintenance, repair and operating supplies.

What is an example of an aggregate? ›

An aggregate is a collection of people who happen to be at the same place at the same time but who have no other connection to one another. Example: The people gathered in a restaurant on a particular evening are an example of an aggregate, not a group.

How do you aggregate? ›

Add together all the numbers in the group. In the example, 45 plus 30 plus 10 equals an aggregate score of 95.

How do you do demand aggregation? ›

Aggregate demand is calculated by adding the amount of consumer spending, government and private investment spending, and the net of imports and exports. It is represented with the following equation: AD = C + I + G + Nx.

What is the benefit of aggregating data? ›

Data aggregation helps summarize data from different, disparate and multiple sources. It increases the value of information. The best data integration platforms can track the origin of the data and establish an audit trail. You can trace back to where the data was aggregated from.

What is the purpose of data aggregation?

Data aggregation is often used to provide statistical analysis for groups of people and to create useful summary data for business analysis. Aggregation is often done on a large scale, through software tools known as data aggregators.

What is the use of data aggregation?

Data aggregation is a process in which data is searched, gathered, and presented in a summarized, report-based form; it helps organizations achieve specific business objectives or conduct process and human analysis at almost any scale.

What data is needed for a sales forecast?

A sales forecast is a prediction of future sales revenue. Sales forecasts are usually based on historical data, industry trends, and the status of the current sales pipeline. Businesses use the sales forecast to estimate weekly, monthly, quarterly, and annual sales totals.

What is an example of aggregate data?

For example, information about whether individual students graduated from high school can be aggregated—that is, compiled and summarized—into a single graduation rate for a graduating class or school, and annual school graduation rates can then be aggregated into graduation rates for districts, states, and countries.
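A minimal Python sketch of this idea, using made-up student records: the same student-level field is rolled up first into a per-school rate and then into a per-district rate.

```python
from collections import defaultdict

# Hypothetical student-level records (schools, districts and flags invented).
students = [
    {"school": "A", "district": "North", "graduated": True},
    {"school": "A", "district": "North", "graduated": False},
    {"school": "B", "district": "North", "graduated": True},
    {"school": "B", "district": "North", "graduated": True},
]

def graduation_rate(records, level):
    """Aggregate individual graduation flags into a rate per group."""
    groups = defaultdict(list)
    for r in records:
        groups[r[level]].append(r["graduated"])
    return {k: sum(flags) / len(flags) for k, flags in groups.items()}

print(graduation_rate(students, "school"))    # {'A': 0.5, 'B': 1.0}
print(graduation_rate(students, "district"))  # {'North': 0.75}
```

The same records support both levels of aggregation; only the grouping key changes.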

What are data aggregation methods?

Data aggregation is a technique that enables organizations to achieve specific business objectives, or to carry out process and human analysis at practically any scale, by searching, gathering, and presenting data in a summarized, report-based format.

What is aggregation and how does it work?

Aggregation is a mathematical operation that takes multiple values and returns a single value: operations like sum, average, count, or minimum. This changes the data to a lower granularity, that is, a more summarized, less detailed level. Which aggregation is appropriate depends on what you're trying to accomplish.
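For instance, each of these common aggregations collapses the same list of values (chosen arbitrarily here) into a single number:

```python
values = [4, 8, 15, 16, 23, 42]

print(sum(values))                # 108  (total)
print(sum(values) / len(values))  # 18.0 (average)
print(len(values))                # 6    (count)
print(min(values))                # 4    (minimum)
```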

What is an aggregation process?

Aggregation is the process of combining things. That is, putting those things together so that we can refer to them collectively. As an example, think about the phone numbers on your cell phone. You can refer to them individually (your mother's number, your best friend's number, etc.).

What are data aggregation tools?

Data aggregation tools are used to combine data from multiple sources into one place, in order to derive new insights and discover new relationships and patterns—ideally without losing track of the source data and its lineage.

What is the correct definition of a data aggregator?

A data aggregator is an organization that collects data from one or more sources, provides some value-added processing, and repackages the result in a usable form.

What is aggregation in statistics?

An aggregation is a process in which numbers are gathered for statistical purposes and are expressed as one number. This could be in the form of a total or an average.

When you ______ the data, you are aggregating the data to a higher level?

When you ROLL UP the data, you are aggregating the data to a higher level.
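In the spirit of the temporal aggregation discussed in the article, rolling up can be sketched in Python by summing hypothetical monthly figures into quarterly ones: each non-overlapping block of three months becomes one value at the higher level.

```python
# Made-up monthly sales, rolled up to the quarterly level.
monthly = [10, 12, 11, 14, 13, 15, 9, 8, 10, 16, 18, 20]
quarterly = [sum(monthly[i:i + 3]) for i in range(0, len(monthly), 3)]
print(quarterly)  # [33, 42, 27, 54]
```

The quarterly series is smoother and shorter, trading detail for a clearer view of longer-term behaviour.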

Which is an example of aggregate data?

Aggregate data include data on groups of people or patients without identifying any particular patient individually. An example of aggregate data is statistics on the length of stay for patients discharged within a particular diagnosis-related group.

How can you make a forecast more effective?

Most important, I hope to give you the tools to evaluate forecasts for yourself.
  1. Rule 1: Define a Cone of Uncertainty. ...
  2. Rule 2: Look for the S Curve. ...
  3. Rule 3: Embrace the Things That Don't Fit. ...
  4. Rule 4: Hold Strong Opinions Weakly. ...
  5. Rule 5: Look Back Twice as Far as You Look Forward. ...
  6. Rule 6: Know When Not to Make a Forecast.

What are the methods of forecasting?

Top Four Types of Forecasting Methods
  1. Straight line: constant growth rate
  2. Moving average: repeated forecasts
  3. Simple linear regression: compare one independent with one dependent variable
  4. Multiple linear regression: compare more than one independent variable with one dependent variable
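As a brief sketch of the moving-average method from the list above, the next period can be forecast as the mean of the last k observations (the data and function name here are hypothetical):

```python
# Moving-average forecast: predict the next period as the
# mean of the most recent k observations.
def moving_average_forecast(history, k=3):
    return sum(history[-k:]) / k

sales = [100, 104, 98, 110, 108, 112]  # made-up monthly sales
print(moving_average_forecast(sales))  # 110.0
```

Repeating this each period as new observations arrive gives the "repeated forecasts" behaviour the method is known for.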


