Based on my experience developing TM1 solutions, three areas can be tuned to improve TM1 performance: server-related, design-related, and code-related.
Server-related:
There are more than 100 parameters that allow us to tune the system to get the maximum performance out of our TM1/Planning Analytics server. Below are a few parameters that have a significant impact on performance.
MTQ
Multi-threaded querying allows the server to use multiple CPU cores for a single query. This feature provides significant performance improvements, especially for large queries with many consolidations. An optimal number of cores (the "sweet spot") needs to be established to achieve maximum performance.
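As a minimal sketch, MTQ is set in tm1s.cfg; the core count below is illustrative, and the right value depends on your hardware and workload:

```ini
# tm1s.cfg -- illustrative value; test to find your sweet spot.
# MTQ=8 allows up to 8 cores per query; MTQ=-1 uses all available cores.
MTQ=8
```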
MTFeeders
MTFeeders is a new parameter introduced with Planning Analytics (TM1 server v11). When this parameter is turned on in tm1s.cfg, MTQ will also be triggered when feeders are recalculated, i.e. when:
CubeProcessFeeders() is triggered from a TI process.
A feeder statement is updated in the rules.
MTFeeders can provide a significant improvement, but be aware that it does not support conditional feeders. If you use conditional feeders where the condition clause contains a fed value, you have to turn it off.
To turn on MTFeeders during server start-up, you will also need to add MTFeeders.AtStartup=T.
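A minimal tm1s.cfg fragment for both settings (remove them if your model relies on conditional feeders with fed values in the condition):

```ini
# tm1s.cfg -- Planning Analytics (v11+) only.
MTFeeders=T            # multi-thread feeder recalculation
MTFeeders.AtStartup=T  # also multi-thread feeders at server start-up
```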
ParallelInteraction (TM1 only)
This feature is turned on by default in Planning Analytics (TM1 11+); we only need to set it to true explicitly if we are still using TM1 10.2.
Parallel interaction allows higher concurrency of read and write operations on the same cube. It can be crucial for optimizing lengthy data loads: instead of loading all data sequentially, you could load all months at the same time, which is called parallel loading. Parallel loading allows you to segment your data and then leverage multiple cores to load the segments into cubes simultaneously.
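On TM1 10.2 the setting is a single line in tm1s.cfg (on Planning Analytics it is already the default and cannot be disabled):

```ini
# tm1s.cfg -- only needed on TM1 10.2; default behaviour in PA / TM1 11+.
ParallelInteraction=T
</ini>
```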
MaximumCubeLoadThreads
This parameter impacts only the start-up time of your PA/TM1 instance. It specifies whether the cube-load and feeder-calculation phases of server loading are multi-threaded, so that multiple cores can be used in parallel. We need to specify the number of cores that we would like to dedicate to cube loading and feeder processing.
It is particularly useful if you have many large cubes and there is an imperative to improve server start-up time. A common starting point is the maximum number of cores minus 1. As with MTQ, finding the number of cores that gives optimal performance requires testing multiple scenarios.
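An illustrative tm1s.cfg fragment, assuming an 8-core server with one core left free for the operating system:

```ini
# tm1s.cfg -- illustrative: 8-core server, reserve 1 core for the OS.
# Affects server start-up (cube load and feeder processing) only.
MaximumCubeLoadThreads=7
```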
PersistentFeeders
Persistent feeders improve the loading of cubes with feeders, which in turn improves server start-up time. When you activate persistent feeders, TM1 creates a .feeders file for each cube that has rules. Upon server start-up, the TM1 server references the .feeders files and reloads the feeders for each cube instead of recalculating them.
It is best practice to activate persistent feeders if you have large cubes that have a large number of fed cells.
In many cases, start-up time can be significantly reduced.
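Activation is a single line in tm1s.cfg:

```ini
# tm1s.cfg -- writes a .feeders file per rule-bearing cube;
# best suited to large cubes with many fed cells.
PersistentFeeders=T
```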
Feeders are saved to the .feeders file. Therefore, even if you remove a particular feeder from the rule file, it will remain in the .feeders file. You will need to delete the .feeders file and allow TM1 to regenerate it.
If you have dynamic rules, or consolidated elements on the right-hand side of a feeder, you will need to reprocess the feeders if you choose to add a new version, for instance.
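Reprocessing can be done from a TI process with the CubeProcessFeeders function (the cube name below is illustrative):

```
# TurboIntegrator (prolog) -- 'Sales Plan' is a hypothetical cube name.
# Re-evaluates all feeder statements in the cube's rule file.
CubeProcessFeeders('Sales Plan');
```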
Although this is a great feature, judgement is required on when to use it. For instance, if your cubes are small and don't have many rules or feeders, it may be more beneficial to leave it off.
Design-related
Using TI processes, rather than rules, to connect different planning areas, such as sales budgeting and PnL budgeting
The typical financial budgeting and planning solution is designed as a chain of cubes running from sales through to the final PnL. There are usually two major processes: sales budgeting and forecasting, and PnL budgeting and forecasting. We could use rules to carry sales from the source sales cube all the way to the final PnL cube, which keeps the entire chain real-time. This works perfectly well initially with a small data set. However, if the source data is big, or its volume grows over time, the performance of the PnL-related consolidated cubes becomes poor. The solution is to use a TI process instead of rules. We can define a clear border between the sales and PnL areas: instead of using rules to move all sales budget and forecast data into the PnL area, we use a TI process, which dramatically improves PnL performance. It also makes the system easier to debug and maintain. Of course, the TI process should be easy for users to run, so they can load data on request.
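A sketch of this pattern; all cube, view, and variable names below are hypothetical, and a real process would derive them from the actual model:

```
# TurboIntegrator sketch -- cube/view names are hypothetical.
# Prolog: clear the target slice in the PnL cube so the copy is repeatable,
#         with a view over the sales cube as the process data source.
ViewZeroOut('PnL Plan', 'Sales Budget Zone');

# Data tab: vValue and the v... variables are bound to the columns of the
#           source view on the 'Sales Plan' cube.
CellPutN(vValue, 'PnL Plan', vVersion, vYear, vMonth, vAccount);
```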
Using TI processes, rather than rules, to populate data into analysis or reporting cubes
With respect to reporting, such as a report from Cognos, you can author the report directly against the budgeting and forecasting cube. This approach is fine while the cube is relatively small. However, it becomes problematic when the cube is big and accessed by many users, as both reporting and input become very slow. The solution is to create a separate analysis summary cube for the report to use, with a TI process moving data from the operational cubes to the reporting cube. This approach reduces contention for both operations and reporting.
Separating historical cubes from the main planning data flow
Financial budgeting and forecasting processes need only two or three years of data, while reporting may need historical data covering many more years. If we keep all of that data in the budgeting and planning data flow, performance can be dramatically impaired. The solution is to archive finalized budgeting and forecasting data into a historical cube with the help of a TI process. That data can be kept in a cube or moved into a data warehouse, depending on the overall data architecture.
Code-related
Rule - Avoid overfeeding
https://code.cubewise.com/blog/7-tips-to-writing-faster-ibm-tm1-and-planning-analytics-rules
TI - Break load into pieces
When a TI process takes too long, we first need to identify where the problem lies: the database, the network, or TM1 itself. If it is TM1, one solution is to break the data into pieces, such as loading by region. The same TI process is used, but with a Region parameter. We can run these processes simultaneously with either ExecuteProcess or RunTM1TI, which can dramatically improve load times.
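A sketch of a driver process for this pattern; the process name, parameter name, and dimension name are all hypothetical. On Planning Analytics, RunProcess launches each call on its own thread, which is what makes the loads run in parallel:

```
# TurboIntegrator sketch (prolog) -- names are hypothetical.
# Launches one load per region; RunProcess (Planning Analytics) runs each
# call asynchronously on its own thread. On older versions, the external
# RunTM1TI utility can be used to start parallel sessions instead.
sDim = 'Region';
nRegions = DIMSIZ(sDim);
i = 1;
WHILE(i <= nRegions);
  sRegion = DIMNM(sDim, i);
  RunProcess('Load Sales Data', 'pRegion', sRegion);
  i = i + 1;
END;
```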