Justifying averaging


Some data can be very detailed: with Oracle ASH, one sample every 10 seconds means 8,640 points a day (86,400 seconds per day divided by 10).
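As a rough order-of-magnitude check (a minimal sketch in Python, independent of KAIROS), the point count for a fixed sampling interval can be computed directly; the 3600-second case previews the effect of the hourly averaging described later:

    # Minimal sketch: how many samples a fixed sampling interval produces per day.
    # The 10-second interval matches the ASH example above; 3600 s shows the
    # effect of averaging down to one point per hour.
    SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

    def samples_per_day(interval_seconds: int) -> int:
        """Number of samples produced in one day at the given interval."""
        return SECONDS_PER_DAY // interval_seconds

    print(samples_per_day(10))    # 8640 raw ASH points per day
    print(samples_per_day(3600))  # 24 points per day after hourly averaging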


Working with that many points can be a handicap both for performance (a lot of memory is involved) and for readability.


Example with ASH data: imagine the following picture:



We have a single report "SAMPLE1" containing ASH data. Let's now view a typical chart:



The chart is very detailed but also quite difficult to read. This level of detail is interesting when zooming in, but for a global view, too much detail is a handicap.


So what is the solution?


Averaging


To average data in KAIROS, you need to do the following:


a) create a new node near your initial node


b) rename the created node




c) drag the initial node onto the renamed node while holding the "ALT" key


This turns the chip attached to the renamed node red.


d) open the created node



There is a lot of information about the opened node.


In the "producers" section, you can check that the current node is attached to the initial node.


In the "aggregator" section there is a lot of available options. All of them are set with a default value, but this default value can be modified.


To average data, we are going to focus our attention on the "method" selector, currently set to "$none". If we push the button, we can choose among all these options:



For our particular example, if we want one point every hour, we can choose "$hour" in the list...
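KAIROS performs this aggregation internally once the method is applied; purely as an illustration of what "one point every hour" means for a timestamped series, here is a sketch in Python/pandas (the column names and values are hypothetical stand-ins, not KAIROS data structures):

    # Conceptual sketch only: hourly averaging of a 10-second series, outside KAIROS.
    # "sample_time" and "active_sessions" are hypothetical stand-ins for ASH samples.
    import pandas as pd

    raw = pd.DataFrame({
        "sample_time": pd.date_range("2024-01-01", periods=8640, freq="10s"),
        "active_sessions": range(8640),  # placeholder values
    })

    hourly = (
        raw.set_index("sample_time")
           .resample("1h")   # one bucket per hour
           .mean()           # average the samples inside each bucket
    )

    print(len(raw), "raw points ->", len(hourly), "averaged points")  # 8640 -> 24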



Now we are ready to push the "apply aggregator" button.


Unless there is a problem, the "apply aggregator" function should be very fast.


Once finished, you can select the new aggregated node in the explorer and call the same chart as on the initial node.


e) display data on the aggregated node



Now the result conforms better to our expectations: much more readable and faster to manipulate.