The following are frequently asked questions about Media Analytics.
I've created data source "X." Can I use it with report pack "Y"?
Yes. For example, a data source created for QoS Monitor can also be used with other types of report packs, such as Audience Analytics and Viewer Diagnostics, and vice versa. Any required beacon configuration changes are applied automatically and take effect within a couple of hours.
I've created a setData() custom dimension for Audience Analytics. Can I use it in QoS Monitor?
Yes. The sharing of custom dimensions is now supported across different modules of Media Analytics.
Can I modify the time zone of an existing report pack?
Not for Audience Analytics, which stores most of its data as daily aggregates, bucketed to the start of the day in the chosen time zone. Changing the time zone would invalidate most of the stored data.
What happens if I wrongly customize a play/visit-level dimension, such as video type, as a viewer-level dimension?
You'll get incorrect Viewer metrics in various places, including the dashboards. The Viewer metric is pre-calculated for viewer-level dimensions and stored in standard data stores. Including a play/visit-level dimension at the viewer level causes a viewer to be counted once for each value of that dimension, which renders the Viewer metric in the standard data stores useless.
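To illustrate the inflation, here is a minimal sketch (the data and names are made up, not real Media Analytics structures) showing how counting a viewer once per play-level dimension value differs from counting distinct viewers:

```python
# Hypothetical illustration: each record is (viewer_id, video_type) for one play.
plays = [
    ("v1", "episode"), ("v1", "trailer"),  # one viewer, two video types
    ("v2", "episode"),
]

# Correct viewer count: distinct viewer IDs.
viewers = len({viewer for viewer, _ in plays})

# If the play-level dimension "video type" is wrongly stored at the
# viewer level, the store keeps one row per (viewer, video type) pair,
# so the same viewer is counted once per value.
inflated = len({(viewer, video_type) for viewer, video_type in plays})

print(viewers, inflated)  # 2 vs. 3: the viewer metric is inflated
```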
I added a data filter based on the Referrer/URL containing a specific string. Why do I see "unknown" values for the Source/Stream Name dimensions?
If plays have startup issues, Media Analytics can't capture Referrer/URL values. Since it's difficult to confirm whether such plays should be included or excluded, the filter is ignored, regardless of whether it's an include or exclude filter. As a result, the data appears on the portal with an "unknown" value for the Referrer/Stream Name dimension.
Why do we see different results/numbers when looking at the same data/report?
This can happen if the Media Analytics engine is still gathering data for the report.
Why do I see "**" as a dimension value in my report?
Media Analytics stores data in multiple tables that have preconfigured limits. The limit is currently 100K rows per day for newly created custom cubes; existing standard cubes have their own preconfigured storage limits. The number of rows generated and stored is determined by the unique combinations of dimension values in the report. "**" appears as a dimension value when the maximum number of data points per day is exceeded in a given report.
Note that the day boundary for this limit is computed in GMT, not in the analyzer's time zone.
Let's assume the dimensions in a report are Download URL and Time at a 1-day granularity. As soon as the number of values for Download URL exceeds the limit, newer ones get reported under "**". More generally, the number of data points per day equals the product of the number of possible values of each dimension in the report. To better explain this, here are two examples:
Example 1: For a report with Connection Speed and Delivery Type as dimensions, the total number of possible dimension-value combinations is 5 * 2 = 10: five values for connection speed times two values for delivery type.
Example 2: For a report with Category, Geo, Connection Speed, and Title as dimensions, the combinations of Category x Geography (Cities) x Connection Speed x Title x all other dimensions in the report are considered. So it's very likely that this report will hit the daily database limit.
When the limit is hit, Media Analytics accumulates all additional combinations under the special value "**".
It's best to evaluate the dimensions in the report and break them out into multiple reports to stay within the daily data-point limit. You can recreate the report after removing some of the dimensions (for example, Referrer and User Agent). You can also ask your account representative to get the limit increased.
Why is startup time for HDS streams higher when compared to HDN1?
HDS is slower than HDN1 for the following reasons:
For both VOD and live, HDS must load a manifest before the player can load the first video fragment. This manifest is dynamically generated by the edge server. With HDN1 there is either no manifest or a static SMIL file, both of which are much faster to load.
For live streams, HDS must also load a bootstrap file for each bitrate. If there are five bitrates available, then five bootstraps must be preloaded before playback can start.
With Media Security Policy protected HDS content or content with Player Verification enabled, the player must load an external auth SWF before it can load the first video fragment. With HDN1, a similar decryption module is actually embedded inside the stream for both Player Verification and Media Security Policy.
The edge server waits for the end of a fragment before delivering it, because the total byte size needs to be written in the leading header of the MP4 boxes. With HDN1, the edge server can begin sending response data as soon as it receives it, because the FLV structure has no header byte entry.
The startup buffer for HDN1 is 0.75s. With HDS, it's 3s or 4s (for live and VOD). This is because HDN1 doesn't have to deal with the quantization issues that HDS does. If we set HDS to a 0.75s startup time, then we'll likely experience buffering at every startup.
Why do I get the value "Unknown" for some of my dimensions?
Check the combination of dimensions and metrics you're using. In analytics, each metric has a set of valid dimensions based on its information level, and any invalid combination can lead to an "Unknown" result. For example, this happens if you combine the Visits metric with the play-level Title dimension.
If the dimension is through the plug-in setData() call, ensure the dimension value is set before the stream has started playing.
If neither of these cases applies, the Media Analytics plug-in might not have captured some dimensions because the media failed to load. This can happen with dimensions such as Title/Event Name, Stream Name, and Live VOD 24/7.
Why is the Viewers number for "Selected Dates" higher than WTD/MTD for the same days in the KPI and KQI dashboards?
The business and quality dashboards of Audience Analytics show the Viewers metric at various granularities, including "Selected Dates", "Today", "Yesterday", "WTD", and "MTD". If "Selected Dates" covers the same period as WTD or MTD, the Viewers number for "Selected Dates" is higher than the WTD or MTD number. This happens because the Media Analytics portal can't compute unique viewers for an arbitrary interval; by definition, the metric is only computed for fixed ranges such as hour, day, week, month, quarter, and year. For "Selected Dates", the portal simply sums the daily viewers of the selected dates, which double-counts any viewer active on more than one day. Since WTD/MTD numbers are calculated from the true weekly or monthly unique viewers, those numbers are accurate, while "Selected Dates" can be inflated.
For best results, don't use Viewers numbers from KPI for Selected dates if more than one day is selected.
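The difference between summing daily uniques and taking a true weekly unique count can be shown with a small sketch (the viewer IDs are made up):

```python
# Why "Selected Dates" can exceed WTD: summing daily unique viewers
# double-counts anyone active on more than one day, while the weekly
# number is a true union of viewer IDs.
daily_viewers = {
    "Mon": {"a", "b", "c"},
    "Tue": {"a", "c", "d"},
    "Wed": {"b", "e"},
}

selected_dates = sum(len(v) for v in daily_viewers.values())  # 3 + 3 + 2 = 8
wtd = len(set().union(*daily_viewers.values()))               # {a,b,c,d,e} = 5

print(selected_dates, wtd)  # 8 vs. 5
```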
Why is the number of viewers lower than the number of visits?
A viewer is someone who accesses your content. A single viewer can have multiple visits to your site in a day. As a result, the number of viewers is generally lower than the number of visits.
Can the number of visits be lower than the number of unique viewers?
Yes. Here's an example of this use case:
New viewers visit a site a few minutes before the end of the day. They are counted as unique viewers for the day. When the threshold for a day is crossed at 12:00 a.m., the viewers still on the site are once again treated as unique viewers.
These unique viewers are already accounted for in the number of visits, because a visit is counted only against the day in which it started.
Later in the new day, these same viewers continue to play content. This activity contributes to play duration but it doesn't affect the number of visits. So, this results in a lower number of visits compared to the number of unique viewers.
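A toy model of this attribution (not actual Media Analytics code; the session times are invented) makes the counting concrete:

```python
from datetime import datetime

# Toy model: a visit is attributed to the day it starts, but a viewer
# is counted as unique on every day in which they are active.
session = {"viewer": "v1",
           "start": datetime(2023, 5, 1, 23, 55),
           "end":   datetime(2023, 5, 2, 0, 30)}

start_day, end_day = session["start"].date(), session["end"].date()

viewers_by_day = {start_day: 1, end_day: 1}  # unique viewer on both days
visits_by_day = {start_day: 1}               # visit counted only for its start day

# On the new day there is 1 unique viewer but 0 visits.
print(viewers_by_day.get(end_day, 0), visits_by_day.get(end_day, 0))
```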
Why do numbers in MA look different when compared to those in traffic reports?
All media customers are offered traffic reports for free along with other reports for the various Akamai Media products. You can optionally use Media Analytics to track your usage for various dimensions. The sections that follow offer points to consider when comparing these reports and Media Analytics.
CP code vs. data source
When a customer uses client-side Media Analytics, there is no clear way to associate the customer's CP codes with the Media Analytics data sources; only you as the customer know this mapping. The comparison is only easy if there is a one-to-one relationship, and we need to confirm this with you.
We also need to confirm whether the integration has been done for all devices. Many times we've seen that client-side integrations aren't possible, or aren't present, on some devices such as PSP. However, the media product traffic reports, which use server-side data, don't need the integration, so they capture complete data.
Internal and external bots often create a difference. When performance analytics is enabled, these agents can trigger server-side traffic that isn't captured by Media Analytics, because it's client side. Also, some customers have used bots that weren't integrated with client-side Media Analytics.
Unknown user agents
Sometimes we've seen considerable server-side traffic is generated by unknown user agents that don't trigger client-side beacons. We've seen up to a 10% difference in play durations between the server side and the client side due to this. We haven't discovered the root cause of this behavior.
Media vs. overall
In some cases, Media Analytics might be tracking only the media traffic, while the media product traffic reports could be tracking the media, metadata, and other downloadable resources of the customer.
Billing bytes vs. Media Analytics bytes
In server-side Media Analytics, the bytes only pertain to media content playback, while the billing uses the overall bytes during a connection.
Media Analytics doesn't include overhead bytes. When billing, protocol overhead bytes (TCP/Ethernet) are accounted for.
Billing uses 1000 bytes as a kilobyte, while Media Analytics considers 1024 bytes to be a kilobyte.
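The unit difference alone accounts for a measurable gap. A quick sketch (the byte figure is an arbitrary example):

```python
# Billing uses decimal kilobytes (1000 bytes); Media Analytics uses
# binary kilobytes (1024 bytes). The same raw byte count therefore
# yields a smaller Media Analytics KB figure.
raw_bytes = 1_000_000_000  # example: 1 GB of media delivered

billing_kb = raw_bytes / 1000
ma_kb = raw_bytes / 1024

print(billing_kb, ma_kb)       # 1,000,000.0 vs. 976,562.5
print(1 - ma_kb / billing_kb)  # ~2.3% lower from the unit definition alone
```

Protocol overhead bytes, which billing counts and Media Analytics doesn't, widen this gap further.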
With client-side Media Analytics, we don't provide bytes as a standard metric. This is because it's not available across all integrations. However, you can get it added in a custom cube with manual metadata changes from the Media Analytics portal team. The client-side bytes would be similar to server-side Media Analytics bytes, and not billing bytes.
Client-side beacons send the play duration every five minutes in a beacon. If the client closes the connection and the plug-in can't send the information (for example, a network cable is unplugged), we don't capture the last few minutes of play duration in client-side Media Analytics.
During live events, the client side can show more play duration, even after the event has ended. This happens because the client could be replaying the media after it finished downloading.
It's always better if the comparison is done for a set of known stream names, instead of comparing overall. There could be some differences because in Media Analytics, the stream name for each five-minute beacon is the same, even though it could have played different bit rates.
Hits vs. plays
In client-side Media Analytics, a play represents a client watching particular media content in the player. This can be hugely different from server-side hits, especially with multi-bitrate renditions, where each bitrate change is counted as a hit in the media product's traffic reports.
It's not possible to compare hits for a particular stream name with actual plays for that stream name. In Media Analytics, Plays for a stream name only capture the starting rendition when multiple bitrates are used.
Concurrents vs. audience
Audience in client-side Media Analytics is a "point in time" metric: we provide the exact audience at specific points in time, such as 12:00 and 12:05. In the media product traffic reports, concurrents are derived from play-duration information: the play duration for a connection is spread across each minute it covers, and the average and maximum for each five-minute period are shown in the Control Center report interface.
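A minimal sketch of the concurrents derivation, using invented session intervals (this is the general minute-bucketing idea, not the report's actual implementation):

```python
# Each connection's play duration is spread across the minutes it
# covers; concurrency per minute is the number of overlapping sessions,
# and a five-minute period reports the average and maximum.
sessions = [(0, 7), (2, 12), (4, 5)]  # (start_minute, end_minute) per connection

# Concurrency for each minute in a 5-minute window starting at minute 0.
window = range(0, 5)
per_minute = [sum(1 for s, e in sessions if s <= m < e) for m in window]

avg_concurrents = sum(per_minute) / len(per_minute)
max_concurrents = max(per_minute)

print(per_minute, avg_concurrents, max_concurrents)  # [1, 1, 2, 2, 3] 1.8 3
```

A point-in-time Audience sample, by contrast, would report the concurrency at one exact minute only.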