The OTT video streaming industry is a fast-paced environment with an ever-changing set of variables that can affect your company's performance, viewer experience, and bottom line. These variables can arise at random and often cause critical errors in your workflows and delivery.
Watch Bitmovin’s Lead Software Engineer, Roland Griesser, and Product Manager, Christoph Prager, as they cover the variables that can and will affect your workflows.
In addition, they demonstrate best practices for debugging errors. You'll learn everything you need to know about how to mitigate the risk of critical errors and resolve them quickly.
Speakers
Roland Griesser
Lead Software Engineer
at Bitmovin
Christoph Prager
Product Manager
at Bitmovin
Introduction
Christoph: Welcome everyone. Glad you could make it, and thank you for joining our webinar today, Metrics That Matter: Top Down Error Reporting with Bitmovin Analytics. Today, we want to look at the philosophy behind how we designed error reporting in the Bitmovin Analytics tool, and then show you in a live setting how this works and how we conceptualize the debugging.
Christoph: Before we dive into it, a quick introduction and an overview of what we have planned on the agenda: I want to introduce the panelists, then explain what Bitmovin Analytics is, in case you're joining us for the first time.
Christoph: Then I’m going to explain a few more theoretical high level points about our debugging philosophy. And then we’ll see debugging in action, looking at top errors and session details. We will also discuss the business benefits of having a proper debugging and top error view in place.
Christoph: One quick note on the administrative side today. If you have any question at any time during the webinar, please use the chat function in the GoToWebinar panel or the GoToWebinar app to post it there. We will then pick it up in the dedicated Q&A session at the end.
Christoph: Anytime during the webinar, or while we are in the midst of the Q&A session, please don't hesitate to put a question in there; there's no deadline for questions. With that, I want to introduce today's speakers. I'm really happy to have Roland Griesser, the lead software engineer for Bitmovin Analytics, with me today.
Roland: Hi, everyone.
Christoph: Hi, Roland. Roland is a lead software engineer in the Bitmovin Analytics team, and I'm so happy he's here today because he has been really instrumental in getting the analytics product to where it is today; he has been a part of it almost from the beginning. Also, and this is important, he's a real expert in all things analytics.
Christoph: He's a real data expert, and he's also responsible for a lot of the data science that goes into the infographics we release regularly, so he's been instrumental there as well. Most notable among those infographics is obviously the one we did about COVID-19 and its impact on users' streaming habits.
Christoph: Thanks for joining, Roland. Really glad you're here. I am Christoph Prager, the product manager for the analytics product, and I will host the session. As a first step, because some of you have probably never heard of Bitmovin, let me give a quick intro to our company, what we do, and what you're attending today.
Christoph: So let me proceed with that. For those who don't know us, Bitmovin is a video streaming infrastructure company; that's the quickest summary of what we do. The company was founded in 2013 and has deep roots in research, so we pride ourselves on close connections to the university environments that push video streaming forward. The company also went through the Y Combinator accelerator, which was a really big milestone.
Christoph: From a product perspective, Bitmovin offers a product suite with three main components: a cloud encoder, which was actually the starting point of the company; a video player for the web plus various SDKs; and, as you can probably see in the progression from left to right, an analytics tool to track the activity on those players as a third product.
Christoph: A bit of context for today. What you've joined is a webinar in the context of what we call IBC Live 2020. Usually around this time, and also in April, it's really important for us to attend IBC and NAB. Due to the current situation with the global pandemic, that's not possible at the moment.
Christoph: So we'll try to bring the talks we usually have around our booth, with partners as well as with Bitmovin staff, into the virtual space. We'll try our best, but obviously we'd love to meet you in person.
Christoph: So we hope to return to that mode of presentation sooner rather than later. But before we dive into the topic, a quick intro to Bitmovin Analytics, as it is the core topic, or the core product, of today's webinar from a Bitmovin perspective.
Bitmovin Analytics: Overview
Christoph: As already touched upon in the intro, Bitmovin Analytics is a video analytics tool which captures client-side data from video players on a range of platforms, whether it's the Bitmovin Player, an open-source player, or any of the native SDKs. This is done through two modes of data capture.
Christoph: The first is collecting the wide range of player events that happen when users interact with the player. The second is deploying periodic heartbeats that check back on users to see whether there is still activity or whether something like buffering is still going on.
Christoph: So we have those two modes of data capture, which both function in parallel; it's not an either-or. What we see here is the overview slide we often put at the top of our presentations, highlighting a few product features that we believe we're very good at.
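To make those two capture modes a bit more concrete, here is a minimal client-side sketch of an event-plus-heartbeat collector. All names, the ingest endpoint, and the heartbeat interval are illustrative assumptions, not the actual Bitmovin collector API.

```typescript
// Hypothetical collector sketch: discrete player events plus periodic heartbeats.
type AnalyticsSample = {
  impressionId: string;      // one id per playback session
  event: string;             // e.g. "play", "pause", "error", "heartbeat"
  videoTime: number;         // position in the video, seconds
  timestamp: number;         // client wall-clock time, ms
};

class CollectorSketch {
  private readonly samples: AnalyticsSample[] = [];
  private heartbeatTimer?: ReturnType<typeof setInterval>;

  constructor(private readonly impressionId: string,
              private readonly ingestUrl: string) {}

  // Mode 1: record a discrete player event (play, pause, quality change, error, ...)
  trackEvent(event: string, videoTime: number): void {
    this.samples.push({ impressionId: this.impressionId, event, videoTime, timestamp: Date.now() });
    void this.flush();
  }

  // Mode 2: periodic heartbeat so the backend can tell "still watching"
  // apart from "session silently died" even when no player events fire.
  startHeartbeat(getVideoTime: () => number, intervalMs = 10_000): void {
    this.heartbeatTimer = setInterval(
      () => this.trackEvent("heartbeat", getVideoTime()),
      intervalMs,
    );
  }

  stopHeartbeat(): void {
    if (this.heartbeatTimer) clearInterval(this.heartbeatTimer);
  }

  private async flush(): Promise<void> {
    const batch = this.samples.splice(0, this.samples.length);
    if (batch.length === 0) return;
    await fetch(this.ingestUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(batch),
    });
  }
}
```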
Christoph: Obviously all of them are really interesting. But I want to bring your attention for today to the first two bullets. The first one being all data is available in real time. We go from the macro to micro level while your audience is still watching. And this is something really important and we’ll touch upon it in a second.
Christoph: Why is real-time data important? Because it lets you be really reactive and fast when it comes to error debugging. The other aspect is the most granular data, and we'll also touch upon this in a second: it relates to the amount of detail we save for a specific session and for every individual user that watches a video. That matters for really understanding the context of errors, because often, and this will be a topic that stays with us during the whole webinar, errors can be really ambiguous and not as straightforward as one hopes them to be. As I said, both of these aspects of our analytics are key in enabling an efficient error debugging and support strategy. But let me show you quickly how this works together with the architecture of the product before going into the details.
API at The Core
Christoph: Taking a quick look at the Bitmovin Analytics architecture from a user or customer perspective, the customer being any video streaming provider that has our analytics deployed on their platform: we try to provide, and this is a product philosophy as well, as many interfaces as possible to our customers, to enable them to access the data in the way each specific team really needs it. We touched upon this in a webinar with Google and Looker last week, in which we discussed the power of merging different data sets.
Christoph: So for us, from a philosophical standpoint, we really believe in an open platform. However, important in today’s context is that all the interfaces access the API and customers have access to the API as well. So, the centerpiece of the product as it’s highlighted here in orange is the API.
Christoph: And this is important because it ensures that all of the interfaces we give you have access to the same level of data granularity and to data that is available at the same speed. That matters as we talk about error debugging, because we don't want to give you interfaces that are inferior to our own dashboards.
Christoph: This is an important piece of architecture that enables you to have a very efficient and fast error debugging pipeline within all those interfaces.
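As a rough illustration of the "one API behind every interface" idea, a dashboard tile and your own tooling could issue the same query and therefore see identical numbers. The endpoint URL, auth header, and request shape below are assumptions made for this sketch, not Bitmovin's documented API.

```typescript
// Hypothetical query client: dashboards, exports, and custom tooling all go
// through the same HTTP API, so they share the same granularity and freshness.
interface CountQuery {
  dimension: string;                         // e.g. "IMPRESSION_ID"
  start: string;                             // ISO timestamps
  end: string;
  filters?: Array<{ name: string; operator: "IN" | "EQ"; value: unknown }>;
}

async function countImpressions(apiKey: string, query: CountQuery): Promise<number> {
  const response = await fetch("https://analytics.example.com/v1/queries/count", {
    method: "POST",
    headers: { "X-Api-Key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify(query),
  });
  if (!response.ok) throw new Error(`query failed: ${response.status}`);
  const { result } = (await response.json()) as { result: number };
  return result;
}

// Example: impressions with a specific error code in the last hour,
// the same call a dashboard tile would make.
const lastHourErrors = (apiKey: string) =>
  countImpressions(apiKey, {
    dimension: "IMPRESSION_ID",
    start: new Date(Date.now() - 3_600_000).toISOString(),
    end: new Date().toISOString(),
    filters: [{ name: "ERROR_CODE", operator: "EQ", value: 1208 }],
  });
```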
Speed and Granularity
Christoph: Moving on to the next slide, and talking quickly about speed and granularity. We're really proud that we've built a product that is able to serve data to our customers within five seconds from event ingress to availability on the platform.
Christoph: Speed is so important because our customers should be able to detect upcoming issues, errors, or an increase in key KPIs, such as the error percentage, immediately. This is important when it comes to debugging and error detection: it doesn't help you if you're too late and customers have already reported it on social media.
Christoph: In that case, the goal of catching an error before a customer reports it is obviously not met. So speed is really important. A small but also important detail is that this applies to the entire metric set, because we don't want to restrict you in deciding what is important to you and which key aspects to consider.
Christoph: The second key aspect to consider here is the granularity of the data. As I've already mentioned, there's a lot of ambiguity around video issues, and we believe context is really key. Context here comes in through detail in the form of session-level data.
Christoph: That means every event for every user and every individual play: what happened before the error, or before the event that you want to investigate, and, if playback still continues, possibly also what happened after. Context also comes through additional error data, for example stack traces; Roland will show in a second where that additional error data can come from and how you can really trace it back to your deployment.
Christoph: He will also show you how detailed we get in terms of data granularity, and this is important as it relates to the way we save the data.
High Granularity Data for Multi Use-Cases
Christoph: So, as this slide shows on the right, you can see how we save data. In the beginning I mentioned that there are two tracking components. The first is player events, for example when a user pauses, when there's a quality change in the ABR logic, or when an error occurs.
Christoph: These are the player events that we track, plus heartbeats to check whether a customer is still playing or whether a session is still buffering. Each of these events is saved in a dedicated row, which you can see on the left. This enables us to reconstruct what was happening on the user side in great detail.
Christoph: This enables a lot of use cases for us, and especially for our customers. There's session tracking in full detail, so you can track every session independently if there was an error. But also, and this is important when we come to the business benefits, the support team has access and can track what happened on the user side.
Christoph: So the communication barrier that you often see when users call in or report an issue is immediately broken down, as the customer support agents, whether on the first or second level (most of the time the second level), can access data about the user and see what happened in the stream.
Christoph: This will also be shown in the demo in a second, along with how we provide error context. So the way we save data is an essential part of our philosophy. And maybe before I hand it over to Roland, let me touch on one more point real quickly.
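A small sketch of the row-per-event idea, assuming a simplified schema (the real columns are richer): every player event and heartbeat becomes one row keyed by the impression, so a session timeline can be rebuilt simply by ordering those rows.

```typescript
// Illustrative "one row per event" schema; field names are assumptions.
interface EventRow {
  impressionId: string;   // groups all rows of one playback session
  eventType: "startup" | "playing" | "pause" | "seek" | "buffering" | "qualitychange" | "heartbeat" | "error";
  videoTimeStart: number; // seconds into the video when the event began
  durationMs: number;     // how long the event lasted
  clientTime: number;     // wall-clock timestamp, ms
  errorCode?: number;     // present only on error rows
}

// Reconstruct one session's timeline from its rows.
function sessionTimeline(rows: EventRow[], impressionId: string): EventRow[] {
  return rows
    .filter((row) => row.impressionId === impressionId)
    .sort((a, b) => a.clientTime - b.clientTime);
}
```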
Error Percentage is a KPI
Christoph: We believe error percentage is a KPI, and obviously an important one. It somewhat states the obvious, but if it goes up, it's important to follow up and see where specific problems arise; therefore it's an important monitoring KPI. However, what we often see is that the focus is just on the percentage and not on the underlying figures that are increasing or decreasing that error percentage.
Christoph: If you look at it in a monitoring context, the composition of errors can possibly be even more important, because every tool that we know has a ground level of issues. But if the composition of those issues changes during a live stream and suddenly you have a different top error, that's an important issue to investigate.
Christoph: That's also why we've developed the screen that Roland will show in a second: it broadly follows the three principles, or steps, that we've tried to capture here.
Top-Down Error Philosophy
Christoph: The first one is assessing the impact of your errors. Which errors affected the majority of my user base, or which errors affected an important segment of my user base?
Christoph: Maybe you want to track premium users in more detail than non-paying users, or it could be specific errors that occurred only in a region which is particularly important for you. We've seen many use cases where some regions are more important than others in terms of monetization potential.
Christoph: So it always depends on your use case, and having dedicated filters for user groups can be extremely valuable. It depends on your model, but customization that enables you to do a bit more analysis is key here, so that you have your own specific segments already predefined.
Christoph: The second important approach is really to isolate before you go into detail, because when you arrive at the last bullet, which we will come to in a second, ideally everything is already isolated enough to look at session-level data. If you don't do that exercise of going through the timelines and seeing which segments or which browsers are really affected, you have too much noise for session-level data.
Christoph: That's why it's really important to follow the steps. If you just select an error and jump straight into the session-level data, you mostly see anecdotal evidence and really get lost in the weeds. But if you've done that exercise thoroughly, you will get to the error context.
Christoph: And as I already discussed, this can be really important, as video errors especially can be super ambiguous. Just to take one example, the famous unknown error that we see in many players is really something that's not helpful in a lot of cases. This is where you have to dig deeper, see the context, and look at stack traces.
Christoph: The error context, as I said, can really help to debug those unknown errors, or ambiguous catch-all errors as we often call them. And with that overview of our philosophy, I want to hand it over to Roland to give us some insight into how this looks in action in our dashboard. Roland, over to you.
Demo: Debugging in Action
Roland: Thanks Christoph for this introduction. Hi again. My name is Roland. I’m a lead developer in the analytics team here at Bitmovin. And today I will guide you through our analytics dashboard, and I will show you how you can use it to identify problems with your video platform.
Roland: A small heads-up here: all this demo data is completely made up. So if you see Internet Explorer showing up on an iOS device, that's totally fine and not a problem with the platform. As Christoph mentioned, the first screen that's really helpful is the error percentage. Here we show you the overall health of your platform in terms of errors.
Roland: Error percentage here is the number of errors in relation to the sessions considered. So even if your platform showed a 100 percent error percentage, that wouldn't really be meaningful if only one impression was affected. That's why we have included those context metrics as well; here you see how many actual impressions have been looked at.
Roland: That puts the number above here much more into perspective. Another really nice feature that we have is the industry insights. The straight line you can see here is the median value across our entire customer base, so you can relate your data and error percentage to what others are experiencing.
Roland: As you can see, our demo data isn't really performing well here, but as I said, it's completely made up. What's really nice is that we also break this down by browser, ISP, and country. So whenever you look at data from Chrome, for example, you will not see the 6%, but you will actually see this error percentage filtered by the browser Chrome.
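A minimal sketch of the metric just described: error percentage is errors relative to the sessions considered, and the same ratio can be computed per breakdown dimension such as browser or country. The input shape is made up for the example.

```typescript
// Compute error percentage per value of a dimension (browser, country, ...).
interface SessionSummary {
  browser: string;
  country: string;
  hadError: boolean;
}

function errorPercentageBy(sessions: SessionSummary[], key: keyof SessionSummary): Map<string, number> {
  const totals = new Map<string, { errors: number; sessions: number }>();
  for (const s of sessions) {
    const bucket = String(s[key]);
    const entry = totals.get(bucket) ?? { errors: 0, sessions: 0 };
    entry.sessions += 1;
    if (s.hadError) entry.errors += 1;
    totals.set(bucket, entry);
  }
  const result = new Map<string, number>();
  for (const [bucket, { errors, sessions: considered }] of totals) {
    // error percentage = errors / sessions considered
    result.set(bucket, considered === 0 ? 0 : (100 * errors) / considered);
  }
  return result;
}

// errorPercentageBy(sessions, "browser") might yield e.g. Chrome => 6.0, Safari => 2.1
```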
Christoph: It's really important, with everything mentioned here, that the sessions considered tell you the impact of load on your KPIs. Usually it's a bad sign if the load goes up and the error percentage increases more than the load.
Christoph: The error percentage should actually stay steady with increased or decreased load. I think that's an important aspect when we look at the context metrics. And obviously the industry insights give you a good benchmark on whether you're above them and on what is achievable. As said before, there is a ground noise of errors that we will always see with internet video, but if that's too high, then you should really make an effort to reduce it considerably.
Roland: Yeah, exactly. So now let’s assume that a couple of your customers have complained about the internet connection or the error percentages, or they see a lot of errors in a specific country.
Roland: As Christoph has mentioned, we really collect the data in detail, so everything you can see here can be broken down and compared. So we will take a couple of countries here, look at the error percentage, and then we can click them and actually see them in the graph and compare them to each other.
Roland: So, apparently, in Peru we have a higher error percentage than everywhere else, and also in Uruguay, but let's focus on Peru for now. It's really nice that you can use all those breakdowns you see here as filters as well, and those can be combined. So if we want to look into the country now and figure out what's happening in Peru, we can filter all the data by Peru.
Roland: And now each of the breakdowns down here will only contain data for this country. Usually if something is only happening in one country, you would suspect a CDN provider problem, for example. Now you can do the same with the CDN provider. You can also see the values here. You can compare everything and see if one of them has a problem somewhere.
Roland: This can be done with any combination you can see here. As you've already seen, you can include filters, but you can also exclude some of them; for example, if you don't want to take Chrome into account. Using that approach, you can easily identify on which platforms errors are happening.
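Here is an illustrative model of how include/exclude filters like the ones shown in the demo could combine: filters are ANDed together, and an exclude filter simply negates the match. The dimension names and session shape are assumptions for the sketch.

```typescript
// Combine include (IN) and exclude (NOT_IN) filters over session data.
type Dimension = "country" | "browser" | "cdnProvider" | "errorCode";

interface Filter {
  dimension: Dimension;
  operator: "IN" | "NOT_IN";
  values: Array<string | number>;
}

interface Session {
  country: string;
  browser: string;
  cdnProvider: string;
  errorCode?: number;
}

function applyFilters(sessions: Session[], filters: Filter[]): Session[] {
  return sessions.filter((session) =>
    filters.every((f) => {
      const value = session[f.dimension];
      const matches = value !== undefined && f.values.includes(value);
      return f.operator === "IN" ? matches : !matches;
    }),
  );
}

// Example: everything from Peru, excluding Chrome.
const peruWithoutChrome = (sessions: Session[]) =>
  applyFilters(sessions, [
    { dimension: "country", operator: "IN", values: ["PE"] },
    { dimension: "browser", operator: "NOT_IN", values: ["Chrome"] },
  ]);
```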
Roland: But once you have figured that out, you can go into even more detail. That's where our next screen comes in handy: the top error codes. The first screen I showed you was more about segmenting where your errors are happening; this one now goes into more detail on the technical side.
Roland: Here we break down all the error codes happening on your platform by the deployed player version and by the actual error code. As you can see, there are a lot of 1208 errors happening on Bitmovin 8.4, and here we even have some information that says the source could not load the manifest.
Roland: So maybe the source is misconfigured or something. The next one here doesn't really give any information, and we will dive into that in a second. But before that, I want to show you what we have figured out with our own data: the top errors that we see across our entire customer base. Here, as Christoph mentioned before, the first one is a source progressive stream error.
Roland: That probably means the browser can't play the progressive stream. The third one, the no-supported-technology error, is similar: you are trying to play a stream format that's just not supported by the browser or operating system. But the fourth one, as you can see, one of the most common errors across our platform and across our customers, is an unknown error, which doesn't really tell you anything.
Roland: The next most common ones are the network error and the DRM failed license request. And you can see the one in blue here, the analytics quality change threshold exceeded error. Usually, when we collect errors from the players, we rely on the player to report them: if a player has an event interface and reports that an error is happening, we collect this error and send it to our API.
Roland: But something we have seen quite often lately is that if there are a lot of quality changes happening during a playback, it's not really an error from the player's side, but it's still affecting the user negatively, and you also want to figure out such things. So we've introduced some augmented errors that simply tell the API that something is going on, even though it's not really a player error.
Roland: As you can see, after introducing this error, it's already one of the most frequently seen errors that we have now across all of our customers.
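A minimal sketch of the grouping behind a top-error-codes table: error events bucketed by deployed player version and error code, then sorted by how many occurrences each bucket has. Field names are illustrative.

```typescript
// Group error events by (playerVersion, errorCode) and rank by count.
interface ErrorEvent {
  playerVersion: string;  // e.g. "bitmovin-8.4"
  errorCode: number;      // e.g. 1208, or an augmented code raised by analytics
  message?: string;
}

function topErrorCodes(events: ErrorEvent[], limit = 10) {
  const counts = new Map<string, { playerVersion: string; errorCode: number; count: number }>();
  for (const e of events) {
    const key = `${e.playerVersion}#${e.errorCode}`;
    const entry = counts.get(key) ?? { playerVersion: e.playerVersion, errorCode: e.errorCode, count: 0 };
    entry.count += 1;
    counts.set(key, entry);
  }
  return [...counts.values()].sort((a, b) => b.count - a.count).slice(0, limit);
}
```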
Christoph: Yeah, chiming in here for a second. I think what we generally see is that warnings, or errors that aren't really errors, can also be helpful. We call them warnings in that context, as we're giving you a warning.
Christoph: We say, okay, this session, or this number of sessions, has a problem with quality switching, or with oscillation between different renditions. We will not stop the tracking; we will still continue tracking those sessions, but we give you a warning about it and tell you this is actually an issue you should be looking at, because if there's so much quality switching going on in your platform, the user experience will really suffer because of the changes.
Christoph: Even though the video still plays, the quality changes all the time, which can be an annoying experience, as it's often related to buffering events and lengthy buffering times. So this would be a reason to look at the ABR logic, if it's a more front-end-related issue, or to go back to the encoding settings and ask whether they actually match the bandwidth of our user base.
Christoph: So from a product perspective, this is something we're investing more in, and we will have more of these warnings in the future as well.
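As a rough illustration of how such a quality-change warning could be derived on the client, the sketch below counts rendition switches inside a sliding time window and flags the session once a threshold is crossed. The window length and threshold are made-up values, not the ones Bitmovin actually uses.

```typescript
// Flag a session if too many quality changes happen within a sliding window.
function exceedsQualityChangeThreshold(
  qualityChangeTimestampsMs: number[],
  windowMs = 60_000,           // illustrative: one-minute window
  maxChangesPerWindow = 7,     // illustrative threshold
): boolean {
  const sorted = [...qualityChangeTimestampsMs].sort((a, b) => a - b);
  for (let start = 0, end = 0; end < sorted.length; end++) {
    // shrink the window from the left until it spans at most windowMs
    while (sorted[end] - sorted[start] > windowMs) start++;
    if (end - start + 1 > maxChangesPerWindow) return true;
  }
  return false;
}
```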
Roland: Yes, thanks. So now we're going back to the top error codes. We also have a really nice feature here: you can actually click those errors, and then you see the same screen we had before.
Roland: But this is now filtered by the deployed player version and only this error code. What you usually do then is, for example, find out on which platform this is happening. As you can see, apparently all of those errors have been happening on Android, which already helps you a lot.
Roland: So you have now identified that this error is happening on Android and only in the deployed version 2.45. It still doesn't really tell you anything about what the error, the 311, actually is. And that's where our last error impressions come in really handy: here you can see the latest impressions that actually had this error.
Roland: In this table below, all the filters you have added above in the global filters are applied as well. So you can really drill down into a specific browser, platform, whatever, and then see the impressions where those errors have been happening.
Roland: And all of those are clickable and will lead you to the thing Christoph mentioned at the beginning: the granularity of all the data we collect. Here you can actually see one session from one user, with a summary of everything they have been using: the ISP, the browser, the analytics version, the stream format.
Roland: You can see on which page this has been happening and for which video, and now we're getting into the details of the error as well. As I said before, it was a 311 error, and here the error message is in much more detail: it looks like this was an authentication error with a DRM license request. That already tells you where things went wrong.
Roland: But what we have introduced now is even more detail: you can also see the stack trace showing where in the code this has been happening. So it's really easy to figure out where you have to look in your deployed software and where you can fix this error, if it's actually coming from your side. And if it's not coming from your side, at least you have a point where you can say, oh, that happened in the player, and you can report it.
Roland: And that’s something really nice to have, and it’s available across multiple platforms for us.
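To give a feel for the error context being described, here is a sketch of what a single impression's error detail could carry (code, message, stack trace, environment), plus a one-line summary a support engineer might paste into a ticket. The field names are illustrative, not the actual API response.

```typescript
// Illustrative per-impression error context, roughly the level of detail in the demo.
interface ErrorDetail {
  impressionId: string;
  errorCode: number;            // e.g. 311
  errorMessage: string;         // e.g. "DRM license request failed: authentication error"
  stackTrace?: string[];        // where in the deployed code the error surfaced
  browser: string;
  operatingSystem: string;
  playerVersion: string;
  streamFormat: string;         // e.g. "dash", "hls"
  pageUrl: string;
  videoId: string;
}

// One-line summary suitable for a support ticket or on-call handover.
function summarizeError(detail: ErrorDetail): string {
  return `[${detail.errorCode}] ${detail.errorMessage}, ${detail.playerVersion} / ` +
         `${detail.browser} on ${detail.operatingSystem}, video ${detail.videoId}`;
}
```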
Christoph: Importantly, as you saw with Roland before, the unknown error being the fourth most common error across our entire customer base is really significant, because obviously you don't know anything about the error if it's unknown and there's no error message.
Christoph: If you can get to that level of detail, you can be really fast in identifying the source, and that alone takes hours off the error investigation in many cases, for example by showing whether the error is particular to a specific part of your setup or to the player.
Roland: Yeah, and now let's look at what Christoph mentioned at the beginning as well, our granularity. Here we can actually see the impression log of the whole impression, and you really have one row for each event that happened: apparently it was playing for a bit, seeking, buffering. You can also see how long each of those events took, where in the video timeline they happened, and apparently here everything ended with an error.
Roland: You can also see quality switches, video resolution switches, and languages. Above, we try to visualize that in the graph as well. This orange line shows you the download speed of the client, which is often really useful to relate to the buffering events, and if you remove that, you can see the playback video bitrate that was used to play back the video.
Roland: You can see the buffering events, and you would also see ads here on the timeline if there had been any; apparently the error happened here at the end. We also break down the startup time. If there is an error starting the video at all, it's really useful to look at whether it just timed out, for example, or something like that.
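A tiny sketch of deriving per-impression figures like total buffering time and startup time from a row-per-event impression log, assuming a simplified row shape.

```typescript
// Derive buffering and startup figures from a per-event impression log.
interface ImpressionLogRow {
  eventType: "startup" | "playing" | "seek" | "buffering" | "qualitychange" | "error";
  durationMs: number;       // how long this event lasted
  videoTimeStart: number;   // position in the video when it started, seconds
}

function totalBufferingMs(rows: ImpressionLogRow[]): number {
  return rows
    .filter((row) => row.eventType === "buffering")
    .reduce((total, row) => total + row.durationMs, 0);
}

function startupTimeMs(rows: ImpressionLogRow[]): number | undefined {
  // undefined if playback never started, e.g. the startup simply timed out
  return rows.find((row) => row.eventType === "startup")?.durationMs;
}
```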
Roland: And yeah, that's all for the impression details. But now let's go back up here: you can see we have assigned a custom user ID to this impression. If you have a customer base and you want to provide customer support for your users, we have a really nice tool that helps you identify problems when a user calls: they tell you their login, and you can immediately look at what has been happening in their playbacks.
Roland: So we have this feature called session tracking and here you can choose one of your custom user IDs.
Roland: So whenever a user calls and gives you their email address, you just use that email address, or whatever custom ID you assign to each impression for us. Here again we show you the latest impressions for that user. Apparently this one had a couple of errors and some playback sessions that worked well.
Roland: All of that can be clicked as well, and you get the detailed impression view again. That's really useful for being reactive up front: if a couple of customers are complaining that they're getting errors, you can immediately look into them, see which error is happening, and then go back to the error percentage screen and filter by this error code.
Roland: So you can react really fast and keep the impact to a minimum while you try to figure out the problem and fix it. It's also really nice that you can see the operating systems and browsers used; maybe the error is only happening on a couple of them, and then you go back, add filters again, and break down again.
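Sketching the session-tracking workflow just described: tag each impression with a custom user ID at collection time (whatever identifier support can ask a caller for), then look up that user's latest impressions. The config keys and data shapes are illustrative and may not match the exact Bitmovin configuration.

```typescript
// Illustrative player/analytics config carrying a custom user id.
const playerConfigSketch = {
  key: "PLAYER_LICENSE_KEY",
  analytics: {
    key: "ANALYTICS_LICENSE_KEY",
    videoId: "video-1234",
    customUserId: "user-42@example.com",   // id support can ask the caller for
  },
};

interface Impression {
  impressionId: string;
  customUserId: string;
  startedAt: number;          // ms epoch
  hadError: boolean;
  errorCode?: number;
}

// Latest impressions for the user who is calling support.
function latestImpressionsForUser(impressions: Impression[], customUserId: string, limit = 20): Impression[] {
  return impressions
    .filter((impression) => impression.customUserId === customUserId)
    .sort((a, b) => b.startedAt - a.startedAt)
    .slice(0, limit);
}
```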
Roland: And yeah, that's pretty much everything from my side, so I will hand it over to you again.
Business Benefits
Christoph: Yeah, maybe we can stay on that screen a bit before we go over to the business benefits, because they especially relate to it. Think of what I said before about the communication aspect. We've seen this, for example, with some telcos that have an additional OTT business on top of their core telco business, and customer satisfaction really depends on the OTT business running smoothly. So even if it's not your main product, having a really good quality of experience is important, because it reflects on the other parts of the product.
Christoph: What happened there is that the implementation linked directly to this screen, because we have that user ID; Roland showed it before in the individual session. There's a deep link: you can deep-link into that specific user and have, for example, your support team look into it. From what we hear when we discuss this with customers, most of the time it's second-level support. But especially when a broader audience of OTT users comes to internet video, the communication aspect is often difficult. What are you using to stream that specific video? I'm using an iPad.
Christoph: That's still an easy answer, and possibly the device version is too, but when it comes to the question of which operating system or even which browser they're using, communication often becomes very difficult, as people don't actually know where to find that. With this in place, your support agents have the information right at hand and can quickly relate it to a known, ongoing error, or debug further.
Christoph: And obviously they can also validate the specific experiences the customer reports, so it's not just about recalling what you heard or saw; you can simply check what the client actually did. That's why it's important, and this also brings me to the business benefits of having an error pipeline in place.
Christoph: Roland, if you can quickly hand the screen over to me as well. Perfect.
Christoph: Yeah, business benefits. Even though this is more of a technically focused session, we wanted to quickly show you the business benefits that we see with our customers. As a quick case study before we get more general: we have a case study out there with a telco, released last year, where they use our player and analytics, both products, in their deployment.
Christoph: They managed to reduce streaming-related support tickets by as much as 30%. That's a really good proxy metric for happy customers, because obviously fewer support tickets translate to higher customer satisfaction at the other end.
Christoph: It's important to mention that they used this screen, but obviously also the player and the entire analytics product, so there's a multifactorial aspect; still, having that screen at hand was core to reducing the number of tickets that support received. One of the key business benefits is reducing time to root cause and time to fix, or at least time to attempt a solution, which is always harder than time to root cause; we try to build a framework that reduces time to root cause. Why is this important? Because if you have enough information, you can mitigate an error before it shows up somewhere else. Customer experience is not only about a specific customer, even though that's an important part; it's also about reputation and about customers talking about it on social media, and you can really reduce the number of complaints about your service on social media if you're fast in addressing the problems. The second aspect, the second bullet, is reducing the complexity and cost of support, as I already alluded to, by having data on your customers.
Christoph: You're equipping your support teams with data about the user, and with a triage in place you can say, okay, known error, or new investigation, these kinds of things, much quicker than if you have zero data. We've started with so many customers who didn't have any insight into this and really helped them develop a good pipeline in that respect.
Christoph: It's important because it translates to happier customers, as you take the most common errors and issues and actually solve them. And happy customers are always good.
Q&A
Christoph: To end right on time, as we have 45 minutes planned for this session, I think we have five minutes left for you to pose a few questions.
Christoph: And as I said before, we have a few questions already in the chat. Before going further here, sorry, I lost my train of thought a bit, let me remind you again to use the chat function and put in any questions you have. And maybe let me start with the first question, Roland.
Christoph: It's a mixed question, I would say, partly a broader question and partly an engineering question: what other analytics-data-related errors have you introduced apart from the threshold exceeded errors?
Roland: For now, there is also the, I think it's called the rebuffer percentage, or rebuffer timeout, error.
Roland: So whenever we're looking at a session and it's buffering for more than, let's say, two minutes, we are going to send something to the platform as well, because usually the player will just continue buffering; it's not reporting any error, and so the server side would never be aware of something like that.
Roland: But to be honest, no one is waiting two minutes for a video to continue playing. So that's another one that we have introduced, and we are looking at more of them in the future.
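A minimal sketch of that rebuffering warning, assuming a two-minute cutoff and made-up field names: if a single buffering period runs past the threshold, the collector flags it even though the player itself never reported an error.

```typescript
// Flag a buffering period that exceeds the rebuffer timeout threshold.
const REBUFFER_TIMEOUT_MS = 2 * 60 * 1000;   // illustrative two-minute cutoff

interface BufferingPeriod {
  startedAtMs: number;
  endedAtMs?: number;   // undefined while still buffering
}

function rebufferTimeExceeded(period: BufferingPeriod, nowMs: number = Date.now()): boolean {
  const end = period.endedAtMs ?? nowMs;
  return end - period.startedAtMs > REBUFFER_TIMEOUT_MS;
}
```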
Christoph: Yeah, I think what's important here is that we're really a video analytics tool. We're not a generic analytics product into which you can push all kinds of errors and arbitrary event-level data.
Christoph: Instead, we really try to reflect the video use case. And as Roland said, if there's a buffering event that lasts for two minutes, there has to be some cutoff; based on the data, we see that people start abandoning sessions well before that, and if they don't, that's just a stuck session that will never recover.
Christoph: If you don't have those thresholds in place, you will also skew your data, so that's another important reason why we implemented them. A quick one here as well: do you have examples of integration with CDN logging data? Maybe that's something I can take.
Christoph: Yeah, I think this is interesting; I don't see who asked the question. On examples of integration with CDN logging data: we had a webinar with Google and Looker where we talked about the power of merging data sets, and I think this would fall under that realm of questions.
Christoph: What is important for us is that we've built the analytics architecture to be shareable and open. That's why we have numerous integrations already pre-built, but also, and this is where these use cases come in, we have an export function which equips data science teams and teams that run a central warehouse.
Christoph: This is especially what the Looker and Google webinar was about: bringing those different logs and use cases together. So we see this not as an integration into our tool, but as us being part of a larger data landscape and a larger data lake.
Christoph: That's how we would approach this case. Another one, and a really quick one: can the analytics service be used independently of the Bitmovin player? Yes. We have integrations with all major open-source players and with native players on the most important platforms. Generally, and this is also something philosophical that we always try to keep up, our products can be used independently of each other.
Roland: Also, on the native platforms, we not only support the Bitmovin SDKs on Android and iOS, but we also have integrations for AVPlayer as well as ExoPlayer. Obviously, not everything is available there, because we just can't get it out of the player, but whatever the player provides, we try to collect there as well.
Christoph: Another question that came in: is the error tracking only available for the Bitmovin player, or across all platforms? Roland, can you maybe touch upon that?
Roland: Yes. We rely, obviously, on what the player provides, and here we have limitations, for example on the iOS platform, where you can't get as nice a stack trace as on Android. But whatever is available, we try to collect there as well.
Roland: So with the newest updates of the analytics collectors, we also have the data there.
Christoph: Great. Yeah, we've hit the 45 minutes we wanted to take for this webinar. There are a few other questions we don't have time for, but we'll try to get back to you and answer them offline. Or, okay, I think we can do maybe one more; let's do one more.
Christoph: I want to do one more. Yeah, the question that came up here is how to interpret the numeric error codes, such as 2300 for DRM or 1400 for network issues. That's another question that just came in.
Roland: Yes, here again we rely on the player; each player reports those numbers differently, especially since, as I said before, we don't only support the Bitmovin player. That's also one of the reasons we decided to break the top error screen down by deployed player version, because a different player version often changes error codes, and so you know in which version the error occurred.
Roland: You can easily look those codes up if they are available publicly somewhere. Otherwise, as you saw before, this error on Android was, I think, 3011, and we had no information until we really dug into the impression. So the error codes don't come from us; the error codes that the players actually provide are what matter here.
Christoph: As I already said in the beginning, we hook into the player events, the same as with the [inaudible], and based on the data we get, we also raise our warnings. Yeah, I think we went through most of the questions; we'll try to follow up on those we couldn't answer. Thank you for submitting them, as we're already a minute over time.
Christoph: Roland, thank you so much for being with us today and for showing everyone how to debug in a real-life demo session. Beyond that, please reach out to us; there are different channels, LinkedIn obviously, email, and the other open channels at Bitmovin, if there are any questions or follow-ups. Happy to hear from all of you.
Christoph: And with that, thanks again, Roland, and thank you all. Have a great afternoon, or wherever you are, a great day. Thanks.