NETINT Technologies about hardware-assisted software solutions - Simon Karbowski & Jan Ozer
Hi, I'm Jan Ozer. I'm here today with Simon Karbowski, who is the CEO of StreamVX. Hi, Simon. Thanks for joining us. Hi, Jan. Thanks for having me. Can you give us a 30-second overview of where you're located and the products and services that you provide? Sure. We are a software product company from Poland, located in Gdansk, a famous city in the north of the country. We're essentially moving appliances toward software solutions. On a daily basis, we address different problems in OTT environments, like transcoding, packaging, and delivery networks. We enable new revenue models and try to enrich
video systems. We work in digital advertising, optimizing costs, lowering power usage... We deliver critical components into the systems of different operators, and we try to make our products as scalable as possible. We're also moving them toward modern deployments, like cloud, on-premises, and mixed models. Okay,
and who is your typical client? How would you describe them and their needs? Typically, our client is a pay-TV operator, a broadcaster, a telco, or a streaming company. Basically, any party that has anything to do with video. What gives us common ground is that we aim to bring new technology into those customers' ecosystems: we introduce new technology, we change the codecs, and we solve the problems. The flexibility in our products gives the parties involved a huge advantage. So you call yourself a software company. What is the typical way that you deliver your products? Typically, we deliver on the customer's hardware. So, the customer is
choosing the hardware. As I mentioned before, it's also an increasingly popular scenario that we deploy only on AWS, Google Cloud, or Azure. So, basically, in the cloud. And there are a lot of mixed scenarios, or hybrid scenarios, if you prefer to call them that.
So that's typical. But of course, we have different products, and sometimes a product allows deployment to be as simple as that. When it comes to our transcoder product, it's a little more sophisticated, because we usually need to advise a little more on the hardware side. And in this case, as we know, we can do this on the CPU, we can do this on the GPU, but we can also do this on ASICs, as we do with NETINT, with whom we have a great partnership right now. So, to quickly answer your question: we recommend the hardware. When the customer says, let's say, 'I want to transcode this many channels, with these kinds of ladders,' we try to figure out the best approach to cover that scenario. We'll probably go a little more into the details later in our discussion today,
but an ASIC approach is winning in plenty of scenarios. One of the biggest trends pushing encoding decisions, particularly in Europe, is rising energy costs. What are you seeing from your prospects and your customers about energy going forward, and how they select their encoding products? Yes, that's a very, very important factor in Europe right now, as you said. We see more and
more questions. And actually, they're very precise questions. We're getting straightforward questions like, 'How much power will it consume per day?' or 'How much per channel?' Again, as we know, it's not so easy to answer that kind of question, because nobody used to pay this much attention to power consumption in video systems, not at this level of
detail. But the good news is that as more and more RFPs touch on this, and again thanks to our cooperation with NETINT, these answers are coming a little easier, and we're on the winning side, because there are a lot of advantages here. Give us an overview. I mean, you can encode in software, you can encode with GPUs, and you can
encode with ASICs. How do they compare on, I guess, a watts-per-stream basis? Of course, there are some basic differences, and density is one of them, but that's not really the power answer, so we'll probably touch on it a little later. When it comes to power, NETINT's Quadras and T408s use just seven watts of power, and in our benchmarks they are very capable. Compared to a GPU, it's really like a hundred times more power on the GPU.
Of course, the GPU itself does a lot of different things that we're not using in transcoding. But that's not the point, because at the end of the day, the bill comes for the whole device. So here, on power usage, the ASIC side is winning significantly. Because if you go directly to the software side and do this just on the CPU,
the power draw per device may look a little better, but then we have to remember that the density, the number of streams we can handle directly on the CPU, is really, really low. So even if we do a little better on the power side per box, given the number of servers required, the total cost of ownership ends up completely behind on any comparison scale. Not even worth mentioning, I would say. So, you're saying that it takes many more computers running to equal the density of a single computer with multiple T408s, for example.
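The server-count and power arithmetic behind this point can be sketched like this. Every figure below is an illustrative assumption: the channel counts and wattages are hypothetical, only loosely echoing the numbers mentioned in this conversation, not measured values.

```python
import math

# Hypothetical sizing exercise: how many servers, and how much total power,
# does it take to transcode a given channel count on CPU-only hosts versus
# ASIC-equipped hosts? All inputs are illustrative assumptions.

def sizing(channels: int, per_server: int, server_watts: float):
    """Return (server count, total watts) for a target channel count."""
    servers = math.ceil(channels / per_server)
    return servers, servers * server_watts

channels = 120  # hypothetical target channel count

# Assumed: a CPU-only 1RU server handles ~4 channels at ~500 W;
# an ASIC-equipped 1RU server handles ~30 channels at ~700 W.
cpu_servers, cpu_watts = sizing(channels, per_server=4, server_watts=500.0)
asic_servers, asic_watts = sizing(channels, per_server=30, server_watts=700.0)

print(f"CPU-only: {cpu_servers} servers, {cpu_watts:.0f} W total")
print(f"ASIC:     {asic_servers} servers, {asic_watts:.0f} W total")
```

Under these assumed densities, the CPU-only build needs many times the rack space and power of the ASIC build for the same channel count, which is the trade-off being described.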
That's basically it. It's going to take a lot of CPUs and a lot of servers to do the same job that we can do on one ASIC. So swapping servers for ASICs is a real win-win for your customers who are looking to cut their power costs? That's exactly the point. That's what we're proving in more and more cases right now, because, basically, VX Transcoder is software that leverages different platforms for the transcoding tasks. And, of course, as I mentioned,
we have different scenarios trained and checked, and we have different deployments, but the ASICs come with really unusual density, I would say. That's, of course, because the ASIC is dedicated, as the name states: it's dedicated to a particular task. As we said, the CPU is general-purpose, so it's not dense enough; it's capable of doing a lot of stuff, not only transcoding or encoding. And then the GPU is pretty capable at encoding, though not as
much as the ASIC, but it also does a lot of other stuff, and it consumes a lot of power. So nowadays, I would say, that is not necessarily the best choice. Do you have any customer success stories you can share about saving power by switching to ASICs? We're participating right now in several RFPs and several POCs with customers, and as I mentioned before, the winning trend is the combined solution. StreamVX software on top of NETINT's ASICs is basically beating everybody on density and on power usage. We're already seeing really impressive numbers:
we can squeeze around 30 channels out of one 1RU unit, compared to a similar configuration in two RU units, so you already save half the rack space. Coming to power usage, it's a really significant difference, because the 1RU server we're speaking about uses barely 700 watts, while the 2RU units use 1,500. So, twice as much, basically. And the density is at least 30 percent more channels out of half the space. The KPIs here are impressive, and at the end of the day it's a simple scenario that we can do almost 'plug and play.' That makes it even better. I mean, it's nice to hear about power, but quality is always critical for any service. One of the older knocks against hardware encoders was that the
quality couldn't match the software. What's been your experience there? On one side, we have a lot of movement in the video market. We know, and we speak about this often, that it's moving very fast; the amount of video is rising rapidly, and it's going to get higher every day. But quality, in my opinion, is getting to the point where it's one of the major differentiators on the market. So if you really want to push your service further, or you want to gain more traffic on your platform and gain more customers, quality is the factor that will differentiate you. And when it comes to NETINT, the quality is something we were quite impressed by. Because, okay, with software transcoding
we can go pretty high on quality, but again, density really suffers there. With NETINT, we very often see two factors that create a very important, let's say, subjective impression of the quality level. One is the ultra-low latency we can squeeze out of the hardware ASICs, which is very important for sports or live events, because then you want to be really, really fast, just behind the actual camera or the source feed. And offline, when we have a little more time and better transcoding settings, we see VMAF scores like 96, even 98 sometimes, which is not so easy to squeeze out of software. And even if you do, you're basically doing just one movie per machine, which is useless, because it wastes a lot of power and a lot of resources. So quality
here is, in my opinion, very important, and NETINT is quite impressive on those factors. You mentioned latency. How many of your customers care about latency? You know, it's always about the use cases. For us, every customer is a different use case. Our flexibility wins a lot of cases here, because we really understand video, as you guys understand video. It's a very strange animal. So
you have to feel whether what you measure really matters or not. There are some customers, as we said a second ago, who need live sports or gaming events transmitted. There, latency is critical. And there are some customers who just have video libraries, where latency is not so critical; it's just offline stuff, so quality is probably the bigger priority. So it depends. But again, as I
mentioned before, in our opinion latency counts a little as one of the quality factors. Our observation in the market is that latency is also considered part of quality, and with NETINT we can squeeze something around 10 milliseconds, which is also really impressive. Very useful for live events. Definitely very useful. And quite impressive, again, compared to software solutions. Put that in perspective for us. I mean, if I'm watching a soccer match on TV, what's my latency
going to be if I'm watching over cable, and what's it going to be if I'm watching via streaming, say, with one of your systems? That's a very good question, and very hard to answer precisely, because every single ecosystem we see on the market and in our customers' cases is really different, so those architectures are hard to compare. But the good point here is that we see a huge difference from the transcoding platforms deployed in legacy systems. A lot of cable operators are still using those for broadcast, where we see even 20 or 25 seconds of delay against the actual feed. On systems where we handle just the transcoding, we're getting 10 milliseconds of delay, and even if you add, let's say, additional delay in distribution, we're still far away from those 20 seconds. So,
in our opinion, it's a huge difference, and it's a huge jump toward getting really close to the live feed. So it's a significant difference. Of course, I'm not saying there aren't a lot of deployments that have this applied differently and are getting closer, but definitely the basic approach from NETINT makes things much easier, because the critical part, the hardware processing of the source feeds, is narrowed down to really impressive times. Talk to me about integration. If I have a system that depends on software encoding at this point, and I want to switch to ASICs, what does the integration task look like, and how long is that going to take? That's probably the whole idea behind our partnership with NETINT. The whole trick is that we are a software company, yet, thanks to this great cooperation, we've been able to integrate very deeply with the ASICs.
So what we did, basically, is hide all the magic and all the heavy lifting done by the ASICs, which of course delivers fast, great-quality transcoding in the back, behind a nice UI and a nice API, if you prefer to use an API, so you can control all of this with a few clicks, easily and quickly. From the customer's perspective, in a really simple case, we just take some inputs, set up some outputs, and that's it. That's going to be a few hours in the simplest possible case. If we're doing retrofits, or we're
changing the architecture, or we're speaking about more sophisticated redundancy scenarios, it's going to take longer. But the whole point is that it's a software-defined solution with really well-integrated hardware, so that part is completely behind the scenes, and the customer doesn't have to care about it. The whole integration is
about, let's say, architecture and fitting the needs of the customer. And that's of course our job; that's what we do. But it's, again, a case-by-case approach. What is critical here, I would say, is that our approach and NETINT's approach are very similar. That's why we really like this cooperation: we're both really focused on solving problems. We really
focus on the customer cases. These combined strengths give us real flexibility, so we can say it's plug and play, because we can basically deliver something where you just give us the input, give us the needs on the output, and we will simply deliver it. So the customer sees nothing but lower power bills and higher density, and you handle the rest? Yes, that's basically the whole thing. What's important here... We have to remember, and we both know this, all the fuss behind how this happens and exactly which parameters you have to set. You can mess up in
plenty of places, but in most cases, if we hide the whole thing behind a nice UI, we cover 99 percent of the cases, and when customers need more expertise, that's where we come in. We fix things, we fine-tune, we advise, or, as we said, we deploy very sophisticated redundancy scenarios to make sure it never fails. So, what other hardware do you consider in a potential deployment, say, moving from software-only to hardware-assisted? We have different experiences, and we've been doing this for quite some time already. So we, of
course, again, as a software solution, we can run on the CPU, on the GPU, and on ASICs; we've integrated different hardware accelerations. But in most of the cases we see right now, as we mentioned already, customers care more and more about power usage and density, for obvious reasons. Right now that's especially the case in Europe, because of the crisis situation and all that, but in my opinion it's going to become very important globally very soon, because the amount of video being produced keeps rising, and so on. It will simply force everybody
to take care of the density and the power bills. So what happens, and what we see, is that the ASICs' advantage over the GPU is really significant. We can really get like 10, 20, sometimes even 30 times the density with much, much lower power usage. So, if you're counting
this venture as a three-year TCO, the power bills alone easily cover the whole cost of the difference. It's really that significant. And of course we can run different accelerations, but what we see right now is a real migration toward efficiency. We'll probably even see retrofit cases soon, where somebody wants to exchange some hardware for more efficient hardware, which, again, in our software case, is very, very easy. And similarly for NETINT, which is based on interfaces that can match virtually any server on the market, because the interface you use for connecting the hardware to the server is universal, just as our software
can run universally on a lot of servers. So it's, again, a perfect match. We can basically even rescue slightly older systems that customers want to optimize. You've mentioned the payback period. What are you seeing as the payback period for a T408-type installation? That, of course, comes back to the question of scale. But in the RFPs and the cases we're working on right now, it's usually three years, which easily justifies the cost. So that's not too long,
I would say, but it depends on the scale. There are probably a lot of cases that are going to repay it faster, and there are some cases, maybe big or recent installations, which are maybe not so power-hungry. But in general, it's not much of a discussion anymore.
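A back-of-the-envelope payback calculation of the kind described here might look as follows. The extra capital cost, the power saving, and the electricity price are all hypothetical placeholders, not figures from any actual deployment:

```python
# Hypothetical payback estimate: months until the power saved by an
# ASIC-based deployment covers its extra up-front hardware cost.
# All inputs are illustrative assumptions.

def payback_months(extra_capex: float, watts_saved: float,
                   price_per_kwh: float) -> float:
    """Months to recoup extra_capex via electricity savings, assuming 24/7 load."""
    kwh_saved_per_month = watts_saved / 1000 * 24 * 30  # 30-day month
    monthly_saving = kwh_saved_per_month * price_per_kwh
    return extra_capex / monthly_saving

# e.g. 12,000 (any currency) extra capex, 12 kW saved, 0.30 per kWh:
months = payback_months(12_000, 12_000, 0.30)
print(f"payback in roughly {months:.1f} months")
```

With aggressive assumptions like these the payback lands well inside a three-year TCO window; with smaller savings or cheaper power it stretches out, which is why the answer depends so much on scale.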
With the density gains we're getting with this kind of approach, it's really easy to reach the moment when you get your money back. And nobody's predicting that power pricing is going to go down in the near term? It looks like it's always going to be going up? If we both knew that, we would be rich, and we wouldn't be sitting here. Nobody knows. Honestly, nobody was expecting the pandemic
or the war in Ukraine. So it's not so easy to say whether it's definitely going to happen like that or not. What we can be sure of, in my opinion, and we've seen this for years, and it's getting more traction right now, is that the amount of video, and live video, being produced is rising rapidly. And because the amount of
video content is rising rapidly all the time, the amount of infrastructure processing that video also needs to grow. So definitely, this cost is going to be taken into account more and more. And we have to remember that it's not only live video: there is a lot of offline content that must be processed somehow. There is a lot of gaming content being streamed, which is also live but a slightly different animal. And there's a large advertising market that needs transcoding as well. So
there are a lot of different places where this video processing is very important. So even if prices were to stay flat, the scale is going to rise, the costs are going to get more attention, and those costs will be kept low by density, again, in my opinion. So to summarize: if I'm out looking for encoding hardware...
We've talked about costs, we've talked about power, we've talked about latency, we've talked about quality, we've talked about the interface... Are there any other factors that potential buyers of encoding products should consider? A lot of them! But I have to mention a few. I really like the cooperation we've got here, because video itself is really an
ugly animal. So what is very, very important in the environment we're working in is real awareness of what video is and how to handle it. And again, StreamVX comes with a lot of history in the industry, and NETINT with a lot of experience; this is a perfect combination for a partnership that puts a video venture in the right place. Because it's really, really important to understand how to take care of the video itself. The other part is all the quality stuff we spoke about briefly:
there really is a lot of video on the market, but quality is starting to be a differentiator, and in my opinion it's going to be huge. Because, okay, we're speaking about 4K, we're speaking about 8K, but we're still seeing really ugly HD. Or rather, it's not HD; it's not even close to HD. We're still seeing a lot of streams that claim to be HD but are far away from the quality we expect from HD streams.
And it's very important to remember that it doesn't have to be like that, and it shouldn't be like that. So that's another factor we have to take care of. And very important is flexibility: the flexibility of the vendors, of the solution,
or of the product that you choose for transcoding or encoding. Because we have to remember that the use cases, and the number of streams you want to process, change. The conditions change, our inputs change. And the flexibility of the product is very, very important, because in our experience we see a lot of appliances right now which, as appliances usually are, were built to do one function. And we're living in times when, with a software approach, the function changes with a software update, and it's as easy as it can be: just click, and here you go. That's what customers expect right now.
And on top of that, if you want to build really sophisticated high-availability scenarios, where you can't allow a failure during a live event, or you can't allow a VOD system failure in front of, I don't know, dozens or thousands of customers, you want to be sure the solution you choose is really capable of that. And again, with a proper software-defined design like StreamVX's, and proper hardware support like NETINT's, you're always on time, with low latency and good quality. That's really a perfect match, in my opinion.
Well, that sounds like a perfect ending to this interview. I appreciate your time, Simon, and talk to you soon.