DISCLAIMER: in my daily life I’m an Akamai Professional Services employee (Dr Jekyll), so you might want to keep some critical distance from the following article. If you happen to know me personally, you also know that at night, the blogger inside me (Mr Hyde) is always driven by objectivity and a detached passion for innovation. This being said, I wish you an interesting read!
4K streaming, mobile broadcasting for the crowd, generalized delinearization, worldwide video events… OTT delivery is multiplying the challenges, as customers’ expectations rise each day in terms of video fast-start, instant channel switching, absence of rebuffering and high frame size/rate – on all devices, in all network conditions. To meet those challenges, the OTT industry’s answer today is basically more unicast sessions, more servers, more peering – and less and less guarantee of a satisfying end-user experience as long as there is no specific end-to-end paid agreement to ensure the path is provisioned from the origin server all the way to the video device. Even in this ideal scheme, the device might still suffer from poor wireless conditions which jeopardize the experience. So, how do we deal with this stack of potential problems: do we stick to the aging recipes, rely blindly on Moore’s law and perpetuate a hopeless CDN arms race? Or do we try to find smarter ways to put OTT growth on a sustainable delivery model?
OVFSquad opens the debate
This topic was precisely the discussion theme chosen for a recent conference organized in Paris by the Online Video French Squad (aka OVFSquad – la Streaming French Touch), a collective of crazy French-speaking streaming tech guys (les #OUFS, in French slang) I’m involved in. As the collective gathers folks from various video sectors (broadcasters, CDNs, video service providers, industry suppliers, research and consulting), we had the chance to expose various approaches to this multi-faceted problem, under the welcoming umbrella of Telecom Paris Tech (the most renowned French telecom school). Our speakers were Nivedita Nouvel (VP Marketing, BroadPeak), Claude Seyrat (CMO, Expway), Gwendal Simon (Associate Professor, Telecom Bretagne), the StreamRoot startup co-founders Pierre-Louis Théron, Nikolay Rodionov & Axel Delmas, and Julien Privé (Senior Solutions Engineer, Akamai) for the conclusion. The conference was definitely a great knowledge-sharing moment, and a highly valuable source of reflection on pushing the limits of our current distribution model. The following considerations are drawn in large part from the conference’s minutes.
One OTT – many headaches
Depending on which challenge we need to tackle and which kind of distribution network is concerned, the discussion entry point is not always a simple “CDN scalability and business model limits are getting closer” observation. This one might be partly relevant for off-net CDNs which struggle for peering, colocation space and unified QoS – even for those like Akamai, which has already deployed 140K servers on 1,200 networks and serves 21 Tbps at peak. Taking Akamai’s current market share into consideration, Julien Privé’s projections for the upcoming years show that Akamai would need to serve up to 300 times its current traffic volume.
For on-net operators, such as those BroadPeak provides managed-CDN solutions to, the discussion is of course focused on scalability for high-traffic events – but also on managed network optimization and resource sharing with existing fixed services when it comes to serving hundreds of thousands of simultaneous OTT devices. For mobile operators such as the ones Expway supplies with LTE Broadcast solutions, it’s also a matter of optimizing network resources for distributing a few blockbuster contents, and of overcoming unicast limitations when it comes to serving crowded places like stadiums. And for the video service providers and broadcasters whose headaches StreamRoot tries to lighten, the discussion entry point might rather be how to lower distribution costs while ensuring decent QoS on highly popular video events and staying CDN-agnostic.
One could think these concerns would be solved by radically different technologies and strategies, but our speakers explained that a limited set of them might be sufficient to cover the whole scope, and that it may be more an evolution of existing workflows with the help of proven technologies than a revolution…
New hopes out of old recipes
As often happens, technologies classified as “old and declining” in a specific context can be reused with much success under a different set of requirements – and this is precisely the case with multicast and peer-to-peer in the context of OTT distribution.
Multicast is one of the most famous sea serpents the video-over-IP world has been hunting since its inception in the late 80s. As there was no simple way to ensure end-to-end multicast routing across heterogeneous networks, apart from specific regions like the UK where politics made it mandatory, the use of IP multicast was restricted to the scope of a single network, like one ISP or one corporate network. The major service powered by multicast today is IPTV: it’s actually the gold standard in terms of scalability for multi-channel full-HD distribution over wide managed networks. During the 90s and afterwards, we also saw limited use of multicast for streaming needs in the corporate world, especially with Windows Media Services, somewhat with RealServer/RealPlayer and more recently with Flash Media Server 4, but it never really took off – mainly for business model reasons and technical complexity across distant corporate sites – all the more so as the corporate world finally embraced HTTP streaming, which was easier to scale with standard proxy-cache architectures. Recent initiatives like the Octoshape Infinite HD-M solution have generated new interest in multicast, as it moves away from classical IPTV MPEG-TS streaming to embrace OTT formats. So, 25 years after the multicast story started, why did this mule turn into an OTT racehorse?
Well, first, multicast is a central and well-known transport standard for triple-play services over managed networks – a usual suspect, we might even say. Other valid reasons: the multicast service plan is not indefinitely extensible, but it generally has some capacity remaining after IPTV; it’s fully compatible with OTT, as UDP is not a showstopper for ABR HTTP delivery (unusual, yes, but fully possible from a technical point of view); and it’s massively scalable to sustain traffic peaks, as the network mesh itself somehow replaces the classical CDN caching architecture (which is interesting if you think of the hundreds of 15 Mbps 4K streams that operators will have to cope with in the near future).
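To picture how ABR segments can travel over UDP multicast despite datagram size limits, here is a minimal sketch of the idea: split an HLS/DASH segment into sequence-numbered datagrams and reassemble it on the receiving side. The 8-byte header layout is invented for illustration and is not any vendor’s actual wire format.

```python
import struct

DATAGRAM_PAYLOAD = 1400  # stay under a typical Ethernet MTU

def chunk_segment(segment_id: int, data: bytes):
    """Split one ABR segment into sequence-numbered datagrams."""
    total = (len(data) + DATAGRAM_PAYLOAD - 1) // DATAGRAM_PAYLOAD
    for seq in range(total):
        payload = data[seq * DATAGRAM_PAYLOAD:(seq + 1) * DATAGRAM_PAYLOAD]
        # hypothetical header: segment id, sequence number, datagram count
        yield struct.pack("!IHH", segment_id, seq, total) + payload

def reassemble(datagrams):
    """Rebuild the segment from datagrams (any order); None if incomplete."""
    parts, total = {}, None
    for d in datagrams:
        _seg, seq, total = struct.unpack("!IHH", d[:8])
        parts[seq] = d[8:]
    if total is None or len(parts) != total:
        return None
    return b"".join(parts[i] for i in range(total))
```

Because datagrams carry their own sequence numbers, the receiver tolerates reordering; loss, however, leaves the segment incomplete – which is exactly where FEC or a unicast repair channel comes into play.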
Multicast is a real broadcast enabler for OTT, as BroadPeak is showing with its nanoCDN technology, made up of two main components. The Transcaster server runs in the head-end: it takes a standard unicast live ABR stream as input (like an HLS stream in push mode, secured by DRM if needed – or a group of streams, as in a satellite multiplex) and converts it into a multicast channel, so that the live streams are conveyed in multicast down to the home gateways inside the ISP’s managed network. On the home gateway, the second component – the nanoCDN Agent – is an embedded software stack which converts the multicast stream back into unicast, allowing all compatible devices on the home network to read the stream without any end-user application modification. In the head-end, the operator can use BroadPeak’s Mediator server to monitor live stream popularity over the network and trigger automatic ingestion through the Transcaster servers when a stream reaches a popularity level where provisioning a multicast group makes more sense than delivering every session in unicast. The beauty of the system is that, through the intelligence of the nanoCDN Agent, the end-user device uses the multicast-to-unicast translated stream most of the time, and can fall back directly to unicast delivery when the stream is paused or a trick mode is used. While multicast represents 90% of the delivered bits, unicast also plays a central role in the quick initialization of video channels, before multicast takes over. During the conference, Nivedita Nouvel confirmed that the end-user device could also fall back transparently to the gateway’s disk buffer if storage capacity was available. That’s an additional saving on top of already huge cuts in unicast bandwidth use, and there are further advantages: no player code modification is needed, and the approach is agnostic to DRMs and ABR formats (including DASH).
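The agent’s fallback behavior described above can be condensed into a small decision function – a hand-wavy sketch with invented names, not BroadPeak’s actual logic:

```python
def select_delivery_path(multicast_active: bool,
                         trick_mode: bool,
                         gateway_buffer_hit: bool) -> str:
    """Pick where the next segment request should be served from.

    Mirrors the behavior described above: multicast for normal live
    playback, the gateway disk buffer when it holds the requested range,
    and plain unicast otherwise (pause, trick modes, channel start-up).
    """
    if trick_mode:
        # Pause/seek/fast-forward break away from the shared live edge
        return "gateway-buffer" if gateway_buffer_hit else "unicast"
    if multicast_active:
        return "multicast"
    # Channel start-up: unicast bootstraps playback before multicast takes over
    return "unicast"
```

The point of such a per-request decision is that the device never notices which path served it – every answer looks like an ordinary unicast HTTP response.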
In a similar but not identical way of handling multicast integration – and for the off-net environment – Akamai’s approach is to negotiate with ISPs for access to their networks’ multicast capabilities, meaning that Akamai Edge servers will at some point be able to stream ABR fragments over UDP multicast in addition to TCP unicast. This will obviously require enriching the usual video player stack to support subscription to multicast groups, but it might be more transparent to ISPs, as the multicasting will be done directly by Akamai Edge servers. The architecture does not require specific software integration on the gateways to work on any OTT device embedding the appropriate client stack. Beyond the complicated technical setup, this kind of architecture might also be challenged by business considerations, as renting out multicast spectrum is not a usual practice for operators compared to establishing obvious pricing for unicast traffic. But business challenges aside, the most important lesson here is that multicast is definitely recognized as a major enabler for live OTT at massive scale.
These were exactly the arguments Claude Seyrat (Expway) developed for streaming video over mobile networks: unicast cannot scale on base stations, but multicast makes it possible to serve a very high number of concurrent users from a single base station. The most obvious case presented was that of stadiums, where thousands of people might want to watch instant replays or follow a specific player on a customized video stream. While there is no definitive data on the scalability limit, we might know pretty soon, with experience feedback from this year’s Super Bowl, for which Verizon tested 4G LTE Broadcast (powered by Expway technology) in the stadium. In Korea, KT and Samsung just launched the first commercial service using LTE Broadcast, so we are now leaving the prototyping phase and entering the deployment one. Mentioning the limited cost of the mobile network upgrade for operators, an industry-wide consensus around the technology and its versatility regarding the nature of the distributed bits (audio/video and/or data – allowing various service propositions, like data pre-push or new-generation digital radio), Claude Seyrat was convinced that this time will be the right one for a definitive mobile broadcasting standard, and that the DVB-H failure will soon be forgotten. From a technical standpoint, Expway’s LTE Broadcast solution relies first on the DASH (ISO-BMFF) packaging format. Multicasting is enabled through FLUTE (File Delivery over Unidirectional Transport) with FEC (Forward Error Correction), while unicast is used for the control channel and for data retransfers if the multicast transfer failed. In the client stack we find a DASH client and the eMBMS middleware layer, which bridges the eMBMS modem with both the DASH client and the FEC client component.
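To give an intuition of the repair role FEC plays here, the simplest possible scheme is a single XOR parity packet computed over a block of source packets: the receiver can then rebuild any one lost packet without resorting to a unicast retransfer. This is a toy illustration of the principle, nothing like the actual AL-FEC codes used in eMBMS deployments.

```python
def xor_parity(packets):
    """Compute one parity packet over equally sized source packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover_missing(received, parity):
    """received maps index -> packet, with exactly one index missing
    from the original block; XORing everything restores the lost one."""
    missing = bytearray(parity)
    for p in received.values():
        for i, b in enumerate(p):
            missing[i] ^= b
    return bytes(missing)
```

The trade-off is visible even in this toy: one parity packet per block of N costs 1/N overhead on the unidirectional channel, but saves a round trip over unicast for every single-loss block.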
The overall architecture is quite complex compared to usual OTT head-ends, which might partly explain one of the acknowledged problems the technology still suffers from: latency is around 10 seconds as of now. As Moore’s law always needs new playgrounds, LTE Broadcast seems to offer an interesting field for performance improvements, but the overall value proposition is already sufficient to convince operators, as it solves major bottlenecks. So yes, multicast definitely seems to be a promising game changer for video delivery on both fixed and mobile networks…
A new era of peer-assisted delivery
Peer-to-peer is not a new idea either – almost a “has-been” one, as Gwendal Simon joked at the start of his presentation: the evolution of the volume of academic papers on the topic shows that the hype vanished in 2009. Many solutions exist (like Adobe Cirrus, BitTorrent Live (RIP), Swarmplayer or RayV, to mention only the most recent ones) but few of them have succeeded at wide scale. Most examples in this category come from China, with UUsee or PPLive, which handled 20 million simultaneous peers during the 2008 Olympics opening ceremony and was an offshoot of academic research. These systems mainly used streaming technologies like RealVideo and Windows Media, and always needed standalone software to be installed on the end-user computer. These specifics are today’s challenges to overcome, as streaming standards have migrated to ABR and the general tendency is to do more and more natively in the browser, without additional plugins – which are on the verge of disappearing given the rise of interoperable HTML5 and its evolution towards advanced video streaming features.
With historical actors of peer-assisted delivery like Akamai (with its NetSession client software) showing interest in WebRTC, there’s no doubt that the client-less nature of this new generation of protocols is a definitive advantage over previous solutions requiring heavy client installation. Still, limits exist, such as problematic compatibility with mobile devices, due to the CPU-intensive processing generated by P2P exchanges. This might also be a problem in the future for deployments on low-end connected TVs. Despite this limitation and the declining market share of desktop PCs, peer assistance remains a valuable option, all the more so as WebRTC interoperability is definitely moving forward with strong support from Google.
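In a WebRTC-style peer-assisted player, the core loop can stay simple: ask known peers for a segment, and fall back to the CDN whenever no peer can serve it in time. A minimal sketch of that fallback pattern, with made-up callables standing in for data channels and HTTP requests:

```python
from typing import Callable, Iterable, Optional, Tuple

# A peer fetcher returns the segment bytes, or None on miss/timeout
PeerFetch = Callable[[str], Optional[bytes]]

def fetch_segment(segment_url: str,
                  peers: Iterable[PeerFetch],
                  cdn_fetch: Callable[[str], bytes]) -> Tuple[bytes, str]:
    """Try each peer in turn; the CDN remains the guaranteed last resort."""
    for peer in peers:
        data = peer(segment_url)
        if data is not None:
            return data, "p2p"
    return cdn_fetch(segment_url), "cdn"
```

The design choice that makes such systems CDN-agnostic is right there: the CDN is just the fallback callable, so swapping providers never touches the peer-selection logic.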
Hybridization as a long-term strategy
For actors with a wide technology portfolio like BroadPeak or Akamai, it was clear during the conference that neither multicast nor peer-assisted delivery will be sufficient on a standalone basis to cover all the use cases and network situations, and that a combination of technologies and delivery modes is necessary. Nivedita Nouvel mentioned VOD content pre-positioning on top of multicast for BroadPeak, while Julien Privé mentioned for Akamai an ambitious transparent combination of VOD content pre-positioning and mobile-optimized delivery on top of multicast and peer-assisted delivery. Akamai demonstrated content pre-positioning with a Qualcomm STB reference design during CES 2014, and its new mobile solution, codenamed Astraeus, started to be demonstrated privately at IBC 2013. It’s a patent-pending combination of UDP low-latency delivery with FEC-guaranteed QoS and advanced congestion control algorithms, which has shown on mobile devices a 50% quicker video startup time, 90% less frequent rebuffering and a 2x to 5x increase in the bitrate served to the device. Coupled with its wish to disseminate its multi-protocol delivery client technology across the widest range of CE devices and ISP gateways, Akamai is clearly showing that it’s time to extend the limits of the CDN to a dynamic, wider mesh network.
Taking server-to-client TCP unicast delivery as the starting point for all their solutions, the presenters at the OVFSquad conference showed that there is a big margin for imagination on top of it – either by reusing proven technologies or by shaping new, innovative accelerated delivery protocols – and that the variety of network and device situations simply requires such a complex assembly. 4K is coming, and we know it’s going to be a real challenge from the delivery point of view; fortunately, this conference has shown that there is hope for the sustainability of the delivery model, with existing and upcoming smart solutions.
On the same theme, in French, with videos of the presentations, at OVFSquad: