This year's EBU BroadThinking Conference sounded like a holistic swirl, a milestone in the trend of technologies combining into sets greater than the sum of their parts, through creative evolution. « Where Broadcast Meets BroadBand », an interesting fusion effect occurs and dilutes the traditional boundaries between screens: handheld devices become part of the big-screen experience, or extend it rather than trying to cannibalize it, in an environment where all devices converge towards a restricted set of standards rather than each tracing its own line.
While we tend to assume that standardization kills creativity, events like BroadThinking show the opposite: by gathering energies to solve common problems together, we can both come up with more evolved solutions and concentrate on what matters beyond the pixel grid – the user experience, so consistent across screens that you forget there is more than one screen involved.
Augmented TV: one focus for all eyeballs?
« The user IS the screen » could have been the conclusion of Dale Herigstad's presentation, as his journey from 2D print to Augmented TV in stereoscopic 3D was such an inspired manifesto for creating new visualization experiences that blend video and metadata while relying on user intuition. You probably remember those great scenes from Minority Report where the hero manipulates layers of multimedia content with powerful gloves. Well, Herigstad was one of the people behind that exciting cinema creation, and it obviously left traces. Now leading the SeeSpace startup and working to bring the inAiR Augmented TV device to life, Herigstad brought a TED wind over BroadThinking.
What makes the inAiR special, a little UFO-shaped HDMI dongle (still in the works), is the way interaction is generated once the program is recognized and the relevant companion content is grabbed from the internet: the video program is scaled so that the display zone can host side layers of internet content without disturbing the original viewing experience. When used in 3D mode, the data layers are positioned in front on the depth axis and behave like holograms floating in the air – you can manipulate them using your smartphone, which acts as a magical touchpad, and this natural interaction makes the whole setup an interesting user experience. It's not the traditional second-screen experience, where the companion data lives only on your tablet and makes your eyes travel from one screen to another; it's a unified approach where your eyes can always focus on the big screen while your hands act as a self-guided remote control. Somewhat disturbing given the usual definition of TV, this approach of making the TV a host screen for metadata will take on new interest when we switch to 4K. Even at today's HD resolution, inAiR's video/metadata blending provides a unique outlook on the Minority Report phantasms becoming reality: the scene is a unique assembly resulting from the viewer's choices, an extension of his sensory universe.
HTML5: one OS for connected devices?
One thing is sure about HTML5 now: it has finally reached a level of API completeness that allows it to compete decently with the lower-level technologies used to craft apps on various devices. That was the track developed by Frode Hernes from Opera Software, who talked about Web technologies for Interactive TV. For him, even if we see various underlying OSes running on connected TVs, most of them support HTML5 – and that makes HTML5 the de facto Smart TV OS. And TVs' resources are now sufficient for web apps to leverage advanced HTML5 features. From a rendering-engine point of view, Hernes sees a majority of the TV industry moving from WebKit and Presto to the Chromium/Blink engine over the 2014/2015 period, which will provide an additional level of browser unification. From a pure video perspective, it's noticeable that the 2014 TV generations support Media Source Extensions and Encrypted Media Extensions, so there is no more restraint on a unified deployment of premium live/VOD streaming through MPEG-DASH with Common Encryption and multiple DRMs. Today, Hernes sees DIAL (created by Netflix and YouTube) as the unifying protocol between companion-screen apps and TVs, but we'll see later on that there are ongoing initiatives aiming to standardize the whole second-screen field and thus challenge DIAL's supremacy.
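To give an idea of how lightweight DIAL is: a companion app finds DIAL-capable TVs via plain SSDP, broadcasting an M-SEARCH datagram with the DIAL search target. The sketch below only builds that datagram (no socket traffic), so it stays runnable anywhere; the search-target URN and SSDP address are the ones registered for DIAL.

```python
# Sketch of the SSDP M-SEARCH datagram a companion app would broadcast
# to discover DIAL-capable TVs on the local network. Message-building
# only, no actual network send.

DIAL_SEARCH_TARGET = "urn:dial-multiscreen-org:service:dial:1"
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_dial_msearch(mx_seconds: int = 3) -> bytes:
    """Assemble the SSDP M-SEARCH request used for DIAL device discovery."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        f"MX: {mx_seconds}",           # max response delay, in seconds
        f"ST: {DIAL_SEARCH_TARGET}",   # search target: the DIAL service
        "", "",
    ]
    return "\r\n".join(lines).encode("utf-8")

print(build_dial_msearch().decode())
```

A real client would send this over UDP multicast and then fetch the device description URL returned in each TV's response – that second step is where DIAL diverges from plain UPnP.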
DASH/CENC/CFF: one gateway for premium OTT video?
Dr. Stefan Arbanowski from the Fraunhofer Institute developed a similar line of thought, with the Web browser as the common app platform across devices. According to him, Common Encryption coupled with DASH and the Common File Format creates the conditions for convergence towards a single format for multiple media/device profiles, multiple delivery systems and multiple DRMs. Having this built on industry standards gives unprecedented strength to the combination, confirming MSE and EME as the standard way to stream protected content across platforms. So far, Fraunhofer has been implementing the Microsoft CDMi specification in its FAMIUM framework, but there is still some way to go to reach interoperability in the Content Decryption Modules area – and Fraunhofer proposes two directions here: “building a universal open source CDM that can be baked into all browsers, and making CDMi a platform feature that can be accessed by various browsers (thus standardizing discovery and CDM-CDMi communication)”.
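The multi-DRM trick in Common Encryption is carried by the 'pssh' box of ISO/IEC 23001-7: one encrypted file can hold several of these boxes, each tagged with the 16-byte SystemID of a DRM. A minimal sketch of writing and reading that box, using the well-known registered SystemIDs for Widevine and PlayReady:

```python
# Minimal sketch of reading the DRM SystemID out of a CENC 'pssh' box
# (ISO/IEC 23001-7) – the mechanism that lets one Common-Encrypted file
# carry signalling for several DRMs side by side.
import struct
import uuid

# Well-known DRM system IDs registered for CENC
WIDEVINE = uuid.UUID("edef8ba9-79d6-4ace-a3c8-27dcd51d21ed")
PLAYREADY = uuid.UUID("9a04f079-9840-4286-ab92-e65be0885f95")

def build_pssh_v0(system_id: uuid.UUID, data: bytes) -> bytes:
    """Assemble a version-0 'pssh' box around opaque DRM init data."""
    body = (struct.pack(">B3x", 0)          # version=0, flags=0
            + system_id.bytes               # 16-byte SystemID
            + struct.pack(">I", len(data))  # DataSize
            + data)
    return struct.pack(">I4s", 8 + len(body), b"pssh") + body

def pssh_system_id(box: bytes) -> uuid.UUID:
    """Extract the 16-byte SystemID from a 'pssh' box."""
    size, kind = struct.unpack(">I4s", box[:8])
    assert kind == b"pssh", "not a pssh box"
    # byte 8 = version, bytes 9-11 = flags, bytes 12-27 = SystemID
    return uuid.UUID(bytes=box[12:28])

box = build_pssh_v0(WIDEVINE, b"opaque-init-data")
print(pssh_system_id(box))  # edef8ba9-79d6-4ace-a3c8-27dcd51d21ed
```

A player walks these boxes, picks the SystemID its CDM supports, and hands the opaque init data to EME – which is exactly where the CDM interoperability gap described above begins.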
This is the final step where the industry needs to make an effort, as all browser vendors are pretty much doing things on their own as of now. We need multi-DRM workflows to become a commodity: work is under way at the DASH Industry Forum on the backend aspects, but these client-side hurdles also need a converging solution, so that whenever a browser encounters content whose DRM it doesn't support, it can transparently trigger the download/install of the corresponding CDM (standardized to work across platforms). While it's likely that Firefox and Opera would accept implementing such interoperability, the point remains a big question mark for Internet Explorer, Chrome and Safari: if they don't open up, it means that you will have, as a content provider, to use the PlayReady/Widevine/FairPlay DRM combination to cover those browsers (not even talking about the other DRMs you would need to cover specific devices). That's a serious drawback of the move towards native streaming, as previous workflows based on browser plugins generally allowed using one DRM across more platforms. It will be interesting to see if vertical business interests can be overcome here.
EBU-TT-D: one subtitling format across the food chain?
Frans de Jong from the EBU presented an update on the EBU-TT subtitling format. The EBU-TT initiative aims at reducing the fragmentation of the subtitling landscape, with its many broadcast and broadband formats – and often non-interoperable TTML profiles on the broadband side. The EBU has produced three major specifications for EBU-TT: the core specification in July 2012, the EBU STL to EBU-TT mapping specification in June 2013, and the EBU-TT-D (subtitling distribution format) specification in January 2014 (with a complementary ISO BMFF mapping document). The target is to use EBU-TT end-to-end – authoring/contribution, playout, archive, distribution – with a role of pivot format when it comes to conversion to broadcast subtitling formats like the DVB or Teletext ones. One of the major differences with broadcast formats is that time is expressed in hours/minutes/seconds with decimal fractions, not frames. And EBU-TT-D documents can be chunked, which makes the format a serious candidate for broadband use cases. What's the adoption trend for EBU-TT-D? It will be required in HbbTV 2.0, has been required in the DVB DASH Profile, and was introduced in the DASH Industry Forum's DASH-AVC/264 interoperability point (through the support of the SMPTE-TT subset only, though). Apart from this adoption trend in the DASH output world, broadcasters like the BBC and ARD are working on the use of EBU-TT across the whole production chain, to get rid of the old STL format. One of the last missing pieces of specification is the one covering live contribution, but work is under way at the EBU and on track for a 2014 release. We might even see EBU-TT-D used for UHDTV broadcast distribution if things go well. Overall, the EBU-TT family of specifications provides a good basis for a more unified approach to subtitling across heterogeneous distribution environments, and we now wait for it to spread through the industry's software solutions to confirm this hope.
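That timebase difference is concrete when converting legacy cues: STL timecodes count frames (HH:MM:SS:FF), while EBU-TT-D media times use decimal fractions of seconds. A tiny sketch of the conversion, assuming a 25 fps source (the helper name is mine):

```python
# Sketch of the timebase shift EBU-TT-D implies: broadcast STL cues are
# frame-based (HH:MM:SS:FF), EBU-TT-D expresses times as
# hours/minutes/seconds with decimal fractions. Assumes 25 fps.

def stl_to_ebu_tt_d(timecode: str, fps: int = 25) -> str:
    """Convert an STL-style frame timecode to an EBU-TT-D media time."""
    hh, mm, ss, ff = (int(p) for p in timecode.split(":"))
    millis = round(ff * 1000 / fps)  # frames -> milliseconds
    return f"{hh:02d}:{mm:02d}:{ss:02d}.{millis:03d}"

print(stl_to_ebu_tt_d("10:02:33:12"))  # 10:02:33.480
```

Dropping frames from the time model is precisely what makes the same document usable across broadcast and broadband players with different frame rates.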
HbbTV 2.0: finally winning over proprietary environments?
Jon Piesing, Chairman of the HbbTV Specification Working Group, presented a summary of the new features introduced in version 2 of the HbbTV specification (due for release later this year), mainly HTML5, HEVC video (where supported by the hardware) and EBU-TT-D support, all of which are welcome to overcome the limitations of the CE-HTML-based previous versions. What's more innovative is the work that has been done on companion-screen integration (based on UPnP for the discovery): there's a module for installing/launching a companion-screen app from the HbbTV app, one for remotely launching the HbbTV app, and one for app-to-app communication through a WebSocket server running on the TV. On top of these modules, HbbTV 2.0 introduces a new range of cross-app media synchronization features, based on the protocols crafted in the DVB-TM-CSS group – work that was also presented at the conference by Kevin Murray from Cisco. Current prototypes based on the TM-CSS protocols don't yet show sufficient accuracy for lip sync, but this should definitely improve in the near future. What about implementation? The HbbTV 2.0 specification is announced to be ready for market implementation in 2015. Our experience with the 1.5 version showed a two-year time-to-market after the specification was finalized. Nothing indicates that this version will be deployed faster by CE manufacturers, all the more so as there is no plan for firmware upgrades of the currently deployed 1.5 TV sets. Still, Italy has already announced the integration of version 2.0 in its HD-Book v4.0 – replacing MHP as the reference middleware. Maybe a good indicator of HbbTV finally reaching a strong technical maturity…
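The core idea behind the TM-CSS synchronization protocols is simple to sketch: the TV shares a correlation – one content-timeline timestamp paired with one wall-clock timestamp, plus the timeline tick rate – and the companion device projects any content position onto the shared wall clock. This is a rough, simplified model of that mapping (function and parameter names are mine, not from the spec):

```python
# Rough sketch of the timeline-to-wall-clock mapping underlying the
# DVB-TM-CSS sync protocols that HbbTV 2.0 builds on. Simplified:
# real correlations also carry dispersion/error bounds.

def to_wall_clock(content_ticks, corr_content_ticks, corr_wall_ns,
                  tick_rate_hz, speed=1.0):
    """Project a content-timeline position onto wall-clock nanoseconds."""
    delta_s = (content_ticks - corr_content_ticks) / (tick_rate_hz * speed)
    return corr_wall_ns + int(delta_s * 1e9)

# A 90 kHz PTS timeline: 90000 ticks past the correlation point
# lands exactly one second later on the shared wall clock.
print(to_wall_clock(180000, 90000, 1_000_000_000, 90000))  # 2000000000
```

Lip-sync accuracy then hinges on how precisely both devices agree on that wall clock, which is why the current prototypes mentioned above still fall short.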
HEVC: towards one codec only?
DASH: one transport format to rule them all?
Thomas Stockhammer from Qualcomm and the DASH Industry Forum delivered a dense presentation on the state of MPEG-DASH, which has already been massively deployed by the biggest actors like Netflix, YouTube and Hulu, but still needs a definitive booster for deployments to multiply in the broadcasters' world. That's what the DASH Industry Forum is working on (more details on DASH-IF's numerous interoperability initiatives are available in my article included in the Streaming Media DASH SuperGuide), alongside the 13 other stakeholder organizations involved in DASH's standardization (3GPP, ATSC, DLNA, DTG, DVB, EBU, HbbTV, IETF, IMTC, MPEG, SCTE, UltraViolet, W3C). The most important broadcaster-oriented evolution anticipated was the ratification of the DVB DASH Profile. It actually happened on July 3, and brings to DVB an interoperability point very close to DASH-AVC/264, DASH-HEVC/265 and the DASH recommendations included in the HbbTV 1.5 standard. This DVB DASH Profile has been adopted as the baseline for delivery of video content over IP in HbbTV 2.0, which is due for ratification in Q1 2015. ATSC is also considering DASH for the next version of its specifications, so the reach of DASH in the broadcast world doesn't seem to slow down – at least outside of the delivery chain for live TV on the big screen, which will probably stay in the TS world for many years to come. For the mobile use cases, the 3GPP consortium has specified DASH as the only eMBMS video transport means for both broadcast and unicast network situations, with a first LTE Broadcast deployment done by Korea Telecom in January 2014 and many large-scale trials going on around the world. At the same time, DASH players have multiplied and now reach many different platforms (browsers, mobile devices, STBs, Smart TVs…).
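Part of why players have multiplied so easily is that a DASH presentation is just an XML manifest (the MPD). Reading the ABR ladder out of one takes a few lines of standard-library code; the namespace and attribute names below follow MPEG-DASH (ISO/IEC 23009-1), while the manifest itself is a made-up example:

```python
# Minimal sketch of reading the ABR ladder out of a DASH MPD.
import xml.etree.ElementTree as ET

MPD = """<?xml version="1.0"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <Representation id="720p" bandwidth="3000000" width="1280" height="720"/>
      <Representation id="1080p" bandwidth="6000000" width="1920" height="1080"/>
    </AdaptationSet>
  </Period>
</MPD>"""

NS = {"dash": "urn:mpeg:dash:schema:mpd:2011"}
root = ET.fromstring(MPD)
# id -> declared bandwidth in bits per second, one entry per Representation
ladder = {rep.get("id"): int(rep.get("bandwidth"))
          for rep in root.findall(".//dash:Representation", NS)}
print(ladder)  # {'720p': 3000000, '1080p': 6000000}
```

That the whole client contract fits in a self-describing XML document is a big part of DASH's portability across browsers, STBs and TVs.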
Of course, not all challenges are yet overcome by DASH, especially in the live field, where most robustness efforts are now concentrated: service continuity, CDN consistency, low end-to-end latency, streamlined ad insertion and conditional access with key rotation. But all indicators show that in the now widened OTT world, the open DASH standard makes more sense to deploy than any other ABR technology – given its new reach into broadcast ecosystems and its intrinsic strengths.
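The live-latency point is easy to make concrete: with a fixed-duration segment template, a DASH client derives which segment is available at any wall-clock instant from the MPD's availabilityStartTime, so latency is bounded below by the segment duration. A back-of-the-envelope sketch (values and helper name are illustrative):

```python
# Sketch of DASH live timing: with fixed-duration segments, the client
# computes the latest available segment number from availabilityStartTime.
from datetime import datetime, timedelta

def live_edge_segment(availability_start: datetime, now: datetime,
                      segment_duration_s: float, start_number: int = 1) -> int:
    """Latest segment number fully available at wall-clock time `now`."""
    elapsed = (now - availability_start).total_seconds()
    # a segment only becomes available once it has been fully produced
    return start_number + int(elapsed // segment_duration_s) - 1

ast = datetime(2014, 7, 1, 12, 0, 0)
now = ast + timedelta(seconds=25)
print(live_edge_segment(ast, now, 4.0))  # 6
```

With 4-second segments the player is always at least one segment behind the encoder, which is why shorter segments (and later, chunked delivery) are central to the low-latency work mentioned above.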
Cloud processing: unifying workflows?
Marina Kalkanis and Henry Webster presented BBC Future Media's efforts to move the BBC iPlayer video content production workflows into the cloud, through the development of the « Video Factory ». This scalable platform covers the ingest, transcoding and delivery of iPlayer content – and is designed from the ground up to leverage the advantages of the Cloud and to apply the same level of resiliency as broadcast. It's a modern service-oriented, message-driven architecture (built with the Apache Camel framework and Java applications) which orchestrates 50 components, each with a clear behavior contract to honor. On the ingest side, an RTP Chunker component sends the 24 live TS streams to Amazon S3, and the corresponding Chunk Concatenator component reconstructs the mezzanine streams on S3. Past the ingest phase, the Video Factory drives multiple transcoding engines (Amazon Elastic Transcoder, Elemental Cloud or FFmpeg depending on the use case) plus a bunch of complementary QC/clipping and MAM activities. On the distribution side, Unified Streaming solutions are in charge of dynamically packaging and protecting live/on-demand/download streams on the origin platform exposed to the CDNs. While the BBC is satisfied with the flexibility and reliability of the solution, Kalkanis and Webster also pointed out that the procurement approach of cloud services is not as flexible as they would need, and that the service and cost models are rather immature. As often, the BBC is here leading the way in the broadcasters' world, showing that, through an auto-scaling architecture, the public Cloud can be leveraged to ensure flawless user experience and workflow management.
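To make the chunker/concatenator pairing tangible, here is a toy sketch of the reassembly side: time-ordered TS chunks land in an object store keyed by channel and sequence number, and the mezzanine asset is rebuilt by concatenating them in order. The store is a plain dict standing in for S3, and the key layout and names are invented for illustration, not the BBC's actual scheme:

```python
# Toy sketch of the Chunk Concatenator idea: rebuild a mezzanine stream
# from consecutive uploaded TS chunks, failing loudly on any gap.

def concatenate_chunks(store: dict, channel: str, first: int, last: int) -> bytes:
    """Rebuild a mezzanine stream from consecutive uploaded chunks."""
    parts = []
    for seq in range(first, last + 1):
        key = f"{channel}/chunk-{seq:06d}.ts"
        if key not in store:
            raise KeyError(f"missing chunk: {key}")  # gap => broken mezzanine
        parts.append(store[key])
    return b"".join(parts)

# simulated uploads: three 4-byte chunks for one channel
store = {f"bbc-one/chunk-{i:06d}.ts": bytes([i]) * 4 for i in range(3)}
print(concatenate_chunks(store, "bbc-one", 0, 2))
```

The message-driven part of the real architecture sits around such components: each one consumes an event ("chunk N uploaded"), honors its contract, and emits the next event for downstream transcoding.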
Moving to the next stage – broadcast playout from the Cloud, as vendors have been suggesting recently while virtualizing solutions previously bound to specific hardware – might be another story in terms of provisioning, as the SLAs will most probably require private clouds, but the move towards generalized cloud processing is definitely under way in the broadcast world.
CDN: definitely no unique answer
Will Law from Akamai presented on New Media Distribution Technologies for CDNs, and this was probably the area with the least consensus on how to deal with the evolution of consumer habits and video devices. Actually, with the explosion of users, daily video consumption and bitrates/image sizes towards Ultra HD, the near future promises a 300-fold explosion of the necessary bandwidth at CDN level, and requires a sustained end-user connection speed of 10 to 15 Mbps for decent UHD-1 quality in H.265. That's where Law came not with one but with eight complementary technologies that CDNs can leverage to meet the challenge. Of course HEVC, the most obvious way to cut transport costs by 30% (live) to 50% (VOD) at equivalent quality, bring 720p down to 2 Mbps where 4G networks can play it smoothly, or simply make UHD feasible. With storage density increasing faster than processor frequency, putting drives in consumer equipment like home routers is a good way to get content closer to users, provided the transport of the content is asynchronous to its consumption. For live events, the IP multicast successfully used for IPTV can be translated to OTT distribution with the appropriate CDN architecture, to circumvent the lack of end-to-end multicast routing across the Internet and leverage the existing native multicast & AMT infrastructure inside operators' managed networks. Of course DASH was also on the list of improving technologies, with a positive effect on video playback performance and QoS. At the network's low level, the FastTCP congestion-avoidance algorithm that Akamai integrated in July 2013 generated a significant gain in throughput, especially for mid-range links in Europe, which see a 79% boost from 6.3 to 11.3 Mbps – while the effect was less important in the US (between 15 and 18%).
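A quick sanity check of the figures quoted above, plus an illustration (my own, not from the talk) of what a 50% HEVC saving does to a hypothetical 4 Mbps AVC 720p stream:

```python
# Checking the quoted throughput/bitrate figures.

def boosted(mbps: float, gain_pct: float) -> float:
    """Link throughput after a percentage gain, rounded to 0.1 Mbps."""
    return round(mbps * (1 + gain_pct / 100), 1)

print(boosted(6.3, 79))  # 11.3 -> matches the quoted European FastTCP gain

# Illustrative: HEVC halving a VOD bitrate takes a hypothetical
# 4 Mbps AVC 720p stream down to the ~2 Mbps cited for smooth 4G playback.
print(round(4.0 * (1 - 0.50), 1))  # 2.0
```

Each technology on Law's list attacks a different factor of that 300-fold projection: codec efficiency shrinks the numerator, while edge storage, multicast and peer assistance spread the remaining load.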
For distribution over 3GPP networks, eMBMS is boosted by Qualcomm Snapdragon processor support and operators' successful field trials, with an intelligent switch/combination of unicast and broadcast/multicast depending on the network situation. UDP was also listed as a potential help factor, as it maximizes throughput compared to HTTP over TCP – and Akamai is working on an HTTP/UDP hybrid protocol combining advanced congestion control and FEC, while staying compatible with existing ABR technologies and workflows. Finally, peer-assisted delivery technologies such as the client-less, HTML5/WebRTC-based one from StreamRoot provide efficient ways to optimize the streaming of live channels and popular VOD content. All these technologies combined are shaping the future of a CDN like Akamai, which gets closer to the client side and extends the perimeter of the CDN to a maximum of connected devices (routers, TVs, mobile devices, game consoles…). More details on these new CDN approaches can be found in a previous post on this blog – definitely a fascinating topic considering the challenges of Ultra HD delivery.
The 2014 edition of the BroadThinking Conference was once again a major rendezvous for all actors of the screen-hybridization movement, both generating concrete hopes of technology convergence and, at the same time, showing how business interests can limit it. Compared to the 2013 edition, the maturity increment is obvious for many technologies like HEVC, DASH and HbbTV, and we could all feel how strong the DASH-IF/EBU/HbbTV/DVB axis is in promoting cross-referenced standards, with Europe as the pathfinder for the other continents. This European dynamic is well embodied by the EBU, which is developing useful open source platforms like the Reference Test Engine for Encoding or the Open Source Cloud Infrastructure for Encoding to Distribution (OSCIED), and works to unify subtitling practices across the broadcast and OTT industries. All these collective standardization efforts, combined with the industry's increased awareness of how beneficial interoperable standards are, definitely show us the path towards a unified way of handling the video explosion from a technical standpoint.
Now I guess you understand better what I meant by
« Forget Multiscreen, We Are Heading Towards ONE Screen »
BROADTHINKING 2014 ELTROVEMO AWARDS
Top Beard: Sean O'Halpin (BBC)
Top Humor: David Price (Ericsson)
Top Presenter: Will Law (Akamai)
Top Architecture: Marina Kalkanis and Henry Webster (BBC)
Top Wow Effect: Dale Herigstad (SeeSpace)
Top Density: Thomas Stockhammer (Qualcomm)
EXPLORATORY LINKS (to prepare the 2015 Edition)
HBB-Next explores the paths of tomorrow's connected TV
Enabling Ultra-HbbTV, A vision on future converged media
Media Synchronization Workshop (MediaSync) 2013 presentations
WebVTT versus TTML: XML considered harmful for web captions?
HbbTV Symposium Europe 2014: October 8/9 in Paris!