Broadcasters are looking to virtualize because they don’t want to build a specific, dedicated architecture for a process; they want a process-agnostic platform that can be readily adapted to whatever format or signal they need.
BY PETER WHARTON
The Indian market is fast acknowledging cloud technology as a viable means of hosting broadcast workflows. Cloud-based themes were woven throughout both the Broadcast India and BES shows last year, and a recent study indicated that 15–20% of broadcast and post-production businesses in India are beginning to adopt a cloud infrastructure.
The cloud certainly brings significant advantages to TV content creation, storage and post-production, both technically and economically: zero spend on broadcast-specific hardware; leveraging of existing commodity-based hardware; vastly reduced cabling and physical storage; virtually no cost for electricity or cooling; no expenditure on datacentre maintenance and security; reduced operator training costs thanks to universal software and UIs; rapid channel deployment and upgrades; and instant access to, and easier and faster transfer of, media content.
Along with the rest of the world, the broadcasters of India are now poised to explore cloud playout, which promises even greater economic freedom and flexibility for linear TV channels. But, at the moment, moving full-time playout to the cloud is still a bridge too far for most broadcasters.
In fact, current distribution is still primarily via satellite; and when we hear about playout from the cloud today, in the vast majority of cases broadcasters and playout solutions providers aren't really talking about true playout from the cloud, but simply about moving some non-realtime operations into the cloud. Here are some of the issues we need to consider when it comes to true cloud playout:
BROADCASTERS NO LONGER WANT TO BE IN THE INFRASTRUCTURE BUILDING BUSINESS Broadcasters are looking to virtualize because they don't want to build a specific, dedicated architecture for a process; they want a process-agnostic platform that can be readily adapted to whatever format or signal they need to send out to the viewer. This means their operations need to run on software rather than hardware – software that is not tied to a particular machine, that takes advantage of the elasticity of a virtualized environment, and that frees up resources for other tasks when it no longer needs them.
The logical extension of virtualization – which is really a private datacentre or private cloud – is the ability to also run in someone else's cloud. Being virtualized means being able to expand beyond your own datacentre into an outsourced datacentre or cloud as and when needed – perhaps only during periods of high demand or special events. Some major broadcasters are looking to outsource their entire infrastructure operation to major cloud operators like Amazon Web Services and Microsoft Azure. They don't want to be in the infrastructure-building business anymore; they want to leave that to the Amazons and Microsofts of this world and instead focus on where they really differentiate themselves from the competition – content creation and curation. And who can blame them, considering the overwhelming sense of uncertainty the industry is currently facing.
UNCERTAINTY IS FUELLING VIRTUALIZATION The more the shift towards OTT and other non-linear viewing continues, and the more non-traditional players dominate those markets, the more uneasy linear TV broadcasters have become.
Uncertainty over the direction of the industry and its technology has interfered with broadcasters' ability to plan ahead. It's difficult to predict where they'll be in six months or a year, and what new requirements they'll be expected to meet. This is the opposite of previous broadcast industry milestones: the migrations from analogue to digital, from SD to HD, and from baseband to file-based workflows.
Given the turmoil and uncertainty, broadcasters now face an overwhelming number of questions that must be considered when upgrading or building a facility today: Will I need to migrate to UHDTV/4K? Will satellite and cable distributors carry the UHD version of my signal? Can consumers even tell the difference between UHD and my HD signal up-converted by their new 4K TV? How should I build a UHDTV/4K facility? Using SDI as quad-3G or 12G? Over coax or fibre? Or over IP uncompressed using 2022-6, Aspen or TR-03? Or compressed over IP using Tico, LVCC, etc.? Will I need to launch new channels? Should I consider temporary OTT channels? Will my channel requirements change so that I have to reconfigure and modify my channels and workflows? Will new transmission platforms force me to redesign my playout system? It’s all this uncertainty that is fuelling virtualization.
THE CLOUD ISN'T QUITE READY FOR REALTIME PLAYOUT Being fully virtualized sounds wonderful, and many claims have been made by playout solutions providers that they can already move playout to the cloud. In reality, almost every 'cloud playout' system we've seen to date puts only the non-realtime components in the cloud and still uses edge players to take in and switch the live feed; it still utilizes a media cache for content pushed from the cloud, and it relies on a GPU engine for graphics insertion and SDI outputs.
Recently a major US network and its playout solutions provider claimed they had moved the network's playout operations to the cloud, when all they really did was move the main content stores a few states away and tie the facilities together with multiple dedicated 10G private fibre connections so that the entire corporate enterprise could share the same storage architecture. But the automation, cache and playout still occur in the same building they did 30 years ago.
CLOUD PLAYOUT WOULDN'T SEE MUCH IN THE WAY OF CPU SAVINGS IF VIRTUALIZED Typically, only the non-realtime components of a playout system are being virtualized at present – media management, storage, archiving, traffic, scheduling, and logging. All of these are prime candidates for virtualization because they are 'bursty', intermittent operations – actions that use a lot of CPU cycles for a brief period of time and then go dormant until they're needed again:
You load a playlist at 8PM for the next day, and for the next 20 minutes the system is racing along, locating and retrieving content from media stores and archives and caching all of that media onto the playout server cache. It then waits patiently and quietly for the next 23.5 hours until the next playlist is loaded.
Why not use those spare CPU cycles to back up the email servers, transcode files for next week or migrate the archives? Or better yet, rent those CPU cycles from Amazon and pay for only 30 minutes a day instead of all 1,440? These virtualized and cloud implementations also provide benefits beyond cost savings: they're accessible anywhere, making workflows more collaborative and the workforce more distributed.
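The economics of that argument can be sketched in a few lines. This is a rough back-of-the-envelope comparison, not a quote: the hourly rate and the 30-minute daily caching window are hypothetical placeholders, and real cloud pricing adds storage, egress and minimum-billing-increment costs on top.

```python
# Rough cost sketch (hypothetical rates) comparing an always-on server
# with renting compute only for the nightly playlist-caching burst.

HOURS_PER_DAY = 24
MINUTES_PER_DAY = 24 * 60  # 1,440
DAYS_PER_MONTH = 30

def monthly_cost_dedicated(hourly_rate: float) -> float:
    """A dedicated machine is billed around the clock, busy or idle."""
    return hourly_rate * HOURS_PER_DAY * DAYS_PER_MONTH

def monthly_cost_on_demand(hourly_rate: float, busy_minutes_per_day: float) -> float:
    """On-demand compute is billed only for the bursty caching window."""
    return hourly_rate * (busy_minutes_per_day / 60) * DAYS_PER_MONTH

rate = 0.50  # hypothetical $/hour for a comparable instance
dedicated = monthly_cost_dedicated(rate)      # billed for all 1,440 min/day
on_demand = monthly_cost_on_demand(rate, 30)  # billed for 30 min/day

print(f"dedicated: ${dedicated:.2f}/month, on-demand: ${on_demand:.2f}/month")
```

Even at identical hourly rates, paying for 30 busy minutes instead of 1,440 idle-plus-busy minutes cuts the compute bill by a factor of 48 – which is why the bursty, non-realtime components are the first to move.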
The actual playout engine is trickier. It performs operations nonstop – 24 hours a day, 7 days a week, 365 days a year – so it wouldn't see much in the way of CPU savings if virtualized. It usually requires powerful GPUs to provide the graphics broadcasters expect today, and it needs to handle live feeds and the switching of synchronous sources while being realtime and robust, never dropping a packet or a frame of video. When you think about it, there's really not much reason to move the main channel playout itself to the cloud.
Occasional-use playout applications, such as disaster recovery or temporary sports/festival event channels, will be the first adopters of true cloud playout; through their experience and maturation, they will pave the way for future primetime live channels to play out entirely from the cloud.
“ALMOST EVERY CLOUD PLAYOUT SYSTEM WE’VE SEEN TO DATE PUTS ONLY THE NON-REALTIME COMPONENTS IN THE CLOUD AND STILL USES EDGE PLAYERS TO TAKE IN AND SWITCH THE LIVE FEED.”
OBSERVING PROGRESS AT A DISTANCE Progress is being made with regard to virtualization; playout solution specialists such as BroadStream Solutions are working to develop the same resiliency and redundancy for cloud-based playout as the industry has always demanded from its broadcast cores – just as broadcast equipment manufacturers adapted telecommunications networks and satellites for broadcasting 60 years ago.
Today you can run BroadStream's core OASYS playout services in a virtualized environment and reap the benefits of a collaborative cloud-based solution. BroadStream's HTML5-based Multi-Channel Web Client lets users run any number of channels across multiple sites from multiple locations using a standard web browser. BroadStream customers will soon be able to monitor and control every possible playout intricacy of any channel, from anywhere in the world, from any device.
The Indian broadcast market has always been cautious when it comes to adopting new technologies, whether digital, HD, 3D, IP or, now, the cloud. In their usual smart way, Indian broadcasters will observe other markets making investments, testing boundaries, making mistakes and tackling all the challenges detailed above before embracing cloud playout and gaining all the advantages it has to offer.
It won't be long before full playout from the cloud becomes commonplace, but for the time being, technological concerns, uncertainty about standards and the questionable economics of realtime cloud playout have led to an understandable hesitation to get on board.
And just as we led the market with the first integrated playout solution, and the first with IP and hybrid SDI/IP I/O, you can count on BroadStream's OASYS to lead again in cloud-based playout.