‘Streaming cannot rely only on data rates’

With the market for live streaming at the threshold of rapid growth in India, Digital Studio asked Uday Sodhi, EVP and Head (Digital Business) at Sony Pictures Networks India, about the estimates for the market in 2017. Here’s what he had to say:

What are the impediments to the offtake of live streaming services?
Although live streaming services are on the cusp of growth in India, media consumption here still faces inherent challenges. These include slow internet speeds, limited monetisation opportunities, piracy, and friction in online payments.

Is there scope for higher viewer engagement?
Live streaming will continue to be in demand in 2017. People are demanding more and more live experiences for their favourite OTT content, especially sports. Studies suggest that viewers seek out this content on catch-up if they miss the action live. Sports live streaming grew in 2016, with UEFA Euro 2016 in France scoring massive viewership for SonyLIV. With so much to expect in the live-streaming space in the coming year, the trend is here to stay. The exciting line-up of sporting events, such as the UEFA Champions League, WWE, and Indian cricket series, will continue to drive viewer engagement for us.

Is the market for live streaming largely dependent on affordable mobile data rates?
India is undergoing disruption on an unprecedented scale. The market for live streaming cannot depend solely on affordable mobile data rates; it also hinges on factors such as smartphone penetration and improvements in internet speed and access. Along with rising connected-device ownership and time spent online, consumers’ media consumption habits are also shifting. Moreover, the next wave of growth in the Indian market is expected from non-metro and rural areas, where wireless mobile internet will play a pivotal role.

Is the demographic of the audience changing on the whole?

There is a predominant shift towards using the mobile phone as the primary and, often, the only access point to the internet. As the digital audience moves beyond the familiar user profiles, an opportunity will emerge to create target-group-specific content for the new audience. We have to make a push for mobile micropayments and wallets. Right now, subscription revenue is constrained by low credit card penetration and payment gateway failures.

Changing dynamics of Colour grading

Why colour grading is a critical part of post-production in the digital age | By Vanessa Haarhoff | Colour is a crucial way of conveying meaning beyond the screen in visual storytelling. Colour grading is therefore a fundamental tool in video post-production, used to perfect colour according to the type of production, whether a film, a documentary, or an advert, in a bid to evoke a desired response from the audience.

Leo Joseph, MD of the Dubai-based post-production facility Mile Studios, agrees that colour grading is a fine art, not only vital to induce a certain “feeling and ambiance” particular to a certain type of video genre, but to maintain a high standard of post-production video. “Colour grading carried out by colour grading specialists can either make or break a production,” he says.

Senior colourist and manager of film restoration services at the US-based independent media company, Olympusat Inc, Jim Wicks notes that if colour grading specialists don’t “listen” to the narrative or message of a certain video genre with their eyes before starting to colour grade, the results could be disastrous.

He explains that a client once assigned him to create a Technicolour look for an indie film. Although it was a lovely look, the film was shot with deep earth tones and dark lighting, so the graded Technicolour look did not fit. The colour palette of Star Wars, for example, does not fit the palette of Star Trek, or any other film for that matter, explains Wicks; “the look of a film is organic to that individual film.”

Joseph says that software advancements over the past couple of years have put a multitude of different video post-production colour grading packages on the market. Sandhar explains that the most popular colour grading software packages used by post-production professionals are Digital Vision’s Nucoda, DaVinci Resolve and FilmLight’s Baselight. Wicks says the software a colourist uses really depends on the type of production.

He notes that both DaVinci Resolve and Baselight have their unique advantages, but explains how he has utilised DaVinci Resolve to restore over 200 35mm classic Spanish-language films for broadcast and Blu-ray distribution in HD, 2K and 4K.

Colour grading specialist Azin Samar from the Dubai-based video and audio distributor MediaCast says the newly upgraded DaVinci Resolve 11, as a software-based solution, gives colourists an expanded and precise set of grading tools. “With the additional editing capabilities added, it is a complete delivery powerhouse for video post-production for independent filmmakers, all in a 4K environment, which is an important aspect when handling large files,” she says.

Joseph notes that despite the advancements in video post-production software, the transition of film from an analogue to a digital platform has created issues for dedicated colour grading practitioners: grading tools are now widely available in most video editing programmes and accessible to anyone, making the quality of colour in many final projects questionable. “Anyone can technically correct the colour through video editing software, but it is not often appealing to the market.” Wicks notes that although this is an issue, at the end of the day it is a colourist with a trained eye for colour who will make the difference to any project. “The colour grading software is the race car, but the colourist is the driver.”

Samar agrees with Joseph that the colour grading industry, as experienced in the Middle East, is challenged by a lack of grading expertise. Common challenges among young filmmakers include the scarcity of accessible resources for learning about colour grading, explains Samar. “Film schools spend time teaching editing to filmmakers but not so much, even not at all, about colour grading, so most colourists in the industry are self-trained,” she adds. Joseph explains that some of the most talented colour grading experts in the industry have over 30 years of experience. “Colour grading cannot just be learnt; it is a skill that takes years to perfect.”

Despite these challenges, Sandhar says that the changing nature of the technological landscape has benefitted the quality of colour grading in the video post-production workflow because it has allowed colour grading experts to work remotely from the “cloud”. The cloud connects a wide pool of creative talent in a post production environment in real-time.

Wicks emphasises that having the right people with the right chemistry is the key to creating great film, television, and web video. Finding and establishing that right relationship is absolutely crucial to the success of a film or television project, “so, when you do find it, you’ll do just about anything to keep the team together, even if it’s a virtual team”.

Wicks explains that although many of his clients may be located in Los Angeles, New York, or London, he does not need to be; “the cloud is like a virtual door to my colour suite, whereby my clients can virtually come in to use my services on their projects. They can upload their projects to cloud storage, where I can access and download the assets to work on them in my colour suite,” he explains. With the ability to grade remotely, his clients can watch his progress in real time from their own locations as he works through their projects from his Florida-based colour suite, he notes.


Joseph adds that although the cloud has added value to the video post-production workflow, it has at times affected the quality of some post-production work, because colour grading specialists are not grading from the original RAW files owing to their sheer size. Colour grading from RAW files means grading from native camera footage, he explains. Each camera generates a different file type: RED cameras produce R3D files, the Alexa records ProRes 4444, the 5D shoots H.264, and Sony cameras create MXF files, for example. Because these files are high resolution, they are large and unwieldy, requiring more time to upload, download, copy, render and play. As a result, most colour grading professionals convert the RAW files into lower-resolution formats for editing and then grade from those, which is far from ideal. “Each conversion will result in quality loss,” Joseph says.
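Joseph’s warning about generational quality loss can be sketched numerically: each lossy conversion re-quantises the signal, and the errors compound across generations. The following is a toy quantiser built for illustration only, not a model of any real video codec:

```python
# Toy illustration of generation loss: each lossy "conversion" rounds
# sample values to a coarser grid, as an intermediate format might.
# This sketches the principle only; real codecs are far more complex.

def quantise(samples, step):
    """Lossy step: round each sample to the nearest multiple of `step`."""
    return [round(s / step) * step for s in samples]

def max_error(original, processed):
    return max(abs(a - b) for a, b in zip(original, processed))

original = [i * 0.1 for i in range(100)]  # stand-in for pixel values

# Grading straight from the "RAW" samples: a single lossy pass.
one_pass = quantise(original, 0.3)

# RAW -> proxy -> edit -> grade: three lossy passes with differing grids.
multi = original
for step in (0.3, 0.25, 0.35):
    multi = quantise(multi, step)

print(round(max_error(original, one_pass), 3))  # error after one conversion
print(round(max_error(original, multi), 3))     # larger error after three
```

The single conversion stays within half a quantisation step of the source, while the chained conversions drift further from it, which is the effect Joseph describes.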

Quality post-production colour grading and editing relies on the use of 4K monitors as opposed to HD or 2K, explains Joseph. As video types continue to transition to very high resolution digital format and in some cases computer graphic imaging, colour grading professionals need to use high resolution technology which accurately displays detail and RAW colour in order to edit and enhance colour quality. Dan Mitre, creative director at Mile Studios, explains how the lack of colour quality and feeling is apparent within a video production when colour is graded on an uncalibrated monitor.

Samar says that the order of the “operation workflow” is the most important part of colour grading for achieving organisation, time management and efficiency. She explains that this order is often not followed properly because users want results quickly, which leads them to overlook or forget important details. “In colour grading, some of these errors cannot be fixed at later stages, which leaves the colourist with no other choice but to restart the project from scratch.” Additionally, to achieve the desired look, every colourist needs to be aware of what is known as a “colour grading strategy”. Ignoring that strategy will cost the colourist more time and money, she explains.

Wicks says that another common colour grading error comes from not collaborating. Colour grading is not a solo activity; there must be input from the DP, the director, and others on the creative team, whose ideas the colourist interprets and helps bring to reality. The focus in the art of colour grading is not on the talents of the individuals involved but on the team as a whole. Mitre agrees that a lack of team synergy and direction can result in a poor production.

Despite the challenges that the industry is facing in a fast moving technological environment, the role of colour grading is really emerging as an important arena to invest in and acquire awareness and knowledge about, explains Samar.

Operators Crack The Code

Decisions on how and where to manage bulk transcodes of assets have become critical for media enterprises, but software-defined video solutions and the cloud are not necessarily the answer.
By Adrian Pennington

On-premise, in cloud, or a mix? 
With nearly infinite combinations of consumer preferences, devices, formats and protocols and a flurry of new OTT and live-to-linear VOD services, operators require limitless flexibility and scalability to keep pace. Core to an operator’s ability to quickly capitalise on new business opportunities is to efficiently repurpose material from one platform format to another.

“Operators are looking for flexible solutions to be able to quickly adapt to any codec, resolution and packaging options that may be required for the end device,” underlines Mark Seneca, product development at Imagine Communications. “As such, any transcoder must address their core needs for density and video quality, but must also have the capability to support new evolving standards.”

The business necessity to do more work for less money makes automation a prerequisite. “Automation not only allows operators to do more with their existing staff, but also allows the system to be self-monitoring, self-adjusting and in some cases self-correcting,” says Paul Turner, VP of enterprise product management at Telestream. “This fundamentally enables them to offer services which are of importance to their business, while significantly reducing the costs of doing so. The revenue models for some of these services are starting to solidify, so customers also want to be sure that their transcoding systems are flexible enough to handle the ad insertion and recognition process that will become standard practice as these models mature.”
One of the fundamental decisions facing operators as they build out or upgrade their transcoding facility is where to actually place it. The decision swings between on-premise equipment, a private cloud on-premise, a public cloud, or a mix thereof. The key decision points are based on the amount of content and assets they want to manage and transcode, budget restrictions of CAPEX versus OPEX, availability of additional applications being able to take advantage of a cloud system, and the skill level of the operators. “Will delivery of an operator’s any-screen product be limited to in-network delivery within homes and businesses?” asks Chris Knowlton, VP & Streaming Industry Evangelist, Wowza Media Systems. “If so, this leans towards an on-premises transcoding solution. Conversely, a TV Everywhere product is likely using cloud-based delivery, and in some cases, it makes sense to have the transcoding also go to the cloud.”

Will the content be a predictable load, perhaps including every channel that the operator delivers? For a transcoding load that is likely to stay fixed for several years, an on-premises deployment is sometimes more cost-effective. If the load is variable, however, with perhaps a core set of channels and some special-event or seasonal offerings, then a hybrid model is worth considering, where cloud transcoding picks up the temporary additional capacity needed. Questions must also be asked of the operator’s financial model.
“Are they better served by investing in hardware up-front and depreciating it over time, or by tax-deductible pay-as-you-go operating expenses?” poses Knowlton. “While creative financing is available to turn most IT assets into operating expenses, for many businesses, transcoding in the cloud provides a simpler way to align expenses to revenue and maintain their cash flow targets.” Another consideration impacting location is whether the operator has capacity in their infrastructure to accommodate more on-premises gear. “If they prefer to run on-premises but don’t have the room, they can start in the cloud, and then either free up current rack space by consolidating other gear with higher-density upgrades or by building out additional capacity,” he says. “Either way, as the on-premises capacity becomes available for transcoding, they can start transitioning off the cloud.”

Each operator’s situation is different. Low-volume broadcast companies may want to move all transcoding functionality to the cloud so they can scale resources up and down as requirements fluctuate. For companies that consistently process vast amounts of video, the economics of a cloud-only solution are still challenging. For those companies, a hybrid workflow makes the most economic sense.
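The economics behind that hybrid recommendation can be sketched with a back-of-envelope model: owned hardware carries a capital outlay that is paid whether or not it is fully used, while cloud capacity is pure pay-as-you-go. Every figure below is an illustrative assumption, not vendor pricing:

```python
# Back-of-envelope sketch of the on-premises vs cloud transcoding
# trade-off. All figures are illustrative assumptions.

def on_prem_cost(hours, capex, opex_per_hour):
    """Owned hardware: the capital outlay is paid regardless of utilisation."""
    return capex + opex_per_hour * hours

def cloud_cost(hours, rate_per_hour):
    """Rented capacity: pure pay-as-you-go."""
    return rate_per_hour * hours

CAPEX = 50_000.0       # assumed up-front cost of transcoding hardware
OPEX_PER_HOUR = 1.0    # assumed power/cooling/staff cost per hour
CLOUD_RATE = 4.0       # assumed cloud transcoding rate per hour

steady_hours = 5 * 365 * 24  # a fixed channel line-up run for five years
burst_hours = 30 * 24        # a one-month seasonal event

# A predictable long-term load favours owning the hardware...
print(on_prem_cost(steady_hours, CAPEX, OPEX_PER_HOUR) <
      cloud_cost(steady_hours, CLOUD_RATE))    # True
# ...while a short burst favours renting the capacity.
print(on_prem_cost(burst_hours, CAPEX, OPEX_PER_HOUR) >
      cloud_cost(burst_hours, CLOUD_RATE))     # True
```

The hybrid model in the text simply applies each regime where it wins: owned gear for the steady base load, rented capacity for the bursts.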

 “A hybrid solution reduces barriers to entry and enables broadcasters to ebb and flow their video processing resources – enabling them to hedge a bet against new products and services without putting all their capex eggs in one basket” Keith Wymbs, CMO, Elemental

Elemental’s CMO, Keith Wymbs, explains that a hybrid is achieved by maintaining “just enough” on-premise infrastructure to fulfil day-to-day transcoding requirements “while leveraging cloud services for the elasticity to keep pace with variable demand.” This ground-to-cloud approach has the potential to save organisations significant capital expenditure, claims Wymbs, by instantly scaling up video processing capacity to accommodate high-traffic events and scaling back down again as traffic wanes, while avoiding additional hardware investments that aren’t consistently utilised.

“In addition to lowering barriers to entry and reducing up-front capital investments, broadcasters can ebb and flow their video processing resources – enabling them to hedge a bet against new products and services without putting all their capex eggs in one basket,” he says. “If those products and services are wildly successful, then broadcasters can rapidly scale up their processing resources. If they are not, broadcasters can simply turn the processing resources off.” Here’s Imagine’s take on the subject:

“The advantage of on-premise equipment is that it can handle large amounts of content based on an initial outlay. On-premise cloud has a similar advantage, while also being able to provide compute resources for other applications when not needed for content management. Public cloud works well for a hybrid approach to offload during peak capacity and when there is much less content to handle, or for operators who are facing a CAPEX challenge and prefer an OPEX model.”

Software or hardware?
The location decision goes hand-in-hand with weighing the merits of software-based transcode systems against those of hardware-based ones. Elemental’s business is built on software, so it is unsurprising that it evangelises this route. “Relying on traditional video processing infrastructure is becoming increasingly difficult and costly, yet video distributors simply do not have the option of ignoring demand for multiscreen video services, as they risk permanent loss of customers to Internet-based OTT alternatives,” argues Wymbs.
“For video processing tasks and broadcast workflows, video providers need to evolve their systems from dedicated hardware based on ASICs, FPGAs and other custom chips to software-defined video solutions running on standard off-the-shelf hardware,” he argues. “In order to lead and manage the transition to new video codecs such as HEVC, advanced audio codecs, advanced colour spaces, increased colour bit depth, object-oriented audio specifications, forensic watermarking and new display formats like 4K Ultra HD, video providers should look to content delivery services built upon a software-based platform.”

 “Automation not only allows operators to do more with their existing staff, but also allows the system to be self-monitoring, self-adjusting and in some cases self-correcting, significantly reducing the cost of offering important services” Paul Turner, VP of enterprise product management, Telestream

There is a long-standing debate over whether software-based encoders ultimately produce higher-quality results than hardware-based ones, particularly if you also bring GPUs versus dedicated ASICs into the picture. Imagine, which recently added RGB Networks to its portfolio, having considerably strengthened its media processing capability by taking over Digital Rapids last year, takes a more nuanced view. “Fully hardware solutions tend to bring high video quality and service density, thereby providing the best cost per service for operators,” advises Seneca. “The disadvantage of hardware solutions is that they require dedicated hardware that cannot be easily repurposed for other applications. They also have a longer development cycle for new features and functionality, if available at all. A full software solution brings ultimate flexibility at a higher service and density cost. Hybrid solutions with hardware acceleration combine the advantages of both, increasing density and reducing service costs while remaining flexible for new feature development.”


MAM integration
Over the last decade, integration with asset management has become vital and all the major vendors of encoders and transcoders have integrated with all the major asset management vendors. This happened before the advent of common interface standards like FIMS. So, while that standard offers hope for common control, it is not widely deployed for encoding today as it arrived a little late on the scene. Other traditional hardware functions are moving into smart transcoders like the Dalet AmberFin platform.

“This platform includes frame rate conversion and frame rate fix-up,” explains Dalet’s chief media scientist, Bruce Devlin. “A lot of content today is assembled from new and old footage that may have been shot at different rates. While assembling movie content, European TV content and American TV content on a timeline in an editor may be easy to do, the results do not look good when broadcast on a low-bandwidth transmission channel and displayed on a big, bright, flat screen. Patented frame rate conversion technology from Dalet can fix many of these problems without resorting to a re-edit of the original content.”

Integration with a facility’s asset management layer can range from simplistic to deep. Wowza’s Knowlton explains that the simpler the integration (such as using watch folders to capture newly archived live video assets and metadata), the more the operator can treat the various components of their video workflow as modular components, choosing best-of-breed technologies rather than a monolithic technology stack. Telestream’s Turner notes that MAM-transcoder integration is generally done via API, “although less sophisticated systems may only offer hot folder integration, which as you can imagine offers much less interaction, and places all of the management burden on the asset management system itself.”

All of these issues will come into sharper focus at NAB in Las Vegas. “We’re at an inflection point in the industry which creates the opportunity for new suppliers to emerge in due course,” predicts Wymbs. “This only happens once every decade or more, but we believe Elemental is well positioned to be the dominant supplier of core video infrastructure over the next decade.”
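Knowlton’s watch-folder pattern is the loosest form of MAM–transcoder integration: the asset management system drops finished files into a folder, and the transcoder periodically picks up anything it has not yet seen. A minimal sketch of that loop, with hypothetical file names and a temporary directory standing in for the shared folder:

```python
# Minimal watch-folder loop: collect media files that have not been
# processed yet. A real system would hand each new file to a transcoder;
# here we only track which files have been picked up.

import pathlib
import tempfile

def poll_watch_folder(folder, seen):
    """Return newly arrived .mxf files and mark them as seen."""
    new = [p for p in sorted(pathlib.Path(folder).glob("*.mxf"))
           if p.name not in seen]
    seen.update(p.name for p in new)
    return new

# Simulate a MAM archiving two assets into the watch folder.
with tempfile.TemporaryDirectory() as folder:
    (pathlib.Path(folder) / "news.mxf").touch()
    (pathlib.Path(folder) / "promo.mxf").touch()
    seen = set()
    first_pass = poll_watch_folder(folder, seen)   # both assets picked up
    second_pass = poll_watch_folder(folder, seen)  # nothing new this time
    print([p.name for p in first_pass])  # ['news.mxf', 'promo.mxf']
    print(len(second_pass))              # 0
```

As Turner notes in the text, this style of integration is simple but one-way: all state about what has been processed lives outside the MAM, which is why deeper API-based integrations are preferred.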

Deep Storage

Advances in cloud storage, object storage and flash storage offer a reliable solution to the modern-day problems of media operations.
By Jason Danielson


With companies amassing data at alarming rates, information storage worldwide has become a growing concern. While media storage has taken long strides in the past decade, the same old legacy systems continue to proliferate in broadcast and production facilities, even as other industries gain huge efficiencies by virtualising their infrastructures into what are often called private clouds.

The IT industry has moved much of its less critical data to the public cloud, and sooner or later broadcasters — vocally averse to putting their content into the cloud — will be considering and experimenting with public clouds. In fact, quite a few broadcasters and digital media outlets are beginning to see the elasticity benefits of the cloud-rental model, at least for some use cases.

Studios are investigating object storage to better move, manage, and secure billions of pieces of content in a single Web-scale, geodispersed content repository. And in another development, flash, in the form of SSDs, is looking more attractive each passing month for storage workloads such as database operations, 4K compositing, and mixed-use video production. Media storage is certainly changing.

Media operations producing, distributing, and transacting content such as video, computer graphics animation, photos, illustration, HTML Web pages, and audio have specific requirements for digital storage, based on the production or distribution bandwidth necessary for their operations. Tier-one storage topologies vary greatly across these media and entertainment operations:

  1. HD, 2K, 4K compositing/colour grading rely on low-capacity, high-bandwidth, direct-attached arrays.
  2. Broadcast production depends on petabyte- and multipetabyte-scale shared storage that provides gigabytes per second of video ingest, edit, transcode, and playout bandwidth.
  3. Animation and VFX bank on file servers supporting hundreds of artist workstations and thousands of render compute cores.
  4. Active Archives are used in large studio and network content repositories that handle transcoding and distribution of content to business units, affiliates, and licensees.
  5. TV Everywhere counts on origin servers ingesting, transcoding, packaging and feeding hundreds of channels to global content delivery networks.
  6. Web Media Services hang on Web-scale repositories that hold billions of files of user-generated content.
  7. Media transaction use cases lean on low-latency storage arrays that support databases performing millions of transactions per day.

The fundamental shifts in media storage today (and over the next few years) are no different than the fundamental shifts in storage across the entire IT landscape. That’s not to say that media operations have the same storage workloads as the general IT landscape; they don’t. But the arguments that broadcasters and studios have made for years about needing specialised storage aren’t holding up as well as they used to. Workflows in other industries, such as oil and gas exploration, use files much larger than the file sizes typically used in video. File counts in social media and cloud use cases far outstrip file counts within a broadcast or studio operation. It’s true that video workflows will continue to require specialised storage features like edit-while-ingest, partial file retrieval, and deterministic bandwidth, but these features are no longer hurdles for IT storage platforms.

The storage industry has delivered SAN storage and NAS storage for more than two decades now, and media companies have cleverly deployed these two fundamentally different types of systems to their advantage. These systems have become drastically faster, smaller, and cheaper over this time period. As they have, media operations have moved to higher-bit-rate production and larger shared production workgroups, and they have built more online content repositories. We’ve seen adoption and great progress over the past twenty years in the use of digital media storage for production and distribution.

But really, that was only the beginning. Three new entrants to the storage landscape are reaching critical velocity. These technologies, which have been around in one form or another for more than a decade, are now or will soon be commercially viable options: Cloud storage, Object storage and Flash storage. Each of these storage technologies, in its own way, threatens to disrupt and promises to revolutionize media infrastructures. What are some of the more likely scenarios?

Cloud Storage: Cloud storage isn’t so much a new type of storage technology as it is a new type of storage offering. Essentially, cloud storage is a business model that lets you rent storage as you need it. Cloud storage has its advantages in terms of elasticity: namely, the ability to use, and pay for, storage based on temporary need; and the ability to reduce capital costs by moving some less critical storage workloads to the cloud. Furthermore, media operations could benefit from using the cloud to test storage scenarios before scaling them out, which allows those operations to conserve capital until owning the storage resources becomes more practical than renting. Yet tapping into the advantages of cloud storage can be easier said than done. Why? Because data is heavy. That is, it doesn’t move easily or quickly. What’s more, media operations want to retain data for long periods of time, so there’s value in owning the storage, keeping control over the content, and knowing where it is at all times. Those factors make moving entirely to cloud storage a complicated proposition. However, moving parts of the storage workflow to the cloud makes a lot of sense.

Object Storage: Often confused with cloud storage, object storage is a technology that can be deployed by a public cloud service provider like Amazon Web Services (AWS) or Microsoft Azure, or a hybrid cloud service provider like IBM Softlayer. Media operations can also implement object storage themselves. Object storage is arguably the next generation of software abstraction layer after network file systems. Some people think of object storage as cheap commodity storage. It is not that. Instead it provides content resiliency without the use of RAID technology. Object storage is a fundamentally different approach to how objects (files with a unique identifier) are stored, managed, and accessed within the storage system.

Object stores are inherently designed to store billions of files and to geo-disperse content across multiple sites — two things file systems simply cannot do. But the advantages over file systems don’t stop there: Objects stores are designed to provide access to your content even if an entire site fails. File systems can only do that by syncing (copying) content between two separate file systems in two separate locations. Object stores are often designed to provide resiliency of content over many years, performing integrity checks of content and recreating objects as needed if bits or disk drives are detected to be jeopardised.
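The self-healing behaviour described above can be sketched with a toy object store: every object is addressed by a unique identifier (here, a SHA-256 digest of its content), each “site” holds a replica, and a scrub pass re-hashes replicas and repairs any that no longer match their identifier. This illustrates the principle only, not how any particular product works:

```python
# Toy content-addressed object store with replica integrity checking.
# Illustrative sketch of the self-healing idea, not a real product.

import hashlib

class ToyObjectStore:
    def __init__(self, replicas=2):
        # Each "site" holds a full copy of every object.
        self.sites = [{} for _ in range(replicas)]

    def put(self, data: bytes) -> str:
        """Store `data` at every site, keyed by its content hash."""
        object_id = hashlib.sha256(data).hexdigest()
        for site in self.sites:
            site[object_id] = data
        return object_id

    def get(self, object_id: str) -> bytes:
        """Serve the object from any site holding a healthy copy."""
        for site in self.sites:
            data = site.get(object_id)
            if data is not None and hashlib.sha256(data).hexdigest() == object_id:
                return data
        raise KeyError(object_id)

    def scrub(self) -> int:
        """Integrity check: re-hash every replica and repair bad ones."""
        repaired = 0
        for site in self.sites:
            for object_id, data in list(site.items()):
                if hashlib.sha256(data).hexdigest() != object_id:
                    site[object_id] = self.get(object_id)  # copy from a healthy site
                    repaired += 1
        return repaired

store = ToyObjectStore()
oid = store.put(b"frame data")
store.sites[0][oid] = b"bit rot"             # simulate a corrupted replica
print(store.get(oid) == b"frame data")       # True: served from the healthy site
print(store.scrub())                         # 1: one replica repaired
print(store.sites[0][oid] == b"frame data")  # True: corruption healed
```

The content hash doubles as the object identifier, so corruption is detectable without any external catalogue, which is the property that lets real object stores verify and repair content over many years.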

Object stores are better designed to migrate content from one storage medium to another without disruption to the operation. That’s because they assume that the need to access the content will long outlive whatever storage media (disks, tapes, etc.) are deployed today. Some object stores even provide sophisticated policy engines for tiering content between storage media — and storage sites — thereby offloading the heavy lifting from today’s media asset management systems.

For those reasons, using object storage for large broadcast, studio, and Web content repositories is becoming compelling. The immediate value of an object store for any given use case depends on application compatibility. Object stores employ several different APIs — interfaces for depositing or withdrawing content to or from the store, and for managing and monitoring the content while it is there. Several HTTP-based interfaces are in use today, but the one gaining the most traction is the S3 interface that AWS developed for its cloud offering. Edit, transcode, and MAM applications can already talk to an object store through a file/object gateway, which is how object stores are being deployed in these early days. But application vendors will need to develop these new interfaces before the full value of an object store can be exploited.


Flash Storage: Flash has been around in storage systems for over a decade, but its primary use has been to accelerate the reading and writing of content to disk drives. Until recently, the cost of flash prohibited its use beyond just a few gigabytes per system, but today SSDs over a terabyte in capacity are being designed into all-flash arrays and hybrid (flash/hard disk drive) systems to provide extremely low-latency and high-performance random access to content.

Every use case, from animation rendering to 4K editing, stands to be affected by this wave of innovation. Thousands of compute cores — all asking for roughly the same set of a million small files in order to render a scene — can hit the flash storage at once instead of being gated by the revolutions per minute of disk drives. You can liken it to the difference between booting up a flash-based Macintosh laptop and booting up an old hard-drive Windows machine. Flash is being driven down in price as it is adopted, and it is already allowing storage manufacturers to configure systems that are faster, higher-capacity, and less expensive than the systems they could design without flash. Some workloads — such as mixed edit/render/transcode workgroups, media transactions, and file system databases — are accelerated so tremendously that the business value of the added performance more than compensates for the cost of an all-flash array. And that’s today. Over the next two years we will see a radical shift in what’s possible in digital media thanks to flash.

While speed, capacity, and costs of SAN and NAS storage will continue to improve as they have over the past decade, the entry into the digital media storage landscape of cloud storage at the business model level, object storage at the system level, and flash at the storage medium level will drastically change how we store, manage, and monetize content over the next few short years.

Jason Danielson is the media industry lead at NetApp

Go with the flow


Workflow tools for film and TV production have developed rapidly in recent years. DS asks the experts about the key developments

Productions from TVCs to movies and TV series are increasingly being shot in HD and 4K, and projects also have ever tighter deadlines, putting strain on the entire production and post-production cycle. Some of these demands are being addressed by a new breed of media management software that handles data from production through to post-production, and the delivery of a product ready for transmission. Digital Studio asks three experts about the latest trends in workflow management software and how it is helping production professionals to work more efficiently.

DS: What are the latest trends in workflow management software?

Dymond: Workflow can be an overused term in our industry. At Imagine Communications we think of workflow as the way that processes are brought together to achieve a business goal. So our approach is to start by listening and understanding, then discussing potential improvements to the workflow and the ultimate goal. The very last step is to map this to a technology stack.

The latest trend among our customer base is that they are looking for solutions that can be delivered and managed with minimum operational input, for maximum efficiency. They are also looking at technology and workflow orchestration platforms that allow them to design and add new workflows without complex analysis or system downtime.

Krishnan: There is currently a general trend within the industry to outsource IT into the cloud by building up private enterprise networks, so it is not surprising that more and more cloud-enabled workflow management offerings are reaching the market. As a result, BPMN-compliant drawing tools are offered as web kits, eliminating the need for any local software installation. This also provides greater collaboration capabilities when designing and testing processes with developers working from remote locations. Another trend is a greater awareness among companies that their software must become a component suitable for integration into a larger context or process, which requires the software to expose its feature set as services that can be invoked at runtime by a BPM engine.

Sandhar: We are seeing efforts to standardise workflow integration between different systems, along with analytics and reporting so businesses can have a heat map of workflow activity, which informs resourcing and investment. We are also seeing deployment in virtual environments to optimise hardware capital investment.


Avid’s Deepraj Sandhar


“The latest trend among our customer base is that they are looking for solutions that can be delivered and managed with minimum operational input, for maximum efficiency.”

DS: How has software developed in the past year or two?

Sandhar: Software development has shifted towards using technology to monetise the market and gain a competitive advantage, whereas previously software was used mainly to gain operational efficiencies within a business.

Krishnan: The number of available workflow management suites has gone up significantly recently, but more importantly, thanks to the availability of mature open-source workflow engines, more embedded solutions are present, such as the Viz One Workflow Engine.

Dymond: There has been a movement away from single, large-scale monolithic software solutions towards modular, smart applications that are designed for integration and interoperability ‘out of the box’. Commoditisation of hardware and integrated software design has allowed the move away from complex bespoke solutions to discrete solutions.

This ensures that the end user is only presented with essential information and not overwhelmed by the complexities of the engine room.

DS: What additional demands are there on workflow management software now?

Krishnan: There is a constant striving for simplification in an ever more complex environment and a desire to do more with fewer technical resources. This is met by constant development of portfolio functions and processes that put more advanced functionality in the hands of end users, such as easy-to-use graphical shapes.

Sandhar: Analytics, reporting and secure virtual deployment are key demands in the media business at present, along with the expectation of integration APIs that conform to service-based standards.

Dymond: Workflow orchestration toolsets are now adding capabilities around resource provisioning. Earlier solutions might inform you of a bottleneck in your process chain, but with today’s software-defined video processing the workflow management layer should be able to dynamically add additional processing and storage. By learning from previous demands, the workflow orchestrator should be able to look ahead and proactively spin up more resources before bottlenecks form.
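The proactive behaviour Dymond describes can be sketched in a few lines. This is a hypothetical orchestrator, not any vendor's product: it predicts the next interval's load from a moving average of recent demand and provisions transcode workers ahead of time. The class name, the five-jobs-per-worker capacity and the demand figures are all invented for the illustration:

```python
import math
from collections import deque


class Orchestrator:
    """Sketch of proactive scaling: predict the next interval's job
    load from a moving average of recent demand and provision workers
    before the queue backs up, rather than reacting to a backlog."""

    def __init__(self, jobs_per_worker=5, history=4):
        self.jobs_per_worker = jobs_per_worker
        self.demand = deque(maxlen=history)  # recent demand samples
        self.workers = 1

    def observe(self, jobs_submitted):
        self.demand.append(jobs_submitted)

    def provision(self):
        if not self.demand:
            return self.workers
        predicted = sum(self.demand) / len(self.demand)
        # Round up so predicted demand never exceeds provisioned capacity.
        self.workers = max(1, math.ceil(predicted / self.jobs_per_worker))
        return self.workers


orc = Orchestrator(jobs_per_worker=5)
for jobs in (12, 18, 30):   # demand observed in recent intervals
    orc.observe(jobs)
print(orc.provision())      # 4 workers spun up ahead of the next interval
```

A production orchestrator would of course weigh spin-up latency, cost ceilings and per-job resource profiles, but the look-ahead principle is the same.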

DS: How are software developers trying to cater to this demand?

Sandhar: Key to catering for this market is ensuring that software developers are always analysing the industry’s needs, evaluating which ones they are best able to meet, and then developing accordingly to cater for demand.

Dymond: Central to the effectiveness of software-defined networks is the capability of the workflow orchestrator to apply intelligent decision-making to resource management and queue management. It depends upon a well-designed architecture that allows metrics to be shared and understood. As well as acting automatically and autonomously as much as possible, the workflow management layer should also provide data to external reporting or business intelligence tools so that C-level execs can weigh organizational bottlenecks against the lowest-cost path to resolution.

DS: What are the biggest growth areas within workflow management?

Dymond: Processing and distribution, as well as storage, are prime candidates for growth. We now have more growth in multiplatform content distribution than ever before, and that leads to greater complexity in both the number of technical standards for outputs and the number of metadata sets required. For every different output there is currently a potential need for some form of manipulation – even if it is only some re-wrapping. The subsequent hand-off to a delivery platform also requires some level of asset storage – even if only temporary.

Sandhar: In the media industry, security is a particularly important topic. The extent to which it is a growth area is unclear, but there is certainly more focus from vendors on ensuring their assets are secure. This has brought about new opportunities for Avid, and our MediaCentral Platform approach has the potential to leverage this by looking at ways of integrating security solutions from vendors.

Krishnan: The biggest growth areas for us are to enrich the end-user workflow experience, tighter MAM interaction, and expanding the workflow functions to users that normally don’t use the MAM system in the organisation. Recent work includes interfaces to Adobe products such as Photoshop and After Effects, but also to users in the newsroom. Beyond this, we are focussing on additional reporting, dashboarding and audit functions, to really bring additional value to our offerings.

Vizrt’s RV Krishnan

“There is a constant striving for simplification in an ever more complex environment and a desire to do more with fewer technical resources.”
RV Krishnan

DS: How is business in the MENA?

Dymond: The MENA region has a significant number of high-level customers for Imagine Communications’ technologies. Some of this is led by discrete technologies, but we are also proud of the large number of fully integrated end-to-end solutions that Imagine Communications has delivered over the years. Today we see continuous refresh, upgrade and enhancement business through technology evolution.

Krishnan: The Middle East has been one of the most important markets for Vizrt, recording impressive growth in new projects and solutions over the years. Middle East broadcasters have shown an increasing appetite to be leaders in adopting new technologies, and this has seen a surge in sales for Viz Mosart studio automation, the Viz One media asset management platform and the other graphics and social TV workflow tools we offer for live studios and newsroom environments.

Sandhar: Avid’s core business comes from the MENA region, and our customer base is extremely strong and varied in the region. Demand for the Avid Media Central Platform has been predictably high as customers look for a solution to their complicated workflows.

DS: Is the MENA region on par with the rest of the world in adoption of workflow management tools? 

Krishnan: In the past, the MENA region traditionally followed the trends set by more established markets, but this is now changing. Key broadcasters in the region have been driving adoption of the latest workflow concepts to challenge their counterparts in more established parts of the world.

Sandhar: The Avid Media Central Platform has been adopted foremost by our customers in the MENA region. Our customers in the region want to be the driving force in MENA for advanced production techniques.

Dymond: Many of the solutions Imagine Communications has delivered within the MENA region are very much designed on the concept of hidden complexity. Standalone workflow management tools are rare. Typical solutions involve embedded workflow functionality coupled with other technology stacks like playout automation, content transcoding and management.

Catching Up On Shows You Missed


Live-to-VOD capabilities offer new ways to package live content alongside targeted advertising

Today, the lines between live TV and video on demand (VOD) services are blurred. Hitting the pause button to take a quick break during a live football game is no longer just a far-fetched impulse; it’s a practical expectation.
Replaying a scene from a live TV broadcast is now as simple as hitting the rewind button and catching up on a favorite show is just a menu option away. As a result, live streamed linear feeds of sports, news and entertainment content are becoming a competitive necessity.
Time-shifted services enrich live TV experiences, can be adapted for multiscreen viewing, and offer new ways to package live content alongside targeted advertising. Increasingly, consumers expect their video anywhere, on any device, and want to view that content with DVR controls like time delay, pause or repeat. Between 2011 and early 2014, the number of urban television consumers watching time-shifted content increased from 30% to 43%. Examples of advanced live-to-VOD services include:

  • Catch-up TV: Enabling viewers to replay TV shows broadcast hours or days earlier, catch-up TV allows pay TV operators to offer an alternative to on-demand movies and to monetize content through targeted advertising.
  • Start-over TV: Time-shift TV controls let viewers replay a live broadcast already underway from the beginning and switch back to the real-time broadcast feed. Targeted advertising streamed on top of existing commercial breaks offers distributors a new monetization avenue.
  • nPVR: DVR controls, which enable the creation and storage of live TV content recordings for playback on any device, are increasingly included as a component of pay TV subscriptions.
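The mechanics behind catch-up and start-over come down to a rolling recording buffer plus an index of programme start points. The following is a hedged sketch only — the class and segment naming are invented, and a real nPVR system would also handle per-subscriber rights, DRM and multiple ABR profiles:

```python
from collections import deque


class LiveToVodBuffer:
    """Sketch of the recording buffer behind catch-up and start-over TV:
    live segments are appended continuously, and a viewer who joins late
    can be served from the segment where the current programme began."""

    def __init__(self, max_segments):
        # Rolling catch-up window; old segments age out automatically.
        self.segments = deque(maxlen=max_segments)
        self.program_starts = {}  # program_id -> first segment number

    def ingest(self, seg_number, data, program_id):
        if program_id not in self.program_starts:
            self.program_starts[program_id] = seg_number
        self.segments.append((seg_number, data))

    def start_over(self, program_id):
        # Replay from the programme's first segment still in the window.
        first = self.program_starts[program_id]
        return [d for n, d in self.segments if n >= first]


buf = LiveToVodBuffer(max_segments=6)
buf.ingest(1, "seg-1", "weather")
buf.ingest(2, "seg-2", "weather")
buf.ingest(3, "seg-3", "news")
buf.ingest(4, "seg-4", "news")
buf.ingest(5, "seg-5", "news")
print(buf.start_over("news"))  # ['seg-3', 'seg-4', 'seg-5']
```

Catch-up TV is the same buffer with a longer retention window (hours or days), and pause TV is a per-viewer playback cursor into it.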

Pay TV operators can add value to live broadcasts by creating VOD assets in real time. Though customer satisfaction and loyalty are important objectives for operators, so too is monetization. Live-to-VOD capabilities offer new ways to package live content alongside targeted advertising.

Fixed-Function, Hardware-Based Infrastructures Cannot Keep Up


To keep a competitive edge, content providers also need to be able to easily prepare their technology infrastructures for live linear streaming at the lowest possible total cost of ownership.

That strategy starts with a continuously upgradeable video processing and delivery infrastructure that can bring premium live-streamed content to viewers no matter what device they’re using or where they are.

Fixed-function hardware might offer good performance initially, but it can quickly be surpassed by more cost-effective and highly adaptive options. Dedicated hardware-based infrastructures from incumbent video processing suppliers won’t be able to keep up with accelerating innovation in audio processing, color depth, content protection and tracking, and video encoding.

A software-defined video processing platform provides far greater flexibility and scalability, while extending the useful life of infrastructures as the industry evolves. By leveraging the most powerful general purpose programmable processors, the power and efficiency of a software platform can follow the same rate of performance and cost enhancements as standard IT infrastructure.
With this new approach, support for new services and video formats can be integrated seamlessly through software upgrades. What is used to process MPEG-2 video today can migrate seamlessly to H.264 or HEVC in the future. What is used to trial 8-bit 4K processing might evolve to 10- or 12-bit processing at real deployment. The possibilities are constrained only by the lines of code in the software — not by the chip designs within traditional hardware systems.
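The "constrained only by lines of code" point can be made concrete with a strategy-style codec registry: the codec becomes a configuration value rather than a property of the chassis. The registry, function names and string outputs below are invented placeholders standing in for real encoder implementations:

```python
# Hypothetical encoder registry for a software-defined video pipeline.
ENCODERS = {}


def encoder(name):
    """Decorator registering an encoder implementation under a codec name."""
    def register(fn):
        ENCODERS[name] = fn
        return fn
    return register


@encoder("mpeg2")
def encode_mpeg2(frames):
    return f"mpeg2[{len(frames)} frames]"


@encoder("h264")
def encode_h264(frames):
    return f"h264[{len(frames)} frames]"


@encoder("hevc")
def encode_hevc(frames):
    return f"hevc[{len(frames)} frames]"


def process(frames, config):
    # Migrating a channel from MPEG-2 to HEVC is a one-line config
    # change — no new chassis, no board swap.
    return ENCODERS[config["codec"]](frames)


print(process(["f1", "f2", "f3"], {"codec": "mpeg2"}))  # mpeg2[3 frames]
print(process(["f1", "f2", "f3"], {"codec": "hevc"}))   # hevc[3 frames]
```

Supporting a new codec means registering one more implementation; the rest of the pipeline is untouched.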

Software-Defined Video Enables Live-to-VOD

Software components can be distributed across a network in order to bring multiscreen delivery systems closer to end users. Instead of handling all ABR streaming from a central head-end, format packaging for specific devices can be accomplished on local edge servers. This can greatly improve the efficiency of video delivery while reducing bandwidth consumption, as only one stream per profile is required between the head-end origin server and the local edge server.
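The saving from edge packaging is simple arithmetic: a centrally packaged channel must send every bitrate-profile/container-format combination from the origin, while just-in-time packaging at the edge needs only one stream per bitrate profile. A small illustrative calculation (the profile and format counts are made up for the example):

```python
def origin_streams(profiles, device_formats, edge_packaging):
    """Streams the origin must push towards the edge for one channel.

    Central packaging sends every (bitrate profile, container format)
    combination; edge JIT packaging sends one stream per bitrate
    profile and lets the edge server wrap it into the device-specific
    formats (e.g. HLS, DASH, Smooth) on demand.
    """
    return profiles if edge_packaging else profiles * device_formats


# e.g. 5 ABR bitrate profiles delivered in 3 container formats:
print(origin_streams(5, 3, edge_packaging=False))  # 15 streams from origin
print(origin_streams(5, 3, edge_packaging=True))   # 5 streams from origin
```

The ratio grows with every new device format an operator adds, which is why packaging at the edge scales better than packaging at the head-end.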
The flexibility of a software-defined video processing approach allows content providers to offer premium live content while extracting unmined value from that same content. They can enhance core live linear streaming services with catch-up TV functionality and multiscreen delivery. By integrating ad insertion capabilities, operators can even generate additional revenues through targeted advertising.
By supporting third-party integrations, a software platform can allow DRM, ad servers, and other video functions to be fully integrated into a unified system. Modular software-based platforms can also support a multitude of optional add-ons, including video processing specific to device profiles, just-in-time packaging, and audio transcoding. Live-to-VOD features such as nPVR and catch-up TV can also be supported through modular software components. Other possibilities include video analytics, content protection systems and ad servers.
When built upon a software platform, a live-to-VOD system can quickly scale up through ground- or cloud-based video processing. As the processing and storage capacities of cloud infrastructures improve, video processing can benefit from increased performance while legacy hardware is repurposed for less capacity-hungry applications. Content providers may decide to run all or part of their video processing in the cloud. By mixing ground- and cloud-based resources, they can choose what level of system support to maintain internally versus through external service providers.

Overcoming Live-to-VOD Multiscreen Challenges

Live-to-VOD services can be adapted to second screen devices such as PCs, smartphones and tablets. However, formatting live-to-VOD content to fit second screen devices is not as straightforward as simply playing back content on a TV set.
A live-to-VOD service needs to be able to repackage content, using recorded catch-up TV or nPVR content as mezzanine files, for the wide variety of devices supported by the pay TV operator. The system needs to be both scalable and flexible to support edge servers and third-party CDNs. With premium pay TV content, it is also important that live-to-VOD services incorporate DRM technology to protect valuable content regardless of the device used for playback.
Whether it’s catch-up, start-over, nPVR, or pause TV, each live-to-VOD service implementation must also take into account the different types of screens and networks in use. Bandwidth, storage, monetization and security concerns can lead to complications that are quickly multiplied by the myriad viewer devices and scenarios involved.
By relying on a software-based approach rather than fixed-function hardware, live-to-VOD systems can be more easily upgraded to embrace new standards and features as they emerge. By also including a just-in-time (JIT) packager that can adapt video streams to network and device parameters in real time, pay TV operators can be prepared for whatever comes next.

Video Delivery Platform for Advancing Live-to-VOD

Leading cable and satellite operators are already using software-defined video processing to enable multiscreen TV and time-shifting features. As the price-to-performance ratios of off-the-shelf hardware continue to improve and private cloud IT capabilities mature, the business case for shifting traditional video encoding and multiplexing to software-defined video systems is now compelling.
Elemental’s video processing and multiscreen delivery solutions are all built with a software-based approach. This allows for the rapid addition of new live-to-VOD features and support for new types of devices as they are introduced to the market. For example, the Elemental® Delta video delivery platform supports multiscreen delivery of advanced live-to-VOD services such as catch-up TV, start-over TV, and nPVR. The IP video delivery solution lowers storage, bandwidth and transit costs and helps content providers mitigate distribution expense by taking ownership of greater portions of the delivery infrastructure.
Already adopted in more than 10 countries, Elemental Delta combines just-in-time (JIT) packaging, origin services, intelligent caching, dynamic ad insertion and replacement, and end-to-end encrypted content protection functions in a single platform. The platform reduces multiscreen system complexity with the ability to transform any input into any output for high-quality, secure video delivery. Elemental Delta enables video providers to leverage a single multiscreen delivery workflow for every connected device, eliminating the management of multiple CDNs and network topologies.

Unlike generalized multiscreen delivery services, Elemental Delta provides on-the-fly support for all major adaptive streaming protocols, compression formats, and multiple digital rights management (DRM) systems within a single framework. Among the supported protocols are HDS, HLS, Smooth Streaming and the MPEG-DASH standard, which supports on-demand, live and time-shift applications and services. Elemental Delta also handles H.264 delivery as well as the new high-efficiency video coding (HEVC/H.265) codec needed for next-generation video delivery.
To secure content, the platform combines embedded encryption and decryption capabilities with JIT DRM wrapping, enabling protected assets to be stored and moved efficiently through the network while DRM is applied in real time upon delivery. Finally, Elemental Delta has built-in failover and redundancy, whether on the ground or in the cloud.

Bringing Native Intelligence to Media Workflows


Volicon, the leading provider of enterprise media intelligence solutions, will showcase the Capture and Share applications for its Observer Media Intelligence Platform. The system helps broadcasters capture media from a variety of sources and quickly produce and deliver compelling content to viewers via on-air broadcast, as well as via digital and social media platforms. Together, Capture and Share facilitate the capture, extraction, processing, and distribution of content in the appropriate format for virtually any target outlet and device. In a fresh approach to multiplatform content creation and delivery, these applications leverage the Media Intelligence Platform’s unique content-recording capabilities and intuitive user interface to provide a much faster and more cost-effective model than is possible with conventional recording, editing, and packaging solutions.


Volicon’s enhanced Multiviewer option for the Observer Media Intelligence Platform unites the platform’s recording capability with multiviewer functionality to give users access to multiple live or recorded programs (audio and video), complemented by frame-accurate data, on a monitor wall or other display. In addition to enabling users to keep their eyes and ears on every channel, the option makes it easy to identify audio and video impairments and to look past them to examine the integrity of metadata. During IBC2015, Volicon will demonstrate enhancements including configurable layouts; support for various room, screen, and player sizes; and an array of useful new widgets, including clocks and graphs.


Volicon’s Observer OTT provides networks, video service providers, and broadcasters with a solution for logging (recording) and monitoring over-the-top (OTT) A/V services that stream content to computers, tablets, and smartphones. With the same suite of tools already proven for set-top box (STB) and transport stream (TS) monitoring, Observer OTT offers a complete, cost-effective quality monitoring and/or compliance logging solution for multiplatform media delivery. Users can ensure that video-on-demand (VOD) and linear services are available 24/7 at optimal quality, validate service level agreements (SLAs) with content delivery networks (CDNs) using pixel-level verification of cloud delivery and playback, confirm the presence of captioning, and determine that specialized apps are providing an optimal quality of experience. In addition to providing a true recording of services, the system facilitates remote streaming and review as well as in-depth analysis of both unencrypted and encrypted content.
During IBC2015, Volicon will demonstrate how Observer OTT ingests content from each point in the OTT pipeline — including a variety of target mobile devices — not only to provide a valuable look at how consumers experience streamed content, but also to supply rich data that speeds the isolation and resolution of quality issues for content viewed on any device.


Addressing broadcasters’ need for convenient, cost-effective long-term storage of aired content, Volicon has introduced a new Archiver option for the company’s Observer Media Intelligence Platform®. Providing multiple simultaneous users with random access to an indexed store of full-resolution, high-bit-rate content, as well as low-resolution proxies, this option makes programming, promos, and advertisements readily available for use cases ranging from ad verification to repurposing. When years of online storage are required, the Archiver option offers a scalable, high-performance, low-cost alternative to the expensive systems typically implemented for long-term archives. Because it features both baseband (SDI) and transport stream (TS) interfaces and is compatible with any application, the module can be deployed easily in virtually any broadcast environment.

Behind the glass door


Broadcasters are looking to virtualize because they don’t want to build a specific, dedicated architecture for a process; they want a process-agnostic platform that can be readily adapted to whatever format or signal they need.

The Indian market is fast acknowledging cloud technology as a viable means to host broadcast workflows. Cloud-based themes were woven throughout both the Broadcast India and BES shows last year, and a recent study indicated that 15-20% of broadcast and post-production businesses in India are beginning to adopt a cloud infrastructure.
The cloud certainly brings significant advantages to the business of TV content creation, storage and post-production, both technically and economically, including zero spend on broadcast-specific hardware; leveraging of existing commodity-based hardware; vastly reduced cabling/physical storage; virtually no cost for electricity or cooling systems; no expenditure on datacentre maintenance/security; reduced operator training costs thanks to universal software/UIs; rapid channel deployment/upgrades; and instant access and easier, faster transfer of media content.
Along with the rest of the world, the broadcasters of India are now also poised to explore cloud playout, which promises even further economic freedom and flexibility for linear TV channels. But, at the moment, moving full-time playout to the cloud is still a bridge too far for most broadcasters.
In fact, our current distribution systems are still primarily satellite-based; and right now, when we hear about the implementation of playout from the cloud, in the vast majority of cases broadcasters and playout solutions providers aren’t really talking about true playout from the cloud, but simply about moving some non-realtime operations into the cloud. Here are some of the issues we need to consider when it comes to true cloud playout:

BROADCASTERS NO LONGER WANT TO BE IN THE INFRASTRUCTURE-BUILDING BUSINESS
Broadcasters are looking to virtualize because they don’t want to build a specific, dedicated architecture for a process; they want a process-agnostic platform that can be readily adapted to whatever format or signal they need to send out to the viewer. This means that their operations need to run on software instead of hardware. It needs to be software that is not tied to a particular machine – software that takes advantage of the elasticity of a virtualized environment and frees up resources for other tasks when it no longer needs them.
The logical extension of virtualization – which is really a private data center or private cloud – is the ability to also run in someone else’s cloud. Being virtualized also means being able to expand outside of your own datacenter into an outsourced datacenter or cloud as and when it’s needed. This might only be required during periods of high demand or special events. Some major broadcasters are looking to outsource their entire infrastructure operation to major cloud operators like Amazon Web Services and Microsoft Azure. They don’t want to be in the infrastructure-building business anymore; they want to leave that to the Amazons and Microsofts of this world and instead focus on where they really differentiate themselves from the competition: content creation and curation. And who can blame them, considering the overwhelming sense of uncertainty that the industry is currently facing.

UNCERTAINTY IS FUELLING VIRTUALISATION
The more the shift towards OTT and other nonlinear viewing continues, and the more the dominance of non-traditional players in these markets grows, the more uneasy linear TV broadcasters have become.
Uncertainty over the direction of the industry and technology has interfered with broadcasters’ ability to plan ahead. It’s difficult to predict where they’ll be in six months or a year, and what new requirements they’ll be expected to meet. It’s the opposite of the scenario at previous broadcast industry milestones: the migrations from analogue to digital, SD to HD, and baseband to file-based workflows.
Given the turmoil and uncertainty, broadcasters now face an overwhelming number of questions that must be considered when upgrading or building a facility today: Will I need to migrate to UHDTV/4K? Will satellite and cable distributors carry the UHD version of my signal? Can consumers even tell the difference between UHD and my HD signal up-converted by their new 4K TV? How should I build a UHDTV/4K facility? Using SDI as quad-3G or 12G? Over coax or fibre? Or over IP uncompressed using 2022-6, Aspen or TR-03? Or compressed over IP using Tico, LVCC, etc.? Will I need to launch new channels? Should I consider temporary OTT channels? Will my channel requirements change so that I have to reconfigure and modify my channels and workflows? Will new transmission platforms force me to redesign my playout system? It’s all this uncertainty that is fuelling virtualization.

THE CLOUD ISN’T QUITE READY FOR REALTIME PLAYOUT
Being fully virtualized sounds wonderful, and many claims have been made by playout solutions providers that they can already move playout to the cloud. In reality, almost every ‘cloud playout’ system we’ve seen to date puts only the non-realtime components in the cloud and still uses edge players to take in and switch the live feed; it still utilizes a media cache for content pushed from the cloud and relies on a GPU engine for graphics insertion and SDI outputs.
Recently, a major US network and its playout solutions provider claimed they had moved the network’s playout operations to the cloud, when all they really did was move the main content stores a few states away and tie the facilities together with multiple 10G dedicated private fibre connections so that the entire corporate enterprise could share the same storage architecture. But the automation, cache and playout still occur in the same building they did 30 years ago.

CLOUD PLAYOUT WOULDN’T SEE MUCH IN THE WAY OF CPU SAVINGS IF VIRTUALIZED
Typically, only the non-realtime components of a playout system are being ‘virtualised’ at present – media management, storage, archiving, traffic, scheduling, and logging. All of these are prime candidates for virtualization because they are ‘bursty’, intermittent operations – actions that use a lot of CPU cycles for a brief period of time and then go dormant until they’re needed again.
You load a playlist at 8 PM for the next day, and for the next 20 minutes the system is racing along, locating and retrieving content from media stores and archives and caching all of that media onto the playout server cache. It then waits there patiently and quietly for the next 23.5 hours, until the next playlist is loaded.
Why not use those spare CPU cycles to back up the email servers, transcode files for next week or migrate the archives? Or better yet, rent those CPU cycles from Amazon and pay for only 30 minutes per day instead of all 1,440? These virtualized and cloud implementations also provide benefits beyond cost savings: they’re accessible anywhere, making workflows more collaborative and the workforce more distributed.
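The economics of that argument are easy to put in numbers. Assuming a purely illustrative $0.50/hour compute rate (not any provider's actual price), renting 30 minutes a day compares with running a machine around the clock like this:

```python
def monthly_cpu_cost(minutes_per_day, rate_per_hour, days=30):
    """Toy comparison of always-on versus pay-per-use capacity.

    The rate is an illustrative placeholder, not any cloud provider's
    real pricing; the point is the ratio, not the absolute dollars.
    """
    return minutes_per_day / 60 * rate_per_hour * days


always_on = monthly_cpu_cost(24 * 60, 0.50)  # server running 24/7, busy or not
burst = monthly_cpu_cost(30, 0.50)           # rented only for the nightly cache run
print(always_on, burst)  # 360.0 vs 7.5 dollars per month
```

Even with real-world overheads (storage, egress, minimum billing increments), a workload that is busy for half an hour a day is a natural fit for pay-per-use capacity.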
The actual playout engine is trickier. It is performing operations nonstop, 24 hours a day, 7 days a week, 365 days a year, so it wouldn’t see much in the way of CPU savings if virtualized. It usually requires powerful GPUs to provide the graphics broadcasters expect today, and it needs to handle live feeds and switching of synchronous sources, as well as being realtime and robust, never dropping a packet or a frame of video. When you think about it, there’s really not much reason to move the main channel playout itself to the cloud.
Occasional-use playout applications, such as disaster recovery or temporary sports and festival event channels, will be the first adopters of true cloud playout, and through their experience and maturation they will pave the way for future primetime live channels to play entirely from the cloud.


Peter Wharton

OBSERVING PROGRESS AT A DISTANCE Progress is being made with regard to virtualization; playout solution specialists such as BroadStream Solutions are working to develop the same resiliency and redundancy for cloud-based playout that the industry has always demanded from its broadcast cores – just as broadcast equipment manufacturers adapted telecommunications networks and satellites for broadcasting 60 years ago.
Today you can run BroadStream’s core OASYS playout services in a virtualized environment and reap the benefits of a collaborative cloud-based solution. BroadStream’s HTML5-based Multi-Channel Web Client lets users run any number of channels across multiple sites from multiple locations using a standard web browser. BroadStream customers will soon be able to monitor and control every possible playout intricacy of any channel, from anywhere in the world, from any device.
The Indian broadcast market has always been cautious when it comes to adopting new technologies, whether digital, HD, 3D, IP and now the cloud. In their usual smart way, Indian broadcasters will observe other markets making investments, testing boundaries, making mistakes and tackling all the challenges detailed above before embracing all the advantages that cloud playout has to offer.
It won’t be long before full playout from the cloud becomes commonplace, but for the time being, technological concerns, uncertainty about standards and the questionable economics of realtime cloud playout mean there is understandable hesitation to get on board.
And just as we led the market with the first integrated playout solution and the first with hybrid SDI and IP I/O, you can count on BroadStream’s OASYS to lead again in cloud-based playout.

Amagi enables virtualized playout with CLOUDPORT 3.0


Amagi has announced CLOUDPORT 3.0, the latest version of its cloud-based playout platform, which the company claims enables TV networks to operate virtualized playout on the cloud. This gives them greater flexibility and agility to spin up new channels and create regional feeds instantly to keep pace with changing viewer dynamics and preferences.

Available as a commercial-off-the-shelf (COTS) platform running on Intel servers, CLOUDPORT 3.0 can also be deployed at operator headends while broadcasters retain full control over operations. CLOUDPORT 3.0 is IP-enabled, supports live broadcast, and is 4K UHD compatible. Complete with multifeed monitoring, the platform offers remote playout management, creating a live MCR-like experience on the cloud.

K.A. Srinivasan, Co-founder of Amagi, said, “With CLOUDPORT 3.0, TV networks can respond to market needs quicker, as well as operate multichannel playout and delivery with zero CAPEX when compared to traditional playout and broadcast models. Given its flexibility to be hosted on the cloud, CLOUDPORT 3.0 can be used to create broadcast-quality OTT feeds. It can also double up as a cost-effective option to meet the disaster recovery needs of TV networks. There is no longer a need for broadcasters to stay invested in expensive, traditional delivery models.”

Offered as a platform-as-a-service model, CLOUDPORT 3.0 is packed with advanced features such as near-live asset changes to broadcast playlists, real-time social media integration, and enhanced digital video effects for a better end-user experience.