XDCAM is a series of video formats widely used in the broadcasting industry; you might also know them as MXF. Sony introduced them back in 2003, and since then they’ve become quite popular among video professionals. It has always been possible to encode to XDCAM in Panda through our raw encoding profiles, but we’ve decided to make the process more streamlined. Oh, and, by the way, to make the output quality possibly the best in the industry.
And here it is: the new preset for creating XDCAM profiles. Everything can be set up using Panda’s UI. Because XDCAM only allows a predefined set of FPS values, we decided it would be a good idea to always use our motion-compensated FPS conversion for XDCAM profiles (more on Google’s blog). If your input video’s frame rate doesn’t match the one used by the XDCAM preset, or if it is progressive and you need interlaced output, the quality won’t degrade as much as it would without motion compensation. And that’s what gives our preset the best quality in the cloud video encoding industry.
Should you have any questions or suggestions regarding the new presets – just shoot us an email at email@example.com.
Curious clients like the ones we have in Panda are such a joy to work with. One of them sent us a great question about ftyp headers in MPEG-4.
The ISO base media file format (aka MPEG-4 Part 12) is the foundation of a set of formats, MP4 being the most popular. It was inspired by QuickTime’s file format, and then generalized in an official ISO standard. Other formats then used this ISO spec as a base and added their own functionality (sometimes in incompatible ways). For example, when Adobe created F4V – the successor of FLV – it used MPEG-4 Part 12 too, but needed a way of packing ActionScript objects into the new format. Long story short, F4V turned out to be a weird combination of MP4 and Flash.
Anyway, all MPEG-4 Part 12 files consist of information units known as ‘boxes’. One of these box types is ftyp, which contains information about the variant of the ISO base format the file uses. In principle, every new variant should be registered on mp4ra.org, but that’s not always the case. A full list of possible ftyp values (registered and unregistered) is maintained on the ftyps.com website.
The majority of MP4 files produced by Panda will be labelled as ISOM (the most general label), but you might want to use a different one. Instead of ISOM you might, for example, use MP42, which stands for MP4 version 2 and does add a few things to the ISO base media file format – so different labels actually make sense.
These low-level MPEG-4 Part 12 details can be easily manipulated using GPAC, which fortunately is available in Panda. Assuming that you’re already using raw neckbeard profiles, to change the ftyp of a file from ISOM to MP42 after it’s processed by FFmpeg, you could use the following commands:
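The original GPAC commands aren’t reproduced here, but to build an intuition for what the rebrand boils down to, here’s a sketch in Python that patches the major-brand bytes of an ftyp box directly (set_major_brand is our hypothetical helper; for real files you’d use a proper tool like GPAC’s MP4Box):

```python
import struct

def set_major_brand(data: bytes, new_brand: bytes) -> bytes:
    """Rewrite the major brand of the leading ftyp box in an MP4 byte string."""
    assert data[4:8] == b"ftyp", "first box is not ftyp"
    # Bytes 8..12 hold the major brand; size, minor version and
    # compatible brands stay untouched.
    return data[:8] + new_brand + data[12:]

# A minimal ftyp box: 20 bytes total, major brand 'isom',
# minor version 512, one compatible brand 'iso2'.
ftyp = struct.pack(">I4s4sI4s", 20, b"ftyp", b"isom", 512, b"iso2")
patched = set_major_brand(ftyp, b"mp42")
print(patched[8:12])  # b'mp42'
```

The box layout (big-endian size, type, major brand, minor version, compatible brands) is exactly what GPAC manipulates for you behind the scenes.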
As you probably know, you can create advanced profiles in Panda with custom commands to optimize your workflow. Since we are constantly improving our encoding tools, an update could sometimes result in custom commands not working properly. Backwards compatibility can be tough to manage, but we want to make sure we give you a way to handle this.
That’s why we made it possible to specify which stack to use when creating a new profile. Unfortunately, the newest one – corepack-3 – used to have only one tool, FFmpeg. That was obviously not enough and had to be fixed, so we extended the list.
What’s in there, you ask? Here’s a short summary:
FFmpeg - a complete, cross-platform solution to record, convert and stream audio and video. http://ffmpeg.org
In an ever-competing IT world there are many rival groups of skilled developers who independently try to solve the same problems and implement the same concepts. This usually results in a vast choice of possible solutions that share a lot of common traits. This abundance of techniques, methods and protocols is one of the things that allowed the rise of modern software.
That’s why standards are important
They provide a well-defined core that needs to be implemented by all vendors, which makes the constant struggle for portability a bit easier. A standard shifts the responsibility: now a developer doesn’t have to worry about every possible type of user software and include tests for special cases. They don’t need to write extra code just to handle a single task done differently in different environments. They can improve the support of a single protocol instead of working with five. It’s now the vendor’s job to provide a product that works with code compliant to the specification.
Unfortunately, creating a standard is not a simple task, and there are a lot of problems to solve in order to satisfy all needs and cases. A clash between proposals is hard to avoid, and it takes time for one victorious solution to emerge and dominate the market.
The divided world of adaptive bitrate streaming
Such strife can be observed right now in the world of multimedia streaming techniques. There are three competing HTTP-based methods, referred to as adaptive bitrate streaming: Apple’s HLS, Microsoft’s HSS and Adobe’s HDS. These three provide a way to transmit multimedia with a bitrate that can be changed dynamically, depending on network bandwidth and hardware capabilities.
They are similar but occupy different parts of the market. HSS is present in Silverlight-based applications, HLS is in common use among mobile devices, and HDS is a popular companion of Flash on the desktop. It would be a lot easier for developers to have one common technology to support instead of three separate ones. That’s why there have been attempts to standardize adaptive bitrate streaming.
Enter MPEG DASH
The MPEG group, a major organization that contributes commonly used multimedia standards, introduced its own version of HTTP-based streaming called MPEG DASH, which now strives to become the dominant method for delivering rich video content. Right now MPEG DASH is far from being the champion and the only preferred choice – HDS, HLS and HSS are still commonly used across the Internet – and it’s hard to predict whether it will prevail. All that’s certain is that it should not be ignored. There’s little merit in waiting for a winner to rise victorious from this clash of technologies. That’s why we decided to enhance Panda with DASH support.
We provide this feature through a new preset that is available in the profiles list. Similar to the HSS support we introduced recently, there are two ways the encoded set of output files can be stored in the cloud. If the default .tar extension is preserved, then both the multimedia files and the XML manifest with all necessary metadata are archived into a single file, which can later be downloaded and unpacked. Alternatively, you can choose .mpd, which makes Panda upload all output files into the cloud separately.
Another important decision to make is the set of output bitrates. The default setting consists of bitrates of 2400k, 600k, 300k and 120k. Changing the video bitrate value through the preset settings panel results in values equal to the one you set, plus 1/2, 1/4 and 1/8 of it (just like with the HSS support we introduced before).
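To make the custom-bitrate rule concrete, here’s a tiny sketch (the function name is ours, not part of Panda’s API) of how the four renditions are derived from the value you set:

```python
def dash_bitrate_ladder(top_kbps: int) -> list:
    """Derive the four renditions: the bitrate you set,
    then 1/2, 1/4 and 1/8 of it."""
    return [top_kbps // (2 ** i) for i in range(4)]

print(dash_bitrate_ladder(2400))  # [2400, 1200, 600, 300]
```

Note that this models the rule for custom values; the out-of-the-box defaults listed above (2400k, 600k, 300k, 120k) follow a different, fixed set.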
To test your output you can use one of these media players:
This new preset allows you to adapt our product to your needs more flexibly. There is a saying that “nobody ever got fired for buying IBM equipment”, because when in doubt one should choose what is standard for the industry. If you want to provide your application with the features that modern streaming techniques offer, then choosing MPEG DASH might be a good option. Panda is there to help you get your videos encoded the right way.
… but some profiles are more equal than others. At least they might be to you. We do realize that Panda is basically all about video transcoding, and its core tasks are pretty much the same for most of you. However, a good service should be customizable so it can fit specific user requirements.
If your business relies on specific output files more heavily than others you may want them to be encoded first. Until now you could do this by setting priority on your encoding clouds and sending more important files to the one with high priority.
This solution, while generally good, had a limitation if you wanted a specific profile encoded before the others. Let’s say you need a high-quality MP4 (H.264) encoded as your primary profile, while the smaller, lower-quality MP4s could follow later on. It required you to upload files twice to two different encoding clouds. Definitely not fun, as one of our clients pointed out.
We had to do something about it. And we did. A couple of days later, profiles can now also have a priority setting of High, Normal or Low, so you can control the order in which the jobs for specific profiles are processed.
Yay, it’s Friday afternoon! Here’s something short before you go and enjoy your weekend.
We have just added to Panda the ability to encrypt HLS streams. This means improved transmission security, and that you get to control who can view your videos. We help you encrypt your videos either by generating a 128-bit key along with an initialization vector, or by using a key provided by you.
In short – we encrypt all the segments of your video found in the associated HLS variant playlist. You will be able to choose a URL which contains your secret key, so that your media player can decrypt them. The HLS Variant and HLS Variant Audio profiles will now contain an Encryption section to help you set everything up in seconds.
To make sure the secret key behind that URL is properly protected, you will need to authorize the media player to obtain the key using your web authentication service.
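For a feel of what happens under the hood, here’s a small sketch of the two pieces involved: generating a random 128-bit key and IV, and the EXT-X-KEY playlist line that points players at the key URL. The helper names and the URL are ours for illustration only:

```python
import os

def make_hls_key():
    """Generate a random AES-128 key and initialization vector,
    as Panda does when you don't supply your own key."""
    return os.urandom(16), os.urandom(16)

def key_tag(key_url: str, iv: bytes) -> str:
    """Build the EXT-X-KEY line the variant playlist needs so players
    know where to fetch the key and which IV to use."""
    return f'#EXT-X-KEY:METHOD=AES-128,URI="{key_url}",IV=0x{iv.hex()}'

key, iv = make_hls_key()
print(key_tag("https://example.com/secret.key", iv))
```

Each media segment is then AES-128 encrypted with that key, and the player fetches the key from the URI before decrypting – which is exactly why that URI must sit behind your authentication service.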
You could have the most astounding video content, in high resolution and with amazing quality, enhanced with all sorts of special effects and advanced graphical filters – it doesn’t matter if you aren’t capable of delivering it to your consumers. Their connection speed is often limited, and they might not have enough bandwidth to receive all these megabytes of rich multimedia data. While our networks are improving at an astonishing rate, they’re still the main bottleneck of many systems, as file sizes rise rapidly with better resolutions and bitrates. While you can add several more cores to your servers to increase their computing power, you’re not able to alter the Internet infrastructure of your users. You have to choose: send them high-definition data, or sacrifice quality to make sure the experience is smooth.
Continuous streaming vs Adaptive bitrate
The most obvious solution is to prepare several versions of the same video and deliver one of them depending on the user’s bandwidth. In the past this was the standard approach: once the choice was made, files were streamed progressively, from beginning to end, just like images in modern web browsers.
This method has several disadvantages. The biggest one: you cannot dynamically switch between versions in the middle of the sending process to react to changes in network load. If the connection improves, you can’t take advantage of it – you have to keep sending the worse quality despite having resources available. Even worse, you can’t prevent congestion if the transmission speed decreases: you either cancel the entire process or end up with laggy video. With continuous streaming you also can’t skip part of the video and jump ahead until the downloading process gets to the desired moment, nor can you rewind quickly.
To fix these issues, a better, more flexible solution is needed. For a long time the preferred choice was Adobe’s RTMP (Real-Time Messaging Protocol) used together with Adobe FMS (Flash Media Server). It was complex, and it became problematic in the era of mobile devices, since their support for Flash-based technologies is pretty average. This allowed HTTP-based protocols to emerge and dominate the market.
These technologies split video and audio into smaller segments, each encoded at different bitrates. This allows the player to dynamically choose the best-fitting data based on current connection speed and CPU load. It’s called adaptive bitrate streaming.
HTTP has a number of advantages compared to RTMP
it’s a well known, simple, popular and universally applied protocol
it can use caching features of content delivery networks
it traverses firewalls much more easily.
Microsoft HSS, an underrated protocol
As of now there is no single, standard HTTP-based protocol; instead there are several implementations by different vendors. One of them, Apple’s HLS, has been available in Panda for a long time. Now we’re adding another one – HSS (HTTP Smooth Streaming), a Microsoft technology which brings adaptive bitrate streaming to Silverlight applications. Even though, with the advent of HTML5, Silverlight is not as popular as it used to be (over 50% market penetration in 2011), it’s still a widespread, common technology and a noteworthy rival of Flash.
To use HSS, a specialized server is needed. The most obvious choice would be Microsoft’s IIS, but there are modules for Nginx, Apache Httpd and Lighttpd as well. After setting it up, together with a Silverlight player, you need to split your video files into data segments (files with the .ismv extension) and generate manifest files (.ism and .ismc extensions), which are used to inform receivers what kind of content the server can deliver.
HSS preset in Panda
This is where Panda comes in handy as a convenient encoding tool. All you have to do is add the HSS preset to your set of profiles and configure it as needed to get a pack of converted files ready to deploy. The most important setting is the output file format. With the default ‘.tar’ extension, at the end of the encoding process you will receive a single, uncompressed archive which contains all the necessary data. All that’s left is to unpack this archive into the selected folder of your video server and then provide your Silverlight player with a proper link to the manifest file. Alternatively, you can choose the ‘.ism’ format, which won’t archive the output; instead, the files will simply be sent to your cloud, from where you can use them any way you need.
Another important thing to consider is the video bitrate of your segments. The default settings produce segments with bitrates of 2400k, 600k, 300k and 120k. If you enter a custom value, your output will consist of segments with bitrates equal to the provided value, 1/2, 1/4 and 1/8 of it. Finally, you might alter the standard set of options such as resolution and aspect ratio.
Now you can have the benefits of adaptive bitrate streaming if your business uses Microsoft technologies. All you have to worry about is content quality, since the problems with delivery are becoming less of a burden – all thanks to the abilities of the HTTP protocol.
Here at Panda, we are constantly impressed with the requests that our customers have for us, and how they want to push our technology to new areas. We’ve been experimenting with more techniques over the past year, and we’ve officially pushed one of our most exciting ones to production.
Introducing frame rate conversion by motion compensation. This has been live in production for some time now, and is being used by select customers. We wanted to hold off until we saw consistent success before we officially announced it. We’ll try to explain the very basics to let you build an intuition of how it works – however, if you have any questions regarding this, and how to leverage it for your business needs, give us a shout at firstname.lastname@example.org.
Motion compensation is a technique that was originally used for video compression, and now it’s used in virtually every video codec. Its inventors noticed that adjacent frames usually don’t differ too much (except for scene changes), and then used that fact to develop a better encoding scheme than compressing each frame separately. In short, motion-compensation-powered compression tries to detect movement that happens between frames and then use that information for more efficient encoding. Imagine two frames:
Now, a motion compensating algorithm would detect the fact that it’s the same panda in both frames, just in different locations:
We’re still thinking about compression, so why would we want to store the same panda twice? Yep, that’s what motion-compensation-powered compression does – it stores the moving panda just once (usually, it would store the whole frame #1), but it adds information about the movement. Then the decompressor uses this information to reconstruct the remaining data (frame #2 based on frame #1).
That’s the general idea, but in practice it’s not as smooth and easy as in the example. The objects are rarely the same, and usually some distortions and non-linear transformations creep in. Scanning for movements is very expensive computationally, so we have to limit the search space (and optimize the hell out of the code, even resorting to hand-written assembly).
Okay, but compression is not the topic of this post. Frame rate conversion is, and motion compensation can be used for this task too, often with really impressive results.
For illustration, let’s go back to the moving panda example. Let’s assume we display 2 frames per second (not impressive), but we would like to display 3 frames per second (so impressive!), and the video shouldn’t play any faster when we’re done converting.
One option is to cheat a little bit and just duplicate a frame here and there, getting 3 FPS as a result. In theory we could accomplish our goal that way, but the quality would suck. Here’s how it would work:
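In code, naive duplication is nothing more than an index mapping – for each output slot, pick the nearest existing input frame. A toy sketch (strings stand in for real frames; the function name is ours):

```python
def duplicate_frames(frames: list, in_fps: int, out_fps: int) -> list:
    """Naive FPS conversion: map each output slot back to an input
    frame index -- some input frames simply get repeated."""
    n_out = len(frames) * out_fps // in_fps
    return [frames[i * in_fps // out_fps] for i in range(n_out)]

# 2 FPS -> 3 FPS: the first frame gets shown twice.
print(duplicate_frames(["A", "B"], 2, 3))  # ['A', 'A', 'B']
```

The repeated 'A' is exactly the duplicated panda: correct frame count, but nothing in between actually moves.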
Yes, the output has 3 frames and the input had 2, but the effect isn’t visually appealing. We need a bit of magic to create a frame that humans would see as naturally fitting between the two initial frames – the panda has to be in the middle. That is a task motion compensation can deal with: detect the motion, but instead of using it for compression, create a new frame based on the gathered information. Here’s how it should work:
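A heavily simplified, one-dimensional sketch of the idea (our own toy code, not the Panda encoder): exhaustively search for the shift that best maps frame 1 onto frame 2 (block matching with a sum-of-absolute-differences criterion), then render the in-between frame half-way along that motion vector.

```python
def best_shift(f1: list, f2: list, max_shift: int) -> int:
    """Find the horizontal shift that best maps f1 onto f2,
    using sum of absolute differences (classic block matching)."""
    n = len(f1)
    def sad(shift):
        return sum(abs(f2[(i + shift) % n] - f1[i]) for i in range(n))
    return min(range(-max_shift, max_shift + 1), key=sad)

def interpolate(f1: list, f2: list, max_shift: int) -> list:
    """Create the middle frame by moving f1's content half-way
    along the detected motion vector."""
    n = len(f1)
    half = best_shift(f1, f2, max_shift) // 2
    return [f1[(i - half) % n] for i in range(n)]

# A 1-D 'panda' (the bright pixels) moving two positions to the right:
f1 = [0, 9, 9, 0, 0, 0]
f2 = [0, 0, 0, 9, 9, 0]
print(interpolate(f1, f2, 3))  # [0, 0, 9, 9, 0, 0] -- panda in the middle
```

Real encoders do this per block, in 2-D, with sub-pixel precision and a constrained search window – which is exactly why it gets computationally expensive.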
These are the basics of the basics of the theory. Now an example, taken straight from a Panda encoder. Let’s begin with how frame duplication (the bad guy) looks (for better illustration, after converting the FPS we slowed the video down, getting slow motion as a result):
See that jitter on the right? Yuck. Now, what happens if we use motion compensation (the good guy) instead:
It looks a lot better to me: the movement is smooth and there are almost no video artifacts visible (maybe just a slight noise). But, of course, other types of footage can fool the algorithm more easily. Motion compensation assumes simple, linear movement, so other kinds of image transformations often produce heavier artifacts (they might still be acceptable, though – it all depends on the use case). Occlusions, refractions (water bubbles!) and very quick movement (which means that too much happens between frames) are the most common examples. Anyway, it’s not as terrible as it sounds, and still better than frame duplication. For illustration, let’s use a video full of occlusions and water:
Okay, now, let’s slow it down four times with both frame duplication and motion compensation, displayed side-by-side. Motion compensation now produces clear artifacts (see those fake electric discharges?), but still looks better than frame duplication:
And that’s it. The artifacts are visible, but the unanimous verdict of a short survey in our office is: the effect is a lot more pleasant with motion compensation than with frame duplication. The feature is not publicly available yet, but we’re enabling it for our customers on demand. Please remember that it’s hard to guess how your videos would look when treated with our FPS converter, but if you’d like to give it a chance and experiment a bit, just drop us an email at email@example.com
The edge of video transmission is moving quickly – HD television has been mainstream for some time and 4K is getting traction; H.264 is ubiquitous, and HEVC is entering the stage. Yet most people still remember VHS. It’s good to be up with the latest tech, but unfortunately the world is lagging behind most of the time.
Television is a different universe than Internet transmission. The rules are made by big (usually government) bodies and rarely change. Although most countries have switched to digital transmission, standard definition isn’t gone yet – SD channels are still very popular, which forces content providers to support SD formats too.
Recently, we’ve helped a few clients to craft transcoding pipelines that support all these retiring-yet-still-popular formats. We’ve noticed that it’s a huge nuisance for content makers to invest in learning old technology and that they would love to shed the duty on someone else; so we made sure that Panda (both the platform and the team) can deal with these flawlessly.
There’s huge variability among requirements pertaining to SD: for example, you have to decide how the image should be fitted onto the screen. High-quality downsampling is always used, but you have to decide what to do when the dimensions are off: should you use letterboxing, or maybe stretch the image?
Another decision (which usually is not up to you) is which exact format should be used. This almost always depends on the country the video is for. Although the terms NTSC, PAL and SECAM come from the analog era (digital TV uses standards like ATSC and DVB-T), they are still used to describe encoding parameters in digital transmission (e.g. image dimensions, display aspect ratio and pixel aspect ratio). Another thing the country affects is the compression format; the most popular are MPEG-2 and H.264, though they are not the only ones.
Standard television formats also have specific requirements on frame rate. It’s a bit different than with Internet transmission, where the video is effectively a stream of images. In SD TV, transmission is interlaced, and instead of frames it uses fields (which contain only half the information that frames do, but save bandwidth).
Frame rate is therefore not a very accurate term here, but the problem is still the same – we have an exact number of frames/fields to display per unit of time, and the input video might not necessarily match that number. In such cases the most popular solution is to drop and duplicate frames/fields as needed, but the quality of videos produced this way is not great.
There is a better solution, though it’s complicated enough that we’ll only mention it here: motion compensation. It’s a technique originally used for video compression, but it also gives great results in frame rate conversion. It’s not only useful for SD conversions – we use it for different things at Panda – but it helps here too.
Well, that’s definitely not the end of the story. These are the basics, but the number of details that have to be considered is unfortunately much bigger. Anyway, if you ever happen to have to support SD television, we’re here to help! Supporting SD can be as easy as creating a profile in Panda:
For the last decade, content marketing has been dominated by video streaming. Whether you’re a comedian posting funny videos to build a following or a business creating an informative product demo to help your viewers, choosing the right type of video message is crucial to boosting views and rankings. Here is a closer look at some of the most popular forms of video marketing for various content types.
Social Videos for Individual Messages or Projects
Streaming media is the foundation on which the social Internet runs. Over the last two years, Twitter and Instagram have piggybacked on the social video marketing success of YouTube and Facebook. Twitter released Vine, which allows the user to post and share six-second videos, while Instagram added video-streaming capabilities to their regular feeds.
The benefit of choosing social video is that it has the ability to reach many people in a short amount of time. If your video is only 30 seconds to a minute long and designed to capture your viewer’s attention within the first five seconds, there’s a better chance of getting more views, likes, and shares.
This type of content marketing is great for short messages, entertainment (i.e., funny videos), and sales messages.
The Birth of the Online Film Is Giving New Life to Video Marketing
There is a misconception that people won’t take time out of their daily routines to watch a video that is more than three minutes long. YouTube was built around this belief and, up until a few years ago, was dominated by it. The Coca-Cola Company has paved the way for those needing to deliver a message that cannot be adequately expressed in under five minutes, but still want to reach their streaming media audiences.
The seven-minute animated video for their new car line was a mixture of important information, humor, and entertaining visuals. It was quickly embraced by viewers and sent across the social media world. Other companies have also taken this route. Some videos have breached the 10-minute mark, reaching up to almost half an hour in length.
This type of video marketing can be tough, but for those with an important message to deliver and a penchant for creativity, the online film may be the answer to avoiding traditional marketing routes. Online films are best suited for extended product sales pitches or for providing a visual checklist of content that appeals to your audience.