You don’t need transcoding

Well, not always. Sometimes muxing might be a better option.

Muxing is the process of packing already-encoded streams into a different container format while preserving your video and audio codecs. No actual transcoding takes place and your streams are not modified; only the outermost shell of the video changes.

 

Muxing at its finest

 

A few days ago we added a new preset to Panda called “HLS Muxing Variant”. You can easily guess what it does with the input video. The most important thing about transmuxing is that it takes less time compared to traditional encoding with the “HLS Variant” preset, as it does not change resolution, bitrate, etc. That’s why we priced it as low as ¼ of a standard video minute, no matter the size or resolution of the source video.

It may sound complicated, so here’s a real-life example. Let’s assume you have an HQ source video encoded as H.264/AAC with a 2000k bitrate. Re-encoding is always time consuming and impacts quality, so you can use transmuxing to change only the format. You may say that HLS is an adaptive streaming technology, so you need more than one bitrate. You’re right! So you can create two more profiles, for 1000k and 500k, plus a variant playlist as well.

Panda::Profile.create!({
    :preset_name => "hls.muxer",
    :bitrate => 2000, # these three values are used for the variant playlist
    :width => 1280,
    :height => 720
})

Panda::Profile.create!({
    :preset_name => "hls.variant",
    :video_bitrate => 1000
})

Panda::Profile.create!({
    :preset_name => "hls.variant",
    :video_bitrate => 500
})

Panda::Profile.create!({
    :preset_name => "hls.variant.playlist",
    :variants => "hls.*"
})

Now you can send your HQ source video to Panda. The output will be one master playlist, three variant playlists and three groups of segments (and some screenshots). With these in place you are ready to serve your adaptive streaming content.
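
For completeness, here’s roughly what the upload call could look like. This is a minimal sketch that assumes Panda’s default behaviour of encoding a new video with every profile defined in your cloud; the :profiles parameter shown later in this post can restrict that to specific profiles.

Panda::Video.create!(
    # HQ H.264/AAC source at 2000k; assuming the cloud's default behaviour,
    # Panda will run it through the HLS profiles defined above.
    :source_url => "HQ_SOURCE_URL"
)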

Give it a try. If you have any problems, remember that we are here for you and we are always happy to help.

Closed Captions… what, why and how?

Closed captions have become an inseparable part of any video. They make it possible to watch Scandinavian independent cinema, and they help the hearing impaired experience Game of Thrones as fully as anyone else. We all benefit from them.

Most video players have an option to load subtitles from a file. However, that means that if you want to deliver a video with subtitles to your client, you have to send not only the media file but the subtitle files too. What if they get mixed up? And how can you be sure you have sent the client all the available subtitle files? Fortunately, there are other ways.

The first option is to burn subtitles into every frame of the video. Sometimes this is necessary for devices which can’t overlay subtitles on the frames themselves; old TVs are a good example. But does that mean we should be limited by old technology? Of course not. The second option is to use closed captioning. It lets you put multiple subtitle tracks into one video file, each added as a separate track. Anyone who downloads a video with embedded closed captions can then select which track to use, or disable them entirely if they are not needed.

Closed captions are a must-have these days and we didn’t want to be left behind. So, there’s a new parameter in the H.264 preset which enables closed captioning. At the moment it is accessible only through our API, but we are working on adding it to our web application. The parameter name is ‘closed_captions’ and the value can be set to:

  • ‘burn’ – with this setting Panda will take the first subtitle file from the list and burn its subtitles into every frame
  • ‘add’ – with this setting Panda will put each subtitle file from the list into a separate track

Here’s a snippet of Ruby code showing how to use it:

Panda::Profile.create(
    :preset_name => "h264",
    :name => "h264.closed_captions",
    :closed_captions => "add"
)

Panda::Video.create!(
    :source_url => "VIDEO_SOURCE_URL",
    :subtitle_files => ["SUBTITLE_1_SOURCE_URL", "SUBTITLE_2_SOURCE_URL", "SUBTITLE_3_SOURCE_URL"],
    :profiles => "h264.closed_captions"
)

Panda supports all major subtitle formats, such as SRT, DVD, MicroDVD, DVB, WebVTT and many more.

Thank you!

How profiles pipelining makes your life easier

Have you ever wondered if there is a way to encode your video and then use the encoded version as the input for a new encoding? So far it hasn’t been available off the shelf, but it has been possible to achieve using our notification system. But why should our customers have to take care of it by themselves?

So, what is profiles pipelining?

Let’s say you want to send a video to Panda and encode it using three profiles: “h264.1”, “h264.2” and “h264.3”, and then you want the video created with profile “h264.2” to be encoded using profiles “h264.4” and “h264.5”. You also want the output created with profile “h264.3” to be encoded using profile “h264.6”. But that’s not the end: to make it harder, you also want to encode the video created with profile “h264.5” using “h264.7”. It can be hard to imagine what is going on, so for simplicity the image below shows what I mean.

 

Example pipeline

 

 

First we need to describe it using JSON:

{
  "h264.1": {},
  "h264.2": {
    "h264.4": {},
    "h264.5": {
      "h264.7": {}
    }
  },
  "h264.3": {
    "h264.6": {}
  }
}

 

And now we can send our request to Panda. Below is an example in Ruby:


pipeline = {
  "h264.1" => {},
  "h264.2" => {
    "h264.4" => {},
    "h264.5" => {
      "h264.7" => {}
    }
  },
  "h264.3" => {
    "h264.6" => {}
  }
}

Panda::Video.create!(
  :source_url => "SOURCE_URL_TO_FILE",
  :pipeline => pipeline.to_json
)

Now, when an encoding is done, Panda will check whether there is anything more to do next in the pipeline. If, for example, the encoding for “h264.2” is done, its output becomes the new input for the “h264.4” and “h264.5” profiles, and so on. Encodings created using pipelines have an additional field, parent_encoding_id, which can be used to find out which encoding was used as the input, or to reproduce the pipeline with encodings instead of profiles.
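
As a rough sketch of how that field could be used, the snippet below walks from any encoding back up to the encoding produced directly from the source file. It assumes the Ruby client exposes Panda::Encoding.find and attribute-style access to parent_encoding_id, and that the field is empty for encodings created straight from the source, so treat it as an illustration rather than a definitive recipe.

def encoding_ancestry(encoding_id)
  chain = []
  while encoding_id
    encoding = Panda::Encoding.find(encoding_id)  # assumes Panda::Encoding.find(id)
    chain.unshift(encoding)
    encoding_id = encoding.parent_encoding_id     # assumed nil for the root encoding
  end
  chain
end

# For an "h264.7" encoding from the example above this should return the
# encodings for "h264.2", "h264.5" and "h264.7", in that order.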

If you have any problems with this new feature, don’t forget that we are always here to help you.

Take care!

Panda Corepack-3 grows bigger and better

As you probably know, you can create advanced profiles in Panda with custom commands to optimize your workflow. Since we are constantly improving our encoding tools, an update could sometimes result in custom commands no longer working properly. Backwards compatibility can be tough to manage, but we want to make sure we give you a way to handle this.

That’s why we made it possible to specify which stack to use when creating a new profile. Unfortunately, the newest one – corepack-3 – used to contain only one tool, FFmpeg. That was obviously not enough and had to be fixed, so we extended the list.

What’s in there, you ask? Here’s a short summary:

  • FFmpeg – a complete, cross-platform solution to record, convert and stream audio and video.
    http://ffmpeg.org
  • Segmenter – Panda’s own segmenter that divides the input file into smaller parts for HLS playlists.
  • Manifester – used to create m3u8 manifest files.
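
Tying this back to profile creation, here is a rough sketch of how a custom-command profile pinned to corepack-3 might be set up. The :command and :stack attribute names, and the exact stack identifier string, are assumptions made for illustration, so please check the documentation for the exact spelling.

Panda::Profile.create!(
    :name => "h264.custom",
    :command => "YOUR_CUSTOM_FFMPEG_COMMAND",  # the custom command mentioned above
    :stack => "corepack-3"                     # assumed attribute name and value format
)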

 

Of course, this list is not final and we’ll be adding more tools as we go along. So, what would you like to see here?