Tutorial: Setting up Panda Live Transcoding and Fastly CDN

There’s a pretty big chance that once you set up Live Transcoding you’ll also need a CDN to deliver your video streams at various bitrates.

We’ll show you how to distribute a live stream to your end users, using Live Transcoding HLS servers as HTTPS origins for your CDN service. In this case we’ll use Fastly, a real-time CDN that gives you total control over your content and the ability to update it instantly.

Create HLS stream with DVR

If you followed our previous tutorial on how to set up a live transcoding stream, you should already know how to go about it. Here we just want to remind you that you can let your users seek, rewind, and fast-forward your live stream by enabling DVR.


DVR Switch


After the stream is created, just copy its HLS playlist URL; it will come in handy when setting up the CDN:


HLS playlist URL


In our example we’ll use https://live01.pandastream.com:51003/hello.m3u8 as the URL.


Adding Fastly as CDN

First, you’ll need to create a new service in Fastly.

  • set the Origin Server Address – in our case live01.pandastream.com:51003
  • set the Domain Name – in our case player.uniquedomain.com
    Adding new service in Fastly


The next step is to activate SSL (TLS) for Fastly connections. SSL options are available from the Service Settings dropdown list. All you have to do is set Use SSL for Connection to Yes.

Setting Fastly SSL options

All that’s left to do now is to deploy the configuration and activate it.


Activating service

Done. You’ve set up Fastly as your CDN for Live Transcoding.
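If you’d rather script this than click through the UI, the same settings can be pushed through Fastly’s config API. Here’s a rough Ruby sketch – the token and SERVICE_ID are placeholders, and the requests are only built here, not sent:

```ruby
require "net/http"
require "uri"

FASTLY_API = "https://api.fastly.com"
TOKEN      = "YOUR_FASTLY_API_TOKEN"  # placeholder

# Build an authenticated POST against the Fastly config API.
def fastly_post(path, params)
  req = Net::HTTP::Post.new(URI("#{FASTLY_API}#{path}"))
  req["Fastly-Key"] = TOKEN
  req.set_form_data(params)
  req
end

# Origin: the Live Transcoding HLS server, over TLS (values from above).
backend = fastly_post("/service/SERVICE_ID/version/1/backend",
  "name"    => "panda-origin",
  "address" => "live01.pandastream.com",
  "port"    => "51003",
  "use_ssl" => "true")

# Domain your players will use.
domain = fastly_post("/service/SERVICE_ID/version/1/domain",
  "name" => "player.uniquedomain.com")

# To apply the config, send both requests and then activate the version
# with a PUT to .../service/SERVICE_ID/version/1/activate, e.g.:
#   Net::HTTP.start("api.fastly.com", 443, use_ssl: true) { |h| h.request(backend) }
```

This mirrors the UI steps above one-to-one: one call sets the origin, one sets the domain, and activation deploys the version.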

Read More

Tutorial: Panda Live Transcoding and live broadcast with Wirecast

Transcoding your first live stream in Panda takes just a couple of easy steps. The workflow is always the same: each time you want to transcode a new live event, you create a stream from the Panda Live Transcoding dashboard. Every Live Transcoding stream comes with at least two URIs – one for publishing your raw stream to, and a second for playback of the transcoded video.

Let’s go through the process of setting up a new live transcoding job and then configuring the broadcasting part with Wirecast.

BTW, the workflow can also be automated – see the Live Transcoding API documentation.

Starting new streaming

To make it easier to get started, we came up with several stream templates:


Live Transcoding templates


Let’s say we’re going to create a stream with the two-variant HLS template. The template lets you set the length of the stream, the video and audio quality of the low-bitrate variant, and whether the transcoded video served as an HLS playlist should be rewindable (that is, with DVR enabled).


Live stream setup
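If you drive this through the Live Transcoding API instead of the dashboard, the request might look roughly like the sketch below. The endpoint path and parameter names here are assumptions for illustration only – check the Live Transcoding API documentation for the real ones. The request is only built, nothing is sent:

```ruby
require "net/http"
require "uri"

# Hypothetical stream-creation request mirroring the template settings
# above (endpoint path and parameter names are illustrative, not the
# documented API):
uri = URI("https://api.pandastream.com/v2/streams.json")
req = Net::HTTP::Post.new(uri)
req.set_form_data(
  "profiles" => "hls.variant.high,hls.variant.low",  # two-variant HLS
  "duration" => "3600",                              # stream length, in seconds
  "dvr"      => "true"                               # rewindable HLS playlist
)

# Sending it would return the new stream's id on success:
#   res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |h| h.request(req) }
```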



When you click the Start streaming button, a request to schedule a new stream is sent to the app and, on success, its ID is returned.



Successful stream start and its ID


Once it’s created, you can follow your stream’s status on the Live Dashboard.


Live Transcoding dashboard


When the transcoding server has started and been configured with the requested transcoding pipeline, it awaits your video broadcast to its ingester endpoint. The URIs for all endpoints are displayed in the Details column of the Live Streams table. The ingester URI in our example is rtmp://live01.pandastream.com:51000/in/btmqooriyqgmfon/ and we’re going to use it to set up a live broadcast with Wirecast.


Wirecast is all-in-one live streaming production software. Getting it to work with Panda Live Transcoding is actually pretty easy. Just four steps and you’re good to go:

  • create a new broadcast (File → New)
  • set the output for your broadcast to our ingester URI (Output → Output Settings → RTMP Server)
  • the default configuration for our example looks like this:
Wirecast Settings


  • add your shot to the live stream and start broadcasting

That’s it – you’re ready to start live transcoding your event.


Read More

You don’t need transcoding

Well, not always. Sometimes muxing might be a better option.

Muxing is the process of packing encoded streams into another container format while preserving your video and audio codecs. There is no actual transcoding or modification of your video streams – it just changes the outermost video shell.


Muxing at its finest


A few days ago we added a new preset in Panda called “HLS Muxing Variant”. You can easily guess what it does with the input video. The most important thing about transmuxing is that it takes less time compared to traditional encoding with “HLS Variant”, as it doesn’t change resolution, bitrate, etc. That’s why we priced it as low as ¼ of a standard video minute, no matter the size or resolution of the source video.

It may sound complicated, so here’s a real-life example. Let’s assume you have an HQ source video encoded as H.264/AAC with a 2000k bitrate. Re-encoding is always time-consuming and impacts quality, so you can use transmuxing to change only the format. You may say that HLS is an adaptive streaming technology, so you need more than one bitrate. You’re right! You can create two other profiles for 1000k and 500k, and a variants playlist as well.


# One call per profile, e.g. via the Panda Ruby gem's Panda.post helper:

Panda.post("/profiles.json",
  :preset_name => "hls.muxer",
  :bitrate     => 2000,   # these three values feed the variants playlist
  :width       => 1280,
  :height      => 720)

Panda.post("/profiles.json",
  :preset_name   => "hls.variant",
  :video_bitrate => 1000)

Panda.post("/profiles.json",
  :preset_name   => "hls.variant",
  :video_bitrate => 500)

Panda.post("/profiles.json",
  :preset_name => "hls.variant.playlist",
  :variants    => "hls.*")

Now you can send your HQ source video to Panda. The output will be one master playlist, three variant playlists and three groups of segments (plus some screenshots). With these in place you are ready to serve your adaptive streaming content.
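For reference, the resulting master (variants) playlist would look roughly like this – the file names and BANDWIDTH values below are made up for illustration:

```
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=2000000,RESOLUTION=1280x720
hls_2000k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1000000
hls_1000k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=500000
hls_500k.m3u8
```

The player reads this playlist, then picks whichever variant best matches the viewer’s bandwidth.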

Give it a try. If you have any problems remember that we are here for you and we are always happy to help.

Read More

Easier, faster, better looking & still secure – API Tokens

If you’ve ever had to access the Panda API by crafting raw HTTP requests or writing your own Panda client library, you know how annoying request signatures can be. They make communication very secure, but they can be very inconvenient.

Building a signature was quite a complex, error-prone task. And debugging wasn’t the most pleasant thing on earth either, as the number of possible mistakes was huge. Each of them manifested in the same way – an error message saying that a signature mismatch had occurred.

Wouldn’t it be great to have another authorization method whose usage would be as simple as copying and pasting a string? Without compromising security. One simple enough to make querying Panda from command-line tools actually viable?

It bothered us as well, so we decided to put some time into making everyone’s life a bit easier. We came up with a solution that’s used by a number of payment platforms – and those guys usually do care about security. If you’re using the Panda API, you can now authorize yourself with an API Token instead of a signature.

There is one unique auth token per encoding cloud in Panda. You can check the API Token for each cloud in our web application and generate a new one if needed.

API Token view

And now we can finally do what other services have been bragging about for a long time: curl examples. YAY!

That’s how you send a file to Panda now (more examples in our docs):

curl -X POST -H "Content-Type: multipart/form-data" -F "file=@/path/to/file/panda.mp4" "http://api.pandastream.com/v2/videos.json?token=clou_lCTyUrw5eapr3rVE5vTOwlgxW&file=panda"


{
   ...
   "created_at":"2015/07/17 15:33:47 +0000",
   "updated_at":"2015/07/17 15:33:48 +0000",
   ...
}
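The same token scheme works from any HTTP client – the token simply rides along as a query parameter, as in the curl call above. A minimal Ruby sketch (the token value is a placeholder):

```ruby
require "uri"

# Build a tokenized Panda API URL; the token replaces the old
# timestamp-and-signature dance.
def panda_url(path, token, params = {})
  query = URI.encode_www_form(params.merge("token" => token))
  "https://api.pandastream.com#{path}?#{query}"
end

url = panda_url("/v2/videos.json", "YOUR_API_TOKEN", "file" => "panda")
# => "https://api.pandastream.com/v2/videos.json?file=panda&token=YOUR_API_TOKEN"
```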


That’s all folks. Have a great weekend!

Read More

Closed Captions… what, why and how?

Closed captions have become an inseparable part of any video. They make it possible to watch Scandinavian independent cinema, and help the hearing impaired experience Game of Thrones as well as it gets. We all benefit from them.

Most video players have an option to load subtitles from a file. However, that means that if you want to deliver a video with subtitles to your client, you have to send not only the media file but the subtitle files too. What if they get mixed up? How can you be sure you’ve sent all the available subtitle files? Fortunately, there are other ways.

The first option is to burn subtitles into every frame of the video. Sometimes this is needed for devices that can’t render subtitles themselves – old TVs are a good example. But that doesn’t mean we should be limited by old technology. Of course not. The second option is closed captioning, which lets you put multiple subtitle tracks into one video file. Each of them is added as a separate subtitle track, so anyone who downloads a video with closed captions embedded can select which one to use, or disable them if they’re not needed.

Closed captions are a must-have these days and we didn’t want to be left behind. So, there’s a new parameter in the H.264 preset which enables closed captioning. At the moment it is accessible only through our API, but we are working on adding it to our web application. The parameter name is ‘closed_captions’ and the value can be set to:

  • ‘burn’ – Panda will take the first subtitle file from the list and burn its subtitles into every frame
  • ‘add’ – Panda will put each subtitle file from the list into a separate track

Here’s a Ruby snippet showing how to use it:

    # Create a profile with closed captions enabled, then encode a video
    # with it (via the Panda Ruby gem's Panda.post helper):
    Panda.post("/profiles.json",
      :preset_name     => "h264",
      :name            => "h264.closed_captions",
      :closed_captions => "add")

    Panda.post("/videos.json",
      :source_url => "VIDEO_SOURCE_URL",
      :profiles   => "h264.closed_captions")

Panda supports all major subtitle formats – SRT, DVD, MicroDVD, DVB, WebVTT and many more.

Thank you!

Read More

Panda adds audio streams packing

Video files can contain more than just video; they usually have some kind of audio too, and sometimes subtitles or other data as well. Video processing is not an easy task, but one thing about it is certainly easier than with audio: there’s almost always just one video stream to worry about, while there can be multiple audio streams. Good old stereo means having two audio streams, and nowadays we’ve gone much further than that. We often have a small platoon of speakers around the couch, and each one of them might get its own audio stream.

A single container format (like MP4, MKV or AVI) can hold multiple audio streams. When transcoded, these streams can be left as they are, but sometimes you may need to map them to different speakers, merge or duplicate them, or even add muted streams. All these things have always been possible in Panda through raw FFmpeg commands, but recently we decided to make it a bit simpler.

Now you can decide, with a single checkbox, whether Panda should merge audio streams. The default behavior is to leave the stream mapping untouched. However, if you choose to merge, Panda will intelligently pack the streams as audio channels into a single stream – 2 input streams will be packed as one stereo stream, 6 streams as one 5.1 surround stream, and so on.

Let us know if you have any questions.

Read More

It’s a go! Live Transcoding in Panda.

For the last few months we’ve been super busy at Panda. Thousands of Golang lines later, we’re ready to present a completely new product – Live Transcoding.

Live streaming is all the rage

We don’t need to tell you that live streaming is a hot topic that has by far outgrown its traditional boundaries. The stuff you’d instinctively associate with it – like sports events or TV shows – is still there, but now there are tons of new things on the list, like watching DOTA championships online, or Notch live-coding his games for Ludum Dare. With Meerkat and Periscope, live streaming has recently gone social.

We’ve been operating Panda as a VOD-focused product, building up solid experience with different video formats and standards. With a team of that knowledge, we feel pretty confident taking on live streaming.

We’re not file delivery guys – there are people who are better at that (with whom we cooperate, but more on that later) – but we’re willing to accept any transcoding challenge, since we’re focused, well… on transcoding.

Live transcoding is a key piece of delivering quality streaming. Without it, you wouldn’t be able to do adaptive bitrate streaming and would have to serve only one stream variant. Only a small part of your viewers would then be able to actually see the footage uninterrupted, leaving the rest with video that would either be jittery or not work at all.

Adaptive bitrate is key

Adaptive bitrate is probably the most important reason why people need live transcoding. A transmission that employs the adaptive bitrate technique would use a few variants of the stream, each requiring different bandwidth, so the viewers receive the one that’s best suited for their device and network.

The most popular standard for adaptive bitrate nowadays is Apple’s HLS. It’s relatively simple, based on HTTP, and almost ubiquitously supported. That’s why we chose to support it from the very beginning.

The other important standard is RTMP, which is a protocol developed by Adobe for video in Flash. It’s not based on HTTP and is a bit more complex but still very popular. Even if the stream is eventually delivered over HLS, RTMP is often used as a lingua franca protocol by software that lives in the backend. That’s the technology you’d most likely use to feed the stream from your computer, camera or console.

Help us help you

With RTMP and HLS you’re basically fully equipped to do adaptive live streaming, and that’s what Panda’s beta program is offering you. A lot more will come in the future – including MPEG-DASH – but these two easily cover 90% of use cases.

Take it for a spin, tell us what you think so we can make it better.



Read More

How profiles pipelining makes your life easier

Have you ever wondered if there is a way to encode your video and then use the encoded version as input to a new encoding? So far it hasn’t been available off the shelf, but it has been possible to achieve using our notification system. But why should our customers have to take care of it themselves?

So, what is profiles pipelining?

Let’s say you want to send a video to Panda and encode it using 3 profiles: “h264.1”, “h264.2”, “h264.3”; then you want the video created with profile “h264.2” to be encoded using profiles “h264.4” and “h264.5”. You also want the output created with profile “h264.3” to be encoded using profile “h264.6”. But that’s not the end – to make it harder, you also want to encode the video created with profile “h264.5” using “h264.7”. Uhh, it can be hard to imagine what’s going on, so for simplicity the image below shows what I mean.


Pipeline Example
Example pipeline



First we describe the pipeline as nested JSON, then send the request to Panda. Below is example Ruby code:

# Nested hash: each key is a profile, each value is the set of profiles
# to run on that profile's output.
pipeline = {
  "h264.1" => {},
  "h264.2" => {
    "h264.4" => {},
    "h264.5" => {
      "h264.7" => {}
    }
  },
  "h264.3" => {
    "h264.6" => {}
  }
}

Panda.post("/videos.json",
  :source_url => "SOURCE_URL_TO_FILE",
  :pipeline   => pipeline.to_json)


Now, when an encoding is done, Panda checks whether there is anything more to do next in the pipeline. If, for example, the encoding to “h264.2” is done, Panda will create a new video from it and encode that video using profiles “h264.4” and “h264.5”, and so on. If you want to know the ID of the video created from a particular encoding, just ask Panda about it when it’s done – the Encoding object will contain an additional field, “derived_video_id”.
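It’s easy to lose track of the nesting, so here are a few lines of local Ruby – not part of the Panda API or gem – that flatten the pipeline hash into parent → child edges in depth-first order, which can help you sanity-check a pipeline before sending it:

```ruby
# Flatten a nested pipeline hash into [parent, profile] pairs,
# in depth-first order.
def edges(pipeline, parent = "source", acc = [])
  pipeline.each do |profile, children|
    acc << [parent, profile]
    edges(children, profile, acc)
  end
  acc
end

pipeline = {
  "h264.1" => {},
  "h264.2" => { "h264.4" => {}, "h264.5" => { "h264.7" => {} } },
  "h264.3" => { "h264.6" => {} }
}

edges(pipeline).each { |from, to| puts "#{from} -> #{to}" }
```

For the example above this prints seven edges, one per encoding, including the nested `h264.5 -> h264.7` step.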

If you have any problems with this new feature don’t forget that we are always here to help you.

Take care!

Read More

Panda’s new clothes

The neglected child…

A new user interface for our web application has been long overdue. We focused so hard on improving Panda’s core features that we neglected the UI. A bit. Today is the day to make it up to you: a new, updated and better Panda UI has arrived.

You’re probably very well aware that the old UI was okay-ish but far from perfect. We decided to take it easy and choose evolution instead of revolution. Changing users’ habits and workflows is always a sensitive and tricky business. That’s why we think it makes more sense to introduce changes gradually.


Video list view



Our main goal for the first rollout was to deliver a cleaner, simpler UI that makes better use of screen space. Based on your feedback, we added small changes for those of you who transcode large volumes. We’ve added a simplified list view for videos and profiles to make them easier to browse. We’ve also unified the application’s behavior, to make sure configuration of key Panda features is always done the same way.


Profiles list view



The front-end piece of the app is now based on the well-known and proven AngularJS framework.

What’s next?

Expect more changes in the coming weeks. We’re working on better, more detailed encoding analytics. That’s one of the most requested improvements and we’re happy to oblige.

The current console will get an overhaul as well to make it more useful. A brand new piece of Panda – Live Transcoding – will be getting its own piece of UI (it’s in beta now). And of course, there’ll be a number of small tweaks and improvements that may go unnoticed at first but will make your work with Panda both easier and more fun.

We would love to hear what you think. What could have been done better? Did we miss something?

Stay tuned!

Read More

On bears and snakes. Panda has updated Python library.

We usually don’t want to deal with complicated APIs, protocols and requests. A straightforward, clear way of doing things is preferred, and it’s usually best to hide raw communication and all the technical details behind a simple interface. The structural organization of most successful systems is based on several layers of abstraction. The higher you are, the less control over things you have – but that level of control is often dispensable in favor of simplicity.

Panda communicates with the rest of the world through several endpoints, related to the particular entities it works with: clouds, notifications, videos and so on. Each of these endpoints can be reached using HTTP requests. Depending on their type (POST, GET, DELETE or PUT) and arguments, various operations are executed – modifying an existing profile, deleting a video or creating a new cloud. All these requests need proper timestamps and the right signature to pass verification. To save you from managing it all on your own, several client libraries are available.

New Python library

We just wanted to let you know that the Python library for Panda has been updated to make integrations much easier. So far it only offered basic functionality like signature generation. You still had to provide both an endpoint location and an HTTP method to send a request, and then parse the returned JSON data on your own. That’s no longer needed in the next version of the package, which introduces a new, simpler interface based on the one provided by the Ruby gem. Returned information is now stored in dictionary-like objects, which makes it easier to inspect. Also, you no longer have to input API endpoint locations and the proper HTTP method types to interact with your data.

Resumable upload is there

Finally, support for resumable uploads was added. If you send a file using a basic POST request, you have no way of resuming the upload after a connection failure. That’s especially annoying if it happens at the end of uploading a large multimedia file: even though several gigabytes have already been sent, you have to start all over again.

Panda offers another, much better approach, and allows you to create an uploading session. The old version of the library only returned the endpoint address and left all the work up to you. The new one is capable of managing the session using a simple, easy-to-remember set of methods. You no longer have to calculate offsets and positions in a multimedia file to ensure that it is sent in one piece.

Backward compatibility with the previous version is preserved, too. If you prefer, you can still use the old way and call specific HTTP methods manually.

With the new library, and thanks to the power of Python, you can easily write clear, robust, elegant and maintainable code. And that’s the fun part, isn’t it?

GitHub repo and examples

Read More