Timestamps in Panda videos

The following post explains how to use timestamps in Panda using our API. Interested in having this feature added to the web interface? Request beta access.

In Panda, it’s easy to set up the encoding pipeline with presets developed by our team – with just a few clicks, it’s possible to configure profiles for the most popular audio and video formats on the web. That convenience comes with some caveats, such as limits on how much you can configure.

FFmpeg and corepack-3

You might have already stumbled across a chapter in our documentation titled “FFmpeg Neckbeard”. It describes how you can create encoding profiles by specifying the whole FFmpeg command yourself, which means that everything our FFmpeg can do is available to you. We’ve recently added the FreeType library to corepack-3 (our newest stack), which made a few new things possible. One of them is adding timestamps to videos, which can be done through FFmpeg’s “drawtext” filter.

Play time

Okay, so let’s create a new profile that, besides doing some typical transcoding, adds a small timestamp in the top-left corner. The filter that does this is configured by passing the following argument to FFmpeg:

-vf "drawtext=fontfile=/usr/fonts/FreeSans.ttf:timecode='00:00:00:00':r=25:x=5:y=5:fontcolor=black"

The “drawtext” filter takes a few arguments that tell FFmpeg how the timestamp should be rendered; the full list is available in FFmpeg’s documentation. And, as you can see, it needs a font. The example above uses “FreeSans.ttf” – one of the fonts from the GNU FreeFont library. The whole collection is available in the /usr/fonts/ directory, so the following values of “fontfile” will work on Panda:

/usr/fonts/FreeMono.ttf
/usr/fonts/FreeMonoBold.ttf
/usr/fonts/FreeMonoBoldOblique.ttf
/usr/fonts/FreeMonoOblique.ttf
/usr/fonts/FreeSans.ttf
/usr/fonts/FreeSansBold.ttf
/usr/fonts/FreeSansBoldOblique.ttf
/usr/fonts/FreeSansOblique.ttf
/usr/fonts/FreeSerif.ttf
/usr/fonts/FreeSerifBold.ttf
/usr/fonts/FreeSerifBoldItalic.ttf
/usr/fonts/FreeSerifItalic.ttf

Now we can apply this knowledge to a profile in Panda. There are plenty of examples of creating new profiles in “FFmpeg Neckbeard”, and adding a timestamp is as simple as adding the filter argument to the command. The important thing is to use corepack-3 – “drawtext” will not work with older stacks. Here’s how this looks in Ruby:

require 'panda'

Panda.configure do
  access_key "your_access_key_123"
  secret_key "your_secret_key_42"
  cloud_id "id_of_the_target_panda_cloud"
  api_host "api.pandastream.com"
end

drawtext_args_map = {
  :fontfile => "/usr/fonts/FreeSans.ttf",
  :timecode => "'00\\:00\\:00\\:00'", # timestamp offset; double backslashes keep the colons escaped for FFmpeg
  :r => "25", # FPS of the timestamp, for 1:1 ratio it should be equal to the FPS of input videos
  :x => "5", # x and y specify position of the timestamp
  :y => "5",
  :fontcolor => "black",
}

drawtext_args = drawtext_args_map.to_a.map { |k, v| "#{k}=#{v}" }.join(":")

Panda::Profile.create!({
  :stack => "corepack-3",
  :name => "timestamped_videos_v2",
  :width => 480,
  :height => 320,
  :video_bitrate => 500,
  :audio_bitrate => 128,
  :extname => ".mp4",
  :command => "ffmpeg -i $input_file$ -threads 0 -c:a libfaac" \
              " -c:v libx264 -preset medium $audio_sample_rate$" \
              " $video_bitrate$ $audio_bitrate$ $fps$ $filters$" \
              " -vf \"drawtext=#{drawtext_args}\" -y $output_file$"
})
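Once the profile exists, any video encoded with it will come out timestamped. As a quick sketch of kicking off an encoding (assuming the panda gem’s Video resource behaves like Profile above; the source URL is hypothetical):

Panda::Video.create!({
  :source_url => "http://example.com/videos/input.mp4", # hypothetical input video
  :profiles => "timestamped_videos_v2" # the profile created above
})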

The result (the original video is on pandastream.com):

That is nice, but we can do better. The timestamp could be more visible and in a better position – using different “drawtext” switches, we can add a background to the timestamp and place it near the bottom, centered. With drawtext’s built-in variables we can even do this independently of the video’s dimensions. The following does exactly that:

drawtext_args_map = {
  :fontfile => "/usr/fonts/FreeSans.ttf",
  :timecode => "'00\\:00\\:00\\:00'",
  :r => "25",
  :x => "(w-tw)/2", # w - width, tw - text width
  :y => "h-(2*lh)", # h - height, lh - line height
  :fontcolor => "white",
  :fontsize => "18",
  :box => "1",
  :boxcolor => "black@1", # 1 means opaque
  :borderw => "5",
  :bordercolor => "black@1"
}
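Joined with the same helper as before, this hash produces the following filter argument:

-vf "drawtext=fontfile=/usr/fonts/FreeSans.ttf:timecode='00\:00\:00\:00':r=25:x=(w-tw)/2:y=h-(2*lh):fontcolor=white:fontsize=18:box=1:boxcolor=black@1:borderw=5:bordercolor=black@1"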

And the final result:

If you have any questions on this subject, send a note to support@copper.io. We’re happy to help.

Panda Now Supports Google Compute Engine – Experience Speed Increases of Up to 50%

Here at Copper, we’ve been busy working to bring you the best performance for your app. We’re really excited to announce that we have added Google Compute Engine as an encoding option to Panda.

Google Compute Engine Benefits

With Panda deployed on Google Compute Engine, the result is astounding. Encoding videos can be up to twice as fast as previous options, and our users can now experience speed increases of between 30% and 50% for standard encodings within Panda.

Google Cloud Storage

When users upload video to Panda and choose Google Compute Engine, it encodes via GCE and then we transfer it to your Amazon S3 account, just like always. In the near future, we’re going to add support for Google Cloud Storage for those of you who would prefer to use it. If you are running on Google Cloud exclusively, you’ll soon be able to leverage its full benefits with Panda.

Panda optimized for Google Compute Engine

You’ve been asking for multi-cloud support, and we’re delighted to have worked with our partners at Google to get here. We completely reimagined our deployment process to support GCE, taking advantage of the inherent infrastructure differences between public cloud vendors. Panda was originally built on certain features of the AWS infrastructure, so we went back to the core and rewrote them to optimize for GCE. And now you can see the result of this in even faster encodings.

We’re thrilled to be able to bring you the best in cloud video encoding. If you have any ideas or suggestions to help us do this, let us know at support@copper.io.

Panda Adds Support for Retina with Apple Live Streaming

We’ve now made Panda even better for developers who want to implement Apple live streaming through HTTP Live Streaming (HLS). We’ve added an additional “RESOLUTION” tag so that the video player can choose the best stream resolution. The tag is optional, but recommended if the variant stream includes video.
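For example, a master playlist advertising two variant streams with the RESOLUTION attribute might look like the sketch below (bandwidth, resolution, and playlist names are illustrative):

#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=500000,RESOLUTION=480x320
low/index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1400000,RESOLUTION=960x640
high/index.m3u8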

What is HTTP Live Streaming?

HTTP Live Streaming (HLS) allows you to stream live and on-demand video and audio to an Apple mobile device. As an adaptive streaming protocol, HLS has several advantages:

  • Multiple bit-rate encoding for different devices

  • HTTP delivery

  • Segmented stream chunks for live streams over widely available HTTP infrastructure

How does HTTP Live Streaming work?

HLS lets you send streaming video and audio to any supported Apple product. It segments video streams into 10-second chunks that are stored in a standard MPEG-2 transport stream container. HTTP Live Streaming also provides for media encryption and user authentication, allowing publishers to protect their work.
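Encryption is declared in the stream’s index file itself. As a sketch, an AES-128 key declaration looks like this (the key URI is hypothetical):

#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/keys/key1"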

How does a stream get into HLS format?

  1. The video and audio source is encoded to an MPEG-2 transport stream container, with H.264 video and AAC audio, which are the codecs that Apple devices support

  2. Output profiles are created (a single input stream may be transcoded into several output resolutions depending on the device)

  3. The streams are segmented, and an index file is used to keep track of the segments (a sample index file is shown below)
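A segment index file is a plain-text playlist that lists the chunks in order. Here’s a minimal sketch (segment names and durations are illustrative):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXTINF:10.0,
segment2.ts
#EXT-X-ENDLIST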

How do HLS downloads work?

The user downloads the index file via a URL that identifies the stream. For a given stream, the client then fetches each stream chunk in order. Once the client has enough of the stream downloaded and buffered, playback begins. Remember to use the <video> tag to embed HTTP Live Streaming, and use the <object> or <embed> tags only to specify fallback content.
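Putting that last point into practice, a minimal embed might look like this sketch (the playlist URL is hypothetical):

<video src="http://example.com/streams/index.m3u8" controls>
  <!-- Fallback content for browsers without native HLS support,
       e.g. an <object> or <embed> element, goes here. -->
</video>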