
Closed Captions… what, why and how?

Closed captions have become an inseparable part of any video. They make it possible to watch Scandinavian independent cinema, and they help the hearing impaired experience Game of Thrones as fully as anyone else. We all benefit from them.

Most video players have an option to load subtitles from a file. However, that means that if you want to deliver a video with subtitles to your client, you have to send not only the media file but the subtitle files too. What if they get mixed up? And how can you be sure you have sent the client every available subtitle file? Fortunately, there are other ways.

The first option is to burn the subtitles into every frame of the video. This is sometimes necessary for devices which can’t render subtitles on their own; old TVs are a good example. But does that mean we should be limited by old technology? Of course not. The second option is closed captioning, which allows multiple subtitles to be embedded in a single video file, each added as a separate subtitle track. Anyone who downloads a video with closed captions embedded can then select which track to use, or disable them entirely if not needed.

Closed captions are a must-have these days and we didn’t want to be left behind. So, there’s a new parameter in the H.264 preset which enables closed captioning. At the moment it is accessible only through our API, but we are working on adding it to our web application. The parameter name is ‘closed_captions’ and its value can be set to:

  • ‘burn’ – Panda will take the first subtitle file from the list and render its subtitles onto every frame
  • ‘add’ – Panda will put each subtitle file from the list into a separate track

Here’s a Ruby snippet showing how to use it:

# Create a profile based on the H.264 preset with closed captioning enabled
Panda::Profile.create(
    :preset_name => "h264",
    :name => "h264.closed_captions",
    :closed_captions => "add"
)

# Encode a video with that profile, passing the subtitle files to embed
Panda::Video.create!(
    :source_url => "VIDEO_SOURCE_URL",
    :subtitle_files => ["SUBTITLE_1_SOURCE_URL", "SUBTITLE_2_SOURCE_URL", "SUBTITLE_3_SOURCE_URL"],
    :profiles => "h264.closed_captions"
)

Panda supports all major subtitle formats like SRT, DVD, MicroDVD, DVB, WebVTT and many more.

Thank you!

Panda Corepack-3 grows bigger and better

As you probably know, you can create advanced profiles in Panda with custom commands to optimize your workflow. Since we are constantly improving our encoding tools, an update could sometimes result in custom commands not working properly. Backwards compatibility can be tough to manage, but we want to make sure we give you a way to handle this.

 

That’s why we made it possible to specify which stack to use when creating a new profile. Unfortunately, the newest one – corepack-3 – used to contain only one tool, FFmpeg. That was obviously not enough and had to be fixed, so we extended the list.
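For example, pinning a profile to a given stack might look like this (a sketch only – the :stack key is an assumption, so check the API docs for the exact parameter name):

# Hypothetical sketch – the :stack key is an assumption
Panda::Profile.create(
    :preset_name => "h264",
    :name => "h264.corepack3",
    :stack => "corepack-3"   # pin this profile to the corepack-3 tool stack
)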

 

What’s in there, you ask? Here’s a short summary:

  • FFmpeg – a complete, cross-platform solution to record, convert and stream audio and video.
    http://ffmpeg.org
  • Segmenter – Panda’s own segmenter that divides an input file into smaller parts for HLS playlists.
  • Manifester – used to create m3u8 manifest files.

 

Of course, this list is not final, and we’ll be adding more tools as we go along. So, what would you like to see here?

 

A case for MPEG DASH

In the ever-competitive IT world, many rival groups of skilled developers independently try to solve the same problems and implement the same concepts. This usually results in a vast choice of possible solutions that share a lot of common traits. This abundance of techniques, methods and protocols is one of the things that enabled the rise of modern software.

However, it can also be a burden, because one needs to support multiple technologies instead of being able to focus on one. The worst thing that can happen is a set of incompatible mechanisms that need to be served separately within the application, language or library. A good example is the legendary browser wars of the 90s. Both Microsoft and Netscape developed their own unique features that weren’t supported in the competitor’s product, which brought a ton of problems for web developers who wanted their web pages rendered the same way everywhere. Even now it’s common to use JavaScript libraries like YUI and jQuery to fix issues related to legacy browsers.

That’s why standards are important

They provide a well-defined core that needs to be implemented by all vendors, which makes the constant struggle for portability a bit easier. A standard shifts the responsibility: the developer no longer has to worry about every possible type of user software or include tests for special cases. They don’t need to write extra code just to handle a single task that’s done differently in different environments, and they can improve support for a single protocol instead of working with five. It’s now the vendor’s job to provide a product that works with code compliant with the specification.

Unfortunately, creating a standard is not a simple task, and satisfying all the needs and cases raises a lot of problems. A clash between proposals is hard to avoid, and it takes time for one victorious solution to emerge and dominate the market.

Divided world of adaptive bitrate streaming

Such strife can be observed now in the world of multimedia streaming techniques. There are three competing HTTP-based methods, referred to as adaptive bitrate streaming: Apple’s HLS, Microsoft’s HSS and Adobe’s HDS. These three provide a way to transmit multimedia with a bitrate that can be changed dynamically, depending on network bandwidth and hardware capabilities.

They are similar but occupy different parts of the market. HSS is present in Silverlight-based applications, HLS is in common use among mobile devices, and HDS is a popular companion of Flash on the desktop. It would be a lot easier for developers to have one common technology to support instead of three separate ones. That’s why there have been attempts to standardize adaptive bitrate streaming.

Enter MPEG DASH

The MPEG group, a major organization that contributes commonly used multimedia standards, introduced its own version of HTTP-based streaming called MPEG DASH, which now strives to become the dominant method for delivering rich video content. Right now MPEG DASH is far from being the champion and the only preferred choice. HDS, HLS and HSS are still commonly used across the Internet, and it’s hard to predict whether DASH will prevail. All that’s certain is that it should not be ignored, and there’s little merit in waiting for a winner to rise victorious from this clash of technologies. That’s why we decided to enhance Panda with DASH support.

We provide this feature through a new preset that is available in the profiles list. Similar to the HSS support we introduced recently, there are two ways the encoded set of output files can be stored in the cloud. If the default .tar extension is kept, both the multimedia files and the XML manifest with all the necessary metadata are archived into a single file, which can later be downloaded and unpacked. Alternatively you can choose .mpd, which makes Panda upload all the output files to the cloud separately.

Another important decision is the set of output bitrates. The default setting produces bitrates of 2400k, 600k, 300k and 120k. Changing the video bitrate through the preset settings panel results in bitrates equal to the value you set, plus 1/2, 1/4 and 1/8 of it (just like with HSS, which we introduced before).
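Creating such a profile through the API might look roughly like this (a sketch only – the preset identifier and option keys are assumptions, so check the profiles list and the API docs for the exact names):

# Hypothetical sketch – preset and option names are assumptions
Panda::Profile.create(
    :preset_name => "dash",      # the MPEG DASH preset
    :name => "dash.default",
    :extname => ".tar",          # or ".mpd" to upload output files separately
    :video_bitrate => 2400       # also yields 1200k, 600k and 300k variants
)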

To test your output you can use any DASH-compatible media player.

This new preset allows you to adapt our product to your needs more flexibly. There is a saying that “nobody ever got fired for buying IBM equipment”, because when in doubt one should choose the industry standard. If you want to provide your application with the features that modern streaming techniques offer, then choosing MPEG DASH might be a good option. Panda is there to help get your videos encoded the right way.

Panda adds streaming with Microsoft HSS

You could have the most astounding video content, in high resolution and with amazing quality, enhanced with all sorts of special effects and advanced graphical filters – it doesn’t matter if you aren’t capable of delivering it to your consumers. Their connection speed is often limited, and they might not have enough bandwidth to receive all these megabytes of rich multimedia data. While our networks are improving at an astonishing rate, they’re still the main bottleneck of many systems, as file sizes rise rapidly with better resolutions and bitrates. You can add several more cores to your servers to increase their computing power, but you can’t alter the Internet infrastructure of your users. You have to choose: send them high definition data, or sacrifice quality to make sure the experience is smooth.

Continuous streaming vs Adaptive bitrate

The most obvious solution is to prepare several versions of the same video and deliver one of them depending on the user’s bandwidth. In the past this was the standard approach: once the choice was made, files were streamed progressively, from beginning to end, just like images in modern web browsers.

This method has several disadvantages. The biggest one: you cannot dynamically switch between versions in the middle of the transmission to react to changes in network load. If the connection improves, you can’t take advantage of it, and you have to keep sending the lower quality despite having resources available. Even worse, you can’t prevent congestion if the transmission speed decreases – you either cancel the entire process or end up with laggy video. With continuous streaming you also can’t skip part of the multimedia and jump ahead until the download reaches the desired moment, nor can you rewind quickly.

To fix these issues, a better, more flexible solution is needed. For a long time the preferred choice was Adobe RTMP (Real-Time Messaging Protocol) used together with Adobe FMS (Flash Media Server). It was complex, and it became problematic in the era of mobile devices, since their support for Flash-based technologies is mediocre at best. This allowed HTTP-based protocols to emerge and dominate the market.

These technologies split video and audio into smaller segments which are encoded at different bitrates. This lets the client dynamically choose the optimal data based on current connection speed and CPU load. It’s called adaptive bitrate streaming.
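Conceptually, the player’s choice at each segment boundary boils down to something like this (an illustrative Ruby sketch, not any particular player’s code):

# Pick the highest-bitrate variant that fits the measured bandwidth,
# keeping some headroom so playback doesn't stall on fluctuations.
def pick_variant(variants_kbps, measured_kbps, headroom = 0.8)
  affordable = variants_kbps.select { |v| v <= measured_kbps * headroom }
  affordable.max || variants_kbps.min  # fall back to the lowest variant
end

pick_variant([2400, 600, 300, 120], 1500)  # => 600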

HTTP has a number of advantages compared to RTMP:

  • it’s a well known, simple, popular and universally applied protocol
  • it can use caching features of content delivery networks
  • it traverses firewalls much more easily.

Microsoft HSS, an underrated protocol

As of now there is no single standard HTTP-based protocol; instead there are several implementations by different vendors. One of them, Apple’s HLS, has been available in Panda for a long time. Now we’re adding another one – HSS (HTTP Smooth Streaming), a Microsoft technology which brings adaptive bitrate streaming to Silverlight applications. Even though Silverlight is not as popular as it used to be since the advent of HTML5 (it had over 50% market penetration in 2011), it’s still a widespread, common technology and a noteworthy rival of Flash.

To use HSS, a specialized server is needed. The most obvious choice would be Microsoft’s IIS, but there are modules for Nginx, Apache httpd and Lighttpd as well. After setting it up, together with a Silverlight player, you need to split your video files into data segments (files with the .ismv extension) and generate manifest files (.ism and .ismc extensions), which inform receivers what kind of content the server can deliver.

HSS preset in Panda

This is where Panda comes in handy as a convenient encoding tool. All you have to do is add the HSS preset to your set of profiles and configure it as needed to get a pack of converted files ready to deploy. The most important setting is the output file format. With the default ‘.tar’ extension, at the end of the encoding process you will receive a single uncompressed archive which contains all the necessary data. All that’s left is to unpack this archive into the selected folder of your video server and then provide your Silverlight player with a proper link to a manifest file. You can alternatively choose the ‘.ism’ format, which won’t archive the output; instead, the files will be sent to your cloud, from where you can use them any way you need.

 

Another important thing to consider is the video bitrate for your segments. The default settings produce segments with bitrates of 2400k, 600k, 300k and 120k. If you enter a custom value, your output will consist of segments with a bitrate equal to the provided value, plus 1/2, 1/4 and 1/8 of it. Finally, you may alter the standard set of options such as resolution and aspect ratio.
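For example, the resulting bitrate ladder for a custom value can be derived with simple arithmetic (illustrative only):

# Deriving the HSS segment bitrates from a custom base value (in kbps)
base = 4000
ladder = [base, base / 2, base / 4, base / 8]
# => [4000, 2000, 1000, 500]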

Now you can have the benefits of adaptive bitrate streaming even if your business uses Microsoft technologies. All you have to worry about is content quality, since the problems with delivery are becoming less of a burden, all thanks to the capabilities of the HTTP protocol.

 

Frame rate conversion with motion compensation

Here at Panda, we are constantly impressed with the requests that our customers have for us, and how they want to push our technology to new areas. We’ve been experimenting with more techniques over the past year, and we’ve officially pushed one of our most exciting ones to production.

Introducing frame rate conversion by motion compensation. This has been live in production for some time now, and is being used by select customers. We wanted to hold off until we saw consistent success before we officially announced it 🙂 We’ll try to explain the very basics to give you an intuition of how it works – however, if you have any questions about it, and how to leverage it for your business needs, give us a shout at support@pandastream.com.

Motion compensation is a technique that was originally developed for video compression, and it’s now used in virtually every video codec. Its inventors noticed that adjacent frames usually don’t differ much (except at scene changes), and used that fact to develop a better encoding scheme than compressing each frame separately. In short, motion-compensation-powered compression tries to detect movement that happens between frames and then uses that information for more efficient encoding. Imagine two frames:

Panda on the left…
…aaand on the right.

Now, a motion compensating algorithm would detect the fact that it’s the same panda in both frames, just in different locations:

First stage of motion compensation: motion detection.

We’re still thinking about compression, so why would we want to store the same panda twice? We wouldn’t – and that’s exactly what motion-compensation-powered compression exploits: it stores the moving panda just once (usually, it stores the whole frame #1), but adds information about the movement. Then the decompressor uses this information to reconstruct the remaining data (frame #2 based on frame #1).

That’s the general idea, but in practice it’s not as smooth and easy as in the example. The objects are rarely exactly the same, and usually some distortions and non-linear transformations creep in. Scanning for movement is computationally very expensive, so we have to limit the search space (and optimize the hell out of the code, even resorting to hand-written assembly).

Okay, but compression is not the topic of this post. Frame rate conversion is, and motion compensation can be used for this task too, often with really impressive results.

For illustration, let’s go back to the moving panda example. Let’s assume we display 2 frames per second (not impressive), but we would like to display 3 frames per second (so impressive!), and the video shouldn’t play any faster when we’re done converting.

One option is to cheat a little and just duplicate a frame here and there, getting 3 FPS as a result. In theory we could accomplish our goal that way, but the quality would suck. Here’s how it would work:

Converting from 2 FPS to 3 FPS by duplicating frames.

Yes, the output has 3 frames where the input had 2, but the effect isn’t visually appealing. We need a bit of magic to create a frame that humans would see as naturally fitting between the two initial frames – the panda has to be in the middle. That is a task motion compensation can handle: detect the motion, but instead of using it for compression, create a new frame based on the gathered information. Here’s how it should work:

Converting from 2 FPS to 3 FPS by motion compensation: panda is in the middle!
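To make the mapping concrete, here’s a toy Ruby sketch (not Panda’s actual converter) of how output frame times relate to source frames when going from 2 FPS to 3 FPS:

in_fps  = 2.0
out_fps = 3.0

(0...6).each do |n|
  t   = n / out_fps            # timestamp of the output frame
  pos = t * in_fps             # the same moment in source-frame units
  a, b = pos.floor, pos.ceil   # the two source frames surrounding it
  w = pos - a                  # how far between them the new frame sits

  if w.zero?
    puts "output #{n}: copy source frame #{a}"
  else
    # Frame duplication would just pick the nearer of a and b; motion
    # compensation instead shifts objects along the detected motion
    # vectors by the fraction w to synthesize the in-between frame.
    puts "output #{n}: synthesize between #{a} and #{b} (w = #{w.round(2)})"
  end
end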

 

These are the basics of the basics of the theory. Now for an example, taken straight from a Panda encoder. Let’s begin with how frame duplication (the bad guy) looks – for better illustration, we slowed the video down after converting the FPS, getting slow motion as a result:

[Video: slow-motion output produced by frame duplication]

See that jitter on the right? Yuck. Now, here’s what happens if we use motion compensation (the good guy) instead:

[Video: the same footage converted with motion compensation]

It looks a lot better to me: the movement is smooth and there are almost no visible video artifacts (maybe just a slight noise). But, of course, other types of footage can fool the algorithm more easily. Motion compensation assumes simple, linear movement, so other kinds of image transformations often produce heavier artifacts (they might still be acceptable, though – it all depends on the use case). Occlusions, refractions (water bubbles!) and very quick movement (meaning too much happens between frames) are the most common examples. Anyway, it’s not as terrible as it sounds, and still better than frame duplication. For illustration, let’s use a video full of occlusions and water:

[Video: original footage, full of occlusions and water]

Okay, now let’s slow it down four times with both frame duplication and motion compensation, displayed side by side. Motion compensation now produces clear artifacts (see those fake electric discharges?), but still looks better than frame duplication:

[Video: side-by-side comparison of frame duplication and motion compensation]

And that’s it. The artifacts are visible, but the unanimous verdict of a short survey in our office is: the effect is a lot more pleasant with motion compensation than with frame duplication. The feature is not publicly available yet, but we’re enabling it for our customers on demand. Please remember that it’s hard to guess how your videos will look when treated with our FPS converter, but if you’d like to give it a chance and experiment a bit, just drop us an email at support@pandastream.com.

SD television formats in Panda

The cutting edge of video transmission is moving quickly: HD television has been mainstream for some time and 4K is gaining traction; H.264 is ubiquitous and HEVC is entering the stage. Yet most people still remember VHS. It’s good to keep up with the latest tech, but unfortunately the world lags behind most of the time.

Television is a different universe from Internet transmission. The rules are made by big (usually government) bodies and rarely change. Although most countries have switched to digital transmission, standard definition isn’t gone yet – SD channels are still very popular, which forces content providers to support SD formats too.

Recently, we’ve helped a few clients craft transcoding pipelines that support all these retiring-yet-still-popular formats. We’ve noticed that it’s a huge nuisance for content makers to invest in learning old technology, and that they would love to offload that duty onto someone else; so we made sure that Panda (both the platform and the team) can deal with these formats flawlessly.

There’s huge variability among the requirements pertaining to SD: for example, you have to decide how the image should be fitted to the screen. High-quality downsampling is always used, but you have to decide what to do when the dimensions are off: should you use letterboxing, or maybe stretch the image?

Fiordland National Park, New Zealand (Nathan Kaso)

Another decision (which usually is not up to you) is which exact format should be used. This almost always depends on the country the video is for. Although the terms NTSC, PAL and SECAM come from the analog era (digital TV uses standards like ATSC and DVB-T), they are still used to describe encoding parameters in digital transmission (e.g. image dimensions, display aspect ratio and pixel aspect ratio). The country also affects the compression format; the most popular are MPEG-2 and H.264, though they are not the only ones.

Standard television formats also have specific frame rate requirements. It’s a bit different from Internet transmission, where the video is effectively a stream of images. SD TV transmission is interlaced, and instead of frames it uses fields (which contain only half the information that frames do, but save bandwidth).

Frame rate is therefore not a very accurate term here, but the problem is still the same – we have an exact number of frames/fields to display per unit of time, and the input video might not match that number. In such cases the most popular solution is to drop or duplicate frames/fields as needed, but the quality of videos produced this way is not great.

There is a better solution, though it’s complicated enough that we’ll just mention it here: motion compensation. It’s a technique originally used for video compression, but it also gives great results in frame rate conversion. It’s not only useful for SD conversions – we use it for different things at Panda – but it helps here too.

Well, that’s definitely not the end of the story. These are the basics, but the number of details that have to be considered is unfortunately much bigger. Anyway, if you ever happen to have to support SD television, we’re here to help! Supporting SD can be as easy as creating a profile in Panda:

Adding SD profile in Panda
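Through the API, an equivalent profile might look roughly like this (a sketch only – the preset identifier and option keys are assumptions, so check the profiles panel for the exact names; PAL SD itself is 720×576 at 25 interlaced frames, i.e. 50 fields, per second):

# Hypothetical sketch – preset and option names are assumptions
Panda::Profile.create(
    :preset_name => "mpeg2",     # a common SD broadcast codec
    :name => "sd.pal",
    :width => 720,
    :height => 576,
    :fps => 25,
    :aspect_mode => "letterbox"  # or stretch, depending on your requirements
)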

Video Marketing Can Be the Most Effective Way to Reach Your Audience


For the last decade, content marketing has been dominated by video streaming. Whether you’re a comedian posting funny videos to build a following or a business creating an informative product demo to help your viewers, choosing the right type of video message is crucial to boosting views and rankings. Here is a closer look at some of the most popular forms of video marketing for various content types.

Social Videos for Individual Messages or Projects

Streaming media is the foundation on which the social Internet runs. Over the last two years, Twitter and Instagram have piggybacked on the social video marketing success of YouTube and Facebook. Twitter released Vine, which allows users to post and share six-second videos, while Instagram added video capabilities to its regular feeds.

The benefit of choosing social video is that it has the ability to reach many people in a short amount of time. If your video is only 30 seconds to a minute long and designed to capture your viewer’s attention within the first five seconds, there’s a better chance of getting more views, likes, and shares.

This type of content marketing is great for short messages, entertainment (i.e., funny videos), and sales messages.

The Birth of the Online Film Is Giving New Life to Video Marketing

There is a misconception that people won’t take time out of their daily routines to watch a video that is more than three minutes long. YouTube was built around this belief and, up until a few years ago, was dominated by it. The Coca-Cola Company has paved the way for those needing to deliver a message that cannot be adequately expressed in under five minutes, but still want to reach their streaming media audiences.

The seven-minute animated video for their new car line was a mixture of important information, humor, and entertaining visuals. It was quickly embraced by viewers and sent across the social media world. Other companies have also taken this route. Some videos have breached the 10-minute mark, reaching up to almost half an hour in length.

This type of video marketing can be tough, but for those with an important message to deliver and a penchant for creativity, the online film may be the answer to avoiding traditional marketing routes. Online films are best suited for extended product sales pitches or for providing a visual checklist of content that appeals to your audience.

Panda Is The Foundation Of A Psychological Assessment Tool Used By The Department of Defense

Seeking a Developer-Friendly Video Encoding Solution

Adam Hasler builds digital products. He’s the lead developer at The Big Studio, a design-focused consultancy based in Boston. Not only does Adam engage in a lot of design, but he also does all the coding. Like other digital product leaders, Adam Hasler is first and foremost a developer and a designer. When it comes to other projects like video encoding, it’s usually out of scope for a typical day’s work.

Adam’s been working on an app that’s used by psychologists as an assessment mechanism. It’s the project of a psychologist who had been applying this assessment framework on paper, administering it to people that way.

Adam’s task was to build a video quiz where subjects could click on a video and give their feedback. The video quiz component would then record where they clicked, and allow them to explain why that moment resonated with them. Each subject’s feedback would then be compared to that of experts, to assess whether they could read a situation as well as the experts did.


“I needed to build a tool where subjects watching a video could say, ‘There, right there, that thing that happened is what I think is important,’” explains Adam Hasler. “In my first test build, I used a solution that involved uploading a video and running it through a script. It didn’t work. It was a disaster.”

Panda Is The Best Solution

To complete the project, Adam needed to build both a testing and an authoring component. Psychologists needed to be able to write the tests, so there were two user personas in that sense: a tester and a test taker. The tester would always be a psychologist, who wasn’t technologically savvy, so Adam had to make a really good test editing interface. Because of the nature of the project, he needed to:

  1. Upload videos
  2. Have them appear in different formats depending on the browser being used

“I discovered Panda through Heroku, and it ended up being the best solution,” says Adam. “With Panda, psychologists can author an assessment video by dragging and dropping it into a container I built. Panda uploads it, and collects the feedback. We don’t have to worry about uploading 4 different video file types, because Panda encodes the videos to work on different browsers.”


Panda Delivers Encoded Video To The Department Of Defense

Thanks to Panda, the project has been highly successful. One of the key user personas is the Department of Defense, which is testing subjects for their response to conflicts.

“Because of the interactivity, I needed more than a video on a page,” explains Adam. “With Panda, I get that beautiful little JSON object back with all the information I would need to make all the difference for this very little, key component. I love Panda! It made my life so much easier. I think it’s so cool.”

Re-architecting for *real* scale


On the surface, Panda is a pretty simple piece of software – upload a video, encode it into various formats, add a watermark or change frame rate, and deliver it to a data store.

Once you spend some time with it, it begins to show how complex each component can be – and how important it is to continuously improve each one.

Lucy Production Line

When Panda was first built, it worked beautifully, and it was quick! But as time went on and the volume of videos encoded per day increased, it became obvious that to keep pace with customers’ increasing speed requirements and maintain growth, core parts of the platform would need to be rethought.

We started looking at each component, piece by piece, to find bottlenecks, optimize throughput and keep operating expenses fair so we could retain our price leadership. Panda might be a software platform, but reading ‘The Goal’ by Eli Goldratt, a novel about a manufacturing plant, really reminded us of our own process. (It’s a great read, btw.)

In July we updated to the most current versions of Ruby and Go – and added a memory cache to tasks that were maxing out our instances. Then we tackled the big scale bottleneck – the job manager.

Our biggest bottleneck: the Job Manager

The Job Manager ensures that our customers’ video queues get processed as close to real-time as possible, and it distributes transcoding jobs to the encoder clusters. Whether it’s 2000 encoders with 8 CPU cores each or 1 encoder with 1 CPU core, it’s important that work is allocated correctly.

It monitors all encoding servers running within an environment, receives new jobs, and assigns them to instance pools.

The Panda Job Manager was a single-threaded Ruby process, which worked well for quite some time. Then we noticed it would start struggling during peaks, and we had to do something about it. We started looking at where we could optimize it, identifying each bottleneck one by one.

It was obvious that event processing was too slow in general, but before we even fired up a profiler, we managed to find a huge bottleneck just by looking at logs and comparing timestamps.

Redis Queue Architecture

A short digression: we use Redis queues for internal communication, and there was one such queue where all messages for the manager were sent. The manager was constantly polling this queue, and most of its work was based on the messages it received. Each encoding server had a queue in Redis too, and all these queues were used for communication between the manager and the encoders.

[Diagram: the single Redis queue shared by new jobs and manager/encoder messages]

Because a single Redis queue was used for new jobs as well as manager/encoders communication, huge numbers of the former were causing delays in the latter. And a slow down in internal communication meant that some servers were waiting unnecessarily long for jobs to be assigned.

Are Ruby and Redis the Answer?

The obvious solution was to split the communication into two separate queues: one for new jobs and another one for internal messaging. Unfortunately, Redis doesn’t allow blocking reads from more than one queue on a single connection.

We were forced either to implement a Redis client that would use non-blocking IO to handle more than one connection in a single thread, or to resort to multiple threads or processes. Writing our own client seemed like a lot of work, and Ruby isn’t especially friendly if you’d like to write multithreaded code (well, unless you use Rubinius).
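For illustration, here’s a minimal sketch of the multi-threaded variant we considered (assuming the redis gem; the queue names and the handle method are stand-ins, not our actual code):

require "redis"

# One thread per queue, each with its own connection, since a blocking
# read ties up the connection it runs on.
QUEUES = ["panda:new_jobs", "panda:internal_messages"]  # hypothetical names

threads = QUEUES.map do |queue|
  Thread.new do
    redis = Redis.new                     # dedicated connection per thread
    loop do
      _key, message = redis.blpop(queue)  # blocks until a message arrives
      handle(message)                     # stand-in for the manager's logic
    end
  end
end

threads.each(&:join)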

Before trying to solve that, we ran the manager under a profiler to get a clearer picture. It turned out that roughly 30% of the time was spent querying the database (jobs were saved, updated and deleted there), and the remaining 70% was just running Ruby code. Because we were a few orders of magnitude slower than we wished, optimizing just the database or just the Ruby code wouldn’t be enough (and we still had to solve the queues issue). We needed something more thorough than a simple fix.

Go baby, GO!

We started by rewriting the manager in Go. We didn’t want to waste time on premature optimization, so it was roughly a 1:1 rewrite; just a few things were coded differently to be more Go-idiomatic, but the mechanics stayed the same.

The result? The 70% that was previously spent running Ruby code dropped to about 1%! That was great – an almost 70% speed-up – but we were still nowhere near where we wanted to be.

Multithreading

Then we fixed the queues issue. With Go’s multithreading model it was so simple that it’s almost not worth mentioning – we even got message pre-fetching for free with a buffered Go channel (another goroutine polls Redis and pushes messages into the channel). And this was a huge kick – now we could handle more than 16,000,000 jobs per day per job manager.

We could have pushed it harder, but at this point we still hadn’t even started profiling our new Go code. Golang has great tools for profiling, so we worked through the bottlenecks rather quickly (it was the database almost every time). When we decided that was enough, we started testing… and we just couldn’t get enough EC2 instances to reach the manager’s limit. We ended at a bit less than 80,000,000 jobs per day, without even a sign of sweat visible on the manager.

The graph below shows the number of videos per day, projected from the number of videos processed within the last 30 minutes. We started at a bit more than 1,000,000, then switched to the Golang manager and got to the 80,000,000 limit – but there were no more jobs (we hit our EC2 spot limits while running the benchmark!), so we might have processed even more. Either way, it should be a safe number for some time.

[Graph: projected videos per day, before and after switching to the Golang manager]

The end result of this phase is a technical architecture that clears queues much faster and, for the same encoder price, delivers better throughput, greatly enhanced encoder bursting (especially good during the holiday season, when we often have customers that ratchet up activity by 100x!), and more automation. We’re not done yet – we have some fantastic features coming in 2015 that the new back-end enables us to deliver.

PS. Kudos should also go to Redis – it’s a fantastic, very stable and battle-tested piece of software. Big thanks, Antirez!

Do you have a suggestion, or some knowledge you’d like to share with us? We’d love to hear from you – get in touch at support@pandastream.com anytime (we’re 24×7).

Apple’s iPhone 6 and 6 Plus boast support for H.265

Apple released its flagship devices, the iPhone 6 and iPhone 6 Plus, a few weeks ago, and according to Tim Cook, it’s their biggest iPhone month ever.

Most analysts, fanboys, and tech reviewers are keen on the larger screen size, new processor, and how thin it is.

Here at Panda, on the other hand, we were delightfully surprised to see that the specs pages for both the iPhone 6 and 6 Plus say they utilize H.265 for encoding and decoding FaceTime.

As we said in our previous blog post, H.265, or High Efficiency Video Coding (HEVC), is said to match the quality of H.264 at half the bitrate. This would be a massive help for cellular networks, reducing bandwidth usage by up to 50%.

Interestingly, at today’s Apple event they announced the new iPad Air 2, but that device does not support H.265.

H.265 has yet to see wide adoption in the consumer device market, so perhaps the iPhone can blaze another trail, as it has done so well so far.

Send us a note to support@copper.io if you want to get started with H.265 video encoding.