Last year, we wrote about our experiences of a 4K shoot we carried out for stock footage library Pond5. Since then, we’ve done a number of other shoots using a fully 4K workflow, so I thought it would be a good time to share our thoughts on how these went, and our practical experiences storing, working with and outputting at this resolution. These apply to small-to-medium-sized production companies like ours; if you work at a large facility with hundreds of staff, the basics will still apply but you can probably add a zero or two onto the complexity of anything I mention here.

It’s also worth noting that although most people refer to “4K” as being double the resolution of HD, it’s not (quite). 4K technically refers to a frame that’s 4096 x 2160 pixels, whereas something that’s double the size of HD in each dimension – 3840 x 2160 pixels – is 4K UHD. Since there’s not much practical difference in the pixel count, and most people say “4K” when they mean UHD, I’ve used 4K as a catch-all term to cover both types of ultra high-definition.

Behind-the-scenes still from a pyrotechnics shoot with fire performer Naomi French

Background, or: why 4K?

There have been plenty of musings about whether 4K is here to stay, or whether it’s “just another 3D” (or – controversially – VR; check back in a few years to see whether VR was a flash in the pan, or whether it stuck). While the human eye hasn’t been upgraded in millennia, and VHS worked well enough for years for everyone to have entire libraries of 80s action movies which are now gathering dust in charity shops, 4K is now being pushed at us from all sides. Most TVs are now 4K-capable (via HDMI 1.4+, although only HDMI 2.0+ can carry 4K at more than 24 frames per second), although you’ll probably be hard pressed to spot the difference in most cases. We recently bought a 65” 4K TV for use at a tradeshow, and decided to shoot a variety of content in 4K to go alongside our regular showreel material, which is all in 1080p: nobody (yet), apart from Pond5, has asked us to shoot or finish anything in 4K. A technical glitch on the day meant that the reel we played on the stand was actually in 1080p. I watched closely to see if anyone wrinkled their nose or complained that it looked grainy, but nobody did. (If you were there, and thought our reel looked pixellated, please let me know; we’ll dig out a prize for you.)

Our stand at a recent tradeshow, complete with 4K reel on the TV.

The elements of working in 4K

Gone are the days when changing your workflow meant upgrading all your existing tape bays. We’ve been tapeless since 2009, and I haven’t missed tape for a second. However, even though we’re now only talking about large files becoming even larger files, there are several things to think about when committing to a 4K workflow.

1. The file size

4K footage takes up four times the disk space of HD.

Footage shot in 1080p is 1920 pixels by 1080, or just over 2 megapixels in size. 4K UHD footage is double this – in both dimensions – giving 8.3 megapixels per frame, or 4 times the actual pixel count of HD. True 4K is slightly more than this at 8.85 megapixels per frame. What this means in practice is that to shoot either 4K or UHD properly, at a bitrate high enough for you to notice the difference, you have to quadruple the data rate you would use for HD. So if you’d shoot HD at 50Mbps, you’ll need to shoot UHD at 200Mbps. I personally feel 50Mbps is the bare minimum that HD footage should be acquired in, unless you’re shooting something very static like an interview, so in most cases we would shoot HD at around 100Mbps or more, depending on the content. (Basically: the more movement per frame, the higher the bitrate you need. A classic case where you need a really high bitrate would be for something like a confetti cannon, which is often used as part of the finale at a gig. You can see on this video that the picture looks fine until the confetti pops out, at which point it starts looking pretty horrible.)
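The pixel arithmetic above is easy to sanity-check; here it is worked through in a few lines (plain arithmetic, using only the resolutions already mentioned):

```python
# Pixel counts for HD, UHD and DCI 4K, and the data-rate scaling
# that follows from them.
hd = 1920 * 1080        # ~2.1 megapixels
uhd = 3840 * 2160       # ~8.3 megapixels
dci_4k = 4096 * 2160    # ~8.85 megapixels ("true" 4K)

print(uhd / hd)                # 4.0 — UHD has exactly 4x the pixels of HD
print(round(dci_4k / hd, 2))   # 4.27

# Scale an HD acquisition bitrate up by the pixel ratio:
hd_bitrate_mbps = 50
uhd_bitrate_mbps = hd_bitrate_mbps * uhd // hd
print(uhd_bitrate_mbps)        # 200
```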

What all this boils down to is that both 4K and UHD involve much larger amounts of storage than HD does. Our Pond5 shoot resulted in two hours of footage totalling 479GB. This is fairly typical.
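As a sketch of how the storage adds up: the 500Mbps figure below is a hypothetical bitrate in the rough ballpark of intra-frame 4K codecs, not the exact rate we shot at, but it lands close to the real-world number above.

```python
def shoot_size_gb(bitrate_mbps, hours):
    """Approximate on-disk size of a shoot at a constant bitrate."""
    seconds = hours * 3600
    total_bytes = bitrate_mbps * 1e6 / 8 * seconds
    return total_bytes / 1e9

# Two hours at an assumed ~500 Mbps:
print(round(shoot_size_gb(500, 2)))  # 450 (GB) — close to our 479GB shoot
```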

2. Data transfer speeds

If you thought your edit machine was slow before…

Once you’re able to store all this data, the next challenge is to get it from wherever it’s being stored to whatever you’re editing it with. If you’re a one-person-band, using a desktop or laptop computer and an external drive, then you’re probably fine with either USB3 or Thunderbolt. If you’re in a multi-user environment, though, this probably means storing the footage on a server somewhere.

The most common way to be able to shunt this amount of footage from A to B is via ethernet: you’ll need at least 1GbE throughout, which includes all switches and other network hardware you might be using. The network will work at the speed of the slowest component, so for example if you’re using a cheap switch that bills itself as 1Gb, it probably isn’t – that 1Gb is likely to be shared between all the ports, so if anyone else is doing anything on the network while you’re trying to work with your footage, everything will slow down. We use a Netgear GS724Tv3 which works fine with the whole team hammering away at it at once.
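To put that in perspective, here's a back-of-envelope calculation of how many UHD streams a single 1GbE link can carry at once; the 80% usable-throughput figure is an assumption, not a measurement:

```python
# How many 4K streams fit down one 1GbE link? Assume ~80% of the
# nominal 1 Gb/s is usable in practice (a rough, hypothetical figure).
link_mbps = 1000 * 0.8          # usable throughput, Mb/s
stream_mbps = 200               # one UHD stream at the bitrate above

print(int(link_mbps // stream_mbps))  # 4 streams, in theory
```

In other words, a handful of editors pulling 200Mbps footage will saturate a shared gigabit link fairly quickly, which is why the switch matters.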

If your network isn’t up to it, you can also copy all the footage you need onto your local disk(s) and work with it from there. For best results, use SSDs if you can, rather than “spinning” disks: a decent 512GB SSD will give you a far better transfer rate than a 7200rpm SATA HD. Typically this will mean around 400-500MB/sec for an SSD, versus 100MB/sec for a non-solid-state disk.
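Those transfer rates translate into very different waiting times. A quick sketch, using the 479GB shoot from earlier and the rough SSD/HDD rates above:

```python
def copy_time_minutes(size_gb, rate_mb_per_s):
    """Minutes to copy size_gb at a sustained transfer rate (MB/s)."""
    return size_gb * 1000 / rate_mb_per_s / 60

# Copying the 479GB shoot mentioned earlier:
print(round(copy_time_minutes(479, 450)))  # ~18 min on a decent SSD
print(round(copy_time_minutes(479, 100)))  # ~80 min on a 7200rpm disk
```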

3. Actually playing back the footage

Press the spacebar, and wait.

This is where it can get tricky. Whether or not a given computer will be able to play back footage at 4K depends on a number of things, and they might not be immediately obvious. We were unable to even work with the footage we shot last September, for example, until we upgraded the driver for the network card on the laptop we were using. Suddenly, hey presto – the footage played back fine, instead of jerkily or not at all.

Apart from the CPU you’re using, though – and we’ll assume you’re not trying to edit 4K footage on an old Pentium or something – the most important component will probably be your graphics card. Adobe Premiere, which is what we use, uses CUDA acceleration to speed things up, but this does depend on the graphics card you have. We use both nVidia Quadro (K4000 and K5000) and GeForce (GTX980M) cards, and the GeForce ones don’t seem any slower than the much more expensive Quadros.

It’s also worth noting that even though a card might say it can support a certain number of screens at 4K, in practice it might puff and wheeze if they’re all on at once. When we first got our 4K screen, we hooked it up to the fastest machine we’ve got: a 20-core/40-thread PC with SSDs, a Quadro K5000 graphics card, and 64GB of RAM. This machine was already running two screens – one at 2560 x 1440 and one at 1920 x 1080 – but it struggled with a loop of 4K content played full-screen from YouTube, which it wouldn’t play back without juddering. This illustrates the point that it isn’t necessarily the speed at which you can get data to the edit machine that causes the problem, as the footage was buffering from YouTube without any trouble. The solution was to ditch one (or, even better, both) of the other screens, after which the 4K content played back fine. We now use this TV hooked up to a 4-core / 16GB laptop with a mid-range graphics card, and it plays most low-bitrate content without complaining.

4. Working with the footage

Intermediate codecs are crucial.

So, you’ve got everything set up, and you can play back 4K in your editing program without your computer throwing a wobbly. You’ll now probably need to decide on an intermediate codec for any times that you need to export something to a third-party application (like After Effects) and then reimport it.

We don’t use Final Cut or Avid, which both transcode footage at the ingest stage to ProRes or DNxHD respectively; this step is less relevant if you use either of those packages. Premiere, however, doesn’t transcode anything: you throw any footage you want to use into a timeline, and Premiere will make the best fist it can of playing it back. While this saves time when starting a project, it can often lead to confusion at the point when you want to round-trip footage via After Effects, Mocha, or some other compositing or fx package.

Our workhorse cameras, currently a pair of Sony FS7s, can output 4K at up to 10-bit 4:2:2. If you want to add something into a shot and then grade it later, you therefore need to preserve the full 10-bit colour depth. You could output a shot as an uncompressed TIFF or OpenEXR sequence, work with it as individual frames, and then reimport these into Premiere, but they’ll play back pretty slowly because they’re uncompressed. A better option is to export to a 10-bit 4:2:2 codec in an MXF OP1a wrapper, which is, as far as I can tell, exactly what comes out of the camera – so you shouldn’t lose any quality, at least not perceptibly.
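The reason uncompressed frame sequences play back so slowly is simply the raw data rate. A quick calculation, assuming 10-bit samples padded to 16 bits per channel (which is how a typical TIFF stores them):

```python
# Raw data rate of an uncompressed UHD frame sequence.
width, height, channels = 3840, 2160, 3
bytes_per_frame = width * height * channels * 2   # 16-bit storage per channel
fps = 25

mb_per_sec = bytes_per_frame * fps / 1e6
print(round(mb_per_sec))  # ~1244 MB/s — well beyond a single disk
```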

5. Outputting

The most usable output codec for 4K footage is currently H.264.

The options you have when outputting 4K are a lot more obvious than they are when you’re working with it. H.264 MP4 files work absolutely fine for most things: they’re 8-bit, but if you’re putting them on YouTube or Vimeo, they’ll get retranscoded anyway. If you’re delivering to broadcast, each network or station will have its own, highly detailed, delivery guidelines, which they’ll have sent you along with their invoice for the airtime.

The bitrate you need for H.264 will depend, as I’ve already mentioned, on how complex the action in the footage is, but we find a good rule of thumb (for most 25p content) is:

bitrate (kbps) = (width × height) / 250

So for UHD footage, a rough bitrate would be 3840 x 2160 / 250 ≈ 33,000kbps – call it 35Mbit. This is an arbitrary formula, but it’s a good starting point: render out the busiest-looking section from your edit at this bitrate and see if it looks dreadful. If it does, double the bitrate and try again.
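The rule of thumb is trivial to wrap up as a helper; note that the formula yields kilobits per second:

```python
def rough_bitrate_kbps(width, height):
    """Rule-of-thumb starting bitrate (kbps) for 25p H.264 delivery."""
    return width * height // 250

print(rough_bitrate_kbps(3840, 2160))  # 33177 kbps — call it 35Mbit
print(rough_bitrate_kbps(1920, 1080))  # 8294 kbps for HD
```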

Conclusion

Using a fully 4K workflow is an interesting challenge. It’ll certainly get easier, but for us, and for now, the extra hassle involved in dealing with it really outweighs the benefits of using it. (This is why we offer it as an option at +25% of the cost of HD.) However, if the content you’re shooting is tremendously detailed – such as night-time timelapses shot somewhere with no light pollution, or landscapes, or aerial photography – it can be a useful option to offer. If you’ve worked with 4K and want to share your experiences, whether good or bad, leave us a comment below.

One Comment

  • chimychart says:

    When connecting a drive using FireWire 800, the drive will, generally, play about four streams of ProRes 422 before the FireWire connection is fully saturated.
