Bits and Bobs
Thoughts on keying part 1- Technology
For a long time I’ve subscribed to the Media Motion After Effects email list, an invaluable resource for all things After Effects as well as being a friendly online community. The other day I found an email I posted there about 2 years ago, in which I listed some of the changes I’ve seen in keying throughout my working career. I’d written the email to kill time while I was rendering, and I’d forgotten I wrote it until the other day. It’s always weird to read things you can’t remember writing, but it did make me think about how technology changes some things and not others. So here I’ve taken my original vague ramblings from the email and fleshed them out a bit… this is part 1, the things that have changed.
It’s often said that you learn from your mistakes, but when it comes to keying you’re often learning from other people’s mistakes. If you’re given “perfect” footage and all you have to do is click the eyedropper in Keylight to get a perfect result, then you haven’t learnt much. It’s when the footage is poor, the lighting is wrong and everything falls apart – when you’re expected to rescue the shot – that you start to learn how it all works “under the hood”.
When I started my professional career in 1997 everything we shot was on Betacam SP. The company I worked for owned a mid-level 3-chip camera with the cheapest Fuji lens available, and I was working with a Media 100 editing system. Beta-SP is an analogue format with a lot of noise and the occasional dropout, and the Media 100 was converting that analogue signal to a digital, 8-bit, lossily compressed format. But this wasn’t too bad – our setup was pretty typical for corporate work produced in Melbourne at that time. Occasionally the projects I worked on had bigger budgets than usual, which meant Digital Betacam or even film, or smaller than usual, which meant DV. So there were many factors that contributed to the quality of the footage I’d be working with, and they can be summarised here:
– Footage shot on an industrial camera with a low-quality lens
– Recorded onto Beta-SP, a noisy analogue format
– Everything is interlaced, standard definition
– Digitising via analogue component signal
– Media 100 used a lossy 8-bit compression system
– All video software was 8-bit
– After Effects only had basic keyers unless you purchased the “Production Bundle”, which gave you the Difference Keyer and the Linear Colour Keyer. But no Keylight…
– If you wanted to purchase a 3rd party keyer then the only option was Ultimatte, which was a few thousand dollars…
I’ve mentioned the camera lens because I’ll always remember when a good friend of mine was acting as our DOP on a project, and he turned up with a beautiful Canon lens that was worth more than the rest of our camera kit combined. With all other factors being equal, the lens made an astonishing difference to the quality of the footage- it was a shame we only had it for a day.
Looking back I’m surprised I was able to key anything at all, but keying became a routine part of my job, and one which was made a lot easier when I won a copy of “Composite Wizard”. Not only did this set of plugins save my footage on many occasions, but the manual was hugely informative and I learned a lot from reading it cover-to-cover. So keying in After Effects was something I approached with confidence, but it was always an involved multi-stage process:
– Digitise the interlaced analogue footage to 8-bit using the Media 100
– Use the Composite Wizard “denoise” filter to smooth out the video
– Use the Composite Wizard “smooth screen” filter to even out the background lighting
– Boost the screen colour saturation using the AE Hue/Saturation filter
– Key with the AE Linear Key filter set to “chroma”
– Clean the matte with the Composite Wizard “Miracle Alpha Cleaner”
– Adjust the edges with either the Composite Wizard “Matte Feather” or the AE “Matte Choker” filters
– Use the AE Spill Suppressor to clean up colour spill
Because these steps were slow, and computers back then were much slower than today’s, I would render the foreground footage out with an alpha and do the compositing as a separate step. And that’s how I worked for over 5 years.
Looking at these steps it’s interesting to note how much “fudging” is going on. Because the footage I’m beginning with isn’t perfect, almost every stage of the process is dealing with the problems that begin there – too much noise, uneven lighting, low-resolution formats. The denoise filter is a fudge, the smooth screen filter is a fudge, boosting the saturation of the screen is a fudge, using the miracle alpha cleaner is a fudge – but all were necessary processes to get an acceptable result, and all were standard practice for the technology of the time. I haven’t had to key interlaced standard definition footage for many years now, and I hope I’ll never have to again.
So what’s different now?
Technology has advanced at every one of the stages above, making it easier to produce higher quality results in less time.
The single biggest change is the availability of higher quality acquisition formats that capture progressive images. Even if you have to work with Digital Betacam, a progressive camera can make a huge difference. But HD is much better – so many more pixels to work with – and RED is better still. The easiest footage to key, and the highest quality results I’ve obtained, have come from RED footage. Even if your final delivery format is standard definition, shooting on HD or RED will make keying much, much easier.
The post production chain is now entirely digital, which removes analogue conversions from the process along with the video noise inherent in older analogue formats like Beta-SP. If you’re working with RED or even DSLRs then there isn’t even tape to worry about.
Software now works with higher bit-depths… editing systems can work with 10-bit uncompressed footage, and After Effects now works in 16- or 32-bit modes. Even with standard definition Digital Betacam footage you can obtain better results by capturing uncompressed 10-bit rather than 8-bit, and by running After Effects in 16-bit rather than 8-bit mode.
Keying software has become more advanced, and Keylight is wonderful when dealing with well-lit blue or green screens. It’s a big step up from the After Effects linear colour keyer. Keylight is great, but it’s only one improvement in the chain – it would still struggle with the noisy, interlaced, compressed analogue Beta-SP footage from my 1997 days.
I often still use the bundled Color Finesse plugin to tweak the colour of the screen before I apply Keylight, but more often to adjust the hue than to boost the saturation. Boosting the saturation is pretty much the same as the “screen gain” control in Keylight, so I may as well do it there if necessary. But I find that the blue and green in the video footage can usually be hue-shifted towards a purer “digital blue” or “digital green”, and this can improve the result.
Likewise, in some cases I will obtain significantly better results by de-noising the footage first, and these days the After Effects “remove grain” plugin is much cleverer and gentler than the Composite Wizard equivalent. However, the AE remove grain plugin is really slow, and even though we have faster computers than we did 10 years ago, we’re also working with higher resolution files and more intensive algorithms. So degraining 3K RED footage is still an overnight render…
For me the single biggest improvement over the past 13 years of my career has been the move from interlaced to progressive acquisition formats. The post-production chain is now entirely digital, which eliminates conversion artefacts from the process as well. The software side of things has improved fidelity by offering increased bit-depths and more pixels to work with, and the underlying algorithms that power the plugins we use have increased in their sophistication and speed. But I don’t think these improvements are as significant as getting rid of fields!
To get a good key 13 years ago required a combination of fudges to overcome the limitations of the technology at the time- beginning with the interlaced nature of the acquisition format and the analogue conversion required just to get it into the computer. Today we have better tools and we can produce better results more easily, and the processes we go through are no longer fudges designed to overcome technical shortcomings, but tools that improve and enhance the final result.
But while technology has changed and made some things easier, some things have stayed the same… and that’s part 2!
Thoughts on keying part 2- The Myth of the Single Click
To paraphrase John Lennon: I never expected to become an After Effects specialist – it happened while I was making other plans.
Many people I work with are surprised to discover that I have previously worked as a writer, producer and director (not at the same time), and it’s the producing side in particular that relates to keying. And while technology has advanced in leaps and bounds over the course of my career, some other things haven’t changed at all – which brings us to part 2.
In part 1 I talked about the technology that was in common use when I began my career, and I said that many of the steps I used to obtain an acceptable key in After Effects were essentially “fudges”. I call them fudges because – at the time – I always felt as though I was compensating for mistakes. There’s a saying “the grass is always greener on the other side” but for me it was greenscreens, not grass. I always felt as though I was dealing with inferior footage and that the guys working for big-name post-houses on expensive TVCs would always have perfect footage that would key with one click of their SGI mouse.
I was certainly not alone in this thought. Most of my friends with similar corporate production jobs had the same “greenscreen is always greener at the other facilities” inferiority complexes. We all had a basic assumption that if the footage was shot properly and if our software was good enough then we would be able to key with one click. When our footage didn’t key well we would blame everything in turn. We would blame the DOP for not lighting the screen properly, or the screen for not being painted with the right paint. We would blame the camera for being too noisy; the lens for being too blurry; the recording formats for lacking fidelity; the capture hardware and codecs for their compression; the software for lacking precision; our accounting departments for not buying us the right plugins and so on… We were using 8-bit After Effects on desktop Macs and we all had Flame/Inferno envy (back then there was also Henry envy, but that sounds like a cabaret singer). Surely the lucky few who worked in the million-dollar Flame/Quantel rooms could key with one click… why else would you have a computer the size of a fridge?
The fancy ads in Cinefex for the Ultimatte keying software worked: convinced that this plug-in would solve all his problems, one of my friends had his company buy it despite the huge cost. It was better than the AE keyers, but the improvements were not miraculous and it was very slow. Meanwhile a neighbouring company contracted a programmer to write their own custom keyer (cheaper than multiple Ultimatte licences), but I don’t think any of us worked on a project where the footage keyed perfectly with one click. So we’d drink another coffee and complain more about how the lighting was rubbish, and then I’d reach for another Composite Wizard plug-in, or mask out another shadow, or roto out some footprints, and basically pile on the fudge.
Looking back with the benefit of 13 years’ experience, I now realise that I was wrong – we all were. Since then I’ve produced a few projects with 6-figure budgets, and if I dig back into my past and pull on my producer hat again, I can do some basic sums and expose the “myth of the single click”.
Professional film and video productions cost a lot of money. If you add up the costs of all the crew and equipment hired for the shoot, and then divide that total by the number of hours you shoot for, you have a basic figure for how much your production costs every hour. Using this method I can look back at a mid-level TVC I produced and calculate that we spent around $8,000 for every hour in the studio. A high-end corporate video cost $3,000 per hour in a similar studio. These prices are just for crew and equipment for the shoot. If I wanted to exaggerate the point I could also include the fees of the actors and the director, and maybe even pre-production costs. In this way I could easily claim that every hour in the studio was costing $10,000 or more.
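If you want to see the sums written down, here’s the back-of-the-envelope version as a tiny script – the line items and figures are purely illustrative, not from any real budget:

```
// Back-of-the-envelope studio "burn rate" – all figures are hypothetical.
var crewAndEquipmentPerDay = 24000;   // crew, studio hire and camera gear for one shoot day
var shootHoursPerDay = 8;             // before overtime kicks in
var burnRatePerHour = crewAndEquipmentPerDay / shootHoursPerDay;   // = $3,000 per hour

// Fold in actors, the director and pre-production, and the hourly figure
// climbs quickly towards $10,000 or more.
```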
If you’re wondering what this has to do with keying, then imagine yourself on set, trying to tell the Producer that the DOP hasn’t lit the greenscreen very well, that he really should set up a monitor with a vectorscope to get the best possible result, and that the grips/gaffer need to adjust the lighting arrangement so the greenscreen is evenly lit. This is a very good way to learn new swear words.
There are many great articles on the internet about keying – how to light for keying and how to shoot for keying in order to get good results. I think I’ve read them all (I wrote one of them) and I’ve learnt a lot from them. They’re great when you have a lot of time, or when you’re working for a company that has its own studio and equipment. But when you’ve hired lots of expensive people, you’re burning $3,000–$10,000 per hour and you know that overtime kicks in after 8 hours, suddenly your priorities change and you decide that there are better things to do than spend time adjusting the lights. Sure, the crew could help you out and adjust everything so the screen is perfectly lit, but it could end up costing thousands and thousands of dollars. Yes – if the footage doesn’t key perfectly then you will need to spend time rotoscoping it, but roto is cheaper than a studio full of equipment and crew. And the footage will probably need some roto anyway.
I spent several years in front of After Effects moaning about poor DOPs and producers who would decide to “fix it in post” before becoming one of them myself. Because ultimately- when the clock is ticking and money is flowing and everyone has less time on set than they need anyway- lighting the greenscreen perfectly isn’t that important. You can fix it in post. Roto is cheaper than having the gaffer pull more lights out of the truck and spend an hour setting them up.
Of course this argument is only valid if you have a producer who understands the maths and allows for roto in the budget. This is usually where things fall down. It’s when there is no allowance – in time or in money – to fix difficult keys with roto that compositors get frustrated, and we all start moaning again.
In this respect, it’s the producers who expect footage to be keyed with a single click who are really the ones to blame – it’s their belief in the myth of the single click that causes grief in post-production. I mentioned earlier that you can find many great articles on the internet about keying and shooting greenscreens, and it’s certainly easy to find technical tips and techniques to ensure that greenscreen footage is shot well. However, there are two very valuable non-technical lessons to learn as well. Firstly, no matter how well the footage is lit there will almost certainly be some level of rotoscoping required, even if it’s just garbage mattes. And secondly, rotoscoping is not easy and is not something that can be palmed off to inexperienced and cheaper workers. The majority of producers I’ve worked with have assumed that rotoscoping is a “fix” that is only required when something goes wrong, and because they haven’t budgeted for it their first reaction is to try and have it done as cheaply as possible. But cheap roto is not always good roto.
So while technology has changed, budgets and basic accounting haven’t. If anything – as software has increased in sophistication and footage is better to begin with – roto will become easier and cheaper too, so overall standards will rise. Even when you have footage that does key well there will always be a rough spot here or a shadow there, that can only be fixed manually.
I recently updated my showreel and the new version opens with a project I worked on for Nokia, produced by Kemistry in London. This is an example of a project that was well produced and a pleasure to work on. Once the edit was locked off and the footage was ready for compositing, the producer (Andrew Pearce) had me and another compositor begin work on keying, with 2 dedicated rotoscopers to help us out. The roto guys had not been called in after we found keying difficult – Andrew had simply recognised from the beginning that there would need to be roto work at some stage, and so they were included in the budget. In actual fact the footage (shot on RED) keyed very well, but there were still clips that needed extensive roto, and the guys would be allocated clips by me or the other compositor as we needed their help. Additionally, Andrew had hired genuine roto specialists who knew what they were doing, not cheap and eager youngsters who lacked experience. It all worked well.
In part 1 I listed the stages that I used to go through in order to key footage. This time I’ll list my thoughts as a producer on shooting footage for keying:
1) Shooting greenscreen footage perfectly is expensive
2) It is cheaper to shoot imperfect footage and fix it in post
3) All greenscreen footage will probably need some degree of rotoscoping
4) Rotoscoping is not easy, nor is it an entry-level skill
5) Keying will always take more than one-click of a mouse
My figures above ($3,000–$10,000 per hour in the studio) were based on my experiences on low- to mid-level projects. But it’s pretty easy to imagine that with typical Hollywood budgets – compounded by the limited availability of main actors – the figures are massively higher. The time spent setting up a greenscreen and adjusting the lighting for a feature film will be even more expensive, so there’s even less likelihood of it being done “perfectly”. Judging from DVD special features and behind-the-scenes specials that discuss visual-fx, it sometimes looks as though the greenscreen is simply a visual aid for rotoscopers rather than something that can feasibly be keyed with any software.
Many years ago I read an article/blog called something like “the myth of the blue screen”. I’ve tried to find it again with Google but I’ve failed. The gist of the article is that many big Hollywood visual-fx facilities don’t even try to use keying plugins; keying is generally done entirely by rotoscope artists instead. The author of the article had worked on the film “Forrest Gump” and claimed that although the film is extensively referenced for the use of bluescreen/greenscreen to integrate Tom Hanks into archival footage and to digitally remove the legs of actor Gary Sinise, all of those effects were achieved by manual rotoscoping and no keying plugins were used at all. Basically, the footage is shot under such expensive and stressful circumstances that there is no way the key colours will ever be lit perfectly. So some roto will always be needed – and if some roto is needed then it’s simply easier to do the entire shot with roto. I can understand why. If anyone knows the article then please email me the link – it would be a great reference.
So by thinking about the costs involved in a film/video shoot, as well as the stressful nature of filming in general, it’s easy to see why footage never keys with a single click.
As for the greenscreen envy I used to have when I started- not only was I wrong about the myth of the single click, but there’s every chance that the guys with the Flames and Henrys were having to deal with worse footage than I was.
Although I’ve just written a lot about how it’s not realistic to key footage perfectly with a single click, I have had many projects that have come close. Perhaps all those articles on the internet about lighting greenscreens have soaked into the collective unconscious, as the quality of greenscreen footage I’ve had to work with in recent years has been really good. However, as the quality of the footage increases, so does the quality expected of the compositing. Even if the footage itself does key with a single click, there are always going to be several more steps involved to improve the quality of the final composite. So perhaps I shouldn’t talk about the myth of the single-click key, but rather the myth of the single-click composite.
In part 3 I’ll discuss compositing in general and show some examples of keying difficult footage, and also reveal the most difficult footage I’ve ever had to key…
Thoughts on keying part 3- Compositing
Spell-checkers insist on changing the word “compositing” to “composting”, which has caused more than one person to think I’m some sort of gardener. And although I can teach a spell-checker to recognise the word ‘compositor’ it’s not as easy to explain what it means, so many of my friends still don’t know exactly what it is that I do.
In part one of this rambling chain of thought, I looked at the way that technology has improved the process of keying blue/greenscreen footage. Compared to 15 years ago we have better cameras, better hardware and better software. In part 2 I looked at keying from a producer’s perspective and concluded that even though technology has advanced, the practical and financial side of video production means that keying will never be as simple as a single click. Good keying will always involve some rotoscoping, and it isn’t realistic to expect to get ‘perfect’ keys with one click of the mouse.
The myth-of-the-single-click is something I ponder a lot. The term itself is something I just made up when I was typing the last part, so perhaps I should elaborate. When I talk about the “myth-of-the-single-click” I’m referring to the notion that you can import some blue/greenscreen footage into After Effects and, with one click of the mouse, have a perfectly keyed result that’s ready to render. I call it a myth because this never happens, but the idea that it’s supposed to seems to have perpetuated itself through tutorials, adverts, product demonstrations, and some sort of general industry zeitgeist. As I mentioned in parts 1 & 2, when I started my career I assumed that anyone using a high-end system could key with a single click, and whenever I had to resort to complex workflows to get a usable result I assumed it was because something was wrong. And it wasn’t just me – it was my friends and colleagues, and not just those who worked in post, it was also Producers and Directors.
The myth of the single click encapsulates a range of issues that span technology, personality, politics, economics and experience – but it’s not just slick product demonstrations and fancy ads combined with a slight inferiority complex. The myth-of-the-single-click, which I still think confounds corporate video producers and younger people today, also stems from the fact that there’s more to keying than just keying. So let’s look at compositing.
As a job description, keying is really compositing. When I’m hired to do blue/greenscreen keying what I’m really hired to do is composite a scene together. Although my workflow may involve an intermediate stage of rendering a quicktime with an alpha channel, I’ve never been hired to simply key out a background and deliver video with an alpha. Instead, I’m hired to deliver a finished scene. A composite.
It’s fairly simple to judge the quality of a raw key, because it’s pretty clear what the goal is: if you’ve shot someone in front of a greenscreen, then the goal is to remove the green. Judging a composite is more difficult, however, because there are so many different aspects of a scene to consider. If a composite doesn’t look quite right then it can be difficult to determine exactly why, and if a composite appears to look good then it can be difficult to know how to make it better.
From the perspective of a professional career, perhaps the most critical thing to understand is that Producers and Directors are not compositors, and they probably won’t have a very strong technical understanding of the process. A Director isn’t likely to look at your scene and give you a technically specific tweak to make – that’s your job (some Directors will, but whether or not they make sense is another matter). Although all projects are different, it’s generally up to the individual compositor to decide how little or how much to do to a scene. Producers and Directors might point out obvious things such as adding shadows and reflections, but beyond that it’s up to you.
What Producers and Directors will do, however, is look at your work and decide if it’s any good. If a composite looks average then they don’t need the technical knowledge to know why – they’ll just blame you. You’ve probably heard the saying “I don’t know much about art, but I know what I like”… in post-production it’s “I don’t know much about After Effects, but I know what looks good”.
With this in mind, the myth-of-the-single-click is exposed as a true myth – something totally unachievable – because compositing a finished scene together involves so much more than just keying.
Over the years I have developed a mental checklist of things I consider when I’m doing greenscreen compositing, and without writing a book, here are some of them:
Colour Matching
Matching the colour balance of different elements in a composite will get you 90% of the way there. I usually start with the levels plug-in and go through the shadows, mids and then highlights and match everything up, but it’s important to match the overall contrast and saturation of elements too. I prefer to use the ‘tint’ plugin to adjust saturation, rather than the Hue/Saturation plugin. The ‘colour balance’ plugin is also very useful when matching disparate elements, especially with the ‘preserve luminosity’ option turned on. There are other articles and tutorials on the internet that look at colour matching in detail.
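If you like a numerical reference alongside your eye, an expression can act as a quick colour readout while you match plates. This is just a sketch – the layer name “BG Plate” and the sample point are placeholders for whatever you’re matching to – applied as a Source Text expression on a throwaway text layer:

```
// Source Text expression on a scratch text layer: read back the average colour of a
// small patch of the background plate, so the foreground's shadows/mids/highlights
// can be matched by numbers as well as by eye. Layer name and point are placeholders.
var bg = thisComp.layer("BG Plate");
var px = bg.sampleImage([120, 540], [4, 4], true, time);  // small patch in a shadow area
"BG sample RGB: " + px[0].toFixed(3) + ", " + px[1].toFixed(3) + ", " + px[2].toFixed(3);
```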
Exposure matching
Although you can adjust the different elements in your composite so they have the same colour balance, they still might differ in their brightness and contrast. The exposure dial in the composition window is a quick way of previewing your composite at a range of exposures, and can roughly indicate if a layer is brighter or darker than it should be. The exposure plugin is a quick way to adjust an individual layer without affecting the colour balance. Rendered 3D elements are more likely to have a large tonal range than shot footage, i.e. with 3D renders you can reasonably expect black to be 0 and white to be 255, but with real-life footage the darkest areas of the shot may be much higher, or the brightest parts lower.
Stabilising movement
In the olden days, when people shot on film, the image would have a slight camera weave depending on how good the equipment was. 16mm footage could move around quite a bit, so if you were compositing various 16mm elements together you really had to stabilise them all beforehand. But even footage shot on HD or RED might have little movements in it if someone bumps the camera or the cameraman breathes heavily, and any little bumps or kinks should be smoothed out.
Grain matching
If your footage originated on film then it will have film grain, but even digital cameras have a signature ‘digital noise’ and compression artefacts. If you want elements in your composite to match then you’ll need to compensate for these differences – removing noise from noisy sources and adding noise to clean sources. I generally use the After Effects “remove grain” plug-in with temporal sampling turned on. If you have 3D rendered elements to work with then they’ll be really clean and you’ll need to add grain or noise to them.
Foreground/Background masking
Parts of the background plate might have to appear in front of the foreground plate. This will usually involve rotoscoping the background element out. I find this commonly happens with tree branches or foliage that’s blowing around.
Blur / Sharpness
Footage from different sources will probably have differing amounts of sharpness. 3D renders will be razor sharp; de-interlaced video will be really soft; film will be softer than video. If you are integrating 3D renders with video then you’ll probably have to blur them more than you expect.
Motion blur
The amount of motion blur in different layers can be different, especially if you have 3D rendered elements that haven’t been rendered with motion blur. ReelSmart Motion Blur is an invaluable tool here.
Depth of Field
If your background plate has a shallow depth of field, then any elements added to the scene will need to match the blurriness of the focal plane they sit in. If there’s a focus pull in the background plate during the shot, then all the elements in your composite will need to adjust accordingly. The ‘box blur’ effect is a fast way of making an element look out of focus. For more complex scenes you may need to get creative and create a depth map for use with the AE Lens Blur effect, but ideally you’ll have the Frischluft Lenscare plugins, which look gorgeous and are faster than the AE stock equivalent. If you’ve added shadows to the scene, perhaps by duplicating a keyed foreground element and filling it with black, the shadow may need to be more blurry at the top than at the bottom. Combining a ramp and the compound blur effect can achieve this.
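If the added elements are actual AE 3D layers and the comp has a camera, an expression can also drive the defocus instead of hand-keyframing it through a focus pull. A minimal sketch, applied to a Box Blur ‘Blur Radius’ property – the null name “Focus Null” and the 0.02 scale factor are assumptions you’d tune per shot:

```
// Box Blur "Blur Radius" expression on a 3D layer: more blur the further the layer
// sits from an assumed focal plane, marked by a null called "Focus Null".
var cam = thisComp.activeCamera;
var camPos = cam.toWorld([0, 0, 0]);
var focusNull = thisComp.layer("Focus Null");
var focusDist = length(camPos, focusNull.toWorld(focusNull.anchorPoint));
var myDist = length(camPos, toWorld(anchorPoint));
Math.abs(myDist - focusDist) * 0.02;  // 0.02 is an arbitrary starting point
```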
Shadows & Reflections
Things cast shadows. So add them if you need to, and the same goes for reflections. You can make shadows look more realistic by precomping them and using the background plate as a displacement map, and reflections can be subtly distorted by scaling X and Y independently, skewing with the ‘transform’ effect, or even using the liquify effect, to match the surface they’re on. Reflections in water can be distorted by using the water as a displacement map for the reflection layer.
Light wrap
Light wrap is a plugin included in the Composite Wizard and Key Correct Pro collections, but there are also various tutorials on the internet about how to build light wrap without 3rd party plugins. Basically, light areas of the background plate should wrap around the edges of your foreground element.
Interlayer blur
This is a technique that is described in more detail in the ‘Composite Wizard’ manual, but it’s a pain to set up properly. The idea is that the boundary between your foreground and background layers is blurred together slightly. A quicker method than the CW one is to use an edge-detection plugin to create an outline of your foreground layer, and use that as a matte for an adjustment layer with a blur effect. Very subtle…
Light source matching
If different elements have been lit in different ways, from different directions and with different lights then it’s really hard to adjust them to match. The same goes for 3D renders that may have been lit in a fairly simple way, and have to be integrated into a real-life background. Unfortunately you can’t re-light real-life footage in post perfectly, but you can crudely darken areas of a layer by adding masks, using the fill effect, and feathering the edges and adjusting the fill opacity. To do it more effectively you will have to precompose the layer and rotoscope coloured solids with different transfer modes onto your footage. It’s not going to be perfect, but it will improve the overall composite.
Lens Flares
I love lens flares. If your background plate has light sources or hotspots, you can track them and apply the tracking data to lens flares which are rendered over your foreground layer. Even if I am working with a purely CG scene, I will use the 3D camera data with the toComp expression and have multiple lens flares in the scene. As the camera moves, and the lens flares react accordingly, they add a genuine sense of depth to the scene. I’m not talking about huge, gratuitous 100% opacity flares, but soft, blurry, subtle layers that add some moving light to the scene.
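For the toComp part, this is the kind of expression I mean – a minimal sketch applied to the Flare Centre of a Lens Flare effect on a 2D solid, where the 3D null name “Flare Null” is just a placeholder for whatever marks the light source in your scene:

```
// Flare Center expression: pin a 2D lens flare to a 3D null in the scene.
// "Flare Null" is a placeholder – position (or parent) it wherever the light lives.
var src = thisComp.layer("Flare Null");
src.toComp(src.anchorPoint);  // project the null through the active camera into comp space
```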
Fog effects
Smoke, or even fractal noise, can not only add character to a scene but can help tie disparate elements together. By using expressions and the collapse-transformation switch, 3D fractal noise can exist within the scene, in true 3D space, and animate convincingly – although if you turn it up too much it looks like you’re remaking Jekyll & Hyde in 19th century London.
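One way to sketch the setup (only one possible approach, and the numbers are arbitrary starting points): precompose a stack of comp-sized solids with Fractal Noise applied, make them 3D, enable collapse transformations on the precomp, and let a couple of expressions spread the slices through Z and vary each one:

```
// On each noise layer's Position (3D layer): spread the copies through Z by layer index.
value + [0, 0, (index - 1) * 150];

// On each noise layer's Fractal Noise > Evolution: animate over time,
// with a per-layer offset so the slices don't look identical.
time * 60 + (index - 1) * 45;
```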
Global Adjustments
Adjustment layers at the top of a composite can be used to add noise or grain back to the entire scene. You can also add camera movement back into a composited scene, and subtle zooms or pans can really help ‘sell’ the composite.
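For the camera-movement side of this, one cheap approach (a sketch, assuming you precompose the finished composite and scale it up a few percent so the edges never creep into frame) is a gentle wiggle on the precomp’s Position – the values below are just starting points:

```
// Position expression on the precomposed, slightly over-scaled composite:
// low frequency, small amplitude – a subtle handheld drift rather than a shake.
wiggle(0.5, 3);
```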
And that’s just the stuff I can think of.
In part 2 I promised a look at the most challenging key I’ve ever faced, and so to keep my promise I’ve dug out a still-frame:
Oddly enough, Keylight managed a half-decent key, and a lot of choking helped to produce a useable result with minimal roto.
Looking back to part 1, there’s no doubt that having great footage makes things easier. But the point of part 2 was that it’s naive to expect keying to be simple, quick and easy. And that’s because compositing is an involved and complex art form, and the things that can make up a successful composite aren’t limited to the ones I’ve listed above.
So the myth-of-the-single-click key is exactly that – a myth.