Notes from the edge of Perception
High Dynamic Range Video
at the University of Warwick

19 January 2011
  • The University of Warwick
  • What is HDR?
  • Twenty F-stops
  • The presentation
  • Motion picture development
  • Medical applications
  • Image based lighting
  • Security applications
  • Capture and compression
  • Standards
  • Question and answer
  • Comment
  • Resources
  • Don’t believe what they tell you. The camera always lies. It lies its little lens off. It plays with perception so that what you see is certainly not what you get.

    I am enormously interested in metaphor. The photographic image, be it digital, film, video or camera obscura, is one of the ultimate metaphors. It hides itself in plain view and, in so doing, is so much more subtle than it seems.

    What does the human eye see and what does the human brain do with it? With the ubiquity of digital photography, 3D and digital effects we might be tempted to think that we have long been able to capture what the eye sees. Sight is very much more sophisticated than just capturing light; there is the small matter of what to do with the light once it is in the box.

    Professor Alan Chalmers and his team have sallied forth to the boundaries of knowledge with phasers set on stun, to bring the metaphor back alive. They captured it on a HDR video and they want to tell us all about it.

    The University of Warwick

    I arrived at the University of Warwick campus in plenty of time for the screening of the first HDR film and a talk by the people blazing the HDR video trail. So far so good. It had seemed a simple enough task to find the International Digital Laboratory and I was looking forward to settling in with some coffee and biscuits with the rest of the delegates.

    The sat-nav brought me to the postcode, which turned out to cover the whole campus. The google map printout was at home, forgotten on the printer. So I flagged down a passing student as he floated by and asked for directions. He struggled manfully to find some frame of reference for me but in the end hopped in the car and took me to the venue. He turned out to be reading physics and was specialising in optics. Based on this random selection I was already very impressed with the University by the time I got into the lecture theatre only a little bit late.

    I had done a bit of research before going, so I knew it was Professor Chalmers on stage talking about HDR. He was fizzing with dynamic, enthusiastic energy. This was infectious, but I got the impression that, like most people who are highly motivated, he has to work very hard to suffer anyone on a different energy shell. It is a difficult process for anyone to expose their hard work and painstaking dedication to the world. The technical world, in particular, sometimes seems to attract an element who feel threatened and have to criticise whatever lies outside their model of the world. He is quite obviously excited and proud of his work at the University of Warwick and I don’t blame him.

    What is HDR?

    According to Wikipedia, "High Dynamic Range imaging is a set of techniques that allow a greater dynamic range of luminance between the lightest and darkest areas of an image than current standard digital imaging techniques or photographic methods."

    Dynamic Range: this is the ratio between the largest and smallest values of a changeable quantity. In this case we mean the brightness levels that are visible to the eye, from starlight to sunlight.

    Luminance: there is a very technical definition, but just think of it as the amount of brightness emitted from a surface. It is measured in candelas per square metre (cd/m²).

    Have you ever taken a photo of someone against a bright window? The window has high luminance, the person has relatively low luminance. You know what happens: in the photo the person is underexposed and the window is overexposed. The window is too bright and the person is too dark. The way many consumer cameras deal with backlighting is to advise you to deploy the flash so that the person comes somewhere near the same luminance as the window.

    This is because the normal camera is not as flexible as the human eye and involves itself in a spot of compromise. There is no normal exposure. Like many things in life, photography is relative. Have you ever gone from a comfortably lit room out into a sunny afternoon? For a while the great outdoors seems a bit overexposed and blinding. Similarly, when you go from a sunny outdoors to indoors, you can’t see too well until your eyes adjust.

    The eye is an amazing piece of kit, and the processor in the brain makes it the mother of all high definition capture and display systems. The human eye can see in luminance from starlight to sunlight. This is 20 f-stops in photography parlance. We can see and process what is a range of 20 exposure settings to a camera.

    The f-number is the ratio between the focal length of the lens and the diameter of its aperture. It determines the exposure, the amount of light reaching the sensor. Each full f-stop doubles or halves the light compared with the previous one, depending on which way you are going, because the area of the aperture doubles or halves with each stop. Think of your iris, or a camera iris, opening or shutting to accommodate different light levels - eye and camera work the same way, each adjusting its aperture.
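    To put numbers on that twenty-stop range, here is a minimal sketch (the figures in it are illustrative, not from the talk). Each stop is a doubling of light, so twenty stops span a contrast ratio of about a million to one:

```python
import math

def contrast_ratio(stops: int) -> int:
    """Luminance ratio spanned by a given number of f-stops.

    Each stop doubles the light, so n stops cover a 2**n : 1 range."""
    return 2 ** stops

def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    """The f-number is focal length divided by aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

# The eye's roughly 20-stop range is about a million-to-one contrast ratio.
print(contrast_ratio(20))   # -> 1048576

# Closing down one stop halves the aperture *area*, which multiplies the
# f-number by sqrt(2): the familiar sequence f/1.4, f/2, f/2.8, f/4 ...
sequence = [1.4 * math.sqrt(2) ** i for i in range(4)]
```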

    When you look at a person against a bright window you can see the person clearly and you can see out the window clearly. This proves, to me at least, that the eye is more than a camera. What the camera is missing is the amazing adaptability of the human brain.

    What the good prof and his merry band thought was that it would be nice if we could shoot video of anything the human eye can see as if it were seeing it.

    There exist three ways to do this already:

    1. Single exposure - but this is not very satisfactory.
    2. Standard composite - this means taking lots of consecutive photos at different exposures and merging them (apparently the iPhone lets you do this if you have one) - but if anything moves you get blurring.
    3. Computer generated - a computer creates the lighting effects.

    Twenty F-Stops

    What the Warwick team have done is to create a camera that takes all twenty f-stops (or exposure levels) simultaneously, so that the image has all the information you need to create a clear picture in any lighting conditions. It has the ability to create pictures of scenes as the human eye would see them. (I have a mental image of a sheaf of parallel layers for each frame, each with a different exposure level, that can later be merged or traversed - but this is not my particular area of expertise and I was there to learn).

    Apparently HDR still technology has been around for a short while but what we came to see tonight is a camera that takes videos at 20 f-stops, full HD (1920 × 1080) resolution at 30 frames per second.

    The presentation

    I don’t know if it was deliberate but the evening was a great metaphor for itself. We were welcomed as a large group and given an overview. Then we were split into red, blue, yellow and orange groups, rotated around four different ten-minute presentations, and recomposited together again for food and a final Q&A session. This was a nice idea that meant that a cosy ratio between presenter and audience was maintained. It also meant that Professor Chalmers gave his team a fair crack of the whip in showcasing the project.

    I was in red group so I went through the presentations in this order:

    1. Why is HDR important
      This was a great talk giving the technical background and the hard science.
    2. A demonstration of the videos on different displays
      This included footage of some special effects; a demonstration of the sort of detail that is captured; a film of a thoracic operation in alarming detail. It did showcase where this technology might be able to go.
    3. A demonstration of the camera itself beside a high end consumer video camera
      The camera was developed by the German company Spheron and how it works is not public knowledge.
    4. A demonstration of 360 degree digital photography.
      We saw this camera and had our group photo taken.
    I took pages and pages of technical notes about HDR but I was more excited about the possible uses.

    Motion picture development

    The team at the University of Warwick are in the process of shooting a short film with Huw Bowen, of Entanglement Productions - we got a sneak preview. It was hard to believe that this was done without the benefit of a lighting crew. Yes, this is one hell of a commercial application: once they have a portable camera you will be able to film anywhere, anytime, with no expensive lighting, minimal preparation and stunning results.

    Medical applications

    They have also been working with a surgical team to show how they can film an operation in a way that allows the detail inside dark body cavities to be filmed at the same time as detail under bright operating lights, without any extra lighting.

    The medical theme continued as we were shown how this could be used to digitise and display x-rays for detailed examination.

    Image based lighting

    We were shown how this technology could be used to capture real lighting information to make digital insertion of images into games, films etc. more realistic. The examples they showed us were impressive.

    Security applications

    Regardless of lighting conditions, a clear image can be obtained.

    "So what?" you might ask. Well just think about it. All the information is captured in the original video and detail is captured at every exposure setting. Nothing is lost. The possibilities are infinite.

    The challenges that face them are:

    Capture and compression.

    Each of the thirty frames per second weighs in at 24 MB. At present they are toting about 24 terabytes (24,000 gigabytes) of storage in what looks like a crate. This holds 5 hours of video at current rates of compression. The camera is a big black box with an impressive-looking lens and a serious-looking cable linking it to the storage and the computer that processes the images. This is not a consumer product yet. This is leading-edge research and development.
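    The arithmetic behind that crate is easy to check from the figures quoted above (this is just back-of-the-envelope illustration, not from the talk):

```python
# Back-of-the-envelope check: 24 MB per frame at 30 frames per second,
# feeding a 24-terabyte storage array.
frame_mb = 24
fps = 30

mb_per_second = frame_mb * fps            # 720 MB of raw data every second
tb_per_hour = mb_per_second * 3600 / 1e6  # roughly 2.6 TB per hour

print(mb_per_second, round(tb_per_hour, 2))
```

At that raw rate, five hours comes to roughly 13 TB; the fact that the 24 TB crate is quoted as holding 5 hours suggests there is overhead beyond the bare frames.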

    Standardising the HDR format

    Like all new technologies, HDR needs to establish a standard format so that everyone can plug and play and therefore build on the work done. Professor Chalmers' team is leading an EU COST Action (IC-1005 HDRi) to work across Europe to achieve this HDR standard.

    Display

    This is the rub. You have to have some way to display HDR. We were shown a bank of four displays with the same footage playing. By far the best was an LCD screen with an LED-modulated backlight.
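    Showing HDR footage on an ordinary low-dynamic-range screen means squeezing the full luminance range into what the display can reproduce. One well-known textbook approach is a global tone-mapping curve such as the Reinhard operator - a purely illustrative example here, since the talk did not say which operators the Warwick displays use:

```python
# Global Reinhard tone mapping: L_out = L / (1 + L).
# Compresses highlights smoothly while leaving dark values almost
# untouched, so everything lands in the display range [0, 1).

def reinhard(luminance: float) -> float:
    """Map scene luminance in [0, inf) to display range [0, 1)."""
    return luminance / (1.0 + luminance)

# A dim value barely changes; a very bright one is pulled just below 1.0.
print(reinhard(0.1), reinhard(1000.0))
```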
    To be able to take advantage of the technology and to push its capabilities you need both ends of the system - capture and display.

    The question and answer session

    This was a panel made up of Professor Chalmers, Mr Huw Bowen (Film maker Entanglement Productions), Mr Christopher Moir (Principal Fellow - Economics) and Dr. Ederyn Williams (Director Warwick Ventures Limited).

    We had lots of questions about where this stands in relation to 3D, and elegant questions from astronomers about cosmic photons falling on a photographic plate and how that data is stretched to reveal the universe (we were told this is not the same as tone mapping). There was information on how HDR content could be delivered to LDR displays, how LDR images could be converted to HDR, how HDR might be marketed, and the effects on your local cinema.

    Comment

    There was much interest as to where this technology fits with reference to 3D. Professor Chalmers explained that 3D only works up to 300 metres, because beyond that the cameras have to be so far apart that it becomes impractical. HDR encapsulates natural depth perception and he implied that HDR could be used to complement 3D.

    My guess is that 3D has been here before and that, as it stands, it excludes entire demographics. It excludes the percentage of people whom 3D makes ill. It is useless to anybody who has not naturally developed stereoscopic vision. It is a nuisance to anybody who has damage to, or is missing, one eye - it requires you to process two images, one with each eye, to get a 3D experience. This is currently achieved with special glasses, although there are other methods in research.

    If I understood correctly HDR will present something that all of these people will be able to appreciate. If I were a betting man, based on what I saw tonight, I would expect to be watching my movies in HDR rather than 3D in the coming years. Here is my tuppence worth: People who have stereoscopic vision retain it even if they close one eye. The brain fills in the information based on other cues. Beautiful natural images like those I saw tonight are far more pleasing to me than any 3D technology I have seen. Professor Chalmers came back to the point several times that we see in HDR all the time.

    We know the brain adapts what it sees - HDR video gives it a lot to play with.

    I will watch this research with great interest. I would not be a bit surprised if there are unexpected results!

    Resources