Graphics

World's First Full HDR Video System Unveiled

Zothecula writes "Anyone who regularly uses a video camera will know that the devices do not see the world the way we do. The human visual system can perceive a scene that contains both bright highlights and dark shadows, yet is able to process that information in such a way that it can simultaneously expose for both lighting extremes – up to a point, at least. Video cameras, however, have just one f-stop to work with at any one time, and so must make compromises. Now, however, researchers from the UK's University of Warwick claim to have the solution to such problems, in the form of the world's first full High Dynamic Range (HDR) video system."
This discussion has been archived. No new comments can be posted.

  • Wow! (Score:3, Insightful)

    by snspdaarf ( 1314399 ) on Tuesday January 18, 2011 @06:16PM (#34921198)
    Now p0rn can be filmed in sunlight and shadow at the same time!
    • by Anonymous Coward

      Only fitting that a website called "Jizzmag" should reveal it to the world.

    • And black on black porn. You know, because dark skin may be hard to see in crappy video.
  • by Anonymous Coward on Tuesday January 18, 2011 @06:16PM (#34921204)

    Personally, I think the HDR screen described, with HDR videos, would be more interesting and immediately useful than the ever-so-commonly-advertised but ever-so-rarely-purchased "3D" screens.

    • by macshit ( 157376 )

      Personally, I think the HDR screen described, with HDR videos, would be more interesting and immediately useful than the ever-so-commonly-advertised but ever-so-rarely-purchased "3D" screens.

      I totally agree -- it's quite clear the consumer electronics industry is thrashing around like crazy trying to find something to convince people to re-buy all their equipment, but the 3d tech they seem to have chosen is so completely "meh" (if not downright "ugh") that it seems almost guaranteed to fail to live up to the hopes they've pinned on it...

HDR display tech, by contrast [haha], works quite well, isn't really all that challenging technically, and is well suited to price reduction through mass production...

HDR screens, I hear, are stupid expensive though.

      Current LCD TV tech has enough trouble getting the contrast up and decent blacks.

      I have no idea what exotic tech those high-end professional HDR screens use.

      • HDR doesn't need special screens. All the work is done at the recording end. You can watch HDR on your computer screen right now. (http://www.youtube.com/watch?v=BlcLW2nrHaM)
Yes, but... that's not what it actually looks like to our eyes. Same with HDR still photography, which I've dabbled in a bit. The actual video or photograph may be HDR, but in order to display properly on an inexpensive screen, the depth is compressed. Thus, you need expensive screens to see it properly. That's not to say you can't get impressive results that are indistinguishable from the real thing to the untrained eye, but the ability to truly display HDR video has the potential to blow people away - it'd be...

  • by Anonymous Coward on Tuesday January 18, 2011 @06:24PM (#34921298)

Didn't a pair of guys do this last year using two DSLRs and a beam splitter?

Also, unless someone is building an HDR display this is all pretty academic: HDR images have to have their range compressed and then tone mapped in order to be displayed via conventional means. This is normally terribly unsubtle and results in an image that looks not entirely unlike one rendered with 3D modelling. If we are going to see another big shift in display (read: TV) technology in the next decade, I would much rather we moved away from the sRGB / YUV colour space than started fucking about with HDR content. What's the point of trying to take advantage of our eyes' exposure latitude if we can only render 1/3 of the colours?

    • by UDChris ( 242204 )

      Mod parent up. I'd rather have a revolution in home entertainment tech than another "filmed in high-def, compressed to 480i for the masses" such as the one we're currently digging out of. I still have 76 channels of standard def, even though the cable company pretty much requires you to get a box in my area, which allows for digital-to-analog conversion.

I'd rather have a revolution in tech that's so revolutionary you have to adopt, up and down the line, to be able to use it at all. Unfortunately, my pipe dream...

    • by Nialin ( 570647 )
      We need a launch pad for better visual technology. This could be crucial for future photographic endeavors.
Correct, it's been done before using two Canon 5D Mark IIs, as seen in this clip: Soviet Montage Blog [sovietmontage.com]

    • by Anonymous Coward

      read article for details of HDR display. thx

      • Lack of details, you mean. Sounds like BrightSide tech to me (i.e.: a multi-zone LED backlight through normal LCD panel).
    • by MightyMait ( 787428 ) on Tuesday January 18, 2011 @07:22PM (#34921886) Journal
If you read the article, the author mentions both the folks in SF who did HDR video previously and the fact that the Warwick team have indeed developed a new HDR display for their system.

      Also mentioned is that the new system generates 42GB/minute of data to capture images with 20 exposures per frame. My backup nightmare just grew large fangs!
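
      As a sanity check on that 42 GB/minute figure, here is a back-of-the-envelope calculation (the 12 bytes per pixel is an assumed uncompressed 3-channel 32-bit float layout, not a published spec):

          pixels_per_frame = 1920 * 1080         # 1080p
          bytes_per_pixel = 12                   # assumed: 3 channels x 32-bit float
          frames_per_minute = 30 * 60            # 30 fps, NTSC
          gb_per_minute = pixels_per_frame * bytes_per_pixel * frames_per_minute / 1e9
          print(f"{gb_per_minute:.1f} GB/minute")  # ~44.8, in the ballpark of the quoted 42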
I wouldn't take the article as gospel, as it appears to repeat what the Warwick guys have claimed. They did not, of course, develop the camera that they use; it is in fact an existing commercial product used off-the-shelf.

        Details (from the real developer) can be found here [spheron.com].

    • Re: (Score:3, Informative)

      by zeroeth ( 1957660 )
Speaking of 3D rendering, most renderers output HDR, which would be awesome to see without being tone mapped! (LuxRender has built-in Reinhard tone mapping.) And since we are on the topic of HDR, this is what it is NOT: http://www.cs.bris.ac.uk/~reinhard/tm_comp/flickr_hdr/The%20Problem.html [bris.ac.uk] (Reinhard discusses the blown-out tone mapping heavily prominent on Flickr)
    • It seems to me the answer is pretty obvious; all we need to do is bring back interleaving, with alternate bracketed frames for alternating lines. Ramp FPS up to 60, and you should get something approximating HDR video on current technology displays.

    • by ZosX ( 517789 )

Clearly you don't know much about HDR. You can also use HDR to create very natural-looking images; in fact, this is one of its primary uses. Also, you don't need an HDR display, because any HDR you see on the web is presented to you in boring 8-bit JPEG for the most part. Most screens cannot adequately display the 32-bit color necessary for true HDR to really shine. That being said, even in 8-bit JPEGs, HDR will typically give you a much larger dynamic range than what most sensors are capable of. One day...

    • by NibbleG ( 987871 )
      Yeah, they did at least six months ago... That was when I saw it... www.youtube.com/watch?v=BlcLW2nrHam
Even if you don't have a finished HDR product, being able to edit in HDR is amazing. Yes, it would be great to have HDR displays, but even without them, say you're trying to muck with brightness in a clip. You want, say, a bright sunny day that's resulting in washed-out colours to look better. You can mess with the colour curves, but depending on your video, this can look horrible.

      With HDR, just grab the brightness slider and pull it down. It'll look great.

I've used 32bpc before, but in a really roundabout way.

  • by Monkeedude1212 ( 1560403 ) on Tuesday January 18, 2011 @06:28PM (#34921330) Journal

    I first learned about HDR from Valve, during one of the developer commentaries on one game or another... (Lost coast maybe? Anyways) They were trying to explain how Bloom is done in video games, and certain other effects like how walking out of a dark tunnel to bright light will affect your vision for a tiny bit, as your eyes need to adjust to the new lighting conditions.

That's when I started looking it up, and yeah, basically the idea is that you take one shot that is underexposed (dark), one shot that is overexposed (light), and one that is properly exposed, plus as many more in between as you want. Then you feed it all into a bit of software which takes the richest colours and lighting conditions from each photo and merges them into one single image, so the dark corners remain dark, the bright lights remain bright, and the vivid colours are still vivid. It's quite cool stuff.

I'm a little curious as to how this is working: is it managing to encode the HDR in real time into its range-compressed and tone-mapped beauty at 24 or more frames per second, or does it merely record the 3 or more images simultaneously and then take a few minutes afterwards to do the encoding? The first, I think, would be more impressive, but not really necessary.
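
    For reference, the offline version of the merge described above can be sketched with OpenCV's exposure-fusion tools (a minimal illustration with hypothetical filenames and made-up shutter times; not the Warwick pipeline):

        import cv2
        import numpy as np

        # Hypothetical bracketed exposures of the same scene, darkest to
        # brightest, with their shutter times in seconds.
        images = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]
        times = np.array([1/1000, 1/125, 1/15], dtype=np.float32)

        # Recover a single floating-point radiance map from the bracket.
        hdr = cv2.createMergeDebevec().process(images, times=times)

        # Tone-map the radiance map back into 8 bits for an ordinary display.
        ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)
        cv2.imwrite("fused.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))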

    • It takes 20 images for each frame, at 30fps 1080p. You combine them yourself in post with the help of special software that can also apparently deduce the location and intensity of various light sources, allowing you to add rendered objects into the scene with realistic lighting.

Great, they reinvented Kodachrome.

Sony cameras do this in real time for still photos on consumer-grade electronics (the sub-$500 A33, for example), so I am thinking that the same can be done in real time for movies.

      Some HDR techniques don't require 3 photos, they can extrapolate from a single exposure.

      • Re:Cool stuff (Score:4, Informative)

        by nomel ( 244635 ) <turd&inorbit,com> on Tuesday January 18, 2011 @09:03PM (#34922760) Homepage Journal

Unless you have an HDR screen, this requires automatic tone mapping. The thing about automatic tone mapping is that you have to decide what intensity information to throw out, since you only have 256 values that you can display. For instance, using a 14-bit-per-color-channel Canon DSLR sensor, if you want to look at the image on your screen, you'll have to throw out 98.4% of your intensity values. It is extremely important which values you decide to throw out, especially considering there's usually a subject or subjects in a photo that you want to keep visible.

By the way, this 14 bits gets you about +/- 2 stops... the camera they're talking about gives you 20 stops... that's an *incredible* amount of intensity information (given the file size). Really this is more of a solution for filming a scene once and not having to worry about whether your camera exposure is set correctly, which *is* extremely valuable.

Now, viewing HDR movies? Not in theaters with any sort of current projection technology at reasonable ticket prices. The projection bulbs would have to go up probably 20 times in brightness, keeping similar crappy projection-theater black levels. And how do you deal with the ambient light coming off of your now incredibly bright white screen and bouncing off of the audience? At home, do you really want a TV that bright? From this bit-tech [bit-tech.net] review: "The light from the box was so bright, or indeed, was of such great contrast with the surrounding area, that it almost hurt to look at."
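
        The 98.4% figure above is easy to verify (a one-liner's worth of arithmetic):

            sensor_levels = 2 ** 14      # 16384 distinct values from a 14-bit ADC
            display_levels = 2 ** 8      # 256 values per channel on a standard screen
            discarded = 1 - display_levels / sensor_levels
            print(f"{discarded:.1%}")    # 98.4% of the distinct levels are thrown away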

        • Re:Cool stuff (Score:4, Informative)

          by The13thSin ( 1092867 ) on Tuesday January 18, 2011 @11:46PM (#34923792)

I would like to correct a mistake prevalent here and in the news summary: common cameras do NOT get 1 (or 2) stops of light information (the difference between black and white). In fact, cameras like the Canon 7D have about 11 stops of dynamic range [source] [clarkvision.com], and professional video cameras like the Red One have about 13 1/2 stops of difference between black and white [source] [red.com]. Still, as X stops means 2^X times the light difference, going from 13 1/2 to 20 stops is a pretty huge deal.

Another misconception: the number of bits per channel only indicates precision, not dynamic range. Of course, when the researchers in the article created a 20-stop camera, they needed much better precision to get similar quality across the same range as current cameras, which leads to the quoted 42 GB per minute uncompressed video stream.

(Please note: DSLR cameras like the Canon 7D can detect and save more dynamic range than is apparent from the JPEGs they create; the extra information is saved in the RAW file, which allows you to change exposure settings at least 1 stop in post-processing without a (noticeable) drop in quality.)

    • Re:Cool stuff (Score:5, Interesting)

      by Doogie5526 ( 737968 ) on Tuesday January 18, 2011 @07:17PM (#34921844) Homepage

I find it kind of funny that HDR means the opposite thing in photography versus video games.
      http://img194.imageshack.us/img194/7391/1244894383293.jpg [imageshack.us] (pulled from some old digg post)

Traditionally, games render the world and keep it between 0 and 1 (zero being black/completely dark and 1 being white). HDR rendering computes values above and below that range and clips on output, so things that are blown out (like reflections and highlights) read as super-white. I think it was an update to Half-Life 2 that first did this in a commercial game.

      In photography, they take multiple exposures and stick them in to an HDR image. Then, they use tone mapping to convert it to an 8-bit visible image. Tone-mapped images are generally called HDR, even though that's a misnomer.
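
      The two meanings can be put side by side in a toy example (the global Reinhard curve x/(1+x) stands in for tone mapping generally; the values are made up):

          import numpy as np

          # Linear scene radiance; values above 1.0 are "brighter than white".
          radiance = np.array([0.02, 0.5, 1.0, 4.0, 16.0])

          # Game-style output: anything past 1.0 simply clips to white.
          clipped = np.clip(radiance, 0.0, 1.0)

          # Photo-style tone mapping (global Reinhard): highlights compress
          # smoothly instead of clipping, so detail above 1.0 survives.
          tone_mapped = radiance / (1.0 + radiance)

          print(clipped)      # [0.02 0.5 1. 1. 1.]
          print(tone_mapped)  # ~ [0.02 0.33 0.50 0.80 0.94]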

  • http://www.panavisionimaging.com/imagers_DMAX.htm [panavisionimaging.com]

I get the feeling lots of people have been working on this.

  • by NeutronCowboy ( 896098 ) on Tuesday January 18, 2011 @06:40PM (#34921482)

    I mean, we have 1080p 3D stereovision with full-micron surround color effects, and yet, movies still stutter like mad on a fast pan because that damn 24 fps capture rate just can't keep up. Is it really so much harder to capture 60 fps and encode than it is to do a working 3D effect? I'd pay more for movies that have reliable framerates in the 60 Hz range than I would for 3D.

    • by Doogie5526 ( 737968 ) on Tuesday January 18, 2011 @07:06PM (#34921718) Homepage

      Roger Ebert asked the same thing (on page 4)
      http://www.newsweek.com/2010/04/30/why-i-hate-3-d-and-you-should-too.html [newsweek.com]

I think there are a couple of reasons. The first, and probably most significant, is nostalgia on the part of filmmakers. They love the motion blur of 24fps; it helps evoke the "feeling" of film. Every film student I know either wants to shoot at or convert their footage to 24fps. There is a noticeable difference. When you increase the resolution and frame rate, you lose motion blur and it starts to look like home video or video games (which generally don't compute motion blur at all).

      Another big issue is the amount of light. When you have more frames in a second, each frame has less light to suck up. It's a big issue with high-speed film. Having sensors that are more light-sensitive is a fairly recent thing (combined with advanced noise reduction) and will continuously get better.

The stuttering is something cinematographers keep in mind when shooting (or at least, they should). I read an article about shooting IMAX, and they said the biggest problem was the stuttering. They're also using 24fps, but the screens are much larger: when you pan, an object can jump 2 to 3 feet per frame. They intentionally used slower pans to compensate. You noticing this is probably a side effect of larger theatrical screens and larger TVs at home.

      • Motion blur can just be re-added digitally while retaining 60 fps. And if you don't project film, the light intensity problem you mention is not a problem.
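
        The crudest digital version of that is a temporal average over neighbouring high-rate frames (a naive sketch; proper motion blur needs per-pixel motion vectors, as the reply below notes):

            import numpy as np

            def fake_motion_blur(frames, weights=(0.25, 0.5, 0.25)):
                # frames: list of consecutive 60 fps frames as float arrays.
                # A temporal triangle filter only approximates shutter blur;
                # fast motion really needs motion-compensated blur.
                w0, w1, w2 = weights
                return [w0 * a + w1 * b + w2 * c
                        for a, b, c in zip(frames, frames[1:], frames[2:])]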

        • by Anonymous Coward

I believe re-adding motion blur is harder than you think. The 2D motion blur techniques I've seen basically do a smearing, which looks terrible on composited images (and looks OK on individual, pre-composited elements -- as long as it's an up/down, left/right motion and not toward or away from the camera). Better algorithms may exist, but the processing power necessary would be prohibitive. It would also look different, which is part of the nostalgia filmmakers have. Spielberg still edits his films by cutting...

    • Higher FPS would make motion a lot sharper (less motion blur), but it won't solve the stuttering. Stuttering is caused by some CMOS-based HD cameras (like the very popular Red One) having rolling shutter, where they capture the image in chunks instead of all at once. Find an old PC game, turn off vsync, and look at the nasty tearing you get when it tries to render 100fps onto your 60fps monitor in a rolling fashion. Same issue -- high FPS won't solve it.
    • The same motion detection that video compression uses to reduce data rates could be used to interpolate as many frames as you wish between two existing frames. Goodbye stutter. As software projects go, it's not even a particularly difficult job.
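
      A minimal sketch of that idea using dense optical flow in OpenCV (an illustration only; warping by half the flow is a crude approximation that breaks down at occlusions, which is part of why real interpolators are harder than this):

          import cv2
          import numpy as np

          def midpoint_frame(frame_a, frame_b):
              # Estimate dense per-pixel motion from A to B (Farneback).
              gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
              gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
              flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                                  0.5, 3, 15, 3, 5, 1.2, 0)
              # Sample A halfway along the flow vectors to fake the mid frame.
              h, w = gray_a.shape
              xs, ys = np.meshgrid(np.arange(w), np.arange(h))
              map_x = (xs + 0.5 * flow[..., 0]).astype(np.float32)
              map_y = (ys + 0.5 * flow[..., 1]).astype(np.float32)
              return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)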
    • by nomel ( 244635 ) <turd&inorbit,com> on Tuesday January 18, 2011 @08:09PM (#34922280) Homepage Journal

      It's because it's not pleasing to the eye. 60fps movies look very strange...like home videos. The 24 fps is what gives them that "movie look". If you look at some example vids from some of the newer consumer cameras that can do 24 and 60fps...you'll see the huge difference it makes.

      • 24fps has nothing to do with being pleasing to the eye. 16fps was common before this, Edison was pushing for 48fps. 24fps was a compromise that was chosen simply because it was the slowest, meaning cheapest, speed that allowed for good sound quality. http://en.wikipedia.org/wiki/Film#Technology [wikipedia.org]
        • That all may be true, but that doesn't mean that 24fps isn't pleasing - because it is! It may have just been a happy side effect, but it worked out quite well I'd say.

      • by Anonymous Coward

        It's because it's not pleasing to the eye. 60fps movies look very strange...like home videos. The 24 fps is what gives them that "movie look".

        Ah, now I understand why reality displeases me so much, it's that ridiculously high fps.

  • They claim

    "more representative description of real world lighting by storing data with a higher bit-depth per pixel than more conventional images"

So basically they store the image with greater color resolution than conventional 8-bit RGB -- they are not taking real-time over/under-exposure passes to get HDR-enhanced mixed output.
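
    To illustrate, a higher-bit-depth frame is just pixels kept as linear floats instead of 8-bit integers. A toy sketch (synthetic stand-in data; assumes OpenCV's Radiance .hdr writer, which handles 32-bit float images):

        import cv2
        import numpy as np

        # Synthetic frame: a linear ramp whose values run far past the
        # 0-255 range an 8-bit format could hold.
        frame = np.linspace(0.0, 1000.0, 1080 * 1920 * 3, dtype=np.float32)
        frame = frame.reshape(1080, 1920, 3)

        # Radiance .hdr preserves the extended range; JPEG would clip it.
        cv2.imwrite("frame.hdr", frame)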
  • Two companies already dealt with this in the past, though they aren't doing as much with the technology as one would hope.

1 - Fuji has a sensor that does HDR already; it has for several years, in fact. It has two overlapping layers of sensors, takes two images at the same time, then blends the two together. Done; you can buy it today. It just isn't in use in a video camera yet (and their current camera doesn't do video very well).

2 - Foveon also has a different sensor approach where it is layered like film. The...

The Foveon sensor is not an HDR sensor. It simply does away with the Bayer filter by having layers in the silicon so that different wavelengths penetrate to different depths; the amount of light received is still proportional to the amount of charge the photosite will contain after the exposure is over... so this still suffers from the dynamic range problem of standard single-color-per-photosite sensors.

      • by Plekto ( 1018050 )

Well, that's somewhat true. A typical sensor will accurately capture all but about one f-stop higher and lower than human vision. That means that overblown super-HDR-type movie poster shots and the like aren't something that you should expect in any camera. It behaves like an HDR camera because it doesn't hit 255 and simply put solid white in the image when it gets over-exposed. A typical Bayer sensor will do this and there's nothing to be done about it - you hit that wall and you...

We just think they are. We change our focus and view, squint, shift our heads, and shade our eyes to avoid brightness when viewing dark areas. Video cameras can do most of that, too, plus they can zoom, something eyes lack. The problem is representing that view on a monitor, which does not have the dynamic range of the real world. Photographic prints that have HDR compensation may look surreal, and others look washed out in places. Video has the same issues. It takes a lot of post-production to make it appear...
  • Is this the same HDR that the iPhone 4's still camera has?

Heh. I think most photographers/videographers would be offended by that notion, unless you're being sarcastic. 8-)
      • by joh ( 27088 )

Heh. I think most photographers/videographers would be offended by that notion, unless you're being sarcastic. 8-)

Still, it works basically the same way. The iPhone camera in HDR mode takes three exposures and combines them.

  • Video cameras, however, have just one f-stop to work with at any one time, and so must make compromises

    Just to name one example, the Red One has a wonderful dynamic range of 11.3 stops. http://en.wikipedia.org/wiki/Red_Digital_Cinema_Camera_Company [wikipedia.org]

I think the point is 'at any one time.' The range of the aperture is different; my Canon "L" glass can range from f/2.8 to the upper 20s. You can't have a camera that uses a whole range at once, as the aperture can only be a hole that limits the light. The only way you could have more than one at once is a camera with multiple sensors and multiple apertures. Don't know if that would even be possible.
    • What the author is trying to say is that conventional cameras are set to one particular f-stop at any given time. Of course, the f-stop can be changed, but you're still only using one setting at a time. With HDR photography/videography, the same image is captured multiple times at different f-stops.
      • by muridae ( 966931 ) on Tuesday January 18, 2011 @08:42PM (#34922554)

The summary is just plain wrong, and the article may be as well. First, there seems to be some massive confusion between f-stops and dynamic range 'stops'. An f-stop is your aperture setting, and is part of the control that determines how much light gets into the camera. If I go out wanting to take an HDR picture of something, the f-stop is the last control I will use in setting each exposure, because the f-stop has the side effect of changing the depth of field - that's covered in photography 101. If you change it across a set of pictures, some things will be in focus in one frame and out of focus in others, and it doesn't look that nice once post-processed.

On the other hand, a dynamic range stop is just notation for double the amount of light. If someone said "that film has about 9 stops of range," you would know that the brightest area in a picture could have 2^9 times as much photonic flux as the darkest. Or, if you are more camera-focused, you would know that the film would only record detail in the 4.5 stops above and the 4.5 stops below whatever you set the exposure for: an object 5 stops brighter than your exposure point would be a washed-out blur, and something 5 stops darker would be total shadow. A quick run through Google suggests that Kodachrome, the legendary film, could record only about 8 stops of dynamic range. The human eye can pick up something closer to 24 stops. GP's Red camera records 11.3 stops. Some people will claim that a digital camera gets as many stops as bits, but that is only true if the analog-to-digital conversion is logarithmic, and so is the display it is shown on. Mine runs about 7 stops, depending on other settings.

So, what's that got to do with this camera? I suspect what the article meant to say is that the camera captures 20 stops of data at 30fps. Better than the Red, better than almost any film in existence. It is doing in a single shot what other cameras do in several. All that will mean is less blur in HDR video, since subjects won't move irregularly between exposures. One would still have to tone-map the output down to a range that can be displayed for printing, projection, or DVD.
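
        Since a stop is a factor of two, the ranges quoted in this thread convert to contrast ratios like so (a quick worked example; the stop counts are the ones claimed above, not independently verified):

            for label, stops in [("Kodachrome (approx.)", 8.0),
                                 ("Red One", 11.3),
                                 ("Warwick system", 20.0),
                                 ("human eye (claimed)", 24.0)]:
                print(f"{label:>22}: {2 ** stops:>12,.0f} : 1")
            # Kodachrome ~256:1, Red One ~2,521:1,
            # Warwick ~1,048,576:1, eye ~16,777,216:1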

        • by Plekto ( 1018050 )

          Which brings me to the real point of this insanity surrounding "HDR".

          Q: Why do we actually need HDR?
          A: Because today's sensors for the most part use technology that makes the images look like junk in poor lighting conditions. Specifically, the colors are terrible and there's very little ability to deal with over-exposures. The range between "perfect image" and "junk" is a very small margin with not much leeway or forgiveness between the extremes.

The response so far has largely been to increase dynamic range...

          • by muridae ( 966931 )

            Which brings me to the real point of this insanity surrounding "HDR".

            Q: Why do we actually need HDR?

You could just as well answer that it is because displays suck and cannot reproduce the range between dark and light that exists in the real world. Each weak spot must be conquered one at a time. Really, how much marketing has gone into convincing people that HD is the future, while we are stuck with displays that still use the sRGB color space? Yeah, a wider space will waste bit space on imaginary colors, but getting a display that can show saturated greens and yellows would be great.

The other point to the hyper-saturation...

            • by Plekto ( 1018050 )

Though someone could also argue that film, or something taken with a camera, could in theory be printed or used in a theater, where the limitations of an LCD display (now our only option) wouldn't apply.

              But yes, if we're talking about computers, there's another limiting factor. We'd essentially need a whole new system from the ground up to take and utilize the images - aside from stuff like shooting movies and the like(which they already do for IMAX and similar).

What we need instead isn't HDR, but HCR. I he...

  • by RichiH ( 749257 ) on Tuesday January 18, 2011 @07:41PM (#34922032) Homepage

It does not matter that a camera can only have one aperture and one ISO setting. Our eyes have only one iris, as well. What matters is that our retinas & brain have a dynamic range that trumps CMOS/CCD sensors. Oh, and the fact that our eye cheats by seeing more colour in the middle of the retina and more bright/dark & movement at the edges of our vision.

    That being said, I am looking forward to anything that extends the dynamic range in both cameras _and_ displays.

    • by Malc ( 1751 )

I'd also suggest that as our eyes look around a scene, they adjust for whatever we're looking at directly. Cameras don't do this localised adaptation, so they effectively need to have a greater range than the eye.

  • The word "dynamic" has a meaning.
    This is not an "HDR" display, nor does such a display exist, nor would anyone want one.

    This is an "HR" display.

    "The new system, by contrast, captures 20 f-stops per frame of 1080p high-def video, at the NTSC standard 30 frames-per-second. In post-production, the optimum exposures can then be selected and/or combined for each shot, via a "tone-mapping" procedure."

    They're using the typical method of taking many exposures of the same frame. Makes sense.
I would hope they're using...

Now that there is an HDR video camera, when are we going to get an HDR video monitor? It seems kind of useless to have an HDR camera with no way to display its output.
RED has shipped its first two production EPIC-M cameras, which have a feature called HDRx that allows up to 18 stops of DR in a single exposure for every motion picture frame. It doesn't require a beam splitter or any other gadgetry.

Peter Jackson has a number of them he's using for The Hobbit. I think the latest Spider-Man is shooting with it too.

    It does that at 5K, which is 5120x2880 resolution.

    As to comments that HDR is better than 3D, or that you don't need lighting... they are unfounded. You still need lighting to create the precise mood you want. The advantage is that you can now create that mood more easily in more lighting conditions. This is especially important in conditions that the film maker can not control. The first RED demo was a shot from inside a barn out the barndoor into the Arizona desert. The camera held detail in the shadows inside the barn and in the sky and on white surfaces in direct sunlight.

The normal solution to that lighting situation is to pour about a hundred thousand watts of lighting into the inside of the barn, hope nothing catches on fire, and that you are close enough to the sun...

    • by dfghjk ( 711126 )

      HDRx uses two exposures. The exposures aren't merged, either, they are stored as separate tracks in the data stream.

  • by CoolGuySteve ( 264277 ) on Tuesday January 18, 2011 @10:47PM (#34923460)

    Didn't Autodesk Toxik 2008 already do HDR compositing with RED cameras?

    Not only that, didn't they sell it to real live customers?

This is not the first, and it's not even notable, frankly.

Read the article. It is the first, and it is notable: they have one camera that generates video with a higher number of bits per pixel; they then use a custom display that can properly render this richer information.
Before them, as far as I know, everybody kept reducing the information to the 24-bit standard.

  • o.k. - attention-grabbing subject line aside... I can't RTFA b/c it's slashdotted, so I don't know exactly what dynamic range we're talking about. But the much hyped Red Epic camera (sequel to the Red One) has full-motion HDR, and is shipping as of this month. Models with this feature range from around $10k to around $40k - so admittedly more prosumer than consumer.

It stores the extra data in a secondary video stream, so that you can tone-map in post. And apparently, it can be dialed up & down, so that...

  • A CCD was operated at 100 Hz instead of 50Hz (PAL) and alternating frames had different exposure times. The two interleaved video streams were merged in real time into a high dynamic range image and then compressed into a standard dynamic range image where details could be clearly seen in both dark and bright areas.

This was connected to a videoconferencing system and worked very well when the room lights were turned off for a projector. You could see both the presenter's face and the projected image. A standard...
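
    A bare-bones version of that alternating-exposure merge might look like this (a sketch; the 4:1 exposure ratio and clipping threshold are illustrative, not the described system's values):

        import numpy as np

        def merge_alternating(short_exp, long_exp, ratio=4.0, clip=0.99):
            # Both frames as float arrays scaled to [0, 1]. Bring the short
            # exposure into the long exposure's units, then keep the long
            # frame everywhere except where it has blown out.
            short_scaled = short_exp * ratio
            return np.where(long_exp >= clip, short_scaled, long_exp)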

HDR photos you find on the web are actually tone-mapped photos. They were HDR when they were captured, or when different exposures were combined into a single image, but after that stage they were tone mapped in order to make all the details visible on a conventional display.

Tone mapping is something we may stop doing when we have proper HDR displays like the one in this article. A display like that will more closely resemble the real world, and tone mapping will be unnecessary because our eyes can handle high...
  • by Anonymous Coward

    "The human visual system can perceive a scene that contains both bright highlights and dark shadows, yet is able to process that information in such a way that it can simultaneously expose for both lighting extremes"

Completely untrue. Any decent DSLR should have better dynamic range than the human eye. When you look at a scene, you see the overall picture and it seems clear, but really at each instant you're seeing a small clear area in detail and all the rest of the scene in much less detail. You don't...
