Lytro Illum Light-Field Camera Lets You Refocus Pictures Later

Iddo Genuth writes "Earlier today Lytro introduced a new light-field camera called Illum. This is the second camera with this innovative refocusing technology from the California-based company founded in 2006. The new camera is a more advanced version of the first camera, introduced in 2012. It has a much larger sensor with four times the resolution (Lytro still uses the term megarays instead of megapixels), a much larger and longer zoom lens with an f/2 constant aperture, and of course the ability to refocus after you take a picture (the new Illum can refocus on many more points in the image compared to the older version). Users will also have more control of the camera, a larger screen, and the ability to create regular JPEG images or videos made from the refocused images they capture."
  • IIIum? (Score:5, Informative)

    by EmagGeek ( 574360 ) on Tuesday April 22, 2014 @03:08PM (#46817371) Journal

    Is that IIIum, Illum, or IlIum?

    The font slashdot uses makes it impossible to tell.

  • Meh (Score:4, Insightful)

    by vadim_t ( 324782 ) on Tuesday April 22, 2014 @03:17PM (#46817465) Homepage

    It's mostly a solution in search of a problem.

    Photographers choose what to focus on very intentionally; it rarely makes sense to focus on anything else. Of course it's possible to misfocus, but in that case it makes no sense to let the user play with it.

    It's still going to be low res, because you get a small fraction of the "megarays" the sensor provides. The spec for this camera was 40 megarays, IIRC, so it might get around 4MP, which can't really compete with a modern DSLR. While resolution isn't everything, having some margin for cropping and large prints is a very good thing.

    The control for the interactive photos is still clunky. I can't find a way to, for instance, get the whole image in focus, though that should be possible -- it does it while changing perspective.

    It doesn't fix the other problem that leads to blurriness -- camera shake. It's all well and good to be able to refocus, but most people learn to focus right pretty fast. The real problem is low-light environments, and this isn't going to save you if you handhold and shoot at 1/10.

    The sample images still look low-res and blurry.

    It costs $1600 and doesn't seem to have interchangeable lenses -- what, are they insane?

    Overall an interesting toy, but it doesn't seem to have a practical use.

    • Re: (Score:1, Interesting)

      by brunes69 ( 86786 )

      - Yes it is $1600 and 4MP. Do you know how much the first DSLRs were with only 1MP? Technology evolves.

      - Why do you need interchangeable lenses when you can focus on or apply lens effects on whatever you want after the fact? You would not care about lenses with this kind of technology at all - in fact, the elimination of lenses means this technology could result in large cost savings over the long haul.

      • by Anonymous Coward

        You don't change lenses to change focus; modern lens assemblies move lens elements to change focus, and open or close the iris to adjust the aperture. You change lenses to change the focal length (what zoom does) beyond the optical limits of the lens on the camera.

        • I prefer the old ways.

          To change focal length, change the lens.
          To change the aperture, twist the aperture ring.

          To focus the lens, twist the focusing ring.

          To zoom, move closer to, or farther from your subject.

          It may not make for better pictures, but it makes the process of taking them more enjoyable.

      • Re:Meh (Score:4, Insightful)

        by vadim_t ( 324782 ) on Tuesday April 22, 2014 @03:38PM (#46817665) Homepage

        - Yes it is $1600 and 4MP. Do you know how much the first DSLRs were with only 1MP? Technology evolves.

        The problem is that physics gets in the way of resolution increases, and the best modern DSLRs already have sensors that can out-resolve most lenses.

        Which means that a Lytro style camera is going to necessarily sacrifice quality.

        You can make a larger sensor, but that costs serious $$$. This thing is in the price range of a full frame camera. If I'm guessing right, to compete in quality with a normal one it'd have to go with a medium format sensor, and those start at around $10K.

        - Why do you need interchangeable lenses when you can focus on or apply lens effects on whatever you want after the fact? You would not care about lenses with this kind of technology at all - in fact, the elimination of lenses means this technology could result in large cost savings over the long haul.

        Because lenses have nothing to do with focusing? All lenses can focus at all ranges. You can't put an f/1.4 on this for shallower depth of field and better low light performance, or a 10mm wide angle, or a fisheye, or a better telephoto lens, or a tilt/shift for architecture.

        It could however be very cool for macro, but oddly enough they don't seem to be hurrying to demonstrate that. Which is a pity -- extreme macro is a huge pain to focus, and that's the one area where this thing could show some promise.

        • by jfengel ( 409917 )

          You can't put an f/1.4 on this for shallower depth of field and better low light performance, or a 10mm wide angle, or a fisheye, or a better telephoto lens, or a tilt/shift for architecture.

          I thought the point of this contraption was that those were things you could do after the exposure (except perhaps for the "low light performance"). Am I off base?

          • by Chirs ( 87576 ) on Tuesday April 22, 2014 @04:43PM (#46818181)

            The lens is F/2, so you can't get the equivalent brightness of an F/1.4 (though you might be able to get the depth of field in post-processing).

            The lens is 30-255mm, which is a pretty good range, but you can't swap it out to go wider/longer.

            Tilt-shift type effects (an angled focal plane) should be doable in post-processing, but it would depend on whether they've added that functionality to their software.

            • by jfengel ( 409917 )

              Thanks! I'd mod you "informative" but obviously... (It's actually giving me a mod drop-down box, but I know it wouldn't really work.)

             • I get what you're saying, but in the practical world it makes no difference. No one picks f/1.4 because they need that low-light performance. At that aperture the depth of field starts bordering on alternative art. I know of very few situations where you would benefit from the light-gathering capability in a scene so dark that it would be almost impossible to accurately focus the hairline depth of field on your subject anyway.

              Even the advent of high ISO cameras has seen many people get away from shooting

              • by Rich0 ( 548339 )

                 I get what you're saying, but in the practical world it makes no difference. No one picks f/1.4 because they need that low-light performance. At that aperture the depth of field starts bordering on alternative art. I know of very few situations where you would benefit from the light-gathering capability in a scene so dark that it would be almost impossible to accurately focus the hairline depth of field on your subject anyway.

                 Depth of field is also a function of range. If you're taking a picture of a school play from halfway back in the auditorium, a fast lens will help quite a bit. The subjects are moving, so you can't use longer exposures, and image stabilization is useless -- you need a somewhat fast shutter. Flash on its own won't do any good at that range unless you can stick remote flashes on the stage or you're using something fairly exotic (and high-intensity flashes during a play aren't exactly unobtrusive). At a distan

                 • Fair call, save for the comment on the sensor. Yes, every sensor degrades with increased ISO. Yet every new camera generation improves image quality at the same ISO, too. About 5 years ago I would have happily stood next to you screaming for faster lenses. I was also worried about noise. With my current camera, I get a decent image with an uninspiring lens in candlelight. Which brings me back to using f/1.4 for artistic purposes more than anything.

                  On a side note have a look at the existing lens size and the f

            • Even though the light/brightness is F/2, the depth of field scales with the size of the sensor (which is listed as 1"). So compared to a full frame camera, this device has a crop factor of around 2.5. That means the F/2 is equivalent to a 35mm full frame camera with a maximum aperture size of F/5.

              So having a super narrow depth of field with good background defocus will require a larger distance between the object and the background, and a smaller distance between the camera and the object. This'll be ok for
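
              For anyone who wants to check that equivalence, here's a quick sketch in Python. The 13.2 x 8.8 mm dimensions are the conventional size for a "1-inch type" sensor, assumed here rather than taken from Lytro's spec sheet; it lands near the parent's ~2.5x / F/5 figures.

                  import math

                  # Assumed sensor dimensions in mm: "1-inch type" sensors are
                  # conventionally about 13.2 x 8.8 mm; full frame is 36 x 24 mm.
                  FULL_FRAME = (36.0, 24.0)
                  ONE_INCH = (13.2, 8.8)

                  def diagonal(size):
                      return math.hypot(*size)

                  # Crop factor is the ratio of the sensor diagonals.
                  crop = diagonal(FULL_FRAME) / diagonal(ONE_INCH)
                  print(f"crop factor: {crop:.2f}")            # ~2.73

                  # Depth-of-field-equivalent aperture on full frame.
                  print(f"f/2 acts like f/{2 * crop:.1f}")     # ~f/5.5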

        • Re:Meh (Score:5, Informative)

          by NIK282000 ( 737852 ) on Tuesday April 22, 2014 @04:35PM (#46818119) Homepage Journal

           The resolution at this point doesn't matter; this is a demo product that will only be bought by future investors and camera heads (quite possibly myself included). The rest of your lens woes don't really apply to a plenoptic camera: the DOF is calculated when the image is made, and if they can get unlimited depth of field they can get ultra-thin DOF just the same way. They also boast that you can use lenses with no aspherical elements, which means making add-on lenses for future versions of this camera would be very cheap. A tilt lens is not required with a plenoptic camera; it captures all parts in focus and then calculates the distance and angle you pick for a plane of focus -- you could even have a calculated "surface of focus" that is wavy or bent.

           If they make enough on this one, their next camera should be a photographer's dream.

          • It's a basic sampling problem. Instead of dedicating all your pixels to a single image, you're basically splitting them up and sampling many different images simultaneously. This will result in lower resolution in the final image than if you took that image with a "standard" camera. On the other hand, it makes it less likely that you'll miss a shot due to focus issues.

            It's a tradeoff between resolution and flexibility.

            Making the sensor much bigger would allow for more pixels, but would also be more expen
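
            To make the parent's "splitting them up" point concrete, here's a toy sketch in Python/NumPy. The 3x3 directional layout and the array shapes are illustrative assumptions, not Lytro's actual micro-lens geometry:

                import numpy as np

                # Toy plenoptic layout: each micro-lens covers an s x s block of
                # sensor pixels, one pixel per viewing direction.
                s = 3                        # directional samples per axis
                H, W = 400, 600              # micro-lens grid = final 2D resolution
                sensor = np.random.rand(H * s, W * s)   # the raw "megaray" capture

                # Regroup the raw pixels into s*s sub-aperture images of H x W each:
                # the same total data, but each individual view gets only 1/s^2 of
                # the sensor's pixels -- the resolution/flexibility tradeoff.
                subaps = sensor.reshape(H, s, W, s).transpose(1, 3, 0, 2)
                print(subaps.shape)          # (3, 3, 400, 600): nine low-res views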

      • Why do you need interchangeable lenses when you can focus on or apply lens effects on whatever you want after the fact?

        Zoom.
        Aperture.

    • Comment removed based on user account deletion
      • Re:Meh (Score:4, Insightful)

        by ceoyoyo ( 59147 ) on Tuesday April 22, 2014 @04:15PM (#46817983)

        No, you can't. The simplest tradeoff that you might switch lenses for is aperture versus focal length. A larger aperture is good for low light. A longer focal length is good for things that are far away. You can fake a longer focal length by cropping your picture, but that reduces resolution (something this camera already has a problem with) and requires a lens with much higher resolving power (which is ALSO something this camera has a problem with).

      • Re:Meh (Score:5, Informative)

        by bws111 ( 1216812 ) on Tuesday April 22, 2014 @04:53PM (#46818243)

        You seem to be confusing lenses and filters. Lenses are not used to 'apply distortions' (although a side effect of many lenses is distortion). Lenses are used to control what fills the frame of the picture.

        I'll give you an example. Suppose you are on the sidelines at a football game and want to take some pictures. One picture might be of what your eye sees -- a good portion of the stands on the other side of the field, grass between you and the players, and the players. A better picture may be of only the player controlling the ball. Another might show mostly the stands, to convey the size of the crowd.

        A point-and-shoot camera, or a camera with a 'normal' lens, is going to take the first picture. A telephoto lens would take the second picture (you could zoom in and get just the player's face, including the sweat dripping from his hair), and a wide-angle lens would take the third picture.

        Now, why can't this camera eliminate those lenses? Well, suppose you have a 10MP camera. In the wide-angle shot, the player's face probably takes up 0.1% of the frame. If you are using all 10MP to capture the wide-angle shot, the player's face only gets about 10K of the pixels. If you try to blow the player's face up to full frame you have an extremely blocky picture with no detail at all. On the other hand, if you want the player's face to occupy 10MP, you need to capture 10 GIGApixels in your wide-angle shot.
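
        The parent's arithmetic checks out; a two-line sanity check in Python:

            frame_px = 10_000_000        # 10MP wide-angle shot
            face_fraction = 0.001        # face covers 0.1% of the frame

            print(frame_px * face_fraction)           # 10,000 px on the face
            print(frame_px / face_fraction / 1e9)     # 10.0 -> 10 gigapixels needed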

        • by lgw ( 121541 )

          Well put, but don't forget "gather more light". The reason long lenses tend to be big lenses is to increase the area you're gathering light from. The smaller and more distant^2 your field of view, the less light you have to work with. A face at 100 yards is going to need a large lens surface to give the electronics something they can see.

        • The lens on the Illum already goes up to a 255mm focal length, which is longer telephoto reach than most people ever use. It should be plenty for capturing the player's face in your example.

    • Re:Meh (Score:5, Informative)

      by Anonymous Coward on Tuesday April 22, 2014 @03:39PM (#46817673)

      In a fast-paced environment like a concert or sporting event, the ability to literally point and click, and pull the best shots later, is a fantastic advancement.
      I love my first gen Lytro. I've gotten some amazing shots from it.

      Don't knock it until you've tried it.

      • by Jmc23 ( 2353706 )

        Don't knock it until you've tried it and don't understand how to take a good photo.

        FTFY.

        Training wheels might seem like a godsend to the trike expert, but otherwise....

    • Re:Meh (Score:4, Informative)

      by nine-times ( 778537 ) <nine.times@gmail.com> on Tuesday April 22, 2014 @03:41PM (#46817693) Homepage

      It's mostly a solution in search of a problem.

      I think you're right that most people haven't been searching for this kind of camera, but I think you could have made the same argument about digital cameras in the first place, as well as computers in general. Things were just fine before. Professionals who were used to doing things the "normal" way saw them as more trouble than they were worth. They were expensive and had technical shortcomings compared to the conventional solution.

      However, it allows you to do something new that you couldn't do before. I'd say there's a good chance the technology will be refined and you'll see this sort of thing become cheaper and better. People will find cool and interesting applications. Something neat will probably come of this.

      • Most of the first DSLRs were sold to photojournalists who had deadlines. They offered all the usual photographic tools for creating a record of the day's events, without the hassle of developing film.

    • The Chicago Sun-Times decided to replace its photographers with iPhones -- the result was notably less dramatic photos. I'm not sure what became of that experiment, but the Illum might be more useful than an iPhone, as a trained photo editor could take the raw Illum files gathered by print reporters and refocus them appropriately. I'm not sure that this would end up being ethical, though.

      • trained photo editor could take the raw Illum files gathered by print reporters and refocus them appropriately. I'm not sure that this would end up being ethical, though.
        Why? They use filters all the time and often post-process for lighting, both of which change the amount of "information", in the engineering sense, in the picture. Post-focusing does not remove any information; information-wise it is similar to cropping a picture.
        • From the AP Code of Ethics

          The content of a photograph must not be altered in Photoshop or by any other means. No element should be digitally added to or subtracted from any photograph. The faces or identities of individuals must not be obscured by Photoshop or any other editing tool. Only retouching or the use of the cloning tool to eliminate dust on camera sensors and scratches on scanned negatives or scanned prints are acceptable.
          Minor adjustments in Photoshop are acceptable. These include cropping, dodgi

          • Precisely. Cropping and tone/color adjustments are OK. Also note that dodging/burning is basically blurring out areas that the photographer/editor does not want the viewer to focus on.
            That said, they say blurring of backgrounds is not OK. Maybe this will require that the editor bring all visible elements of the photo into focus and then change the color to hide the background again -- this is also possible with Lytro, since it does have the information available. But it is a strange way of hiding "information" to first
            • Hmm. That's an odd way of using dodging and burning. It's usually used to improve contrast by overexposing or underexposing selected areas of an image.

    • by AK Marc ( 707885 )
      I see the eventual benefit being something like a security cam with CSI-like refocusing capabilities. It may be 3+ generations away, but it's getting closer to possible.

      Zoom. Enhance.
      • by vadim_t ( 324782 )

        Except you get much less zoom and enhance with this thing, because you reduce your resolution to 10% of the sensor's capability for the sake of the depth of field control. A 40 MP sensor turns into a 4 MP one, which is about a third of the linear resolution. A face 100 pixels wide on an image is useful. A face that's 30 pixels wide, rather less so.

        For CSI you'd want the near opposite of this camera: high resolution, a small aperture to keep everything in focus at once, focused to infinity, and excellent high-ISO performance to compensate for the small apertur
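
        The area-versus-linear distinction above is easy to trip over; a quick check in Python:

            import math

            # Pixel count scales with area; linear detail scales with its
            # square root, so a 10x cut in pixels is only ~3.2x less width.
            linear = math.sqrt(40 / 4)                            # 40 MP down to 4 MP
            print(f"{linear:.2f}x less linear resolution")        # ~3.16x
            print(f"a 100 px wide face becomes ~{100 / linear:.0f} px")  # ~32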

        • by AK Marc ( 707885 )
          But then you can't enhance. And I remember when "VGA camera" was a standard to be aspired to. Now I own no fewer than three 10+ megapixel cameras, including 10+ MP in a phone that cost about $500 without any contracts/subsidies. The dedicated camera cost more than that, but has more features. Sadly, the camera in the phone is almost exactly what you are asking for.
    • Actually - even if the photographer retains full creative control, being able to fine-tune focus later is good. Just like camera raw where the parameters can be adjusted later.

      Especially if your vision isn't 20/20 looking through that little viewfinder lens.

  • I just got the Gen 1 version of the camera. I like the small package size and the small price. You can use it to just take regular pictures, but you can have a lot of fun composing creative photos that take advantage of the refocus capability to tell a story, using the foreground and the background as distinct photo elements. For example, a foreground subject tells one story, but refocus on the background element and the meaning of the story suddenly changes in a surprising way. Fun.
    • So you can make crappy YouTube slideshows with bad music and a boring voiceover.

      • Boooo. Lytro is a genuinely innovative camera, and I applaud them for that. What the artistic payoff turns out to be, only time will tell, but it's worth exploring.
    • I just got the Gen 1 version of the camera. I like the small package size and the small price. You can use it to just take regular pictures, but you can have a lot of fun composing creative photos that take advantage of the refocus capability to tell a story, using the foreground and the background as distinct photo elements. For example, a foreground subject tells one story, but refocus on the background element and the meaning of the story suddenly changes in a surprising way. Fun.

      Can't you do this with a regular DSLR and software? Keep the whole image in focus, then use software to blur the parts you don't want to draw attention. How is that different from using a special camera and using software to change the focus?

  • Comment removed based on user account deletion
    • Too bad they didn't make it to 8 MP. That would give video producers a bunch of creative options while working in 4K. Next rev!

      • How directly comparable are the megapixel figures anyway? To start with, I don't think Lytro has a Bayer filter, and it is less sensitive to lens quality.
        • Re:2D resolution (Score:5, Informative)

          by ceoyoyo ( 59147 ) on Tuesday April 22, 2014 @04:25PM (#46818057)

          The megapixel figure is the comparable number. The Lytro not only has a Bayer filter, it also has another filter that uses multiple pixels to measure the direction of the light. So you take your raw sensor, that might capture 40 MP, divide that by whatever number you like for Bayerization to get colour, and divide that by some other number (about 10 for Lytro's products) for the directional sensing.
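
          Plugging in the numbers from this thread (the ~10x directional factor is an estimate repeated in these comments, not an official Lytro figure):

              # Effective 2D resolution after directional sampling, following
              # the parent's logic; Bayer demosaicing costs further resolving
              # power on top of this, though it interpolates colour rather
              # than dividing the output pixel count outright.
              def effective_mp(raw_mrays, directional_factor=10):
                  return raw_mrays / directional_factor

              print(effective_mp(40))      # ~4 MP final 2D image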

      • by gmueckl ( 950314 )

        That would be the equivalent of an 80-megapixel raw video stream in order to retain all the viewing/editing capabilities afterwards... storing that away in real time isn't yet economical. And I honestly have no idea if there is a decent compression scheme for the data, either.
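
        A rough feel for why real-time storage is the sticking point (the 80 MP figure is the parent's; 12-bit raw and 24 fps are assumptions of mine):

            # Uncompressed light-field video data rate.
            mrays, bits, fps = 80e6, 12, 24
            gb_per_s = mrays * bits * fps / 8 / 1e9
            print(f"~{gb_per_s:.1f} GB/s uncompressed")   # ~2.9 GB/s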

    • Since the short excerpt doesn't mention this, I thought I'd mention it: their forums say the Illum produces a 4 megapixel image once it's exported in a regular 2D format.

      That makes sense, because I would assume it is actually taking a bunch of images at various focus distances and superimposing them, but once you decide what you want, it has to write out a legitimate file. Lenses are lenses, and once the light rays hit the film or sensor, other than trying to sharpen an image through extrapolation, it's too late to change focus. Physics simply doesn't allow an out-of-focus image to somehow become focused.

      • by gmueckl ( 950314 )

        As far as I remember, the Lytro cameras use a micro-lenslet array to refocus the image differently for different patches on the sensor, so it is recording multiple focal planes at once. But when you dig a bit into light field representations and light field interpolation (e.g. the original light field and lumigraph papers), you'll probably see that you can process the data in more interesting ways than simply flipping through a focal stack.
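
        For the curious, the classic "shift-and-sum" refocus from those papers fits in a few lines. This is a minimal sketch in Python/NumPy over a hypothetical (U, V, H, W) array of sub-aperture views -- not Lytro's actual file format or software -- using whole-pixel shifts where a real implementation would interpolate:

            import numpy as np

            def refocus(lightfield, alpha):
                """Synthesize a 2D photo focused at a new depth.

                lightfield: (U, V, H, W) array of sub-aperture images, one per
                            viewing direction (u, v) behind the main lens.
                alpha:      relative focal depth; 0 reproduces the as-captured
                            focal plane, other values refocus nearer or farther.
                """
                U, V, H, W = lightfield.shape
                out = np.zeros((H, W))
                for u in range(U):
                    for v in range(V):
                        # Each view sees the scene from a slightly different
                        # position; shifting it in proportion to that offset
                        # and averaging brings a chosen depth plane into focus.
                        du = int(round(alpha * (u - U // 2)))
                        dv = int(round(alpha * (v - V // 2)))
                        out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
                return out / (U * V)

            # Example: refocus a synthetic 7x7-view light field.
            lf = np.random.rand(7, 7, 480, 640)
            photo = refocus(lf, alpha=0.5)     # one 2D photo per chosen depth

        A tilted or even curved "surface of focus", as mentioned upthread, just means letting alpha vary per output pixel instead of being a constant.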

  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Tuesday April 22, 2014 @04:07PM (#46817917) Journal

    For still photography, focus isn't a terribly hard problem to solve. Autofocus works, and DSLRs let you compose, focus, and shoot manually as well. Easy peasy.

    On the other hand, for movies shot using large-format sensors, focus is a huge issue. The amount of work spent following focus on a movie is significant, and it fails more often than you might think. Modern lenses are incredibly sharp, but they have such a tiny range of perfect focus that they are hard to use. Admittedly, the people who use these cameras and lenses are professionals with years or decades of experience, and they do well.

    But -- if we could focus our shots after the fact, it would be a true game changer for movie making. We could choose just what part of the scene should be in focus, and change that throughout the shot easily. Yes, this moves yet another part of the movie-making process into post, but that's not a bad thing. As other people have suggested in other fora, editing/coloring/framing and visual effects are all done in post, and it helps make better movies. This would help too. Having the depth maps automatically generated would make visual effects easier and better as well.

    I recognize that the amount of processing that goes on to make these images makes a motion picture camera a challenge, and the number of high-end motion picture cameras is probably a tenth of a percent of the DSLRs that are made, at most. Still, we could just capture the 40 MRays and do the processing later; storage and networks are getting faster and larger all the time.

    Come on, Lytro! Make it happen!

    • by AK Marc ( 707885 ) on Tuesday April 22, 2014 @04:42PM (#46818173)
      Then they could finally do 3D right. I hate 3D movies because movies like Avatar make me ill. They are much more enjoyable in 2D. Why? Because directors (even 3D ones) still think in 2D. In scenes where the director uses foreground shrubbery to help set the scene, the plants are out of focus because the focus range is so small, but jumping out at you because they are closer. For 3D, if you are using 3D for depth, not just an occasional shark-jumping-out-of-the-screen moment, everything should be in focus. Let the 3D provide the depth, and let the viewer selectively focus. But forcing us to look at the actors because they are the only thing in focus, while forcing us to look at blurry plants because they are jumping out at us, will always get a poor result.

      3D will never look right if the same movie is also watchable in 2D. But since everything is released both ways, neither version will be right. And that's a director problem.
      • This is exactly what I was thinking when I read the article summary. We might finally get non-animated 3D movies where most of the field of view can be in focus at the same time. 3D movies give my wife headaches because her eyes are always trying to focus on the wrong things when they pass in front of the camera.

      • by Arkh89 ( 2870391 )

        Let the 3D provide the depth, and let the viewer selectively focus.

          Selectively focus on what? The screen of the theater?
          Everything will be in focus at the same time, which would contradict the stereo depth... thus the brain would still get dizzy...
          No solution here; dump stereo 3D...

        • by AK Marc ( 707885 )
          For movie screens, everything is at screen distance, which is effectively infinity for the focal length. The effect you mention doesn't bother me; what does is when the deliberately out-of-focus parts are in the fore of a 3D scene. The 3D makes them jump out at you, but you can't focus on them, because they were recorded out of focus.
          • by Arkh89 ( 2870391 )

            Don't get me wrong, I agree on the effect you are mentioning ("forced blur"). The effect I am talking about really bothers me (you don't feel the refocus on objects which are clearly at a different distance because of the different stereoscopic separation).

            You are probably right that the former might be worse than the latter. I am just saying that stereo 3D (with a fixed field, such as a cinema screen or a TV) is not able to render a good illusion of a real 3D world.

      • Looking further to the future, when we develop 3D holographic displays, you should be able to take one of the images from a Lytro camera and directly convert it into a hologram. The total angular shift would be limited (basically to the width of the lens - move your eye beyond it and surfaces which were hidden in the original pic would be revealed in the hologram). But it would still be an honest to goodness hologram, as in you could move your head side to side a bit to peek around corners (which you can'
      • May I remind you all that the so-called 3D movies are not 3D but only stereoscopic movies.

        You only have the depth perception, but you can't move around the scene as you would be able to with a 3D volume display.

        There isn't any issue of focus with a real 3D movie (volume display), since the spectator focuses his eyes on the part of the scene he watches.

        Of course I agree with the focus issues that the stereoscopic movies have.

        • by AK Marc ( 707885 )

          You only have the depth perception,

          Yes. 2D = flat. 2D + depth = 3D. You only have 3D in 3D, and not 3D. Or so goes your statement.

          I think the complaint you are looking for is that it's fixed perspective 3D. But that's still 3D. But really, how would you expect a "true" 3D movie to work? You look up and see the boom mike? Turn around and see the camera and crew? Your argument is like claiming that a live play isn't 3D if it isn't theater in the round, because the proscenium arch restricts viewing width, same as a stereoscopic movie.

          • I'd like to stress the differences between my real 3D (a) and your fake 3D (b).

            1) Looking at a 3D(a) scene, a spectator can focus his eyes on whatever point he wants, whereas the 3D(b) scene offers only one focus plane, which leads to problems; this is the whole point of my previous post and its parent post.

            2) Fixed and non-fixed perspective exist for both 3D(a) and 3D(b), so I don't get your argument here.
            Fixed for 3D(a) is achieved by not being able to move relative to the displayed scene (just sit th

            • by AK Marc ( 707885 )
              There are piles of depth cues. Parallax is only one. You are arbitrarily picking and choosing the cues you accept as "real" or not. You also discount the possibility of having the image in focus at all points. With "fake" 3D but all points in focus on the stereoscopic images, you completely eliminate your #1. And if you insist #5 isn't stereoscopy, then you are insisting that both eyes are fed the same image. I assert that's false.

              Most of the effect in #3 is mental, not physical. Your brain is not pr
                • 6) Parallax can be consistently observed in a real 3D display scene when you move and change your point of view, whereas a stereoscopic display lures the brain into expecting a parallax effect if you move; but when you try, it doesn't happen: you can't make a close object actually translate faster than a distant one, and you won't see what's behind it either - no more than the other eye was already seeing.

                Back to previous points:

                #1) Yes, we can have all planes in focus at once for a stereoscopic di

                • by AK Marc ( 707885 )

                  6) Parallax can be consistently observed in a real 3D display scene when you move and change your point of view, whereas a stereoscopic display lures the brain into expecting a parallax effect if you move; but when you try, it doesn't happen: you can't make a close object actually translate faster than a distant one, and you won't see what's behind it either - no more than the other eye was already seeing.

                  See #2.

                  The observer is not the display device. The fact that my eyes don't see the same image when I'm looking at an object doesn't mean that this object is stereoscopic. This isn't a property of the object.

                  You are using a stricter definition of stereoscopy than required. Feeding two different and coordinated images to the eyes is stereopsis. There is no qualification I see that the images must be 2D, or from separate sources. A single hologram ("real 3D") results in two separate images (as seen by the eyes of a viewer), and thus is stereopsis. If you have seen a "reliable" definition that disagrees, please point me to it. I've never seen any that would exclude it from stereopsis/stereoscopy.

    • by JerryLove ( 1158461 ) on Tuesday April 22, 2014 @04:45PM (#46818193)

      For still photography, focus isn't a terribly hard problem to solve. Autofocus works, and DSLRs let you compose, focus, and shoot manually as well. Easy peasy.

      Depends on what you are shooting and what you are shooting with. Bird moving through foliage at a low F value? AF is likely to grab foliage. Something really close to the camera and moving randomly? That can be a problem too. Baby waving arms... make sure you get focus on the face: AF (esp. phase-focus) is likely to get the nearest object rather than the correct one. Contrast focus (and phase focus on-sensor, as with the Canon 70D) can add face / eye detect, but (except the 70D) at the cost of speed (so moving objects are a problem again).

    • If you use a high enough f-stop, more of your image is in focus at any given focus point. You could have a movie camera with multiple sensors set at overlapping focal points to get a seamless all-in-focus shot. You could then use software to imitate the out-of-focus portions. But why? A good director already knows the shot they want and what should and should not be in focus. Using such a system would be akin to a singer using a voice box to make them sound in key. With so many crappy movies (an

    • Focus is a horrible problem to solve.

      "real" photographers don't use auto-focus, because you're almost guaranteed that it will focus on the wrong thing. When I'm taking point-and-shoot pictures with pocket camera, I have to be careful, and hope that nothing distracts the camera. When I'm doing serious photography with my nicer cameras, it stays in manual mode.

      Unfortunately, this camera looks cool, but it would be relegated to use like my nice DSLRs are. I bring them with me when I'm doing a shoot. It's

      • by dgatwood ( 11270 )

        "real" photographers don't use auto-focus, because you're almost guaranteed that it will focus on the wrong thing. When I'm taking point-and-shoot pictures with pocket camera, I have to be careful, and hope that nothing distracts the camera. When I'm doing serious photography with my nicer cameras, it stays in manual mode.

        On the contrary, most "real" photographers, at least in my experience, use autofocus almost exclusively. With modern, autofocus lenses, the focus throw is relatively short, so the camera'

        • Well... I count myself as one of the manual-focus crowd, as would anyone who uses anything but a point & shoot camera. As you said, it's the focus point that gets you. The "bird in flight" photo you describe is a great example. Are you managing to keep the bird on the auto-focus point (or the majority of points for multipoint focus)? While you're tracking it? Including when you press the shutter?

          I've seen a lot of photos like that, and they do a wonderful job of some very pretty, well-focused cloud ph

          • by dgatwood ( 11270 )

            Are you managing to keep the bird on the auto-focus point (or the majority of points for multipoint focus)? While you're tracking it? Including when you press the shutter?

            This is where AF point expansion on the current-generation DSLRs comes in handy. But for any camera more than a couple of years old, yes, it's rather a pain, and the keeper rate often isn't great. Of course, depending on how fast the bird is moving, at 300mm, even with IS, half the time, I can't keep the bird fully in the frame, so focus

    • by gmueckl ( 950314 )

      Alright, I've mentioned elsewhere in this discussion that recording the whole light field data at decent framerates isn't currently possible in an economically feasible way. It could be done if you throw enough money at the problem, but at that point it's cheaper to redo the shot a couple of times.

      Hm, I'm not sure that this kind of camera is able to generate good depth maps. The visualization that helps adjust the focal range in this demo video illustrates the point: it is basically an edge detection filter

  • I think all crime scene photographers should get these cameras.

  • by gerddie ( 173963 ) on Tuesday April 22, 2014 @04:21PM (#46818021)
    I was considering getting the first version of their camera, but they use a proprietary image format for the original data, and requests to open it have gone unanswered so far [lytro.com]. Not even an SDK is provided to access the original data, even though one was promised [lytro.com]. Kind of disappointing, and enough reason for me not to buy.
  • I assume you can post-process to get super depth of field without needing to stack images (which is obviously a problem when the subject is moving). Pretty cool product.
    • by dfghjk ( 711126 )

      "Super depth of field" is meaningless without standardizing resolving power. A single pixel image has infinite depth of field after all.

      A 40MP camera that renders a 4MP final image gains an inherent 3x increase in perceived depth of field just due to the lost resolving power of the anemic output resolution. When seen in this context, it's not really a cool product at all.

      If you took a 40MP conventional sensor and stopped the lens down until diffraction limited the resulting sharpness to 4MP, you would ha
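
      For scale, here's what that stopping-down exercise looks like under the common rule of thumb that the Airy disk diameter sets the smallest useful pixel (sensor size and wavelength are assumptions for illustration):

          import math

          def diffraction_limited_mp(f_number, sensor_mm=(36.0, 24.0),
                                     wavelength_um=0.55):
              # Airy disk diameter ~ 2.44 * wavelength * f-number.
              airy_um = 2.44 * wavelength_um * f_number
              w, h = sensor_mm
              return (w * 1000 / airy_um) * (h * 1000 / airy_um) / 1e6

          for f in (2, 5.6, 11, 22):
              print(f"f/{f}: ~{diffraction_limited_mp(f):.1f} MP")
          # This rule of thumb puts full frame near 4 MP around f/11;
          # a smaller sensor hits the same wall at wider apertures.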

      • by Arkh89 ( 2870391 )

        Yeah, but now you are at f/8, not f/2 anymore. So you lose a great deal of light (which matters for low-light scenes).

      • by tomhath ( 637240 )
        What you say about image quality is true today, but not necessarily true next year. The reason I used "Point and shoot" as the subject is that most people who take pictures don't have the skill or knowledge to make depth-of-field or even focusing decisions when a shot presents itself. They set it on auto exposure/auto focus and hope it comes out. Lytro's technology gives them the option of making those decisions during post-processing.
  • http://lewiscollard.com/camera... [lewiscollard.com]

    I was hoping to read the comments here and find other people asking "Wait, /. didn't get the Lytro joke the first time around?" Where are you, people :-(

  • The "advertisement" video they posted on youtube actually delivers all the reasons you need to know why not to buy this camera.

    The resolution is far too low even for display on an ordinary 1920x1080 screen. Stair-stepping is visible all over the place. The color rendering is horrible, like some old mobile phone camera's. Plus there are artefacts where details should be.

    Seriously, this is still nothing more than a party gimmick. "Refocus" your first few snapshots, enjoy for a minute, then the "somethi

  • It's still a one-trick pony, and not a trick that many people need to do very often. Sure, a professional may invest in any number of specialized $1,200 tools to get images under special situations. It's just the idea that this revolutionizes the field of photography, or that _everyone_ needs this to get good pictures of Tommy blowing out the candles on his birthday cake, that's crazy.

    I cannot think of a single time in my life when I wanted to press the button once and get two different images, one with sub

  • I thought we've had this technology for years. I mean, every time you see them zoom-and-enhance on CSI they're taking some blurry out-of-focus element of the picture and rendering it in sharp high resolution. And those aren't even special cameras, they're usually just crappy 320x240 black & white security cams. It's all in the software, baby.

"If it ain't broke, don't fix it." - Bert Lantz

Working...