Graphics Software

2D To 3D Object Manipulation Software Lends Depth to Photographs 76

Iddo Genuth (903542) writes "A group of students from Carnegie Mellon University and the University of California, Berkeley have developed free software that combines regular 2D images with free 3D models of objects to create unbelievable video results. The group of four students created the software (currently for Mac OS X only), which allows users to perform 3D manipulations, such as rotations, translations, scaling, deformation, and 3D copy-paste, on objects in photographs. However, unlike much other 3D object manipulation software, the team's approach seamlessly reveals hidden parts of objects in photographs, and produces plausible shadows and shading."
This discussion has been archived. No new comments can be posted.

  • by AaronLS ( 1804210 ) on Thursday August 07, 2014 @11:45AM (#47623099)

    No longer is it Photoshopped, but instead we say it's been Carnegie Melloned.

  • by Anonymous Coward

    isn't this just texture mapping onto a 3d model?

    • The same way that Avatar was just computer animation, like Toy Story.

      • by Anonymous Coward

        Not really. This is quite a simple process: pick a model, apply a texture to it, manipulate at will. More impressive is code that generates the model from the image itself.
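The "apply a texture to it" step the parent describes boils down to giving each model vertex a (u, v) coordinate into the photo and sampling the photo there at render time. A minimal sketch, with all names and the toy data being illustrative rather than anything from the paper's code:

```python
# Minimal texture-mapping sketch: each mesh vertex carries a (u, v)
# coordinate in [0, 1]^2 that indexes into the 2D photo, and rendering
# samples the photo at that coordinate (nearest-neighbour here; real
# renderers use bilinear or better filtering).

def sample_texture(texture, u, v):
    """Nearest-neighbour lookup of (u, v) into an image given as rows of pixels."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

# A 2x2 "photo": the object's visible pixels become the model's texture.
photo = [[10, 20],
         [30, 40]]

# Each vertex of the proxy model stores where it lands in the photo.
vertex_uvs = [(0.1, 0.1), (0.9, 0.1), (0.5, 0.9)]
colors = [sample_texture(photo, u, v) for u, v in vertex_uvs]
print(colors)  # → [10, 20, 40]
```

Once every vertex has a sampled color (or, per-pixel, a sampled texel), the model can be rotated freely and re-rendered, which is the "manipulate at will" part.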

  • by oodaloop ( 1229816 ) on Thursday August 07, 2014 @11:57AM (#47623193)
    I don't do anything like this for a living, but I must say I'm impressed. I'm fairly certain someone will say this was done back in 1997 though so it's nothing new.
    • by Anonymous Coward

      More or less, it's an evolution of previous work.

      Among other previous work:
      Rendering Synthetic Objects into Legacy Photographs (2011)
      http://www.youtube.com/watch?v=hmzPWK6FVLo

      3-Sweep: Extracting Editable Objects from a Single Photo, SIGGRAPH ASIA 2013
      http://www.youtube.com/watch?v=Oie1ZXWceqM

  • It's "a software" again. I'd like to give the author an information, perhaps while we eat a spaghetti, that in English we have "mass nouns" and thus you cannot have one hardware or one clothing or one software.
  • by roman_mir ( 125474 ) on Thursday August 07, 2014 @12:11PM (#47623305) Homepage Journal

    How can images be admissible in court in our modern technological age of 3D manipulation of 2D images? Sure, they still have visual artifacts (like in the video presentation for this technology: when the airplanes are turned into 3D, their propellers are not changed; the same flat propeller image is kept on the 3D model as in the original 2D picture), but eventually all of these will go away, and it may become impossible to detect that an image in front of you was manipulated at all.

    Eventually this will also apply to video footage.

    Add the digital augmentation of reality into the mix (Google Glass, etc.) and you can't rely even on recorded information. We know that people are not good at remembering the details of what they saw, but if we cannot be sure of images and video (and obviously audio) either, then this type of data becomes useless in courts. That's an interesting development in itself, never mind the fact that you can now turn a picture into a movie if you want.

    • Pictures and video are used in court but someone testifies that it hasn't been modified. If the defense argues that it has been modified then a jury weighs the merits of that claim.

      • by sjames ( 1099 )

        The problem is that as technique improves, the theory that the photo/video was altered in a way that can't be detected becomes ever more plausible.

        It was easy to take the witness's word for it when the alternative would involve millions in equipment and would likely be trivial to detect.

      • by Alsee ( 515537 )

        a jury weighs the merits of that claim

        Unfortunately, I wouldn't trust the average juror to weigh a head of lettuce.


  • A question on this (Score:5, Interesting)

    by DigitAl56K ( 805623 ) on Thursday August 07, 2014 @12:30PM (#47623457)

    While those results look impressive, in some of the demos where objects are seamlessly moved around, how are they filling in the original background (or what looks like it)? The video largely explains how the model is textured, lit, environment mapped, and rendered with shadow projection, calculated perspective, and depth of field, but I didn't hear much about re-filling the background. I assume they're cloning or intelligently filling texture à la Photoshop, or perhaps in all cases where they showed something being animated it was a new clone of an existing object placed in a new area of the photo?

    • by Bryan Ischo ( 893 ) * on Thursday August 07, 2014 @12:45PM (#47623617) Homepage

      I agree there was some trickery there. Since they did not address this at all, I am assuming that the answer is simply that they had to manually paint in the parts of the photos that were revealed when other parts were removed. Having to point that out in the video would take away from the apparent magic, which is probably why they didn't mention it (and that's somewhat disingenuous if you ask me). It's possible that they provide some tool that attempts to automatically fill in the background, and if so it would appear that it was used in some of the examples (such as when the apple or whatever it was was moved in the painting, where the revealed area looked more like the cloudy background than like the table the apple was on), but there's no way that they automatically compute the background for anything that is not on top of a pattern or a more or less flatly shaded surface. I also noticed that in some examples they were merely adding new objects to the scene (such as the NYC taxi cab example): although they started with a scene that looked like the cab was already there and moved it to reveal painted chevrons underneath, it's likely that those chevrons were already in the photo and didn't need to be recreated.

      In short: they glossed over that detail and used examples that didn't require explaining it, but it's certainly an issue that a real user would have to address, and it doesn't happen as "magically" as it would appear from the video.

      BTW, CMU alum here. Went back to campus for the first time in nearly 20 years earlier this year. My how things have changed. I suppose every college is the same way now, but holy crap it's so much more cushy than it used to be! Guess all that cush keeps the computer science juices flowing ...

    I'm downloading the open source software now to test it out. I assume it is very similar to the content-aware fill used in Photoshop.
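Content-aware fill (Adobe's is PatchMatch-based) is far more sophisticated than can be shown here, but the core idea of propagating surrounding background into the hole left by a moved object can be sketched with a toy diffusion fill. Everything below is a simplified illustration, not the tool's or Photoshop's actual algorithm:

```python
# Toy "hole filling": iteratively replace each masked (removed-object)
# pixel with the average of its neighbours, diffusing the surrounding
# background into the hole. Real inpainting (PatchMatch, exemplar-based
# methods) copies whole texture patches instead of averaging.

def inpaint(img, mask, iterations=50):
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]
    for _ in range(iterations):
        new = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    nbrs = [img[ny][nx]
                            for ny, nx in ((y - 1, x), (y + 1, x),
                                           (y, x - 1), (y, x + 1))
                            if 0 <= ny < h and 0 <= nx < w]
                    new[y][x] = sum(nbrs) / len(nbrs)
        img = new
    return img

# A flat grey background (value 5) with the moved object's hole in the middle.
img = [[5, 5, 5], [5, 0, 5], [5, 5, 5]]
mask = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
filled = inpaint(img, mask)
print(filled[1][1])  # the hole converges toward the surrounding value, 5.0
```

This works acceptably on flat or smoothly shaded regions, which matches the observation upthread that the demos mostly move objects off plain backgrounds.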

  • by HyperQuantum ( 1032422 ) on Thursday August 07, 2014 @12:38PM (#47623531) Homepage
    The software appears to be proprietary, not free as in 'free software'. It is available for zero cost, but usage is restricted:

    ACADEMIC OR NON-PROFIT ORGANIZATION NONCOMMERCIAL RESEARCH USE ONLY

  • I was very impressed with the effects.

    I would love to know how easy such manipulation is to detect. Is it harder or easier to detect than Photoshop edits?

    At some point, Photoshop-type effects will become undetectable.

  • The algorithms and software get better and better.
    SIGGRAPH next week in Vancouver.
  • by Anonymous Coward

    Watching or attending SIGGRAPH is like watching an Ubisoft conference.

    Everything looks amazing on stage, but when you get your hands on it, it's another story altogether.

    I'll believe this works when I use it. Until then, I might as well go watch The Lawnmower Man and consider it a documentary.

  • by account_deleted ( 4530225 ) on Thursday August 07, 2014 @02:00PM (#47624473)
    Comment removed based on user account deletion
  • I thought the point was to create believable video results. Bad, fake-looking 3D out of 2D sources has been done to death, mostly for the cinema...
  • I've done a little bit of work in a related area, so I skimmed the paper (at the bottom of the first link,) and it's nowhere near as impressive and automagical as the video makes it seem. The user has to provide a mask distinguishing the object they are manipulating from the rest of the image, and then the user also has to provide the 3D model for the object! The model is then smoothed to better fit the original using the mask and the inferred illumination, textured using the image, and then popped out to b
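The manipulate-and-re-render step this comment describes (user supplies a mask and a 3D model, the model is aligned to the photo, then transformed and projected back) can be sketched with a simple pinhole camera. The camera model and focal length here are assumptions for illustration, not the paper's calibration:

```python
import math

# Sketch of the "manipulate and re-project" step: once a 3D proxy model
# is aligned to the photo, rotating its vertices and projecting them
# back through a pinhole camera gives the new 2D image positions.

def rotate_y(v, angle):
    """Rotate a 3D point (x, y, z) about the vertical (y) axis."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project(v, f=1.0):
    """Pinhole projection of a camera-space point onto the image plane."""
    x, y, z = v
    return (f * x / z, f * y / z)

vertex = (1.0, 0.0, 2.0)            # a model vertex in camera space
moved = rotate_y(vertex, math.pi)   # turn the object 180 degrees
u, v = project(moved)               # its new image-plane position
print(u, v)
```

The hard parts the paper actually addresses (fitting the stock model to the photographed instance, inferring illumination, and completing the texture on hidden faces) all happen around this geometric core.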

  • That's pretty crashy software. At least it builds. You'd think they would distribute something a little more solid.
  • This looks pretty cool, but I have a lot of questions.

    On its surface, it looks like a lot of the results they're getting wouldn't currently be outside the realm of student-level work, such as the simple practice of projecting and baking textures into materials from photographs; the innovation seems to be that they're quickly automating a lot of that into a UI with a fast lighting solution. One of the things I find most rewarding about 3D is that you sometimes get this huge burst of increased
