The Dark Mod Forums

2d-to-3d conversion knowledge wanted


STiFU

Hey people,

 

I am currently starting work on my thesis about "2D-to-3D conversion based on still images", and I just realized that the normalmap/heightmap estimators I've used so often to create normalmaps from phototextures do basically just what I need. So my question is: does anyone know any good papers on this topic, or other good knowledge sources? (There are so many nerds among us Thief fans, so I figured I'd just ask here... :) )
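For anyone unfamiliar with these tools: once a height estimate exists, the normalmap step itself is just differentiation of the heightmap. A minimal sketch (the function name and the `strength` parameter are illustrative choices, not taken from any particular estimator):

```python
def height_to_normals(height, strength=1.0):
    """Convert a 2D heightmap (list of rows of floats) to per-pixel normals
    via central differences -- the step most normalmap estimators share
    once a height estimate exists."""
    h, w = len(height), len(height[0])
    normals = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Central differences with clamped borders.
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            # Normal of the surface z = height(x, y) is (-dz/dx, -dz/dy, 1), normalized.
            length = (dx * dx + dy * dy + 1.0) ** 0.5
            normals[y][x] = (-dx / length, -dy / length, 1.0 / length)
    return normals
```

A flat heightmap yields normals pointing straight up; a ramp tilts them against the slope.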

 

Cheers!


Thanks, I'll check that out. From what I've learned so far, the general process seems to be a mixture of shape-from-shading and shape-from-texture techniques.

 

I wrote an email to the author of njob asking for info, but didn't get an answer. Ah well, I guess I'll take a different approach anyway.


In the context of your thesis, what exactly do you mean by "2D to 3D"? Are we talking 3D scanning / heightmapping, or spatially aware mapping?

 

If you're looking at the spatial stuff, I would suggest reading some of the work done by the members of Photosynth; a while back they had a page up with a whole bunch of studies they did before Microsoft brought the project into the commercial side of things.

 

In regards to automatic heightmapping, I really don't think there are many established 'best practice' methods, seeing as commercial software like CrazyBump does a fairly bad job at it (assuming average-quality photos). It might be worth asking the authors for input on the problems they face and where they think research is best applied. SSBump's author seems quite active, but his method of letting the user adjust everything to produce good maps probably isn't that useful for your research (though the adjustable parameters make for excellent normals if the lighting is correct).

 

If you are going to be building a proof-of-concept method, I think most of your time will be spent looking into shadow removal and lighting uniformity, since that seems to be where everything trips up. A method to do these steps either automatically (unrealistic) or semi-interactively (but not quite "open Photoshop") would be some awesome stuff.
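The lighting-uniformity step could be sketched as a crude divide-out-the-shading pass. This is just one illustrative approach (a box-blur shading estimate), not how any particular tool actually does it:

```python
def flatten_lighting(img, radius=8):
    """Crude lighting-uniformity pass: estimate the low-frequency shading
    with a box blur and divide it out, leaving mostly fine texture detail.
    A sketch of the 'uniformity' preprocessing idea, not a shadow remover."""
    h, w = len(img), len(img[0])
    shading = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for yy in range(max(0, y - radius), min(h, y + radius + 1)):
                for xx in range(max(0, x - radius), min(w, x + radius + 1)):
                    acc += img[yy][xx]
                    n += 1
            shading[y][x] = acc / n
    # Divide out the estimated shading (epsilon avoids division by zero).
    return [[img[y][x] / (shading[y][x] + 1e-6) for x in range(w)]
            for y in range(h)]
```

On a uniformly lit patch the result is close to 1.0 everywhere; slow brightness gradients get flattened while edges survive.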

 

As far as I know, the way njob does its magic is something to do with shifting levels, and perhaps curves, to specific values (I think it tries to look for shadows and highlights and average them out). Having the source for njob would be great; it's got such a nice interface for quick work, but annoying bugs and bad error handling (I'm looking at you, TIFF importer ;))
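To be clear, this is only a guess at the general idea, not njob's actual algorithm, but a levels shift of that kind might look like this: find the shadow and highlight values and stretch them to a fixed range so every input lands on a comparable scale.

```python
def auto_levels(pixels, low_pct=0.05, high_pct=0.95, eps=1e-6):
    """Speculative sketch of a levels shift: map the observed shadow and
    highlight percentiles to 0 and 1 and clamp everything in between.
    (A guess at the general technique, not njob's real code.)"""
    ordered = sorted(pixels)
    lo = ordered[int(low_pct * (len(ordered) - 1))]
    hi = ordered[int(high_pct * (len(ordered) - 1))]
    return [min(1.0, max(0.0, (p - lo) / (hi - lo + eps))) for p in pixels]
```

Running this on a dim, low-contrast strip of values stretches it to the full 0..1 range.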


Let me elaborate on the background of my thesis. With 3D displays being developed, there's a need for automatic conversion of 2D content to pseudo-stereo 3D content, so that regular TV broadcasts and DVDs can be viewed in stereo 3D. This is achieved by estimating a depth map and rendering a right-eye view from it based on the source image, while the original image is used for the left-eye view ("Depth Image Based Rendering").
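A minimal 1D sketch of that rendering step: each pixel is shifted horizontally by a disparity proportional to its nearness, and gaps (disocclusions) are patched from the nearest rendered neighbour. The parameter names, the depth convention, and the hole-filling rule are all illustrative choices:

```python
def render_right_view(row, depth_row, max_disparity=4):
    """Render one scanline of a right-eye view from a source scanline and
    its depth estimates (convention here: 1.0 = closest, 0.0 = farthest)."""
    w = len(row)
    right = [None] * w
    z_buf = [-1.0] * w  # keep the nearest pixel per target column
    for x in range(w):
        near = depth_row[x]
        tx = x - int(round(near * max_disparity))  # near pixels shift more
        if 0 <= tx < w and near > z_buf[tx]:
            right[tx] = row[x]
            z_buf[tx] = near
    # Naive hole filling: copy the last known value (artifacts expected).
    last = 0
    for x in range(w):
        if right[x] is None:
            right[x] = last
        else:
            last = right[x]
    return right
```

A zero-depth scanline passes through unchanged; a uniformly near scanline is shifted left with the right edge smeared, which is exactly the kind of artifact a quick-hack renderer produces.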

 

There are already a couple of very good approaches for estimating depth 1), resulting in very smooth but low-detail depth maps. So I was planning to modulate some detail onto those smooth depth maps (maybe also limiting the modulation to foreground objects) and see what that looks like. My idea was to retrieve the detail with the techniques used by those heightmap estimators. I can't tell how their wrong estimates will look in the end; this is all out in the open... :)

 

_______________________________________

1) Mainly those utilizing techniques like depth from focus and depth from perspective geometry using shape matching.


STiFU, have you checked the latest c't on that stuff? They had an article, though it was mainly an overview of the existing techniques for creating pseudo-3D on the fly.

My Eigenvalue is bigger than your Eigenvalue.


Ah, thanks. Could you tell me the exact issue? The table of contents on the homepage only lists articles about illness caused by stereo 3D and how to record stereo movies.

 

Edit: That article about illness caused by stereo 3D (German) was pretty interesting. I had already suspected that the headaches some people report were caused by viewers trying to focus on out-of-focus elements on the screen, and that article supports my theory... :)


The article is in issue 06/10, starting on page 116: "Tiefenbehandlung" ("depth treatment"). If you can't get it for free on the heise.de website, I can send you scans of the article; it's 6 pages long.



A cheap trick, depending on how the human eye (and its mental processing) really works, could be to simply take an edge-detect (like the GIMP plugin) and use that to provide details. The direction of the change will be wrong half the time, but just like all those GUI controls with fake 3D outlines, where inset or outset does not really matter, it may look fine to people. :laugh:
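The cheap trick in code, using a plain Sobel magnitude as the edge detector (the choice of Sobel is illustrative; any edge filter would do):

```python
def edge_detail(img):
    """Sobel edge magnitude over a grayscale image (list of rows), usable
    as a fake detail layer regardless of whether the slope direction is
    actually right. Border pixels are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Flat regions give zero detail; a vertical step edge lights up the columns next to it, which is the "detail" the trick relies on.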

 

More realistically, having written bumpmapping for a software 3D renderer, I would scan the image for circular spots where you can detect a 'highlight' effect. Highlights come in two flavors, metallic and non-metallic, and you can tell them apart from the colour of the light versus the colour of the highlight (ignoring iridescence). Based on, say, a 5x5 region around a pixel, you then calculate the angle of the highlight from the position of the white 'spot' or 'stripe'. I would do this by taking a weighted average of the angular vector, weighting each 'highlight' pixel by its intensity and position relative to the center pixel. Do the same, but inverted, for 'shadows' (black). You end up with angular vectors for every pixel, which are smoothed because of the 5x5 window. Perhaps 5x5 is too small and won't produce much variation in bump vectors. If you want to get fancy, you could integrate this with edge detection: detect surfaces or regions in the image with a similar texture, i.e. similar highlights and shadows, and use that information as well, and so on. Sorry if this is rambling off on an idea ^_^
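The weighted-average part of that idea could be sketched like this (purely illustrative: intensity is used directly as the weight, and real shape-from-shading is considerably more involved):

```python
def highlight_direction(img, cx, cy, radius=2):
    """Inside a (2*radius+1)^2 window, average the offset vectors to bright
    pixels, weighted by their intensity, to guess which way the surface at
    (cx, cy) tilts toward the light. Caller must keep the window in bounds."""
    sx = sy = total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            w = img[cy + dy][cx + dx]  # brightness acts as the weight
            sx += w * dx
            sy += w * dy
            total += w
    if total == 0:
        return (0.0, 0.0)  # no highlight evidence in this window
    return (sx / total, sy / total)
```

With a bright stripe on the right edge of the window, the averaged vector points right, i.e. toward the inferred light direction; inverting the weights (1 - brightness) would give the 'shadow' version of the same trick.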


  • 2 months later...

Just in case anyone is interested, I'd like to share what I have been working on over the last few months. Here are the results of my algorithm. It is based on focus analysis of edges: the focus information is propagated to the inner pixels and a depthmap is computed. After some postprocessing, the depthmap looks like the image below in the middle. With that information a right-eye view can be rendered, while the original image is used for the left-eye view. The image on the right shows the resulting anaglyph (red/cyan) 3D image. The renderer is really only a quick hack, as it is basically not part of my thesis, so it produces artifacts here and there; I just wanted to see how my estimated depthmaps would look in 3D! :) The quality of those depthmaps depends on the quality of the image segmentation. On the lower right of the depthmap, on the bush, you can see that the segmentation didn't work very well, while in other parts of the image it worked out quite well.
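The red/cyan anaglyph step itself is simple: the red channel comes from the left view, green and blue from the right. A minimal sketch of that standard mix (not necessarily the exact renderer code):

```python
def make_anaglyph(left, right):
    """Compose a red/cyan anaglyph from two RGB views given as lists of
    rows of (r, g, b) tuples: red from the left view, green and blue from
    the right view. The standard colour-anaglyph channel mix."""
    h, w = len(left), len(left[0])
    return [[(left[y][x][0], right[y][x][1], right[y][x][2])
             for x in range(w)] for y in range(h)]
```

Viewed through red/cyan glasses, each eye then sees (roughly) only its own view, which is what makes the depthmap-driven shift perceivable as depth.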

 

post-684-128075792645_thumb.jpg

 

Now I only have to do some evaluation of the algorithm (compare it to others etc.) and write a report about it. The end is near! :rolleyes:


Hi, I am also working on 2D-to-3D conversion methods. Can you please help me with this?

 

I am also new to 3D graphics programming, so some basic info would help as well.

Regards

Hari


Well, I guess some explanation of what exactly you want to do would be helpful. But in any case, if you have access to IEEE Xplore, you should browse for some papers there. Search terms could be:

2D-3D conversion, depthmap, depth estimation, depth image based rendering, depth cues.

