# How would you design an experiment to grok optical phenomena?

+ 5 like - 0 dislike

I've been toying with the idea of making a 3D scanner that uses an IR distance sensor to find position vectors of an object in space and then translates that into a 3D computer model.

One of the greatest challenges in this case is optical phenomena. If the surface has holes or dimples that reflect the IR beam or cause diffraction, then getting an accurate reading would be nearly impossible, but that's an engineering problem: it would require several passes from different angles to figure out what the real distance actually is.

What I would like to understand is: is there a way to use these "glitches" in the data to model the phenomena themselves?

I think that this would make for a great experiment, but I don't know where to start or what to study. Essentially, I would like to derive from the data a theory of what interference was observed by the sensors, and then model it to understand what it means.

So, how should I go about:

1. Setting up the experiment
2. Setting controls to verify the data
3. Modeling the data
4. Formulating a theory to fit the observations
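For step 1, here is a minimal sketch of the scan geometry, assuming (purely as an illustration) that the IR sensor sits on a pan/tilt servo mount: each reading of (pan angle, tilt angle, range) maps to a Cartesian position vector. The function and parameter names are hypothetical, not from any particular sensor's API.

```python
import math

def to_cartesian(pan_deg, tilt_deg, range_m):
    """Map one pan/tilt/range reading to an (x, y, z) position vector.

    pan_deg  - horizontal servo angle in degrees (hypothetical name)
    tilt_deg - elevation servo angle in degrees
    range_m  - distance reported by the IR sensor, in metres
    """
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = range_m * math.cos(tilt) * math.cos(pan)
    y = range_m * math.cos(tilt) * math.sin(pan)
    z = range_m * math.sin(tilt)
    return (x, y, z)
```

Sweeping both servos and applying this to every reading yields the raw point cloud that steps 2 to 4 can then compare against a known surface.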

Update: What I want to do is create an experiment that uses these mistakes in the data (weird readings upon hitting distorted surfaces) and build a model that shows what optical phenomena are at play.

Well, you could say: why not just look at the surface and judge? That wouldn't be fun, now, would it? On a more serious note, what I am trying to do is make my own experiment to actually measure whatever optical phenomena are at play, explain them, and perhaps derive an expression for them from the data itself.

Basically, I want to take the data set, correlate it with the physical model, look at the variation between the distance reading and what it should have been given the surface, and try to model why that happened using the 3D diagram.
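That comparison can be made concrete in a few lines: compute the residual between each measured distance and the distance predicted by a known reference surface, then flag readings whose residual is implausibly large. This is only a sketch; the 3-sigma cutoff is an arbitrary assumption, not a derived value.

```python
def residuals(measured, expected):
    """Signed error between sensor readings and the known-surface model."""
    return [m - e for m, e in zip(measured, expected)]

def flag_glitches(res, sigma):
    """Mark readings more than 3 sigma from the reference as 'glitches'."""
    return [abs(r) > 3 * sigma for r in res]
```

The flagged subset is exactly the data you would then try to explain with a reflection or diffraction model.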

As someone who just got out of high school, what do I need to learn, and how do I go about actually doing this?

Note: I use "optical phenomena" as a sort of placeholder for reflection, refraction, diffraction, or whatever combination of these is at play. Sorry for the confusion.

Update 2: I guess the question title is misleading and so is the text. I changed / rephrased it to get more participation.

Moreover, I know that it's hard to give a perfect answer to my question, but I would love to discuss this with someone and figure out what to learn and how. So please go ahead and take a shot; I would love to hear what's on your mind.

This post imported from StackExchange Physics at 2014-04-01 16:39 (UCT), posted by SE-user Anna
asked Jan 13, 2011
retagged Apr 1, 2014
I think this needs a little clarification. First of all, the question gives the impression that you are asking about interferometry. Normally when people talk about "optical interference", this is what they mean. Second, it isn't really clear to me what sort of model you are trying to construct. Do you want to use the data from the range finder to construct an estimate of the profile of a surface, using the "glitches" to detect perforations and other defects?

This post imported from StackExchange Physics at 2014-04-01 16:39 (UCT), posted by SE-user Colin K
@Colin K, interferometry is how you record and display holograms in 3D, which is what the OP seems to be asking about.

This post imported from StackExchange Physics at 2014-04-01 16:39 (UCT), posted by SE-user user346
@space_cadet: Interference effects are certainly important for holography, but "Interferometry" generally refers to techniques which use interference to make a measurement. It sounds to me like the OP wants to use one of these optical range-finders to reconstruct a 3D scene, and wants some way to extract extra information from what would normally be considered flaws in his data. Not the same as interferometry!

This post imported from StackExchange Physics at 2014-04-01 16:39 (UCT), posted by SE-user Colin K

## 4 Answers

+ 1 like - 0 dislike

@Anna what you are asking about is the problem of recording holographic images of 3D objects, the much harder counterpart of the problem of displaying a recorded hologram as a 3D image. I've thought about this a bit and it seems there is no simple solution if you want to obtain an image with a 360 degree view, with only one optical device.

However, someone has managed to rig Microsoft's Kinect controller to act as a 3D camera. Perhaps replicating this feat might make a nice project for you! Of course, there is a finite field of view, and the example is far from a working "holographic recorder" of any sort. The video of the demonstration is pretty mind-blowing, though!

This post imported from StackExchange Physics at 2014-04-01 16:39 (UCT), posted by SE-user user346
answered Feb 14, 2011 by (1,975 points)
You seem to confuse 3D imaging with holography. They are not at all the same. A hologram reproduces the originally recorded optical field, including amplitude and complex phase information. 3D imaging is a much broader concept, but is generally restricted to recording data, and does not capture phase information.

This post imported from StackExchange Physics at 2014-04-01 16:39 (UCT), posted by SE-user Colin K
@Colin K, you might be right. I'll have to clarify some of these concepts for myself. Cheers.

This post imported from StackExchange Physics at 2014-04-01 16:39 (UCT), posted by SE-user user346
+ 1 like - 0 dislike

Better late than never, right?

Those sensors are designed to give you measurements more or less reliably, so surface quality should not be too much of an issue. They do suggest you mount the sensor on a servo so it can be scanned; that way you can build up a 2-D array of depth values.

That data will be quite noisy, so you're going to have to process it to extract meaningful information, such as, if you're on a roadway, where the road actually is.

That processing will be really interesting, at the intersection of AI, machine vision, and signal processing. (Don't assume you need special computer hardware. First get something working.)
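As a first pass at that processing, a simple median filter knocks out the single-sample spikes typical of IR range data without any of the machinery of full machine vision. This is just a sketch of one common denoising step, not the answer's prescribed method.

```python
def median3(row):
    """3-tap median filter over one scan row of depth values.

    Isolated spikes (one bad reading) are replaced by a neighbour's
    value, while genuine steps in depth survive.
    """
    out = list(row)
    for i in range(1, len(row) - 1):
        out[i] = sorted(row[i - 1:i + 2])[1]
    return out
```

Running this along each servo-scan row before any higher-level processing gives the later stages much cleaner input.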

You might find Red Team Racing interesting. (My son worked on it.)

This post imported from StackExchange Physics at 2014-04-01 16:39 (UCT), posted by SE-user Mike Dunlavey
answered Nov 12, 2011 by (20 points)
+ 0 like - 0 dislike

The study of interference and its application to 3D scanning still has some unresolved aspects, but it would be unwise to try to reinvent it. Much of the theorising and initial experimentation in 3D imaging was completed over the last three decades and is now available through academic and commercial contacts.

Although I am always careful about recommending web references, you really need to check out http://en.wikipedia.org/wiki/3D_scanner.

This post imported from StackExchange Physics at 2014-04-01 16:39 (UCT), posted by SE-user Dunrid
answered Jan 14, 2011 by anonymous
The point is precisely that it is unwise to reinvent; that's why I want to reinvent it. I understand things by creating them, and that's just how I learn. Thanks.

This post imported from StackExchange Physics at 2014-04-01 16:39 (UCT), posted by SE-user Anna
Dear Anna, it's great if you try to reinvent things - that's how science should be done. However, I am afraid that if you construct the experiment according to the answers you receive here, it will not be a reinvention of yours. It will have been reinvented by someone else - and most likely, it will just be the old invention communicated to you through at least two steps. :-)

This post imported from StackExchange Physics at 2014-04-01 16:39 (UCT), posted by SE-user Luboš Motl
You're right. I'm sorry if I didn't put things across the right way, but I'm not here gunning for blueprints. I just want to know where to start learning the theory, and what theory is relevant to this. Moreover, by asking people to come up with their own version of it, I just wanted to see how they would approach it. It's not a question of what to make, but how to make it. I've learnt over time that the right approach is everything; I'll come up with the details on my own. I hope that made sense to you.

This post imported from StackExchange Physics at 2014-04-01 16:39 (UCT), posted by SE-user Anna
+ 0 like - 0 dislike

Perhaps an extreme case of your line of thought where you focus on the "glitches" in your detector and try to extract data out of them is the "single pixel camera", where you take a series of random measurements from your very simple detector and reconstruct the image you're looking for. Here's a nice article by Terence Tao on the mathematics of these.
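To make the idea tangible, here is a toy version of that reconstruction using orthogonal matching pursuit, one of the simpler recovery algorithms from the compressed-sensing literature the article discusses: a sparse "scene" is recovered from fewer random measurements than it has pixels. The sizes and the Gaussian measurement matrix are illustrative choices, not anything taken from the actual single-pixel camera hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 25, 3                 # pixels, measurements, non-zeros

x = np.zeros(n)                     # a k-sparse toy scene
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement masks
y = A @ x                                  # m scalar detector readings

# Orthogonal matching pursuit: greedily pick the mask column that best
# explains the residual, then least-squares fit on the chosen support.
support, resid = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
```

With the number of measurements well above k·log(n), greedy recovery of the true support is very likely, which is the counterintuitive result the single-pixel camera exploits.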

This post imported from StackExchange Physics at 2014-04-01 16:39 (UCT), posted by SE-user j.c.
answered Jan 15, 2011 by (260 points)
