The Dark Mod Forums

Random math question:


Ishtvan


This could well be a dumb question, but I haven't found an answer yet:

 

Is there any way to add two complex numbers in polar form while preserving phase angles over 2*pi? I.e., not wrapping the phase angle when it reaches 2*pi? I've searched online and found nothing.

 

For example, you have two complex numbers, r1 angle phi1, r2 angle phi2, where phi1 or phi2 may be over 2*pi. Now add them to get r3 angle phi3, but preserve angles in excess of 2*pi without wrapping back to an angle between 0 and 2*pi.

 

The standard way of adding complex numbers in polar form is to convert to Cartesian form, such that:

 

r1 angle phi1 + r2 angle phi2 = r1(cos(phi1) + j*sin(phi1)) + r2(cos(phi2) + j*sin(phi2))

= (r1 cos(phi1) + r2 cos(phi2)) + j*(r1 sin(phi1) + r2 sin(phi2))

 

Then proceeding from there. But at that point you've already lost any phase information beyond 2*pi, because

sin(phi + 2*pi) = sin(phi), so the original phi has effectively been replaced by mod(phi, 2*pi).

So that doesn't work. If there were some trig identity to rewrite that in terms of sin or cos of (phi1 + phi2) or (phi1 - phi2), or half-angles, or something similar, that might work, but I haven't found one yet.
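Just to make the wrapping concrete, here's a minimal Matlab-style illustration (nothing specific to my application, just the loss of the extra turn):

phi  = 3*pi;              % "true" argument, one full turn past pi
z    = 1 * exp(1j*phi);   % convert to a complex number
back = angle(z);          % returns (approximately) pi, wrapped into (-pi, pi]
% the extra 2*pi turn is gone: back is the wrapped version of phi, not 3*pi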

 

Thinking about them visually as vectors doesn't seem to work either, because the vector r angle phi is equivalent to the vector r angle (phi + 2*pi). So you can't tell the vectors apart.

 

I need this for an application where the phase angles represent the distance travelled by a travelling wave, and it matters if they go over 2*pi, because the distance the wave has travelled is still increasing as the phase angle goes round and round the complex circle beyond 2*pi.


Hmm, that's a tricky one.

 

Seems to me you're trying to do something which doesn't make sense in the standard complex number paradigm; angle 2pi is not just "equivalent" to angle 0, it's actually defined to be the same angle. Put another way, the Cartesian form of a complex number is its canonical representation; if you can't distinguish two complex numbers based on their Cartesian form, then they are quite literally the same number. You can't say that one instance of 2+4i has a property which somehow differs from another instance of 2+4i. It's simply axiomatically untrue. Overloading the polar representation to encode additional information might be possible, but strikes me as hackish. It's certainly not mathematically well-founded.

 

Perhaps there's some other way to achieve what you're trying to do? Maybe you'll just have to forget about encoding distance travelled using the phase angle, and use an additional parameter instead. I'm not familiar with your project, or indeed with the entire field of physics in which it resides, so I'm afraid I can't be more help than that. :)

 

Maybe these guys can help you more?


What I really want to do is eventually equate the arguments of some exp(j*phi) terms. I agree that exp(j*0) and exp(j*2pi) are the same number, but that doesn't mean 0 and 2*pi are defined to be the same number. They are two different numbers, used as two different arguments to a function that repeats every 2*pi.

 

Put physically, a wave that travels 1 meter accumulates a phase of exp(j*(2pi/wavelength)*1). A wave that travels 1 meter plus one wavelength accumulates a phase of exp(j*((2pi/wavelength)*1 + 2pi)), which is the same phase as a wave that travelled one meter. But that does not mean the distance 1 meter and the distance 1 meter + wavelength are the same.

 

Say you have some equation with complex exponentials on the left and right sides:

exp( j*A ) = exp( j*B )

That reduces to A = B as one solution. That is not the same as A = B +/- 2*pi, which is a different solution, and we want to look at each individual solution. In particular, we care about the A = B solution because it is the "lowest order" mode of our system. I'll just state that without going into more detail.

 

Where it gets fuzzy is when you want to preserve the argument of exp(j*phi) terms over increasingly complicated manipulations. If you're just multiplying two terms, exp(j*phi1) * exp(j*phi2), it's easy to keep track of the argument, because the result is exp(j*(phi1 + phi2)), and the final argument is phi1 + phi2. You can keep track of this argument even if it goes over 2*pi.

 

The problem is, when you have addition instead of multiplication, A*exp(j*phi1) + B*exp(j*phi2) = C*exp(j*phi3), I don't see any way to keep track of phi3 over 2*pi, because our usual procedure for complex addition is to evaluate the polar arguments to get a Cartesian point and then switch back to polar. In doing so, you put the arguments through functions (cos, sin, tan) where f(argument + 2*pi) = f(argument). So an argument that "should" be over 2*pi jumps back down to mod(argument, 2*pi). Put another way, we know where we end up on the complex number circle, but we lose the information about how many times we went around the circle to get there. That information was originally present in the polar angles of the two numbers we added.

 

I'm just wondering if there's a different way to consider polar form addition where you still preserve that information (how many times you went around the circle / polar arguments over 2*pi).


I agree that exp(j*0) and exp(j*2pi) are the same number, but that doesn't mean 0 and 2*pi are defined to be the same number. They are two different numbers, used as arguments to a function that repeats every 2*pi.

You're using j=sqrt(-1), I take it?

 

As numbers 0 and 2*pi are of course different, yes. But when considered as angles in a geometrical context they lose all distinction; and by using polar coordinates, you are implicitly importing that context. My point is you're trying to overload the notion of "phase" to also include the notion of "particular distance travelled to achieve that phase". Surely there's a cleaner way to formulate that which sidesteps these issues; and if you can find it, your problem might disappear.

 

Put physically, a wave that travels 1 meter accumulates a phase of exp(j*(2pi/wavelength)*1). A wave that travels 1 meter plus one wavelength accumulates a phase of exp(j*((2pi/wavelength)*1 + 2pi)), which is the same phase as a wave that travelled one meter. But that does not mean the distance 1 meter and the distance 1 meter + wavelength are the same.

Of course not. I'm not saying they are. I'm saying that the fact that the phases are identical means that you can't shoehorn in the notion of distance and expect the resulting formulae to make any sense.

 

Where it gets fuzzy is when you want to preserve the argument of exp(j*phi) terms over increasingly complicated manipulations. If you're just multiplying two terms, exp(j*phi1) * exp(j*phi2), it's easy to keep track of the argument, because the result is exp(j*(phi1 + phi2)), and the final argument is phi1 + phi2. You can keep track of this argument even if it goes over 2*pi.

 

The problem is, when you have addition instead of multiplication, exp(j*phi1) + exp(j*phi2) = A*exp(j*phi3), I don't see any way to keep track of phi3 over 2*pi. Because our usual procedure for complex addition is to evaluate the polar argument to get a cartesian point, then switch back to polar, but in doing so, you put it through functions (cos, sin, tan) where f(argument + 2*pi) = f(argument), so you lose that information about the argument. Put another way, we know where we end up on the complex number circle, but we lose the information about how many times we went around the circle to get there.

Sure, I get that. I'm merely responding to your question by questioning the foundations of your formulation. :P

 

What I really want to do is eventually equate the arguments of some exp(j*phi) terms.

Since they're angles, I reckon you can equate them without needing to perform any mathematical gymnastics if you can show that they only differ by a multiple of 2*pi. It would read something like: "...formula 1 equals A*exp(j*phi1), and formula 2 equals B*exp(j*phi2). We can show that A=B. And, since phi1 and phi2 differ by exactly a multiple of 2*pi, they are the same angle. Therefore formulas 1 and 2 are equal."

 

If you also need to prove something about the distance travelled as well as the phase, I'd argue that you need to show that separately.

 

I'm just wondering if there's a different way to consider polar form addition where you still preserve that information (how many times you went around the circle / polar arguments over 2*pi).

OK, OK, I'll try to answer your actual question then... :P Time to break out the trig...

 

*45 minutes later* Eh, I got nothing. Drew some diagrams and got enmeshed in trig for a while, but I can't find any obvious way to relate the angles without putting them through a trig function first.


By putting your result back into the exponential function, you're performing a non-injective transformation which is not invertible. You cannot "undo" this loss of information. The exponential function is only injective when using real numbers, not complex ones.

 

You need to keep track of it manually during your algorithm. The result in the form r·e^(i·phi) won't allow you to extract information which is not there anymore.


Yeah, that makes sense. I was hoping there was a way to perform the addition without ever leaving the exponential form, where you keep track of the initial arguments, but I don't see a way. Unfortunately it becomes hard to "keep track" of the argument during this algorithm, because it involves multiplying together an arbitrary number of 2x2 matrices with complex elements, then dividing element 2,1 of the result by element 2,2. When you do matrix multiplication, you have to add complex numbers to get each element of the resulting matrix. That brings us back to the problem of adding complex numbers while preserving arguments over 2*pi.

 

The problem I'm trying to solve is finding the modes of a symmetric, dielectric waveguide with an arbitrary number of layers. Given the waveguide structure, find all the propagation constants corresponding to supported modes. I did a symmetric 5-layer waveguide solver earlier, and there it's still possible to algebraically keep track of the phase and figure out how many times you've gone through 2*pi. For the n-layer guide though, you need to use a transfer matrix method to reduce multiple reflections off of n layers down to an equivalent reflection off of the start of the stack. For this, you have to multiply 2x2 complex matrices, one for each layer, until finally you get a matrix representing all the layers and can get the complex reflection coefficient from its elements. I haven't found a way to track the phase arguments through this iterative matrix multiplication though.
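Schematically, that calculation looks something like this in Matlab (just a rough sketch; layer_matrix() is a made-up stand-in for whatever builds the 2x2 matrix of a single layer at a given propagation constant beta):

M = eye(2);                          % accumulated transfer matrix for the stack
for n = 1:numLayers
    M = M * layer_matrix(beta, n);   % hypothetical 2x2 matrix for layer n at this beta
end
r   = M(2,1) / M(2,2);               % complex reflection coefficient of the whole stack
phi = angle(r);                      % and here the argument comes out wrapped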

 

Now, I can just give up and unwrap the phase numerically, using Matlab's unwrap() function for example, which looks for the discontinuous jumps from +pi to -pi. But this requires evaluating the function over the full range of possible propagation constants, because you need to know the function at all those points before you can numerically unwrap it.

 

This is less elegant than unwrapping the phase algebraically at a given point, because we want to solve a transcendental equation where this phase angle is on one side and something else is on the other side. If you can unwrap it at a given point, without needing to know the function over a wide range, you can use fast and accurate zero-finding algorithms that rapidly converge on the zero without having to evaluate the function very many times. For the numeric unwrapping, you need to evaluate the function over your whole range before you can unwrap it, and your accuracy will be limited by how finely you discretize that range.
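In Matlab terms, the numeric route is roughly this (a sketch; refl_phase() is a made-up stand-in for the transfer-matrix calculation that returns the wrapped phase at a single propagation constant, and beta_min/beta_max are whatever range is physically allowed):

beta  = linspace(beta_min, beta_max, 5000);   % discretized propagation constants
phi_w = arrayfun(@(b) refl_phase(b), beta);   % wrapped phase at each grid point
phi   = unwrap(phi_w);                        % unwrap() removes the +/-pi jumps
% zeros of phi minus the other side of the transcendental equation are then
% located on this grid, so the accuracy is set by the grid spacing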

 

I found some C++ code that someone else wrote to solve this problem, but they use the method of discretizing the whole range of possible propagation constants and trying each point. I was hoping there was a way to unwrap the phase at a single point so that faster/more accurate zero-finding methods could be used.

 

@Crispy:

Of course not. I'm not saying they are. I'm saying that the fact that the phases are identical means that you can't shoehorn in the notion of distance and expect the resulting formulae to make any sense.

I would disagree :). Writing a wave as a complex exponential involving the distance travelled comes from solving a 2nd-order DE for the wave equation, and the solutions can be expressed as A*exp(+j*k*x) + B*exp(-j*k*x). This takes into account both sinusoidal solutions and exponential decay/growth solutions, because k can be complex. For real k, the phase does repeat every time the argument changes by 2*pi; that's why it's still a wave and not something else. A wave does repeat every time the distance argument changes by a wavelength. There's nothing inherently wrong with that solution.


@Crispy: I think I can explain better.

 

The resonance condition that you're trying to solve really equates the phase, not the distance travelled. So yes, this is a weird equation that has an infinite number of solutions spaced out by 2*pi; waves can travel different distances and still satisfy the equation. That is the physical reality: there can be more than one possible mode of the structure. But there are ways of looking at the arguments of the functions to see which solution you are at. Are you at A = B, or A = B + 2*pi? This tells you useful physical things, like whether your resulting mode is even or odd, how many nodes it has, etc. I don't know if you've done the "particle in a box" problem in quantum mechanics, but solving the electromagnetic Helmholtz wave equation for optical modes in regions of different dielectric constant is mathematically analogous to that. It just gets more complicated the more layers you have.

 

There are usually ways to tell which solution you are at, and you do sort of look at the distance travelled separately, because you look at the argument separately from the result of putting it into the exponential function. It's just in this case, the argument gets lost in all the damn manipulations I need to do to get the complex reflection coefficient for an arbitrary stack of different layers.

 

It seems like there is no way to track the arguments through these matrix multiplications though, so I will probably just have to plug in a full set of possible mode propagation constants and sweep through them one by one to see which mode I'm at (the lowest order mode is the highest possible propagation constant that solves the equation and the mode order increases as you go down in propagation constant).


Hmm, it occurred to me that you can define an effective length for a multilayer structure,

 

L_eff = 0.5 * dphi/dBeta

 

And I think you can actually use this effective length to solve for the resonant modes of a cavity with this structure on either side, rather than relying on phi. But to evaluate that numerically, it seems like you would already have to know the value of phi over some range of Betas. That has the same speed/accuracy drawback as evaluating phi over the range of Betas and then numerically unwrapping the 2*pi jumps.
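Numerically that would just be a finite difference on the unwrapped phi, something like this (assuming phi has already been evaluated and unwrapped over a grid of Betas, as in the sketch above):

dphi_dbeta = gradient(phi, beta);   % numerical derivative of the unwrapped phase w.r.t. Beta
L_eff      = 0.5 * dphi_dbeta;      % effective length at each grid point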

 

I don't know of a way to find the derivative of phi with respect to Beta symbolically when phi is given by:

angle( matrix element / matrix element ), where the matrix has been generated by multiplying an arbitrary number of 2x2 matrices together, each with elements that depend on Beta. In each individual case, given enough time, you could write it out symbolically, but I don't know how to do it in the general case with N matrices. At the very least it would go to a higher and higher order polynomial in Beta the more matrices you multiplied together.

 

Oh well at this point I'm ready to say screw it, I'll just put in the whole range of Betas and unwrap phi numerically.


I only did high school physics, and we didn't cover quantum mechanics at all. So much of the physics here is completely over my head. :) I did do a couple of years of university mathematics though (the high-end undergraduate stream, the same one as done by people going for an Honours in maths). So that's where I'm coming from; basically a pure mathematics perspective, without having any understanding of the underlying physical processes.

 

I do understand where you're coming from, and by the sound of it you do have valid reasons for doing what you're doing. I still think it's dodgy though. :P Chalk it up to the old theory vs. practice divide; i.e. mathematicians are to theoretical physicists as theoretical physicists are to experimental physicists. ;)


I'm an experimentalist whose equipment is currently broken, so I'm stuck doing some simulations. :)

 

As a side note, I just wasted almost all of today because I assumed Matlab's "prod" function would multiply the 2x2 matrices in the usual matrix-multiplication fashion if you had a 2x2xN matrix M and tried prod(M) along dimension 3. Especially since when you put in M(:,:,n), you get a 2x2 matrix out. But no, prod just multiplies element by element and gives you a 2x2 matrix where element 1,1 is the product of all the element 1,1's of each matrix along the third dimension. Apparently there is no matrix-multiplying version of prod in Matlab. Because who would ever think of using Matlab to, you know, multiply matrices.

 

It seems I'll have to use a freaking for loop. I don't think the 3rd-party multiprod function will be useful here, because I want to multiply together all the elements of one array of matrices, not multiply two arrays of matrices together. Although maybe there is some clever way to trick multiprod into doing that.
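For the record, the loop itself is trivial enough; something like this, with M being the 2x2xN array of per-layer matrices:

P = eye(2);              % running matrix product
for n = 1:size(M, 3)
    P = P * M(:,:,n);    % a real matrix multiply, unlike prod()
end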

 

Me not like math, will crush with rock!!


I'm an experimentalist whose equipment is currently broken, so I'm stuck doing some simulations. :)

This was one of the reasons why I didn't want to stay in my research group and continue working on my project. The topic itself was interesting, but in chronically underfunded university facilities there is always something broken. So much time is wasted just waiting on spare parts, waiting for the other guy to release the only vacuum pump, or simply re-doing your whole fucking device because the insulation of the vacuum deposition equipment was defective and you had too much pressure at the end of a 5-day production cycle.


Yeah, problems like that are frustrating. We spent years arranging the purchase of a brand new $200k short pulse laser system. After they came out and set it up, it worked for a few days, then broke. First a solenoid driving a mechanical shutter broke and the shutter stopped opening, and it was in a system where we theoretically void the warranty if we work on it ourselves. Then the control electronics went haywire and, as far as we can tell, fed in lots of current while turning off the cooling. That cracked a gain rod. Then there were a series of other problems. Among other things, it sprung a water leak. <_< And I'm supposed to be defending my thesis some time this fall.


I need this for an application where the phase angles represent the distance travelled by a travelling wave, and it matters if they go over 2*pi, because the distance the wave has travelled is still increasing as the phase angle goes round and round the complex circle beyond 2*pi.

Sounds a bit like the phase correlation function... :)

 

But your problem is that you want to perform the addition on the basis of Cartesian coordinates, while also wanting the advantages of a polar-coordinate addition, which would simply be (r1+r2, phi1+phi2). A very simple approach to your problem would be:

Perform the addition in Cartesian coordinates => z3
compute norm of z3 => r3
compute phase of z3 (arctan with case differentiation) => phi3'
if abs( phi1 % 2pi - phi2 % 2pi ) > pi the range of [0,2pi] will be surpassed.
  phi3 = phi3' + int( phi1 / 2pi ) + int( phi2 / 2pi );  //to preserve the angles of the summands....
  if r1 > r2
   phi3 = phi3 + (-1)^( phi1 % 2pi > pi )  * 2pi;
  else
   phi3 = phi3 + (-1)^( phi1 % 2pi <= pi )  * 2pi;
  end
end

The direction in which the angle is updated depends on the first summand, but basically this is just a definition. You could also just always add 2*pi if the first condition is true. By the way, I didn't read all of this... too much text! :) This is just what came to mind.

Edited by STiFU

Thanks! I'll have to think about that for a bit, and see if it's possible to write out my general matrix multiplication by hand in that way. E.g., multiplying two matrices A and B to get C,

C11 = A11 * B11 + A12 * B21

Where * above is explicitly a polar multiply and + is the modified addition we're talking about.
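Just to fix ideas, if each matrix entry were carried around as an [r, phi] pair with phi left unwrapped, the multiply part is easy to write down; it's the add that's still the open question. A rough Matlab sketch (the padd below just falls back through Cartesian form, so it still wraps; that is exactly the missing piece):

function c = pmul(a, b)
% a, b are [r, phi] pairs; phi is the unwrapped argument
c = [ a(1)*b(1), a(2) + b(2) ];   % magnitudes multiply, arguments add, no wrapping
end

function c = padd(a, b)
% placeholder: go through Cartesian form and back
z = a(1)*exp(1j*a(2)) + b(1)*exp(1j*b(2));
c = [ abs(z), angle(z) ];         % angle() wraps into (-pi, pi], so the winding count is lost here
end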

 

I'm wondering about the specific code, and what it does if phi1 and phi2 start out over 2*pi. The inner stuff looks like it would work, but this:

if abs( phi1 % 2pi - phi2 % 2pi ) > pi the range of [0,2pi] will be surpassed.

Seems like it would skip everything else if phi1 was, say, 4*pi and phi2 was 8*pi, and just leave phi3 as phi3'.

That is, that if condition seems like it would handle the one extra wrapping in the addition of the angles, but this line should maybe be outside of that if?

phi3 = phi3' + int( phi1 / 2pi ) + int( phi2 / 2pi );

 

At this point it's kind of an academic exercise. I implemented the method where I just put in some range for the variational parameter and unwrap the phase using the result along that whole axis. It's slower/less accurate this way, but the implementation does seem to be fast & accurate enough. :)


I'm also intuitively bothered by this definition of addition. r1 and r2 can take on any value. In the limit where r1 >> r2, the final result r3 angle phi3 should be very similar to r1 angle phi1, shouldn't it? For example, 1 angle pi/2 + 0.0001 angle 10*pi/2 shouldn't give phi3 = (10 + 3/4) *pi, should it?

 

So I'm not sure that wrapping by adding int(phi1 / (2*pi)) and int(phi2 / (2*pi)) regardless of the magnitudes of r1 and r2 makes sense. But maybe it does in this weird definition where we always want to know how far we've gone in phase; I don't know. It's sort of like saying we started with a small vector, spun around the circle a bunch of times and then added a larger vector? Then spun around some more depending on whether the angle of that larger vector was over 2*pi?


Hmm, now that I think of it, I wonder if I could do a hybrid algorithm: first evaluate the final phase on a low-res axis and numerically unwrap it to get a general idea of where it should be. Then the single-point evaluation could check the low-res phase, see how many multiples of 2*pi it is off by, and unwrap the point based on that. That way you can do fine root-finding beyond your low-res resolution.
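Something like this, maybe (a sketch; refl_phase() again stands in for the single-point wrapped-phase evaluation, and beta is the point of interest):

beta_c  = linspace(beta_min, beta_max, 200);               % low-res grid
phi_c   = unwrap(arrayfun(@(b) refl_phase(b), beta_c));    % coarse unwrapped phase
phi_ref = interp1(beta_c, phi_c, beta);                    % rough estimate at the point of interest
phi_w   = refl_phase(beta);                                % wrapped value at that single point
phi_u   = phi_w + 2*pi*round((phi_ref - phi_w)/(2*pi));    % snap onto the nearest 2*pi branch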

 

Maybe I'll try that if it needs more accuracy and optimization later. As it is, I can match the effective index down to the 4th decimal place or so within a reasonable evaluation time. Although that's only about 1% accuracy overall, since my device uses ~1% changes in index. :)


This thread reminds me of that episode where Lt. Barclay is discussing equations at the chalkboard with Einstein. Couldn't find an image, dammit.

 

I have nothing to contribute here (waaay out of my league), except to say it somehow gives me a warm fuzzy to see the actual "attempting to solve a problem" (presumably for the first time?) of graduate work, as opposed to the "solve the same damn textbook assignments a million other students also solved" of undergraduate work. I recall doing my senior (undergrad) "research" on a platinum cancer drug, only to find out near the end: "oh, this is just an academic exercise; this compound has already been researched and dismissed as a viable drug." Oh... erm... thanks. :-/


I'm not solving this particular problem for the first time; people have written plenty of computational mode solvers. In fact, many people would laugh at me because I'm only doing a one-dimensional case, and people do much more complicated algorithms with 3D waveguide structures.

 

The entire device as a whole is pretty much being done for the first time, and this mode solver is one part of simulating the whole device. It's a question of how long it would take to integrate someone else's mode solver code with my existing code vs. writing my own. It can be very time-consuming if the other code is in a different language, or, even worse, done by some closed-source commercial package where you just get the result out and have to make that whole package talk to your code.

 

Also, if I go through the whole derivation and code-writing myself, I have the theoretical knowledge to recognize erroneous results, and I know what conditions it works under and what conditions might break it. That gives you more trust in the code than blindly running someone else's code, where you don't have first-hand knowledge of what kinds of inputs cause it to put out erroneous results, or of what an erroneous result might look like. Since we're going to use this overall code to order thousands of dollars' worth of material growth, it's good to have first-hand knowledge of every part of it.

 

I am using other people's code to verify the results of mine. For example, the University of Twente has a good C++ mode solver accessible via a Java applet. Here is the link in case anyone wants to recreationally find waveguide modes (sounds like an awesome Saturday night activity to me):

http://wwwhome.math.utwente.nl/~hammer/oms.html


I'm wondering about the specific code, and what it does if phi1 and phi2 start out over 2*pi. The inner stuff looks like it would work, but this:

if abs( phi1 % 2pi - phi2 % 2pi ) > pi the range of [0,2pi] will be surpassed.

Seems like it would skip everything else if phi1 was, say, 4*pi and phi2 was 8*pi, and just leave phi3 as phi3'.

This condition just means that the vectors point in opposite directions. The direction is represented in phi, so every value greater than 2*pi is just redundant information and can be discarded here.

 

but this line should maybe be outside of that if?

phi3 = phi3' + int( phi1 / 2pi ) + int( phi2 / 2pi );

True! :)

 

I'm also intuitively bothered by this definition of addition. r1 and r2 can take on any value. In the limit where r1 >> r2, the final result r3 angle phi3 should be very similar to r1 angle phi1, shouldn't it?

Now that you mention the magnitudes, I noticed a mistake in the concept:

if r1 > r2  ....
//This should rather be
if r1 * sin(phi1) > r2 * sin(phi2)
//which is equivalent  to
if y1 > y2

Furthermore, it should be checked whether the signs of y1 and y2 are different, as a final condition. I'd rewrite the algorithm, but I'm a little... erm... "indisposed" right now! ;) I'll get back to you another time...

