PhysicsOverflow is a next-generation academic platform for physicists and astronomers, including a community peer review system and a postgraduate-level discussion forum analogous to MathOverflow.

How to formulate variational principles (Lagrangian/Hamiltonian) for nonlinear, dissipative or initial value problems?


Although this question is very much math-related, I posted it in Physics since it concerns variational (Lagrangian/Hamiltonian) principles for dynamical systems. If I should migrate it elsewhere, please let me know.

Often, in graduate and undergraduate courses, we are told that we can only formulate the Lagrangian (and Hamiltonian) for "potential" systems, wherein the dynamics satisfy the condition: $$ m\ddot{\mathbf{x}}=-\nabla V $$ If this is true, we can formulate a functional which is stationary on trajectories of the system: $$ F[\mathbf{x}]=\int^{t}_0\left(\frac{1}{2}m\dot{\mathbf{x}}(\tau)^2-V(\mathbf{x}(\tau))\right)\,\text{d}\tau $$

Taking the first variation of this functional yields the dynamics of the system, together with a condition that the variation vanishes at the boundaries, i.e. the initial and final configurations are held fixed.
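As a sanity check of the potential-system case, the Euler-Lagrange equation can be recovered symbolically. Here is a minimal sketch using sympy's `euler_equations`, with a harmonic potential $V=\frac{1}{2}kx^2$ chosen purely for illustration:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')

# Lagrangian L = T - V for a harmonic potential V = k x^2 / 2 (illustrative choice)
L = sp.Rational(1, 2) * m * sp.Derivative(x(t), t)**2 - sp.Rational(1, 2) * k * x(t)**2

# Euler-Lagrange: dL/dx - d/dt (dL/dx') = 0, i.e. m x'' = -k x = -dV/dx
eqs = euler_equations(L, x(t), t)
print(eqs[0])
```

The printed equation is equivalent to $m\ddot{x} = -\nabla V$ for this potential, matching the condition above.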

Now, consider the functional: $$ F[\mathbf{x}]=\frac{1}{2}[\mathbf{x}^{\text{T}} * D(\mathbf{x})]+\frac{1}{2}[\mathbf{x}^{\text{T}} * \mathbf{Ax}]-\frac{1}{2}\mathbf{x}^{\text{T}}(0)\mathbf{x}(t) $$ where $\mathbf{A}$ is symmetric, $\mathbf{x}(0)$ is the initial condition, and: $$ [\mathbf{f}^{\text{T}} * \mathbf{g}]=\int^{t}_0 \mathbf{f}^{\text{T}}(t-\tau)\mathbf{g}(\tau)\,\text{d}\tau $$

If we take the first variation and assume only that the initial variation is zero, the functional is stationary with respect to: $$ \frac{d\mathbf{x}(t)}{dt}= \mathbf{Ax}(t) $$

This functional, derived by Tonti and Gurtin, represents a variational principle for linear initial value problems with symmetric state matrices. It shows, as a proof of concept, that functionals can be derived for non-potential, initial value, or dissipative systems.
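One property the construction above leans on is that the convolution pairing is symmetric, $[\mathbf{f}^{\text{T}} * \mathbf{g}] = [\mathbf{g}^{\text{T}} * \mathbf{f}]$, since convolution is commutative. A quick numerical sanity check of this, for scalar signals chosen arbitrarily (the vector case works the same way component-wise):

```python
import numpy as np

# Discrete version of the pairing [f^T * g] = ∫_0^t f(t-τ) g(τ) dτ for scalar
# signals, evaluated at the endpoint t = (n-1)*dt with a left Riemann sum.
def pairing(f, g, dt):
    n = len(f)
    return dt * sum(f[n - 1 - j] * g[j] for j in range(n))

dt = 1e-3
tau = np.arange(0.0, 1.0, dt)
f = np.exp(-tau)          # arbitrary test signals
g = np.cos(3.0 * tau)

# Convolution is commutative, so the pairing is symmetric: [f*g] = [g*f].
diff = abs(pairing(f, g, dt) - pairing(g, f, dt))
print(diff)  # zero up to floating-point rounding
```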

My question is: is it possible to derive such functionals for arbitrary nonlinear systems which do not have similar initial and final configurations (and cannot, due to dissipation)?

What sorts of conditions would exist on the dynamics of these systems?

In this example, $\mathbf{A}$ must be symmetric, which already implies all of its eigenvalues are real and thus that the system can be non-potential (dissipative), yet there is still a functional which can be derived for it.
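To make the symmetric-matrix case concrete, here is a small numerical illustration (the matrix and initial state are my own choices): a symmetric $\mathbf{A}$ has real eigenvalues, and when they are negative the flow $\dot{\mathbf{x}} = \mathbf{Ax}$ is dissipative, even though no potential $V$ generates it.

```python
import numpy as np

# A symmetric state matrix has real eigenvalues; if they are negative, the
# flow x' = A x is dissipative.  (Matrix and initial state chosen for illustration.)
A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
w, V = np.linalg.eigh(A)   # eigh: symmetric eigenproblem, w is guaranteed real

# Propagate via the spectral decomposition exp(A t) = V diag(e^{w t}) V^T
def propagate(x0, t):
    return V @ (np.exp(w * t) * (V.T @ x0))

x0 = np.array([1.0, 1.0])
print(w)                                    # both eigenvalues negative here
print(np.linalg.norm(propagate(x0, 5.0)))   # far smaller than |x0|: dissipation

# Finite-difference check that x(t) = exp(A t) x0 indeed satisfies x' = A x
t0, h = 1.0, 1e-6
fd = (propagate(x0, t0 + h) - propagate(x0, t0 - h)) / (2 * h)
print(np.linalg.norm(fd - A @ propagate(x0, t0)))  # ~0
```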

Any related sources, information, or answers regarding specific cases would be appreciated. If anyone needs clarification, or a proof of any result I presented here, let me know.

Edit: Also, a related question for anyone seeing this: I'm currently just interested in the abstract aspect of the problem (solving/investigating it for its own sake), but why are functional representations such as these useful? I know there are some numerical applications, but if I have a functional which attains a minimum for a certain system, what can I do with it?


This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user Ron

asked Feb 25, 2015 in Theoretical Physics by Ron (25 points) [ revision history ]
edited Jul 29, 2015 by Dilaton
Friction and dissipation are non-variational; see e.g. this post. There are "Lagrangian" formulations for dissipative forces, but they do not obey a naive principle of least action; see this post and this paper.

This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user ACuriousMind
@ACuriousMind: I will review the posts and paper you linked to; I'm not sure yet what you mean, but I'll come back by the end of today to reply with more specificity. Also, another thing I want to mention: Rayleigh has a method for dealing with dissipation and external forcing, but I am specifically looking for a self-contained formulation with no dissipation function necessary, just one functional.

This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user Ron
The first post I linked shows a general method for deciding whether a system given by differential equations admits a Lagrangian description. It is definite: there is no Lagrangian description of a generic dissipative force (though you might be able to cheat in specific situations). The paper discusses how an "extended Lagrangian description", whose Lagrangian is not the sum of potential and kinetic energies, may be set up to model dissipative forces.

This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user ACuriousMind
@ACuriousMind: I'm not necessarily looking for a Lagrangian; I'm asking a more general question about the existence of ANY functional which may be stationary with respect to ANY system. A related question would be: when can you "cheat", and why? What sorts of conditions are there on systems where you can "cheat"?

This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user Ron
As Qmechanic writes: "This opens up a lot of possibilities, and it can be very difficult to systematically find an action principle; or conversely, to prove a no-go theorem that a given set of eoms is not variational." I think we don't have an answer to your question in such generality.

This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user ACuriousMind
@ACuriousMind: I don't see Qmechanic's comment for some reason, but I understand what you mean. I've seen other papers on the topic, mainly ones that delve into convolution-based approaches. I was mainly interested in the theoretical aspects of the problem: for example, assuming I can find a functional, no matter what it is, what sorts of conditions exist on the dynamics, the functional, etc.?

This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user Ron

1 Answer


Comments to the question (v3):

I) The Gurtin-Tonti bi-local method [which OP mentions in an example; see also Section II below] of pairing opposite times $t\leftrightarrow (t_f-t_i)-t$ (hidden inside a convolution) is an artificial trick from a fundamental physics point of view, unless further justified. Why would such correlations into the past/future take place?

In fact, it may have non-local quantum mechanical consequences if such non-local action is supposed to be used in a path integral formalism.

Also the Gurtin-Tonti convolution method does not work for a non-compact time interval $[t_i,t_f]$, i.e. if $t_i=-\infty$ or $t_f=\infty$.

Most fundamental physics models typically obey locality, but there are various non-local proposals on the market.

The question of whether a certain set of equations of motions $E_i(t)$ has an action principle (or not!) can be very difficult to answer, and is often an active research area, cf. e.g. this Phys.SE post.

Also what constitutes an acceptable action principle? E.g. can we just introduce some Lagrange multipliers $\lambda^{i}(t)$ and an action $S=\int\! dt ~\lambda^i(t) E_i(t)$ so that $\delta S/ \delta\lambda^i(t) = E_i(t)$, and call it a day? Or are we not allowed to introduce auxiliary variables or non-locality? Should it satisfy a minimum principle rather than a stationary principle? And so forth.
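The Lagrange-multiplier construction just mentioned can be made concrete. A minimal sympy sketch for a single dissipative equation of motion $E(t)=\dot{x}+\gamma x=0$ (the example eom and all names here are my own illustration, not from the answer):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
gamma = sp.symbols('gamma', positive=True)
x = sp.Function('x')
lam = sp.Function('lam')   # Lagrange multiplier field

# Action S = ∫ lam * (x' + gamma x) dt, so that δS/δlam = E = x' + gamma x
L = lam(t) * (sp.Derivative(x(t), t) + gamma * x(t))

# Varying lam reproduces the eom; varying x yields an "adjoint" equation for lam
eqs = euler_equations(L, [x(t), lam(t)], t)
print(eqs)
```

Note the price of this trick: the auxiliary field `lam` acquires its own (anti-damped) dynamics, which illustrates why one may hesitate to "call it a day".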

II) Example. Let us for simplicity consider the unit time interval $[t_i,t_f]=[0,1]$. A symmetrized version of the Gurtin-Tonti model is the following bi-local action

$$S[q]~:=~ \frac{1}{4}\iint_{[0,1]^2} \!dt~du~\left\{ q^i(t) \left(\frac{dq^i(u)}{du}- A_{ij}(t,u) q^j(u)\right)+(t\leftrightarrow u) \right\}\delta(t+u-1) $$ $$~=~\frac{1}{2}\int_{[0,1]} \!dt~\left\{\frac{1}{2} q^i(1\!-\!t) \frac{dq^i(t)}{dt}-\frac{1}{2}q^i(t) \frac{dq^i(1\!-\!t)}{dt}- q^i(1\!-\!t)A_{ij}(1-t,t) q^j(t) \right\} $$ $$~=~\frac{1}{2}\int_{[0,1]} \!dt~\left\{ q^i(1\!-\!t) \frac{dq^i(t)}{dt}- q^i(1\!-\!t)A_{ij}(1-t,t) q^j(t) \right\} \tag{1}$$

with symmetric matrix

$$\tag{2} A_{ij}(t,u) ~=~A_{ji}(u,t) .$$

Interestingly, the boundary contributions in the variation $\delta S$ cancel without imposing any boundary conditions (BCs). In other words, as far as finding stationary solutions is concerned, we may assume that the variables $q^i$ are free at both end points. (However, there might be other reasons to impose BCs.)

The functional derivative reads

$$\tag{3} \frac{\delta S[q]}{\delta q^i(t)}~=~\left.\left\{\frac{dq^i(u)}{du}- A_{ij}(t,u) q^j(u)\right\}\right|_{u=1-t}. $$

Hence the equations of motion become

$$\tag{4} \frac{dq^i(t)}{dt}~\approx~A_{ij}(1\!-\!t,t) q^j(t). $$

References:

  1. V. Berdichevsky, Variational Principles of Continuum Mechanics: I. Fundamentals, 2009; Appendix B.
This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user Qmechanic
answered Feb 26, 2015 by Qmechanic (2,790 points) [ no revision ]
I'm not sure what you mean by "artificial trick"; can you explain that further? Also, although I see why the functional you have presented may have some issues (the non-locality in time for $A$), what about examples using convolution? They seem to solve this problem.

This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user Ron
I updated the answer.

This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user Qmechanic
I think I see what you mean. Effectively, you're making the argument that mixing these opposite times doesn't make sense from a physical standpoint? What kind of justification would make it more reasonable? Take for example this paper: arxiv.org/abs/1112.2286 Here, the author uses fractional derivatives to formulate a variational principle. Fractional derivatives are also non-local quantities, and there is something to be said about processes with friction being non-local in a sense, as they are path dependent.

This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user Ron
I see what you mean about the issue of locality. It's interesting to note that in the frequency domain, the convolution takes on a local form and the inner product does the opposite.

This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user Ron
The convolution could also be useful for describing dissipative processes specifically because of its non-local nature. It is sort of a way of saying "I'm currently at time $t$; now let me convolve all of the information about my state $t$ time in the past with the information going back to that time." If you look at the requirements for a system to work in the convolutive framework with path independence, for linear systems only those with symmetric state matrices qualify (which have only real eigenvalues, and can be dissipative).

This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user Ron
Also, there are ways to formulate local variational principles for dissipative systems; my only worry is that the configuration of the system isn't truly the same at the beginning and the end, so how can one justify setting the variation to zero at both endpoints?

This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user Ron

user contributions licensed under cc by-sa 3.0 with attribution required
