danrop
Hi all,
I'm trying to understand someone's PhD thesis on the topic of variational surface evolution and its application in computer vision, and I'm having trouble working out how he evaluates some particular types of expressions involving the gradient.
I think it'll be easier if I specify the relevant references directly, in the hope that someone with the time, patience and knowledge can take a brief look at them and help clarify things.
For an overview of the author's research, please take a look at the following presentation:
http://www.uib.no/People/nmaxt/oslo-talk/Solem_Oslo-2005.pdf
Of particular interest to my problem is the gradient projection given on page 10 of this presentation, and the gradient descent evolution of the surface on page 15.
I will reproduce these two equations below (with a slight difference from the original document), referring to them as the gradient projection eq. and the steepest descent eq. respectively:
[tex]\nabla_{S^m} f(\mathbf{x}) = \nabla \tilde{f} - \langle \nabla\tilde{f}, \mathbf{n}\rangle\mathbf{n}[/tex]
[tex]\nabla_M E = \nabla\cdot(g_n + g\mathbf{n})[/tex]
(Note that the second equation uses the projected gradient term from the first - [tex]g_n = \nabla_{S^m} g[/tex] )
My first equation uses n instead of x, as the author does in the presentation, because there he has written the equation for the specific case where M is the unit sphere. I simply wanted to emphasise that M could be any surface, with n the unit normal at the point under consideration. Also, to remove a potential confusion (one that I experienced initially): in the general case, the S^m in [tex]\nabla_{S^m}[/tex] merely indicates that the 'Gauss map' on any closed manifold is given by the map n : M -> S^m
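To make sure I at least understand the projection formula itself, here is a small numeric sanity check I wrote (entirely my own, not from the thesis; I assume the simple extension f~(x) = a.x for some constant vector a, so its ambient gradient is just a):

```python
import numpy as np

# Sanity check of the gradient projection formula
#   grad_S f = grad f~  -  <grad f~, n> n
# on the unit sphere, for the (assumed) extension f~(x) = a . x,
# whose ambient gradient is the constant vector a.

a = np.array([1.0, 2.0, -0.5])         # arbitrary constant vector
x = np.array([1.0, 1.0, 1.0])
n = x / np.linalg.norm(x)              # point on unit sphere = outward normal

grad_ambient = a                       # grad of f~(x) = a . x
grad_tangent = grad_ambient - np.dot(grad_ambient, n) * n

# The projected gradient should be tangent to the sphere:
print(np.dot(grad_tangent, n))         # ~0 up to floating point
```

At least this part behaves as I expect: the result is orthogonal to n, i.e. it lives in the tangent plane.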
Now, referring to the author's PhD thesis:
http://homeweb.mah.se/~tsjeso/publications/Solem-thesis-2006.pdf
My main problem is how the author applies these equations to some specific examples of energy functionals, on pages 40 and 41 of the thesis.
Referring first to the (simpler) example on page 41 (section 3.7), with
[tex] g = -\frac{1}{2} (\mathbf{v}\cdot\mathbf{n}) [/tex]
The way the author applies the gradient projection eq. seems to imply that
[tex]
\nabla_{S^m}(\mathbf{v}\cdot\mathbf{n}) = \mathbf{v} - (\mathbf{v}\cdot\mathbf{n})\mathbf{n}
[/tex]
or to narrow it down even further
[tex] \nabla(\mathbf{v}\cdot\mathbf{n}) = \mathbf{v} [/tex]
(with the assumption, I suppose, that the vector field v is defined throughout the ambient space and is differentiable there). Maybe it's just some simple vector-calculus property of the gradient operator that I haven't been able to apply, but I don't get it.
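To make my confusion concrete, I tried the following finite-difference experiment (my own code, assuming v is a constant vector). If I treat f(n) = v.n as a function of the normal n itself, i.e. a function on the unit sphere via the Gauss map, and differentiate along tangent directions of the sphere, I do recover the right-hand side v - (v.n)n - so perhaps the point is that the differentiation is with respect to n rather than x?

```python
import numpy as np

# Finite-difference check of the identity the thesis seems to use:
#   grad_{S^m}(v . n) = v - (v . n) n
# Here f(n) = v . n is viewed as a function of the NORMAL n (a point
# on the unit sphere), and I differentiate it along tangent directions
# of the sphere.  v is a constant vector (my assumption; in the thesis
# v is a vector field).

v = np.array([0.3, -1.2, 2.0])                  # constant "vector field"
n = np.array([1.0, 2.0, 2.0]) / 3.0             # unit normal (|n| = 1)

# orthonormal tangent basis {t1, t2} at n
t1 = np.cross(n, [1.0, 0.0, 0.0])
t1 /= np.linalg.norm(t1)
t2 = np.cross(n, t1)

f = lambda m: np.dot(v, m / np.linalg.norm(m))  # f restricted to the sphere

eps = 1e-6
d1 = (f(n + eps * t1) - f(n - eps * t1)) / (2 * eps)  # derivative along t1
d2 = (f(n + eps * t2) - f(n - eps * t2)) / (2 * eps)  # derivative along t2

grad_fd = d1 * t1 + d2 * t2                     # tangential gradient (numeric)
grad_formula = v - np.dot(v, n) * n             # claimed closed form

print(np.max(np.abs(grad_fd - grad_formula)))  # tiny (finite-difference error)
```

So numerically the two sides agree when v is constant, but I still don't see how this is justified in general, or how it extends to the harder examples.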
Ditto with the more complex examples on page 40 of the thesis, eqs. 3.27 and 3.25.
I don't have a lot of mathematical background, so forgive me if I've made a slip-up somewhere in my understanding of the problem and the author's solution (in which case I would welcome any corrections).
I'd be grateful for any help...
Thanks!