Computing and Processing Correspondences with Functional Maps | Maks Ovsjanikov et al.
Computing and processing correspondences with functional maps | ACM SIGGRAPH 2017 Courses
Part I: Functional Maps Basics
P.S. Part names were added by me.
What are Functional Maps?
- Shape correspondence can be represented as the transportation $T_F$ of functions between shapes. If such functions are represented with bases $\{\phi_i\}$ on the respective shapes, the formulation becomes: $T_F(f) = T_F\big(\sum_i a_i \phi_i^M\big) = \sum_i a_i\, T_F(\phi_i^M)$
Here the core is $T_F(\phi_i^M) = \sum_j c_{ji}\, \phi_j^N$, so that $T_F(f) = \sum_j \sum_i a_i c_{ji}\, \phi_j^N$
- Note that the matrix $C = (c_{ji})$ fully encodes the shape correspondence, in the form of the transportation of basis functions between shapes.
- Considering only the coefficient vectors (spectral embeddings) $\{a_i\}$: $T_F(\mathbf{a}) = C\mathbf{a}$
- If $C$ is correct, the point-to-point map can be phrased as $T \approx \Phi_N C \Phi_M^{+}$, i.e. a correspondence matrix constructed from the dot products between the $\phi_i^N$ and the transported $C\phi_j^M$. Note that the row of $\Phi$ at vertex $i$ is the spectral embedding of the indicator function of that vertex on the shape.
- Inversely, $C = \Phi_N^{T}\, T\, \Phi_M$, provided $\Phi^{T}\Phi = I$, which holds for any orthonormal eigenbasis $\Phi$ of a symmetric matrix operator, e.g. the LBO.
- Constraints & regularization
- Function preservation
Such constraints can be constructed from point descriptors, landmark point correspondences, segment correspondences, etc.: $\min_C \|CA - B\|^2$, where the columns of $A$ and $B$ hold corresponding functions expressed in the bases of $M$ and $N$.
- Operator commutativity
Applying a linear operator and then transporting should be equivalent to transporting first and then applying the linear operator. One example is the LBO: $\min_C \|\Lambda_N C - C \Lambda_M\|^2$, where $\Lambda$ is the diagonal matrix of LBO eigenvalues on each shape, i.e. the matrix form of the LBO in the spectral domain.
P.S. For isometries, LBO commutativity should hold exactly.
- Conversion to point-to-point maps
It can be shown that the $i$-th row of $\Phi$ is the spectral embedding of the indicator function of vertex $i$ on the shape. Therefore $C\,\Phi_M^{T}$ stacks the indicator functions of $M$'s vertices transported to $N$ (in spectral coordinates); finding the nearest $\phi_j^N$ among the rows of $\Phi_N$ yields the corresponding points.
- Refinement
For volume-preserving (locally area-preserving) deformations, $C$ should be orthonormal. In this case, aligning $C\,\Phi_M^{T}$ and $\Phi_N^{T}$ becomes an [[BeslPJ1992|ICP]]-like algorithm in the spectral embedding space, and $C$ itself can be iteratively refined (a sketch combining these steps is given right below).
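To make the pipeline above concrete, here is a minimal NumPy/SciPy sketch of the basics, assuming precomputed orthonormal LBO eigenbases `phi_M`, `phi_N` (one eigenfunction per column), eigenvalues `evals_M`, `evals_N`, and corresponding descriptor functions stacked as columns of `desc_M`, `desc_N`. All function names below are placeholders of mine, not code from the course.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_functional_map(phi_M, phi_N, desc_M, desc_N, evals_M, evals_N, mu=1e-3):
    """Least-squares estimate of C from descriptor preservation ||C A - B||^2
    plus LBO commutativity ||Lambda_N C - C Lambda_M||^2 (weighted by mu)."""
    evals_M, evals_N = np.asarray(evals_M), np.asarray(evals_N)
    # Descriptor coefficients in each spectral basis; Phi^T acts as the
    # pseudo-inverse only because the basis is assumed orthonormal
    # (otherwise use the area-weighted inner product).
    A = phi_M.T @ desc_M
    B = phi_N.T @ desc_N
    k = phi_M.shape[1]
    C = np.zeros((k, k))
    # With diagonal Lambda the commutativity term decouples row by row of C.
    for i in range(k):
        penalty = np.diag((evals_N[i] - evals_M) ** 2)
        C[i] = np.linalg.solve(A @ A.T + mu * penalty, A @ B[i])
    return C

def fmap_to_pointwise(C, phi_M, phi_N):
    """Conversion to a point-to-point map: nearest-neighbor search between
    the rows of Phi_N and the transported embeddings Phi_M C^T."""
    _, nn = cKDTree(phi_M @ C.T).query(phi_N)
    return nn                        # nn[j] = vertex of M matched to vertex j of N

def icp_refine(C, phi_M, phi_N, n_iters=10):
    """ICP-style refinement: alternate nearest neighbors with an orthonormal
    (Procrustes) update of C, as suggested by the near-isometry assumption."""
    for _ in range(n_iters):
        nn = fmap_to_pointwise(C, phi_M, phi_N)
        U, _, Vt = np.linalg.svd(phi_N.T @ phi_M[nn])
        C = U @ Vt                   # closest orthonormal matrix
    return C, fmap_to_pointwise(C, phi_M, phi_N)
```

Note that `phi_N.T @ phi_M[nn]` in the Procrustes step is exactly the discrete version of $C = \Phi_N^{T}\, T\, \Phi_M$ from the bullet above, with $T$ the 0/1 correspondence matrix induced by `nn`.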
Part II: Functional Maps in Special Settings
Partial Functional Maps
- Partial Functional Maps
Assume we are given a full shape $M$ and a partial shape $N$ that is approximately isometric to some (unknown) sub-region $M' \subset M$. The authors showed that for each “partial” eigenfunction $\phi_j^N$ of $N$ there exists a corresponding “full” eigenfunction $\phi_i^M$ of $M$ for some $i \ge j$, such that $C_{ij} = \langle T_F(\phi_i^M), \phi_j^N \rangle_{L^2(N)} \approx \pm 1$, and $C_{ij} \approx 0$ otherwise.
Therefore $C$ shows a slanted-diagonal structure, and the slant angle can be estimated beforehand and used as a prior when optimizing for $C$ (see the sketch after this list).
- Deformable clutter
This deals with the case where the two shapes only partly overlap, rather than one being a subset of the other. Here the overlapping parts themselves are variables to be optimized.
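As an illustration of the slanted-diagonal prior, here is a small hedged sketch (a simplification of mine, not the exact weighting used in the partial functional maps paper): the slant is estimated from the eigenvalue interleaving (following the Weyl-law heuristic that eigenvalues grow inversely with surface area), and a soft mask penalizing entries of $C$ away from the slanted diagonal is built.

```python
import numpy as np

def slanted_diagonal_mask(evals_M, evals_N, sigma=0.03):
    """Soft mask W for a C of size k_N x k_M (rows: partial shape N,
    columns: full shape M); a term like ||W * C||^2 then penalizes
    entries far from the estimated slanted diagonal."""
    evals_M, evals_N = np.asarray(evals_M), np.asarray(evals_N)
    k_M, k_N = len(evals_M), len(evals_N)
    # Number of "full" eigenfunctions expected to participate: those whose
    # eigenvalue does not exceed the largest eigenvalue of the partial shape.
    r = int(np.sum(evals_M <= evals_N.max()))
    slope = r / k_N                      # >= 1 when N covers only part of M
    rows = np.arange(k_N)[:, None]       # index j into N's basis
    cols = np.arange(k_M)[None, :]       # index i into M's basis
    dist = np.abs(cols - slope * rows)   # distance to the line i ~ slope * j
    return 1.0 - np.exp(-(sigma * dist) ** 2)   # ~0 on the diagonal, -> 1 away from it
```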
Maps in Shape Collections
- Descriptor and subspace learning
With a collection of shapes, we can evaluate which point descriptors perform best and assign the best weights to them for matching new shapes. This can be taken further to identify stable functional subspaces for shape matching.
- Networks of Maps
Adding pairwise links between shapes in the shape collection forms a network of functional maps. One natural constraint for optimizing the network of maps is cycle consistency, i.e. composing the maps along any cycle should yield the identity map.
- Latent spaces
However, optimizing all pairwise maps is computationally impractical. We therefore construct a latent shape and define $Y_i$ as the map from shape $i$ to the latent shape. Then $C_{ij} = Y_j^{-1} Y_i$.
When all $Y_i^{T} Y_i = I$, any cycle of maps cancels out to $I$ (made explicit in the derivation after this list). Therefore $\forall i,\ Y_i^{T} Y_i = I$ is a strong constraint that facilitates cycle consistency.
- Metrics and Shape Differences
Comparing deformations is a fundamental operation on shape collections. For this purpose, two difference operators acting on functions are introduced, describing the deformation undergone by the metric ... the area-based and conformal shape difference operators $D_A$ and $D_C$.
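To spell out why the latent parameterization enforces cycle consistency (a one-line derivation of mine, consistent with the definitions above):

$$C_{jk}\,C_{ij} \;=\; \bigl(Y_k^{-1} Y_j\bigr)\bigl(Y_j^{-1} Y_i\bigr) \;=\; Y_k^{-1} Y_i \;=\; C_{ik},
\qquad\text{hence}\qquad
C_{ki}\,C_{jk}\,C_{ij} \;=\; Y_i^{-1} Y_i \;=\; I.$$

When the $Y_i$ additionally satisfy $Y_i^{T} Y_i = I$, the (pseudo-)inverses can be replaced by transposes, which is what makes the orthonormality constraint a convenient practical surrogate for exact cycle consistency.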
Functional Vector Fields
The idea of functional maps can be transferred to the domain of tangent vector fields.
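Concretely (the discretization details are in the corresponding chapter; the formulas below are the standard operator view of vector fields, stated here as a reminder), a tangent vector field $V$ on a shape can be encoded by the linear operator it induces on functions, which admits a matrix representation in the same reduced spectral basis as $C$:

$$D_V(f) \;=\; \langle \nabla f,\, V \rangle, \qquad (D_V)_{ij} \;\approx\; \big\langle \phi_i,\ \langle \nabla \phi_j,\, V \rangle \big\rangle_{L^2(M)}.$$

Because $D_V$ is linear, it can be analyzed, transported, and composed with functional maps like any other functional operator.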
Part III: Advanced Computation & Conversion
Computing Functional Maps
In this chapter, more advanced techniques for solving functional maps are discussed.
- Additional optimization variables
- Joint diagonalization
For near-isometric shapes, $C$ should be nearly diagonal. However, this does not hold for non-isometric shapes. Joint diagonalization optimizes for new bases on $M, N$ that make $C$ (approximately) diagonal.
- Manifold optimization
One example is an ADMM-style scheme adapted to matrix manifolds (MADMM).
The main idea of manifold optimization is to treat the objective as a function defined on the matrix manifold, and to perform descent on the manifold itself rather than in the ambient Euclidean space (see the sketch after this list).
- Unknown input ordering
The ordering of (i.e. the correspondence between) the input descriptor functions may be unknown or unstable. This ordering can be treated as an additional variable during functional map optimization.
- Coupled functional maps
Simultaneously solve for the functional maps $C_1$ from $M$ to $N$ and $C_2$ from $N$ to $M$, while imposing the constraint $C_1 C_2 = I$.
- Correspondence by matrix completion
In the classical matrix completion problem, one is given a sparse set of observations $a_{ij}$, $(i,j) \in \Omega$, of a matrix $A$ and tries to find the lowest-rank matrix whose elements equal the given ones on the subset of indices $\Omega$ ... Matrix completion problems are widely used in recommender systems.
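As a generic illustration of the manifold optimization idea mentioned above (a minimal sketch of mine, not the specific algorithm from the course), here is projected/Riemannian gradient descent for $C$ constrained to the orthogonal group, applied to the descriptor-preservation plus commutativity objective from Part I; `A` and `B` are the descriptor coefficient matrices.

```python
import numpy as np

def polar_retraction(M):
    """Closest orthogonal matrix in Frobenius norm (SVD/polar retraction)."""
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

def orthogonal_fmap_descent(A, B, evals_M, evals_N, mu=1e-2, step=1e-2, n_iters=200):
    """Minimize ||C A - B||^2 + mu ||Lambda_N C - C Lambda_M||^2 over
    orthogonal C by descending on the manifold O(k) itself."""
    Lam_M, Lam_N = np.diag(evals_M), np.diag(evals_N)
    C = polar_retraction(B @ A.T)                 # Procrustes-style initialization
    for _ in range(n_iters):
        R = Lam_N @ C - C @ Lam_M
        grad = 2.0 * (C @ A - B) @ A.T + 2.0 * mu * (Lam_N @ R - R @ Lam_M)
        # Project the Euclidean gradient onto the tangent space of O(k) at C.
        sym = 0.5 * (C.T @ grad + grad.T @ C)
        rgrad = grad - C @ sym
        C = polar_retraction(C - step * rgrad)    # descent step followed by retraction
    return C
```

In practice a line search (or a small fixed step) keeps the iteration stable; dedicated toolboxes such as Manopt handle the tangent-space projection and retraction generically for many matrix manifolds.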
Map Conversion
- Optimize the (soft)assignment matrix
In [[#Part I Functional Maps Basics]], we discussed how to convert a functional map into a point-to-point correspondence by nearest-neighbor search on the spectral embeddings. This can be seen as performing point cloud registration in the embedding space. Here a (soft) assignment matrix is introduced to represent the registration within the optimization framework (see the sketch at the end of this section).
- Nearest Neighbors
Equivalent to rigid ICP registration in embedding space.
- Orthogonal refinement
Equivalent to orthogonal Procrustes in embedding space.
- Regularized Map Recovery
Equivalent to nonrigid CPD registration in embedding space.
- Product Manifold Filter for Bijective Map Recovery
The resulting pointwise map can be seen as a “denoised” version of the input, in analogy with classical KDE-based denoising of images.
- Linear Assignment Problem
- Continuous Maps via Vector Field Flows
The target point-to-point map is represented as the composition of an arbitrary continuous map between the two surfaces and the flow associated with an unknown vector field on one of them.
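To illustrate the (soft) assignment view of map conversion from the first bullet of this section, here is a hedged sketch (a generic instantiation of mine, not the exact scheme from the course): similarities between the spectral embeddings are normalized into an approximately doubly-stochastic soft assignment with Sinkhorn iterations.

```python
import numpy as np

def soft_assignment(phi_M, phi_N, C, temperature=0.1, n_sinkhorn=20):
    """Soft assignment P with P[j, i] ~ probability that vertex j of N
    corresponds to vertex i of M, built from the spectral embeddings."""
    emb_M = phi_M @ C.T      # embeddings of M transported into N's spectral coordinates
    emb_N = phi_N
    # Pairwise squared distances between the two sets of embeddings.
    d2 = (emb_N ** 2).sum(1)[:, None] + (emb_M ** 2).sum(1)[None, :] \
         - 2.0 * emb_N @ emb_M.T
    P = np.exp(-d2 / temperature)
    for _ in range(n_sinkhorn):          # alternate row/column normalization
        P /= P.sum(axis=1, keepdims=True)
        P /= P.sum(axis=0, keepdims=True)
    return P
```

A hard map can be read off with `P.argmax(axis=1)`; as `temperature` goes to zero this degenerates to the nearest-neighbor rule, so the soft assignment can be seen as a relaxation of the conversion discussed in Part I.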