I have 3 coordinates, moving_points. I know where each of these is really supposed to be, fixed_points. I want to transform them (and many other coordinates) to these known locations (involving translation and rotation, but also a bit of stretching).

How can I create a transformation so that when it’s applied to these 3 coordinates they become a set of known other coordinates?

MWE:

fixed_points = [[0,0], [0,50], [130,50]]
moving_points = [rand(2) for _ in 1:3]
trans = # Probably something involving CoordinateTransformations, the fixed_points, and moving_points
@assert fixed_points ≈ trans.(moving_points)

fixed_points = [[0,0], [0,50], [130,50]]
X = hcat(vcat(fixed_points...), ones(2*3))
moving_points = [rand(2) for _ in 1:3]
y = vcat(moving_points...)
p = X\y

which wouldn’t work for any of the points. Or this:

Ab = [hcat(m, [1,1])\f for (m, f) in zip(moving_points, fixed_points)]

but then, how do I apply that…? As you can see, I’m a wee bit lost.

I was under the impression that trans would be a 3×3 transformation matrix…
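That intuition is essentially right: in homogeneous coordinates a 2-D affine map is a 3×3 matrix acting on `[x, y, 1]`. A minimal sketch (the `A` and `b` values here are made up purely for illustration):

```julia
# Homogeneous-coordinates view: a 2-D affine map as a 3×3 matrix
# acting on [x, y, 1]. A and b are arbitrary illustrative values.
A = [2.0 0.0; 0.0 1.0]   # linear part (here: stretch x by 2)
b = [1.0, 3.0]           # translation
T = [A b; 0 0 1]         # 3×3 homogeneous transformation matrix

p = [5.0, 7.0]
q_hom = T * vcat(p, 1.0)          # apply in homogeneous coordinates
@assert q_hom[1:2] ≈ A * p .+ b   # same result as the affine form
```

The least-squares solutions below recover exactly the `A` and `b` blocks of such a matrix.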

This one works (of course, it can be written in a more efficient way; it's only a proof of concept):

X = permutedims(hcat(map(x -> [x..., 1.0], moving_points)...))
Y = hcat(map(x -> x[1], fixed_points), map(x -> x[2], fixed_points))
c = X \ Y
c = c'
A = c[1:2, 1:2]
b = c[:, 3]

And the transformation can be applied like this:

mov_points = hcat(moving_points...)
A * mov_points .+ b

using CoordinateTransformations
function CoordinateTransformations.AffineMap(ps::Vector{Pair{T,R}}) where {T, R}
    X = vcat(hcat(last.(ps)...), ones(1, length(ps)))'
    Y = hcat(first.(ps)...)'
    c = (X \ Y)'
    A = c[:, 1:2]
    b = c[:, 3]
    AffineMap(A, b)
end
fixed_points = [[0,0], [0,50], [130,50]]
moving_points = [rand(2) for _ in 1:3]
f = AffineMap(Pair.(fixed_points, moving_points))
@assert fixed_points ≈ f.(moving_points)

So while this works great for three coordinates, I’m trying to make it work for only two coordinates. But the problem is that this implementation “flattens” all the coordinates I apply it to:

ps = [[0,0] => [1,1], [10,0] => [9,4]] # using just two coordinates
f = AffineMap(ps)
@assert first.(ps) ≈ f.(last.(ps)) # it "works"
n = 1000
for (x, y) in zip(rand(-20:20, n), rand(-20:20, n))
    _, y2 = f([x, y]) # but no matter what the moving point is
    @assert y2 == 0   # its `y` value is always zero
end
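The mechanism is visible directly in the least-squares system: with two pairs there are only two equations for the three unknowns of each output coordinate, and since both target y-values are zero, the minimum-effort solution for the y-map is identically zero. A sketch with the same two pairs:

```julia
using LinearAlgebra

# With two correspondences the design matrix has 2 rows but 3 unknowns
# per output coordinate, so the system is underdetermined and `\` picks
# one of infinitely many exact solutions.
moving = [[1.0, 1.0], [9.0, 4.0]]
fixed  = [[0.0, 0.0], [10.0, 0.0]]          # both target y-values are zero

X = permutedims(hcat(vcat.(moving, 1.0)...)) # 2×3, rows [x y 1]
Y = permutedims(hcat(fixed...))              # 2×2
c = X \ Y                                    # 3×2 solution

@assert rank(X) == 2                 # rank 2 < 3 unknowns: underdetermined
@assert all(abs.(c[:, 2]) .< 1e-12)  # y-column of the map is all zeros
```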

I’m sure this makes sense because there is no information about the y axis in those two coordinates (both have zero y-values). But I’m now wondering how to make this work. So the transformation is a translation, followed by a rotation, and finally a stretching of just one of the axes (in this case, the x axis).

With two coordinates, you need an additional constraint, for example that the map is only rotation and scaling, but no shearing. Scaling can be computed from the change in the distance of the points, rotation from change in the angle of the line connecting the two points.
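That recipe can be written down directly; a minimal sketch of the rotation-plus-uniform-scaling case, using only Base functions (the two moving points are made-up illustrative values):

```julia
# Fit a similarity transform (uniform scale + rotation + translation)
# from exactly two point correspondences. No shearing is possible here,
# so the map is fully determined by the two pairs.
fixed_points  = [[1.0, 1.0], [9.0, 4.0]]
moving_points = [[0.0, 0.0], [1.0, 0.0]]   # illustrative values

z1 = moving_points[2] .- moving_points[1]
z2 = fixed_points[2]  .- fixed_points[1]

s = hypot(z2...) / hypot(z1...)               # scale: ratio of segment lengths
θ = atan(z2[2], z2[1]) - atan(z1[2], z1[1])   # rotation: change in segment angle

R = s * [cos(θ) -sin(θ); sin(θ) cos(θ)]       # scaled rotation matrix
b = fixed_points[1] .- R * moving_points[1]   # translation fixing the first pair

@assert all(R * m .+ b ≈ f for (m, f) in zip(moving_points, fixed_points))
```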

Wow, I have never worked with CoordinateTransformations before, but it is really very powerful.

The solution to this problem is a sequence of transformations. We should start by moving the points to the origin, then rotate them to align along the x axis, then scale, then rotate backward so the points land on the target line, and finally shift them to fixed_points (it’s not very clear from this description; try to read the code and draw the necessary transformations). And the great thing: these are all affine transformations, and CoordinateTransformations internally composes them into a single AffineMap, so no linear algebra is needed, it’s pure geometry!

using CoordinateTransformations
using LinearAlgebra # for norm
using Rotations
fixed_points = [[1, 1], [9, 4]]
moving_points = [rand(2) for _ in 1:2]
z1 = moving_points[2] .- moving_points[1]
z2 = fixed_points[2] .- fixed_points[1]
α = norm(z2)/norm(z1)
θ1 = PolarFromCartesian()(z1).θ
θ2 = PolarFromCartesian()(z2).θ
t1 = Translation(-moving_points[1]...)
lm1 = LinearMap(RotZ(-θ1)[1:2, 1:2])
lm2 = LinearMap([α 0; 0 1])
lm3 = LinearMap(RotZ(θ2)[1:2, 1:2])
t2 = Translation(fixed_points[1]...)
am = t2 ∘ lm3 ∘ lm2 ∘ lm1 ∘ t1 # This is a single affine map!
@assert fixed_points ≈ am.(moving_points)

Thanks a lot!!! I’ve managed to avoid this kind of transformation altogether.

But! the bigger problem of creating a transformation from a set of fixed and moving coordinates is still open (see issues 30 and 33 in CoordinateTransformations.jl). MATLAB solves it nicely; see “More About” and then “Transformation Types” here. Perhaps an additional package that does exactly that would be beneficial. It would be extra nice if said package also handled more than the minimum number of coordinate pairs, with the resulting transformation object being an optimum over all of the input pairs.

I think you were suggested a perfectly fine solution in that issue:

For exactly determined linear problems (like the original one here), just use the linear solver. For overdetermined linear problems you can use eg least squares (again \), and for nonlinear transformations just minimize a cost function.
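Indeed, the overdetermined case needs no new machinery: with more than three pairs the same `\` setup automatically returns the least-squares optimum. A sketch with synthetic data (the true map `A_true`, `b_true` and the noise level are made up for illustration):

```julia
using LinearAlgebra

# Overdetermined affine fit: n > 3 noisy correspondences, solved by
# least squares with the same backslash operator as the exact case.
A_true, b_true = [1.2 0.3; -0.1 0.9], [5.0, -2.0]
n = 50
moving = [rand(2) .* 100 for _ in 1:n]
fixed  = [A_true * m .+ b_true .+ 0.01 .* randn(2) for m in moving]

X = permutedims(hcat((vcat(m, 1.0) for m in moving)...))  # n×3, rows [x y 1]
Y = permutedims(hcat(fixed...))                           # n×2
c = (X \ Y)'                                              # least-squares fit
A, b = c[:, 1:2], c[:, 3]

@assert norm(A - A_true) < 0.1 && norm(b - b_true) < 0.1
```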

Does this orthogonal Procrustes problem include all the possible transformations, namely translation, scale, and shear/stretching, or “just” rotation? Because otherwise that blog post would basically solve camera calibration for me…

It’s no longer an orthogonal transformation if it includes shearing?

In general, if the correspondence between the points in the two sets to be aligned is known, the problem is relatively easy for linear operators. If the correspondence is not known it becomes a bit messier.
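For the known-correspondence, rotation-only case, the fit has a closed form via the SVD (the Kabsch algorithm: center both clouds, then take the SVD of the cross-covariance). A sketch with synthetic data (the rotation angle and translation are made-up test values):

```julia
using LinearAlgebra

# Orthogonal Procrustes / Kabsch: find the rotation R (no scale, no shear)
# best aligning two point sets with known correspondence.
moving = [randn(2) for _ in 1:10]
θ = 0.7
R_true = [cos(θ) -sin(θ); sin(θ) cos(θ)]
t_true = [3.0, -1.0]
fixed = [R_true * m .+ t_true for m in moving]

# Center both clouds, then SVD of the 2×2 cross-covariance.
M = hcat(moving...); F = hcat(fixed...)
Mc = M .- sum(M, dims=2) ./ size(M, 2)
Fc = F .- sum(F, dims=2) ./ size(F, 2)
U, _, V = svd(Fc * Mc')
d = sign(det(U * V'))               # guard against reflections
R = U * Diagonal([1.0, d]) * V'
t = vec(sum(F, dims=2) .- R * sum(M, dims=2)) ./ size(M, 2)

@assert R ≈ R_true && t ≈ t_true
```

Unlike the least-squares affine fit, this constrains the linear part to be a pure rotation, which is the “orthogonal” in orthogonal Procrustes.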