When does exp(A) == exp.(A)?

Is it ever the case that exp(A) == exp.(A) apart from when A is 1x1?


Diagonal :wink:


Umm, no: exp(::Diagonal) is diagonal, but broadcasting fills the off-diagonal entries with exp(0) = 1:

julia> exp.(I(2))
2×2 Matrix{Float64}:
 2.71828  1.0
 1.0      2.71828

julia> exp(I(2))
2×2 Diagonal{Float64, Vector{Float64}}:
 2.71828   ⋅ 
  ⋅       2.71828

… maybe I shouldn’t post before coffee …


You can get arbitrarily close by just making A a matrix of larger and larger negative numbers.


You can get arbitrarily close by just making A a matrix of larger and larger negative numbers.

You need to be more precise about how you approach negative infinity. With equal entries, for example, this is nowhere on the road to zeros:

julia> exp([-10000 -10000;-10000 -10000])
2×2 Matrix{Float64}:
  0.5  -0.5
 -0.5   0.5
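A quick sketch of why the equal-entry direction stalls: with J = ones(2,2) we have J² = 2J, so the matrix-exponential series collapses to a closed form whose limit is not the zero matrix (the `closed_form` helper here is illustrative, not from the thread):

```julia
using LinearAlgebra

# With J = ones(2,2) we have J^2 == 2J, so the power series collapses to
#   exp(-t*J) = I + (exp(-2t) - 1)/2 * J,
# which tends to [0.5 -0.5; -0.5 0.5] as t -> Inf, not to zeros.
closed_form(t) = I + (exp(-2t) - 1)/2 * ones(2, 2)

t = 10_000.0
exp(-t * ones(2, 2)) ≈ closed_form(t)     # true
closed_form(1e9) ≈ [0.5 -0.5; -0.5 0.5]   # true: the limit is not zero
```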

Good point.

julia> using Optim, LinearAlgebra  # LinearAlgebra provides norm

julia> sol = optimize(ones(2,2), BFGS()) do A
           norm(exp(A) - exp.(A))
       end
 * Status: success

 * Candidate solution
    Final objective value:     0.000000e+00

 * Found with
    Algorithm:     BFGS

 * Convergence measures
    |x - x'|               = 5.47e+04 ≰ 0.0e+00
    |x - x'|/|x'|          = 1.00e+00 ≰ 0.0e+00
    |f(x) - f(x')|         = 2.55e-02 ≰ 0.0e+00
    |f(x) - f(x')|/|f(x')| = Inf ≰ 0.0e+00
    |g(x)|                 = 0.00e+00 ≤ 1.0e-08

 * Work counters
    Seconds run:   1  (vs limit Inf)
    Iterations:    5
    f(x) calls:    23
    ∇f(x) calls:   23


julia> sol.minimizer
2×2 Matrix{Float64}:
 -54762.4  -53084.3
 -53084.3  -54762.4

julia> exp.(sol.minimizer)
2×2 Matrix{Float64}:
 0.0  0.0
 0.0  0.0

julia> exp(sol.minimizer)
2×2 Matrix{Float64}:
  0.0  -0.0
 -0.0   0.0
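Worth noting: this is an equality only in Float64. Entries around −5.5e4 (and correspondingly huge negative eigenvalues) drive both exp.(A) and exp(A) to exact (signed) zeros by underflow, so the objective is exactly 0 numerically but not mathematically. A small sketch of where Float64 exp underflows:

```julia
# exp underflows to 0.0 once the argument is below roughly -745.1,
# since the smallest positive subnormal Float64 is about 5e-324.
exp(-54762.4)   # 0.0 — entries at the minimizer's scale underflow
exp(-745.0)     # tiny but nonzero, just above the underflow threshold
exp(-746.0)     # 0.0
```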

Stefan Güttel tweeted me a reply which had an example.


Is there a way to get logarithms of negative numbers without having to write + 0im all the time:

A = [log(-4/3+0im) log(-2+0im)   log(-2+0im);
     log(-2+0im)   log(-4/3+0im) log(-2+0im);
     log(-2+0im)   log(-2+0im)   log(-4/3+0im)]

julia> norm(exp(A) - exp.(A)) < 1e-12
true

How about

julia> A = log.(Complex[-4//3 -2    -2
                        -2    -4//3 -2
                        -2    -2    -4//3])
3×3 Matrix{ComplexF64}:
 0.287682+3.14159im  0.693147+3.14159im  0.693147+3.14159im
 0.693147+3.14159im  0.287682+3.14159im  0.693147+3.14159im
 0.693147+3.14159im  0.693147+3.14159im  0.287682+3.14159im

julia> exp(A) ≈ exp.(A)
true

?


Great, thanks :slight_smile:

NB: Is there any point in this case in defining rational numbers (4//3), given that the logarithm and exponentiation convert them to floats anyway?

Not really, no.


My first thought was also Diagonal, and I also haven’t had my coffee yet :stuck_out_tongue:


With a matrix A that is strictly positive definite, e^{tA} \rightarrow 0 as t \rightarrow -\infty. So, as you see, for t < 0 the matrix tA is negative definite with negative entries :slight_smile:

Nilpotent matrices should provide plenty of examples; the entries would solve equations of the messy type e^{a_{ij}} = \mathrm{polynomial}(a_{11}, a_{12}, \dots, a_{nn}).

Not every positive definite matrix has all positive entries though.

Examples of what?

Which is why I specified a negative definite (not semi-definite) matrix with negative entries.

Examples of matrices satisfying exp.(A) == exp(A).

It seems very unlikely that nilpotent matrices help, since the zero matrix is not an example.

Theorem 1 of the following paper gave a complete characterization of the solutions.

ADDED: But the problem in the paper is not really what we are discussing here: notice that the zeroth-order term is missing in the following definitions from the paper.
p(A) = c_{m} A^{m} + c_{m-1} A^{m-1} + \cdots + c_{1} A
p^{H}(A) = c_{m} A^{(m)} + c_{m-1} A^{(m-1)} + \cdots + c_{1} A

:disappointed_relieved: It would be nice if someone could provide a large family of solutions to our problem here.


At the moment there is only one example, and it is complex… I'm curious whether there's a real example.


If the matrix A has an invertible eigenvector matrix M with corresponding eigenvalues \lambda_1,\ldots,\lambda_n, then

\exp(A)\cdot M = M\cdot \mathrm{diag}(e^{\lambda_1},\ldots,e^{\lambda_n})

Here, \mathrm{diag}(e^{\lambda_1},\ldots,e^{\lambda_n}) \neq \exp.(\Lambda), where \Lambda = \mathrm{diag}(\lambda_1,\ldots,\lambda_n): the matrix \mathrm{diag}(e^{\lambda_1},\ldots,e^{\lambda_n}) has zeros outside of the diagonal, while \exp.(\Lambda) has ones outside of the diagonal.
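A sketch of this route in code, assuming A is diagonalizable (the matrix here is just an illustrative choice): the broadcast exp only ever touches the eigenvalues, never the entries of A.

```julia
using LinearAlgebra

A = [1.0 2.0; 3.0 4.0]             # any diagonalizable matrix
λ, M = eigen(A)

# exp(A) reconstructed from the eigendecomposition: the elementwise exp
# is applied to the eigenvalues, not to the entries of A.
expA = M * Diagonal(exp.(λ)) / M   # right-division instead of * inv(M)

expA ≈ exp(A)    # true
expA ≈ exp.(A)   # false: the broadcast acts on the entries instead
```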

More generally,

\exp(A) = \sum_{i=0}^\infty\frac{1}{i!}A^i
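For illustration, the series can be evaluated by naive truncation (a sketch only; Julia's exp actually uses scaling and squaring with a Padé approximant, not this series):

```julia
using LinearAlgebra

# Naive truncation of exp(A) = Σ_{i=0}^∞ A^i / i!  (illustration only).
function exp_series(A::AbstractMatrix, nterms::Integer = 30)
    S    = Matrix{float(eltype(A))}(I, size(A)...)  # i = 0 term
    term = copy(S)
    for i in 1:nterms
        term = term * A / i   # A^i / i! built incrementally
        S += term
    end
    return S
end

A = [0.0 1.0; -1.0 0.0]   # rotation generator: exp(A) is a rotation matrix
exp_series(A) ≈ exp(A)    # true
exp(A) ≈ exp.(A)          # false, as expected for a generic matrix
```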