For motivation please look at the following suite of benchmarks:

```
julia> using SparseArrays, BenchmarkTools

julia> m, n, nz = 10000, 10000, 1000000
(10000, 10000, 1000000)
julia> A = spzeros(m, n)
10000×10000 SparseMatrixCSC{Float64,Int64} with 0 stored entries
julia> @btime A' == A'
2.415 s (4 allocations: 64 bytes)
true
julia> A = sprand(ComplexF64, m, n, nz / m / n);
julia> @btime A' == A'
7.932 s (4 allocations: 64 bytes)
true
julia> x1 = A';
julia> x2 = (copy(A)');
julia> @btime x1 == x2
12.005 s (2 allocations: 32 bytes)
true
julia> @btime x1.parent == x2.parent
3.141 ms (0 allocations: 0 bytes)
true
```

Maybe you share my feeling that only the last figure is acceptable (0.003 seconds vs. 12 seconds).
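The fast path in the last benchmark could in principle be made automatic by dispatching on pairs of wrappers and comparing the sparse parents directly. A minimal sketch of the idea (`wrapped_eq` is a hypothetical helper, not an existing Base method; newer Julia versions may already specialize some of these cases):

```julia
using SparseArrays, LinearAlgebra

# Hypothetical specialization: when both sides are Adjoint wrappers,
# A' == B' holds exactly when A == B, so we can compare the (sparse)
# parents directly and skip the generic element-by-element fallback.
wrapped_eq(x::Adjoint, y::Adjoint) = x.parent == y.parent
wrapped_eq(x::AbstractMatrix, y::AbstractMatrix) = x == y  # generic fallback

A = sprand(ComplexF64, 100, 100, 0.01)
B = copy(A)

@assert wrapped_eq(A', B')                # fast path via the sparse parents
@assert wrapped_eq(A', B') == (A' == B')  # agrees with the generic result
```

The same pattern would extend to `Transpose` and, with a materializing `copy`, to mixed wrapper/plain comparisons.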

Since `A' === Adjoint(A)`, there appears to be an issue with `Adjoint` and `==`. I think, however, that the issue affects all of the wrappers that were introduced to improve performance, namely `Adjoint`, `Transpose`, `UpperTriangular`, `LowerTriangular`, `Symmetric`, and `Hermitian`.

And `==` is not the only problematic operation.

```
julia> @btime x = abs.(UpperTriangular(spzeros(m, n))');
776.993 ms (8 allocations: 763.02 MiB)
```

is an example of a combination of those wrappers that does not take advantage of the sparsity of the underlying matrix: the 763 MiB allocation corresponds to a dense 10000×10000 `Float64` result.
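Until the wrappers get sparsity-aware broadcasting, one manual workaround is to apply the operation to the sparse parent and rebuild the structure afterwards. A sketch under that assumption (the `tril` rewrite below is specific to functions like `abs` that map zero to zero):

```julia
using SparseArrays, LinearAlgebra

m, n = 1000, 1000
S = sprand(m, n, 0.001)
W = UpperTriangular(S)'   # lazily wrapped sparse matrix

# Broadcasting over the wrapper falls back to a dense result ...
dense_way = abs.(W)

# ... while materializing the adjoint as sparse, broadcasting abs over it,
# and keeping the lower triangle (the adjoint of an upper triangle) stays sparse.
sparse_way = tril(abs.(copy(S')))

@assert sparse_way == dense_way
@assert sparse_way isa SparseMatrixCSC   # sparsity preserved
```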

My question is: are there efforts being undertaken to improve the performance of lazily wrapped sparse matrices in general, and of combined wrappers in particular? I would like to start a project on this, but first want to see whether somebody else is already working in this area.