I would like to chime in with my experience. For my applications it turned out that Julia’s dense (non-sparse) algorithms are so good (especially with tricks like @view, mul!(), ldiv!(), etc.) that I couldn’t beat them with any sparse implementation.
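To illustrate the kind of tricks I mean, here is a minimal sketch (matrix sizes are made up): mul! writes the product into a preallocated destination instead of allocating a new matrix, and views let you work on sub-blocks without copying.

```julia
using LinearAlgebra

# Illustrative sizes, not from the original problem.
A = rand(200, 200)
B = rand(200, 200)
C = similar(A)

# In-place multiply: writes A*B into the preallocated C, no new allocation.
mul!(C, A, B)

# Operate on a sub-block without copying, via views.
Asub = @view A[1:100, 1:100]
mul!(view(C, 1:100, 1:100), Asub, view(B, 1:100, 1:100))
```

In a hot loop, reusing C like this avoids one allocation per multiplication, which is often where the dense path wins.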
Remember that apart from the sparse calculations themselves, your code probably has other things going on (potentially a lot of them), and converting to and from the sparse format for those calculations also takes time.
I gave it a quick look and it seems that multiplying a dense matrix by a Hermitian-wrapped sparse matrix falls back to a generic code path that is not aware of sparsity.
Here’s a quick workaround, which pays off if you reuse the matrix C more than a couple of times: convert back to a plain SparseMatrixCSC, i.e. apply the Hermitian wrapper but then materialize the result in the native sparse format, whose multiplication kernels are better optimized than “Hermitian-of-sparse”:
Chs = sparse(Hermitian(C))
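A self-contained sketch of the workaround (size and density are made up for illustration):

```julia
using LinearAlgebra, SparseArrays

# Illustrative setup: build a Hermitian sparse matrix from random data.
n = 1000
A = sprand(ComplexF64, n, n, 0.01)
C = A + A'                     # Hermitian by construction

D = rand(ComplexF64, 50, n)    # some dense matrix to multiply with

Ch  = Hermitian(C)             # wrapped: hits the generic, sparsity-unaware path
Chs = sparse(Ch)               # materialized back to SparseMatrixCSC

# Both products agree, but the second one uses the sparse kernels:
D * Ch
D * Chs
```

The one-time cost of sparse(Hermitian(C)) is amortized as soon as you multiply with Chs a few times.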
The Hermitian wrapper shouldn’t even be necessary if you know your input is Hermitian anyway. It is mostly useful when a specific algorithm has to be selected, e.g. for a factorization; for a plain matmul like this it won’t make much of a difference.
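As an aside, here’s the kind of situation where the wrapper does matter: it dispatches factorizations and eigensolvers to the specialized Hermitian routines (the example matrix below is made up):

```julia
using LinearAlgebra

A = rand(5, 5)
H = A'A + I              # Hermitian (here: real symmetric) positive definite

# The wrapper selects the specialized symmetric/Hermitian eigensolver,
# which guarantees real eigenvalues and orthogonal eigenvectors.
F = eigen(Hermitian(H))
```

Without the wrapper, eigen(H) has to check the matrix structure at runtime and may take the general (complex-capable) path.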