If you explain more about the large matrices and how they are generated, there is a fair chance a way to speed up the process will emerge.
For example, with the setup in the question, the convolution of the v-scaled uniform distributions could be computed, and the 3 vectors of length 10 could then be sampled directly, without ever producing the original matrices. That could very well be much faster.
This is just an illustration of how extra information about the matrices and the vector can help.
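To sketch the idea (assuming, as in the question, that every matrix entry is an independent rand() draw): each entry of t_i * v is one sample from the convolution of the v-scaled uniforms, so the entries can be drawn directly. The sample_entry helper here is made up just for illustration:

julia> sample_entry(v) = sum(rand(length(v)) .* v)  # one draw from the convolution of v-scaled uniforms
julia> res = [[sample_entry(v) for _ in 1:length(v)] for _ in 1:3];  # 3 length-10 vectors, no 10×10 matrices built

As written this still draws the same number of uniforms; the real payoff would come from working out (or tabulating) the convolution distribution once and sampling from it directly, which is only possible because we know how the matrices are generated.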
You figured out how to do this using a comprehension. The broadcast equivalent is t .* Ref(v); wrapping v in Ref keeps broadcasting from iterating over its elements. This should have very similar performance to your comprehension.
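For concreteness, here is a quick check that the two spellings agree, using throwaway data in place of your real t and v:

julia> t = [rand(10, 10) for _ in 1:3];  # stand-in for your vector of matrices
julia> v = rand(10);
julia> [t_i * v for t_i in t] == t .* Ref(v)  # Ref shields v from broadcasting
true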
To use a single linear algebra operation, try
reshape(reduce(vcat, t) * v, :, length(t))
You may want to wrap this in eachcol(...) if you want to access the results as a collection of vectors rather than a 10x3 (or whatever your real size is) matrix. That said, I wouldn’t expect this to be much (if any) faster than the comprehension or the broadcast: reduce(vcat, t) does a little extra work, and the consolidated linear algebra call may not save enough to make up the difference.
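Continuing with the throwaway data above, this is what the consolidated call produces:

julia> M = reshape(reduce(vcat, t) * v, :, length(t));  # column i is t[i] * v
julia> M ≈ reduce(hcat, t .* Ref(v))  # same results as the broadcast version
true
julia> cols = collect(eachcol(M));  # access the columns as a collection of vectors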
If 10x10 is truly the size of the matrices you’re dealing with, you might get a significant performance benefit from SMatrix and SVector in the StaticArrays package. With those, there’s no benefit (and likely a detriment) to consolidating the operations; just use the comprehension or the broadcast. E.g.,
julia> using StaticArrays
julia> t = [@SMatrix rand(10, 10) for _ in 1:3];  # stack-allocated 10×10 matrices
julia> v = @SVector rand(10);
julia> res = [t_i * v for t_i in t];  # each product is a length-10 SVector
Don’t try to use StaticArrays for arrays with more than a few hundred elements. They’re not faster than Array at those sizes but become very tough on the compiler and can choke your system.