If you use Spark as intended, i.e. to manage large distributed datasets through the SQL / DataFrame API, the difference in performance will be negligible, since the query is optimized and executed by Spark's JVM engine regardless of the front-end language.
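For context, here is a minimal sketch of that intended usage pattern via the DataFrame API (PySpark here; the dataset path and column names are hypothetical):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start (or reuse) a Spark session
spark = SparkSession.builder.appName("example").getOrCreate()

# Load a large dataset; the path and schema are hypothetical
df = spark.read.parquet("s3://bucket/events/")

# Declarative DataFrame transformations compile to the same optimized
# execution plan as the equivalent SQL, so either style performs the same
daily_counts = (
    df.filter(F.col("status") == "ok")
      .groupBy(F.to_date("timestamp").alias("day"))
      .count()
)

daily_counts.show()
```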
If you use Spark for something different, e.g. general-purpose distributed computing in your main language, you are most likely doing the wrong thing.
Spark was designed for processing large amounts of data, and recent versions place a heavy emphasis on SQL-like features. If you want to run distributed computations in Julia, take a look at Kubernetes instead.