Matlab was originally purely interpreted; I believe Python was too, way back. By now Matlab is JIT-compiled to native code. I like the term just-ahead-of-time (JAOT) compiled for Julia, where compilation happens per method. Python is by now also compiled (per module/file, when it's first imported), but not to machine code, rather to its own bytecode. Unlike Java, where you distribute compiled (Java) bytecode, I believe that's not commonly done for Python.
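You can actually inspect that bytecode with the standard `dis` module; here's a minimal sketch (the function is just a made-up example):

```python
import dis

def add(a, b):
    return a + b

# CPython has already compiled the function body to bytecode;
# dis only disassembles and prints it.
dis.dis(add)
```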
Python is much slower than Julia for two reasons. First, even though it is strictly speaking compiled (it's just not thought of that way, since by default you don't distribute binary executables or bytecode), the bytecode is still interpreted. That's not the case with Matlab, since it compiles all the way to native code. The other reason Python is slow is that it's very dynamic, which makes it hard to optimize for, and not much optimization is even attempted. Matlab is, I believe, also too dynamic (though perhaps not as much), as were all the dynamic languages before Julia. You can, however, fully compile a restricted subset of Python to machine code, e.g. with Numba.
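For example, a minimal Numba sketch (assuming the `numba` package is installed; the function is just an illustration):

```python
from numba import njit

@njit  # compile this restricted-Python function to machine code via LLVM
def sum_of_squares(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

sum_of_squares(10)                # first call triggers compilation
print(sum_of_squares(1_000_000))  # later calls run the native code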
Being compiled to bytecode that is then interpreted already gives some speedup over interpreting the source directly.
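As a rough illustration of that point (just a sketch using the standard library; exact numbers will of course vary): evaluating an already-compiled bytecode object skips the per-call parse-and-compile step.

```python
import timeit

expr = "sum(i * i for i in range(1000))"
code = compile(expr, "<expr>", "eval")  # compile the source to a bytecode object once

# eval(expr) re-compiles the string on every call;
# eval(code) only interprets the pre-compiled bytecode.
print("from source:  ", timeit.timeit(lambda: eval(expr), number=2000))
print("from bytecode:", timeit.timeit(lambda: eval(code), number=2000))
```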
Compiling to machine code, JIT or otherwise, will be much faster still, but there are also levels of compilation. Compiling to naive machine code is already much better than not compiling at all. On top of that you can compile to optimized code, i.e. for Julia/C/C++: -O0 (which for Julia, unlike most languages, strictly speaking doesn't mean no optimization), -O1, -O2 (Julia's default), and -O3.
Even if not fully interpreted, it could be like early Java (which compiled to bytecode before it introduced a JIT): still interpreting, but now interpreting bytecode, same as Python currently. It could also mean JIT-compiled to native code, like Julia; I'm not sure.
The terms interpreted and compiled are by now a bit blurred by JIT and JAOT. For Java, the JIT applies to the bytecode. Java can also optionally be compiled to binary executables.