Since I am not a computer science guy, judging by the name I guess it should be faster than the JIT compiler.
That’s a pretty bad idea.
Julia is unlike most other JIT-compiled languages. All AOT compilation can get us is the removal of the JIT overhead, which occurs on the first call of a function.
Julia is basically statically compiled at runtime, with the same tools Clang uses to compile C++ ahead of time (namely LLVM) - so after the first call, the performance of that function is indistinguishable from that of an AOT-compiled function.
As a demonstration in the Julia REPL:
julia> 1+1 # this is at runtime!
2
julia> test(a) = sin(a^19)
test (generic function with 1 method)
julia> @time test(22.0) # first call: compiles the function for Float64 at runtime - takes quite a bit of time and memory
0.004459 seconds (1.55 k allocations: 81.745 KiB)
0.09520449192956647
julia> @time test(22.0) # now this calls the basically AOT compiled function
0.000005 seconds (5 allocations: 176 bytes)
0.09520449192956647
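If you want to convince yourself that what runs on the second call really is native machine code produced via LLVM, you can inspect it in the same session (this is just an inspection step, not part of the original timing demo; the output depends on your CPU, so it is omitted here):
julia> @code_llvm test(22.0) # prints the LLVM IR that Julia generated for test(::Float64)
julia> @code_native test(22.0) # prints the resulting machine code, the same kind of output an AOT compiler produces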
Now you might even say that test was compiled "ahead of time" with respect to the second function call.
To get rid of that first slowdown, which is entirely due to compilation, one needs to tell Julia which functions to compile "ahead of time". Getting a binary out of that etc. is still a bit experimental, but it works even for large packages, as I have shown with Makie.
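As a minimal sketch of that first part (in a fresh session, and only the in-process version, not the full binary workflow), Base's precompile function compiles a method for a given argument-type signature before it is ever called:
julia> test(a) = sin(a^19)
test (generic function with 1 method)
julia> precompile(test, (Float64,)) # compile the Float64 method now, so the first real call skips the compilation hit
true
Going from that to a system image or a standalone binary is what tools like PackageCompiler.jl are for; their exact API has changed over time, so check the current documentation rather than treating this as the full recipe.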