Hi,
I work on a Julia benchmark database project aiming to compare different implementations of basic computational kernels in various languages.
I wonder what the most efficient way to call a Python snippet from Julia is. The following MWE:
using PyCall
using BenchmarkTools
#I would like to be able to read the python snippet from a file like
# py_snippet=read("axpy.py")
#but for now I follow the PyCall example
py"""
def pyaxpy(y, x, a):
    y += a * x
"""
function measure(n)
    @show n
    x, y, a = (rand(n), rand(n), 1/3)
    @btime py"pyaxpy"($y, $x, $a)
    px, py, pa = map(PyObject, (x, y, a))
    @btime py"pyaxpy"($py, $px, $pa)
end
foreach(measure,(10^i for i in (1:4)))
Thanks a lot,
I modified the MWE accordingly (pre-converting to PyObjects):
using PyCall
using BenchmarkTools
#I would like to be able to read the python snippet from a file like
# py_snippet=read("axpy.py")
#but for now I follow the PyCall example
py"""
def pyaxpy(y, x, a):
    y += a * x
"""
pyaxpy = py"pyaxpy"
function measure(n)
    @show n
    x, y, a = (rand(n), rand(n), 1/3)
    @btime py"pyaxpy"($y, $x, $a)
    px, py, pa = map(PyObject, (x, y, a))
    @btime py"pyaxpy"($py, $px, $pa)
    @btime $pyaxpy($py, $px, $pa)
end
foreach(measure,(10^i for i in (1:4)))
Yes, your example was necessary (for me). The final MWE:
using PyCall
using BenchmarkTools
#I would like to be able to read the python snippet from a file like
# py_snippet=read("axpy.py")
#but for now I follow the PyCall example
py"""
def pyaxpy(y, x, a):
    y += a * x
"""
pyaxpy = py"pyaxpy"
function measure(n)
    @show n
    x, y, a = (rand(n), rand(n), 1/3)
    @btime py"pyaxpy"($y, $x, $a)
    px, py, pa = map(PyObject, (x, y, a))
    @btime py"pyaxpy"($py, $px, $pa)
    @btime $pyaxpy($py, $px, $pa)
    @btime pycall($pyaxpy, PyObject, $py, $px, $pa)
end
foreach(measure,(10^i for i in (1:4)))
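For comparison, the same axpy kernel can be timed entirely inside Python with timeit, giving a baseline with no Julia-to-Python call overhead at all. This is a sketch, not part of the original MWE; the NumPy usage is an assumption (PyCall passes Julia arrays to Python as NumPy arrays):

```python
# Pure-Python baseline for the axpy kernel, timed with timeit.
# Assumption: arrays arrive as NumPy arrays, as PyCall would pass them.
import timeit
import numpy as np

def pyaxpy(y, x, a):
    y += a * x  # in-place update, same as the PyCall snippet

n = 10**4
x, y, a = np.random.rand(n), np.random.rand(n), 1/3

# globals=globals() makes pyaxpy, x, y, a visible to the timed statement.
t = timeit.timeit("pyaxpy(y, x, a)", globals=globals(), number=1000) / 1000
print(f"{t * 1e9:.0f} ns per call")
```

Comparing this number with the `@btime pycall(...)` results above isolates how much of the measured time is the cross-language call overhead.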
You would think this would be easy, even in pure Python (but it differs by version; this is for Python 3).
It was surprisingly obscure. At work I ended up doing it this way (it may not be the best way, and since I'm rewriting in Julia anyway I won't investigate further, but please tell me if you find a better one):
using PyCall
py"""
filename = "app.py"
with open(filename, "rb") as source_file:
    code = compile(source_file.read(), filename, "exec")
exec(code)
app.run_server(host='localhost', port=8050)
"""
For this, how about using the $$ interpolation mechanism built into @py_str?
help?> @py_str
py".....python code....."
Evaluate the given Python code string in the main Python module.
If the string is a single line (no newlines), then the Python
expression is evaluated and the result is returned. If the string
is multiple lines (contains a newline), then the Python code is
compiled and evaluated in the __main__ Python module and nothing
is returned.
If the o option is appended to the string, as in py"..."o, then
the return value is an unconverted PyObject; otherwise, it is
automatically converted to a native Julia type if possible.
Any $var or $(expr) expressions that appear in the Python code
(except in comments or string literals) are evaluated in Julia
and passed to Python via auto-generated global variables.
This allows you to "interpolate" Julia values into Python code.
Similarly, any $$var or $$(expr) expressions in the Python code
are evaluated in Julia, converted to strings via string, and are
pasted into the Python code. This allows you to evaluate code
where the code itself is generated by a Julia expression.
For example:
julia> using PyCall
# could as well be:
# pycode = read("myfile.py", String)
julia> pycode = """
def hello():
    print("Hello from python!")
"""
"def hello():\n    print(\"Hello from python!\")\n"
julia> py"""
$$pycode
"""
julia> py"hello()"
Hello from python!
Would it be possible to call timeit directly from inside Julia? That way the evaluation would be done completely in Python, without the object-passing overhead.
Something like:
At the moment it's not running, because locals() doesn't exist in Julia, as far as I can tell.
Is there a workaround?
Edit:
OK, so I tested it without the namespace being relevant (appending 1000 elements to a list):
timeit.timeit("""
times = 1000
a = []
for i in range(1, times + 1):
    a.append(i)
""", number=10000)/10000*1000000000  # nanoseconds
but running it directly in Python is faster by 10 µs.
I don't know why, though, since I thought timeit would start measuring only once the transition to the Python environment is complete.
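On the locals() problem: timeit accepts a `globals=` keyword (since Python 3.5), so an explicit namespace dict can be passed instead of relying on locals(). A minimal sketch, where the `ns` dict is illustrative:

```python
# Sketch: supply the benchmark's namespace explicitly via timeit's globals=
# keyword, sidestepping the need for locals() on the Julia side.
import timeit

ns = {"times": 1000}  # hypothetical namespace dict built by the caller

elapsed = timeit.timeit(
    """
a = []
for i in range(1, times + 1):
    a.append(i)
""",
    globals=ns,
    number=100,
)
print(elapsed / 100 * 1e9, "ns per run")
```

From Julia, such a dict could be filled with interpolated values (e.g. `py"timeit.timeit(..., globals=dict(times=$n), ...)"`), so the timed statement sees them as ordinary globals.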
In the end I created a temporary Python module with all the benchmarks as functions, plus a Python wrapper function that used timeit to benchmark the functions of interest.
Later I called that wrapper from a Pluto notebook, after importing my local Python module with PyCall.jl.
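A hypothetical sketch of that module (say, `mybenchmarks.py`): the function names `append_list` and `bench` are illustrative, not from the original post.

```python
# mybenchmarks.py (hypothetical): kernels plus a timeit-based wrapper,
# so all timing happens inside Python and only the result crosses to Julia.
import timeit

def append_list(times=1000):
    a = []
    for i in range(1, times + 1):
        a.append(i)
    return a

def bench(func_name, number=10000):
    """Average seconds per call for one of this module's functions."""
    return timeit.timeit(f"{func_name}()", globals=globals(), number=number) / number
```

From Julia one would then do something like `bm = pyimport("mybenchmarks"); bm.bench("append_list")`, so only the final timing number crosses the language boundary.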