Help benchmarking small Python code

Disclaimer: I have now asked the same thing on the Python Discourse forum and had deleted this post here. But anyway, if anyone wants to help, it will be most welcome.

I am trying to compare a neighbour list search to other implementations, and scipy has one. I am benchmarking it, but it seems to be so slow that I don't really trust what I'm doing. I am just doing this:

Input file:

import numpy as np
from scipy.spatial import KDTree

def pp(points):
    # Build the KD-tree and return all pairs of points within distance 0.05
    kd_tree = KDTree(points)
    pairs = kd_tree.query_pairs(r=0.05)
    return pairs

points = np.random.random((10000,3))

#%timeit pp(points)

Then I run:

% ipython3 -i nn.py

In [1]: %timeit pp(points)                                                                            
2.78 s ± 134 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
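
Just as a sanity check that the workload is what I expect, I also look in the same session at how many pairs come back (the exact count depends on the random points, so I am not quoting a number here):

In [2]: len(pp(points))   # number of pairs found within the 0.05 cutoff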

Is that correct? It seems strange, because the Julia solutions that do the same thing take on the order of 6 ms for that number of points, and since these are just a couple of library calls, I wouldn't expect the difference to be that large.
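
In case the %timeit magic itself is somehow the problem, I would also cross-check the measurement with the standard-library timeit module, along these lines (a minimal sketch that just repeats the same call a few times; it redefines the same pp and points as in nn.py above):

import timeit

import numpy as np
from scipy.spatial import KDTree

def pp(points):
    # Same as in nn.py: all pairs of points within distance 0.05
    kd_tree = KDTree(points)
    return kd_tree.query_pairs(r=0.05)

points = np.random.random((10000, 3))

# Call pp(points) once per repetition, 7 repetitions, similar in spirit to %timeit
times = timeit.repeat(lambda: pp(points), number=1, repeat=7)
print(f"mean: {sum(times) / len(times):.3f} s, best: {min(times):.3f} s over {len(times)} runs")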