Estimating when the downside of parallelism becomes unimportant

Hello, I am new to Julia, and I found this code snippet online, where somebody computed Fibonacci numbers with and without parallelism. I was wondering about the magic n < 40 condition, where the author decides whether to use the parallel implementation or the serial one. Is there a way to get the best value for situations like this without stepwise incrementing n?
I am a newbie at parallel programming, and I am trying to understand when the use of @parallel, @spawn, and pmap becomes useful, and which function calls are too small to be worth computing in a distributed fashion.

@everywhere fib(n) = n < 2 ? n : fib(n - 1) + fib(n - 2)   # serial version (assumed; not shown in the original snippet)

@everywhere function parallel_fib_first(n)
    if n < 40
        return fib(n)                        # below the cutoff, fall back to the serial version
    end
    x = @spawn parallel_fib_first(n - 1)     # offload one branch to a worker process
    y = parallel_fib_first(n - 2)            # compute the other branch locally
    fetch(x) + y                             # wait for the spawned branch and combine
end
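
For reference, here is one way to actually run it (the worker count is my choice; on Julia 1.0 and later you would also need `using Distributed` first, since the snippet uses the older syntax where @everywhere and @spawn lived in Base):

    addprocs(4)                      # start 4 worker processes
    @time parallel_fib_first(45)     # takes the parallel path for n >= 40
    @time fib(45)                    # serial timing, for comparison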

I hope you can help me get a deeper understanding of Julia and parallel programming.

I have been experimenting with parallelism myself: http://julia.cookbook.tips/doku.php?id=parallel . Alas, there are some aspects you should be aware of.

Julia can run multiple tasks on one process (@async is the tool for this). That gives no speedup for computation, though it can be convenient in some cases, such as serving socket requests.
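
For illustration, here is a minimal sketch of such a task (the sleep stands in for waiting on a socket; nothing here runs in parallel, the task just yields while it waits):

    t = @async begin
        sleep(1)         # stands in for blocking I/O, e.g. a socket read
        "reply"
    end
    println(fetch(t))    # blocks until the task finishes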

Julia has threads. For small tasks with little memory use, they are almost perfect speed-up tools with perfect scaling. I think your task qualifies.
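
A sketch of the same cutoff idea using threads instead of processes (my assumptions: Threads.@spawn needs Julia 1.3+, Julia must be started with several threads, e.g. `julia -t 4`, and the cutoff of 30 is a guess to tune by benchmarking):

    fib(n) = n < 2 ? n : fib(n - 1) + fib(n - 2)

    function threaded_fib(n)
        n < 30 && return fib(n)                  # guessed cutoff; tune it by measuring
        x = Threads.@spawn threaded_fib(n - 1)   # run one branch on another thread
        y = threaded_fib(n - 2)                  # run the other branch on this thread
        fetch(x) + y
    end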

Julia has processes, which are more heavyweight. When badly tuned, they can be very bad; when well tuned, they can work well.
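
For example, a minimal pmap sketch (slow_square is a made-up stand-in for real work; shipping each call to a worker only pays off when the work dominates the messaging overhead):

    using Distributed
    addprocs(4)                                       # 4 worker processes
    @everywhere slow_square(x) = (sleep(0.1); x^2)    # stand-in for a costly function
    pmap(slow_square, 1:16)                           # distribute the calls across workers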

My little page has some benchmarks on the relative costs and benefits.

/iaw

Thank you for your great response; your link really helped me, too.

Usually you repeatedly double the cutoff rather than incrementing it linearly. The point is that the base case has to be large enough that the computational savings from parallelizing outweigh the communication cost of spawn/fetch (or of any other form of parallelism), and there is generally no way to determine this without application-specific benchmarking.
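
A minimal sketch of that benchmarking loop (all names here are mine; it times one local call against one remote round trip at growing base-case sizes; for fib, each step of n already multiplies the work, so a linear step in n plays the role of doubling):

    using Distributed
    addprocs(1)
    @everywhere fib(n) = n < 2 ? n : fib(n - 1) + fib(n - 2)

    for n in 5:5:40
        fib(n); remotecall_fetch(fib, workers()[1], n)       # warm up / compile both paths
        t_local  = @elapsed fib(n)
        t_remote = @elapsed remotecall_fetch(fib, workers()[1], n)
        println("n = $n: local = $(t_local)s, remote = $(t_remote)s")
    end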