Possible bug: Crash during inference in @testset

I’m encountering a crash in my streaming FIR code (it filters point by point with no temporary arrays, since I want maximum speed). The crash appears only when the code is run inside a @testset block, and the error is clearly located in Core.Inference of the Julia runtime.
MWE:

using StaticArrays

const FIRState{T, N} = Tuple{Int, NTuple{N,T}, NTuple{N,T}}        
function FIR_init(::Type{T}, kernel::NTuple{N, V})::FIRState{T, N} where {T, N, V}        
    # ensure ker has the same element type as buf
    ker = (convert.(T, kernel)...)
    return (N, ntuple(i->zero(T), N), ker)
end        
@inline function FIR(xi::T, state::FIRState{T, N})::Tuple{T, FIRState{T, N}} where {T, N}
    # one filtering step: dot product of kernel and buffer
    # (the buffer update with xi is omitted in this MWE)
    n, buf, ker = state
    yi = sum(SVector(ker) .* SVector(buf))
    yi, (0, buf, ker)
end
function test_fir(w::AbstractArray{T, 1}, fir1_state) where T        
    ww = similar(w)
    for (i,xi) in enumerate(w)
        y1, fir1_state = FIR(xi, fir1_state)        
        ww[i] = y1
    end
    ww
end
# ----- Test --------------------
using Base.Test
@testset "FIR                           " begin
    T = Float64
    #FIRstate = FIR_init(T, (1,1,1,0,-1,-1,-1)) #will work
    FIRstate = FIR_init(T, ((Float64.([1,1,1,0,-1,-1,-1]))...))    #will crash with "The applicable method may be too new: running in world age 2757, while current world is 21917"
    a = test_fir(convert.(T, collect(1:20)), FIRstate)
    info("$a")
end

If I run the inner code of the @testset block line by line, everything works.
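For reference, these are the exact lines I enter one at a time at the REPL (outside the @testset); executed this way they complete without any error:

T = Float64
FIRstate = FIR_init(T, ((Float64.([1,1,1,0,-1,-1,-1]))...))
a = test_fir(convert.(T, collect(1:20)), FIRstate)
info("$a")

As soon as the same code is run inside the @testset block, the crash appears: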

Main> @testset "FIR                           " begin
           T =  Float64
           #FIRstate = FIR_init(T, (1,1,1,0,-1,-1,-1)) #will work
           FIRstate = FIR_init(T, ((Float64.([1,1,1,0,-1,-1,-1]))...))    #will crash with "The applicable method may be too new: running in world age 2757, while current world is 21917"
           a = test_fir(convert.(T, collect(1:20)), FIRstate)
           info("$a")
       end
ERROR: MethodError: no method matching string(::Expr)
The applicable method may be too new: running in world age 2757, while current world is 21903.
Closest candidates are:
  string(::Any...) at strings/io.jl:120 (method too new to be called from this world context.)
  string(::BigInt) at gmp.jl:568 (method too new to be called from this world context.)
  string(::BigFloat) at mpfr.jl:885 (method too new to be called from this world context.)
  ...
Stacktrace:
 [1] limit_type_depth(::Any, ::Int64) at .\inference.jl:692
 [2] getfield_tfunc(::Any, ::Core.Inference.Const) at .\inference.jl:897
 [3] (::Core.Inference.##171#172)(::Any, ::Any) at .\inference.jl:899
 [4] builtin_tfunction(::Any, ::Array{Any,1}, ::Core.Inference.InferenceState, ::Core.Inference.InferenceParams) at .\inference.jl:1215
 [5] builtin_tfunction(::Any, ::Array{Any,1}, ::Core.Inference.InferenceState) at .\inference.jl:1133
 [6] abstract_call(::Any, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Core.Inference.InferenceState) at .\inference.jl:1701
 [7] abstract_eval_call(::Expr, ::Array{Any,1}, ::Core.Inference.InferenceState) at .\inference.jl:1927
 [8] abstract_eval(::Any, ::Array{Any,1}, ::Core.Inference.InferenceState) at .\inference.jl:1950
 [9] abstract_interpret(::Any, ::Array{Any,1}, ::Core.Inference.InferenceState) at .\inference.jl:2076
 [10] typeinf_work(::Core.Inference.InferenceState) at .\inference.jl:2669
 [11] typeinf(::Core.Inference.InferenceState) at .\inference.jl:2787
 [12] typeinf_edge(::Method, ::Any, ::SimpleVector, ::Core.Inference.InferenceState) at .\inference.jl:2535
 [13] abstract_call_gf_by_type(::Any, ::Any, ::Core.Inference.InferenceState) at .\inference.jl:1420
 [14] abstract_call(::Any, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Core.Inference.InferenceState) at .\inference.jl:1897
 [15] abstract_eval_call(::Expr, ::Array{Any,1}, ::Core.Inference.InferenceState) at .\inference.jl:1927
 [16] abstract_eval(::Any, ::Array{Any,1}, ::Core.Inference.InferenceState) at .\inference.jl:1950
 [17] abstract_interpret(::Any, ::Array{Any,1}, ::Core.Inference.InferenceState) at .\inference.jl:2076
 [18] typeinf_work(::Core.Inference.InferenceState) at .\inference.jl:2669
 [19] typeinf(::Core.Inference.InferenceState) at .\inference.jl:2787
 [20] typeinf_edge(::Method, ::Any, ::SimpleVector, ::Core.Inference.InferenceState) at .\inference.jl:2535
 [21] abstract_call_gf_by_type(::Any, ::Any, ::Core.Inference.InferenceState) at .\inference.jl:1420
 [22] abstract_call(::Any, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Core.Inference.InferenceState) at .\inference.jl:1897
 [23] abstract_eval_call(::Expr, ::Array{Any,1}, ::Core.Inference.InferenceState) at .\inference.jl:1927
 [24] abstract_eval(::Any, ::Array{Any,1}, ::Core.Inference.InferenceState) at .\inference.jl:1950
 [25] abstract_interpret(::Any, ::Array{Any,1}, ::Core.Inference.InferenceState) at .\inference.jl:2076
 [26] typeinf_work(::Core.Inference.InferenceState) at .\inference.jl:2669
 [27] typeinf(::Core.Inference.InferenceState) at .\inference.jl:2787
 [28] typeinf_ext(::Core.MethodInstance, ::UInt64) at .\inference.jl:2628
 [29] eval(::Module, ::Any) at .\boot.jl:235

So it appears that when the kernel tuple is not written out as a literal, but computed at runtime as

((Float64.([1,1,1,0,-1,-1,-1]))...)

the runtime crashes during the inference phase.
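
A few variants come to mind that might sidestep the problem; I have not verified that any of them avoids the inference crash inside the @testset, so they are only sketches:

# Variant 1: build the kernel tuple outside the @testset and pass the finished tuple in.
kernel = (Float64.([1, 1, 1, 0, -1, -1, -1])...)   # splat done at top level
@testset "FIR (precomputed kernel)" begin
    FIRstate = FIR_init(Float64, kernel)
    a = test_fir(convert.(Float64, collect(1:20)), FIRstate)
    info("$a")
end

# Variant 2: move the body into a function defined before the @testset, so the splat
# runs inside a compiled function rather than in the @testset's top-level expression.
function run_fir_test()
    T = Float64
    FIRstate = FIR_init(T, ((Float64.([1, 1, 1, 0, -1, -1, -1]))...))
    a = test_fir(convert.(T, collect(1:20)), FIRstate)
    info("$a")
end
@testset "FIR (body wrapped in a function)" begin
    run_fir_test()
end

# Variant 3: write the kernel as a tuple literal, which is the case that already works
# (the commented line in the MWE); FIR_init converts the elements to T anyway.
FIRstate = FIR_init(Float64, (1, 1, 1, 0, -1, -1, -1))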
Could someone give me a hint whether this is a bug that should be filed, or whether there is something wrong in my code?

Julia Version 0.6.4
Commit 9d11f62bcb* (2018-07-09 19:09 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell MAX_THREADS=16)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.9.1 (ORCJIT, skylake)

It’s a JuliaPro install.