Dear folks,

I observe a strange behavior (note that the following snippets are embedded in a function): let `x` be a decision vector of size `n`. Then,

```
sum_x = 0.0
x_val = getvalue( model[:x] )
println("typeof x_val: ", typeof(x_val))
@time for i=1:n
sum_x += x_val[i] # this sum allocates memory
end
```

returns

```
typeof x_val: Array{Float64,1}
0.015041 seconds (299.49 k allocations: 4.570 MiB, 69.16% gc time)
```

so memory is allocated, and I do not understand why. I also tried `x_val = deepcopy( getvalue( model[:x] ) )`, with the same result.

In contrast:

```
sum_x = 0.0
foo = rand(Float64, n)
println("typeof foo: ", typeof(foo))
@time for i=1:n
sum_x += foo[i] # this sum does not allocate
end
```

outputs

```
typeof foo: Array{Float64,1}
0.000060 seconds
```

which does not allocate memory and is much faster.
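My current guess is that `model[:x]` is not inferred as a concrete type inside the function, so every addition in the loop goes through dynamic dispatch and boxes the values. Here is a minimal sketch without JuMP that reproduces this pattern; the `Dict{Symbol,Any}` is my own stand-in for the model lookup, not how JuMP actually stores variables:

```julia
# Stand-in for an Any-typed lookup such as `model[:x]` (my assumption).
function sum_untyped(n)
    d = Dict{Symbol,Any}(:x => rand(n))
    v = d[:x]          # inferred as Any
    s = 0.0
    for i in 1:n
        s += v[i]      # dynamic dispatch here allocates
    end
    return s
end

# Same loop over a concretely typed vector.
function sum_typed(n)
    v = rand(n)        # inferred as Vector{Float64}
    s = 0.0
    for i in 1:n
        s += v[i]      # no allocations in the loop
    end
    return s
end
```

If my guess is right, timing these with `@time` (after a warm-up call) should show allocations growing with `n` only in `sum_untyped`.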

Do you have any suggestions as to why this is the case, and how I can work with (intermediate) JuMP results as fast as in the second example?
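One workaround I am considering is a function barrier: pass the extracted vector to a small helper so the compiler can specialize the loop on the concrete element type. This is only a sketch, and the helper name `sum_values` is my own:

```julia
# Function barrier: once `v` arrives here, its concrete type is known,
# so the loop should compile without allocations.
function sum_values(v::Vector{Float64})
    s = 0.0
    for i in eachindex(v)
        s += v[i]
    end
    return s
end

# Intended usage in my function (sketch):
# sum_x = sum_values(getvalue(model[:x]))
```

I have not yet verified that this removes the allocations in my actual code, so I would appreciate confirmation that this is the right approach.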

Thanks in advance!