The basic reason is that numerical integration algorithms are formulated to produce a single number, the definite integral over the entire interval. For a “cumulative sum” you want an algorithm that produces a function, and such algorithms generally look very different.
Of course, it’s easy to get a cumsum from something like the trapezoidal rule or similar low-order integration methods that just sum a bunch of local integral estimates over small intervals, typically from equally spaced samples. But more sophisticated integration techniques like Gaussian quadrature are typically exponentially more accurate than the trapezoidal rule for smooth integrands, so you are sacrificing a lot of accuracy if you use the simpler local methods (à la NumericalIntegration.jl or Trapz.jl).
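For concreteness, here is a minimal sketch of that low-order approach in Julia (a hypothetical `cumtrapz` helper, not the exact API of either package): each entry of the result just accumulates one more local trapezoid estimate.

```julia
# Hypothetical helper: cumulative trapezoidal integration of samples y at points x.
function cumtrapz(x::AbstractVector, y::AbstractVector)
    F = zeros(float(eltype(y)), length(y))
    for i in 2:length(y)
        # add the local trapezoid estimate for the subinterval [x[i-1], x[i]]
        F[i] = F[i-1] + (x[i] - x[i-1]) * (y[i] + y[i-1]) / 2
    end
    return F
end

x = range(0, 1, length=101)
F = cumtrapz(x, exp.(-x .^ 2))   # F[i] ≈ ∫₀^x[i] exp(-t²) dt, but only O(Δx²) accurate
```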
The more sophisticated integration methods typically work by implicitly constructing one or more high-order polynomial approximations of your function (carefully sampled at non-equispaced points) and then integrating those polynomials, but the methods are cleverly formulated to avoid constructing the polynomial(s) explicitly. They instead just sample the function at carefully chosen points, multiply by some precomputed weights (which correspond mathematically to integrating an interpolating polynomial), and sum to get the estimated integral (over the whole interval only).
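As a sketch of that pattern, here is Gaussian quadrature using FastGaussQuadrature.jl (my choice of package for the example, not one mentioned above). The precomputed nodes and weights give you one number, the integral over the whole interval:

```julia
using FastGaussQuadrature, LinearAlgebra

# n-point Gauss-Legendre rule: nodes x and weights w on [-1, 1].
# The weights amount to integrating an interpolating polynomial through the
# nodes, but no polynomial is ever formed explicitly; the rule is exact for
# polynomials up to degree 2n - 1.
x, w = gausslegendre(20)
integral = dot(w, exp.(-x .^ 2))   # ≈ ∫₋₁¹ exp(-t²) dt (whole interval only)
```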
To instead get the “cumsum” accurately, i.e. to evaluate the integrated function at arbitrary points in the interval, you need a very different type of formulation that explicitly constructs the interpolating polynomial(s) in some form, at which point you can integrate the polynomial to obtain another polynomial and evaluate it wherever you want. This is exactly what ApproxFun.jl does.
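A short illustration with ApproxFun.jl (the integrand and interval here are arbitrary examples):

```julia
using ApproxFun

f = Fun(x -> exp(-x^2), 0..1)  # adaptively-sampled polynomial approximation of f
F = cumsum(f)                  # integrate the polynomial: F(x) = ∫₀ˣ f(t) dt, another Fun
F(0.5)                         # evaluate the cumulative integral anywhere in [0, 1]
F(1) ≈ sum(f)                  # consistent with the definite integral over the whole interval
```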