How to speed up converting a Dict to JSON?

I am using Julia and the JSON package to convert a Dict to JSON and send it to LLM APIs. Below is the code I’m using:

using HTTP, JSON

function send_request(url, headers, payload)
    try
        @debug "Payload" payload
        json_payload = JSON.json(payload)
        @debug "JSON payload ready"
        # status_exception=false so non-200 responses are returned instead of thrown,
        # which makes the else branch below reachable
        response = HTTP.request("POST", url, headers, json_payload;
                                proxy=ENV["http_proxy"], status_exception=false)

        if response.status == 200
            return response
        else
            @error "Request failed with status: $(response.status)"
            println(String(response.body))
            return nothing
        end
    catch http_error
        @error "HTTP request error: $http_error"
        return nothing
    end
end

Normally, this works fine. However, when the json_payload contains a field that is the base64 encoding of an image (around 2MB in size), the call to JSON.json(payload) takes a very long time to complete.

Is there any way to speed up the conversion process when dealing with large base64-encoded data in the JSON payload?

> However, when the json_payload contains a field that is the base64 encoding of an image (around 2MB in size), the call to JSON.json(payload) takes a very long time to complete.

Can you provide a reproducible example with data? What do you consider a “very long time”?

I tried running JSON.json on a Dict containing a 2MB string, and it takes about 20ms:

julia> using JSON, Random, BenchmarkTools

julia> payload = Dict("foo" => Random.randstring(2^21)); # 2MB string

julia> @btime JSON.json($payload);
  21.234 ms (35 allocations: 4.44 MiB)

It’s going to be hard for anyone to help you without more information. Something like the above, where you provide code that generates a Dict of random data, would be the easiest to reproduce.

Sorry. It turns out the problem is not JSON at all: the LLM API is simply slow to handle such a large payload. For some reason, the println calls I used to track the slow part didn’t accurately show where the bottleneck was.
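For anyone else debugging this kind of thing: a minimal sketch of how to time the serialization step separately from the HTTP call, so the bottleneck shows up directly rather than being inferred from println ordering. The payload here is a placeholder standing in for a base64-encoded image; the commented-out HTTP line assumes hypothetical `url` and `headers` variables.

```julia
using JSON

# Placeholder payload with one large field, mimicking a ~2MB base64-encoded image.
payload = Dict("model" => "example", "image" => repeat("A", 2 * 1024 * 1024))

# Time the serialization step on its own to rule it in or out as the bottleneck.
t_json = @elapsed json_payload = JSON.json(payload)
println("JSON.json took $(round(t_json; digits=3)) s for $(length(json_payload)) bytes")

# The network call would be timed the same way (url/headers are placeholders):
# t_http = @elapsed response = HTTP.request("POST", url, headers, json_payload)
```

`@elapsed` returns wall-clock seconds for a single run; for a stable measurement of the serialization alone, BenchmarkTools’ `@btime` (as in the reply above) is the better tool, but `@elapsed` is enough to tell a 20ms serialization apart from a multi-second API round trip.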