Possible enum slowness?

I may have encountered enum slowness and am looking for insight. The background is a bit long: feel free to skip to my question below.

The application is a Covid simulation using a physical-event approach at the level of individuals. The population data is tracked in a TypedTable with a row per person. Three of the columns track characteristics of a person using categories. Since there are 4, 5, and 4 categories respectively, an enum seemed appropriate. I switched these columns from Int (Int64 on macOS) to enums, based on the code fragments below.

I have a branch for each so I can switch branches to easily compare performance. There are 3 fundamental processes in the simulation:

  • spread: after seeding a city with 6 infected people, the infection spreads among those who are susceptible;
  • transition: those who have become infected transition over 25 days through varying degrees of sickness and infectiousness to either recovery or death (some lucky ones get better in fewer than 25 days);
  • tracking: accumulating daily outcomes into “history” time series.

For my smallest dataset of 95_626 individuals over 180 days, here are representative timings in seconds:

|            | Int64  | enum   |
|------------|--------|--------|
| spread     | 0.5755 | 0.5632 |
| transition | 0.5079 | 0.7619 |
| tracking   | 0.1174 | 0.2110 |

Timings are obtained by capturing @elapsed for each function call. Spread, transition, and tracking run each day, so the values are accumulated across days. These timings were taken after compilation and several runs. Transition is 50% slower, and this becomes a huge problem for the dataset of 8.4 million individuals.

Enums are defined as below. In a few places in the code, inputs from a file need to be compared to an enum value, and there I do Int(<enum>). This is nearly instantaneous, so I don’t think these deliberate conversions are the culprit.
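Concretely, one of those deliberate conversions looks something like this (a simplified sketch; `input_val` stands in for a value read from a file):

```julia
@enum status begin      # same definition as in the fragments below
    unexposed  = 1
    infectious = 2
    recovered  = 3
    dead       = 4
end

input_val = 2                  # hypothetical integer code from a file
input_val == Int(infectious)   # the deliberate conversion; evaluates to true
```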

My first hunch was that the Mac and LLVM are inefficient with Int32 (and Int8, etc.) and that LLVM prefers 8-byte values. I could be wrong here. To check, it was easy to change the base type of the enums to Int64. There was only a minute difference in timing, which could be attributed to the built-in randomness of the simulation: different numbers of people get sick and transition differently. With the large dataset these differences are +/- 3%.


Now I am wondering: are there internal conversions or data-management issues with large-ish arrays of enums? It seems like they should behave just like their base types, but the Int64 code is notably faster for transition and tracking.

There doesn’t seem to be any difference in the code except how the population table is created and the infrequent conversion of the enums to Int. I started with the Int64 version, modified it, and then ran diffs to eyeball whether I was doing anything different; I didn’t seem to be. You could also implicate TypedTables, but I use them identically in both versions. All indexing is into columns and takes less than O(n) time in the length of the vector; in fact, timings fall within a very small range for a huge range of vector lengths. In theory it could be constant time, since it’s just the calculation of an address in memory, but there seems to be a slight increase in time to reach near the end of the largest vectors.

Code Fragments

enum approach

creating/initializing the table for the “categorical” columns:

    dat = Table(status=fill(unexposed, pop),
                agegrp=reduce(vcat, [fill(age, parts[Int(age)]) for age in instances(agegrp)]),
                cond=fill(notsick, pop),
                ...<10 more columns, all Int>

defining the enums:

@enum condition begin
    notsick = 0
    nil     = 5
    mild    = 6
    sick    = 7
    severe  = 8
end

@enum status begin
    unexposed  = 1
    infectious = 2
    recovered  = 3
    dead       = 4
end

@enum agegrp begin
    age0_19  = 1
    age20_39 = 2
    age40_59 = 3
    age60_79 = 4
    age80_up = 5
end

Int64 approach

creating the same columns:

parts = apportion(pop, age_dist)
dat = Table(
            status = fill(intype(unexposed), pop),
            agegrp = reduce(vcat, [fill(i, parts[i]) for i in agegrps]),
            cond = zeros(intype, pop),
            ...<more columns>

(By the way, intype holds a Ref to the type of Int I want to use, which I made an input parameter. I experimented with Int16, Int32, and Int64; only the “day” columns can exceed 128. It didn’t make any difference, and I don’t think UInt would make much difference either.)

defining the integer constants instead of enums:

# status
const unexposed         = 1  
const infectious        = 2
const recovered         = 3
const dead              = 4

# agegrp 
const age0_19           = 1 
const age20_39          = 2 
const age40_59          = 3 
const age60_79          = 4 
const age80_up          = 5 

# condition
const notsick           = 0
const nil               = 5
const mild              = 6
const sick              = 7
const severe            = 8

Hey, can you maybe provide benchmarks for the exact cases you want to compare, with only one table, one enum, one column or so, as reduced as possible?

It’s kind of complex code but at root, it just retrieves and updates values in a vector.

Let me see if I can gin up something that does something mechanical on a Vector{Int} and a vector of enums.
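Something mechanical along these lines, maybe (a reduced sketch, not the real simulation code):

```julia
@enum condition begin
    notsick = 0
    nil     = 5
    mild    = 6
    sick    = 7
    severe  = 8
end

const N = 100_000

# parallel data: one Int64 vector, one enum vector with the same codes
v_int  = rand([0, 5, 6, 7, 8], N)
v_enum = condition.(v_int)

# mechanical read-compare-write loop, mimicking transition
function bump_int!(v)
    for i in eachindex(v)
        if v[i] == 5      # nil
            v[i] = 6      # mild
        end
    end
end

function bump_enum!(v)
    for i in eachindex(v)
        if v[i] == nil
            v[i] = mild
        end
    end
end

bump_int!(copy(v_int)); bump_enum!(copy(v_enum))   # warm up compilation
t_int  = @elapsed bump_int!(v_int)
t_enum = @elapsed bump_enum!(v_enum)
```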

So, the answer here is that the entire application depends on the data values being integers. In many cases the results are used in comparisons and as indices. Doing a rough-and-ready change of the arrays from Int to enum required a lot of conversions to Int and occasionally to String (where I needed the text). Also, the type of an enum isn’t iterable, so it has to be collected into a tuple with instances(), and that would then need to be converted to Int.

Individually, these conversions are pretty fast. But, in a hot loop they all add up.

So, to get an enum implementation to be performant I would need to replace everywhere I assume integers, and shift code and other data structures to use enums, too. It’s all possible with (my) time. Then the enum version and the Int version should match performance (within epsilon). I really am using the integer values as categories, not for math.
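For what it’s worth, part of that shift might be cheaper than it sounds: Julia lets an enum act directly as an array index if you overload `Base.to_index`, which would remove the `Int()` calls at the index sites. A sketch:

```julia
@enum agegrp begin
    age0_19  = 1
    age20_39 = 2
    age40_59 = 3
    age60_79 = 4
    age80_up = 5
end

# allow v[age40_59] in place of v[Int(age40_59)]
Base.to_index(a::agegrp) = Int(a)

age_dist = [0.25, 0.26, 0.25, 0.17, 0.07]   # hypothetical distribution
age_dist[age40_59]                          # == 0.25, no explicit Int() needed
```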

I’d have the same need to do a lot of modifications using Symbols, too.

In general, does anyone have experience with using symbols as categorical values vs using enums?
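To make the comparison concrete, this is the trade-off as I understand it (a sketch): Symbols compare cheaply by identity, but they can’t serve as array indices without a lookup table.

```julia
# symbols as categories: equality checks are identity comparisons
v_sym = fill(:unexposed, 10_000)
count(==(:unexposed), v_sym)               # 10_000

# but using a symbol as an index needs an explicit mapping
const status_code = Dict(:unexposed => 1, :infectious => 2,
                         :recovered => 3, :dead => 4)
status_code[:infectious]                   # == 2
```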
