Hi,

I’m trying to set up an optimization in JuMP. I do not have much experience in this field, but it seems simple:

The aim is to assign speakers to slots subject to some conditions. I want to minimize the variance of the time intervals between the scheduled slots of each speaker.

I’ve implemented a matrix of binary decision variables `a[i,j]` that is `1` if slot `i` is assigned to speaker `j`. With this I can encode my conditions easily, such as

```julia
@variable(model, a[1:n_slots, 1:n_speakers], Bin)
@constraint(model, sum(a, dims=2) .== 1) # only one speaker per slot
```

Unfortunately, I get stuck on the loss. A possible loss function for speaker `j` is

l_j = \sum_k | (t_{k,j} - t_{k-1,j}) - \bar{d} |

where t_{k,j} is the slot number of the k-th assignment of speaker j (i.e. the k-th occurrence of `a[:,j] .== 1`), and \bar{d} is the average time interval without constraints, \bar{d} = \frac{n_\text{speakers}}{n_\text{slots}}. (Kind of a linearized variance.)
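To be clear, I don’t think the absolute values themselves are the obstacle; I know they can be linearized with the usual epigraph trick. A minimal sketch of what I mean (all names here, `x`, `u`, `dbar`, are illustrative, not part of my actual model):

```julia
using JuMP

# Epigraph trick: |x - dbar| is replaced by an auxiliary variable u
# constrained from both sides, so minimizing u minimizes |x - dbar|.
model = Model()
dbar = 0.25                      # illustrative target interval
@variable(model, x)              # stands in for a gap t_{k,j} - t_{k-1,j}
@variable(model, u >= 0)
@constraint(model, u >= x - dbar)
@constraint(model, u >= dbar - x)
@objective(model, Min, u)
```

What I can’t see is how to build the expression inside the |⋅|, i.e. the gaps t_{k,j} - t_{k-1,j}, as a linear expression of `a`.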

My questions are:

- How can I code such a loss in a way compatible with `JuMP`? My problem is getting from the matrix `a` to the slot times t_{k,j}.
- Could I use a different loss function instead?
- I feel I should probably set up the problem completely differently. I’d appreciate any hint!
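Regarding the second question, one surrogate I considered (purely a sketch; the window length `w` and the `target` value below are my own guesses, not something I know to be equivalent to the interval variance) is to penalize how far the number of talks per speaker inside each sliding window of slots deviates from an even spread:

```julia
using JuMP

model = Model()
n_slots, n_speakers = 12, 3
w = 4                         # illustrative window length
target = w / n_speakers       # expected talks per window if evenly spread (my assumption)

@variable(model, a[1:n_slots, 1:n_speakers], Bin)
@variable(model, u[1:n_slots-w+1, 1:n_speakers] >= 0)

for j in 1:n_speakers, s in 1:n_slots-w+1
    # Number of talks speaker j gives in slots s .. s+w-1.
    cnt = sum(a[s+k, j] for k in 0:w-1)
    # u[s,j] bounds |cnt - target| via the epigraph trick.
    @constraint(model, u[s, j] >= cnt - target)
    @constraint(model, u[s, j] >= target - cnt)
end

@objective(model, Min, sum(u))
```

This stays linear in `a`, but I’m not sure it captures what I actually want, hence the question.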

Many thanks in advance!

Andreas