Is there a recommended way to one-hot encode a batch of sequences?
More precisely, I am modelling sequences over an alphabet of `q` letters. A single sequence of length `N` can be one-hot encoded as a `q × N` one-hot matrix (e.g., using `Flux.OneHotMatrix`). It then seems that to encode a batch of `B` sequences I would need a `q × N × B` “one-hot tensor”.
What’s the recommended approach?
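For concreteness, here is a minimal sketch of the setup I have in mind (the variable names and the `cat`-based batching are just illustrative; integer-coded sequences over the alphabet `1:q` are assumed):

```julia
using Flux  # one-hot utilities also live in OneHotArrays.jl

q, N, B = 4, 10, 3                    # alphabet size, sequence length, batch size
seqs = [rand(1:q, N) for _ in 1:B]    # hypothetical batch of B sequences

# A single sequence encodes to a q × N one-hot matrix
m = Flux.onehotbatch(seqs[1], 1:q)
@assert size(m) == (q, N)

# Naive batching: stack the per-sequence matrices along a third dimension
T = cat((Flux.onehotbatch(s, 1:q) for s in seqs)...; dims=3)
@assert size(T) == (q, N, B)
```

This works, but the `cat` splat feels clumsy and presumably loses the memory-efficient one-hot representation, hence the question about the recommended approach.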