I am building a simple-to-use autoencoder model… anyone interested?

Hello, I saw that all the autoencoders presented in the various blogs require the user to actually implement the autoencoder with a DL framework.

I am instead implementing one where the user just creates the model with `m = AutoEncoder(outdims=4)` (with optional definition of the encoding/decoding layers, specification of the epochs, batch size, etc…), fits it with `fit!(m,x)`, and predicts the encoded values with `predict(m,x)` or decodes them with `inverse_predict(m,encoded_x)`.
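The workflow above can be sketched as follows. This is a hedged illustration based only on the function names mentioned in the post (`AutoEncoder`, `fit!`, `predict`, `inverse_predict`); the toy data and the exact return shapes are assumptions:

```julia
using BetaML          # assumed to export the names used below

x = rand(100, 8)      # toy data: 100 observations (rows), 8 features

m = AutoEncoder(outdims=4)          # encode to a 4-dimensional latent space;
                                    # layers, epochs, batch size are optional
fit!(m, x)                          # train the model on x
x_enc = predict(m, x)               # encoded (latent) representation of x
x_dec = inverse_predict(m, x_enc)   # approximate reconstruction of x
```

The appeal of this design is that the whole encode/decode cycle is four calls, with every network detail defaulted away.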

I just wonder whether you think this could be useful to someone and could eventually land as a transformer model in MLJ… (I haven’t done the MLJ wrapper yet.)

@ablaom


Yup, that would be nice.

Would be great!

It’s done; I just need to solve an issue with the autotune, then release and announce it… :slight_smile:

https://sylvaticus.github.io/BetaML.jl/dev/Utils.html#BetaML.Utils.AutoEncoder


Done! Here is the release: [ANN] BetaML.jl.. yet an other (simple) Machine Learning Package - #13 by sylvaticus

I believe this is the easiest way to apply dimensionality reduction to some data through an AutoEncoder. While the user can optionally specify the number of dimensions in the latent space, the number of neurons in the inner layers, or the full specification of the encoding/decoding layers and the NN training options, all of this remains completely optional, as sensible heuristics are applied by default. The autotune method further simplifies these choices.
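As a hedged sketch of the optional configuration mentioned above: the keyword names below (`autotune`, `epochs`, `batch_size`) are assumptions about the package API, not confirmed signatures, so treat this as illustrative only:

```julia
using BetaML   # assumed to export AutoEncoder, fit!, predict

x = rand(100, 8)   # toy data: 100 observations, 8 features

# Assumed keywords: hyperparameters like inner-layer sizes are searched
# automatically when autotune is enabled, but everything stays optional.
m = AutoEncoder(outdims=4, autotune=true, epochs=100, batch_size=8)
fit!(m, x)
x_enc = predict(m, x)   # latent representation, one row per observation
```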

In the end, the idea is that the user doesn’t need to know what is behind the AutoEncoder, just to apply it in order to obtain a nonlinear transformation of their data to a latent space…