I am planning to use a pre-trained BERT model for Greek published on Hugging Face. As is well known, using this type of pre-trained model typically depends on Python's Hugging Face library and PyTorch. I know that @chengchingwen has been doing some exciting work with Transformers.jl. I really appreciate the effort he puts into this!
The package's page says: "Current we only support a few model and the tokenizer part is not finished yet." Any news on this front?