Hi All,
I quite like Transformers.jl. I do not aspire to train or fine-tune LLMs; I am more interested in prompting. But I am a bit lost when it comes to loading models from HuggingFace.
Here are a few questions:
- Why are models named with symbols, e.g. `:gpt2`?
- How do I find the names of models, for example "gpt2:lmheadmodel"?
- How do I find out whether a model is supported? Is there a tutorial on how to add an unsupported model?
- Has anyone tried to run a model on multiple GPUs when it does not fit on one?
Let me give you a concrete example.
Let’s say that I would like to load the model EleutherAI/gpt-neo-125m from the Hugging Face Hub.
I can load the tokenizer as
```julia
hgf"EleutherAI/gpt-neo-125m:tokenizer"
```
but I do not know which heads the model offers.
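My current workaround for discovering the heads is to look at the checkpoint's config, which records the architecture(s) the weights were exported with. A minimal sketch, assuming `load_config` from `Transformers.HuggingFace` downloads and parses the model's `config.json` (field names here mirror the Hugging Face config format; please correct me if the API differs):

```julia
using Transformers.HuggingFace

# Download and parse config.json for the checkpoint. The :architectures
# field lists the head(s) the weights were saved with, e.g. something
# like "GPTNeoForCausalLM" for this model, and :model_type gives the
# base model family.
cfg = HuggingFace.load_config("EleutherAI/gpt-neo-125m")
@show cfg.model_type
@show cfg.architectures
```

If that is right, the lowercased architecture name would be a reasonable guess for the `:head` part of the `hgf"..."` string, but I have not verified this mapping.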
I figured out that I can probably load everything as

```julia
hgf"EleutherAI/gpt-neo-125m"
```

which seems to return a tuple `(tokenizer, model)` and lets me "ditch" the head naming altogether.
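For completeness, this is roughly how I use the tuple form now. A sketch, assuming the `encode` function from `Transformers.TextEncoders` and that the model can be called directly on the encoded input (exact signatures may differ between Transformers.jl versions):

```julia
using Transformers
using Transformers.TextEncoders: encode

# Load tokenizer and model together and destructure the tuple,
# sidestepping the head-name question entirely.
textenc, model = hgf"EleutherAI/gpt-neo-125m"

# Encode a prompt and run a forward pass.
input = encode(textenc, "Hello, Julia!")
output = model(input)
```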
Nevertheless, questions 1, 2, and 4 remain. The lib is great; so much hard work, and so nicely done.