Introducing PromptingTools.jl – your new best friend for working with Large Language Models (LLMs).
Imagine saving 20 minutes every day and paying just a few pennies for it. That’s right - LLMs allow you to buy back time!
With PromptingTools.jl, you can use the OpenAI models (but also your other favorites) with no hassle. Success with LLMs is all about writing high-quality prompts. This package aims to help the Julia community easily discover, share, and quickly re-use great prompts (templating is planned for v0.2).
Key perks? Think an easy-to-use aigenerate function that interpolates all prompt variables from the provided kwargs, an @ai_str macro to slash those keystrokes when you have some burning questions, and light wrappers around the AI outputs to enable downstream multiple dispatch. Plus, it’s super light on dependencies, so you can put it in your global env.
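For example, a minimal sketch (assuming your OPENAI_API_KEY environment variable is set):

```julia
using PromptingTools

# aigenerate interpolates {{handlebar}} variables from the kwargs
msg = aigenerate("Say hi to {{name}}!"; name = "Julia")

# The @ai_str macro is even shorter for one-off questions
msg = ai"What is the capital of France?"
println(msg.content)
```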
So, why juggle tabs and lose focus? Keep your flow state sacred with PromptingTools.jl and never leave the REPL. It’s not about building empires; it’s about giving you back time every single day.
Next steps: More tutorials to show you how to save time!
Pssst… If you want to use the new GPT-4 Turbo that’s cheaper and faster, you can just add its alias ("gpt4t") after the ai macro: ai"What is the capital of France?"gpt4t. There is also an asynchronous version of this macro (see the README), so you can keep crunching while GPT-4 is working on your task.
EDIT: The package is not registered yet. I’ll do so only if there is interest.
Remember, use ai"<some_question>" for quick inquiries. For instance, I can never remember how to activate the temp environment, so I did: ai"In Julia, how do you activate the temp environment?"
Happy coding!
PS: What I actually used was an advanced version of the above syntax: aai"In Julia, how do you activate the temp environment?"gpt4t, which taps into the latest GPT-4 Turbo and runs asynchronously.
Season’s Greetings, Julia Elves! Get ready to unwrap a sleigh-load of updates from PromptingTools.jl, all bundled up just in time for a festive coding season!
First off, we’ve cooked up some magic in our elves’ workshop: the @ai!_str macros. Now, continuing a conversation is as easy as singing a Christmas carol. Start with ai"question", reply with ai!"follow-up-response", and if you fancy a new tune, just begin anew with ai"". It’s like having your very own AI Santa, ready to chat through the snowy night!
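In practice:

```julia
using PromptingTools

ai"What is the capital of France?"   # starts a fresh conversation
ai!"And what is its population?"     # continues the conversation above
ai"A completely new question"        # plain ai"" starts over
```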
Have you been good this year? Because our RAGTools sub-module is here, a gift for all the Retrieval-Augmented Generation enthusiasts! Build your own AI-guided winter wonderland with functions like build_index and airag. And don’t worry, we’ve made sure parsing of Julia code in your AI messages is on the nice list; AICode is catching errors like Santa catches cookies!
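Here’s a minimal sketch of the RAG workflow (the sub-module is experimental, so the exact signatures may evolve; the file paths below are placeholders):

```julia
using PromptingTools
using PromptingTools.Experimental.RAGTools

# Chunk and embed a few local text files (placeholder paths)
index = build_index(["notes/intro.txt", "notes/usage.txt"])

# Retrieve relevant chunks and generate a grounded answer
answer = airag(index; question = "How do I get started?")
```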
With the festive spirit, we’ve also added new prompt templates and improved AICode handling, making coding as joyful as unwrapping presents. Plus, we’re now welcoming new guests to our holiday party: MistralAI and OpenAI-compatible APIs. It’s like inviting the whole North Pole over!
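Switching providers should be as simple as picking a registered model name. A quick sketch, assuming your MISTRAL_API_KEY is set and "mistral-tiny" is among the registered Mistral models:

```julia
using PromptingTools

# Same aigenerate call, different provider behind the model name
msg = aigenerate("Say hi to the North Pole!"; model = "mistral-tiny")
```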
Our elf engineers have been busy tinkering with utilities for Julia code generation, ensuring your holiday projects are merry and bright. And with our revamped MODEL_REGISTRY, it’s like having a Christmas list of all your favorite AI models! Don’t forget to check out the new Preferences.jl-based presets.
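For example, you can browse the registry and persist your favorite default across sessions (a sketch using the package’s Preferences-based setter):

```julia
using PromptingTools
const PT = PromptingTools

# Browse registered models and their aliases
PT.MODEL_REGISTRY

# Persist your preferred default chat model via Preferences.jl
PT.set_preferences!("MODEL_CHAT" => "gpt-4-1106-preview")
```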
So, gather around the fireplace, fellow Julia developers. With PromptingTools.jl, you’re not just writing code; you’re orchestrating a symphony of AI-powered holiday cheer. Let’s make this coding season the merriest one yet!
Hey team! We just pushed a small but mighty update to PromptingTools.jl, bringing in the latest from OpenAI! Here’s the scoop:
Model Swap: We’ve updated our default embedding model to “text-embedding-3-small” for that sweet spot of lower cost and higher performance. Heads up: the default chat model will shift to OpenAI’s 0125 versions by mid-Feb, but you can try them out already (see below).
New Models on Board: Added the latest OpenAI models behind the aliases “gpt4t” and “gpt3t”, plus the new embedding models behind “emb3small” & “emb3large”. In general, more power, less cost! Especially interesting because “gpt3t” performs better on Julia and is cheaper!
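A quick sketch of trying the new aliases (assumes your OPENAI_API_KEY is set):

```julia
using PromptingTools

# New GPT-3.5 Turbo via its alias
msg = ai"Write a one-line haiku about Julia"gpt3t

# New small embedding model via its alias
emb = aiembed("The Julia programming language"; model = "emb3small")
```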
Tweaks & Fixes : Improved AgentTools for smarter feedback and improved our AICode evaluation to catch those pesky errors better. More on that in the future
Need to Know: This update does include a breaking change with the default models. If your code depends on the old defaults, make sure to specify your model choice explicitly (as shown below).
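For example (a sketch; pin whichever model your code was tested against):

```julia
using PromptingTools

# Pin the model explicitly instead of relying on the changed default
msg = aigenerate("Hello!"; model = "gpt-3.5-turbo")
```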
Upgrade Time: Grab the latest version via your Julia package manager. Check out our docs for the full rundown of the new features and models.
Big news! PromptingTools.jl has just dropped eight shiny new updates, supercharging it with cool new models (hello, Anthropic & GoogleGenAI!), zippier dataset prep, and even artsy AI capabilities.
What’s New?
Anthropic’s Claude for smarter text & data handling (see the quick example below).
GoogleGenAI integration for that Google magic touch.
Binary embeddings for FAST RAGTools & a customizable RAG interface for your unique needs.
Tidier datasets and a treasure trove of docs & tools.
Plus, we’re smashing bugs and enhancing performance to make your coding life smoother.
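A hedged sketch of calling the new providers ("claudeh" for Claude 3 Haiku and "gemini" follow the package’s alias conventions; you’ll need the matching API keys, and Gemini also requires loading the GoogleGenAI.jl package):

```julia
using PromptingTools

# Claude 3 Haiku via its alias (requires ANTHROPIC_API_KEY)
msg = aigenerate("Explain multiple dispatch in one sentence."; model = "claudeh")

# Gemini via the GoogleGenAI integration (requires GOOGLE_API_KEY)
# using GoogleGenAI  # load the extension first
# msg = aigenerate("Explain multiple dispatch in one sentence."; model = "gemini")
```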
Curious for more? Dive deep into our latest blog post or just visit the docs.
Let’s create, debug, and maybe break things (but in a fun way)!
Your feedback and contributions could help steer this ship, so don’t be shy!
EDIT: Why care about Claude models? Because Haiku is an absolute beast and super cheap!