Title: [Idea/Discussion] A Linter to Fix LLM-Induced Pythonisms in Julia Code – Does it make sense?
Hello Julia Community,
I’m working on a small, strategic open-source project and would appreciate your input. The idea stems from a common annoyance: Large Language Models (LLMs) often generate Julia code that is polluted with Python syntax (e.g., using `and` instead of `&&`, or `True` instead of `true`).
The Idea: Julia-AI-AutoChecker
I’ve built a simple, Python-based linter called Julia-AI-AutoChecker that specifically targets these Pythonisms. Since many advanced LLMs can execute Python code as a tool, the goal is to integrate the linter into the AI’s workflow so the AI can self-correct its own output before presenting it to the user.
The result should be immediately clean, idiomatic, and runnable Julia code.
Example Correction:
AI generates:
```julia
if status == True and data > 0
```
Linter forces correction to:
```julia
if status == true && data > 0
```
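To make the idea concrete, here is a minimal sketch of the kind of regex-based substitution such a pre-processor could apply. The rule list and function names below are illustrative assumptions for discussion, not the actual implementation in the repository:

```python
import re

# Illustrative (pattern, replacement) pairs for common Pythonisms that LLMs
# emit in Julia code. The real rule set in the project may differ.
PYTHONISM_RULES = [
    (re.compile(r"\bTrue\b"), "true"),      # Python booleans -> Julia booleans
    (re.compile(r"\bFalse\b"), "false"),
    (re.compile(r"\bNone\b"), "nothing"),   # Python's None -> Julia's nothing
    (re.compile(r"\band\b"), "&&"),         # boolean operators
    (re.compile(r"\bor\b"), "||"),
    (re.compile(r"\belif\b"), "elseif"),
]

def fix_pythonisms(julia_source: str) -> str:
    """Apply all substitution rules line by line and return the cleaned code."""
    fixed_lines = []
    for line in julia_source.splitlines():
        for pattern, replacement in PYTHONISM_RULES:
            line = pattern.sub(replacement, line)
        fixed_lines.append(line)
    return "\n".join(fixed_lines)

if __name__ == "__main__":
    snippet = "if status == True and data > 0"
    print(fix_pythonisms(snippet))  # -> if status == true && data > 0
```

Note that a naive line-by-line substitution like this will also rewrite matches inside string literals and comments; handling those edge cases is exactly where suggestions for new or smarter rules would help.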
Questions for the Community:
- Is this tool necessary? Do you encounter these Pythonisms frequently enough that an automated cleanup tool would significantly improve your workflow when using AI code assistants?
- Does this approach make sense? We chose Python because it’s natively runnable in most AI sandboxes. Does using a Python tool to enforce Julia style feel strategically sound, or are there Julia-native approaches better suited for LLM integration? (I did a quick search but didn’t find a direct equivalent focused on Pythonisms.)
- Is something similar already widely used? I’m aware of excellent tools like `ReLint.jl` and `JET.jl`, but I’m looking for a pre-processor focused on basic, LLM-induced syntax errors.
I’ve put the early-stage code on GitHub. Any feedback, criticism, or suggestions for new rules would be greatly appreciated as I try to validate this concept.
GitHub Repository (Early Stage):
Thank you for your thoughts and honesty.
Best regards,
[Aox]