I have been using Manus not only to write code (mostly Python) but to run and automatically debug it as well, produce reports, etc. It can do this easily because it has the most popular languages pre-installed on Ubuntu images on its own servers, which it executes efficiently. For Julia, however, there is no such image. Manus is willing to install Julia, load packages, and run them, but it has to do this fresh with each new conversation/query, which (apart from being time-consuming) is very expensive in tokens. I have managed to get the speed/cost down considerably by supplying specific Julia version numbers and the associated Project/Manifest files, but I still face the installation overhead each time I want it to write or debug something in Julia for me. Additionally, Manus seems to struggle to debug Julia. For instance, when Flux renamed `ADAM` to `Adam` in a version update, Manus was not able to figure this out from the online documentation or discussions, which cost me lots of time and money! I am wondering if anyone knows of another platform, or a more efficient way, to get an LLM to write, run, and debug Julia code, with better familiarity with Julia (and the quirks and shortcomings of its documentation, etc.), and maybe even pre-loaded images and package documentation that would be quicker and more cost-effective.
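For anyone following along, the version pinning mentioned above can be done with a small `Project.toml` like the sketch below. This is only an illustration, not a recommendation; the version bounds are placeholders for whatever you actually use:

```toml
# Project.toml -- illustrative sketch; pin the versions you actually need
[deps]
Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"

[compat]
Flux = "0.14"
julia = "1.10"
```

With a `Project.toml` and the matching `Manifest.toml` supplied up front, `julia --project -e 'using Pkg; Pkg.instantiate()'` reproduces the exact environment in one step, which is what cuts the per-session setup cost. (If I recall correctly, the `ADAM` → `Adam` rename landed around Flux 0.13, when the optimisers were reworked.)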
What are the advantages of using Manus with cloud execution compared to tools like Copilot, running/debugging the generated code locally in VS Code?
Because (as far as I know - correct me if I am wrong) I have to be involved in the process: it makes suggestions, I try them and give it feedback, etc. I would rather tell it to suggest and try out solutions itself without me being involved. As far as I know, Copilot does not do that (or can it?)
I'm not interested in that specific use case, so I haven't pushed it to its limits, but Copilot has an agentic mode that will work autonomously.
You can "vibe code" with the Copilot chat, and it will iteratively work on things. The furthest I've gone is to ask it to generate tests, then ask it to run those tests until they pass. It worked independently through several bugs and came back when things were clean.
It had some linting errors and asked permission to solve those. It did.
Like I said, I'm not sure how far that can be pushed, but it works pretty well and integrates with Julia's best supported IDE.
Thank you so much for that. I have never seen that explicitly announced or documented. Some time ago, I migrated from VSCode to Cursor (which is integrated with Claude/Sonnet), which iteratively works on things, but I had never seen it do that "driverless". I will see if I can get that working. Is there a special name for that feature?
[PS I just tried it out - this now works in Cursor as well! This is new since I last tried it!! Thanks!]
When you install Copilot, you get a chat window. At the bottom of the chat window is a drop-down with different modes. I don't remember them off the top of my head, but it's something like "Chat", "Review", and "Agent".
Good luck!