I was wondering about the exact usage of femtolisp in Julia. I’ve Googled around but nothing exact came up, though there were some references in the old archives.
I know that the Julia parser has a native implementation now. Could anyone please explain the exact usage of and reason for femtolisp within the source code, and the reason for its inclusion in the executable?
Syntax lowering is also in femtolisp (julia-syntax.scm), and has not yet been ported. There are some bootstrap issues which would need solving, and not a strong-enough reason to do so at this time.
Lisp is great for writing parsers. Femtolisp is a neat, self-contained dialect (actually closer to Scheme than to CL) and implementation. It is included in the executable because it is an interpreter, and it runs the parser code.
Basically, you can ignore it unless you want to modify the parser (for new surface syntax).
And apart from the source, is there any blog/article you know of that would help me distinguish the various aspects of the compiler system?
Say, we know that Julia has intentionally done away with TCO, mutual-recursion optimizations, etc., and there’s good reasoning behind it. The point is: at which layer does one need to work to create such extensions?
Does femtolisp only translate the Julia surface syntax to this pseudo-typed-Scheme syntax, or something more? I see that Base is defined in Julia itself, so what exactly does the C/C++ part do?
I’ve gone through the code_lowered thing; it seems useful.
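(For anyone following along, a minimal sketch of what that looks like in current Julia — `@code_lowered` shows the output of the lowering phase mentioned above, i.e. the desugared IR produced from julia-syntax.scm, before type inference and codegen. The function `f` here is just a made-up example.)

```julia
# A throwaway function to inspect:
f(x) = 2x + 1

# @code_lowered returns the lowered IR (a Core.CodeInfo) for this call.
# This is the stage right after the femtolisp parser + lowerer have run.
ci = @code_lowered f(3)
println(ci)
```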
Sorry but I don’t understand what you mean here. Why would you need anything else?
AFAIK the femtolisp interpreter is just used for parsing, i.e. it maps strings to ASTs. That’s all it does. Some C functions (e.g. jl_parse_string) are glue to call it.
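(A quick sketch of that string-to-AST mapping from the Julia side, assuming current Julia where the user-facing entry point is `Meta.parse` — which ultimately goes through that C glue into the femtolisp parser:)

```julia
# Femtolisp's job ends here: a source string goes in, an Expr (the AST)
# comes out. Everything after this point is handled elsewhere.
ex = Meta.parse("x + 2y")
println(typeof(ex))   # Expr
println(ex.head)      # :call
println(ex.args)      # the callee :+ followed by the arguments
```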
Well, most of it, of course not everything.
Again, you could be a very advanced Julia programmer without ever having to touch either femtolisp or C code. I mostly read the femtolisp part out of curiosity (I like Lisp), not to solve a practical problem.
On a slightly related tangent: a while ago there was an attempt to switch from femtolisp to Chicken Scheme, which compiles Scheme to C. It was deemed too resource-intensive in the end, I think. Anyway, see https://github.com/JuliaLang/julia/issues/7977#issuecomment-52172600.
So, if I want to try out features like mutual recursion or have a stab at tail-call optimization: after the femtolisp parser and the lowered-code generator, it’s basically codegen.cpp that takes over and communicates with LLVM?
I know that this has been debated to death, but as mentioned in the LLVM 5 docs there is clearly a tail call optimization in place. So, if not Julia then, theoretically speaking, other languages might still leverage it. https://github.com/rust-lang/rfcs/issues/271
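(For concreteness, here is the kind of code this debate is about — a sketch, not anything from the Julia sources. Julia does not guarantee tail-call elimination, so deeply nested mutual recursion like this can overflow the stack; the usual workaround is to rewrite the tail calls as a loop by hand:)

```julia
# Mutually recursive definitions; each call is in tail position, but
# Julia makes no guarantee that the stack frames are eliminated:
is_even(n) = n == 0 ? true  : is_odd(n - 1)
is_odd(n)  = n == 0 ? false : is_even(n - 1)

# Manual rewrite into a loop -- "tail-call optimization by hand".
# This version runs in constant stack space regardless of n.
function is_even_iter(n)
    even = true
    while n > 0
        even = !even
        n -= 1
    end
    return even
end

println(is_even_iter(10^7))  # true; the recursive version may overflow here
```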
Another thing: would there be any performance gain if Julia switched from femtolisp+C++ to Chez+Rust? I mean, how does one really measure that objectively?
I meant that, as @kevin.squire mentioned about Chicken Scheme: how about using something like plain old Racket or Chez or xyz? Why does femtolisp fit the internals well?
A good test might be simply building Julia itself (at least in its current, very large form), and measuring the time from when it starts compiling all the Julia code.
AFAIK it does not need to “fit” the rest of the “internals” beyond parsing. Once it generates the AST, its job is done. Anything that would parse and generate the same AST could be a replacement (given other features, like easy maintenance, being self-contained, fast enough, friendly license, etc). But it works, so apparently its replacement is not a high priority.
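(The contract really is that small. A sketch, using current Julia’s `Meta.parse`: any replacement parser just has to turn the same source string into a structurally equal `Expr`:)

```julia
# The entire interface a replacement parser would need to satisfy:
# same string in, same AST out.
@assert Meta.parse("1 + 1") == :(1 + 1)
```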
Also, removing femtolisp would violate Greenspun’s tenth rule even more blatantly. The fact that femtolisp is neither bug-ridden nor slow is already pushing it.
The important question is: is it useful building something on top of a Lisp that Jeff Bezanson entirely controls and can modify as needed to suit Julia’s needs?
[quote=“abhi18av, post:4, topic:1902”]
And apart from the source is there any blog/article you know of that’d help me distinguish various aspects of the compiler system.
[/quote]
There is also an old (2014) JuliaCon video about Julia internals: Introduction to Julia Internals - YouTube
@Tamas_Papp Yup, I’ve read the devdocs; it’s explained pretty clearly over there. I usually just download the PDF, and I can’t remember seeing those there. Still, the docs have satisfied much of my curiosity.
I’m not sure where you read about a “definite” plan. There are certainly people who have expressed an interest, but e.g. as @JeffBezanson wrote:
The fact is that no one appears to be seriously working on making Julia self-hosting right now. There are far too many more interesting problems to work on.
I think there’s still interest in moving the parsing and lowering code from Scheme to Julia, if we could reap some performance gains (which I feel we could) by doing so.
Currently, isn’t most all of the compilation time spent in those two phases?
Speeding that up could make a big difference for people who’d like to use Julia as their scripting language also.
Also, having the parsing and lowering code in Julia would greatly expand the number of people who would be able to improve it.
There are a few things in C (such as most of the stuff in utf8proc) that could be done in Julia, and made more efficient also.
As for changing any of the parts that are currently in C++ to Julia, I agree with what Jeff said.