The problem is that it’s not entirely clear what you mean by that. Every program starts out with some literal numbers, like a = 2.72, and then applies functions to them, like sin(a). By default a will be a Float64, and the result will be a Float64 as well. If you start out with a Float32, like a = 2.72f0, then sin(a) will be a Float32. The same goes for the vast majority of functions in Julia, whether built-in functions like sum, maximum, +, and *, or functions you write yourself: you typically get back the type you put in.
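A minimal sketch of that behavior, checking the result types in the REPL:

```julia
a = 2.72      # Float64 literal
b = 2.72f0    # Float32 literal

typeof(sin(a))                # Float64 in, Float64 out
typeof(sin(b))                # Float32 in, Float32 out
typeof(sum(Float32[1, 2]))    # sum preserves the element type: Float32
```

The same pattern holds for almost any function applied elementwise or as a reduction: the output precision follows the input precision.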
One exception is mixed-type arithmetic: if you compute a + b, where a is a Float32 and b is a Float64, then a is promoted to Float64, and you get back a Float64. Something similar happens if you compute sin(Int32(2)): you get back a Float64. Passing integers into real-valued functions isn’t good practice anyway, even though Julia does define a conversion from Int32 to Float64 somewhere for this case.
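Both promotion cases can be seen directly:

```julia
a = 1.5f0    # Float32
b = 2.5      # Float64

typeof(a + b)          # Float64: a is promoted to Float64 before adding
typeof(sin(Int32(2)))  # Float64: the Int32 is converted to a float first
```

So as soon as a Float64 (or an integer, for functions like sin) enters the computation, the wider type wins.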
Ordinarily it suffices to use Float32 and Int32 explicitly in your number literals; most functions will then stick to those types. That is actually one of Julia’s main features: your functions are compiled for the types you call them with.
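To illustrate with a hypothetical function f (just an example, not from your code): the same definition gets a separate compiled specialization for each argument type, and the result type follows the input.

```julia
f(x) = x * x + 1    # the integer literal 1 is promoted to x's float type

typeof(f(2.0f0))    # Float32: compiled and evaluated in Float32
typeof(f(2.0))      # Float64: a separate specialization for Float64
```

You can inspect the specialized code with @code_typed f(2.0f0) if you want to confirm that the whole computation stays in Float32.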
Apart from your own computations, you certainly do not want internal pointer arithmetic and the like to be done with 32-bit types on a 64-bit machine.