I came up with a neat Julia solution using sets for the classic pangram check. However, I'm a bit perplexed: the C version benchmarks at over 200 times faster, and even Python outperforms my Julia code substantially. I'm curious whether anyone has insights into why this is happening. Here is my Julia implementation, along with the others:
In C
#include <stdio.h>
#include <stdlib.h>
#define ALPHABET_SIZE 26
int isPangram(char* str) {
    int alphabet[ALPHABET_SIZE] = {0};
    for (int i = 0; str[i] != '\0'; i++) {
        if (str[i] >= 'A' && str[i] <= 'Z') {
            alphabet[str[i] - 'A'] = 1;
        } else if (str[i] >= 'a' && str[i] <= 'z') {
            alphabet[str[i] - 'a'] = 1;
        }
    }
    for (int i = 0; i < ALPHABET_SIZE; i++) {
        if (alphabet[i] == 0) {
            return 0;
        }
    }
    return 1;
}
int main(int argc, char** argv) {
    if (argc < 2) {  /* argv[0] is the program name, so user input starts at argv[1] */
        printf("Provide at least one letter!\n");
        exit(0);
    }
    for (int i = 1; i < argc; i++) {
        if (isPangram(argv[i])) {
            printf("yes, it's a pangram \n");
        } else {
            printf("no, it's not a pangram \n");
        }
    }
    exit(0);
}
In Python
import sys
def verifica_letras(string):
    letras_usadas = set(string.lower())
    letras_alfabeto = set('abcdefghijklmnopqrstuvwxyz')
    return letras_usadas == letras_alfabeto

if len(sys.argv) <= 1:
    print("Provide at least one letter!")
    sys.exit()

if verifica_letras(sys.argv[1]):
    print("yes, it's a pangram \n")
else:
    print("no, it's not a pangram \n")
In Julia
function ispangram(input)
    'a':'z' ⊆ lowercase(input) ? println("Yes, it's a pangram.") : println("No, it's not a pangram.")
end
length(ARGS) > 0 ? ispangram(ARGS[1]) : println("Provide at least one letter!")
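As a side note, here is a minimal sketch of how I could time just the set check inside an already-running session, assuming the BenchmarkTools package is installed; it measures only the ⊆ check itself, not process startup or compilation:

using BenchmarkTools

test_string = "the quick brown fox jumps over the lazy dog"  # throwaway pangram, just for illustration

# Interpolating with $ avoids benchmarking the global variable lookup itself.
@btime 'a':'z' ⊆ lowercase($test_string)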
Benchmarks with hyperfine - C
> hyperfine --shell=none --warmup 30 "./a.out wjjqevffkkgbcehhiqpvqutmwxawzvjnbvukmlzxyhkgfddzfjhcujnlkjbdfgghjhujkiuytghjioplkjhgfdsaqwertyujioplkjhgfdsaqwertzuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkj"
Result
Time (mean ± σ): 701.0 µs ± 152.1 µs [User: 525.3 µs, System: 67.9 µs]
Range (min … max): 523.8 µs … 3765.6 µs 3569 runs
Benchmarks with hyperfine - Python
> hyperfine --shell=none --warmup 30 "python3 pangram.py wjjqevffkkgbcehhiqpvqutmwxawzvjnbvukmlzxyhkgfddzfjhcujnlkjbdfgghjhujkiuytghjioplkjhgfdsaqwertyujioplkjhgfdsaqwertzuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkj"
Result
Time (mean ± σ): 15.9 ms ± 3.8 ms [User: 11.6 ms, System: 3.8 ms]
Range (min … max): 12.4 ms … 37.7 ms 210 runs
Benchmarks with hyperfine - Julia
hyperfine --shell=none --warmup 30 "julia --compile=all pangram.jl wjjqevffkkgbcehhiqpvqutmwxawzvjnbvukmlzxyhkgfddzfjhcujnlkjbdfgghjhujkiuytghjioplkjhgfdsaqwertyujioplkjhgfdsaqwertzuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsazxcvbnmlkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkjhgfdsaqwertyuioplkj"
Result
Time (mean ± σ): 266.0 ms ± 23.4 ms [User: 228.5 ms, System: 112.0 ms]
Range (min … max): 240.1 ms … 314.1 ms 12 runs
Puzzled
C being much faster is expected, but I'm surprised by the gap between my Python and Julia implementations: Python finishes in about 16 ms, while Julia takes roughly 17 times as long (266 ms). Could anyone familiar with both languages help me understand why Julia is lagging here?
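My untested assumption is that most of those 266 ms are Julia's process startup plus first-call compilation rather than the check itself. A rough sketch of how that could be separated out inside a single Julia session, timing the first call (which compiles) against a second one (already compiled):

include("pangram.jl")  # defines ispangram; prints the usage hint because ARGS is empty in the REPL
@time ispangram("the quick brown fox jumps over the lazy dog")  # first call: includes compilation time
@time ispangram("the quick brown fox jumps over the lazy dog")  # second call: compiled code only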