Regular expressions seem to be returning offsets in bytes, not characters. Is this the intended behavior? Is there a way to get the offsets in characters?
That’s just the way it was designed.
If you want offsets in terms of Unicode codepoints, you could use the UTF32String type from the LegacyStrings package.
You don’t need UTF32String or LegacyStrings. Maybe you can tell us more about what you want to do?
String indices are in bytes in Julia (at least for the default String type) because that’s the only efficient way of accessing a character in variable-length encodings like UTF-8 or UTF-16. Counting characters requires iterating over the string from its beginning.
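To make the byte-vs-character distinction concrete, here’s a quick REPL sketch (using Julia 1.x names; the string is just an arbitrary example):

```julia
julia> s = "αβγ"        # each of α, β, γ is 2 bytes in UTF-8

julia> length(s)         # number of characters (codepoints)
3

julia> lastindex(s)      # byte index where the last character starts
5

julia> s[3]              # byte 3 is where 'β' starts, so this works
'β': Unicode U+03B2 (category Ll: Letter, lowercase)

julia> s[2]              # byte 2 is in the middle of 'α', so this throws
ERROR: StringIndexError: invalid index [2], valid nearby indices [1]=>'α', [3]=>'β'
```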
If what you really need is the number of characters before the first match, you can just do something like length(s[1:m1.offsets[1]]) or (a bit more efficient) length(SubString(s, 1, m1.offsets[1])). But beware that “character” is a subtle notion, which does not necessarily correspond to Unicode codepoints. See graphemes if what you need is the user-perceived number of characters.
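For example (the string and pattern here are made up; `m.offset` is the byte offset of the whole match, while `m.offsets` holds the capture-group offsets used above):

```julia
julia> s = "αβγ def"

julia> m = match(r"def", s)
RegexMatch("def")

julia> m.offset                           # byte offset where the match starts
8

julia> length(SubString(s, 1, m.offset))  # characters up to and including the match start
5
```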
You can get the offset in characters from the ind2chr function.
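Note that ind2chr only exists on older Julia versions; it was removed in 0.7. A sketch of the old call alongside its modern equivalent:

```julia
julia> s = "αβγ"

julia> ind2chr(s, 3)     # Julia ≤ 0.6: byte index 3 is the 2nd character
2

julia> length(s, 1, 3)   # Julia ≥ 0.7 equivalent: count characters in bytes 1:3
2
```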
The reason that they return the offsets in bytes (“code units” of the underlying UTF-8 encoding) is that this is how Julia’s String is indexed, so byte offsets are usually the most useful thing to know (e.g. to extract substrings from the original string).
But you’re right that you may want to use graphemes, e.g. length(graphemes(SubString(s, 1, m2.offsets[2]))), if you want to count user-perceived characters.
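A toy example of where codepoints and graphemes diverge (combining characters; on Julia 1.x, graphemes lives in the Unicode standard library):

```julia
julia> using Unicode

julia> s = "e\u0301"          # 'e' followed by a combining acute accent: "é"

julia> length(s)              # two codepoints
2

julia> length(graphemes(s))   # one user-perceived character
1
```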
I was using a mixture of length and match.offsets to compute indices into strings. That was the bug: length counts characters while match.offsets are byte offsets, so the two can’t be mixed. Switching to using only match.offsets fixed the problem.
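For anyone hitting the same thing, a minimal sketch of the failure mode (made-up string):

```julia
julia> s = "αβ x"

julia> m = match(r"x", s)
RegexMatch("x")

julia> s[m.offset]           # byte offset of the match: valid as a string index
'x': ASCII/Unicode U+0078 (category Ll: Letter, lowercase)

julia> length("αβ ")         # character count: 3, NOT the byte index of 'x'
3

julia> s[length("αβ ") + 1]  # mixing length with indexing: byte 4 is mid-character
ERROR: StringIndexError: invalid index [4], valid nearby indices [3]=>'β', [5]=>' '
```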