Optimize the lexer
elegaanz opened this issue
Ana Gelez commented
The current algorithm works well but is terribly slow. Here is another algorithm that should work just as well and be much faster:
```
start = 0
end = 1
while start < src.len()
    if end > src.len()
        error "unterminated token"
    matches = tokenize(src[start..end])
    if matches.len() == 1
        push matches[0]
        start = end
        end = start + 1
    else if matches.len() == 0
        error "unexpected character"
    else
        end = end + 1
```
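
For what it's worth, here is a minimal Rust sketch of this idea. Everything in it is hypothetical: the `Token` enum, the `candidates` helper, and the fixed-string patterns are placeholders, not the project's actual lexer. I read `tokenize(src[start..end])` as "the patterns the current slice could still grow into", and I emit only once the single remaining candidate is fully matched; with the literal "exactly one match" rule, a multi-character token like `let` would be emitted after its first character.

```rust
/// Hypothetical token kinds; the real lexer has its own set.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Token {
    Let,
    Assign,
    Eq,
    Plus,
}

/// Fixed-string patterns for this sketch. A real lexer would also have
/// open-ended patterns (identifiers, numbers), which need extra care.
const PATTERNS: &[(Token, &str)] = &[
    (Token::Let, "let"),
    (Token::Assign, "="),
    (Token::Eq, "=="),
    (Token::Plus, "+"),
];

/// Patterns that the current window could still grow into
/// (the window is a prefix of the pattern, or matches it exactly).
fn candidates(window: &str) -> Vec<(Token, &'static str)> {
    PATTERNS
        .iter()
        .copied()
        .filter(|(_, text)| text.starts_with(window))
        .collect()
}

/// Byte-indexed for simplicity; assumes ASCII input.
fn lex(src: &str) -> Result<Vec<Token>, String> {
    let mut tokens = Vec::new();
    let mut start = 0;
    let mut end = 1;
    while start < src.len() && end <= src.len() {
        let window = &src[start..end];
        let m = candidates(window);
        match m.len() {
            // Unambiguous: emit once the single remaining pattern is
            // fully matched, otherwise keep growing the window.
            1 if m[0].1 == window => {
                tokens.push(m[0].0);
                start = end;
                end = start + 1;
            }
            0 => return Err(format!("unexpected character at byte {start}")),
            _ => end += 1,
        }
    }
    // The window ran past the end of the input while still ambiguous
    // or incomplete.
    if start < src.len() {
        return Err(format!("unterminated token starting at byte {start}"));
    }
    Ok(tokens)
}

fn main() {
    // "let" narrows to a single candidate after one character but is only
    // emitted once all three characters are consumed; "=" stays ambiguous
    // with "==" until the second character decides it.
    println!("{:?}", lex("let==+")); // Ok([Let, Eq, Plus])
}
```

One caveat with the algorithm as written: it still mis-lexes inputs like `=+`, where the window grows past a valid shorter token (`=`) and then hits the zero-match error. A real implementation would probably want to remember the last position where a complete match existed and fall back to it (maximal munch).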