HalfKern is essentially a font auto-kerning tool masquerading as a kerning audit tool.
For every letter pair it considers, the tool blurs the two glyphs' renderings and spaces them so that the blurred images overlap by a certain amount. That target amount is found by first calibrating against the "ll", "nn", and "oo" pairs.
The tool currently does not store autokerning results in the font.
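The calibrate-then-match idea above can be sketched as a toy model. Below, glyphs are reduced to made-up 1-D "ink profiles" and a plain box blur stands in for the blur HalfKern actually applies; the function names and numbers are illustrative, not the tool's real code:

```python
def blur1d(profile, radius=3):
    # Box blur as a crude stand-in for blurring a glyph rendering; the
    # blurred ink spills `radius` pixels past the glyph box on each side.
    out = [0.0] * (len(profile) + 2 * radius)
    for i, v in enumerate(profile):
        for j in range(2 * radius + 1):
            out[i + j] += v / (2 * radius + 1)
    return out

def overlap_at(left, right, gap):
    # Place `right` `gap` pixels after `left` ends and sum the pointwise
    # products where the blurred images overlap (negative gap = tighter).
    start = len(left) + gap
    return sum(left[start + i] * v
               for i, v in enumerate(right)
               if 0 <= start + i < len(left))

def spacing_for_target(left, right, target, gaps=range(-12, 13)):
    # Pick the spacing whose overlap best matches the calibration target.
    return min(gaps, key=lambda g: abs(overlap_at(left, right, g) - target))

# Calibrate on a reference pair (HalfKern uses "ll", "nn", and "oo"), then
# solve other pairs for the same target overlap.
o = blur1d([0, 0, 1, 1, 1, 0, 0])        # toy ink profile of one glyph
target = overlap_at(o, o, -5)            # overlap at the reference spacing
print(spacing_for_target(o, o, target))  # → -5
```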
$ python3 kern_pair.py FontFile.ttf --dict dictionary.txt
$ python3 kern_pair.py FontFile.ttf PairString
$ python3 kern_pair.py Roboto-Regular.ttf --dict /usr/share/dict/words
fi 0 -4
lt 4 0
rb 4 0
rk 4 0
dy 4 0
rh 4 0
yb 4 0
dv 4 0
Ti -4 0
dt 5 0
yh 4 0
Ap 4 0
Ly -1 -6
kp 4 0
ET -3 1
PA -3 -7
RT 0 -4
FA -3 -8
Ut 4 0
kj 6 0
LT -9 -13
IX -3 1
Aj 5 0
GT -4 0
Mt 4 0
Lj 4 0
YU -1 -5
Ht 4 0
Kj 6 0
hT -7 0
TJ -7 -12
cT -6 0
vb 4 0
rZ -5 0
oT -8 0
vh 4 0
Jt 4 0
Kp 4 -1
FJ -9 -13
Rj 4 0
HX -3 1
VJ -5 0
The first value is the pair of letters to kern. The second value is the
suggested kerning value, in percent of EM, and the last value is the kerning
currently in the font. Only pairs where the two kerning values differ by more
than a tolerance amount are shown. This tolerance can be set using -t or --tolerance.
The default tolerance is 3.3%.
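The reporting filter amounts to a one-line comparison. A sketch, assuming a strict "differs by more than the tolerance" cutoff (whether the tool actually uses > or >= is not specified here):

```python
def is_reported(autokern, existing, tolerance=3.3):
    # A pair is listed only when the suggestion and the font's kerning
    # disagree by more than the tolerance (all values in percent of EM).
    return abs(autokern - existing) > tolerance

print(is_reported(-1, -6))  # the "Ly" pair from the list above → True
print(is_reported(-3, -4))  # within tolerance → False
```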
The -l or --letters-only option makes the tool consider kerning only between
two letters (i.e. no punctuation). The tool also ignores digits, since they
typically have a fixed width and no kerning by design.
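The filtering described here can be sketched as follows; the function name is hypothetical, not the tool's actual code:

```python
def considered(a, b, letters_only=False):
    # Hypothetical sketch of the pair filter described above: digits are
    # always skipped, and --letters-only further requires both characters
    # to be letters (no punctuation).
    if a.isdigit() or b.isdigit():
        return False  # digits are fixed-width by design
    if letters_only:
        return a.isalpha() and b.isalpha()
    return True

print(considered("L", "T"))                      # → True
print(considered("7", "T"))                      # → False
print(considered("T", ".", letters_only=True))   # → False
```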
To inspect the pairs reported, you can use the kern_pair.py tool:
$ python3 kern_pair.py Roboto-Regular.ttf LT
LT autokern: -9 (-184 units) existing kern: -13 (-275 units)
Saving kern.png and kerned.png
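The two unit numbers in that output are just the percentages scaled by the font's units-per-em (UPM). A quick check, assuming Roboto's 2048 UPM; note the printed percentages are rounded, so converting back from them is only approximate:

```python
def percent_em_to_units(percent, upm=2048):
    # Convert a kern value in percent of EM to font units.
    return round(percent / 100 * upm)

print(percent_em_to_units(-9))  # → -184, matching the suggested "LT" kern
```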
In this case the tool thinks the pair "LT" is over-kerned in Roboto.
Obviously that's up to taste. But here are the two files kern.png and kerned.png generated by the tool:
In the kern.png image, the first line is with no kerning, the second line
is the tool's suggestion, and the third line is the existing font kerning.
In the kerned.png image, the pair is showcased between lowercase and uppercase
letters. The three rows, similarly, show no kerning, the tool's suggestion,
and the existing kern.
The tool has two different ways to form an envelope around each glyph.
This can be set using --envelope sdf (default) or --envelope gaussian.
It also has two different ways to summarize the overlap of two glyph envelopes.
This can be set using --reduce sum (default) or --reduce max.
This gives four different combinations of modes to run the tool. Which one works best for a project is subjective and should be experimented with; the defaults, in my opinion, generate the best results.
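The two --reduce modes differ only in how the per-pixel overlap of the two envelopes is collapsed into a single number. A toy sketch, with made-up overlap values:

```python
def summarize(overlap_map, mode="sum"):
    # Collapse a per-pixel overlap map into one number, as the two
    # --reduce modes do.
    if mode == "sum":
        return sum(overlap_map)  # total overlap "mass" (default)
    if mode == "max":
        return max(overlap_map)  # the single worst point of overlap
    raise ValueError(f"unknown mode: {mode}")

overlap_map = [0, 1, 4, 2, 0]  # made-up per-pixel overlap values
print(summarize(overlap_map))         # → 7
print(summarize(overlap_map, "max"))  # → 4
```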
To produce per-language dictionaries to use with this tool, you can use the aosp-test-texts repository, the libreoffice spellcheck dictionaries, or the harfbuzz-wikipedia-testing.
TODO: Expand on how to use these.
For a simple English wordlist on Linux and Mac platforms you can use /usr/share/dict/words.
To see the envelope for one character, use:
$ python3 kern_pair.py fontfile.ttf X
This will generate the envelope image for X and save it to envelope.png.