Last week, I wrote about recognizing the manual testing in a typeface design process.
By calling it manual testing, as software developers might, it becomes easier to talk about automated testing in typeface design.
The ease of getting started with automated browser testing has improved dramatically for web developers over the last few years: people of various skill levels can now write browser tests that simulate user actions. Those simulations can even be run on multiple browsers, and at different screen sizes. This gives developers increased confidence that, as they make other changes, the previously tested pieces haven’t been negatively affected.
It’s gotten to the point where adding a single, sweeping test that simulates a visitor running through the most important flow on a website is not necessarily difficult. It’s now a viable option for this to be the very first automated test someone adds to a project.
Fonts are software. I find it useful to look to other parts of the software field, to see where type designers might benefit from well-established practices in spaces that move much faster—perhaps too fast.
But the important part of this particular parallel is the goal, not the implementation.
For fonts, I suppose you could try to do the same, but you’ll end up somewhere strange: automating the installation of your font on your system, scripting a popular desktop publishing application to open, exporting a document with some text set in your font, and checking that the output renders as expected. Phew!
This might be the literal equivalent to testing “the most important flow on a website,” but it’s excessive for a font—certainly it’s excessive as the first automated test in your project.
Instead of copying the process, where might we end up if we copied the goal: how might we increase confidence in the software’s quality, with much less effort than doing the same thing manually?
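One possible shape for that answer, as a sketch: skip the rendering pipeline entirely and assert on the font binary itself. This assumes the fontTools library is installed; the tiny FontBuilder font built here is a hypothetical stand-in for a real exported font, purely so the example is self-contained.

```python
# A minimal sketch of a lightweight font test: instead of scripting a
# desktop app, open the font binary directly and assert on its contents.
# (We build a tiny throwaway font with FontBuilder so this runs on its
# own; in practice you would open your own exported font file.)
import io

from fontTools.fontBuilder import FontBuilder
from fontTools.pens.ttGlyphPen import TTGlyphPen
from fontTools.ttLib import TTFont


def build_tiny_font() -> bytes:
    """Build a minimal one-letter font, standing in for a real export."""
    fb = FontBuilder(1000, isTTF=True)
    fb.setupGlyphOrder([".notdef", "A"])
    fb.setupCharacterMap({ord("A"): "A"})
    # Draw a simple rectangle as the "A" outline.
    pen = TTGlyphPen(None)
    pen.moveTo((0, 0))
    pen.lineTo((0, 700))
    pen.lineTo((500, 700))
    pen.lineTo((500, 0))
    pen.closePath()
    fb.setupGlyf({"A": pen.glyph(), ".notdef": TTGlyphPen(None).glyph()})
    fb.setupHorizontalMetrics({"A": (500, 0), ".notdef": (500, 0)})
    fb.setupHorizontalHeader(ascent=800, descent=-200)
    fb.setupNameTable({"familyName": "Sketch", "styleName": "Regular"})
    fb.setupOS2()
    fb.setupPost()
    buf = io.BytesIO()
    fb.save(buf)
    return buf.getvalue()


def missing_characters(font_data: bytes, required: str) -> set:
    """Return which of the required characters the font cannot map."""
    cmap = TTFont(io.BytesIO(font_data)).getBestCmap()
    return {ch for ch in required if ord(ch) not in cmap}


font_data = build_tiny_font()
print(missing_characters(font_data, "AB"))  # the tiny font only covers "A"
```

A check like this is nowhere near as thorough as eyeballing proofs, but it catches a real class of regressions (a glyph accidentally dropped from an export) in milliseconds, which is the spirit of copying the goal rather than the process.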
Until next time,