Å = 197 = c5
What names do you use when you’re determining the character coverage of the typeface you’re working on? My guess is that you use the character itself, for example “Å”, or a more human-friendly name, like “Aring” in Glyphs’ “nice name” naming scheme.
You could also choose to use the underlying Unicode code point as the glyph name: uni00C5. In most other situations (where the plus sign is allowed), you’d see this written as U+00C5. And elsewhere, you might see it represented simply as 197.
How do you get from U+00C5 to 197, and back? How are these the same thing?
| Character | Code Point | Hexadecimal | Decimal |
| --- | --- | --- | --- |
| Å | U+00C5 | 00c5 | 197 |
197 is the base-10 decimal number, the numeral system we’re more accustomed to using. 00c5 is the same thing in the base-16 hexadecimal numeral system. Instead of using 10 symbols (0 through 9), it uses 16 symbols (0 through 9 and A through F).
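To do that conversion by hand: in hexadecimal, c stands for 12, so c5 means 12 × 16 + 5 = 192 + 5 = 197.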
The number here is really just “c5”, but when you write it as a Unicode code point, it has to be at least four characters long, with leading zeroes filling in anything missing. So “c5” becomes “00c5”, or U+00C5. They are all different ways of writing the same thing.
The code that does the math for us
It’s one of those things I find easier to understand when you can watch the value change alongside the relevant math, or in this case, the code that does the math for us.
I’m going to do this in JavaScript, so if you’d like, you can press Command + Option + I on macOS or Control + Shift + I on Windows to open the browser’s developer console and do the same.
Or, especially if you’re on mobile, you can open this CodePen and see the simple back-and-forth.
There and back
Hopefully, even if you’ve never written any JavaScript, all you need to know here is that console.log is what displays, or prints, the result for us.
We can start out by simply displaying “Å.”
```js
console.log('Å')
// Logs “Å”
```
Then, we’ll use JavaScript to get the decimal base-10 number:
```js
let decimal = 'Å'.charCodeAt()
console.log(decimal)
// Logs “197”
```
Next, we’ll convert the decimal number to the hexadecimal base-16 text string (which is why we use 16 here):
```js
let hexadecimal = decimal.toString(16)
console.log(hexadecimal)
// Logs “c5”
```
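That 16 is the radix, or base, and toString accepts other radixes too. For example, here’s the same number in base-2 binary:

```js
console.log((197).toString(2))
// Logs “11000101”
```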
As I mentioned earlier, the value doesn’t need leading zeroes, but we do want to show the result in the Unicode code point format. So let’s pad the start of the text string with the character 0 until it’s at least four characters long:
```js
let hexadecimalZeroPadded = hexadecimal.padStart(4, '0')
console.log(hexadecimalZeroPadded)
// Logs “00c5”
```
Now we have 00c5! We don’t strictly need this next step to convert back to the original character, but let’s display it in the Unicode code point format anyway:
```js
let unicodeCodePoint = 'U+' + hexadecimalZeroPadded.toUpperCase()
console.log(unicodeCodePoint)
// Logs “U+00C5”
```
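As a side note, this four-digit hexadecimal form is the same one JavaScript uses in its \u string escapes, so you can write the character itself that way:

```js
console.log('\u00C5' === 'Å')
// Logs “true”
```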
Then, we can convert it back to decimal. We pass in 16 again here, this time to tell parseInt that the string we’re parsing is base-16 hexadecimal, not base-10 decimal.
```js
let decimalFromHexadecimal = parseInt(hexadecimalZeroPadded, 16)
console.log(decimalFromHexadecimal)
// Logs “197”
```
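Incidentally, JavaScript also understands hexadecimal numbers written directly in code, as long as you prefix them with 0x:

```js
console.log(0x00c5 === 197)
// Logs “true”
```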
Finally, we can use another built-in JavaScript function to say “we have a character code value (197) that we’d like to convert into the real character.”
```js
let character = String.fromCharCode(decimalFromHexadecimal)
console.log(character)
// Logs “Å”
```
And this logs “Å” like we want.
That’s how we can go from Å to 197 to U+00C5, and back.
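One caveat before we wrap up: charCodeAt and String.fromCharCode work on UTF-16 code units, so this round trip only holds for characters whose code points fit in four hexadecimal digits (up to U+FFFF). For anything beyond that, like most emoji, the newer codePointAt and String.fromCodePoint are the safer pair. Here’s a sketch of the whole round trip using them; the helper names toCodePoint and fromCodePoint are just my own, for illustration:

```js
// A sketch of the full round trip, using the code-point-aware methods
function toCodePoint(character) {
  let decimal = character.codePointAt(0)
  return 'U+' + decimal.toString(16).toUpperCase().padStart(4, '0')
}

function fromCodePoint(codePoint) {
  let decimal = parseInt(codePoint.replace('U+', ''), 16)
  return String.fromCodePoint(decimal)
}

console.log(toCodePoint('Å'))
// Logs “U+00C5”
console.log(fromCodePoint('U+00C5'))
// Logs “Å”
console.log(toCodePoint('😀'))
// Logs “U+1F600” (five digits, so padStart leaves it alone)
```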
Until next time,
Kenneth