Converting Unicode in Python 3: from Character Code to Decimal

Given the Code column in the Wikipedia List of Unicode Characters, the hexadecimal code point can be converted to its decimal value with int(code, 16), and back to the glyph with chr(decimal):


Example 1: The Latin Capital Letter A

Code Glyph Decimal Description
U+0041 A 65 Latin Capital letter A

Python Prompt:

>>> code = '0041'
>>> decimal = int(code, 16)
>>> decimal
65
>>> chr(decimal)
'A'

Example 2: The Cent character

Code Glyph Decimal HTML Description
U+00A2 ¢ 0162 &#162; Cent sign

Python Prompt:

>>> code = '00A2'
>>> decimal = int(code, 16)
>>> decimal
162
>>> chr(decimal)
'¢'

Example 3: The Greek Sigma character

Code Glyph Decimal Description
U+03A3 Σ 931 Greek Capital Letter Sigma

Python Prompt:

>>> code = '03A3'
>>> decimal = int(code, 16)
>>> decimal
931
>>> chr(decimal)
'Σ'

Example 4: Soccer Ball

Code Glyph Decimal Description
U+26BD ⚽ 9917 Soccer Ball

Python Prompt:

>>> code = '26BD'
>>> decimal = int(code, 16)
>>> decimal
9917
>>> chr(decimal)
'⚽'

Note: The Soccer ball did not display correctly in my Windows Shell, but rendered properly when I copied it into a Chrome WordPress textarea.


Example 5: Emoticons

U+1F60E 😎 Smiling Face with Sunglasses

>>> code = '1F60E'
>>> decimal = int(code, 16)
>>> decimal
128526
>>> chr(decimal)
'😎'

Punycode.js: Encoding in JavaScript

Punycode.js is a robust Punycode converter that fully complies with RFC 3492 and RFC 5891.

This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm:

punycode.ucs2

punycode.ucs2.decode(string)

Creates an array containing the numeric code point values of each Unicode symbol in the string. While JavaScript uses UCS-2 internally, this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16.

punycode.ucs2.decode('abc');
// → [0x61, 0x62, 0x63]
// surrogate pair for U+1D306 TETRAGRAM FOR CENTRE:
punycode.ucs2.decode('\uD834\uDF06');
// → [0x1D306]
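
punycode.ucs2.encode is the inverse operation (it shows up again later in this post): it takes an array of numeric code point values and returns a string.

punycode.ucs2.encode([0x61, 0x62, 0x63]);
// → 'abc'
punycode.ucs2.encode([0x1D306]);
// → '\uD834\uDF06'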

Unicode & Character Encodings in Python: A Painless Guide

Python

>>> import unicodedata
>>> print(u"Test\u2014It")
Test—It
>>> s = u"Test\u2014It"
>>> ord(s[4])
8212

>>> chr(732)
'˜'
>>> c = chr(732)
>>> ord(c)
732

https://stackoverflow.com/questions/2831212/python-sets-vs-lists

# A set gives O(1) membership tests, unlike a list (see the Stack Overflow link above).
escape_characters = set()

if ord(c) in escape_characters:
    ...

>>> unicodedata.name(c)
'SMALL TILDE'

JavaScript:

String.fromCharCode(parseInt(unicode,16))

>> c = String.fromCharCode(732);
"˜"
>> c.charCodeAt(0);
732
>> String.fromCharCode(0904)
>> c = String.fromCharCode(parseInt('2014', 16))   // '2014' is hex
"—"
>> c.charCodeAt(0);
8212
>> c = String.fromCharCode(39);
>> c.charCodeAt(0);
39

jsFiddle

var str = String.fromCharCode(e.which);
$('#charCodeAt')[0].value = str.charCodeAt(0);
$('#fromCharCode')[0].value = encodeURIComponent(str);

jQuery String Functions

  • charAt(n): Returns the character at the specified index in a string. The index starts from 0.

    var str = "JQUERY By Example";
    var n = str.charAt(2);
    // Output will be "U"

  • charCodeAt(n): Returns the Unicode value (UTF-16 code unit) of the character at the specified index in a string. The index starts from 0.

    var str = "HELLO WORLD";
    var n = str.charCodeAt(0);
    // Output will be 72

Mathias Bynens: JavaScript Has a Unicode Problem:

As my JavaScript escapes tool would tell you, the reason is the following:

>> 'ma\xF1ana' == 'man\u0303ana'
false

>> 'ma\xF1ana'.length
6

>> 'man\u0303ana'.length
7

The first string contains U+00F1 LATIN SMALL LETTER N WITH TILDE, while the second string uses two separate code points (U+006E LATIN SMALL LETTER N and U+0303 COMBINING TILDE) to create the same glyph. That explains why they’re not equal, and why they have a different length.

However, if we want to count the number of symbols in these strings the same way a human being would, we’d expect the answer 6 for both strings, since that’s the number of visually distinguishable glyphs in each string. How can we make this happen?

In ECMAScript 6, the solution is fairly simple:

function countSymbolsPedantically(string) {
    // Unicode Normalization, NFC form, to account for lookalikes:
    var normalized = string.normalize('NFC');
    // Account for astral symbols / surrogates, just like we did before:
    return punycode.ucs2.decode(normalized).length;
}

The normalize method on String.prototype performs Unicode normalization, which accounts for these differences. If there is a single code point that represents the same glyph as another code point followed by a combining mark, it will normalize it to the single code point form.
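
As a quick check (assuming an ES6 environment where String.prototype.normalize is available), normalizing the decomposed spelling makes the two strings compare equal:

>> 'man\u0303ana'.normalize('NFC') == 'ma\xF1ana'
true
>> 'man\u0303ana'.normalize('NFC').length
6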

>> countSymbolsPedantically('mañana') // U+00F1
6
>> countSymbolsPedantically('mañana') // U+006E + U+0303
6

For backwards compatibility with ECMAScript 5 and older environments, a String.prototype.normalize polyfill can be used.

Turning a code point into a symbol

String.fromCharCode allows you to create a string based on a Unicode code point. But it only works correctly for code points in the BMP range (i.e. from U+0000 to U+FFFF). If you use it with an astral code point, you’ll get an unexpected result.

>> String.fromCharCode(0x0041) // U+0041
'A' // U+0041

>> String.fromCharCode(0x1F4A9) // U+1F4A9
'' // U+F4A9, not U+1F4A9

The only workaround is to calculate the code points for the surrogate halves yourself, and pass them as separate arguments.

>> String.fromCharCode(0xD83D, 0xDCA9)
'💩' // U+1F4A9
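
If you do want to compute the surrogate halves yourself, the UTF-16 encoding formula is short. A rough sketch (variable names are mine, not from the article):

// Split an astral code point (above 0xFFFF) into its UTF-16 surrogate pair.
var codePoint = 0x1F4A9;
var offset = codePoint - 0x10000;    // 0xF4A9
var high = 0xD800 + (offset >> 10);  // 0xD83D, the high surrogate
var low = 0xDC00 + (offset & 0x3FF); // 0xDCA9, the low surrogate
String.fromCharCode(high, low);      // '💩'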

If you don’t want to go through the trouble of calculating the surrogate halves, you could resort to Punycode.js’s utility methods once again:

>> punycode.ucs2.encode([ 0x1F4A9 ])
'💩' // U+1F4A9

Luckily, ECMAScript 6 introduces String.fromCodePoint(codePoint) which does handle astral symbols correctly. It can be used for any Unicode code point, i.e. from U+000000 to U+10FFFF.

>> String.fromCodePoint(0x1F4A9)
'💩' // U+1F4A9

For backwards compatibility with ECMAScript 5 and older environments, a String.fromCodePoint() polyfill can be used.


Getting a code point out of a string

Similarly, if you use String.prototype.charCodeAt(position) to retrieve the code point of the first symbol in the string, you’ll get the code point of the first surrogate half instead of the code point of the pile of poo character.

>> '💩'.charCodeAt(0)
0xD83D

Luckily, ECMAScript 6 introduces String.prototype.codePointAt(position), which is like charCodeAt except it deals with full symbols instead of surrogate halves whenever possible.

>> '💩'.codePointAt(0)
0x1F4A9

For backwards compatibility with ECMAScript 5 and older environments, a String.prototype.codePointAt() polyfill can be used.


Real-world bugs and how to avoid them

This behavior leads to many issues. Twitter, for example, allows 140 characters per tweet, and their back-end doesn’t mind what kind of symbol it is — astral or not. But because the JavaScript counter on their website at some point simply read out the string’s length without accounting for surrogate pairs, it wasn’t possible to enter more than 70 astral symbols. (The bug has since been fixed.)
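
The mismatch is easy to reproduce: every astral symbol takes up two UTF-16 code units, so a counter based on .length charges each one twice. A rough sketch (not Twitter's actual code):

var tweet = '💩'.repeat(70);          // 70 astral symbols (String.prototype.repeat is ES6)
tweet.length;                         // 140, so a naive counter thinks the tweet is full
punycode.ucs2.decode(tweet).length;   // 70, the number of symbols a human would count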

Many JavaScript libraries that deal with strings fail to account for astral symbols properly.


Introducing… The Pile of Poo Test™

Whenever you’re working on a piece of JavaScript code that deals with strings or regular expressions in some way, just add a unit test that contains a pile of poo (💩) in a string, and see if anything breaks. It’s a quick, fun, and easy way to see if your code supports astral symbols. Once you’ve found a Unicode-related bug in your code, all you need to do is apply the techniques discussed in this post to fix it.
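
For instance, a minimal sketch of such a test (using plain console.assert rather than any particular test framework) against the countSymbolsPedantically function from earlier:

// The Pile of Poo Test™: run an astral symbol through the string code under test.
var input = 'I 💩 Unicode';
console.assert(countSymbolsPedantically(input) === 11, 'should count 💩 as a single symbol');
console.assert(input.length === 12, '.length sees two code units for 💩');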


Stack Overflow on String.fromCharCode():

inArray returns the index of the element in the array, not a boolean indicating if the item exists in the array. If the element was not found, -1 will be returned.

So, to check if an item is in the array, use:

if (jQuery.inArray("test", myarray) !== -1)
  • String.fromCodePoint(): not supported by Internet Explorer; supported from Safari 10 onward.
  • String.fromCharCode(): supported since forever, and about twice as fast.
  • The difference:

    Although most common Unicode values can be represented with one 16-bit number (as expected early on during JavaScript standardization) and fromCharCode() can be used to return a single character for the most common values (i.e., UCS-2 values, which are the subset of UTF-16 with the most common characters), in order to deal with ALL legal Unicode values (up to 21 bits), fromCharCode() alone is inadequate. Since the higher code point characters use two (lower value) “surrogate” numbers to form a single character, String.fromCodePoint() (part of the ES6 draft) can be used to return such a pair and thus adequately represent these higher-valued characters, as the short comparison below shows.
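
A compact way to see that difference in a console:

String.fromCodePoint(0x0041) === String.fromCharCode(0x0041);          // true for BMP code points
String.fromCodePoint(0x1F4A9) === String.fromCharCode(0xD83D, 0xDCA9); // true, but fromCharCode needs the surrogate pair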