Unicode & Character Encodings in Python: A Painless Guide

Python

import unicodedata

>>> print(u"Test\u2014It")
Test—It

>>> s = u"Test\u2014It"
>>> ord(s[4])
8212

>>> chr(732)
'˜'
>>> c = chr(732)
>>> ord(c)
732

https://stackoverflow.com/questions/2831212/python-sets-vs-lists

escape_characters = set()

if ord(c) in escape_characters:
    ...
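The sets-vs-lists link above is the relevant background here: a set gives constant-time membership tests for the `in` check, while a list scans linearly. A minimal sketch, with illustrative code points of my own choosing:

```python
# Code points we want to treat as "escape" characters; a set makes
# the "in" membership test O(1), unlike a list, which scans linearly.
escape_characters = {8212, 8216, 8217, 8220, 8221}  # em dash, curly quotes

def needs_escaping(c):
    """Return True if the character c is in the escape set."""
    return ord(c) in escape_characters

print(needs_escaping("\u2014"))  # em dash -> True
print(needs_escaping("a"))       # plain ASCII -> False
```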

>>> unicodedata.name(c)
'SMALL TILDE'
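unicodedata.name() also has an inverse, unicodedata.lookup(), so a character's name round-trips back to the character:

```python
import unicodedata

c = chr(732)
name = unicodedata.name(c)             # 'SMALL TILDE' (U+02DC)
print(name)
print(unicodedata.lookup(name) == c)   # lookup() is the inverse of name()
print(unicodedata.name("\u2014"))      # 'EM DASH'
```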

JavaScript:

String.fromCharCode(parseInt(unicode,16))

>> c = String.fromCharCode(732);
"˜"
>> c.charCodeAt(0);
732
>> String.fromCharCode(0904)
>> c = String.fromCharCode(parseInt('2014', 16));  // '2014' is hex for 8212
"—"
>> c.charCodeAt(0);
8212
>> c = String.fromCharCode(39);
>> c.charCodeAt(0);
39

jsFiddle

var str = String.fromCharCode(e.which);
$('#charCodeAt')[0].value = str.charCodeAt(0);
$('#fromCharCode')[0].value = encodeURIComponent(str);

jQuery String Functions

  • charAt(n): Returns the character at the specified index in a string. The index starts from 0.

    var str = "JQUERY By Example";
    var n = str.charAt(2);
    // Output will be "U"

  • charCodeAt(n): Returns the UTF-16 code unit (numeric value) of the character at the specified index in a string. The index starts from 0.

    var str = "HELLO WORLD";
    var n = str.charCodeAt(0);
    // Output will be 72
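For reference, Python covers both of these jQuery/JavaScript helpers with plain indexing and the built-in ord():

```python
s = "JQUERY By Example"
print(s[2])                    # indexing replaces charAt -> 'U'

s2 = "HELLO WORLD"
print(ord(s2[0]))              # ord() replaces charCodeAt -> 72
```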

Mathias Bynens: JavaScript Has a Unicode Problem:

As my JavaScript escapes tool would tell you, the reason is the following:

>> 'ma\xF1ana' == 'man\u0303ana'
false

>> 'ma\xF1ana'.length
6

>> 'man\u0303ana'.length
7

The first string contains U+00F1 LATIN SMALL LETTER N WITH TILDE, while the second string uses two separate code points (U+006E LATIN SMALL LETTER N and U+0303 COMBINING TILDE) to create the same glyph. That explains why they’re not equal, and why they have a different length.

However, if we want to count the number of symbols in these strings the same way a human being would, we’d expect the answer 6 for both strings, since that’s the number of visually distinguishable glyphs in each string. How can we make this happen?

In ECMAScript 6, the solution is fairly simple:

function countSymbolsPedantically(string) {
	// Unicode Normalization, NFC form, to account for lookalikes:
	var normalized = string.normalize('NFC');
	// Account for astral symbols / surrogates, just like we did before:
	return punycode.ucs2.decode(normalized).length;
}

The normalize method on String.prototype performs Unicode normalization, which accounts for these differences. If there is a single code point that represents the same glyph as another code point followed by a combining mark, it will normalize it to the single code point form.

>> countSymbolsPedantically('mañana') // U+00F1
6
>> countSymbolsPedantically('mañana') // U+006E + U+0303
6

For backwards compatibility with ECMAScript 5 and older environments, a String.prototype.normalize polyfill can be used.
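The same check ports to Python 3, where unicodedata.normalize performs the NFC normalization. A minimal sketch (the function name is mine):

```python
import unicodedata

composed = "ma\xf1ana"       # uses precomposed U+00F1
combining = "man\u0303ana"   # uses n + U+0303 COMBINING TILDE

print(len(composed), len(combining))  # 6 7 -- different code-point counts

def count_symbols(s):
    """Count code points after NFC normalization, like countSymbolsPedantically."""
    return len(unicodedata.normalize("NFC", s))

print(count_symbols(composed), count_symbols(combining))  # 6 6
```

Note that Python 3 strings are sequences of code points, not UTF-16 code units, so no extra surrogate accounting is needed on the Python side.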

Turning a code point into a symbol

String.fromCharCode allows you to create a string based on a Unicode code point. But it only works correctly for code points in the BMP range (i.e. from U+0000 to U+FFFF). If you use it with an astral code point, you’ll get an unexpected result.

>> String.fromCharCode(0x0041) // U+0041
'A' // U+0041

>> String.fromCharCode(0x1F4A9) // U+1F4A9
'' // U+F4A9, not U+1F4A9

The only workaround is to calculate the code points for the surrogate halves yourself, and pass them as separate arguments.

>> String.fromCharCode(0xD83D, 0xDCA9)
'💩' // U+1F4A9
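The surrogate halves above follow from simple arithmetic. A Python sketch of the standard UTF-16 encoding formula (subtract 0x10000, then split the result into two 10-bit halves):

```python
def surrogate_pair(code_point):
    """Split an astral code point (above 0xFFFF) into its UTF-16 surrogate halves."""
    offset = code_point - 0x10000
    high = 0xD800 + (offset >> 10)   # top 10 bits -> high (lead) surrogate
    low = 0xDC00 + (offset & 0x3FF)  # bottom 10 bits -> low (trail) surrogate
    return high, low

print([hex(h) for h in surrogate_pair(0x1F4A9)])  # ['0xd83d', '0xdca9']
```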

If you don’t want to go through the trouble of calculating the surrogate halves, you could resort to Punycode.js’s utility methods once again:

>> punycode.ucs2.encode([ 0x1F4A9 ])
'💩' // U+1F4A9

Luckily, ECMAScript 6 introduces String.fromCodePoint(codePoint) which does handle astral symbols correctly. It can be used for any Unicode code point, i.e. from U+000000 to U+10FFFF.

>> String.fromCodePoint(0x1F4A9)
'💩' // U+1F4A9

For backwards compatibility with ECMAScript 5 and older environments, use a String.fromCodePoint() polyfill.

 

Getting a code point out of a string

Similarly, if you use String.prototype.charCodeAt(position) to retrieve the code point of the first symbol in the string, you’ll get the code point of the first surrogate half instead of the code point of the pile of poo character.

>> '💩'.charCodeAt(0)
0xD83D

Luckily, ECMAScript 6 introduces String.prototype.codePointAt(position), which is like charCodeAt except it deals with full symbols instead of surrogate halves whenever possible.

>> '💩'.codePointAt(0)
0x1F4A9

For backwards compatibility with ECMAScript 5 and older environments, use a String.prototype.codePointAt() polyfill.
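For comparison, Python 3 strings are sequences of code points rather than UTF-16 code units, so ord() already behaves like codePointAt, and surrogates only appear when you explicitly encode to UTF-16:

```python
s = "\U0001F4A9"  # PILE OF POO

print(len(s))                        # 1 -- one code point, no surrogate halves
print(hex(ord(s)))                   # 0x1f4a9
print(s.encode("utf-16-be").hex())   # d83ddca9 -- surrogates appear only on encoding
```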

 

Real-world bugs and how to avoid them

This behavior leads to many issues. Twitter, for example, allows 140 characters per tweet, and their back-end doesn’t mind what kind of symbol it is — astral or not. But because the JavaScript counter on their website at some point simply read out the string’s length without accounting for surrogate pairs, it wasn’t possible to enter more than 70 astral symbols. (The bug has since been fixed.)

Many JavaScript libraries that deal with strings fail to account for astral symbols properly.

 

Introducing… The Pile of Poo Test™

Whenever you’re working on a piece of JavaScript code that deals with strings or regular expressions in some way, just add a unit test that contains a pile of poo (💩) in a string, and see if anything breaks. It’s a quick, fun, and easy way to see if your code supports astral symbols. Once you’ve found a Unicode-related bug in your code, all you need to do is apply the techniques discussed in this post to fix it.

 

 

Stack Overflow on String.fromCharCode():

inArray returns the index of the element in the array, not a boolean indicating if the item exists in the array. If the element was not found, -1 will be returned.

So, to check if an item is in the array, use:

if (jQuery.inArray("test", myarray) !== -1)
  • String.fromCodePoint(): not supported by Internet Explorer; supported from Safari 10.
  • String.fromCharCode(): supported since forever, and roughly twice as fast.
  • The difference:

    Although most common Unicode values can be represented with one 16-bit number (as expected early on during JavaScript standardization), and fromCharCode() can be used to return a single character for the most common values (i.e., UCS-2 values, the subset of UTF-16 covering the most common characters), fromCharCode() alone is inadequate for dealing with ALL legal Unicode values (up to 21 bits). Since the higher code point characters use two lower-valued "surrogate" numbers to form a single character, String.fromCodePoint() (part of the ES6 draft) can be used to return such a pair and thus adequately represent these higher valued characters.

Python: How to replace single or multiple characters in a string

Replace multiple characters/strings in a string

The str.replace() function can replace occurrences of only one given substring at a time. But what if we want to replace multiple substrings in a given string?

Suppose we have a string. Now, how do we replace all occurrences of the three characters 's', 'l', 'a' with the string 'AA'? Let's build a new function on top of replace() to do that.

It will replace all occurrences of the strings in the list toBeReplaces with newString in the given string mainString. Let's see how to replace the occurrences of ['s', 'l', 'a'] with "AA".
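The original listing didn't survive the copy, so here is a sketch reconstructing it from the prose above, reusing the names the prose mentions (mainString, toBeReplaces, newString):

```python
def replaceMultiple(mainString, toBeReplaces, newString):
    """Replace every occurrence of each substring in toBeReplaces with newString."""
    # Iterate over the substrings to be replaced
    for elem in toBeReplaces:
        # Check if the substring is present before replacing
        if elem in mainString:
            mainString = mainString.replace(elem, newString)
    return mainString

print(replaceMultiple("This is a sample string", ["s", "l", "a"], "AA"))
```

Note that the replacements run in list order, so a newString containing characters that appear later in toBeReplaces would itself get rewritten on a later pass.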

Rate Limiting with Python and Redis (GitHub)

 

import time

from client import get_redis_client
from exceptions import RateLimitExceeded


def rate_per_second(count):
    def _rate_per_second(function):
        def __rate_per_second(*args, **kwargs):
            client = get_redis_client()
            # One counter key per wall-clock second
            key = f"rate-limit:{int(time.time())}"
            if int(client.incr(key)) > count:
                raise RateLimitExceeded
            if client.ttl(key) == -1:  # timeout is not set
                client.expire(key, 1)  # expire in 1 second
            return function(*args, **kwargs)
        return __rate_per_second
    return _rate_per_second


@rate_per_second(100)  # example: 100 requests per second
def my_function():
    pass  # do something


if __name__ == "__main__":
    success = fail = 0
    for i in range(2000):
        try:
            my_function()
            success += 1
        except RateLimitExceeded:
            fail += 1
        time.sleep(5 / 1000)  # sleep 5 milliseconds between calls
    print(f"Success count = {success}")
    print(f"Fail count = {fail}")