Split unicode string into 300 byte chunks without destroying characters

UTF-8 has a special property: all continuation bytes are in the range 0x80–0xBF (they start with the bits 10), and no lead byte is. So just make sure you don't split right before one.

Something along the lines of:

def split_utf8(s, n):
    """Split an encoded string into a chunk of at most n bytes plus the rest."""
    if len(s) <= n:
        return s, None
    # Back up while byte n is a continuation byte (0b10xxxxxx),
    # so the cut never lands inside a multi-byte character.
    while 0x80 <= ord(s[n]) < 0xC0:
        n -= 1
    return s[:n], s[n:]

should do the trick.

Note: this has to be done on the encoded value, i.e. str in Python 2 and bytes in Python 3. Python 3's bytes.__getitem__ already returns an int, so just drop the call to ord there.
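For Python 3 specifically, a minimal sketch of the same function operating on bytes (no ord needed, since indexing bytes yields ints):

def split_utf8(data, n):
    """Split UTF-8-encoded bytes into a chunk of at most n bytes plus the rest."""
    if len(data) <= n:
        return data, None
    # data[n] is already an int in Python 3; back up past continuation bytes.
    while 0x80 <= data[n] < 0xC0:
        n -= 1
    return data[:n], data[n:]

For example, split_utf8("déjà vu".encode("utf-8"), 3) returns (b'd\xc3\xa9', b'j\xc3\xa0 vu') rather than cutting the two-byte "é" in half.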


UTF-8 is designed for this.

def split_utf8(s, n):
    """Split UTF-8-encoded bytes s into chunks of at most n bytes."""
    while len(s) > n:
        k = n
        # Back up past continuation bytes (0b10xxxxxx) so the cut
        # lands at the start of a character.
        while (s[k] & 0xC0) == 0x80:
            k -= 1
        yield s[:k]
        s = s[k:]
    yield s

Not tested, but the idea is simple: find a place to split, then backtrack until you reach the beginning of a character.
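A quick sanity check of the generator (a sketch; the sample text and the 300-byte limit are just illustrative):

data = ("naïve café " * 40).encode("utf-8")
chunks = list(split_utf8(data, 300))
assert b"".join(chunks) == data              # nothing lost or reordered
assert all(len(c) <= 300 for c in chunks)    # the size limit is respected
for c in chunks:
    c.decode("utf-8")                        # every chunk decodes cleanly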

However, if a user might ever want to see an individual chunk, you may want to split on grapheme cluster boundaries instead. This is significantly more complicated, but not intractable. For example, in the decomposed form of "é" (an "e" followed by a combining acute accent), you might not want to split apart the "e" and the "´". Or you might not care, as long as they get stuck together again in the end.
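If you do need grapheme-aware splitting, here is a sketch using the third-party regex module, whose \X pattern matches one grapheme cluster; the greedy packing below is just one possible policy:

import regex  # third-party: pip install regex

def split_graphemes(text, max_bytes):
    """Yield substrings of text that encode to at most max_bytes
    of UTF-8 each, without splitting inside a grapheme cluster."""
    chunk, size = "", 0
    for cluster in regex.findall(r"\X", text):
        cluster_size = len(cluster.encode("utf-8"))
        # Flush the current chunk before it would overflow.  Note that a
        # single cluster larger than max_bytes still comes through whole.
        if chunk and size + cluster_size > max_bytes:
            yield chunk
            chunk, size = "", 0
        chunk += cluster
        size += cluster_size
    if chunk:
        yield chunk

Unlike the byte-level version, this keeps sequences like "e" + U+0301 together in one chunk.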