View Unicode code points for all letters in a file on Bash

I wrote myself a Perl one-liner that does just that, and it also prints the original character. (It reads the file from STDIN.)

perl -C7 -ne 'for(split(//)){print sprintf("U+%04X", ord)." ".$_."\n"}'
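For example, running it on a short string (this particular invocation is just my own illustration):

printf 'Ünï' | perl -C7 -ne 'for(split(//)){print sprintf("U+%04X", ord)." ".$_."\n"}'

prints

U+00DC Ü
U+006E n
U+00EF ï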

However, there should be a better way than this.


I needed the code point for some common smileys, and came up with this:

echo -n "" |              # -n ignore trailing newline                     \
iconv -f utf8 -t utf32be |  # UTF-32 big-endian happens to be the code point \
xxd -p |                    # -p just give me the plain hex                  \
sed -r 's/^0+/0x/' |        # remove leading 0's, replace with 0x            \
xargs printf 'U+%04X\n'     # pretty print the code point

which prints

U+1F60A

which is the code point for "SMILING FACE WITH SMILING EYES".
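To see why the UTF-32BE trick works, it helps to inspect the raw output of the first two stages (this check is my own illustration, not part of the original pipeline):

echo -n "😊" | iconv -f utf8 -t utf32be | xxd -p

prints

0001f60a

i.e. the code point itself, zero-padded to four bytes.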


Inspired by Neftas's answer, here is a slightly simpler solution that works with strings, rather than a single char:

iconv -f utf8 -t utf32le | hexdump -v -e '8/4 "0x%04x " "\n"' | sed -re"s/0x /   /g"
#                                         ^
# The number `8` above determines the number of columns in the output. Modify as needed.
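For example, a line of exactly eight characters fills one row (the sample string is my own):

printf 'héj 😊 ok' | iconv -f utf8 -t utf32le | hexdump -v -e '8/4 "0x%04x " "\n"' | sed -re"s/0x /   /g"

prints

0x0068 0x00e9 0x006a 0x0020 0x1f60a 0x0020 0x006f 0x006b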

I also made a Bash script that reads from stdin or from a file, and displays the original text alongside the Unicode values:

#!/bin/bash
COLWIDTH=8       # code points per output row
SHOWTEXT=true    # set to false to print only the code points

# Buffer the input (a file name argument, or stdin) so it can be read twice
tmpfile=$(mktemp)
cp "${1:-/dev/stdin}" "$tmpfile"

# Left column: the code points, COLWIDTH per row
left=$(set -o pipefail; iconv -f utf8 -t utf32le "$tmpfile" | hexdump -v -e $COLWIDTH'/4 "0x%05x " "\n"' | sed -re"s/0x /   /g")

if [ $? -gt 0 ]; then
    echo "ERROR: Could not convert input" >&2
elif $SHOWTEXT; then
    # Right column: the original text, whitespace shown as dots, wrapped to COLWIDTH
    right=$(tr '[:space:]' . < "$tmpfile" | sed -re "s/.{$COLWIDTH}/|&|\n/g" | sed -re "s/^.{1,$((COLWIDTH+1))}\$/|&|/g")
    pr -mts" " <(echo "$left") <(echo "$right")
else
    echo "$left"
fi

rm "$tmpfile"

Sample output: (screenshot omitted)

Tags: Linux, Unicode