Remove diacritics using Go

transform.RemoveFunc is deprecated. Instead, use the Remove function from the golang.org/x/text/runes package:

t := transform.Chain(norm.NFD, runes.Remove(runes.In(unicode.Mn)), norm.NFC)
result, _, _ := transform.String(t, "žůžo")
fmt.Println(result)
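
For completeness, here is a minimal runnable sketch of that approach (the program structure around the snippet above is mine; runes.Remove and runes.In come from golang.org/x/text/runes):

package main

import (
    "fmt"
    "unicode"

    "golang.org/x/text/runes"
    "golang.org/x/text/transform"
    "golang.org/x/text/unicode/norm"
)

func main() {
    // NFD decomposes characters, runes.Remove drops the nonspacing
    // marks (unicode.Mn), and NFC recomposes what is left.
    t := transform.Chain(norm.NFD, runes.Remove(runes.In(unicode.Mn)), norm.NFC)
    result, _, err := transform.String(t, "žůžo")
    if err != nil {
        panic(err)
    }
    fmt.Println(result) // zuzo
}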

You can use the libraries described in the "Text normalization in Go" blog post.

Here's an application of those libraries:

// Example derived from: http://blog.golang.org/normalization

package main

import (
    "fmt"
    "unicode"

    "golang.org/x/text/transform"
    "golang.org/x/text/unicode/norm"
)

func isMn(r rune) bool {
    return unicode.Is(unicode.Mn, r) // Mn: nonspacing marks
}

func main() {
    // Note: transform.RemoveFunc is deprecated; runes.Remove (shown above)
    // is the current replacement.
    t := transform.Chain(norm.NFD, transform.RemoveFunc(isMn), norm.NFC)
    result, _, _ := transform.String(t, "žůžo")
    fmt.Println(result)
}

To expand a bit on the existing answer:

The Internet standard for comparing strings of different character sets is called PRECIS (Preparation, Enforcement, and Comparison of Internationalized Strings in Application Protocols) and is documented in RFC 7564. There is also a Go implementation at golang.org/x/text/secure/precis.
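
As a quick illustration (a hedged sketch, using the package's predefined UsernameCaseMapped profile), the standard profiles case-map and normalize, but they keep combining marks, so accented and unaccented strings still compare as different:

package main

import (
    "fmt"

    "golang.org/x/text/secure/precis"
)

func main() {
    // UsernameCaseMapped lowercases and applies NFC, but keeps the accents.
    p, err := precis.UsernameCaseMapped.String("Žůžo")
    if err != nil {
        panic(err)
    }
    fmt.Println(p)                                                 // žůžo
    fmt.Println(precis.UsernameCaseMapped.Compare("žůžo", "zuzo")) // false
}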

None of the standard profiles will do what you want, but it would be fairly straightforward to define a new profile that does. You would want to apply Unicode Normalization Form D ("D" for "Decomposition", which splits the accents off into their own combining characters), then remove any combining characters as part of the additional mapping rule, and finally recompose with the normalization rule. Something like this:

package main

import (
    "fmt"
    "unicode"

    "golang.org/x/text/secure/precis"
    "golang.org/x/text/transform"
    "golang.org/x/text/unicode/norm"
)

func main() {
    loosecompare := precis.NewIdentifier(
        precis.AdditionalMapping(func() transform.Transformer {
            return transform.Chain(norm.NFD, transform.RemoveFunc(func(r rune) bool {
                return unicode.Is(unicode.Mn, r)
            }))
        }),
        precis.Norm(norm.NFC), // This is the default; be explicit though.
    )
    p, _ := loosecompare.String("žůžo")
    fmt.Println(p, loosecompare.Compare("žůžo", "zuzo"))
    // Prints "zuzo true"
}

This lets you expand your comparison with more options later (e.g. width mapping, case mapping, etc.).
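
For instance, a hedged sketch of such an extension, assuming the FoldWidth and FoldCase options exported by the precis package (and using runes.Remove instead of the deprecated transform.RemoveFunc):

package main

import (
    "fmt"
    "unicode"

    "golang.org/x/text/runes"
    "golang.org/x/text/secure/precis"
    "golang.org/x/text/transform"
    "golang.org/x/text/unicode/norm"
)

func main() {
    // The accent-stripping profile from above, extended so comparisons
    // also ignore full-width/half-width variants and letter case.
    loosercompare := precis.NewIdentifier(
        precis.FoldWidth,
        precis.FoldCase(),
        precis.AdditionalMapping(func() transform.Transformer {
            return transform.Chain(norm.NFD, runes.Remove(runes.In(unicode.Mn)))
        }),
        precis.Norm(norm.NFC),
    )
    fmt.Println(loosercompare.Compare("ŽŮŽO", "zuzo")) // true
}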

It's also worth noting that removing accents is almost never what you actually want to do when comparing strings like this; without knowing your use case, though, I can't make that assertion about your project. To prevent the proliferation of PRECIS profiles, it's good to use one of the existing profiles where possible. Also note that no effort was made to optimize the example profile.

Tags: Unicode, UTF-8, Go