Picking the Right Language Identifier
Within programs and structured data, languages are indicated with stable identifiers of the form en, fr-CA, or zh-Hant. The standard Unicode language identifiers follow IETF BCP 47, with some small differences defined in UTS #35: Locale Data Markup Language (LDML). Locale identifiers use the same format, with certain possible extensions.
Often it is not clear which language identifier to use. For example, what most people call Punjabi in Pakistan actually has the code 'lah' and the formal name "Lahnda". There are many other cases where the same name is used for different languages, or where the name that people search for is not listed in the IANA registry. Moreover, a language identifier uses not only the 'base' language code, like 'en' for English or 'ku' for Kurdish, but also certain modifiers such as en-CA for Canadian English, or ku-Latn for Kurdish written in Latin script. Each of these modifiers is called a subtag (or sometimes a code), and subtags are separated by "-" or "_". The language identifier itself is also called a language tag, and sometimes a language code.
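To illustrate the structure, here is a minimal Python sketch that splits an identifier into its subtags, accepting either separator. The function name is illustrative only, not part of any standard API:

```python
def subtags(tag: str) -> list[str]:
    """Split a language identifier into its subtags.

    Both "-" and "_" are accepted as separators, since Unicode
    locale identifiers allow either.
    """
    return tag.replace("_", "-").split("-")

print(subtags("ku-Latn"))  # ['ku', 'Latn']
print(subtags("en_CA"))    # ['en', 'CA']
```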
Here is an example of the steps to take to find the right language identifier to use. Let's say you want to find the identifier for a language called "Ganda", which you know is spoken in Uganda. You'll first pick the base language subtag as described below, then add any necessary script/territory subtags, and then verify. If you can't find the name after following these steps, or have other questions, ask on the Unicode CLDR Mailing List.
If you are looking at a prospective language code, like "swh", the process is similar; follow the steps below, starting with the verification.
Choosing the Base Language Code
Go to iso639-3 to find the language. Typically you'll look under Name starting with G for Ganda.
There may be multiple entries for the item you want, so you'll need to look at all of them. For example, on the page for names starting with “P”, there are three records: “Panjabi”, “Mirpur Panjabi” and “Western Panjabi” (it is the last of these that corresponds to Lahnda). You can also try a search, but be careful.
You'll find an entry like:
While you may think that you are done, you have to verify that the three-letter code is correct.
Click on the "more..." in this case and you'll find id=lug. You can also use the URL http://www.sil.org/iso639-3/documentation.asp?id=XXX, where you replace XXX by the three-letter code.
Verify that it is indeed the language:
Look at the information on the ethnologue page
Check Wikipedia and other web sources
AND IMPORTANTLY: Review Caution! below
Once you have the right three-letter code, you are still not done. Unicode (and BCP 47) uses the two-letter ISO code if it exists. Unicode also uses the "macrolanguage" where suitable. So:
Use the two-letter code if there is one. In the example above, this is the highlighted "lg" in the first table.
Verify that the code is in http://www.iana.org/assignments/language-subtag-registry
If the code occurs in http://unicode.org/repos/cldr/trunk/common/supplemental/supplementalMetadata.xml in the type attribute of a languageAlias element, then use the replacement instead.
For example, because "swh" occurs in <languageAlias type="swh" replacement="sw"/>, "sw" must be used instead of "swh".
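The alias lookup can be sketched as follows; the table here is a tiny hand-copied subset of the languageAlias data, not the full contents of supplementalMetadata.xml:

```python
# Hand-copied subset of the languageAlias data in supplementalMetadata.xml;
# the real file contains many more entries.
LANGUAGE_ALIASES = {
    "swh": "sw",   # <languageAlias type="swh" replacement="sw"/>
    "iw":  "he",
    "cmn": "zh",
}

def replace_alias(code: str) -> str:
    """Return the replacement code if one is defined, else the code itself."""
    return LANGUAGE_ALIASES.get(code, code)

print(replace_alias("swh"))  # sw
print(replace_alias("lg"))   # lg (no alias, used as-is)
```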
Choosing Script/Territory Subtags
Verifying Your Choice
Documenting Your Choice
If you are requesting a new locale / language in CLDR, please include the links to the particular pages above so that we can process your request more quickly, since we have to double-check before any addition. The links will be of the form:
and so on
Unicode language and locale IDs are based on BCP 47, but differ in a few ways. The canonical form is produced by applying the canonicalization based on BCP 47 (thus changing iw → he, and zh-yue → yue), plus a few other steps:
Replacing the most prominent encompassed subtag by the macrolanguage (cmn → zh)
Canonicalizing overlong 3 letter codes (eng-840 → en-US)
Minimizing according to the likely subtag data (ru-Cyrl → ru, en-US → en).
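A rough sketch of these extra steps, using tiny hand-entered tables; CLDR's real alias and likely-subtag data is far larger (see supplementalMetadata.xml and likelySubtags.xml):

```python
# Illustrative subsets only; not the full CLDR data.
ALIASES = {"iw": "he", "cmn": "zh", "eng": "en"}          # alias / macrolanguage replacement
REGION_ALIASES = {"840": "US"}                            # overlong numeric region -> two-letter
LIKELY = {"ru-Cyrl": "ru", "en-US": "en", "zh-Hans": "zh"}  # likely-subtag minimization

def canonicalize(tag: str) -> str:
    parts = tag.replace("_", "-").split("-")
    # Replace the base language subtag by its preferred value.
    parts[0] = ALIASES.get(parts[0], parts[0])
    # Canonicalize overlong region codes (eng-840 -> en-US).
    parts = [REGION_ALIASES.get(p, p) for p in parts]
    tag = "-".join(parts)
    # Minimize according to the likely-subtag data (en-US -> en).
    return LIKELY.get(tag, tag)

print(canonicalize("eng-840"))  # en
print(canonicalize("ru-Cyrl"))  # ru
```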
BCP 47 also provides for "variant subtags", such as zh-Latn-pinyin. When there are multiple variant subtags, the canonical format for Unicode language identifiers puts them in alphabetical order.
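As a sketch, alphabetizing the variant subtags might look like this; it treats any subtag matching the BCP 47 variant syntax as a variant, rather than doing a full parse against the registry:

```python
def is_variant(sub: str) -> bool:
    # BCP 47 variant syntax: 5-8 alphanumerics, or 4 characters
    # starting with a digit.
    return (5 <= len(sub) <= 8 and sub.isalnum()) or \
           (len(sub) == 4 and sub[0].isdigit())

def alphabetize_variants(tag: str) -> str:
    """Put variant subtags into alphabetical order, leaving the
    language/script/region subtags in place."""
    parts = tag.split("-")
    head = [p for p in parts if not is_variant(p)]
    variants = sorted(p for p in parts if is_variant(p))
    return "-".join(head + variants)

print(alphabetize_variants("zh-Latn-wadegile-pinyin"))  # zh-Latn-pinyin-wadegile
```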
Note that the CLDR likely subtag data is used to minimize scripts and regions, not the IANA Suppress-Script. The latter had a much more constrained design goal, and is more limited.
In some cases, systems (or companies) may have different conventions than the Preferred-Values in BCP 47 -- such as those in the Replacement column in the online language identifier demo. For example, for backwards compatibility, "iw" is used with Java instead of "he" (Hebrew). When picking the right subtags, be aware of these compatibility issues. If a target system uses a different canonical form for locale IDs than CLDR, the CLDR data needs to be processed by remapping its IDs to the target system's.
For compatibility, it is strongly recommended that all implementations accept both the preferred values and their alternates: for example, both "iw" and "he". Although BCP 47 itself only allows "-" as a separator, Unicode language identifiers allow both "-" and "_" for compatibility; implementations should also accept both.
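For instance, a lenient comparison that accepts both separators and a few of the deprecated alternates (iw → he, in → id, ji → yi) might be sketched as follows; the function is illustrative, not a standard API:

```python
# A few of the deprecated ISO 639 alternate codes; the full set is in
# the IANA registry and CLDR's alias data.
PREFERRED = {"iw": "he", "in": "id", "ji": "yi"}

def same_language(a: str, b: str) -> bool:
    """Compare two identifiers leniently: either separator, either
    the preferred or the deprecated alternate base subtag."""
    def norm(tag: str) -> list[str]:
        parts = tag.replace("_", "-").lower().split("-")
        parts[0] = PREFERRED.get(parts[0], parts[0])
        return parts
    return norm(a) == norm(b)

print(same_language("iw_IL", "he-IL"))  # True
print(same_language("en-US", "fr-FR"))  # False
```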
ISO (and hence BCP 47) has the notion of an individual language (like en = English) versus a Collection or Macrolanguage. For compatibility, Unicode language and locale identifiers always use the Macrolanguage to identify the predominant form. Thus the Macrolanguage subtag "zh" (Chinese) is used instead of "cmn" (Mandarin). Similarly, suppose that you are looking for Kurdish written in Latin letters, as in Turkey. It is a mistake to think that, because that region is in the north, you should use the subtag 'kmr' for Northern Kurdish. You should instead use ku-Latn-TR. See also: ISO 639 Deprecation Requests.
Unicode language identifiers do not allow the "extlang" form defined in BCP 47. For example, use "yue" instead of "zh-yue" for Cantonese.
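A sketch of stripping the extlang form follows; it assumes any three-letter alphabetic second subtag is an extlang, whereas a real implementation would check against the registry:

```python
def strip_extlang(tag: str) -> str:
    """Replace a BCP 47 language+extlang pair with the extlang alone,
    e.g. zh-yue -> yue. Sketch only: treats any three-letter alphabetic
    second subtag as an extlang."""
    parts = tag.split("-")
    if len(parts) >= 2 and len(parts[1]) == 3 and parts[1].isalpha():
        return "-".join(parts[1:])
    return tag

print(strip_extlang("zh-yue"))      # yue
print(strip_extlang("ku-Latn-TR"))  # ku-Latn-TR (no extlang; unchanged)
```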
The Ethnologue is a great source of information, but it must be approached with a certain degree of caution. Many of the population figures are far out of date, or not well substantiated. The Ethnologue also focuses on native, spoken languages, whereas CLDR and many other systems are focused on written language for computer UI and document translation, and on fluent speakers (not necessarily native speakers). So, for example, it would be a mistake to look at http://www.ethnologue.com/show_country.asp?name=EG and conclude that the right language subtag for the Arabic used in Egypt is "arz", which has the largest population. Instead, the right code is "ar", Standard Arabic, which is the one used for document and UI translation.
Wikipedia is also a great source of information, but it must be approached with a certain degree of caution as well. Be sure to follow up on references, not just look at articles.