This document describes the Unicode CLDR Technical Committee's process for data collection, resolution, public feedback and release.
The process is designed to be lightweight; in particular, the meetings are frequent, short, and informal. Most of the work is done by email or phone, with a database recording requested changes (see the change request database below).
When gathering data for a region and language, it is important to have multiple sources for that data to produce the most commonly used data. The initial versions of the data were based on the best available sources, and updates with new data and improvements are released twice a year, with contributions from inside and outside of the Unicode Consortium.
It is important to note that CLDR is a Repository, not a Registration. That is, contributors should NOT expect that their suggestions will simply be adopted into the repository; instead, each suggestion will be vetted by other contributors.
The final approval of the release of any version of CLDR is up to the decision of the CLDR Technical Committee.
Formal Technical Committee Procedures
For more information on the formal procedures for the Unicode CLDR Technical Committee, see the Technical Committee Procedures for the Unicode Consortium.
The UTS #35: Locale Data Markup Language (LDML) specification is kept up to date with each release, with changed or added structure for new data types or other features.
Requests for changes are entered in the bug/feature request database (CLDR Bug Reports).
Structural changes are always backwards-compatible. That is, previous files will continue to work. Deprecated elements remain, although their usage is strongly discouraged.
There is a standing policy for structural changes that require non-trivial code for proper implementation, such as time zone fallback or alias mechanisms. These require design discussions in the Technical Committee that demonstrate correct function according to the proposed specification.
Data Submission and Vetting
The contributors of locale data are expected to be language speakers residing in the country/region. In particular, national standards organizations are encouraged to be involved in the data vetting process.
There are two types of data in the repository:
Core data (see Core data for new locales): The content is collected from language experts, typically with the involvement of a CLDR Technical Committee member, and is reviewed by the committee. This is required for a new language to be added to CLDR. See also Exemplar Character Sources.
Common locale data: This is the bulk of the CLDR data; data collection occurs twice a year using the Survey Tool. (See How to Contribute.)
The following four states are used to differentiate the data contribution levels. Initial data contributions are normally marked as draft; this may be changed once the data is vetted.
Level 1: unconfirmed
Level 2: provisional
Level 3: contributed (= minimally approved)
Level 4: approved (equivalent to an absent draft attribute)
Implementations may choose the level at which they wish to accept data. They may choose to accept even unconfirmed data if having some data is better than no data for their purpose. Approved data are vetted by language speakers; however, this does not mean that the data is guaranteed to be error-free -- this is simply the best judgment of the vetters and the committee according to the process.
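The choice of acceptance level described above can be sketched as follows. This is a hypothetical illustration, not CLDR code: the class and function names are invented, and the integer ordering simply mirrors the four levels listed above.

```python
from enum import IntEnum

class DraftStatus(IntEnum):
    """The four CLDR draft states, ordered from least to most vetted."""
    UNCONFIRMED = 1
    PROVISIONAL = 2
    CONTRIBUTED = 3  # minimally approved
    APPROVED = 4     # equivalent to an absent draft attribute

def accept(items, minimum=DraftStatus.CONTRIBUTED):
    """Keep only the items whose draft status meets the consumer's threshold."""
    return [value for value, status in items if status >= minimum]

items = [("Januar", DraftStatus.APPROVED), ("Jänner", DraftStatus.UNCONFIRMED)]
print(accept(items))                           # only the approved value
print(accept(items, DraftStatus.UNCONFIRMED))  # "some data is better than no data"
```

An implementation that prefers coverage over certainty would lower the threshold, as in the second call.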
Survey Tool User Levels
There are multiple levels of access and control. These levels are decided by the Technical Committee and the TC representative for the respective organizations.
Unicode TC members (full/institutional/supporting) can assign their users to the Regular or Guest level and, with approval of the TC, to the Expert level.
Liaison or associate members can assign users to the Guest level, or to other levels with approval of the TC.
The liaison/associate member themselves gets TC status in order to manage users, but Guest status in terms of voting, unless the committee approves a higher level.
Users assigned to "unicode.org" are normally assigned as Guest, but the committee can assign a different level.
Each user gets a vote on each value, but the strength of the vote varies according to the user level (see table above).
For each value, each organization gets a vote based on the maximum (not cumulative) strength of the votes of its users who voted on that item.
For example, if an organization has 10 Vetters for one locale and the highest-level user who voted on an item has a vote strength of 4, then the vote count attributed to the organization as a whole is 4 for that item.
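The maximum-not-cumulative rule can be sketched in a few lines. The function name is invented for illustration; the vote strengths come from the example above.

```python
# Each organization's vote for a value is the MAXIMUM (not the sum) of the
# vote strengths of its users who voted for that value.
def organization_vote(user_vote_strengths):
    """user_vote_strengths: vote strengths cast by one organization's users."""
    return max(user_vote_strengths, default=0)

# Ten vetters voted; the strongest voter has strength 4, so the
# organization as a whole counts as 4, not 13.
print(organization_vote([4, 1, 1, 1, 1, 1, 1, 1, 1, 1]))
```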
Optimal Field Value
For each release, there is one optimal field value determined by the following:
Add up the votes for each value from each organization.
Sort the possible alternative values for a given field
by the most votes (descending)
then by UCA order of the values (ascending)
The first value is the optimal value (O).
The second value (if any) is the next best value (N).
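The selection steps above can be sketched as follows. This is a simplified illustration with invented names; in particular, plain code-point order stands in for the UCA-order tiebreak, which a real implementation would compute with a collation library.

```python
def resolve(org_votes_by_value):
    """org_votes_by_value: {value: {organization: vote strength}}.
    Returns (optimal value O, next best value N or None)."""
    # Step 1: add up the votes for each value from each organization.
    totals = {v: sum(orgs.values()) for v, orgs in org_votes_by_value.items()}
    # Step 2: sort by most votes (descending), then by value (ascending).
    # NOTE: CLDR tiebreaks by UCA order; code-point order is a stand-in here.
    ranked = sorted(totals, key=lambda v: (-totals[v], v))
    optimal = ranked[0]
    next_best = ranked[1] if len(ranked) > 1 else None
    return optimal, next_best

votes = {"janv.": {"OrgA": 4, "OrgB": 4}, "janvier": {"OrgC": 4}}
print(resolve(votes))  # "janv." wins with 8 votes against 4
```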
Draft Status of Optimal Field Value
Let O be the optimal value's vote, N be the vote of the next best value (or zero if there is none), and G be the number of organizations that voted for the optimal value. Let oldStatus be the draft status of the previously released value.
Assign the draft status according to the first of the conditions below that applies:
If the oldStatus is better than the new draft status, then no change is made. Otherwise, the optimal value and its draft status are made part of the new release.
For example, if the new optimal value does not have the status of approved, and the previous release had an approved value (one that does not have an error and is not a fallback), then that previously-released value stays approved and replaces the optimal value in the following steps.
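The stability rule in the two paragraphs above can be sketched as follows. The function name and signature are invented for illustration; the status ordering is the four draft states listed earlier.

```python
# Draft states from least to most vetted, per the levels defined above.
STATUS_ORDER = ["unconfirmed", "provisional", "contributed", "approved"]

def released_value(old_value, old_status, new_value, new_status):
    """If the previously released draft status is better than the newly
    computed one, the previous value and status are retained; otherwise
    the new optimal value and its status go into the release."""
    if old_value is not None and \
            STATUS_ORDER.index(old_status) > STATUS_ORDER.index(new_status):
        return old_value, old_status
    return new_value, new_status

# A previously approved value outranks a newly contributed one,
# so the old value stays in the release.
print(released_value("März", "approved", "Maerz", "contributed"))
```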
It is difficult to develop a formulation that provides for stability, yet allows people to make needed changes. The CLDR committee welcomes suggestions for tuning this mechanism. Such suggestions can be made by filing a new ticket.
After data has been collected and vetted, it must be refined to be free of errors for the release:
Collision errors are resolved by retaining one of the values and removing the other(s).
The resolution choice is based on the judgment of the committee, typically according to which field is most commonly used.
When an item is removed, an alternate may then become the new optimal value.
All values with errors are removed.
Non-optimal values are handled as follows:
Those with no votes are removed.
Those with votes are marked with alt=proposed and given the draft status unconfirmed.
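The cleanup steps above can be sketched as one pass over the candidate values. This is a simplification with invented names: collision resolution is omitted (it is a committee judgment call), and ranking survivors by vote count stands in for the full resolution process.

```python
def refine(candidates):
    """candidates: list of (value, votes, has_error) tuples.
    Sketch of the release cleanup: values with errors are removed; the
    surviving value with the most votes becomes optimal (an alternate may
    be promoted when the old optimal is removed); unvoted non-optimal
    values are dropped; voted alternates are kept as alt=proposed with
    draft status unconfirmed."""
    valid = sorted(((v, n) for v, n, err in candidates if not err),
                   key=lambda t: -t[1])
    optimal = valid[0][0]
    alternates = [{"value": v, "alt": "proposed", "draft": "unconfirmed"}
                  for v, n in valid[1:] if n > 0]
    return optimal, alternates

# "Jan!" has an error and is removed; "Janr" has no votes and is dropped;
# "Januar" survives as a proposed alternate.
print(refine([("Jan.", 8, False), ("Jan!", 9, True),
              ("Januar", 2, False), ("Janr", 0, False)]))
```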
If a locale does not have minimal data (at least at a provisional level), then it may be excluded from the release. Where this is done, it may be restored to the repository for the next submission cycle.
This process can be fine-tuned by the Technical Committee as needed, to resolve any problems that turn up. A committee decision can also override any of the above process for any specific values.
For more information see the key links in CLDR Survey Tool (especially the Vetting Phase).
If data has a formal problem, it can be fixed directly (in CVS) without going through the above process. Examples include:
syntactic problems in patterns
extra trailing spaces
inconsistent decimals
mechanical sweeps to change attributes
translatable characters not quoted in patterns
changing ' (straight apostrophe) to curly apostrophe, or s-cedilla to s-comma-below
removing disallowed exemplar characters (non-letter, number, or mark; uppercase when there is a lowercase)
These are changed in-place, without changing the draft status.
Linguistically-sensitive data should always go through the survey tool. Examples include:
names of months or territories
number formats
changing ASCII apostrophe to U+02BC MODIFIER LETTER APOSTROPHE, U+02BB MODIFIER LETTER TURNED COMMA, or U+02BD MODIFIER LETTER REVERSED COMMA
adding or removing normal exemplar characters
The Technical Committee can authorize bulk submissions of new data directly (in CVS), with all new data marked draft="unconfirmed" (or another status decided by the committee), but only where the data passes the CheckCLDR console tests.
The survey tool does not currently handle all CLDR data. For data it doesn't cover, the regular bug system is used to submit new data or ask for revisions of this data. In particular:
Collation, transforms, or text segmentation, which are more complex.
For collation data, see the comparison charts at http://www.unicode.org/cldr/comparison_charts.html or the XML data at http://unicode.org/cldr/data/common/collation/
For transforms, see the XML data at http://unicode.org/cldr/data/common/transforms/
Non-linguistic locale data:
There may be conflicting common practices or standards for a given country and language. Thus LDML provides keyword variants to reflect the different practices (for example, for German it allows the distinction between PHONEBOOK and DICTIONARY collation).
When there is an existing national standard for a country that is widely accepted in practice, the goal is to follow that standard as much as possible. Where common practice in the country deviates from the national standard, or there are multiple conflicting common practices, options in conforming to the national standard, or conflicting national standards, multiple variants may be entered into CLDR, distinguished by keyword variants or variant locale identifiers.
Where a data value is identified as following a particular national standard (or other reference), the goal is to keep that data aligned with that standard. There is, however, no guarantee that data will be tagged with any or all of the national standards that it follows.
Maintenance releases, such as 26.1, are issued whenever the standard identifiers change (that is, BCP 47 identifiers, Time zone identifiers, or ISO 4217 Currency identifiers). Updates to identifiers will also mean updating the English names for those identifiers.
Corrigenda may also be included in maintenance releases. Maintenance releases may also be issued if there are substantive changes to supplemental (non-language) data, such as script info or transforms, or other critical data changes that impact the CLDR data user community.
The structure and DTD may change, but except for additions or for small bug fixes, data will not be changed in a way that would affect the content of resolved data.
Public Feedback Process
The public can supply formal feedback into CLDR via the Survey Tool or by filing a Bug Report or Feature Request. There is also a public forum for questions at the CLDR Mailing List (details on archives are found there).
There is also a members-only CLDR mailing list for members of the CLDR Technical Committee.
Public Review Issues may be posted in cases where broader public feedback is desired on a particular issue.
Be aware that changes and updates to CLDR will only be taken in response to information entered in the Survey Tool or by filing a Bug Report or Feature Request. Discussion on public mailing lists is not monitored; no actions will be taken in response to such discussion -- only in response to filed bugs. The process of checking and entering data takes time and effort; so even when bugs/feature requests are accepted, it may take some time before they are in a release of CLDR.
Data Release Process
The locale data is frozen per version. Once a version is released, it is never modified. Any change, however minor, means that a newer version of the locale data is released. The version numbering scheme is "xy.z", where z is incremented for maintenance releases and xy is incremented for regular semi-annual releases, as defined by the regular semi-annual schedule.
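The "xy.z" numbering scheme above can be illustrated with a small helper. The function name is invented, and this is only a sketch of the scheme as described here, not actual release tooling.

```python
def bump(version, maintenance=False):
    """'xy.z' scheme: z increments for maintenance releases;
    xy increments (and z resets to 0) for regular semi-annual releases."""
    xy, z = (int(part) for part in version.split("."))
    return f"{xy}.{z + 1}" if maintenance else f"{xy + 1}.0"

print(bump("26.0", maintenance=True))  # maintenance release of version 26
print(bump("26.1"))                    # next regular semi-annual release
```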
Early releases of a version of the common locale data are issued as alpha or beta releases, available for public feedback. The dates for the next scheduled release are posted on the CLDR Project page.
The schedule milestones are listed below.
Labels in the Jira column correspond to the Phase field in Jira, which is used to identify tickets that need to be completed before the start of each milestone (see table above).
Meetings and Communication
The currently-scheduled meetings are listed on the Unicode Calendar. Meetings are held by phone every week at 8:00 AM Pacific Time (UTC-08:00 in winter, UTC-07:00 in summer). An additional meeting is scheduled every other Monday, depending on need and people's availability.
There is an internal email list for the Unicode CLDR Technical Committee, open to Unicode members and invited experts. All national standards bodies who are interested in locale data are also invited to become involved by establishing a Liaison membership in the Unicode Consortium, to gain access to this list.
The current Technical Committee Officers are:
Chair: Mark Davis (Google)
Vice-Chairs: Annemarie Apple (Google), Peter Edberg (Apple)