Updating Script Metadata

New Unicode scripts

We should work on script metadata early in each Unicode version cycle, so that it is available for tools (such as Mark's "UCA" tools).
  • Unicode 9/CLDR 29: New scripts in CLDR but not yet in ICU caused trouble.
  • Unicode 10: Working on a pre-CLDR-31 branch, plan to merge into CLDR trunk after CLDR 31 is done.
  • Should the script metadata code live in the Unicode Tools, so that we don't need a CLDR branch during early Unicode next-version work?

If the new Unicode version's PropertyValueAliases.txt does not yet have lines for the Block and Script properties, create a preliminary version. Diff the old and new Blocks.txt and UnicodeData.txt files to find the new blocks and scripts. Get the script codes from http://www.unicode.org/iso15924/codelists.html . Follow the existing patterns for block and script names, especially for abbreviations. Do not add abbreviations (which differ from the long forms) unless there is a well-established pattern in the existing data.
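For orientation, the Block and Script lines in PropertyValueAliases.txt look like the following. This is only a sketch, using the Unicode 9 script Adlam as the example; copy the exact column alignment and section headers from the existing file.

  # Block (blk)
  blk; Adlam                            ; Adlam

  # Script (sc)
  sc ; Adlm                             ; Adlam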

Aside from the instructions below, which apply to all script metadata changes, new script codes need English names (common/main/en.xml) and need to be added to common/supplemental/coverageLevels.xml, under the key %script100, so that the new script names show up in the Survey Tool. For example, see the changes for the new Unicode 8 scripts.
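A minimal sketch of these two additions, using Adlam (Adlm, new in Unicode 9) as the example; take the element names and surrounding context from the existing entries in each file rather than from this sketch:

  <!-- common/main/en.xml, next to the other script display names -->
  <script type="Adlm">Adlam</script>

  <!-- common/supplemental/coverageLevels.xml: append the code to the existing %script100 value -->
  <coverageVariable key="%script100" value="(Adlm|Aghb|Ahom|...)"/>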

Can we add new scripts to CLDR trunk before adding them to CLDR's copy of ICU4J, or only afterwards? We did add the new Unicode 9 scripts in CLDR 29 before adding them to ICU4J, and the CLDR unit tests no longer fail for scripts that are newer than the Unicode version in CLDR's copy of ICU.

Sample characters

We need sample characters for the "UCA" tools that generate FractionalUCA.txt.

Look at the kinds of characters we have picked for other scripts, for example the script's letter "KA". We basically want a character that makes people say "that looks Greek", and whose shape is not also used in other scripts; so for Latin we use "L", not "A". We usually prefer consonants, where applicable, but it is more important that the character look unique across scripts. It should be a letter and, if possible, not a combining mark. It is also nice if the letter is commonly used in the script's majority language, where the script is used for several languages. Compare with the charts for existing scripts, especially related ones.

Editing the spreadsheet

Google Spreadsheet: Script Metadata

Use and copy cell formulas rather than duplicating contents, if possible. Look for which cells have formulas in existing data, especially for Unicode 1.1 and 7.0 scripts.

For example,
  • Script names should only be entered on the LikelyLanguage sheet. Other sheets should use a formula to map from the script code.
  • On the Samples sheet, use a formula to map from the code point to the actual character. This is especially important for avoiding mistakes since almost no one will have font support for the new scripts, which means that most people will see "Tofu" glyphs for the sample characters.
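For instance, a formula along these lines maps a hex code point to the sample character; this is a sketch only, and the column letters, sheet names, and whether CHAR covers supplementary-plane code points all need to be checked against the actual spreadsheet:

  =CHAR(HEX2DEC(B2))

A lookup such as =VLOOKUP($A2, LikelyLanguage!$A:$B, 2, FALSE) can similarly pull the script name from the LikelyLanguage sheet instead of retyping it.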

Script Metadata properties file

  1. Go to the spreadsheet Script Metadata
  2. File>Download as>Comma Separated Values
  3. Location/Name = {CLDR}/tools/java/org/unicode/cldr/util/data/Script_Metadata.csv
  4. Refresh files (Eclipse), then compare with the previous version as a sanity check.
  5. Run {CLDR}/tools/java/org/unicode/cldr/unittest/TestScriptMetadata.java
    1. A common error is that some of the data from the spreadsheet is missing or has incorrect values.
  6. Run GenerateScriptMetadata, which will produce a modified common/properties/scriptMetadata.txt file.
  7. The new script names need to be added to common/main/en.xml, and the script codes added to common/supplemental/coverageLevels.xml (under the key %script100), so that the new script names show up in the CLDR Survey Tool.
    1. See #8109#comment:4 r11491
  8. Also add the new script codes to the script100 variable in TestCoverageLevel.java.
    1. Why is this hardcoded here? Why does it not come from common/supplemental/coverageLevels.xml?
  9. Remove new script codes from $scriptNonUnicode in common/supplemental/attributeValueValidity.xml
  10. Run GenerateValidityXML.java
    1. See Update Validity XML
    2. This needs the previous version of CLDR in a sibling folder.
      1. For example, with the current post-31 trunk in ~/svn.cldr/uni10:
      2. Create ~/svn.cldr/cldr-archive/cldr-31.0 and, in that folder, run
      3. svn co svn+ssh://unicode.org/repos/cldr/tags/release-31-0-1 .
    3. Now run GenerateValidityXML.java with the usual option like  -DCLDR_DIR=/usr/local/google/home/mscherer/svn.cldr/uni10
    4. Compare the trunk files with the generated ones: common/validity vs. ../Generated/cldr/validity/ (see the command sketch after this list).
    5. At least script.xml should show the new scripts. Copy this file into the trunk.
  11. Run GenerateMaximalLocales, as described on the likelysubtags page; it generates another two files.
    1. Compare the trunk files with the generated ones: common/supplemental vs. ../Generated/cldr/supplemental
    2. Copy likelySubtags.xml and supplementalMetadata.xml to the trunk if they have changes.
  12. Compare generated files with previous versions for sanity check.
  13. Run the CLDR unit tests.
  14. Check in the updated files.
Problems are typically caused by a non-standard name being used for a territory; fix the name and rerun the process.
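As an illustration of the compare-and-copy parts of steps 10 and 11, assuming the layout from the example above (working copy in ~/svn.cldr/uni10, with the generated files landing in ../Generated/cldr relative to it):

  cd ~/svn.cldr/uni10
  # step 10: compare the generated validity data with the trunk copy, then take over script.xml
  diff -r common/validity ../Generated/cldr/validity/
  cp ../Generated/cldr/validity/script.xml common/validity/
  # step 11: same pattern for the two likely-subtags outputs
  diff common/supplemental/likelySubtags.xml ../Generated/cldr/supplemental/likelySubtags.xml
  diff common/supplemental/supplementalMetadata.xml ../Generated/cldr/supplemental/supplementalMetadata.xml
  cp ../Generated/cldr/supplemental/likelySubtags.xml ../Generated/cldr/supplemental/supplementalMetadata.xml common/supplemental/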