Statement of Work for CLDR Contract Engineers

This document provides some background for the CLDR Software Engineer tasks. The focus is on performance and UI improvements to the CLDR Survey Tool (ST).

The work can be split among more than one person. For example, an engineer with JS skills could work on the front-end, coordinating with an engineer with Java skills working on the backend. Relatively independently of that area, a different person with expertise in performance, Java, and SQL could work on the performance goals. It is expected that people will work remotely, with their own equipment; they should, however, be located in an accessible time zone (between California Time and Central European Time).

The goals for this work are listed below under Performance Improvement Goals and UI Improvement Goals. Payment will be based on completion of milestones.

Overview

So that there is a basic understanding of what is being worked on, the following provides an overview of the CLDR Survey Tool. Its basic purpose is to provide a UI for people to add and fix translations of locale data. This data is stored in a database, and is converted to XML format for release.

People representing many different organizations may be working on the same locale at the same time, and the results are resolved on the basis of votes apportioned to the translators (called “vetters” in CLDR). The translations may be simple names, or they may be more complicated patterns (such as a pattern specifying a date format).

As each item is typed in, a series of tests is run. The status of an updated value (whether it has an error, a warning, or neither) is reported asynchronously; the user doesn’t have to wait for the status before going on to the next item on that page.

The ST doesn’t always present the same view to different users; users can pick a “coverage level” that they want to work on. (The core locales are covered at least at a “modern” level; organizations can choose default levels on a per-locale basis.) Users may also have differing numbers of votes.

To try out the Survey Tool and look at the user guide, see survey-tool. Note that unless we are in a data submission phase, the ST is in Read-only mode, and some of the functionality is suppressed.

The help text for a given field typically points to a page that explains more about that particular field or a particular type of error (such as timezones), or to other pages on http://cldr.unicode.org/translation. The Survey Tool also has a forum for translators. The forum is used to leave comments and suggestions for other translators, so that disputes about particular items can be resolved.

Internals

The workhorse of the CLDR tooling is a Java class called CLDRFile. Logically, it is a set of <key, value> pairs, where the key is an XML path containing a code (like "DE" for Germany), and the value is a translation of that code (like "Allemagne"). (For end-users, the path is sometimes called a “field”.)
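
For illustration, here is a minimal sketch of that logical structure, using a plain Java map rather than the actual CLDRFile API; the paths and values follow real CLDR data, but the surrounding class is hypothetical.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical illustration of the logical <key, value> structure of a CLDRFile
    // for the French locale: each key is an XML path, each value is a translation.
    public class CldrPairsSketch {
        public static void main(String[] args) {
            Map<String, String> frenchData = new LinkedHashMap<>();

            // A path containing the code "DE" (Germany); the value is its French name.
            frenchData.put(
                "//ldml/localeDisplayNames/territories/territory[@type=\"DE\"]",
                "Allemagne");

            // A pattern-valued field: the standard decimal format pattern.
            frenchData.put(
                "//ldml/numbers/decimalFormats[@numberSystem=\"latn\"]"
                    + "/decimalFormatLength/decimalFormat[@type=\"standard\"]"
                    + "/pattern[@type=\"standard\"]",
                "#,##0.###");

            frenchData.forEach((path, value) -> System.out.println(path + " => " + value));
        }
    }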

The ST architecture uses a frontend written in JavaScript and a backend written in Java. Data is stored in a MySQL database.

Every time someone adds or updates a translation in the ST, a series of data tests is run. These check the consistency of the data; otherwise it is too easy for translators to make mistakes. We add to those tests over time, as we discover common problems that translators have. Some of these tests are relatively independent, such as CheckForExemplars, which makes sure that a translation doesn't contain unexpected characters (such as a Latin O instead of a Greek Omicron in the name of a country in the Greek locale). Some are dependent on other items, such as checking that no distinct countries have the same name. The tests can signal an error to users, or just a warning.
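
As a rough illustration of the kind of check involved (this is not the actual CheckForExemplars code; the class and method names are hypothetical), a test can compare a value against an ICU UnicodeSet of characters expected for the locale and flag anything outside it:

    import com.ibm.icu.text.UnicodeSet;

    // Hypothetical sketch of an exemplar-style check: flag characters that are not
    // expected for the locale (e.g. a Latin "O" in a Greek-locale country name).
    public class ExemplarCheckSketch {
        // Characters expected for Greek: Greek letters plus common punctuation/space.
        private static final UnicodeSet GREEK_EXEMPLARS =
                new UnicodeSet("[[:Greek:][:Common:]]").freeze();

        /** Returns the characters in the value that fall outside the expected set. */
        public static UnicodeSet unexpectedCharacters(String value) {
            return new UnicodeSet().addAll(value).removeAll(GREEK_EXEMPLARS);
        }

        public static void main(String[] args) {
            // "Oμάν" mistakenly starts with a Latin capital O instead of a Greek Omicron.
            System.out.println(unexpectedCharacters("Oμάν"));   // prints [O]
            System.out.println(unexpectedCharacters("Ομάν"));   // prints [] (all Greek)
        }
    }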

Take an XML path, such as the following:

//ldml/numbers/decimalFormats[@numberSystem="latn"]/decimalFormatLength/decimalFormat[@type="standard"]/pattern[@type="standard"]

From it, we look up a variety of associated information, all driven by data files. In addition, the tests have to map from paths to the different conditions to test for, and to groups of related paths (for testing collisions, logical groups, etc.). The examples shown to users in hover tips, which are especially important for complex cases such as showing how pieces are substituted into patterns, also have to branch depending on the path. See an example of these hover tips by hovering over the English and native patterns.
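
For instance, the decimal pattern above is exactly the kind of pattern those examples illustrate, by substituting a concrete number into it. Here is a minimal sketch using the JDK's DecimalFormat (the exact separators produced depend on the locale data):

    import java.text.DecimalFormat;
    import java.text.DecimalFormatSymbols;
    import java.util.Locale;

    // Substitute a concrete number into the standard decimal pattern, the same kind of
    // substitution the Survey Tool's hover-tip examples show to vetters.
    public class PatternExampleSketch {
        public static void main(String[] args) {
            DecimalFormat czech = new DecimalFormat(
                "#,##0.###",
                DecimalFormatSymbols.getInstance(Locale.forLanguageTag("cs")));
            // Prints 1234567.891 with Czech separators (comma as the decimal separator).
            System.out.println(czech.format(1234567.891));
        }
    }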

We make heavy use of regex, including a lot of code that tries multiple regex patterns against a path and takes the first match. We have a mechanism called RegexLookup that is used to handle much of this, typically driven by data files. Much of the code in tests and for examples predates this, and has simpler tests for different types of paths based on containment or XPathParts checking.
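
The general idea is roughly as follows; this is a simplified, hypothetical sketch of the first-match lookup, not the actual RegexLookup API.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.regex.Pattern;

    // Hypothetical sketch of a first-match lookup: try precompiled patterns in order
    // against a path and return the value associated with the first one that matches.
    public class PathLookupSketch<T> {
        private final Map<Pattern, T> entries = new LinkedHashMap<>();

        public PathLookupSketch<T> add(String regex, T value) {
            entries.put(Pattern.compile(regex), value);  // compile once, reuse per lookup
            return this;
        }

        public T get(String path) {
            for (Map.Entry<Pattern, T> e : entries.entrySet()) {
                if (e.getKey().matcher(path).find()) {
                    return e.getValue();
                }
            }
            return null;  // no pattern matched
        }

        public static void main(String[] args) {
            PathLookupSketch<String> lookup = new PathLookupSketch<String>()
                .add("^//ldml/numbers/decimalFormats", "number pattern")
                .add("^//ldml/dates/calendars", "date/time field");
            System.out.println(lookup.get(
                "//ldml/numbers/decimalFormats[@numberSystem=\"latn\"]"));  // number pattern
        }
    }

Precompiling the patterns once, rather than recompiling them on every lookup, is also the kind of change relevant to the regex-performance item below.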

Here are some possible areas for investigation to meet the performance goals listed below.

    • Thread-contention. While the goal is to be thread-safe, of course, there are probably areas where the threading is not optimal; where we block on items at too high a level, thus slowing down performance. An analysis is needed to determine whether that is happening and straighten it out if so.

    • Test performance. The tests are a fertile field for improving performance. For example, the tests build up caches as they work. Many different fields go into displaying a date, so when a date is checked, we end up building ICU objects out of all the components needed for number and date formats. We don't currently keep track of which items affect the validity of which caches, so when a translation changes, all of these caches effectively get flushed and then rebuilt. We suspect this is a prime target for improvement (see the sketch after this list).

    • Regex performance. RegexLookup is (a) not always used, and (b) may need some performance work.

    • Path parsing. Revamp the code that calls the XPathParts constructor, replacing it with the factory methods for frozen or unfrozen copies. We can probably do a good part of this with IDE-assisted refactoring.

    • Database. The ST stores the CLDR files in a MySQL database. We should examine this to make sure that the saving is fully asynchronous, and that data storage is optimized.
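
As one possible direction for the thread-contention and cache-flushing points above, the sketch below keeps a per-key cache in a ConcurrentHashMap, so lookups do not funnel through a single lock, and a changed translation evicts only the entries that depend on it. All names here are hypothetical; the real caches are built around ICU formatter objects.

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch: a formatter cache keyed by locale + path group, with
    // selective invalidation instead of flushing everything on each change.
    public class FormatCacheSketch {
        // computeIfAbsent locks only the affected bin, not the whole cache.
        private final Map<String, Object> cache = new ConcurrentHashMap<>();
        // Which cache keys depend on which XML paths (would be built from the data files).
        private final Map<String, Set<String>> dependents = new ConcurrentHashMap<>();

        public Object getFormatter(String key) {
            return cache.computeIfAbsent(key, k -> buildFormatter(k));
        }

        /** Called when a translation changes: evict only the entries that used this path. */
        public void pathChanged(String xpath) {
            for (String key : dependents.getOrDefault(xpath, Set.of())) {
                cache.remove(key);
            }
        }

        private Object buildFormatter(String key) {
            // In the real tool this would assemble an ICU date or number formatter from
            // all of the component fields; here it is just a placeholder.
            return "formatter for " + key;
        }
    }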

The code is available as described on CLDR Releases/Downloads, under "Advanced-SVN-Access".

Performance Improvement Goals

The goals are to improve the performance and reliability so that with up to 40 vetters working concurrently, people see the expected results:

Performance is dependent on many factors, such as the number of users on the system, the specific field and the validations behind it, the server environment, etc.

About the performance areas and expectations below:

    • The areas are identified either as the ones that most frequently encounter performance issues, OR as actions that are most frequently performed by Survey Tool users.

    • The test locale (Czech = cs) is used as an example because the set of fields it exposes is more comprehensive than for other locales. That can be changed to a similar locale if a different one is easier to work with.

    • User settings for the following performance areas are for Vetter, with Modern coverage.

Test fields are the following:

The chief problem is that the performance degrades under higher load.

Performance Milestones

    1. Milestone #1: Create scaffolding so that simultaneous input from multiple vetters can be simulated by a tool that interacts with the survey tool, using the common actions specified in the goals (a rough sketch of such a driver appears after this list).

    2. Milestone #2: Complete an analysis of the hotspots in loaded operation, and create a document of recommended tasks to address them (in priority order, based on the best ROI).

      1. The CLDR committee will then settle on a set of tasks to address the performance issues, based on the recommendations.

    3. Milestones #3… Each of the tasks in 2a will be a separate milestone

      1. There may also be iterations: after fixing performance problem #1, re-run the tests to identify the hotspots and make sure that the tasks are still relevant. New tasks may need to be added, or existing ones may need reprioritization.
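
As a very rough illustration of what the Milestone #1 scaffolding might look like (the base URL, endpoint, and parameters below are invented placeholders, not the actual Survey Tool API), a driver could spin up one thread per simulated vetter and replay common actions such as submitting a value, recording the response times:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Hypothetical load-driver sketch: N simulated vetters submitting votes concurrently
    // while response times are recorded. The URL and form fields are placeholders.
    public class VetterLoadSketch {
        private static final String BASE = "http://localhost:8080/cldr-apps"; // placeholder

        public static void main(String[] args) throws Exception {
            int vetters = 40;
            HttpClient client = HttpClient.newHttpClient();
            ExecutorService pool = Executors.newFixedThreadPool(vetters);

            for (int i = 0; i < vetters; i++) {
                final int id = i;
                pool.submit(() -> {
                    try {
                        HttpRequest vote = HttpRequest.newBuilder(
                                URI.create(BASE + "/submit"))   // placeholder endpoint
                            .POST(HttpRequest.BodyPublishers.ofString(
                                "locale=cs&value=test-" + id))  // placeholder form body
                            .build();
                        long start = System.nanoTime();
                        HttpResponse<String> response =
                            client.send(vote, HttpResponse.BodyHandlers.ofString());
                        long millis = (System.nanoTime() - start) / 1_000_000;
                        System.out.println("vetter " + id + ": HTTP "
                            + response.statusCode() + " in " + millis + " ms");
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }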

UI Improvement Goals

The goals are to improve the user experience of CLDR data contributors, and to help administer and monitor contribution activities. Following is a list of known issues that will be prioritized. Some implementations will require investigation of the current issue, and a spec for the desired behavior.

TRAC Query: Front/Backend

Description of Services

    • During a production phase when contributors are actively working in the survey tool, investigate and fix incoming tickets in order of priority determined by the CLDR TC triage.

    • Address existing bugs in order of priority determined by the CLDR TCs. The existing tickets are organized into areas of priority in the Areas and Priorities section.

    • When working through the Survey Tool code, review for improvements to the current design and bring recommendations to the CLDR TC.

    • Bring design solutions to the CLDR TC members and work with them to find the best possible solutions.

    • Attend CLDR TC meetings (as invited) to participate in triages or discuss designs.

    • Create documentation as needed (or requested by the CLDR TCs) that would explain either the existing design or new design solutions.

    • Resolve any blocking issues in the production environment, not limited to the survey tool code.

Areas and Priorities

Following is a list of areas in order of priority, with some examples of existing bugs and current pain points. Listed in each area are examples of known tickets; for the most current set, use TRAC and the keywords provided for each area.

    1. Browser compatibility. TRAC keyword: “BrowserCompat”

      The following 4 browsers need to be supported: Chrome, Firefox, Safari, Edge

      • Updated browsers (no older than 6 months)

      Example tickets: #10396 ST: on Edge or IE copy "English" or "Winning" doesn't work

    2. High priority bugs. TRAC keyword: “STP1”

      Example tickets:

      • #10323: Filipino forum does not load

      • #10521: voting page issues found in pt_PT

      • #9675: "voting participation" hard to get to, and crashes

    3. Dashboard UI. TRAC keyword: “Dashboard”

      The Dashboard is used throughout contribution phases, but most importantly during the vetting phase, and users need to be able to work quickly to prioritize and work through the items.

      Example tickets:

      • #10528: Assamese Error shown in Dashboard does not show in the data point

      • #10866: Dashboard suggestion for New

      • #10692: Dashboard losing count message does not match code

    4. Voting experience. TRAC keyword: “Voting”

      Voting is one of the fundamental actions that vetters perform. UI design is a big factor in vetters submitting correct data.

      Example tickets:

      • #10324: some sub-locales cannot add suggestion when there is approved data

      • #10485 Import: Automatically Import Vote information from previous cycle

      • Make Tips and Examples shown to the vetters follow the intended spec and easy to use

    5. Inheritance. TRAC keyword: “Inheritance”

      Inheritance is a hierarchy, both vertical and horizontal, that determines the base data set when there is no specific data available for the locale. Inheritance impacts sub-locales, but main locales are also impacted, vertically up to root and horizontally as well. Inheritance is important to vetters, because they should be using the inherited value, when it is deemed correct for the locale, to reduce data repetition and improve consistency.

      Example tickets:

      • #10466: Inheritance value is not available

      • #10574: always be able to explicitly select the “inherited” value

      • #8722: consistent colors for aliases

      • #9554: inherited nits

    6. Architectural Cleanup. TRAC keyword: “STARCH”

      • The front end and back end have developed without a clean API separation. (#7339)

      • Change the back end to present a clean and documented API to the front end (following REST best practices and separating authentication endpoints from other data endpoints), which will enable performance, security, and feature development on both the front end and the back end. (#5992)

    7. Reporting. TRAC keyword: “Reporting”

      Reporting impacts various UI areas where calculations are utilized: for example, the Priority Items, the count of votes in user views, the calculated error counts in the Dashboard, etc. Reporting is important for vetters as well as administrators, to monitor progress and to get accurate statistics.

      Example tickets:

      • #9584: "Manager" can't see "Priority Items Summary"

      • #10773: For Errors on the Priority items, show comprehensive

    8. Admin UI. TRAC keyword: “AdminUI”

      The Admin UI serves roles such as TC and Manager, for managing users and their access control in the Survey Tool.

      Example tickets:

      • UI improvements may be needed (low priority)

    9. Forum. TRAC keyword: “Forum”

      The Forum is where vetters discuss different aspects of the data itself, and come to consensus on controversial items.

      Example tickets:

      • #10935: Show all old forum posts

    10. ST General. TRAC keyword: “STGEN”

      The Survey Tool has different areas that are not specific to vetters’ actions for contributing, but that support communication and navigation within the system.

      Example tickets:

      • #7306: clipping problems in tool

      • #9326: Survey tool left nav BCP47 and Supplemental shouldn’t be there

      • #10289: The default coverage isn't set right for locales.

Milestones

Dates can be adjusted to fit invoicing requirements.