
Use packtab for Unicode table packing#145463

Closed
behdad wants to merge 4 commits into python:main from behdad:packtab

Conversation

@behdad

@behdad behdad commented Mar 3, 2026

This vendors packtab under Tools/unicode/packtab and uses it to regenerate
CPython's Unicode lookup tables.

packtab is a small table-packing generator that emits compact lookup code
for large static tables.

This switches the generated unicodectype and unicodedata lookup paths
away from the old split-bin tables and over to packtab-generated helpers,
including the decomposition, NFC composition, Unicode name inverse, and
UCD 3.2.0 change tables.

On a macOS arm64 build, this reduced:

  • python.exe: 6163528 -> 6092632 bytes (-70896, -1.15%)
  • unicodedata.so: 772352 -> 673344 bytes (-99008, -12.82%)
  • combined shipped: 6935880 -> 6765976 bytes (-169904, -2.45%)

Tests run:

  • ./python.exe -m test -j0 test_unicodedata test_tools

@python-cla-bot

python-cla-bot bot commented Mar 3, 2026

All commit authors signed the Contributor License Agreement.

CLA signed

@bedevere-app

bedevere-app bot commented Mar 3, 2026

Most changes to Python require a NEWS entry. Add one using the blurb_it web app or the blurb command-line tool.

If this change has little impact on Python users, wait for a maintainer to apply the skip news label instead.

@behdad
Author

behdad commented Mar 3, 2026

If this change has little impact on Python users, wait for a maintainer to apply the skip news label instead.

Change has little impact on Python users. Thanks.

@behdad
Author

behdad commented Mar 3, 2026

This basically changes the Unicode data tables packing from a two-level to a three-level structure. The perf impact should be minimal and offset by smaller data size.
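As a rough sketch of what the two-level versus three-level packing means for lookups (hypothetical table names and shift widths for illustration, not the actual generated code), the two schemes differ by one extra indirection:

```python
# Hypothetical two-level (splitbins-style) lookup: index1 picks a block of
# 2**shift entries in index2; the low bits of the codepoint index within it.
def lookup_two_level(index1, index2, shift, cp):
    block = index1[cp >> shift]
    return index2[(block << shift) + (cp & ((1 << shift) - 1))]

# Hypothetical three-level (packtab-style) lookup: one more indirection lets
# each level be packed tighter, at the cost of one extra memory fetch.
def lookup_three_level(t0, t1, t2, s1, s2, cp):
    a = t0[cp >> (s1 + s2)]
    b = t1[(a << s1) + ((cp >> s2) & ((1 << s1) - 1))]
    return t2[(b << s2) + (cp & ((1 << s2) - 1))]

# Trivial demo tables with no deduplication, so both lookups reproduce a flat
# 256-entry table exactly; real generated tables share identical blocks to
# shrink the total footprint.
data = [(cp * 7) % 13 for cp in range(256)]
assert lookup_two_level(list(range(16)), data, 4, 100) == data[100]
assert lookup_three_level(list(range(16)), list(range(64)), data, 2, 2, 200) == data[200]
```

The space saving comes from the extra level exposing more duplicate blocks to share; the speed cost is the additional dependent load per lookup.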

behdad added 3 commits March 3, 2026 08:10
Vendor harfbuzz/packtab under Tools/unicode/packtab and use it in
Tools/unicode/makeunicodedata.py to generate packed lookup helpers for
the main codepoint->record/type maps.

Switch unicodedata and unicodectype runtime lookups to those generated
helpers.

Measured on macOS arm64 builds:
- python.exe: 6163528 -> 6092632 bytes (-70896, -1.15%)
- unicodedata.so: 772352 -> 722912 bytes (-49440, -6.40%)
- combined shipped: 6935880 -> 6815544 bytes (-120336, -1.73%)
Replace the remaining split-bin Unicode lookup tables in the unicodedata
path with packtab-generated helpers for:
- decomposition indexes
- NFC composition pairs
- Unicode name inverse codepoint lookup
- legacy 3.2.0 change indexes

Measured on macOS arm64 builds versus clean HEAD:
- python.exe: 6163528 -> 6092632 bytes (-70896, -1.15%)
- unicodedata.so: 772352 -> 673344 bytes (-99008, -12.82%)
- combined shipped: 6935880 -> 6765976 bytes (-169904, -2.45%)
All Unicode table lookups in this generator now emit packtab-based
helpers, so the old splitbins compressor is no longer used.

Validated by regenerating Unicode data, rebuilding python.exe and
unicodedata.so, and running test_unicodedata and test_tools.
@StanFromIreland
Member

StanFromIreland commented Mar 3, 2026

Do we even need to vendor it? It is a tool, after all; we could just install it for regeneration.

What is the size difference of the files? Do you have benchmarks?

@behdad
Author

behdad commented Mar 3, 2026

Do we even need to vendor it? It is a tool, after all; we could just install it for regeneration.

Sure. I vendored it such that the artifacts can be exactly reproduced.

What is the size difference of the files? Do you have benchmarks?

The binary sizes are reported in the opening comment. The source line count is a net reduction:

% git diff main.. | diffstat
 Modules/unicodedata.c                     |   21 
 Modules/unicodedata_db.h                  |10323 +++++++++++++++++++++++++++++++-------------------------------------------------------------
 Modules/unicodename_db.h                  | 7845 ++++++++++++++++++++++++++++-----------------------------------------
 Objects/unicodectype.c                    |    7 
 Objects/unicodetype_db.h                  | 3635 +++++++++-----------------------
 Tools/unicode/makeunicodedata.py          |  178 -
 Tools/unicode/packtab/LICENSE             |  201 +
 Tools/unicode/packtab/README.md           |  184 +
 Tools/unicode/packtab/packTab/__init__.py | 1701 +++++++++++++++
 Tools/unicode/packtab/packTab/__main__.py |  270 ++
 10 files changed, 10146 insertions(+), 14219 deletions(-)

@behdad
Author

behdad commented Mar 3, 2026

do you have benchmarks?

I'll get some benchmarks.

Add a small Python-level benchmark under Tools/unicode for comparing
unicodedata.category() lookup speed across builds on three fixed
workloads: all code points, BMP only, and ASCII only.

Current results from optimized non-debug builds (-O3 -DNDEBUG),
comparing clean HEAD vs the packtab branch:
- all: baseline 98.98 ns median, packtab 108.44 ns median
- bmp: baseline 97.44 ns median, packtab 105.01 ns median
- ascii: baseline 83.80 ns median, packtab 82.53 ns median
@bedevere-app

bedevere-app bot commented Mar 3, 2026

Most changes to Python require a NEWS entry. Add one using the blurb_it web app or the blurb command-line tool.

If this change has little impact on Python users, wait for a maintainer to apply the skip news label instead.

@behdad
Author

behdad commented Mar 3, 2026

do you have benchmarks?

I'll get some benchmarks.

I added a small Python-level unicodedata.category() benchmark in Tools/unicode/benchmark_unicodedata_category.py and compared an optimized non-debug main build (-O3 -DNDEBUG) against this packtab branch.

Median results on my machine:

  • Full Unicode space:
    • main: 98.98 ns/lookup
    • packtab: 108.44 ns/lookup
  • BMP only:
    • main: 97.44 ns/lookup
    • packtab: 105.01 ns/lookup
  • ASCII only:
    • main: 83.80 ns/lookup
    • packtab: 82.53 ns/lookup

So in this Python-level benchmark, the packtab version is slightly faster for ASCII, but slower for BMP/full-Unicode lookups.

My current hypothesis is:

  • ASCII benefits from the smaller table footprint, which likely improves cache behavior.
  • The broader Unicode cases pay for the extra lookup indirection: the old split-bin scheme was effectively 2 table fetches, while the current packtab layouts here often take 3 fetches, scattered across more tables.

That said, many real unicodedata workloads have strong codepoint locality, since Unicode scripts are generally encoded in contiguous ranges. So a uniform full-space scan is useful as a stress test, but it is not necessarily representative of typical text-processing access patterns.

So the space win is real, but at least in this benchmark it appears to trade some non-ASCII lookup speed for reduced binary size and somewhat better hot-cache behavior on tiny working sets.
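A minimal sketch of this kind of category-lookup benchmark (an illustration with made-up helper names, not the actual Tools/unicode/benchmark_unicodedata_category.py script) could look like:

```python
import timeit
import unicodedata

def median_ns_per_lookup(codepoints, repeat=5):
    """Median time, in ns, of one unicodedata.category() call over a workload."""
    chars = [chr(cp) for cp in codepoints]
    cat = unicodedata.category
    def run():
        for ch in chars:
            cat(ch)
    # repeat=5 single passes; take the median to damp scheduler noise.
    times = sorted(timeit.repeat(run, number=1, repeat=repeat))
    return times[len(times) // 2] / len(chars) * 1e9

# Three fixed workloads mirroring the ones above: full Unicode space,
# BMP only, and ASCII only. Surrogates are excluded: chr() accepts them,
# but they are not meaningful text.
workloads = {
    "all": [cp for cp in range(0x110000) if not 0xD800 <= cp <= 0xDFFF],
    "bmp": [cp for cp in range(0x10000) if not 0xD800 <= cp <= 0xDFFF],
    "ascii": list(range(0x80)),
}

if __name__ == "__main__":
    for name, cps in workloads.items():
        print(f"{name}: {median_ns_per_lookup(cps):.2f} ns/lookup")
```

Running the same script against two interpreter builds (baseline and patched) and comparing the medians per workload is the comparison described above.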

@StanFromIreland
Member

StanFromIreland commented Mar 3, 2026

So, we save ~12% in the size of the file, but slow down lookup in some cases by 10%. Additionally, we greatly increase the maintenance burden (I also see the vendored files are already two releases behind?). I'm not convinced this is worth it; it seems to be costing us more than it is saving. -0.5 on this currently.

@behdad
Author

behdad commented Mar 3, 2026

Thanks for looking.

@behdad behdad closed this Mar 3, 2026
