Use packtab for Unicode table packing #145463
Conversation
Most changes to Python require a NEWS entry. Add one using the blurb_it web app or the blurb command-line tool. If this change has little impact on Python users, wait for a maintainer to apply the

Change has little impact on Python users. Thanks.
This basically changes the Unicode data tables packing from a two-level to a three-level structure. The perf impact should be minimal and offset by the smaller data size.
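To make the two-level vs. three-level distinction concrete, here is a simplified sketch of multi-level table packing. This is illustrative Python only, not the code packtab actually emits; the function names, shifts, and block sizes are made up. The idea is that each extra level of indirection lets more identical sub-blocks be deduplicated, shrinking the total data at the cost of one extra indexed load per lookup.

```python
def pack2(values, shift):
    """Two-level packing: split `values` into blocks of 2**shift entries,
    deduplicate identical blocks, and return (index, data).
    Assumes len(values) is a multiple of the block size."""
    size = 1 << shift
    index, data, seen = [], [], {}
    for i in range(0, len(values), size):
        block = tuple(values[i:i + size])
        if block not in seen:
            seen[block] = len(data) // size  # block number of this new block
            data.extend(block)
        index.append(seen[block])
    return index, data

def lookup2(index, data, shift, cp):
    """Two-level lookup: high bits pick a block, low bits pick the entry."""
    return data[(index[cp >> shift] << shift) + (cp & ((1 << shift) - 1))]

def pack3(values, shift1, shift2):
    """Three-level packing: pack the values, then pack the resulting
    index array again, adding one more layer of block sharing."""
    index2, data = pack2(values, shift2)
    index1, index2 = pack2(index2, shift1 - shift2)
    return index1, index2, data

def lookup3(index1, index2, data, shift1, shift2, cp):
    """Three-level lookup: two index hops, then the data array."""
    block = lookup2(index1, index2, shift1 - shift2, cp >> shift2)
    return data[(block << shift2) + (cp & ((1 << shift2) - 1))]

# Demo on a run-heavy array (Unicode property tables have long runs):
values = [i // 64 for i in range(4096)]
i1, i2, d = pack3(values, 8, 4)
print(len(i1) + len(i2) + len(d), "entries vs", len(values), "unpacked")
```

Real property data is far more run-heavy than random data, which is why the extra level usually pays for itself in size.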
Vendor harfbuzz/packtab under Tools/unicode/packtab and use it in Tools/unicode/makeunicodedata.py to generate packed lookup helpers for the main codepoint->record/type maps. Switch unicodedata and unicodectype runtime lookups to those generated helpers.

Measured on macOS arm64 builds:
- python.exe: 6163528 -> 6092632 bytes (-70896, -1.15%)
- unicodedata.so: 772352 -> 722912 bytes (-49440, -6.40%)
- combined shipped: 6935880 -> 6815544 bytes (-120336, -1.73%)
Replace the remaining split-bin Unicode lookup tables in the unicodedata path with packtab-generated helpers for:
- decomposition indexes
- NFC composition pairs
- Unicode name inverse codepoint lookup
- legacy 3.2.0 change indexes

Measured on macOS arm64 builds versus clean HEAD:
- python.exe: 6163528 -> 6092632 bytes (-70896, -1.15%)
- unicodedata.so: 772352 -> 673344 bytes (-99008, -12.82%)
- combined shipped: 6935880 -> 6765976 bytes (-169904, -2.45%)
All Unicode table lookups in this generator now emit packtab-based helpers, so the old splitbins compressor is no longer used. Validated by regenerating Unicode data, rebuilding python.exe and unicodedata.so, and running test_unicodedata and test_tools.
Do we even need to vendor it? It is a tool after all; we could just install it for regeneration. What is the size difference of the files? Do you have benchmarks?
Sure. I vendored it so that the artifacts can be exactly reproduced.
The binary sizes are reported in the opening comment. SLOC is a net reduction.
I'll get some benchmarks.
Add a small Python-level benchmark under Tools/unicode for comparing unicodedata.category() lookup speed across builds on three fixed workloads: all code points, BMP only, and ASCII only.

Current results from optimized non-debug builds (-O3 -DNDEBUG), comparing clean HEAD vs the packtab branch:
- all: baseline 98.98 ns median, packtab 108.44 ns median
- bmp: baseline 97.44 ns median, packtab 105.01 ns median
- ascii: baseline 83.80 ns median, packtab 82.53 ns median
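For reference, a benchmark like the one described above can be sketched as follows. This is a hypothetical simplification, not the actual script added under Tools/unicode; the workload names match the commit message, but the structure and repeat counts are made up.

```python
# Hypothetical sketch of a per-call latency benchmark for
# unicodedata.category() over fixed workloads (all / bmp / ascii).
import statistics
import time
import unicodedata

WORKLOADS = {
    # Skip surrogates, which are not valid as chr() arguments in str form use
    # but carry no useful category data for this comparison.
    "all": [chr(cp) for cp in range(0x110000) if not 0xD800 <= cp <= 0xDFFF],
    "bmp": [chr(cp) for cp in range(0x10000) if not 0xD800 <= cp <= 0xDFFF],
    "ascii": [chr(cp) for cp in range(0x80)],
}

def bench(chars, repeats=3):
    """Return the median per-lookup time in nanoseconds."""
    cat = unicodedata.category  # avoid attribute lookup in the hot loop
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter_ns()
        for ch in chars:
            cat(ch)
        samples.append((time.perf_counter_ns() - t0) / len(chars))
    return statistics.median(samples)

for name, chars in WORKLOADS.items():
    print(f"{name}: {bench(chars):.2f} ns/lookup")
```

Running the same script against a baseline build and the packtab build gives directly comparable per-workload medians.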
I added a small Python-level unicodedata.category() benchmark under Tools/unicode. Median results on my machine are listed above.
So in this Python-level benchmark, the packtab version is slightly faster for ASCII, but slower for BMP/full-Unicode lookups. My current hypothesis is:
That said, many real unicodedata workloads have strong codepoint locality, since Unicode scripts are generally encoded in contiguous ranges. A uniform full-space scan is therefore useful as a stress test, but it is not necessarily representative of typical text-processing access patterns. The space win is real, but at least in this benchmark it trades some non-ASCII lookup speed for reduced binary size and somewhat better hot-cache behavior on tiny working sets.
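The locality point can be illustrated with a quick sketch (hypothetical code, not part of the benchmark): a workload drawn from one contiguous script block touches very few table blocks, so those blocks stay hot in cache, while a uniform scan touches nearly all of them.

```python
# Compare how many distinct 256-codepoint blocks each workload touches.
# The block count is a rough proxy for table cache footprint.
import random

random.seed(0)

# Uniform scan: code points spread across the full code space.
uniform = [cp for cp in (random.randrange(0x110000) for _ in range(10_000))
           if not 0xD800 <= cp <= 0xDFFF]

# Locality-heavy: code points from one contiguous script range
# (here the Cyrillic block, U+0400..U+04FF, chosen as an example).
local = [random.randrange(0x0400, 0x0500) for _ in range(10_000)]

for name, cps in [("uniform", uniform), ("local", local)]:
    blocks = {cp >> 8 for cp in cps}
    print(f"{name}: touches {len(blocks)} distinct 256-codepoint blocks")
```

On such a workload the extra indirection level is mostly cache-resident, which is consistent with the ASCII numbers above being a wash.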
So we save ~12% in the size of the file, but slow down lookups in some cases by 10%. Additionally, we greatly increase the maintenance burden (I also see the vendored files are already two releases behind?). I'm not convinced this is worth it; it seems to be costing us more than it is saving. -0.5 on this currently.
Thanks for looking. |
This vendors packtab under Tools/unicode/packtab and uses it to regenerate CPython's Unicode lookup tables. packtab is a small table-packing generator that emits compact lookup code for large static tables.

This switches the generated unicodectype and unicodedata lookup paths away from the old split-bin tables and over to packtab-generated helpers, including the decomposition, NFC composition, Unicode name inverse, and UCD 3.2.0 change tables.

On a macOS arm64 build, this reduced:
- python.exe: 6163528 -> 6092632 bytes (-70896, -1.15%)
- unicodedata.so: 772352 -> 673344 bytes (-99008, -12.82%)

Tests run:
./python.exe -m test -j0 test_unicodedata test_tools