This week we held Hackweek 20 at SUSE, so I’ll try to explain here what I worked on. I recently noticed that glibc 2.33 introduced glibc-hwcaps support, which means it’s now possible to install libraries built with the expanded instruction sets of recent CPUs alongside the regularly compiled ones, and glibc will automatically pick the version optimized for the CPU in use. This sounded very nice, so I thought I’d work on that for my hackweek project.
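As a quick way to see this in action: on a system running glibc 2.33 or later, the dynamic loader can tell you which glibc-hwcaps subdirectories it would search on the current machine (the exact loader path and output wording may vary between versions):

# Ask the dynamic loader which glibc-hwcaps subdirectories it considers on
# this CPU; the ones marked as supported are searched before plain /usr/lib64.
/lib64/ld-linux-x86-64.so.2 --help | grep -A4 "glibc-hwcaps"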
My plan was to work at the package-building level: add or modify rpm macros to make it easy to build packages so that subpackages optimized for the different microarchitectures are (semi-)automatically generated, and SUSE/openSUSE users can easily install the packages with optimizations for the specific CPU they use.
The preliminary tests
I began by creating a home:alarrosa:branches:hackweek:glibc-hwcaps project in OBS that forces gcc-11 to be used by default to build every package I wanted to test. Under it I added a home:alarrosa:branches:hackweek:glibc-hwcaps:baseline subproject, where I’d build baseline versions of the packages, and a home:alarrosa:branches:hackweek:glibc-hwcaps:x86-64-v3 subproject, where I’d build the same packages with `-march=x86-64-v3 -mtune=skylake` so that they’re optimized for my CPU and I can measure the speed improvement.
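I’m not reproducing the exact project configuration I used, but the idea is that the x86-64-v3 subproject gets those flags added through its prjconf, with something along these lines (the baseline subproject simply omits the override):

# OBS prjconf sketch for the x86-64-v3 subproject; "*" applies to every
# architecture built in the project, so the flags end up in %optflags.
Optflags: * -march=x86-64-v3 -mtune=skylake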
I first thought I’d benchmark converting an x264 file to x265 using ffmpeg, so I built fdk-aac, libx264, x265 and ffmpeg-4 in both projects (baseline and x86-64-v3). The results were practically the same with both versions, but that was partly expected, since ffmpeg and most video libraries already contain hand-written assembly code paths and check the current CPU at runtime in order to run the code optimized for it.
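For reference, the transcoding test was essentially along the lines of the following command (the file names here are just placeholders):

# Re-encode an H.264 input to H.265/HEVC, keeping the audio stream as-is,
# and measure how long the conversion takes.
time ffmpeg -i input-x264.mkv -c:v libx265 -c:a copy output-x265.mkv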
So I thought I should try C/C++ libraries that aren’t video-related, which brought me to building baseline and x86-64-v3 versions of the libpng16, poppler, cairo and freetype2 libraries.
I then executed the following command to render a PNG file for each page of a large PDF file, using both sets of libraries:
time pdftocairo asio.pdf -png
The results were:
- 325.618 seconds for the baseline version (mean over 3 runs, with a 1.235-second spread between the min and max results).
- 336.672 seconds for the x86-64-v3 version (mean over 4 runs, with a 0.664-second spread between the min and max results).
Yes, you read that right: unexpectedly, the optimized version was noticeably slower. I got a bit frustrated with that result, but I still thought it might be caused by issues in the current version of the compiler that could be fixed in the future, so it seemed worth continuing with the project.
A quick test for glibc-hwcaps
I created a really small libbar dynamic library with a function that prints a message on the screen, built it three times with three different messages, and put one copy into each of /usr/lib64, /usr/lib64/glibc-hwcaps/x86-64-v2 and /usr/lib64/glibc-hwcaps/x86-64-v3. I then wrote a small foo binary that links to libbar and calls that function. Making only some of the copies available produced the expected result in every case, so I confirmed that glibc-hwcaps support works as advertised.
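I’m not pasting the exact files I used, but the whole test is small enough that a sketch along these lines (names and paths as described above, messages and soname chosen arbitrarily) reproduces it:

# Build three copies of the same trivial library, differing only in the message.
cat > bar.c << 'EOF'
#include <stdio.h>
void bar(void) { puts(MESSAGE); }
EOF

for variant in baseline x86-64-v2 x86-64-v3; do
    gcc -shared -fPIC -Wl,-soname,libbar.so.1 \
        -DMESSAGE="\"libbar built for $variant\"" \
        -o libbar.so.1.$variant bar.c
done

# Install the baseline copy in /usr/lib64 and the others in the hwcaps directories.
sudo install -D libbar.so.1.baseline  /usr/lib64/libbar.so.1
sudo install -D libbar.so.1.x86-64-v2 /usr/lib64/glibc-hwcaps/x86-64-v2/libbar.so.1
sudo install -D libbar.so.1.x86-64-v3 /usr/lib64/glibc-hwcaps/x86-64-v3/libbar.so.1
sudo ln -sf libbar.so.1 /usr/lib64/libbar.so
sudo ldconfig

# foo just calls the function from libbar.
cat > foo.c << 'EOF'
extern void bar(void);
int main(void) { bar(); return 0; }
EOF
gcc -o foo foo.c -lbar

# On a CPU supporting x86-64-v3 this prints the x86-64-v3 message; removing
# that copy (and re-running ldconfig) makes it fall back to the x86-64-v2
# one and then to the baseline.
./foo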
The microarch rpm macros
At this point (it was already Wednesday afternoon) I could start working on the rpm macros. In order to test them, I created yet another project at home:alarrosa:branches:hackweek:glibc-hwcaps:test. There I created a new package, microarch-rpm-macros, that would install… well… the rpm macros 🙂, and then another package called microarch. The latter is used, on one hand, to generate a microarch-filesystem package that owns the new directories /usr/lib64/glibc-hwcaps and /usr/lib64/glibc-hwcaps/x86-64-v[234], and on the other hand to generate three packages (microarch-x86-64-v2, microarch-x86-64-v3 and microarch-x86-64-v4) whose purpose you’ll see in a moment.
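Just to visualize that first part, the %files section of the microarch-filesystem subpackage needs little more than the ownership of those directories; a sketch (not the exact spec I wrote) would be:

# Hypothetical %files section for microarch-filesystem: it only owns the
# new glibc-hwcaps directories (%{_libdir} expands to /usr/lib64 on x86-64).
%files -n microarch-filesystem
%dir %{_libdir}/glibc-hwcaps
%dir %{_libdir}/glibc-hwcaps/x86-64-v2
%dir %{_libdir}/glibc-hwcaps/x86-64-v3
%dir %{_libdir}/glibc-hwcaps/x86-64-v4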
I worked on the rpm macros and these packages on Thursday and Friday, and by 19:00 on Friday I had everything working.
I’ll explain the rpm macros I created in my next post, so that it can be used as a reference without all the explanations in this post about how they were developed.
The rpm macros build the package four times with different optimization flags, generate the three optimized subpackages and put the library files in place. Then, after adding the repository from the test OBS project, I could do:
sudo zypper in microarch-x86-64-v3
Loading repository data…
Reading installed packages…
Resolving package dependencies…

The following 2 NEW packages are going to be installed:
  libbz2-1-x86-64-v3 microarch-x86-64-v3

2 new packages to install.
Overall download size: 52.6 KiB. Already cached: 0 B. After the operation, additional 74.4 KiB will be used.
Continue? [y/n/v/…? shows all options] (y):
So just installing the microarch-x86-64-v3 package pulls in all optimized packages for that microarchitecture automatically.
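I’m leaving the details of the macros for the next post, but to give an idea of the kind of wiring involved, an optimized subpackage can attach itself to the corresponding microarch package with an RPM boolean dependency; a purely illustrative sketch (not necessarily what my macros generate) for the bzip2 case could be:

# Hypothetical subpackage definition: the Supplements line makes the solver
# pull in libbz2-1-x86-64-v3 once both libbz2-1 and microarch-x86-64-v3
# are installed.
%package -n libbz2-1-x86-64-v3
Summary:        libbz2 built for the x86-64-v3 microarchitecture level
Requires:       libbz2-1 = %{version}
Supplements:    (libbz2-1 and microarch-x86-64-v3)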
Conclusions
I consider the hackweek project a partial success: I did what I set out to do in the original plan and it works well. There’s still work to do, of course:
- The rpm macros need to be polished (a lot) before submitting them to Factory.
- More packages apart from bzip2 should be adapted to use them.
- The macros will need to be adapted to more use cases. For example, using cmake or meson to build a package with the %microarch* macros is not tested and I have no doubt it’ll fail. Fortunately, now that the main work is done, I think this will be easy to implement.
- I need to provide no-op versions of the macros for other architectures, since currently they just fail to build packages on anything other than x86-64 (does glibc-hwcaps support microarchitecture levels on other architectures?).
And then, even if I work on all the points above, there’s still the main issue of the optimized libraries being slower than the baseline ones. In any case, once that issue is solved, all of this should bring some benefits to our distributions. The project was also a useful confirmation that using optimization flags doesn’t always mean the generated code will be faster.
Before ending I’d like to thank Florian Weimer, Michael Matz, Dario Faggioli, Albert Astals and Dan Čermák for their valuable input on this project, as well as Matěj Cepl, Ben Greiner and the rest of the authors of the great openSUSE python singlespec macros, which were the inspiration for this project.
Comments

Maybe you simply picked the wrong programs to test with, or the scope of the project (which was only about the dynamic library functionality to begin with) was too limited. Also, did you try your benchmarks with -march=x86-64-v3 -mtune=generic instead of -mtune=skylake? Which specific CPU did you use? To evaluate the usefulness I’d suggest a full distribution rebuild with x86-64-v3 vs. generic. The Phoronix Test Suite can help you with the benchmarking, as there are many test profiles which could be used to test a diverse set of workloads. This would take more effort and time though, but I am sure there are more people in the community that would be eager to try out such a test ISO release.
By the way, other distributions are also experimenting with these feature levels and found some gains from their (limited) testing (see the “Benchmarks” section: https://gitlab.archlinux.org/archlinux/rfcs/-/merge_requests/2/diffs?commit_id=1a532bcfc37e280bf69b219bb44308d863dee476). Maybe you could try to reproduce them on your hardware? I’ve personally used a gcc toolchain built with x86-64-v3 on Ubuntu (from: https://launchpad.net/~ubuntu-toolchain-r/+archive/ubuntu/x86-64-v3) on my 12-Core/24-Thread Haswell-EP Xeon and found it to be faster during compilation than the generic build.
P.S.: I forgot to thank you for sharing your experiences and your work nonetheless!
Looks like this is being worked on in RPM:
https://github.com/rpm-software-management/rpm/issues/1812
Can glibc-hwcaps apply to glibc itself? For example, could libc.so have libc-v2.so, libc-v3.so and libc-v4.so variants?