Microarchitecture rpm macros

During Hackweek 20 at SUSE I created some rpm macros to make it easier to build packages that use the glibc-hwcaps feature. There's a post with the journal from that hackweek in case you want to read it. Here I'll just explain how to use the new macros I created.

The package definition

First you have to add a BuildRequires to use the macros:

BuildRequires:  microarch-rpm-macros

Then before the %description section, you have to add a line like:

%microarch_subpackage -n %{libname}

The %microarch_subpackage macro is used to generate the subpackage sections. It's important that the parameter passed to it is the same as the one passed to the %package section that defines the library package. It also generates a %files section with the same contents as the %files section of the library package, but with the directory adapted to the microarchitecture of each subpackage.
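
For example, if the library subpackage is declared like this (the library name, summary and file list below are just placeholders, not taken from a real spec):

%package -n %{libname}
Summary:        Shared library

%microarch_subpackage -n %{libname}

%description
...

%files -n %{libname}
%{_libdir}/libfoo.so.1*

then %microarch_subpackage generates an equivalent %package and %files pair for each microarchitecture flavor.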

The %build section

Let’s consider the following code in the build section:

autoreconf -fiv
%configure \
  --with-pic \
  --disable-static
make %{?_smp_mflags} CFLAGS="%{optflags}"

We will replace that with:

autoreconf -fiv
%{microarch_build %$configure \
  --with-pic \
  --disable-static
  make %{?_smp_mflags} CFLAGS="%{$optflags}"
}

The %microarch_build macro takes care of executing the enclosed code four times: once to build the baseline version of the package and then once each for the x86-64-v2, x86-64-v3 and x86-64-v4 versions. Each build runs in a different directory and with different %optflags values, which include the respective -march and -mtune parameters, as well as a different %_libdir value so the library is installed to the right place later in %install.

Note that %configure was replaced with %$configure and %{optflags} was replaced with %{$optflags}. This is done so that they're not expanded before the arguments are passed to %microarch_build.

Also note that the autoreconf call was left outside the macro. This is so that the configure script is generated in the root source directory; the %microarch_build macro can then create build.x86-64-vN directories and put the build files there.
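
Conceptually, each non-baseline iteration does something roughly like the following. This is only an illustration of the idea, not the actual macro implementation; the directory name follows the build.x86-64-vN pattern and the extra flags and libdir shown here are derived by the macro:

mkdir build.x86-64-v3
cd build.x86-64-v3
CFLAGS="%{optflags} -march=x86-64-v3" \
  ../configure --libdir=%{_libdir}/glibc-hwcaps/x86-64-v3 --with-pic --disable-static
make %{?_smp_mflags} CFLAGS="%{optflags} -march=x86-64-v3"
cd ..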

In my test with the bzip2 package I had a special case: if the %do_profiling boolean is set, the code is built, the tests are executed, and then the code is built again using the generated profiling information. The %build section used was:

%configure \
  --with-pic \
  --disable-static
%if 0%{?do_profiling}
  make %{?_smp_mflags} CFLAGS="%{optflags} %{cflags_profile_generate}"
  make %{?_smp_mflags} CFLAGS="%{optflags} %{cflags_profile_generate}" test
  make %{?_smp_mflags} clean
  make %{?_smp_mflags} CFLAGS="%{optflags} %{cflags_profile_feedback}"
%else
  make %{?_smp_mflags} CFLAGS="%{optflags}"
%endif

And I replaced that with:

%{microarch_build %$configure \
  --with-pic \
  --disable-static
%if 0%{?do_profiling} && "%{$microarch_current_flavor}" == "x86-64"
  make %{?_smp_mflags} CFLAGS="%{$optflags} %{cflags_profile_generate}"
  make %{?_smp_mflags} CFLAGS="%{$optflags} %{cflags_profile_generate}" test
  make %{?_smp_mflags} clean
  make %{?_smp_mflags} CFLAGS="%{$optflags} %{cflags_profile_feedback}"
%else
  make %{?_smp_mflags} CFLAGS="%{$optflags}"
%endif
}

Note that here again I used a $ within %{$microarch_current_flavor} so that it's expanded to the right value for each flavor.

The %install section

%install sections usually consist of running something like %make_install and then maybe installing some files manually. In this case, we would replace this:

%make_install pkgconfigdir=%{_libdir}/pkgconfig
install -Dpm 0755 foo %{buildroot}%{_bindir}/foo
install -m 0644 %{SOURCE2} %{buildroot}%{_mandir}/man1

with:

%microarch_install %make_install pkgconfigdir=%{_libdir}/pkgconfig
install -Dpm 0755 foo %{buildroot}%{_bindir}/foo
install -m 0644 %{SOURCE2} %{buildroot}%{_mandir}/man1

%microarch_install will run the command passed to it four times, but in a specific order: the microarch flavors are installed first and the baseline flavor last.

Note that after the flavors are installed and before the baseline installation is done, it removes all *.so files within the glibc-hwcaps directories, since we don't want development files in there.
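
Conceptually, that cleanup is equivalent to something like this (again, just a sketch of what the macro does, not its actual code):

rm -f %{buildroot}%{_libdir}/glibc-hwcaps/x86-64-v*/*.so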

The %check section

In the %check section packages usually run the generated binaries to check that they work as expected. We can't do that for all flavors, since the build machine may not have a recent enough CPU to run them. Because of this, I opted to check just the baseline flavor.

Just replace

make %{?_smp_mflags} test

or anything you have to run from the build directory in %check with:

pushd %microarch_baseline_builddir
make %{?_smp_mflags} test
popd

And that should be enough for the simple case of bzip2 and similar packages.

Please note that this is a work in progress: using the macros with %cmake or %meson currently fails and is not supported yet. Check the conclusions of the previous post for information about what's still missing.

Hackweek 20: glibc-hwcaps in openSUSE

This week we held Hackweek 20 at SUSE, so I'll try to explain here what I worked on. I recently noticed that glibc 2.33 introduced hwcaps support, which means it's now possible to install libraries built for the expanded instruction sets of recent CPUs alongside the regularly compiled libraries, and glibc will automatically choose the version optimized for the CPU in use. This sounded very nice, so I thought I'd work on that for my hackweek project.

My plan was to work at the package building level: add or modify rpm macros to make it easy to build packages so that subpackages optimized for the different microarchitectures are (semi-)automatically generated, and SUSE/openSUSE users can easily install those packages with optimizations for the specific CPU in use.

The preliminary tests

I began by creating a home:alarrosa:branches:hackweek:glibc-hwcaps project in OBS to force gcc-11 to be used by default to build every package I wanted to test. I then added a home:alarrosa:branches:hackweek:glibc-hwcaps:baseline subproject where I'd build baseline versions of packages, and a home:alarrosa:branches:hackweek:glibc-hwcaps:x86-64-v3 project where I'd build the packages using `-march=x86-64-v3 -mtune=skylake`, so they're optimized for my CPU and I can measure the speed improvement.
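
I'm not reproducing the exact project setup here, but as a rough idea, per-project optimization flags can be overridden from the project configuration (prjconf) in OBS; a minimal sketch, assuming the Optflags directive and leaving out the rest of the default flag set:

Optflags: x86_64 -O2 -g -march=x86-64-v3 -mtune=skylake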

I first thought I'd benchmark converting an x264 file to x265 using ffmpeg, so I built fdk-aac, libx264, x265 and ffmpeg-4 in both projects (baseline and x86-64-v3). The results were practically the same with both versions, but that was partly expected, since ffmpeg and most video libraries already contain code to detect the current CPU and run assembly routines specifically optimized for it.

So I thought I should try a C/C++ library that's not video-related, which brought me to building baseline and x86-64-v3 versions of the libpng16, poppler, cairo and freetype2 libraries.

I then executed the following command to render png files for each page of a large pdf file using both sets of libraries:

time pdftocairo asio.pdf -png

The results were:

  • 325.618 seconds (mean over 3 runs with 1.235 seconds of difference between the min and max results) for the baseline version.
  • 336.672 seconds (mean over 4 runs with 0.664 seconds of difference between the min and max results) for the x86-64-v3 version.

Yes, you read that right. Unexpectedly, the optimized version was noticeably slower. I was a bit frustrated with that result, but I still thought it might be caused by problems in the current version of the compiler that could be fixed in the future, so it seemed worth continuing to work on the project.

A quick test for glibc-hwcaps

I created a really small libbar dynamic library with a function that prints a message on the screen, built it three times with three different messages and put each copy into /usr/lib64, /usr/lib64/glibc-hwcaps/x86-64-v2 and /usr/lib64/glibc-hwcaps/x86-64-v3. I then wrote a small foo binary that linked to libbar and called that function. Making only some of the libraries available always led to the expected variant being loaded, so I confirmed that glibc-hwcaps support works as expected.
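
A hedged reconstruction of that test could look like this, assuming trivial bar.c and foo.c files (bar.c exports a function that prints the MSG string, foo.c just calls it):

# build three variants of the same library, each printing a different message
gcc -shared -fPIC -Wl,-soname,libbar.so.1 -DMSG='"baseline"'  -o libbar.so.1 bar.c
gcc -shared -fPIC -Wl,-soname,libbar.so.1 -DMSG='"x86-64-v2"' -o libbar-v2.so.1 bar.c
gcc -shared -fPIC -Wl,-soname,libbar.so.1 -DMSG='"x86-64-v3"' -o libbar-v3.so.1 bar.c
sudo install -D -m 0755 libbar.so.1    /usr/lib64/libbar.so.1
sudo install -D -m 0755 libbar-v2.so.1 /usr/lib64/glibc-hwcaps/x86-64-v2/libbar.so.1
sudo install -D -m 0755 libbar-v3.so.1 /usr/lib64/glibc-hwcaps/x86-64-v3/libbar.so.1
sudo ln -sf libbar.so.1 /usr/lib64/libbar.so   # development symlink so -lbar resolves
gcc -o foo foo.c -lbar
./foo   # prints the message of the variant glibc picked for this CPU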

The microarch rpm macros

At this point (it was already Wednesday afternoon) I could start working on the rpm macros. In order to test them, I created yet another project at home:alarrosa:branches:hackweek:glibc-hwcaps:test. There I created a new package, microarch-rpm-macros, that would install … well… the rpm macros 🙂 and then another package called microarch, used on one hand to generate a microarch-filesystem package that owns the new directories /usr/lib64/glibc-hwcaps and /usr/lib64/glibc-hwcaps/x86-64-v[234], and on the other hand to generate three packages (microarch-x86-64-v2, microarch-x86-64-v3 and microarch-x86-64-v4) whose purpose you'll see in a moment.

I worked on the rpm macros and these packages on Thursday and Friday and by 19:00 on Friday I got everything working.

I'll explain the rpm macros I created in my next post, so that it can be used as a reference without all the explanations in this post about how they were developed.

The rpm macros built the package four times with different optimization flags, generated all three subpackages with the optimized versions and put the library files in place. Then, after adding the repository from the test OBS project, I could do:

sudo zypper in microarch-x86-64-v3
Loading repository data…
Reading installed packages…
Resolving package dependencies…

The following 2 NEW packages are going to be installed:
 libbz2-1-x86-64-v3 microarch-x86-64-v3

2 new packages to install.
Overall download size: 52.6 KiB. Already cached: 0 B. After the operation, additional 74.4 KiB will be used.
Continue? [y/n/v/…? shows all options] (y):

So just installing the microarch-x86-64-v3 package pulls in all optimized packages for that microarchitecture automatically.
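
I'm not showing the exact dependencies the macros generate, but as a rough idea of how this kind of automatic pull-in is usually wired up in openSUSE, the optimized subpackage can carry a boolean Supplements on both the baseline library and the corresponding microarch package; a hedged sketch, not necessarily what my macros emit:

Supplements:    (libbz2-1 and microarch-x86-64-v3)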

Conclusions

I consider the hackweek project a partial success. I did what I wanted in the original plan and it works well. There's still work to do, of course:

  • The rpm macros need to be polished (a lot) before submitting them to Factory.
  • More packages apart from bzip2 should be adapted to use them.
  • The macros will need to be adapted to more use cases. For example, using cmake or meson to build a package with the %microarch* macros is not tested and I have no doubt it'll fail. Fortunately, now that the main work is done, I think this will be easy to implement.
  • I need to provide NOOP versions of the macros for other architectures, since currently they just fail to build packages on anything other than x86-64 (does glibc-hwcaps support microarchitecture levels on other architectures?).

And then, even if I work on all the points above, there's still the main issue of the optimized libraries being slower than the baseline ones. In any case, once that issue is solved, all of this should bring some benefits to our distributions. The project was also useful as a confirmation that using optimization flags doesn't always mean the generated code will be faster.

Before ending I'd like to thank Florian Weimer, Michael Matz, Dario Faggioli, Albert Astals and Dan Čermák for their valuable input on this project, as well as Matěj Cepl, Ben Greiner and the rest of the authors of the great openSUSE python singlespec macros, which were the inspiration for this project.

Digital signatures with autofirma/@firma/clienteafirma on openSUSE

This post explains how to install the Spanish government's official tool to generate digital signatures, and the web applet used on Spanish government web sites to sign documents that the user needs to upload. If enough people request it, I'll also write an English version, but for now, as it's mainly useful for Spanish openSUSE/SUSE users, I'll write the rest of the post in Spanish.

In this post I'm going to explain how to install AutoFirma (also called @firma or clienteafirma in different versions) on openSUSE Leap (15 and 42.3), openSUSE Tumbleweed and SUSE Linux Enterprise.

The quick and easy method

To install AutoFirma on openSUSE, you can simply use the 1-click install by clicking the following button:

The alternative method

If you prefer using the command line, you can add my repository of stable packages that are not yet in the distribution and install the autofirma package from there. In this repository I try to maintain stable versions of packages that I think may be interesting, or that I personally find interesting, so it should be safe to add it to the system repositories. Moreover, if at some point I fix a problem or update the packaged version to a more recent one, you'll get the improvements automatically when updating the system. The repository is home:alarrosa:packages and to add it you have to use:

On openSUSE Tumbleweed:

sudo zypper ar -f https://download.opensuse.org/repositories/home:/alarrosa:/packages/openSUSE_Tumbleweed/home:alarrosa:packages.repo

On openSUSE Leap 15:

sudo zypper ar -f https://download.opensuse.org/repositories/home:/alarrosa:/packages/openSUSE_Leap_15.0/home:alarrosa:packages.repo

On SLE 15:

sudo zypper ar -f https://download.opensuse.org/repositories/home:/alarrosa:/packages/SLE_15/home:alarrosa:packages.repo

And to install the package:

sudo zypper in autofirma

How to use autofirma

The easiest way is to add the certificate downloaded from the FNMT website to the Firefox certificate store (in Preferences / Privacy & Security / Certificates). Most actions also allow using PKCS #12 (.p12) files directly.

The AutoFirma and Cliente @firma applications will appear in the Office section of the application menu.

AutoFirma

AutoFirma makes it very easy to sign documents.

Dialog to select the rectangle where the visible signature will be placed.

There's no need to change the default options.

Message confirming the document was signed correctly.

Cliente @firma

Cliente @firma offers more signing options.

Selecting the certificate to sign with.

With Cliente @firma you can validate whether a signature is correct.

Message confirming the signature was validated correctly.

About the autofirma package

The latest officially distributed version (at the time of writing, 1.6.3 for Windows and 1.6.2 for Linux) can't read the Firefox certificate store correctly, which is why I decided to package the development version from GitHub. I also made some changes so that it finds the system and Firefox libraries correctly on openSUSE, and I added .desktop files to better integrate clienteafirma/autofirma into the desktop.

To finish, I'll just mention that I was pleasantly surprised to see that the autofirma source code is available on GitHub under a free license (GPL 2.0 and EUPL 1.1). I'd like to thank the autofirma developers for their commitment to free software. In the coming weeks I'll try to send them the changes I made so they can include them in the official versions, in case they're useful to other distributions.

Update: Added instructions for SLE 15

How to install the Mycroft A.I. on openSUSE Tumbleweed

Haven't you ever wanted an open source artificial intelligence assistant like the ones some companies provide on their phones/desktops or even at your home, but without the privacy concerns? In July last year, during Akademy 2017, I saw a great presentation of Mycroft and the Mycroft plasmoid by Aditya Mehra. Mycroft can understand and answer questions the user asks through the microphone and can also perform actions the user requests. I immediately knew I wanted that in openSUSE. You can watch the talk in the video below to see what I mean:

Unfortunately, Mycroft had a lot of dependencies and an unorthodox install system, so I didn't do much with it at first. Then November came and we had a hackweek at SUSE (in short, a week SUSE developers can use to work on personal projects), so I started this project to package all of Mycroft's dependencies along with Mycroft itself and the Mycroft plasmoid, and to modify Mycroft to integrate into a regular Linux system. Since then, I've been using some of my free time to update the packages, and the result is that Mycroft can now be easily installed on openSUSE Tumbleweed with a couple of commands, following standard procedures.

I’ll give the installation instructions below, but first of all, let me give some clarifications:

  • Mycroft has many dependencies, including over 50 python packages, many of which are not in the Tumbleweed repositories yet (as of 2018-03-06). This is just a matter of time, but in the meantime you'll have to add a couple of development repositories.
  • If you’re using openSUSE Leap 42.3, then I’m afraid Mycroft can’t be installed there. The good news is that once Leap 15 is released, you’ll want to revisit this blog as there’ll surely be better news by then.
  • Mycroft developers were nice enough to accept a patch I sent them to allow it to be installed in standard distribution directories. I think it would be nice if they also used those on their Mycroft platforms, but of course, that's their call. Also, Mycroft seems to have always been designed to run in virtualenvs, which is not a recommended way to package something for a regular Linux distribution, so the packages are patched to make Mycroft use the system python packages (more on this in the Changes section below, which I strongly recommend reading before installing the packages).

Installation instructions

First, you have to add the devel:languages:python repository to zypper, which contains development python packages that haven’t been accepted (yet) into Tumbleweed:

sudo zypper ar -f https://download.opensuse.org/repositories/devel:/languages:/python/openSUSE_Tumbleweed/devel:languages:python.repo

Then, you have to add the repository of the OBS project I use to release usable packages that are for any reason not yet in the official distribution repositories:

sudo zypper ar -f https://download.opensuse.org/repositories/home:/alarrosa:/packages/openSUSE_Tumbleweed/home:alarrosa:packages.repo

Note that both commands above are just one long line each.

Now, you can install the mycroft-core and plasma-mycroft packages, which should pull in all their dependencies:

sudo zypper in mycroft-core plasma-mycroft

It will ask you to trust the keys of the added repositories. On a clean Tumbleweed system the command installs 160 packages, and after it finishes you can add the Mycroft plasmoid to the Plasma desktop.

Once installed, you can use the plasmoid to start the Mycroft services, ask something (in the example below, I said on the microphone “Hey Mycroft, what is 2 + 2 ?”) and stop Mycroft.

But before using Mycroft you have to pair it. The first time you start it, it will give you a code composed of 6 alphanumeric characters. Go to home.mycroft.ai, create a free account and register your “device” by entering the code.

And that's all! You should now be able to use Mycroft on your system and maybe even install new skills. A skill is a module that adds a certain capability to Mycroft (for example, if you add the plasma-user-control-skill, Mycroft will understand you when you say "Hey Mycroft, lock the screen" and lock the screen as you requested). Skills can be listed/installed/removed using the plasmoid or the msm command-line application.
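
For reference, managing skills from the command line looks roughly like this (hedged, assuming the usual msm subcommands; skill-desktop-launcher is just an example name taken from the skills mentioned further below):

msm list                              # list available and installed skills
msm install skill-desktop-launcher    # install a skill
msm remove skill-desktop-launcher     # remove it again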

In any case, please note this is still a work in progress and some features may not work well. Also, I made some changes to the mycroft-core and plasma-mycroft code in order to install them on a Linux system and allow everything to work without a python virtual environment, so this might break stuff too. Please don't blame the Mycroft developers for issues you find with these packages, and if you report any issue, I think it's better to mention it first in the comments section of this post before submitting a bug report to github.com/MycroftAI and bothering them with problems that might not be their fault.

Changes with respect to the upstream version

What did I change with respect to the code provided by the Mycroft developers? Well, first, I included some upstream patches to make mycroft-core use python3 and installed it like any other python3 application, in /usr/lib/python3.6/site-packages/. That way we're also helping the Mycroft developers with the planned upgrade to python3 by testing it early.

I also changed the way Mycroft is started so it feels more natural on a Linux desktop. For this, I created some systemd units based on the ones done for Arch Linux. The idea is that there's a systemd user target called mycroft.target that runs four systemd user services when started (mycroft-bus, mycroft-skills, mycroft-audio and mycroft-voice). Of course, it also stops them when the target is stopped. This is all hidden from the user, who can just start/stop Mycroft by toggling a switch in the plasmoid.
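
This also means Mycroft can be controlled directly with systemctl if you prefer the command line (assuming the service names above map one-to-one to .service units):

# start or stop all four services at once through the target
systemctl --user start mycroft.target
systemctl --user stop mycroft.target
# inspect one of the individual services
systemctl --user status mycroft-skills.service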

On a regular Mycroft installation, the configuration file is in /etc/mycroft.conf and the skills are installed to /opt/mycroft/skills, but on a normal system a regular user can't modify those files/directories, so I moved them to ~/.mycroft/mycroft.conf and ~/.mycroft/skills and changed Mycroft to prefer those locations. You can have a look at the Mycroft documentation to see what parameters you can set in your mycroft.conf file.

When installing a skill in a regular Mycroft installation, msm invokes pip to install the required python modules in the virtual environment. Since we're not using virtual environments, I'm just logging the required python modules to ~/.mycroft/mycroft-python-modules.log. So if you think a skill might be misbehaving or not being properly loaded, you should first check that file to see if there's a missing python module requirement that should be installed in the system using zypper. I plan to make this automatic in the future, but for now, you'll have to check manually.
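
The manual workflow is therefore something like this (python3-requests is only a hypothetical example of a missing module):

# see which python modules the installed skills requested
cat ~/.mycroft/mycroft-python-modules.log
# install the matching openSUSE package, named python3-<modulename>
sudo zypper in python3-requests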

You can see all the changes I made by browsing the mycroft-core and plasma-mycroft pages in OBS.

I also made changes to other packages. For example, the duckduckgo2 python module is not prepared to work with python3, so I ported it. The same happens with the aiml python module, which seems to have been abandoned since 2014 and only works with python2. Fortunately, in this case there's a python-aiml fork which adds support for python3 and other improvements, so I made Mycroft use that one instead.

Some example commands

This is a small list of questions and commands you might like to try:

Hey Mycroft …

  • What is 2 + 2?
  • What is 21% of 314?
  • What is the capital of Spain?
  • When was Alan Parsons born?
  • How high is the Eiffel Tower?
  • Search the web for ethernet cables
  • Set an alarm in 5 minutes (after the alarm is triggered, you can say “Hey Mycroft, stop alarm” to stop it)
  • Remind me to watch the oven in 3 minutes (after the reminder is triggered, say “Hey Mycroft, stop reminder” to stop it)
  • Tell me a joke
  • Tell me about the Solar System
  • Play the news (when you want it to stop, just say “Hey Mycroft, stop”)
  • Open Dolphin
  • Close Firefox
  • Decrease volume
  • Show Activities
  • What time is it?
  • What’s the weather like?
  • Will it rain?
  • Type this is a test (it will write “this is a test” on your current window as if you used the keyboard)

Configuration

After you play a bit with it and check that the basic functionality works, you might want to configure Mycroft for your settings. I recommend at least opening the ~/.mycroft/mycroft.conf file and changing the example location settings to your city, your coordinates (look your city up on Wikipedia and click on the coordinates in the box on the right to see them in decimal notation) and your timezone (the "offset" value is your timezone's difference with respect to GMT in milliseconds, and "dstOffset" is the daylight saving time offset, which AFAIK is generally 1 hour).
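
To make that a bit more concrete, here is a hedged sketch of the kind of values involved, using Madrid as an example; the key names are taken from the default configuration as far as I remember it, so double-check them against your own ~/.mycroft/mycroft.conf instead of copying this verbatim:

"coordinate": {
  "latitude": 40.4168,
  "longitude": -3.7038
},
"timezone": {
  "code": "Europe/Madrid",
  "name": "Central European Time",
  "offset": 3600000,
  "dstOffset": 3600000
}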

When changing the configuration file, be extremely careful not to leave any blank lines or introduce any comments, since the json parser is currently very sensitive to syntax errors (fortunately, you'll see clear errors in the logs if there are any). In any case, be sure to keep a backup of the config file, just in case.
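
A quick way to catch syntax mistakes before restarting Mycroft is to run the file through python's built-in JSON parser:

python3 -m json.tool ~/.mycroft/mycroft.conf > /dev/null && echo "mycroft.conf is valid JSON"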

Known problems

  • The first time you start the Mycroft systemd services, they will download the 31 default skills, which can take a while (up to a couple of minutes). So the first time you start Mycroft from the plasmoid, it will time out and report a "Connection Error". Just wait a couple of minutes, then stop Mycroft from the plasmoid and start it again.
  • Sometimes, the plasmoid seems to have trouble remembering its settings, so if you have trouble starting/stopping the services through the UI, go to the “Settings tab” and change “Your Mycroft core installation path” to “Default Path” and then back to “Installed using Mycroft Package” again (just clicking on one and then the other is enough).
  • Some skills don't support python3 out of the box yet. This is really troubling since some of the failing ones are installed by default. I sent pull requests to fix 5 skills (fallback-wolfram-alpha, skill-reminder, skill-desktop-launcher, skill-youtube-play and mycroft-youtube), but they haven't been merged yet, so I've distributed the patches to support python3 within the packages and added support to msm to apply local patches after a skill is installed. Since msm also updates the skills from their git repositories, I made it reverse-apply the patches before any update and apply them again afterwards. This should make it possible to receive upstream patches from the skill developers while letting the upstream changes take preference over my patches. I've also fixed skill-autogui and fallback-duckduckgo to work with python3, but haven't submitted the changes to their corresponding upstreams yet.
  • If you find any other skill is failing, you can check the journal with journalctl --user --since "1 hour ago" and see if the skill is generating any exception. Also, having a look at ~/.mycroft/mycroft-python-modules.log might be a good idea to check whether the required python packages are installed in the system (note that the openSUSE python packaging guidelines state that the python3 package for a module must be called python3-<modulename>, so it should be easy to check manually).
  • Mycroft is creating files under /tmp without proper permissions. This may be acceptable on a device like the Mycroft Mark 1 or Mark II, but it's a security concern on a generic Linux system. I hope to get some time in the next days/weeks to work on this.
  • When installing a skill (using the UI or the msm application), sometimes it doesn't show up as "installed" even though it is. Check the contents of your ~/.mycroft/skills directory to see exactly what is installed.

End thoughts

I have many plans for these packages. For example, I'd like to submit upstream all the changes I've made, since I think they will be useful for other distributions too and will help get Mycroft working with python3 as soon as possible. As mentioned before, I'd also like to make a pip/zypper integration tool so skill requirements can be installed automatically in the system, and I'd like to add a skill to Mycroft to integrate it with an application I'm developing (more on this in future posts 🙂). If nobody does it first, it would also be great to add Spanish support now that support for languages other than English is being added.

By the way, the Mycroft developers are adding support for Mozilla's open source voice recognition framework, so you might consider collaborating with the Common Voice project to help it grow and mature.

Before ending, I'd like to thank all the openSUSE python packagers (especially Tomáš Chvátal) for carefully, patiently and quickly reviewing the submissions of over 50 new python packages required by mycroft-core, and of course the Mycroft developers and Aditya Mehra (the plasma-mycroft developer), who are doing a great job.