My workflow for building big sets of RPM packages

In recent months I did two rebuilds: NodeJS 4.x in Fedora and OpenStack Mitaka in CentOS. Both targeted AArch64 and neither had been done before. During the latter one I was asked to write about my workflow, so I will describe it using the OpenStack rebuild as the example.

identify what needs to be done

First I had to figure out what exactly needed to be built. Haïkel Guémar (aka number80) pointed me to the openstack-mitaka directory on CentOS’ vault, where all the source packages are present. He also told me that the EPEL repository is not required, which helped a lot as it was not yet available for CentOS on AArch64.

structure of sources

The OpenStack package set in CentOS is split into two parts: “common”, shared across all OpenStack versions, and “openstack-mitaka”, containing the OpenStack Mitaka packages and the build dependencies not covered by CentOS itself or by the “common” directory.

prepare build space

I used “mockchain” for such rebuilds. It is a simple tool which does not do any ordering tricks; it just builds the given set of packages in the given order and does so three times, hoping that all build dependencies get satisfied along the way. Of course, whatever got built once is not tried again.

To make things easier I used a shell alias:

alias runmockchain="mockchain -r default -l /home/hrw/rpmbuild/_rebuilds/openstack/mockchain-repo-centos"

With this I did not have to remember those two switches. A common call was “runmockchain --recurse srpms/*”, which means “cycle through the packages three times and continue on failures”.

Results of all builds (packages and logs) were kept in subdirectories of “~/rpmbuild/_rebuilds/openstack/mockchain-repo-centos/results/default/”. I put all the extra packages there as well, to have everything in one repository.
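
Whenever I dropped prebuilt packages into that tree by hand, I regenerated the repository metadata so that the next mockchain pass could resolve them. A minimal sketch, assuming plain “createrepo” is installed and that the results directory is the repository root (mockchain refreshes the metadata itself after the builds it performs):

createrepo ~/rpmbuild/_rebuilds/openstack/mockchain-repo-centos/results/default/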

populate “noarch” packages

Then I copied the x86-64 build of OpenStack Mitaka into “_IMPORT-openstack-mitaka/” to get all the “noarch” packages needed to satisfy build dependencies. I built all those packages anyway, but having them up front saved me several rebuild cycles.
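
The import itself was nothing more than copying the noarch results into that directory, followed by the same metadata refresh as above. The source path here is an assumption:

cp /path/to/x86_64-results/*/*.noarch.rpm \
   ~/rpmbuild/_rebuilds/openstack/mockchain-repo-centos/results/default/_IMPORT-openstack-mitaka/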

extra rpm macros

When I started the first build it turned out that some Python packages lacked proper “Provides” fields. I was missing newer rpm build macros (“%python_provide” was added after Fedora 19, which was the base for RHEL 7). I asked Haïkel and added “rdo-rpm-macros” to the mock configuration.
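
The change itself is a single line in the mock chroot config. The exact base value of “chroot_setup_cmd” differs between configs, so appending is the safest form; this is a sketch rather than the literal line I used:

config_opts['chroot_setup_cmd'] += ' rdo-rpm-macros'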

But I had to scrap everything I had built so far.

surprises and failures

Building a big set of packages for a new architecture usually generates failures which were not present in the x86-64 build. It was the same this time, as several build dependencies were missing or wrong.

packages missing in CentOS/aarch64

Some were missing from CentOS itself; I told Jim Perrin (aka Evolution) and he added them to the build queue to fill the gaps. In the meantime I built them myself or (if they were “noarch”) imported them into the “_IMPORT-extras” or “_IMPORT-os” directories.

packages imported from other CBS tags

Other packages were originally imported from other tags at CBS (CentOS koji). For those I created a directory named “_IMPORT-cbs”. Again, if they were “noarch” I just copied them; the rest I rebuilt (using “runmockchain”) and they ended up in the same repository as the rest of the build.

Some packages turned out to have been built a long time ago against older versions of their build dependencies and were not buildable with the current ones. For those I tracked down the proper versions on CBS and imported or rebuilt them (sometimes together with their build dependencies, and the build dependencies of those build dependencies).

downgrading packages

There was the “python-flake8” package which failed to build, spitting out Python errors. I checked how this version was built on CBS and it turned out that “python-mock” 1.3.0 (from the “openstack-mitaka” repository) was too new… Downgrading it to 1.0 allowed me to build “python-flake8” (upgrading it again is in the queue).

merging fixes from Fedora

Both “galera” and “mariadb-galera” got AArch64 support merged from Fedora and were built with “.hrw1” appended to the “Release” field.

directories

I did the whole build in the “~/rpmbuild/_rebuilds/openstack/” directory. The extra folders were:

  • vault.centos.org/centos/7/cloud/Source/openstack-mitaka/common/
  • vault.centos.org/centos/7/cloud/Source/openstack-mitaka/
  • mockchain-repo-centos/results/default/_HACKED-by-hrw/
  • mockchain-repo-centos/results/default/_IMPORT-cbs/
  • mockchain-repo-centos/results/default/_IMPORT-extras/
  • mockchain-repo-centos/results/default/_IMPORT-openstack-mitaka/
  • mockchain-repo-centos/results/default/_IMPORT-os/

The vault ones were a copy of the OpenStack source packages and their build dependencies. The hacked ones got AArch64 support merged from Fedora. Both the “extras” and “os” directories were for packages missing from the CentOS/AArch64 repositories. The CBS one was for source/noarch packages which had to be imported or rebuilt because they came from other CBS tags.

status page

In the meantime I prepared a web page with the build results so anyone interested can see what builds, what does not, and check logs, packages etc. It has a short description followed by a table listing the builds (the data can be sorted by clicking on the column headers).

thanks

The whole job would have taken much more time if not for the help of CentOS developers: Haïkel Guémar, Alan Pevec, Jim Perrin, Karanbir Singh and others from the #centos-devel and #centos-arm IRC channels.

Cello: new AArch64 enterprise board from 96boards project

A few hours ago, somewhere in a hotel in Bangkok, Linaro Connect started. So during morning coffee I watched the keynote and noticed that Jon Masters presented the RHELSA 7.2 out-of-the-box experience on a Huskyboard. And then a brand new board from the 96boards project was announced: Cello.

Lots of people expected this Linaro Connect to bring the Huskyboard alive, so they would finally have the option of a cheap board for all their AArch64 needs. Instead, Cello was presented:

(photo: Cello)

Compared to Husky (below) there are some noticeable hardware differences, but that is normal as the 96boards enterprise specification only says where to put the ports.

(photo: Husky)

I suppose the two boards were designed by different companies. Maybe Linaro asked ODM vendors to design and mass-produce a 96boards enterprise board, Husky was prototyped first, and Cello won. Or maybe we will see Husky in distribution as well. The good part is: you can preorder Cello and get it delivered in Q2/2016.

I have to admit that I had hoped for some industry-standard board (the 96boards Enterprise specification mentions microATX) instead of this weird 96boards-only format, which I have ranted about already. Anyway, 299 USD for a quad-core Cortex-A57 with SATA, UEFI, ACPI, PCIe (and maybe a few more four-letter acronyms) does not sound bad, but good luck finding a case for it ;(

Red Hat Enterprise Linux Server for ARM 7.2 Development Preview released!

When I started working for Red Hat I got a list of packages in RHEL 7.0 which did not build for AArch64. Some time later I worked on merging the fixes into Fedora and upstream. Red Hat Enterprise Linux 7.0 got released. Then 7.1 followed. Then CentOS developers added an AArch64 target based on the work we did in RHEL.

Yesterday Red Hat Enterprise Linux 7.2 was released. What makes this version special is one paragraph:

Red Hat is also making available Red Hat Enterprise Linux Server for ARM 7.2 Development Preview, which was first made available to partners and their customers in June 2015. This Development Preview enables new partner hardware and additional features for the ARM architecture.

Which means the AArch64 port. Working out of the box on SBSA/SBBR-compliant hardware.

USB on Mustang

When I got an APM Mustang at home I knew that one day I would use it to test desktop environments. The lack of graphics and USB kept me away from that. And now I am closer…

Yesterday Mark Langsdorf wrote two patches which make the USB 3.0 ports on the Mustang’s backplate usable. I applied the first version of them, altered the DeviceTree blob a bit, and after a reboot I got this:

16:36 hrw@pinkiepie-rawhide:~$ lsusb
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 1234:2088 Brain Actuated Technologies 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

All with a slightly modified 3.18-rc2 kernel from Fedora Rawhide.
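
The DeviceTree change can be done offline with dtc; the file names below are assumptions and the exact node to touch depends on Mark’s patches (typically it comes down to setting status = "okay" on the right USB controller node):

dtc -I dtb -O dts -o mustang.dts mustang.dtb   # decompile the blob to editable source
# edit mustang.dts, then rebuild the blob used at boot:
dtc -I dts -O dtb -o mustang.dtb mustang.dts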

Now I need to sort out graphics… But first I need to buy yet another card…

2 years of AArch64 work

I do not remember exactly when I started working on ARMv8 stuff. I checked old emails from my Linaro times and found that we discussed an AArch64 bootstrap using OpenEmbedded during Linaro Connect Asia (June 2012). But it had to wait a bit…

First we took OpenEmbedded and created all the tasks/images we needed, but built them for 32-bit ARM. Then during September we had all the toolchain parts available: binutils was public, gcc was public, glibc was on its way to release. I remember the moment when I built my first “helloworld”, probably as one of the first people outside ARM and their hardware partners.

In the first week of October we had an ARMv8 sprint in Cambridge, UK (in the Linaro and ARM offices). When I arrived and took a seat I was told that glibc had just gone public. I fetched it, rebased my OpenEmbedded tree to drop traces of the “private” patches and started a build. Once it finished, everything went public in a git.linaro.org repository.

But we still lacked hardware… The only thing available was the Versatile Express emulator, which required a license from ARM Ltd. But then a free (though proprietary) emulator was released, so everyone was able to boot our images. OMG, it was so slow…

Then the fun porting work started. I patched this and that, sent patches to OpenEmbedded and to upstream projects, and time kept going. And going.

In January 2013 I got X11 running on emulated AArch64. It took a few months before other distributions reached that point.

February 2013 was a nice moment, as the Debian/Ubuntu team presented their AArch64 port. It was their first architecture bootstrapped without using external toolchains. The work was done in Ubuntu due to its different approach to development than Debian’s. All the work was merged back, so some time later Debian also had an AArch64 port.

It was March or April when OpenSUSE did a mass build of the whole distribution for AArch64. They had the largest number of packages built for quite a long time, but I did not track their progress much.

And then the 31st of May came: the day I left Linaro. But I had already had a call with Red Hat, so the future looked quite bright ;D

June was the month when the first silicon was publicly presented. I do not know what Jon Masters was showing, but it was probably some prototype from Applied Micro.

On 1st August I was officially hired by Red Hat and started a month later. My wife joked that the next step would be Retired Software Engineer ;D

So I moved my AArch64 work from OpenEmbedded to Fedora. A lot of the work here was already done, as Fedora developers had been planning a 64-bit ARM port a few years earlier, when the architecture was still at the design phase. So when Fedora 15 was bootstrapped for “armhf”, it was done as preparation for AArch64. The 64-bit ARM port was started in October 2012 with Fedora 17 packages (and switched to Fedora 19 along the way).

My first task at Red Hat was getting Qt4 to work properly. That beast took a few days in the Foundation model… Good that we got the first hardware then, so things went faster. 1-2 months later I had a remote APM Mustang available for my porting work.

In January 2014 QEMU got AArch64 system emulation. People started migrating from the Foundation model.

The following months were full of hardware announcements: AMD, Cavium, Freescale, Marvell, Mediatek, NVidia, Qualcomm and others.

In the meantime I decided to make a crazy experiment with OpenEmbedded. I had been the first to use it to build for AArch64, so why not be the first to build OE on 64-bit ARM?

And then June came, with an APM Mustang for use at home. Finally X11 forwarding started to be useful. One of the first things I did was run Firefox on AArch64, just for the fun of running the software whose porting/upstreaming had taken me the largest amount of time.

It did not take me long to get the idea of transforming the APM Mustang (which I named “pinkiepie”, as all machines at my home are named after cartoon characters) into an ARMv8 desktop. I am still waiting for a PCI Express riser and USB host support.

Now it is October. Soon it will be 2 years since the Foundation model became available. And there are rumors about AArch64 development boards in production with prices below 100 USD. I will do whatever is needed to get one of them onto my desk ;)

From a diary of AArch64 porter – testsuites

More and more software comes with a testsuite. But not every distribution runs them for each package (no matter whether it is Debian, Fedora or Ubuntu). Why does it matter? Let me give an example from yesterday: HDF 4.2.10.

There is a bug reported against libhdf with the information that it built fine for Ubuntu. As I had issues with hdf in Fedora I decided to take a look and found an even simpler patch than the one I had written. I tried it and got the package built. But that was all…

Running the testsuite is easy: “make check”. But the result was impressive:

!!! 31294 Error(s) were detected !!!

It does not look good, right? So I spent some time yesterday searching for architecture-related checks and found the main reason for such a huge number of errors: unknown systems are treated as big endian… A simple switch there and the count dropped from 31294 errors to just 278.

It took me a while to find all 27 places where miscellaneous variations of “#if defined(__aarch64__)” were needed, but I finally got to the point where “make check” simply worked as it should.
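
The fixes themselves were trivial. A sketch of the kind of guard that was missing follows; the real HDF macro names differ, so treat it as an illustration only:

/* without an explicit architecture check, AArch64 fell through to the big-endian defaults */
#if defined(__aarch64__)
#define PLATFORM_LITTLE_ENDIAN 1   /* hypothetical macro name, not HDF's real one */
#endif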

So if you port software, do not assume that it is fine once it builds. Run the testsuite to be sure that it also runs properly.