Today I was fighting with Nova. No idea who won…

I am working on getting OpenStack running on the AArch64 architecture, right? So recently I went from “just” building images to also using them to deploy a working “cloud” setup. And that resulted in new sets of patches, updates to patches, discussions…

OpenStack is supposed to make virtualization easier. Create accounts, give access to users, and they will make virtual machines and use them without worrying about what kind of hardware is underneath. But first you have to get it working. So this week, instead of only taking care of the Kolla and Kolla-ansible projects, I also patched Nova: the component responsible for running virtual machines.

One patch was a simple edit of an existing one to make it comply with all review comments. It took some time anyway, as I had to write a proper bug description to make sure that reviewers would know why it is so important for us. Once merged, UEFI will be the default boot method on AArch64, without any need to play with the hw_firmware_type=uefi property on images (which is easy to forget). But this was the easy one…

Imagine that you have a rack of random AArch64 hardware and want to run a “cloud”. You may end up in a situation where you have a mix of servers as compute nodes (the ones where VM instances run). In Nova/libvirt this is handled by the cpu_mode option:

It is also possible to request the host CPU model in two ways:

  • “host-model” – this causes libvirt to identify the named CPU model which most closely matches the host from the above list, and then request additional CPU flags to complete the match. This should give close to maximum functionality/performance, while maintaining good reliability/compatibility if the guest is migrated to another host with slightly different host CPUs. Beware, due to the way libvirt detects the host CPU, a CPU configuration created using host-model may not work as expected. The guest CPU may confuse the guest OS (i.e. even cause a kernel panic) by using a combination of CPU features and other parameters (such as CPUID level) that don’t work.

  • “host-passthrough” – this causes libvirt to tell KVM to pass through the host CPU with no modifications. The difference to host-model is that instead of just matching feature flags, every last detail of the host CPU is matched. This gives the absolutely best performance, and can be important to some apps which check low level CPU details, but it comes at a cost wrt migration. The guest can only be migrated to an exactly matching host CPU.

Nova assumes host-model when KVM/QEMU is used as hypervisor. And crashes terribly on AArch64 with:

libvirtError: unsupported configuration: CPU mode ‘host-model’ for aarch64 kvm domain on aarch64 host is not supported by hypervisor

Not nice, right? So I made a simple patch to make host-passthrough the default on AArch64. But when something is so simple, then its description is probably not so simple…
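Until such a change lands, operators can apply the same workaround themselves. A minimal sketch of the relevant nova.conf fragment (section and option names as documented for Nova’s libvirt driver):

```ini
[libvirt]
virt_type = kvm
# libvirt/QEMU rejects 'host-model' for KVM domains on AArch64,
# so pass the host CPU through unchanged instead:
cpu_mode = host-passthrough
```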

I reported a bug on Nova with some logs attached. Then I dug for some information which would explain the issue better, and found Ubuntu’s bug on libvirt from the Ocata times. They used the same workaround.

So I thought: let’s report a bug for libvirt and request support for the host-model option. There I got a link to another libvirt bug with a set of explanations why it does not make sense.

The reason is simple. No one knows what hardware you run on when you run Linux on an AArch64 server. In theory there are fields in /proc/cpuinfo, but you still do not know whether the CPU cores in the compute01 server are the same as in compute02. At least from the Nova/libvirt/QEMU perspective. This also blocks us from setting cpu_mode to custom and selecting a cpu_model, which could be a way of getting the same CPU for each instance regardless of the type of compute node processor cores.
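To illustrate why this identification is so weak: on AArch64, /proc/cpuinfo exposes only numeric implementer/part identifiers, not a CPU model name. A small Python sketch (the cpuinfo samples below are made up) comparing what two hypothetical compute nodes report:

```python
def cpu_ids(cpuinfo_text):
    """Collect (implementer, part) pairs from AArch64 /proc/cpuinfo output."""
    ids = set()
    implementer = None
    for line in cpuinfo_text.splitlines():
        if ":" not in line:
            continue
        key, value = (s.strip() for s in line.split(":", 1))
        if key == "CPU implementer":
            implementer = value
        elif key == "CPU part":
            ids.add((implementer, value))
    return ids

# Made-up samples: two nodes report different numeric IDs,
# and nothing here says what the cores actually are.
compute01 = "CPU implementer : 0x50\nCPU part : 0x000\n"
compute02 = "CPU implementer : 0x41\nCPU part : 0xd03\n"

print(cpu_ids(compute01) == cpu_ids(compute02))  # prints False
```

All the tooling can tell is that the numbers differ; mapping them to a named CPU model (as host-model would need) is a separate, unsolved lookup.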

The good side is that VM instances will work. The problem may appear when you migrate a VM to a host with different CPU cores: it may work. Or it may not. Good luck!

2017 timeline


  • Quiet time, just after the Linaro ERP release was done.


  • FOSDEM as usual. This time with a CentOS/RDO meeting.
  • Took my daughter Mira to Toulouse, France for winter holidays. It was about 17°C there, which was a nice change after a week of snow. Visited the Aerospace museum and Space city. Nice trip.
  • Started working on Kolla. It took a few months to get non-x86 support merged, and then I became a core reviewer. Learnt a lot during those months.


  • Linaro Connect in Budapest. Met friends, saw some nice places in the city. The Arrow company presented several boards compatible with the 96boards specifications, including an Enterprise Edition one.
  • “Root Linux” conference in Kiev, Ukraine. Gave a talk about OpenStack on AArch64 (youtube video).
  • OpenStack Day in Warsaw, Poland. A few interesting presentations, lots of talks with other developers.


  • OpenStack, OpenStack and OpenStack. And rewriting patches over and over again.


  • Non-x86 support finally landed in Kolla. My goal was to get AArch64 supported, and POWER8/LE support came as a free bonus.


  • A friend and I went to a Sandra concert. Somehow I remembered the city name wrong, and instead of a 1-hour drive it was a 4-hour one. Still fun.
  • Went for a long weekend trip to Cologne, Paderborn, Düsseldorf. Ok, the main reason was Paderborn, where I visited the Heinz Nixdorf MuseumsForum (described as the biggest computer museum in the world) and Universität Paderborn, where the main Aminet server was hosted when I started my Amiga adventure. The museum was nice, but I enjoyed the Computer History Museum more.
  • Built the first set of OpenStack ‘Pike’ container images for Debian/AArch64.
  • Started working on DPDK related issues. Then we got an assignee to work on it, and I only provided updated packages.


  • Visited the Linaro office in Harston for a release sprint. And we celebrated my birthday in a pub. It was great.


  • Helped organize the Ingress Anomaly in Szczecin, Poland. We got around 1200 agents from many countries. My job was to take care of missions, mosaics and Mission Day. It was exhausting, but we had a lot of fun.


  • Attended Riverwash demoscene party.
  • Went with friends for a week in the Bieszczady mountains, nearly away from anything civilisation-related.
  • Another Linaro Connect. This time around San Francisco. And new 96boards Enterprise Edition hardware in MicroATX form factor: the Socionext SynQuacer with 24 cores.
  • Spent a day in the Computer History Museum and still have a feeling that it was not enough time to see everything ;D


  • Attended Retrokomp/Load Error event.
  • Donated blood for the first time. Will repeat.
  • Started playing with the Bigtop project. Lots of Java stuff, Docker images and porting issues. Reported several bugs, replaced their old AArch64 build slave with two new ones (all three run in the Linaro Developer Cloud).



  • Attended Silly Venture demoscene party. It is amazing what people can get running on those old Atari machines.

Firefox Quantum ;(

From time to time I try to change web browser (switching between Firefox and Chrome). This time it is moving to Firefox Quantum (v57). And I have to say that I have a very mixed opinion.

For years it was easy: Chrome is faster, Firefox has extensions which can alter how the browser looks, feels, works, behaves. With Firefox Quantum that is gone. All add-ons now have to be so-called WebExtensions: there is no way to alter the browser itself, only what is presented on the web page can be changed.

Say goodbye to switching tabs with the mouse scroll wheel; the function was always missing in Firefox, but there was an extension for it. Same with tab grouping in the tab bar: “Tree Style Tab” is now a sidebar, and the original tab bar has to be disabled through the userChrome.css file. Good that they at least moved the reload/stop button to the left side of the location bar…
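For the record, hiding the native tab bar takes only a couple of lines in userChrome.css. A sketch; the selector comes from Firefox’s internal markup and may change between releases:

```css
/* Hide the native tab strip when Tree Style Tab's sidebar is in use */
#TabsToolbar {
  visibility: collapse !important;
}
```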

I will use it for a week or two and see whether it stays on my desktop instead of Chrome. I have to admit that the main reason for the test is the tab grouping function in Tree Style Tab, as it allows me to get rid of multiple browser windows.

Also, I have limited the extensions in use to just six, related to ad blocking/privacy/user scripts.

I am now core reviewer in Kolla

Months of work, tens of patches, hundreds of changed lines. My whole work in the Kolla project got rewarded this week. I am now one of the core reviewers 🙂

What does it mean? I think that Gema summarised it best:

For those of you who don’t know, this means Kolla has recognised our contributions to the project as first class and are giving Marcin and ARM64 a vote of confidence, they realise we are there to stay.

I find it helpful in my daily work, as now I can suggest that my coworkers send their patches directly instead of proxying them through me ;D

Donated blood

In the past, several friends suggested that I go and donate blood. For some reason I skipped that. Until today.

At Red Hat we have those “We are Red Hat Week” (WARHW in short) events. Do not ask me what goes on during them, as I have no idea (I work remotely). There are some celebrations in offices, but the closest one for me is in Berlin (and I still have not visited it).

Since June there is another Red Hat guy in Szczecin: Damian Wojsław. So we decided to do something as a kind of celebration of WARHW. The Warsaw office guys had the idea to gather and donate blood, so we followed.

There were some forms to fill in, a blood check, a quick chat with a doctor, and then 450 ml of blood went away ;D

Can the Socionext SynQuacer be the first 96boards desktop machine?

During Linaro Connect SFO17 I had an occasion to take a look at the first 96boards Enterprise Edition MicroATX board: the Socionext SynQuacer. Can it be called the first 96boards desktop machine?

Just a reminder: the 96boards EE specification defined two form factors:

  • custom 160x120mm
  • MicroATX

There were attempts to build boards in that custom format (Husky, Cello), but they both failed terribly. It turns out that companies which are able to produce 96boards CE boards are not able to make more complicated ones.

A Connect ago I wrote about the Systart Oxalis LS1020A board as being the first 96boards EE one, but it used that custom format.

So going back to SynQuacer board…

I would say that it looks like typical MicroATX mainboard:

  • four memory slots (DDR4, up to 64GB, ECC or not ECC)
  • CPU under heatsink (24 Cortex-A53 cores, 1GHz clock)
  • PCI-Express slots (x1, x1, x16 with just 4 lanes)
  • two SATA ports
  • Gigabit Ethernet port
  • two USB 3.0 ports at the back
  • connector for additional USB 3.0 ports
  • 96boards low-speed connector (think sensors, serial console, TPM etc.)
  • 24pin ATX power connector (no extra +12V ones)
  • power and reset buttons
  • fan connector
  • JTAG port

Socionext SynQuacer

The official announcement did not provide information about the price. The only info present was that it would be available in December 2017. During discussions with Socionext representatives I was told that a full developer box would cost around 1000 USD and include the mainboard, memory, storage (rather not SSD), a case and a graphics card. A price for just the mainboard was not provided, as it looked like such an option is not planned.

From the software point of view, UEFI was presented. With graphical boot. Upstreaming of kernel support is in progress (Linaro provides a 4.14-rc tree with the required changes).

Will it satisfy the need for an AArch64 desktop? Time will tell. From what I got from developers already using it, performance is quite OK as long as the workload is multithreaded (so a kernel build goes nicely with -j24 until the linking phase kicks in).

Another option for an AArch64 desktop would be the Macchiatobin. The latest revisions are needed, as PCI support got fixed (I was told that the first revisions were unable to fully use the PCI Express port). Bernhard Rosenkränzer was demoing such a setup and it was running nicely.

Fridge magnets

It all started a few years ago when I had no idea for a gift from Orlando. So I brought my wife a magnet with “someone went to Florida and all I got was this stupid magnet” text. Some time later I started my own collection…

Today I reorganised the magnets because I had to add one and there was no space available. I have around 80 magnets from places I have visited and some from places to visit.

Fridge magnets collection

With this amount I had to find a way not to lose track. So I created a map:

And a small request at the end: if you live in one of the places with a red marker and there is a chance that we meet (at a conference or other event), then it would be great if you brought me a magnet ;D

Moar X-Genes!

At Linaro we have one of those HPE Moonshot beasts. Basically it is a chassis with some built-in Ethernet switches. You can then plug cartridges with processors into it. There are some x86-64 ones, and there are M400 ones with an X-Gene CPU, 64GB of RAM and some SSD storage.

And there was a delivery at the Linaro office, with a huge pile of M400 cartridges. Gema opened the chassis and started to plug in one after another until all 45 slots were used (we had 15 cartridges before):

Moonshot chassis filled with m400 cartridges

It turned out that one slot is dead, so we have to live without the c22n1 cartridge. But that still gives us 44 octa-core systems. Each has 64GB of RAM; storage size varies (some have 480GB, some 120GB, some do not want to tell).

We are waiting for another chassis to fill with the rest of the M400s ;D

There will be some work, as we need to get them updated to be SBSA/SBBR compliant (the U-Boot -> kernel path is something I leave to some company, but it is not what Linaro expects) – we need to replace the firmware setup.

Plans for use? Linaro Developer Cloud, OpenStack 3rdparty CI and probably several other targets.

We need some thermite…

Time goes by, and it is that time of year when the Linaro Enterprise Group is working on a new release. And as usual, the jokes about a lack of thermite start…

Someone may ask “Why?”. The reason is simple: the X-Gene 1 processor. I think that its hateclub grows and grows with time.

When it was released it was a nice processor: eight cores, normal SATA, PCI Express, USB, DDR3 memory with ECC etc. It was used for distribution builders, development platforms and so on. Not that there was any choice 😀

Nowadays, with all those other AArch64 processors on the market, it starts to be painful. PCI support requires quirks, the serial console requires patching etc. We have X-Gene 1 in Applied Micro Mustang servers and HPE Moonshot M400 cartridges. Maybe officially those machines are not listed as supported, but we still use them, so testing that a new release works there has to be done.

And each time there are some issues to work around. Some could probably be fixed with firmware updates, but I do not know whether vendors still support that hardware.

So if you have some spare thermite (and a way to handle it legally), then contact us.

Is my work on Kolla done?

During the last few months I was working on getting Kolla running on the AArch64 and POWER architectures. It was a long journey with several bumps, but it finally ended.

When I started in January I had no idea how much work it would be and how it would go. I just followed my typical “give me something to build and I will build it” style. You can find some background information in my previous post about my work on Kolla.

A lot of failures were present at the beginning. Or rather: there was only a small number of images which built. So my first merged change was to do something with Kolla’s output ;D

  • build: sort list of built/failed images before printing

Debian support was marked for removal, so I first checked how it looked, then enabled all possible images and migrated from the ‘jessie’ to the ‘stretch’ release. The reason was simple: ‘jessie’ (even with backports) lacked packages required to build some images.

  • debian: import key for repository
  • debian: install gnupg and dirmngr needed for apt-key
  • debian: enable all images enabled for Ubuntu
  • handle rtslib(-fb) package names and dependencies
  • debian: move to stretch
  • Debian 8 was not released yet

Both the YUM and APT package managers got some changes. For the first one, I made sure that it fails if there are missing packages (which happened very often during builds for aarch64/ppc64le). It allowed me to catch a typo in the ‘ironic-conductor’ image. In other words: I made YUM behave closer to APT (which always complains about missing packages). Then I made a change for APT to behave more like YUM, by making sure that the package lists are updated before packages are installed.

  • ironic-conductor: add missing comma for centos/source build
  • make yum fail on missing packages
  • always update APT lists when install packages
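The APT change boils down to a pattern familiar from Docker image-building practice. A hypothetical Dockerfile fragment showing the idea (the package names are just examples):

```dockerfile
# Without 'apt-get update' first, 'install' resolves against stale package
# lists baked into the base image and can fail on recently-added packages.
RUN apt-get update \
    && apt-get -y install --no-install-recommends curl ca-certificates \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
```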

Of course many images could not be built at all for the aarch64/ppc64le architectures, mostly due to a lack of packages and/or external repositories. For each case I checked whether there was some way of fixing it. Sometimes I had to disable an image, sometimes update packages to a newer version. There were also discussions with maintainers of external repositories about getting their stuff available for non-x86 architectures.

  • kubernetes: disable for architectures other than x86-64
  • gnocchi-base: add some devel packages for non-x86
  • ironic-pxe: handle non-x86 architectures
  • openstack-base: Percona-Server is x86-64 only
  • mariadb: handle lack of external repos on non x86
  • grafana: disable for non-x86
  • helm-repository: update to v2.3.0
  • helm-repository: make it work on non-x86
  • kubetoolbox: mark as x86-64 only
  • magnum-conductor: mark as x86-64 only
  • nova-libvirt: handle ppc64le
  • ceph: take care of ceph-fuse package availability
  • handle mariadb for aarch64/ubuntu/source
  • opendaylight: get it working on CentOS/non-x86
  • kolla-toolbox: use proper mariadb packages on CentOS/non-x86

At some point I had over ten patches in review, and all of them depended on the base one. So with any change I had to refresh the whole series, and reviewers had to review it again… It was painful. So I decided to split out the most basic stuff to get the whole patch set split into separate ones. After the “base_arch” variable was merged, life became much simpler for reviewers and a bit more complicated for me, as from then on each patch was kept in a separate git branch.

  • add base_arch variable for future non-x86 work
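Kolla builds its Dockerfiles from Jinja2 templates, so a variable like this lets per-architecture differences live in one conditional. A hypothetical template fragment illustrating the idea (the repository URL and file names are made up):

```jinja
{% if base_arch == 'x86_64' %}
# this external repo only publishes x86-64 packages
RUN curl -o /etc/yum.repos.d/extra.repo https://example.com/extra-x86_64.repo
{% else %}
RUN echo "skipping extra repo on {{ base_arch }}"
{% endif %}
```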

At Linaro we support CentOS and Debian. Kolla supports CentOS/RHEL/OracleLinux, Debian and Ubuntu. I was not making builds with RHEL or OracleLinux, but I had to make sure that the Ubuntu ones work too. There was a funny moment when I realised that everyone using Kolla/master was building images with Ocata packages instead of Pike ;D

  • Ubuntu: use Pike repository

But all those patches meant “nothing” without the first one. Kolla had information about which packages are available for the aarch64/ppc64le/x86-64 architectures, but it still had no idea that aarch64 or ppc64le exist. Finally the 50th revision of the patch got merged, so now it knows ;D

  • Support non-x86 architectures (aarch64, ppc64le)

I also learnt a lot about Gerrit and code reviews. OpenStack community members were very helpful with their comments and suggestions. We had hours of talks on the #openstack-kolla IRC channel. Thanks go to Alicja, Duong Ha-Quang, Jeffrey, Kurt, Mauricio, Michał, Qin Wang, Steven, Surya Prakash, Eduardo, Paul, Sajauddin and many others. You people rock!

So is my work on Kolla done now? Some of it is. But we still need to test the resulting images, make a Docker repository with them, and update the official documentation with information on how to deploy on AArch64 boxes (I hope no changes will be needed). We also need to make sure that the OpenStack Kolla CI gets Debian-based gates operational, and provide it with a 3rdparty AArch64-based CI so new changes can be checked.