Developers planet is online

People write blogs. People read blogs. But sometimes it is hard to find the blogs of all those interesting people. That’s where so-called “planets” come in.

Years ago there was a “Planet Linaro” website filled with blog posts from Linaro developers. Then it vanished. Later it got replaced by a poor substitute.

But I do not want to track down each Linaro developer just to find their blog and add it to Feedly. So instead I decided to create a new planet website. And that’s how Developers Planet was born.

So far it lists a bunch of blogs of Linaro developers. I used Venus to run it. The code is a few years old but it runs. I will adapt the HTML/CSS template to look a bit more modern.
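Venus is driven by a single ini-style configuration file which lists the planet metadata and the feeds to aggregate. A minimal sketch of such a file (the names and URLs below are placeholders, not the real feed list):

[Planet]
name = Developers Planet
link = https://planet.example.cf/
owner_name = Planet Owner

# one section per aggregated feed
[https://example.org/blog/feed/]
name = Some Linaro Developer

[https://example.net/posts/index.xml]
name = Another Developer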

And why .cf domain? It is free — that’s why.


Let me introduce a new awesome project: YADIBP. It is cool, FOSS, awesome, the one and only, and invented here instead of there. And it does exactly what it has to do, in exactly the way it has to be done. Truly cool and awesome.

Using that tool you can build disk images of several supported Linux distributions. Or maybe even of any BSD. And Haiku or ReactOS. Patches for AmigaOS exist too!

Any architecture. From the 128-bit wide RUSC-VI down to antique architectures like ia32 or m88k, as long as you have either hardware or a QEMU port (patches for ARM fast models are in progress).

Just fetch it from git and use it. Written in BASIC so it should work everywhere. And if you lack a BASIC interpreter then you can run it as Python or Erlang. Our developers are so cool and awesome!

But let’s get back to reality: there are gazillions of projects for a tool which does one simple thing: build a disk image. And a gazillion more will still be written because some people have that “Not Invented Here” syndrome.

And I am getting tired of it.

Today I was fighting with Nova. No idea who won…

I am working on getting OpenStack running on the AArch64 architecture, right? So recently I went from “just” building images to also using them to deploy a working “cloud” setup. And that resulted in new sets of patches, updates to patches, discussions…

OpenStack is supposed to make virtualization easier: create accounts, give users access, and they will create virtual machines and use them without worrying about what kind of hardware sits below. But first you have to get it working. So this week, instead of only taking care of the Kolla and Kolla-Ansible projects, I also patched Nova, the component responsible for running virtual machines.

One patch was a simple edit of an existing one to make it comply with all review comments. It took some time anyway, as I had to write a proper bug description to make sure that reviewers would know why it is so important for us. Once merged, UEFI will be the default boot method on AArch64, without any need to play with the hw_firmware_type=uefi property on images (which is easy to forget). But this was the easy one…
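For reference, this is the per-image property that has to be set today; openstack image set is the standard CLI for it, and the image name here is just an example:

openstack image set --property hw_firmware_type=uefi debian-aarch64-cloud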

Imagine that you have a rack of random AArch64 hardware and want to run a “cloud”. You may end up in a situation where you have a mix of servers used as compute nodes (the ones where VM instances run). In Nova/libvirt this is handled by the cpu_mode option:

It is also possible to request the host CPU model in two ways:

  • “host-model” – this causes libvirt to identify the named CPU model which most closely matches the host from the above list, and then request additional CPU flags to complete the match. This should give close to maximum functionality/performance, while maintaining good reliability/compatibility if the guest is migrated to another host with slightly different host CPUs. Beware, due to the way libvirt detects the host CPU, a CPU configuration created using host-model may not work as expected. The guest CPU may confuse the guest OS (i.e. even cause a kernel panic) by using a combination of CPU features and other parameters (such as CPUID level) that don’t work.

  • “host-passthrough” – this causes libvirt to tell KVM to pass through the host CPU with no modifications. The difference to host-model is that instead of just matching feature flags, every last detail of the host CPU is matched. This gives the absolute best performance, and can be important to some apps which check low-level CPU details, but it comes at a cost with respect to migration: the guest can only be migrated to an exactly matching host CPU.
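In the generated libvirt domain XML these two choices boil down to the cpu element; roughly like this (a sketch of just that element, not a full domain definition):

<cpu mode='host-model'/>        <!-- pick the closest named CPU model plus extra flags -->
<cpu mode='host-passthrough'/>  <!-- expose the host CPU to the guest unchanged -->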

Nova assumes host-model when KVM/QEMU is used as the hypervisor. And it crashes terribly on AArch64 with:

libvirtError: unsupported configuration: CPU mode 'host-model' for aarch64 kvm domain on aarch64 host is not supported by hypervisor

Not nice, right? So I made a simple patch to make host-passthrough the default on AArch64. But when something is so simple, then its description is probably not so simple…
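Until such a change is merged, the same behaviour can be forced per deployment; cpu_mode and virt_type are standard options in the [libvirt] section of nova.conf, so a sketch for AArch64 compute nodes looks like this:

[libvirt]
virt_type = kvm
cpu_mode = host-passthrough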

I reported a bug against Nova with some logs attached. Then I dug for information which would explain the issue better and found Ubuntu’s bug against libvirt from Ocata times. They used the same workaround.

So I thought: let’s report a bug against libvirt and request support for the host-model option. There I got a link to another libvirt bug with a set of reasons why it does not make sense.

The reason is simple: no one knows what you run on when you run Linux on an AArch64 server. In theory there are fields in /proc/cpuinfo, but you still do not know whether the CPU cores in the compute01 server are the same as in compute02, at least from the nova/libvirt/qemu perspective. This also blocks us from setting cpu_mode to custom and selecting a cpu_model, which could be a way of presenting the same CPU to each instance regardless of what processor cores the compute nodes have.

The good side is that VM instances will work. The problem may appear when you migrate a VM to a host with different CPU cores: it may work. Or it may not. Good luck!

We need some thermite…

Time goes by and it is that time of year when the Linaro Enterprise Group is working on a new release. And as usual, jokes about the lack of thermite start…

Someone may ask “Why?”. The reason is simple: the X-Gene 1 processor. I think that its hate club grows and grows with time.

When it was released it was a nice processor: eight cores, normal SATA, PCI Express, USB, DDR3 memory with ECC, etc. It was used for distribution builders, development platforms and so on. Not that there was any choice 😀

Nowadays, with all those other AArch64 processors on the market, it starts to be painful: PCI support requires quirks, the serial console requires patching, etc. We have X-Gene 1 in Applied Micro Mustang servers and HPE Moonshot M400 cartridges. Officially those machines may not be listed as supported, but we still use them, so testing that a new release works there has to be done.

And each time there are some issues to work around. Some could probably be fixed with firmware updates, but I do not know whether the vendors still support that hardware.

So if you have some spare thermite (and a way to handle that legally) then contact us.

Twenty five years of Linux

As I came back from PTO I had to dig through work mail. One of the threads was about 25 years of Linux, and there was a question “which was your first kernel?”. I thought that it may not be an easy question to answer.

For me the first was 2.0.2[6-8] (I do not remember exactly) on some university server where I got my first Linux account (normally I used SunOS and a text terminal). I remember that there was a simple root exploit we used.

Then 2.0.36 on my Amiga 1200 (Debian ‘slink’) was the first I ran on my own hardware.

2.2.10 was the first I used for longer, as it was the Debian ‘potato’ m68k one.

2.3.47 was the first I cross-compiled (on i686/linux for m68k/linux). And it worked!

2.4.0-test5 was the first I built for my x86 desktop once I moved from Amiga/AmigaOS to PC/Debian. I had a Duron/600 desktop and an old 386 desktop, both running the same version. The Duron got newer ones later; the 386 stayed with this one for about a year, until I returned it.

When I bought a Sharp Zaurus SL-5500 PDA, 2.4.18-rmk7-pxa3-embeddix was running on it, so that was my first Linux version on a mobile device. The next jump was 2.6.11 on a Zaurus c760 as my first 2.6 kernel on mobile.

During OpenZaurus maintenance I started upstreaming kernel patches. 2.6.17 was the first with my patches in.

When I had that strange ProGear webpad I wrote a backlight driver for it (based on someone’s code), and 2.6.21 was the first with my driver in (I removed it in 3.7).

AArch64 desktop hardware?

Soon it will be four years since I started working on the AArch64 architecture. A lot changed in software during that time. A lot in hardware too. But machine availability still sucks badly.

In 2012 all we had was a software model. It was slow, terribly slow. A common joke was AArch64 developers standing in a queue for 10GHz x86-64 CPUs. So I was generating working binaries by cross compilation. But many distributions only do native builds, which meant building in models. Imagine Qt4 building for 3-4 days…

In 2013 I got access to the first server hardware, with the first silicon version of the CPU. It was highly unstable and we could use just one core. GCC was crashing like hell, but we managed to get stable build results from it. Qt4 was building in a few hours now.

Then the amount of hardware at Red Hat was growing and growing. Farms of APM Mustangs, AMD Seattle and several other servers appeared, got racked and became available to use. In 2014 one Mustang even landed on my desk (as the first such machine in Poland).

But this was server land. Each of those machines cost about 1000 USD (if not more). And availability was limited too.

Linaro tried to do something about it and created the 96boards project.

First came the ‘Consumer Edition’ range: yet another family of small form factor boards with functionality stripped as much as possible. No Ethernet, no storage other than eMMC/USB, small amounts of memory, chips taken from mobile phones, etc. But it was selling! Only because people were hungry to get ANYTHING with AArch64 cores. First came HiKey, then DragonBoard 410 got released, then a few other boards. All with the same set of issues: non-mainline kernels, weird bootloaders, binary blobs for this or that…

Then the so-called ‘Enterprise Edition’ got announced, with another ridiculous form factor (and microATX as an option). And that was it. There was a leak of the Husky board which showed how fucked up the design was: ports all around the edges, memory above and under the board, and of course incompatible with any industrial form factor. I would like to know what they were smoking…

Time passed by. Husky got forgotten for another year. Then Cello was announced as a “new EE 96boards board”, while it looked like a redesigned Husky with two fewer SATA ports (because who needs more than two SATA ports, right?). Last time I heard about Cello the status was still ‘maybe soon, maybe another two weeks’. Prototypes looked hand soldered, with the USB controller mounted rotated, dead on-board Ethernet, etc.

In the meantime we got a few devices from other companies. Pine64 had a big campaign on Kickstarter and shipped to developers. Hardkernel started selling the ODROID-C2, Geekbox released their TV box, and probably something else got released as well. But all those boards were limited to 1-2GB of memory, often lacked SATA, and used mobile processors with their own sets of bootloaders, causing extra work for distributions.

The Overdrive 1000 was announced. Without any options for expansion, it looked like SoftIron wanted customers to buy the Overdrive 3000 if they wanted to use a PCI Express card.

So we have 2016 now. Four years of my work on AArch64 have passed. Most distributions support this architecture by building on proper servers, but much of this effort goes unused because developers do not have sane hardware to play with (sane meaning expandable, supported by distributions, capable).

There are no standard form factor mainboards (mini-ITX, microATX, ATX) available on the mass market. 96boards failed here, server vendors are not interested, and small Chinese companies prefer to release yet another fruit-Pi with a mobile processor. Nothing, null, nada, nic.

Developers know where to buy normal computer cases, storage, memory, graphics cards, USB controllers, SATA controllers and peripherals, so vendors do not have to deal with that part. But there is still nothing to put those cards into: no mainboards which can be mounted into a normal PC case, have some graphics plugged in, a few SSDs/HDDs connected, mouse/keyboard, monitors, and just be used.

Sometimes it is really hard to convince software developers to make changes for a platform they are unable to test on. And the current hardware situation does not help. All those projects making hardware available “in a cloud” help only a subset of projects. Ever tried to run a GNOME/KDE session over the network? With OpenGL acceleration?

So where is my AArch64 workstation? In desktop or laptop form.

This post was written after my Google+ post, where a similar discussion happened in the comments.

AArch64 desktop: last day

Each year you can hear the “this is the year of the Linux desktop” phrase. After a few days with an AArch64 desktop I know one thing: it is not the year of the ARMv8 Linux desktop.

Web browsing

OK, I may be spoiled by the speed of my i7-2600K desktop, but a situation where Firefox with fewer than 20 tabs open is unable to display the characters I type into a textarea fast enough shows that something is wrong (on a machine with 16GB of RAM). And do not tell me that this is not typical desktop use of a web browser…

YouTube. The main source of videos of any kind. Sometimes it works, but most of the time I lack the patience to wait until it starts (VP9 and H.264 codec support is present). And there is no way to watch “live hangouts”.

And say bye to music streaming services like Deezer or Spotify.


Gaming

I am not a game player. I installed Quake 3 (which I had never played before) and it worked; SuperTuxKart worked as well. But that does not prove anything, as both games have low requirements.

It probably will never be a gaming platform for the Linux desktop.


Development

For my style of development it is fine. But then all I need is a terminal and gVim ;D

Other hardware?

I think that the results may be affected by the fact that all I have here is an Applied Micro Mustang based on the X-Gene 1 CPU. It is one of the first ARMv8 processors in the Linux world and it is optimized for server use rather than desktop.

One thing is sure: next year I will try this experiment with other AArch64 hardware. I just hope it will be sooner than a year from now (which is my feeling after the lack of new AArch64 hardware announcements from Linaro members during this week’s Linaro Connect).

From a diary of AArch64 porter — vfp precision

During the last years a lot of work went into the design of SIMD instructions for different CPUs. x86-64 has some, Power has them, and so does AArch64.

But why am I writing about it? Simple: build failures. Or rather test failures, especially in scientific software like HepMC or alglib. They build fine, but that’s all.

The difference was small, something like 1e-15 or smaller, but it was enough to make tests fail. And what was to blame? SIMD, of course.

AArch64 has FMA (fused multiply-add) instructions which speed up calculations, but the result is more precise than the traditional way where each operation is done (and rounded) one by one. This is enough for tests to fail 🙁

But there is a solution for it. To degrade the precision you need to add “-ffp-contract=off” to the compiler flags. This disables contraction into FMA, so test results are the same as on x86-64 (on pre-Haswell/Bulldozer cores). As a bonus it works on powerpc64(le) and s390(x) too.
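Here is a tiny C demo of the effect. It is only a sketch and whether the two builds differ depends on the compiler and target, but on AArch64 with GCC the default floating-point contraction turns the expression into a fused multiply-add unless the flag is passed:

/* fma_demo.c
 * gcc -O2 fma_demo.c -o with_contract      (a*b - 1.0 may become one fused instruction)
 * gcc -O2 -ffp-contract=off fma_demo.c -o no_contract
 */
#include <stdio.h>

int main(void)
{
    volatile double a = 1.0 + 1e-8;   /* volatile so the compiler cannot fold the maths away */
    volatile double b = 1.0 - 1e-8;
    double r = a * b - 1.0;           /* fused: rounded once; unfused: rounded twice */
    printf("%.20g\n", r);
    return 0;
}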

Thanks go to David Abdurachmanov and Andrew Pinski for finding out exactly which flag needs to be used (instead of the -fno-expensive-optimizations I used before).

From a diary of AArch64 porter — PAGE_SIZE

While fixing software to make it build and run on AArch64, sooner or later you will meet a magical constant named PAGE_SIZE. And in most situations it will be used in a wrong way.

What it does is simple: tell how big a memory page is. But it does not work that way on the AArch64 architecture, as different values are possible: 4K, 16K (may not be supported by all CPUs) and 64K, with the latter being used in Fedora and other distributions. There were some packages which we built at the time a 4K kernel was in use and then wondered why things failed under a 64K kernel…

But how to handle it as a userspace software developer? The simplest solution is to use the sysconf(_SC_PAGESIZE) function call (same as getpagesize()). But remember not to hardcode anything based on the result you get; otherwise your application can misbehave when run on a kernel with a different page size.
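A minimal sketch of that approach: query the value at run time and derive any sizes from it instead of from a compile-time constant.

/* pagesize.c: ask the kernel for the page size instead of assuming a constant */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);  /* 4K, 16K or 64K depending on the kernel */

    if (page_size < 0) {
        perror("sysconf");
        return 1;
    }

    printf("page size: %ld bytes\n", page_size);
    /* use this value for mmap() lengths, buffer alignment and similar calculations */
    return 0;
}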

The good part is that if someone uses the PAGE_SIZE constant in code then it simply will not compile on AArch64, as it is not present in the system headers. From what I checked, the sys/user.h header has it defined on some platforms and not on others, so it cannot be assumed to be available.

UPDATE: added 16K page size, which may not be supported by some CPUs.

How to get Xserver running out of the box on AArch64

As I want to have an AArch64 desktop running with as few changes as possible, I decided that it is time to get Xserver to just run on the APM Mustang.

Current setup

To get X11 running I need to have an xorg.conf file. And this feels strange, as on x86(-64) I got rid of it years ago.

The config snippet is small and probably could be even smaller:

Section "Device"
        Identifier "radeon"
        Driver     "radeon"
        BusID      "PCI:1:0:0"
EndSection

Section "Screen"
        Identifier "Screen"
        Device     "radeon"
EndSection

Section "DRI"
        Mode 0666
EndSection

Without it, Xserver failed to find the graphics card.

Searching for a solution

I cloned the Xserver git repository and started hacking. Over several hours (split across a few days) I added countless LogMessage() calls to the source code, generated a few patches and sent them to the x-devel mailing list. And finally I found out why it did not work.

It turns out that I was wrong: Xserver was able to find the graphics card. But then it went into the platform_find_pci_info() function, called pci_device_is_boot_vga() and rejected it.

Why? Because the firmware from Applied Micro did not initialize the card, so the kernel decided not to mark it as the boot graphics device. I do not know whether it is possible to get UEFI to properly initialize a PCIe card on the AArch64 architecture, but there are two other ways to get it working.

hack Xserver

We can hack Xserver to not check pci_device_is_boot_vga(), or to use the first available card if it returns false:

diff --git a/hw/xfree86/common/xf86platformBus.c b/hw/xfree86/common/xf86platformBus.c
index f1e9423..d88c58e 100644
--- a/hw/xfree86/common/xf86platformBus.c
+++ b/hw/xfree86/common/xf86platformBus.c
@@ -136,7 +136,8 @@ platform_find_pci_info()
     if (info) {
         pd->pdev = info;
-        if (pci_device_is_boot_vga(info)) {
+        if (pci_device_is_boot_vga(info) || xf86_num_platform_devices == 1)
+        {
             primaryBus.type = BUS_PLATFORM;
             primaryBus.id.plat = pd;

This may not work on multi-GPU systems. In that case try removing the “== 1” part.

hack Linux kernel

If the firmware does not give us a boot graphics card, then maybe we can mark the first one as such and everything will work? This is how PowerPC has it solved, so let’s take their code:

diff --git a/arch/arm64/kernel/pci.c b/arch/arm64/kernel/pci.c
index b3d098b..eea39ba 100644
--- a/arch/arm64/kernel/pci.c
+++ b/arch/arm64/kernel/pci.c
@@ -18,6 +18,7 @@
 #include <linux/of_pci.h>
 #include <linux/of_platform.h>
 #include <linux/slab.h>
+#include <linux/vgaarb.h>
@@ -84,3 +85,15 @@ struct pci_bus *pci_acpi_scan_root()
        return NULL;
 }
+
+static void fixup_vga(struct pci_dev *pdev)
+{
+       u16 cmd;
+
+       pci_read_config_word(pdev, PCI_COMMAND, &cmd);
+       if ((cmd & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) || !vga_default_device())
+               vga_set_default_device(pdev);
+}
+DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_ANY_ID, PCI_ANY_ID,
+                             PCI_CLASS_DISPLAY_VGA, 8, fixup_vga);


Both hacks work: I can just run Xserver and get X11 working. But which one will get upstream and then into Fedora and other Linux distributions? Time will tell.

There are some issues with those solutions. If there are multiple graphics cards in a system, then which one is the primary one? Can their order change after a firmware or kernel update?

Thanks go to Dave Airlie for help with Xserver, Mark Salter for pointing me to the PowerPC solution, and Matthew Garrett for the discussion about issues with the kernel solution.