Android at Google I/O: what’s the point?

Another year, another Google I/O. Another set of articles about “what’s new in xyz Google product”: Maps, Photos, AI, this, that. And then all those Android P features which nearly no one will see on their phones (tablets already look like a dead part of the market).

I have a feeling that this part is more or less useless given the current state of Android. The latest release is Oreo. On 5.7% of devices. Which sounds like a “feel free to ignore” value. Every fourth device runs a three-year-old version (and usually lacks two years of security updates). Every third one runs the two-year-old Nougat.

How many users will remember what is new by the time Android P lands on their devices? Probably a very small bunch of crazy geeks. Some features will get renamed by device vendors. Others will be removed. Or changed (not always in a positive way). Reviewers will write “OMG that feature added by VENDORNAME is so awesome”, as no one will remember that it is part of the base system.

In other words: I have stopped caring what is happening in the Android space. With the most popular version being a few years old, I do not see a point in tracking new features. Who would use them in their apps when you have to care about running on four-year-old Android?

Mass removal of image tags on Docker hub

At Linaro we moved from packaged OpenStack to virtualenv tarballs. Then we packaged those. But as that took a lot of maintenance time, we switched to Docker container images for OpenStack and whatever it needs to run. Then we added a CI job to our Jenkins to generate hundreds of images per build. So now we have a lot of images with a lot of tags…

Finding out which tags are the latest is quite easy — you just go to the Docker Hub page of the linaro/debian-source-base image and switch to the tags view. But how do you know which build is complete? We had builds where all images except one got built and pushed. And the missing one was the first needed during deployment… so the whole set was b0rken.

How to remove those tags? One solution is to log in to the Docker Hub website and go image by image, clicking every tag to be removed. No one is insane enough to suggest that. And we do not have the credentials to do it anyway.

So let’s handle it the way we handle things in the SDI team: with automation. Docker has an API, so its hub should have one too, right? Hmm…

I went through some pages, then issues, bug reports, random projects. I saw code in JavaScript, Ruby, and Bash, but nothing usable in Python. Some of the projects assume that no one has more than a hundred images (no paging when fetching the list of images) and limit themselves to a few queries.

I started reading docs and some code. I learnt that GET and POST are not the only HTTP methods: there is also DELETE, which was exactly what I needed. I sorted out authentication and paths, and something started to work.
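
To illustrate, here is a minimal sketch of what that boils down to, assuming the Docker Hub v2 endpoints I was poking at (a POST to /v2/users/login/ returns a JWT token; a DELETE on the tag path drops the tag); adjust if the hub changes them:

import requests

HUB = "https://hub.docker.com/v2"

def hub_login(username, password):
    # POST credentials, get a JWT token to authorize later requests
    reply = requests.post(HUB + "/users/login/",
                          json={"username": username, "password": password})
    reply.raise_for_status()
    return reply.json()["token"]

def remove_tag(token, user, image, tag):
    # DELETE is the HTTP method which actually drops the tag
    reply = requests.delete(
        "%s/repositories/%s/%s/tags/%s/" % (HUB, user, image, tag),
        headers={"Authorization": "JWT " + token})
    return reply.status_code in (200, 202, 204)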

The first version was simple: log in and remove one tag from one image. Then I added querying for the whole list of images (with proper paging; see the sketch after the output below) and looping through that list, removing the requested tags from the requested images:

15:53 (s) hrw@gossamer:docker$ ./delimage.py haerwu debian-source 5.0.0
haerwu/debian-source-memcached:5.0.0 removed
haerwu/debian-source-glance-api:5.0.0 removed
haerwu/debian-source-nova-api:5.0.0 removed
haerwu/debian-source-rabbitmq:5.0.0 removed
haerwu/debian-source-nova-consoleauth:5.0.0 removed
haerwu/debian-source-nova-placement-api:5.0.0 removed
haerwu/debian-source-glance-registry:5.0.0 removed
haerwu/debian-source-nova-compute:5.0.0 removed
haerwu/debian-source-keystone:5.0.0 removed
haerwu/debian-source-horizon:5.0.0 removed
haerwu/debian-source-neutron-dhcp-agent:5.0.0 removed
haerwu/debian-source-openvswitch-db-server:5.0.0 removed
haerwu/debian-source-neutron-metadata-agent:5.0.0 removed
haerwu/debian-source-heat-api:5.0.0 removed
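
The “proper paging” part mostly means following the “next” link the hub returns with every page, instead of assuming everything fits in one. Roughly like this, reusing the helpers from the sketch above (field names as I remember them from the v2 API):

def list_images(token, user):
    # walk all pages of the user's repository list
    url = HUB + "/repositories/%s/?page_size=100" % user
    while url:
        reply = requests.get(url, headers={"Authorization": "JWT " + token})
        reply.raise_for_status()
        data = reply.json()
        for repo in data["results"]:
            yield repo["name"]
        url = data.get("next")

token = hub_login("haerwu", "secret")
for name in list_images(token, "haerwu"):
    if remove_tag(token, "haerwu", name, "5.0.0"):
        print("haerwu/%s:5.0.0 removed" % name)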

The final version got the MIT license as usual; I created a git repo for it and pushed the code. Next step? Probably a job on Linaro CI, to have a way of removing no-longer-supported builds. And some more helper scripts.

25 years of Red Hat

Years ago I bought the Polish translation of “Under the Radar”, a book about how Red Hat was started. It was a good read, and then it went to the bookshelf.

Years passed. In the meantime I got hired by Red Hat. To work on Red Hat Enterprise Linux. For the AArch64 architecture.

Then one day I was talking with my wife about books and I looked at the shelf. And found that book again. I took it and said:

You know, when I bought that book I did not even dream that one day I would be working at Red Hat.

Today the company turned 25. That is longer than my whole career. I remember how surprised I was when I realised that some of my friends had already been at the company for 20 years.

This is the oldest company I have worked for. Directly, at least, as some of the customers of companies I worked for in the past were probably older. And I hope that one day my job title will be “Retired Software Engineer”, as my wife once said. And that it will be at this company.

Developers Planet is online

People write blogs. People read blogs. But sometimes it is hard to find the blogs of all those interesting people. That’s where so-called “planets” come in.

Years ago there was a “Planet Linaro” website filled with blog posts from Linaro developers. Then it vanished. Later it got replaced by a poor substitute.

But I do not want to track down each Linaro developer to find their blog and add it to Feedly. So instead I decided to create a new planet website. And that’s how Developers Planet was born.

So far it lists a bunch of blogs of Linaro developers. I used Venus to run it. The code is a few years old, but it runs. I will adapt the HTML/CSS template to be a bit more modern.
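
For the curious: Venus is driven by a plain INI file, one [Planet] section plus one section per feed. A rough sketch (the URL and name below are made up for illustration):

[Planet]
name = Developers Planet
link = http://devel.cf/
output_dir = output

[https://example.com/blog/feed/]
name = Some Developer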

And why the .cf domain? It is free — that’s why.

YADIBP

Let me introduce a new awesome project: YADIBP. It is cool, FOSS, awesome, the one and only, and invented here instead of there. And it does exactly what it has to do, in exactly the way it has to be done. Truly cool and awesome.

Using that tool you can build disk images with several supported Linux distributions. Or maybe even with any BSD distribution. And Haiku or ReactOS. Patches for AmigaOS exist too!

Any architecture. From the 128-bit wide RUSC-VI down to antique architectures like ia32 or m88k, as long as you have either hardware or a qemu port (patches for ARM fast models are in progress).

Just fetch it from git and use it. Written in BASIC, so it should work everywhere. And if you lack a BASIC interpreter then you can run it as Python or Erlang. Our developers are so cool and awesome!

But let’s get back to reality — there are gazillions of projects for a tool which does one simple thing: build a disk image. And a gazillion more will still be written, because some people have that “Not Invented Here” syndrome.

And I am getting tired of it.

Today I was fighting with Nova. No idea who won…

I am working on getting OpenStack running on the AArch64 architecture, right? So recently I went from “just” building images to also using them to deploy a working “cloud” setup. And that resulted in new sets of patches, updates to patches, discussions…

OpenStack is supposed to make virtualization easier: create accounts, give users access, and they will create virtual machines and use them without worrying about what kind of hardware sits underneath. But first you have to get it working. So this week, instead of only taking care of the Kolla and Kolla-Ansible projects, I also patched Nova, the component responsible for running virtual machines.

One patch was a simple edit of an existing one to make it comply with all the review comments. It took some time anyway, as I had to write a proper bug description to make sure that reviewers would know why it is so important for us. Once merged, it will make UEFI the default boot method on AArch64, with no need to play with the hw_firmware_type=uefi property on images (which is easy to forget). But this was the easy one…
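
Until that lands, the property has to be set by hand on every image, something like this (the image name is just an example):

openstack image set --property hw_firmware_type=uefi debian-9-arm64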

Imagine that you have a rack of random AArch64 hardware and want to run a “cloud”. You may end up in a situation where your compute nodes (the ones where VM instances run) are a mix of different servers. In Nova/libvirt this is handled by the cpu_mode option. The libvirt documentation says:

It is also possible to request the host CPU model in two ways:

  • “host-model” – this causes libvirt to identify the named CPU model which most closely matches the host from the above list, and then request additional CPU flags to complete the match. This should give close to maximum functionality/performance, while maintaining good reliability/compatibility if the guest is migrated to another host with slightly different host CPUs. Beware, due to the way libvirt detects the host CPU, a CPU configuration created using host-model may not work as expected. The guest CPU may confuse the guest OS (i.e. even cause a kernel panic) by using a combination of CPU features and other parameters (such as CPUID level) that don’t work.

  • “host-passthrough” – this causes libvirt to tell KVM to passthrough the host CPU with no modifications. The difference to host-model is that instead of just matching feature flags, every last detail of the host CPU is matched. This gives the absolutely best performance, and can be important to some apps which check low level CPU details, but it comes at a cost wrt migration. The guest can only be migrated to an exactly matching host CPU.

Nova assumes host-model when KVM/QEMU is used as the hypervisor. On AArch64 this crashes terribly with:

libvirtError: unsupported configuration: CPU mode ‘host-model’ for aarch64 kvm domain on aarch64 host is not supported by hypervisor

Not nice, right? So I made a simple patch making host-passthrough the default on AArch64. But when a patch is that simple, its description is probably not so simple…
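
The same effect can be had in a deployment today with two lines of nova.conf, which is essentially what the patch turns into the default:

[libvirt]
virt_type = kvm
cpu_mode = host-passthrough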

I reported a bug against Nova with some logs attached. Then I dug for more information which would explain the issue better, and found Ubuntu’s bug against libvirt from Ocata times. They used the same workaround.

So I thought: let’s report a bug against libvirt and request support for the host-model option. There I got a link to another libvirt bug, with a set of reasons why it does not make sense.

The reason is simple: no one knows what you are running on when you run Linux on an AArch64 server. In theory there are fields in /proc/cpuinfo, but you still do not know whether the CPU cores in the compute01 server are the same as in compute02, at least from the Nova/libvirt/QEMU perspective. This also blocks us from setting cpu_mode to custom and selecting a cpu_model, which could be a way of getting the same CPU for each instance regardless of the compute nodes’ processor cores.

The good side is that VM instances will work. The problem may appear when you migrate a VM to a host with different cores — it may work. Or it may not. Good luck!

We need some thermite…

Time passes, and it is that time of year when the Linaro Enterprise Group is working on a new release. And as usual, the jokes about the lack of thermite start…

Someone may ask “Why?”. The reason is simple: the X-Gene 1 processor. I think its hate club grows and grows with time.

When it was released, it was a nice processor: eight cores, normal SATA, PCI Express, USB, DDR3 memory with ECC, etc. It was used for distribution builders, development platforms, and so on. Not that there was any choice 😀

Nowadays, with all those other AArch64 processors on the market, it starts to be painful: PCI support requires quirks, the serial console requires patching, etc. We have the X-Gene 1 in Applied Micro Mustang servers and HPE Moonshot M400 cartridges. Those machines may not officially be listed as supported, but we still use them, so testing that a new release works on them has to be done.

And each time there are some issues to work around. Some could probably be fixed with firmware updates, but I do not know whether the vendors still support that hardware.

So if you have some spare thermite (and a way to handle that legally) then contact us.

Twenty-five years of Linux

When I came back from PTO I had to dig through work mail. One of the threads was about 25 years of Linux, with the question “which was your first kernel”, and I realised that it may not be an easy question to answer.

For me the first was 2.0.2[6-8] (I do not remember exactly) on some university server where I got my first Linux account (normally I used SunOS on a text terminal). I remember that there was a simple root exploit we used.

Then 2.0.36 on my Amiga 1200 (Debian ‘slink’) was the first I ran on my own hardware.

2.2.10 was the first I used for longer, as it was the Debian ‘potato’ m68k one.

2.3.47 was the first I cross-compiled (on i686/linux for m68k/linux). And it worked!

2.4.0-test5 was the first I built for my x86 desktop once I moved from Amiga/AmigaOS to PC/Debian. I had a Duron/600 desktop and an old 386 desktop, both running the same version. The Duron got newer ones later; the 386 stayed with this one for about a year, until I returned it.

When I bought the Sharp Zaurus SL-5500 PDA, it was running 2.4.18-rmk7-pxa3-embeddix, so that is my first Linux version on a mobile device. The next jump was 2.6.11 on the Zaurus C760, the first 2.6 one on mobile.

During OpenZaurus maintenance I started upstreaming kernel patches; 2.6.17 was the first with my patches in.

When I had that strange ProGear webpad, I wrote a backlight driver for it (based on someone else’s code), and 2.6.21 was the first with my driver in (I removed it in 3.7).

AArch64 desktop hardware?

Soon it will be four years since I started working on the AArch64 architecture. A lot changed in software during that time. A lot in hardware too. But machine availability still sucks badly.

In 2012 all we had was a software model. It was slow, terribly slow. A common joke was AArch64 developers standing in a queue for 10 GHz x86-64 CPUs. So I was generating working binaries by cross-compiling. But many distributions only do native builds. In models. Imagine Qt4 building for 3-4 days…

In 2013 I got access to the first server hardware, with the first silicon version of the CPU. It was highly unstable, we could use just one core, etc. GCC was crashing like hell, but we managed to get stable build results from it. Qt4 was now building in a few hours.

Then the amount of hardware at Red Hat grew and grew. Farms of APM Mustangs, AMD Seattle boxes, and several other servers appeared, got racked, and became available to use. In 2014 one Mustang even landed on my desk (the first such machine in Poland).

But this was server land. Each of those machines cost about 1000 USD (if not more). And availability was a problem too.

Linaro tried to do something about it and created the 96boards project.

First came the ‘Consumer Edition’ range: yet more small-form-factor boards with functionality stripped as much as possible. No Ethernet, no storage other than eMMC/USB, a low amount of memory, chips taken from mobile phones, etc. But it was selling! Only because people were hungry to get ANYTHING with AArch64 cores, though. First was the HiKey, then the DragonBoard 410 got released, then a few other boards. All with the same set of issues: non-mainline kernels, weird bootloaders, binary blobs for this or that…

Then the so-called ‘Enterprise Edition’ got announced, with another ridiculous form factor (and microATX as an option). And that was it. There was a leak of the Husky board which showed how fucked up the design was: ports all around the edges, memory above and under the board, and of course incompatible with any industrial form factor. I would like to know what they were smoking…

Time passed. Husky was forgotten for another year. Then Cello was announced as a ‘new EE 96boards board’, while it looked like a redesigned Husky with two SATA ports fewer (because who needs more than two SATA ports, right?). The last time I heard about Cello it was still ‘maybe soon, maybe another two weeks’. Prototypes looked hand-soldered, with the USB controller mounted rotated, dead on-board Ethernet, etc.

In the meantime we got a few devices from other companies. Pine64 ran a big Kickstarter campaign and shipped to developers. Hardkernel started selling the ODROID-C2, Geekbox released their TV box, and probably something else got released as well. But all those boards were limited to 1-2 GB of memory, often lacked SATA, and used mobile processors with their own sets of bootloaders, etc., causing extra work for distributions.

The Overdrive 1000 was announced. Without any options for expansion, it looked like SoftIron wanted customers to buy the Overdrive 3000 if they wanted to use a PCI Express card.

So now it is 2016. Four years of my work on AArch64 have passed. Most distributions support this architecture by building on proper servers, but much of that effort goes unused, because developers do not have sane hardware to play with (sane meaning expandable, supported by distributions, capable).

There are no standard-form-factor mainboards (mini-ITX, microATX, ATX) available on the mass market. 96boards failed here, server vendors are not interested, and small Chinese companies prefer to release yet another fruit/Pi with a mobile processor. Nothing, null, nada, nic.

Developers know where to buy normal computer cases, storage, memory, graphics cards, USB controllers, SATA controllers, and peripherals, so vendors do not have to worry about that part. But there is still nothing to put those cards into. No mainboards which can be mounted in a normal PC case, have some graphics plugged in, a few SSDs/HDDs connected, mouse/keyboard, monitors, and just be used.

Sometimes it is really hard to convince software developers to make changes for a platform they are unable to test on. And the current hardware situation does not help. All those projects making hardware available “in a cloud” help only a subset of projects — ever tried to run a GNOME/KDE session over the network? With OpenGL acceleration?

So where is my AArch64 workstation? In desktop or laptop form.

This post was written after my Google+ post, where a similar discussion happened in the comments.

AArch64 desktop: last day

Each year you can hear the “this is the year of the Linux desktop” phrase. After a few days with an AArch64 desktop I know one thing: it is not the year of the ARMv8 Linux desktop.

Web browsing

OK, I may be spoiled by the speed of my i7-2600K desktop, but a situation where Firefox with fewer than 20 tabs open is unable to display the characters I type into a textarea fast enough shows that something is wrong (on a machine with 16 GB of RAM). And tell me that this is not typical desktop use of a web browser…

YouTube, the main source of videos of any kind. Sometimes it works, but most of the time I lack the patience to wait for it to start (VP9 and H.264 codec support is present). And there is no way to watch “live hangouts”.

And say goodbye to music streaming services like Deezer or Spotify.

Gaming

I am not a gamer. I installed Quake 3 (which I had never played before) and it worked; SuperTuxKart worked as well. But that does not prove anything, as both of those games have low requirements.

It will probably never be a gaming platform for the Linux desktop.

Development

For my style of development it is fine. But then all I need is a terminal and gVim ;D

Other hardware?

I think the results may be affected by the fact that all I have here is an Applied Micro Mustang based on the X-Gene 1 CPU. It is one of the first ARMv8 processors in the Linux world, and it is optimized for server use rather than desktop.

One thing is sure: next year I will try this experiment with other AArch64 hardware. I just hope it will be sooner than a year from now (which is my feeling after the lack of new AArch64 hardware announcements from Linaro members during this week’s Linaro Connect).