So you run OpenStack on your phone?

For about a year I have been working on OpenStack on the AArch64 architecture, and the question from the title gets asked from time to time, in this or other forms.

Yes, I do have an AArch64-powered phone nowadays. But it has just 4GB of memory and runs Android, so it is not a good platform for running OpenStack.

I am aware that for many people anything coming from ARM Ltd means small, embedded, not worth serious effort etc. To me they are not wrong, they are just ‘not up to date’.

We have servers. Sure, someone can say that we had them years ago, and that would be right too. There were Marvell server boards, and Calxeda had their “high density” boxes with a huge number of quad-core CPUs. But now we have ‘boring’ ones which can be used in the same way as x86-64 machines.

ARM Ltd published the SBSA and SBBR specifications, which define what an ARM server is nowadays. The short version: a boring box which you put into a rack, plug into power and network, power on and install any enterprise Linux distribution on. No need to deal with weird bootloaders (looking from a server perspective), random kernel versions etc. Just unpack, connect and use.

But what do you get inside? It depends on the product. It can be one CPU with 8 cores, but it can also be 1-2 CPUs with 48 cores each, or even more (I heard about 240-core products but have no idea whether they are on the market yet). And processors mean memory. What about 1TB (terabyte) of memory per CPU? Cavium ThunderX mainboards allow such a setup, with 8 memory DIMMs per CPU.

Then there is networking. With 32-bit ARM machines the question was “will it support 1GbE?”; with AArch64 servers the problem can reappear in the opposite direction, as some systems do not support anything slower than 10GbE (some ThunderX boards have 3x40GbE + 4x10GbE ports). The RJ-45 connector is usually there for connecting to the BMC (think IPMI).

Storage is Serial ATA, whatever you plug into PCI Express, or something over the network. Choose your way. I would not be surprised to see M.2 connectors either.

Usually that means several PCI Express chips on the board to provide all of this. On AArch64 most of the controllers are already part of the SoC, which makes things easier and faster.

On top of that we run standard distributions like CentOS, Debian, Fedora or openSUSE. Out of the box, with distribution kernels based on mainline. And then we install OpenStack: from packages, as Docker containers, using DevStack, or any other way we tend to use.

And when I really have to use OpenStack on my phone, it looks like this:

The first 96boards Enterprise board which will be on the market?

I am at Linaro Connect in Budapest, Hungary, and on Arrow’s stand I noticed something I did not expect: a board in the 96boards Enterprise Edition form factor.

In the past Linaro presented the ‘Husky’ and ‘Cello’ devboards in the 96boards EE form factor. Neither ever reached production; only a few prototypes existed (I had some of them in my hands). Both products were complete failures.

The Systart Oxalis LS1012A was announced about a month ago. They are targeting routers and IoT gateway type devices with it.

As you can see, the board has ports all over the edges, but that is the fault of the 96boards EE specification, which mandates such broken designs. When I saw it for the first time my question was “where is the PCIe slot?”, but it turned out that (according to the spec) it is optional. The board has a mini-PCIe slot on the bottom side anyway.

Speaking of design… the Oxalis is made from two parts: a carrier board and a SoM (System on Module). The SoM is based on the NXP QorIQ® LS1012A network processor (a single ARM Cortex-A53 core running at up to 800 MHz) with 64MB of SPI flash (space for a bootloader!) and 1GB of memory. The carrier board provides two GbE network ports, two USB 3.0 connectors, the standard 96boards header, one SATA port (with power!), a microSD slot and a mini-PCIe slot (on the bottom side).

The beauty of such a design is that you can replace the CPU board with something different. According to Dieter Kiermaier from Arrow there are plans for other SoM boards in the future.

Will it be a success? Time will tell. Will I buy it? Rather not, as for my development I need 16GB of RAM. Will it have a case? I did not ask. When will it be on the market? May/June 2017.

My work on Kolla

During the last month I was working on one of the OpenStack projects: Kolla. My job was adding support for non-x86 architectures (aarch64 and ppc64le) and resurrecting Debian support.

A bit of background

At Linaro we work on getting AArch64 (64-bit ARM, arm64) present in many places. We have at least two OpenStack instances running at the moment, on AArch64 hardware only.

First we used Debian ‘jessie’ and the OpenStack ‘liberty’ release. It worked. Not the best setup, but we helped many projects by providing virtual machines for porting software.

It was built from packages, and later (when ‘mitaka’ was released) we moved to a virtualenv per component. Our second “cloud” runs that, with proper Neutron networking, live migration and a few other nice things.

But the virtualenvs were a quick solution. We decided to move to Docker containers for the next release.

And Kolla was chosen as the tool for it. We do not like to reinvent the wheel again and again…

Non-x86 support in Kolla

The problem was typical: Kolla was x86-64 centric, like most software nowadays. But thanks to work done by Sajauddin Mohammad I had something to use as a base for adding aarch64 support.

I took his patch, slashed out most of it and concentrated on the minimal set of changes needed to get something built on AArch64. The result was sent for review and is now at its 10th version.
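
For the curious, a build run on an AArch64 machine looks roughly like this (a minimal sketch; the flags are standard kolla-build options, the image list is just an example):

  # install Kolla and build a few 'source' type images; builds go
  # through Docker, so images come out in the host's architecture
  pip install kolla
  kolla-build --base ubuntu --type source keystone nova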

Docker images started to appear. But at the beginning I was building Ubuntu ones, as Debian support was “basically abandoned, on the way out”. From the CentOS guys I got confirmation that an official Docker image would be generated (it is done already).

I spent some time making sure that the whole non-x86 support is free from hardcoding wherever possible. As you can see in my working branch, it went quite well. Most of the architecture-related changes come down to “the distribution does not provide package XYZ for that architecture” or to the handling of external repositories.

Debian support

And here we come to Debian support. At Linaro we decided to support two community-based distributions: CentOS and Debian. But Debian was on its way out in Kolla…

As this was not closely related to the non-x86 work, I decided to use one of the x86-64 machines for it.

The first builds were against the ‘jessie-backports’ base tag. I had to make a patch to tell APT that if I ask for backports then I really want them. It was sent for review like the rest of the patches.
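
The usual trick (my actual patch may differ in details) is to raise the pin priority of the backports suite so APT stops treating it as opt-in:

  # /etc/apt/preferences.d/backports: give jessie-backports a normal
  # priority so backported package versions win by default
  cat > /etc/apt/preferences.d/backports <<EOF
  Package: *
  Pin: release a=jessie-backports
  Pin-Priority: 500
  EOF
  apt-get update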

Images were building, but not as many as for Ubuntu. So I went through all of them and enabled Debian wherever possible. The resulting patch went for review as usual.

The effect was quite nice (number of images built, on x86-64):

  • debian-binary: 158
  • debian-source: 201

But ‘jessie’ was missing several packages even with backports enabled. So, after a discussion with my team, I decided to drop it and go for Debian/testing (‘stretch’) instead. It is already frozen for release, so no big changes are allowed. Patch in review, of course.

At that moment I abandoned one of the earlier patches, as ‘jessie-backports’ was not something I planned to support.

It turned out that ‘stretch’ images have a slightly different set of packages installed than ‘jessie’ had: ‘gnupg’ and ‘dirmngr’ were missing, while we need them for importing GPG keys into APT. A proper patch went to review again.
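
For illustration, the failing step boils down to something like this (the key URL is only a placeholder):

  # on a bare 'stretch' image apt-key needs gnupg and dirmngr present
  apt-get update
  apt-get install -y --no-install-recommends gnupg dirmngr
  # now importing an external repository key works again
  curl -fsSL https://example.org/repo.key | apt-key add -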

I did a rebuild on x86-64:

  • stretch-binary: 137
  • stretch-source: 195

A bit fewer than ‘jessie-backports’ had, right? Sure, but it also shows that I need to do a fresh build to verify the numbers (my laptop already has ~1500 Docker images generated by Kolla).
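
Counting them is simple enough; a quick sketch, assuming the default ‘kolla’ image namespace:

  # list all Kolla-built images on the host and count them
  docker images --format '{{.Repository}}:{{.Tag}}' | grep '^kolla/' | wc -l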

Cleaning up the old Power patch

Remember the patch which all of this started from? I did not forget it, and after building all those images I went back to it.

Some parts were just fugly so I skipped them, but others were useful when done properly. That is how some new changes were made, along with updates to the previous ones.

Then I managed to get remote access to one of the Power machines at Red Hat and started builds (number of images built):

  • debian-binary: 134
  • debian-source: 184
  • ubuntu-binary: 147
  • ubuntu-source: 190

No CentOS builds, as there was no centos/ppc64le base image available.

Summary

Non-x86 support looks quite nice. There are some images which cannot be built, as they rely on external repositories which provide no aarch64 or ppc64le packages.

Debian ‘stretch’ support is not perfect yet, but it is something I plan to maintain, so the situation will improve. Note that most of my work will go into the ‘source’ type of builds, as we want to have the same images for both Debian and CentOS systems.

System calls again

A few months ago I created a page with an HTML table, basically for my own use. Then I showed it to people and found out that it was useful for them too. So I kept improving it, and it became a side project.

Yes, system calls again. I wrote about it in the past, but yesterday I rewrote the code so that it now uses the Linux source tree, and I can generate tables for far more architectures without needing other computers (real or emulated).
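
The idea is simple: for most architectures the kernel tree describes system calls in plain *.tbl files which are trivial to parse. A rough illustration (not the project's actual code), run from a kernel source checkout:

  # number and name are the 1st and 3rd columns of the x86 64-bit table
  grep -v '^#' arch/x86/entry/syscalls/syscall_64.tbl |
      awk 'NF { printf "%s\t%s\n", $1, $3 }' | sort -n | head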

The next step was working on the presentation layer. The old version was just a table with sorting added. Things got ugly when scrolling, as the header disappeared. Now it sticks to the top of the page, so it is easier to see which column relates to which architecture.

Odd/even lines are coloured now, which makes it easier to find the numbers for a syscall.

And speaking of searching: there is a filter box now. You can type a syscall name (or part of one) there and have the table filtered. The same can be done with a system call number. Valgrind told you that it has no idea how to handle syscall 145? Just enter the number and you will see that it is getresuid(), nfsservctl(), readv(), sched_getscheduler(), setreuid() or setrlimit(), depending on which architecture you are testing.
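
By the way, a similar lookup can be done offline with ausyscall from the Linux audit package (a separate tool, shown here just for comparison):

  ausyscall i386 145      # readv
  ausyscall x86_64 145    # sched_getscheduler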

You wonder what a given system call does? There are links to man pages provided.

Go here to check it out, and comment here or open a new issue if you find a bug or would like to collaborate. Patches are welcome.

Linaro Connect: interesting hardware

Before going to Linaro Connect I had a plan to look at all those 96boards devices and write up some complaints/opinions about them. But it would be like shooting fish in a barrel, so I decided against it. Still, there were some interesting pieces of hardware there.

One of them was the MACCHIATObin board from SolidRun. I think this is the same as their Armada 8040 community board, but after design changes. Standard Mini-ITX format, a quad-core Cortex-A72 CPU (clocked at up to 2GHz), one normal DIMM slot (max 16GB, ships with 4GB), three Serial ATA ports, a PCI Express x4 slot, one USB 3.0 port and a microSD slot.

UPDATE: SolidRun confirmed that this is the final design of their Armada 8040 community board.

The photo (taken by Riku Voipio) shows which goodies are available:

The network interfaces, from top to bottom, are (if I remember correctly):

  • 10GbE (SFP + RJ-45)
  • 10GbE (SFP + RJ-45)
  • 2.5GbE (SFP)
  • 1GbE (RJ-45)

When it comes to software, I was told that the board is SBSA compliant, so any normal distribution should work. The kernel and bootloaders (U-Boot and UEFI) are mainlined.

Price? 350 USD. It looks like a nice candidate for an AArch64 development platform or a NAS.

The other device was the Gumstix Nodana 96BCE board, a 96boards-compliant carrier board for Intel Joule modules.

From the top it looks like a typical 96boards device (except for the USB-C port):

But once flipped over, the Intel Joule module is visible:

This is the first non-ARM based 96boards device. Maybe even one of the most compliant ones, at least from the software perspective, because on the hardware side the module makes it a bit too thick to fit within the 96boards CE specification limits.

Note that the 96boards Consumer Edition specification does not require an ARM or AArch64 CPU.

Linaro Connect: Las Vegas sightseeing

One of the cool things about being a Linaro assignee is going to the Linaro Connect conference. This time it was in Las Vegas, USA. I flew Berlin Tegel -> London Heathrow -> Las Vegas. The last leg was fun, as I met several Linaro folks at the airport ;D

I arrived in Vegas, went to the hotel and fell asleep. Sunday was planned for some Ingress playing and for sightseeing. As usual I had several places marked on Google Maps to make things easier.

So, Sunday… It was really a Sun day. I took some water with me and refilled several times during the day just to stay hydrated. In the Las Vegas climate I did not even feel sweaty, as sweat evaporated right away…

But let’s start from the beginning. I walked a few hundred meters from the hotel and caught a public transport bus which took me to Fremont Street (or somewhere around it). While walking I felt like the only person on Earth, or like I was in a no-go zone; there was basically no one on the street. After some walking and a few photos a security guy asked me something like “who are you and what are you doing here???”. It turned out that part of Fremont Street (and its surroundings) was closed for some arts/music festival. I probably missed a ‘no entry’ sign…

Anyway, I did not get into any trouble and was pointed to where the gate was. I walked around, saw some places, bought souvenirs (including a fridge magnet for my collection) and more bottles of water, and decided to walk to the next point on my map.

Yes, it may sound strange, but I walked. And walked. And walked. Then the Arts District happened.

OMG, it was awesome. Deserted streets, shops with retro furniture and stuff, shops with crazy junk, a shop with wax figures from the Last Supper etc. But the best part was the murals and graffiti. Lots of them, in different styles and of varying quality. I spent about two hours just walking around and taking photos.

The next stop was the Strip, with all those big hotel/casino buildings. At Circus Circus I found a room full of arcade machines and spent 25 cents on Galaga. In the Venetian I looked at their version of the Venice canals (I have to go to Venice one day and compare :D). A few minutes later I saw the Eiffel Tower (or rather a miniature of it). I decided to skip searching for a copy of the Statue of Liberty, crossed the street instead and went to take a look at the Bellagio fountain show. I have to admit it was nice. I watched three shows (I had a few things to sort out in between) and then took a cab back to the hotel.

[youtube https://youtu.be/Q0gboKjabRY]

More photos in Google Photos album

Linaro Connect: good to be back

One of the things you do while at Linaro is go to Linaro Connect conferences. My previous one was the 2012 Copenhagen edition, so it was good to be back.

All those people from different companies and projects… Some faces I recognized, some not. Several people recognized me; some said my beard made it harder. Fun ;D

Discussions about random things, random hacking (not all Snow Chromebooks are the same), talks to attend, talks to give…

And speaking of speaking: our team gave a talk about OpenStack on AArch64. It was recorded, but the volume level is very low.

[youtube https://www.youtube.com/watch?v=P5xmaT6H3JE]

Complaining? Sure, there was some. And I got my badge upgraded, just to show all those impersonators that I am back.

badge with ‘the main complainer’ text on it

There was jet lag as usual, so I was a bit excluded from the evening events, but the ones I attended were great. Like the team dinner with a Big Kahuna cheeseburger (and Sprite to drink).

Next year I have to organize the trip so that I can do some personal sightseeing in the week before the conference. According to rumours it will be in one of the areas where I have some places to visit ;D

AArch64 desktop hardware?

Soon it will be four years since I started working on the AArch64 architecture. A lot changed in software during that time. A lot in hardware too. But machine availability still sucks badly.

In 2012 all we had was a software model. It was slow, terribly slow. A common joke had AArch64 developers standing in a queue for 10GHz x86-64 CPUs. So I was generating working binaries using cross-compilation. But many distributions only do native builds, which meant building in models. Imagine Qt4 taking 3-4 days to build…

In 2013 I got access to the first server hardware, with the first silicon version of the CPU. It was highly unstable, we could use just one core etc. GCC was crashing like hell, but we managed to get stable build results out of it. Qt4 was building in a few hours now.

Then the amount of hardware at Red Hat grew and grew. Farms of APM Mustangs, AMD Seattle machines and several other servers appeared, got racked and became available for use. In 2014 one Mustang even landed on my desk (the first such machine in Poland).

But this was server land. Each of those machines cost about 1000 USD (if not more), and availability was a problem too.

Linaro tried to do something about it and created the 96boards project.

First came the ‘Consumer Edition’ range: yet more small form factor boards with functionality stripped as much as possible. No Ethernet, no storage other than eMMC/USB, small amounts of memory, chips taken from mobile phones etc. But they were selling! Only because people were hungry to get ANYTHING with AArch64 cores, though. First the HiKey was released, then the DragonBoard 410c, then a few other boards. All with the same set of issues: non-mainline kernels, weird bootloaders, binary blobs for this or that…

Then the so-called ‘Enterprise Edition’ got announced, with another ridiculous form factor (and microATX as an option). And that was it. There was a leak of the Husky board which showed how fucked up the design was: ports all around the edges, memory above and under the board, and of course incompatibility with any industrial form factor. I would like to know what they were smoking…

Time passed. Husky was forgotten for another year. Then Cello was announced as a “new EE 96boards board”, while it looked like a redesigned Husky with two fewer SATA ports (because who needs more than two SATA ports, right?). The last time I heard about Cello it was still ‘maybe soon, maybe another two weeks’. The prototypes looked hand-soldered, with the USB controller mounted rotated, dead on-board Ethernet etc.

In the meantime we got a few devices from other companies. Pine64 ran a big Kickstarter campaign and shipped to developers. Hardkernel started selling the ODROID-C2, Geekbox released their TV box, and probably something else got released as well. But all those boards were limited to 1-2GB of memory, often lacked SATA, and used mobile processors with their own sets of bootloaders etc., causing extra work for distributions.

The Overdrive 1000 was announced. Without any expansion options, it looked like SoftIron wanted customers to buy the Overdrive 3000 if they want to use a PCI Express card.

So now it is 2016. Four years of my work on AArch64 have passed. Most distributions support this architecture by building on proper servers, but much of this effort goes unused because developers do not have sane hardware to play with (sane meaning expandable, supported by distributions, capable).

There are no standard form factor mainboards (Mini-ITX, microATX, ATX) available on the mass market. 96boards failed here, server vendors are not interested, and small Chinese companies prefer to release yet another fruit/Pi with a mobile processor. Nothing, null, nada, nic.

Developers know where to buy normal computer cases, storage, memory, graphics cards, USB controllers, SATA controllers and peripherals, so vendors do not have to deal with that part. But there is still nothing to put those cards into. No mainboards which can be mounted in a normal PC case, have some graphics plugged in, a few SSDs/HDDs connected, mouse/keyboard and monitors attached, and just be used.

Sometimes it is really hard to convince software developers to make changes for a platform they are unable to test on, and the current hardware situation does not help. All those projects making hardware available “in a cloud” help only a subset of projects. Ever tried to run a GNOME/KDE session over the network? With OpenGL acceleration etc.?

So where is my AArch64 workstation? In desktop or laptop form.

This post was written after my Google+ post, where a similar discussion happened in the comments.

My work on changing CirrOS images

What is CirrOS, and why was I working on it? That was quite a common question when I mentioned what I had been working on during the last few weeks.

So, CirrOS is a small image to run in a cloud. OpenStack developers use it to test their projects.

Technically it is yet another Frankenstein OS. It is built using Buildroot 2015.05 and uses uClibc or glibc (depending on the target architecture). Then the Ubuntu 16.04 kernel is applied on top, and GRUB (also from Ubuntu) is used to make it bootable.

The problem was that it was not done in a UEFI-bootable way…

My first changes were: switch the images to GPT, create an EFI system partition and put a bootloader there. I first used the CentOS ‘grub2-efi’ packages (as they provided ready-to-use EFI binaries) and later switched to the Ubuntu ones, as the upstream maintainer (Scott Moser) prefers all external binaries to come from one source.
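
The partitioning change itself is straightforward; a simplified sketch (not the exact CirrOS build code):

  # GPT label with a small EFI system partition plus a root filesystem
  truncate -s 1G disk.img
  parted -s disk.img mklabel gpt
  parted -s disk.img mkpart ESP fat32 1MiB 65MiB
  parted -s disk.img set 1 boot on
  parted -s disk.img mkpart rootfs ext4 65MiB 100%
  # the GRUB EFI binary then lands in EFI/BOOT/BOOTAA64.EFI (aarch64)
  # or EFI/BOOT/BOOTX64.EFI (x86-64) on the first partition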

While he was on vacation (so the merge request had to wait) I started digging more and more into the scripts.

I fixed the getopt use, as arguments passed between scripts were read partly via getopt and partly by assigning variables from ${X} (where X is a number).
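
The fix follows the standard getopt pattern, shown here as a generic illustration rather than the actual CirrOS code:

  # parse all options through getopt instead of mixing it with bare ${1}
  OPTS=$(getopt -o hv -l help,verbose -n "$0" -- "$@") || exit 1
  eval set -- "$OPTS"
  while true; do
      case "$1" in
          -h|--help)    echo "usage: $0 [-v] args..."; exit 0 ;;
          -v|--verbose) VERBOSE=1; shift ;;
          --)           shift; break ;;
      esac
  done
  # positional arguments now live in "$@", no more magic numbers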

All scripts were moved to Bash (as /bin/sh on Ubuntu is usually Dash, a minimalist POSIX shell), whitespace got unified across all the scripts, and some other things happened as well.

At one point all the scripts together were 1835 lines and my diff was 2250 lines (+1018/-603) long. Luckily Scott came back and we got most of that merged.

Recent (2016.07.21) images are available and work fine on all platforms. If you use them with OpenStack then please remember to set the ‘short_id’ property to ‘ubuntu16.04’, otherwise there may be a problem with finding the rootfs (there is no virtio-scsi driver in the disk images).
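
Setting it is a one-liner with the OpenStack client (the image name below is just an example):

  openstack image set --property short_id=ubuntu16.04 cirros-2016.07.21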

Summary:

architecture   booting before          booting after
aarch64        direct kernel           UEFI or direct kernel
arm            direct kernel           UEFI or direct kernel
i386           BIOS or direct kernel   BIOS, UEFI or direct kernel
powerpc        direct kernel           direct kernel
ppc64          direct kernel           direct kernel
ppc64le        direct kernel           direct kernel
x86-64         BIOS or direct kernel   BIOS, UEFI or direct kernel

Visiting UK again — Bletchley Park and Cambridge Beer Festival

For some time I have had “visit Bletchley Park” on my ‘places to visit’ list. Some people told me that there is nothing interesting to see there, others said that I should definitely go. So I will. And I will also grab some beers at the Cambridge Beer Festival, like three years ago.

Due to some family duties my visit will be short: landing on Saturday (21st May) and departing on Wednesday (25th May). First Bletchley Park, then Cambridge from Sunday evening.

The plans are simple: walk, see old computers, walk, visit friends I have not seen for ages, walk, see not-so-old computers, maybe play some Ingress, meet other friends, drink some beers, exchange some hardware, buy some hardware etc.

This time I will skip visiting the Linaro office: they moved somewhere outside Cambridge, so it takes too much time to get there just to say “hi” and drink tea.

As usual I will be online, so catch me via Hangouts, Telegram, Facebook or mail, or call me if you want to meet.