1. AArch64 desktop hardware?

    Soon it will be four years since I started working on the AArch64 architecture. A lot changed in software during that time. A lot in hardware too. But machine availability still sucks badly.

    In 2012 all we had was a software model. It was slow, terribly slow. A common joke was that AArch64 developers were standing in a queue for 10GHz x86-64 CPUs. So I was generating working binaries using cross compilation. But many distributions only do native builds. In models. Imagine Qt4 building for 3-4 days…

    In 2013 I got access to the first server hardware, with the first silicon version of the CPU. It was highly unstable, we could use just one core etc. GCC was crashing like hell but we managed to get stable build results from it. Qt4 was now building in a few hours.

    Then the amount of hardware at Red Hat kept growing and growing. Farms of APM Mustangs, AMD Seattle and several other servers appeared, got racked and became available to use. In 2014 one Mustang even landed on my desk (as the first such machine in Poland).

    But this was server land. Each of those machines cost about 1000 USD (if not more). And availability was limited too.

    Linaro tried to do something about it and created the 96boards project.

    First came the ‘Consumer Edition’ range: yet more small form factor boards with functionality stripped as much as possible. No Ethernet, no storage other than eMMC/USB, little memory, chips taken from mobile phones etc. But it was selling! Only because people were hungry to get ANYTHING with AArch64 cores. First HiKey was released, then the DragonBoard 410c, then a few other boards. All with the same set of issues: non-mainline kernel, weird bootloaders, binary blobs for this or that…

    Then the so-called ‘Enterprise Edition’ got announced, with another ridiculous form factor (and microATX as an option). And that was it. There was a leak of the Husky board which showed how fucked up the design was: ports all around the edges, memory above and under the board and of course incompatible with any industry-standard form factor. I would like to know what they were smoking…

    Time passed by. Husky got forgotten for another year. Then Cello was announced as a “new EE 96boards board” while it looked like a redesigned Husky with two fewer SATA ports (because who needs more than two SATA ports, right?). The last time I heard about Cello it was still ‘maybe soon, maybe another two weeks’. Prototypes looked hand-soldered, with the USB controller mounted rotated, dead on-board Ethernet etc.

    In the meantime we got a few devices from other companies. Pine64 ran a big Kickstarter campaign and shipped to developers. Hardkernel started selling the ODROID-C2, Geekbox released their TV box and probably something else got released as well. But all those boards were limited to 1-2GB of memory, often lacked SATA and used mobile processors with their own sets of bootloaders etc, causing extra work for distributions.

    The Overdrive 1000 was announced. Without any options for expansion, it looked like SoftIron wanted customers to buy the Overdrive 3000 if they wanted to use a PCI Express card.

    So we have 2016 now. Four years of my work on AArch64 have passed. Most distributions support this architecture by building on proper servers, but most of this effort is not used because developers do not have sane hardware to play with (sane meaning expandable, supported by distributions, capable).

    There are no standard form factor mainboards (mini-ITX, microATX, ATX) available on the mass market. 96boards failed here, server vendors are not interested, and small Chinese companies prefer to release yet another fruit/Pi with a mobile processor. Nothing, null, nada, nic.

    Developers know where to buy normal computer cases, storage, memory, graphics cards, USB controllers, SATA controllers and peripherals, so vendors do not have to worry about or deal with that part. But there is still nothing to put those cards into. No mainboards which can be mounted in a normal PC case, have some graphics plugged in, a few SSDs/HDDs connected, mouse/keyboard, monitors, and just be used.

    Sometimes it is really hard to convince software developers to make changes for a platform they are unable to test on. And the current hardware situation does not help. All those projects making hardware available “in a cloud” help only a subset of projects — ever tried to run a GNOME/KDE session over the network? With OpenGL acceleration etc?

    So where is my AArch64 workstation? In desktop or laptop form.

  2. My work on changing CirrOS images

    What is CirrOS and why was I working on it? This was quite a common question when I mentioned what I was working on during the last few weeks.

    So, CirrOS is a small image to run in a cloud. OpenStack developers use it to test their projects.

    Technically it is yet another Frankenstein OS. It is built using Buildroot 2015.05 with uClibc or glibc (depending on the target architecture). Then an Ubuntu 16.04 kernel is applied on top and “grub” (also from Ubuntu) is used to make it bootable.

    The problem was that it was not done in a UEFI-bootable way…

    My first changes were: switch images to GPT, create an EFI system partition and put some bootloader there. I first used CentOS “grub2-efi” packages (as they provided ready-to-use EFI binaries) and later switched to Ubuntu ones, as the upstream maintainer (Scott Moser) prefers to have all external binaries come from one source.
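
    A rough sketch of that idea (not the actual CirrOS build scripts; the tool choice, partition sizes and grub binary name below are my assumptions) could look like this:

    truncate -s 1G disk.img                           # empty raw image
    sgdisk -n 1:0:+64M -t 1:ef00 -c 1:"EFI System" disk.img
    sgdisk -n 2:0:0    -t 2:8300 -c 2:"rootfs"     disk.img
    LOOP=$(losetup --show -fP disk.img)               # exposes ${LOOP}p1, ${LOOP}p2
    mkfs.vfat "${LOOP}p1"                             # EFI system partition
    mount "${LOOP}p1" /mnt/esp
    mkdir -p /mnt/esp/EFI/BOOT
    cp grubaa64.efi /mnt/esp/EFI/BOOT/BOOTAA64.EFI    # "removable media" path
    umount /mnt/esp
    losetup -d "$LOOP"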

    When he was on vacation (so the merge request had to wait) I started digging more and more into the scripts.

    I fixed the getopt use, as arguments passed between scripts were read partly via getopt and partly by assigning variables to ${X} (where X is a number).
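
    As a minimal illustration (this is not the actual CirrOS code), consistent parsing with the util-linux getopt looks roughly like this:

    # parse everything through getopt instead of mixing it with ${1}, ${2}…
    ARGS=$(getopt -o "a:v" -l "arch:,verbose" -- "$@") || exit 1
    eval set -- "$ARGS"
    while true; do
        case "$1" in
            -a|--arch)    arch="$2"; shift 2 ;;
            -v|--verbose) verbose=1; shift ;;
            --)           shift; break ;;
        esac
    done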

    All scripts were moved to Bash (as /bin/sh in Ubuntu is usually Dash, which is a minimalist POSIX shell), whitespace got unified between all scripts and some other stuff happened as well.

    At one moment all scripts had 1835 lines and my diff was 2250 lines (+1018/-603) long. Luckily Scott was back and we got most of that stuff merged.

    Recent (2016.07.21) images are available and work fine on all platforms. If someone uses them with OpenStack then please remember to set the “short_id” property to “ubuntu16.04” — otherwise there may be a problem with finding the rootfs (there is no virtio-scsi in the disk images).
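
    For example, with the OpenStack client (“cirros” below is just a placeholder for your image name or UUID):

    openstack image set --property short_id=ubuntu16.04 cirros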

    Summary:

    architecture   booting before          booting after
    aarch64        direct kernel           UEFI or direct kernel
    arm            direct kernel           UEFI or direct kernel
    i386           BIOS or direct kernel   BIOS, UEFI or direct kernel
    powerpc        direct kernel           direct kernel
    ppc64          direct kernel           direct kernel
    ppc64le        direct kernel           direct kernel
    x86-64         BIOS or direct kernel   BIOS, UEFI or direct kernel
  3. Debian On Chromebooks

    The Debian wiki has a section named “Debian On” where users can describe how to install Debian on various hardware. And there are several pages about Chromebooks.

    It is a great idea but how it is done is far from great. People just copy-pasted one of the pages and did some adaptations, leaving the rest untouched.

    So you can read the “Do not play with the ALSA mixer - you may fry your speakers!” warning, which was valid on the Samsung ARM Chromebook in 2012 but was quickly fixed by a ChromeOS update.

    Some of those pages link to my blog, so people often ask me about installing Debian on Any Random Chromebook Model when I have only one - the Samsung ARM Chromebook from 2012. And I do not use it with Debian. I do not use it at all. Maybe I will start again one day, but that is just a maybe.

    So people: if you have issues installing Debian/Fedora/Ubuntu/whatever-other-than-ChromeOS on your Chromebook then go to IRC, find your distribution’s channel and ask. You have better chances of a good answer there than when you ask me.

  4. Visiting UK again — Bletchley Park and Cambridge Beer Festival

    For some time I have had “visit Bletchley Park” on my ‘places to visit’ list. Some people told me that there is nothing interesting to see, some said that I should definitely go there. So I will. And I will also grab some beers at the Cambridge Beer Festival like three years ago.

    Due to some family duties my visit will be short — landing on Saturday (21st May) and departing on Wednesday (25th May). First Bletchley Park and then Cambridge from Sunday evening.

    Plans are simple: walk, see old computers, walk, visit long-time-no-see friends, walk, see not-so-old computers, maybe play some Ingress, meet other friends, drink some beers, exchange some hardware, buy some hardware etc.

    This time I will skip visiting the Linaro office — they moved somewhere outside of Cambridge, so it takes too much time to get there just to say “hi” and drink tea.

    As usual I will be online, so catch me via Hangouts, Telegram, Facebook, mail or call me if you want to meet.

  5. My workflow for building big sets of RPM packages

    In recent months I did two rebuilds: NodeJS 4.x in Fedora and OpenStack Mitaka in CentOS. Both were targeting AArch64 and neither had been done before. During the latter one I was asked to write about my workflow, so I will describe it using the OpenStack one as a base.

    identify what needs to be done

    At first I had to figure out what exactly needs to be built. Haïkel Guémar (aka number80) pointed me to the openstack-mitaka directory on CentOS’ vault where all the source packages are present. He also told me that the EPEL repository is not required, which helped a lot as it is not yet built for CentOS.

    structure of sources

    The OpenStack set of packages in CentOS is split into two parts: “common”, shared across all OpenStack versions, and “openstack-mitaka”, containing OpenStack Mitaka packages and build dependencies not covered by CentOS itself or the “common” directory.

    prepare build space

    I used “mockchain” for such rebuilds. It is a simple tool which does not do any ordering tricks; it just builds a set of packages in the given order and does it three times, hoping that all build dependencies will be solved that way. Of course whatever got built once is not tried again.

    To make things easier I used a shell alias:

    alias runmockchain='mockchain -r default -l /home/hrw/rpmbuild/_rebuilds/openstack/mockchain-repo-centos'
    

    With this I did not have to remember those two switches. The common call was “runmockchain --recurse srpms/*” which means “cycle through packages three times and continue on failures”.

    Results of all builds (packages and logs) were kept in “~/rpmbuild/_rebuilds/openstack/mockchain-repo-centos/results/default/” subdirectories. I put all extra packages there to have everything in one repository.

    populate “noarch” packages

    Then I copied the x86-64 build of OpenStack Mitaka into “_IMPORT-openstack-mitaka/” to get all the “noarch” packages for satisfying build dependencies. I built all those packages anyway, but having them saved me several rebuilds.
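
    A sketch of what that import step might look like (the source path is illustrative, and running createrepo_c by hand is my assumption; mockchain refreshes the repository metadata itself on each run):

    cd ~/rpmbuild/_rebuilds/openstack/mockchain-repo-centos/results/default/
    mkdir -p _IMPORT-openstack-mitaka
    # copy only architecture-independent packages from the x86-64 build
    find /path/to/x86_64-mitaka-build/ -name '*.noarch.rpm' \
         -exec cp -v {} _IMPORT-openstack-mitaka/ \;
    createrepo_c .    # refresh repository metadata so mock can resolve them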

    extra rpm macros

    When I started the first build it turned out that some Python packages lack proper “Provides” fields. I was missing newer rpm build macros (“%python_provide” was added after Fedora 19, which was the base for RHEL 7). I asked Haïkel and added “rdo-rpm-macros” to the mock configuration.
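
    One way to do that (a sketch, not necessarily the exact change) is to pull the package into every buildroot via the chroot setup command in the mock config:

    config_opts['chroot_setup_cmd'] = 'install @buildsys-build rdo-rpm-macros'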

    But I had to scrap everything I had built so far.

    surprises and failures

    Building a big set of packages for a new architecture generates, most of the time, failures which were not present with the x86-64 build. It was the same this time, as several build dependencies were missing or wrong.

    packages missing in CentOS/aarch64

    Some were from CentOS itself — I told Jim Perrin (aka Evolution) and he added them to the build queue to fill the gaps. In the meantime I built them myself or (if they were “noarch”) imported them into the “_IMPORT-extras” or “_IMPORT-os” directories.

    packages imported from other CBS tags

    Other packages were originally imported from other tags at CBS (CentOS koji). For those I created a directory named “_IMPORT-cbs”. And again — if they were “noarch” I just copied them. For the rest I did a full build (using “runmockchain”) and they ended up in the same repository as the rest of the build.

    For some packages it turned out that they had been built a long time ago with older versions of build dependencies and were not buildable with current versions. For those I tracked the proper versions on CBS and imported/built them (sometimes with their build dependencies and the build dependencies of those build dependencies).

    downgrading packages

    There was a package, “python-flake8”, which failed to build, spitting out Python errors. I checked how this version was built on CBS and it turned out that “python-mock” 1.3.0 (from the “openstack-mitaka” repository) was too new… Downgrading it to 1.0 allowed me to build the “python-flake8” one (upgrading it is in the queue).
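
    For illustration only (“cbs” is the koji client configured for cbs.centos.org and the NVR below is a placeholder, not the exact build I used), fetching and rebuilding an older source package could look like this:

    # download the source rpm of the exact older build from CBS
    cbs download-build --arch=src python-mock-1.0.1-9.el7
    # and push it through the same local repository as everything else
    runmockchain --recurse python-mock-1.0.1-9.el7.src.rpm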

    merging fixes from Fedora

    Both “galera” and “mariadb-galera” got AArch64 support merged from Fedora and got built with “.hrw1” added to the “Release” field.
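
    For illustration, such a suffix simply lands at the end of the spec file’s “Release” tag (the number here is a placeholder):

    Release: 1%{?dist}.hrw1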

    directories

    I did the whole build in the “~/rpmbuild/_rebuilds/openstack/” directory. Extra folders were:

    • vault.centos.org/centos/7/cloud/Source/openstack-mitaka/common/
    • vault.centos.org/centos/7/cloud/Source/openstack-mitaka/
    • mockchain-repo-centos/results/default/_HACKED-by-hew/
    • mockchain-repo-centos/results/default/_IMPORT-cbs/
    • mockchain-repo-centos/results/default/_IMPORT-extras/
    • mockchain-repo-centos/results/default/_IMPORT-openstack-mitaka/
    • mockchain-repo-centos/results/default/_IMPORT-os/

    The vault ones were copies of the OpenStack source packages and their build dependencies. The hacked ones got AArch64 support merged from Fedora. Both the “extras” and “os” directories were for packages missing in the CentOS/AArch64 repositories. The CBS one was for source/noarch packages which had to be imported/rebuilt because they came from other CBS tags.

    status page

    In the meantime I prepared a web page with the build results so anyone interested can see what builds, what does not, and check logs, packages etc. It has a simple description and then a table with the list of builds (data can be sorted by clicking on the column headers).

    thanks

    The whole job would have taken much more time if not for help from CentOS developers: Haïkel Guémar, Alan Pevec, Jim Perrin, Karanbir Singh and others from the #centos-devel and #centos-arm IRC channels.

  6. How to speed up mock

    Fedora and related distributions use “mock” to build packages. It creates a chroot from a “root cache” tarball, updates it, installs build dependencies and runs the build. It is similar to “pbuilder” under Debian. The whole build can take a long time, but there are some ways to make it faster.

    kill fsync()

    Every time something calls “fsync()” the storage slows down because all pending writes have to hit the disk. The build runs in a chroot which will be removed at the end, so why bother?

    Run “dnf install nosync” and then enable it in the “/etc/mock/site-defaults.cfg” file:

    config_opts['nosync'] = True
    

    local cache for packages

    I do a lot of builds. Often a few builds of the same package. And each build requires fetching RPM packages from external repositories. So why not cache them?

    In my LAN I have one machine working as a NAS. One of the services running there is a “www cache” with 10GB of space which I use only for fetching packages — both in mock and in the system — and I use it on all machines. This way I can recreate a build chroot without waiting for external repositories.

    config_opts['http_proxy'] = 'http://nas.lan:3128'
    

    Note that this also requires editing the mock distribution config files to use “baseurl=” instead of “mirrorlist=” so the same server will be used each time. There is a script to convert mock configuration files if you go that way.
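
    The edit is roughly this kind of change inside the repo sections of the mock config (the URL is just an example mirror, not a recommendation):

    [base]
    name=CentOS-$releasever - Base
    #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
    baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/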

    decompress root cache

    Mock keeps a tarball of the base chroot contents which gets unpacked at the start of each build. By default it is gzip-compressed and unpacking takes time. On my systems I switched off compression to gain a bit of speed at the cost of storage:

    config_opts['plugin_conf']['root_cache_opts']['compress_program'] = ""
    config_opts['plugin_conf']['root_cache_opts']['extension'] = ""
    

    tmpfs

    If memory is not an issue then tmpfs can be used for builds. Mock has its own plugin for it but I do not use it. Instead I decide on my own whether to mount “/var/lib/mock” as tmpfs or not. Why? I only have 16GB of RAM in pinkiepie, so 8-12GB can be spent on tmpfs, while there are packages which would not fit during build.
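
    That manual route is just a plain mount (the size below is an example):

    # mount the mock working directory as tmpfs for this build session
    mount -t tmpfs -o size=12G tmpfs /var/lib/mock
    # ... run builds ...
    umount /var/lib/mock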

    parallel decompression

    Sources are compressed. Nowadays a CPU has more than one core. So why not use it with multithreaded gzip/bzip2 decompressors? This time a distro config file (like “/etc/mock/default.cfg”) needs to be edited:

    config_opts['chroot_setup_cmd'] = 'install @buildsys-build /usr/bin/pigz /usr/bin/lbzip2'
    config_opts['macros']['%__gzip'] = '/usr/bin/pigz'
    config_opts['macros']['%__bzip2'] = '/usr/bin/lbzip2'
    

    extra mockchain tip

    For those who use “mockchain” a lot, this shell snippet may help find which packages failed:

    # run inside the results directory; mockchain leaves a "fail" marker
    # file in the result directory of every package which failed to build
    for dir in *
    do
        if [ -e "$dir/fail" ]; then
            rm "$dir/fail"
            mv "$dir" "_fail-$dir"
        fi
    done
    

    summary

    With this set of changes I can do mock builds faster than before. I hope it helps someone else too.

  7. Back @linaro.org

    Six years ago I was one of the first members of the project which later got the name “Linaro”. Today I am back. But in a different form.

    On 30th April 2010 I got an email titled “Welcome to Linaro” and became a software engineer at Linaro. Time showed that it was done in a way which helped to start the project but was not liked by the member companies. The plan was to leave in October 2012 but due to someone’s decision I stayed until May 2013.

    Today I got a “Linaro Assignee On-Boarding” email, which means that I am still officially a software engineer at Red Hat but assigned to work at Linaro. The same as people from other member companies.

    I wonder whether I will get my unofficial title of “main complainer at Linaro” back or whether I have to earn it again ;D

  8. Failed to set MokListRT: Invalid Parameter

    Somehow I managed to break the UEFI environment on my APM Mustang. As a result I was not able to enter the boot manager menu or the UEFI shell. All I had was booting to the 0001 boot entry (which was a freshly installed Fedora 24 alpha).

    After a reboot I scrolled a bit to take a look at the firmware output:

    X-Gene Mustang Board
    Boot firmware (version 1.1.0 built at 14:50:19 on Oct 20 2015)
    PROGRESS CODE: V3020003 I0
    PROGRESS CODE: V3020002 I0
    PROGRESS CODE: V3020003 I0
    PROGRESS CODE: V3020002 I0
    PROGRESS CODE: V3020003 I0
    PROGRESS CODE: V3020002 I0
    PROGRESS CODE: V3020003 I0
    PROGRESS CODE: V3021001 I0
    TianoCore 1.1.0 UEFI 2.4.0 Oct 20 2015 14:49:32
    CPU: APM ARM 64-bit Potenza Rev A3 2400MHz PCP 2400MHz
         32 KB ICACHE, 32 KB DCACHE
         SOC 2000MHz IOBAXI 400MHz AXI 250MHz AHB 200MHz GFC 125MHz
    Board: X-Gene Mustang Board
    Slimpro FW:
            Ver: 2.4 (build 01.15.10.00 2015/04/22)
            PMD: 950 mV
            SOC: 950 mV
    Failed to set MokListRT: Invalid Parameter
    

    At this point the screen got cleared instantly and grub was shown. I booted into one of the installed systems and started playing with the EFI boot manager:

    17:38 root@pinkiepie-rawhide:~$ efibootmgr
    BootCurrent: 0001
    Timeout: 0 seconds
    BootOrder: 0001,0004,0000
    Boot0000  Fedora rawhide
    Boot0001* Fedora
    Boot0004* Red Hat Enterprise Linux
    

    Note the “0 seconds” timeout. I changed it to 5 seconds (efibootmgr -t 5), rebooted, and the UEFI menu appeared again:

    TianoCore 1.1.0 UEFI 2.4.0 Oct 20 2015 14:49:32
    CPU: APM ARM 64-bit Potenza Rev A3 2400MHz PCP 2400MHz
         32 KB ICACHE, 32 KB DCACHE
         SOC 2000MHz IOBAXI 400MHz AXI 250MHz AHB 200MHz GFC 125MHz
    Board: X-Gene Mustang Board
    Slimpro FW:
            Ver: 2.4 (build 01.15.10.00 2015/04/22)
            PMD: 950 mV
            SOC: 950 mV
    The default boot selection will start in   5 seconds
    [1] Fedora rawhide
    [2] Red Hat Enterprise Linux
    [3] Fedora
    [4] Shell
    [5] Boot Manager
    [6] Reboot
    [7] Shutdown
    Start:
    

    So I can boot whatever I want again ;D
