1. Standards in Arm space (part I)

One of the things that made AArch64 servers so successful was agreeing on a set of standards and keeping them implemented. But that was not always the case…

    Wild, Wild West

I started working with the Arm architecture in 2004. This was a time when nearly every device required its own kernel… You had those ‘board files’ inside the arch/arm directory, each vendor made its own versions of the same drivers, etc.

From a distribution perspective it was a nightmare. I was maintaining OpenZaurus at that time, and with ten models supported we had to build a whole set of kernels. Fortunately four of them differed only in the amount of memory and flash, so we were able to handle them as one machine, leaving the details for the kernel to check once it booted. Whether the device had a PXA250 or PXA255 processor was also handled by the kernel.

Those times also meant different bootloaders. The Zaurus ones were awful. We even had to ignore the kernel cmdline they passed, as it did not fit even our 2.4.18-crappix kernels and was completely wrong once we moved to the 2.6 line.

The Nokia 770/N8x0 had another one. Developer boards had RedBoot, U-Boot (if you were lucky) or whatever the vendor invented. Some had a way to change and store boot commands, some did not. Space for the kernel could be so limited that getting something which fit was a challenge.

Basically, for most devices you had to handle booting, kernel updates, etc. separately.

    Linaro to the rescue

In 2010 Arm, together with some partners, created Linaro to improve the Linux situation on Arm devices. I was one of the first engineers there. We were present in many areas: porting software, benchmarking, improving performance, etc.

And cleaning up the kernel/boot situation. I do not know how many people remember this post by Linus Torvalds:

    Gaah. Guys, this whole ARM thing is a f*cking pain in the ass.

    You need to stop stepping on each others toes. There is no way that your changes to those crazy clock-data files should constantly result in those annoying conflicts, just because different people in different ARM trees do some masturbatory renaming of some random device. Seriously.

This was a reaction to someone creating yet another copy of some drivers. It was a popular way of doing things on the Arm architecture: each vendor had their own version of the PL011 serial driver, etc.

Some time later the “arm-soc” subsystem was created to handle merging code touching device support, drivers, etc. This allowed Russell King to concentrate on maintaining the core Arm architecture support.

Over the next years most of the vendor versions were merged into single drivers. And moved where they belong: from arch/arm/ to the drivers/ area of the kernel.

At some point adding new board files was forbidden, as the Arm architecture was migrating to the DeviceTree world.

    DeviceTree migration

Why move to DeviceTree (DT for short)? What did it give us, other than new problems?

There were several such questions. The good part is that it was not something new to the Linux kernel: DT was already in use on the Power architecture (and, IIRC, SPARC). After some adaptations Arm devices became more maintainable.

DeviceTree solved one crucial problem of Arm: the lack of hardware discovery. A System on Chip (SoC) can contain several controllers, processor cores, etc. Before, this was handled inside a ‘board file’, which also required building a kernel for nearly each device. Now the kernel was finally able to boot, parse the DT information and get an idea of what is available and which drivers need to be used.

That way one kernel was able to support several devices, and their number grew with each release. At some point you could build one kernel for all Armv4 and v5 devices plus a second one for v6 and v7. A huge improvement.
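As an illustration of such discovery, here is a hypothetical DeviceTree fragment (names and addresses are made up, not from any real board): the kernel matches each node's `compatible` string against its driver table, so one image works on every board that describes its hardware this way.

```dts
/dts-v1/;

/ {
        model = "Vendor Example Board";          /* illustrative name */
        compatible = "vendor,example-board";

        /* matched against the kernel's PL011 driver at boot,
           no per-board kernel build needed */
        serial@9000000 {
                compatible = "arm,pl011", "arm,primecell";
                reg = <0x09000000 0x1000>;
        };
};
```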


When it comes to bootloaders, the situation changed as well. Most of the ones used in the past vanished and U-Boot became a kind of ‘gold standard’. DeviceTree support was present, but each device still had its own way of booting: different commands, storage options, etc.

Distributions handled that in miscellaneous ways: extlinux support, ‘flash-kernel’ scripts, etc.

At some point Dennis Gilmore took some time and introduced a generic boot command for U-Boot. It was merged in July 2014. So instead of different ways of handling things there was now one command on all devices (once they migrated).

The kernel and initramfs were looked for on SD/MMC/eMMC, SATA, SCSI, IDE and USB, with a fallback to TFTP. It has since been expanded to support several options and is now standard in U-Boot.
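With the generic boot command a board can boot from a plain extlinux configuration; a sketch (paths and the root device are made-up examples):

```
# /extlinux/extlinux.conf, picked up by U-Boot's generic boot command
label default
    kernel /vmlinuz
    initrd /initramfs.img
    fdtdir /dtbs
    append console=ttyS0,115200 root=/dev/mmcblk0p2 ro
```

The same file works on any board with the generic boot command, which is exactly what made distributions' lives easier.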

    AArch64 arrival

At the beginning of 2013 several AArch64 systems started to appear. The SBC ones followed what existed on 32-bit Arm, but servers were driven in a different direction.


They were supposed to be as boring as x86 ones were: you unpack one, put it into a rack, connect standard power/network cables and boot it without worrying whether it will work or not. At the same time, they were to provide administrators with the same environment they had on x86.

So it meant UEFI as firmware and ACPI as the hardware description. (I am simplifying a bit.)

To make it right this needed work on defining standards, and then vendors following them.

    SBSA defined hardware

The first specification was the Server Base System Architecture (SBSA for short). It defined the hardware part: each AArch64 server needs to use a PL011 serial port, PL031 RTC, PL061 GPIO controller, etc. And PCI Express support without quirks. Without that it cannot be called a server.

SBSA has several levels of compliance. Nowadays level 3 is the minimum.

Level 0 was funny, as it covered only X-Gene1 boxes (the SoC was older than the specification).

    SBBR defined firmware

The simplest definition of the Server Base Boot Requirements specification? A server needs to run UEFI and use ACPI to describe hardware. And it has to be SBSA compliant.

Someone may ask: why UEFI and ACPI? One reason is that they are present in x86 servers and AArch64 ones follow their behaviour as much as possible. Another is that this way some things can be done with the firmware's help.

But ACPI was x86-only, so it needed to be adapted to the AArch64 architecture. The work started by moving ACPI under the UEFI Forum umbrella, making it an open specification anyone could contribute to (before, it was Intel, Microsoft, Phoenix and Toshiba only). Many changes have been made since then, and several new tables defined.

I heard several rumours about why ACPI. Someone said that ACPI was forced by Microsoft. In reality it was a decision taken by all major distributions and Microsoft.

So what does SBBR compliance give? For a start, it allows generic distribution kernels to run out of the box. Each server SoC has the same basic components and uses the same standards to boot the system. So far Linux distributions, several *BSD systems and Microsoft Windows support SBBR machines out of the box.

For example, getting Qualcomm Centriq or Huawei TaiShan servers supported in Debian ‘buster’ was a very easy task. Both booted with the distribution kernel. The Huawei one required enabling the on-board network card; the Centriq needed the SAS controller module enabled to connect to storage (it was already enabled on a few other architectures).

EBBR for those who cannot follow

In short, the Embedded Base Boot Requirements are a kind of SBBR for non-server-class hardware.

A device can use ACPI and/or DeviceTree to describe hardware. It may boot however it wants, as long as it provides EFI Boot Services to the bootloader used by distributions (grub2, gummiboot, etc.).

The specification feels made especially for distributions, to make their life easier. This way there is one method of booting both SBCs and SBBR-compliant machines.

Getting a distribution kernel running on an EBBR board is usually more work than on an SBBR-compliant server. All hardware-specific options need to be found and enabled (from SoC support to all its drivers, etc.).

    BSA, BBR?

During Arm DevSummit 2020 there was an announcement of new standards for Arm devices:

    Arm is extending the system architecture standards compliance from servers to other segments of the market, edge and IoT. We introduce the new BSA specification with market segment-specific supplements and provide the operating system-oriented boot requirements recipes in the new BBR specification.

I need to spend some time reading the specifications and then write the second part of this article.

    Written by Marcin Juszkiewicz on
  2. Upgraded to Fedora 33

I have been running Fedora on my desktop since I started working for Red Hat. Kind of ‘eat your own dogfood’ style, despite the fact that I have not been active in Fedora development for some time.

Fedora 33 reached Beta status, so it was time to upgrade.

    Do it Debian style

I have used Debian since 1999, so I got used to several ways of doing things which may not always fit the official Fedora guidelines.

    One of them is my way of upgrading to newer release:

    LANGUAGE=C dnf distrosync --releasever 33

If a problem is listed then I try to solve it using the “--best” or even “--allowerasing” options to check which packages are the problem. But this time it went smoothly.

    Rebooted and system works fine. Or so I thought…

    SSH keys

Fedora 33 ships OpenSSH 8.4p1, so if you used ‘ssh-rsa’ keys then you may need to generate newer ones. More info in the OpenSSH 8.3 announcement.

    I got hit by this on Gerrit instances:

    $ git remote update
    Fetching gerrit
    marcin.juszkiewicz@review.linaro.org: Permission denied (publickey).

One workaround is a small addition to the ssh configuration file:

    Host *
        PubkeyAcceptedKeyTypes +rsa-sha2-256,rsa-sha2-512
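Instead of the workaround, generating a newer key type also solves it; a sketch (Ed25519 is one option, the file names are examples):

```shell
# Generate an Ed25519 keypair to replace an old ssh-rsa one.
# -N "" sets an empty passphrase; drop it to be prompted instead.
ssh-keygen -t ed25519 -N "" -f /tmp/id_ed25519_new

# The public half is what gets uploaded to Gerrit/servers.
cat /tmp/id_ed25519_new.pub
```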

    TLS v1.2 is the new default

New distribution version, new defaults. This time TLS v1.2 became the default minimum version. I learnt about it when I wanted to send an email and Thunderbird told me that it was unable to talk to my mail server…

I logged in to the server, checked the Postfix log and found this:

    connect from IP_ADDRESS
    SSL_accept error from IP_ADDRESS: -1
    warning: TLS library problem: error:14209102:SSL routines:tls_early_post_process_client_hello:unsupported protocol:../ssl/statem/statem_srvr.c:1661:
    lost connection after CONNECT from IP_ADDRESS
    disconnect from IP_ADDRESS commands=0/0

Looks nasty. I did some searching and changed the Postfix config to accept TLSv1.2 on incoming connections.
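I will not reproduce the exact change, but a minimal sketch, assuming an old protocol list was pinned in main.cf, would be to stop excluding the newer protocols on the smtpd side:

```
# main.cf: allow TLSv1.2 (and newer) on incoming smtpd connections
smtpd_tls_protocols = !SSLv2, !SSLv3
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3
```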


One of the features of Fedora 33 is the switch from dnsmasq to systemd-resolved for name resolution. On my system I have some local changes to the former to get internal Red Hat names resolved without using the company DNS for everything. Therefore I reverted the migration and keep using dnsmasq.

One day I may manage to understand the systemd-resolved documentation and migrate my local configuration to it.


So far no other issues found. The system works as it should. I still run a kind-of-KDE desktop on X11.

  3. 8 years of my work on AArch64

Back in 2012 AArch64 was something new and unknown. There was no toolchain support (so no gcc, binutils or glibc). And I got assigned to get some stuff running around it.


As there was no hardware, cross-compilation was the only way. Which meant OpenEmbedded, as we wanted a wide selection of software available.

    I learnt how to use modern OE (with OE Core and layers) by building images for ARMv7 and checking them on some boards I had floating around my desk.

    Non-public toolchain work

Some time later the first non-public patches for binutils and gcc arrived in my inbox. Then the eglibc ones. So I started building, and on 12th September 2012 I was able to build helloworld:

    12:38 hrw@puchatek:aarch64-oe-linux$ ./aarch64-oe-linux-gcc ~/devel/sources/hello.c -o hello
    12:38 hrw@puchatek:aarch64-oe-linux$ file hello
    hello: ELF 64-bit LSB executable, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.39, not stripped
    12:39 hrw@puchatek:aarch64-oe-linux$ objdump -f hello
    hello:     file format elf64-littleaarch64
    architecture: aarch64, flags 0x00000112: 
    start address 0x00000000004003e0

Then images followed. Several people at Linaro (and outside) used those images to test miscellaneous things.

At that moment we ran ARMv8 Fast Models (a quite slow system emulator from Arm). There was a joke that Arm developers would queue up for single-core 10 GHz x86-64 CPUs to get AArch64 running faster.

    Toolchain became public

Then 1st October 2012 came. I entered the Linaro office in Cambridge for an AArch64 meeting and was greeted with the news that glibc patches had gone to a public mailing list. So I rebased my OpenEmbedded repository, updated the patches, removed any traces of the non-public ones and published the whole work.

    Building on AArch64

My work above added support for AArch64 as a target architecture. But could it be used as a host? One day I decided to check and ran OpenEmbedded on AArch64.

    After one small patch it worked fine.

    X11 anyone?

As I had access to the Arm Fast Model I was able to play with graphics. So one day in January 2013 I did a build and started Xorg. In the following years I had fun whenever people wrote that they got X11 running on their AArch64 devices ;D

Two years later I had an Applied Micro Mustang at home (I still have it). Once it had working PCI Express support I added a graphics card and started X11 on real hardware.

Then I went debugging why Xorg required a configuration file, and one day, with help from Dave Airlie, Mark Salter and Matthew Garrett, I got two solutions for the problem. I do not remember whether either of them went upstream, but some time later the problem was solved.

A few years later I met Dave Airlie at Linux Plumbers. We introduced ourselves and he said: “ah, you are the ‘arm64 + radeon guy’” ;D

    AArch64 Desktop week

One day in September 2015 I had an idea. PCIe worked, USB too. So I did an AArch64 desktop week: connected monitors, keyboard, mouse and speakers, and used the Mustang instead of my x86-64 desktop.

    It was fun.


First we had nothing. Then I added the AArch64 target to OpenEmbedded.

The same month Arm released the Foundation Model, so anyone was able to play with an AArch64 system. No screen, just storage, serial and network, but it was enough for some to even start building whole distributions like Debian, Fedora, OpenSUSE or Ubuntu.

At that moment several patches were shared by all distributions, as it was faster than waiting for upstreams. I saw multiple versions of some of them during my journey of fixing packages in various distributions.

    Debian and Ubuntu

In February 2013 the Debian/Ubuntu team presented their AArch64 port. It was their first architecture bootstrapped without using external toolchains. The work was done in Ubuntu due to its different approach to development than Debian's. All work was merged back, so some time later Debian also had an AArch64 port.


The Fedora team started early, in October 2012, right after the toolchain became public. They used Fedora 17 packages and switched to Fedora 19 during the work.

When I joined Red Hat in September 2013, one of my duties was fixing packages in Fedora to get them built on AArch64.


In January 2014 the first versions of QEMU support arrived and people moved away from the Foundation Model. In March/April the OpenSUSE team did a massive amount of builds to get their distribution built that way.


The Fedora bootstrap also meant a RHEL 7 bootstrap. When I joined Red Hat there were images ready to use in models. My work was testing them and fixing packages. Multiple times an AArch64 fix also helped builds on the ppc64le and s390x architectures.

    Hardware I played with

The first Linux-capable hardware was announced in June 2013. I got access to it at Red Hat. Building and debugging was much faster than with Fast Models ;D

    Applied Micro Mustang

Soon Applied Micro Mustangs were everywhere. Distributions used them to build packages, etc. Even without support for half of the hardware (no PCI Express, no USB).

I got one in June 2014, running UEFI firmware out of the box. In the first months I had a feeling that the firmware was developed at Red Hat, as we often had fresh versions right after the first patches for missing hardware functionality were written. In reality it was maintained by Applied Micro; we had access to the sources, so there were some internal changes in testing (that's why I had firmware versions like ‘0.12-rh’).

All those graphics cards I collected to test how PCI Express worked. Or testing USB before support was even merged into the mainline Linux kernel. Or using virtualization to develop armhf build fixes (8 cores, 12 gigabytes of RAM and plenty of storage beat all the ARMv7 hardware I had).

I stopped using the Mustang around 2018. It is still under my desk.

For those who still use one: make sure you have the 3.06.25 firmware.


In February 2015 Linaro announced the 96boards initiative. The plan was to make small, unified SBCs with different Arm chips, both 32- and 64-bit.

The first ones were ‘Consumer Edition’: small, limited to basic connectivity. Now there are tens of them: 32-bit, 64-bit, FPGA, etc. Choose your poison ;D

The second ones were ‘Enterprise Edition’. A few attempts existed; most of them did not survive the prototype phase. There was a joke that the full-length PCI Express slot and two USB ports were required because I wanted to have an AArch64 desktop ;D

Too bad that nothing worth using came out of the EE spec.


As a Linaro assignee I have access to several servers from Linaro members. Some are mass-market ones, some never made it to market. We had over a hundred X-Gene1-based systems (mostly as m400 cartridges in HPE Moonshot chassis) and shut them down in 2018, as they were getting more and more obsolete.

The main system I use for development is one of those ‘never went to mass-market’ ones. 46 CPU cores and 96 GB of RAM make it a nice machine for building container images and Debian packages, or running virtual machines in OpenStack.


For some time I waited for some desktop-class hardware, to have a development box more up-to-date than the Mustang. Months turned into years. I no longer wait, as it looks like there will be no such thing.

SolidRun has made some attempts in this area, first with the Macchiatobin and later with the Honeycomb. I have not used either of them.


When I (re)joined Linaro in 2016 I became part of a team working on getting OpenStack running on AArch64 hardware. We used the Liberty, Mitaka and Newton releases, then changed the way we worked and started contributing more. And more: Kolla, Nova, DIB and other projects. We added aarch64 nodes to the OpenDev CI.

The effect was the Linaro Developer Cloud, used by hundreds of projects to speed up their aarch64 porting, with tens of projects hosting their CI systems there, etc.

    Two years later Amazon started offering aarch64 nodes in AWS.


I have spent half of my time with Arm on AArch64. I had great moments, like building helloworld as one of the first people outside Arm Ltd. I got involved in far more projects than I ever thought I would. I met new friends and visited several places in the world I would probably never have gone otherwise.

I also got grumpy and complained far too many times that the AArch64 market is ‘cheap but limited SBCs or fast but expensive servers and nearly nothing in between’. I wrote some posts about the missing systems targeting software developers and lost hope that such a thing will happen.

NOTE: this is 8 years of my work on AArch64. I have worked with Arm since 2004.

  4. From a diary of AArch64 porter — drive-by coding

Working on AArch64 often means changing code in other projects. I have done that so many times that I am unable to say where I have commits. Such a thing got a name: drive-by coding.


Drive-by coding is a situation when you appear in some software project, make some changes, get them merged and then disappear, never to be seen again.

    Let’s build something

It all starts from a simple thing: I have/want to build some software. But for some reason it does not cooperate. Sometimes a simple architecture check is missing, sometimes atomic operations are not present, intrinsics are missing, or anything else.

    First checks

Then comes the moment of looking at the build errors and trying to work out a solution. Have I seen that bug before? Does it look familiar?

If it is something new, then a quick Google search for the error message. And checking bug reports/issues on the project's website/repo. There can be ready-to-use patches, information on how to fix it or even some ideas why it happens.

If it is a system call failure in some tests, then I check in my syscalls table whether those calls are handled on aarch64 and try to change the code if they are not (legacy ones like open, symlink, rename).

    Simple fixes

When I started working with AArch64 (in 2012) many projects were easy to fix. If atomics were the issue, then copying them from the Linux kernel was usually the solution (if the license allowed).

Or architecture checks with a pile of #ifdef __x86_64__ or similar, trying to decide simple things like “32/64-bit” or “little/big endian”. Nowadays these do not happen as often as they used to.

SIMD intrinsics can be a problem. All those vst1q_f32_x2(), vld1q_f32_x2() and similar. I do not have to understand them to know that it usually means the C compiler lacks some backports, as those functions have already been added to gcc and llvm (like it was with PyTorch recently).

    Complex stuff

There are moments when getting software to build needs something more complicated. As I wrote above, I usually start by searching for the error message and checking whether it was an issue in some other projects, and how it got solved. If I am lucky, then a patch can be done in a short time and sent for review upstream (once it builds and passes tests).

Sometimes all I can do is report the issue upstream and hope that someone will care enough to respond. Usually it ends with at least a discussion on potential ways to fix it, sometimes hints or even patches to test.

    Projects response

Projects usually accept patches, review them and merge. In several cases it took longer than expected; sometimes there was a larger amount of them, so they remember me (at least for some time). It helps when I have something for those projects again months or years later.

There are projects whose existence I prefer to forget. Complicated contribution rules, crazy CI setups, weird build systems (ever heard of ‘bazel’?). Or comments in a ‘we do not give a shit about non-x86’ style (with slightly more polished language). Been there, fixed something to get stuff working and do not want to go back.


‘Drive-by coding’ reminds me of going abroad for conferences. People think that you saw interesting places, when in reality you spent most of the time inside a hotel and/or conference centre.

It is similar with code. I was in several projects, usually with no idea what they do or how they work. I came, had a short look, fixed something and went back home.

  5. So your hardware is ServerReady?

Recently I changed my assignment at Linaro: from Cloud to Server Architecture. Which means less time spent on Kolla, more on server-related things. And at the start I got a project I had managed to forget about :D

    SBSA reference platform in QEMU

In 2017 someone got the idea to make a new machine for QEMU: pure hardware emulation of an SBSA-compliant reference platform, without using virtio components.

Hongbo Zhang wrote the code and got it merged into QEMU; Radosław Biernacki wrote basic support for EDK2 (also merged upstream). Out of the box it can boot to the UEFI shell. Linux is not bootable due to the lack of ACPI tables (DeviceTree is not an option here).

    ACPI tables in firmware

Tanmay Jagdale is working on adding ACPI tables in his fork of edk2-platforms. With this firmware Linux boots and can be used.

    Testing tools

But what is the point of having a reference platform if there is no testing? So I took a look and found two interesting tools:

    Server Base System Architecture — Architecture Compliance Suite

The SBSA ACS tool requires ACPI tables to be present to work. Once started, it nicely checks how compliant your system is:

    FS0:\> Sbsa.efi -p
     SBSA Architecture Compliance Suite
        Version 2.4
     Starting tests for level  4 (Print level is  3)
     Creating Platform Information Tables
     PE_INFO: Number of PE detected       :    3
     GIC_INFO: Number of GICD             :    1
     GIC_INFO: Number of ITS              :    1
     TIMER_INFO: Number of system timers  :    0
     WATCHDOG_INFO: Number of Watchdogs   :    0
     PCIE_INFO: Number of ECAM regions    :    2
     SMMU_INFO: Number of SMMU CTRL       :    0
     Peripheral: Num of USB controllers   :    1
     Peripheral: Num of SATA controllers  :    1
     Peripheral: Num of UART controllers  :    1
          ***  Starting PE tests ***
       1 : Check for number of PE            : Result:  PASS
       2 : Check for SIMD extensions                PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       3 : Check for 16-bit ASID support            PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       4 : Check MMU Granule sizes                  PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       5 : Check Cache Architecture                 PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       6 : Check HW Coherence support               PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       7 : Check Cryptographic extensions           PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       8 : Check Little Endian support              PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       9 : Check EL2 implementation                 PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      10 : Check AARCH64 implementation             PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      11 : Check PMU Overflow signal         : Result:  PASS
      12 : Check number of PMU counters             PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
      13 : Check Synchronous Watchpoints            PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      14 : Check number of Breakpoints              PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      15 : Check Arch symmetry across PE            PSCI_CPU_ON: failure
           Reg compare failed for PE index=1 for Register: CCSIDR_EL1
           Current PE value = 0x0         Other PE value = 0x100FBDB30E8
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      16 : Check EL3 implementation                 PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      17 : Check CRC32 instruction support          PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      18 : Check for PMBIRQ signal
           SPE not supported on this PE      : Result:  -SKIPPED- 1
      19 : Check for RAS extension                  PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
      20 : Check for 16-Bit VMID                    PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
      21 : Check for Virtual host extensions        PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
      22 : Stage 2 control of mem and cache         PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      23 : Check for nested virtualization          PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      24 : Support Page table map size change       PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      25 : Check for pointer signing                PSCI_CPU_ON: failure
      25 : Check for pointer signing                PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      26 : Check Activity monitors extension        PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      27 : Check for SHA3 and SHA512 support        PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
          *** One or more PE tests have failed... ***
          ***  Starting GIC tests ***
     101 : Check GIC version                 : Result:  PASS
     102 : If PCIe, then GIC implements ITS  : Result:  PASS
     103 : GIC number of Security states(2)  : Result:  PASS
     104 : GIC Maintenance Interrupt
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
          One or more GIC tests failed. Check Log
          *** Starting Timer tests ***
     201 : Check Counter Frequency           : Result:  PASS
     202 : Check EL0-Phy timer interrupt     : Result:  PASS
     203 : Check EL0-Virtual timer interrupt : Result:  PASS
     204 : Check EL2-phy timer interrupt     : Result:  PASS
     205 : Check EL2-Virtual timer interrupt
           v8.1 VHE not supported on this PE : Result:  -SKIPPED- 1
     206 : SYS Timer if PE Timer not ON
           PE Timers are not always-on.
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
     207 : CNTCTLBase & CNTBaseN access
           No System timers are defined      : Result:  -SKIPPED- 1
         *** Skipping remaining System timer tests ***
          *** One or more tests have Failed/Skipped.***
          *** Starting Watchdog tests ***
     301 : Check NS Watchdog Accessibility
           No Watchdogs reported          0
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
     302 : Check Watchdog WS0 interrupt
           No Watchdogs reported          0
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
          ***One or more tests have failed... ***
          *** Starting PCIe tests ***
     401 : Check ECAM Presence               : Result:  PASS
     402 : Check ECAM value in MCFG table    : Result:  PASS
            Unexpected exception occured
            FAR reported = 0xEBDAB180
            ESR reported = 0x97800010
         Total Tests run  =   42;  Tests Passed  =   11  Tests Failed =   22
          *** SBSA tests complete. Reset the system. ***

As you can see, there is still a lot of work to do.

    ACPI Tables View

This tool displays the content of ACPI tables in hex/ASCII format and then with the information interpreted field by field.

What makes it more useful is the “-r 2” argument, as it enables checking tables against the Server Base Boot Requirements (SBBR) v1.2 specification. On the SBSA reference platform with Tanmay's firmware it lists two errors:

    ERROR: SBBR v1.2: Mandatory DBG2 table is missing
    ERROR: SBBR v1.2: Mandatory PPTT table is missing
    Table Statistics:
            2 Error(s)
            0 Warning(s)

So the situation looks good, as those can be easily added.


So we have code to check and tools to do that. Add one to the other and you have a clear need for a CI job. So I wrote one for the Linaro CI infrastructure: “LDCG SBSA firmware”. It builds the top of QEMU and EDK2, then boots the result and runs the above tools. Results are sent to a mailing list.


    The Arm ServerReady compliance program provides a solution for servers that “just works”, allowing partners to deploy Arm servers with confidence. The program is based on industry standards and the Server Base System Architecture (SBSA) and Server Base Boot Requirement (SBBR) specifications, alongside Arm’s Server Architectural Compliance Suite (ACS). Arm ServerReady ensures that Arm-based servers work out-of-the-box, offering seamless interoperability with standard operating systems, hypervisors, and software.

    In other words: if your hardware is SBSA compliant then you can run the SBBR compliance tests and then go and ask for a certification sticker or something like that.

    But if your hardware is not SBSA compliant then EBBR is all you can get. Far from being ServerReady. Never mind what people try to say: ServerReady requires SBBR, which requires SBSA.

    Future work

    More tests to integrate. ARM Enterprise ACS is next on my list.

    Written by Marcin Juszkiewicz on
  6. NAS update

    In 2014 I bought a Synology DS214se NAS and two 4 TB hard drives. It worked fine for me for years and served files. But it was a low-power system with just 256 MB of RAM, so it was too easy to overload.

    Let’s move to x86-64

    So a few years ago a friend was selling an ASUS M5A78L-M LX3 mainboard with an AMD FX-6300 processor. I bought it, added 8 GB of RAM from my desktop (which got an additional 16 GB instead) and put it into a Node 804 case from Fractal Design.

    The case fits a MicroATX board and has plenty of space for storage (I think ten 3.5”, two 2.5” drives and a slot-in optical drive).

    The machine got several hard drives (from other home machines or drawers):

    • WD Red 4 TB x2
    • Toshiba 2 TB
    • Samsung 1.5 TB


    I installed FreeNAS 11 on it and started using it. The machine was named ‘lumpek’ (Lumpy the Heffalump) to follow my way of naming computers.

    The 4 TB drives went into a simple mirror, the 2 TB one stores less important data and the 1.5 TB one holds virtual machines and related storage (like installation ISO files).

    ZFS works nicely, and some extra FreeNAS plugins allowed me to offload some services from my desktop to the NAS (like a Transmission daemon for fetching torrents or a MySQL server for local needs).

    Memory upgrade

    Many people say that a NAS machine should have ECC memory. So at some point it got 16 GB (2x 8 GB sticks) of DDR3-1866 ECC memory recovered from an old server:

    Handle 0x0026, DMI type 16, 15 bytes
    Physical Memory Array
            Location: System Board Or Motherboard
            Use: System Memory
            Error Correction Type: Single-bit ECC
            Maximum Capacity: 16 GB
            Error Information Handle: Not Provided
            Number Of Devices: 2

    More disks

    4 TB of space runs out one day. So I went and bought another WD Red 4 TB disk. The idea was to move data from the mirror to some spare storage, create a new RAID-Z1 array from the 3x 4 TB drives and migrate the data back.

    But… Lumpek already had 4 hard drives and that was the maximum this mainboard supported.

    Dell H310 aka LSI 9211-8i

    Luckily the mainboard has on-board graphics so the PCI Express x16 slot was empty. I asked friends, checked some internet pages and ordered a used Dell H310 SAS controller. This is probably the most popular storage solution in the FreeNAS community (alongside the IBM M1015).

    The card arrived with a SAS cable I did not need; the SFF-8087 cables came in another order.


    How to make the best use of a server-class RAID controller? Strip it of any RAID functionality ;D

    It turns out that the Dell H310 is basically an LSI 9211-8i card. Which means we can flash it with generic firmware to switch it to “initiator target” mode (also called “IT mode”). The card then presents each drive individually to the host.

    There are several pages describing the process. One of them is JC-LAN. I do not remember which set of instructions I followed, but they do not differ much.

    At the end I got a generic LSI SAS2008 controller:

    root@lumpek:~ # sas2flash -listall
    LSI Corporation SAS2 Flash Utility
    Version (2013.03.01) 
    Copyright (c) 2008-2013 LSI Corporation. All rights reserved 
            Adapter Selected is a LSI SAS: SAS2008(B2)   
    Num   Ctlr            FW Ver        NVDATA        x86-BIOS         PCI Addr
    0  SAS2008(B2)     00:02:00:00
            Finished Processing Commands Successfully.
            Exiting SAS2Flash.
    root@lumpek:~ # 

    And as a bonus all my hard drives got a bit more bandwidth:

    da2: <ATA WDC WD40EFRX-68W 0A82> Fixed Direct Access SPC-4 SCSI device
    da2: 600.000MB/s transfers
    da2: Command Queueing enabled
    da2: 3815447MB (7814037168 512 byte sectors)
    da2: quirks=0x8<4K>

    Not that the 300->600 MB/s link speed change makes any difference with rusting plates ;D


    The FreeNAS based machine serves me well. Five hard drives give a lot of space for data. The 1 GbE network connection is probably my main limit now, but there are no plans so far for moving to 10 GbE cards/switch due to their price.

    Virtual machines run from the NAS at good speed, and if I need something faster then I can move them to NVMe in my desktop or laptop.

  7. Installing Fedora on RockPro64

    Continuing my tests of distribution installers. This time I installed Fedora ‘rawhide’ from the netinst ISO (2020.06.20). Fetched it, wrote it to a USB pen drive and booted. Due to U-Boot being present in the on-board SPI flash I did not have to mess with the installation media.


    There were some issues:

    1. Panfrost failing to initialize
    2. U-Boot unable to load grub efi

    Panfrost initialization failure

    The Panfrost kernel module needs some devfreq governor. The kernel has four of them; Fedora enables one. There are no dependencies between those modules, which ends with the same error as on Debian:

    panfrost ff9a0000.gpu: devfreq_add_device: Unable to find governor for the device
    panfrost ff9a0000.gpu: [drm:panfrost_devfreq_init [panfrost]] *ERROR* Couldn't initialize GPU devfreq
    panfrost ff9a0000.gpu: Fatal error during devfreq init
    panfrost: probe of ff9a0000.gpu failed with error -22

    The solution was the same as before: boot without the ‘panfrost’ module. I interrupted GRUB and added rd.driver.blacklist=panfrost to the “linux” command. This allowed me to boot into the Fedora installer and system installation went smoothly.

    The first boot of the installed system showed a working Panfrost driver:

    panfrost ff9a0000.gpu: clock rate = 500000000
    panfrost ff9a0000.gpu: mali-t860 id 0x860 major 0x2 minor 0x0 status 0x0
    panfrost ff9a0000.gpu: features: 00000000,100e77bf, issues: 00000000,24040400
    panfrost ff9a0000.gpu: Features: L2:0x07120206 Shader:0x00000000 Tiler:0x00000809 Mem:0x1 MMU:0x00002830 AS:0xff JS:0x7
    panfrost ff9a0000.gpu: shader_present=0xf l2_present=0x1
    [drm] Initialized panfrost 1.1.0 20180908 for ff9a0000.gpu on minor 0

    U-Boot cannot load GRUB EFI

    After reboot U-Boot was not able to load GRUB from the EFI System Partition:

    Device 0: Vendor: ADATA    Rev: 1.00 Prod: USB Flash Drive 
                Type: Removable Hard Disk
                Capacity: 59200.0 MB = 57.8 GB (121241600 x 512)
    ... is now current device
    Scanning usb 0:1...
    Found EFI removable media binary efi/boot/bootaa64.efi
    libfdt fdt_check_header(): FDT_ERR_BADMAGIC
    Card did not respond to voltage select!
    Scanning disk mmc@fe310000.blk...
    Disk mmc@fe310000.blk not ready
    Card did not respond to voltage select!
    Scanning disk mmc@fe320000.blk...
    Disk mmc@fe320000.blk not ready
    Card did not respond to voltage select!
    Scanning disk sdhci@fe330000.blk...
    Disk sdhci@fe330000.blk not ready
    Scanning disk usb_mass_storage.lun0...
    ** Unrecognized filesystem type **
    ** Unrecognized filesystem type **
    Found 4 disks
    BootOrder not defined
    EFI boot manager: Cannot load any image
    858216 bytes read in 25 ms (32.7 MiB/s)
    libfdt fdt_check_header(): FDT_ERR_BADMAGIC
    System BootOrder not found.  Initializing defaults.
    Could not read \EFI\: Invalid Parameter
    Error: could not find boot options: Invalid Parameter
    start_image() returned Invalid Parameter
    ## Application terminated, r = 2
    EFI LOAD FAILED: continuing...

    It was already reported as ‘shim’ bug 1733817.

    How to work around it?

    1. connect your Fedora storage to another computer
    2. copy “/efi/fedora/grubaa64.efi” to “/efi/boot/bootaa64.efi”

    This way U-Boot will find the GRUB EFI binary in the default location.
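    The copy itself is a single cp on the mounted EFI System Partition. A minimal sketch, simulated here on a scratch directory standing in for the mounted pen drive (on real hardware you would mount the drive's first FAT partition instead; the mktemp stand-in is just to keep the snippet self-contained):

    ```shell
    # Stand-in for the mounted ESP; on real hardware mount the pen
    # drive's EFI System Partition (e.g. at /mnt) and use that path.
    ESP=$(mktemp -d)
    mkdir -p "$ESP/efi/fedora" "$ESP/efi/boot"
    : > "$ESP/efi/fedora/grubaa64.efi"   # grub binary installed there by Fedora
    # the actual workaround: put grub where U-Boot looks for a
    # removable-media binary
    cp "$ESP/efi/fedora/grubaa64.efi" "$ESP/efi/boot/bootaa64.efi"
    ls "$ESP/efi/boot"                   # prints: bootaa64.efi
    ```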

    Final effect

    The board boots directly to the graphical login manager and then to a GNOME 3 session. Extreme Tux Racer and Xonotic worked out of the box. Speed-wise it feels slower than the KDE Plasma session on Debian.

  8. Installing Debian on RockPro64

    I installed Debian ‘testing’ from the netinst ISO (2020.06.15) today. Fetched it, wrote it to a USB pen drive and booted. Due to U-Boot being present in the on-board SPI flash I did not have to mess with the installation media.


    There were some issues:

    1. no graphics in the default installer (known, someone promised to fix it)
    2. grub refusing to install (bug reported against the installer)
    3. Panfrost failing to initialize

    Serial console FTW!

    OK, this time I am joking. There are two choices: the text and the graphical installer. The first option lacks kernel modules for graphics so only the serial console is available. The graphical installer works fine.

    EFI Grub and lack of EFI variables storage

    As I booted the board with U-Boot there was no EFI variable storage. GRUB was not satisfied:

    os-prober: debug: running /usr/lib/os-probes/50mounted-tests on /dev/sdb2
    50mounted-tests: debug: mounted using GRUB fat filesystem driver
    50mounted-tests: debug: running subtest /usr/lib/os-probes/mounted/40lsb
    50mounted-tests: debug: running subtest /usr/lib/os-probes/mounted/90linux-distro
    grub-installer: info: Installing grub on 'dummy'
    grub-installer: info: grub-install does not support --no-floppy
    grub-installer: info: Running chroot /target grub-install  --force "dummy"
    grub-installer: Installing for arm64-efi platform.
    grub-installer: grub-install: warning: Cannot set EFI variable Boot0000.
    grub-installer: grub-install: warning: vars_set_variable: write() failed: Invalid argument.
    grub-installer: grub-install: warning: _efi_set_variable_mode: ops->set_variable() failed: No such file or directory.
    grub-installer: grub-install: error: failed to register the EFI boot entry: No such file or directory.
    grub-installer: error: Running 'grub-install  --force "dummy"' failed.

    How to work around it?

    1. chroot into target system and run update-grub by hand
    2. copy “/efi/debian/grubaa64.efi” to “/efi/boot/bootaa64.efi”

    This way U-Boot will find the EFI binary in the default location.

    Panfrost initialization failure

    The Panfrost kernel module needs some devfreq governor. The kernel has four of them; Debian enables one. There are no dependencies between those modules, which ends with this error:

    panfrost ff9a0000.gpu: devfreq_add_device: Unable to find governor for the device
    panfrost ff9a0000.gpu: [drm:panfrost_devfreq_init [panfrost]] *ERROR* Couldn't initialize GPU devfreq
    panfrost ff9a0000.gpu: Fatal error during devfreq init
    panfrost: probe of ff9a0000.gpu failed with error -22


    How to work around it?

    1. boot the system
    2. rmmod panfrost; modprobe governor_simpleondemand; modprobe panfrost
    3. update-initramfs -u -k all
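    A possible way to avoid repeating step 2 on every boot (my assumption, not something I tested on this board) is a modprobe “softdep” entry telling it to load the governor before panfrost:

    ```
    # /etc/modprobe.d/panfrost.conf
    # load the devfreq governor before the panfrost module
    softdep panfrost pre: governor_simpleondemand
    ```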

    A good option at this phase is changing the configuration of update-initramfs to include only the needed kernel modules (by setting “MODULES=dep” in its configuration). This allowed me to shrink the initramfs from 37 to 13 megabytes (removal of plymouth and ntfs-3g shrank it to 6.6 MB).
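    Assuming initramfs-tools (the Debian default), the setting lives in its main configuration file:

    ```
    # /etc/initramfs-tools/initramfs.conf
    # 'dep' includes only modules the running hardware needs;
    # the default 'most' pulls in nearly everything
    MODULES=dep
    ```

    After changing it, regenerate the images with update-initramfs -u -k all.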

    Final effect

    The board boots directly to the graphical login manager and then to a KDE Plasma session. Some OpenGL games work, some do not (Nexuiz). Looks good.

  9. EBBR on RockPro64

    SBBR or GTFO


    But the Arm world no longer ends at “SBBR compliant or complete mess”. For over a year there has been a new specification called EBBR (Embedded Base Boot Requirements).

    WTH is EBBR?

    In short it is a kind of SBBR for devices which can not comply. You still need to provide some subset of UEFI Boot/Runtime Services, but it can come from whatever bootloader you use. So U-Boot is fine as long as its EFI implementation is enabled.

    ACPI is not required but may be present. DeviceTree is perfectly fine. You may provide both or one of them.

    Firmware can be stored wherever you wish. Even MBR partitioning is available if really needed.

    Make it the nice way

    The RockPro64 has 16 MB of SPI flash on board. This is far more than needed for storing firmware (I remember a time when it was enough for palmtop Linux).

    During the last month I sent a bunch of patches to U-Boot to make this board as comfortable to use as possible, including storing all firmware parts in the on-board SPI flash.

    To have U-Boot there you need to fetch two files:

    Their sha256 sums:

    3985f2ec63c2d31dc14a08bd19ed2766b9421f6c04294265d484413c33c6dccc  idbloader.img
    35ec30c40164f00261ac058067f0a900ce749720b5772a759e66e401be336677  u-boot.itb
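    One way to check the downloads before flashing (my addition, not from the original instructions) is to feed those sums to sha256sum. Demonstrated with a stand-in file so the snippet is self-contained; for the real check, put the two published lines above into the sums file next to the downloaded idbloader.img and u-boot.itb:

    ```shell
    # 'sha256sum -c' reads "<sum>  <filename>" lines and verifies each
    # file; shown here with a generated dummy file instead of the
    # firmware images.
    cd "$(mktemp -d)"
    printf 'demo firmware blob\n' > demo.img
    sha256sum demo.img > firmware.sha256
    sha256sum -c firmware.sha256    # prints 'demo.img: OK' when sums match
    ```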

    Store them as files on a USB pen drive and plug it into any of the RockPro64 USB ports. Then reboot to U-Boot as you did before (stored in SPI flash, on an SD card or on an eMMC module).

    Next run this set of commands to update U-Boot:

    Hit any key to stop autoboot:  0 
    => usb start
    => ls usb 0:1
       163807   idbloader.img
       867908   u-boot.itb
    2 file(s), 0 dir(s)
    => sf probe
    SF: Detected gd25q128 with page size 256 Bytes, erase size 4 KiB, total 16 MiB
    => load usb 0:1 ${fdt_addr_r} idbloader.img
    163807 bytes read in 16 ms (9.8 MiB/s)
    => sf update ${fdt_addr_r} 0 ${filesize}
    device 0 offset 0x0, size 0x27fdf
    163807 bytes written, 0 bytes skipped in 2.93s, speed 80066 B/s
    => load usb 0:1 ${fdt_addr_r} u-boot.itb
    867908 bytes read in 53 ms (15.6 MiB/s)
    => sf update ${fdt_addr_r} 60000 ${filesize}
    device 0 offset 0x60000, size 0xd3e44
    863812 bytes written, 4096 bytes skipped in 11.476s, speed 77429 B/s

    And reboot the board.

    After this your RockPro64 will have firmware stored in the on-board SPI flash. No need to wonder which offsets to use to store it on an SD card etc.

    Booting installation media

    The nicest part of it is that you no longer need to mess with installation media. Fetch a Debian/Fedora installer ISO, write it to a USB pen drive, plug it into a port and reboot the board.

    It should work with any generic AArch64 installation media. Of course the kernel on the media needs to support the RockPro64 board. I played with Debian ‘testing’, Fedora 32 and rawhide and they booted fine.

    My setup

    My board boots to either Fedora rawhide or Debian ‘testing’ (two separate pen drives).

  10. OpenDev CI speed-up for AArch64

    I have been working with OpenDev CI for a while. My first Kolla patches date back over three years. We (Linaro) added AArch64 nodes a few times: some nodes were taken down, some replaced, some added.

    Speed or lack of it

    Whenever you want to install some Python package using pip, it is downloaded from PyPI (directly or via a mirror). If there is a binary package for your architecture then you get it; if not, a “noarch” package is fetched.

    In the worst case a source tarball is downloaded and the whole build process starts. You need to have all required compilers installed, development headers for Python and all required libraries, and the rest of the needed tools. And then wait. And wait, as some packages require a lot of time to build.

    And then repeat it again and again, as you are not allowed to upload packages to PyPI for projects you do not own.

    Argh you, protobuf

    There was a new release of the protobuf package. The OpenStack bot picked it up, sent a patch for review and it got merged.

    And all AArch64 CI jobs failed…

    It turned out that protobuf 3.12.0 was released with x86 wheels only. No source tarball. At all.

    This turned out to be a new maintainer's mistake; after 2-3 weeks it was fixed in the 3.12.2 release.

    Another CI job then

    So I started looking at the ‘requirements’ project and created a new CI job for it, to check whether new package versions are available for AArch64. It took some time and several side updates as well (yak shaving all the way again).

    Stuff got merged and works now.

    Wheels cache

    While working on the above CI job I had a discussion with the OpenDev infra team about how to make it work properly. It turned out that there were old jobs doing exactly what I wanted: building wheels and caching them for later CI tasks.

    It took several talks and patches from Ian Wienand, Clark Boylan, Jeremy ‘fungi’ Stanley and others. Several CI jobs got renamed, some were moved from one project to another. Servers got configuration changes etc.

    Now we have wheels built for both x86-64 and AArch64 architectures. Covering CentOS 7/8, Debian ‘buster’ and Ubuntu ‘xenial/bionic/focal’ releases. For OpenStack ‘master’ and a few stable branches.


    The ‘requirements’ project has a quick ‘check-uc’ job running on AArch64 to make sure that all packages are available for both architectures. All OpenStack projects profit from it.

    In Kolla the ‘openstack-base’ image build went from 23:49 to just 5:21 minutes. The whole Debian/source build is now 57 minutes instead of 2 hours 20 minutes.

    Nice result, isn’t it?
