1. 8 years of my work on AArch64

    Back in 2012 AArch64 was something new and unknown. There was no toolchain support (so no gcc, binutils or glibc). And I got assigned to get some stuff running around it.

    OpenEmbedded

    As there was no hardware, cross compilation was the only way. Which meant OpenEmbedded, as we wanted to have a wide selection of software available.

    I learnt how to use modern OE (with OE Core and layers) by building images for ARMv7 and checking them on some boards I had floating around my desk.

    Non-public toolchain work

    Some time later the first non-public patches for binutils and gcc arrived in my inbox. Then eglibc ones. So I started building, and on 12th September 2012 I was able to build helloworld:

    12:38 hrw@puchatek:aarch64-oe-linux$ ./aarch64-oe-linux-gcc ~/devel/sources/hello.c -o hello
    12:38 hrw@puchatek:aarch64-oe-linux$ file hello
    hello: ELF 64-bit LSB executable, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.39, not stripped
    12:39 hrw@puchatek:aarch64-oe-linux$ objdump -f hello
    
    hello:     file format elf64-littleaarch64
    architecture: aarch64, flags 0x00000112: 
    EXEC_P, HAS_SYMS, D_PAGED 
    start address 0x00000000004003e0
    

    Then images followed. Several people at Linaro (and outside) used those images to test miscellaneous things.

    At that moment we ran ARMv8 Fast Models (a quite slow system emulator from Arm). There was a joke that Arm developers formed a queue for single-core 10 GHz x86-64 CPUs to get AArch64 running faster.

    Toolchain became public

    Then 1st October 2012 came. I entered the Linaro office in Cambridge for an AArch64 meeting and was greeted with the news that glibc patches went to the public mailing list. So I rebased my OpenEmbedded repository, updated patches, removed any traces of non-public ones and published the whole work.

    Building on AArch64

    My work above added support for AArch64 as a target architecture. But could it be used as a host? One day I decided to check and ran OpenEmbedded on AArch64.

    After one small patch it worked fine.

    X11 anyone?

    As I had access to the Arm Fast Model I was able to play with graphics. So one day in January 2013 I did a build and started Xorg. Over the next years I had fun whenever people wrote that they got X11 running on their AArch64 devices ;D

    Two years later I had an Applied Micro Mustang at home (still have it). Once it had working PCI Express support I added a graphics card and started X11 on hardware.

    Then I went debugging why Xorg required a configuration file, and one day, with help from Dave Airlie, Mark Salter and Matthew Garrett, I got two solutions for the problem. I do not remember whether either of them went upstream, but some time later the problem was solved.

    A few years later I met Dave Airlie at Linux Plumbers. We introduced ourselves and he said “ah, you are the ‘arm64 + radeon guy’” ;D

    AArch64 Desktop week

    One day in September 2015 I had an idea. PCIe worked, USB too. So I did an AArch64 desktop week: connected monitors, keyboard, mouse, speakers and used the Mustang instead of my x86-64 desktop.

    It was fun.

    Distributions

    First we had nothing. Then I added the AArch64 target to OpenEmbedded.

    The same month Arm released the Foundation model, so anyone was able to play with an AArch64 system. No screen, just storage, serial and network, but it was enough for some to even start building whole distributions like Debian, Fedora, OpenSUSE or Ubuntu.

    At that moment several patches were shared by all distributions, as it was a faster way than waiting for upstreams. I saw multiple versions of some of them during my journey of fixing packages in several distributions.

    Debian and Ubuntu

    In February 2013 the Debian/Ubuntu team presented their AArch64 port. It was their first architecture bootstrapped without using external toolchains. The work was done in Ubuntu due to its different approach to development than Debian’s. All the work was merged back, so some time later Debian also had an AArch64 port.

    Fedora

    The Fedora team started early, in October 2012, right after the toolchain became public. They used Fedora 17 packages and switched to Fedora 19 during the work.

    When I joined Red Hat in September 2013, one of my duties was fixing packages in Fedora to get them built on AArch64.

    OpenSUSE

    In January 2014 the first versions of QEMU support arrived and people moved away from the Foundation model. In March/April the OpenSUSE team did a massive amount of builds to get their distribution built that way.

    RHEL

    The Fedora bootstrap also meant a RHEL 7 bootstrap. When I joined Red Hat there were images ready to use in models. My work was testing them and fixing packages. There were multiple times when an AArch64 fix also helped builds on the ppc64le and s390x architectures.

    Hardware I played with

    The first Linux-capable hardware was announced in June 2013. I got access to it at Red Hat. Building and debugging was much faster than using Fast Models ;D

    Applied Micro Mustang

    Soon Applied Micro Mustangs were everywhere. Distributions used them to build packages etc. Even without support for half of the hardware (no PCI Express, no USB).

    I got one in June 2014, running UEFI firmware out of the box. In the first months I had a feeling that the firmware was developed at Red Hat, as we often had fresh versions right after the first patches for missing hardware functionality were written. In reality it was maintained by Applied Micro; we had access to the sources, so there were some internal changes in testing (that’s why I had firmware versions like ‘0.12-rh’).

    All those graphics cards I collected were there to test how PCI Express works. Or to test USB before its support was even merged into the mainline Linux kernel. Or to use virtualization for developing armhf build fixes (8 cores, 12 gigabytes of RAM and plenty of storage beat all the ARMv7 hardware I had).

    I stopped using Mustang around 2018. It is still under my desk.

    For those who still use one: make sure you have the 3.06.25 firmware.

    96boards

    In February 2015 Linaro announced the 96boards initiative. The plan was to make small, unified SBCs with different Arm chips. Both 32- and 64-bit ones.

    The first ones were the ‘Consumer Edition’: small, limited to basic connectivity. Now there are tens of them. 32-bit, 64-bit, FPGA etc. Choose your poison ;D

    The second ones were the ‘Enterprise Edition’. A few attempts existed; most of them did not survive the prototype phase. There was a joke that the full-length PCI Express slot and two USB ports were required because I wanted to have an AArch64 desktop ;D

    Too bad that nothing worth using came from the EE spec.

    Servers

    As a Linaro assignee I have access to several servers from Linaro members. Some are mass-market ones, some never made it to market. We had over a hundred X-Gene 1 based systems (mostly as m400 cartridges in HPE Moonshot chassis) and shut them down in 2018 as they were getting more and more obsolete.

    The main system I use for development is one of those ‘never went to mass market’ ones. 46 CPU cores and 96 GB of RAM make it a nice machine for building container images, Debian packages or running virtual machines in OpenStack.

    Desktop

    For some time I was waiting for desktop-class hardware, to have a development box more up-to-date than the Mustang. Months turned into years. I no longer wait, as it looks like there will be no such thing.

    SolidRun has made some attempts in this area, first with the Macchiatobin and later with the Honeycomb. I have not used either of them.

    Cloud

    When I (re)joined Linaro in 2016 I became part of a team working on getting OpenStack running on AArch64 hardware. We used the Liberty, Mitaka and Newton releases, then changed the way we worked and started contributing more. And more: Kolla, Nova, DIB and other projects. We added aarch64 nodes to the OpenDev CI.

    The effect of it was the Linaro Developer Cloud, used by hundreds of projects to speed up their aarch64 porting and by tens of projects hosting their CI systems.

    Two years later Amazon started offering aarch64 nodes in AWS.

    Summary

    I spent half of my life with Arm on AArch64. I had great moments, like building helloworld as one of the first people outside of Arm Ltd. I got involved in far more projects than I ever thought I would. I met new friends and visited several places in the world I would probably never see otherwise.

    I also got grumpy and complained far too many times that the AArch64 market is ‘cheap but limited SBCs, or fast but expensive servers, and nearly nothing in between’. I wrote some posts about the missing systems targeting software developers and lost hope that such a thing will happen.

    NOTE: this covers 8 years of my work on AArch64. I have worked with Arm since 2004.

  2. From the diary of AArch64 porter — drive-by coding

    Working on AArch64 often means changing code in various projects. I have done that so many times that I am unable to say where I have commits. Such a thing got a name: drive-by coding.

    Definition

    Drive-by coding is a situation where you appear in some software project, make some changes, get them merged and then disappear, never to be seen again.

    Let’s build something

    It all starts from a simple thing: I have/want to build some software. But for some reason it does not cooperate. Sometimes a simple architecture check is missing, sometimes atomic operations are not present, sometimes intrinsics are missing, or anything else.

    First checks

    Then comes the moment of looking at build errors and trying to work out a solution. Have I seen that bug before? Does it look familiar?

    If it is something new, then a quick Google search for the error message follows. And checking bug reports/issues on the project’s website/repo. There can be ready-to-use patches, information on how to fix it or even some ideas about why it happens.

    If it is a system call failure in some tests, then I check my syscalls table to see whether those calls are handled on aarch64, and try to change the code if they are not (legacy ones like open, symlink, rename).
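
    Code that invokes syscall numbers directly is a typical victim, since aarch64 was wired up without the legacy entry points. A minimal sketch of the usual fix (the file path is just for illustration):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        long fd;
    #ifdef __NR_open
        /* legacy syscall exists (x86-64 and other older ports) */
        fd = syscall(__NR_open, "/etc/hostname", O_RDONLY);
    #else
        /* aarch64 has no __NR_open - emulate it via openat(AT_FDCWD, ...) */
        fd = syscall(__NR_openat, AT_FDCWD, "/etc/hostname", O_RDONLY);
    #endif
        if (fd >= 0) {
            printf("opened fd %ld\n", fd);
            close(fd);
        }
        return 0;
    }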

    Simple fixes

    When I started working with AArch64 (in 2012) there were moments when many projects were easy to fix. If atomics were the issue, then copying them from the Linux kernel was usually the solution (if the license allowed).

    Then there were architecture checks: piles of #ifdef __X86_64__ or similar, trying to decide simple things like “32/64-bit” or “little/big endian”. Nowadays these do not happen as often as they used to.
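
    A made-up but typical example of the pattern: the fragile version enumerates architectures, while the portable one asks the compiler the actual question (using predefined gcc/clang macros):

    /* fragile: every new architecture needs another patch */
    #ifdef __x86_64__
    typedef unsigned long fragile_u64;
    #else
    typedef unsigned long long fragile_u64;
    #endif

    /* portable: test the property itself */
    #if defined(__LP64__)
    typedef unsigned long portable_u64;   /* aarch64, x86-64, ppc64le, ... */
    #else
    typedef unsigned long long portable_u64;
    #endif

    #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    #define MY_LITTLE_ENDIAN 1            /* true on aarch64 and x86-64 alike */
    #endif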

    SIMD intrinsics can be a problem. All those vst1q_f32_x2(), vld1q_f32_x2() and similar. I do not have to understand them to know that a build failure there usually means the C compiler lacks some backports, as those functions were already added into gcc and llvm (like it was with PyTorch recently).
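
    The usual fix is not to understand the intrinsic but to add a guarded fallback built from primitives the old compiler does have. A sketch of the pattern, assuming a gcc that lacks the _x2 variants (the version in the guard is my guess, adjust to your compiler):

    #include <arm_neon.h>

    /* example guard: provide vld1q_f32_x2() only where gcc lacks it */
    #if defined(__GNUC__) && !defined(__clang__) && (__GNUC__ < 9)
    static inline float32x4x2_t vld1q_f32_x2(const float32_t *ptr)
    {
        float32x4x2_t ret;
        ret.val[0] = vld1q_f32(ptr);      /* first four floats */
        ret.val[1] = vld1q_f32(ptr + 4);  /* next four floats */
        return ret;
    }
    #endif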

    Complex stuff

    There are moments when getting software to build needs something more complicated. Like I wrote above, I usually start with searching for the error message and checking whether it was an issue in some other projects, and how it got solved there. If I am lucky, a patch can be done in a short time and sent for review upstream (once it builds and passes tests).

    Sometimes all I can do is report the issue upstream and hope that someone will care enough to respond. Usually it ends with at least a discussion on potential ways to fix it, sometimes with hints or even patches to test.

    Projects response

    Projects usually accept patches, review them and merge them. In several cases it took longer than expected; sometimes there was a larger amount of them, so the developers remember me (at least for some time). It helps when I have something for those projects again months/years later.

    There are projects where I prefer to forget that they exist. Complicated contribution rules, crazy CI setups, weird build systems (ever heard about ‘bazel’?). Or comments in the ‘we do not give a shit about non-x86’ style (with a bit more polished language). Been there, fixed something to get stuff working and do not want to go back.

    Summary

    ‘Drive-by coding’ reminds me of going abroad for conferences. People think that you saw interesting places when in reality you spent most of the time inside the hotel and/or conference centre.

    It is similar with code. I have been in several projects, usually with no idea what they do or how they work. I came, took a short look, fixed something and went back home.

  3. So your hardware is ServerReady?

    Recently I changed my assignment at Linaro: from Cloud to Server Architecture. This means less time spent on Kolla things and more on server-related ones. And at the start I got a project I had managed to forget about :D

    SBSA reference platform in QEMU

    In 2017 someone got an idea to make a new machine for QEMU: pure hardware emulation of an SBSA-compliant reference platform, without using virtio components.

    Hongbo Zhang wrote the code and got it merged into QEMU, and Radosław Biernacki wrote basic support for EDK2 (also merged upstream). Out of the box it can boot to the UEFI shell. Linux is not bootable due to the lack of ACPI tables (DeviceTree is not an option here).
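
    To try it yourself, something along these lines is enough (the flash image names are whatever your EDK2 SbsaQemu build produced; mine are placeholders):

    qemu-system-aarch64 -machine sbsa-ref -m 4096 -nographic \
        -pflash SBSA_FLASH0.fd \
        -pflash SBSA_FLASH1.fd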

    ACPI tables in firmware

    Tanmay Jagdale works on adding the ACPI tables in his fork of edk2-platforms. With this firmware Linux boots and can be used.

    Testing tools

    But what is the point of just having a reference platform if there is no testing? So I took a look and found two interesting tools:

    Server Base System Architecture — Architecture Compliance Suite

    The SBSA ACS tool requires ACPI tables to be present to work. Once started, it nicely checks how compliant your system is:

    FS0:\> Sbsa.efi -p
    
    
     SBSA Architecture Compliance Suite
        Version 2.4
    
     Starting tests for level  4 (Print level is  3)
    
     Creating Platform Information Tables
     PE_INFO: Number of PE detected       :    3
     GIC_INFO: Number of GICD             :    1
     GIC_INFO: Number of ITS              :    1
     TIMER_INFO: Number of system timers  :    0
     WATCHDOG_INFO: Number of Watchdogs   :    0
     PCIE_INFO: Number of ECAM regions    :    2
     SMMU_INFO: Number of SMMU CTRL       :    0
     Peripheral: Num of USB controllers   :    1
     Peripheral: Num of SATA controllers  :    1
     Peripheral: Num of UART controllers  :    1
    
          ***  Starting PE tests ***
       1 : Check for number of PE            : Result:  PASS
       2 : Check for SIMD extensions                PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       3 : Check for 16-bit ASID support            PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       4 : Check MMU Granule sizes                  PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       5 : Check Cache Architecture                 PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       6 : Check HW Coherence support               PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       7 : Check Cryptographic extensions           PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       8 : Check Little Endian support              PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       9 : Check EL2 implementation                 PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      10 : Check AARCH64 implementation             PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      11 : Check PMU Overflow signal         : Result:  PASS
      12 : Check number of PMU counters             PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
      13 : Check Synchronous Watchpoints            PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      14 : Check number of Breakpoints              PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      15 : Check Arch symmetry across PE            PSCI_CPU_ON: failure
    
           Reg compare failed for PE index=1 for Register: CCSIDR_EL1
           Current PE value = 0x0         Other PE value = 0x100FBDB30E8
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      16 : Check EL3 implementation                 PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      17 : Check CRC32 instruction support          PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      18 : Check for PMBIRQ signal
           SPE not supported on this PE      : Result:  -SKIPPED- 1
      19 : Check for RAS extension                  PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
      20 : Check for 16-Bit VMID                    PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
      21 : Check for Virtual host extensions        PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
      22 : Stage 2 control of mem and cache         PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      23 : Check for nested virtualization          PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      24 : Support Page table map size change       PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      25 : Check for pointer signing                PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      26 : Check Activity monitors extension        PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      27 : Check for SHA3 and SHA512 support        PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
    
          *** One or more PE tests have failed... ***
    
          ***  Starting GIC tests ***
     101 : Check GIC version                 : Result:  PASS
     102 : If PCIe, then GIC implements ITS  : Result:  PASS
     103 : GIC number of Security states(2)  : Result:  PASS
     104 : GIC Maintenance Interrupt
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
    
          One or more GIC tests failed. Check Log
    
          *** Starting Timer tests ***
     201 : Check Counter Frequency           : Result:  PASS
     202 : Check EL0-Phy timer interrupt     : Result:  PASS
     203 : Check EL0-Virtual timer interrupt : Result:  PASS
     204 : Check EL2-phy timer interrupt     : Result:  PASS
     205 : Check EL2-Virtual timer interrupt
           v8.1 VHE not supported on this PE : Result:  -SKIPPED- 1
     206 : SYS Timer if PE Timer not ON
           PE Timers are not always-on.
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
     207 : CNTCTLBase & CNTBaseN access
           No System timers are defined      : Result:  -SKIPPED- 1
    
         *** Skipping remaining System timer tests ***
    
          *** One or more tests have Failed/Skipped.***
    
          *** Starting Watchdog tests ***
     301 : Check NS Watchdog Accessibility
           No Watchdogs reported          0
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
     302 : Check Watchdog WS0 interrupt
           No Watchdogs reported          0
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
    
          ***One or more tests have failed... ***
    
          *** Starting PCIe tests ***
     401 : Check ECAM Presence               : Result:  PASS
     402 : Check ECAM value in MCFG table    : Result:  PASS
    
            Unexpected exception occured
            FAR reported = 0xEBDAB180
            ESR reported = 0x97800010
         -------------------------------------------------------
         Total Tests run  =   42;  Tests Passed  =   11  Tests Failed =   22
         ---------------------------------------------------------
    
          *** SBSA tests complete. Reset the system. ***
    

    As you can see there is still a lot of work to do.

    ACPI Tables View

    This tool displays the content of ACPI tables in hex/ASCII format and then with the information interpreted field by field.
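
    Under the UEFI shell it is available as the acpiview command, so a plain dump of all tables is simply:

    Shell> acpiview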

    What makes it more useful is the “-r 2” argument, as it enables checking tables against the Server Base Boot Requirements (SBBR) v1.2 specification. On the SBSA reference platform with Tanmay’s firmware it lists two errors:

    ERROR: SBBR v1.2: Mandatory DBG2 table is missing
    ERROR: SBBR v1.2: Mandatory PPTT table is missing
    
    Table Statistics:
            2 Error(s)
            0 Warning(s)
    

    So the situation looks good, as those tables can be easily added.

    CI

    So we have code to check and tools to do it with. Add one to the other and you have a clear need for a CI job. So I wrote one for the Linaro CI infrastructure: “LDCG SBSA firmware”. It builds the top of the QEMU and EDK2 trees, then boots the result and runs the above tools. Results are sent to a mailing list.

    ServerReady?

    The Arm ServerReady compliance program provides a solution for servers that “just works”, allowing partners to deploy Arm servers with confidence. The program is based on industry standards and the Server Base System Architecture (SBSA) and Server Base Boot Requirement (SBBR) specifications, alongside Arm’s Server Architectural Compliance Suite (ACS). Arm ServerReady ensures that Arm-based servers work out-of-the-box, offering seamless interoperability with standard operating systems, hypervisors, and software.

    In other words: if your hardware is SBSA compliant then you can go through the SBBR compliance tests and then go and ask for a certification sticker or something like that.

    But if your hardware is not SBSA compliant then EBBR is all you can get. Far from being ServerReady. Never mind what people try to say: ServerReady requires SBBR, which requires SBSA.

    Future work

    There are more tests to integrate. ARM Enterprise ACS is next on my list.

  4. NAS update

    In 2014 I bought a Synology DS214se NAS and two 4 TB hard drives. It worked fine for me for years and served files. But it was a low-CPU-power system with just 256 MB of RAM, so it was too easy to overload.

    Let’s move to x86-64

    A few years ago a friend was selling an ASUS M5A78L-M LX3 mainboard with an AMD FX-6300 processor. I bought it, added 8 GB of RAM from my desktop (which got an additional 16 GB instead) and put it into a Node 804 case from Fractal Design.

    The case fits a MicroATX board and has plenty of space for storage (I think ten 3.5”, two 2.5” drives and a slot-in optical drive).

    The machine got several hard drives (from other home machines or drawers):

    • WD Red 4 TB x2
    • Toshiba 2 TB
    • Samsung 1.5 TB
    Hard drives in their cages

    FreeNAS

    I installed FreeNAS 11 on it and started using it. The machine was named ‘lumpek’ (Lumpy the Heffalump) to follow my way of naming computers.

    The 4 TB drives went into a simple mirror, the 2 TB one is for less important data and the 1.5 TB one is for virtual machines and related storage (like installation ISO files).

    ZFS works nicely, and some extra FreeNAS plugins allowed me to offload some services from my desktop to the NAS (like the Transmission daemon for fetching torrents or a MySQL server for local needs).

    Memory upgrade

    Many people say that a NAS machine should have ECC memory. So at some moment it got 16 GB (2x 8 GB sticks) of DDR3-1866 ECC memory recovered from an old server:

    Handle 0x0026, DMI type 16, 15 bytes
    Physical Memory Array
            Location: System Board Or Motherboard
            Use: System Memory
            Error Correction Type: Single-bit ECC
            Maximum Capacity: 16 GB
            Error Information Handle: Not Provided
            Number Of Devices: 2
    

    More disks

    4 TB of space runs out one day. So I went and bought another WD Red 4 TB disk. The idea was to move the data from the mirror to some spare storage, create a new RAID-Z1 array from the 3x 4 TB drives and migrate the data back.
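
    In ZFS terms the plan looks roughly like this (pool, dataset and disk names are made up for this sketch):

    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs receive -F spare/tank   # data out to spare storage
    zpool destroy tank
    zpool create tank raidz1 da0 da1 da2                   # new 3x 4 TB RAID-Z1
    zfs send -R spare/tank@migrate | zfs receive -F tank   # and back again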

    But… Lumpek already had 4 hard drives and that was the maximum this mainboard supported.

    Dell H310 aka LSI 9211-8i

    Luckily the mainboard has on-board graphics, so the PCI Express x16 slot was empty. I asked friends, checked some internet pages and ordered a used Dell H310 SAS controller. This is probably the most popular (along with the IBM M1015) storage solution in the FreeNAS community.

    The card arrived with an unneeded SAS cable; the SFF-8087 breakout cables came in another order.

    Crossflashing

    How to make the best use of a server-class RAID controller? Strip it of any RAID functionality ;D

    It turns out that the Dell H310 is basically an LSI 9211-8i card. Which means we can flash it with generic firmware to switch it to “initiator target” (also called “IT”) mode. The card will then present each drive individually to the host.

    There are several pages describing the process. One of them is JC-LAN. I do not remember which set of instructions I followed, but they do not differ much.
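
    The core of the procedure boils down to a few sas2flash calls (the IT firmware file name comes from the LSI 9211-8i package; Dell cards need extra erase steps first, so follow the full guide rather than this outline):

    sas2flash -o -e 6                       # advanced mode, erase the flash
    sas2flash -o -f 2118it.bin              # write generic IT-mode firmware
    sas2flash -o -sasadd 500605bxxxxxxxxx   # restore SAS address from the card sticker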

    At the end I got a generic LSI SAS2008 controller:

    root@lumpek:~ # sas2flash -listall
    LSI Corporation SAS2 Flash Utility
    Version 16.00.00.00 (2013.03.01) 
    Copyright (c) 2008-2013 LSI Corporation. All rights reserved 
    
            Adapter Selected is a LSI SAS: SAS2008(B2)   
    
    Num   Ctlr            FW Ver        NVDATA        x86-BIOS         PCI Addr
    ----------------------------------------------------------------------------
    
    0  SAS2008(B2)     20.00.07.00    14.01.00.08    07.39.02.00     00:02:00:00
    
            Finished Processing Commands Successfully.
            Exiting SAS2Flash.
    root@lumpek:~ # 
    

    And as a bonus all my hard drives got a bit more bandwidth:

    da2: <ATA WDC WD40EFRX-68W 0A82> Fixed Direct Access SPC-4 SCSI device
    da2: 600.000MB/s transfers
    da2: Command Queueing enabled
    da2: 3815447MB (7814037168 512 byte sectors)
    da2: quirks=0x8<4K>
    

    Not that the 300→600 MB/s transfer update changes anything with rusting plates ;D

    Summary

    The FreeNAS-based machine serves me well. Five hard drives give a lot of space for data. The 1 GbE network connection is probably my main limit now, but so far there are no plans for moving to 10 GbE cards/switch due to their price.

    Virtual machines run from the NAS at good speed, and if I need something faster I can move them to the NVMe drive in my desktop or laptop.

  5. Installing Fedora on RockPro64

    Continuing the tests of distribution installers. This time I installed Fedora ‘rawhide’ from the netinst ISO (2020.06.20). Fetched it, wrote it to a USB pen drive and booted. Due to U-Boot being present in the on-board SPI flash I did not have to mess with the installation media.

    Issues

    There were some issues:

    1. Panfrost failing to initialize
    2. U-Boot unable to load grub efi

    Panfrost initialization failure

    The Panfrost kernel module needs a devfreq governor. The kernel has four of them; Fedora enables one. There are no dependencies between those modules, which ends with the same error as on Debian:

    panfrost ff9a0000.gpu: devfreq_add_device: Unable to find governor for the device
    panfrost ff9a0000.gpu: [drm:panfrost_devfreq_init [panfrost]] *ERROR* Couldn't initialize GPU devfreq
    panfrost ff9a0000.gpu: Fatal error during devfreq init
    panfrost: probe of ff9a0000.gpu failed with error -22
    

    The solution was the same as before: boot without the ‘panfrost’ module. I interrupted grub and added rd.driver.blacklist=panfrost to the “linux” command line. This allowed me to boot into the Fedora installer and the system installation went smoothly.

    The first boot of the installed system showed a working Panfrost driver:

    panfrost ff9a0000.gpu: clock rate = 500000000
    panfrost ff9a0000.gpu: mali-t860 id 0x860 major 0x2 minor 0x0 status 0x0
    panfrost ff9a0000.gpu: features: 00000000,100e77bf, issues: 00000000,24040400
    panfrost ff9a0000.gpu: Features: L2:0x07120206 Shader:0x00000000 Tiler:0x00000809 Mem:0x1 MMU:0x00002830 AS:0xff JS:0x7
    panfrost ff9a0000.gpu: shader_present=0xf l2_present=0x1
    [drm] Initialized panfrost 1.1.0 20180908 for ff9a0000.gpu on minor 0
    

    U-Boot cannot load Grub EFI

    After a reboot, U-Boot was not able to load Grub from the EFI System Partition:

    Device 0: Vendor: ADATA    Rev: 1.00 Prod: USB Flash Drive 
                Type: Removable Hard Disk
                Capacity: 59200.0 MB = 57.8 GB (121241600 x 512)
    ... is now current device
    Scanning usb 0:1...
    Found EFI removable media binary efi/boot/bootaa64.efi
    libfdt fdt_check_header(): FDT_ERR_BADMAGIC
    Card did not respond to voltage select!
    Scanning disk mmc@fe310000.blk...
    Disk mmc@fe310000.blk not ready
    Card did not respond to voltage select!
    Scanning disk mmc@fe320000.blk...
    Disk mmc@fe320000.blk not ready
    Card did not respond to voltage select!
    Scanning disk sdhci@fe330000.blk...
    Disk sdhci@fe330000.blk not ready
    Scanning disk usb_mass_storage.lun0...
    ** Unrecognized filesystem type **
    ** Unrecognized filesystem type **
    Found 4 disks
    BootOrder not defined
    EFI boot manager: Cannot load any image
    858216 bytes read in 25 ms (32.7 MiB/s)
    libfdt fdt_check_header(): FDT_ERR_BADMAGIC
    System BootOrder not found.  Initializing defaults.
    Could not read \EFI\: Invalid Parameter
    Error: could not find boot options: Invalid Parameter
    start_image() returned Invalid Parameter
    ## Application terminated, r = 2
    EFI LOAD FAILED: continuing...
    

    It was already reported as ‘shim’ bug 1733817.

    How to work around it?

    1. connect your Fedora storage to another computer
    2. copy “/efi/fedora/grubaa64.efi” to “/efi/boot/bootaa64.efi”

    This way U-Boot will get the grub EFI binary to load from the default location.
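
    In shell terms it is just one copy (the mount point is my assumption):

    # ESP from the Fedora disk mounted at /mnt/esp
    mkdir -p /mnt/esp/efi/boot
    cp /mnt/esp/efi/fedora/grubaa64.efi /mnt/esp/efi/boot/bootaa64.efi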

    Final effect

    The board boots directly to the graphical login manager and then to a GNOME 3 session. Extreme Tux Racer and Xonotic worked out of the box. Speed-wise it feels slower than the KDE Plasma session on Debian.

  6. Installing Debian on RockPro64

    I installed Debian ‘testing’ from the netinst ISO (2020.06.15) today. Fetched it, wrote it to a USB pen drive and booted. Due to U-Boot being present in the on-board SPI flash I did not have to mess with the installation media.

    Issues

    There were some issues:

    1. no graphics on default installer (known, someone promised to fix it)
    2. grub refusing to install (bug against installer reported)
    3. Panfrost failing to initialize

    Serial console FTW!

    OK, this time I am joking. There are two choices: the text and the graphical installer. The first one lacks kernel modules for graphics, so only the serial console is available. The graphical installer works fine.

    EFI Grub and lack of EFI variables storage

    As I booted the board with U-Boot, there was no EFI variable storage. Grub was not satisfied:

    os-prober: debug: running /usr/lib/os-probes/50mounted-tests on /dev/sdb2
    50mounted-tests: debug: mounted using GRUB fat filesystem driver
    50mounted-tests: debug: running subtest /usr/lib/os-probes/mounted/40lsb
    50mounted-tests: debug: running subtest /usr/lib/os-probes/mounted/90linux-distro
    grub-installer: info: Installing grub on 'dummy'
    grub-installer: info: grub-install does not support --no-floppy
    grub-installer: info: Running chroot /target grub-install  --force "dummy"
    grub-installer: Installing for arm64-efi platform.
    grub-installer: grub-install: warning: Cannot set EFI variable Boot0000.
    grub-installer: grub-install: warning: vars_set_variable: write() failed: Invalid argument.
    grub-installer: grub-install: warning: _efi_set_variable_mode: ops->set_variable() failed: No such file or directory.
    grub-installer: grub-install: error: failed to register the EFI boot entry: No such file or directory.
    grub-installer: error: Running 'grub-install  --force "dummy"' failed.
    

    How to work around it?

    1. chroot into the target system and run update-grub by hand
    2. copy “/efi/debian/grubaa64.efi” to “/efi/boot/bootaa64.efi”

    This way U-Boot will get the EFI binary to load from the default location.

    Panfrost initialization failure

    The Panfrost kernel module needs a devfreq governor. The kernel has four of them; Debian enables one. There are no dependencies between those modules, which ends with this error:

    panfrost ff9a0000.gpu: devfreq_add_device: Unable to find governor for the device
    panfrost ff9a0000.gpu: [drm:panfrost_devfreq_init [panfrost]] *ERROR* Couldn't initialize GPU devfreq
    panfrost ff9a0000.gpu: Fatal error during devfreq init
    panfrost: probe of ff9a0000.gpu failed with error -22
    

    Solution:

    1. boot system
    2. rmmod panfrost, modprobe governor_simpleondemand, modprobe panfrost
    3. update-initramfs -u -k all

    A good option at this phase is changing the update-initramfs configuration to include only the needed kernel modules (by setting “MODULES=dep” in its configuration). This allowed me to shrink the initramfs from 37 to 13 megabytes (removing plymouth and ntfs-3g shrank it further to 6.6 MB).
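
    The missing dependency can also be expressed locally with a modprobe soft dependency, so the governor always loads before panfrost (the file name is my choice; softdep is standard modprobe.d syntax). Remember to run update-initramfs -u afterwards so it lands in the initramfs too:

    # /etc/modprobe.d/panfrost-governor.conf
    softdep panfrost pre: governor_simpleondemand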

    Final effect

    The board boots directly to the graphical login manager and then to a KDE Plasma session. Some OpenGL games work, some do not (Nexuiz). Looks good.

  7. EBBR on RockPro64

    SBBR or GTFO

    Me.

    But the Arm world no longer ends at “SBBR compliant or complete mess”. For over a year there has been a new specification called EBBR (Embedded Base Boot Requirements).

    WTH is EBBR?

    In short, it is a kind of SBBR for devices which cannot comply with it. You still need to have some subset of UEFI Boot/Runtime Services, but it can be provided by whatever bootloader you use. So U-Boot is fine as long as its EFI implementation is enabled.

    ACPI is not required but may be present. DeviceTree is perfectly fine. You may provide both or one of them.

    Firmware can be stored wherever you wish. Even MBR partitioning is available if really needed.

    Make it nice way

    The RockPro64 has 16 MB of SPI flash on board. This is far more than needed for storing firmware (I remember times when that was enough for a palmtop Linux).

    During the last month I sent a bunch of patches to U-Boot to make this board as comfortable to use as possible, including storing all firmware parts in the on-board SPI flash.

    Needed files

    To have U-Boot in SPI flash you need to fetch two files: idbloader.img and u-boot.itb.

    Their sha256 sums:

    3985f2ec63c2d31dc14a08bd19ed2766b9421f6c04294265d484413c33c6dccc  idbloader.img
    35ec30c40164f00261ac058067f0a900ce749720b5772a759e66e401be336677  u-boot.itb
    

    Store them as files on a USB pen drive.

    Flashing RockPro64 board

    NOTE: I assume that you already have a way to boot your board to the U-Boot shell (the most common one is to use a microSD card with U-Boot on it).

    Reboot the board to the U-Boot shell. Plug the USB pen drive into any of the RockPro64 USB ports.

    Next, run this set of commands to update U-Boot:

    Hit any key to stop autoboot:  0 
    => usb start
    
    => ls usb 0:1
       163807   idbloader.img
       867908   u-boot.itb
    
    2 file(s), 0 dir(s)
    
    => sf probe
    SF: Detected gd25q128 with page size 256 Bytes, erase size 4 KiB, total 16 MiB
    
    => load usb 0:1 ${fdt_addr_r} idbloader.img
    163807 bytes read in 16 ms (9.8 MiB/s)
    
    => sf update ${fdt_addr_r} 0 ${filesize}
    device 0 offset 0x0, size 0x27fdf
    163807 bytes written, 0 bytes skipped in 2.93s, speed 80066 B/s
    
    => load usb 0:1 ${fdt_addr_r} u-boot.itb
    867908 bytes read in 53 ms (15.6 MiB/s)
    
    => sf update ${fdt_addr_r} 60000 ${filesize}
    device 0 offset 0x60000, size 0xd3e44
    863812 bytes written, 4096 bytes skipped in 11.476s, speed 77429 B/s
    

    And reboot the board.

    After this your RockPro64 will have its firmware stored in the on-board SPI flash. No need to wonder which offsets to use to store it on an SD card etc.

    Booting installation media

    The nicest part of it is that you no longer need to mess with installation media. Fetch a Debian/Fedora installer ISO, write it to a USB pen drive, plug it into a port and reboot the board.

    It should work with any generic AArch64 installation media. Of course the kernel on the media needs to support the RockPro64 board. I played with Debian ‘testing’, Fedora 32 and rawhide, and they all booted fine.

    Testing U-Boot on microSD

    By default the RockPro64 loads U-Boot from SPI flash. If you need/want to boot from microSD then follow the instructions from the official wiki:

    If you mess-up your SPI and are unable to boot, jumpering pins 23 (CLK) and 25 pin (GND) on the PI-2-bus header will disable the SPI as a boot device.

    You have to remove the jumper 2 seconds after having started your RP64 (before the white LED turns ON) otherwise the SPI will be missing and you won’t be able to flash it.

    My setup has a ‘disable SPI’ button next to a ‘disconnect Tx serial line’ switch, both on a small breadboard next to the board.

    RockPro64 setup on my desk
  8. OpenDev CI speed-up for AArch64

    I have worked with the OpenDev CI for a while. My first Kolla patches were over three years ago. We (Linaro) added AArch64 nodes a few times: some nodes were taken down, some replaced, some added.

    Speed or lack of it

    Whenever you want to install some Python package using pip, it is downloaded from PyPI (directly or via a mirror). If there is a binary wheel for your platform then you get it; if not, a “noarch” (pure Python) package is fetched.

    In the worst case a source tarball is downloaded and the whole build process starts. You need to have all the required compilers installed, development headers for Python and all the required libraries, and the rest of the needed tools. And then wait. And wait, as some packages require a lot of time.

    And then repeat it again and again, as you are not allowed to upload packages to PyPI for projects you do not own.
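
    This repetition is exactly what a shared wheel cache removes. The mechanism is little more than the following (paths are illustrative):

    # build every requirement into a wheel once...
    pip wheel -r requirements.txt -w /var/cache/wheels
    # ...then let later jobs install from the cache instead of compiling
    pip install --find-links file:///var/cache/wheels -r requirements.txt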

    Argh you, protobuf

    There was a new release of the protobuf package. The OpenStack bot picked it up, sent a patch for review and it got merged.

    And all AArch64 CI jobs failed…

    It turned out that protobuf 3.12.0 was released with x86 wheels only. No source tarball. At all.

    This turned out to be a new maintainer’s mistake; after 2-3 weeks it was fixed in the 3.12.2 release.

    Another CI job then

    So I started looking at the ‘requirements’ project and created a new CI job for it, to check whether new package versions are available for AArch64. It took some time and several side updates as well (yak shaving all the way again).

    Stuff got merged and works now.

    Wheels cache

    While working on the above CI job I had a discussion with the OpenDev infra team about how to make it work properly. It turned out that there were old jobs doing exactly what I wanted: building wheels and caching them for the next CI tasks.

    It took several talks and patches from Ian Wienand, Clark Boylan, Jeremy ‘fungi’ Stanley and others. Several CI jobs got renamed, some were moved from one project to another. Servers got configuration changes etc.

    Now we have wheels built for both x86-64 and AArch64 architectures, covering CentOS 7/8, Debian ‘buster’ and Ubuntu ‘xenial/bionic/focal’ releases, for OpenStack ‘master’ and a few stable branches.

    Effect

    The requirements project has a quick ‘check-uc’ job running on AArch64 to make sure that all packages are available for both architectures. All OpenStack projects profit from it.

    In Kolla the ‘openstack-base’ image build went from 23:49 to just 5:21. The whole Debian/source build is now 57 minutes instead of 2 hours 20 minutes.

    Nice result, isn’t it?
