1. Sometimes one tweet is enough

    Two weeks ago I wrote on Twitter:

    Is there some company with spare AArch64 CPU cycles?

    Opendev (project behind OpenStack and some more) would make use of another aarch64 server offer.

    Current one is iirc paid by @Arm, hosted by @equinixmetal and operated by @LinaroOrg.

    Why did I do that? Maybe frustration, maybe burnout. Hard to tell. But I did. Without targeting any Arm related company, as I did not want to force anyone to do anything.

    Response

    A few hours later I got an email from Peter Pouliot from Ampere Computing, with information that they had provided hardware to Oregon State University Open Source Lab (OSUOSL in short) and that we might get nodes there.

    As I have no idea how exactly the Opendev infrastructure works, I added Kevin Zhao to the thread. He is a Linaro employee working on all instances of Linaro Developer Cloud and he has maintained all AArch64 resources provided to Opendev.

    Process

    Kevin added the Opendev infra admins: Clark Boylan and Ian Wienand. Peter added Lance Alberson from OSUOSL. I was just one of the addresses on the emails, watching how things went.

    And it went nicely. It was a pleasure to read how it progressed. Two days, 8 emails, and arrangements were made. Then changes to the Opendev infrastructure configuration followed, and a week later ‘linaro-us’ was no longer the only provider of AArch64 nodes.

    Result

    Opendev has two providers of AArch64 nodes now:

    • linaro-us-regionone
    • osuosl-regionone

    The first one is paid for by Arm Ltd, hosted at Equinix Metal (formerly Packet) and operated by Kevin Zhao from Linaro.

    The second one runs on Ampere-provided hardware and is operated by OSUOSL admins.

    The ‘check-arm64’ pipeline on Opendev CI gets less clogged. And I hope that more and more projects will use it to test their code not only on x86-64 ;D

  2. Five years @linaro.org

    Five years ago I got an email from Kristine with the title “Linaro Assignee On-Boarding”. Those were busy years.

    OpenStack

    According to Stackalytics I have done 1144 reviews so far:

    OpenStack project Reviews
    kolla 641
    kolla-ansible 445
    releases 14
    nova 11
    loci 9
    requirements 7
    devstack 4
    tripleo-ci 3
    pbr 2
    magnum 2

    Those were varied ones — from simple fixes to new features. Sometimes it was hard to convince projects that my idea made sense. There were patches with over 50 revisions. Some needed splitting into smaller ones, reordering etc.

    And in the end AArch64 is just another architecture in the OpenStack family. Linaro Developer Cloud managed to pass the official certifications etc.

    Python

    Countless projects. From suggesting ‘can you publish an aarch64 wheel’ to adding code to make it happen. Or working on getting manylinux2014 working properly.

    Linaro CI

    We have a Jenkins setup at Linaro, with several build machines attached to it and countless jobs running there. I maintain several of them, and my job is taking care of them and adding new ones when needed.

    Servers

    Due to my work in the Linaro Enterprise Group (later renamed to Linaro Datacenter & Cloud Group) I dealt with many AArch64 server systems. From HPe Moonshot to Huawei D06, with Qualcomm Falkor in the meantime. I used CentOS 7/8 on them, and Debian 9/10/11. I added the needed config entries to the Debian kernel to get them working out of the box (iirc excluding the M400 cartridges, which no one maintained any more).

    Conferences

    I gave the “OpenStack on AArch64” talk at three conferences in a row. First at Linaro Connect LAS16 (as a group talk), then at Pingwinaria in Poland and finally in Kiev, Ukraine. Same slides, translated English -> Polish -> English and updated each time.

    Since then I do not give talks at conferences any more. I prefer to attend other people’s talks, or spend time on hallway discussions. Linaro Connect events were a good place for them (and I hope they will be in the future too).

    At the end of each one there was a listing of every person who had worked at Linaro for 5 or 10 years. Due to the pandemic I missed that part. But I hope for that memorial glass ;D

  3. Let’s play with some new stuff

    Two days ago Arm announced the Arm v9 architecture. Finally we can discuss it in the open instead of saying “I prefer not to talk about this” (because of NDAs etc.).

    New things

    There are several new things in the v9 version: SVE2, Spectre/Meltdown-like mitigations, memory tagging, realms… And some of them are present in v8 already (mitigations are v8.5 IIRC).

    Hardware

    But how will it work in hardware? There were no new CPU core announcements, so we need to wait for Arm Neoverse N2 derived designs (as it will be v9).

    As usual, mobile phones and tablets will get it first. Then probably Apple will put it into newer MacBooks. A decade later, servers and workstations.

    And there are always those machines in labs. Packed with NDAs, access queues etc…

    $ uname -a
    Linux bach 5.11.0-73 #1 SMP Fri Mar 12 11:34:12 UTC 2021 aarch64 GNU/Linux
    $ head -n8 /proc/cpuinfo
    processor       : 0
    BogoMIPS        : 50.00
    Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp rme
                      asimdhp cpuid sb16 fp16 asimdrdm uafp jscvt fcma sb128 lrcpc
                      dcpop sha3 sm3 sm4 asimddp afd sha512 sve asimdfhm dit ilrcpc
                      rng flagm tme ssbs paca pacg sb dcpodp ac97 flagm2 frint mte
                      sve2 bf16 bti
    CPU implementer : 0x4a
    CPU architecture: 9
    CPU variant     : 0x0
    CPU part        : 0xf02
    CPU revision    : 0
    

    And sorry, no benchmarks allowed on non-mass-market hardware. The good part? It is still AArch64, so no recompilation is required. Some software packages will get their first set of SVE2 improvements soon.

    That was only a joke

    The truth is that even labs do not have such stuff. While most of the features listed in the /proc/cpuinfo output exist in newer Arm v8 cores, some of them were added there for fun:

    • afd (April Fools’ Day)
    • sb16 and sb128 (SoundBlaster 16/128)
    • ac97 (yet another sound device)
    • uafp (Use-After-Free Protection)

    Thanks

    I would like to thank a few people.

    Arnd Bergmann pointed out that two CPU-related fields were wrong:

    • implementer should be an ASCII code (so I changed it from 0xe3 to 0x4a (‘J’ as in Joke))
    • part field is just 12 bits (so I changed it from 0x1f02 to 0xf02)

    Mark Brown spotted the duplicated ‘rng’ feature. And Ryan Houdek wrote about the ‘tme’ feature.

  4. From a diary of AArch64 porter — manylinux2014

    Python wheels… Everyone loves them, many people curse when they are not available for their setup. I am in both groups every time I have to do something more complex with Python.

    So today I will show how to build a Python wheel in a quick way.

    What is manylinux?

    The Linux world has a lot of distributions. Even more when you add their releases into the mix. And they ship different versions of Python. At the same time we have PyPI, which works as a repository of ready-to-use Python packages.

    So the manylinux idea was created to define minimal requirements for building Python packages. To make sure that you can install them on any distribution.

    So far several versions have been built:

    name base distribution PEP
    manylinux1 CentOS 5 PEP 513
    manylinux2010 CentOS 6 PEP 571
    manylinux2014 CentOS 7 PEP 599
    manylinux_2_24 Debian 9 ‘stretch’ PEP 600

    As you can see, old releases are used to make sure that the resulting binaries work on any newer distribution. manylinux2014 added non-x86 architectures (aarch64, ppc64le, s390x).

    Each image contains several versions of Python binaries ready to use in the /opt/python/ directory.
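
    For reference, listing that directory shows something like this (a sketch; the exact set of Python versions depends on the image build):

    $ docker run --rm quay.io/pypa/manylinux2014_aarch64 ls /opt/python
    cp36-cp36m
    cp37-cp37m
    cp38-cp38
    cp39-cp39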

    Manylinux images are distributed as container images in the pypa repository on quay.io. They can be run under Docker, Kubernetes, Podman etc.

    Source code is available in ‘manylinux’ repository on GitHub.

    Let’s use it!

    My work requires me to build TensorFlow 1.5.15, which depends on NumPy 1.18.*. Neither of them is available as a wheel for the AArch64 architecture.

    So let me run the container and install NumPy dependencies:

    $ docker run -it -u root -v $PWD:/tmp/numpy quay.io/pypa/manylinux2014_aarch64
    [root@fa339493a417 /]# cd /tmp/numpy/
    [root@fa339493a417 /tmp/numpy/]# yum install -y blas-devel lapack-devel
    [root@fa339493a417 /tmp/numpy/]# 
    

    The image has several versions of Python installed and I want to build NumPy 1.18.5 for each of them, so my build script is simple:

    for py in /opt/python/cp3[6789]*
    do
        pyver=`basename $py`
        $py/bin/python -mvenv $pyver
        source $pyver/bin/activate
        pip wheel numpy==1.18.5
        deactivate
    done
    

    The result is simple — I got a set of wheel files, one per Python version. But it is not the end of the work, as the NumPy libraries depend on the blas/lapack we installed into the system.

    add libraries to the wheel

    There is a tool we need to run: “auditwheel”. It inspects the wheel file, all library symbols used, external libraries etc. Then it bundles the required libraries into the wheel file.
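
    The invocation itself is a single command (shown here for the cp39 wheel built above; auditwheel should already be present in the manylinux image):

    auditwheel repair numpy-1.18.5-cp39-cp39-linux_aarch64.whl

    which produces output like this: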

    INFO:auditwheel.main_repair:Repairing numpy-1.18.5-cp39-cp39-linux_aarch64.whl
    INFO:auditwheel.wheeltools:Previous filename tags: linux_aarch64
    INFO:auditwheel.wheeltools:New filename tags: manylinux2014_aarch64
    INFO:auditwheel.wheeltools:Previous WHEEL info tags: cp39-cp39-linux_aarch64
    INFO:auditwheel.wheeltools:New WHEEL info tags: cp39-cp39-manylinux2014_aarch64
    INFO:auditwheel.main_repair:
    Fixed-up wheel written to /root/wheelhouse/numpy-1.18.5-cp39-cp39-manylinux2014_aarch64.whl
    

    The file size changed from 13 467 772 to 16 806 338 bytes and the resulting wheel can be installed on any distribution.
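
    Installing such a wheel is then a normal pip call (the path below is just where auditwheel put the file in my container):

    pip install /root/wheelhouse/numpy-1.18.5-cp39-cp39-manylinux2014_aarch64.whl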

    Let me summarise

    Manylinux is a great tool for providing Python packages. It is easy to use on a developer’s machine or in CI. And it makes the lives of Python users much easier.

  5. U-Boot and generic distro boot

    Small board computers (SBC) usually come with U-Boot as firmware. There can be more components like Arm Trusted Firmware, OPTEE etc., but what the user interacts with is U-Boot itself.

    Since 2016 there has been the CONFIG_DISTRO_DEFAULTS option in the U-Boot configuration. It selects defaults suitable for booting general purpose Linux distributions. Thanks to it, a board is able to boot most OS installers out of the box without any user interaction.

    How?

    How does it know how to do that? There are several scripts and variables involved. Run the “printenv” command in the U-Boot shell and you should see some of them, named like “boot_*”, “bootcmd_*”, “scan_dev_for_*”.

    In my example I will use the environment from a RockPro64 running U-Boot 2021.01.

    I will prettify all scripts for readability. Script contents may be expanded — in such cases I will give the name as a comment and then its content.

    Let’s boot

    The first variable used by U-Boot is “bootcmd”. U-Boot reads it to know how to boot the operating system on the board.

    In our case this variable has “run distro_bootcmd” in it. So what is in there on the RockPro64 SBC:

    setenv nvme_need_init
    for target in ${boot_targets}
    do 
        run bootcmd_${target}
    done
    

    It marks that the on-board NVME needs initialization and then goes through a set of scripts using the order from the “boot_targets” variable. On the RockPro64 this variable sets the “mmc0 mmc1 nvme0 usb0 pxe dhcp sf0” order, which means:

    • eMMC
    • MicroSD
    • NVME
    • USB storage
    • PXE
    • DHCP
    • SPI flash

    Both eMMC and MicroSD look similar: ‘devnum=X; run mmc_boot’ — set the MMC device number and then try to boot by running the ‘mmc_boot’ script:

    if mmc dev ${devnum}; then 
        devtype=mmc; 
        run scan_dev_for_boot_part; 
    fi
    

    The NVME one initializes the PCIe subsystem (via “boot_pci_enum”), then scans for NVME devices (via “nvme_init”) and does similar stuff (here with expanded scripts):

    # boot_pci_enum
    pci enum
    
    # nvme_init
    if ${nvme_need_init}; then 
        setenv nvme_need_init false;
        nvme scan;
    fi
    
    if nvme dev ${devnum}; then 
        devtype=nvme; 
        run scan_dev_for_boot_part; 
    fi
    

    USB booting goes with “usb_boot”:

    usb start;
    if usb dev ${devnum}; then 
        devtype=usb; 
        run scan_dev_for_boot_part;
    fi
    

    PXE network boot? Initialize USB, scan PCI, get network configuration, do PXE boot:

    # boot_net_usb_start
    usb start
    
    # boot_pci_enum
    pci enum
    
    dhcp; 
    if pxe get; then 
        pxe boot; 
    fi
    

    The DHCP method feels like a last resort one (do not ask me for the meaning of all those variables):

    # boot_net_usb_start
    usb start
    
    # boot_pci_enum
    pci enum
    
    if dhcp ${scriptaddr} ${boot_script_dhcp}; then 
        source ${scriptaddr}; 
    fi;
    
    setenv efi_fdtfile ${fdtfile}; 
    setenv efi_old_vci ${bootp_vci};
    setenv efi_old_arch ${bootp_arch};
    setenv bootp_vci PXEClient:Arch:00011:UNDI:003000;
    setenv bootp_arch 0xb;
    
    if dhcp ${kernel_addr_r}; then 
        tftpboot ${fdt_addr_r} dtb/${efi_fdtfile};
    
        if fdt addr ${fdt_addr_r}; then 
            bootefi ${kernel_addr_r} ${fdt_addr_r}; 
        else 
            bootefi ${kernel_addr_r} ${fdtcontroladdr};
        fi;
    fi;
    
    setenv bootp_vci ${efi_old_vci};
    setenv bootp_arch ${efi_old_arch};
    setenv efi_fdtfile;
    setenv efi_old_arch;
    setenv efi_old_vci;
    
    

    And the last method is SPI flash:

    busnum=0
    
    if sf probe ${busnum}; then
        devtype=sf;
    
        # run scan_sf_for_scripts; 
        ${devtype} read ${scriptaddr} ${script_offset_f} ${script_size_f}; 
        source ${scriptaddr}; 
        echo SCRIPT FAILED: continuing...
    fi
    

    Search for boot partition

    Note how block devices end with one script: “scan_dev_for_boot_part”. What it does is quite simple:

    part list ${devtype} ${devnum} -bootable devplist; 
    env exists devplist || setenv devplist 1; 
    
    for distro_bootpart in ${devplist}; do 
        if fstype ${devtype} ${devnum}:${distro_bootpart} bootfstype; then 
            run scan_dev_for_boot; 
        fi; 
    done; 
    setenv devplist
    

    We know the type and number of the boot device from the previous step, so now we check for bootable partitions. That means the EFI System Partition for GPT disks and partitions marked as bootable in case of MBR. If none are present then the first one is assumed to be the bootable one.

    Search for distribution boot information

    Once we have found boot partitions it is time to search for boot data with the “scan_dev_for_boot” script:

    echo Scanning ${devtype} ${devnum}:${distro_bootpart}...;
    for prefix in ${boot_prefixes}; do 
        run scan_dev_for_extlinux; 
        run scan_dev_for_scripts; 
    done;
    
    run scan_dev_for_efi;
    

    Old style OS configuration

    First U-Boot checks for the “extlinux/extlinux.conf” file, then goes for the old style “boot.scr” (in uimg and clear text formats). Both of them are checked in the / and /boot/ directories of the checked partition (those names are in the “boot_prefixes” variable).
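
    For reference, extlinux.conf is a plain text file. A minimal sketch (the kernel/initrd file names, dtb directory and command line below are made up; each distribution generates its own) may look like this:

    label Linux
        kernel /boot/vmlinuz
        initrd /boot/initrd.img
        fdtdir /boot/dtbs/
        append root=/dev/mmcblk0p2 rw console=ttyS2,1500000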

    Let us look at the scripts which handle both cases:

    # scan_dev_for_extlinux
    if test -e ${devtype} ${devnum}:${distro_bootpart} ${prefix}${boot_syslinux_conf};then 
        echo Found ${prefix}${boot_syslinux_conf}; 
    
        # run boot_extlinux; 
        sysboot ${devtype} ${devnum}:${distro_bootpart} any ${scriptaddr} ${prefix}${boot_syslinux_conf}
    
        echo SCRIPT FAILED: continuing...; 
    fi
    
    # scan_dev_for_scripts
    for script in ${boot_scripts}; do 
        if test -e ${devtype} ${devnum}:${distro_bootpart} ${prefix}${script}; then 
            echo Found U-Boot script ${prefix}${script}; 
    
            # run boot_a_script; 
            load ${devtype} ${devnum}:${distro_bootpart} ${scriptaddr} ${prefix}${script}; 
            source ${scriptaddr}
    
            echo SCRIPT FAILED: continuing...; 
        fi; 
    done
    

    EFI compliant OS

    And finally U-Boot checks for EFI style BootOrder variables and generic OS loader path:

    # scan_dev_for_efi
    setenv efi_fdtfile ${fdtfile};
    for prefix in ${efi_dtb_prefixes}; do
        if test -e ${devtype} ${devnum}:${distro_bootpart} ${prefix}${efi_fdtfile}; then 
            # run load_efi_dtb; 
            load ${devtype} ${devnum}:${distro_bootpart} ${fdt_addr_r} ${prefix}${efi_fdtfile}
        fi;
    done;
    
    # run boot_efi_bootmgr;
    if fdt addr ${fdt_addr_r}; then 
        bootefi bootmgr ${fdt_addr_r};
    else 
        bootefi bootmgr;
    fi
    
    if test -e ${devtype} ${devnum}:${distro_bootpart} efi/boot/bootaa64.efi; then
        echo Found EFI removable media binary efi/boot/bootaa64.efi; 
    
        # run boot_efi_binary; 
        load ${devtype} ${devnum}:${distro_bootpart} ${kernel_addr_r} efi/boot/bootaa64.efi; 
        if fdt addr ${fdt_addr_r}; then 
            bootefi ${kernel_addr_r} ${fdt_addr_r};
        else 
            bootefi ${kernel_addr_r} ${fdtcontroladdr};
        fi
    
        echo EFI LOAD FAILED: continuing...;
    fi; 
    setenv efi_fdtfile
    

    Booted

    At this moment the board should be in either the OS or in an OS loader (being an EFI binary).

    Final words

    All that work of searching for boot media, boot scripts, boot configuration files, OS loaders, EFI BootOrder entries etc. is done without any user interaction. Every bootable medium is checked and tried.

    If I added SATA controller support to the U-Boot binary then all disks connected to it would also be checked. Without any code/environment changes from my side.

    So if your SBC has some weird setup then consider moving to the distro generic one. Boot a fresh mainline U-Boot, store a copy of your existing environment (“printenv” shows it) and then reset to the generic one with the “env default -a” command. You will probably need to set MAC addresses for network interfaces.
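
    In the U-Boot shell this can look more or less like this (“printenv” to note down what you want to keep, “saveenv” to write the reset environment back to storage; the MAC address is just an example):

    printenv
    env default -a
    setenv ethaddr 02:11:22:33:44:55
    saveenv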

  6. Time to change something on the blog

    I had some ideas for improvements to the website. And finally found some time to implement them.

    Series of posts

    One of the changes was implementing ‘series of posts’ to make it easier to find posts on one topic. For now there are two of them:

    • Standards in Arm space
    • From a diary of AArch64 porter

    Each post in a series has a list at the top. I may group some other posts into additional series.

    Mobile devices fixes

    From time to time I get emails from Google bots saying that some things on my website need improvements. Most of them were about mobile devices. So I went through Lighthouse audits and made some changes:

    • top menu is one entry per line
    • clickable lists have more padding between entries
    • removed ‘popular tags’ group from sidebar as it was not used
    • display more entries in ‘recent posts’ sidebar section

    About me section

    I also added an ‘About me’ section to the sidebar. I often give links to my blog posts in several instant messaging channels (IRC, Discord, Telegram) and when people realize that I wrote them there is a strange moment:

    <hrw> irc_user: https://marcin.juszkiewicz.com.pl/2020/06/17/ebbr-on-rockpro64/

    <irc_user> hrw: yes, I know that article, that is why I want to try it, I’ll get the RockPro64 today if all goes well! :)

    <irc_user> I wonder whether the images from uboot-images-armv8-2020.10-2.fc33.noarch.rpm also have EBBR support for rockpro64 though, without needing to download a random binary from Marcin’s website :)

    <hrw> iirc Fedora images are not built for writing into SPI flash

    <hrw> irc_user: and you reminded me that I need to add text block to website

    <irc_user> hrw: ah, you are Marcin! :-)

    <irc_user> now I feel stupid

    <irc_user> excellent blog you have! thanks so much for that, I learned a lot!

    <hrw> thx

    Now the info about my nickname is right at the top of the page (unless on mobile).

    Useful tables

    I also added a list of tables from my side projects:

    • BSA and SBSA checklist
    • Linux system calls for all architectures

    I know that they have some users but now both are more visible.

  7. Standards in Arm space (part III)

    In the first part I went from board files and ugly bootloaders to SBSA/SBBR and EBBR. The second part went through BSA, BBR etc. TLAs and what changes they brought into OFLAs.

    And both were about specifications written for developers (both hardware and software). This time I will write something about ones written for marketing people.

    SBSA, SBBR, EBBR, LBBR etc

    Who is going to remember all those acronyms and what they mean? Only those who really have to. The rest of the people need something easier to remember.

    ServerReady

    Design a System that “Just Works”

    In 2018 Arm introduced the ServerReady program. The name sounds much better than “hardware needs to comply with SBSA and firmware has to be SBBR compliant”, right? Ah, and “has to pass ACS” (which stands for Architecture Compliance Suite).

    Yeah — one simple name instead of three acronyms. So imagine a situation where you need to convince your boss that the project needs a serious AArch64 machine for Continuous Integration builds. You can say “We buy XYZ because it is a ServerReady system” and they assume that it is a server, so IT should be able to handle it.

    Try to say “we buy XYZ because it is SBSA and SBBR compliant and passes ACS” and you can get asked about your mental health…

    SystemReady

    But not every AArch64 system is server class hardware. Or needs the whole UEFI, ACPI etc. thing in firmware.

    So in 2020 Arm came up with the SystemReady program. It is basically ServerReady renamed and extended to cover a wider selection of hardware and firmware options.

    It came around the same time as BSA, BBR, LBBR etc., which I described in the second part already, so I will not repeat what those acronyms mean.

    certification bands

    There are four ‘bands’ defined:

    Certification    Description            Hardware specs   Firmware spec
    SystemReady SR   ServerReady            BSA + SBSA       SBBR
    SystemReady ES   Embedded Server (*)    BSA              SBBR
    SystemReady IR   IoT Ready              BSA              EBBR
    SystemReady LS   LinuxBoot ServerReady  BSA + SBSA       LBBR

    *) the spec says “Embedded ServerReady” but it is probably an error, as it is also mentioned as “Embedded Server” in a few places outside of the specification.

    What does that mean for developers?

    Certification    Hardware type   Usual firmware
    SystemReady SR   server class    UEFI + ACPI
    SystemReady ES   some SBC        UEFI + ACPI
    SystemReady IR   some SBC        (UEFI or U-Boot) + DTB
    SystemReady LS   server class    LinuxBoot

    Where UEFI means Tianocore EDK2 or similar. And U-Boot needs EFI layer enabled (to fulfill EBBR requirements).

    recertification

    The SystemReady specification says that SR systems are also ES compliant. There is no need for recertification if someone wants to put on the other sticker.

    There are changes in progress. One of them is a Devicetree requirement for the IR band. So not every ES system will be compliant with IR unless the firmware is changed.

    BTW — the specification mentions 32-bit systems. But only for IoT Ready, as they are not covered by BSA.

    Conclusion

    The creation of the ServerReady and later SystemReady specifications was a good move. We got a simple name which can be understood by mere mortals.

    Developers and other interested people can go deeper and read about BSA, BBR, EBBR, LBBR, SBBR, SBSA and other TLAs and OFLAs.

  8. AArch64 boards and perception

    Recently I had a discussion with A13 and realized that people may have a different perception of how AArch64 boards work:

    Sahaj told me that you can just install generic images on honeycomb

    it kinda blows my mind

    How did we get to that point?

    Servers are boring, right?

    I started working on AArch64 in 2012. First in fast models written by Arm developers, then also in QEMU. Both used the direct kernel boot method without any firmware or bootloaders.

    In 2013 I moved from Canonical/Linaro to Red Hat. And there we got a server from Applied Micro. I do not remember how it booted, as I used it for building software. Some time later we had Mustangs and all of them were booting with UEFI.

    Then I got a Mustang at home. Fedora and RHEL were booting fine. Then CentOS and Debian joined. All of them used grub-efi, like my x86-64 desktop or laptop.

    Time passed and I got other servers to work with: HPe M400, ThunderX, ThunderX2, Falkor, D05 etc. Each of them was running UEFI, either Tianocore based or a commercial one.

    And to install an operating system all I needed was to boot generic install media.

    SBC hell

    At the same time the SBC world was fighting with users. Each vendor/SoC/board had to be treated specially, as there was no way to store firmware on the board (as SPI flash is very expensive).

    So depending on the SBC, your firmware could be written:

    • at some special offset from the start of the microSD card
    • at the beginning of a partition of a special type
    • in a file on vfat partition of any type
    • in a file on EFI System Partition (also using vfat)

    Some offsets forced the use of “obsolete” MBR partitioning, as there was no space left for GPT information. Meanwhile UEFI systems require GPT, not MBR.
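
    As an illustration, RK3399 based boards (like the RockPro64) expect the first stage loader at sector 64 and the main U-Boot image at sector 16384 of the boot medium, so provisioning a microSD card looks roughly like this (the device name is an example, double check it before running dd):

    dd if=idbloader.img of=/dev/sdX seek=64 conv=notrunc
    dd if=u-boot.itb of=/dev/sdX seek=16384 conv=notrunc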

    It also generated a lot of wrong information, like “this file needs to be named in UPPERCASE (on a case insensitive filesystem)” or “it needs to be the first file written to a partition”. Some kind of “SBC boot voodoo”.

    So each SBC required its own boot media — you could not take it to a board with some other SoC and expect it to start. Or you spent some time creating some kind of hybrid image which had a few bootloaders written to it. The easier way was to prepare separate boot media images per SBC.

    From time to time there was an SBC with onboard flash available for storing firmware. Some people made use of it, others continued doing the offset crap as they were used to it.

    SBBR, EBBR came

    Recent years brought us several specifications from Arm. First was SBBR, which stands for Server Base Boot Requirements. It says which features should be present in firmware (you can read more in my previous post about Arm standards).

    As SBCs are not servers, a new specification was created for them: EBBR (E means Embedded). It basically says “try to follow what server does” and has some requirements either dropped or relaxed.

    Both were designed to make distributions’ lives easier. Never mind whether it is BSD, Linux or Microsoft Windows — they have to put an EFI bootloader (like grub-efi) in the EFI System Partition and the system will boot on any supported SBBR/EBBR hardware.

    For example I have a USB pendrive with Debian “bullseye” installed. It boots fine on RockPro64 and Espressobin SBCs (both have EBBR compliant U-Boot stored in on-board flash) and on Mustang and HoneyComb (both with SBBR compliant UEFI in on-board flash).

    Habits. Good, bad, forced.

    So it looks like the way an AArch64 system should boot depends on what your habits are.

    If you started from servers then the SBBR/EBBR way is your way, and most SBC systems, with their offsets and “other mumbo jumbo”, look weird to you.

    If all you used were SBCs then going into the SBBR/EBBR world can be “zOMG, it just magically works!”.

    Note to SBC vendors

    Most SBCs already follow the EBBR standard or can easily be made compliant. Never mind whether you are using mainline U-Boot or your own fork (but then consider upstreaming, as a board’s life may be longer than you expect).

    Enable the CONFIG_DISTRO_DEFAULTS option in the config. Build U-Boot, store it on the board and boot. Then erase whatever environment you used before with the “env default -a” command.
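
    With mainline U-Boot that is typically not much more than this (RockPro64 taken as an example; the cross compiler prefix may differ on your machine):

    make rockpro64-rk3399_defconfig
    # check that CONFIG_DISTRO_DEFAULTS=y ended up in .config (most AArch64
    # defconfigs already have it); rk3399 boards also expect a TF-A binary
    # passed via BL31=... - see the board documentation
    make CROSS_COMPILE=aarch64-linux-gnu-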

    On the next reboot your SBC will iterate over the “boot_targets” variable and check for a few standard boot files:

    • extlinux/extlinux.conf
    • boot.scr.uimg
    • boot.scr
    • /efi/boot/bootaa64.efi

    When it finds something then it handles that and boots. If not then it goes to the next boot target.

    This allows handling of basically every operating system used on Arm systems. And allows booting a generic install ISO (as long as the OS on it supports the device).

    Bonus points if your SBC has some on-board flash or eMMC it can boot from. Then the firmware can be stored there, so the user does not even have to worry about it.
