1. FOSDEM 2021 was the best online event ever

    My family got used to the fact that I am not available at the beginning of February, because of my FOSDEM trip. This year was both not so different and different at the same time.

    Due to the COVID-19 pandemic, FOSDEM 2021 was online. So there was no reason for any trip other than to local shops to buy some Belgian beers. And I was still not available to anyone during the weekend.

    Online? It will be terrible!

    During 2020 I attended several online events. Some of them I prefer not to remember: terrible recordings of talks, bandwidth issues. At some, organizers did not manage to get presenters to agree on an online presence, so sessions had to be dropped just as they were supposed to start.

    Matrix to the rescue

    The FOSDEM team decided to organize a way for attendees to chat with each other. Matrix was set up on chat.fosdem.org with rooms for basic FOSDEM stuff, a room for each devroom, rooms for continuing discussions after talks… There was also a “Virtual Janson Bar” for food/drink discussions (renamed to “Virtual Delirium” for after-conference hours).

    The cloakroom had its own channel, the food trucks had another one. People were sending photos of their food/waffles/beer/etc.

    FOSDEM @fosdem tweeted:

    Oh and before you leave don’t forget to pickup your luggage and coat at the cloackroom https://chat.fosdem.org/#/room/#cloakroom:fosdem.org 🙃

    Nearly every room had a Jitsi widget for video chat. It was used mostly for Q&A sessions and after-talk discussions.

    And it worked well. There were lively discussions, with questions asked during a talk and then answered during the Q&A session. Links to many projects and interesting additional pages were posted that way.

    Streaming all the way

    One of the things FOSDEM is famous for is networking at the event. And live streaming of all rooms. This year was no different. My monitor was split into two Firefox windows: the left side kept discussions on the Matrix server, the right side had the live streaming schedule and the video of the currently attended talk. At the same time my phone had the “FOSDEM Companion” app open with bookmarks, to make it easy to check which talks I wanted to see.

    At some moments I had two videos started — one waiting for a presentation to begin and a second with some other talk running. Once the new one started, I closed the watched one. A simple method of watching part of a talk to see whether it is interesting or not.

    There were some talks I dropped during the first few minutes, moving to another one. Something quite hard to do when you are in the middle of a room at a normal FOSDEM.

    Videos of talks will be available during the next few days. I have a page with FOSDEM talks with slides/video links which will get updated over the coming days.

    At the same time at ULB

    Normally FOSDEM takes place in Brussels at ULB. There were some attendees there, so we got messages like this on Matrix:

    I went to tram and is was nearly empty. Did I messed timezones or what?

    I got permission from Luilegeant to post some pictures from ULB, so you can see what we missed this year:

    FOSDEM 2021
    Entry to J building
    Janson bar was closed
    crane instead of food trucks

    Looks like typical FOSDEM weather ;D

    Some final words

    I enjoyed FOSDEM 2021. It was different from the usual ones, but adding Matrix for chat gave that feeling of being with other attendees. I hope that other online events in 2021 will copy it.

    Shin Ice @_ShinIce tweeted:

    #FOSDEM21 is coming to an end and it was awesome as always 🤘

    we have an estimation of ~33.6k attendees for the conference and ~20k attendees for today…impressive…and now take this numbers and try to fill the ULB 😱

    Compare that with the usual 8-9 thousand in previous years.

    There were many people from other continents taking part in the conference just because it being online allowed them to. For several of them it was nighttime the whole time.

    See you at ULB in 2022!

    Written by Marcin Juszkiewicz
  2. Standards are boring

    We have made Arm servers boring.

    Jon Masters

    Standards are boring. Satisfied users may not want to migrate to other boards the market tries to sell them.

    So the Arm market is flooded with piles of single board computers (SBC). Often they comply with standards only when it comes to connectors.

    But our hardware is not standard

    It is not a matter of ‘let’s produce UEFI-ready hardware’ but rather ‘let’s write EDK2 firmware for boards we already have’.

    Look at the Raspberry Pi then. It is shitty hardware but it got popular. And a group of people wrote UEFI firmware for it. Probably even without vendor support.

    Start with EBBR

    Each new board should be EBBR compliant from the start. Which is easy — build ‘whatever hardware’ and put a properly configured U-Boot on it. Upstreaming support for your small device should not be hard, as you often base it on some already existing hardware.

    Add 16 MB of SPI flash to store firmware. Your users will be able to boot an ISO without wondering where on the boot media they need to write bootloaders.

    Then work on EDK2 for the board. Do SMBIOS (easy) and keep your existing Device Tree. You are still EBBR. Remember to upstream your work — some people will complain, some will improve your code.

    Add ACPI, go SBBR

    The next step is moving from Device Tree to ACPI. It may take some time to understand why there are so many tables and what ASL is. But as several other systems show, it can be done.
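
    If you want to poke at real tables, the acpica-tools package is enough to dump and disassemble them on any Linux machine. A quick sketch (acpidump names the output files after table signatures, so yours may differ):

    sudo acpidump -b    # dump all ACPI tables to *.dat files
    iasl -d dsdt.dat    # disassemble the DSDT back into ASL source
    less dsdt.dsl       # devices and methods in readable form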

    And this brings you to SBBR compliance. Or SystemReady ES if you like marketing.

    SBSA for future design

    Doing a new SoC tends to be “let us take the previous one and improve it a bit”. So this time change the habit a bit and make your next SoC compliant with SBSA level 3. All the needed components are probably already included in your Arm license.

    Grab the EDK2 support you did for the previous board. Look at the QEMU SBSA Reference Platform support, look at other SBSA-compliant hardware. Copy and reuse their drivers, their code.
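
    You can even play with that platform without any hardware. A rough sketch of running it under QEMU, assuming firmware images built from edk2-platforms (file names and options may differ between versions):

    qemu-system-aarch64 -machine sbsa-ref -cpu cortex-a72 -m 4G \
        -drive if=pflash,format=raw,file=SBSA_FLASH0.fd \
        -drive if=pflash,format=raw,file=SBSA_FLASH1.fd \
        -serial stdio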

    Was it worth it?

    In the end you will have SBSA-compliant hardware running SBBR-compliant firmware.

    Congratulations, your board is SystemReady SR compliant. Your marketing team may write that you are on the same list as Ampere with their Altra servers.

    Users buy your hardware and can install whatever BSD or Linux distribution they want. Some will experiment with Microsoft Windows. Others may work on porting Haiku or another exotic operating system.

    But none of them will have to think “how to get this shit running”. And they will tell their friends that your device is as boring as it should be when it comes to running an OS on it == more sales.

    Written by Marcin Juszkiewicz
  3. I/O plate for APM Mustang

    The Applied Micro Mustang uses the standard Mini-ITX form factor, just like many PC mainboards. The problem is that, contrary to PC ones, it does not come with an I/O plate. So I decided to make one using a 3D printer.

    First version

    I loaded the Tinkercad page, did quick measurements and sent an STL to my brother-in-law Szymon for printing. That’s how v1 was born.

    It was not good:

    v1

    As you may see, several ports were misaligned (or too small). So I moved the ports a bit. The other problem was thickness — it turned out that 1 mm of plastic was too weak. The second version got printed 2 mm thick:

    v2

    A different filament made it look ugly. And it shows that my measurements were wrong.

    FreeCAD

    My brother-in-law uses FreeCAD for his designs, so I decided to recreate the I/O plate model in this tool. There was some cursing involved at their approach to making holes in objects. On the other hand, positioning of holes was much easier to do.

    There were several changes and versions moved fast. I decided not to put any 3D text on the model for several reasons:

    • it looked shitty on dark filament
    • low resolution made it look even worse
    • the FreeCAD way of doing 3D text feels overcomplicated

    The fifth version was 1 mm shorter, as I had to add space for the screws in the case:

    v5

    At this point I left the APM Mustang with my brother-in-law to make changes easier.

    In the meantime we got some hints from people more involved in 3D printing and decided to make some other changes to how the model is done.

    Mesh design

    FreeCAD has this idea of ‘mesh design’ — you select an object, create a mesh of it and then export it as STL. The problem is that this part is completely broken under Ubuntu (used by Szymon) — it creates mesh elements but with 0 points/faces.

    Seventh version

    The 6th version existed for a moment and then the 7th came as the first one using a mesh:

    v7

    Not that it changed much ;D

    Final one

    A few more iterations, another set of measurements and finally we got the version which went into use:

    v10

    It is still far from perfect but does its job. As a 3D printing n00b I have no idea why there are vents around the port holes. Maybe something is wrong in the printer setup or slicer configuration. Suggestions are welcome.

    One button is for power, the other for reset, as the case had just one button on the front panel (which I do not use).

    Files to download (MIT licensed):

    Written by Marcin Juszkiewicz
  4. Switched to BorgBackup

    The old joke says that there are two types of people:

    • those who make backups
    • those who will make backups

    I was in the second group a long time ago and then moved to the first one.

    Duplicity

    So far I had used Duplicity along with its frontend called Duply. It was installed by default on Ubuntu systems and was quite easy to set up.

    But getting files restored from it was a pain. And consistency was a problem. Especially when I wanted to restore one directory from a quite old copy — it turned out that one file (of many) was damaged, so the whole backup was useless…

    BorgBackup

    So I looked for alternatives and BorgBackup was one of the suggestions.

    The feature list was long: FOSS, several platforms, compression etc. There were two things which caught my attention:

    • deduplication
    • mountable backups with FUSE

    Deduplication

    So why deduplication? Because it allows me to back up several machines into one place, and separate copies of git repositories or source code will not take extra space:

                           Original size      Compressed size    Deduplicated size
    All archives:                2.02 TB              1.57 TB            118.85 GB
                           Unique chunks         Total chunks
    Chunk index:                 1060634             18670536
    

    Or on my server where backups are run hourly:

                           Original size      Compressed size    Deduplicated size
    All archives:                7.29 TB              5.36 TB             11.35 GB
                           Unique chunks         Total chunks
    Chunk index:                  359783            260310747
    

    A nice difference compared to the Duplicity setup I used before.
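
    In practice it is just a matter of pointing every machine at the same repository. A sketch with made-up host and repository names:

    # one shared repository for all machines
    borg init --encryption=repokey ssh://backup@nas/./backups.borg
    # run on each machine; identical chunks get stored only once
    borg create --stats ssh://backup@nas/./backups.borg::puchatek-{now} /home /etc
    # prints the “All archives” / “Chunk index” summary shown above
    borg info ssh://backup@nas/./backups.borg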

    FUSE mounting of backups

    Instead of checking which options I have to use to restore that one directory from a 2-month-old backup, I can now mount each backup using FUSE:

    borgmatic mount --archive puchatek-2021-01-01 --mount-point /tmp/del/1
    

    And then just copy whatever file(s) I want to restore. Very handy.
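
    The rest is plain file operations, followed by unmounting. For example (paths are made up):

    cp -a /tmp/del/1/home/marcin/important.txt ~/restored/
    borgmatic umount --mount-point /tmp/del/1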

    Borgmatic

    What Duply was to Duplicity, Borgmatic is to BorgBackup: a simple frontend hiding most of the internals behind an easy-to-use command line interface.

    Configure once, then use simple commands like “borgmatic info”, “borgmatic prune” etc. All needed settings are stored in a config file.
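
    A minimal sketch of such a setup, assuming the config layout borgmatic used around this time (directories and repository are examples):

    mkdir -p /etc/borgmatic
    cat > /etc/borgmatic/config.yaml <<'EOF'
    location:
        source_directories:
            - /home
            - /etc
        repositories:
            - ssh://backup@nas/./backups.borg
    retention:
        keep_hourly: 24
        keep_daily: 7
        keep_monthly: 6
    EOF
    borgmatic init --encryption repokey
    borgmatic    # create, prune and check according to the config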

    Will it work for me?

    Time will show whether it works for me. Machines’ backup copies are done; syncing them between machines needs some improvements.

    Written by Marcin Juszkiewicz
  5. System calls by kernel version

    As you know, I have a page with a list of Linux system calls which I usually update at every rc1 release.

    Recently Pavel Šnajdr asked me on Twitter:

    Could you also track the first kernel version supporting each syscall? That’d make the table a lot more useful for assembler/calling_it_raw junkies :)

    I decided to give it a try.

    I decided to check every -rc1 release, as this is the point in history when most of the new code is already merged. The first kernel version with an rc1 tag was v3.0-rc1. A small shell script later, I had tables for each rc1 release between 3.0-rc1 and 5.11-rc1 and started digging.
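
    Nothing fancy; roughly a sketch like this one (x86-64 table only here, and the file moved around the tree over the years, hence the list of paths):

    #!/bin/sh
    # list system call names present in the x86-64 table for each -rc1 tag
    cd linux
    for tag in $(git tag --list 'v*-rc1' | sort -V); do
        for path in arch/x86/entry/syscalls/syscall_64.tbl \
                    arch/x86/syscalls/syscall_64.tbl; do
            git show "$tag:$path" 2>/dev/null && break
        done | awk '$1 ~ /^[0-9]+$/ {print $3}' | sort > "syscalls-$tag.txt"
    done
    # then diff neighbouring lists, e.g.:
    # diff syscalls-v5.0-rc1.txt syscalls-v5.1-rc1.txt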

    There were at least 80 system calls added in this time range (my method of checking was far from perfect).

    Most active kernel versions

    The most active was the 5.1 kernel with 24 new ones — mostly due to the y2038 problem (21 time64-related calls added).

    Then the 5.2 kernel added 6 new ones in filesystem handling.

    The 3.5 kernel had some Alpha-related activity, with 5 osf_* calls of the *stat* family added. ‘kcmp’ also appeared then.

    There were a few kernel releases with 3 new system calls:

    • 3.17 added getrandom(), memfd_create() and seccomp()
    • 4.9 gave us pkey_alloc(), pkey_free() and pkey_mprotect()
    • 5.3 added clone3(), fp_udfiex_crtl() and pidfd_open()

    Two new calls were present in:

    kernel release   system calls
    3.2              process_vm_readv() and process_vm_writev()
    3.3              cache_sync() and mq_getsetaddr()
    3.8              finit_module() and kern_features()
    3.9              arc_gettls() and arc_settls()
    3.14             sched_getattr() and sched_setattr()
    4.3              membarrier() and userfaultfd()
    4.6              preadv2() and pwritev2()
    4.18             io_pgetevents() and rseq()
    5.6              openat2() and pidfd_getfd()

    Sixteen kernel releases brought one system call each.

    Architecture specific calls

    Other arch-specific entries:

    architecture   system calls
    Alpha          osf_*
    ARC            arc_gettls() and arc_settls()
    Arm            arm_fadvise64_64()
    OpenRISC       or1k_atomic()
    S/390          s390_runtime_instr() and s390_sthyi()

    Those are usually hardware specific. Some get their name changed — for example Arm had arm_sync_file_range(), which was later renamed to sync_file_range2() and used on several other architectures.

    I do wonder why Alpha still has all those osf_* calls. Do people still use OSF/1 emulation there?

    Summary

    Most kernel releases bring some new system call(s). Usually it takes 2-3 releases for most architectures to catch up. It looks like there are just 153 (of 595) calls supported on all archs (but this is affected by the fact that I also keep information for architectures removed from the Linux kernel).

    Some system calls get dropped or renamed. I drop those from my table as well.

    My work on this table will continue. It does not take much of my time and provides useful information to several FOSS projects.

    Written by Marcin Juszkiewicz
  6. So long, and thanks for all the fun

    During the last few days I tried to get my Applied Micro Mustang running again. And it looks like it is no more. Like that Norwegian Blue parrot.

    Tried some things

    By default the Mustang outputs information on the serial console. Not here anymore. I checked serial cables, serial-to-USB dongles. Nothing.

    I tried to load firmware from an SD card instead of the on-board flash. Nope.

    Time to put it to rest.

    How it looked

    When I got it in June 2014 it came in a 1U server case. With several loud fans, including one on the CPU heatsink. So I took the board out and put it into a PC tower case. I also replaced the 50 mm processor fan with an 80 mm one:

    Top view of Mustang
    Side view

    All that development…

    I did several things on it, some of them done for the first time on AArch64.

    The board gave me a lot of fun. I built countless software packages on it, for CentOS, Debian, Fedora, RHEL. I tested the installers of each of them.

    I was running OpenStack on it since ‘liberty’ (especially after moving from 16 GB to 32 GB of RAM).

    What next?

    I am going to frame it, with a few other devices which helped me during my career.

    Replacement?

    It would be nice to replace the Mustang with some newer AArch64 hardware. From what is available on the mass market, the SolidRun HoneyComb looks closest. But I will wait for something with Armv8.4 cores, to be able to play with nested virtualization.

    UPDATE: I got it working again.

    Written by Marcin Juszkiewicz
  7. Standards in Arm space (part II)

    In the first part I went from board files and ugly bootloaders to SBSA/SBBR and EBBR. Now let me try to explain how it evolved.

    BSA, BBR?

    During Arm DevSummit 2020 there was an announcement of new standards for Arm devices:

    Arm is extending the system architecture standards compliance from servers to other segments of the market, edge and IoT. We introduce the new BSA specification with market segment-specific supplements and provide the operating system-oriented boot requirements recipes in the new BBR specification.

    BSA is meant to describe basic recommendations and requirements for hardware, just like SBSA did for servers before. BBR covers booting.

    BSA

    Base System Architecture specifies hardware that software can rely on. Compliance is not required:

    Arm does not mandate compliance to this specification. However, Arm anticipates that OEMs, ODMs, cloud service providers and software providers will require compliance to maximize Out of Box software compatibility and reliability.

    According to BSA 1.0 (DEN0094A document) there are two supplements:

    • Server Base System Architecture (SBSA)
    • Client Base System Architecture (CBSA)

    The former is described in a separate document (DEN0029E) covering AArch64 server requirements (look for spec v6.1+), while the latter document (DEN0087) is not yet present on the Arm developer website (I was told that you need to contact Arm for a copy).

    SBSA changes

    There are some interesting changes in the specification. For example, there is a table with hardware requirements for each SBSA level:

    Level   A profile   SMMU       GIC
    3       v8.0        v2 or v3   v3.0
    4       v8.3        v3.0       v3.0
    5       v8.4        v3.2       v3.0
    6       v8.5        v3.2       v3.0

    The previous (v6) version of the SBSA specification mentioned that all PEs (CPU cores) must implement feature XYZ introduced in Armv8.y (I selected just a few):

    Level   required features/extensions               optional ones
    4       RAS (v8.2), 16-bit VMID, VHE               pointer signing
    5       enhanced nested virt (v8.4), CS-BSA        cryptography
    6       Armv8.5-PMU, restrictions on speculation   Memory Tagging Extension

    So level 4 is now v8.3+, up from v8.2+ before.

    Most of the hardware requirement descriptions moved from SBSA to BSA. Due to this, the SBSA v6.1 spec is just 25 pages while SBSA v6.0 had 83 of them.

    BSA and SBSA checklists

    Both BSA and SBSA now have a section with a checklist. This allows quickly checking which components are required for ‘minimum BSA’ and for each SBSA level.

    The funny part is when you compare ‘minimum BSA’ with SBSA level 3 — the latter lists operating system requirements up to B_PE_14 while the former goes to B_PE_17. At first it feels like a mistake, but B_PE_15 to B_PE_17 describe optional parts (_15 is part of Armv8.3, so SBSA level 4+, and _16 and _17 are required for SBSA level 6).

    And the reason for the above is backward compatibility. SBSA levels are defined as “previous one + some extras”, so they cannot be rebased on top of BSA. I wonder how it looks in CBSA.

    BBR

    Base Boot Requirements specifies firmware requirements to make booting easy and predictable.

    The BBR specification is heavily based on the SBBR one. It is visible in the document number (DEN0044): the A-E versions were SBBR, from F on it is BBR.

    According to BBR 1.0 (DEN0044F document) there are four recipes:

    • SBBR for servers
    • ESBBR which is SBBR with some potential exceptions (none are defined so far)
    • EBBR for those who can not into SBBR
    • LBBR for LinuxBoot based systems

    ESBBR?

    So why was ESBBR invented when it is the same as SBBR? Probably for those so-called “Edge” devices — nearly server-class hardware where something went wrong or the vendor was too lazy to go for full SBBR.

    LBBR?

    LBBR stands for LinuxBoot BBR — systems where the machine has very minimal firmware, just enough to run Linux, which initializes everything and then uses the kexec system call to load the final kernel image. Used by some datacenters that are compliant with the Open Compute Project (OCP).

    In theory an LBBR system can load UEFI instead of a kernel and be SBBR compliant.

    Required components

    Each recipe has its own list of required components:

    Component             SBBR        ESBBR       EBBR           LBBR
    PSCI/SMCCC            yes         yes         yes            yes
    Secondary Core Boot   yes         yes         no             no
    UEFI                  yes         yes         yes            no
    ACPI                  yes         yes         optional (*)   yes
    DeviceTree            forbidden   forbidden   optional (*)   no
    SMBIOS                yes         yes         no             yes

    *) An EBBR system must provide ACPI or DeviceTree — both at the same time is not allowed.

    SBBR changes

    Previous SBBR specifications required SBSA hardware:

    This document defines the Boot and Runtime Services expected by an enterprise platform Operating System or hypervisor, for an SBSA-compliant Arm AArch64 server which follows the UEFI and ACPI specifications.

    In the BBR 1.0 spec this requirement got wiped out:

    Systems using SBBR recipe must meet the requirements that are specified in section 5 (PSCI/SMCCC), section 6 (Secondary Core Boot), section 7 (UEFI), section 8 (ACPI), and section 9 (SMBIOS).

    SBBR-compliant systems must not present a DeviceTree binary to the operating system.

    Now any device can be made ‘SBBR compliant’. The SBSA requirement got moved to the SystemReady specification, but that is material for another post.

    Secure and Trusted Boot

    The SBBR v1.2 spec (DEN0044E document) had a “Secure and Trusted Boot” subsection in the UEFI section. It was removed in the BBR 1.0 version.

    The reason is simple — it has its own specification now: “Base Boot Security Requirements (BBSR)” (DEN0107 document) with more details in it. There will be a separate test suite and certification program for it.

    Conclusion

    BSA/BBR feel a bit like a cleanup process. Non-server hardware was not defined before, so now BSA kind of does that. Several extensions from v8.5 are required to cover all those mitigation issues. Too bad that CBSA was not released at the same time as BSA.

    Embedded devices did not get an updated specification yet. EBBR 1.0.1 is from August 2020 and does not even mention BBR. I would see it as part of the BBR specification, but I was told that it is handled by another team so it has to stay separate.

    Servers are as they were before — SBSA + SBBR cover them like before. Unless you want Secure Boot, as this is no longer defined there. And for some server-like machines there is ESBBR, allowing some exceptions (once they get defined).

    Several datacenter servers got their own part with LBBR. Normal users would not even play with those, so nothing to worry about for them.

    Written by Marcin Juszkiewicz
  8. Standards in Arm space (part I)

    One of the things which made AArch64 servers so successful was agreeing on a set of standards and keeping them implemented. But that was not always the case…

    Wild, Wild West

    I started working with the Arm architecture in 2004. That was a time when nearly every device required its own kernel… You had those ‘board files’ inside the arch/arm directory, each vendor made its own versions of the same drivers etc.

    From a distribution perspective it was a nightmare. I was maintaining OpenZaurus at that time, and with ten models supported we had to build a whole set of kernels. Good that four of them differed only in the amount of memory and flash, so we were able to handle them as one machine, leaving the detail checking to the kernel once it booted. The PXA250 vs PXA255 processor difference was also handled by the kernel.

    Those times also meant different bootloaders. The Zaurus ones were awful. We even had to ignore the kernel cmdline they provided, as it did not fit even our 2.4.18-crappix kernels and was completely wrong once we moved to the 2.6 line.

    The Nokia 770/N8x0 had another one. Developer boards had RedBoot, U-Boot (if you were lucky) or whatever the vendor invented. Some had a way to change and store boot commands, some did not. Space for the kernel could be limited in a way that made getting something which fits a challenge.

    Basically, for most devices you had to handle booting, kernel updates etc. separately.

    Linaro to the rescue

    In 2010 Arm, with some partners, created Linaro to improve the Linux situation on Arm devices. I was one of the first engineers there. We were present in many areas: porting software, benchmarking, improving performance etc.

    And cleaning up the kernel/boot situation. I do not know how many people remember this post by Linus Torvalds:

    Gaah. Guys, this whole ARM thing is a f*cking pain in the ass.

    You need to stop stepping on each others toes. There is no way that your changes to those crazy clock-data files should constantly result in those annoying conflicts, just because different people in different ARM trees do some masturbatory renaming of some random device. Seriously.

    This was a reaction to a moment when someone created yet another copy of some drivers. It was a popular way of doing things on the Arm architecture — each vendor had their own version of the PL011 serial driver etc.

    Some time later the “arm-soc” subsystem was created to handle merging code touching device support, drivers etc. This allowed Russell King to concentrate on maintaining the Arm architecture support.

    During the next years most of the vendor versions were merged into single ones. And moved where they belong — from arch/arm/ to the drivers/ area of the kernel.

    At some moment adding new board files was forbidden, as the Arm architecture was migrating into the DeviceTree world.

    DeviceTree migration

    Why go with DeviceTree (DT in short)? What did it give us, other than new problems?

    There were several such questions. The good part is that it was not something new to the Linux kernel. DT was already in use on the Power architecture (and IIRC SPARC). After some adaptations, Arm devices became more maintainable.

    DeviceTree solved one crucial problem of Arm — the lack of hardware discovery. A System on Chip (SoC) can contain several controllers, processor cores etc. Before, this was handled inside a ‘board file’, which also required building a kernel for nearly each device. Now the kernel was finally able to boot, parse the DT information and get an idea of what is available and which drivers need to be used.
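
    To get a feel for it, here is a toy description (board name and address are made up) compiled with dtc, the DeviceTree compiler:

    cat > toy.dts <<'EOF'
    /dts-v1/;
    / {
        compatible = "vendor,toy-board";
        serial@101f1000 {
            compatible = "arm,pl011", "arm,primecell";
            reg = <0x101f1000 0x1000>;
        };
    };
    EOF
    # compile to the blob which the bootloader hands over to the kernel
    dtc -I dts -O dtb -o toy.dtb toy.dts

    The kernel reads such a blob at boot, sees the “arm,pl011” compatible string and binds the matching driver, with no board file needed.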

    That way one kernel was able to support several devices. And the amount of them grew bigger each release. At some moment you could build one kernel for all Arm v4 and v5 devices, plus a second one for v6 and v7 ones. A huge improvement.

    Bootloaders?

    When it comes to bootloaders, the situation changed here as well. Most of the ones used in the past vanished and U-Boot became kind of a ‘gold standard’. DeviceTree support was present, but each device still had its own way of booting. Different commands, storage options etc.

    Distributions handled that in miscellaneous ways: extlinux support, ‘flash-kernel’ scripts etc.

    At some moment Dennis Gilmore took some time and introduced a generic boot command for U-Boot. It was merged in July 2014. So instead of different ways of handling stuff, there was now one command on all devices (once they migrated).

    The kernel and initramfs were looked for on sd/mmc/emmc, sata, scsi, ide and usb, with a fallback to tftp. It has been expanded since then to support several options and is now standard in U-Boot.
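
    From the distribution side it boils down to dropping one config file which the generic boot command scans for. A sketch (device and paths are examples):

    mkdir -p /boot/extlinux
    cat > /boot/extlinux/extlinux.conf <<'EOF'
    LABEL default
        KERNEL /vmlinuz
        INITRD /initrd.img
        FDTDIR /dtbs
        APPEND console=ttyS0,115200 root=/dev/mmcblk0p2 rw
    EOF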

    AArch64 arrival

    At the beginning of 2013 several AArch64 systems started to appear. The SBC ones followed what was on 32-bit Arm, but servers were driven in a different direction.

    Servers

    They were supposed to be as boring as the x86 ones were. You unpack one, put it into a rack, connect standard power/network cables and boot it without worrying whether it will work or not. At the same time they provide administrators with the same environment as they had on x86.

    So it meant UEFI as the firmware and ACPI as the hardware description. And yes, I simplified a bit.

    To make it right, it needed work on defining standards and then vendors following them.

    SBSA defined hardware

    The first specification was Server Base System Architecture (SBSA in short). It defined the hardware part — each AArch64 server needs to use a PL011 serial port, PL031 RTC, PL061 GPIO controller etc. And PCI Express support without quirks. Without that, it cannot be called a server.

    SBSA has several levels of compliance. Nowadays level 3 is the minimal version.

    Level 0 was funny, as it covered only X-Gene1 boxes (the SoC was older than the specification).

    SBBR defined firmware

    The simplest definition of the Server Base Boot Requirements specification? A server needs to run UEFI and use ACPI to describe hardware. And it has to be SBSA compliant.

    Someone may ask why UEFI and ACPI. One reason is that they are present in x86 servers, and AArch64 ones follow their behaviour as much as possible. Another is that this way there are things which can be done with firmware help.

    But ACPI was x86-only, so it needed to be adapted to the AArch64 architecture. Work started by making ACPI an open specification under the UEFI Forum umbrella, so it became open to anyone (it was Intel, Microsoft, Phoenix and Toshiba only before). There have been many changes made since then. And several new tables defined.

    I heard several rumours about why ACPI. Someone said that ACPI was forced by Microsoft. In reality it was a decision taken by all major distros and Microsoft.

    So what does SBBR compliance give? For a start, it allows running generic distribution kernels out of the box. Each server SoC has the same basic components and uses the same standards to boot the system. So far Linux distributions, several *BSD systems and Microsoft Windows support SBBR machines out of the box.

    For example, getting Qualcomm Centriq or Huawei TaiShan servers supported in Debian ‘buster’ was a very easy task. Both booted with the distribution kernel. The Huawei one required enabling the on-board network card; the Centriq had a SAS controller module to enable to connect to storage (which was already enabled on a few other architectures).

    EBBR for those who can not follow

    In short, the Embedded Base Boot Requirements are a kind of SBBR for non-server-class hardware.

    A device can use ACPI and/or DeviceTree to describe hardware. It may boot whatever it likes as long as it provides EFI Boot Services to the bootloader used by distributions (grub2, gummiboot etc.).

    The specification feels made especially for distributions, to make their life easier. This way there is one way to boot both SBCs and SBBR-compliant machines.

    Getting a distribution kernel running on an EBBR board is usually more work than with an SBBR-compliant server. All hardware-specific options need to be found and enabled (from SoC support to all its drivers etc.).

    BSA, BBR?

    During Arm DevSummit 2020 there was announcement of new standards for Arm devices:

    Arm is extending the system architecture standards compliance from servers to other segments of the market, edge and IoT. We introduce the new BSA specification with market segment-specific supplements and provide the operating system-oriented boot requirements recipes in the new BBR specification.

    They are described in the second part of this article.

    Written by Marcin Juszkiewicz