1. Standards in Arm space (part II)

In the first part I went from board files and ugly bootloaders to SBSA/SBBR and EBBR. Now let me try to explain how it all evolved further.

    BSA, BBR?

During Arm DevSummit 2020 there was an announcement of new standards for Arm devices:

    Arm is extending the system architecture standards compliance from servers to other segments of the market, edge and IoT. We introduce the new BSA specification with market segment-specific supplements and provide the operating system-oriented boot requirements recipes in the new BBR specification.

BSA describes basic recommendations and requirements for hardware, just like SBSA did for servers before. BBR covers booting.

    BSA

    Base System Architecture specifies hardware that software can rely on. Compliance is not required:

    Arm does not mandate compliance to this specification. However, Arm anticipates that OEMs, ODMs, cloud service providers and software providers will require compliance to maximize Out of Box software compatibility and reliability.

    According to BSA 1.0 (DEN0094A document) there are two supplements:

    • Server Base System Architecture (SBSA)
    • Client Base System Architecture (CBSA)

The former is described in a separate document (DEN0029E) covering AArch64 server requirements (look for spec v6.1+), while the latter document (DEN0087) is not yet present on the Arm developer website (I was told that you need to contact Arm for a copy).

    SBSA changes

There are some interesting changes in the specification. For example, there is now a table with hardware requirements for each SBSA level:

Level   A profile   SMMU       GIC
  3     v8.0        v2 or v3   v3.0
  4     v8.3        v3.0       v3.0
  5     v8.4        v3.2       v3.0
  6     v8.5        v3.2       v3.0

The previous (v6.0) version of the SBSA specification said that all PEs (CPU cores) must implement feature XYZ introduced in Armv8.y (I selected just a few):

Level   required features/extensions               optional ones
  4     RAS (v8.2), 16-bit VMID, VHE               pointer signing
  5     enhanced nested virt (v8.4), CS-BSA        cryptography
  6     Armv8.5-PMU, restrictions on speculation   Memory Tagging Extension

So level 4 now requires Armv8.3+ instead of the earlier v8.2+.

Most of the hardware requirement descriptions moved from SBSA to BSA. As a result, the SBSA v6.1 spec is just 25 pages, while SBSA v6.0 had 83 of them.

    BSA and SBSA checklists

Both BSA and SBSA now have a checklist section. It allows you to quickly check which components are required for ‘minimum BSA’ and for each SBSA level.

The funny part comes when you compare ‘minimum BSA’ with SBSA level 3: the latter lists PE requirements up to B_PE_14 while the former goes up to B_PE_17. At first it feels like a mistake, but B_PE_15 to B_PE_17 describe optional parts (_15 is part of Armv8.3, so SBSA level 4+, while _16 and _17 are required for SBSA level 6).

The reason for the above is backward compatibility. SBSA levels are defined as “the previous one plus some extras”, so they cannot be rebased on top of BSA. I wonder how it will look in CBSA.

    BBR

    Base Boot Requirements specifies firmware requirements to make booting easy and predictable.

The BBR specification is heavily based on the SBBR one. You can see it in the document number (DEN0044): versions A-E were SBBR, and from F onwards it is BBR.

    According to BBR 1.0 (DEN0044F document) there are four recipes:

    • SBBR for servers
    • ESBBR which is SBBR with some potential exceptions (none are defined so far)
• EBBR for those who cannot do full SBBR
    • LBBR for LinuxBoot based systems

    ESBBR?

So why was ESBBR invented when it is the same as SBBR? Probably for those so-called “Edge” devices: nearly server-class hardware where something went wrong or the vendor was too lazy to go for full SBBR.

    LBBR?

LBBR stands for LinuxBoot BBR: systems where the machine has very minimal firmware whose only job is to run Linux, which initializes everything and then uses the kexec system call to load the final kernel image. It is used by some datacenters compliant with the Open Compute Project (OCP).

In theory an LBBR system can load UEFI instead of a kernel and be SBBR compliant.

    Required components

Each recipe has its own list of required components:

Component            SBBR       ESBBR      EBBR          LBBR
PSCI/SMCCC           yes        yes        yes           yes
Secondary Core Boot  yes        yes        no            no
UEFI                 yes        yes        yes           no
ACPI                 yes        yes        optional (*)  yes
DeviceTree           forbidden  forbidden  optional (*)  no
SMBIOS               yes        yes        no            yes

*) An EBBR system must provide either ACPI or DeviceTree; providing both at the same time is not allowed.

    SBBR changes

The previous SBBR specification required SBSA hardware:

    This document defines the Boot and Runtime Services expected by an enterprise platform Operating System or hypervisor, for an SBSA-compliant Arm AArch64 server which follows the UEFI and ACPI specifications.

In the BBR 1.0 spec this requirement got wiped out:

    Systems using SBBR recipe must meet the requirements that are specified in section 5 (PSCI/SMCCC), section 6 (Secondary Core Boot), section 7 (UEFI), section 8 (ACPI), and section 9 (SMBIOS).

    SBBR-compliant systems must not present a DeviceTree binary to the operating system.

Now any device can be made ‘SBBR compliant’. The SBSA requirement got moved to SystemReady, but that is material for another post.

    Secure and Trusted Boot

The SBBR v1.2 spec (DEN0044E document) had a “Secure and Trusted Boot” subsection in the UEFI section. It was removed in BBR 1.0.

The reason is simple: it has its own specification now, “Base Boot Security Requirements (BBSR)” (DEN0107 document), with more details in it. There will be a separate test suite and certification programme for it.

    Conclusion

BSA/BBR feel a bit like a cleanup process. Non-server hardware was not defined before, so now BSA kind of does that. Several extensions from v8.5 are required to cover all those speculation mitigation issues. Too bad that CBSA was not released at the same time as BSA.

Embedded devices did not get an updated specification yet. EBBR 1.0.1 is from August 2020 and does not even mention BBR. I would like to see it as part of the BBR specification, but I was told that it is handled by another team, so it has to stay separate.

Servers are covered like they were before, by SBSA + SBBR. Unless you want Secure Boot, as that is no longer defined there (it moved to BBSR). And for some server-like machines there is ESBBR, which allows some exceptions (once they get defined).

Some datacenter servers got their own part with LBBR. Normal users will not even play with those machines, so nothing to worry about there.

    Written by Marcin Juszkiewicz on
  2. Standards in Arm space (part I)

One of the things which made AArch64 servers so successful was agreeing on a set of standards and keeping them implemented. But that was not always the case…

    Wild, Wild West

I started working with the Arm architecture in 2004. This was a time when nearly every device required its own kernel… You had those ‘board files’ inside the arch/arm directory, each vendor made its own versions of the same drivers, etc.

From a distribution perspective it was a nightmare. I was maintaining OpenZaurus at that time, and with ten models supported we had to build a whole set of kernels. The good thing was that four of them differed only in the amount of memory and flash, so we were able to handle them as one machine and leave checking the details to the kernel once it booted. The PXA250 vs PXA255 processor difference was also handled by the kernel.

Those times also meant different bootloaders. The Zaurus ones were awful. We even had to ignore the kernel cmdline they provided, as it did not fit even our 2.4.18-crappix kernels and was completely wrong once we moved to the 2.6 line.

Nokia 770/N8x0 had yet another one. Developer boards had RedBoot, U-Boot (if you were lucky) or whatever the vendor invented. Some had a way to change and store boot commands, some did not. Space for the kernel could be so limited that getting something which fits was a challenge.

Basically, for most devices you had to handle booting, kernel updates etc. separately.

    Linaro to the rescue

In 2010 Arm and some partners created Linaro to improve the Linux situation on Arm devices. I was one of the first engineers there. We were present in many areas: porting software, benchmarking, improving performance etc.

And cleaning up the kernel/boot situation. I do not know how many people remember this post by Linus Torvalds:

    Gaah. Guys, this whole ARM thing is a f*cking pain in the ass.

    You need to stop stepping on each others toes. There is no way that your changes to those crazy clock-data files should constantly result in those annoying conflicts, just because different people in different ARM trees do some masturbatory renaming of some random device. Seriously.

This was a reaction to someone creating yet another copy of some drivers. It was a popular way of doing things on the Arm architecture: each vendor had their own version of the PL011 serial driver etc.

Some time later the “arm-soc” subsystem was created to handle merging code touching device support, drivers etc. This allowed Russell King to concentrate on maintaining the Arm architecture support.

Over the next years most of the vendor versions were merged into single drivers. And moved where they belong: from arch/arm/ to the drivers/ area of the kernel.

At some point adding new board files was forbidden, as the Arm architecture was migrating into the DeviceTree world.

    DeviceTree migration

Why move to DeviceTree (DT in short)? What did it give us? Other than new problems?

There were several such questions. The good part is that it was not something new to the Linux kernel: DT was already in use on the Power architecture (and IIRC SPARC). After some adaptations Arm devices became more maintainable.

DeviceTree solved one crucial problem of Arm: the lack of hardware discovery. A System on Chip (SoC) can contain several controllers, processor cores etc. Before, this was handled inside a ‘board file’, which also required building a kernel for nearly every device. Now the kernel was finally able to boot, parse the DT information and get an idea of what is available and which drivers need to be used.

That way one kernel was able to support several devices. And their number grew with each release. At some point you could build one kernel for all Armv4 and v5 devices plus a second one for v6 and v7 ones. A huge improvement.

    Bootloaders?

When it comes to bootloaders, the situation changed here as well. Most of the ones used in the past vanished and U-Boot became kind of a ‘gold standard’. DeviceTree support was present, but each device still had its own way of booting: different commands, storage options etc.

Distributions handled that in miscellaneous ways: extlinux support, ‘flash-kernel’ scripts etc.

At some point Dennis Gilmore took some time and introduced a generic boot command for U-Boot. It was merged in July 2014. So instead of different ways of handling things there was now one command on all devices (once they migrated).

The kernel and initramfs were looked for on SD/MMC/eMMC, SATA, SCSI, IDE and USB, with a fallback to TFTP. It has been expanded since then to support more options and is now standard in U-Boot.
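
In practice the generic command scans each boot device for a boot script or an extlinux.conf file. A minimal sketch of the latter (kernel version, dtb directory and root device below are made up for illustration):

    default Fedora

    label Fedora
        kernel /vmlinuz-5.8.0
        initrd /initramfs-5.8.0.img
        fdtdir /dtbs-5.8.0/
        append root=/dev/mmcblk0p3 ro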

    AArch64 arrival

At the beginning of 2013 several AArch64 systems started to appear. The SBC ones followed what was done on 32-bit Arm, but servers were driven in a different direction.

    Servers

They were supposed to be as boring as x86 ones were. You unpack one, put it into a rack, connect standard power/network cables and boot it without worrying whether it will work or not. At the same time they provide administrators with the same environment as they had on x86.

So it meant UEFI as the firmware and ACPI as the hardware description. And yes, I simplified a bit.

To make it right, standards had to be defined and then vendors had to follow them.

    SBSA defined hardware

The first specification was Server Base System Architecture (SBSA in short). It defined the hardware part: each AArch64 server needs to use a PL011 serial port, PL031 RTC, PL061 GPIO controller etc. And PCI Express support without quirks. Without that it cannot be called a server.

SBSA has several levels of compliance. Nowadays level 3 is the minimal one.

Level 0 was funny, as it covered only X-Gene1 boxes (the SoC was older than the specification).

    SBBR defined firmware

The simplest definition of the Server Base Boot Requirements specification? A server needs to run UEFI and use ACPI to describe hardware. And it has to be SBSA compliant.

Someone may ask why UEFI and ACPI. One reason is that they are present in x86 servers and AArch64 ones follow their behaviour as much as possible. Another is that this way some things can be done with firmware help.

But ACPI was x86 only, so it needed to be adapted to the AArch64 architecture. The work started with making ACPI an open specification under the UEFI Forum umbrella, so it became open to anyone (before that it was Intel, Microsoft, Phoenix and Toshiba only). Many changes have been made since then. And several new tables defined.

I heard several rumours about why ACPI. Someone said that ACPI was forced by Microsoft. In reality it was a decision taken by all major distros together with Microsoft.

So what does SBBR compliance give? For a start, it allows running generic distribution kernels out of the box. Each server SoC has the same basic components and uses the same standards to boot the system. So far Linux distributions, several *BSD systems and Microsoft Windows support SBBR machines out of the box.

For example, getting Qualcomm Centriq or Huawei TaiShan servers supported in Debian ‘buster’ was a very easy task. Both booted with the distribution kernel. The Huawei one required enabling the on-board network card driver, and the Centriq needed the SAS controller module enabled to connect to storage (it was already enabled on a few other architectures).

EBBR for those who cannot follow

In short, Embedded Base Boot Requirements is a kind of SBBR for non-server class hardware.

A device can use ACPI and/or DeviceTree to describe its hardware. It may boot in whatever way it wants, as long as it provides EFI Boot Services to the bootloader used by distributions (grub2, gummiboot etc.).

The specification feels made especially for distributions, to make their life easier. This way there is one way to boot both SBCs and SBBR compliant machines.

Getting a distribution kernel running on an EBBR board is usually more work than with an SBBR compliant server. All hardware specific options need to be found and enabled (from SoC support to all its drivers etc.).

    BSA, BBR?

During Arm DevSummit 2020 there was an announcement of new standards for Arm devices:

    Arm is extending the system architecture standards compliance from servers to other segments of the market, edge and IoT. We introduce the new BSA specification with market segment-specific supplements and provide the operating system-oriented boot requirements recipes in the new BBR specification.

They are described in the second part of this article.

    Written by Marcin Juszkiewicz on
  3. Upgraded to Fedora 33

I have been running Fedora on my desktop since I started working for Red Hat. Kind of ‘eat your own dogfood’ style, despite the fact that I have not been active in Fedora development for some time.

    Fedora 33 reached Beta status so it was time to upgrade.

    Do it Debian style

I have used Debian since 1999, so I got used to several ways of doing things which may not always fit the official Fedora guidelines.

    One of them is my way of upgrading to newer release:

    LANGUAGE=C dnf distrosync --releasever 33
    

If there is a problem listed then I try to solve it with the “--best” or even “--allowerasing” options to check which packages are the problem. But this time it went smoothly.
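
Combined, such a problematic upgrade attempt looks like this (just a sketch; this time none of the extra options were needed):

    LANGUAGE=C dnf distrosync --releasever 33 --best --allowerasing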

Rebooted and the system worked fine. Or so I thought…

    SSH keys

Fedora 33 ships OpenSSH 8.4p1, so if you used ‘ssh-rsa’ keys then you may need to generate newer ones. More info in the OpenSSH 8.3 announcement.
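
Generating a fresh key is a one-liner (ed25519 chosen here as an example; the file name is up to you):

    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519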

    I got hit by this on Gerrit instances:

    $ git remote update
    Fetching gerrit
    marcin.juszkiewicz@review.linaro.org: Permission denied (publickey).
    

One workaround is a small addition to the ssh configuration file:

    Host *
        PubkeyAcceptedKeyTypes +rsa-sha2-256,rsa-sha2-512
    

    TLS v1.2 is the new default

New distribution version, new defaults. This time TLS v1.2 became the default minimum version. I was informed about it when I wanted to send an email and Thunderbird told me that it was unable to talk to my mail server…

I logged in to the server, checked the Postfix log and found this:

    connect from IP_ADDRESS
    SSL_accept error from IP_ADDRESS: -1
    warning: TLS library problem: error:14209102:SSL routines:tls_early_post_process_client_hello:unsupported protocol:../ssl/statem/statem_srvr.c:1661:
    lost connection after CONNECT from IP_ADDRESS
    disconnect from IP_ADDRESS commands=0/0
    

Looks nasty. I did some searching and changed the Postfix config to accept TLSv1.2 on incoming connections.
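
I will not claim this is my exact change, but ensuring TLSv1.2 is accepted boils down to excluding only the older protocols in main.cf, roughly like this (a sketch, adjust to your setup):

    smtpd_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
    smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1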

    DNS

One of the features of Fedora 33 is the switch from dnsmasq to systemd-resolved for name resolution. On my system I have some local changes to the former to get internal Red Hat names resolved without using the company DNS for everything. Therefore I reverted the migration and keep using dnsmasq.

One day I may be able to understand the systemd-resolved documentation and migrate my local configuration to it.
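
For the curious, the dnsmasq side of such a split setup is a single line per domain (domain and server address below are made up):

    # send queries for the internal domain to the internal DNS server only
    server=/internal.example.com/10.0.0.53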

    Summary

So far no other issues were found. The system works as it should. I still run a kind of KDE desktop on X11.

    Written by Marcin Juszkiewicz on
  4. 8 years of my work on AArch64

Back in 2012 AArch64 was something new and unknown. There was no toolchain support (so no gcc, binutils or glibc). And I got assigned to get some stuff running around it.

    OpenEmbedded

As there was no hardware, cross compilation was the only way. Which meant OpenEmbedded, as we wanted to have a wide selection of software available.

    I learnt how to use modern OE (with OE Core and layers) by building images for ARMv7 and checking them on some boards I had floating around my desk.

    Non-public toolchain work

Some time later the first non-public patches for binutils and gcc arrived in my inbox. Then eglibc ones. So I started building, and on 12th September 2012 I was able to build helloworld:

    12:38 hrw@puchatek:aarch64-oe-linux$ ./aarch64-oe-linux-gcc ~/devel/sources/hello.c -o hello
    12:38 hrw@puchatek:aarch64-oe-linux$ file hello
    hello: ELF 64-bit LSB executable, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.39, not stripped
    12:39 hrw@puchatek:aarch64-oe-linux$ objdump -f hello
    
    hello:     file format elf64-littleaarch64
    architecture: aarch64, flags 0x00000112: 
    EXEC_P, HAS_SYMS, D_PAGED 
    start address 0x00000000004003e0
    

    Then images followed. Several people at Linaro (and outside) used those images to test misc things.

At that moment we ran Armv8 Fast Models (a quite slow system emulator from Arm). There was a joke that Arm developers formed a queue for single-core 10 GHz x86-64 CPUs to get AArch64 running faster.

    Toolchain became public

Then 1st October 2012 came. I entered the Linaro office in Cambridge for an AArch64 meeting and was greeted with the news that the glibc patches had gone to a public mailing list. So I rebased my OpenEmbedded repository, updated patches, removed any traces of the non-public ones and published the whole work.

    Building on AArch64

My work above added support for AArch64 as a target architecture. But could it be used as a host? One day I decided to check and ran OpenEmbedded on AArch64.

    After one small patch it worked fine.

    X11 anyone?

As I had access to the Arm Fast Model I was able to play with graphics. So one day in January 2013 I did a build and started Xorg. Throughout the following years I had fun whenever people wrote that they got X11 running on their AArch64 devices ;D

Two years later I had an Applied Micro Mustang at home (I still have it). Once it had working PCI Express support I added a graphics card and started X11 on real hardware.

Then I went debugging why Xorg requires a configuration file, and one day, with help from Dave Airlie, Mark Salter and Matthew Garrett, I got two solutions for the problem. I do not remember whether any of them went upstream, but some time later the problem was solved.

A few years later I met Dave Airlie at Linux Plumbers. We introduced ourselves to each other and he said “ah, you are the ‘arm64 + radeon’ guy” ;D

    AArch64 Desktop week

One day in September 2015 I had an idea. PCIe worked, USB too. So I did an AArch64 desktop week: connected monitors, keyboard, mouse and speakers and used the Mustang instead of my x86-64 desktop.

    It was fun.

    Distributions

First we had nothing. Then I added the AArch64 target to OpenEmbedded.

The same month Arm released the Foundation model, so anyone was able to play with an AArch64 system. No screen, just storage, serial and network, but it was enough for some to even start building whole distributions like Debian, Fedora, OpenSUSE, Ubuntu.

At that moment several patches were shared by all distributions, as it was faster than waiting for upstreams. I saw multiple versions of some of them during my journey of fixing packages in various distributions.

    Debian and Ubuntu

In February 2013 the Debian/Ubuntu team presented their AArch64 port. It was their first architecture bootstrapped without using external toolchains. The work was done in Ubuntu due to its different approach to development than Debian has. All of it was merged back, so some time later Debian also had an AArch64 port.

    Fedora

The Fedora team started early, in October 2012, right after the toolchain became public. They used Fedora 17 packages and switched to Fedora 19 during the work.

    When I joined Red Hat in September 2013 one of my duties was fixing packages in Fedora to get them built on AArch64.

    OpenSUSE

In January 2014 the first versions of QEMU support arrived and people moved away from the Foundation model. In March/April the OpenSUSE team did a massive amount of builds to get their distribution built that way.

    RHEL

The Fedora bootstrap also meant a RHEL 7 bootstrap. When I joined Red Hat there were images ready to use in models. My work was testing them and fixing packages. There were multiple times when an AArch64 fix also helped builds on the ppc64le and s390x architectures.

    Hardware I played with

The first Linux capable hardware was announced in June 2013. I got access to it at Red Hat. Building and debugging was much faster than using Fast Models ;D

    Applied Micro Mustang

Soon Applied Micro Mustangs were everywhere. Distributions used them to build packages etc. Even without support for half of the hardware (no PCI Express, no USB).

I got one in June 2014, running UEFI firmware out of the box. In the first months I had a feeling that the firmware was developed at Red Hat, as we often had fresh versions right after the first patches for missing hardware functionality were written. In reality it was maintained by Applied Micro, and we had access to the sources, so there were some internal changes in testing (that’s why I had firmware versions like ‘0.12-rh’).

All those graphics cards I collected to test how PCI Express works. Or testing USB before support was even merged into the mainline Linux kernel. Or using virtualization for development of armhf build fixes (8 cores, 12 gigabytes of RAM and plenty of storage beat all the Armv7 hardware I had).

I stopped using the Mustang around 2018. It is still under my desk.

For those who still use one: make sure you have the 3.06.25 firmware.

    96boards

In February 2015 Linaro announced the 96boards initiative. The plan was to make small, unified SBCs with different Arm chips. Both 32- and 64-bit ones.

The first ones were ‘Consumer Edition’: small, limited to basic connectivity. Now there are tens of them: 32-bit, 64-bit, FPGA etc. Choose your poison ;D

The second ones were ‘Enterprise Edition’. A few attempts existed; most of them did not survive the prototype phase. There was a joke that the full-length PCI Express slot and two USB port requirements were there because I wanted to have an AArch64 desktop ;D

Too bad that nothing worth using came out of the EE spec.

    Servers

As a Linaro assignee I have access to several servers from Linaro members. Some are mass-market ones, some never made it to market. We had over a hundred X-Gene1 based systems (mostly as m400 cartridges in HPE Moonshot chassis) and shut them down in 2018 as they were getting more and more obsolete.

The main system I use for development is one of those ‘never went to mass-market’ ones. 46 CPU cores and 96 GB of RAM make it a nice machine for building container images and Debian packages or for running virtual machines in OpenStack.

    Desktop

For some time I was waiting for desktop-class hardware, to have a development box more up-to-date than the Mustang. Months turned into years. I no longer wait, as it looks like there will be no such thing.

SolidRun has made some attempts in this area, first with the Macchiatobin and later with the Honeycomb. I have not used either of them.

    Cloud

When I (re)joined Linaro in 2016 I became part of a team working on getting OpenStack running on AArch64 hardware. We used the Liberty, Mitaka and Newton releases, then changed the way we worked and started contributing more. And more. Kolla, Nova, DIB and other projects. We added aarch64 nodes to the OpenDev CI.

The effect of it was the Linaro Developer Cloud, used by hundreds of projects to speed up their aarch64 porting, with tens of projects hosting their CI systems there etc.

    Two years later Amazon started offering aarch64 nodes in AWS.

    Summary

I have spent half of my Arm life on AArch64. I had great moments, like building helloworld as one of the first people outside of Arm Ltd. I got involved in far more projects than I ever thought I would. I met new friends and visited several places in the world I would probably never have gone to otherwise.

I also got grumpy and complained far too many times that the AArch64 market is ‘cheap but limited SBCs or fast but expensive servers and nearly nothing in between’. I wrote some posts about the missing systems targeting software developers and lost hope that such a thing will happen.

NOTE: This is 8 years of my work on AArch64. I have worked with Arm since 2004.

    Written by Marcin Juszkiewicz on
  5. From a diary of AArch64 porter — drive-by coding

Working on AArch64 often means changing code in various projects. I have done that so many times that I am unable to say where I have commits. Such a thing got a name: drive-by coding.

    Definition

Drive-by coding is a situation where you appear in some software project, make some changes, get them merged and then disappear, never to be seen again.

    Let’s build something

It all starts from a simple thing: I have to (or want to) build some software. But for some reason it does not cooperate. Sometimes a simple architecture check is missing, sometimes atomic operations are not present, intrinsics are missing, or anything else.

    First checks

Then comes the moment of looking at build errors and trying to work out a solution. Have I seen that bug before? Does it look familiar?

If it is something new then a quick Google search for the error message follows. And checking bug reports/issues on the project’s website/repo. There can be ready-to-use patches, information on how to fix it or even some ideas why it happens.

If it is a system call failure in some tests then I check in my syscalls table whether those calls are handled on aarch64 and try to change the code if they are not (legacy ones like open, symlink, rename).
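
A quick way to spot such failures is to grep an strace log for ENOSYS (the test binary name here is just an example):

    strace -f ./run-tests 2>&1 | grep ENOSYS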

    Simple fixes

When I started working with AArch64 (in 2012) there were moments when many projects were easy to fix. If atomics were the issue then copying them from the Linux kernel was usually the solution (if the license allowed).

Or architecture checks with a pile of #ifdef __x86_64__ or similar, trying to decide simple things like “32/64-bit” or “little/big endian”. Nowadays such cases do not happen as often as they used to.

SIMD intrinsics can be a problem. All those vst1q_f32_x2(), vld1q_f32_x2() and similar. I do not have to understand them to know that it usually means the C compiler lacks some backports, as those functions were already added to gcc and llvm (like it was with PyTorch recently).

    Complex stuff

There are moments when getting software to build needs something more complicated. As I wrote above, I usually start by searching for the error message and checking whether it was an issue in some other projects. And how it got solved. If I am lucky then a patch can be done in a short time and sent for review upstream (once it builds and passes tests).

Sometimes all I can do is report the issue upstream and hope that someone cares enough to respond. Usually it ends with at least a discussion on potential ways to fix it, sometimes hints or even patches to test.

    Projects response

Projects usually accept patches, review them and merge them. In several cases it took longer than expected; sometimes there was a larger amount of patches, so they remember me (at least for some time). It helps when I have something for those projects again months or years later.

There are projects where I prefer to forget that they exist. Complicated contribution rules, crazy CI setups, weird build systems (ever heard about ‘bazel’?). Or comments in the ‘we do not give a shit about non-x86’ style (in slightly more polished language). Been there, fixed something to get stuff working and do not want to go back.

    Summary

‘Drive-by coding’ reminds me of going abroad for conferences. People think that you saw interesting places, when in reality you spent most of the time inside the hotel and/or conference centre.

It is similar with code. I was in several projects, usually with no idea what they do or how they work. I came, looked around briefly, fixed something and went back home.

    Written by Marcin Juszkiewicz on
  6. So your hardware is ServerReady?

Recently I changed my assignment at Linaro, from Cloud to Server Architecture. Which means less time spent on Kolla things and more on server-related things. And at the start I got a project I had managed to forget about :D

    SBSA reference platform in QEMU

In 2017 someone got the idea to make a new machine for QEMU: pure hardware emulation of an SBSA compliant reference platform, without using virtio components.

Hongbo Zhang wrote the code and got it merged into QEMU, and Radosław Biernacki wrote basic support for EDK2 (also merged upstream). Out of the box it can boot to the UEFI shell. Linux is not bootable due to the lack of ACPI tables (DeviceTree is not an option here).
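
Starting the machine looks roughly like this (a sketch; the flash image names come from the edk2-platforms SbsaQemu build and may differ in your setup):

    qemu-system-aarch64 -machine sbsa-ref -cpu cortex-a57 -m 4096 \
        -pflash SBSA_FLASH0.fd -pflash SBSA_FLASH1.fd \
        -serial stdio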

    ACPI tables in firmware

    Tanmay Jagdale works on adding ACPI tables in his fork of edk2-platforms. With this firmware Linux boots and can be used.

    Testing tools

But what is the point of just having a reference platform if there is no testing? So I took a look and found two interesting tools:

    Server Base System Architecture — Architecture Compliance Suite

The SBSA ACS tool requires ACPI tables to be present in order to work. Once started, it nicely checks how compliant your system is:

    FS0:\> Sbsa.efi -p
    
    
     SBSA Architecture Compliance Suite
        Version 2.4
    
     Starting tests for level  4 (Print level is  3)
    
     Creating Platform Information Tables
     PE_INFO: Number of PE detected       :    3
     GIC_INFO: Number of GICD             :    1
     GIC_INFO: Number of ITS              :    1
     TIMER_INFO: Number of system timers  :    0
     WATCHDOG_INFO: Number of Watchdogs   :    0
     PCIE_INFO: Number of ECAM regions    :    2
     SMMU_INFO: Number of SMMU CTRL       :    0
     Peripheral: Num of USB controllers   :    1
     Peripheral: Num of SATA controllers  :    1
     Peripheral: Num of UART controllers  :    1
    
          ***  Starting PE tests ***
       1 : Check for number of PE            : Result:  PASS
       2 : Check for SIMD extensions                PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       3 : Check for 16-bit ASID support            PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       4 : Check MMU Granule sizes                  PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       5 : Check Cache Architecture                 PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       6 : Check HW Coherence support               PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       7 : Check Cryptographic extensions           PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       8 : Check Little Endian support              PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
       9 : Check EL2 implementation                 PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      10 : Check AARCH64 implementation             PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      11 : Check PMU Overflow signal         : Result:  PASS
      12 : Check number of PMU counters             PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
      13 : Check Synchronous Watchpoints            PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      14 : Check number of Breakpoints              PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      15 : Check Arch symmetry across PE            PSCI_CPU_ON: failure
    
           Reg compare failed for PE index=1 for Register: CCSIDR_EL1
           Current PE value = 0x0         Other PE value = 0x100FBDB30E8
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      16 : Check EL3 implementation                 PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      17 : Check CRC32 instruction support          PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    1 for Level=  4 : Result:  --FAIL-- 129
      18 : Check for PMBIRQ signal
           SPE not supported on this PE      : Result:  -SKIPPED- 1
      19 : Check for RAS extension                  PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
      20 : Check for 16-Bit VMID                    PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
      21 : Check for Virtual host extensions        PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
      22 : Stage 2 control of mem and cache         PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      23 : Check for nested virtualization          PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      24 : Support Page table map size change       PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      25 : Check for pointer signing                PSCI_CPU_ON: failure
    
    
      25 : Check for pointer signing                PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      26 : Check Activity monitors extension        PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
      27 : Check for SHA3 and SHA512 support        PSCI_CPU_ON: failure
           PSCI_CPU_ON: failure
    : Result:  -SKIPPED- 1
    
          *** One or more PE tests have failed... ***
    
          ***  Starting GIC tests ***
     101 : Check GIC version                 : Result:  PASS
     102 : If PCIe, then GIC implements ITS  : Result:  PASS
     103 : GIC number of Security states(2)  : Result:  PASS
     104 : GIC Maintenance Interrupt
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
    
          One or more GIC tests failed. Check Log
    
          *** Starting Timer tests ***
     201 : Check Counter Frequency           : Result:  PASS
     202 : Check EL0-Phy timer interrupt     : Result:  PASS
     203 : Check EL0-Virtual timer interrupt : Result:  PASS
     204 : Check EL2-phy timer interrupt     : Result:  PASS
     205 : Check EL2-Virtual timer interrupt
           v8.1 VHE not supported on this PE : Result:  -SKIPPED- 1
     206 : SYS Timer if PE Timer not ON
           PE Timers are not always-on.
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
     207 : CNTCTLBase & CNTBaseN access
           No System timers are defined      : Result:  -SKIPPED- 1
    
         *** Skipping remaining System timer tests ***
    
          *** One or more tests have Failed/Skipped.***
    
          *** Starting Watchdog tests ***
     301 : Check NS Watchdog Accessibility
           No Watchdogs reported          0
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
     302 : Check Watchdog WS0 interrupt
           No Watchdogs reported          0
           Failed on PE -    0 for Level=  4 : Result:  --FAIL-- 1
    
          ***One or more tests have failed... ***
    
          *** Starting PCIe tests ***
     401 : Check ECAM Presence               : Result:  PASS
     402 : Check ECAM value in MCFG table    : Result:  PASS
    
            Unexpected exception occured
            FAR reported = 0xEBDAB180
            ESR reported = 0x97800010
         -------------------------------------------------------
         Total Tests run  =   42;  Tests Passed  =   11  Tests Failed =   22
         ---------------------------------------------------------
    
          *** SBSA tests complete. Reset the system. ***
    

    As you can see there is still a lot of work to do.

    ACPI Tables View

This tool displays the content of ACPI tables in hex/ASCII format and then shows the information interpreted field by field.

What makes it more useful is the “-r 2” argument, as it enables checking the tables against the Server Base Boot Requirements (SBBR) v1.2 specification. On the SBSA reference platform with Tanmay’s firmware it lists two errors:

    ERROR: SBBR v1.2: Mandatory DBG2 table is missing
    ERROR: SBBR v1.2: Mandatory PPTT table is missing
    
    Table Statistics:
            2 Error(s)
            0 Warning(s)
    

So the situation looks good, as those can be easily added.
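
For reference, both the table dump and the checks come from the acpiview command run in the UEFI Shell; a sketch of the invocation:

    Shell> acpiview -r 2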

    CI

So we have code to check and tools to do that. Add one to the other and you have a clear need for a CI job. So I wrote one for the Linaro CI infrastructure: “LDCG SBSA firmware”. It builds the current top of QEMU and EDK2, then boots the result and runs the above tools. The results are sent to a mailing list.

    ServerReady?

    The Arm ServerReady compliance program provides a solution for servers that “just works”, allowing partners to deploy Arm servers with confidence. The program is based on industry standards and the Server Base System Architecture (SBSA) and Server Base Boot Requirement (SBBR) specifications, alongside Arm’s Server Architectural Compliance Suite (ACS). Arm ServerReady ensures that Arm-based servers work out-of-the-box, offering seamless interoperability with standard operating systems, hypervisors, and software.

In other words: if your hardware is SBSA compliant then you can go through the SBBR compliance tests and then ask for a certification sticker or something like that.

But if your hardware is not SBSA compliant then EBBR is all you can get. Far from being ServerReady. Never mind what people try to say: ServerReady requires SBBR, which requires SBSA.

    Future work

    More tests to integrate. ARM Enterprise ACS is next on my list.

    Written by Marcin Juszkiewicz on
  7. NAS update

In 2014 I bought a Synology DS214se NAS and two 4 TB hard drives. It worked fine for me for years and served files. But it was a low CPU power system with just 256 MB of RAM, so it was too easy to overload.

    Let’s move to x86-64

So a few years ago a friend was selling an ASUS M5A78L-M LX3 mainboard with an AMD FX-6300 processor. I bought it, added 8 GB of RAM from my desktop (which got an additional 16 GB instead) and put it into a Node 804 case from Fractal Design.

The case fits a MicroATX board and has plenty of space for storage (I think ten 3.5”, two 2.5” and a slot-in optical drive).

The machine got several hard drives (from other home machines or drawers):

    • WD Red 4 TB x2
    • Toshiba 2 TB
    • Samsung 1.5 TB
(Photos: the hard drives mounted in the case’s drive cages.)

    FreeNAS

I installed FreeNAS 11 on it and started using it. The machine was named ‘lumpek’ (Lumpy the Heffalump) to follow my way of naming computers.

The 4 TB drives went into a simple mirror, the 2 TB one holds less important data and the 1.5 TB one keeps virtual machines and related storage (like installation ISO files).

ZFS works nicely, and some extra FreeNAS plugins allowed me to offload some services from my desktop to the NAS (like the Transmission daemon for fetching torrents or a MySQL server for local needs).

    Memory upgrade

Many people say that a NAS machine should have ECC memory. So at some point it got 16 GB (2x 8 GB sticks) of DDR3-1866 ECC memory recovered from an old server:

    Handle 0x0026, DMI type 16, 15 bytes
    Physical Memory Array
            Location: System Board Or Motherboard
            Use: System Memory
            Error Correction Type: Single-bit ECC
            Maximum Capacity: 16 GB
            Error Information Handle: Not Provided
            Number Of Devices: 2
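
That fragment comes from dmidecode memory information (assuming the tool is installed):

    dmidecode -t memory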
    

    More disks

4 TB of space runs out one day. So I went and bought another WD Red 4 TB disk. The idea was to move the data from the mirror to some spare storage, create a new RAID-Z1 array from the 3x 4 TB drives and migrate the data back.
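
A sketch of such a migration from the command line (pool, dataset and disk names are made up; on FreeNAS you would normally do this from the web UI):

    zfs snapshot -r tank/data@migrate
    zfs send -R tank/data@migrate | zfs recv spare/data   # park data on spare storage
    zpool destroy tank                                     # drop the old mirror
    zpool create tank raidz1 da1 da2 da3                   # new RAID-Z1 from 3x 4 TB drives
    zfs send -R spare/data@migrate | zfs recv tank/data    # and move the data back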

But… Lumpek already had 4 hard drives and that was the maximum this mainboard supported.

    Dell H310 aka LSI 9211-8i

Luckily the mainboard has on-board graphics, so the PCI Express x16 slot was empty. I asked friends, checked some internet pages and ordered a used Dell H310 SAS controller. This is probably the most popular storage solution (along with the IBM M1015) in the FreeNAS community.

The card arrived with a SAS cable I did not need; the SFF-8087 breakout cables came in another order.

    Crossflashing

How to make the best use of a server-class RAID controller? Strip it of any RAID functionality ;D

It turns out that the Dell H310 is basically an LSI 9211-8i card. Which means it can be flashed with generic firmware to switch it into “initiator target” mode (also called “IT mode”). The card then presents each drive individually to the host.

There are several pages describing the process. One of them is JC-LAN. I do not remember which set of instructions I followed, but they do not differ much.

In the end I got a generic LSI SAS2008 controller:

    root@lumpek:~ # sas2flash -listall
    LSI Corporation SAS2 Flash Utility
    Version 16.00.00.00 (2013.03.01) 
    Copyright (c) 2008-2013 LSI Corporation. All rights reserved 
    
            Adapter Selected is a LSI SAS: SAS2008(B2)   
    
    Num   Ctlr            FW Ver        NVDATA        x86-BIOS         PCI Addr
    ----------------------------------------------------------------------------
    
    0  SAS2008(B2)     20.00.07.00    14.01.00.08    07.39.02.00     00:02:00:00
    
            Finished Processing Commands Successfully.
            Exiting SAS2Flash.
    root@lumpek:~ # 
    

    And as a bonus all my hard drives got a bit more bandwidth:

    da2: <ATA WDC WD40EFRX-68W 0A82> Fixed Direct Access SPC-4 SCSI device
    da2: 600.000MB/s transfers
    da2: Command Queueing enabled
    da2: 3815447MB (7814037168 512 byte sectors)
    da2: quirks=0x8<4K>
    

Not that the 300->600 MB/s transfer update changes anything with spinning rust platters ;D

    Summary

The FreeNAS based machine serves me well. Five hard drives give a lot of space for data. The 1 GbE network connection is probably my main limit now, but there are no plans so far for moving to 10 GbE cards/switch due to their price.

Virtual machines run from the NAS with good speed, and if I need something faster then I can move them to NVMe in my desktop or laptop.

    Written by Marcin Juszkiewicz on
  8. Installing Fedora on RockPro64

Continuing my tests of distribution installers. This time I installed Fedora ‘rawhide’ from the netinst ISO (2020.06.20). I fetched it, wrote it to a USB pen drive and booted. Due to U-Boot being present in the on-board SPI flash I did not have to mess with the installation media.

    Issues

    There were some issues:

    1. Panfrost failing to initialize
    2. U-Boot unable to load grub efi

    Panfrost initialization failure

The Panfrost kernel module needs a devfreq governor. The kernel has four of them, Fedora enables one. There are no dependencies between those modules, which ends with the same error as on Debian:

    panfrost ff9a0000.gpu: devfreq_add_device: Unable to find governor for the device
    panfrost ff9a0000.gpu: [drm:panfrost_devfreq_init [panfrost]] *ERROR* Couldn't initialize GPU devfreq
    panfrost ff9a0000.gpu: Fatal error during devfreq init
    panfrost: probe of ff9a0000.gpu failed with error -22
    

The solution was the same as before: boot without the ‘panfrost’ module. I stopped grub before it started and added rd.driver.blacklist=panfrost to the “linux” command. This allowed me to boot into the Fedora installer, and the system installation went smoothly.

The first boot of the installed system showed a working Panfrost driver:

    panfrost ff9a0000.gpu: clock rate = 500000000
    panfrost ff9a0000.gpu: mali-t860 id 0x860 major 0x2 minor 0x0 status 0x0
    panfrost ff9a0000.gpu: features: 00000000,100e77bf, issues: 00000000,24040400
    panfrost ff9a0000.gpu: Features: L2:0x07120206 Shader:0x00000000 Tiler:0x00000809 Mem:0x1 MMU:0x00002830 AS:0xff JS:0x7
    panfrost ff9a0000.gpu: shader_present=0xf l2_present=0x1
    [drm] Initialized panfrost 1.1.0 20180908 for ff9a0000.gpu on minor 0
    

U-Boot cannot load Grub EFI

After the reboot U-Boot was not able to load Grub from the EFI System Partition:

    Device 0: Vendor: ADATA    Rev: 1.00 Prod: USB Flash Drive 
                Type: Removable Hard Disk
                Capacity: 59200.0 MB = 57.8 GB (121241600 x 512)
    ... is now current device
    Scanning usb 0:1...
    Found EFI removable media binary efi/boot/bootaa64.efi
    libfdt fdt_check_header(): FDT_ERR_BADMAGIC
    Card did not respond to voltage select!
    Scanning disk mmc@fe310000.blk...
    Disk mmc@fe310000.blk not ready
    Card did not respond to voltage select!
    Scanning disk mmc@fe320000.blk...
    Disk mmc@fe320000.blk not ready
    Card did not respond to voltage select!
    Scanning disk sdhci@fe330000.blk...
    Disk sdhci@fe330000.blk not ready
    Scanning disk usb_mass_storage.lun0...
    ** Unrecognized filesystem type **
    ** Unrecognized filesystem type **
    Found 4 disks
    BootOrder not defined
    EFI boot manager: Cannot load any image
    858216 bytes read in 25 ms (32.7 MiB/s)
    libfdt fdt_check_header(): FDT_ERR_BADMAGIC
    System BootOrder not found.  Initializing defaults.
    Could not read \EFI\: Invalid Parameter
    Error: could not find boot options: Invalid Parameter
    start_image() returned Invalid Parameter
    ## Application terminated, r = 2
    EFI LOAD FAILED: continuing...
    

    It was already reported as ‘shim’ bug 1733817.

    How to work around it?

1. connect your Fedora storage to another computer
2. copy “/efi/fedora/grubaa64.efi” to “/efi/boot/bootaa64.efi”

This way U-Boot will find a grub EFI binary to load in the default location.
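
A sketch of those two steps done from another Linux machine (device name and mount point are examples only):

    mount /dev/sdb1 /mnt                 # the EFI System Partition of the Fedora drive
    cp /mnt/efi/fedora/grubaa64.efi /mnt/efi/boot/bootaa64.efi
    umount /mnt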

    Final effect

The board boots directly to a graphical login manager and then to a GNOME 3 session. Extreme Tux Racer and Xonotic worked out of the box. Speed-wise it feels slower than the KDE Plasma session on Debian.

    Written by Marcin Juszkiewicz on