1. April Fools’ Day

    Each year has a day when people make jokes and watch the world burn.

    2008: Let’s move to London

    Twelve years ago, when I worked with the London-based company OpenedHand, I wrote about moving to London. So many people sent congratulations, offered help with getting settled, etc. The next day I wrote that no, I was not moving to London.

    2020: Let’s move to Vancouver

    Two days ago I decided to repeat it in some way. But it was late in the evening, so I just posted this on Facebook (at 00:14 local time):

    12 years ago there was an idea to move to London. But it did not work out. Looking at the situation in Poland, I am considering a recent Amazon offer. Sure, Vancouver is far away, but it is also a nice city. And it has snow during winter.

    I can fly with one change for xmas or holidays.

    There were many questions about whether I was sure I wanted to do it. Even my wife asked ;D

    One of my friends thought that I was either serious or fishing for a raise (for him the message was posted on 31st March due to timezones).

    AArch64 For Developers

    When I wrote the above post I got a wicked idea. Why not try to fool a wider audience? With an easy target?

    A quick check of what ‘Prima Aprilis’ is called in English, and the hunt for a good (and cheap) domain started. It turned out that Polish ones were the cheapest.

    In the morning I bought the ‘afd.biz.pl’ domain for about 4 EUR and started preparations. A free Bootstrap template, some pictures from the Internet, and I started working on the page of a non-existent Polish startup specializing in computer board design.

    NUC

    Many people in the AArch64 industry would like to be able to buy a NUC-like computer. So it went first. A picture of some random Chinese product, some text, and done.

    The project date was set to 1st April 2019, status ‘cancelled’ with a simple reason: “lack of cheap PCI Express graphics chips which would fit on the mainboard”. In my opinion it is a valid reason, as for an AArch64 NUC to succeed, either an AMD Radeon or some NVIDIA GPU would need to be used. And I doubt they can be bought in small quantities.

    ATX mainboard

    But AFD company had also more interesting project — full size ATX motherboard:

    ATX motherboard

    The design is an edited page from the MSI X570-A PRO mainboard manual. I removed all the RGB light connectors, the pump fan, the additional 12V connector, and a few more elements. I copied a piece of the I/O shield so there would not be a hole there.

    Then I presented the page to Leif for review and he provided some hints on how to make it more believable.

    The technical specification was too nice to be true (the NUC mentioned above got a subset of it):

    • SBSA level 5, SBBR, ServerReady etc. compliant
    • Armv8.4 CPU with 16 cores (Cortex-A72-like performance)
    • CPU will be user-upgradable
    • CPU cooling is AMD AM4 compatible
    • up to 64GB of DDR4 memory
    • ECC support not tested yet
    • 32GB sticks not tested but should work and give 128GB of RAM
    • two M.2 slots (PCIe x4) for NVMe storage
    • PCIe setup: x16, x8, 3 x1 slots
    • six SATA III ports
    • on-board 1GbE network
    • 5.1 audio support
    • two USB 2.0 ports on the I/O shield + 4 on headers
    • six USB 3.1 ports (4 Type-A, 2 Type-C) on the I/O shield + 4 on headers
    • serial console on the I/O shield

    The final price was set to be around 800 EUR, with an option to buy the motherboard separately from the CPU, as there are companies where you can buy stuff quite easily if you fit within 500 USD.

    I also added something for Openmoko fans — the first 50 boards go to FOSS developers (free of charge).

    Hints and guesses

    Some hints that it was fake were present. The project date was set to 1st April 2020, and the CPU was listed as Armv8.4 while there are no such ones on the market.

    I got several guesses from people: the CPU being a 16-core Ampere Skylark, the mainboard being made by MSI using an AMD southbridge for all the I/O:

    Using AMD’s southbridges makes a lot of sense anyway, those are really just PCIe-multi-I/O chips anyway. That way, the CPU just needs to be connected to RAM, PCIe and a BIOS flash, while the southbridge can provide USB, SATA, audio, i2c, etc along with additional PCIe lanes.

    AFD names

    Of course AFD means April Fools’ Day. For a moment I wanted to use the name AArch64 For Developers on the website, but it did not fit the scheme. I also got Alternativ Für Datencenter as a possible name.

    Conclusion

    It was fun. I may repeat it one day, but with a bit more preparation.

    Written by Marcin Juszkiewicz on
  2. Kapturek, Gossamer, Blossom!

    A few days ago I got my company laptop refreshed (from a Lenovo ThinkPad T460s to a T490s). So it needed a new name…

    I name my personal devices after characters from the Winnie-the-Pooh books. More info about it is in my blog post from 2012.

    But this is company laptop…

    The rule above applies only to my own machines (or ones I maintain). For company hardware I use names from other stories. And those characters have to be red.

    In 2013 I got the first laptop. It got the name ‘kapturek’ after “Czerwony Kapturek” (Little Red Riding Hood).

    As we replace laptops every three years, 2016/7 brought another one. I could have just moved the operating system from one machine to another, but a new machine means a new name.

    It became ‘gossamer’ after the monster from Looney Tunes:

    Gossamer from Looney Tunes (on the right)

    Today I am installing the operating system on the 3rd laptop. Again, which name to choose…

    A quick web search gave me some characters to choose from. So this laptop will be called ‘blossom’ after Blossom from The Powerpuff Girls series:

    Blossom from The Powerpuff Girls

    Now I have some time before choosing the next name ;D

  3. Sharing PCIe cards across architectures

    Some days ago, during one of our conference calls, a co-worker asked:

    Has anyone ever tried PCI forwarding to an ARM VM on an x86 box?

    As my machine was open, I just turned it off and inserted a SATA controller into one of the unused PCI Express slots. After boot I started one of my AArch64 CirrOS VM instances and gave it this card. It worked perfectly:

    [   21.603194] pcieport 0000:00:01.0: pciehp: Slot(0): Attention button pressed
    [   21.603849] pcieport 0000:00:01.0: pciehp: Slot(0) Powering on due to button press
    [   21.604124] pcieport 0000:00:01.0: pciehp: Slot(0): Card present
    [   21.604156] pcieport 0000:00:01.0: pciehp: Slot(0): Link Up
    [   21.739977] pci 0000:01:00.0: [1b21:0612] type 00 class 0x010601
    [   21.740159] pci 0000:01:00.0: reg 0x10: [io  0x0000-0x0007]
    [   21.740199] pci 0000:01:00.0: reg 0x14: [io  0x0000-0x0003]
    [   21.740235] pci 0000:01:00.0: reg 0x18: [io  0x0000-0x0007]
    [   21.740271] pci 0000:01:00.0: reg 0x1c: [io  0x0000-0x0003]
    [   21.740306] pci 0000:01:00.0: reg 0x20: [io  0x0000-0x001f]
    [   21.740416] pci 0000:01:00.0: reg 0x24: [mem 0x00000000-0x000001ff]
    [   21.742660] pci 0000:01:00.0: BAR 5: assigned [mem 0x10000000-0x100001ff]
    [   21.742709] pci 0000:01:00.0: BAR 4: assigned [io  0x1000-0x101f]
    [   21.742770] pci 0000:01:00.0: BAR 0: assigned [io  0x1020-0x1027]
    [   21.742803] pci 0000:01:00.0: BAR 2: assigned [io  0x1028-0x102f]
    [   21.742834] pci 0000:01:00.0: BAR 1: assigned [io  0x1030-0x1033]
    [   21.742866] pci 0000:01:00.0: BAR 3: assigned [io  0x1034-0x1037]
    [   21.742935] pcieport 0000:00:01.0: PCI bridge to [bus 01]
    [   21.742961] pcieport 0000:00:01.0:   bridge window [io  0x1000-0x1fff]
    [   21.744805] pcieport 0000:00:01.0:   bridge window [mem 0x10000000-0x101fffff]
    [   21.745749] pcieport 0000:00:01.0:   bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
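
    For reference, handing the card to the guest is standard VFIO passthrough. In libvirt terms it is a hostdev entry in the domain XML, roughly like this (the host PCI address 0000:01:00.0 is illustrative; use the one lspci reports for your card):

```xml
<!-- Illustrative libvirt hostdev stanza for VFIO PCI passthrough.
     The host address below is an example; take yours from lspci. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```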
    

    Let’s go deeper

    The next day I turned off the desktop for a CPU cooler upgrade. During the process I went through my box of expansion cards and plugged in an additional USB 3.0 controller (Renesas based). I also added a SATA hard drive and connected it to the previously added controller.

    Once the computer was back online I created a new VM instance. This time I used the Fedora 32 beta. But when I tried to add a PCI Express card I got an error:

    Error while starting domain: internal error: process exited while connecting to monitor: 2020-03-25T13:43:39.107524Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: VFIO_MAP_DMA: -22
    2020-03-25T13:43:39.107560Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: vfio 0000:29:00.0: failed to setup container for group 28: memory listener initialization failed: Region mach-virt.ram: vfio_dma_map(0x563169753c80, 0x40000000, 0x100000000, 0x7fb2a3e00000) = -22 (Invalid argument)
    
    Traceback (most recent call last):
      File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
        callback(asyncjob, *args, **kwargs)
      File "/usr/share/virt-manager/virtManager/asyncjob.py", line 111, in tmpcb
        callback(*args, **kwargs)
      File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 66, in newfn
        ret = fn(self, *args, **kwargs)
      File "/usr/share/virt-manager/virtManager/object/domain.py", line 1279, in startup
        self._backend.create()
      File "/usr/lib64/python3.8/site-packages/libvirt.py", line 1234, in create
        if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
    libvirt.libvirtError: internal error: process exited while connecting to monitor: 2020-03-25T13:43:39.107524Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: VFIO_MAP_DMA: -22
    2020-03-25T13:43:39.107560Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: vfio 0000:29:00.0: failed to setup container for group 28: memory listener initialization failed: Region mach-virt.ram: vfio_dma_map(0x563169753c80, 0x40000000, 0x100000000, 0x7fb2a3e00000) = -22 (Invalid argument)
    

    Hmm. It worked before. I tried the other card — with the same effect.

    Debugging

    I went to the #qemu IRC channel and started discussing the issue with QEMU developers. It turned out that probably no one had tried sharing expansion cards with a foreign-architecture guest (in TCG mode instead of same-architecture KVM mode).

    As I had a VM instance where sharing the card worked, I started checking what was wrong. After some restarts it was clear that crossing 3054 MB of guest memory was enough to get VFIO errors like the ones above.
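
    The obvious workaround that follows: keep guest RAM below that point. In the libvirt domain XML it is just the memory elements (3000 MiB here mirrors the limit I observed on my setup; treat it as a sketch, not an official bound):

```xml
<!-- Sketch: keep guest RAM under the ~3 GB point where VFIO mappings
     started to fail for me; the exact safe value may differ per host. -->
<memory unit='MiB'>3000</memory>
<currentMemory unit='MiB'>3000</currentMemory>
```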

    Reporting

    An issue not reported does not exist. So I opened a bug against QEMU and filled it with error messages, “lspci” output for the used cards, the QEMU command line (generated by libvirt), etc.

    It looks like the problem lies in architecture differences between x86-64 (host) and AArch64 (guest). Let me quote Alex Williamson:

    The issue is that the device needs to be able to DMA into guest RAM, and to do that transparently (ie. the guest doesn’t know it’s being virtualized), we need to map GPAs into the host IOMMU such that the guest interacts with the device in terms of GPAs, the host IOMMU translates that to HPAs. Thus the IOMMU needs to support GPA range of the guest as IOVA. However, there are ranges of IOVA space that the host IOMMU cannot map, for example the MSI range here is handled by the interrupt remmapper, not the DMA translation portion of the IOMMU (on physical ARM systems these are one-in-the-same, on x86 they are different components, using different mapping interfaces of the IOMMU). Therefore if the guest programmed the device to perform a DMA to 0xfee00000, the host IOMMU would see that as an MSI, not a DMA. When we do an x86 VM on and x86 host, both the host and the guest have complimentary reserved regions, which avoids this issue.

    Also, to expand on what I mentioned on IRC, every x86 host is going to have some reserved range below 4G for this purpose, but if the aarch64 VM has no requirements for memory below 4G, the starting GPA for the VM could be at or above 4G and avoid this issue.

    I have to admit that this is too low-level for me. I hope that the problem I hit will help someone to improve QEMU.

  4. CirrOS 0.5.0 released

    Someone may say that I am the main reason why the CirrOS project does releases.

    In 2016 I got a task at Linaro to get it running on AArch64. More details are in my blog post ‘my work on changing CirrOS images’. The result was the 0.4.0 release.

    Last year I got another task at Linaro. So we released version 0.5.0 today.

    But that’s not how it happened.

    Multiple contributors

    Since the 0.4.0 release there have been changes from several developers.

    Robin H. Johnson took care of kernel modules. He added new ones, updated names, and also added several new features.

    Murilo Opsfelder Araujo fixed the build on Ubuntu 16.04.3, as gcc had changed its preprocessor output.

    Jens Harbott took care of the lack of space for data read from the config drive.

    Paul Martin upgraded the CirrOS build system to Buildroot 2019.02.1 and bumped the kernel/grub versions.

    Maciej Józefczyk took care of metadata requests.

    Marcin Sobczyk fixed the starting of Dropbear and dropped the creation of the DSS ssh key, which was no longer supported.

    My Linaro work

    At Linaro I got a Jira card titled “Upgrade CirrOS’ kernel to Ubuntu 18.04’s kernel”.

    This was needed as the 4.4 kernel was far too old and gave us several booting issues. Internally we had builds with the 4.15 kernel, but it should be done properly and upstream.

    So I fetched the code, did some test builds and started looking at how to improve the situation. I spoke with Scott Moser (the owner of the CirrOS project) and he told me about his plans to migrate from Launchpad to GitHub. So we did that in December 2019, and then the fun started.

    Continuous Integration

    GitHub has several ways of adding CI to projects. First we tried GitHub Actions, but it turned out to be a paid service. I looked around and then decided to go with Travis CI.

    Scott generated all the required keys and the integration started. Soon we had every pull request going through CI. Then I added a simple script (bin/test-boot) so each image was booted after build. Scott improved the script and fixed a Power boot issue.

    The next step was caching downloads and ccache files. This was a huge improvement!
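
    The caching part is only a few lines of ‘.travis.yml’; a minimal sketch (the directory name is illustrative, the real one depends on where CirrOS keeps its downloads):

```yaml
# Sketch of Travis CI caching: keep ccache output and downloaded
# sources between builds so only changed parts get rebuilt.
cache:
  ccache: true
  directories:
    - download   # illustrative path for fetched tarballs
```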

    In the meantime Travis bumped the free service to 5 simultaneous builders, which made our builds even faster.

    CirrOS supports building only under Ubuntu LTS. But I use Fedora, so we merged two changes to make sure the proper ‘grub(2)-mkimage’ command is used.

    Kernel changes

    The 4.4 kernel had to go. The first idea was to move to 4.18 from the Ubuntu 18.04 release. But if we upgrade, then why not go for the HWE one? I checked the 5.0 and 5.3 versions. As both worked fine, we decided to go with the newer one.

    Modules changes

    During startup of a CirrOS image several kernel modules are loaded. But there were several “no kernel module found”-like messages for built-in ones.

    We took care of it by querying the /sys/module/ directory, so now module loading is a quiet process. At the end, a list of the loaded ones is printed.
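
    The idea can be sketched in a few lines of shell (the module name is an example and the script is simplified compared to what CirrOS actually runs):

```shell
# Sketch: before calling modprobe, look into /sys/module/ - built-in and
# already-loaded modules have a directory there (with '-' mapped to '_').
is_present() {
    name=$(echo "$1" | tr '-' '_')
    [ -d "/sys/module/$name" ]
}

mod="virtio-rng"                       # example module name
if is_present "$mod"; then
    echo "$mod: built-in or already loaded"
else
    modprobe "$mod" 2>/dev/null || echo "$mod: not available"
fi
```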

    VirtIO changes

    A lot has happened since the 4.4 kernels. So we added several VirtIO modules.

    One of the results is a working graphical console on AArch64, thanks to ‘virtio-gpu’ providing a framebuffer and ‘hid-generic’ handling USB input devices.

    As lack of entropy is a common issue in VM instances, we added the ‘virtio-rng’ module. No more ‘uninitialized urandom read’ messages from the kernel.

    Final words

    Yesterday Scott created the 0.5.0 tag and CI built all the release images. Then I wrote the release notes (based on the ones from the pre-releases). The Kolla project got a patch to move to the new version.

    When is the next release? Looking at history, someone may say 2023, as the previous one was in 2016. But who knows. Maybe we will get someone with a “please add s390x support” question ;D

  5. My whole career is built on FOSS

    Some time ago on one of the Red Hat mailing lists someone asked “how has open source helped your career?”. There were several interesting stories. I had mine as well.

    2000

    My first contribution to FOSS: updating the Debian ‘potato’ installation guide for Amiga/m68k. I was writing an article about installing Debian for a new Amiga magazine called ‘eXec’. So why not update the official instructions at the same time?

    2002

    Probably my first code contribution: a small change to MPlayer. I had completely forgotten about it, but as the project was changing its license in 2017, I got an email about it.

    2004

    I bought my 3rd PDA (a Sharp Zaurus SL-5500) and it was running Linux. I started building apps for it and hacking the system to run better. Then I cooperated with the OpenZaurus distro developers and started contributing to the OpenEmbedded build system. One day they gave me write access to the repo and told me to merge my changes.

    When I stopped using OE a few years later, I was 5th on the list of top contributors.

    I also count this year as the first one of my FOSS career.

    2005

    Richard Jackson donated a Zaurus C760 to me as a gift for my OpenZaurus work. And then the OPIE 1.2.2 release came, due to my changes to make better use of the VGA screen. I still have this device in running condition.

    2006

    I became the release manager of the OpenZaurus distribution, with a team of users testing pre-release images. Released version 3.5.4 (and later 3.5.4.1 and 3.5.4.2-rc).

    I started my own consulting company. Got some serious customers. The end of my work as a PHP programmer.

    2007 - 2010

    I was doing what had been a hobby as a full-time job. Full FOSS work. Different companies, the ARM architecture 95% of the time. Mostly consulting around OpenEmbedded.

    2010

    Due to my ARM FOSS involvement, Canonical hired me. I started working at Linaro as a software engineer. I cleaned up cross-compilers in Ubuntu/Debian, among several other things.

    2012

    I became one of the first AArch64 developers and published OpenEmbedded support for it right after all the toolchain patches became public.

    2013

    I left Linaro and Canonical, wrote about it on the blog, and in less than an hour got a “send me your CV” from Jon Masters of Red Hat. I joined the company and made a lot of changes in RHEL 7 and Fedora — mostly fixing FTBFS on !x86 architectures.

    2016

    My manager asked me whether I wanted to go back to Linaro, this time as a Red Hat assignee. I went, met old chaps, and have been working mostly around OpenStack. Still on 64-bit Arm.

    2017 - 2020

    A lot of work in OpenStack. Some work on Big Data stuff for another team at Linaro. Countless projects where I worked on getting stuff working on AArch64.

    Summary

    My whole career is built on FOSS.

    My x86(-64) desktop has run GNU/Linux as its main system since day one (September 2000). There was OpenDOS as a second one during my studies, due to some stuff.

    I had MS Windows XP as a second system on one of my laptops. But that was due to some Arm hardware bring-up tool being available only for this OS (later also for Linux). My family and friends learnt that I am unable to help them with MS Windows issues, as I do not know that OS.

  6. FOSDEM 2020

    FOSDEM. In my opinion the best IT conference. Each year. And I was there for the 12th time.

    An insane amount of talks (893 this time) lets you choose more than it is possible to see. Which is good, because with thousands of attendees it is often impossible to enter the room. Having some headphones helps, as everything is live-streamed, and later there are videos to download (mostly after the conference, as they go through a review process).

    Friday

    Woke up at 3:45, shower, breakfast, taxi at 4:30, bus at 5:00, plane at 8:40 (TXL -> BRU). Met a friend on the bus, watched the 2nd episode of “Star Trek: Picard” and tried to conserve energy for the rest of the day.

    Landed, took a train to Brussels Nord, added some tickets to my MOBIB card and went to the CentOS Dojo, as usual on FOSDEM’s Friday.

    The Cloud SIG talk turned into a discussion of how many projects are waiting for their packages. Thorsten’s talk about CentOS on the desktop had some interesting points — you install once and use it until the death of the hardware. Then I went to a talk about Software Collections. It was Jan’s first presentation ever and it went quite OK.

    I skipped the armhfp talk and went for some food. And then hours at Delirium. I know that the crowd makes more and more people go somewhere else. I go there as it is an easy way to chat with friends or people who recognize me.

    Saturday

    Took the 71 bus at the first stop and then went for talks…

    Thorsten gave a great talk about changes in the Linux kernel over twenty years. I was a bit late, so I watched the whole talk on my way back home.

    The next one was about selfish contributors, by James Bottomley. A great one! Interesting comparisons. Worth watching.

    I skipped some talks from my list and talked with several friends instead.

    The next one was in AW - the Stylo editor. It looked like something made by academics for academics. An interesting approach. Not something for me, but I understand why it was created.

    UEFI: edk2 and U-Boot. Two interesting talks, one after another. I wonder whether my home desktop would pass the SCT.

    I wanted to attend the talk about loading fonts, but the room was already full. So I decided to visit the Embedded room instead.

    The talk about Yocto Project tools was boring. But it turned out that a friend leads the devroom, so we had a nice talk during the break.

    The sudo talk was full, so I decided to go back to the hotel. I dropped my stuff and went for some food and beer with friends.

    Sunday

    An early wake-up, breakfast, and then we took the risk of catching the 71 at the 3rd stop. An interesting challenge. We managed to squeeze into the second bus.

    First was one about Thunderbird. There are interesting changes coming: Enigmail and Lightning will be integrated, and a new UI will come too.

    Then the Community room. During the first talk Matt listed several chat platforms used by the current generation of contributors. IRC was not one of them. I am old.

    Next was about ethics in Open Source. Kind of “are all four freedoms always needed?”. Worth watching.

    Then a pile of bad luck. The Virtualization devroom was full. I had planned to attend the talk about virtio-fs, which just went into the kernel and is landing in QEMU.

    The Open Source Design one had a long queue. Once I managed to enter the room, I left. No seats, so UI/UX tricks have to wait in the queue of videos to watch.

    I went back to the K building, took my fleece from the cloakroom (left there a day before) and went for the “coreboot on AMD processors” talk. I met Marek, so I decided to let other people in. It was in the “go for fun” category anyway, as I no longer have any hardware supported by this project.

    As I got tired, there could be just one decision: go to Janson to find a good seat for Maddog’s talk and everything after it.

    The talk about postmarketOS and Maemo Leste showed that not much has changed since the Openmoko times. A long list of different attempts to make an OS for mobile phones. And there are people still using the Nokia N900 “so-called phone”.

    Maddog was old as usual. A great talk. Definitely one to watch if you missed it.

    And then there was one about FOSDEM history. Several facts, and a funny moment with the name of the organizer of the first OSDEM (watch the video!). I have to watch it again, as the presenter was quite hard to understand. It turned out that attendees took “come in the oldest FOSDEM t-shirt you have” seriously, as every year was covered!

    Exhausted, I went to the hotel. Dropped my stuff, ate something and went for a beer with friends. And sleep. Without friends.

    Monday

    This time an OpenEmbedded workshop organized by Philip Balister. It was good to meet some old friends.

    Talks went from containers to BSPs, signing binaries, and some other stuff I skipped.

    The last talk was about the past, present and future. Here I tried to help with some facts. And the idea of collecting the history of OE came up. I will look at it - a wiki page should be enough. Just finding facts and the people who remember will take time.

    Videos

    During the meeting I downloaded a set of videos for the way back home.

    “18 years of Blender” was great! I spent nearly an hour at TXL airport watching “Elephant’s Dream”, “Big Buck Bunny”, “Sintel” and “Tears of Steel”. I will watch the rest of them another day.

    It turned out that the talk about font loading was worth fetching. I have to check my blog and maybe do some tweaks.

    The one about coreboot was as I expected. I skipped most of it, just checking what was on the slides.

    The Nouveau status update I finished even faster. I will check the website to see whether there is any hope of using it on a GTX 1050 Ti, as so far I use the closed-source driver.

    Summary

    Will I go next year? Sure I will — I cannot afford not to be there.

  7. The most expensive chip in the ARM world

    The Arm world is weird. You can have a server (expensive, UEFI, supported out of the box by any mainline distribution, normal SATA/NVMe storage) or an SBC (cheap, usually U-Boot, not supported for months by any distribution, mostly microSD/USB storage). But I do not want to talk about servers today.

    Bootloader(s) mess

    So an SBC… Which means playing with U-Boot. In the past it meant a random fork of it. I have a feeling (I may be wrong) that this has changed. A board now starts with the mainline one, or a quite fresh fork with mainlining in progress. Similar with ATF (Arm Trusted Firmware - the level-0 bootloader).

    So you have an SBC, and you have the bootloader(s) built. The next step is taking a microSD card and finding out which offsets to use. Then you put the bootloader(s) at the proper places so the CPU can read them and boot. Usually it also means that you have to use MBR-style formatting, as the bootloader sits where the GPT would be.
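
    As a concrete sketch of that dance (the 8 KiB offset is the Allwinner convention; Rockchip, Amlogic and others expect different offsets, so check your board documentation; dummy files stand in for the real SPL image and the card device):

```shell
# Writing the bootloader at the raw offset the SoC boot ROM reads from.
# Stand-ins keep the sketch self-contained; on real hardware the input
# is your U-Boot+SPL image and the output is the card, e.g. /dev/sdX.
printf 'SPL!' > u-boot-with-spl.bin                      # dummy bootloader
dd if=/dev/zero of=disk.img bs=1M count=16 status=none   # dummy card
# Allwinner-style: SPL at 8 KiB from the start of the device.
dd if=u-boot-with-spl.bin of=disk.img bs=1024 seek=8 conv=notrunc status=none
```

    This offset is also why MBR comes into play: with 512-byte sectors, the primary GPT header and partition entries occupy the first 17 KiB of the disk, exactly where many boot ROMs expect to find the bootloader.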

    Better way maybe?

    Is there a better way? Sure there is. But it is very expensive. Or at least I have a feeling that it is for most SBC vendors.

    Very expensive solution

    The solution is an SPI flash chip. But it is very expensive - 3 (three) EUR for a 512 Mb (64 MB) chip (if you buy 1000 of them). Far too much for 99.99% of SBCs, probably.

    Price taken from a simple search on Digi-Key: W25Q512JV 512 Mb Serial Flash Memory.

    64 megabytes is more than enough to store everything: bootloader(s), U-Boot/UEFI settings, etc. You can even fit a small OS image there.

    Probably 64/128 Mb would even be enough. And those are cheaper (128 Mb is 1.7 EUR). Still too expensive…

  8. 2019 review: FOSS projects

    “Whatever you do, do it upstream” is IMHO one of the mottos worth following. So I went upstream wherever possible.

    Python

    2020 came, and Python 2 is no more. So 2019 was full of cleaning and patching. I filed several bugs on projects and created patches to mark some Python packages as Py2-only (or <Py3.5-only). Some projects just dropped support for Py2 packages because of it.

    pbr story

    With the move to Python 3 came some issues with Unicode characters in the README files of packages used by OpenStack components. I filed some bugs, created patches and then, after several emails, someone found out that it was a bug in ‘pbr’ and fixed it ;D

    manylinux2014

    Installing Python packages on AArch64 can be painful. While “pip install numpy” takes a few seconds on an x86-64 machine, it can take an hour on Arm64, plus extra time to find out which libraries and compilers need to be installed.

    So Python developers created PEP 599 to sort out the situation and create the ‘manylinux2014’ target, which can then be used to build ‘wheels’ (aka binary Python packages) not only for x86(-64) but also for 32-bit Arm, 64-bit Arm, 64-bit Power (big and little endian) and z/Architecture (s390x).

    Somehow I managed to get involved in it. I went through the pull request on GitHub, checked the build, fixed dependencies and some other issues, filed CentOS bugs, discussed fixes with CentOS developers, and then one day all that stuff got merged.

    Now you can fetch “manylinux2014” images from the “pypa” account on the Quay.io container registry and build ‘wheel’ binaries for different architectures. If you lack target hardware, then consider services like Travis CI, which provide access (free for FOSS projects).

    CirrOS

    Speaking of Travis CI… Near the end of the year I got a task from one of the Linaro teams to update CirrOS images with kernels from Ubuntu 18.04 or later.

    I fetched the current source, built images and started wondering how best to do it. So I started discussing with Scott Moser (the main developer) and we agreed that such a move made sense and that some changes were needed in this project.

    In December we moved it from Launchpad/cirros to GitHub/cirros-dev/cirros. We opened several issues so as not to forget what we planned, and I started looking at some of them.

    The project is now using the Travis CI service for tests. With all builds run in parallel, we have checks done in less than an hour, including a boot test for the aarch64, arm, i386 and x86_64 architectures (ppc/ppc64/ppc64le need work).

    Doing releases on GitHub is one of the next tasks. The current service will redirect.

    Linaro ERP

    After the ERP 18.12 release we decided that there would be no more releases and that Debian ‘buster’ would be used as-is (ERP was based on Debian ‘stretch’). So I spent some time working with the Debian kernel maintainers.

    The most important part? Merging kernel configuration changes. We had about twenty extra options enabled to get our servers supported. Everything worth using got merged. In the meantime we even found a bug in the linux-stable tree, which took some time to fix.

    We were ready for the Debian ‘buster’ release. And it worked out of the box on all the machines we supported ;D

    Big Data

    Bleh. Java, protobuf 2.5 and other madness… But as a build engineer I get such stuff from time to time.

    Last year it was mostly Apache Drill and Apache Arrow. In both cases I worked on their Debian packaging (not the in-Debian one) to add/fix AArch64 support.

    Other projects still wait for a ‘let us move to a newer protobuf’ moment, but it will take years…
