1. We need some thermite…

    Time flies and it is that time of year when the Linaro Enterprise Group works on a new release. And as usual, the jokes about the lack of thermite start…

    Someone may ask “Why?”. The reason is simple: the X-Gene 1 processor. I think its hate club grows and grows with time.

    When it was released it was a nice processor: eight cores, normal SATA, PCI Express, USB, DDR3 memory with ECC etc. It was used for distribution builders, development platforms and so on. Not that there was any choice :D

    Nowadays, with all those other AArch64 processors on the market, it starts to be painful. PCI support requires quirks, the serial console requires patching etc. We have the X-Gene 1 in Applied Micro Mustang servers and HPE Moonshot M400 cartridges. Maybe those machines are not officially listed as supported, but we still use them, so testing that a new release works there has to be done.

    And each time there are some issues to work around. Some could probably be fixed with firmware updates, but I do not know whether vendors still support that hardware.

    So if you have some spare thermite (and a way to handle it legally) then contact us.

    Written by Marcin Juszkiewicz
  2. Is my work on Kolla done?

    During the last few months I was working on getting Kolla running on the AArch64 and POWER architectures. It was a long journey with several bumps, but it has finally ended.

    When I started in January I had no idea how much work it would be or how it would go. I just followed my typical “give me something to build and I will build it” style. You can find some background information in my previous post about my work on Kolla.

    There were a lot of failures at the beginning. Or rather: only a small number of images built at all. So my first merged change was to do something about Kolla’s output ;D

    • build: sort list of built/failed images before printing

    Debian support was marked for removal, so I first checked how it looked, then enabled all possible images and migrated from the ‘jessie’ to the ‘stretch’ release. The reason was simple: ‘jessie’ (even with backports) lacked packages required to build some images.

    • debian: import key for download.ceph.com repository
    • debian: install gnupg and dirmngr needed for apt-key
    • debian: enable all images enabled for Ubuntu
    • handle rtslib(-fb) package names and dependencies
    • debian: move to stretch
    • Debian 9 was not released yet

    Both the YUM and APT package managers got some changes. For the first one I made sure that it fails when packages are missing (which happened very often during builds for aarch64/ppc64le). That allowed me to catch a typo in the ‘ironic-conductor’ image. In other words: I made YUM behave more like APT (which always complains about missing packages). Then I changed APT to behave more like YUM by making sure that the package lists are updated before packages are installed (see the sketch after the list below).

    • ironic-conductor: add missing comma for centos/source build
    • make yum fail on missing packages
    • always update APT lists when install packages
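
    To illustrate the difference in semantics, here is a rough sketch in Python (not the actual Kolla Dockerfile macros; the helper names are mine):

        import subprocess

        def yum_install_strict(packages):
            # Plain 'yum install -y' can exit successfully even when some of
            # the requested packages do not exist, which hides typos and
            # missing repositories. One way to emulate APT-like strictness is
            # to verify afterwards that every requested package really got
            # installed.
            subprocess.run(["yum", "install", "-y", *packages], check=True)
            for pkg in packages:
                # 'rpm -q' exits non-zero when a package is not installed.
                subprocess.run(["rpm", "-q", pkg], check=True)

        def apt_install(packages):
            # APT already refuses to install unknown packages, but its package
            # lists have to be refreshed first, otherwise installs of recently
            # added packages fail with "Unable to locate package".
            subprocess.run(["apt-get", "update"], check=True)
            subprocess.run(["apt-get", "install", "-y", *packages], check=True)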

    Of course many images could not be built at all for the aarch64/ppc64le architectures, mostly due to a lack of packages and/or external repositories. For each case I checked whether there was some way of fixing it. Sometimes I had to disable an image, sometimes update a package to a newer version. There were also discussions with maintainers of external repositories about getting their stuff available for non-x86 architectures.

    • kubernetes: disable for architectures other than x86-64
    • gnocchi-base: add some devel packages for non-x86
    • ironic-pxe: handle non-x86 architectures
    • openstack-base: Percona-Server is x86-64 only
    • mariadb: handle lack of external repos on non x86
    • grafana: disable for non-x86
    • helm-repository: update to v2.3.0
    • helm-repository: make it work on non-x86
    • kubetoolbox: mark as x86-64 only
    • magnum-conductor: mark as x86-64 only
    • nova-libvirt: handle ppc64le
    • ceph: take care of ceph-fuse package availability
    • handle mariadb for aarch64/ubuntu/source
    • opendaylight: get it working on CentOS/non-x86
    • kolla-toolbox: use proper mariadb packages on CentOS/non-x86

    At some moment I had over ten patches in review and all of them depended on the base one. So with any change I had to refresh the whole series and reviewers had to review everything again… It was painful. So I decided to split out the most basic stuff and break the whole patch set into separate pieces. After the “base_arch” variable was merged (see the template sketch after the list below), life became much simpler for reviewers and a bit more complicated for me, as from then on each patch was kept in a separate git branch.

    • add base_arch variable for future non-x86 work
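
    Kolla’s Dockerfiles are generated from Jinja2 templates, so a variable like this can drive per-architecture branching. A minimal sketch of the idea (my illustration, not the exact Kolla macro; the package names are only examples):

        from jinja2 import Template

        # One template, with the package choice branched on the target
        # architecture via the 'base_arch' variable.
        fragment = Template(
            "{% if base_arch == 'x86_64' %}"
            "RUN yum -y install percona-server\n"
            "{% else %}"
            "RUN yum -y install mariadb-server\n"
            "{% endif %}"
        )

        print(fragment.render(base_arch="aarch64"))  # -> RUN yum -y install mariadb-server
        print(fragment.render(base_arch="x86_64"))   # -> RUN yum -y install percona-server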

    At Linaro we support CentOS and Debian. Kolla supports CentOS/RHEL/OracleLinux, Debian and Ubuntu. I was not doing builds with RHEL or OracleLinux, but had to make sure that the Ubuntu ones work too. There was a funny moment when I realised that everyone using Kolla/master was building images with Ocata packages instead of Pike ;D

    • Ubuntu: use Pike repository

    But all those patches meant “nothing” without the first one. Kolla had information about which packages are available for the aarch64/ppc64le/x86-64 architectures, but still had no idea that aarch64 or ppc64le exist. Finally the 50th revision of the patch got merged, so now it knows ;D

    • Support non-x86 architectures (aarch64, ppc64le)

    I also learnt a lot about Gerrit and code reviews. OpenStack community members were very helpful with their comments and suggestions. We spent hours talking on the #openstack-kolla IRC channel. Thanks go to Alicja, Duong Ha-Quang, Jeffrey, Kurt, Mauricio, Michał, Qin Wang, Steven, Surya Prakash, Eduardo, Paul, Sajauddin and many others. You people rock!

    So is my work on Kolla done now? Some of it is. But we still need to test the resulting images, make a Docker repository with them and update the official documentation with information on how to deploy on aarch64 boxes (I hope no changes will be needed). We also need to make sure that the OpenStack Kolla CI gets Debian based gates operational and provide it with a third-party AArch64 based CI so that new changes can be checked.

    Written by Marcin Juszkiewicz
  3. So you run OpenStack on your phone?

    For about a year I have been working on OpenStack on the AArch64 architecture. And the question from the title gets asked from time to time, in this or other forms.

    Yes, I do have an AArch64 powered phone nowadays. But it has just 4GB of memory and runs Android, so it is not a good platform for using OpenStack.

    I am aware that for many people anything which comes from ARM Ltd means small, embedded, not worth serious effort etc. For me they are not wrong — they are just ‘not up to date’.

    We have servers. Sure, someone can say that we had them years ago and they would be right too. There were Marvell server boards, and Calxeda had their “high density” boxes with a huge number of quad-core CPUs. But now we have ‘boring’ ones which can be used in the same way as x86-64 ones.

    ARM Ltd published the SBSA and SBBR specifications which define what an ARM server is nowadays. The short version is “a boring box which you put into a rack, plug in power and network, power on and install any Enterprise Linux distribution on”. No need to deal with weird bootloaders (looking from a server perspective), random kernel versions etc. Just unpack, connect and use.

    But what do you get inside? It depends on the product. It can be one CPU with 8 cores, but it can also be 1-2 CPUs with 48 cores per CPU. Or even more (I heard about 240 CPU core products but have no idea whether they are on the market now). And processors mean memory. What about 1TB (terabyte) of memory per CPU? Cavium ThunderX mainboards allow such a setup with 8 memory DIMMs per CPU.

    Then there is the network. With 32-bit ARM machines the problem was “will it support 1GbE?” and with AArch64 servers that problem can reappear too, as some systems do not support ports slower than 10GbE (some ThunderX boards have 3x40GbE + 4x10GbE ports). The RJ-45 connector is usually there to connect to the BMC (think IPMI).

    Storage is Serial ATA, whatever you plug into PCI Express, or something on the network. Choose your way. I would not be surprised by M.2 connectors either.

    Usually that means that several PCI Express chips are present on the board to provide all of that. On AArch64 most of the controllers are already part of the SoC, which makes things easier and faster.

    On top of that we run standard distributions like CentOS, Debian, Fedora or OpenSUSE. Out of the box, with distro kernels based on mainline ones. And then we install OpenStack. From packages, as Docker containers, using devstack or any other way we tend to use.

    And when I really have to use OpenStack on my phone, it looks like this:

    OpenStack dashboard on a phone
    Written by Marcin Juszkiewicz
  4. First 96boards Enterprise board which will be on the market?

    I am at Linaro Connect in Budapest, Hungary. And on Arrow’s stand I noticed something I did not expect — a 96boards Enterprise Edition form factor board.

    In the past Linaro presented the ‘Husky’ and ‘Cello’ devboards in the 96boards EE form factor. Neither of them ever reached production. Only a few prototypes existed (I had some of them in my hands). Both products were complete failures.

    The Systart Oxalis LS1012A got announced about a month ago. They target routers, IoT gateways and similar devices with it.

    System on Module on carrier board

    As you can see, the board has ports all over the edges, but that’s the fault of the 96boards EE specification, which mandates such broken designs. When I saw it for the first time my question was “where is the PCIe slot?”, but I found out that (according to the spec) it is optional. The board has a mini-PCIe slot on the bottom side anyway.

    Speaking of design… Oxalis is made of two parts: a carrier board and a SoM (System on Module). The SoM is based on the NXP QorIQ® LS1012A network processor (a single ARM Cortex-A53 core running at up to 800 MHz) with 64MB of SPI flash (space for a bootloader!) and 1GB of memory. The carrier board provides two GbE network ports, two USB 3.0 connectors, the standard 96boards header, one SATA port (with power!), microSD and a mini-PCIe slot (on the bottom side).

    System on Module top view
    System on Module bottom view

    The beauty of such a design is that you can replace the CPU board with something different. According to Dieter Kiermaier from Arrow there are plans for other SoM boards in the future.

    Carrier board

    Will it be a success? Time will tell. Will I buy it? Rather not, as for my development I need 16GB of RAM. Will it have a case? I did not ask. When will it be on the market? May/June 2017.

    Written by Marcin Juszkiewicz
  5. My work on Kolla

    During the last month I was working on one of the OpenStack projects: Kolla. My job was adding support for non-x86 architectures (aarch64 and ppc64le) and also resurrecting Debian support.

    A bit of background

    At Linaro we work on getting AArch64 (64-bit ARM, arm64) to be present in many places. We have at least two OpenStack instances running at the moment - on AArch64 hardware only.

    First we used Debian ‘jessie’ and the OpenStack ‘liberty’ release. It was working. Not the best, but we helped many projects by providing virtual machines for porting software.

    It was built from packages and later (when ‘mitaka’ was released) we moved to a virtualenv per component. Our second “cloud” runs that. With proper Neutron networking, live migration and a few other nice things.

    But virtualenvs were a quick solution. We decided to move to Docker containers for the next release.

    And Kolla was chosen as the tool for it. We do not like to reinvent the wheel again and again…

    Non-x86 support in Kolla

    The problem was typical: Kolla being x86-64 centric, as most software is nowadays. But thanks to work done by Sajauddin Mohammad I had something to use as a base for adding aarch64 support.

    I took his patch, slashed out most of it and concentrated on the minimal changes needed to get something built on AArch64. The result was sent for review and is now at its 10th version.

    Docker images started to appear. But at the beginning I was building Ubuntu ones, as Debian support was “basically abandoned, on its way out”. From the CentOS guys I got confirmation that an official Docker image will be generated (it is done already).

    I spent some time making sure that the whole non-x86 support is free from any hardcoding wherever possible. As you can see in my working branch it went quite well. Most of the arch-related changes are about “the distro does not provide package XYZ for that architecture” or about handling external repositories.

    Debian support

    And here we come to Debian support. At Linaro we decided to support two community based distributions: CentOS and Debian. But Debian was on its way out in Kolla…

    As this was not related much to the non-x86 work, I decided to use one of the x86-64 machines for that stuff.

    The first builds were against the ‘jessie-backports’ base tag. I had to make a patch to tell APT that if I want backports then I really want them. It was sent for review like the rest of the patches.
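
    The usual way to do that with APT is to raise the pin priority of the backports suite so that ‘apt-get install’ prefers backported versions. A small Python sketch of the idea (the actual patch may have used a different mechanism, and the file name here is only an example):

        # Priorities above the default 500 make APT prefer the pinned suite.
        preferences = (
            "Package: *\n"
            "Pin: release a=jessie-backports\n"
            "Pin-Priority: 600\n"
        )

        with open("/etc/apt/preferences.d/backports.pref", "w") as pref_file:
            pref_file.write(preferences)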

    Images were building, but not as many as for Ubuntu. So I went through all of them and enabled Debian where it was possible. The resulting patch went for review as usual.

    The result was quite nice (on x86-64):

    • debian-binary: 158
    • debian-source: 201

    But ‘jessie’ was missing several packages even with backports enabled. So after a discussion with my team I decided to drop it and go for the Debian/testing ‘stretch’ release instead. It is already frozen for release, so no big changes are allowed. Patch in review, of course.

    At that moment I abandoned one of the previous patches, as ‘jessie-backports’ was not something I planned to support.

    It turned out that ‘stretch’ images have a slightly different set of packages installed than ‘jessie’ had. ‘gnupg’ and ‘dirmngr’ were missing, while we need them for importing GPG keys into APT. A proper patch went to review again.
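
    A minimal illustration of that step (an assumption on my side, not the exact patch; it also assumes curl is present in the image), using the Ceph release key as an example:

        import subprocess

        # 'apt-key' is a thin wrapper around gpg, so it needs gnupg installed;
        # dirmngr is additionally required when keys are fetched from a
        # keyserver with 'apt-key adv --recv-keys'.
        subprocess.run(["apt-get", "install", "-y", "gnupg", "dirmngr"], check=True)

        # Download the repository signing key and hand it to apt-key on stdin.
        key = subprocess.run(
            ["curl", "-fsSL", "https://download.ceph.com/keys/release.asc"],
            check=True, capture_output=True,
        ).stdout
        subprocess.run(["apt-key", "add", "-"], input=key, check=True)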

    I did a rebuild on x86-64:

    • stretch-binary: 137
    • stretch-source: 195

    A bit fewer than ‘jessie-backports’ had, right? Sure, but it also shows that I have to make a fresh build to check the numbers (my laptop already has ~1500 Docker images generated by Kolla).

    Cleaning of old Power patch

    Remember the patch all of this started from? I did not forget it, and after building all those images I went back to it.

    Some parts are just fugly so I skipped them, but others were useful if done properly. That’s how new changes were made, along with some updates to previous ones.

    Then I managed to get remote hands on one of the POWER machines at Red Hat and started builds:

    • debian-binary: 134
    • debian-source: 184
    • ubuntu-binary: 147
    • ubuntu-source: 190

    No CentOS builds, as there was no centos/ppc64le image available.

    Summary

    Non-x86 support looks quite nice. There are some images which cannot be built as they rely on external repositories, so there are no aarch64 or ppc64le packages to use.

    Debian ‘stretch’ support is not perfect yet, but it is something I plan to maintain, so the situation is going to improve. Note that most of my work will go into the ‘source’ type of builds, as we want to have the same images for both Debian and CentOS systems.

    Written by Marcin Juszkiewicz
  6. Fresh WordPress

    I have been running my blog for nearly 12 years now. And through all those years it was running on the same WordPress installation. Until today.

    At the beginning it was WordPress MultiUser (WPMU), as I used it to run both my blog and a website for my consulting company. It was fun. Some WP plugins were working with WPMU, some were not. Then the WordPress developers decided to merge both projects into one. And it was good.

    When I started blogging I did not use categories for posts but tagged them instead. Months turned into years and at some moment WP got tags natively, so the UltimateTagWarrior plugin went to the trash (after converting its tags to WP ones).

    I was changing the blog theme every few years to bring some change. The other thing which was changing was the HTTP server - from Apache to Lighttpd, and now it is powered by Nginx + PHP-FPM.

    The company website got trashed in the meantime. Our wedding page existed for a few months as another blog. There was a map with all the required placemarks for the church, flower shops, family homes, hotels and other useful services. A wish list for those who wanted to know what to give was also present, with “sepulki” as the last entry — no one knows what “sepulki” are, as they appear in one of Lem’s books. The only known thing is that you need to be married to be allowed to use them. Some guests had interesting ideas for it ;D

    At some moment I had a page with Mira’s photos. The page required registration and logging in. It was removed a long time ago.

    And then Ania (my wife) requested a page for her psychotherapy services. So she got it.

    At some moment I was running three different domains using one WP installation. It was a mess. A terrible mess. At some point there were authorization issues, so I had to change something…

    So now I have a fresh WordPress installed. The websites were partially restored from backup, so as not to keep settings and tables from plugins unused for a long time. I hope it will work fine ;D

    Written by Marcin Juszkiewicz
  7. 2016: computer museums

    During the previous year I visited some computer related museums. Not every one I planned to, but still a few of them.

    Faculty of Information Technology, Brno

    In February, during the Devconf.cz conference, I visited their small “IT Museum” where several machines used in Czechoslovakia were presented.

    There were mainframe setups, several storage units and main (operating) memory units from different decades.

    ferrite core memory

    The 80s (and 90s) called, with several ZX Spectrum clones, the PMD-85 with its clones and some other microcomputers from this side of the Iron Curtain.

    PMD 85

    It was a nice place to visit, even just to see all those computers made in Czechoslovakia.

    For more photos please go to my “2016-02 devconf.cz it museum” album.

    Technical Museum, Warsaw

    In April I came to Warsaw for the OpenSource Day conference. And I visited the Technical Museum there to see some Polish computers of the mainframe era.

    There were many interesting machines. One of them was AKAT-1, the first transistor-based differential equation analyzer:

    AKAT-1

    The other was K-202 — the first Polish 16-bit computer. It never became popular due to being shut down by the government.

    K-202

    A few years later the Mera 400 was released. It used K-202 technology:

    Mera 400

    There were also a few Odra systems:

    Odra 1013

    For full resolution photos go to my Muzeum techniki w Warszawie album.

    The National Museum Of Computing, Bletchley Park

    May came. I went to the UK to visit Bletchley Park. An awesome place to visit. And right next to it is The National Museum Of Computing (TNMOC for short).

    Inside there is history. I mean HISTORY.

    By mistake I entered the museum through the wrong door and started from the oldest exhibition. It showed the story of breaking the Lorenz cipher used by Germany during the Second World War, and the hardware designed for it. Contrary to Enigma, there were no Lorenz machines in the Allies’ possession.

    A rebuild of the British Tunny machine:

    British Tunny Machine

    A rebuild of the Heath Robinson machine:

    Heath Robinson

    Next to it was a room with a working replica of the first programmable electronic computer: Colossus.

    Colossus

    And here you can see it running:

    There were several other computers, of course. I saw an ICL 2900 system, several Elliott and PDP systems, some IBM machines and others from the 50s-70s.

    One of them was the Harwell Dekatron Computer (also known as WITCH). It is the world’s oldest working digital computer:

    Harwell Dekatron Computer

    Then there was a wide selection of microcomputers from the 80s and 90s. Several British ones and others from everywhere else. There was a shelf with Tube extensions for the BBC Micro, but it lacked the ARM1 one:

    BBC Micro Tube expansions

    For full resolution photos check my The National Museum Of Computing album.

    The Centre for Computing History, Cambridge

    This museum had been on my list for far too long. When I was in Cambridge a few years ago it was closed. The next time I did not manage to find time to go there. Finally, during the last Linaro sprint, we agreed that we had to go, and we went during a lunch break.

    For me the main reason for going there was my wish to see the ARM1 CPU. It was available only as a Tube (an extension board for the BBC Micro) and only to some selected companies, which makes it quite rare.

    ARM1 Tube

    The first thing I saw after entering the museum was the “Megaprocessor”. Imagine a CPU the size of a 70s mainframe, with an LED on each line, register bit etc.

    Megaprocessor

    The next room was arranged in the form of a British classroom: a set of BBC Micro computers with monitors, manuals and programs.

    BBC Micro equipped classroom

    And then I went to look around. There were many different computers on show. Some behind glass, some turned on with the possibility to play with them (or on them). It was an opportunity to see how design changed through all those years.

    AES 7100 Model 203

    There were also several Acorn machines — both ARM and 6502 powered ones.

    Acorn Archimedes machines

    As with most computer museums, this one also has some exclusive content. This time it was the NeXT workstation used as the first web server by Tim Berners-Lee:

    NeXT workstation

    And an Apple Macintosh SE/30 owned by Douglas Adams, author of “The Hitchhiker’s Guide to the Galaxy”. Note the towel on top of the computer:

    Apple Macintosh SE/30 owned by Douglas Adams

    Another interesting thing was a comparison of storage density through all those years. Note the 5MB hard drive being loaded onto a plane in the top right corner.

    storage media compared

    And again — for more pictures in higher resolution visit my The Centre for Computing History album.

    2017 plans

    In 2017 I would like to visit the Computer History Museum in Mountain View and the museum in Paderborn. Maybe something more ;)

    Written by Marcin Juszkiewicz
  8. Nokia and their standard batteries

    Nokia. A company everyone knows, and most of us have probably even used one of their phones in the past. They were better or worse, but one thing was good - most of them shared batteries…

    My daughter (8.5 years old) uses a Nokia E50 as her daily phone. The SIM card is covered with duct tape so it does not fall out when the phone hits the floor (the previous one went missing in such a situation). Mira records how she and her friends sing, does photo sessions of her dolls etc.

    But during the weekend the phone stopped charging. Hm… Is it the charger? Nope, it was an original Nokia one. I tried some crappy Chinese one with the same result. So let’s check the battery.

    I opened a drawer and took out my Nokia 101. Inside was a BL-5CB battery. Inserting it into the E50 got the phone back online. But I like my 101 and keep it as a spare just in case.

    I dug into the drawer with old devices. The one where I keep a Sharp Zaurus c760, a Sony Ericsson k750i, an Openmoko FIC-GTA01bv3 and a few other pieces of junk with some sentimental value. What I found there was the Nokia 6230i which I got from Ross Burton during GUADEC 2007. The last time I used it was about 5 years ago. But it had an original Nokia BL-5C inside!

    So I put that battery inside the E50, plugged in the charger and guess what… It started charging and the phone booted! With an over 11 year old battery!

    During the next few days I will buy a BL-5C clone somewhere (they are 3-8€ now) and put it in my daughter’s phone.

    Written by Marcin Juszkiewicz