My code of conduct

A few days ago Linus Torvalds added a code of conduct to the Linux kernel. And then a lot of discussions started.

I had no plans to take part in any of them. But last week I was dragged into one and it was not fun. It turned out that people I know and trust when it comes to technical discussions (I have never met most of them) do not quite understand the need for such a document.

There are many “code of conduct” documents. They often differ a lot. I have my own and it is probably the shortest one:

Do not be an asshole. Respect others.

Simple. I do not care what gender people are when I speak with them (ok, I may stare at your boobs or butt once) nor what their sexual preferences are. Skin colour does not matter either, as I first met most of my friends online without knowing anything about them. Political stuff? As long as we can be friends and do not discuss it, I am fine. And so on.

It works at conferences. And in projects where I am or was involved.

Someone may say that part of it was shaped by working for a corporation (is Red Hat a corporation?) with all those anti-harassment regulations and trainings. I prefer to think that it comes more from how I was raised by my parents, family and society.

Ctrl-Q issue or “are Firefox developers using Linux at all?”

When I started using Linux on my desktop, the only usable browsers were Mozilla-based ones. They had different names: Galeon, Phoenix, Firebird, Mozilla Suite and finally Firefox.

It worked better or worse, but it worked. There were moments when, on a machine with 2 GB of RAM, the browser was using 6 GB (which resulted in killing it). There were moments when it got slower and slower, so I moved to Google Chrome instead.

But still, Firefox had all those extensions which could do an insane amount of things with how the browser looks, how it works and so on. But then [Quantum came](https://marcin.juszkiewicz.com.pl/2017/11/27/firefox-quantum/) and changed that. Goodbye, all those nice add-ons. I hope we meet in another life.

But what does this have to do with the question from the post title? A simple, little, annoying thing: the “Ctrl-Q” shortcut. A lovely one which everyone uses to close the application they work with. Not that it does not work. It does. Perfectly. And this is the problem…

Imagine you have a few browser windows open. On different virtual desktops. With several tabs per window. Some open notes here, an unfinished wiki edit there. A normal day. And then you want to close the ‘funny kitten’ tab and instead you close all those windows and tabs, dropping the unfinished notes and edits. Just because your finger slipped to “Ctrl-Q”.

For years most users I know used one of those “disable Ctrl-Q shortcut” add-ons to NOT close all browser windows when a finger slipped while closing a tab (with “Ctrl-W”) or switching tabs (with “Ctrl-Tab”). Since Quantum this is not possible at all, as there is no way for an add-on to alter shortcuts. Or for a user to alter them. No Way At All.

And it turns out that the “Ctrl-Q” problem exists **only under Linux**. Under Microsoft Windows the Mozilla Firefox developers decided that “Ctrl-**Shift-**Q” would be a good workaround for the problem. Something similar happened under macOS. But Linux is still stuck with “Ctrl-Q”.

There is [a bug report](https://bugzilla.mozilla.org/show_bug.cgi?id=1325692) open for it, but after four major Firefox releases without any change I highly doubt that anything will change in this regard.

I am slowly thinking of making a COPR repository where I would provide Mozilla Firefox builds with one patch: removing that “Ctrl-Q” shortcut…

Apple Museum Poland is a magical place

You may have noticed that I try to visit computer museums whenever there is a chance. Recently I visited [Apple Museum Poland](https://www.facebook.com/AppleMuzeum/) in a small village near Warsaw, Poland.

The museum is open during weekends and visits need to be arranged in advance (as it is in a private house). It is easy to get there (Google Maps or other navigation) and totally worth it, no matter whether you are an Apple fan or not.

What’s there? Apple computers from a replica of the Apple I, through assorted Apple II/III models, to the Lisa, Macintosh machines, PowerBooks, iMacs and more. Some clones too. Some old terminals. An Apollo Computer Graphic Workstation. And that’s not all.

There is a lot of attention given to detail. The same monitors as in the original commercials. The same setups.

Apple II with Sanyo monitor and disk drive
Apple III with dedicated monitor and some peripherals
First Apple modem under AT&T phone

As an ARM developer I could not miss the shelf filled with the first ARM-powered Apple devices: the Newton, in several models.

Apple Newton PDA collection

Computers… What about servers? How many people remember that Apple used to make servers? Big, loud machines.

Apple servers: x86 and PowerPC based ones

Of course, like every museum, this one also has some pearls:

Macintosh Portable in working condition
Duo Dock II docking station for PowerBook Duo
Bell & Howell version of Apple II
Marron Carrel Apple IIe

There were also several non-Apple machines there, from that Apollo Computer Graphic Workstation to the Franklin ACE 1200 (and another Apple II clone). Also some industrial solutions.

Franklin ACE 1200 (Apple II clone)
Apollo Computer Graphic Workstation
NEC PC-8001A with peripherals

There were several other computers, accessories and peripherals exhibited. The museum owner shared lots of interesting stories. There was an incredible amount of stuff that was never available outside of the Apple dealer network (like official video instructions on LaserDiscs).

I could add more and more photos here but trust me: it is far better to see it with your own eyes than through a blog post.

Again, I highly recommend it to anyone, no matter whether you are an Apple fan or just like old computers. Just remember to contact [Apple Museum Poland](https://www.facebook.com/AppleMuzeum/) on Facebook first to arrange a visit.

LOCI: another way of building OpenStack images

Earlier this month I got a new task at Linaro: take a look at the LOCI project, see how it builds container images with OpenStack components and whether that works on AArch64. And fix it if it does not.

So I went. Fetched the code, started a build run. Looked at the failures, did some first hacks (wrong and bad ones) and then discussed them with Sam Yaple (one of the core developers of the LOCI project). And took a more proper path.

It turned out that the whole project is quite simple. You build a ‘requirements’ image, which works as a kind of cache of Python packages, and then use it to build the rest of the images. It took me some attempts to get it built, mostly because some modules are available as prebuilt binaries only when your target is the x86-64 architecture (think ‘numpy’, ‘scipy’, ‘scikit-learn’) and have to be compiled from source elsewhere.
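
From the user side it looks roughly like this (a sketch based on LOCI’s documented usage at the time; image names, tags and the `WHEELS` argument may differ from what the project uses today):

```sh
# build the 'requirements' image first; it collects Python wheels
docker build https://git.openstack.org/openstack/loci.git \
    --build-arg PROJECT=requirements \
    --tag loci/requirements:master-ubuntu

# then build a project image, pointing it at the wheels cache
docker build https://git.openstack.org/openstack/loci.git \
    --build-arg PROJECT=keystone \
    --build-arg WHEELS=loci/requirements:master-ubuntu \
    --tag loci/keystone:master-ubuntu
```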

So the first good patch added some extra header packages to get those modules compiled. Then the handling of ‘numpy’ and ‘scipy’ went through several revisions.
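
I will not reproduce the exact patch here, but the idea is simple: on AArch64 there are no prebuilt wheels for those modules, so the build environment needs the development headers and compilers they expect. An illustrative example (the package list in the real patch may differ):

```sh
# source builds of 'numpy'/'scipy' need BLAS/LAPACK headers,
# a Fortran compiler and Python development headers
apt-get install -y libblas-dev liblapack-dev gfortran python3-dev

# build wheels from source, as done for the 'requirements' cache
pip3 wheel numpy scipy
```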

In the meantime I created a ‘loci-base’ image, so as not to pollute the LOCI repository with AArch64 details like the extra Linaro repository (ERP:18.06) with newer versions of libvirt or ceph packages. It also has all the packages required by LOCI preinstalled, to cut some seconds from build time. The idea looks like the sketch below.
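
A minimal sketch of such an image (the repository URL and package list here are placeholders, not the real ones; the real image pointed at Linaro ERP:18.06):

```dockerfile
FROM ubuntu:16.04

# placeholder URL for the extra repository with newer libvirt/ceph builds
RUN echo "deb http://repo.example.org/erp/18.06 xenial main" \
        > /etc/apt/sources.list.d/linaro-erp.list

# preinstall everything LOCI expects so each image build saves some time
RUN apt-get update && apt-get install -y --no-install-recommends \
        git curl python3 python3-pip libvirt0 ceph-common \
    && rm -rf /var/lib/apt/lists/*
```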

Then I added building of the images to Linaro CI, where we have machines with multiple cpu cores available. It showed that ‘scipy’ is a nightmare of a package to build if your processor does not have a 10 GHz clock… So I started checking whether we were able to build all the images without building ‘scipy’ (and ‘scikit-learn’) at all.

It turned out that we can, as those packages were requested only by the ‘monasca_analytics’ project, which we do not have to care about. Build time got cut roughly in half (on an old 8-core machine).

Now all my patches have been merged. The images build. Next step? Verification by deployment.

OpenStack Days 2018

During the last few days I was in Kraków, Poland, at the OpenStack Days conference. It had two (Tuesday) or three (Monday) tracks filled with talks. Of varying quality, as happens at such small events.

A detailed list of presentations is available in the conference’s agenda. As usual I attended some of them and spent time on the hallway track.

There was one issue (common to Polish conferences): should a speaker use Polish or English? There were attendees who did not understand Polish, so some talks were a mix of Polish slides with an English presentation, some were fully English and some were fully Polish. A few speakers asked the audience about their language preference at the start of their talks.

Interesting talks? The one from OVH about upgrading OpenStack (Juno on Ubuntu 14.04 to Newton on Ubuntu 16.04). Interesting and simple to understand. Szymon Datko described how they started with the Havana release and how they moved from in-house development to cooperation with upstream.

Another one, given by Sławek Kapłoński from Red Hat, was about becoming an upstream OpenStack developer. Git, Gerrit and so on. The talk turned into a discussion with several questions and notes from the audience (including me).

The DreamLab guys spoke about testing OpenStack. Rally, Shaker and a few other names appeared during the talk. It was interesting, but their voices were making me sleepy ;(

I attended several other presentations but got the feeling that these small conferences give many slots to sponsors who do not always have something interesting to fill them with. Or the title sounds good but the speaker lacks presentation experience and is unable to keep the flow.

I met several people from the Polish division of Red Hat, spoke with folks from Mirantis, OVH, Samsung, SUSE (and other companies) and met local friends. Had several discussions. So it was worth going.

From a diary of an AArch64 porter: parallel builds

Imagine that you have a package to build. Sometimes it takes minutes. Another one takes hours. And then you run `htop` and see that your machine is mostly idle during such a build… You may ask “why?” and the answer is simple: multiple cpu cores, most of them unused.

On x86-64, developers usually have from two to four cpu cores. That can double due to Hyper-Threading, but that’s all. So for some weird reason they go with `make -jX` where X is half of their cores. Or they completely forget to enable parallel builds.
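
There is no need to guess the right number; the system can be asked:

```sh
# use all available cpu cores instead of a hard-coded count
make -j"$(nproc)"
```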

And then I came along with an ARM64 system. One with 8 or 24 or 32 or 48 or even 96 cpu cores. And had to wait and wait and wait for a package to build…

So the next step is usually the same: editing the `debian/rules` file and adding the `--parallel` argument to the `dh` call. Or removing the `--max-parallel` option. And then the build makes use of all those shiny cpu cores. And it goes quickly…
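
For a typical dh-based package the whole change is a one-liner; a minimal `debian/rules` looks roughly like this:

```make
#!/usr/bin/make -f

# pass --parallel so dh lets the build system use all cpu cores
%:
	dh $@ --parallel
```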

UPDATE: Riku Voipio told me that debhelper 10 does parallel builds by default, if you set the ‘debian/compat’ value to at least ‘10’.
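
So with a new enough debhelper the whole dance reduces to bumping the compat level and dropping `--parallel`:

```sh
# compat level 10 (or newer) makes a plain 'dh $@' build in parallel by default
echo 10 > debian/compat
```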