1. How fast is APM Mustang?

    During Linaro Connect there was an opportunity to play with a ThunderX2 workstation. I remember Arnd Bergmann comparing kernel compilation speed with his AMD Threadripper workstation.

    The test was simple: check out the 4.18 source, use the arm64 defconfig and build ‘Image modules’ with as many threads as there are CPU cores. He did several extra builds limited to one CPU, with CPU threads disabled and so on, but the idea stays the same.
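
    In other words, something along these lines (a sketch, assuming a checked-out 4.18 tree; on an x86-64 host add CROSS_COMPILE=aarch64-linux-gnu- to cross-build):

        # configure for arm64 and build using all available cpu cores
        make ARCH=arm64 defconfig
        make ARCH=arm64 -j"$(nproc)" Image modules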

    A dual socket ThunderX2 (28 CPU cores, 4 threads per core, iirc) did that in about 2 minutes. So did Arnd’s Threadripper machine.

    So I decided to check it on my local hardware. The Mustang needed 38 minutes; my i7-2600K based desktop did it in 9 minutes 20 seconds.

    For comparison: I was told that a Synquacer with its 24 Cortex-A53 cores does it in about 16 minutes.

    Is it fast? I do not think so. But who would expect retro hardware to be fast…

    Written by Marcin Juszkiewicz on
  2. I am gonna run retro server

    You probably know that I am a fan of retro computers. Those from the 80s, the 70s and older. And for quite some time I have said that I do not plan to run retro machines at home. But that has to change.

    Due to some work things I am going to run the Mustang again. But where is the retro part in that, someone may ask…

    The Applied Micro Mustang uses the X-Gene CPU. It was the first (or one of the first) AArch64 CPUs. I got mine over four years ago. It is obsolete in some areas (SBSA level 0, anyone?) but still works. And it is hard to replace if you do not have a spare few thousand USD :(

    Someone may say that I could buy a Synquacer. Sure. $1160 for a mainboard in some box. With spinning disks which would go away on the first day, a graphics card that is not needed, and just 4GB of memory. Good luck finding RAM sticks which will work. I heard rumours that there is a store somewhere which keeps a pile of them. And then you end up with 24 slow cores which may be good at kernel compilation but suck at linking.

    So now I am on a hunt for 2x16GB DDR3 ECC RDIMM sticks for the Mustang. And some SSD, as using spinning disks for development makes no sense in 2018.

    Maybe one day someone will finally realise that 500 USD is the magical point where hardware can be bought in a “just go and buy” fashion. Then we, developers, will be able to write to our managers “Hey, there is this arm64 mainboard for $499” and hear “just go, buy and expense it”. Memory, storage and a case can go on a separate expense report (or even be collected from spare parts at home).

    But until then I will have to live with my retro server.

    Written by Marcin Juszkiewicz on
  3. My code of conduct

    A few days ago Linus Torvalds added a code of conduct to the Linux kernel. And then a lot of discussions started.

    I had no plans to take part in any of them. But last week I got dragged into one and it was not fun. It turned out that people I know and trust when it comes to technical discussions (I have never met most of them) do not quite understand the need for such a document.

    There are many “code of conduct” documents. They often differ a lot. I have my own and it is probably the shortest one:

    Do not be an asshole. Respect the others.

    Simple. I do not care which gender people have when I speak with them (ok, I may stare at your boobs or butt once) nor what their sexual preferences are. The colour of someone’s skin does not matter, as I first met most of my friends online without knowing anything about them. Political stuff? As long as we can be friends and do not discuss it, I am fine. Etc., etc.

    It works at conferences. And in projects where I am or was involved.

    Someone may say that part of it was shaped by working for a corporation (is Red Hat a corporation?) due to all those anti-harassment regulations and trainings. I prefer to think that it is more a result of how I was raised by my parents, family and society.

    Written by Marcin Juszkiewicz on
  4. Ctrl-Q issue or “are Firefox developers using Linux at all?”

    When I started using Linux on my desktop, Mozilla-based browsers were the only usable ones. They had different names: Galeon, Firebird, Phoenix, Mozilla Suite and finally Firefox.

    It worked better or worse, but it worked. There were moments when, on a machine with 2GB of RAM, the browser was using 6 gigabytes (which resulted in killing it). Then there were moments when it got slower and slower, so I moved to Google Chrome instead.

    But still, Firefox had all those extensions which could do an insane amount of things with how the browser looks, how it works etc. Then Quantum came and changed that. Goodbye, all the nice addons. Hope we meet in another life.

    But what does this have to do with the question in the post title? A simple, little, annoying thing: the “Ctrl-Q” shortcut. The lovely one which everyone uses to close the application they work with. Not that it does not work; it does. Perfectly. And that is the problem…

    Imagine you have a few browser windows open. On different virtual desktops. With several tabs per window. Some open notes there, an unfinished wiki edit somewhere etc. A normal day. And then you want to close the ‘funny kitten’ tab and instead you close all those windows/tabs, drop the unfinished notes/edits etc. Just because your finger slipped onto “Ctrl-Q”.

    For years most of the users I know used one of those “disable ctrl-q shortcut” addons to NOT close all browser windows when your finger slips a bit while you want to close a tab (with “Ctrl-W”) or switch tabs (with “Ctrl-Tab”). Since Quantum this is not possible at all, as there is no way for an addon to alter shortcuts. Or for the user to alter a shortcut. No Way At All.

    And it turns out that the “Ctrl-Q” problem exists only under Linux. Under Microsoft Windows the Mozilla Firefox developers decided that “Ctrl-Shift-Q” will be a good workaround for the problem. Something similar under MacOS. But Linux is still on “Ctrl-Q”.

    There is a bug report open for it, but there have been 4 major releases of Firefox without any change, so I highly doubt that anything will change in this regard.

    I am slowly thinking of making a COPR repo where I would provide Mozilla Firefox builds with one patch: removing that “Ctrl-Q” shortcut…

    2021 UPDATE

    Firefox 87+ has the “browser.quitShortcut.disabled” option to get rid of the shortcut from the menu. And even without it set, Firefox now warns the user after the shortcut is used:

    Firefox warning about closing 5 windows after Ctrl-Q use
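
    The option can also be set from a shell, outside of about:config (a minimal sketch; the profile directory name is hypothetical, check ~/.mozilla/firefox/profiles.ini for the real one):

        # disable the Ctrl-Q quit shortcut via user.js (Firefox 87+)
        echo 'user_pref("browser.quitShortcut.disabled", true);' \
            >> ~/.mozilla/firefox/xxxxxxxx.default-release/user.js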
    Written by Marcin Juszkiewicz on
  5. Apple Museum Poland is a magical place

    You may have noticed that I try to visit computer museums when there is a chance. Recently I visited Apple Museum Poland in a small village near Warsaw, Poland.

    The museum is open during weekends and visits need to be arranged in advance (as it is in a private house). It is easy to get there (Google Maps or other navigation) and totally worth it. No matter whether you are an Apple fan or not.

    What is there? Apple computers from a replica of the Apple I, through misc Apple II/III models, to the Lisa, Macintosh machines, PowerBooks, iMacs etc. Some clones too. Some old terminals. An Apollo Computer Graphic Workstation. And that is not all.

    There is a lot of attention given to details. The same monitors as in the original commercials. The same setups.

    Apple II with Sanyo monitor and disk drive
    Apple III with dedicated monitor and some peripherals
    First Apple modem under AT&T phone

    As an ARM developer I could not help noticing a shelf filled with the first ARM-powered Apple devices: Newtons in several models.

    Apple Newton PDA collection

    Computers… What about servers? How many people remember that Apple made servers? Big, loud machines.

    Apple servers: x86 and PowerPC based ones

    Of course, like every museum, this one also has some pearls:

    Macintosh Portable in working condition
    Duo Dock II docking station for PowerBook Duo
    Bell & Howell version of Apple II
    Marron Carrel Apple IIe

    There were also several non-Apple machines there. From the aforementioned Apollo Computer Graphic Workstation to a Franklin ACE 1200 (and some other Apple II clone). Also some industrial solutions.

    Franklin ACE 1200 (Apple II clone)
    Apollo Computer Graphic Workstation
    NEC PC-8001A with peripherals

    There were several other computers, accessories and peripherals exhibited. Lots of interesting stories told by the museum owner. An incredible amount of stuff not available outside of the Apple dealer network (like official video instructions on LaserDiscs).

    I could add more and more photos here, but trust me: it is far better to see it with your own eyes than through a blog post.

    Again, I highly recommend it to anyone. No matter whether you are an Apple fan or just like old computers. Just remember to go to Apple Museum Poland on Facebook first to arrange a visit.

    Written by Marcin Juszkiewicz on
  6. LOCI — other way of building OpenStack images

    Earlier this month I got a new task at Linaro: to take a look at the LOCI project. How they are building container images with OpenStack components, whether it works on AArch64, and to fix it if it does not.

    So I went ahead. Fetched the code, started a build run. Looked at the failures, did some first hacks (wrong and bad ones), then discussed them with Sam Yaple (one of the core developers of the LOCI project) and went the more proper way.

    It turned out that the whole project is quite simple. You build a ‘requirements’ image which works as a kind of cache for Python packages and then use it to build the rest of the images. It took me a few attempts to get it built. Mostly because some modules are available as prebuilt binaries only when your target is the x86-64 architecture (think ‘numpy’, ‘scipy’, ‘scikit-learn’), so on AArch64 they have to be compiled from source.
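
    Roughly it looks like this (a sketch based on the LOCI README from that time; build argument names may have changed since):

        # build the 'requirements' image first, it caches Python wheels for all projects
        docker build https://git.openstack.org/openstack/loci.git \
            --build-arg PROJECT=requirements \
            --tag loci/requirements:master-ubuntu

        # then build service images, pointing them at the wheels cache
        docker build https://git.openstack.org/openstack/loci.git \
            --build-arg PROJECT=keystone \
            --build-arg WHEELS=loci/requirements:master-ubuntu \
            --tag loci/keystone:master-ubuntu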

    So the first good patch was adding some extra development headers to get those packages compiled. Then the handling of ‘numpy’ and ‘scipy’ went through several revisions.
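
    I will not reproduce the patch here, but the underlying problem looks roughly like this (a hedged sketch; the package names are illustrative, for an RPM-based image):

        # scipy and friends ship no AArch64 wheels, so pip compiles them from source,
        # which needs a Fortran compiler and BLAS/LAPACK development headers
        dnf install -y gcc gcc-gfortran lapack-devel openblas-devel
        pip install numpy scipy scikit-learn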

    In the meantime I created a ‘loci-base’ image so as not to pollute the LOCI repo with AArch64 details like the extra Linaro repository (ERP:18.06) with newer versions of the libvirt and ceph packages. It also has all the packages required by LOCI preinstalled, to cut some seconds from the build time.

    Then I added building of the images to Linaro CI, where we have machines with multiple CPU cores available. It showed that ‘scipy’ is a nightmare of a package to build if your processor does not have a 10GHz clock… So I started checking whether we were able to build all the images without building ‘scipy’ (and ‘scikit-learn’) at all.

    It turned out that we can, as those packages were requested by the ‘monasca_analytics’ project which we do not have to care about. The build time got cut roughly in half (on an old 8 core machine).

    Now all my patches are merged. The images build. Next step? Verification by deployment.

    Written by Marcin Juszkiewicz on
  7. OpenStack Days 2018

    During the last few days I was in Kraków, Poland at the OpenStack Days conference. It had two (Tuesday) or three (Monday) tracks filled with talks. Of varying quality, as happens at such small events.

    A detailed list of presentations is available in the conference agenda. As usual I attended some of them and spent time on the hallway track.

    There was one issue (common to Polish conferences): should a speaker use Polish or English? There were attendees who did not understand Polish, so some talks were a mix of Polish slides with an English presentation, some were fully English and some fully Polish. A few speakers asked the audience about the language at the start of their talks.

    Interesting talks? The one from OVH about updating OpenStack (Juno on Ubuntu 14.04 -> Newton on Ubuntu 16.04). Interesting and simple to understand. Szymon Datko described how they started with the Havana release and how they moved from in-house development to cooperation with upstream.

    Another one was about becoming an upstream OpenStack developer, given by Sławek Kapłoński from Red Hat. Git, Gerrit etc. The talk turned into a discussion with several questions and notes from the audience (including me).

    The DreamLab guys spoke about testing OpenStack. Rally, Shaker and a few other names appeared during the talk. It was interesting but their voices were making me sleepy ;(

    I attended several other presentations but had a feeling that those small conferences give many slots to sponsors which do not always have something interesting to fill them with. Or the title sounds good but then the speaker lacks presentation experience and is unable to keep the flow.

    I met several people from the Polish division of Red Hat, spoke with folks from Mirantis, OVH, Samsung, Suse (and other companies), and met local friends. Had several discussions. So it was worth going.

    Written by Marcin Juszkiewicz on
  8. From the diary of AArch64 porter — parallel builds

    Imagine that you have a package to build. Sometimes it takes minutes. Another one takes hours. And then you run htop and see that your machine is mostly idle during such a build… You may ask “Why?” and the answer is simple: multiple CPU cores.

    On x86-64, developers usually have from two to four CPU cores. That can double due to HyperThreading. And that is all. So for some weird reason they go for make -jX where X is half of their cores. Or they completely forget to enable parallel builds.

    And then I come along with an ARM64 system. With 8 or 24 or 32 or 48 or even 96 CPU cores. And have to wait and wait and wait for the package to build…

    So the next step is usually similar: editing the debian/rules file and adding the --parallel argument to the dh call. Or removing the --max-parallel option. And then the build makes use of all those shiny CPU cores. And it goes quickly…
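
    For older debhelper compat levels this boils down to something like (a minimal debian/rules sketch; in a real makefile the recipe line under ‘%:’ must be indented with a tab):

        #!/usr/bin/make -f

        # ask debhelper to pass parallel make flags to the upstream build system
        %:
            dh $@ --parallel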

    UPDATE: Riku Voipio told me that debhelper 10 does parallel builds by default, if you set the ‘debian/compat’ value to at least ‘10’.
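
    A quick sketch of that route; the number of jobs can still be capped at build time:

        # debhelper 10+ enables parallel builds by default
        echo 10 > debian/compat

        # the job count comes from DEB_BUILD_OPTIONS
        DEB_BUILD_OPTIONS=parallel=48 dpkg-buildpackage -us -uc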

    Written by Marcin Juszkiewicz on