1. Ctrl-Q issue or “are Firefox developers using Linux at all?”

    When I started using Linux on my desktop, the only usable browsers were Mozilla-based ones. They went by different names: Galeon, Phoenix, Firebird, Mozilla Suite and finally Firefox.

    They worked better or worse, but they worked. There were moments when, on a machine with 2GB of RAM, the browser was using 6 gigabytes (which ended with killing it). Then it became slower and slower, so I moved to Google Chrome instead.

    But still — Firefox had all those extensions which could change an insane amount of things: how the browser looks, how it works etc. But then Quantum came and changed that. Goodbye, all those nice addons. Hope we meet in another life.

    But what does it have to do with the question in the post title? A simple, little, annoying thing: the “Ctrl-Q” shortcut. A lovely one which everyone uses to close the application they work with. Not that it does not work — it does. Perfectly. And that is the problem…

    Imagine you have a few browser windows open. On different virtual desktops. With several tabs per window. Open notes here, an unfinished wiki edit there, etc. A normal day. And then you want to close the ‘funny kitten’ tab and instead you close all those windows/tabs, drop the unfinished notes/edits etc. Just because your finger slipped to “Ctrl-Q”.

    For years most users I know used one of those “disable ctrl-q shortcut” addons to NOT close all browser windows when a finger slipped a bit while closing a tab (with “Ctrl-W”) or switching tabs (with “Ctrl-Tab”). Since Quantum that is not possible at all, as there is no way for an addon to alter shortcuts. Or for the user to alter them. No Way At All.

    And then it turns out that the “Ctrl-Q” problem exists only under Linux. Under Microsoft Windows the Mozilla Firefox developers decided that “Ctrl-Shift-Q” would be a good workaround for the problem. Something similar under macOS. But Linux is still on “Ctrl-Q”.

    There is a bug report open for it, but there have been 4 major releases of Firefox without any change, so I highly doubt that anything will change in this regard.

    I am slowly thinking of making a COPR repo where I would provide Mozilla Firefox builds with one patch: removing that “Ctrl-Q” shortcut…

    Written by Marcin Juszkiewicz on
  2. Apple Museum Poland is a magical place

    You may have noticed that I try to visit computer museums when there is a chance. Recently I visited Apple Museum Poland in a small village near Warsaw, Poland.

    The museum is open during weekends and visits need to be arranged in advance (as it is located in a private house). It is easy to get there (Google Maps or other navigation) and totally worth it. Never mind whether you are an Apple fan or not.

    What’s there? Apple computers from a replica of the Apple I, through misc Apple II/III models, to the Lisa, Macintosh machines, PowerBooks, iMacs etc. Some clones too. Some old terminals. An Apollo Computer Graphic Workstation. And that’s not all.

    There is a lot of attention given to detail. The same monitors as in the original commercials. The same setups.

    Apple II with Sanyo monitor and disk drive

    Apple III with dedicated monitor and some peripherals

    First Apple modem under AT&T phone

    As an ARM developer I could not miss the shelf filled with the first ARM-powered Apple devices: the Newton, in several models.

    Apple Newton PDA collection

    Computers… What about servers? How many people remember that Apple made servers? Big, loud machines.

    Apple servers: x86 and PowerPC based ones

    Of course, like every museum, this one also has some pearls:

    Macintosh Portable in working condition

    Duo Dock II docking station for PowerBook Duo

    Bell & Howell version of Apple II

    Marron Carrel Apple IIe

    There were also several non-Apple machines there. From the Apollo Computer Graphic Workstation to the Franklin ACE 1200 (and another Apple II clone). Also some industrial solutions.

    Franklin ACE 1200 (Apple II clone)

    Apollo Computer Graphic Workstation

    NEC PC-8001A with peripherals

    There were several other computers, accessories and peripherals exhibited. Lots of interesting stories told by the museum owner. An incredible amount of stuff never available outside of the Apple dealer network (like official video instructions on LaserDiscs).

    I could add more and more photos here but trust me — it is far better to see it with your own eyes than through a blog post.

    Again, I highly recommend it to anyone. Never mind whether you are an Apple fan or just like old computers. Just remember to go to Apple Museum Poland on Facebook first to arrange a visit.

    Written by Marcin Juszkiewicz on
  3. LOCI — another way of building OpenStack images

    Earlier this month I got a new task at Linaro: take a look at the LOCI project. How do they build container images with OpenStack components, and does it work on AArch64? And fix it if it does not.

    So I went. Fetched the code, started a run. Looked at the failures, did some first hacks (wrong and bad ones) and then discussed them with Sam Yaple (one of the core developers of the LOCI project). And went a more proper way.

    It turned out that the whole project is quite simple. You build a ‘requirements’ image, which works as a kind of cache of Python packages, and then use it to build the rest of the images. It took me some attempts to get it built. Mostly because some modules are only available as binaries when your target is the x86-64 architecture (think ‘numpy’, ‘scipy’, ‘scikit-learn’).
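    Conceptually, the two-stage build can be sketched like this (an illustrative sketch from memory, not an exact copy of LOCI’s documented invocation; ‘WHEELS’ is the build argument pointing a project build at the requirements image):

    ```shell
    # stage 1: build the 'requirements' image; it compiles wheels for all
    # Python packages OpenStack needs and later acts as a package cache
    docker build https://git.openstack.org/openstack/loci.git \
        --build-arg PROJECT=requirements -t loci/requirements

    # stage 2: build a project image, fetching prebuilt wheels from stage 1
    docker build https://git.openstack.org/openstack/loci.git \
        --build-arg PROJECT=nova --build-arg WHEELS=loci/requirements -t loci/nova
    ```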

    So the first good patch was adding some extra headers to get those packages compiled. Then the handling of ‘numpy’ and ‘scipy’ went through several revisions.

    In the meantime I created a ‘loci-base’ image, so as not to pollute the LOCI repo with AArch64 details like the extra Linaro repo (ERP:18.06) with newer versions of libvirt or ceph packages. It also has all packages required by LOCI preinstalled, to cut some seconds from build time.

    Then I added building of the images to Linaro CI, where we have machines with multiple cpu cores available. It showed that ‘scipy’ is a nightmare of a package to build if your processor does not have a 10GHz clock… So I started checking whether we were able to build all the images without building ‘scipy’ (and ‘scikit-learn’) at all.

    It turned out that we could, as those packages were pulled in by the ‘monasca_analytics’ project, which we do not have to care about. Build time got cut by about half (on an old 8-core machine).

    Now all my patches are merged. Images build. Next step? Verification by deployment.

    Written by Marcin Juszkiewicz on
  4. OpenStack Days 2018

    During the last few days I was in Kraków, Poland at the OpenStack Days conference. It had two (Tuesday) or three (Monday) tracks filled with talks. Of varying quality, as happens at such small events.

    A detailed list of presentations is available in the conference’s agenda. As usual I attended some of them and spent time on the hallway track.

    There was one issue (common to Polish conferences): should a speaker use Polish or English? There were attendees who did not understand Polish, so some talks were a mix of Polish slides with an English presentation, some were fully English and some fully Polish. A few speakers asked the audience for a language preference at the start of their talks.

    Interesting talks? The one from OVH about updating OpenStack (Juno on Ubuntu 14.04 -> Newton on Ubuntu 16.04). Interesting and simple to understand. Szymon Datko described how they started with the Havana release and how they moved from in-house development to cooperation with upstream.

    Another one, about becoming an upstream OpenStack developer, was given by Sławek Kapłoński from Red Hat. Git, gerrit etc. The talk turned into a discussion, with several questions and notes from the audience (including me).

    The DreamLab guys spoke about testing OpenStack. Rally, Shaker and a few other names appeared during the talk. It was interesting but their voices were making me sleepy ;(

    I attended several other presentations but had a feeling that those small conferences give many slots to sponsors which do not always have something interesting to fill them with. Or the title sounds good but then the speaker lacks presentation experience and is unable to keep the flow.

    I met several people from the Polish division of Red Hat, spoke with folks from Mirantis, OVH, Samsung, Suse (and other companies) and met local friends. Had several discussions. So it was worth going.

    Written by Marcin Juszkiewicz on
  5. From a diary of AArch64 porter — parallel builds

    Imagine that you have a package to build. Sometimes it takes minutes. Another takes hours. And then you run htop and see that your machine is idle during such a build… You may ask “Why?” and the answer is simple: multiple cpu cores.

    On x86-64, developers usually have from two to four cpu cores. Maybe double that due to HyperThreading. And that’s all. So for some weird reason they go for make -jX where X is half of their cores. Or completely forget to enable parallel builds.

    And then I come along with an ARM64 system. With 8 or 24 or 32 or 48 or even 96 cpu cores. And have to wait and wait and wait for a package to build…

    So the next step is usually similar — editing the debian/rules file and adding the --parallel argument to the dh call. Or removing the --max-parallel option. And then the build makes use of all those shiny cpu cores. And goes quickly…
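    For illustration, with debhelper 9 a minimal debian/rules with parallel builds enabled looks like this:

    ```makefile
    #!/usr/bin/make -f

    # let dh pass -jN down to the build system
    # (N comes from dpkg-buildpackage -jN / DEB_BUILD_OPTIONS=parallel=N)
    %:
    	dh $@ --parallel
    ```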

    UPDATE: Riku Voipio told me that debhelper 10 does parallel builds by default, provided you set the ‘debian/compat’ value to at least ‘10’.

    Written by Marcin Juszkiewicz on
  6. Yet another blog theme change

    During morning discussions I had to check something on my website and decided that it was time to change the theme. For the nth time.

    So I looked around, checked several themes and then started editing the ‘Spacious’ one. The usual stuff — no categories, changes to colours/fonts/styles. It went much faster than the previous time.

    But then I realised that I do not remember all previous ‘looks’ of my blog. Web archive to the rescue ;D

    When I started, on 1st April 2005, I used some theme. I do not remember what it was called:

    2005

    About one year later I decided to change it. To the Barthelme theme. Widgets arrived, a clean view etc. At that time all my FOSS work was done in my free time. As people were asking about donating money/hardware, I had a special page about it. Anyone remember Moneybookers?

    2006

    A year passed, another theme change. “Big Blue” this time. Something is wrong with the styles, as that white area in the top left corner should have a blue background. At that time I had my own one-person company, so the website had information about available services. And the blog moved to the “blog.haerwu.biz” domain instead of the “hrw.one.pl” one.

    2007

    In 2009 I played with the Atahualpa theme. It looks completely broken when loaded through the web archive. I also changed the site name to my full name instead of a nickname, and got rid of the hard-to-pronounce domain in favour of “marcin.juszkiewicz.com.pl”, which may not be easier to pronounce, but several people were already able to say my last name properly.

    2009

    The same year I went for the “Carrington blog” theme. It looked much better than the previous one.

    2010

    2012 happened. And the change to Twenty Twelve happened too. The end of the world did not happen.

    2012

    Some restyling was done later. And the subtitle went from OpenEmbedded to ARM/AArch64 stuff.

    2015

    Three years with one theme. Quite a long time. So another change: Twenty Sixteen. This one was supposed to display properly on mobile devices (and it did).

    2016

    And now a new theme: Spacious. For another few years?

    2018

    One website and so many changes… Still keeping the simplicity, no plans for adding images to every post etc.

    Written by Marcin Juszkiewicz on
  7. GDPR?

    Generic Data Protected Reduction or something like that. Everyone in the EU (those in the UK too) knows about it due to the amount of spam from all those services/pages you registered with in the past.

    I would not bother writing anything about it, but recently we had a discussion in a pub (beer was involved) and I decided to blog.

    So, to make sure you know: there was some data stored in this system. Every time you left a comment, all the data you entered was recorded. And it could be used to identify the author, so we can agree that those were personal details, right?

    If by any chance you want that data removed, write to me. With the url of the comment you wrote, from the email address used in that comment. I will remove your email and the link to your website (if present) and replace your name with some random words (like Herman Humpalla, for example).

    If I remember correctly there is no other data stored in my system. All statistics done by WordPress are anonymous.

    The website has since moved to being generated as static pages. No statistics, no ads. The only place where any cookie/tracking may happen is the YouTube videos embedded in pages.

    Written by Marcin Juszkiewicz on
  8. Android at Google I/O: what’s the point?

    Another year, another Google I/O. Another set of articles on “what’s new in xyz Google product”. Maps, Photos, AI, this, that. And then all those Android P features which nearly no one will see on their phones (tablets already look like a dead part of the market).

    I have a feeling that this part is more or less useless given the current state of Android. The latest release is Oreo. On 5.7% of devices. Which sounds like a “feel free to ignore” value. Every 4th device runs a 3-year-old version (and usually lacks two years of security updates). Every 3rd one has the 2-year-old Nougat.

    Android versions usage chart

    How many users will remember what’s new in their phones by the time Android P lands on their devices? Probably a very small group of crazy geeks. Some features will get renamed by device vendors. Others will be removed. Or changed (not always in a positive way). Reviewers will write “OMG that feature added by VENDORNAME is so awesome” as no one will remember that it is part of the base system.

    In other words: I stopped caring about what is happening in the Android space. With the most popular version being a few years old, I do not see a point in tracking new features. Who would use them in their apps when you have to care about running on four-year-old Android?

    Written by Marcin Juszkiewicz on
  9. Mass removal of image tags on Docker hub

    At Linaro we moved from packaged OpenStack to virtualenv tarballs. Then we packaged those. But as that took us a lot of maintenance time, we switched to Docker container images for OpenStack and whatever it needs to run. And then we added a CI job to our Jenkins to generate hundreds of images per build. So now we have lots of images with lots of tags…

    Finding out which tags are the latest is quite easy — you just have to go to the Docker hub page of the linaro/debian-source-base image and switch to the tags view. But how do you know which build is complete? We had some builds where all images except one got built and pushed. And the missing one came first in the deployment… So the whole set was b0rken.

    How to remove those tags? One solution is to log in to the Docker hub website and go image by image, clicking every tag to be removed. No one is insane enough to suggest that. And we do not have the credentials to do it anyway.

    So let’s handle it the way we handle things in the SDI team: by automation. Docker has some API, so its hub should have one too, right? Hmm…

    I went through some pages, then issues, bug reports, random projects. I saw code in JavaScript, Ruby and Bash, but nothing usable in Python. Some of the projects assume that no one has more than one hundred images (no paging when getting the list of images) and limit themselves to a few queries.

    I started reading docs and some code. I learnt that GET/POST are not the only methods of doing HTTP. There is also DELETE, which was exactly what I needed. I sorted out authentication and web paths, and something started to work.

    The first version was simple: log in and remove a tag from an image. Then I added querying for the whole list of images (with proper paging) and looping through that list, removing the requested tags from the requested images:

    15:53 (s) hrw@gossamer:docker$ ./delimage.py haerwu debian-source 5.0.0
    haerwu/debian-source-memcached:5.0.0 removed
    haerwu/debian-source-glance-api:5.0.0 removed
    haerwu/debian-source-nova-api:5.0.0 removed
    haerwu/debian-source-rabbitmq:5.0.0 removed
    haerwu/debian-source-nova-consoleauth:5.0.0 removed
    haerwu/debian-source-nova-placement-api:5.0.0 removed
    haerwu/debian-source-glance-registry:5.0.0 removed
    haerwu/debian-source-nova-compute:5.0.0 removed
    haerwu/debian-source-keystone:5.0.0 removed
    haerwu/debian-source-horizon:5.0.0 removed
    haerwu/debian-source-neutron-dhcp-agent:5.0.0 removed
    haerwu/debian-source-openvswitch-db-server:5.0.0 removed
    haerwu/debian-source-neutron-metadata-agent:5.0.0 removed
    haerwu/debian-source-heat-api:5.0.0 removed
    

    The final version got the MIT license as usual; I created a git repo for it and pushed the code. Next step? Probably creating a job on Linaro CI to have a way of removing no-longer-supported builds. And some more helper scripts.
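    To give an idea, the core of such a script can be sketched like this (an illustrative, stdlib-only sketch rather than the actual delimage.py; it assumes the Docker hub v2 endpoints for login, repository listing and tag deletion):

    ```python
    import json
    import urllib.request

    HUB = "https://hub.docker.com/v2"

    def tag_url(namespace: str, image: str, tag: str) -> str:
        # a DELETE request on this URL removes a single tag from an image
        return f"{HUB}/repositories/{namespace}/{image}/tags/{tag}/"

    def login(username: str, password: str) -> str:
        # POST the credentials, get back a JWT token for later requests
        req = urllib.request.Request(
            f"{HUB}/users/login/",
            data=json.dumps({"username": username, "password": password}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["token"]

    def list_images(namespace: str):
        # walk ALL pages of the image list instead of assuming there is one
        url = f"{HUB}/repositories/{namespace}/?page_size=100"
        while url:
            with urllib.request.urlopen(url) as resp:
                page = json.load(resp)
            yield from (entry["name"] for entry in page["results"])
            url = page.get("next")

    def delete_tag(token: str, namespace: str, image: str, tag: str) -> None:
        # DELETE is the HTTP method doing the actual tag removal
        req = urllib.request.Request(
            tag_url(namespace, image, tag),
            headers={"Authorization": f"JWT {token}"},
            method="DELETE",
        )
        urllib.request.urlopen(req).close()
    ```

    Looping delete_tag() over the names from list_images() gives the mass removal shown in the listing above.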

    Written by Marcin Juszkiewicz on
  10. XGene1: cursed processor?

    Years ago Applied Micro (APM) released the XGene processor. It went into the APM BlackBird, APM Mustang, HPE M400 and several other systems. For some time there was no other AArch64 cpu available on the market, so those machines got popular as distribution builders, developer machines etc…

    Then APM got acquired by someone, the CPU part got bought by someone else, and any support just vanished. Their developers moved on to work on the XGene2/XGene3 cpus (APM Merlin etc. systems). And people woke up with unsupported hardware.

    For some time it was not an issue - Linux boots, the system works. Some companies got rid of their XGene systems by sending them to the Linaro lab, some moved them to an ‘internal use only, no external support’ queue etc.

    Each mainline kernel release was “let us check what is broken on XGene this time” time. No serial console output again? OK, we have that ugly patch for it (it got cleaned up and upstreamed). Now we have kernel 4.16 and guess what? Yes, it broke. It turned out that 4.15 was already faulty (we skipped it at Linaro).

    Red Hat’s bugzilla has a Fedora bug for it. It turns out that the firmware has wrong ACPI tables. Nothing new, right? We already know that it lacks PPTT, for example (but that is quite a new table, describing processor topology). This time the bug is in the DSDT one.

    Sounds familiar? If you had an x86 laptop about 10 years ago then it might. DSDT stands for Differentiated System Description Table. It is a major ACPI table used to describe what peripherals the machine has. And the serial ports are described wrongly there, so the kernel ignores them.

    One of the solutions is bundling a fixed DSDT into the kernel/initrd, but that would require adding support for it into Debian, and it would probably not get merged, as no one needs that nowadays (unless they have an XGene1).
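    For the record, the mainline kernel already has an ‘ACPI table upgrade via initrd’ mechanism which such support would build on. A rough sketch of how it is used (assuming the acpica tools and a kernel built with CONFIG_ACPI_TABLE_UPGRADE):

    ```shell
    # dump and decompile the firmware's broken DSDT (acpica tools)
    cat /sys/firmware/acpi/tables/DSDT > dsdt.dat
    iasl -d dsdt.dat                  # produces dsdt.dsl

    # ...fix the serial port descriptions in dsdt.dsl by hand, then recompile...
    iasl dsdt.dsl                     # produces dsdt.aml

    # pack the fixed table as an uncompressed cpio and prepend it to the initrd
    mkdir -p kernel/firmware/acpi
    cp dsdt.aml kernel/firmware/acpi/
    find kernel | cpio -H newc --create > dsdt-override.cpio
    cat dsdt-override.cpio /boot/initrd.img > /boot/initrd.img-dsdt
    ```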

    So far I have decided to stay on 4.14 for my development cartridges. It works and allows me to continue my Nova work. I do not plan to move to another platform, as at Linaro we have probably over a hundred XGene1 systems (M400s and Mustangs) which will stay there for development (it is hard to replace a 4.3U case with 45 cartridges by something else).

    Written by Marcin Juszkiewicz on