Twenty-five years of Linux

When I came back from PTO I had to dig through work mail. One of the threads was about 25 years of Linux, and it contained the question “which was your first kernel?”. I realised that it may not be an easy question to answer.

For me the first was 2.0.2[6-8] (I do not remember exactly) on some university server where I got my first Linux account (normally I used SunOS on a text terminal). I remember that there was a simple root exploit we used.

Then 2.0.36 on my Amiga 1200 (Debian ‘slink’) was the first one I ran on my own hardware.

2.2.10 was the first I used for longer, as it was the Debian ‘potato’ m68k kernel.

2.3.47 was the first I cross-compiled (on i686/linux for m68k/linux). And it worked!

2.4.0-test5 was the first I built for my x86 desktop once I moved from Amiga/AmigaOS to PC/Debian. I had a Duron/600 desktop and an old 386 desktop, both running the same version. The Duron got newer kernels later; the 386 stayed with this one for about a year, until I returned it.

When I bought a Sharp Zaurus SL-5500 PDA, it was running 2.4.18-rmk7-pxa3-embeddix, so that was my first Linux version on a mobile device. The next jump was 2.6.11 on a Zaurus c760, my first 2.6 kernel on mobile.

During OpenZaurus maintenance I started upstreaming kernel patches. 2.6.17 was the first release with my patches in.

When I had that strange ProGear webpad I wrote a backlight driver for it (based on someone’s code), and 2.6.21 was the first release with my driver in (I removed it in 3.7).

AArch64 desktop hardware?

Soon it will be four years since I started working on the AArch64 architecture. A lot changed in software during that time. A lot in hardware too. But machine availability still sucks badly.

In 2012 all we had was a software model. It was slow, terribly slow. A common joke was AArch64 developers standing in a queue for 10GHz x86-64 CPUs. So I was generating working binaries by cross compilation. But many distributions only do native builds, in models. Imagine Qt4 building for 3-4 days…

In 2013 I got access to the first server hardware, with the first silicon version of the CPU. It was highly unstable, we could use just one core, etc. GCC was crashing like hell, but we managed to get stable build results from it. Qt4 was now building in a few hours.

Then the amount of hardware at Red Hat grew and grew. Farms of APM Mustangs, AMD Seattle and several other servers appeared, got racked and became available to use. In 2014 one Mustang even landed on my desk (the first such machine in Poland).

But this was server land. Each of those machines cost about 1000 USD (if not more). And availability was limited too.

Linaro tried to do something about it and created the 96boards project.

First came the ‘Consumer Edition’ range: yet more small form factor boards with functionality stripped as much as possible. No Ethernet, no storage other than eMMC/USB, little memory, chips taken from mobile phones, etc. But it was selling! Only because people were hungry to get ANYTHING with AArch64 cores, though. First the HiKey, then the DragonBoard410 got released, then a few other boards. All with the same set of issues: non-mainline kernel, weird bootloaders, binary blobs for this or that…

Then the so-called ‘Enterprise Edition’ got announced, with another ridiculous form factor (and microATX as an option). And that was it. There was a leak of the Husky board which showed how fucked up the design was. Ports all around the edges, memory above and under the board, and of course incompatible with any industrial form factor. I would like to know what they were smoking…

Time passed. Husky got forgotten for another year. Then Cello was announced as a “new EE 96boards board”, while it looked like a redesigned Husky with two SATA ports fewer (because who needs more than two SATA ports, right?). The last time I heard about Cello it was still ‘maybe soon, maybe another two weeks’. Prototypes looked hand-soldered: USB controller mounted rotated, dead on-board Ethernet, etc.

In the meantime we got a few devices from other companies. Pine64 had a big Kickstarter campaign and shipped to developers. Hardkernel started selling the ODROID-C2, Geekbox released their TV box, and probably something else got released as well. But all those boards were limited to 1-2GB of memory, often lacked SATA, and used mobile processors with their own set of bootloaders etc., causing extra work for distributions.

The Overdrive 1000 was announced. Without any options for expansion, it looked like SoftIron wanted customers to buy the Overdrive 3000 if they wanted to use a PCI Express card.

So now it is 2016. Four years of my work on AArch64 have passed. Most distributions support this architecture by building on proper servers, but much of this effort goes unused because developers do not have sane hardware to play with (sane means expandable, supported by distributions, capable).

There are no standard form factor mainboards (mini-ITX, microATX, ATX) available on the mass market. 96boards failed here, server vendors are not interested, and small Chinese companies prefer to release yet another fruit/Pi with a mobile processor. Nothing, null, nada, nic.

Developers know where to buy normal computer cases, storage, memory, graphics cards, USB controllers, SATA controllers and peripherals, so vendors do not have to worry about this part. But there is still nothing to put those cards into. No mainboards which can be mounted into a normal PC case, have some graphics plugged in, a few SSDs/HDDs connected, mouse/keyboard, monitors, and just be used.

Sometimes it is really hard to convince software developers to make changes for a platform they are unable to test on. And the current hardware situation does not help. All those projects making hardware available “in a cloud” help only a subset of projects — ever tried to run a GNOME/KDE session over the network? With OpenGL acceleration etc.?

So where is my AArch64 workstation? In desktop or laptop form.

This post was written after my Google+ post, where a similar discussion happened in the comments.

AArch64 desktop: last day

Each year you can hear the phrase “this is the year of the Linux desktop”. After a few days with an AArch64 desktop I know one thing: it is not the year of the ARMv8 Linux desktop.

Web browsing

OK, I may be spoiled by the speed of my i7-2600k desktop, but when Firefox with fewer than 20 tabs open is unable to display the characters I type into a textarea fast enough, something is wrong (on a machine with 16GB of RAM). And tell me that this is not typical desktop use of a web browser…

YouTube: the main source of videos of any kind. Sometimes it works, but most of the time I lack the patience to wait until it starts (VP9 and H.264 codec support is present). And there is no way to watch “live hangouts”.

And say bye to music streaming services like Deezer or Spotify.

Gaming

I am not much of a gamer. I installed Quake3 (which I had never played before) and it worked; SuperTuxKart worked as well. But that does not prove anything, as both of those games have low requirements.

It will probably never be a gaming platform for the Linux desktop.

Development

For my style of development it is fine. But all I need is a terminal and gVim ;D

Other hardware?

I think that the results may be affected by the fact that all I have here is an Applied Micro Mustang based on the X-Gene 1 CPU. It is one of the first ARMv8 processors in the Linux world and it is optimized for server use rather than desktop.

One thing is sure: next year I will try this experiment with other AArch64 hardware. I just hope it will be sooner than a year from now (which is my feeling after the lack of new AArch64 hardware announcements from Linaro members during this week's Linaro Connect).

From a diary of AArch64 porter — vfp precision

In recent years a lot of work went into the design of SIMD instructions for different CPUs. x86-64 has some, Power has some, and so does AArch64.

But why am I writing about it? Simple — build failures. Or rather test failures, especially in scientific software like HepMC or alglib. They build fine, but that’s all.

The difference was small (something like 1e-15 or smaller), but it was enough to make tests fail. And what was to blame? SIMD, of course.

AArch64 has FMA (fused multiply-add) instructions which speed up calculations, but the result is more precise than the traditional way where each operation is rounded separately. This is enough for tests to fail 🙁

But there is a solution. To degrade precision you need to add “-ffp-contract=off” to the compiler flags. This disables contraction into FMA, so test results are the same as on x86-64 (on pre-Haswell/Bulldozer cores). As a bonus it works on powerpc64(le) and s390(x) too.
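
Here is a minimal C sketch of the difference (not taken from the failing packages, just the classic textbook example; fma() is the C library call from math.h, and the values are picked only to make single vs. double rounding visible):

/* Build: gcc -ffp-contract=off demo.c -lm
 * The flag keeps the compiler from fusing the "separate" expression itself. */
#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 1.0 + DBL_EPSILON;          /* 1 + 2^-52 */
    double c = -(1.0 + 2.0 * DBL_EPSILON); /* -(1 + 2^-51) */

    /* a*a is rounded first: the tiny 2^-104 term is lost, result is 0 */
    double separate = a * a + c;

    /* single rounding inside fma() keeps the 2^-104 term */
    double fused = fma(a, a, c);

    printf("separate: %g\nfused:    %g\n", separate, fused);
    return 0;
}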

Thanks go to David Abdurachmanov and Andrew Pinski for finding out exactly which flag needs to be used (instead of the -fno-expensive-optimizations I used before).

From a diary of AArch64 porter — PAGE_SIZE

While fixing software to build and run on AArch64, sooner or later you will meet a magical constant named PAGE_SIZE. And in most situations it will be used in the wrong way.

What it does is simple — it tells how big a memory page is. But it does not work that way on the AArch64 architecture, as several values are possible: 4K, 16K (may not be supported by all CPUs) and 64K, with the latter being used in Fedora and other distributions. There were packages which we built at a time when a 4K kernel was used and then wondered why things failed under a 64K kernel…

But how do you handle it as a userspace software developer? The simplest solution is to use the sysconf(_SC_PAGESIZE) function call (same as getpagesize()). But remember not to hardcode anything based on the result you get; otherwise your application can misbehave when run on a kernel with a different page size.
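
For completeness, a minimal sketch of that runtime query (plain POSIX, nothing AArch64-specific):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Ask the system for the page size instead of assuming 4K */
    long page_size = sysconf(_SC_PAGESIZE);

    if (page_size < 0) {
        perror("sysconf");
        return 1;
    }
    printf("page size: %ld bytes\n", page_size);
    return 0;
}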

The good part is that if someone uses the PAGE_SIZE constant in code, it just will not compile on AArch64, as the constant is not present in the system headers. From what I checked, the sys/user.h header has it defined on some platforms and not on others, so it cannot be assumed to be available.

UPDATE: added the 16K page size, which may not be supported by some CPUs.

How to get Xserver running out of box on AArch64

As I want to have an AArch64 desktop running with as few changes as possible, I decided that it was time to get Xserver to just run on the APM Mustang.

Current setup

To get X11 running I need to have an xorg.conf file. And this feels strange, as on x86(-64) I got rid of it years ago.

The config snippet is small and probably could be even smaller:

Section "Device"
        Identifier "radeon"
        Driver  "radeon"
        BusID "PCI:1:0:0"
EndSection

Section "Screen"
        Identifier      "Screen"
        Device          "radeon"
EndSection

Section "DRI"
        Mode 0666
EndSection

Without it, Xserver failed to find the graphics card.

Searching for solution

I cloned the Xserver git repository and started hacking. Over several hours (split across a few days) I added countless LogMessage() calls to the source code, generated a few patches and sent them to the x-devel ML. And finally I found out why it did not work.

It turned out that I was wrong — Xserver was able to find the graphics card. But then it went into the platform_find_pci_info() function, called pci_device_is_boot_vga() and rejected the card.

Why? Because the firmware from Applied Micro did not initialize the card, so the kernel decided not to mark it as the boot graphics device. I do not know whether it is possible to get UEFI to properly initialize a PCIe card on the AArch64 architecture, but there are two other ways to get it working.

hack Xserver

We can hack Xserver to not check pci_device_is_boot_vga(), or to use the first available card if it returns false:

diff a/hw/xfree86/common/xf86platformBus.c b/hw/xfree86/common/xf86platformBus.c
index f1e9423..d88c58e 100644
--- a/hw/xfree86/common/xf86platformBus.c
+++ b/hw/xfree86/common/xf86platformBus.c
@@ -136,7 +136,8 @@ platform_find_pci_info()
     if (info) {
         pd->pdev = info;
         pci_device_probe(info);
-        if (pci_device_is_boot_vga(info)) {
+        if (pci_device_is_boot_vga(info) || xf86_num_platform_devices == 1)
+        {
             primaryBus.type = BUS_PLATFORM;
             primaryBus.id.plat = pd;
         }

This may not work on multi-GPU systems. In that case try removing the “== 1” part.

hack Linux kernel

If the firmware does not give us a boot graphics card, then maybe we can mark the first one as such and everything will work? This is how PowerPC solves it, so let’s take their code:

diff --git a/arch/arm64/kernel/pci.c b/arch/arm64/kernel/pci.c
index b3d098b..eea39ba 100644
--- a/arch/arm64/kernel/pci.c
+++ b/arch/arm64/kernel/pci.c
@@ -18,6 +18,7 @@
 #include <linux/of_pci.h>
 #include <linux/of_platform.h>
 #include <linux/slab.h>
+#include <linux/vgaarb.h>
 
 #include 
 
@@ -84,3 +85,15 @@ struct pci_bus *pci_acpi_scan_root()
        return NULL;
 }
 #endif
+
+static void fixup_vga(struct pci_dev *pdev)
+{
+       u16 cmd;
+
+       pci_read_config_word(pdev, PCI_COMMAND, &cmd);
+       if ((cmd & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) || !vga_default_device())
+               vga_set_default_device(pdev);
+
+}
+DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_ANY_ID, PCI_ANY_ID,
+                             PCI_CLASS_DISPLAY_VGA, 8, fixup_vga);

Summary

Both hacks work. I can just run Xserver and get X11 working. But which one will get upstream, and then into Fedora and other Linux distributions? Time will tell.

There are some issues with those solutions. If there are multiple graphics cards in a system, which one is the primary? Can their order change after a firmware or kernel update?

Thanks go to Dave Airlie for help with Xserver, Mark Salter for pointing me to the PowerPC solution, and Matthew Garrett for the discussion about issues with the kernel solution.

From a diary of AArch64 porter – POSIX.1 functionality

Over the years of development, GCC got several switches which are now considered obsolete/deprecated, and as such they are not available for new ports. Guess what? AArch64 counts as such a new port.

One of those switches is “-posix”. It is not needed anymore, as the “_POSIX_SOURCE” macro deprecated it:

Macro: _POSIX_SOURCE

If you define this macro, then the functionality from the POSIX.1 standard (IEEE Standard 1003.1) is available, as well as all of the ISO C facilities.

But it still shows up sometimes (I saw it in pdfedit 0.4.5, which is so old that it still uses Qt3). So if you find it somewhere, please save the world with “s/-posix/-D_POSIX_SOURCE/g” 🙂
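
For illustration, a minimal sketch of what the macro changes (assuming glibc: in strict ISO C mode a POSIX-only function such as fdopen() is only declared when a feature macro like _POSIX_SOURCE is defined):

/* Build with: gcc -std=c89 -D_POSIX_SOURCE demo.c
 * Drop -D_POSIX_SOURCE and (with glibc) fdopen() below is no longer
 * declared, so the compiler warns about an implicit declaration. */
#include <stdio.h>

int main(void)
{
    FILE *f = fdopen(0, "r"); /* fdopen() comes from POSIX.1, not ISO C */

    if (f)
        fclose(f);
    return 0;
}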

96boards goes enterprise?

96boards is an idea from Linaro to produce some 32- and 64-bit ARM boards. So far two boards have been released in the “consumer” format, and a few more were announced or rumoured. The specification also lists an “extended” version which has space for some more components.

But during Red Hat Summit there was an announcement from AMD with a mention of an “enterprise” format:

How would you like an affordable and compact 160x120mm board to jump start your development efforts with AArch64? AMD and Linaro have been collaborating to develop a 96Boards Enterprise Edition (EE) specification that is ideal for the individual developer. Targeting the server and networking markets, the board will feature a 4-core AMD Opteron A1100 Series processor with two SO-DIMM memory slots, PCIe®, USB, SATA, and Gigabit Ethernet capabilities. Popular operating systems such as CentOS, Fedora, and Red Hat Enterprise Linux Server for ARM Development Preview are targeted for use with this particular board. Additional software downloads, updates, and a forum for software developers will be available via the 96Boards web site. The board is slated to be available in 2H 2015 from distribution partners worldwide and it will be supported through the Linaro Enterprise Group’s 96Boards.org site.

I do wonder where they got the idea to name yet another crazy non-standard board format “Enterprise Edition”. In my understanding, what enterprise users like is something which just works, comes with support and does not require crazy embedded nonsense hacks.

So when I saw a post from Jeff Underhill with photos of the board, I noticed a few things that made me go “argh”.

Top view of AMD "Enterprise" board

Bottom view of AMD "Enterprise" board

The first, of course, is the board format. 160×120mm does not sound like any industrial format: Nano-ITX is 120×120mm, Mini-ITX is 170×170mm. But everyone knows that enterprise people love to be creative and make their own cases. Why was it not done as 170×120mm, with partial compatibility with Mini-ITX cases?

The second thing (related to the first) is connector placement. With a PCI Express x16 slot (with x8 signals) I wonder how it will look when some cables go to one side or the other while the card sticks out past the board. With the SATA ports moved to the other side there would be space for the USB and Ethernet ports, so all cables would be in the same area. Note also the Molex connector to power the SATA disks.

It is nice that there are two memory slots (DDR3 ECC SO-DIMM). But with the second one on the bottom we can probably say goodbye to all PC cases, as it would not fit. Yay for creativity when it comes to cases (again).

There are holes to mount a heatsink above the CPU. From a quick look I think that heatsinks for the FM2 socket may fit.

The HDMI connector suggests some graphics is present. I had not heard about a Radeon core inside the AMD Seattle CPU, but that could have changed since I last checked.

But even with those “issues” I would like to have that board 😉

Git commands which you should really know

Git is now ten years old. More and more developers get lost when they have to deal with CVS or Subversion, as the first SCM they learnt was git. But in daily work I see many people limited to very basic use of it ;(

There are a lot of commands and external plugins for git. I do not want to cover them all, but rather concentrate on the ones installed as part of the git package, and only those which I think EVERY developer using git should know exist and know how to use.

Dealing with other repositories is an easy set: “pull” to merge changes (“fetch” if you only want to have them locally), “push” to send them out. “git remote” is useful too.

Branching is easy and there are a lot of articles on how to do it. Basically: “git branch” to see which one you are on, “git branch -a” to check which are available, and “git checkout” to switch to one.

Checking changes is the next step: “git diff” with all its variants, like checking local uncommitted changes against the local repository, comparing to other branches, checking differences between branches, etc. “git log -p” shows what was changed in earlier commits.

Then comes “status” to see which local files are changed/added/removed and need attention. And “add”, “rm” and finally “commit” to get all of them sorted out.

A lot of people stop here. The problem appears when they get patches…

So how do you deal with patches in the git world? You can of course do “patch -p1 <some.patch” and take care of adding/removing files and committing yourself. But git has a way for that too.

To generate a patch you can use “git diff” and store the output in a file. But this will lack author information and a description. So it is better to commit the changes and then use “git format-patch” to export what you did into a file. Such a file can be attached to a bug tracker, sent by email, put online, etc. Importing it is simple: “git am some.patch”, and if it applies then it lands like a local commit.

There are probably other ways too: Quilt, StGit, etc. But this one uses basic git commands.

And I still remember the days when I thought that git and I did not match ;D

Rawhide: unwanted baby in Fedora world?

For about 15 years I was using the Debian distribution and ones derived from it (like Ubuntu). Basically the whole time I used their development versions, and the number of issues was close to zero. Now I run Rawhide…

For those who do not know: the Fedora world contains four distributions: Fedora, RHEL, CentOS and Rawhide. All new stuff goes to Rawhide, which is then branched to make a Fedora release. Every few years Red Hat forks a released Fedora and uses it as the base for a new RHEL release. Then the CentOS guys create a new release based on RHEL. At least this is how I see it — others will say “but Rawhide is Fedora”.

I think that the problem lies in the development model. All new stuff goes to Rawhide, but at the same time nearly no one is using it, so anything can happen there. For example, my KDE session lacks window decorations, Konsole5 freezes on any window resize, and the common answer to such issues is “You should expect that in Rawhide”.

Going into Fedora IRC channels with questions is just a waste of TCP/IP packets, because the moment you mention Rawhide it is like everyone has fired /ignore on you.

And it is some kind of fun (for some sick/weird definition of it) to watch how people start development of packages just after Fedora releases something. They upgrade and then start to look for what interesting things are happening in Rawhide and can be built.

Each day I am closer to going back to Debian/Ubuntu for my desktop, keeping Fedora just in a VM for development of some packages.