Soon it will be four years since I started working on the AArch64 architecture. A lot changed in software during that time. A lot in hardware too. But machine availability still sucks badly.

In 2012 all we had was a software model. It was slow, terribly slow. A common joke was that AArch64 developers were standing in a queue for 10GHz x86-64 CPUs. So I was generating working binaries using cross compilation. But many distributions only do native builds. In models. Imagine Qt4 taking 3-4 days to build…

In 2013 I got access to the first server hardware, with the first silicon version of the CPU. Highly unstable: we could use just one core, etc. GCC was crashing like hell, but we managed to get stable build results from it. Qt4 was building in a few hours now.

Then the amount of hardware at Red Hat grew and grew. Farms of APM Mustangs, AMD Seattle boxes and several other servers appeared, got racked and became available to use. In 2014 one Mustang even landed on my desk (as the first such machine in Poland).

But this was server land. Each of those machines cost about 1000 USD (if not more), and getting hold of one was hard too.

Linaro tried to do something about it and created the 96boards project.

First came the ‘Consumer Edition’ range: yet more small form factor boards with functionality stripped down as much as possible. No Ethernet, no storage other than eMMC/USB, a small amount of memory, chips taken from mobile phones, etc. But it sold! Only because people were hungry to get ANYTHING with AArch64 cores. First HiKey was released, then the DragonBoard 410, then a few other boards. All with the same set of issues: non-mainline kernel, weird bootloaders, binary blobs for this or that…

Then the so-called ‘Enterprise Edition’ got announced. With another ridiculous form factor (and microATX as an option). And that was it. There was a leak of the Husky board which showed how fucked up the design was: ports all around the edges, memory above and under the board, and of course incompatible with any industrial form factor. I would like to know what they were smoking…

Time passed by. Husky got forgotten for another year. Then Cello was announced as a “new EE 96boards board”, while it looked like a redesigned Husky with two fewer SATA ports (because who needs more than two SATA ports, right?). The last time I heard about Cello it was still ‘maybe soon, maybe another two weeks’. Prototypes looked hand-soldered, with the USB controller mounted rotated, dead on-board Ethernet, etc.

In the meantime we got a few devices from other companies. Pine64 ran a big Kickstarter campaign and shipped to developers. Hardkernel started selling the ODROID-C2, Geekbox released their TV box, and probably something else got released as well. But all those boards were limited to 1-2GB of memory, often lacked SATA, and used mobile processors with their own sets of bootloaders etc., causing extra work for distributions.

The Overdrive 1000 was announced. Without any options for expansion, it looked like SoftIron wanted customers to buy the Overdrive 3000 if they wanted to use a PCI Express card.

So now it is 2016. Four years of my work on AArch64 have passed. Most distributions support this architecture by building on proper servers, but most of this effort goes unused because developers do not have sane hardware to play with (sane meaning: expandable, supported by distributions, capable).

There are no standard form factor mainboards (mini-ITX, microATX, ATX) available on the mass market. 96boards failed here, server vendors are not interested, and small Chinese companies prefer to release yet another fruit/Pi with a mobile processor. Nothing, null, nada, nic.

Developers know where to buy normal computer cases, storage, memory, graphics cards, USB controllers, SATA controllers and peripherals, so vendors do not have to worry about that part. But there is still nothing to put those cards into. No mainboards which can be mounted in a normal PC case, have some graphics plugged in, a few SSDs/HDDs connected, mouse/keyboard, monitors, and just be used.

Sometimes it is really hard to convince software developers to make changes for a platform they are unable to test on. And the current hardware situation does not help. All those projects making hardware available “in a cloud” help only a subset of projects. Ever tried to run a GNOME/KDE session over the network? With OpenGL acceleration etc.?

So where is my AArch64 workstation? In desktop or laptop form.

Post written after my Google+ post, where a similar discussion happened in the comments.


27 thoughts on “AArch64 desktop hardware?”

  • 25th July 2016 at 15:44

    Thanks for summing up the sad state of AArch64 hardware. With decades of industry experience you would think that vendors would at least get the form factor, port and memory placement right, but clearly I give them too much credit. Let’s hope 2017 will be the year of the (AArch64) Linux desktop! 🙂

    • 26th July 2016 at 11:22

      It goes into the range of toy boards: a small amount of memory, no real storage, a weird bootloader setup.

  • 25th July 2016 at 17:07

    Well, on the server side we are using this – and yes, we were the first in Poland (in 2015) to use it. Really nice and stable devices, with the best form factor for us: a Moonshot cartridge. The desktop side… well, that is another story. There is a chance that some Moonshot cartridges will work in a small desktop case. For now, just some Xeon-based machines are compatible with this setup. On the other side, OLIMEX has this – but when will they ship it and how will it perform? Well, you point out a problem that is known – and nobody is doing anything about it. Which means it’s an opportunity to make some business… But…

    • 26th July 2016 at 11:23

      I already have an X-Gene 1 based machine at home and do not want another one ;D

      Moonshot is a nice piece of hardware. I have used them remotely (we have them at Red Hat).

  • 25th July 2016 at 17:45

    What about Cavium’s offerings?

  • 25th July 2016 at 18:00

    It is not meant as a development platform, but the Nvidia Shield TV, for example, runs Ubuntu with some hacks and sports true OpenGL (side by side with OpenGL ES). Despite the lack of a decent number of I/O ports, misusing it as a desktop machine sometimes makes you forget the misuse.

  • 25th July 2016 at 18:18

    JTX1 is probably the closest – but with a fixed memory configuration. Still, 4GB is not ridiculously bad. I’ve also got Shield TVs hacked for pseudo-desktop use and a Pixel C hacked for pseudo-laptop-oriented AArch64 development. Still waiting for the sub-$500 option with a standard form factor, DIMM slots and PCI-E…

  • 25th July 2016 at 18:26

    With experience on armhf rather than arm64, I’d say those little boards are not that far from being usable as one’s GUI machine. You can run any DE other than GNOME 3 on them (GNOME 3 is, for all practical purposes, i386/amd64 only), run a browser, connect to servers you manage, etc. Heck, even an armhf laptop (Omega OAN133) I bought many years ago would be usable if not for its shittily made keyboard. I don’t really play games, and when I do, it’s usually in DosBox etc., which works fine on underpowered boxes of any architecture. All of those have working sound, can play videos, etc.

    For me, the only hard showstopper is the lack of dual monitor support. I got too used to the convenience to be able to stand single-screen tunnel vision for long. The closest I’ve heard of is “Gert’s VGA adapter” for the RPi (but then, you don’t want any RPi near you…). Having one monitor on VGA is fine for me, as that’s what I use right now on amd64. I hate modern narrow-strip aspect ratios, so I use a pair of legacy 1280×1024 monitors; they are far easier to get as VGA-only.

    Memory isn’t that big a problem: you do want lots of memory on a server, but on a desktop/laptop, once you can run a browser and a compiler, the only big users are virtual machines, which can run fine on a noisy box in your cellar. Current small boards tend to have 2GB, which does suck (right now my Firefox with ~40 extensions and ~200 tabs takes 3GB RSS) but might be manageable.

    The lack of SATA sucks but isn’t a showstopper: I just noticed my /home is only 8GB (!). The rest is umpteen chroots and virtual machines on SSD, plus terabytes of linear-access files on HDD. Everything that’s on HDD could just as well be attached via network.

    Thus, while I agree with you that a big desktop would be nice, modern cheap boards are nearing usability.

    • 26th July 2016 at 11:26

      I run several virtual machines during a work day. Hardware without 16GB of RAM and a few hard drives does not handle such a load.

      And desktop environments like KDE/GNOME 3 are usable when you can plug in a normal PCI Express card 😀 Look in the archives for my ‘AArch64 desktop’ posts.

  • 25th July 2016 at 18:41

    This is exactly my question too. With the revelations about the IME I’m just done with Intel… and it feels like AArch64 could be my answer. I’ve already started getting used to the platform. I have Gentoo running well on ODROID-C2s, and I’m working on doing the same for the NanoPi M3. Those are the only arm64 platforms I have at the moment. I’m going to cram an ODROID-C2 into a PiTop just for fun, but I too want real desktop hardware.

  • 25th July 2016 at 19:13

    Not 64-bit, but the HP Chromebook 14 G3 (FHD touch version) is a nice piece of hardware. I couldn’t find any 64-bit version of the Tegra either, and the few ARM Chromebooks out there target only the lower end. Thank you, Intel…

  • 26th July 2016 at 08:46

    The ARM ecosystem is horrendous. I’d be happy if I could just get a decent ARMv7 machine with a SoC that’s completely supported upstream in Linux and U-Boot (instead of some ancient fork). I don’t even care about the GPU; I just want decent low-powered ARM units to play with.

  • 26th July 2016 at 15:10

    Why don’t we finally admit this ARM experiment was a failure, that nobody is going to use it outside of locked-down smartphone and tablet hardware, and let Fedora concentrate on the x86 architecture its users are actually using in the real world? 32-bit ARM needs to be demoted back to the secondary architecture it really is, and the proposal to make secondary architectures (including aarch64) essentially primary in Koji must be rejected. Toolchain bugs on exotic architectures should not fail our x86 builds, and we also should not have to wait for slow ARM builders every time. There will likely just never be ARM hardware that can compete with x86 in the desktop space, and if this should ever change defying all odds, that would be the time to make ARM primary, not now.

    • 26th July 2016 at 15:41

      Kevin: maybe for you Fedora matters only on the desktop. But for other people that is only a subset of Fedora. And ARM != AArch64.

      So far several bugs in Fedora have been fixed just because they were caught on secondary architectures before they appeared on x86-64. Speed-wise, things are going to change when it comes to 32-bit ARM builders: we have to finish some work around virtualization, and then all 32-bit builders will be replaced by VMs on 64-bit AArch64 boxes.

      • 27th July 2016 at 02:21

        Kevin: maybe for you Fedora matters only on the desktop. But for other people that is only a subset of Fedora.

        Who seriously uses Fedora on mobile devices (other than notebooks/laptops)? All the non-toy devices are unsupported due to lack of non-blob drivers and/or a locked-down bootloader. (Sorry, but a board that ships without even a case and which requires peripherals 10+ times the price of the board to do anything is what I call a “toy device”. And your article seems to agree with this assertion. Such a device is also typically no longer mobile as soon as you attach the peripherals; true mobile devices need integrated peripherals.) Smartphones are explicitly a non-target for Fedora ARM, and only very few tablets are supported.

        And ARM != AArch64.

        That’s like saying “x86 != x86_64”.

        So far several bugs in Fedora have been fixed just because they were caught on secondary architectures before they appeared on x86-64.

        That’s negligible compared to the bugs that appear ONLY on secondary architectures (or the “primary” architecture armv7hl), typically because the toolchain is just broken (which means we either have to wait for the toolchain to get fixed, which can take months, or find some way to hack around the breakage, or just use ExcludeArch).

        Speed-wise, things are going to change when it comes to 32-bit ARM builders: we have to finish some work around virtualization, and then all 32-bit builders will be replaced by VMs on 64-bit AArch64 boxes.

        And those will likely still be slower than the x86 builders for everything that cannot be parallelized.

  • 26th July 2016 at 18:21

    I am using my Chromebook Flip now. It works well and has long battery life while still being very cheap, and I can install Arch on the SD card and let it run. The only problem is that, like you, I need a more powerful CPU and more RAM. I also need SATA or USB 3 support (I mean real speed, not just a port soldered on).

  • 15th August 2016 at 03:42

    I’m towards the end of the EOMA68-A20 crowdfunding campaign and have been tracking the fabless SoC semiconductor industry closely for the past five years, evaluating almost a hundred different SoCs in that time.

    It was 2012 when the “Towards an FSF Endorseable Processor” initiative was announced, based on the same understanding that Martin voices here, namely that leaving things to the incumbent Fabless Semi companies simply is not going to get results.

    I was staggered to learn of Linaro’s 96boards effort because, as someone who had had to spend 5 years creating an open standard, it took me all of five minutes to spot half a dozen nails in the coffin of 96boards’ fait accompli announcement. The public write-up I made got a half-hearted response from the CEO with “promises to improve the standard” which, as anybody in the standards world knows, just makes matters WORSE: now the standard, whatever it is, cannot be trusted by ANYONE, because you no longer know which version of the standard you have to be compatible with. To date my open comments and invitations to help them develop future standards have gone completely unanswered. They’re right there on the forum!

    The problem that we have with SoCs is that licensing the various hard macros is really expensive (a DDR3 controller with up to 2GByte of memory addressing, for example, is USD 300k excluding royalties). You’d think it was easy to add an extra address line to make it 4GB, or two more to make it 8GB? Ahhh… no, because that’s “outside of normal”. Nobody in the low-power tablet/smartphone market does 4GB or 8GB of RAM right now, right? Ergo, you simply can’t have more than 2GB of RAM addressing, because it would price any SoC made with that hard macro out of the market.
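    The address-line point is simple powers-of-two arithmetic: each extra line on the memory interface doubles the addressable range. A minimal sketch of that doubling (the `addressable_gib` helper is purely illustrative, not anything from the hard macro licensing world):

```python
# Each address line doubles the addressable range:
# 2**31 bytes = 2 GiB, 2**32 = 4 GiB, 2**33 = 8 GiB.
def addressable_gib(address_lines: int) -> int:
    """Return the addressable memory in GiB for a given number of address lines."""
    return 2 ** address_lines // 2 ** 30

for lines in (31, 32, 33):
    print(lines, "address lines ->", addressable_gib(lines), "GiB")
```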

    So this is why we see only 2GB of RAM addressing even on the low-power 64-bit SoCs right now. Even the Intel-Rockchip collaboration has restricted addressing. Even the latest 600-pin 8300 series stuff? Restricted to 2GB of RAM. And the desktop stuff? Yeah, forget it for now.

    I think it is basically going to take a specialised Chromebook processor being brought out before we get above 2GB of RAM addressing. But if anyone can work out a way to raise $5m to $10m, I can reactivate the contacts I have to get a processor designed and made. That would, I feel, be much more effective: don’t bother waiting around, just take responsibility for dealing with the problem directly and get on with it. The EOMA68 initiative is a start on that.

    • 24th August 2016 at 23:37

      Something doesn’t add up here. There are high-end phones being released right now with 3 or 4GB of RAM. I distinctly remember even ye olde OMAP5 supporting at least 8GB of RAM (or was it 16GB?). It seems like this is only a problem for the lowest of the low-end SoCs.
