2016: computer museums

During the past year I visited some computer-related museums. Not every one I had planned to, but still there were a few of them.

Faculty of Information Technology, Brno

In February, during the Devconf.cz conference, I visited their small “IT Museum” where several machines used in Czechoslovakia were presented.

There were mainframe setups, several storage units and main memory modules from different decades.

The 80s (and 90s) called with several ZX Spectrum clones, the PMD-85 with its clones and some other microcomputers from this side of the Iron Curtain.

It was a nice place to visit, even just to see all those computers made in Czechoslovakia.

For more photos please go to my “2016-02 devconf.cz it museum” album.

Technical Museum, Warsaw

In April I came to Warsaw for the OpenSource Day conference and visited the Technical Museum there to see some Polish computers of the mainframe era.

There were many interesting machines. One of them was AKAT-1, the first transistor-based differential equation analyzer:

Another was the K-202 — the first Polish 16-bit computer. It never became popular because it was shut down by the government.

A few years later the Mera 400 was released. It used K-202 technology:

There were also a few Odra systems:

For full resolution photos go to my Muzeum techniki w Warszawie album.

The National Museum Of Computing, Bletchley Park

May came. I went to the UK to visit Bletchley Park. An awesome place to visit. And right next to it is The National Museum Of Computing (TNMOC for short).

Inside there is history. I mean HISTORY.

By mistake I entered the museum through the wrong door and started from the oldest exhibition. It showed the story of breaking the Lorenz cipher used by Germany during the Second World War, and the hardware designed for that job. Contrary to Enigma, the Allies had no Lorenz machine in their possession.

Rebuild of the British Tunny machine:

Rebuild of the Heath Robinson machine:

Next to it was a room with a working replica of the first programmable electronic computer: Colossus.

And here you can see it running:

[youtube https://www.youtube.com/watch?v=c4UTrfv0HwI]

There were several other computers of course. I saw an ICL 2900 system, several Elliott and PDP systems, some IBM machines and others from the 50s-70s.

One of them was the Harwell Dekatron Computer (also known as WITCH). It is the oldest working digital computer:

Then there was a wide selection of microcomputers from the 80s and 90s. Several British ones and others from everywhere else. There was a shelf with Tube extensions for the BBC Micro, but it lacked the ARM1 one:

For full resolution photos check my The National Museum Of Computing album.

The Centre for Computing History, Cambridge

This museum had been on my list for far too long. When I was in Cambridge a few years ago it was closed. The next time I did not manage to find time to go there. Finally, during the last Linaro sprint, we agreed that we had to go, and we went during a lunch break.

For me the main reason for going there was my wish to see the ARM1 CPU. It was available only as a Tube board (an extension board for the BBC Micro) and only to some selected companies, which makes it quite rare.

The first thing I saw after entering the museum was the “Megaprocessor”. Imagine a CPU the size of a 70s mainframe, with an LED on each signal line, register bit, etc.

The next room was arranged in the form of a British classroom: a set of BBC Micro computers with monitors, manuals and programs.

And then I went to look around. There were many different computers on show. Some behind glass, some turned on with the possibility to play with them (or on them). It was an opportunity to see how design changed through all those years.

There were also several Acorn machines — both ARM and 6502 powered ones.

Like most computer museums, this one also has some exclusive content. This time it was the NeXT workstation which Tim Berners-Lee used as the first web server:

And the Apple Macintosh SE/30 owned by Douglas Adams, author of “The Hitchhiker’s Guide to the Galaxy”. Note the towel on top of the computer:

Another interesting thing was a comparison of storage density through all those years. Note the 5 MB hard drive being loaded onto a plane in the top right corner.

And again — for more pictures and higher resolution visit my The Centre for Computing History album.

2017 plans

In 2017 I would like to visit the Computer History Museum in Mountain View and the computer museum in Paderborn. Maybe something more 😉

Goodbye rawhide

During Flock I decided to start removing rawhide from my main development systems. I got too tired of funny things like X11 freezes, common applications segfaulting or unresponsive programs. I had to create an F24 virtual machine just to get LibreOffice Impress running to finish my slides…

So the laptop went first:

19:51 root@kapturek:~# LANGUAGE=C dnf --releasever 24 distro-sync --allowerasing
[..]
Transaction Summary
================================================================================
Install      12 Packages
Upgrade     180 Packages
Remove       10 Packages
Downgrade  1330 Packages

Once it is done and tested as working, I will do the same with my main desktop. Then no more rawhide on my desktops/laptops. For development boards/servers I will keep rawhide, but only there.
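
A quick sanity check afterwards is to list installed packages whose release does not carry the .fc24 dist tag (a rough sketch; gpg-pubkey entries and some other packages have no Fedora dist tag at all, so expect a bit of noise):

# show installed packages which did not end up with the .fc24 dist tag
rpm -qa --queryformat '%{NAME}-%{VERSION}-%{RELEASE}\n' | grep -v '\.fc24' | sort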

AArch64 desktop hardware?

Soon it will be four years since I started working on the AArch64 architecture. A lot of software changed during that time. A lot in hardware too. But machine availability still sucks badly.

In 2012 all we had was a software model. It was slow, terribly slow. A common joke was AArch64 developers standing in a queue for 10 GHz x86-64 CPUs. So I was generating working binaries by cross compilation. But many distributions only do native builds. In models. Imagine Qt4 building for 3-4 days…
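
(For the curious, a minimal sketch of what such a cross build looks like, assuming your distribution ships an aarch64-linux-gnu cross toolchain; the file names are just examples.)

$ cat hello.c
#include <stdio.h>
int main(void) { puts("Hello from AArch64"); return 0; }
$ aarch64-linux-gnu-gcc -O2 -o hello hello.c   # built on x86-64, targets AArch64
$ file hello                                   # should report an ELF 64-bit ARM aarch64 binary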

In 2013 I got access to the first server hardware, with the first silicon version of the CPU. It was highly unstable, we could use just one core, etc. GCC was crashing like hell but we managed to get stable build results from it. Qt4 was building in a few hours now.

Then the amount of hardware at Red Hat kept growing and growing. Farms of APM Mustangs, AMD Seattle and several other servers appeared, got racked and became available to use. In 2014 one Mustang even landed on my desk (as the first such machine in Poland).

But this was server land. Each of those machines cost about 1000 USD (if not more). And availability was limited too.

Linaro tried to do something about it and created the 96boards project.

First came the ‘Consumer Edition’ range. Yet more small form factor boards with functionality stripped as much as possible: no Ethernet, no storage other than eMMC/USB, a small amount of memory, chips taken from mobile phones, etc. But it was selling! Only because people were hungry to get ANYTHING with AArch64 cores. First the HiKey, then the DragonBoard 410c got released. Then a few other boards. All with the same set of issues: non-mainline kernels, weird bootloaders, binary blobs for this or that…

Then the so-called ‘Enterprise Edition’ got announced, with another ridiculous form factor (and microATX as an option). And that was it. There was a leak of the Husky board which showed how fucked up the design was: ports all around the edges, memory above and under the board and, of course, incompatibility with any industrial form factor. I would like to know what they were smoking…

Time passed. Husky got forgotten for another year. Then Cello was announced as a “new EE 96boards board” while it looked like a redesigned Husky with two fewer SATA ports (because who needs more than two SATA ports, right?). The last time I heard about Cello it was still ‘maybe soon, maybe another two weeks’. Prototypes looked hand soldered, with the USB controller mounted rotated, dead on-board Ethernet, etc.

In the meantime we got a few devices from other companies. Pine64 had a big campaign on Kickstarter and shipped to developers. Hardkernel started selling the ODROID-C2, Geekbox released their TV box and probably something else got released as well. But all those boards were limited to 1-2 GB of memory, often lacked SATA and used mobile processors with their own sets of bootloaders etc., causing extra work for distributions.

The Overdrive 1000 was announced. Without any options for expansion, it looked like SoftIron wanted customers to buy the Overdrive 3000 if they wanted to use a PCI Express card.

So now we have 2016. Four years of my work on AArch64 have passed. Most distributions support this architecture by building on proper servers, but much of that effort goes unused because developers do not have sane hardware to play with (sane meaning expandable, supported by distributions, capable).

There are no standard form factor mainboards (mini-ITX, microATX, ATX) available on the mass market. 96boards failed here, server vendors are not interested, and small Chinese companies prefer to release yet-another-fruit/Pi with a mobile processor. Nothing, null, nada, nic.

Developers know where to buy normal computer cases, storage, memory, graphics cards, USB controllers, SATA controllers and peripherals, so vendors do not have to worry about or deal with that part. But there is still nothing to put those cards into. No mainboards which can be mounted in a normal PC case, have some graphics plugged in, a few SSDs/HDDs connected, mouse/keyboard, monitors, and just be used.

Sometimes it is really hard to convince software developers to make changes for a platform they are unable to test on. And the current hardware situation does not help. All those projects making hardware available “in a cloud” help only a subset of projects — ever tried to run a GNOME/KDE session over the network? With OpenGL acceleration etc.?

So where is my AArch64 workstation? In desktop or laptop form.

This post was written after my Google+ post where a similar discussion happened in the comments.

Visiting UK again — Bletchley Park and Cambridge Beer Festival

For some time I have had “visit Bletchley Park” on my ‘places to visit’ list. Some people told me that there is nothing interesting to see, some said that I should definitely go there. So I will. And I will also grab some beers at the Cambridge Beer Festival like three years ago.

Due to some family duties my visit will be short — landing on Saturday (21st May) and departing on Wednesday (25th May). First Bletchley Park and then Cambridge from Sunday evening.

Plans are simple: walk, see old computers, walk, visit long time no see friends, walk, see not so old computers, maybe play some Ingress, meet other friends, drink some beers, exchange some hardware, buy some hardware etc.

This time I will skip visiting the Linaro office — they moved somewhere outside of Cambridge, so it takes too much time to get there just to say “hi” and drink tea.

As usual I will be online so catch me via Hangouts, Telegram, Facebook, mail or call me if you want to meet.

Read of scrambled sector without authentication

My daughter is in the 1st class of elementary school. One of the subjects she has there is English, for which they use the “Super Sparks Student 1” book from Oxford University Press. The book came with a CD-ROM disc (marked as DVD Video). There are 73 audio files (mp3) and one short movie on it.

But why do I write about it? Because I think that we live in an era when CSS means Cascading Style Sheets rather than Content Scramble System. But not everyone thinks like that.

So back to the DVD^wCDROM. Let’s copy the data from it. The audio files went fine, but problems started when I tried to copy the video:

[ 3701.096102] sr 5:0:0:0: [sr1]  
[ 3701.096105] Sense Key : Illegal Request [current] 
[ 3701.096109] sr 5:0:0:0: [sr1]  
[ 3701.096114] Add. Sense: Read of scrambled sector without authentication
[ 3701.096117] sr 5:0:0:0: [sr1] CDB: 
[ 3701.096119] Read(10): 28 00 00 01 8e e3 00 00 01 00
[ 3701.116089] sr 5:0:0:0: [sr1]  
[ 3701.116096] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE

It took me a while to remember what that means. A scrambled disk! When did it become useless? 2001? 1996? No — 1999 (according to Wikipedia). So just 15 years ago…

I think that Oxford is quite orthodox/conservative, but IT moves faster than they expect.

It took me a few minutes to compile libdvdcss, libdvdread and dvdread, then it was just a simple “sudo dvdread /dev/sr1 >english.iso” plus extracting the files from the disk image. It would have been faster, but you know: I use Fedora (patents, mp3, DeCSS are not accepted in the repo).
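
For the record, the extraction part was nothing fancy. A rough sketch of what it boils down to, assuming the image created above is english.iso (the mount point and target directory are just examples):

mkdir -p ~/super-sparks
sudo mkdir -p /mnt/dvd
sudo mount -o loop,ro english.iso /mnt/dvd   # loop-mount the decrypted image
cp -r /mnt/dvd/. ~/super-sparks/             # copy everything (mp3 files, VIDEO_TS) out of it
sudo umount /mnt/dvd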