1. Hotplug in VM. Easy to say…

    You run a VM instance. Never mind whether it is part of an OpenStack setup or just a local one started using Boxes, virt-manager, virsh or some other frontend to the libvirt daemon.

    Let's hotplug some hardware

    Then you want to add some virtual hardware to it. And another card and one more controller…

    An easy scenario to imagine, right? What can go wrong, you say? A “No more available PCI slots.” message can happen. On the second or third card or controller… But how? Why?

    As I wrote in one of my previous posts, most VM instances are virtual boxes made of 90s PC hardware, with a simple PCI bus which accepts several cards being added or removed at any moment.

    PCI Express is different

    Things are different on the AArch64 architecture, or on x86-64 with the Q35 machine type. What is the difference? Both are PCI Express machines. And by default they have far too few PCIe slots (called pcie-root-port in qemu/libvirt language). More about PCI Express support can be found on the PCI topology and hotplug page of the libvirt documentation.

    So I wrote a patch for Nova to make sure that enough slots will be available. And then started testing. I tried a few different approaches, discussed ways of solving the problem with upstream libvirt developers, and finally we selected the one and only proper way of doing it. Then I discussed failures with UEFI developers, went to the QEMU authors for help, and explained what I wanted to achieve and why to everyone in each of those four projects. At some point I was seeing pcie-root-port everywhere…

    It turned out that the fix is kind of simple: we have to create the whole PCIe structure with root ports and slots ourselves. This tells libvirt not to attempt any automatic adding of slots (which may be tricky if not configured properly, as you may end up with too few slots for basic add-ons).
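
    For illustration, here is a minimal sketch of what that pre-created structure boils down to, generated as libvirt domain XML fragments from Python. The controller models (pcie-root, pcie-root-port) are real libvirt ones; the port count and indexes are just example values.

    ```python
    # Minimal sketch of pre-creating PCIe slots in libvirt domain XML.
    # The controller models are real libvirt ones; the number of ports
    # is just an example value.

    def pcie_controllers(ports):
        """Yield <controller> elements: a pcie-root bus plus hotpluggable slots."""
        # index 0 is the pcie-root bus itself; root ports start at index 1.
        yield "<controller type='pci' index='0' model='pcie-root'/>"
        for i in range(1, ports + 1):
            # each pcie-root-port gives exactly one hotpluggable PCIe slot
            yield "<controller type='pci' index='{}' model='pcie-root-port'/>".format(i)

    print("\n".join(pcie_controllers(4)))
    ```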

    More PCI Express slots

    Then I went with the idea of using insane values. A VM with one hundred PCIe slots? Sure. So I made one, booted it and then something weird happened: I landed in the UEFI shell instead of getting the system booted. Why? How? Where is my storage? Network? Etc.?

    Limits, limits everywhere…

    It turns out that QEMU has limits. And libvirt has limits… All ports/slots went onto one bus, and the memory for MMCONFIG and/or I/O space was exhausted. There are two interesting threads about it on the qemu-devel mailing list.

    So I added a magic number into my patch: 28. That many pcie-root-port entries in my aarch64 VM instance gave me a bootable system. I still have to check it on an x86-64/q35 setup, but it should be more or less the same. I expect this patch to land in ‘Rocky’ (the next OpenStack release), and I will probably have to find a way to get it into ‘Queens’ as well, because that is what we are planning to use for the next edition of the Linaro Developer Cloud.
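
    As a rough illustration of that cap (a hypothetical sketch, not the actual Nova patch; all the names here are mine):

    ```python
    # Hypothetical sketch, not the actual Nova patch: cap the number of
    # pre-created pcie-root-port entries at the empirically bootable limit.
    MAX_PCIE_ROOT_PORTS = 28  # more left my aarch64 VM stuck in the UEFI shell

    def ports_to_create(devices_needed, spare_for_hotplug=4):
        """Decide how many pcie-root-port controllers to pre-create."""
        wanted = devices_needed + spare_for_hotplug  # leave room for hotplug
        return min(wanted, MAX_PCIE_ROOT_PORTS)
    ```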

    Conclusion

    Hotplug may be complicated. But issues with it can be solved.

  2. One-hit wonders and their other hits

    There are so many musical bands and singers that not every one can get popular. Sometimes they are popular in their country/region but not necessarily worldwide. Or they get one good song and nothing else reaches such popularity. So-called ‘one-hit wonders’.

    One of my friends recently shared a “one-hit wonders” playlist. But as it is with all those lists created during parties, it contained several false entries, which rather showed that someone did not know the other hits of some bands. Anyway, it was interesting enough to play in the background.

    Music was playing, letters were scrolling in the terminal, so I took a bit of time and created something fancier: a playlist of lesser-known hits of ‘one-hit wonders’.

    Sure, there are many missing entries, and some of the listed artists/bands were more popular here and there. I am open to suggestions ;D

  3. Developers planet is online

    People write blogs. People read blogs. But sometimes it is hard to find the blogs of all those interesting people. That’s where so-called “planets” are the solution.

    Years ago there was a “Planet Linaro” website filled with blog posts from Linaro developers. Then it vanished. Later it got replaced by a poor substitute.

    But I do not want to have to track down each Linaro developer to find their blog and add it to Feedly. So instead I decided to create a new planet website. And that’s how Developers Planet was born.

    So far it lists a bunch of blogs of Linaro developers. I used Venus to run it. The code is a few years old, but it runs. I will adapt the HTML/CSS template to be a bit more modern.

    And why a .cf domain? It is free, that’s why.

  4. Graphical console in OpenStack/aarch64

    OpenStack users are used to having a graphical console available. They even take it for granted. But we lacked it…

    We want graphics on AArch64

    When we started working on OpenStack on 64-bit ARM, many things were missing. Most of them have been sorted out already. One thing was still in the queue: the graphical console. So two weeks ago I started looking at the issue.

    Whenever someone tried to use it, Nova reported one simple message: “No free USB ports.” You may ask what that has to do with the console? I thought the same and started digging…

    Long live architecture differences

    As usual, the reason was simple: yet another aarch64 <> x86 difference in libvirt. It turned out that arm64 is one of those few architectures which do not have a USB host controller in the default setup. When Nova is told to provide a graphical console, it adds Spice (or VNC) graphics, a video card and a USB tablet. But in our case the VM instance does not have any USB ports, so the VM start fails with the “No free USB ports” message.

    Let's patch Nova

    The solution was simple: let’s add a USB host controller to the VM instance. But where? Should libvirt do that, or should Nova? I discussed it with libvirt developers and got mixed answers. I opened a bug for it and went off to hack on Nova.

    It turned out that the Nova code for building the guest configuration is not that complicated. I created a patch to add a USB host controller and waited for code reviews. There were many suggestions, opinions and questions. So I rewrote the code. And then again. And again. Finally the 15th version got a “looks good” opinion from all reviewers and was merged.
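
    The gist of the idea can be sketched like this. This is a hedged illustration, not Nova's actual code: the function and variable names are mine, and qemu-xhci is just one USB controller model libvirt/qemu accept.

    ```python
    # Hedged sketch of the idea behind the patch, not Nova's real code.
    # When a graphical console is requested on an architecture without a
    # default USB host controller, emit one before the USB tablet.

    ARCHES_WITHOUT_DEFAULT_USB = {"aarch64"}  # illustrative, not Nova's list

    def graphical_console_devices(arch):
        devices = []
        if arch in ARCHES_WITHOUT_DEFAULT_USB:
            # qemu-xhci is one controller model libvirt/qemu accept
            devices.append("<controller type='usb' model='qemu-xhci'/>")
        # the tablet that previously failed with "No free USB ports"
        devices.append("<input type='tablet' bus='usb'/>")
        return devices
    ```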

    Final effect

    And how does the result look? Boring, as it should:

    [Screenshot: OpenStack graphical console on aarch64]
  5. Everyone loves 90s PC hardware?

    Do you know what the most popular PC machine is nowadays? It is a “simple” PCI-based x86(-64) machine with the i440fx chipset and some expansion cards. Everyone is using them daily. Often you do not even realise that the things you do online are handled by machines from the 90s.

    What? 90s hardware in modern world?

    Sounds like heresy? Who would use 90s hardware in the modern world? PCI cards were supposed to be replaced by PCI Express etc., right? No one uses USB 1.x host controllers, and it is hard to find a PS/2 mouse or keyboard in stores. And you are all wrong…

    Meet the virtual PC

    Most virtual machines in the x86 world are a weird mix of 90s hardware with a bunch of PCI cards to keep them usable in the modern world. Parallel ATA storage went to the trash, replaced by SCSI/SATA/virtual ones. The graphics card is usually a simple framebuffer without any 3D acceleration (so very 90s), with typical PS/2 input devices connected. And you have USB 1.1 and 2.0 controllers with one tablet connected. It sounds like the retro machines my friends prepare for retro events.

    You can upgrade to a USB 3.0 controller, or a graphics card with some 3D acceleration. Or add more memory and CPU cores than any i440fx-based PC owner ever dreamt about. But it is still 90s hardware.

    What about some 21st-century hardware?

    Want to have something more modern? You can migrate to PCI Express. But nearly no one does that in the virtual x86 world. And in the AArch64 world we start from here.

    And that’s the reason why working with developers of projects related to virtualization (qemu, libvirt, openstack) can be frustrating.

    Issues with hotplug?

    Hotplug issues? Which hotplug issues? My VM instance lets me plug in 10 cards while it is running, so where is the problem? The problem is that your machine is 90s hardware with a simple PCI bus and 31 slots present on the virtual mainboard, while a VM instance with PCI Express (aarch64, x86 with the q35 model) has only TWO free slots present on the motherboard. And once they are used, no new slots arrive. Unless you shut down, add free slots and power up again.
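
    If you want to see this for yourself, here is a hedged sketch using the libvirt Python bindings to count how many hotpluggable PCIe slots a running guest actually has (the domain name is made up):

    ```python
    # Count pcie-root-port controllers (i.e. hotpluggable PCIe slots)
    # in a running guest, via the libvirt Python bindings.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open("qemu:///system")   # local libvirt daemon
    dom = conn.lookupByName("myguest")      # hypothetical domain name
    tree = ET.fromstring(dom.XMLDesc())
    root_ports = [c for c in tree.iter("controller")
                  if c.get("type") == "pci" and c.get("model") == "pcie-root-port"]
    print("hotpluggable PCIe slots:", len(root_ports))
    ```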

    Architecture differences?

    Or my recent work: adding a USB host controller. x86 has it because someone ‘made a mistake’ in the past and enabled it. But other architectures got sorted out and did not get it. Now all of them need to have it added when it is needed. I have a patch for Nova for it, and I am rewriting it again and again to get it into an acceptable form.

    Partitioning is fun too. There are so many people who fear switching to GPT…

  6. Devconf.cz? FOSDEM? PTG? Linaro Connect?

    What connects those names? All of them are names of conferences or team meetings, spread over Europe and Asia. And I will be at all of them this year.

    The first will be in Brno, Czechia. A terrible city to travel to, but the number of people I can meet there is enormous. Some guys from my Red Hat team will be there, my boss’ boss (and the boss of my boss’ boss) and several people from the CentOS, Fedora and RHEL (and some other names) communities. Meetings, sessions… Nice to be there. I will stay in the A-Sport hotel, as it is closest to the Red Hat office where I have some meetings to attend.

    Then FOSDEM. The only “week-long conference squeezed into two days” I know. Good luck with meeting me, probably. As usual, my list of sessions covers all the buildings and has several conflicts. I will meet many people, miss even more, and have some beers. This year I am staying in the Floris Arlequin Grand-Place hotel.

    The next one? The OpenStack PTG in Dublin, Ireland. I will finally meet all those developers who review my patches and help me understand the source code of several OpenStack projects. And I will probably answer several questions about the state of AArch64 support. Conference hotel.

    Linaro Connect. Hong Kong again. I have not registered yet, nor looked at flights. I have to do that sooner rather than later, and checking which airline sucks less on intercontinental connections sucks too. With a few members of the SDI team I will talk about our journey through/with OpenStack on our beloved architecture. Conference hotel.

    What after those? Probably OpenSource Day in Poland, maybe some others. We will see.

  7. YADIBP

    Let me introduce a new awesome project: YADIBP. It is cool, FOSS, awesome, the one and only, and invented here instead of there. And it does exactly what it has to do, in the way it has to be done. Truly cool and awesome.

    Using that tool you can build disk images with several supported Linux distributions. Or maybe even with any BSD distribution. And Haiku or ReactOS. Patches for AmigaOS exist too!

    Any architecture. From the 128-bit-wide RUSC-VI to antique architectures like ia32 or m88k, as long as you have either hardware or a qemu port (patches for ARM fast models are in progress).

    Just fetch it from git and use it. It is written in BASIC, so it should work everywhere. And if you lack a BASIC interpreter, then you can run it as Python or Erlang. Our developers are so cool and awesome!

    But let’s get back to reality: there are gazillions of projects for a tool which does one simple thing: build a disk image. And a gazillion more will still be written, because some people have that “Not Invented Here” syndrome.

    And I am getting tired of it.

  8. Today I was fighting with Nova. No idea who won…

    I am working on getting OpenStack running on the AArch64 architecture, right? So recently I went from “just” building images to also using them to deploy a working “cloud” setup. And that resulted in new sets of patches, updates to patches, discussions…

    OpenStack is supposed to make virtualization easier. Create accounts, give users access, and they will make virtual machines and use them without worrying about what kind of hardware is underneath. But first you have to get it working. So this week, instead of only taking care of the Kolla and Kolla-Ansible projects, I also patched Nova, the component responsible for running virtual machines.

    One patch was a simple edit of an existing one to make it comply with all the comments. It took some time anyway, as I had to write a proper bug description to make sure that reviewers would know why it is so important for us. Once merged, we will have UEFI used as the default boot method on AArch64, without any playing with the hw_firmware_type=uefi property on images (which is easy to forget). But this was the easy one…
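
    For the record, here is a hedged sketch of how that image property gets set today, using openstacksdk; the cloud and image names below are made up:

    ```python
    # Hedged sketch: mark an image to boot with UEFI today, before the
    # patch making it the AArch64 default lands. Uses openstacksdk;
    # the cloud and image names are made up.
    import openstack

    conn = openstack.connect(cloud="mycloud")      # from clouds.yaml
    image = conn.image.find_image("ubuntu-arm64")  # hypothetical image name
    conn.image.update_image(image, hw_firmware_type="uefi")
    ```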

    Imagine that you have a rack of random AArch64 hardware and want to run a “cloud”. You may end up in a situation where you have a mix of servers as compute nodes (the ones where VM instances run). In Nova/libvirt this is handled by the cpu_mode option:

    It is also possible to request the host CPU model in two ways:

    • “host-model” - this causes libvirt to identify the named CPU model which most closely matches the host from the above list, and then request additional CPU flags to complete the match. This should give close to maximum functionality/performance, while maintaining good reliability/compatibility if the guest is migrated to another host with slightly different host CPUs. Beware, due to the way libvirt detects host CPU, CPU configuration created using host-model may not work as expected. The guest CPU may confuse guest OS (i.e. even cause a kernel panic) by using a combination of CPU features and other parameters (such as CPUID level) that don’t work.

    • “host-passthrough” - this causes libvirt to tell KVM to passthrough the host CPU with no modifications. The difference to host-model, instead of just matching feature flags, every last detail of the host CPU is matched. This gives absolutely best performance, and can be important to some apps which check low level CPU details, but it comes at a cost wrt migration. The guest can only be migrated to an exactly matching host CPU.

    Nova assumes host-model when KVM/QEMU is used as the hypervisor. And crashes terribly on AArch64 with:

    libvirtError: unsupported configuration: CPU mode ‘host-model’ for aarch64 kvm domain on aarch64 host is not supported by hypervisor

    Not nice, right? So I made a simple patch to make host-passthrough the default on AArch64. But when something is so simple, then its description is probably not so simple…
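
    The gist of it, as a hedged sketch rather than the real Nova change (the function name and arguments are mine):

    ```python
    # Hedged sketch of the gist, not the actual Nova code.
    def default_cpu_mode(virt_type, host_arch):
        """Pick a libvirt cpu_mode when the operator did not set one."""
        if virt_type in ("kvm", "qemu"):
            if host_arch == "aarch64":
                # libvirt rejects host-model for aarch64 KVM domains,
                # so pass the host CPU through unchanged instead.
                return "host-passthrough"
            return "host-model"
        return None
    ```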

    I reported a bug against Nova with some logs attached. Then I dug for some information which would explain the issue better. I found Ubuntu’s bug against libvirt from Ocata times. They used the same workaround.

    So I thought: let’s report a bug against libvirt and request support for the host-model option. There I got a link to another libvirt bug with a set of information on why it does not make sense.

    The reason is simple. No one knows what you are running on when you run Linux on an AArch64 server. In theory there are fields in /proc/cpuinfo, but you still do not know whether the CPU cores in the compute01 server are the same as in compute02. At least from the nova/libvirt/qemu perspective. This also blocks us from setting cpu_mode to custom and selecting a cpu_model, which could be some way of getting the same CPU for each instance regardless of the type of compute node processor cores.
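
    Those /proc/cpuinfo fields can be compared by hand, though. A hedged sketch (the field names are the real aarch64 ones; the comparison logic is mine, and it is exactly what nova/libvirt/qemu do not do for you):

    ```python
    # Collect the aarch64 CPU identification fields from /proc/cpuinfo.
    # "CPU implementer" and "CPU part" are real field names on aarch64.
    def aarch64_cpu_ids(path="/proc/cpuinfo"):
        """Return the set of (implementer, part) pairs across all cores."""
        ids = set()
        implementer = None
        with open(path) as f:
            for line in f:
                if line.startswith("CPU implementer"):
                    implementer = line.split(":")[1].strip()
                elif line.startswith("CPU part"):
                    ids.add((implementer, line.split(":")[1].strip()))
        return ids

    # Same set on compute01 and compute02? Then the cores at least
    # identify alike; nothing above libvirt checks this for you.
    print(aarch64_cpu_ids())
    ```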

    The good side is that VM instances will work. The problem may appear when you migrate a VM to a CPU with different cores: it may work. Or it may not. Good luck!
