Each time I update my Fedora desktop to a new release (usually around the Beta) I give Wayland a try. Which tells you that I still use X11.
My setup
My desktop has a Ryzen CPU and an NVidia GTX 1050 Ti graphics card. Only one monitor (34” 3440x1440). I use the binary blobs, as this generation of GPU chipset is not really usable with the FOSS driver (nouveau).
For a desktop environment I use KDE. Which means the Plasma desktop/panel, Konsole and a few KDE apps. Firefox and Chrome as web browsers, Thunderbird for mail, Steam for gaming and Zoom (or Google Meet) for most video calls.
About two and a half years have passed since the start of the COVID-19 pandemic. A time when most conferences got cancelled or went online (with different levels of success).
A conference which should have been a YouTube playlist
There is a saying: “a meeting which should have been an email”. It describes meetings which were a complete waste of time for most attendees. During the pandemic several events were “a conference which should have been a YouTube playlist”. Never mind whether the talks were interesting or not.
Far too often there was no way to chat with speakers or other “attendees”. Or there was one chat channel for the whole conference or track. Also without threading (just one long list of messages) and with no way to mention names.
Life kicks in
Some time ago one online conference took place. Three days of talks about things which interest me. I had plans to attend several of them. Then life happened — video calls and local stuff. The several hours of time difference were also a problem.
The good part is that the organizers recorded all talks, so I can watch them later. I “just” missed the chance to take part in the chat and the Q&A sessions at the end.
Started to ignore
I started to ignore most invites to such events. Sooner or later videos from them land online, so I pick those which interest me.
Sitting in front of a screen and watching people talk is boring. Especially after over two years of doing it because there was no other way.
It was yet another boring week. I got some thanks for the work we did to get TensorFlow running on AArch64 and was asked whether there is any other Python project which could use our help.
I had some projects already on my list of things to check. And then I found the “Top PyPI Packages” website…
Let’s install 5000 Python packages
So I got an idea — let me grab the list of the top 5000 PyPI packages and check how many of them have issues on AArch64.
The plan was simple. Loop over the list and do 3 tasks:
create virtualenv
install package
destroy virtualenv
This way each package had the same environment and I did not have to worry about version conflicts (the full script is at the end of this post).
Virtualenv preparation
To not repeat the creation of the virtualenv five thousand times, I did it once:
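A minimal sketch of that preparation, based on the full script at the end of this post. In phase 1 the pip install call also used the “--only-binary” and “--no-compile” options (dropped later in phase 2), so only ready-made wheels were accepted:

python3 -m venv venvs/clean
. venvs/clean/bin/activate
pip install -U pip wheel setuptools
deactivate

# then, for each package (inside its own copy of the clean virtualenv),
# phase 1 accepted ready-made wheels only and skipped byte-compilation:
pip install --no-input --only-binary :all: --no-compile "${package}"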
So if a package provided only a source tarball/zip then it was marked as failed.
There were 1569 packages which failed to pass this phase. Common issues (other
than missing some development headers):
INFO: pip is looking at multiple versions of PACKAGE_NAME to
determine which version is compatible with other requirements. This could
take a while.
INFO: pip is looking at multiple versions of <Python from Requires-Python>
to determine which version is compatible with other requirements. This could
take a while.
INFO: This is taking longer than usual. You might need to provide the
dependency resolver with stricter constraints to reduce runtime. See
https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort
this run, press Ctrl + C.
ERROR: Cannot install OTHER_PACKAGE_NAME because these package versions
have conflicting dependencies.
ERROR: Could not find a version that satisfies the requirement
OTHER_PACKAGE_NAME (from versions: none)
ERROR: Could not find a version that satisfies the requirement
OTHER_PACKAGE_NAME==N.V.R (from ANOTHER_PACKAGE_NAME) (from versions: x.y,
x.y.z, x.z.z)
ERROR: No matching distribution found for OTHER_PACKAGE_NAME
Note that failure at this phase is allowed, as I just wanted ready-to-use wheel files.
The whole process took about 20 hours on HoneyComb.
Phase 2
The main difference was getting rid of the “--only-binary” and “--no-compile” options from the pip install calls.
Still no additional development packages were installed. The cache from phase 1 was used to avoid re-downloading/re-building existing wheel files.
The main issue is how single-threaded pip install is. Never mind that HoneyComb has 16 CPU cores — only one is used (and these are Cortex-A72 cores, so nothing fancy). This makes build times longer than they are supposed to be:
Building wheels for collected packages: pandas, typing
Building wheel for pandas (setup.py): started
Building wheel for pandas (setup.py): still running...
Building wheel for pandas (setup.py): still running...
Building wheel for pandas (setup.py): still running...
Building wheel for pandas (setup.py): still running...
Building wheel for pandas (setup.py): still running...
Building wheel for pandas (setup.py): still running...
Building wheel for pandas (setup.py): still running...
Building wheel for pandas (setup.py): still running...
Building wheel for pandas (setup.py): still running...
Building wheel for pandas (setup.py): still running...
Building wheel for pandas (setup.py): still running...
There were 313 packages which failed to pass this phase. Issues were similar to those in phase 1, with one exception (as building packages was allowed):
ERROR: Could not build wheels for OTHER_PACKAGE_NAME, which is required to
install pyproject.toml-based projects
This phase took about 13 hours on HoneyComb.
Phase 3
About 6% of the packages were left. Now it was time to install some development headers:
blas-devel
bzip2-devel
cairo-devel
cyrus-sasl-devel
gmp-devel
gobject-introspection-devel
graphviz-devel
gtk3-devel
httpd-devel
krb5-devel
lapack-devel
libcap-devel
libcurl-devel
libicu-devel
libjpeg-devel
libmemcached-devel
mariadb-devel
ncurses-devel
openldap-devel
openssl-devel
poppler-cpp-devel
postgresql-devel
protobuf-compiler
unixODBC-devel
xmlsec1-devel
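Inside the “manylinux2014” container that is a single yum call (a sketch; the package list is exactly the one above):

yum install -y blas-devel bzip2-devel cairo-devel cyrus-sasl-devel gmp-devel \
    gobject-introspection-devel graphviz-devel gtk3-devel httpd-devel \
    krb5-devel lapack-devel libcap-devel libcurl-devel libicu-devel \
    libjpeg-devel libmemcached-devel mariadb-devel ncurses-devel \
    openldap-devel openssl-devel poppler-cpp-devel postgresql-devel \
    protobuf-compiler unixODBC-devel xmlsec1-devel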
I created this list by checking how packages failed to build. It should be longer, but CentOS 7 (the base of the “manylinux2014” container image) does not provide everything needed (for example an up-to-date Rust compiler or LLVM).
Before starting the phase 3 run I removed all entries related to “pyobjc”, as they are macOS related so there was no need to waste time on them again.
After 3.5 hours I had another 54 packages built.
Phase 4
Some packages are not present in CentOS 7 but are present in the EPEL repository. So after enabling EPEL (yum install -y epel-release) I installed another set of development packages:
augeas-devel
boost-devel
cargo
gdal-devel
leptonica-devel
leveldb-devel
suitesparse-devel
portaudio-devel
proj
protobuf-devel
rust
zbar-devel
Some of those packages should have been installed in the previous step. I did not catch them because the build processes failed earlier.
Before starting this round I went through the logs and removed every package that:
failed with “No matching distribution for PACKAGE_NAME”
failed with “use_2to3 is invalid” (aka “I need old setuptools”)
required Bazel
required tensorflow
At the end I had about one hundred packages which failed to build. For different reasons:
missing build dependencies
expecting newer libraries than “manylinux2014” (CentOS 7) has
not listing all dependencies (everyone has “numpy” installed, right?)
being Python 2.7 only
using removed modules or classes
breaking install to say “this module is deprecated, use OTHER_NAME”
not supporting AArch64 architecture
Summary
One hundred out of the top five thousand packages equals two percent of failures. There were 13 failures in the top 1000 and another 14 in the second thousand.
Is 2% an acceptable amount? I think that it is. Some improvements can still be made, but nothing that requires showing here. OK, it would be nice to get TensorFlow for AArch64 released by upstream under the same name (instead of the “tensorflow_aarch64” builds done by the team at Linaro).
How to run it?
After my tweet I got several comments and people wanted to run this test on other architectures, operating systems or devices. So I wrote a simple script:
#!/bin/bash
echo "cleanup after previous runs"
rm -rf venvs/* logs/*
echo "Prepare clean virtualenv"
python3 -mvenv venvs/test
. venvs/test/bin/activate
pip install -U pip wheel setuptools
deactivate
cp -a venvs/test venvs/clean
echo "fetch and prepare top5000 list"
rm top-pypi-packages-30-days.*
wget https://hugovk.github.io/top-pypi-packages/top-pypi-packages-30-days.json
grep project top-pypi-packages-30-days.json \
    | sed -e 's/"project": "\(.*\)"/\1/g' > top-pypi-packages-30-days.text
echo "go through packages"
mkdir -p logs
for package in `cat top-pypi-packages-30-days.text`; do
    echo "processing ${package}"
    rm -rf venvs/test
    cp -a venvs/clean venvs/test
    source venvs/test/bin/activate
    pip install --no-input \
        -U --upgrade-strategy=only-if-needed \
        $package | tee logs/${package}.log
    deactivate
    echo "-----------------------------------------------------------------"
done
It should work on any operating system capable of running Python. All build dependencies need to be installed first. I suggest mounting “tmpfs” over the “venvs/” directory, as there will be a lot of temporary I/O going on there.
Once it finishes, just run grep to check how many packages were installed successfully:
grep "^Successfully installed" logs/*|wc -l
Please share your results. The contact page lists several ways to reach me.
Due to current events it has been hard to concentrate on work lately. So I decided to learn something new but still work related.
In the OpenStack Kolla project we provide images with Collectd, Grafana, InfluxDB, Prometheus and Telegraf to give people options for monitoring their deployments. I had never used any of them. Until now.
Local setup
At home I have some devices I could monitor for random data:
TrueNAS based NAS
OpenWRT based router
OpenWRT based access point
HoneyComb arm devbox
other Linux based computers
Home Assistant
All machines can ping each other so sending data is not a problem.
Software
For the software stack I decided to go for PIG — Prometheus, InfluxDB, Grafana.
I used instructions from blog posts written by Chris Smart. Read both — they have more details than I mention here. Also more graphics.
OpenWRT
It turned out that OpenWRT devices can use either “collectd” or the Prometheus node exporter to gather metrics. I had the first one installed already (to have some graphs in the webUI). If you do not, then all you need is this set of commands:
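(A sketch, assuming stock OpenWRT packages; “luci-app-statistics” pulls collectd in as a dependency and “collectd-mod-network” is needed later to send metrics out.)

opkg update
opkg install luci-app-statistics collectd collectd-mod-network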
TrueNAS
TrueNAS already has reporting in the webUI. It can also send data to a remote Graphite server. So again I had everything in place for gathering metrics.
The next step was sorting out the data collector and visualisation. There is a community-provided “Grafana & Influxdb” plugin. I installed it, gave the jail a name, set it to request its own IP and got “grafana.lan” running in a minute.
Configuration
At this moment everything is installed for both gathering and visualisation of data. Time for some configuration.
I logged into the jail (“sudo iocage console grafana”) and the fun started.
Influxdb
First, InfluxDB needed to be configured to accept both “collectd” data from the OpenWRT nodes and “graphite” data from TrueNAS. A simple edit of the “/usr/local/etc/influxd.conf” file to have this:
[[collectd]]
  enabled = true
  bind-address = ":25826"
  database = "collectd"
  # retention-policy = ""
  #
  # The collectd service supports either scanning a directory for multiple types
  # db files, or specifying a single db file.
  typesdb = "/usr/local/share/collectd"
  #
  security-level = "none"
  # auth-file = "/etc/collectd/auth_file"
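For the TrueNAS side a “graphite” listener is needed too. A sketch using InfluxDB 1.x defaults (port 2003; the “graphite” database name is my assumption, use whatever matches your data source setup):

[[graphite]]
  enabled = true
  bind-address = ":2003"
  # database name is an assumption - match it with your Grafana data source
  database = "graphite"
  protocol = "tcp"
  # separator between graphite metric name elements
  separator = "."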
And data should be gathered from both OpenWRT (collectd) and TrueNAS (graphite) nodes.
OpenWRT
Here are two ways to configure an OpenWRT based system:
The first one is visiting webUI -> Statistics -> Setup -> Output plugins, then enabling the “network” plugin and giving the IP of the InfluxDB server.
The second one is a simple edit of the “/etc/collectd.conf” file:
LoadPlugin network
<Plugin network>
  Server "192.168.202.183" "25826"
</Plugin>
No idea why it wants an IP address instead of a FQDN.
TrueNAS
TrueNAS is simple — visit webUI -> System -> Reporting and give the address (IP or FQDN) of the remote graphite server.
I enabled both the “Report CPU usage in percent” and “Graphite separate instances” checkboxes because of the Grafana dashboard I use.
Grafana
I logged into http://grafana.lan:3000, set up the admin account and then set up the data sources. Both will be of the “InfluxDB” type — one for the ‘collectd’ database keeping metrics from the OpenWRT nodes, the second for the ‘graphite’ one with data from TrueNAS. Use “http://localhost:8086” as the URL, provide the database names and name each data source in a way that tells you which one is which.
Visualisation
OK, software installed on all nodes, configuration files edited. Metrics flow to InfluxDB; you can check them using “show series” in the Influx shell tool:
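(Run inside the jail; database names as configured earlier.)

influx
> use collectd
> show series

Time to set up some dashboards in Grafana.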
This one looks like a good base. It also suggests I should visit my server wardrobe and check the fans in the NAS box.
Future
I need to alter both dashboards to make them show some really usable data. Then add some alerts. And more nodes — there is still no Prometheus in use here gathering metrics from my generic Linux machines. Nor from my server.
2021 has ended, so let me try to mark some of its interesting moments.
January
Started looking for a new flat. This time to buy, not to rent.
As a kind of experiment I started working on an I/O plate for the APM Mustang, as it never had one. After several hours and several versions I finally got something usable. Not perfect but good enough.
A friend found me a nice flat. Went, saw, decided to buy.
Jon Nettleton from SolidRun asked whether I had any interest in a HoneyComb. We discussed it and a few days later I had a HoneyComb at home. I bought things like a case and RAM for it, used a spare NVMe and got it working. I use it as a build machine for countless projects.
Replaced my old Netgear router with an EspressoBin. I got that SBC from a friend, printed a 3D case and sorted out fresh firmware. Now it boots straight from SPI flash and reads the operating system from a microSD card.
I wrote one tweet and it was enough
to get extra AArch64 nodes for Opendev CI.
I got the “Mars 2020 Helicopter Contributor” badge on GitHub. As one of ~12000 developers.
First dose of COVID-19 vaccine.
May
Moved to my own flat. With help from my friends and a moving company. I still have a few unpacked boxes.
As my daughter had 3 free days in the middle of a week, we went for a nice trip through museums.
Spent some time designing a new structure for my home network. The EspressoBin got replaced with a very small x86-64 system (4 “igb” network interfaces). Finally reached 1 Gbps speed on my home link.
June
Organizing my new home. Looking for good office furniture ended with buying a VS 901 desk and organizer.
Second dose of COVID-19 vaccine.
July
Got two Sharp Zaurus palmtops from a friend: SL-5500 (collie) and SL-5600 (poodle). The first model was the device which brought me to working full time on FOSS.
My 13-year-old daughter asked me what “that huge phone with a printer” she saw in an anime was. It took me a moment or two to find out that it was a fax machine…
Crashed my car. No one got hurt. Insurance paid what they had to and I later
sold what was left of the car.
August
Vacations! Rented an Opel Astra combi and we went for a tour. Met some friends, visited new places (and some old ones). A good time.
Friends managed to organize a demoscene party in COVID-19 times. So I went to Katowice, Poland and spent a great time at Xenium 2021.
Upgraded my home wireless network from the 802.11n to the 802.11ax standard. Or WiFi 6 if you prefer the new names.
October
Left some projects maintained by Arm employees. Open source with a closed development process is not something I am a fan of. Especially when there is no serious review done on code contributions.
I got information that the moderators of the “Arm Software Developers” Discord were not fans of my comments there. I went through the history, dropped most of my posts and left the server. I was asked to come back some time later. I no longer take part in conversations there.
November
I was positive! Too bad that it was the result of a COVID-19 test. 2 weeks of isolation as a result. A boring time with flu-like symptoms. Friends handled my shopping needs.
Migrated from the Pocketbook Touch HD to an Onyx Boox Poke 3. Just to migrate to an Onyx Boox Nova 2 three weeks later. Reading books on a 7.8” screen is a whole different experience.
December
The Polish demoscene became official Polish cultural heritage.
Started playing with the configuration of my router to get a native IPv6/IPv4 setup working. It took far too many attempts and curses but I got it done (in 2022).
Third dose of COVID-19 vaccine (this time Moderna instead of Pfizer).
During the last few days I have had discussions about devices for reading electronic books. You know: e-book readers like the Amazon Kindle or Pocketbook. I am unable to count how many of them I have bought and which models. But I know which ones I used.
Very old times
A long, long time ago I had Palm M105 and then Sony Clie SJ30 palmtops. I used both to read some electronic books. But the small screen made it uncomfortable.
First try
In 2011 I wanted to check how it feels to read e-books on a proper e-ink device. I borrowed an Amazon Kindle Keyboard from one of my friends and it was good.
A month later I was in the USA for a Linaro Developer Summit and bought myself a Kindle Keyboard. And got the Kindle ‘no touch’ one too. The first one went on the shelf quickly, as I used the smaller one more often. I still have it — in storage now, as the battery finally gave up.
The number of books I bought and read in electronic form quickly passed the number of paper ones.
Upgrading Kindle
Amazon released an e-ink device with a touchscreen: the Kindle Touch. So I bought it on my next US visit (another Linaro conference). This time I also got a few devices for my friends (no customs == profit) and sold my ‘no touch’ one.
Then a Kindle Paperwhite on the next visit. The screen backlight was a huge step. Reading books on buses, planes and trains became comfortable.
I had one or two newer Kindle Paperwhite devices, but only for a short time.
E-Book subscription
In 2015 I gave the Kindle Paperwhite to my mother with some books on it. She enjoyed using the device but there was a problem with the selection of content…
At around the same time Legimi started their e-book subscription service. So I bought an Inkbook Obsidian from them and gave it to my mother as a Xmas gift. After some training she became a fan of both the service and e-books.
Goodbye Kindle, welcome Pocketbook
At some moment Legimi started testing a ‘one account, two readers’ offer. I decided that it was a good time to change devices. Sold my Kindle Paperwhite and bought a Pocketbook Touch Lux 3 instead.
The new device allowed me to use the subscription on two e-book readers == less money spent on buying books.
A few months later I had to buy another device, as my daughter took the Pocketbook from me and started using it. So I bought a Pocketbook Touch HD for myself. And another Legimi subscription ;D
This was also the first e-book reader where I started experimenting with software. Coolreader is a nice alternative to the original reading application. My favourite option was “ignore publisher formatting”, so each e-book looked the same. Of course only if it was properly done, which was not guaranteed.
Let’s go Android with Onyx
Time passed and I cancelled my Legimi subscription in the meantime. And the PB Touch Lux 3 one day decided not to refresh the screen. Like, at all. Dead.
So Mira got the Pocketbook Touch HD and I started looking for something new for myself.
Asked friends, did some research, watched countless review videos. And decided on an Onyx Boox Poke 3. Still a 6” 300 dpi screen, but with all the fancy backlight things and Android 10.
I had not even got used to it when I replaced it with an Onyx Boox Nova 2 from one of my friends. The 7.8” screen made a difference, especially with pre-formatted PDF files. FBReader works great on it and allows using OPDS catalogs directly from the device. And the number of configuration options beats everything I used before.
I miss one thing (compared to Kindle or Pocketbook) — there is no way to send
e-books via e-mail straight to the device.
Summary
During those ten years of using e-book readers I have read countless books. On a trip to San Diego I read over a thousand pages on the plane (one sci-fi book series was hard to put away).
Bought hundreds of books in electronic form. And just a few paper ones, only because they lacked any other form. I also feel unable to concentrate while reading paper ones — I have to hold them in a way so they do not close, etc…
Twenty-five years ago, in 1996, I failed the entrance exam for Computer Science studies. And went for Automation and Robotics instead. The next five years were funny, tough, hard and interesting. All at once, as it is with studies.
And I got access to the Internet there.
Beginning
One day, while walking through the university buildings, I went into a corridor I had not visited yet. There was a group of people sitting in front of some weird terminals. It turned out that those Hewlett Packard 2623A machines offered access to the Internet. All I had to do was knock on the system admins’ door, show some ID and pay 10 PLN per month.
Some time later I got access to a terminal. I landed in SunOS with basically no UNIX experience (I was an AmigaOS user at that time). Other users gave me some hints:
use “screen” as soon as you login
PINE is for email
Lynx is for web
Pico is your editor of choice
IRC is for chatting and you want Venom or Lice for it (popular ircii scripts)
use exit once you finish
And I started spending time there. The first weeks were tough — getting used to the text interface and remembering useful commands and hints. One of the most important was where to store extra files, as the account was 1.5 megabytes in size (with a warning after crossing 1 MB). Ah, those /var/tmp/ or /var/news/ etc. subdirectories with everyone-can-write access :D The other was how to transfer files to floppy disks.
Terminal knowledge
None of the terminals had a battery to keep configuration data. So often they were set up as 2400 bps ones (if you powered one on and kept hammering the Return key then SunOS finally appeared at that speed). One of the things to learn was how to reconfigure it to 9600 bps, which meant quite “comfortable” work.
Forget about using tools like Midnight Commander — it refreshed the screen too often, so most of the time you saw only the redrawing of characters.
Also, none of the keys outside of the alphanumeric part were usable.
When I was starting my fourth year, the university connected one dormitory to the Internet. Surprise, surprise: I lived there (it wasn’t the default one for my faculty).
During the vacations I earned some good money and bought a PCMCIA Ethernet card for my Amiga 1200. Oh, what a change it was! No more queues for the terminals. All those graphical tools and a graphical web browser! And no more wondering where to go to grab files from my shell account.
All that on a 14” VGA mono monitor at 720x480 resolution.
Commodore 128dcr can be online too
There were two of us in the room and only one computer. So the problem of access still existed.
One day I came back from visiting home and found out that my roommate had bought a Commodore 128DCR with an 80-column monitor. We wired up a serial cable and connected it to my Amiga. After fetching and sending some software over the wire we had a 9600 bps connection and an 80x25 text terminal ready for use.
Oh, those moments on IRC when we said that we were using an 8-bit Commodore ;D Too bad that we lack any photos from that time.
Personal website
I have had some kind of personal website basically forever. The first version was my Lynx bookmarks with a bunch of extra text. This was quite a popular type back then.
After my studies my website moved between servers. It landed on some free hosting, then on my friend’s server. Later on some paid space where I also had email in my own domain, and finally it moved to self-hosting in the cloud.
The oldest copy I found in the Web Archive is from July 2003. It was on a different domain than the current page uses. And you can still use it to get here (there are five domains pointing here).
I used several tools to maintain my page: a PHP wiki, my own code, and over 16.5 years ago I decided to give WordPress a try. After over a decade of using it I moved to Pelican and it will stay that way for now.
Future is in mobile
I remember a presentation of the Nokia 9210 Communicator at some event at the university. A device in your pocket with direct access to the Internet. It looked like the future.
Time passed and I met people with PalmOS devices. They used IrDA for data transfer with their cellphones. Then Bluetooth with newer models. Integrated GSM modems in the next ones…
Cellphones got the first versions of web browsers, mail apps… And all that moved faster and faster. We also started calling them smartphones.
Nowadays I have a multicore device in my pocket. With a screen resolution higher than many people’s monitors and more memory than the computers some of my friends use at work. And close to 4 terabytes of data allowance on my prepaid SIM card. Something unimaginable for the 20-years-younger me.
TensorFlow requires several other Python packages and some of them are not distributed as AArch64 binary wheels. For this we have a Python cache repository on the Linaro snapshots server.
So how to install TensorFlow (version 2.6.0 or 2.7.0):
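A sketch of the install call; the exact index URL on the Linaro snapshots server is my assumption here, so check the server for the proper path:

# the index URL below is an assumption - check snapshots.linaro.org for the real path
pip install --extra-index-url https://snapshots.linaro.org/ldcg/python-cache/ tensorflow-aarch64==2.7.0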
And it will be done. The package is renamed from “tensorflow-cpu” as there is a plan to upload the builds to PyPI. For older versions please check our snapshots server.
How do we build?
Jenkins runs a shell script which starts a container and installs Ansible, and then the rest of the build is done using it. Playbooks, roles and shell scripts are stored in the Jenkins jobs repository.
The process is quite simple — we choose which versions to build (from a selection of 1.5, 2.4, 2.5, 2.6 and git HEAD) and then Ansible loops over them. All dependencies’ versions are stored in a variables file.
The whole work is done inside a “manylinux2014” container to get a way of building for a wide selection of Python releases. The build covers versions from 3.6 to 3.9 (we plan to enable 3.10 when possible) in one run.
Build times
To compare the speed of the several systems I have available, I ran a build on each and compared the times to those on the Linaro CI machine.
Some details:

Machine name        Processor     Cores  Threads  Memory (GB)  Note
Oracle cloud A1     Altra         16     16       96           VM.Standard.A1.Flex
Linaro CI           ThunderX2     2x28   56       240          SMT disabled
SolidRun HoneyComb  LX2160        16     16       32
my work laptop      i7-8665U      4      8        32           SMT enabled
my desktop          Ryzen 5 3600  6      12       32           SMT enabled
Build time includes fetching files from the network. Python packages come from either PyPI or the Linaro Python cache repository.
Procedure
Install Docker, pull the manylinux2014 image, fetch the script from the Linaro CI job git repository, run it.
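A sketch of that procedure (the container image is the upstream manylinux2014 AArch64 one; the repository URL and script path are assumptions, so check the Linaro CI job definition):

# pull the upstream manylinux2014 image for AArch64
docker pull quay.io/pypa/manylinux2014_aarch64

# fetch the build script from the Linaro CI job repository (URL and path are assumptions)
git clone https://git.linaro.org/ci/job/configs.git

# run the build inside the container
docker run -it --rm -v "$PWD/configs:/work" \
    quay.io/pypa/manylinux2014_aarch64 bash /work/ldcg-python-manylinux-tensorflow/build.sh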
The Oracle cloud instance used an Ampere Altra CPU. I would not be surprised if it beat the ThunderX2 system when more cores were used (16 cores was the limit of the free tier).
I used my work laptop because it was available. I did not expect much from it. But in past benchmarks it was close to my previous desktop system.
And my desktop… It was quite a cheap solution 2 years ago.
Looks like price/performance is something where x86-64 is the king.