Git is now ten years old. More and more developers get lost when they have to deal with CVS or Subversion, because the first SCM they learned was Git. But in daily work I see many people limited to very basic use of it ;(
There are a lot of commands and external plugins for Git. I do not want to cover them all, but rather concentrate on the ones installed as part of the git package, and only those which I think EVERY developer using Git should know exist and know how to use.
Dealing with another repository is covered by an easy set: “pull” to merge changes (“fetch” if you only want to have them locally), “push” to send them out. “git remote” is useful too.
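A complete scratch session (all repository names and identities here are made up) showing that set in action:

```shell
# Create a throwaway "remote" and a clone of it, then sync both ways.
cd "$(mktemp -d)"
git init --bare origin.git             # stand-in for a server-side repository
git clone origin.git work
cd work
git config user.email you@example.com  # identity needed for committing
git config user.name "You"
echo hello > file.txt
git add file.txt
git commit -m "first commit"
git push origin "$(git symbolic-ref --short HEAD)"  # send commits out
git fetch origin                       # grab remote changes without merging
git remote -v                          # list configured remotes and URLs
```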
Branching is easy, and there are a lot of articles on how to do it. Basically “git branch” to see which one you are on, “git branch -a” to check which are available, and “git checkout” to switch to one.
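A minimal branch session, again in a throwaway repository with invented names:

```shell
# Branch basics: list, create, switch.
cd "$(mktemp -d)"
git init demo && cd demo
git config user.email you@example.com
git config user.name "You"
echo base > file.txt
git add file.txt && git commit -m "initial"
git branch                   # '*' marks the branch you are on
git branch -a                # also list remote-tracking branches
git checkout -b my-feature   # create a branch and switch to it in one step
echo change >> file.txt
git commit -am "work on feature"
git checkout -               # jump back to the previous branch
```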
Checking changes is the next step. “git diff” with all its variants: checking local uncommitted changes against the local repository, comparing against other branches, checking differences between branches, etc. “git log -p” to see what was changed in earlier commits.
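A quick sketch of those diff variants in a scratch repository (file names invented):

```shell
# Inspecting changes: uncommitted diff, revision diff, history with patches.
cd "$(mktemp -d)"
git init demo && cd demo
git config user.email you@example.com
git config user.name "You"
printf 'one\n' > file.txt
git add file.txt && git commit -m "first"
printf 'two\n' >> file.txt
git diff                   # uncommitted changes against the last commit
git add file.txt && git commit -m "second"
git log -p -1              # last commit together with its diff
git diff HEAD~1 HEAD       # differences between two revisions (or branches)
```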
Then comes “status” to see which local files are changed/added/removed and need attention. And “add”, “rm” and finally “commit” to get all of them sorted out.
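The everyday status/add/rm/commit loop, as a runnable sketch (names invented):

```shell
# Stage a new file, remove an old one, commit both in one go.
cd "$(mktemp -d)"
git init demo && cd demo
git config user.email you@example.com
git config user.name "You"
echo old > old.txt
git add old.txt && git commit -m "start"
echo new > new.txt           # a brand new file
git status                   # shows new.txt as untracked
git add new.txt              # stage the new file
git rm old.txt               # remove a tracked file and stage the removal
git status                   # both changes now listed as staged
git commit -m "replace old.txt with new.txt"
```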
A lot of people stop here. The problem appears when they get patches…
So how do you deal with patches in the Git world? You can of course do “patch -p1 <some.patch” and take care of adding/removing files and committing yourself. But Git has a way for that too.
To generate a patch you can use “git diff” and store the output in a file. But this will lack author information and a description. So it is better to commit your changes and then use “git format-patch” to export what you did into a file. Such a file can be attached to a bug tracker, sent by email, put online, etc. Importing it is simple: “git am some.patch”, and if it applies, it lands in history just as if you had made the commit locally.
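The whole round trip can be sketched like this (repository layout and names are invented for the example):

```shell
# Export a commit as a mail-formatted patch, then import it elsewhere.
cd "$(mktemp -d)"
git init src && cd src
git config user.email author@example.com
git config user.name "Patch Author"
echo one > file.txt && git add file.txt && git commit -m "first"
echo two >> file.txt && git commit -am "add second line"
git format-patch -1 HEAD     # writes 0001-add-second-line.patch
cd ..
git clone src dst && cd dst
git config user.email you@example.com
git config user.name "You"
git reset --hard HEAD~1      # pretend we never had the second commit
git am ../src/0001-*.patch   # apply it: author and message are preserved
git log -1 --format='%an: %s'
```

Note how “git am” keeps the original author, not the person who imported the patch.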
There are probably other ways too: Quilt, stgit, etc. But this one uses only basic Git commands.
And I still remember the days when I thought that Git and I did not match ;D
When the GNU project was announced over 30 years ago, it was something great. But time passed, and I have a feeling that it is more and more about politics instead of coding.
I do builds. For over 10 years now. It was ARM all the time, with some bits of AVR32, MIPS and x86. During the last two years it has been nearly 100% AArch64. And during recent months my dislike of the GNU project has been growing.
Why? Several reasons.
There are a lot of articles about “how to write good commit messages”. I can tell you where to look for bad ones: gcc, binutils, glibc, the base of most GNU/Linux distributions. All of them can be fetched from Git repositories, but they look like they are still deep in the CVS era. Want to find out what was changed? You will get it from the commit message. Why it was changed? Forget it.
Do you know what IceCat is? Or GNUzilla? Let me quote the official homepage:
GNUzilla is the GNU version of the Mozilla suite, and GNU IceCat is the GNU version of the Firefox browser. Its main advantage is an ethical one: it is entirely free software. While the Firefox source code from the Mozilla project is free software, they distribute and recommend non-free software as plug-ins and addons.
Where is the source? Somewhere in GNU CVS, probably. I failed to find it. OK, there is a link to something which is probably a source tarball, but this is the 21st century: developers take source control systems for granted.
Of course IceCat fails to build on AArch64. Why? Because it is based on version 24 of Firefox, already obsoleted by upstream. Support for the 64-bit ARM platform was merged around Firefox 30 and completed in version 31. Sure, I could dig up patches for the IceCat version, but no. This time I refuse.
I do not know; maybe the GNU project needs some fresh blood to make it more developer friendly?
Each time I have to use Bazaar I feel sick. It takes hours to do things which should not take more than a few minutes. Let’s start with today’s problem and compare it to Git.
I finally got armel-cross-toolchain-base to build with non-sysrooted binutils. It required adding one patch and building binutils twice (once with a sysroot for use during the build, a second time without one to put into the Ubuntu archive). After all that work I had 2 files to add, 3 hunks of changes in debian/rules, and that’s all.
So I used “bzr add” on the new files and “bzr commit -i” to select which hunks go into which commit, and repeated that until all changes had landed in the local branch. But then I discovered that I had forgotten to add the 1.51 version entry to the debian/changelog file before all that work. So I did it then, and the next commit added the 1.52 version.
And this is when the whole “fun” started… I wanted to move r131 (the one with the 1.51 changelog entry) to sit before r128. With Git it is just a matter of “git rebase -i” and the whole job is done. But this was bzr…
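For anyone who has not tried it: reordering with “git rebase -i” looks like this. Normally you just move lines around in your editor; here the edit is scripted (GNU sed assumed, history invented) so the example runs unattended:

```shell
# Build three commits, then swap the order of the last two via rebase -i.
cd "$(mktemp -d)"
git init demo && cd demo
git config user.email you@example.com
git config user.name "You"
for n in 1 2 3; do
    echo "$n" > "file$n"
    git add "file$n"
    git commit -m "commit $n"
done
# The scripted "editor" swaps the first two "pick" lines of the todo list;
# interactively you would simply reorder them by hand.
GIT_SEQUENCE_EDITOR="sed -i -e '1{h;d}' -e '2{G}'" git rebase -i HEAD~2
git log --format=%s    # commit 2 is now on top, commit 3 below it
```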
The “bzr rebase” command (part of the “bzr-rewrite” package) supports only rebasing on top of another branch. That was still useful, as my branch was a bit out of sync with the upstream one. So I started asking questions on the #linaro channel, as Zygmunt was there and his knowledge of Bazaar had been a big help in the past. There were a few ideas, and I did some reading in the meantime.
One solution was “use pipelines”, and you know what? They even work. But then I needed to cherry-pick one commit (the r131 one). The only way I found was “bzr merge -r130..131 :first”, which gave me all the changes but not the changeset. Right, Git users: you have to commit after a merge (and use “bzr log” to find the commit message). And when you try “bzr merge” for a few revisions, they get squashed… So in the end I merged changeset by changeset and committed with the original commit messages. Good that there were just a few of them.
So after far more than the ten minutes it would have taken with “git rebase -i”, I finally got the branch ready to be pushed. Next time I will spend that time checking what is wrong with the git-bzr-ng plugin, as that would give me working two-way handling of Bazaar repositories without touching the “bzr” command.
I am using a Dell D400 laptop as my 32-bit test machine and during conferences. It has a Pentium-M CPU and an Intel 855GM/ICH4 chipset. And this is where the problem starts…
As I like to use the text console on it, I wanted to get an XGA (1024x768) framebuffer. So the first try was “use intelfb”. But “video=intelfb:mode=1024x768-32@60” and “modprobe intelfb mode=1024x768-32@60” result in the same message:
[ 1760.280291] intelfb: Framebuffer driver for Intel(R) 830M/845G/852GM/855GM/865G/915G/915GM/945G/945GM/945GME/965G/965GM chipsets
[ 1760.280368] intelfb: Version 0.9.6
[ 1760.280471] intelfb: 00:02.0: Intel(R) 855GM, aperture size 128MB, stolen memory 892kB
[ 1760.289927] intelfb: Non-CRT device is enabled ( LVDS port ). Disabling mode switching.
[ 1760.290251] intelfb: Video mode must be programmed at boot time.
The solution would be to give the kernel a “vga=792” argument, but that is not possible here, as the kernel thinks that 800x600-8 is the highest available mode.
OK, I can use X11 and a terminal there; this works fine. But why did the kernel get so broken? Probably Intel developers do not test their changes on anything older than the i945 chipsets (and even those only because they are used in Atom-based devices).
Looks like I need to do a “git bisect” to check when it broke and then create some hack to get the proper behaviour…
Recently I started merging interesting stuff from OpenedHand’s Poky into OpenEmbedded. Now, with OE using Git as storage for its metadata, it is much, much easier than it was in the Monotone times.
I have over 3 thousand revisions exported from Poky (using “git format-patch”), and I am reviewing them and adapting them for OE. Useful ones are run through a simple shell script which adapts paths, changes author information (I use Poky via git-svn, so there are no real names/emails) and adds a “(from Poky)” note to the end of the patch description. Then just “git am” and the patch lands in OpenEmbedded.
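The script itself is not shown here, but a hypothetical sketch of that adaptation step (author names, addresses and patch content are all invented; GNU sed assumed) could look like:

```shell
# Rewrite the placeholder author that git-svn produced and append
# "(from Poky)" to the end of each patch description.
cd "$(mktemp -d)"
cat > 0001-example.patch <<'EOF'
From 1234567 Mon Sep 17 00:00:00 2001
From: nobody <nobody@example.com>
Date: Mon, 1 Dec 2008 12:00:00 +0000
Subject: [PATCH] qemu: update to working ARMv6/v7 version

Some description of the change.
---
EOF
for p in 0*.patch; do
    sed -i \
        -e 's/^From: nobody <nobody@example.com>$/From: Real Name <me@example.com>/' \
        -e '0,/^---$/s/^---$/(from Poky)\n---/' \
        "$p"
done
```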
So far I have added newer APT and DPKG package tools, a newer QEMU (not the latest, but one working with ARMv6/v7 instructions), a U-Boot mkimage tool which does not need lots of Openmoko patches (just one), Shared MIME Info which does not need any processing on the target device, and some tweaks here and there.
Next in the queue are cleaned-up Maemo4 recipes (the Diablo ones), binary locales for Angstrom-powered ARMv6/v7 machines, and miscellaneous tweaks and updates.
Long time, no post. I wonder whether anyone wondered what happened 🙂
Job situation change
15th October was my last day of work for OpenedHand (which was acquired by Intel two months earlier). Since then I am free to work for anyone, and I have a few offers of cooperation under discussion. I will still have my own company (HaeRWu), but I will probably change its name, as it is very hard for English speakers to pronounce.
I have to admit that I will miss the atmosphere of OpenedHand. That company had great people with many ideas; there were lots of interesting projects (sane and insane ones), interesting hardware which no one even knew existed, etc. I hope that our roads will cross one day and we will meet at some conference or in some project.
OpenEmbedded switched to GIT from Monotone
After a long trial period we finally switched to Git. I hope that the Monotone guys will not be sad (we were one of the biggest projects according to their wiki), but that system was too slow to handle our metadata.
I was using Git during my work on Poky (via git-svn). It really changes the way people work. OK, Monotone also supports local branches, but it is too slow compared to Git.
In the near future I plan to merge some interesting stuff from Poky into OpenEmbedded, for example updates to the Maemo libraries and applications.
My profile on LinkedIn is updated. I got connected to companies which I worked with during my OpenedHand times, got some recommendations, etc. BTW, I am in the “Szczecin Area, Poland”, not the “Lublin Area, Poland” (as listed in my LI profile), but due to a bug in the system I cannot fix it ;(
After a long time the Ohloh service managed to handle aliases in the OpenEmbedded project, so I can claim all my commits in the OE repository as mine. The result can be seen in my profile there.
Due to a recent discussion on the OpenEmbedded mailing list I decided to give Git a second chance (the first one was a few months ago).
I imported Poky using the git-svn tool and started hacking. The first piece of work was switching to OPKG (described in another post). I created a branch for it and changed bit after bit; the result was a patchset with 17 patches. I pushed them into the official Subversion repository in a slightly different order and as a few fewer revisions. After that I dropped the branch, as it was not needed any more.
Next came creating a few branches for local hacks. Merging branches is easy when there are no conflicts; otherwise it requires calling “git mergetool FILE” manually (instead of it being called automatically). Cherry-picking works very nicely, and “rebasing” branches recognizes such revisions.
A nasty thing is that every change has to be committed before switching branches, as there is only one “working copy” at a time (unlike CVS, Subversion or Monotone, where you have a “working copy” per branch).
What do I feel about GIT now? I started to like it.
For both of my desktop machines I use git kernels, and from time to time I add some extra patches to get something experimental to test. By default I use “quilt” to manage the patches, so my usual kernel session looks like:
quilt pop -a
quilt push -a
And as a result I have an updated kernel with all my patches applied. If one of them does not apply, I usually update it by hand and call “quilt refresh”, or search for a newer version of the patch.
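For context, quilt keeps its state in a patches/ directory inside the tree it manages; a minimal layout (file names invented) looks roughly like:

```text
linux-2.6/                               # tree that quilt push/pop operate on
linux-2.6/patches/series                 # ordered list of patch file names
linux-2.6/patches/experimental-fix.patch
```

“quilt pop -a” unapplies everything listed in series and “quilt push -a” reapplies it, so the tree can be updated cleanly in between.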
Today I decided to make another attempt at using just Git for managing my patched kernel tree, instead of Git + quilt. And I failed 🙁
I cannot understand why Git developers say that they hate CVS but follow its way when it comes to merging… If any operation ends in merge conflicts, all you get is a file with CVS-style conflict markers inside. You need to call a merge tool by hand, resolve the problem, add the files back to the repository (do not ask me why adding files already known to the SCM is needed) and tell Git that you resolved the conflict. Even CVS or Subversion does not work that way…
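For the record, the dance Git expects goes like this (scratch repository, invented names); the “git add” step is exactly the part described above:

```shell
# Provoke a conflict, fix the file by hand, mark it resolved, commit.
cd "$(mktemp -d)"
git init demo && cd demo
git config user.email you@example.com
git config user.name "You"
echo original > file.txt
git add file.txt && git commit -m "base"
git checkout -b side
echo side-change > file.txt && git commit -am "side edit"
git checkout -
echo main-change > file.txt && git commit -am "main edit"
git merge side || true      # fails, leaving <<<<<<< markers in file.txt
echo merged > file.txt      # resolve by hand (or run: git mergetool file.txt)
git add file.txt            # yes, "add" is how you say "resolved"
git commit -m "merge side"  # record the merge commit
```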
I like the way it works with Monotone: if there is a conflict during an update (the “git pull” equivalent), the merge tool is called (kdiff3 on my system) and the user has to resolve all conflicts before Monotone moves on to the next step. The whole merge is then stored as another revision (with Git it can then be removed during
Maybe one day I will find a way to get familiar with Git, but that day is not today…