After using Btrfs on both my work and home machines, I’m switching back to ext4. I actually like Btrfs and I wish I could keep using it, but I had two major issues.
The first issue, and this alone was enough to make me switch back to ext4: Google Chrome’s startup time was almost 10 times slower. At first I didn’t realize Btrfs was causing this, but after some investigation it became clear how much fragmentation affects some applications on Btrfs. I actually switched my work machine to ext4 after I found I could no longer use my virtual machines, because the disk thrashed for hours at anything I did inside a VM (in the end they could not even boot anymore). Btrfs is not always slower, or at least not noticeably so, but the few scenarios that cause heavy fragmentation really do make it unusable for me.
The second one might just be a misunderstanding on my part, but I read manuals and wikis and could not find an answer. I started with one 150GB partition at work and kept another 150GB partition around in case I wanted to easily switch back to ext4. Then I wanted to test adding multiple devices to Btrfs, which it makes really easy, so I added the second partition to the first and ran a balance operation so data was distributed between them. What I did not realize is that by doing that it created a RAID 1 array out of my partitions. As far as I know, RAID 1 with two partitions on the same drive is just useless: it merely duplicates data between the partitions. After reading more manuals, I learned that you can control the RAID level at creation time, but I found no way to change it afterwards. Worst of all, I found no way to convert back to RAID 0, so I found no way to remove the second partition from the filesystem, which means I ended up with 300GB of space usable as 150GB.
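For anyone hitting the same wall: newer kernels and btrfs-progs gained balance “convert” filters that change the RAID profile after creation, which is exactly what I could not find at the time. A hedged sketch (the mount point /mnt and partition /dev/sdb1 are placeholders, and the filter syntax needs a reasonably recent kernel):

```
# Convert data back to a non-duplicated profile (metadata kept duplicated):
sudo btrfs balance start -dconvert=single -mconvert=dup /mnt
# Once nothing requires two devices, the second partition can be removed:
sudo btrfs device delete /dev/sdb1 /mnt
```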
Another issue I had: it was not always obvious how much free disk space I had. Some tools indicated 300GB, others 150GB. And on my home machine I even got an out-of-space error while it indicated I had 6GB free.
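Part of the mismatch is that generic tools report raw device capacity, while the RAID profile halves what is actually usable. Btrfs ships its own reporting command that breaks usage down per allocation profile (a sketch; /mnt is a placeholder mount point):

```
df -h /mnt                      # generic view: raw capacity, misleading under RAID 1
sudo btrfs filesystem df /mnt   # Btrfs view: data, metadata and system allocation
```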
Despite its powerful features, I could not justify the problems I had. Actually, I was not using snapshots and subvolumes as much as I expected. I believe this may change as more tools are written to take advantage of these features.
I knew well before I started using it that Btrfs is not production ready or even finished yet. It is definitely a step in the right direction and I hope I can try it again some time from now. It even managed not to lose any of my data despite me poking at things I did not understand.
I just got back from Santa Cruz de la Sierra, Bolivia, where I was attending CONSOL 2010, the Congreso de Software Libre of Bolivia. I had an amazing time over there and met some really amazing people. This is a thank-you note for everyone there, for welcoming me and for trying to understand my lousy Spanish as I talked about Google Summer of Code and Gnome.
It amazes me every day how far Gnome and my Summer of Code experience are taking me. I’m doing an internship at an open-source company in Bolivia simply because I had Gnome on my résumé and it caught someone’s attention. And I am learning so much, both from my work here and from living in such an incredible country as Bolivia, where the diversity and contrast are amazing even for someone coming from Brazil. And then there is the motivation I saw there: not only because free software makes a lot of sense in the poorest country of South America, but because of their passion for hacking on things, learning new stuff and making something useful out of all of it.
So, congrats to these guys there and thank you for everything (especially Amos Batto and Hardy Beltran for inviting me).
On a side note, Rhythmbox 0.13.2, released a few weeks ago, contains my first real contribution to Gnome: my Google Summer of Code project from this year. So if you have an iPhone or Android, enable the DAAP plugin together with the Remote switch and enjoy controlling Rhythmbox from anywhere in your home! Thanks to W. Michael Petullo, Jonathan Matthew and Peter; without their help this would never have been released.
I’m doing some reverse-engineering stuff and it has been quite fun so far (hopefully I’ll blog more about why I’m doing this in the future). I needed to dump some HTTP traffic and analyse the data. Of course, Wireshark comes straight to mind for something like this and it is indeed really useful. It took me some time to understand the Wireshark interface and I still think it’s hiding some great functionality from me. But anyway, I was able to set the filters I wanted and it was showing me exactly the data I wanted. But I still had to right-click the data I wanted and save it to disk, which was not ideal.
Then I thought: if people were smart enough to build such a powerful tool, they probably created a command-line interface as well, probably with scripting. Indeed they did! The command-line interface is called Tshark and the scripting is done in Lua. But I don’t know Lua and it would take too much time to learn it for this task. So I started to look for a way to dump everything and then write a small script in Python to extract the data I really want. It took some time, but the solution was much simpler than I thought (by the way, there are probably other solutions for this, but my Google skills were not good enough to find anything obvious).
First you run Tshark to dump all HTTP traffic to an XML file (I usually hate XML, but this time it was useful). This is what I used:
sudo tshark -i wlan0 "host 192.168.1.100 and port 45000" -d tcp.port==45000,http -T pdml > dump.xml
Of course, it all depends on what you want to dump. You should read “man pcap-filter” to get the capture filter right; it is really useful (crucial, sometimes) to capture only the traffic you want. I wanted to treat traffic on port 45000 as HTTP, which is what the “-d tcp.port==45000,http” switch does. The most important part is “-T pdml”, which tells Tshark to dump the packets in PDML, an XML format.
Next thing is to analyze it in Python, which was much easier than I thought. I only cared about the data field in the HTTP packets, but if you take a look at the dumped file, you’ll see it has information about all kinds of things. My script turned out to be this:
from lxml import etree
import binascii

tree = etree.parse('dump.xml')
data = [binascii.unhexlify(e.get("value"))
        for e in tree.xpath('/pdml/packet/proto[@name="http"]/field[@name="data"]')]
I used lxml because I found it has great support for XPath, which is quite useful here. Also, the HTTP data is stored as a hex string, which you can easily convert with unhexlify. So, in the end I was able to automate an annoying process with just a few lines of code. And if I need anything else, it’s quite easy to expand the script. I’m quite happy with the results!
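To show the shape of what the script consumes, here is a tiny self-contained version run against an inline PDML-like snippet. The snippet is a simplified stand-in for real Tshark output, and I’ve used the standard library’s ElementTree here instead of lxml so it runs anywhere; the XPath in my real script needs lxml’s fuller support.

```python
# Simplified stand-in for a Tshark PDML dump (real dumps carry many more
# protos and fields); "48656c6c6f" is hex for the bytes b"Hello".
from xml.etree import ElementTree
import binascii

pdml = """\
<pdml>
  <packet>
    <proto name="http">
      <field name="data" value="48656c6c6f"/>
    </proto>
  </packet>
</pdml>"""

root = ElementTree.fromstring(pdml)
# Same extraction as the lxml script above, via ElementTree's XPath subset:
data = [binascii.unhexlify(e.get("value"))
        for e in root.findall('packet/proto[@name="http"]/field[@name="data"]')]
print(data)  # [b'Hello']
```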
Update: someone pointed out in the comments Scapy (http://www.secdev.org/projects/scapy/), which, from reading its documentation, seems awesome!
A recent post on Planet Gnome about moving away from Arch to Ubuntu got me thinking, because I just did the same thing a few weeks ago, when Ubuntu 10.10 was released. But I didn’t really like the reasons I had for doing so.
First, I love Arch Linux. Its simplicity and speed are amazing. It’s clearly focused on power users, which is great for me. Its package manager (Pacman) is very fast and powerful, while still easy to use. I love how I can search and query packages, both installed and from the repositories, with concise commands that usually do what you want on the first try. Compared to Ubuntu’s apt-cache and apt-get, where I usually have to read the man page to remember a few commands, and to Fedora’s yum, which I’m never comfortable with, Pacman is always the winner. If that were not enough, you can create useful packages in under 10 minutes. Better yet if you can find a pre-made PKGBUILD in AUR, which contains thousands of recipes to build packages. The binary packages are usually enough for a desktop, but sometimes you do need to dig into Yaourt, which automatically downloads and compiles recipes. It is time-consuming sometimes, but compared to finding a PPA with a decent enough version of a package you can’t find in Ubuntu, it’s not much different.
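A rough picture of what I mean by concise (the package name is just an example):

```
pacman -Ss rhythmbox   # search the repositories
pacman -Qs rhythmbox   # search installed packages
pacman -Qi rhythmbox   # show details of an installed package
pacman -Ql rhythmbox   # list the files a package installed
```

One flag picks the database (S for sync repositories, Q for the local one), the next picks the action, and that pattern covers almost everything I ever need.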
That leads to the first reason I switched away from Arch: Ubuntu usually has recent enough versions when it’s released. But six months afterwards, I really want to try new versions of packages. If I can find a good and well-maintained PPA, then it’s OK. If I can’t, there are a thousand other things I would rather do than create my own packages for new upstream releases. So, at this time of the year, a few weeks after Ubuntu was released, I can actually enjoy it for a while (although Rhythmbox just came out with a new version, which I will probably never get on this version of Ubuntu).
The second reason, and one that is a hot topic right now (though this is mostly a user’s perspective), is that I got locked into the Gnome modifications that Ubuntu made. Honestly, I like the modifications: I like the Indicator Applet, I’m using the Indicator Applet Menu (despite a few bugs) and I love Notify OSD. And I miss those on Arch. I tried building some components of the Ayatana Project on Arch, but didn’t have much luck (understandably). The Arch way is to use upstream with as few modifications as possible, which is usually very good. You may hit a few bugs that are already fixed on other distros, but you get releases faster and you get to see what upstream really looks like. And that is a big issue with Ubuntu, because no matter how much they talk about not forking Gnome, it’s just not upstream anymore. If I moved from Arch to Fedora or openSUSE, I would get a very similar experience. Not because they don’t improve Gnome or add their own modifications, but because all their modifications go upstream. So even on Arch, I can enjoy all the great investment those guys made in Gnome. But I can’t use Ubuntu’s investment outside Ubuntu (I could, actually, if I spent enough time porting it, but it’s just not worth it).
And now I’m locked in: I’m locked to the Ubuntu OS in order to use Ubuntu’s software, both of which are actually very high quality, but I’d prefer a different OS. Some other OS vendors use the same tactics to get more users, and I definitely don’t want to use their software, no matter how great it is. Of course Ubuntu is miles away from those vendors, but it’s walking a similar and very dangerous path. Again, this is only a user’s perspective, and a user Ubuntu is not focused on (and I’m glad they have a very strong focus on other users). I just wish Ubuntu would give a bit more back to the community it takes so much from.
I’m posting this here both to help someone else looking for this and to check if I got everything right.
I needed to use enums in a GObject property, so I needed an enum type for my property’s param spec. I thought I could hard-code it somehow, but after a long time pondering (actually banging my head against the wall) I decided to integrate glib-mkenums into my autotools build so that it could generate the types for me automatically from my C sources when running make. Unfortunately for me, Google wasn’t being friendly when searching for information on glib-mkenums.
I found, somewhere, a program that used glib-mkenums in a simple way (sorry, I forgot where I found it), close to what I had in mind, so I decided to adapt it to my needs. What I had to do was add two more automatically generated files (in this case named dmap-enums.c and dmap-enums.h) and add some rules to my Makefile.am (linked to Gitorious because WordPress removes all formatting). Hopefully that is the right way to do it; at least it is working for me.
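For reference, the usual shape of those Makefile.am rules looks something like this. This is a sketch: the output names match the dmap-enums.[ch] mentioned above, but the header list (dmap-structure.h), the DMAP prefix and the exact templates are assumptions you would adapt to your own project. The @enum_name@, @ENUMSHORT@, @VALUENAME@ and friends are glib-mkenums substitution variables.

```make
# Headers containing the enum definitions (placeholder name):
ENUM_HEADERS = dmap-structure.h

dmap-enums.h: $(ENUM_HEADERS)
	glib-mkenums \
	  --fhead "#ifndef __DMAP_ENUMS_H__\n#define __DMAP_ENUMS_H__\n#include <glib-object.h>\nG_BEGIN_DECLS\n" \
	  --vhead "GType @enum_name@_get_type (void);\n#define DMAP_TYPE_@ENUMSHORT@ (@enum_name@_get_type ())\n" \
	  --ftail "G_END_DECLS\n#endif\n" \
	  $(ENUM_HEADERS) > $@

dmap-enums.c: $(ENUM_HEADERS) dmap-enums.h
	glib-mkenums \
	  --fhead "#include \"dmap-enums.h\"\n#include \"dmap-structure.h\"\n" \
	  --vhead "GType\n@enum_name@_get_type (void)\n{\n  static GType etype = 0;\n  if (etype == 0) {\n    static const G@Type@Value values[] = {\n" \
	  --vprod "      { @VALUENAME@, \"@VALUENAME@\", \"@valuenick@\" },\n" \
	  --vtail "      { 0, NULL, NULL }\n    };\n    etype = g_@type@_register_static (\"@EnumName@\", values);\n  }\n  return etype;\n}\n" \
	  $(ENUM_HEADERS) > $@

# Make sure the generated files are built first and cleaned up:
BUILT_SOURCES = dmap-enums.c dmap-enums.h
CLEANFILES = $(BUILT_SOURCES)
```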
I just went from week 2 to week 11 in the GSoC progress on my blog. Well, there is not much to tell in a blog post if there is no picture to show (and showing off the iPhone remote working with Rhythmbox should look no different from iTunes if I’m doing my job correctly).
I realized these last weeks that I had completely mis-planned my project, because I had no idea what I was getting into. I thought DACP would be quite easy to implement and I would focus on other things (making a client library, for instance). But I discovered it is a much more complex protocol than I thought, mostly because of DAAP.
What I discovered is that DACP is just an extension for DAAP, and being an Apple protocol, it’s closed and has been reverse-engineered and re-implemented several times in the open source world. The problem is that DACP uses several features in DAAP that were not implemented in libdmapsharing, simply because there is no real standard on DAAP. It took some time for me to learn DAAP enough to find out what was missing in libdmapsharing for DACP to work.
So I spent most of the time fixing, tweaking and implementing DAAP features in libdmapsharing. Which was pretty cool: I improved my C skills a lot, learned GObject (and, quite frankly, liked it a lot) and learned a lot about DAAP and libdmapsharing.
I like the Indicator Applet and I like Cryptkeeper, so I decided to create an indicator for Cryptkeeper:
For anyone who doesn’t know, Cryptkeeper is a very useful application that lets you mount and unmount an encrypted folder with just one click. It’s one of the most useful applications in my startup list. But up until now it had a quite ugly icon:
Can you spot the difference?
This is actually a plot to make people like it and finish it. The patch adds support for showing folders and letting you mount/unmount them, but it doesn’t let you delete folders or view their information, as the original did. Also, when you add a new folder, it doesn’t show up in the list until you restart Cryptkeeper. But the patch does what I wanted (I rarely create or delete an encrypted folder), so I probably won’t change it further.
So, the patch is here. Have fun!