Alexandre Rosenfeld's blog

A hacker and his journey.

Laravel 5 and FileMaker — June 30, 2015

A few weeks ago I was searching Google for exactly the two words in the title, trying to find out how to connect a Laravel 5 web application to a FileMaker database, and I couldn't find anything at all. Not really surprising, since that is one of the last things you would want to do.

So, why did we want to do that in the first place? Well, we have a client with a legacy application running on FileMaker, and maintaining it was becoming a huge burden, so they wanted to migrate to something else. The server to host FileMaker is expensive (let alone that it has to run on a Mac or Windows), and it's almost impossible to add new features.

While I couldn't find a Laravel connector for FileMaker, I did find a PHP library for connecting to FileMaker called SimpleFM. Adding a provider for it in Laravel was pretty easy (I also hooked it into the .env configuration introduced in Laravel 5 and into the database.php configuration).

There are a bunch of ways you could implement this; I just wanted an easy way to get a FileMaker connection.

I also wanted to see information about the available layouts, since I prefer to avoid logging in to FileMaker at all costs. So I wrote a command to display all layout names in a database, or column information for a specific layout:

Please note: I’m using a Laravel 5.1 feature to describe the command as a signature, instead of the obscure and error-prone getOptions and getArguments.

Use it like this:

./artisan fm:show
Getting layout names
• Layout1
• Layout2
...

And you get a pretty table in response when you ask about a specific layout:

./artisan fm:show REPORTS
Getting info for REPORTS
+-------+-------+-------+
| index | recid | modid |
+-------+-------+-------+
| 0     | 77    | 0     |
+-------+-------+-------+

Take a look at the FileMaker docs for more commands and then you’re ready to do whatever you want with FileMaker inside your Laravel 5 application.

Happy hacking!

GNOME 3.0 and Fedora 15 — April 12, 2011

Last Sunday was FLISOL, which I was able to partially attend here in La Paz. It was a good event, though I wasn't able to see all the talks. Unfortunately I didn't have much time to convince people to use GNOME 3, but I did a short 15-minute demonstration of GNOME 3.0 running on the Fedora livecd. Most questions were about how to use it in Ubuntu, so I had some bad news for them.

As a user of GNOME, I need to say thank you to everyone involved in the GNOME 3.0 release. I started using it only last week and I fell in love immediately. It's not only beautiful and great to use, it's also inspiring. It's great to see the direction GNOME is taking and what we can build on.

I also have to say congratulations to the Fedora guys. I installed Fedora 15 Alpha and then updated to the latest packages. Even with the Alpha packages it was rock solid. Sure enough, after using it for my day-to-day work for a week I had some crashes, but it's amazingly stable. And it has some cool features as well (I love the systemd idea). So at least for me, Fedora has everything it takes to be the best distro around this year.

First impressions with Gnome 3 — March 25, 2011

I tried Gnome Shell a few months ago, but I had so many issues I didn't actually experience anything. Yesterday I downloaded the Gnome 3 livecd, but it seems my Radeon doesn't work with it at all (all I see are some random dots on the screen, even with safe mode on). Since I still wanted to test Gnome 3, I updated an Arch Linux installation I had to Gnome 3 (it took a long time to download, since I had tons of stuff to update). So I was finally able to actually test Gnome 3 today. And I was quite impressed by two things.

One, it crashed a lot for me. I don't know if it's a Gnome 3 issue or an Arch Linux issue, but it crashed every five minutes. I couldn't change the wallpaper without it crashing. And most of the time it wouldn't log in, telling me something went wrong and that I had to log out. I even added a new user to make sure the old settings were not causing problems, but it was the same.

I do hope these things are eventually fixed, because of my second point: I loved it! It's amazingly beautiful. I loved the animations, and I loved details like how items glow when you hover over them. I loved the menus on the top right and how I could press the Super key to get the overview mode. The new control center is beautiful and it just feels right.

I tried Unity last week and, in my personal opinion, the Gnome 3 experience is far better than anything Ubuntu can offer right now.

Back to Ext4 from Btrfs — December 27, 2010

After using Btrfs on both my work and home machines, I’m switching back to ext4. I actually like Btrfs and I wish I could keep using it, but I had two major issues.

The first one, and this alone was enough for me to switch back to ext4: Google Chrome's startup time was almost 10 times slower. At first I didn't realize Btrfs was causing this, but after some investigation, it's noticeable how fragmentation affects some applications on Btrfs. I actually switched my work machine to ext4 after I found I could no longer use my virtual machines, because the disk was read for hours no matter what I did in the virtual machine (in the end they could not even boot anymore). Btrfs is not always slower, or at least not noticeably, but the few scenarios that cause fragmentation really do make it unusable for me.

The second one might just be a misunderstanding on my part, but I read manuals and wikis and could not find an answer. I started with one 150GB partition at work and kept another 150GB partition in case I wanted to easily switch back to ext4. Then I wanted to test adding multiple devices to Btrfs, which it makes really easy, so I added the second partition to the first and ran a balance operation so data was distributed between them. What I did not realize is that by doing this it created a RAID 1 array across my partitions. As far as I know, RAID 1 across two partitions on the same drive is just useless: it merely duplicates data between the two partitions. After reading more manuals, I learned that you can control the RAID level at creation time, but I found no way to change it afterwards. And worst of all, I found no way to convert back to RAID 0, so I found no way to remove the second partition from the filesystem, which means I ended up with 300GB of space usable as only 150GB.

And another issue I had: it was not always obvious how much disk space was free. In some places it indicated 300GB, in others 150GB. And on my home machine I even got an out-of-space error when it indicated I had 6GB free.

Despite its powerful features, I could not justify the problems I had. Actually, I was not using snapshots and subvolumes as much as I expected. I believe this may change as more tools are written to take advantage of these features.

I knew well before I started using it that it's not production-ready or even finished yet. Btrfs is definitely a step in the right direction and I hope I can try it again in some time. It even managed not to lose any of my data, despite me poking at stuff I did not understand.

CONSOL 2010 — December 14, 2010

I just got back from Santa Cruz de la Sierra, Bolivia, where I was attending CONSOL 2010, the Congreso de Software Libre of Bolivia. I had an amazing time over there and met some really amazing people. This is a thank-you note to everyone there, for welcoming me and for trying to understand my lousy Spanish as I talked about Google Summer of Code and Gnome.

It amazes me every day how far Gnome and my Summer of Code experience are taking me. I'm doing an internship at an open-source company in Bolivia simply because I had Gnome on my curriculum and it got someone's attention. And I am learning so much, both from my work here and from living in such an incredible country as Bolivia, where the diversity and contrast are amazing even for someone coming from Brazil. And the motivation I saw there: not only because free software makes a lot of sense in the poorest country in South America, but because of their passion for hacking on stuff, for learning about new things and making something useful out of all of that.

So, congrats to these guys there and thank you for everything (especially Amos Batto and Hardy Beltran for inviting me).

On a side note, Rhythmbox 0.13.2, released a few weeks ago, contains my first real contribution to Gnome, with my Google Summer of Code project from this year being released. So if you have an iPhone or Android, enable the DAAP plugin together with the Remote switch and enjoy controlling Rhythmbox from anywhere in your home! Thanks to W. Michael Petullo, Jonathan Matthew and Peter; without their help this would never have been released.

Analyzing HTTP packets with Wireshark and Python — November 21, 2010

I'm doing some reverse-engineering stuff and it has been quite fun so far (hopefully I'll blog more about why I'm doing this in the future). I needed to dump some HTTP traffic and analyse the data. Of course, Wireshark comes straight to mind for something like this, and it is indeed really useful. It took me some time to understand the Wireshark interface, and I still think it's hiding some great functionality from me. But anyway, I was able to set the filters I wanted and it showed me exactly the data I wanted. I still had to right-click the data and save it to disk, though, which was not ideal.

Then I thought: if people were smart enough to build such a powerful tool, they probably created a command-line interface as well, probably with scripting. Indeed they did! The command-line interface is called Tshark and the scripting is done in Lua. But I don't know Lua and it would take too much time to learn it for this task. So I started looking for a way to dump everything and then write a small script in Python to extract the data I really want. It took some time, but the solution was much simpler than I thought (by the way, there are probably other solutions for this, but my Google skills were not good enough to find anything obvious).

First you run Tshark to dump any HTTP traffic to an XML file (I usually hate XML, but this time it was useful). This is what I used:

sudo tshark -i wlan0 "host 192.168.1.100 and port 45000" -d tcp.port==45000,http -T pdml > dump.xml

Of course, it all depends on what you want to dump. You should read “man pcap-filter” to get the capture filter right; it is really useful (crucial sometimes) to only capture the traffic you want. And I wanted to treat traffic on port 45000 as HTTP, which I think is what the -d switch does ;) The most important thing is “-T pdml”, which tells tshark to dump in this XML format.
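Since the same capture tends to get rerun a lot, the invocation can also be assembled in Python and handed to subprocess. This is just a sketch; the interface, host, and port are the values from the example above, and the helper name is my own:

```python
def tshark_pdml_cmd(iface, capture_filter, port):
    """Build the tshark invocation from above: capture on iface with a
    pcap-filter expression, decode traffic on `port` as HTTP, emit PDML."""
    return [
        "tshark", "-i", iface,
        capture_filter,                    # capture filter (see man pcap-filter)
        "-d", "tcp.port==%d,http" % port,  # treat this port as HTTP
        "-T", "pdml",                      # dump packets as XML
    ]

cmd = tshark_pdml_cmd("wlan0", "host 192.168.1.100 and port 45000", 45000)
# To reproduce the shell command, redirect stdout to dump.xml, e.g.:
#   subprocess.run(cmd, stdout=open("dump.xml", "wb"))
# (capturing still needs root privileges, hence the sudo above)
print(" ".join(cmd))
```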

The next step is to analyze it in Python, which was much easier than I thought. I was only interested in the data field of the HTTP packets, but if you take a look at the dumped file, you'll see you have information about all kinds of things. My script turned out to be this:

from lxml import etree
import binascii

tree = etree.parse('dump.xml')
# Each HTTP data field holds the payload as a hex string; decode it to raw bytes.
data = [binascii.unhexlify(e.get("value"))
        for e in tree.xpath('/pdml/packet/proto[@name="http"]/field[@name="data"]')]

I used lxml because I found it has great support for XPath, which is quite useful here. Also, the HTTP data is stored as a hex string, which you can easily convert with unhexlify. So, in the end I was able to automate an annoying process with just a few lines of code. And if I need anything else, it's quite easy to extend the script. I'm quite happy with the results!
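If lxml isn't available, the standard library's ElementTree can do the same extraction, since its limited XPath support handles these attribute predicates. A sketch, using a tiny made-up PDML fragment in place of dump.xml:

```python
import binascii
import xml.etree.ElementTree as ET

# A minimal PDML fragment standing in for dump.xml (hypothetical sample data).
pdml = """<pdml>
  <packet>
    <proto name="http">
      <field name="data" value="48656c6c6f"/>
    </proto>
  </packet>
</pdml>"""

root = ET.fromstring(pdml)
# ElementTree's XPath subset supports the same attribute predicates;
# the path is relative to the <pdml> root element here.
data = [binascii.unhexlify(f.get("value"))
        for f in root.findall('./packet/proto[@name="http"]/field[@name="data"]')]
print(data)  # the decoded HTTP payloads as bytes
```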

Update: Someone pointed out in the comments Scapy (http://www.secdev.org/projects/scapy/), which, judging by its documentation, seems awesome!

Move to Ubuntu — November 2, 2010

A recent post on Planet Gnome about moving away from Arch to Ubuntu got me thinking, because I did the same thing a few weeks ago, when Ubuntu 10.10 was released. But I didn't really like my reasons for doing so.

First, I love Arch Linux. Its simplicity and speed are amazing. It's clearly focused on power users, which is great for me. Its package manager (Pacman) is very fast and powerful, while still easy to use. I love how I can search and query packages, both installed and from the repositories, with concise commands that usually do what you want the first time. Compared to Ubuntu's apt-cache and apt-get, for which I usually have to read the man page to remember a few commands, and to Fedora's yum, which I'm never comfortable with, Pacman is always the winner. If that were not enough, you can create useful packages in under 10 minutes. Better yet if you can find a pre-made PKGBUILD in the AUR, which contains thousands of recipes to build packages. The binary packages are usually enough for a desktop, but sometimes you do need to dig into Yaourt, which automatically downloads and compiles recipes. It is time-consuming sometimes, but compared to finding a PPA with a decent enough version of a package you can't find in Ubuntu, it's not much different.

That leads to the first reason I switched away from Arch: Ubuntu usually has recent enough versions when it's released. But six months later, I really want to try new versions of packages. If I can find a good and well-maintained PPA, then it's OK. If I can't, there are a thousand other things I would rather do than create my own packages for new upstream releases. So, at this time of the year, a few weeks after Ubuntu was released, I can actually enjoy it for a while (actually, Rhythmbox just put out a new version, which I will probably never get on this version of Ubuntu).

The second reason, and one that is a hot topic right now (though this is mostly a user perspective): I got locked into the Gnome modifications that Ubuntu made. Honestly, I like the modifications: I like the Indicator Applet, I'm using the Indicator Applet Menu (despite a few bugs) and I love Notify OSD. And I miss those on Arch. I tried building some components of the Ayatana Project on Arch, but didn't have much luck (understandable, actually). The Arch way is to stay as close to upstream as possible, without modifications, which is usually very good. You may hit a few bugs that are already corrected on other distros, but you get releases faster and you get to see how upstream really is. And that is a big issue for Ubuntu, because no matter how much they talk about not forking Gnome, it's just not upstream anymore. If I moved from Arch to Fedora or OpenSuse, I would get a very similar experience. Not because they don't improve Gnome or add their own modifications, but because all their modifications go upstream. So even on Arch, I can enjoy all the great investment these guys made in Gnome. But I can't use Ubuntu's investment outside Ubuntu (I could, actually, if I spent enough time porting it, but it's just not worth it).

And now I'm locked into Ubuntu: I'm locked into the Ubuntu OS to use Ubuntu software, both of which are actually very high quality, but I'd prefer a different OS. Some other OS vendors use the same tactics to get more users, and I definitely don't want to use their software, no matter how great it is. Of course Ubuntu is miles away from those vendors, but it's going down a similar and very dangerous path. Again, this is only a user perspective, and from a user Ubuntu is not focused on (and I'm glad they have a very strong focus on other users). I just wish Ubuntu would give a bit more back to the community it takes so much from.
