Thursday, April 26, 2012

Patching /etc/resolv.conf file in VMs

I use a Linux VM for my development, and I run it using VMware Player. The VM obtains its IP address through a DHCP client run by NetworkManager (part of the vmware-tools suite). Every time VMware Player issues a new IP address, my /etc/resolv.conf file gets overwritten and I am left with only "localhost" as the search domain.

Some of the machines that I connect to require fully qualified names to resolve. For example, I cannot look up host1; I have to look up host1.corpdomain.com instead. Hence I have to make sure that /etc/resolv.conf has the correct search directive after every DHCP lease.

NetworkManager first copies the /etc/dhcp/dhclient.conf file to /var/run/nm-dhclient-eth0.conf and then adds a few extra directives of its own. So all I had to do to fix the issue was add the following two lines at the end of the /etc/dhcp/dhclient.conf file:

prepend domain-search "corpdomain.com";
prepend dhcp6.domain-search "corpdomain.com";

Please make sure you don't miss the double quotes and the semicolons; otherwise it will fail silently without giving any error. After adding these lines, just restart your VM and everything should work fine.
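Once the VM comes back up, the search line in /etc/resolv.conf should look roughly like the one below (the second entry is just whatever the DHCP server was already handing out; the nameserver lines are untouched):

search corpdomain.com localhost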

Wednesday, April 18, 2012

A minor annoyance of running "rails console" inside Emacs

If you run "rails console" from within Emacs (in a shell buffer), you might have been annoyed by output that contains raw control characters whenever you run a query, like the one given below:

1.9.3-p125 :001 > BlogPost.all
^[[1m^[[36mBlogPost Load (0.2ms)^[[0m  ^[[1mSELECT "blog_posts".* FROM "blog_posts" ^[[0m
I could not find a satisfactory solution for this anywhere, so I came up with the following one, which works fine for me.
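Roughly, the idea can be sketched as below. This is not the exact code, just a minimal sketch: the function name is mine, and it assumes the rails executable is on Emacs' exec-path and that you invoke it from inside your project directory.

(require 'term)

(defun my-rails-console ()
  "Run \"rails console\" in a term buffer instead of a shell buffer."
  (interactive)
  ;; make-term creates a buffer named *rails-console* that runs the
  ;; "rails" program with the "console" argument.
  (let ((buffer (make-term "rails-console" "rails" nil "console")))
    (with-current-buffer buffer
      (term-mode)
      ;; Start in line-mode so regular Emacs navigation and kill/yank work.
      (term-line-mode))
    (switch-to-buffer buffer)))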


What this code does is use "term" instead of "shell" to run the Rails console; term-mode has much better support for rendering these control characters. To make navigation and kill/yank easier, it switches to line-mode (the default is char-mode). You can switch to line-mode and char-mode using C-c C-j and C-c C-k respectively. Hope that helps.

Friday, April 13, 2012

Have a primary key, for Christ's sake!

This post is me venting my frustration.

Never ever agree to have a table that doesn't have a primary key. There may be forces of nature that attempt to convince you otherwise, like:

  • Your peers might say you don't need to perform an UPDATE on that table, now or ever
  • You need that table only for audit or reporting purposes
  • The data in that table is stored just to go back and refer to later, which may never happen, so don't bother
  • You may be under the gun, with the big boss saying "you don't need a primary key, because I say so!"

For Christ's sake, don't yield to any of these. I was in limbo recently when I had to make use of a legacy table that didn't have a primary key. This table was once thought of as insignificant, but it has suddenly become important. The issue is that, to perform the operations I want to perform, I need a primary key.

To make matters worse, this monster table has more than 40 million rows. Eventually I had to create a primary key column (ID), backfill all the existing rows using a sequence, and add a trigger to make sure the ID column gets populated every time the application INSERTs a row into that table in the future. It was an unnecessary hassle.
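For the record, the fix looked roughly like the sketch below. The table and object names here are made up, and on a 40-million-row table you would want to do the backfill in smaller, committed batches:

ALTER TABLE audit_log ADD (id NUMBER);

CREATE SEQUENCE audit_log_seq;

-- Backfill the existing rows (batched in practice).
UPDATE audit_log SET id = audit_log_seq.NEXTVAL;

ALTER TABLE audit_log ADD CONSTRAINT audit_log_pk PRIMARY KEY (id);

-- Make sure future application INSERTs get an id without touching the app code.
CREATE OR REPLACE TRIGGER audit_log_id_trg
  BEFORE INSERT ON audit_log
  FOR EACH ROW
  WHEN (NEW.id IS NULL)
BEGIN
  SELECT audit_log_seq.NEXTVAL INTO :NEW.id FROM dual;
END;
/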

So, my friend, here are my two cents:
  • Always make sure your table has a primary key. It is very cheap to have one right from the beginning and populate it using AUTO_INCREMENT (MySQL) or a sequence and trigger (Oracle), rather than attempting to fix it later.
  • If you will perform UPDATEs on a table, make sure you have a VERSION column. This is important since it helps you perform optimistic locking (sketched after the next paragraph), and if you run more than one instance of the application, it definitely helps. I have seen people use LAST_UPDATE_TIME for optimistic locking; though that works, it is error-prone.

Versioning is an important aspect that gets easily overlooked. Today you may have only one Tomcat (or any application server) running; say your product becomes a hit and you need to double the number of application servers. Versioning lets you perform optimistic locking and ensures you don't screw up the integrity of your data.
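To make the optimistic locking part concrete, here is a sketch (the table, column names and values are made up for illustration):

-- Read the row, remembering the version you saw.
SELECT id, title, version FROM blog_posts WHERE id = 42;

-- Later, update only if nobody else changed the row in the meantime.
UPDATE blog_posts
   SET title   = 'New title',
       version = version + 1
 WHERE id      = 42
   AND version = 7;  -- the version value read earlier

-- If this updates zero rows, another instance got there first:
-- re-read the row and retry, or report a conflict.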

Tuesday, April 10, 2012

Converting a byte array to long - a performance test

WARNING: Micro-benchmarks are mostly deceiving if you aren't careful about how you interpret the results. I leave it up to you to interpret these results.

Recently I came across a problem where I had to convert an array of bytes into a long value. The obvious choice was to write a bytesToLong method that takes a byte array as an argument and returns a long value. When I dug around a little, I found that I could also use a ByteBuffer to wrap the bytes and call its getLong() method to get the long value.

I was curious to know if there are any performance benefits to going one way or the other, so I wrote a small micro-benchmark. Here is the source code, followed by the results.
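The harness itself is nothing fancy; a sketch along these lines (class and method names are mine, and the warm-up and loop-count details are simplified) captures the two approaches being compared:

import java.nio.ByteBuffer;

public class BytesToLongBenchmark {

    // Hand-coded conversion using shifts; assumes an 8-byte, big-endian
    // input, which matches ByteBuffer's default byte order.
    static long bytesToLong(byte[] b) {
        long result = 0;
        for (int i = 0; i < 8; i++) {
            result = (result << 8) | (b[i] & 0xFF);
        }
        return result;
    }

    static long viaByteBuffer(byte[] b) {
        return ByteBuffer.wrap(b).getLong();
    }

    public static void main(String[] args) {
        byte[] bytes = {0x01, 0x23, 0x45, 0x67,
                        (byte) 0x89, (byte) 0xAB, (byte) 0xCD, (byte) 0xEF};
        int iterations = 10000000;
        long sink = 0; // keeps the JIT from optimizing the loops away

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += bytesToLong(bytes);
        }
        long shiftTime = System.nanoTime() - start;

        start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += viaByteBuffer(bytes);
        }
        long bufferTime = System.nanoTime() - start;

        System.out.println(iterations + " iterations: shift=" + shiftTime
                + "ns, buffer=" + bufferTime + "ns, ratio="
                + ((double) bufferTime / shiftTime) + " (sink=" + sink + ")");
    }
}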

The results are given below:
The columns are: number of iterations, time taken by the shift version, time taken by the ByteBuffer version, and the ratio between the two. As you can see, the ByteBuffer version is four times slower than the hand-coded version. In my case, this slowness doesn't make any difference, but depending on your application, it may be significant.