Adventures in HttpContext
All the stuff after 'Hello, World'

Performance Tracing For Your Applications via Enterprise Library

Performance is one of the most important, yet often overlooked, aspects of programming.  It’s usually not something you worry about until it gets really bad.  And at that point, you have layers of code you need to sift through to figure out where you can remove bottlenecks.  It’s also tricky, especially on complex, process-oriented systems, to aggregate performance information for analysis.  That’s where this gist for a utility performance tracer class comes into play.  It hooks into the Enterprise Library Logging Block by aggregating elapsed time and information statements and passing them to the Event Log (or any other listeners you’ve hooked up).  Even though the gist is designed for the Enterprise Library, it can easily be modified for other utilities like Log4Net.  Below is an example of how the entry looks in the EventLog.

Why Do It?

I was disappointed with the built-in trace listener provided with Enterprise Library.  It would write a begin and end message to the trace utility, and output a rather verbose message containing only a few relevant lines.  Even though it was easy to use in a using() statement, it was rather difficult to view related trace statements for a single high-level call.  I could have explored using the Tracer class more effectively, but isn’t it always easier to write your own?

How it Works

Usually you’ll create an instance of the PerformanceTracer in a using statement, just like the Tracer class.

using(new PerformanceTracer("How long does this function take?")) {
     //Your Code Here
}

This will write the current message as the Title of the LogEntry and write the elapsed time between instantiation and disposal (thanks to the IDisposable pattern).  But the real benefit comes when you use the instantiated class for tracing within the using block:

using(var tracer = new PerformanceTracer("Some long process")) {
    //Step One
    tracer.WriteElapsedTime("Step One Complete");
    //Step Two
    tracer.WriteElapsedTime("Step Two Complete");
}

This will write both the elapsed time and the difference between statement calls, and will allow you to easily gain insight into long running steps.  There’s also another method WriteInfo, which allows you to write important information without clogging the performance messages.  This is important for information such as the current user, or information about the request:

tracer.WriteInfo("CurrentUser", Identity.User);
tracer.WriteInfo("QueryString", Request.QueryString);

More often than not you probably defer high-level function calls (those that orchestrate complex logic) to subroutines.  In Domain Driven Design, your Domain objects will probably interact with support services or other entities.  You may need to gain insight into these routines, but you don’t want to create separate tracers- that will create multiple EventLog entries and won’t give you a clear picture of the entire process.  In a production environment, where there could be hundreds of concurrent requests, aggregating those calls in a meaningful way is a nightmare.  That’s the main issue I had with the default Tracer class.

The solution for the PerformanceTracer is simple: in your dependent classes you create a property of type IPerformanceTracer.  There are two extension methods, LogPerformanceTrace and LogInfoTrace, you can use in dependent classes.  These simply do a null object check before writing the trace so you don’t get any nasty null reference errors- helpful if you need to add logging and don’t want to update dependencies in your unit tests.  Notice how in the example above there was one call which took the majority of the time? I should probably get more insight into that function to see what’s going on.  Here’s an example of a property/extension method combination:

//Create a property to store the tracer
public IPerformanceTracer Tracer { get; set; }
public void MySubRoutineCalledFromAnotherFunction(string someParam)
{
    //Do Work
    Tracer.LogPerformanceTrace("Subroutine Step One");
    //Do More Work
    Tracer.LogPerformanceTrace("Subroutine Step Two");
}
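
For reference, here’s roughly what those null-safe extension methods could look like- a sketch based on the description above, not necessarily the exact gist code:

public static class PerformanceTracerExtensions
{
    public static void LogPerformanceTrace(this IPerformanceTracer tracer, string message)
    {
        //Null check: callers don't need a configured tracer (handy in unit tests)
        if (tracer != null)
            tracer.WriteElapsedTime(message);
    }

    public static void LogInfoTrace(this IPerformanceTracer tracer, string key, object value)
    {
        if (tracer != null)
            tracer.WriteInfo(key, value);
    }
}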

It doesn’t matter if the Tracer property is set, because the extension method handles null objects correctly. You could also put the tracer entity in an IoC framework and pull it out that way.  I prefer this approach when dealing with web apps.  You can create the tracer in a BeginRequest HttpModule, plug it into a container, and pull it out in the EndRequest method to dispose of it.  Using the tracer with a Per WebRequest Lifetime Container, supported by most frameworks, is even better.  This way it’s available everywhere without having to wire up new properties, and you can scatter it around in various places.  Here’s some code which pulls it out of the ServiceLocator in an ActionFilterAttribute in an ASP.NET MVC app:

public class PerformanceCounterAttribute : ActionFilterAttribute
{
    IPerformanceTracer tracer;

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        tracer = ServiceLocator.Current.GetInstance<IPerformanceTracer>();
        tracer.LogPerformanceTrace("Action Executing");
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        tracer.LogPerformanceTrace("Result Finished");
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        tracer.LogPerformanceTrace("Action Finished");
    }
}
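
If you’d rather wire the tracer up for every request, the BeginRequest/EndRequest approach mentioned above might look roughly like this- the container calls are hypothetical placeholders for whatever per-request registration your IoC framework provides:

public class PerformanceTracerModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.BeginRequest += (sender, e) =>
        {
            //Create a tracer for this request and stash it in the container (hypothetical API)
            var tracer = new PerformanceTracer(HttpContext.Current.Request.RawUrl);
            Container.RegisterPerRequest<IPerformanceTracer>(tracer);
        };

        context.EndRequest += (sender, e) =>
        {
            //Pull the tracer back out and dispose it, which writes the aggregated entry
            var tracer = Container.Resolve<IPerformanceTracer>();
            if (tracer != null)
                tracer.Dispose();
        };
    }

    public void Dispose() { }
}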

The final aspect of this setup is formatting the output.  Enterprise Library provides a ton of information about the context which logged the entry- not all of it may be applicable to you.  I use a custom formatter to only write the Title, Message and Category of the trace, like so:

<formatters>
 <add template="Title:{title}
Message: {message}
Category: {category}" type="Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.TextFormatter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Text Formatter"/>
 </formatters>

This makes entries so much more readable. By default, the EventLog takes care of important information like the Computer name and Timestamp. So I can keep the message info very clean.

One thing I’m specifically not trying to do is get an average of the same function call over a period of time.  That insight is also very important in understanding bottlenecks.  But what I’m trying to do is get a broad sense of the flow of a large process and see which chunks can be improved.  With the ability to aggregate EventLogs in a single place, I can easily sift through and see how my app is behaving in production.  By linking that with relevant information like the current user, url, and query string, I can find specific scenarios which are problematic that may not have gotten caught in lower environments.

A copy of the entire class is available from GitHub.  It doesn’t make sense to roll out an entire project for it- just roll it somewhere in your app and tweak as necessary.  Then enjoy finding all the places you could optimize!
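
To give a sense of its shape, a minimal version might look something like this- a sketch assuming Stopwatch-based timing and a plain Logger.Write call, so the full gist differs in the details:

using System;
using System.Diagnostics;
using System.Text;
using Microsoft.Practices.EnterpriseLibrary.Logging;

public interface IPerformanceTracer : IDisposable
{
    void WriteElapsedTime(string message);
    void WriteInfo(string key, object value);
}

public class PerformanceTracer : IPerformanceTracer
{
    private readonly string _title;
    private readonly Stopwatch _stopwatch = Stopwatch.StartNew();
    private readonly StringBuilder _messages = new StringBuilder();
    private long _lastElapsed;

    public PerformanceTracer(string title)
    {
        _title = title;
    }

    public void WriteElapsedTime(string message)
    {
        //Record total elapsed time plus the delta since the last statement
        long elapsed = _stopwatch.ElapsedMilliseconds;
        _messages.AppendLine(string.Format("{0}: {1} ms (+{2} ms)", message, elapsed, elapsed - _lastElapsed));
        _lastElapsed = elapsed;
    }

    public void WriteInfo(string key, object value)
    {
        //Informational entries don't clog the timing messages
        _messages.AppendLine(string.Format("{0}: {1}", key, value));
    }

    public void Dispose()
    {
        _stopwatch.Stop();
        WriteElapsedTime("Total");

        //Hand the aggregated messages to the Enterprise Library Logging Block
        var entry = new LogEntry { Title = _title, Message = _messages.ToString() };
        entry.Categories.Add("Performance");
        Logger.Write(entry);
    }
}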

Using Flickr and jQuery to learn JSONP

I was playing around with the Flickr API recently and got a little stuck: when calling the Flickr public feed using jQuery’s $.getJSON method, I wasn’t getting any results.  I thought maybe I was parsing the response incorrectly, but when I went to check out the data coming back in Firebug, nothing was there.  I couldn’t believe it- the response headers were present, but the body was blank.  Calling the public feed url from the browser worked fine.  What was more interesting was that everything worked in IE.  So I did some experimenting and learned the issue: I wasn’t correctly using the endpoint to work with JSONP, which is required when using jQuery with Flickr.  Then I thought I’d better learn more about JSONP.

There are plenty of good articles about JSONP on the net.  Essentially, JSONP allows you to specify custom callbacks when making remote ajax calls.  Firefox seems to be more strict when dealing with JSONP, which is why I didn’t get a response body.  What did the trick was adding the

jsoncallback=?

query string parameter to the end of my Flickr url.  This allows the jQuery framework to route to the default success function you pass to .getJSON(), like so:

$.getJSON(
    'http://api.flickr.com/services/feeds/photos_public.gne?format=json&jsoncallback=?',
    function(data) {
        $.each(data.items, function(i, item) {
            $("<img/>").attr("src", item.media.m).appendTo("#images");
        });
    });

Here’s an example of this call.   So what happens if we replace the ? with something else? Well, passing a simple text string will be treated like a normal function call.  Take a look at the example below:

function GetItemsUsingExternalCallback() {
    $.ajax({
        url: 'http://api.flickr.com/services/feeds/photos_public.gne?format=json&jsoncallback=myCallbackFunction',
        dataType: 'jsonp'
    });
}

function myCallbackFunction(data) {
    //do stuff
}

jQuery will try and call “myCallbackFunction” as a normal function call instead of routing to the standard success function that’s part of the getJSON call.  The example page also includes this approach. There’s only a slight difference between the two approaches, but it’s cool that jQuery can call a function outside of the normal success callback.  Of course, if you needed to reuse a common callback across multiple ajax calls, you’d probably want to call that function directly from the success method, rather than inlining the function in the .ajax call.
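
That would look something like this- the same $.getJSON call from earlier, just delegating to the shared handler:

$.getJSON(
    'http://api.flickr.com/services/feeds/photos_public.gne?format=json&jsoncallback=?',
    function(data) {
        //Reuse the common handler instead of naming it in the jsoncallback parameter
        myCallbackFunction(data);
    });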

How eAccelerator Improved WordPress on My Fedora Server

I recently moved this blog and some other smaller websites to a virtual machine running on Rackspace Cloud. So far I’m loving having my own server, and have been able to get my hands dirty with Linux administration, apache, and mysql.

But one quirk was really bothering me: at times, my WordPress blog would hang. I don’t get too much traffic, and running top showed no real load on the cpu. But there were a lot of Apache threads with a good chunk of memory allocated, and little free memory available.  I tried adding more memory to the server, but no luck.  I thought the issue could be mysql, but running queries wasn’t a problem.  Neither was static html- the root index.html loaded quickly.  So that left php itself.

A couple of apache config changes didn’t help.  The thing which really did the trick was installing eAccelerator.  This tool simply keeps the compiled php script available, so apache doesn’t have to recompile the php script on every load.  Now the blog is a lot faster and much more reliable.

Installation is easy on Fedora (or CentOS, or whatever distro uses yum):

sudo yum install php-eaccelerator

then just restart apache and you’re good to go:

sudo /sbin/service httpd reload

If you don’t notice an immediate improvement and want to check to make sure it’s loaded, create an info.php file with the following code:

<?php phpinfo(); ?>

and you should see the accelerator info in the list.

An Outlook Macro to Archive Items like GMail

I’m a big fan of Gmail’s archive feature- I can move items out of the inbox quickly and find them later with search.  To mimic this behavior at work with outlook, I’ve created this quick and dirty macro which does the job.  Simply create a folder named “archive” in your mailbox.  When you run the macro, it will move the current item in your inbox to that folder.  I’ve created a keystroke which lets me run the macro easily.

Sub ArchiveItem()
    On Error Resume Next

    Dim objFolder As Outlook.MAPIFolder, objInbox As Outlook.MAPIFolder
    Dim objNS As Outlook.NameSpace, objItem As Outlook.MailItem

    Set objNS = Application.GetNamespace("MAPI")
    Set objInbox = objNS.GetDefaultFolder(olFolderInbox)
    Set objFolder = objInbox.Parent.Folders("Archive")

    'Assume this is a mail folder
    If objFolder Is Nothing Then
        MsgBox "This folder doesn't exist!", vbOKOnly + vbExclamation, "INVALID FOLDER"
        Exit Sub
    End If

    'Require that this procedure be called only when a message is selected
    If Application.ActiveExplorer.Selection.Count = 0 Then
        Exit Sub
    End If

    For Each objItem In Application.ActiveExplorer.Selection
        If objFolder.DefaultItemType = olMailItem Then
            If objItem.Class = olMail Then
                objItem.Move objFolder
            End If
        End If
    Next

    Set objItem = Nothing
    Set objFolder = Nothing
    Set objInbox = Nothing
    Set objNS = Nothing
End Sub

Rocking the Rackspace Cloud

I’ve been unhappy with my current hosting provider, and in looking for an alternative I tried Rackspace Cloud Server.  I’ve wanted a dedicated server for a while, but it’s been cost prohibitive compared to shared hosting.  This blog, as well as some other, smaller sites, is now running on a Rackspace Cloud Server.  Needless to say I love it.  Rackspace Cloud Server is simply awesome.

Setup

Signing up and setup was fast and simple.  There was a weird account verification process where Rackspace had to call me to finalize and confirm setup, but I only had to wait about ten minutes after I signed up for the call.  After that, it was only a couple of clicks before I was logging in to my new server.  There are several Linux images to choose from- you select the size of the machine you want and the os- and within seconds you have a public ip v4 address and root access. For me that was the most amazing part of cloud computing- I wanted a new server, and I got one with instant gratification.

Getting Started

I’m not a Linux expert coming from the Windows world, but I was able to configure everything and install mysql/apache in no time.  Rackspace has a series of extremely well written and straightforward how-to’s to get your server configured correctly.  Each OS image is a stripped down bare-bones install so there’s some setup required, but by using the knowledge base I was up and running quickly.  Backups are integrated into the management website with Rackspace’s Cloud Files product, so I took a snapshot for reuse later.  You can even schedule snapshots as needed.

It’s really the knowledge base which I loved- I got ftp, apache, php, ssh, mysql all installed and secured quickly.  There’s a wealth of information there.  If you need to learn Linux or do anything with popular OS applications check out their kb. I’ve never seen so much easy to access documentation on open source software in one place.

Other Goodies

The reason why I wanted to try Rackspace is because the cost of entry is extremely cheap- it starts at 1.5 cents/hr which comes out to about $11/month.  Much cheaper than Amazon’s EC2, which starts around $20 only if you prepay.  There are a lot of major differences between the two products which I talk about below.

Rackspace Servers come in different sizes, all differentiated by the amount of RAM.  I chose the 256mb option because it was the cheapest.  It turns out it was a little too light on power for what I wanted, so I upgraded to the 512mb option.  The upgrade was seamless- you just say “make this a 512mb server”, Rackspace queues the request, reboots when appropriate, you verify, and then you get the same server with better “hardware”.  Loved it.  I queued the request before I left work and it was good to go when I got home.

The best part about Rackspace is the support.  There’s a live chat link from the management page.  I had a question about DNS- I clicked live chat- had a quick IM conversation- got the answer- and I was done.  That’s service.

Rackspace vs. EC2

The main reason I chose Rackspace over EC2 was cost.  But in looking into the differences between Rackspace and EC2 I’m happy I chose Rackspace.  The biggest difference is with Rackspace you can reboot the OS without losing state- so if you make a configuration change or install some software, a reboot behaves like you’d expect- everything stays the same.  EC2, on the other hand, resets the server state back to what it was on the first boot.  You need to have your EC2 server link to EBS to persist state.  With Rackspace you get a hard drive which acts, surprisingly, just like a hard drive.

With Rackspace, the default is ip v4, but with AWS, it’s ip v6 and you need to link to an ip v4 address.  The initial setup is also slightly different- EC2 involves the use of a keypair for the initial login, but Rackspace gives you root access which you then change and configure as needed.  I haven’t actually set up an EC2 server, but by looking at the how-to I can safely say Rackspace is a lot simpler.

Another major difference in options is the “size” of the server.  EC2 is explicit in the hardware configuration- RAM, cpu power, cores, etc.  Size is capped.  Rackspace takes a different approach: all CPUs are 64 bit with four cores and you only choose the amount of RAM.  Storage and CPU power change with the size of RAM.  You’re guaranteed a certain minimum power (rather than a capped maximum), and if more power is available you get it.

I can say EC2 has many more options available for OSes- EC2 offers Windows in addition to Linux (apparently this is coming soon to Rackspace).  I also like the idea of EC2’s community AMIs.  You only get a bare bones OS with Rackspace, but with EC2 you can get a fully configured LAMP stack or some other ready-to-go server.  But it was fun getting everything set up myself with Rackspace.  I haven’t done a long haul with either product, and like I said I haven’t actually spun up an EC2 instance, so I can’t get into any more descriptive details.

Rocking the Cloud

I can’t tell you how fun it is to be able to spin up a new server out of the blue.  There are a million things I want to do- being able to run a server full time, with great bandwidth, without having to find a place for it in my house (or buy and salvage hardware) is just geeky awesome.  And for only $11/mo!  It’s crazy!  Whether with Rackspace or Amazon I encourage you to give it a shot.  I encode a lot of video for my ipod, which means running my laptop at night or slow performance when doing other things.  I’d love to be able to spin up a server, upload a video, encode it, pull it down, and shut down the server to save money.  Power on demand!

Disclaimer: I own stock in both Rackspace and Amazon, but I wasn’t paid nor asked to write this article.

Getting Ruby 1.9, Readline, Rails, and Mysql all running on Snow Leopard

In my never ending love/hate relationship with Ruby, Rails and my Mac I’ve finally gotten Ruby 1.9 up and running with Rails 2.3 and MySql 64 bit.  All on Snow Leopard.  There was even a little detour with Readline.  If you’ve scoured other posts about Snow Leopard, Ruby, Rails and Mysql and ended up here, I feel your pain.  I hope this helps you on your way.  Most of this info is from other places, which I’ve explained in a little (just a little) bit more depth.

Install XCode

You need XCode to do any of this, so install it.  If you’re upgrading to Snow Leopard, reinstall XCode so you get the correct c compiler.

Your profile

Here’s the deal.  We’re going to install ruby 1.9 to your /usr/local directory.  It will dump stuff in /usr/local/bin and other stuff in /usr/local/lib.  Why here?  That’s where it goes.  The default install of ruby on Snow Leopard, 1.8, lives in /System/Library/Frameworks/Ruby.framework/Versions/Current.  Current is really an alias (actually, a symlink) to the 1.8 directory at the same level.  For some it may seem like a good idea to install ruby 1.9 here.  It’s not.  Just put it in /usr/local like everyone else.

Because ruby 1.9 will live in our /usr/local you have to help out your terminal a little.  You have to tell it where to look for the bin of ruby 1.9.  So when you run “ruby” from terminal you get the 1.9 version in /usr/local, not the 1.8 version in the System Library.  That’s why you have to add a path to /usr/local in your profile.  Do this from the terminal:

mate ~/.bash_profile

This says, “open or create the file .bash_profile, in the home directory (~/), using textmate”.  You can use any other editor if you know how- but if you did you probably wouldn’t need to read this.  So just buy- and use- textmate.  It’s a nice app.  Now, .bash_profile is a file used by bash, aka the terminal app, for settings.  Some places you’ll see “mate .profile” instead.  This will work too- but if you have both a .profile and a .bash_profile you may run into problems.  Just have one, preferably .bash_profile, and write this in it:

export PATH=/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:$PATH

This prepends /usr/local/bin and sbin to the list of directories searched when the terminal tries to find where all those little commands you type actually live- which is what makes the /usr/local ruby win over the system one.  Keen eyes may have noticed the mysql/bin thrown in there.  This is for later.  The :$PATH at the end is extremely important- it keeps all the other paths which are defined elsewhere. Once this is done, type “source .bash_profile” from the terminal to load the changes.

Try Installing Ruby

One way to install ruby is by using MacPorts.  If you want to get a little more hands on, we’re going to pull the source down and build it ourselves.  MacPorts is probably the easiest option.  We’re not doing the easy option.

Note: Read all this first!  In the terminal, make sure you’re in your home directory by doing a simple “cd”.  Then, do this:

mkdir src
cd src

We’re creating a new directory called src for our source files, and moving into said directory.  Now we can get the code:

curl -O ftp://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.1-p376.tar.gz

tar xzvf ruby-1.9.1-p376.tar.gz

cd ruby-1.9.1-p376/

Curl is a nice little app to pull things from the internet.  The -O option simply names the local file the same as the remote file.  Ruby 1.9.1-p376 was the latest version as of this writing, but check ruby-lang.org for the latest release.  Tar xzvf simply unpacks the compressed download.

Very helpful hint: If you ever are unsure about a command, simply type the command and --help.  As in, curl --help.  This is very helpful.

Next, we get to the good stuff.  First, run autoconf simply by typing “autoconf”:

autoconf

Autoconf is a tool used for generating configuration scripts.  It’s important you run this.  Then, run the configuration script.  The thing which worked for me was:

./configure --prefix=/usr/local/ --enable-shared --with-readline-dir=/usr/local

This tells us to install ruby in the /usr/local directory, build a shared library for ruby, and use the readline installation found in /usr/local. Very Important: some users prefer adding a suffix to the ruby 1.9 install so it doesn’t interfere with the system install of ruby.  By adding the --program-suffix=19 option to configure you’ll append “19” to all commands, like “ruby19” and “gem19”.  This is a smart idea as it won’t interfere with the default ruby installation.  Using this technique there are ways to easily switch between ruby installations.  If you don’t care about 1.8, and just want the ease of typing “ruby” and getting the latest 1.9, omit the --program-suffix option.
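
For reference, the suffixed configure call would look something like this:

./configure --prefix=/usr/local/ --enable-shared --with-readline-dir=/usr/local --program-suffix=19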

If you run ./configure and get an error of:

configure: WARNING: unrecognized options: --with-readline-dir

You did not run autoconf.  Type it and run it, then run configure again.

Sometimes you’ll see the --enable-pthread option.  There seems to be some debate on whether this is a good idea.  I say omit it unless you know what you’re doing.  You can google for more info.  Feel free to explore and google other configure options- simply type “./configure --help” to list them all.

Next we need to run:

make

This builds all the source files needed for the install.  If you run this and get an error of:

readline.c: In function ‘username_completion_proc_call’:

readline.c:1159: error: ‘username_completion_function’ undeclared (first use in this function)

You don’t have readline- or at least the proper version of readline- installed.  This is a problem.  Let’s get it.

Readline

cd ~/src
curl -O ftp://ftp.gnu.org/gnu/readline/readline-6.0.tar.gz
tar xzvf readline-6.0.tar.gz
cd readline-6.0
./configure --prefix=/usr/local
make
sudo make install

You’ve now installed readline.  You may get a warning of “install: you may need to run ldconfig” at the end. Don’t worry about it.  At least I didn’t have to worry about it.

Do everything again

By now, you know the drill.  Hop back to the ruby source code directory and try it again.  But if you ran make in the ruby install step and got errors, just run “make clean” to reset everything.  It’s a good idea.

make clean
autoconf
./configure --prefix=/usr/local/ --with-readline-dir=/usr/local --enable-shared
make
sudo make install

That should be it.  Hopefully you’re error free.  Typing:

which ruby

should return:

/usr/local/bin/ruby

and typing “ruby -v” should return the version of ruby you’ve just downloaded- unless you used the --program-suffix option above, in which case it’s “ruby19 -v”.

Rails

Now you can download and install rails.  Simply run:

sudo gem update --system
sudo gem install rails

Mysql

Mysql is a piece of cake.  Simply grab the x86_64 install package from the Mysql site. Even though it’s for 10.5, it works fine on 10.6 (Snow Leopard).  Once this is done, you can build the mysql gem:

sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config

Fin

Now, try to get everything running.  Go back to your home directory, create a rails app, and see if it works:

cd ~/
rails playground --database=mysql
cd playground
rake db:create
script/server

Go to http://localhost:3000, check your environment settings, and you should see:

Ruby version 1.9.1 (i386-darwin10.2.0)
RubyGems version 1.3.5
Rack version 1.0
Rails version 2.3.5
Active Record version 2.3.5
Active Resource version 2.3.5
Action Mailer version 2.3.5
Active Support version 2.3.5

And voila, you’re done!

UPDATE:

If you use Textmate to develop with Rails, your Textmate Ruby path will point to the system’s 1.8 version, so you’ll get awkward issues of Gems not being available or other weird stuff when trying to run Ruby within Textmate (like when you want to run RSpec tests).  The fix is simple: go to Textmate -> Preferences -> Advanced -> Shell Variables and add TM_RUBY with a value of /usr/local/bin/ruby and you’ll be good to go.

Organizing Javascript for Event Pooling with jQuery

It turns out my most popular article of the past year was Event Pooling with jQuery’s Bind and Trigger.  I wanted to write a follow up article taking this approach one step further by discussing how to logically organize the relationship between binders and triggers on a javascript heavy UI.  It’s important to properly design the code structure of your javascript to create a flexible and maintainable system.  This is essential for any software application.  For javascript development, you don’t want to end up with odd dependencies hindering changes or randomly bubbled events causing bugs.

Event Pooling: A Quick Review

What is event pooling and why is it important?  Quite simply it’s a way to manage dependencies.  You create a loosely coupled system between the thing which triggers an action to happen and the thing which responds to the action, called the binder.  jQuery has some cool bind and trigger functionality which allows you to create custom events for event pooling- and you can use this to easily wire up multiple functions to write complex javascript with ease.  I encourage you to check out the original how-to article Event Pooling with jQuery’s Bind and Trigger to learn more.  It’s a very powerful technique.

Now, as we all know, with great power comes great responsibility.  If you structure the relationship between binders and triggers incorrectly you’ll end up with a mess on your hands, a spider web of logic which is almost impossible to unwind.  Here are some tips to better structure your binders and triggers for a logical and maintainable application.

Presenting the Problem


Let’s take a look at an example app.  Here’s a simple form where a user can draft a report, save it for later, or send it immediately. There are three options to enter data in the name/email/department field.  The user can enter it manually, choose from the autosuggest area at left, or simply select one of the favorite links.  The autosuggest/favorite link would pre-populate the textboxes used in the form.  If all the data is there, the send button is enabled.  If not, it’s disabled.

Let’s discuss how we’d deal with enabling and disabling the send button.  There are many ways to approach this.  A traditional way is to wire up some standard event to our textboxes:

$('#name').keyup(function(){
    //Logic (hopefully structured in a reusable way across all controls).
});
//Do this for all other input controls.

This solution is perfectly valid and works, but it has some shortcomings.  First, you need to wire up all three controls and route them to the same function.  Next, even though the keyup will handle user input, the textboxes could change in other ways- either from the favorite link or the auto suggest box.  We’ll need to call our validation function in numerous places.  There may also be other logic we want to incorporate within the text change event unrelated to enabling of the send button- email validation, for instance, which is only applicable to the email textbox, or a certain length requirement on the name textbox.  What happens if there’s a new requirement where the subject/body must be filled to enable the send button? Where does all this functionality fall into our larger requirements?

You can imagine the complexity of writing the javascript required for this UI.  We need a slew of functions to deal with validation logic: Both for enabling the send button and other input controls.  We need to wire up all our onkeyup and on click events across the auto suggest field, textboxes, and favorite links.  Function calls are everywhere.  It will take all your code complete skills to manage those dependencies and keep this code lean.

This is where event pooling comes into play: instead of direct dependencies between these controls and logic, you bubble events to a middle man and allow interested parties to respond accordingly.  Instead of all the controls telling the send button to enable/disable itself, the send button “listens” for changes it cares about- when the value of the textbox changes- and responds accordingly.  The responsibility is reversed- the function which needs to be called can choose when it’s called, rather than waiting for something to call it.

Dependencies Between Binders and Triggers

With event pooling there are two things you need to be conscious of: the events themselves and the data passed between the binder and trigger.  These items form the two dependencies between binders and triggers.  The event serves as a link between the binder and trigger, and the data is the information which is passed between the two.  It’s essential to properly structure your events and choose the right data transfer option to avoid pitfalls over the life of the application.  We’ll deal with each aspect individually.  This post will be about the structure of events, and I’ll write about various ways to pass data between parties in another post.

Structuring Custom Events

Most people are familiar with the standard javascript events: click, blur, enter, exit, etc.  These are what you usually wire up to functions when you want to do something.  However, they only go so far- you need custom events when you have a lot of stuff happening.  Why use them?  Quite simply, it’s more logical for your application control flow.  For our send button functionality, we want our validation function to act when something happens.  This “something” will be a custom event we create.  We have two options for naming our event: we can name the event after what is intended, like “ENABLE_DISABLE_SEND_BUTTON”, or name the event after what has happened, like “NAME_CHANGED” or “FAVORITE_LINK_SELECTED”.  The former option requires multiple events to be fired when something happens.  The autosuggest box, for instance, would require the ENABLE_DISABLE_SEND_BUTTON trigger, a SELECTED_CONTACT event for setting the textboxes, and whatever else happens after a contact is selected.  The latter option requires just one trigger to be fired, but the binder must subscribe to multiple events.  The choice can also be presented like this: do we want each element to fire a trigger for each action which should occur, or have a function to listen for multiple triggered events?
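
To make the first option concrete, the autosuggest handler would end up firing a trigger for everything that should happen as a result- something like this (the selector is just for illustration):

$('#autosuggest li').click(function() {
    //Fire a trigger for each intended action
    $(document).trigger('SELECTED_CONTACT');
    $(document).trigger('ENABLE_DISABLE_SEND_BUTTON');
});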


Naming events may seem like an arbitrary and thoughtless act, but it’s essential to have a naming strategy with event pooling.  Just like wiring up functions directly can be unwieldy, so can a slew of events wired up every which way.

For event pooling, it’s more important to name events after what has happened, rather than what is intended. So the correct approach is to go with things like “NAME_CHANGED”, “CONTACT_SELECTED”, or “EMAIL_CHANGED”.  The rationale has to do with the dependencies themselves: we don’t want functions called from random parts of the system via specific events- this is no better than calling functions directly from disparate parts of the system.  And with specific action related events like “ENABLE_SEND” you just know someone is going to take a shortcut and wire some random binder to a totally unrelated trigger, because that trigger is fired from the control they want to monitor.  Binders- and the functions they are wired to- should proactively “listen” for the things they’re interested in.  This allows you to easily know why a function is being called.  If you need to change why or how something is handled, you go to the recipient, not the caller.  The caller, after all, could be anything, and more importantly the caller could be triggering multiple things.

//Funnel all possibilities to single custom trigger
$('#name').bind('keyup enter', function() {
 $(document).trigger('NAME_CHANGED');
});

//Funnel all possibilities to single custom trigger
$('#email').bind('keyup enter', function() {
 $(document).trigger('EMAIL_CHANGED');
});

//I know why this is being called because I've subscribed
//to the events I'm interested in.
$(document).bind('NAME_CHANGED EMAIL_CHANGED', function()
{
 //Handle validation, etc.
});

The cool thing about this approach is there may be something else which will cause an email or name changed event to happen: specifically, when someone has selected a contact from the autosuggest list or a favorite link is selected.  You can fire the name_changed event from multiple places and not worry about wiring up any new triggers.  The code would look something like this:

//Fired from elsewhere
$(document).bind('FAVORITE_LINK_SELECTED', function()
{
  //Handle selection
  //Set Name, Email, etc.
  //Fire event:
  $(document).trigger('EMAIL_CHANGED');
  $(document).trigger('NAME_CHANGED');
});

How nice:  another part of the system changes the name indirectly, and we don’t have to worry about hooking anything up because the binder is already subscribed to the NAME_CHANGED event.

Note the name doesn’t necessarily correspond to a specific element: it’s not a textbox_name_changed event, nor are any specific html id’s involved except for wiring up a trigger from a standard keyup event.  This is an important difference: we could rename the id’s, or switch to some other input control, and not have to rewire everything.  We don’t care about the textbox nor that the textbox’s value has changed.  We care that the textbox represents the name entered or the email address- and we want to know when the name or email has changed.  The favorite link clicked is a good example of this nuanced difference.  Take a look at the following html:

<a href='#' id='fav1' class='favorite'>My Favorite 1</a>
<a href='#' id='fav2' class='favorite'>My Favorite 2</a>

Imagine we’ve wired the click event to fire a FAVORITE_LINK_SELECTED trigger.  With this trigger, we don’t care that the link with id fav1 or fav2 has been clicked, nor that the a.favorite selector has been clicked.  We don’t care about ids or css classes.  We care that a FAVORITE_LINK_SELECTED event has happened, because that’s what the fav1 id and favorite class represent- the favorite link- and we want to know when it has been selected.  We can rename the id, change the class, or even change the entire element.  As long as FAVORITE_LINK_SELECTED is fired we’re good to go.  Our custom FAVORITE_LINK_SELECTED trigger is the abstraction which creates the loosely coupled system.
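
For completeness, the wiring behind that trigger might look something like this- the selector is incidental; only the event name matters to the rest of the system:

$('a.favorite').click(function() {
    //Announce what happened; binders decide what to do about it
    $(document).trigger('FAVORITE_LINK_SELECTED');
    return false;
});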

Conclusion

One thing I haven’t discussed is how data is passed from trigger to binder.  It’s the other important dependency between triggers and binders, which we’ll cover in another post.  The important thing to take away is why you’d want to use custom named events to create a loosely coupled system.  For very straightforward pages it’s probably not worth the overhead- the abstraction isn’t required.  However, in a complex form where there are lots of validation dependencies, or many routes to the same function, or when multiple events can trigger an update- you want to use event pooling.  More importantly, you want to think about your strategy when it comes to naming events, because it’s the description linking the caller and method together.  You do not want to code yourself into a corner!

Help Your Business Be A Technology Company

It doesn’t matter what industry you work in- every company is a technology company.  At the end of the day a business needs to provide something somebody wants- quickly and efficiently.  This is- quite simply- driven by technology.  And you- as a software developer- are directly responsible for making that happen.  What will separate you from other developers is not how well you program- it’s how well you create the right technology to help your business work better.

Unfortunately most people don’t really understand technology nor how to use it.  I’m not talking about silly things like printing a pdf or syncing your Outlook calendar- I’m talking about how technology can optimize business processes.  How technology can both improve efficiency and quickly provide accurate, meaningful information. How technology can be the engine keeping your business two steps ahead of everyone else.

Luckily you have an advantage to accomplish this goal: you know how to create technology.  The important thing, however, is knowing both what the business is and what it needs, so you can help make it work better.  Building software isn’t nearly as important as knowing what to build.

Don’t just Program-  Provide Solutions

Don’t be the type of heads-down programmer that wants to be told what to do just so they can code.  People that wait around for requirements and specs are a waste of time and energy.  Everything has to be articulated to a T, then assumptions and misunderstandings lead to a bad deliverable which can sink a ship.  Yes, it’s important to code and code well, but all too often code eclipses the bigger picture.

As a programmer, your job is to create something to fill a gap.  You need to understand why that gap exists so you can figure out the best way to fill it.  You’re the programmer- you literally create the solution.  You know what’s possible and what’s not- what’s easy and what’s hard.  And if you’re experienced you know the consequences of decisions.  So don’t blindly accept solutions.  Spend time understanding the problem- come up with the solution- and then work with people to figure out if your solution will solve the problem.

Lead by Listening

Coming up with the solution should not equate to arrogance.  The world, unfortunately, does not revolve around you.  Despite how annoying your users may seem, they are your responsibility and you have an obligation to them.  Nobody likes to be told what to do or have something shoved in their face.  Yes, it is a lot easier for you to just do it yourself and say “here”.  Unfortunately, for everybody else, that way sucks.  You need to lead by listening: sit with people, listen to what they have to say, and show that you get “it”.

If you listen well you’re in a better position to explain why you’re making the decisions you’re making.  Work with users: build consensus and understanding.  If you have a business advocate who’s on board with your plan, you have a powerful ally to help create the change you want.  Bounce ideas off of people- and if you think an idea is bad, say so, but always provide an alternative solution.  Being negative and impatient is the quickest way to alienate yourself and have the best idea shot down.

Change the Process

You can only make a car go so fast- at some point, you need to fly.  So don’t just help the driver drive; help them get to the destination.  Efficiency is key, but it’s found in unusual ways.  People want to get their job done as quickly as possible with the least amount of annoyance.  Helping them achieve this is often a good thing, but beware: making them do their job better could make a lot of other stuff worse.  It can’t just be about speeding up the process- the process may need to change.  It’s up to you to see the big picture and help everybody, not just a single person.  It’s just like scaling a system: at some point the current process will hit a limit; you need to know what that limit is, when it will happen, and what the hell to do next.

Usually what’s being asked for is different than what’s actually needed.  You need to create an “ah ha” moment- when you can take all the frustrations and problems of your users, combine them into one solution, and have everyone say “yes, that’s perfect”!  Have a vision.

Roadmap

Sadly, you always have two things against you: time and patience.  Nobody has either.  You can’t drastically change a user’s day-to-day activity.  Imagine if all of a sudden we had to start driving on the wrong side of the road- total chaos would ensue.  It’s the same thing with drastically changing an application- for the end user, it’s hard for the long term benefit to outweigh the short term difficulties.  It’s also a horrible idea to wait a long time for your great solution to be delivered: by that time, it’s no longer relevant, and everyone’s forgotten why it’s important.

Small, incremental changes are key: you need a roadmap to the goal.  That’s why we have agile.  You need to shepherd the herd to the promised land, and constantly remind them of the goal.  As new requirements come up and new requests come in you need to make sure everything is fitting into that big picture.  People may even come in with quick one-offs as stop gap measures- the biggest setback for moving forward.  You need to be able to say “Here’s why this is bad” and provide better alternatives. If something must happen quicker, prioritize, refactor and meet half way.  Agile development has numerous benefits:

  • You can provide iterative releases.  People can see progress unfold.
  • You get in refactor/response cycles.  It’s amazing how excited and appreciative people are if you make a change they ask for- no matter how small.  If something isn’t working as envisioned, and gets changed quickly, you are a rock star.
  • You can deliver faster: software can be used when it’s usable, not when it’s finished.
  • Things can be prioritized.  When you can’t do something somebody wants, it’s not a surprise.  If you communicate well they know why it didn’t happen.
  • The learning curve upon delivery is minimized.  People know what they’re getting and how it works.  After all, they were part of the design.

The Deliverable is not the Release

Good software is something that people like using.  At the end of the day, it doesn’t matter which design patterns were used, how cool your ajax widget was, or even how many bugs the product has.  Writing code well does not equal a good product.  If it doesn’t help somebody do something better you wasted a lot of time- particularly your own and even worse your users.  That’s why time to market is so essential- it gets the product used sooner so reactions can be gauged and adjustments made.  Bells and whistles are nice to haves, but at the end of the day you need to ask yourself how important was it really?  Would a simple web 1.0 form have been better? Would it have been easier to use and quicker to develop? Is this thing that I’m doing actually solving a problem?

All of these questions equate to a simple rationale: am I giving my business what it needs? The better you are at answering that question the better you’ll be at programming the solution.

HDR Photography

fox glacier sunset at lake-24, originally uploaded by mhamrah.

I’ve been playing around with HDR photography after coming back from an incredible vacation to New Zealand. This photo came out especially well.

AppStorm Giveaway!

AppStorm is an incredible site which reviews Mac software. I’ve followed their reviews for a while and it’s made my Mac 8000% more awesome. They frequently give away software- currently copies of Flux and Forklift are up for grabs- so check it out!