May 29 2009
I came across this great site for the Ruby on Rails API: http://railsapi.com
Here’s a link to Rails 2.3.2 with Ruby 1.9. Why is this great? It has a slick interface with a fast AJAX-powered search filter. From a UI perspective, it makes great use of whitespace and font sizing to present information clearly and legibly. Finally, the simple color scheme is easy on the eyes. It also includes links to the source and github.
Bookmark it!
May 24 2009
I’ve been playing around with rspec in my application, which also uses Authlogic’s OpenID extension. When doing
rake spec
I get the following, very weird error:
(in /Users/Michael/r/wc)
/Users/Michael/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:1964:in `method_missing': undefined method `config' for #<Class:0x263efdc> (NoMethodError)
from /Library/Ruby/Gems/1.8/gems/authlogic-oid-1.0.3/lib/authlogic_openid/acts_as_authentic.rb:35:in `optional_fields='
from /Users/Michael/r/wc/app/models/user.rb:3
from /Library/Ruby/Gems/1.8/gems/authlogic-2.0.13/lib/authlogic/acts_as_authentic/base.rb:37:in `acts_as_authentic'
from /Users/Michael/r/wc/app/models/user.rb:2
from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
As seen above, I’m using version 1.0.3 of the gem. For the life of me, I couldn’t figure out what was wrong. What was even stranger was that the code worked fine when actually running the site. It turns out config got renamed to rw_config, and the gem repository at rubyforge hasn’t been updated to the latest 1.0.4, which includes this change:
Change from using config to the new rw_config. Requires the latest version
of authlogic. config was too general of a name and was causing conflicts
for people in their own projects.
So I edited the acts_as_authentic.rb file to rename config to rw_config, and everything worked fine.
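The edit itself is tiny- something along these lines inside the gem’s acts_as_authentic.rb (the exact method and option names may differ from what’s shown here; the pattern is just renaming the call):
def optional_fields=(value)
  rw_config(:optional_fields, value) # was: config(:optional_fields, value)
end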
I’d love to find a good reference on how rspec loads gems, and why this isn’t an issue when running the site. I’m also interested in knowing how to pull a gem from github source (the openid gem isn’t at gems.github.com).
May 12 2009
If there’s one thing I can’t stand in the development process it’s writing code to save data. In fact, there are only a few things which I’d consider more useless than dealing with data persistence- one of them being data migration. I hate dealing with persistence because it’s totally mundane and repetitive code. Worse, nobody outside of IT really understands the details of persistence. Which means it has no business value. And why should users care about unit of work or active record vs. repository? Should saving an object really be that complicated of a task? Absolutely not. Which is why no one cares. It’s like caring about how your beer was delivered. Really, I just want to drink the damn beer. Yet somehow the thing which should be a no brainer takes the most time and effort to do. And causes the most debate. And causes the most problems. However, I recently got my first glimmer of hope- a cloud with a silver lining- that maybe, just maybe, someday, we’ll actually have simple data access.
We as a development community have been transfixed on data access strategies. It’s the great elephant in the room, and has totally consumed all our attention. It’s distracting. Only a .NET/Java language war in ’04 could rival such a fierce debate as between Ayende and Greg Young over the repository pattern. (My two cents: ActiveRecord. Just kidding, ActiveRecord sucks (not SRP). Use Subsonic– which lost me at “drag and drop”, but I would not want to piss off Rob Conery).
Why do I hate persistence? It’s never really gotten any better. The first major advancement came with sprocs. Everything had to be a sproc, and if you weren’t using sprocs then you just weren’t cool. You can do these great things with sprocs- like writing select statements with inner joins. But not in your vbscript! This mostly led to awesome interview questions like “why are sprocs important” and DBAs who would whine about developers writing bad sql (guess what- they can write bad code too)!
Next, and silently from the Java community, we were given NHibernate and the concept of an object relational mapper. Suddenly, an awakening occurred, and we as developers could now stick it to those DBAs and their precious sprocs. Try changing my auto generated sql, you bastards! Unfortunately, the NHibernate craze did not take off. We gave up sprocs for those awful configuration files- as if things could get any worse! I personally have a big chip on my shoulder because NHibernate comes from the Java community. Java, after all, just sucks (sidenote: this is a totally baseless claim). Fluent NHibernate doesn’t make it that much better: I’m not digging Cascade. All(). Inverse(). WithForeignKey(). WithIndex(). WithConvention(). ButNotForThisClass(). How. Many. Can. I. Add() and double code associations for parent/child relationships etc. Not a big win. But at least we can rub Fluent NHibernate in Hibernate’s xml hell! Yes, there’s the auto mapper configuration which I admit is slick- but you’re kind of locked into the default convention (over configuration). Stray from the path and you’re back in the configuration quicksand. And don’t even get me started on the NHibernate query syntax. Yes, there’s Linq to NHibernate, but why use that when you can go directly to the best thing to come out of Microsoft since the Steve Ballmer video.
Finally there was a breath of fresh air, and Microsoft was awesome again. We were given Linq to Sql. But when Microsoft realized they did something cool, they quickly pushed for Linq to Entities (oh wait- Entity Framework- Linq to Entities sounds like the cool Linq to Sql). We were quickly back to the WTF relationship we have with Microsoft. If you want to know why EF sucks, just try using it. Or check out this good series of articles about l2s vs. l2e. I gave up on Entity Framework when I tried setting up an auto generated db value on an insert. It’s a pain in the ass and not nearly as simple as Linq To Sql’s “This is an auto generated property” property.
What are we left with? Just the endless data access debate. Which in a way is good for us because it keeps us from actually delivering something our users care about- like an app that saves data. (Um, why can’t I change the customer address and overcharge them for adventure works gear at the same time??!?!?!)
So where’s the glimmer of hope? Azure Table Storage- which might put us back in the Microsoft is cool again category. Don’t worry Microsoft- you only support sorting on one field (“Choose… Choose Wisely”), so I’m sure you can still spark lots of criticism. But please give large bonuses to the people behind ATS. Why is it so great? The same reason Linq to Sql is great- it just works. ATS with ADO.NET Data Services is about as easy as you can do persistence without actually having to do it. (Side note: I’m ignoring object databases). But you really have true POCO storage. No messy attributes, no base classes, no schema generators, no xml configuration files, no .edmx files. You have an object- it has public getters and setters- and you can save it. That’s all you have to do. The catch? A row key and a partition key (Sidenote: I’m ignoring the whole deploy/run cloud part). I could live without the partition key, but you’re not really asking for much. What else? It’s built on Linq- and Linq, as we all know, is awesome (just like sql, but no strings!). (Sidenote: more criticism fuel: not all Linq operators are supported).
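To make that concrete, here’s a minimal sketch of what saving looks like- the Order class, table name, and endpoint are all hypothetical, and a real ATS account also requires signed requests, which I’m omitting:
using System;
using System.Data.Services.Client;
using System.Data.Services.Common;

// A plain POCO. The only storage requirements are the
// PartitionKey and RowKey string properties; the attribute
// below just tells the client what the keys are.
[DataServiceKey("PartitionKey", "RowKey")]
public class Order
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string CustomerName { get; set; }
    public double Total { get; set; }
}

public class Program
{
    public static void Main()
    {
        // Endpoint is illustrative; real requests must be signed with your account key.
        var ctx = new DataServiceContext(new Uri("http://myaccount.table.core.windows.net"));

        var order = new Order
        {
            PartitionKey = "customer-42",
            RowKey = Guid.NewGuid().ToString(),
            CustomerName = "Jane",
            Total = 19.99
        };

        ctx.AddObject("Orders", order); // table name plus entity- that's it
        ctx.SaveChanges();
    }
}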
What else? ATS is also schema free, so no more entity/table mappings. Just save an object. In fact, save different objects in the same table. Like parent/child classes. Together. Like an aggregate root. Then search on any field (sorry, still no sorting!). That’s all you have to do. And did I mention there’s no schema migration? Because there’s no schema! We’re free! Freedom! Sure, it’s not the first schema free storage solution- but it’s the only one backed by ADO.NET Data Services. And that’s the special sauce which makes it yummy!
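A query sketch against the same hypothetical context- filter on whatever property you like (again, only a subset of Linq operators are supported):
// CreateQuery takes the table name; the where clause can hit any property.
var janesOrders = from o in ctx.CreateQuery<Order>("Orders")
                  where o.CustomerName == "Jane"
                  select o;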
Yes, ATS isn’t perfect. In fact it’s far from perfect. But it’s a great step in the right direction. NHibernate, Linq to Sql, EF, and sprocs are all usable tools (sorry, didn’t mean to include EF). There are some great 3rd party toolkits too, but if I have to pay for data access I’d like you to just do it for me. There has been a great effort in the open source community (Fluent NHibernate is a huge win) to make this stuff easier, but the bottom line is you’re doing double programming: you’ve got your db and your code, and all that stuff that sits between is just crap. Even with Fluent NHibernate you’re just shifting your db programming to configuration. At the end of the day doing data access well isn’t even a win because the user still doesn’t have a damn form! ATS is different- it’s just really slick. It’s scary slick. Yes, it’s not ready for prime time, but it’s a huge step in the right direction.
May 11 2009
So, I’m working on a Rails app and I want to use OpenID (and only OpenID) for authentication. I was going to use Restful_Authentication with the open_id_authentication extension, but then I saw Ryan Bates’ Railscast on Authlogic. Authlogic has an OpenID extension which looked perfect for my needs, and Authlogic seemed like a great gem for authentication. My goal was simple: I wanted to support, and only support, authentication via OpenID. None of this username/password/salt stuff.
Now, I should warn you: I HAVE NO IDEA WHAT I AM DOING. I’m just getting started with Rails so I hit a few bumps getting this working. I learned a lot along the way, so here’s a quick rundown of my adventure with Authlogic and OpenID on Rails.
First, getting started with sample code is always a good idea. Authlogic has an example app on github where you can check out how to use the gem. After pulling down the code, I was surprised I couldn’t find anything relating to OpenID. It turns out, the master repository doesn’t have the OpenID example. But there’s a branch of the master example that does have the OpenID functionality. Here’s how to get it:
First, clone the Authlogic example like so:
git clone git://github.com/binarylogic/authlogic_example.git
Then, cd into the new authlogic_example directory and check out the OpenID branch like so:
git checkout --track -b authlogic_with_openid origin/with-openid
Then, you can explore away. My approach was simple: using the OpenID example as a guide, I’d start a new app and add OpenID support from Authlogic.
First, I need the required gems. Authlogic and authlogic-oid are obvious, but I also need ruby-openid. Authlogic-oid is built on the rails plugin open_id_authentication, which in turn is built on ruby-openid.
sudo gem install ruby-openid
sudo gem install authlogic
sudo gem install authlogic-oid
You can also set these in your environment.rb file, like so:
config.gem "authlogic"
config.gem "authlogic-oid", :lib => "authlogic_openid"
config.gem "ruby-openid", :lib => "openid"
Note: I got thwarted when I tried running the app using the config.gem approach. I kept getting an “Uninitialized constant Authlogic” in User_sessions.rb when I ran the app. It sucked! It took me a while to figure this out, but it was a horrible beginner mistake. I was doing rake gems:install instead of sudo rake gems:install. So when I ran my app, the proper gems weren’t in the right place. Ouch!
Next, it’s time to install the open_id_authentication plugin which is required by Authlogic-oid. It’s not available as a gem.
script/plugin install git://github.com/rails/open_id_authentication.git
The next step is to create the necessary migration scripts for the open_id_authentication plugin.
rake open_id_authentication:db:create
Another beginner mistake: I got a failure when I tried doing open_id_authentication:db:create before I installed the Authlogic gem. Something about “acts_as_authenticated” wasn’t there. So the order of installation is important!
Next, use the generators to create the user_session and user models. I’m pretty much following the instructions from the Authlogic github pages at this point:
script/generate session user_session
script/generate model user
Another beginner mistake: if you’re making changes to your migration files without creating a new migration, make sure your schema is correct. I don’t know the best way to do this except by using rake db:drop, rake db:create, and rake db:migrate. This was due to an error I was seeing in my view: “undefined method :openid_identifier”. I had a text_field in my form for the openid field, and I had the openid_identifier field in my model. The problem? It wasn’t in the database schema so Rails couldn’t do its thing to make it a property of the user model and render the textfield correctly.
From there, the views and controllers are pretty much the same as the example. My user migration is also pretty slim:
class CreateUsers < ActiveRecord::Migration
  def self.up
    create_table :users do |t|
      t.string :email, :null => false
      t.string :persistence_token, :null => false
      t.string :openid_identifier, :null => false
      t.datetime :last_request_at
      t.timestamps
    end
    add_index :users, :openid_identifier
    add_index :users, :persistence_token
  end

  def self.down
    drop_table :users
  end
end
Now, when I put everything together to run the app, I got some weird failures. When trying to log in using my OpenID, I got a nasty error:
undefined local variable or method 'crypted_password_field'
Not having a crypted_password field made sense, seeing how I wasn’t supporting login or password fields. But I wasn’t expecting the missing field to be an issue. In fact, from the docs the only required field on the user model is :persistence_token. So what’s going on?
Well, it turns out the Authlogic OpenID extension is designed to work with a login/password by default. I had to dig through the stack trace and source code, but was able to figure out what’s going on. There’s a method called attributes_to_save which is responsible for persisting form fields across the OpenID process. By default, attributes_to_save includes password related information. It treats the crypted password and password salt fields a little differently, which causes a problem when you don’t have a :crypted_password attribute on your model. The solution is simple: just override the method with one which doesn’t include the password fields. The user model will look like this:
class User < ActiveRecord::Base
  acts_as_authentic do |c|
    def attributes_to_save # :doc:
      attrs_to_save = attributes.clone.delete_if do |k, v|
        [:persistence_token, :perishable_token, :single_access_token, :login_count,
         :failed_login_count, :last_request_at, :current_login_at, :last_login_at,
         :current_login_ip, :last_login_ip, :created_at, :updated_at,
         :lock_version].include?(k.to_sym)
      end
    end
  end
end
The second mistake came from getting ahead of myself. In the example app there’s code in the application.html.erb file rendering a login/register link or user info based on the current_user method in the application controller. I was getting an error:
unknown method 'logged_out?'
which was occurring deep in the Authlogic codebase. The problem was I didn’t do a good job copying everything I needed from the example files! The authlogic example project used the
logout_on_timeout true
setting on the UserSession model. After digging through the documentation, I found this callback relies on the
<pre class="brush: ruby; title: ; notranslate" title="">
t.datetime :last_request_at
<p>
</span>
</p>
<p>
<span class="n">field on the user model, which I didn’t have at the time. And not having this field was throwing everything off. (The best part was the documentation clearly states you need that attribute on the model for the callback to work correctly.<br /> </span>
</p>
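For reference, the session model from the example branch boils down to something like this (a sketch- check the branch for the exact contents):
class UserSession < Authlogic::Session::Base
  logout_on_timeout true # relies on last_request_at existing on the users table
end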
Lesson learned: always know what you’re doing. (Also: don’t be afraid of source code.)
The whole process has really made me appreciate Authlogic. It’s very extensible and extremely easy to customize. If you know what you’re doing it’s pretty slick- the best way to figure it out is by reading the documentation and playing around with some code.
Good luck!
Mar 11 2009
One of the biggest hurdles in transitioning to scrum is getting everyone- especially the dev team- on the same page. Sprint planning and even scrum meetings can be painfully boring and seem useless. There’s a lack of understanding of what to do, communication is either too little or too much, and a few people bad-mouth the process and continue their heads-down development. You’re left wondering what the point is- thinking it’s never going to work. Well, have faith: proper task management is key in making scrum work for you and your team. Task management is an aspect of scrum that’s entirely contained within the dev group, which makes implementation a lot easier.
Have a Purpose
The key is that you- the scrummaster- need to provide an environment (like a framework) for scrum. If you think simply sending some scrum info around, meeting every day, and keeping a backlog will magically make everything better, you’re wrong and you’ll fail. You’ll also make everyone miserable. The reason “it gets worse before it gets better” is that it takes time to transition the team into the scrum process. The key to success is facilitating that transition as quickly as possible, and the solution is having a purpose for everything you do.
That purpose comes via two simple rules which will help you and your team transition to scrum faster:
- All developers should work on the same backlog item.
- No task item should take more than four hours.
Think of these rules as grease in the wheels of scrum. It keeps the machine moving.
A Quick Refresher
A backlog item and task item are two different things. A backlog item is something that’s business focused. It’s specifically created by the product owner for the development team to implement. It’s not, and shouldn’t be, developer focused in any way. A task item, on the other hand, is developer focused. It’s a list of things needed to be done to finish a backlog item. Task items are created in either a sprint planning or a developer meeting at the start of a sprint. They can also be refactored in the middle of the sprint.
A crucial step in scrum is having the team gauge complexity of backlog items. This requires developers to have an understanding of what needs to be done from a user perspective- which is not something developers are usually good at. Without understanding what needs to be done, they don’t know how to do it, and they can’t tell you how long it will take. Another popular excuse is “I need to start coding to figure this out”.
The key is to make developers think and plan first so there are fewer variables mid sprint. This is not mini-waterfall. It’s strategy.
How the Rules Help
Having everyone work on the same item is essential for scrum. Why? If people are working on different items there’s little reason for them to communicate. If Developer A is working on feature X, and Developer B is working on feature Y, there’s no reason for A or B to care what the other is doing. In fact, it’s a waste of time. However, if both A and B are working on feature X, there’s every reason in the world for the two to talk. Not only talk, but collaborate. It’s very XP without the side by side awkwardness. The major benefit is the entire team is focused on the highest priority item- which means something will get completed in the sprint. By working on single items together each item gets finished faster. QA has more time to test- and they do it one at a time rather than dealing with multiple things coming together at the end of the sprint.
Break Things Down
The four hour rule is key to facilitating working together. By breaking down every task into small pieces, more analysis is done on each backlog item. The trick is each task is a single thing which can be done and checked in- a class creation, a new method- all with unit tests. The benefits materialize in different ways. First, sprint planning becomes much more effective as communication occurs via the process of breaking down items. Analysis is done and development ideas are thought out and debated amongst team members. A roadmap is created outlining the steps needed to complete each item- everyone is in the know and on the bus.
Why four hours? Easy: in theory, a developer should be able to get two four-hour tasks done between scrums. A developer should never be in a position to say, “I’m working on X, and I’m still planning on working on X”. (Unless, of course, they fall behind- in which case, break down the task again). The point is you want to focus on granularity. If you have to do a controller, a task of “Write a controller” is not adequate. What are you doing? What actions are needed? What dependencies are there? Are you using ModelBinders, or method parameters? Where’s the error handling, if any? The API and code structure evolves out of granular task items- and the whole team is part of the process, everyone on the same page as to what needs to get done. This level of discussion, collaboration, and task tracking are what you’re looking for.
Of course, it’s hard to get the entire team working on backlog items when they’re small. It’s really your judgment call- but in my experience, there are only a few tasks which are too small to be worked on by more than one person in less than four hours. The end of a backlog item is the most difficult part, when it’s down to a lot of little to-dos. But the things which take longer than they should are always the items worked on by one person- which usually aren’t properly tasked out. The bottom line is there’s a constant cycle of breaking things down and redistributing.
TDD
A key factor to success is TDD. TDD allows isolation of the moving parts. In this way, a developer can work on a class, a method, or a service without having to worry about dependent objects. The task description and sprint planning identify what the moving parts are, and how the moving parts fit together. TDD is the environment to implement those moving parts, and make sure expectations are being met. You’ll usually end up rewriting task items mid sprint. That’s okay. Sprint planning is really about establishing guidelines for development. TDD is where the code and project materialize, and you need to change course when necessary as the project evolves.
Unit tests provide a great way to review code. If you’re working on a data layer for a business object, the business object developer can look at the data layer tests and see all the available usage. Little time is spent “figuring it out”. And if the business object developer’s mock objects don’t match the concrete ones, somebody wasn’t on the same page. However, with small, iterative tasks, taking a step back and refactoring is a non-issue.
A Better Perspective from Above
A great result of this process is the development lead or architect has great insight into how the application is shaping up. During sprint plannings, guidelines can be set on how to implement backlog items. If something isn’t tasked out correctly, point out the issue and discuss. Dependencies can be identified and spun off into new classes. When issues arise during scrums, refactoring can happen and new directions can be taken- all in the presence of senior devs, which minimizes one-offs and crazy spaghetti code. BTW, writing spaghetti code in this setup is nearly impossible- it’s hard to make a mess in less than four hours when you’re only doing one specific thing and everyone is watching.
Mentoring
This process works great for junior developers- they can learn from the entire team. Similar to an architect having insight into the system, a junior developer has better access to seasoned developers. One-on-one time is reduced in favor of sprint planning and scrums. Their work can be evaluated when code comes together for a backlog item. Any developer is capable of answering a question because everyone is working on the same thing.
Cleaner Code
Code reviews happen naturally when code is shared for feature development. Consider the usual application stack: UI, Controller, Application, Domain, Data Layer. This stack is no longer in the hands of one person, who either blurs the line between layers or creates a million helper classes. Code review happens when the pieces get fused together. If the puzzle pieces don’t fit, refactor. They usually will fit because everyone is clear on task items- because they were small, granular, and created together.
No one person is left to the whim of their own devices- the expectation is clear and apparent to the team during scrums. And the best developers have a better surface area for showing off, being exposed to the entire application and not relegated to a single feature. You’ll also get better exposure to cross functional teams, like UI. When the Javascript developer and the backend developer are in the same room, creating the same roadmap, knowing what needs to be done, development is truly fluid.
Conclusion
Scrum, at its core, is really about structure. Each part is an important gear on the clock which helps the hands move smoothly. A key to getting the team working in that structure is setting up an environment that supports it. Task management is key for managing developers and the sprint. The four hour task breakdown and single item focus are two tools for managing complexity and fostering communication. The end result is the following benefits:
- Better gauge complexity of backlog items.
- Complete work in priority order.
- Fully meet Got Done criteria at end of sprint.
- Understand where things are with development mid-sprint.
- Understand why things are delayed with development mid-sprint.
- Improve cohesion across development. Specifically, enforce development standards and architecture guidelines, write clean code, and minimize hacks and bugs.
- Top down visibility for architects, bottom up visibility for developers.
Remember, task management is only part of the entire scrum process. In order for scrum to be effective, everything must fall into place. But task management will have an immediate benefit for you and your development team.
Mar 2 2009
You’ve seen this problem before- you deploy a new version of your website but the style is off and you’re getting weird javascript errors. You know the issue: Firefox or IE is caching an old version of the css/js file and it’s screwing up the web app. The user needs to clear their cache so the latest version is pulled. The solution: versionstamp your include files!
Take a lesson from Rails and create a helper which appends a stamp to your include files (and takes care of the other required markup). It’s simple- embed the following code in your views:
<%=SiteHelper.JsUrl("global.js") %>
will render
<script type="text/javascript" src="http://blog.michaelhamrah.com/content/js/global.js?33433651"></script>
The browser will invalidate the cache because of the new query string and you’ll be problem free. Version stamps are better than timestamps because the version will only change if you redeploy your site.
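The CssUrl helper in the code below works the same way: <%= SiteHelper.CssUrl("site.css") %> renders a link tag carrying the same stamp.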
Here’s the code, which is based on the AppHelper for Rob Conery’s Storefront MVC:
using System.Reflection;
using System.Web.Mvc;

namespace ViewSample
{
    public static class SiteHelper
    {
        private static readonly string _assemblyRevision =
            Assembly.GetExecutingAssembly().GetName().Version.Build.ToString() +
            Assembly.GetExecutingAssembly().GetName().Version.Revision.ToString();

        /// <summary>
        /// Returns an absolute reference to the Content directory
        /// </summary>
        public static string ContentRoot
        {
            get { return "/content"; }
        }

        /// <summary>
        /// Builds a CSS URL with a versionstamp
        /// </summary>
        /// <param name="cssFile">The name of the CSS file</param>
        public static string CssUrl(string cssFile)
        {
            return string.Format("<link rel='Stylesheet' type='text/css' href='{0}/css/{1}?{2}' />", ContentRoot, cssFile, _assemblyRevision);
        }

        /// <summary>
        /// Builds a js URL with a versionstamp
        /// </summary>
        /// <param name="jsPath">The name of the js file</param>
        public static string JsUrl(string jsPath)
        {
            return string.Format("<script type='text/javascript' src='{0}/js/{1}?{2}'></script>", ContentRoot, jsPath, _assemblyRevision);
        }
    }
}
Feb 27 2009
Sure, you can use the !important css keyword in your stylesheets to override inherited styles, but what about when you want to simply wipe out or remove a style defined in a parent? Use the auto keyword.
Let’s say you have a style defined in a global.css sheet defining a position for a class called menu:
ul.menu {
position:absolute;
left: 10px;
top: 10px;
}
This will position a ul element 10 pixels away from the top left corner of the parent element. But what if you want to move that menu to the right side of the parent element, and you can’t change the defined style in global.css?
You create a new stylesheet or define an inline style, and override the ul.menu class by specifying
ul.menu {
right:10px;
}
But this simply stretches the menu to the other corner. We need to override the left:10px; style with the default style (as if we never specified the left:10px; to begin with). left:none; won’t work and left:0; won’t either. The solution is the auto keyword:
ul.menu {
right:10px;
left:auto; /* removes left:10px from parent */
}
Voila, you’re done! It’s as if the left:10px was never defined!
Feb 25 2009
After working on an ongoing ASP.NET MVC project for a couple of months, I’ve learned a few lessons when it comes to dealing with views. Keep as much logic out of the views as possible! This can be tricky because it’s so easy to let code sneak into your views. But by following the tips below you’ll be able to keep your logic organized and your views clean.
1) Follow the Rails pattern of having a Helper class for each controller. The helper class deals with html snippets or formatting functions on a per controller level.
Pushing out bits of code like date formatting into helper classes not only cleans up the views, but aids in testability. Logic is consolidated in one place simplifying maintenance. Creating Helpers on a per controller level also creates a namespace where functionality can be found intuitively.
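As a sketch, a helper backing a hypothetical PhotosController might look like this (the Photo model and all names here are illustrative):

using System;

namespace MyApp.Helpers
{
    // Illustrative model- stands in for whatever your controller renders.
    public class Photo
    {
        public string Name { get; set; }
        public DateTime Date { get; set; }
    }

    // One helper per controller: PhotoHelper backs the views served by PhotosController.
    public static class PhotoHelper
    {
        // Date formatting lives here, not inline in the views- and it's testable.
        public static string ShortDate(DateTime date)
        {
            return date.ToString("MMM d, yyyy");
        }

        public static string Caption(Photo photo)
        {
            return string.Format("{0} ({1})", photo.Name, ShortDate(photo.Date));
        }
    }
}

A view then calls <%= PhotoHelper.Caption(photo) %> instead of formatting inline.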
2) Don’t overload the Html helper with one-off functions. Organize functions into their respective helper classes.
It’s so easy to want to add extension methods to the HtmlHelper class. It’s like a siren crying out “do it, do it”. This is bad! Extension methods are good for some things, but can easily garble an API. The HtmlHelper class should be reserved for rendering core html elements. As an alternative, leverage helper classes you create on a per-controller basis.
//These shouldn't belong to HtmlHelper!
public static string ShowSomethingOnlyForHomeController(this HtmlHelper helper) ...
public static string RenderDateTimeNowInSpan(this HtmlHelper helper) ...
3) Leverage partial views for iterative content, even if it’s not reused elsewhere (hint: it probably will be eventually).
So instead of:
<% foreach (var photo in photos) { %>
    <%= photo.Name %>
    <%= photo.Description %>
    <%= photo.Date %>
<% } %>
use:
<% foreach (var photo in photos) { %>
    <% Html.RenderPartial("~/Views/Photos/PhotoTemplate", photo); %>
<% } %>
This will greatly minimize the amount of markup scattered throughout a view, keeping view files focused on a specific task (just like good class design). It’ll also make version control easier because changes will be isolated to their respective file, allowing someone to better see what changes happened where. It also eliminates excessive merging.
4) Organize partial views within their respective controller, not a shared folder- even if it’s a global skin.
Dumping partials within a shared folder can cause overcrowding and jumbling in the long term. Prefer organization and grouping of related content (again, just like good class design).
5) Prefer a strongly typed view and leverage specialized ViewData types for aggregating random models under one root.
It can be easy to dump stuff into the ViewData hash. However, prefer using a custom ViewData class instead. It’s easier to know what data is available for the view. This is a lot more intuitive than rummaging through a controller, which happens when teams share code. Also leverage the null object pattern for properties to avoid having to do null reference checks in the views.
Instead of:
ViewData["Title"] = "My Photos";
ViewData["Photos"] = myPhotos;
ViewData["User"] = currentUser;
return View();
use:
//Title and User can be properties of a base view data class.
var vd = new PhotoListViewData()
{ Photos = myPhotos, Title = "My Photos", User = currentUser };
return View(vd);
//Sample null object pattern (always returns a valid object, so no if null or Count == 0):
private List<Photo> _photos = new List<Photo>();

public List<Photo> Photos
{
    get
    {
        if (_photos == null) _photos = new List<Photo>();
        return _photos;
    }
    set { _photos = value; }
}
6) Minimize code snippets in views.
Code snippets occur in many ways. They range from formatting a date, to string concatenation, to doing a grouping/projection to get the data right. Doing this in a view easily leads to bugs and isn’t testable. Any data processing specifically for views should occur in the controller action, the custom ViewData object itself, or in a helper class. Code in views should be limited to looping, calling a property, or calling a single method. Anything more than that will get you into trouble!
7) Leverage RenderAction or MvcContrib’s SubController for dealing with shared, isolated functionality that isn’t specific to a single view.
Unfortunately, as of RC1 there still isn’t a great way to deal with aggregating disparate content in an action. You’ll need to resort to RenderAction in the Futures dll or use MvcContrib’s SubController. The point is the same- keep actions specific to what you’re doing. If you need to aggregate disparate content in a view (like a menu in the header, or a shopping cart), offload the functionality into its own action and call RenderAction, as sketched below. Having actions do multiple, random things leads to messy code. Prefer a single point of entry into supporting view content.
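A rough sketch of the RenderAction approach, assuming the Futures assembly is referenced- the controller, action, and menu items here are all hypothetical:

using System.Web.Mvc;

// An isolated entry point for the shared header menu- nothing else leaks in.
public class NavigationController : Controller
{
    public ActionResult Menu()
    {
        var items = new[] { "Home", "Photos", "About" }; // stand-in for a real lookup
        return View(items); // renders the Menu partial view with the items
    }
}

// And in the master page:
// <% Html.RenderAction("Menu", "Navigation"); %>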
Good luck and share your tips!
Feb 25 2009
In my previous post on understanding TDD I discussed how to analyze existing code for creating unit tests. This was somewhat of a “reverse TDD”, the idea being to look for what needs to be tested- the relationship between what one class expects and what another class actually does. Unfortunately, I stopped short of actually implementing those tests- which is the subject of this post.
As a quick refresher, our demo app has some simple functionality which gets an order from a data tier and then uses a shipping service to ship the order. You can download the sample app here.
Prefer Mocking Over Stubs
Had I written this post about two months ago, I would have favored creating a suite of stub classes to mimic the expected behavior of our interfaces. The approach would involve creating various stub classes for our dependent interfaces- the IShipmentService and IOrderStorage- and plugging those stubs into the various unit tests. In the simplest sense these would be hard-coded classes to do what I wanted- from throwing exceptions to hard-coding return values. Probably even using some fancy random number generation for ids. However, I’m learning from a current project that the stub approach isn’t the best route to take.
The Problem, Distilled
Stubbing is definitely easy, but the code quickly spirals out of control. With stub classes, especially those backed by interfaces, more classes are created which then need to be managed and supported. In an evolving software project this management creates added infrastructure which gets in the way when making changes to the original functionality. An interface change, for example (either adding a new method or refactoring parameters), causes numerous changes to be made in stubbed-out classes. What’s worse, the behavior of a stubbed-out class can easily diverge from the behavior it’s supposed to represent. True, this can happen with mocking, but as we’ll see mocking is much more granular and can be tailored to specific circumstances with minimal code duplication.
There’s simply no need to create extensive, or multiple, stub objects with all those great mocking frameworks around. Most mocking frameworks offer a ton of features providing granular control over expectations, return values, parameter verification, and method invocation checking. I strongly recommend reading Martin Fowler’s Mocks Aren’t Stubs article for more information on the difference between mocks and stubs.
Using Moq to Mimic Behavior
We’ll use Moq 3 to demo what you can do with mocking. Let’s start off with our first unit test, OrderShipmentManager_ShipOrder_Returns_New_Shipment_With_Shipment_Items. We simply want to call our ShipOrder function and ensure we get a shipment back with shipment items. To begin with, we’ll use Moq to mock our IOrderStorage and IShipmentService so we can construct our OrderShipmentManager.
[TestMethod()]
public void OrderShipmentManager_ShipOrder_Returns_New_Shipment_With_Shipment_Items()
{
    var mockOrderStorage = new Mock<IOrderStorage>();
    var mockShipmentService = new Mock<IShipmentService>();

    var osm = new OrderShipmentManager(mockOrderStorage.Object, mockShipmentService.Object);
    var shipment = osm.ShipOrder(5);

    Assert.IsNotNull(shipment, "Returned Shipment was null");
    Assert.IsInstanceOfType(shipment, typeof(Shipment), "Returned object was not of type shipment.");
    Assert.IsNotNull(shipment.ShipmentProducts, "ShipmentProducts were null");
    Assert.IsTrue(shipment.ShipmentProducts.Count > 0, "ShipmentProducts were empty");
}
This test will fail because we haven’t specified any functionality for our interfaces! But we didn’t get a null reference exception when calling GetOrder()- we got a failing test because the shipment we got back was null. Stepping through the code you’ll see that Moq returned null when calling the IOrderStorage.GetOrder method. Moq creates a simple “dumb” proxy class for the interface automatically. You can specify a strict behavior using MockBehavior.Strict in the constructor to throw an exception for anything that isn’t explicitly set up. This can be helpful for ensuring control flow.
The solution to pass our test is to explicitly tell Moq what to do when this method is called, like so:
mockOrderStorage.Setup(os => os.GetOrder(It.IsAny<int>())).Returns((int id) =>
{
    var order = new Order() { OrderId = id };
    return order;
});
When working with Moq you may struggle with the extensive use of lambda expressions the framework requires. But after a couple of tests you’ll become a lambda ninja. Moq relies on a pair of Setup/Returns calls to dictate behavior. Also notice how we didn’t have to implement every method in our interfaces- we only had to implement the ones we needed to make the test pass. That saves a lot of code from being written!
The previous expression states “when you call GetOrder, with any int value, return a new order with the input id”. Moq gives you a lot of power in dictating the behavior of the return call based on the expected function. By specifying (int id) in our Returns lambda we get a reference to the input parameter, which helps us construct a valid Order object based on any input. This is also helpful when setting up exceptions. By using a predicate expression with It.Is instead of It.IsAny, we can have our call throw an exception for an invalid input:
mockShipmentService.Setup(ss => ss.CreateShipment(It.Is<int>(id => id < 0))).Throws<ArgumentException>();
Setting up exceptions using Moq is instrumental in properly dealing with exception handling in your code. Also, using It.Is to tailor return methods can help out when dealing with validation or verifying state. It’s much faster and easier to use Moq to create behavior than hard coding stub classes.
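Putting that It.Is setup to work, here’s a sketch of an exception-path test- this assumes ShipOrder passes the id through to CreateShipment, which is how the sample app appears to flow:

[TestMethod()]
[ExpectedException(typeof(ArgumentException))]
public void OrderShipmentManager_ShipOrder_Throws_For_Negative_Id()
{
    var mockOrderStorage = new Mock<IOrderStorage>();
    var mockShipmentService = new Mock<IShipmentService>();

    // Any negative id triggers the mocked exception.
    mockShipmentService.Setup(ss => ss.CreateShipment(It.Is<int>(id => id < 0)))
        .Throws<ArgumentException>();

    var osm = new OrderShipmentManager(mockOrderStorage.Object, mockShipmentService.Object);
    osm.ShipOrder(-1); // the mocked ArgumentException should bubble up
}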
There’s one final advantage to using Moq which you can’t easily get with stubs: verifying method invocation. In our sample project, we have a call to _shipmentService.Ship(shipment). What does Ship() do? It doesn’t return anything, so checking a return parameter is difficult. It doesn’t manipulate the shipment (or, let’s pretend it doesn’t). We want to ensure this function is called. The solution? Use Moq’s Verify() method, like so:
mockShipmentService.Verify(svc => svc.Ship(shipment));
This code will throw an exception if Ship wasn’t called with the shipment variable, ensuring Ship was actually called when it should have been. Using Verify() is helpful when testing methods that return void- like sending an e-mail, doing a file operation, or an ftp transfer.
Conclusion
Spending some time getting up to speed with a mocking framework will be instrumental in learning TDD. It’ll also come in handy when just writing unit tests. Mocking provides a simple, slimmed-down solution that’s much easier to manage than stub classes.
I like Moq because it’s quick to express what you want done- but the heavy use of lambdas can be a struggle when getting started. The Moq quickstart is a great guide to the framework and covers all the bases. The bottom line is don’t worry about which mocking framework you choose. They all have their respective advantages. Pick one, learn it, use it. If you don’t like how it does something, switch to something else down the line which does what you want.
Feb 23 2009
_Note: This is part of a series on hiring software developers. See articles tagged with interview for the series. The in house interview post has two parts. This part focuses on non-development questions._
Key Concepts
- Present yourself well- the candidate is checking you out too.
- Prefer a group style interview approach over one-on-ones.
- Ask specific non-development questions and evolve the follow-ups.
They’re Interviewing You Too
It’s important to note that you- whether as a company, a potential co-worker, or a future boss- are being interviewed too. A smart candidate (read: one you want to hire) will be sizing up the situation and the interview process themselves. What’s the office like? Do people know what’s going on? Where is the interview going to happen? Am I, the candidate, waiting around a lot for something to happen, or for someone to do something? How fluid is the entire process? Are people droning on at their desks or do they look happy?
What goes on during the interview will give the developer hints as to what to expect during the day to day of the job. If the office is awesome, the people are cool, and things are moving smoothly, a much better impression will be made on the candidate than a sloppy process where people are totally frazzled, have no interest in what’s going on, or are treated/act like sheep.
Bottom line: be respectful. Present yourself- and your company- as the epitome of who you’re expecting to hire. Joel Spolsky, of Joel on Software, puts great effort into making his office and company a kick-ass place. Even his blog makes you sit at your desk wishing you worked at Fog Creek. Hell, the New York Times even wrote an article on the office architecture! Don’t be afraid to sell yourself.
The Interviewers
It’s important to have the candidate meet with a variety of team members. This shouldn’t be considered a chore for people, but an honor. It’s important when building a cohesive team to give current members a voice as to who’s joining the team. Most people have no idea how to interview- so make sure to give current members specific questions, role playing scenarios, or guidelines on how to conduct the interview. Make sure meetings are time-boxed- between 5 and 20 minutes- so the interviewer knows the correct pace. There’s nothing worse than awkward silence.
The Interview Structure
There are essentially two ways to structure the interview: a series of one on one interviews where different developers ask a distinct set of questions, or a group style interview where a group of developers have a round table discussion with a candidate. The way you decide to structure the interview says a lot about your company and how you conduct your development (that whole thing about presenting yourself).
Personally, I don’t like the one-on-one style. It’s pretty exhausting, both for you and the candidate. The candidate is constantly bombarded with questions in an interrogation-like manner. If the questions aren’t planned ahead of time people will also ask the same set of questions (Um, so, why are you leaving your job? Why do you want to work here?). More importantly, it’s hard for the candidate to get into a rhythm- the mandatory warm up time, body, and wrap up is just too cramped. Finally, the one-on-one approach offers poor breadth on a candidate. If each person is only focusing on a specific set of questions, no one person can see the big picture.
A roundtable discussion is a much better approach, even if there are only two interviewers. This is much more representative of an agile development team- you want to see how the group dynamic plays with a new person. It also gives every person an opportunity to determine breadth- you see the whole package, and not just individual parts. The conversation has a much better chance to evolve in a solid discussion- the positive attributes of each interviewer can shine while the negative attributes are mitigated by the group.
The one advantage to the one-on-one style is you can interview more people easily. It’s just math- let’s say you’re interviewing four candidates with four developers. An interview should last about two hours. That’s eight hours using all four developers! On the other hand, if each developer spent 30 minutes with each candidate concurrently, that’s only four hours total- half the time!
Personal/Non-Technical Questions
It’s important to ask non-development questions. You want to get a sense of how well rounded a candidate is and what their personality is like. Here are some example questions:
How do you stay on top of technology?
I’m looking to know what their go-to resources are for learning and keeping up with stuff. Google is not a good answer. Do they read blogs? What are their favorite books? Have they been to any conferences? Subscribe to any magazines? What can they bring to the table?
What personal programming projects have you worked on, if any?
I’m interested to know if they have any programming experience outside of work. It’s not a mark against them if they don’t, but it shows how into programming they are.
What is the coolest thing you’ve done with technology? What are you proud of?
One of my favorites. It’s important to hire people who are proud of what they do. You can’t beat someone who’s about personal responsibility.
What are some websites you’ve used where you appreciate the design of the site? Why?
An analytical question. Developers with design insight are rare. A developer who shows appreciation for good design, or can “talk” design, is invaluable.
What do you want to be doing in a couple of years? What is one area where you want to be developing your skill set?
I used to hate asking this question because it’s so cookie cutter. But it’s important to ask and to have the candidate elaborate on. It does two things- lets you know how driven the candidate is, and what they’re interested in doing and evolving into. Both are important things to know. A good follow-up is “How are you going to do that?”
Describe how you would handle a situation if you were required to finish multiple tasks by the end of the day, and there was no conceivable way that you could finish them.
Answer: Prioritize.
If you could get rid of any one of the US states, which one would you get rid of, and why?
Answer: Canada. Just kidding. This one’s good for changing the pace of the conversation if needed. Ask it if the candidate completely failed another question or is feeling insecure- it can help them get back on track. It’s one of those silly filler questions that’s good at wasting time if you need to, or changing the mood.
With your eyes closed, tell me step-by-step how to tie my shoes.
If selected for this position, can you describe your strategy for the first 90 days?
You may not need this if you’re doing role-playing development questions, but it’s a good question to gauge how well a candidate can explain something.
Describe a criticism you were given at a job and how you worked to improve it.
This is another good question to ask. Most people don’t respond well to criticism, or aren’t that self critical. You don’t want someone to be cocky nor beat themselves up. But there are times when people do things they shouldn’t or simply make mistakes. Being self-aware and improving on weaknesses is one of the best qualities a person can have.