I had a heck of a time getting fixtures working with Mongoid when it came to a required embeds_many property. No matter what I did, I kept getting an error: “Access to the collection for XXX is not allowed since it is an embedded document, please access a collection from the root document.”
Then I stumbled upon Machinist v2 and the machinist_mongo gem which solved the problem. And it had a nice API, to boot!
Getting Started
As of this post, you’ll need to pull the machinist_mongo gem directly from git and get the machinist2 branch. That’s easy with Rails 3:
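Something like this in the Gemfile will do it (the repository URL and version here are from memory- double check them against the machinist_mongo GitHub page):

gem 'machinist', '>= 2.0.0.beta2'
gem 'machinist_mongo', :git => 'git://github.com/nmerouze/machinist_mongo.git', :branch => 'machinist2'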
I’m using RSpec, so I put my blueprints in the spec/support/ directory so they get automatically loaded. Let’s say I have a User class with a :username and an embeds_many :authentications property (as if you’re following the Railscasts episode on using Devise and Omniauth). The blueprint will look like this:
Authentication.blueprint do
uid { "user#{serial_number}" }
provider { "machinist" }
end
User.blueprint do
username { "user#{serial_number}" }
authentications(1) { Authentication.make }
end
My Authentication blueprint sets a unique uid using Machinist 2's serial_number counter. Then, in the User blueprint, I declare an authentications collection of one item, calling Authentication.make to load the Authentication blueprint. This essentially lazy loads the authentications property via the root document, which is exactly what Mongoid wants, as there's no root-level Authentications collection.
Now you can build away using Machinist 2’s User.make (for creating an object without saving it) or make! (which makes and saves the object).
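Here's a quick spec sketch using the blueprints above (the expectations just echo the example's field values):

describe User do
  it "embeds an authentication from the blueprint" do
    user = User.make!   # make! builds and saves; make builds without saving
    user.authentications.size.should == 1
    user.authentications.first.provider.should == "machinist"
  end
end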
Update: the template mentioned below has been updated to use MVC 3 RC and Html5 Boilerplate 0.9.5, and again to use the final MVC 3 release and Html5 Boilerplate's master branch as of 2/17/11.
In playing around with Rails this weekend, I ran into an annoying error when trying to install some gems with Bundler- specifically Webrat and Cucumber- which I found very odd:
/Users/Michael/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/rubygems/package/tar_input.rb:111:in
`initialize': No metadata found! (Gem::Package::FormatError)
Removing the dependencies from my Gemfile fixed the issue, but obviously left me without the Cucumber and Webrat gems.
Googling didn't provide an immediate solution to my problem, which is why I'm writing this post. An issue hidden in the Bundler GitHub tracker had a solution: delete the cache directory in your Ruby's gem directory. The problem isn't necessarily specific to Webrat or Cucumber; it appears to occur when the cache directory gets out of sync with the actual repository, and gems which should be installable cannot be found.
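On an RVM setup that boils down to something like the following (the exact path varies by installation- gem env gemdir prints the right base directory):

gem env gemdir
rm -rf "$(gem env gemdir)/cache"
bundle install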
After deleting the cache, bundle install ran without error with my new Webrat and Cucumber gems.
I've been working on a toolkit called Redaculous- it's a .NET library for the really cool key/value store Redis. It's built on top of the ServiceStack.Redis library, which provides various .NET clients for Redis. Redaculous is meant to make aggregating Redis commands a little easier- but don't get too excited: the project is in its infancy, and will undergo many changes, if it even gets off the ground. This post isn't about Redis or Redaculous- it's about how parts of Redaculous leverage Expressions and Lambdas to drive a lot of the functionality it's meant to provide, and how you can leverage Expressions to make your programming life easier. ASP.NET MVC and lots of other great frameworks do it, so why can't you?
The problem was simple: I have a class, with a bunch of properties, and those properties have values. In order to put them into Redis, I need to know the property name and the value it contains. I need to know, at runtime, that myObj.SomeProperty has a property called "SomeProperty" and the value of that property. This is a problem shared with most serialization tools and ORM mappers: how does "SomeProperty" make it to a column in a table in a database, or to a node in xml?
This problem can be (and has been) solved in a variety of ways. The most common is using attributes to decorate classes and properties- which is how WCF constructs contracts and how some ORM toolkits work. But that's extremely intrusive, as in this pseudo-example which combines WCF and data access attributes:
[Table("DtoTb")]
[DataContract]
public class Dto
{
[PrimaryKey]
[DataMember]
public int Id { get; set; }
[ColumnName("Name")]
[DataMember]
public string Name { get; set; }
}
There's a weird combination of data storage and data definition going on in toolkits which use that approach. It's not really a smooth way to operate- and the approach falls apart when the model greatly diverges from the underlying table, or when a class has numerous other complex types. Worse, when we want to also expose that class via WCF, we're essentially tacking on both a data access description and a web service schema description. How many attributes can you tack on there? Validation attributes, too?
Another approach is to use code-generation. This is essentially what Linq-to-Sql does; it provides the underlying property descriptions at design time, saving a lot of manual typing. I’m not a big fan of code generators; they have their place, but you lose a lot of control in defining explicit functionality. You’re boxed into what is generated and most customizations usually don’t fit in well: when you stray outside of what the code generator provides (or even outside of what the core generation is focused on) you find yourself trying to shove a square peg in a round hole.
The best approach is to do what great tools like Norm, Fluent NHibernate, AutoMapper, and .MVC's Html helpers do: they use Expressions. Most often the configuration is offloaded to separate classes which rely heavily on Expressions for property introspection. This allows an end user to write code like:
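Something along these lines- TextBoxFor is MVC's strongly typed view helper, and Id() is called inside a Fluent NHibernate ClassMap constructor (a reconstructed example; exact signatures vary by version):

//In an MVC view- renders <input name="ProductName" ... />:
Html.TextBoxFor(m => m.ProductName)

//In a Fluent NHibernate ClassMap<Image>:
Id(x => x.ImageId);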
The TextBoxFor creates an input html element with a name="ProductName" attribute; the Id() tells NHibernate to use a class's ImageId property as the Id for the table.
Expressions were introduced in .NET 3.5, and have been heavily leveraged ever since to provide a level of meta-programming which previously didn't exist in the .NET world. Expressions differ greatly from Reflection: an Expression is code which has been parsed into a set of descriptive classes, while Reflection is compiled code which has been deconstructed into a descriptive semantic. An Expression statement will not actually "do" anything. It can, however, be compiled into a callable function- which is the core advantage over Reflection. With Expressions, you can have both the description of code and the actual, runnable code together. Most frameworks create a package or container around the two as a performance optimization- it prevents an application from having to compile the expression multiple times. The performance is much better than using reflection to dynamically invoke or inspect properties.
How It Works
Expressions and Lambdas go hand-in-hand, and can often be confused with one another. Let’s take a look at the following code:
Expression<Func<SomeDto, string>> expressionLambda = m => m.SomeStringProp;
Func<SomeDto, string> funcLambda = m => m.SomeStringProp;
What's the difference between the two? It's the same lambda, right? Well, run that snippet in Visual Studio and inspect both variables in the debugger, and you'll find two very different results for the same m => m.SomeStringProp statement. m => m.SomeStringProp is the lambda: it's simply an alternative way of writing C# for various purposes. Usually, lambdas are Func or Action statements used as an alternative to delegate methods. The compiler will generate runnable IL code for the m => m.SomeStringProp statement and create a callable method expecting an instance of SomeDto as the input. In the above example, you could get the value of SomeStringProp using funcLambda like so:
var dto = new SomeDto() { SomeStringProp = "Hell Yeah!" };
funcLambda(dto); //Returns "Hell Yeah!"
This can provide a level of agnosticism by passing around logic as variables in a much easier way than vanilla delegates.
Expressions, on the other hand, don't compile into runnable IL code. You can't invoke expressionLambda(dto) the way you can a normal lambda. The compiler does something different: it parses m => m.SomeStringProp into various expression components which can be traversed, manipulated, rearranged, and even compiled into a callable function, as if it were a Func all along. The Expression class documentation lists all of the available types a lambda expression can be broken down into. Essentially, a hierarchical tree is generated, with each distinct component of the lambda expression represented by one of the many Expression classes. These classes can be used to inspect various aspects of that component. The body of the lambda above is simply a MemberExpression, which is used for fields and properties. With a MemberExpression you can get the name of the member and the property type, and the NodeType property- available on all Expression classes- tells you it's a MemberAccess call.
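A short sketch of that inspection, reusing the expressionLambda and dto declared earlier:

var memberExpression = (MemberExpression)expressionLambda.Body;
Console.WriteLine(memberExpression.Member.Name); //"SomeStringProp"
Console.WriteLine(memberExpression.NodeType);    //MemberAccess
Console.WriteLine(memberExpression.Type);        //System.String

//And the same expression can still be turned into runnable code:
Func<SomeDto, string> compiled = expressionLambda.Compile();
compiled(dto); //Returns "Hell Yeah!"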
Because Expressions can be compiled into runnable code, frameworks get the best of both worlds: metadata about the call, and the call itself. This is precisely how ASP.NET MVC Html helpers are built: that TextBoxFor method takes in an expression which it uses to generate the Html output. The Html helpers in MVC inspect the input expression to figure out the property name for the html name attribute, and then run the compiled expression against the current ViewModel object to get the value for the value html attribute. The expression metadata and compiled function are actually cached in various static classes for performance: you don't want to have to compile the expression every time you use it. The performance is much better than using reflection alone.
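A simplified version of that caching pattern might look like the following- a sketch of the idea, not MVC's actual internals, and it skips thread safety:

public static class GetterCache<T>
{
    private static readonly Dictionary<string, Func<T, object>> Cache =
        new Dictionary<string, Func<T, object>>();

    public static Func<T, object> For(Expression<Func<T, object>> expression)
    {
        //Value typed properties arrive wrapped in a Convert node- unwrap it first.
        var member = expression.Body as MemberExpression ??
                     (MemberExpression)((UnaryExpression)expression.Body).Operand;

        Func<T, object> getter;
        if (!Cache.TryGetValue(member.Member.Name, out getter))
        {
            getter = expression.Compile(); //Compile once, reuse on every later call.
            Cache[member.Member.Name] = getter;
        }
        return getter;
    }
}

With that in place, GetterCache<SomeDto>.For(m => m.SomeStringProp) compiles the getter on the first call and hands back the cached delegate afterwards.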
Redaculous uses Expressions to avoid the magic string conundrum: by using expressions, Redaculous can parse a statement like m => m.Name and know it should put the value of a class's Name property in the store using a key involving the word "Name" in some way. You should explore the MVC framework's source code to dig into how it uses expressions for strongly typed helpers. Both Norm and AutoMapper have some pretty straightforward usage too. By inspecting how other tools use these features you can more easily integrate them into your own projects- and increase productivity by eliminating redundant code and code smells involving magic strings.
Other languages, like Ruby, have a similar level of functionality built in. This is inherent in all dynamic languages: functionality to not only perform an operation, but to describe that operation as well. I still get some hits on my ASP.NET MVC and Rails article, and I'd say one of the biggest differences is how Rails, and Ruby in general, use dynamic language features like code metadata to drive functionality. .NET has always had reflection, but expressions provide a much easier, and much more performant, way of dealing with meta-programming. Expressions provide a core of the functionality in the Dynamic Language Runtime and have always been a strong part of Linq's roots. DLR capabilities built on expressions will continue to increase their surface area in the .NET world and should be a part of any developer's toolkit.
UPDATE: This post is outdated since ASP.NET MVC Beta. Use the DependencyResolver static class instead.
The integration of the CommonServiceLocator pattern within ASP.NET MVC is a positive step forward for the .MVC framework. Dependency management via ServiceLocation is a smart way to go, especially for large codebases with complex dependency needs. ServiceLocation keeps constructors clean, and prevents bloat in higher-level classes which don’t need to know about lower-level dependencies.
However, the fact that .MVC now has its own ServiceLocation infrastructure, via the System.Web.Mvc.IServiceLocator interface, is a little troublesome for code which already uses the Microsoft CommonServiceLocator found in the Unity Enterprise Application Block. But don't fret- luckily, the IServiceLocator interface is exactly the same in the System.Web.Mvc namespace and the Microsoft.Practices.ServiceLocation namespace. This means you can have one class implement both interfaces simultaneously, like so:
public class SomeServiceLocatorWrapper : System.Web.Mvc.IServiceLocator, Microsoft.Practices.ServiceLocation.IServiceLocator
{
//Implicit implementation of methods
}
What's even easier is when there's already a wrapper class around IServiceLocator for you, such as the one provided by Unity via the UnityServiceLocator class in the Microsoft.Practices.Unity namespace. The following code provides all the functionality you need to use both ServiceLocators:
public class UnityMvcServiceLocator : UnityServiceLocator, System.Web.Mvc.IServiceLocator
{
public UnityMvcServiceLocator(IUnityContainer container)
: base(container)
{
}
}
Once you have that class in place, it's just a matter of hooking both up in your Global.asax file like so:
//In Global.asax's Application_Start hook:
var container = UnityContainerBuilder.CreateContainer();
var locator = new UnityMvcServiceLocator(container);
ServiceLocator.SetLocatorProvider(() => locator);
MvcServiceLocator.SetCurrent(locator);
This allows you to access the same locator either via the MvcServiceLocator.Current instance or the ServiceLocator.Current instance.
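With that in place, both entry points resolve from the same Unity container (IMyService stands in for whatever you've registered):

var fromMvc = MvcServiceLocator.Current.GetInstance<IMyService>();
var fromCommon = ServiceLocator.Current.GetInstance<IMyService>();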
I’ve created an (arguably) bare-minimum Visual Studio 2010 Template for getting started with MVC 3 Preview 1 using the Razor view engine and some other web goodies. I got tired of the vanilla “Welcome to MVC” homepage and I’m not a fan of the MembershipProvider abstraction and all the Account junk included by default. This template is meant to provide a bare-bones setup of Html 5 and provide OpenId authentication for users (but not full-on user management)- it’s a simple cocktail of Html5 Boilerplate and DotNetOpenAuth. The rest is left up to you to build up!
Paul Irish’s Html5 Boilerplate
Paul Irish created a nice starting point for Html5 websites with his Html5 Boilerplate. There are a lot of goodies in it which go beyond just changing the DOCTYPE. Check out the NetTuts+ official guide to Html5 Boilerplate explained by Paul himself. I've ported over the .9 version with comments so you can see what's what. The only changes made were to replace the urls in script and link tags with @Url.Content calls. The original index page is at the root (you'll want to delete it, but it's a handy reference) and the _Layout.cshtml now has a Header and Script section to use in child pages. This functionality is used in the LogOn view to set up OpenId authentication.
OpenId with DotNetOpenAuth
I was never a fan of the stock MembershipProvider infrastructure and the sql table setup required for user management. User management usually has very specific needs from site to site, and the default functionality gets butchered anyway. So all that stuff gets pulled and replaced with the ease and simplicity of OpenId. The template uses the OpenId Selector on the login page and DotNetOpenAuth with Forms Authentication on the backend. There's no Register page, as the specifics of the required user schema vary too much. But you should be able to combine the OpenId logic on the Login page with a new Register page to suit your needs. There's also no persistence store integrated into the template, as that is left entirely up to you.
Future Plans
I really have no set goals for this template, so it will evolve in a free form manner. The code is on GitHub so feel free to branch and edit as necessary. I’m going to add some unit tests around the OpenId logic, and possibly evolve the Account creation in some way. So that’s about it!
Over the past couple of years there has been a slow progression in the .NET web app world to fully separate out client/server interaction. Long gone are the horrible days of ViewState and Events; MVC provided a nice step to better structure web applications for powerful Web 2.0 experiences. But the barrier between client and server interaction has never really been clean- MVC markup has always been littered with C# code, and there haven't always been widespread tools available to easily build desktop class applications in the browser. Sure, Spark and Haml provide alternatives, but these essentially just make a core problem easier to bear.
With the new preview of MVC 3, we can eliminate that core problem of server based html rendering: the built in Json support via the JsonValueProviderFactory, along with some jQuery goodies, presents us with a new web app architecture: service-oriented web apps built with a rich, js based client driven by Json based services- using the core tools of html, css, and js, without the overhead of Silverlight, Flash, or heavy js frameworks. This Json functionality, combined with javascript templating using PURE, gives us the ability to break free from static html navigation and build dynamic apps in much more efficient ways.
ASP.NET MVC 3 and the JsonValueProviderFactory
Scott Guthrie writes about the new ASP.NET MVC 3 preview features, and among the hype is the default Json support for action methods. In his blog post he misses one crucial step to actually make this work: adding the new JsonValueProviderFactory to the global ValueProviderFactories collection in the Application_Start method of the global.asax, like so:
protected void Application_Start()
{
AreaRegistration.RegisterAllAreas();
//Must add this factory explicitly (for now, at least):
ValueProviderFactories.Factories.Add(new JsonValueProviderFactory());
RegisterGlobalFilters(GlobalFilters.Filters);
RegisterRoutes(RouteTable.Routes);
}
Once this is in place we can treat our controller actions as usual, even when using Json. The serialization mechanism is agnostic (this is pretty much exactly the same as Scott’s code):
[HttpPost]
public ActionResult Search(ImageSearchInput input)
{
//ImageSearchInput can be posted via Json
return new JsonResult() { Data = new { ImageInfo = new Repository.ImageRepository().Search(input).Images } };
}
ImageSearchInput, with a string property of “Caption”, will be built from the Json data posted to the server by the following Ajax call:
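Something like this- the '#results' selector and 'sunset' value are just examples, and JSON.stringify comes from json2.js or a modern browser:

$.ajax({
    url: '/Home/Search',
    type: 'POST',
    contentType: 'application/json; charset=utf-8',
    data: JSON.stringify({ Caption: 'sunset' }),
    success: function (result) {
        //result.ImageInfo holds the serialized search results
        $('#results').autoRender(result);
    }
});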
The server will route the request to the Home/Search method, and will deserialize the posted data into the ImageSearchInput class. As long as the properties match up and are of the correct type, the deserialization will be fluid. Notice how we don't actually need to specify the ImageSearchInput class name when building the Json object.
Those with keen eyes may have noticed the success: callback containing the autoRender() function. This is the PURE javascript templating engine at work. I came across PURE when reading about the jQuery templating proposal on GitHub and was immediately drawn to its simplistic syntax.
The philosophy behind PURE is simple: instead of interleaving markup and template directives (which, to be honest, is just as bad as mixing code with markup), PURE can make assumptions about what to repeat and what to bind based on css class names and Json property names.
Let's say I had the following json data:
var data = {
    Image: [
        { Filename: 'mypic1.jpg', ImageUrl: '/images/mypic1.jpg' },
        { Filename: 'agoodphoto.jpg', ImageUrl: '/images/agoodphoto.jpg' }
    ]
};
The Image property is simply an Array with two objects in it. Using PURE's autoRender function, we can specify binding directives using CSS classes, building li elements and populating content as appropriate:
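Markup along these lines- the id and structure here are illustrative; the class names are what drive the binding:

<ul id="images">
    <li class="Image">
        <span class="Filename"></span>
        <img class="ImageUrl@src" src="" alt="" />
    </li>
</ul>

$('#images').autoRender(data);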
Because the li has the Image css class, and we have an array of Image objects, PURE will duplicate the li contents for each element in the array. The span tag will show the filename because it has the Filename css class. The src attribute of the img tag will get the ImageUrl property because it uses the @ directive, which simply says "put the ImageUrl property in the src attribute". PURE could easily get its own post, but the point is simple: we no longer need to generate server side html for displaying complex content. Even though this has always been possible, it hasn't been as fluid as PURE. Another primary benefit is that we can send incredibly complex Json data back to the client which can update numerous page snippets. Orchestrating multiple updates- a detail page, a shopping cart, and other areas simultaneously- has never been trivial, but with json data and templating these complex scenarios are easily achievable.
Gotchas
Client side templating is dangerous- you don't really know how capable the client's computer is. Be mindful of performance and memory requirements, and ensure you're targeting the right browsers. Chrome's V8 javascript engine has evolved remarkably well in handling javascript intensive applications.
As for the MVC 3 preview, I wish we saw dynamic results from controllers: specifically, one action able to return Json, Xml, or Html output based on some request directive. This is possible with various ActionResults or ActionFilters, and there's some functionality already out there to do it, but it isn't as nice as Ruby on Rails' respond_to functionality. A single action supporting multiple outputs would allow developers to easily target various platforms and scenarios in the web world.
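One rough way to approximate it today is to branch on the request inside a single action- Request.IsAjaxRequest() can serve as the directive (a sketch, not a full respond_to equivalent):

[HttpPost]
public ActionResult Search(ImageSearchInput input)
{
    var images = new Repository.ImageRepository().Search(input).Images;
    if (Request.IsAjaxRequest())
        return Json(new { ImageInfo = images }); //Json for ajax clients
    return View(images); //Html for regular form posts
}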
Final Thoughts
Despite Json serialization and jQuery templating techniques already existing in the ecosystem, they have never been as robust and integrated as with the new MVC 3 support (sure, Rails has had this for a while). Combining this functionality with templating tools like PURE will open up new development paths for rich web applications based entirely on standard tools like javascript, html, and css, without relying on chunky js frameworks. Keeping core tools simple means having the ability to build out specific functionality as needed, rather than getting boxed into more complete frameworks. From the managing side, it also keeps the available pool of developers large, since you're not requiring knowledge of a specific tool. Play around with these tools and see what you can do!
By far the trickiest part of building the single speed bike (and, consequently, the most fun) was building the wheel. I was a little hesitant to take on wheel assembly, but I couldn't cop out without at least giving it a shot. It turns out it's rather easy and a lot of fun. Once you get the pattern down, lacing the spokes is pretty straightforward, and out of all the parts of bike building this step really connects you to the bike.
The first round of spokes are in. Luckily, the Rebel Yell was close at hand.
The Wheels
The wheels are a pair of Velocity Deep V‘s with Origin-8 branded hubs (they’re actually made by Formula). The spokes are DT Champions. Luckily, I didn’t have to worry about calculating spoke length because I ordered everything from Ride Brooklyn (a great bike store in Park Slope) and they took care of the details.
Videos
I did a lot of research before I got started. Books weren’t that helpful, as you really need to learn by example to mimic what’s going on as you lace the spokes. I found the best video on building a 36 hole front wheel from the bike tube on YouTube. I’d just skip to wheel building part two where the action starts. The bike tube also has two videos on lacing a 32 hole rear wheel. It’s the same lacing pattern, but the video goes into a little more depth when building the wheel. The three cross pattern (which means a spoke crosses three other spokes between the hub and rim) is the same between front and rear wheels.
The first set of spokes are in. Notice the spoke next to the valve hole. The second set of spokes should go in to the right of this one, so it doesn’t straddle the valve hole. Or, you can start one away from the valve hole, and string the second set between the valve hole and the first spoke. The alternate side spoke goes in the flange hole immediately adjacent to the spoke on the opposite side.
I recommend watching both sets. The approaches differ a little: the 32 hole rear wheel videos have you lace one side first, then the other, while the 36 hole front wheel video has you lace alternating sides (you put in four sets of spokes: the "innies" and "outies" on both the left and right flanges). The valve hole ends up in a different place with the rear wheel video- as the author explains, you want it to be between a parallel set of spokes to make using the valve easier. With the front wheel video the valve ends up between a slanted set of spokes (I could have also followed the video incorrectly). Not a big deal; it's just a nuanced difference.
I recommend following the 36 hole front wheel video- it's a little easier to keep the hub centered in the rim because you lace alternating sides. My rims have 32 holes and there really isn't any difference in building the wheel. (A 36 hole wheel has 4 sets of 9 spokes each, while a 32 hole wheel has 8 spokes per set.)
I followed the 32 hole rear wheel video, which laces one side at a time. It proved difficult to lace the 2nd side because the hub wasn’t centered.
Getting Started
I followed the rear wheel video because it was more succinct. However, I messed up in the process and had to start again. I laced one side, and the hub was off center- one side was flush with the rim. This made lacing the second side very difficult, as there wasn’t a lot of give in the hub. I also had a hard time figuring out where to start lacing on the second side. I started with the wrong hole, and the hub got very twisted half way through the second side. I really had to pull to get the nipples connected. When it proved extremely difficult, I knew I must be doing something wrong. If you’re fighting the wheel, stop and restart. You missed something. After starting again, I realized where I went wrong the first time with the second set of spokes on the other side.
Notice how the second set of spokes lines up with the valve hole. This will put the valve hole between parallel spokes.
Try, and Try Again
When I attempted to build the wheel again, using the front wheel video, the process was a piece of cake. Every spoke went in without a hitch. The benefit of alternating sides while lacing is the hub is centered in the rim which makes connecting the spokes to the nipples very easy (although, I could have messed that up too when following the rear wheel video).
The only glitch (which was a minor issue) is that when I did the first wheel the valve hole didn't end up between parallel spokes. I either messed up following the video, or the front wheel video didn't make a point of lacing the wheel that way. The rear wheel video does explicitly call this out. As long as you don't end up with the valve hole in a cross spoke triangle you're all set- otherwise it would be impossible to fill the tire with air.
It's a nuanced difference, but you want the valve hole between parallel spokes.
Correcting the error when I did the rear wheel was easy- I just started one hole away from the valve hole, rather than right next to the valve hole. After I twisted the hub to get the spoke slant to start the second side, I made sure I started the second side next to the valve hole. This way the two slanted spokes are next to the valve hole, rather than straddling it. This will put the valve hole between parallel spokes.
The wheel after lacing the second side.
Truing
Once I got all the spokes on the wheel I dropped some wet lube on the nipples and tightened every spoke with a screwdriver until only a couple threads remained. The front wheel came out pretty straight, but the rear wheel was way off. It just took a little more work with the spoke wrench to get it centered. I came up with a makeshift truing stand using the front fork and some plastic sticky tabs (I needed to use the frame to do the rear wheel, but it's the same idea).
Awesome makeshift truing stand, with helpful plastic sticky tabs used in lieu of calipers.
I placed the sticky tabs as close as possible to the frame and spun the wheel (the wheel spun forever- the hubs must be really slick!). When the frame hit the tab it would make a noise, and I would adjust. It’s really hard to see the space shift between the tab and the rim. The noise thing really helped. Slow and calculated increments worked best- you really only need a quarter turn. This approach worked for both the horizontal and vertical trueness.
I have no way of properly checking the dish, so I may need to take a trip to the bike store (dish is making sure the hub is centered on the rim). I’m also worried that the spokes are too tight. I read that you shouldn’t have a lot of spoke tension- the spokes feel firm but I definitely need a second opinion.
What’s Next
I’m hoping to get the rest of the parts tomorrow to finish this build. I’ll have another post to show how it all came together!
As we're getting ready to gear up for summer, my wife was looking for a new bike. I wanted to get her something cool and unique, and (selfishly) use the opportunity to indulge myself in a bike building project. I heard building a bike was a pretty straightforward process, and in a worst case scenario I could just bring the parts to a bike shop and have them finish it up. We knew we wanted to go with a single speed- a friend has a Trek Soho S which I fell in love with. A single speed is perfect for jetting around the city- really light and nimble. The acceleration is surprisingly easy and you hit a nice cruising speed quickly. I'm anxious to compare this custom build with other commercial bikes.
The Bike
We headed over to the good folks at Ride Brooklyn for a recommendation on what to buy. It’s a great bike shop with a really friendly and helpful staff. My wife wanted something light and we both wanted to keep the cost down. They had two frames handy for comparison: An Origin-8 track frame and a slightly more expensive 183rd Street track frame. The 183rd Street frame was a lot lighter than the Origin-8 (surprisingly so) and has a nice powder coat paint job in black which gives it a cool matte finish. The owner built a bike using the same frame, so we figured it was the way to go. My wife is 5’4″ and we got the 51cm frame and fork.
Half the parts ready to go. My boss says only in NYC do you build a bike in the kitchen.
We also got a pair of Velocity Deep V rims in lime green with Formula hubs and a purple Origin-8 crank. Once I saw everything together this bike is definitely getting a nickname: “The Joker”.
The Build: Bottom Bracket
With parts in hand, it’s time to get building! I started with the bottom bracket, which is an Origin-8 cartridge style bracket with a length of 107mm. It turns out I should have gotten the 103mm, but the 107 will work just fine. Check the specifications of your crank to find out for sure (I thought it would have to do with the frame, but it’s sized for the crank). I followed this video from Bicycle Tutor to find out what to do. It’s pretty straightforward. Honestly, the hardest part was figuring out how to get the lock cup off the bottom bracket. Hint: You just pull it off! I kept turning it and was afraid to pull too hard. Turns out, not a big deal. I borrowed a bottom bracket tool, but didn’t have a torque wrench to find the right resistance. Using some polylube 1000 to grease the threads, I just screwed the bracket until it was pretty tight and checked the turn on the spindle. Seemed alright. I couldn’t get the lock cup flush with the frame on the non drive side, but it turns out this isn’t a big deal.
The Headset
The 183rd Street frame has a threadless 1 1⁄8″ headtube. I picked up a Cane Creek threadless headset and, funny enough, used their great video on how to install a threadless headset. I didn't have a threadless headset press, and tried using a 2×4 and a hammer to get the bottom and top cups on. It didn't work. Lesson learned: don't try using a hammer to bang your threadless headset onto the frame.
Headset fully pressed on frame, but text is off center
You need the right tool for the job. I didn’t want to buy a tool, so I brought the headset and frame back to Ride Brooklyn (I should start a side bet to see how many times I need to go back there!) and they pressed the cups on the frame, and used a crown race setter to put the crown race on the fork. I did mess up a little bit, because I forgot to center the logo on the cups when I put them on the frame. I already had them half on when I went to the store and didn’t want to deal with taking them off again.
Next Steps
Lacing and truing the wheel was a fun but tedious process. I'll write about that next, as well as putting the other parts together, so check out the BikeBuild tag to follow the progress.