It's been almost a year since we started using AngularJS in production here at Localytics. We began by experimenting with a small piece of our site and were so happy with it that we rewrote our entire analytics dashboard product — overhauling nearly four years' worth of legacy production code — in Angular.
Given the surprising lack of documentation and blog posts on real-world use of Angular in production Rails applications, we've had to be fairly creative about how we organized our code and integrated the two technologies. Most documentation on Angular code organization and workflow assumes a JavaScript/Grunt-centric worldview, and it was hard to see how we would introduce Grunt into our Rails kingdom, where Sprockets and Capistrano reign supreme. Meanwhile, we've seen a few cute examples of how to structure a fresh Angular/Rails hello-world seed app, but odds are, if you're in production, you've already got a big steaming Rails app chugging away and don't enjoy the luxury of starting from scratch with rails new my-todo-list-cookbook-twitter-seed-demo-world-app. So we thought we'd share some of the practices and tricks we've come up with at Localytics to get Angular running on Rails.
Code Organization and the Asset Pipeline
The hardest part about getting started was figuring out a sane way to organize and serve our Angular code through the asset pipeline, while keeping page loads snappy in development. Even before deciding to switch to Angular, page loads had already become maddeningly slow in our local Rails dev environment, with Sprockets taking nearly a full second to serve each JavaScript file. Desperate for a solution, we’d even begun resorting to manually concatenating our JavaScripts into single monolithic files to take some of the load off of Sprockets, abandoning any semblance of modular code organization. Now with a new JavaScript framework on our plate, we were ready to make the leap to Grunt, of which we'd heard much praise.
But how? To compile and fingerprint our JavaScript assets through Grunt, we'd also need to stop using Rails' path helpers to link to them from our ERB. And we were quite comfortable deploying with Capistrano. Would we have to rewrite our deploy scripts to accommodate Grunt? The prospect of abandoning the tried and true Rails Way™ for some half-conceived hybrid workflow that would certainly complicate and might or might not ease our development process sounded more ill-advised the more we pondered it.
In the end, we came up with some less drastic solutions with which we've been able to dramatically cut asset load time while staying on Sprockets (namely, the rails-dev-tweaks gem, and moving our development environment out of an Ubuntu VM into native OSX). With the asset pipeline humming smoothly again, we've been free to experiment with patterns to integrate Angular while taking full advantage of Rails' directory structure and asset pipeline. Here’s what we’ve been doing:
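For the curious, the rails-dev-tweaks piece of that fix is just a few lines of configuration. The following is a sketch from memory of the gem's autoload_rules DSL; treat the rule names as assumptions and check the gem's README before copying:

```ruby
# config/environments/development.rb -- sketch; rule names are assumptions
MyApp::Application.configure do
  # Skip Rails' full unload/reload cycle for asset requests, so Sprockets
  # isn't paying the autoload tax on every JavaScript file it serves:
  config.dev_tweaks.autoload_rules do
    keep :all       # reload code for ordinary requests...
    skip '/assets'  # ...but not for anything served under /assets
  end
end
```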
Script organization and loading
Before moving to Angular, we'd been following the Rails default of loading application-wide JavaScripts in application.js, which in turn loaded shared, library-like scripts used throughout the site from /lib. Additionally, we loaded page-specific scripts via a naming convention, where each controller required its own manifest file in /app/assets/javascripts named [CONTROLLER NAME]_controller.js:
/app
  /assets
    /javascripts
      analytics_controller.js
      campaigns_controller.js
      ...
  /controllers
    analytics_controller.rb
    campaigns_controller.rb
    ...
/lib
  /assets
    /javascripts
      /jquery-localytics
        index.js
        plugins.js.coffee
        ...
      /backbone-localytics
        index.js.coffee
        uploader.js.coffee
        stickit-helpers.js.coffee
        ...
To keep all our page-specific scripts precompiled in production, we had the following line in our application.rb:

config.assets.precompile << '*_controller.js'
With Angular, we chose to stick with this practice of keeping shared code in /lib and page-specific scripts in /app. This way, everybody on our team knows to be careful and write unit tests when changing shared code in /lib, and we minimize site-wide regression risk by keeping page-specific code quarantined. Since we needed to keep our existing code running while we built out our new Angular screens, we kept things organized by simply adding an /angular subfolder:
/app
  /assets
    /javascripts
      analytics_controller.js
      campaigns_controller.js
      ...
      /angular
        /dashboards
          index.js.coffee
          controllers.js.coffee
          services.js.coffee
        /funnels
          index.js.coffee
        ...
  /controllers
    analytics_controller.rb
    campaigns_controller.rb
    dashboards_controller.rb
    funnels_controller.rb
    ...
/lib
  /assets
    /javascripts
      /jquery-localytics
        ...
      /backbone-localytics
        ...
      /angular-localytics
        index.js.coffee
        config.js.coffee
        /ui
        /directives
        /reports
        /services
        ...
With this new folder we needed to add another entry to our precompile list in application.rb:

config.assets.precompile << '*_controller.js'
config.assets.precompile << 'angular/*/index.js'
Now the trick was to find a way both to load the required scripts for a screen and to tell Angular the name of the module to bootstrap. Our simple but effective solution was an instance variable named @ng_app, which we use in three places:
In the Rails controller:
# /app/controllers/dashboards_controller.rb
class DashboardsController < ApplicationController
  respond_to :html, :json

  # index is generally the only method in our Angular
  # controllers that responds to both html and json:
  # html for initial page load, json for whatever
  # resource the controller manages.
  # We set @ng_app in the html response here
  # to bootstrap our Angular app:
  def index
    respond_to do |format|
      format.html { @ng_app = 'dashboards' }
      format.json { @dashboards = current_user.dashboards }
    end
  end

  # Basic RESTful methods follow:
  def create
    @dashboard = Dashboard.new(params[:dashboard])
    @dashboard.creator = current_user
    @dashboard.save
    respond_with(@dashboard, :template => 'dashboards/show')
  end

  def update
    ...
  end
end
In our layout’s header and footer:
# /app/views/layouts/application.html.erb
<!DOCTYPE html>
<!-- Bootstrap our page-specific Angular app
     if defined; otherwise fall back to our generic
     app (defined in /lib), which contains all
     the code needed for basic UI functionality: -->
<html ng-app="<%= @ng_app || 'localytics' %>">
  <head>
    <!-- We load Angular from a CDN here, and also have
         some backup helpers to serve it locally from
         /public if the CDN connection fails, so
         users don't see all our curlies. -->
    ...
  </head>
  <body>
    ...
    <%= yield :layout %>
    ...
    <% if @ng_app %>
      <%= javascript_include_tag "angular/#{@ng_app}" %>
    <% else %>
      <%= javascript_include_tag "#{controller_name}_controller" %>
    <% end %>
  </body>
</html>
In our Angular app's JavaScript files:
# /app/assets/javascripts/angular/dashboards/index.js.coffee
#= require reports
#= require_tree .
#= require_self

# This module name (and the folder name in which this file
# resides) matches @ng_app set in the Rails controller:
dashboards = angular.module 'dashboards',
  [ 'localytics'
    'reports'
    'dashboards.controllers'
    'dashboards.directives'
    'dashboards.services'
    'dashboards.filters'
  ]
Since Angular module names are just strings, which allow slashes, we can even use a nested directory structure for our scripts if we want. For example, we keep all admin-related controllers and scripts namespaced, so the @ng_app for the admin home screen is just "admin/home".
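One wrinkle with nested names like admin/home: assuming Sprockets matches string precompile entries with File.fnmatch-style, path-aware globbing (an assumption worth verifying against your Sprockets version), the single-star pattern shown earlier stops at directory separators, so a nested index.js needs a double-star entry. A quick Ruby illustration of the difference:

```ruby
# With File::FNM_PATHNAME, '*' refuses to cross '/' while '**' descends
# into subdirectories, mirroring path-aware glob behavior:
one_level = 'angular/*/index.js'
any_depth = 'angular/**/index.js'

File.fnmatch(one_level, 'angular/dashboards/index.js', File::FNM_PATHNAME) # => true
File.fnmatch(one_level, 'angular/admin/home/index.js', File::FNM_PATHNAME) # => false
File.fnmatch(any_depth, 'angular/admin/home/index.js', File::FNM_PATHNAME) # => true
```

So alongside 'angular/*/index.js' you would also precompile 'angular/**/index.js' (or list the nested apps explicitly).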
Manifest files and module definition
As you can see, we embraced Sprockets' naming convention of index.js for our JavaScript manifest files. Through trial and error, we've found it's best to keep these manifest files as lean as possible. The only things that should go in them are Sprockets directives (require, require_tree, etc.) and a top-level Angular module definition, with configuration or run blocks as needed. Otherwise, all your JavaScript ends up living in files named index.js, which will make you go insane.
We also experimented with two patterns for Angular module definition. Our first experiment — let's call it the Top-Down Approach — looked like this:
# dashboards/index.js.coffee
#= require reports
#= require_self
#= require_tree .

angular.module 'dashboards', [ 'localytics', 'reports' ]

# dashboards/controllers.js.coffee
dashboards = angular.module 'dashboards'
dashboards.controller 'DashboardsCtrl', ['DashboardResource',
  (DashboardResource) -> ... ]

# dashboards/services.js.coffee
dashboards = angular.module 'dashboards'
dashboards.factory 'DashboardResource', ['RailsResource',
  (RailsResource) -> ... ]
With this approach, the manifest file defines the module and calls require_self first, so that the module definition runs, then calls require_tree. Each file in the tree then re-opens the module and tacks things onto it, like controllers and services. While easy, we eventually found that as our Angular codebase grew, this method wasn't great for maintainability, since our modules tended to grow monolithic, and it unwittingly led to circular dependencies that didn't surface until we tried to refactor or write tests.
Now we try to stick to a more granular Bottom-Up Approach that looks like this:
# dashboards/index.js.coffee
#= require reports
#= require_tree .
#= require_self

dashboards = angular.module 'dashboards',
  [ 'localytics'
    'reports'
    'dashboards.controllers'
    'dashboards.directives'
    'dashboards.services'
    'dashboards.filters'
  ]

# dashboards/controllers.js.coffee
angular.module('dashboards.controllers',
  ['dashboards.services', 'localytics.services', 'localytics.ui'])
  .controller 'DashboardsCtrl',
    ['DashboardResource', (DashboardResource) -> ... ]

# dashboards/services.js.coffee
angular.module('dashboards.services', [])
  .factory 'DashboardResource',
    ['RailsResource', (RailsResource) -> ... ]
Note that with this pattern, require_tree is called first; then require_self runs and the top-level module is created, explicitly requiring each submodule in the directory. This way it's easier to stay on top of dependency management, since each module is explicit about its requirements, which in turn makes it easier to isolate individual modules for unit tests.
Partial loading
One of our earlier and more spectacular failed attempts to integrate Angular and Rails was to serve Angular templates on the fly through the asset pipeline. Many Angular examples demonstrate lazily loading and caching partials from the server on demand with templateUrl. We tried this method by creating an /app/assets/templates directory from which we served HTML partials, which we linked to using asset_url in js.coffee.erb files. For example, we would set up our Angular routes like so:
# funnels/index.js.coffee.erb
angular.module('funnels', ['localytics'])
  .config ['$routeProvider',
    ($routeProvider) -> $routeProvider.when '/view/:funnelId',
      templateUrl: '<%= asset_url("funnels/show.html") %>'
      controller: 'FunnelShowCtrl' ... ]
This kind of worked, except that we eventually realized we had to deploy twice every time we touched one of these partial templates to get the changes to show up. We suspect this was due to a bug in Sprockets, where the js.coffee.erb files and their asset_url calls would be compiled and evaluated before the HTML templates were recompiled, so the links would always point to the versioned templates from the previous deploy.
We considered pulling these files out of the asset pipeline and serving them from /public, but we really didn't want our users' browsers to cache these templates between deploys, and fingerprinting was the only sure way we knew of to guarantee 100% cache busting. In the end we made a little Rails helper to render all our partial templates into <script type="text/ng-template"> tags:
# app/helpers/application_helper.rb

# Render a partial into a script tag so Angular
# sticks it into $templateCache
def load_ng_template(partial)
  content_tag :script,
              type: 'text/ng-template',
              id: "#{partial}.html" do
    render partial
  end
end
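To make the id/templateUrl pairing concrete, here is a framework-free stand-in for the helper above. The partial name and markup are hypothetical, and the real helper delegates to Rails' content_tag and render rather than building strings by hand:

```ruby
# Simplified stand-in for load_ng_template (illustration only):
def load_ng_template_stub(partial, rendered_html)
  # Angular keys $templateCache entries by this id attribute:
  %(<script type="text/ng-template" id="#{partial}.html">#{rendered_html}</script>)
end

tag = load_ng_template_stub('funnels/show', '<div>{{funnel.name}}</div>')
# A route declared with templateUrl: 'funnels/show.html' now resolves
# straight from $templateCache, with no request back to the server.
```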
Sure, it would be nice to lazy-load our Angular templates on the fly like all the cool kids do, but we think the overhead of a few extra lines of HTML on page load is minimal, and with this problem solved, we've been able to devote our energies to more important things like...
Testing
Testing was another major reason we had wanted to switch to Grunt and break free of the asset pipeline. We wanted to use Karma to unit-test our JavaScript, but didn't see how this would be possible if our assets were served through Rails. And end-to-end tests with Angular's test runner or this new-fangled Protractor thingy seemed entirely impractical, considering the amount of backend data setup our Rails tests require just to render a page, for which we rely heavily on tools like FactoryGirl. We just couldn't imagine how to bridge the JavaScript and Ruby testing worlds using the tools at hand, and all the documentation assured us that any end-to-end test attempted on an Angular page from our Ruby testing environment would fail, because none of our Ruby test drivers would know how to wait for Angular to compile the page.
Integration tests
As it turns out, they were wrong. Capybara and Poltergeist play just fine with Angular, and we've begun building out some wonderful integration tests with MiniTest that navigate to our Angular screens, click around using normal CSS selectors (none of this super-Angular-specific element(by.binding('yourName')) silliness for us, thank you), and even assert that records in our test database have been changed through our simulated user interactions.
In addition to normal Rails integration tests that run against a test database, we've also put in place a set of livetests that run against our deploy targets through an after:deploy hook in Capistrano. These livetests provide full-stack coverage against real production data, even ensuring that asynchronous queries to our API return successfully and that our charts load without errors.
To accomplish this, we've taken advantage of one of Poltergeist's nicest features, which is enabled by default: the ability to re-raise JavaScript errors in Ruby. This means we get some great practical test coverage just by hitting every page and waiting for Ajax requests to finish. Unfortunately, this doesn't work right out of the box with Angular, because its $exceptionHandler service catches any errors thrown and safely logs them to the console, preventing Poltergeist from raising them in our tests. To circumvent this, we used Poltergeist's extensions option to inject a script into each page that effectively monkeypatches Angular's error logger:
# test/live_test_helper.rb
Capybara.register_driver :poltergeist do |app|
  Capybara::Poltergeist::Driver.new(app,
    { window_size: [1200, 800], inspector: true,
      extensions: ["test/support/scripts/angular_errors.js"] })
end

// test/support/scripts/angular_errors.js
window.onload = function() {
  var $injector = angular.element(document).injector();
  var $log = $injector.get('$log');
  $log.error = function(error) { throw(error); };
};
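Backing up a step, the Capistrano wiring that kicks off these livetests after a deploy is only a few lines. This is a Capistrano-2-style sketch; the task name, rake task, and livetest_url variable are all assumptions about the wiring, not our literal deploy script:

```ruby
# config/deploy.rb -- sketch only; names are assumptions
after 'deploy', 'deploy:livetest'

namespace :deploy do
  desc 'Run live smoke tests against the freshly deployed stage'
  task :livetest do
    # Runs on the deployer's machine against the public URL of the
    # stage we just deployed (staging, production, etc.):
    system("bundle exec rake test:live TARGET=#{fetch(:livetest_url)}") or
      abort 'Livetests failed!'
  end
end
```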
Unit tests
As for unit testing, we use Karma to run Jasmine tests, which we mostly write for our Angular services. It's a little hairy, but we achieved this through a rake task that boots up a Rails server, adds our /spec/javascripts directory to Sprockets' load path (the same as the jasminerice gem does), spits out a karma.coffee configuration file that includes links to the test files on the server, and runs Karma against this file using PhantomJS. We are unable to enjoy the full awesomeness of Karma this way, since it isn't polling our served assets for changes and automatically running tests in the background, but at least our continuous integration server can run our Jasmine tests, and we haven't been forced to pull our scripts out of the asset pipeline to do so.
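Most of that rake task is plumbing, but the interesting bit is the generated configuration: Karma's files array simply points at URLs served by the booted Rails server instead of files on disk. Here is a stripped-down sketch of the generator; the helper name, port, and file layout are assumptions, while frameworks, browsers, and files are standard Karma config keys:

```ruby
require 'json'

# Write a karma.conf.coffee whose spec files are fetched over HTTP from
# the running Rails server, so Sprockets still compiles our CoffeeScript:
def karma_config(server_url, spec_files)
  files = spec_files.map { |f| "#{server_url}/assets/#{f}" }
  <<~COFFEE
    module.exports = (config) ->
      config.set
        frameworks: ['jasmine']
        browsers:   ['PhantomJS']
        files:      #{files.to_json}
  COFFEE
end

conf = karma_config('http://localhost:3000', ['dashboard_service_spec.js'])
```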
Choosing the Right Tool
Now that we have these two powerhouses, Rails and Angular, working side by side, we are faced with a new dilemma: which tool to use? When we first began using Angular we were overwhelmed by its power and became tempted to use it for everything, even for tasks that would have been better done server-side. Now with our honeymoon behind us, we can take a tempered look at Angular and consider these principles before setting finger to keyboard:
- Angular is just JavaScript. JavaScript is for dynamic things that change after page load. If it doesn't need to change during the lifetime of a page, odds are you can render it server-side using ERB.
- JavaScript is hackable. Keep the heavy lifting server side, especially if you find yourself exposing or duplicating business logic that really ought to be encapsulated in your Rails models. Unit tests are easier server-side, too (at least in our setup).
- Write less code. In the end, probably the most important question to ask yourself is: which way requires the least code?
To illustrate these principles, allow me to share a story in which we wantonly violated all three. We began with a new Subscription model in Rails to track customer access to our new products. We needed information from this model in the UI, so the first thing we did was create a Subscription service in Angular, which we populated with the current user's subscription records by dumping them all into gon on every page load. Some of these subscriptions might have been expired or duplicates, so we filled the Angular service with methods to retrieve the most current and relevant subscription, and wrote Jasmine tests for these — all mostly duplicates of methods and tests that already existed in Rails.
After we released our new UI and the dust had settled, we took another look at this Angular service and found that the only thing it was ultimately used for was printing a static string in the footer of the page that said "Thanks for using Localytics Enterprise Analytics." We cut the entire service out — about 30 lines of CoffeeScript plus 60 lines of Jasmine tests — and replaced it with a one-line method on the Ruby model, which we called from our ERB template. Refactoring never felt so good.
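For flavor, here is a hedged reconstruction of that refactoring; the class, field, and method names are invented, but the shape is faithful: the entire client-side service boiled down to one server-side method and one line of ERB.

```ruby
# Invented stand-in for the surviving model logic (illustration only):
Subscription = Struct.new(:plan, :expired) do
  def expired?
    expired
  end
end

# The "one-line method" that replaced ~30 lines of CoffeeScript service
# code plus its Jasmine tests:
def current_plan_name(subscriptions)
  subscriptions.reject(&:expired?).map(&:plan).last
end

subs = [Subscription.new('Community', true), Subscription.new('Enterprise', false)]
current_plan_name(subs)  # => 'Enterprise'

# ...rendered server-side in the footer, no Angular required:
#   Thanks for using Localytics <%= current_plan_name(current_user.subscriptions) %> Analytics.
```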
Conclusion
We're still reinventing our practices as we evolve, but we're mostly pretty happy with how we have Angular and Rails playing together, and no longer plan to abandon the asset pipeline. That is, until we throw away our Rails project and rewrite it in Clojure, as my boss keeps half-joking.
Do you have an Angular/Rails app in production? We’re interested to hear how others have tackled these problems.
P.S.: We're hiring.