
The Productivity Cycle


People are interesting. We know so little about ourselves compared to what we’d like to think we know. We’re all subtly different even though we’re, on the whole, overwhelmingly predictable. There are copious studies to back up “average” data from people, and plenty of armchair anthropologists and psychologists who have very nice theories on how we tick. But most of us aren’t ‘average,’ and perhaps some of us tock more than we tick (mark this as the first of many “stretches” in this post).

I’d like to take the chance, while we’re still mostly clueless, to write some of my non-scientific theories on cognitive ability and “focus” (the noun) in the context of creating and building things (or “shipping,” as it were).

Motive and Privilege

I see a thin thread woven through everything that I do, and everything that I see my peers do.

Legacy.

We’re all creative, and many of us want to “build things.” I hear it often in interviews, and over beers, that it’s just built into the core of many of us to want to “create something from nothing.” It often takes shape as some other pleasant-sounding turn of phrase, but the “need to create” seems to be innate in all but the most blissful of us. In my experience, this can often be traced back to the desire for someone to ‘make their mark,’ or as I previously alluded, the desire to pursue a legacy.

I’m all for this. This feeling is built into my DNA as well. And while I would hope to have purely selfless reasons for wanting to create things, I’m certain that my ego and my human nature drive me to want to live longer than my bones.

I can’t help but lead with “ego” and “legacy” because the entire ability to create something from nothing (programming) and to get disproportionately rewarded for doing so (programming salaries) comes along with more than a touch of blind privilege. It’s a pretty good spot to be in, to be hacking your mental focus levels so you can build bigger and better websites. Blindness to our privilege is mostly to be expected, as long as we fight to improve our own awareness and adjust our actions appropriately. Read on now, forewarned of my ego, blindness to privilege, and extreme lack of brevity.

Caffeine is a Zero-Sum Game

[Image: /images/prod_cycle/focuswave.png]

Ostensibly, this is a graph of my potential “focus” levels during a day.

I read a fascinating blog post several years ago by Arvind Narayanan called The Calculus of Caffeine Consumption. It was pretty eye-opening for me to see my “focus” levels throughout the day graphed out as a sine wave. Naturally, it’s a massive over-simplification, but in my personal experience, not an entirely incorrect approximation of my energy levels in a day. I am tired, perk up, get hungry and eat and dip down, then hit a stride, rinse and repeat. It’s not a sine wave, but it’s a wave alright.

Like my Uncle Ray always says, “Any wave is sinusoidal with a sufficiently low sampling rate!”, though I think eventually it becomes a straight line. (Which I guess is a wave with an amplitude of zero?)

Our wave of “focus”/energy has all the normal properties of a wave: a wavelength, a period, and an amplitude. The premise of the article is that caffeine, a stimulant, is often consumed during the low points in our day. So we drink coffee when we wake up, or when we feel tired after lunch, in order to boost our ability to concentrate and sometimes even function. The effect of this decision that is lost on us is that it reduces the total amplitude of our “focus” wave.

In other words, by consuming caffeine to reduce how low we get during the low points, we inherently reduce our high points as well.

Narayanan states that this type of consumption actually works pretty well for people who are trying to keep from falling below a threshold of “focus” or energy. Consider a construction worker, data-entry employee, truck driver, or similar, where the time put into the task is important, and dipping below a certain level of energy would be dangerous or deadly.

Creative workers, however, don’t have the same limitations. They often need “a moment of clarity” to spark their work, or to break out of “writer’s block.” They would want the absolute highest amplitude of focus, regardless of the consequences on the downside, and to be working while at their highs.

Narayanan, a true hacker spirit (read: CS Ph.D. and Assistant Professor at Princeton), attempts to exploit this relationship to his advantage. Could he use caffeine to increase the amplitude of his “focus”? A conclusive answer here would take quite a few more double-blind studies, but in my experience, and seemingly in Dr. Narayanan’s as well, the answer is a resounding “yes.”

Drink a latte 30 minutes before a high point, work as hard as you can, and then use the warmth of your laptop to take a nap a few hours later, because you’ll be spent.

Caffeine is a zero-sum game, but you can use that to your advantage. Consuming caffeine in time for it to affect you at the exact peak of your “focus wave” effectively makes the highs higher, and the lows lower. The rich get richer, while the poor get poorer. It’s like the sad state of our socioeconomic classes, except not awful, and for brain power!
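If it helps to see the shape of the argument, here’s a toy model in JavaScript. To be clear: every number and curve here is made up purely for illustration, zero science involved.

// Focus as one rough daily cycle, peaking around 10am
function baselineFocus(hour) {
  return Math.sin((hour - 4) * Math.PI / 12);
}

// A crude caffeine spike that kicks in ~30min after the dose and fades
function caffeineBoost(hour, doseHour) {
  var dt = hour - (doseHour + 0.5);
  return dt < 0 ? 0 : 0.5 * Math.exp(-dt / 2);
}

// Dosing at 9:30am stacks the boost on top of the natural ~10am peak;
// dosing at a trough instead just flattens the overall wave.
for (var hour = 6; hour <= 22; hour++) {
  var focus = baselineFocus(hour) + caffeineBoost(hour, 9.5);
  console.log(hour + ':00 ' + focus.toFixed(2));
}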

You Require More Vespene Gas

Many people who have read Daniel Kahneman’s Thinking, Fast and Slow will remember the chocolate cake experiment.

[Image: /images/prod_cycle/cake.jpg]

Photo by Food Thinkers under CC-by-NC-SA

The experiment, led by Baba Shiv at Stanford University, is pretty simple. Half the students are asked to remember a 2-digit number, and half are asked to remember a 7-digit number. They walk down a hall, are told they’re done, and are asked if they want chocolate cake or fruit salad as a refreshment. The students who were asked to remember the 7-digit number were nearly twice as likely to choose the chocolate cake. Why is this?

Simply put, we have a finite amount of mental energy. The students who spent their energy on remembering 7-digit numbers had no more energy left to spend on avoiding cake.

The prefrontal cortex is primarily responsible for the things that creative people crave, like focus, but also other functions like short-term memory, abstract problem solving, and willpower. The conclusion of the chocolate cake experiment implies that there is a finite amount of resources in the prefrontal cortex, and that one system’s use of those resources could directly affect the available resources of another function.

[Image: graph with the area under the curve highlighted]

In the context of our sine wave, I’m pretty sure I could make a good reference to calculus, because this concept has a lot to do with the area under the curve, but I’ll get it wrong, and I won’t hear the end of it in the comments.

This plays into our first theory quite well. If we use more mental energy in a quick burst (because we have a higher amplitude), we’ll need deeper rest in order to recharge this energy. During our rest periods, the troughs in our sine-wave, we have to refill the energy that we spent during our peaks.

I can’t safely postulate much about how best to do this, except that studies show naps are remarkably good for exactly this. Since you’re probably not going to have your big break during a lull in your cognitive ability, why not speed up the process of getting to another high point? Nap away! Refuel the exact part of your brain that will allow you to get in the zone again.

Some folks will recommend a caffeine nap. I don’t have a ton of intentional experience with caffeine naps, but the idea is that if you consume caffeine just prior to taking a nap, you’ll sleep until it kicks in (usually takes at least 30min for caffeine to metabolize), and then it’ll allow you to skip some of the groggier[1] steps on your way back to productivity. It probably works.

[1] Apparently, I can’t write the word ‘grog’ without thinking about Guybrush Threepwood.

Moderation

I would add, finally, that Dr. Narayanan found that the body adjusts to regular caffeine intake in as little as 2 to 3 weeks. That means that if you’re a long-time coffee-drinker, you really do need that cup of coffee to get going in the morning in order to get up to your pre-caffeine addiction baseline.

He notes that it takes 5 days to reach adenosine normality (good) if you’re not consuming caffeine. He retrospectively adds that he initially did not place enough value in these ‘quitting cycles’ (hinting that perhaps you should not repeat his mistakes).

Practical Application

Most of this should be obvious at this point, so I won’t drag on.

  • Plan your high points, work during them
  • Refuel during low points instead of stretching them out with forced work
  • Slingshot your amplitude with caffeine; 3 weeks on and 1 week off
  • Avoid non-intentional caffeine, only drink it on schedule
  • Don’t ignore the people you love

The Nap Month

I have nothing but my own personal experience to back this up, but I find that my motivation and passion levels work in a similar way on a yearly cycle as they do on a daily cycle. For parts of the year, I’m excited by work, and go out of my way to build things in my free time, and other parts of the year, I just want to come home and binge watch a season of a TV show on Netflix.

For this reason I have to wonder if some of the same energy principles apply. Can I increase the intensity and duration of the productive months? If so, at what cost?

I really struggle (in a not very serious kind of way) during the month or two when I feel like I can’t get anything done, but I think we can use a similar trick to help our productivity pick up again. Specifically, we need to A) simulate caffeine on a macro scale, and B) simulate naps on a macro scale.

Macrocaffeine

I find that an interesting, new passion project gets my creative energy flowing much better than jumping into old work. If I really want to get in the mood to program, I’ll hop off my projects with deadlines and build something that I know probably won’t ever even get finished, but that I’m just excited to build.

Caffeine blocks adenosine (a sleep chemical) receptors in our brain, causing us to avoid sleep longer. If we substitute the current list of things we have to do, with a temporarily more engaging list, it may give us the same slingshot effect that caffeine does for our energy on a micro level.

Macronaps

This one seems a little more straightforward to me.

Just stop working so much.

Take a nap from your work. I understand that this is not a viable solution for businesses that intend to make money, but I think there can be some good compromises here. Namely, most hip tech companies have very generous vacation allowances already. Use your vacation during a low point, and in perfect cliché form, “recharge.”

Additionally, companies with large enough teams can have two modes of employment that employees could ideally opt into. “Passion-mode” and “coast-mode.” Someone who is on an upswing should get put on a big project that’s going to take a lot of energy. Someone who is burnt out from the last big project should be given work that will allow them to show up a little late, and leave a little early.

There’s lots of work like this. The support team at your company would probably love it if developers frequently did 6-hour customer support stints. In no way do I imply that the support team doesn’t regularly break its back working very demanding hours/problems and doesn’t deserve their own down-time. There’s also plenty of documentation that I’m happy to churn out during my down month.

The point is to allow employees to be maximally lazy while still maintaining their minimum required value. The more quickly I’m able to get through the less motivated time, the more quickly I’ll be able to jump back into a difficult and challenging project and do it well.

A Final Moment of Clarity

I have very few projects and accomplishments that haven’t come to me in a “moment of clarity.” Naturally, I want to maximize the amount of these moments, and increase the odds that I’ll be working on something that I love when they occur. I have no idea if hacking your body is a good long-term strategy for making this happen, but I find that researching all of this sleep stuff is an excellent tool for procrastinating during my focus droughts.

I can’t guarantee that any of this will resonate with you, or work for you if you try it. But I do think that everyone goes through the motivational recessions, and we should be actively attempting to eliminate or reduce them. What is Quantitative Easing for the Soul?

I simply want my hard work to be spent most efficiently.

Special thanks to Michelle Bu for reading this ahead of time.

Understanding JavaScript Inheritance


So someone shoulder-taps you and asks you to explain the concepts behind JavaScript Inheritance to them. In my eyes you’ve got a few options.

The Terminology Play

You mention that it’s prototypal inheritance, not prototypical and pretty much gloss over the rest, comfortable in your superiority in terminology. You may go as far as saying “Objects just come from other Objects because there aren’t any classes.” Then you just link to Crock’s Post on it, and try to seem busy for the next few days.

Many years later you find out that Prototypal and Prototypical are synonyms, but you choose to ignore this.

The Like-Classical-Inheritance-But-Different Play aka the Run-On Sentence Play

“So in Java, like, you have classes or whatever, right? Well so imagine that you don’t have those, but you still want to do that same type of thing or whatever, so then you just take another object instead of a class and you just kind of use it like it’s a class, but it’s not because it can change and it’s just a normal object, and if it changes and you don’t override the object, oh yea, so you can decide to override the parent object class thing, so if you dont do that and the parent changes the link is live…”

And so forth.

The Animal Play

This is a pretty popular one.

So let’s say we want to make an Animal class in our code. As is often necessary in production JavaScript applications.

First we make a “constructor function,” which acts kind of like a constructor method on the inside of a class in a classical language when it’s invoked with the new operator. Except this one is on the outside.

function Animal (name) {
  this.name = name;
}

var myAnimal = new Animal('Annie');

Then we want to have actions that all animals can do.

Animal.prototype.walk = function () {
  console.log(this.name + ' is walking.');
};

But then you want to define a more specific type of animal. Things start to get weird.

// I think we need to define a new Animal type and extend from it somehow

function Dog (name) {
  this.name = name;
}

// BUT HOW DO WE EXTEND
// WITHOUT AN INSTANCE TO USE?
Dog.prototype = Animal.prototype; // ?? I HAVE NO IDEA
// Maybe that'll work for some stuff?
// ProHint™: probably not much, once you start modifying one of them :D
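(For reference, and at the risk of spoiling the bit early: the pattern that usually untangles this is to make a new object that inherits from the parent prototype, rather than sharing it. A sketch:)

function Dog (name) {
  Animal.call(this, name); // run the parent constructor on the new instance
}

// A fresh object whose prototype is Animal.prototype, so
// modifying Dog.prototype no longer touches Animal.prototype
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;

var sparky = new Dog('Sparky');
sparky.walk(); // "Sparky is walking."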

Then you remember that Prototypal Inheritance doesn’t really do ‘classes’ so much. So you do something like this:

var Dog = new Animal('Annie'); // ??? NO THATS NOT IT >:(

// Maybe we can try Object.create? I hear it's prototypal-y
var Dog = Object.create(Animal);

// Maybe that worked? Let's see...
var myDog = new Dog('Sparky');
// TypeError: object is not a function

// Shucks

And you eventually simply converge on the…

The Father/Son Analogy Play

Here we go. Finally a real world example of ‘instances begetting instances.’ It’ll be a perfect analogy. It’s even an interview question some places. Let’s see how we might implement the relationship of a father and son (or a parent to its child) in JavaScript.

We’ll start out like we did before, with a Human constructor

function Human( name ) {
  this.name = name;
}

Then we’ll add in a common human shared action.

Human.prototype.sayHi = function () {
  console.log("Hello, I'm " + this.name);
};

So we’ll create my dad first.

// Instantiate him
var myDad = new Human('Bill Sexton');

// Greet him
myDad.sayHi();
// "Hello, I'm Bill Sexton"

Score. Now let’s create me.

// Let's use ES5 `Object.create` in order to be as 'prototypal' as possible.
var me = Object.create(myDad);
me.sayHi();
// "Hello, I'm Bill Sexton"

It’s a start! Seems like I inherited a little too much from my dad, but I inherited, nonetheless.

Let’s try to smooth things out to make the analogy work better. So we’ll instantiate objects without a name and have a parent name them after they’re created.

// Wrap it all together
function makeBaby(parent, name) {
  // Instantiate a new object based on the parent
  var baby = Object.create(parent);

  // Set the name of the baby
  baby.name = name;

  // Give the baby away
  return baby;
}

Perfect. Now the baby can sayHi on its own.

var alex = makeBaby(myDad, 'Alex Sexton');

alex.sayHi();
// "Hello, I'm Alex Sexton"

Err. yipes. Babies can’t talk. And what’s this deal with a baby being made by one parent? Not to worry, we can fix all of this.

First we’ll probably want to try to take two parents into the makeBaby function (no giggles).

function makeBaby(father, mother, name) {
  var baby = Object.create(...// fuuu
}

Multiple Inheritance! How did you get here? Ugh. Fine. We’ll just simply mock the human chromosome pattern into our little inheritance example.

// Let's take a set of 4 genes for ease of
// example here. We'll put them in charge
// of a few things.
function Human (name, genes_mom, genes_dad) {
  this.name = name;

  // Set the genes
  this.genes = {
    darkHair: this._selectGenes(genes_mom.darkHair, genes_dad.darkHair),
    smart:    this._selectGenes(genes_mom.smart,    genes_dad.smart),
    athletic: this._selectGenes(genes_mom.athletic, genes_dad.athletic),
    tall:     this._selectGenes(genes_mom.tall,     genes_dad.tall)
  };

  // Since genes affect you from birth, we can set these as actual attributes
  this.attributes = {
    darkHair: !!(~this.genes.darkHair.indexOf('D')),
    smart: !!(~this.genes.smart.indexOf('D')),
    athletic: !!(~this.genes.athletic.indexOf('D')),
    tall: !!(~this.genes.tall.indexOf('D'))
  };
}

// You don't have access to your own gene selection,
// so we'll make this private (but in the JavaScript way)
Human.prototype._selectGenes = function (gene1, gene2) {
  // Assume that a gene is a 2 length array of the following possibilities
  // DD, Dr, rD, rr -- the latter being the only non "dominant" result

  // Simple random gene selection
  return [ gene1[Math.random() > 0.5 ? 1 : 0], gene2[Math.random() > 0.5 ? 1 : 0] ];
};

Human.prototype.sayHi = function () {
  console.log("Hello, I'm " + this.name);
};

function makeBaby(name, mother, father) {
  // Send in the genes of each parent
  var baby = new Human(name, mother.genes, father.genes);
  return baby;
}

Elementary. My only beef is that we’re no longer using real prototypal inheritance. There is no live link between the parents and the child. If there were only one parent, we could use the __proto__ property to set the parent as the prototype after the baby was instantiated. However, we have two parents…
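(With one parent, a sketch of that might look like the following; __proto__ was non-standard at the time, but widely implemented.)

// Hypothetical single-parent version, for illustration only
function makeBabyWithOneParent(parent, name) {
  var baby = new Human(name, parent.genes, parent.genes);

  // Property lookups on `baby` now fall through to the live parent object
  baby.__proto__ = parent;

  return baby;
}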

So we’ll need to implement runtime getters that do a lookup for each parent via ES Proxies.

function makeBaby(name, mother, father) {
  // Send in the genes of each parent
  var baby = new Human(name, mother.genes, father.genes);

  // Proxy the baby
  return new Proxy(baby, {
    get: function (proxy, prop) {
      // shortcut the lookup
      if (baby[prop]) {
        return baby[prop];
      }

      // Default parent
      var parent = father;

      // Spice it up
      if (Math.random() > 0.5) {
        parent = mother;
      }

      // See if they have it
      return parent[prop];
    }
  });
}

So now we support live lookups of parents, and, you know, some simplified genetics.

Isn’t that just a simple, well-defined, example of how straightforward inheritance can be in JavaScript?

Conclusion

Sometimes these analogies get pretty crazy in my head, and I start to think that maybe, instead of trying to apply known examples from the outside world in order to help people understand, it’s often better to just let someone know why they might wanna use inheritance in their programs!

I personally find the best Prototypal Inheritance analogy to be:

var defaults = {
  zero: 0,
  one: 1
};

var myOptions = Object.create(defaults);
var yourOptions = Object.create(defaults);

// When I want to change *just* my options
myOptions.zero = 1000;

// When you wanna change yours
yourOptions.one = 42;

// When we wanna change the **defaults** even after we've got our options
// even **AFTER** we've already created our instances
defaults.two = 2;

myOptions.two; // 2
yourOptions.two; // 2

So stop making everything so confusing and go program cool stuff, and ignore my old presentations when I used these analogies.

<3z

Deploying JavaScript Applications


Preface: Nothing in this post is necessarily new, or even anything I thought of first (save for a name or two). However, I’m writing it because I’d like to start building some consistency and naming conventions around a few of the techniques that I am using (and are becoming more common), as well as document some processes that I find helpful.

Much of this comes from my experience deploying applications at Bazaarvoice as a large third party vendor, and should probably be tailored to your specific environment. I’m sure someone does the opposite of me in each step of this with good results.

Also, I fully understand the irony of loading a few MBs of GIFs in a post largely about performance, but I like them. Any specific tools I mention are because I’m familiar with them, not necessarily because there are no good alternatives. Feel free to comment on other good techniques and tools below. Facts appreciated.

You

You work on a large app. You might be a third party, or you might not be. You might be on a team, or you might not be. You want maximum performance, with a high cache rate and extremely high availability.

Dev with builds in mind

[Image: /images/js_app_deploy/gruntbuild.gif]

Locally, you might run a static server with some AMD modules, or a “precompile server” in front of some sass and coffeescript, or browserify with commonjs modules. Whatever you’re doing in development is your choice and not the topic du jour.

The hope is that you have a way of taking your dev environment files and wrapping them up into concisely built and minified JavaScript and CSS files. Ideally this is an easy step for you, otherwise, you’ll tend to skip it. Optimize for ease of mind here. I tend to disagree with the sentiment that ‘script tags are enough.’ Try to manage your dependencies in a single place, and that place probably isn’t in the order of your script tags in your HTML. Avoiding this step is easy until it isn’t.

Loading what you need is better than byte shaving

One build-stage technique that is ideal for performance is building minimal packages based on likely use. At page load, you’ll want to load, parse, and execute as little JavaScript as possible. Require.js allows you to “exclude” modules from your builds and create separate secondary modules. Rather than shaving bytes in your app files, you can avoid loading entire sections of code. Most sections of an app have predictable entry points that you can listen for before injecting more functionality.

In our current app, only a fraction of the users click on the button that causes a specific flow to pop up. Because of this we can save ~20kb of code at page load time, and instead load it as a mouse gets close to the button, or after a few seconds of inactivity (to prime the cache). This technique goes a much longer way than any of your normal byte-saving tricks, but it is not always the easiest, and for that reason is often avoided.
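A sketch of what that can look like with an AMD loader; `popupFlow` here stands in for whichever module you excluded from the main build:

var button = document.getElementById('open-popup'); // hypothetical button

function primePopupFlow() {
  // require() fetches the excluded package on demand;
  // subsequent calls are served from the loader's cache.
  require(['popupFlow'], function (popupFlow) {
    popupFlow.prepare();
  });
}

// Load as the mouse gets close to the button...
button.parentNode.addEventListener('mouseover', primePopupFlow);

// ...or after a few seconds of inactivity, to prime the cache.
setTimeout(primePopupFlow, 5000);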

[Image: /images/js_app_deploy/delaypackage.gif]

Check your network panel the next time you have Gmail open to see how Google feels about this technique. They take an extra step and bring the code in as text, and don’t bother parsing or executing it until they need to. This is good for low-powered/mobile devices.
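I don’t know exactly how Gmail does it, but the general shape of the trick is easy to sketch: pull the code down as plain text, and only pay the parse/execute cost when it’s needed. The file path below is invented for the example.

var pendingSource = null;

var xhr = new XMLHttpRequest();
xhr.open('GET', '/app/lazy-section.js', true);
xhr.onload = function () {
  // Sitting in a string: downloaded, but never parsed or executed
  pendingSource = xhr.responseText;
};
xhr.send();

function runLazySection() {
  if (pendingSource) {
    // Pay the parse/execute cost now, on demand
    (new Function(pendingSource))();
    pendingSource = null;
  }
}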

In fact, some Googlers released a library, Module Server, that allows you to do some of this dynamically. It works with lots of module formats. And technically you could just use it to see how it decides to break up your files, and then switch over to fully static files after you get that insight. They presented on it at JSConf.eu 2012.

So instead of using a microjs cross-domain communication library that your coworker hacked together, just delay loading EasyXDM until you need to do cross domain form POSTs.

Don’t penalize modern users

I’m all for progressive enhancement, and have to support IE6 in our primary application. However, it pains me when modern browser users have to pay a performance price for the sins of others. It’s a good idea to try to support some level of “conditional builds” or “profile builds.” In the AMD world, you can use the has.js integration, or if you’re feeling especially dirty, a build pragma. However, third-parties have written some pretty nifty tools for doing this as a plugin.

One of the best tools for this that I’ve seen is AMD-feature. It allows you to use a set of known supported features to load the best-fitting build for the current user. This can be great on mobile. You can silently switch out jQuery with Zepto (assuming you stick to the shared subset). You can add and remove polyfills for the correct users. If 20% of your JavaScript is loading for 3% of your users, something is backwards.

define({
    'dropdown': [
        {
            isAvailable: function(){
                // test if we are on iOS
                return iOS ? true: false;
            },

            implementation: 'src/dropdown-ios'
        },
        {
            isAvailable: function(){
                // if we end up here, we're not on iOS,
                // so we can just return true.
                return true;
            },
            // Naturally this is simplified and doesn't actually
            // imply Android
            implementation: 'src/dropdown-android'
        }
    ]
});

// In your code, you would load your feature like this:

define(['feature!dropdown'], function(dropdown){

    // The variable 'dropdown' now contains
    // the right implementation - no matter
    // what platform the code is executed on,
    // and you can just do this:
    var myDropdown = new dropdown();
});

One less jpeg

Lots of people like repeating this one. I would have guessed Paul Irish coined it, but Adam J Sontag naturally coined it. The idea is that if you loaded one less JPEG on your site, you could fit quite a bit of unshaved JavaScript in its place. Consider this the next time you are sacrificing readability or compatibility for file size.

[Image: /images/js_app_deploy/onelessjpg.gif]

Requests matter

File size aside, the balance of a fast JS deployment lies somewhere between the number of requests and the cacheability of those requests. It’s often alright to sacrifice the cacheability of a small script if you can inline it without causing an additional request. The exact balance is not one that I could possibly nail down, but you can probably think of a file in your application that is dynamic enough and small enough that it might make sense to print it inline in your page.

Package all the pieces together

Fonts and Icons

These days, these two are synonymous. I really like using fonts as icons and have done so with great success. We try to find appropriate unicode characters to map to the icons, but it can sometimes be a stretch. Drew Wilson’s Pictos Server is an incredible way to get going with this technique, though I might suggest buying a font pack in the end for maximum performance (so you can package it with your application).

[Image: /images/js_app_deploy/pictos.png]

First, we inline fonts as data URIs for supporting browsers. Then we fallback to referencing separate files (at the cost of a request), and then we fallback to images (as separate requests). This means we end up with different builds of our CSS files. Each CSS build only includes one of the techniques, so no one user is penalized by the way another browser might need fonts. The Filament Group has a tool for this called Grunticon. I’d highly recommend this technique. For every modern browser, you have a single request for all styles and icons, with no additional weight from old IEs that don’t support data-URIs.

CSS Files

It’s typically the case that updates to JavaScript files necessitate changes to CSS as well. So these files usually have the same update times. For that reason it’s pretty safe to package them together.

So, as part of our build step, we first build our necessary CSS for our package into a file (for Bazaarvoice: styles are dependencies of templates, which are dependencies of views, which are dependencies of the separate module packages we’re loading, so this is an automatic step). Then we read this file in, minify it, and inject it as a string in our main JavaScript file. Because we have control over when the templates are rendered, we can just inject the CSS into a style tag before rendering the template. We have to render on the serverside occasionally, as well, and in these cases I would recommend against this technique to avoid a flash of unstyled content.

var css   = '#generated{css:goes-here;}';
var head  = document.head || document.getElementsByTagName('head')[0];
var style = document.createElement('style');

style.type = 'text/css';

if (style.styleSheet) {
  style.styleSheet.cssText = css;
}
else {
  style.appendChild(document.createTextNode(css));
}

// Only assume you have a <head> if you control
// the outer page.
head.appendChild(style);
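For completeness, a rough sketch of the build-time half in node; the file names and the `__CSS__` placeholder token are inventions for this example:

var fs = require('fs');

// Read the built CSS; a crude whitespace collapse stands in
// for whatever real minifier you already run.
var css = fs.readFileSync('dist/build.css', 'utf8');
var minified = css.replace(/\s+/g, ' ').trim();

// Swap an agreed-upon token in the built JS for the CSS string,
// producing the `var css = '...'` line used in the snippet above.
var js = fs.readFileSync('dist/build.js', 'utf8');
fs.writeFileSync('dist/build.js', js.replace('__CSS__', JSON.stringify(minified)));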

Since we inline the fonts and icons into our CSS files, and then inline the CSS into our JS file (of which only 1 is injected on load), we end up with a single packaged app that contains fonts, icons, styles, and application logic. The only other request will be necessary media and the data (we’ll get to those).

You may notice that we now have a couple of combinations of packages. Yep. If we have 3 ways to load fonts/icons multiplied by the number of build profiles that we chose to create (mobile, oldIE, touch, etc), we can get 10-20 combinations fast. I consider this a really good thing. When you generate them, have some consistent way of naming them, and we’ll be able to choose our exact needed app for a user, rather than a lot of extra weight for other users.

Quick Note: Old IEs can be fickle with inlining a lot of CSS. Just test your stuff and if it breaks, just fall back to link tag injection for oldIEs.

The Scout File

This post actually started out as a means to solidify this term. Turns out I am a bit more long-winded than I anticipated.

The Scout File or Scout Script is the portion of JavaScript that decides which package needs to be loaded. It kicks off every process that can happen in parallel, has a low cache time, and is as small as possible.

It gets its name from being a small entity that looks out of the cache from time to time to warn everybody else that things have changed. It’s ‘scouting’ for an app update and gathering data.

// Simplified example of a scout file
(function () {
  // Feature test some stuff
  var features = {
    svg: Modernizr.svg,
    touch: Modernizr.touch
  };

  // The async script injection fanfare
  var script = document.createElement('script');
  var fScript = document.getElementsByTagName('script')[0];

  var baseUrl = '//cool-cdn.com/bucket/' + __BUILDNUM__ + '/';


  // Build up a file url based on the features of this user
  var featureString = '';
  for (var i in features) {
    if (features.hasOwnProperty(i) && features[i]) {
      featureString += '-' + i;
    }
  }
  var package = 'build' + featureString + '.js';

  // Set the URL on the script
  script.src = baseUrl + package;

  // Inject the script
  fScript.parentNode.insertBefore(script, fScript);

  // Start loading the data that you know you can grab right away
  // JSONP is small and easy to kick off for this.
  var dataScript = document.createElement('script');

  // Create a JSONP Url based on some info we have.
  // We'll assume localstorage for this example
  // though a cookie or url param might be safer.

  window.appInitData = function (initialData) {
    // Get it to the core application when it eventually
    // loads or if it's already there.
    // A global is used here for ease of example
    window.comeGetMe = initialData;
  };

  // If we're on a static site, the url might tell us
  // the data we need, and the user cookie might customize
  // it. Simplified.
  dataScript.src = '//api.mysite.com' +
                   document.location.pathname +
                   '?userid=' +
                   localStorage.getItem('userid') +
                   '&callback=appInitData';

  // Inject it
  fScript.parentNode.insertBefore(dataScript, fScript);
})();

If you’re a third party or have little control over the pages you’re injected on, you’ll probably use a file. Otherwise, the code should be small enough and dynamic enough to warrant inlining on a page.

Build apps into self-contained folders

When you build your application, you end up with a set of static files in a folder. Take this folder of files and assign a build number to it. Then upload this to a distributed content delivery network. S3 with CloudFront on top of it is an easy choice. The Grunt S3 Plugin is a good way to do this with the Grunt toolchain. Bazaarvoice has an Akamai contract, so we tend to use them, but the idea is that you are getting your built files onto servers that are geographically close to your users. It’s easy and cheap. Don’t skimp! Latency is king.

Now that you have an app on a static CDN, make sure it gets served gzipped (where appropriate; grunt-s3 can help with this), and then set the cache headers on your built files to forever. Any changes will get pushed as a different set of built files in a totally different folder, so these files should be guaranteed to never change. The only exception to this rule is the Scout File, which lives outside of the build folders in the root directory.

The scout file for our third-party app is a very small JS file that contains a build number and a bit of JavaScript to determine the build profile that needs to be loaded. It also contains the minimum amount of code to determine the initial data that we’re going to need for a page. It doesn’t have jQuery, or really any dependencies, it just does exactly what it needs to do. This file is cached for about 5 minutes (should be relatively short, but close to the average session length).

Parallelizing the initial data request

Many people use each of their models to make separate requests for data once the app is loaded. Unfortunately, this is terrible for performance. Not only are there multiple requests, but they can’t be fired off until the BIG app files are loaded and executed. We want to parallelize the loading of our app and our data. This is going to be tough for some folks, but it’s a huuuge performance win.

We use node.js to run our models at build time. We feed in each of the “page types” that we know how to handle. For each of these page types, each model registers its intent to load data, and we build up a hash of data that is needed for each page type and stick that into the scout file.

Then we had our API folk create a batch API so we can make multiple data requests at once. We use this hash of needed data for each page type (we have less than 10 page types, and you probably do too) in order to fire off a single request for the data that all the models will need, before they are loaded. Unfortunately the way to do this changes drastically based on your framework, but it’s worth your time!
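I can only sketch the build-time half generically, since the real thing is tied to your framework; every name below is invented for the example:

// Run at build time in node. Each model declares what data it
// needs for a given page type via a (hypothetical) static method.
var models = [profileModel, reviewsModel, ratingsModel];
var pageTypes = ['home', 'product', 'profile'];

var dataNeeds = {};
pageTypes.forEach(function (pageType) {
  dataNeeds[pageType] = [];
  models.forEach(function (model) {
    // dataNeededFor() is an assumed convention, not a real API
    dataNeeds[pageType] = dataNeeds[pageType].concat(model.dataNeededFor(pageType));
  });
});

// The resulting hash gets serialized into the scout file, so one
// batch request can be fired before any model has loaded, e.g.:
// {"product": ["reviews", "ratings"], "profile": ["user"], ...}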

Statically generate your container pages and CDN them too

If you aren’t rendering templates on the server, then there’s likely no reason you shouldn’t be statically compiling all of your page shells at their appropriate urls, and uploading them to a static CDN along with your scripts. This is a huge performance improvement.

Distributing the HTML to geographically close servers can be a big win towards getting to your actual content more quickly. In the case that you are uploading your static HTML pages to the static CDN along with your JS application, your HTML files can become your Scout File. Put a small cache on each static HTML page and inline the contents that you would have put in a scout file. This serves the same purpose as before, except we’ve saved a request. The only thing that isn’t highly cached on a close-by server is the data, and we’re already loading that in parallel with our app if we’ve followed the previous instructions.

This means the main URL for your site is just a CNAME to a Cloudfront url. Doesn’t that just sound nice? Talk about good uptime! Of course that means the dynamic parts of your site would come from a subdomain like api.mysite.com or similar. The reduced latency of your initial HTML can be a very nice win for performance since you’ve inlined a scout file to immediately load the rest of the app in parallel.

The smart peeps at Nodejitsu put out Blacksmith to help with static site generation a while back, but there are plenty of options. Many apps are single-page apps with only an index.html file anyways, so you can skip the static generation altogether.

All this together

The goal in all of this is to:

  • geographically cache anything that’s static, not just images and jQuery.
  • cache your app until it changes, but not much longer.

The folder structure I normally see is something like:

# Simplified for Ease of Eyes
| S3-Bucket-1
  |- app
     |- b1234
        |- build.js
        |- build-svg.js
        |- build-optional.css
     |- b1235
        |- build.js
        |- build-svg.js
        |- build-optional.css
     |- b1236
        |- build.js
        |- build-svg.js
        |- build-optional.css
  |- index.html # or scout.js

The index.html file is the only thing that changes, everything else is just added. If we’re a third party, it’d be the scout.js file since we’d be included in someone else’s markup. Everything else has a 30yr cache header. We can upload our build into a folder, verify it, and then switch the build number in the scout file.

// Simplification of the above process
var build = '__BUILD__'; // Replaced at build time
injectApp('/app/' + build + '/build.js');

Deploying a new version of the app becomes “updating one variable.” This means that every user on the site will have a fully updated app in the amount of time you cached your scout file for. In our case it’s 5 minutes. It’s a pretty good trade off for us. We get lifetime caching for our big files and media, but have a very quick turn around time for critical fixes and consistent roll-outs. It also means that if we ever need to roll back, it’s a single variable change to get people fully back on the old code. Clean up old builds as you feel is necessary.

Other media requests

Naturally, you’ll have some logo images, or some promo images to load as part of the app. These should probably just be ImageOptim’d and sprited as well as possible. However, there is usually a second class of media on a site: thumbnails, previews, avatars, and such. For these files, I’d suggest using a mechanism to lazy load them. Make sure you’re doing smart things with scroll event handlers (hint: throttling the hell out of them), and don’t load 50 avatars if the user is 1000px away from that part of your app. Just be smart about this stuff. It’s not really my intent to cover this portion of app performance, since it’s not entirely related to deployment.
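A minimal sketch of that throttle + proximity check; the `data-src` attribute and the 1000px threshold are just conventions picked for the example:

// Only let `fn` run once per `wait` milliseconds
function throttle(fn, wait) {
  var last = 0;
  return function () {
    var now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn();
    }
  };
}

function loadNearbyMedia() {
  var images = document.querySelectorAll('img[data-src]');
  for (var i = 0; i < images.length; i++) {
    var rect = images[i].getBoundingClientRect();
    // Only load things within 1000px of the viewport
    if (rect.top < window.innerHeight + 1000) {
      images[i].src = images[i].getAttribute('data-src');
      images[i].removeAttribute('data-src');
    }
  }
}

window.addEventListener('scroll', throttle(loadNearbyMedia, 250));
loadNearbyMedia(); // catch anything already in range at load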

Wrap Up

There’s nothing that surprising about these techniques. Everything that could possibly be statically generated is statically generated, and thrown out on edge-cached servers. Every piece of functionality that isn’t needed on page load, isn’t loaded on page load. Everything that is needed is loaded in parallel right away. Everything is cached forever, save for the scout file and the data request (you can save recent requests in local storage though!).

You aren’t left with much else to optimize. You are always only loading and executing the minimum amount of JavaScript, and saving it for the maximum amount of time. Naturally the more common tips of not going overboard with external libraries, and paying attention to render performance, and serving HTML with the page response are all ways to change the performance (usually for the better), but this architecture fits well with many of today’s more app-like deployments.

There’s something really comforting about exposing a minimal dynamic API that needs to be fast and having everything else served out of memory from nearby static servers. You should totally try it.

Introducing the Jed Toolkit


The Jed Internationalization Toolkit

The Jed Toolkit is a collection of interoperable tools to help facilitate the full process of internationalizing applications in JavaScript. These tools have a wide range of utility, from small modules to help format messages, dates, and numbers to services that facilitate translation, and code integration. The goal of the project is to bring the experience and quality of internationalizing JavaScript applications up to par with the rest of the current state of JavaScript tooling.

I’m in the process of moving everything over, but you’ll likely want to watch this space: github.com/jedtoolkit and jedtoolkit.org

The Dojo Foundation and Future

I’m excited that The Jed Toolkit has been accepted into the Dojo Foundation so its users can be sure that it will be a safe, unencumbered resource for them into the future. I’m extremely happy to be part of a family that includes require.js, sizzle, and the dojo toolkit.

After being tasked with internationalizing a large application that I was building I quickly realized that there was little available for JavaScript developers. I had been using Gettext in a python application a few weeks prior, and decided it might be nice to implement in JavaScript. So I did. I called it “Jed” (soon to be gettext2.js) after Jed Schmidt, everybody’s favorite “hobbyist” JavaScripter / Japanese translator.
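For a quick taste, Jed’s chainable, gettext-style API looks roughly like this (a minimal sketch with the locale data elided; see the docs for the full shape):

var i18n = new Jed({
  // Compiled gettext-style translation data goes here
  locale_data: { /* ... */ }
});

var numFiles = 3;

// Singular
i18n.translate('Not found').fetch();

// Plurals, the gettext way
i18n.translate('%d file removed')
    .ifPlural(numFiles, '%d files removed')
    .fetch(numFiles);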

I was drawn to this problem because there were so few people considering its intricacies, but I was shown by some very smart folks that there was a lot more to internationalization than the little library I wrote. So I wrote another library, and ported a few others.

I was quite happy with how these were turning out. They weren’t especially hard to create, because most of them follow well-documented specifications. I really liked how ICU MessageFormat made a lot of decisions based on how translators think, instead of how programmers think. But naturally, they locked away that goodness behind a syntax/grammar that no non-programmer should ever have to deal with. MessageFormat is great for translations but not for translators. Not in the real world at least. That’s when I realized that the problem was not (only) the tools for writing international apps, but even deeper: in the tools and integration with translators.

It’s all about tools

The translation space hasn’t grown much since computers first existed. We can barely encode files correctly in 2012. However, in other spaces, like content authoring, we have a whole system of tools and integrations to map non-technical users’ intent to structured usable data for consumption. If your local tech writer wants to start a blog, they can! They don’t need to know how to set up a server or that HTML even exists.

The process of getting an app translated is cumbersome, and is a blocker to getting good applications out there. FTP’d zips and crazy XML specs mixed with Word documents rule the landscape. There are no decent APIs or automatic integrations that anybody is using at scale. I want to set out to change this.

Translators aren’t all to blame. If you were a translator and got the message “fair,” would you translate it as a carnival, or as ‘just’? We set our translators up for failure with our lack of context. We can do better. We can describe messages and their variables. We can offer examples and photos of the context. We can even translate the app in real time, so they can see their translation literally running in the place it will live.

The goal of the Jed I18n Toolkit is to help make the internationalization process much more accurate and enjoyable for all parties. We should be able to write our messages directly in our templates in whatever format we think is best. Our messages should be automatically culled, and deduped and sent into a translation queue. The translator shouldn’t be presented with anything other than things that help them translate. The programmer’s format should be irrelevant. Context is king, and a bunch of crazy sprintf characters and html are just noise. When the translations are done, they should exist as a service or api and be updated in real time. Gone should be the days of the 2 month translation code freeze. You should be able to write a post commit hook that gets your translations through the system as fast as you can find someone to translate them.

There’s a lot to decide on how to bring all of these ideas into the project in a generic, but still usable way, and it will take some time to get everything right. Right now I’m starting by putting in the few open source projects that are already out there as well as showing early beta work on some of the integration tools. Please be patient with me and send me your suggestions and frustrations so we can finally bring internationalization out of the dark ages.

Third Party JavaScript in the Third Person


At the end of last year I spoke at the awesome CapitolJS conference in DC. I was encouraged to talk a little more broadly about third-party JavaScript development and its various quirks. When I got back home, we had some time at AustinJS for a short talk. Luckily, Logan Lindquist is usually there to film talks for Austin Tech Videos. Long story short, my talk and slides are now available for Third Party JavaScript in the Third Person.

The slides are available (best viewed in WebKit) at: alexsexton.com/talks/thirdparty

I can’t thank Logan enough for doing Austin Tech Videos. It’s a treasure trove of all kinds of talks from around town.