Alex Sexton

Web Hacking. JavaScript.

The 15 Commandments of Front-End Performance


This list is the product of many years of experience in the front-end web development field. I maintain this list as a reminder to myself to always follow best practices, and to not compromise on performance, even if I’m in a time crunch. I read it before I start any project, and share it with my team at work (sometimes we joke and call them “codemandments”) so we can all be on the same page and do our part to keep the web fast.

Feel free to fork this for your own use.

[Image: 15 Codemandments]

The Commandments

  • I will gzip all server responses.

  • I will not include any blocking scripts.

  • I will use SVGs instead of JPGs, wherever possible.

  • I will not include ads, even ones that request users to join groups or lists on Facebook or Twitter.

  • I will debounce scroll events.

  • I will not include third party JavaScript libraries just to detect if users are part of my “Just Cool Friends” Facebook group, even if it wouldn’t take up that much extra load time.

  • I will ignore “the fold” - no matter what the client says.

  • I will resist the urge to use window.alert to inform visitors that there’s a Facebook group for cool friends and if they wanna join it, that’s fine, it only takes a few clicks.

  • I will not use translate3d as a hack.

  • I will not use synchronous XHR to request the list of friends in my Facebook group, and then use the list in order to check to see if the current visitor is on the list, and then show a warning to people who aren’t in the group that says that they have their priorities “messed up” and that “jeeze,” it’s just a stupid group, why can’t you just join it.

  • I will use a CDN to serve static content (with no cookies!).

  • I will not “waste bytes” in HTML comments to explain that I’d really appreciate it if you joined the Just Cool Friends™ Facebook group. Things haven’t really been the same for me since Linda left, and it’s just so easy to join that it’s actually a little bit rude that you wouldn’t. I don’t post much in there, and I won’t even know if you ‘mute’ the posts from showing up on your feed. But honestly it’s only like one or two posts every day, so it’s not like seeing them in your feed would kill you.

  • I mean it’s one crummy group that you join and it makes a guy feel better about himself. The number of people that join goes up and so does my happiness, how does that not make sense? If you have a problem with me or with something I’ve done in the past, you could just bring it up on the group, that’s literally what it’s for. Sometimes I just sit around refreshing the group page, waiting as those numbers tick up. Each number…

  • … is another dollar in the bank of my emotional stability. I scratch the pixels into my screen: “10,000,000”. I count in my sleep: “two-hundred-and-six, two-hundred-and-seven” - each time a friend is added my joy grows, my sadness pales, my existence means that much more. I weep as the numbers hit double and then triple digits. So many friends. So many lives touched. How can I be this lucky? How can I be this influential and popular? See Linda?! I’m not a “loner.” I have way more friends in my group than you ever had! Maybe you’re the loner, Linda, or should I call you “Lone-da?!” I hope you can find your own group one day. I hope you can be as meaningful and influential to so many people as I am. And another friend joins. And another friend joins.

  • I will minify all of my CSS.

The Monty Hall Rewrite


Here at Acme, Inc. we switched to Soy Milk when we rewrote our app, and now it’s 8.3x better than our old Almond Milk app!

I call this a Monty Hall Rewrite.

What’s a Monty Hall?

For those loyal readers who haven’t heard of the Monty Hall Problem, to which I’m alluding, it comes from an old game show called Let’s Make A Deal. It later became an interesting probability question, and even later a less interesting common interview question.

On Let’s Make A Deal (which Monty Hall originally hosted), contestants were shown three doors and told that behind one of the doors was a brand new car. If they could correctly guess which door hid the car, they could keep it. The guessing happened in two phases.

[Image: the Monty Hall problem]

First the contestant would choose a door that they believed had the car behind it. Then one of Monty Hall’s assistants would walk over to the other two doors (the ones that had not been chosen) and proceed to open one of them, always revealing a bad prize (often a donkey!). Then the contestant, who now had this new information, was allowed to stick with their original answer or they could switch to the remaining closed door. That would be the final allowed guess.

They’d open the final chosen door and their winnings (hopefully a car!) would be displayed.

In 1990, years after the show aired, Parade magazine’s advice columnist Marilyn vos Savant (of “Ask Marilyn” fame) was asked whether there was a particular advantage to staying or switching. Much to many folks’ surprise at the time, she answered that the probability of winning was greater if you switched. Thousands of people wrote in to tell her that she was wrong, but most (not all!) were eventually convinced.

My best summarization would be that, initially, each door has a 1/3 probability of being correct. Therefore, the door that the contestant first chooses has a 1/3 chance of being correct, and the other two doors together must account for the remaining 2/3. When the assistant reveals the donkey behind one of the remaining doors, it does not change these facts. The original choice still has a 1/3 chance of being correct, and the other doors together still have a 2/3 chance. However, now that the contestant knows that one of the non-chosen doors is a bad door, the whole 2/3 chance must lie in the unchosen, unopened door.

The conclusion is that you have a 2/3 chance of guessing correctly if you switch, and a 1/3 chance of being correct if you stay. This has since been proven formally as well as observed in repeated computer-simulated trials. It’s a bad interview question, but remains a popular Probability 101 problem and a decent anecdote.
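
If the arithmetic doesn’t convince you, a few lines of JavaScript might. Here’s a minimal sketch of one of those simulated trials (mine, not anyone’s official version):

// Returns true if the contestant wins the car
function playRound(switchDoors) {
  var car  = Math.floor(Math.random() * 3); // the winning door
  var pick = Math.floor(Math.random() * 3); // the first guess

  // The host always opens a losing, unpicked door, so switching
  // wins exactly when the first guess was wrong.
  return switchDoors ? pick !== car : pick === car;
}

function winRate(switchDoors, trials) {
  var wins = 0;
  for (var i = 0; i < trials; i++) {
    if (playRound(switchDoors)) { wins++; }
  }
  return wins / trials;
}

winRate(false, 100000); // ~0.333 if you stay
winRate(true, 100000);  // ~0.667 if you switch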

So is Soy Milk really 8.3x better than Almond Milk? Not really, obviously. I don’t think this metaphor goes very deep, but what it hopefully gets across is that in the case of a rewrite, the new framework that you choose doesn’t necessarily have more merit than your original choice. The switch itself is what’s important.

To immediately break down my own metaphor, the switch doesn’t matter as much as the rewrite itself, but people don’t tend to rewrite things on their old stacks.

If you had started with B and rewritten it in A, you’d also likely have better results than in your first pass.

The only fair comparison is a rewrite of your app with your original tools as well as a rewrite of your app with the new tools. This isn’t very practical, and I’m not advocating it. I’m simply advocating awareness of the fallacy.

I’d like us all to be keenly aware of what it really takes to make great software, and to me that involves avoiding false traps. The sexiness of switching to a new, hip library often comes along with a strong confirmation bias, and an even stronger sunk-cost bias. Let us measure only what we are able to measure and leave the rest to the marketing teams and social media experts.

Real Quick For Car People

[Image: a 1989 Mercedes C Class]

“This 2015 Hyundai Sonata is way better than our old 1989 Mercedes C Class.”

Please ignore the 2015 Mercedes C Class.


Switching, Rewriting, & Refactoring

Throw away any assumptions or knowledge of tools that you have and purely consider that if you are switching, rewriting, or refactoring, you are now n years better at programming than you were when you initially wrote the software.

Now factor in that you’ve had x years more time to consider the exact problem that you’re trying to solve as well as rid yourself of uninformed assumptions. When going in for the rewrite, you have a much clearer picture of what a successful product looks like, purely because of the initial app. You know to abstract certain parts of the code that need to grow, and to externalize other parts that you know you won’t be able to support indefinitely. For all its negative qualities, second system syndrome is actually somewhat helpful in understanding exactly what needs to be built (even though there isn’t time to build it all).

In your original app, things almost certainly went directions that you weren’t expecting. You had to tack on widgets and endpoints that you never intended, and you ended up with some frankenversion of your original vision. The rewrite allows you to consider all of this stuff from the beginning (at least for a bit), which results in things being ‘faster’ or ‘more secure’ or ‘more user friendly’ or whatever metric by which your new app beats the old one.

If you’d like another bad metaphor, imagine your original app as The Homer.

Are you talking about any specific rewrite? (aka are you sub-blogging me?)

No. This seems to come up a lot, though. I’m happy to tell you, directly to your face, that I think you’re measuring things incorrectly. If you must have an example, I’d say that Facebook’s switch from an HTML5 app to a native app would be an ideal candidate for a well-defined Monty Hall Rewrite.

If you aren’t familiar, their old, not-so-great app was written as an HTML5 web app in a native wrapper. It had all sorts of warts (HTML as the data-transfer layer, etc.) and was indeed not that nice. Mark Zuckerberg famously said that betting on HTML5 was one of their worst mistakes and that the new native app was better, faster, stronger.

So after they had a laundry list of things they didn’t like about their old app, they set out to completely rewrite a new app with new tools. It was better! And faster! But I think it is wrong to attribute the vast majority of these improvements to the fact that they were now “native,” rather than to the fact that the app was rewritten.

Luckily, at the time, Sencha put out something called Fastbook. A Facebook “rewrite” of sorts to match the new native Facebook app. They made a nice video about how they were able to make a faster/snappier/cooler app than the new native one, all in HTML5 (with Sencha’s tooling, obviously). I don’t necessarily think Facebook should have switched to the Sencha platform or anything along those lines, but it is a good example of people blaming their tools for causing their old problems, and thanking their new tools for solving them.

To Facebook’s credit, @tobie eventually put out a list of things he wished the web would fix: things that had actively been a problem for them when building those old web-based apps. I haven’t seen the equivalent list of problems with building cross-platform native apps, but the list was quite valid. It’s much more in line with the type of discourse we should be having when we make claims about why our new choices are better than our old ones.

What now?

I don’t mean to discourage rewrites, trying new tools, or any other variant on that theme. I love a good rewrite, and I’m constantly trying new tools. More than anything I wanted to point a thing out that I think muddies a lot of the waters when developers are evaluating new tools.

So when you pick new tools, don’t trash your old ones. And once you decide on one tool instead of another that does something very similar, don’t speak as if you made the obvious or correct choice. Instead, focus on the fact that you made a good choice.

Software is not a zero-sum game. Two tools can be good at the same time, and we all win because of it.

The Productivity Cycle


People are interesting. We know so little about ourselves compared to what we’d like to think we know. We’re all subtly different even though we’re, on the whole, overwhelmingly predictable. There are copious studies to back up “average” data about people, and plenty of armchair anthropologists and psychologists who have very nice theories on how we tick. But most of us aren’t ‘average,’ and perhaps some of us tock more than we tick (mark this as the first of many “stretches” in this post).

I’d like to take the chance, while we’re still mostly clueless, to write some of my non-scientific theories on cognitive ability and “focus” (the noun) in the context of creating and building things (or “shipping,” as it were).

Motive and Privilege

I see a thin thread woven through everything that I do, and everything that I see my peers do.

Legacy.

We’re all creative, and many of us want to “build things.” I hear it often in interviews, and over beers, that it’s just built into the core of many of us to want to “create something from nothing.” It often takes shape as some other pleasant-sounding turn of phrase, but the “need to create” seems to be innate in all but the most blissful of us. In my experience, this can often be traced back to the desire for someone to ‘make their mark,’ or as I previously alluded, the desire to pursue a legacy.

I’m all for this. This feeling is built into my DNA as well. And while I would hope to have purely selfless reasons for wanting to create things, I’m certain that my ego and my human nature drive me to want to live longer than my bones.

I can’t help but lead with “ego” and “legacy” because the entire ability to create something from nothing (programming) and to get disproportionately rewarded for doing so (programming salaries) comes along with more than a touch of blind privilege. It’s a pretty good spot to be in, to be hacking your mental focus levels so you can build bigger and better websites. Blindness to our privilege is mostly to be expected, as long as we fight to improve our own awareness and adjust our actions appropriately. Read on now, forewarned of my ego, blindness to privilege, and extreme lack of brevity.

Caffeine is a Zero-Sum Game

[Image: focus wave graph]

Ostensibly, this is a graph of my potential “focus” levels during a day.

I read a fascinating blog post several years ago by Arvind Narayanan called The Calculus of Caffeine Consumption. It was pretty eye-opening for me to see my “focus” levels throughout the day graphed out as a sine wave. Naturally, it’s a massive over-simplification, but in my personal experience, not an entirely incorrect approximation of my energy levels in a day. I am tired, perk up, get hungry and eat and dip down, then hit a stride, rinse and repeat. It’s not a sine wave, but it’s a wave alright.

Like my Uncle Ray always says, “Any wave is sinusoidal with a sufficiently low sampling rate!”, though I think eventually it becomes a straight line. (Which I guess is a wave with an amplitude of zero?)

Our wave of “focus”/energy has all the normal properties of a wave: a wavelength, a period, and an amplitude. The premise of the article is that caffeine, a stimulant, is often consumed during the low points in our day. So we drink coffee when we wake up, or when we feel tired after lunch, in order to boost our ability to concentrate and sometimes even function. The effect of this decision that is lost on us is that it reduces the total amplitude of our “focus” wave.

In other words, by consuming caffeine to reduce how low we get during the low points, we inherently reduce our high points as well.

Narayanan states that this type of consumption actually works pretty well for people who are trying to keep from falling below a threshold of “focus” or energy. Consider a construction worker, data-entry employee, truck driver, or similar, where the time put into the task is what matters, and dipping below a certain level of energy would be dangerous or even deadly.

Creative workers, however, don’t have the same limitations. They often need “a moment of clarity” to spark their work, or to break out of “writer’s block.” They would want the absolute highest amplitude of focus, regardless of the consequences on the downside, and to be working while at their highs.

Narayanan, a true hacker spirit (read: CS Ph.D. and Assistant Professor at Princeton), attempts to exploit this relationship to his advantage. Could he use caffeine to increase the amplitude of his “focus?” A conclusive answer here would take quite a few more double blind studies, but in my experience, and seemingly in Dr. Narayanan’s as well, the answer is a resounding “yes.”

Drink a latte 30 minutes before a high point, work as hard as you can, and then use the warmth of your laptop to take a nap a few hours later, because you’ll be spent.

Caffeine is a zero-sum game, but you can use that to your advantage. Consuming caffeine in time for it to affect you at the exact peak of your “focus wave” effectively makes the highs higher, and the lows lower. The rich get richer, while the poor get poorer. It’s like the sad state of our socioeconomic classes, except not awful, and for brain power!

You Require More Vespene Gas

Many people who have read Daniel Kahneman’s Thinking Fast And Slow will remember the chocolate cake experiment.

[Image: chocolate cake]

Photo by Food Thinkers under CC-by-NC-SA

The experiment, led by Baba Shiv at Stanford University, is pretty simple. Half the students are asked to remember a 2-digit number, and half are asked to remember a 7-digit number. They walk down a hall, are told they’re done, and are asked whether they want chocolate cake or fruit salad as a refreshment. The students who were asked to remember the 7-digit number were nearly twice as likely to choose the chocolate cake. Why is this?

Simply put, we have a finite amount of mental energy. The students who spent their energy on remembering 7-digit numbers had no more energy left to spend on avoiding cake.

The prefrontal cortex is primarily responsible for the things that creative people crave, like focus, but also other functions like short-term memory, abstract problem solving, and willpower. The conclusion of the chocolate cake experiment implies that there is a finite amount of resources in the prefrontal cortex, and that one system’s use of those resources could directly affect the available resources of another function.

[Image: graph with the area under the curve highlighted]

In the context of our sine wave, I’m pretty sure I could make a good reference to calculus, because this concept has a lot to do with the area under the curve, but I’ll get it wrong, and I won’t hear the end of it in the comments.

This plays into our first theory quite well. If we use more mental energy in a quick burst (because we have a higher amplitude), we’ll need deeper rest in order to recharge this energy. During our rest periods, the troughs in our sine-wave, we have to refill the energy that we spent during our peaks.

I can’t safely postulate much about how best to do this, except that studies increasingly show that naps are good for exactly this. Since you’re probably not going to have your big break during a lull in your cognitive ability, why not speed up the process of getting to another high point? Nap away! Refuel the exact part of your brain that will allow you to get in the zone again.

Some folks will recommend a caffeine nap. I don’t have a ton of intentional experience with caffeine naps, but the idea is that if you consume caffeine just prior to taking a nap, you’ll sleep until it kicks in (it usually takes 30 minutes or so for caffeine to take effect), and then it’ll allow you to skip some of the groggier[1] steps on your way back to productivity. It probably works.

[1] Apparently, I can’t write the word ‘grog’ without thinking about Guybrush Threepwood.

Moderation

I would add, finally, that Dr. Narayanan found that the body adjusts to regular caffeine intake in as little as 2 to 3 weeks. That means that if you’re a long-time coffee drinker, you really do need that cup of coffee in the morning just to get back up to your pre-addiction baseline.

He notes that it takes 5 days to reach adenosine normality (good) if you’re not consuming caffeine. He retrospectively adds that he initially did not place enough value in these ‘quitting cycles’ (hinting that perhaps you should not repeat his mistakes).

Practical Application

Most of this should be obvious at this point, so I won’t drag on.

  • Plan your high points, work during them
  • Refuel during low points instead of stretching them out with forced work
  • Slingshot your amplitude with caffeine; 3 weeks on and 1 week off
  • Avoid non-intentional caffeine, only drink it on schedule
  • Don’t ignore the people you love

The Nap Month

I have nothing but my own personal experience to back this up, but I find that my motivation and passion levels work in a similar way on a yearly cycle as they do on a daily cycle. For parts of the year, I’m excited by work, and go out of my way to build things in my free time, and other parts of the year, I just want to come home and binge watch a season of a TV show on Netflix.

For this reason I have to wonder if some of the same energy principles apply. Can I increase the intensity and duration of the productive months? If so, at what cost?

I really struggle (in a not very serious kind of way) during the month or two when I feel like I can’t get anything done, but I think we can use a similar trick to help our productivity pick up again. Specifically, we need to A) simulate caffeine on a macro scale, and B) simulate naps on a macro scale.

Macrocaffeine

I find that an interesting, new passion project gets my creative energy flowing much better than jumping into old work. If I really want to get in the mood to program, I’ll hop off my projects with deadlines and build something that I know probably won’t ever even get finished, but that I’m just excited to build.

Caffeine blocks adenosine (a sleep chemical) receptors in our brain, causing us to avoid sleep longer. If we substitute the current list of things we have to do with a temporarily more engaging list, it may give us the same slingshot effect that caffeine gives our energy on a micro level.

Macronaps

This one seems a little more straightforward to me.

Just stop working so much.

Take a nap from your work. I understand that this is not a viable solution for businesses that intend to make money, but I think there can be some good compromises here. Namely, most hip tech companies have very generous vacation allowances already. Use your vacation during a low point, and in perfect cliché form, “recharge.”

Additionally, companies with large enough teams can have two modes of employment that employees could ideally opt into. “Passion-mode” and “coast-mode.” Someone who is on an upswing should get put on a big project that’s going to take a lot of energy. Someone who is burnt out from the last big project should be given work that will allow them to show up a little late, and leave a little early.

There’s lots of work like this. The support team at your company would probably love it if developers frequently did 6-hour customer support stints. In no way do I imply that the support team doesn’t regularly break its back working very demanding hours/problems and doesn’t deserve their own down-time. There’s also plenty of documentation that I’m happy to churn out during my down month.

The point is to allow employees to be maximally lazy while still maintaining their minimum required value. The more quickly I’m able to get through the less motivated time, the more quickly I’ll be able to jump back into a difficult and challenging project and do it well.

A Final Moment of Clarity

I have very few projects and accomplishments that haven’t come to me in a “moment of clarity.” Naturally, I want to maximize the amount of these moments, and increase the odds that I’ll be working on something that I love when they occur. I have no idea if hacking your body is a good long-term strategy for making this happen, but I find that researching all of this sleep stuff is an excellent tool for procrastinating during my focus droughts.

I can’t guarantee that any of this will resonate with you, or work for you if you try it. But I do think that everyone goes through the motivational recessions, and we should be actively attempting to eliminate or reduce them. What is Quantitative Easing for the Soul?

I simply want my hard work to be spent most efficiently.

Special thanks to Michelle Bu for reading this ahead of time.

Understanding JavaScript Inheritance


So someone shoulder-taps you and asks you to explain the concepts behind JavaScript Inheritance to them. In my eyes you’ve got a few options.

The Terminology Play

You mention that it’s prototypal inheritance, not prototypical and pretty much gloss over the rest, comfortable in your superiority in terminology. You may go as far as saying “Objects just come from other Objects because there aren’t any classes.” Then you just link to Crock’s Post on it, and try to seem busy for the next few days.

Many years later you find out that Prototypal and Prototypical are synonyms, but you choose to ignore this.

The Like-Classical-Inheritance-But-Different Play aka the Run-On Sentence Play

“So in Java, like, you have classes or whatever, right? Well so imagine that you don’t have those, but you still want to do that same type of thing or whatever, so then you just take another object instead of a class and you just kind of use it like it’s a class, but it’s not because it can change and it’s just a normal object, and if it changes and you don’t override the object, oh yea, so you can decide to override the parent object class thing, so if you dont do that and the parent changes the link is live…”

And so forth.

The Animal Play

This is a pretty popular one.

So let’s say we want to make an Animal class in our code. As is often necessary in production JavaScript applications.

First we make a “constructor function,” which, when invoked with the new operator, acts kind of like a constructor method inside a class in a classical language. Except this one is on the outside.

function Animal (name) {
  this.name = name;
}

var myAnimal = new Animal('Annie');

Then we want to have actions that all animals can do.

Animal.prototype.walk = function () {
  console.log(this.name + ' is walking.');
};

But then you want to define a more specific type of animal. Things start to get weird.

// I think we need to define a new Animal type and extend from it somehow

function Dog (name) {
  this.name = name;
}

// BUT HOW DO WE EXTEND
// WITHOUT AN INSTANCE TO USE?
Dog.prototype = Animal.prototype; // ?? I HAVE NO IDEA
// Maybe that'll work for some stuff?
// ProHint™: probably not much, once you start modifying one of them :D

Then you remember that Prototypal Inheritance doesn’t really do ‘classes’ so much. So you do something like this:

var Dog = new Animal('Annie'); // ??? NO THATS NOT IT >:(

// Maybe we can try Object.create? I hear it's prototypal-y
var Dog = Object.create(Animal);

// Maybe that worked? Let's see...
var myDog = new Dog('Sparky');
// TypeError: object is not a function

// Shucks
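
(For the record, and this is a sketch of the standard pattern rather than where our confused narrator is headed, the trick is to chain the prototypes instead of sharing or overwriting them:)

// Reusing the Animal constructor from above
function Dog (name) {
  Animal.call(this, name); // run the parent constructor for its setup
}

// Inherit the shared actions without sharing the same prototype object
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;

var myDog = new Dog('Sparky');
myDog.walk(); // 'Sparky is walking.'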

And you eventually simply converge on the…

The Father/Son Analogy Play

Here we go. Finally a real-world example of ‘instances begetting instances.’ It’ll be a perfect analogy. It’s even an interview question in some places. Let’s see how we might implement the relationship of a father and son (or a parent to its child) in JavaScript.

We’ll start out like we did before, with a Human constructor.

function Human( name ) {
  this.name = name;
}

Then we’ll add in a common human shared action.

Human.prototype.sayHi = function () {
  console.log("Hello, I'm " + this.name);
};

So we’ll create my dad first.

// Instantiate him
var myDad = new Human('Bill Sexton');

// Greet him
myDad.sayHi();
// "Hello, I'm Bill Sexton"

Score. Now let’s create me.

// Let's use ES5 `Object.create` in order to be as 'prototypal' as possible.
var me = Object.create(myDad);
me.sayHi();
// "Hello, I'm Bill Sexton"

It’s a start! Seems like I inherited a little too much from my dad, but I inherited, nonetheless.

Let’s try to smooth things out to make the analogy work better. So we’ll instantiate objects without a name and have a parent name them after they’re created.

// Wrap it all together
function makeBaby(parent, name) {
  // Instantiate a new object based on the parent
  var baby = Object.create(parent);

  // Set the name of the baby
  baby.name = name;

  // Give the baby away
  return baby;
}

Perfect. Now the baby can sayHi on its own.

var alex = makeBaby(myDad, 'Alex Sexton');

alex.sayHi();
// "Hello, I'm Alex Sexton"

Err, yipes. Babies can’t talk. And what’s this deal with a baby being made by one parent? Not to worry, we can fix all of this.

First we’ll probably want to try to take two parents into the makeBaby function (no giggles).

function makeBaby(father, mother, name) {
  var baby = Object.create(...// fuuu
}

Multiple Inheritance! How did you get here? Ugh. Fine. We’ll just simply mock the human chromosome pattern into our little inheritance example.

// Let's take a set of 4 genes for ease of
// example here. We'll put them in charge
// of a few things.
function Human (name, genes_mom, genes_dad) {
  this.name = name;
  
  // Set the genes
  this.genes = {
    darkHair: this._selectGenes(genes_mom.darkHair, genes_dad.darkHair),
    smart:    this._selectGenes(genes_mom.smart,    genes_dad.smart),
    athletic: this._selectGenes(genes_mom.athletic, genes_dad.athletic),
    tall:     this._selectGenes(genes_mom.tall,     genes_dad.tall)
  };

  // Since genes affect you from birth, we can set these as actual attributes
  this.attributes = {
    darkHair: !!(~this.genes.darkHair.indexOf('D')),
    smart: !!(~this.genes.smart.indexOf('D')),
    athletic: !!(~this.genes.athletic.indexOf('D')),
    tall: !!(~this.genes.tall.indexOf('D'))
  };
}

// You don't have access to your own gene selection
// so we'll make this private (but in the javascript way)
Human.prototype._selectGenes = function (gene1, gene2) {
  // Assume that a gene is a 2 length array of the following possibilities
  // DD, Dr, rD, rr -- the latter being the only non "dominant" result

  // Simple random gene selection
  return [ gene1[Math.random() > 0.5 ? 1 : 0], gene2[Math.random() > 0.5 ? 1 : 0] ];
};

Human.prototype.sayHi = function () {
  console.log("Hello, I'm " + this.name);
};

function makeBaby(name, mother, father) {
  // Send in the genes of each parent
  var baby = new Human(name, mother.genes, father.genes);
  return baby;
}

Elementary. My only beef is that we are no longer using real prototypal inheritance. There is no live link between the parents and the child. If there were only one parent, we could use the __proto__ property to set the parent as the prototype after the baby was instantiated. However, we have two parents…

So we’ll need to implement runtime getters that do a lookup for each parent via ES Proxies.

function makeBaby(name, mother, father) {
  // Send in the genes of each parent
  var baby = new Human(name, mother.genes, father.genes);

  // Proxy the baby
  return new Proxy(baby, {
    get: function (proxy, prop) {
      // shortcut the lookup
      if (baby[prop]) {
        return baby[prop];
      }

      // Default parent
      var parent = father;

      // Spice it up
      if (Math.random() > 0.5) {
        parent = mother;
      }

      // See if they have it
      return parent[prop];
    }
  });
}

So now we support live lookups of parents, and, you know, some simplified genetics.
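
If you want to see the whole contraption run, here’s a hypothetical usage sketch. The names and gene values are made up, and the 'DD'-style strings work because they index just like the 2-length arrays:

// Hypothetical grandparent genes, purely for illustration
var momGenes = { darkHair: 'DD', smart: 'Dr', athletic: 'rr', tall: 'Dr' };
var dadGenes = { darkHair: 'rD', smart: 'rr', athletic: 'Dr', tall: 'DD' };

var mother = new Human('Carol Sexton', momGenes, dadGenes);
var father = new Human('Bill Sexton', momGenes, dadGenes);

var baby = makeBaby('Alex Sexton', mother, father);

baby.sayHi();             // "Hello, I'm Alex Sexton"
baby.attributes.darkHair; // true or false, depending on the genetic dice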

Isn’t that just a simple, well-defined example of how straightforward inheritance can be in JavaScript?

Conclusion

Sometimes these analogies get pretty crazy in my head, and I start to think that maybe, instead of trying to apply known examples from the outside world to help people understand, it’s often better to just let someone know why they might wanna use inheritance in their programs!

I personally find the best Prototypal Inheritance analogy to be:

var defaults = {
  zero: 0,
  one: 1
};

var myOptions = Object.create(defaults);
var yourOptions = Object.create(defaults);

// When I want to change *just* my options
myOptions.zero = 1000;

// When you wanna change yours
yourOptions.one = 42;

// When we wanna change the **defaults** even after we've got our options
// even **AFTER** we've already created our instances
defaults.two = 2;

myOptions.two; // 2
yourOptions.two; // 2

So stop making everything so confusing and go program cool stuff, and ignore my old presentations when I used these analogies.

<3z

Deploying JavaScript Applications


Preface: Nothing in this post is necessarily new, or even anything I thought of first (save for a name or two). However, I’m writing it because I’d like to start building some consistency and naming conventions around a few of the techniques that I am using (and are becoming more common), as well as document some processes that I find helpful.

Much of this comes from my experience deploying applications at Bazaarvoice as a large third party vendor, and should probably be tailored to your specific environment. I’m sure someone does the opposite of me in each step of this with good results.

Also, I fully understand the irony of loading a few MBs of GIFs in a post largely about performance, but I like them. Any specific tools I mention are because I’m familiar with them, not necessarily because there are no good alternatives. Feel free to comment on other good techniques and tools below. Facts appreciated.

You

You work on a large app. You might be a third party, or you might not be. You might be on a team, or you might not be. You want maximum performance, with a high cache rate and extremely high availability.

Dev with builds in mind

[Image: grunt build animation]

Locally, you might run a static server with some AMD modules, or a “precompile server” in front of some Sass and CoffeeScript, or Browserify with CommonJS modules. Whatever you’re doing in development is your choice, and not the topic du jour.

The hope is that you have a way of taking your dev-environment files and wrapping them up into concisely built and minified JavaScript and CSS files. Ideally this is an easy step for you, because otherwise you’ll tend to skip it. Optimize for ease of mind here. I tend to disagree with the sentiment that ‘script tags are enough.’ Try to manage your dependencies in a single place, and that place probably isn’t the order of your script tags in your HTML. Avoiding this step is easy until it isn’t.

Loading what you need is better than byte shaving

One technique at the build stage that is ideal for performance is building minimal packages based on likely use. At page load, you’ll want to load, parse, and execute as little JavaScript as possible. Require.js allows you to “exclude” modules from your builds and create separate secondary modules. Rather than shaving bytes in your app files, you can avoid loading entire sections of code. Most sections of an app have predictable entry points that you can listen for before injecting more functionality.
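
With the Require.js optimizer (r.js), that looks roughly like this build-config sketch (the module names are hypothetical):

({
  modules: [
    // The main package: everything needed at page load
    { name: 'main' },

    // A secondary package: a rarely used flow, built without
    // anything that 'main' already contains
    { name: 'flows/popup', exclude: ['main'] }
  ]
})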

In our current app, only a fraction of the users click on the button that causes a specific flow to pop up. Because of this, we can save ~20kb of code at page-load time, and instead load it as the mouse gets close to the button, or after a few seconds of inactivity (to prime the cache). This technique will go a much longer way than any of your normal byte-saving tricks, but it is not always the easiest, and for that reason it is often avoided.
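
A sketch of what that kind of trigger can look like (the element id and module name are made up):

// Prime the cache when the mouse gets near the button,
// or after a few seconds of inactivity, whichever comes first.
var button = document.getElementById('popup-button');

function primeFlow() {
  button.removeEventListener('mouseover', primeFlow);
  clearTimeout(idleTimer);
  require(['flows/popup'], function () {
    // Parsed, executed, and cached before the click ever happens
  });
}

button.addEventListener('mouseover', primeFlow);
var idleTimer = setTimeout(primeFlow, 4000);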

[Image: delayed package loading]

Check your network panel the next time you have Gmail open to see how Google feels about this technique. They take an extra step and bring the code in as text, and don’t bother parsing or executing it until they need to. This is good for low-powered/mobile devices.
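
The usual version of that trick is to ship the code inside a comment, so it downloads without any parse cost, and then strip and eval it on demand. A rough sketch of the general idea (not Gmail’s actual code):

// The payload arrives wrapped in a comment, so the browser
// downloads it without parsing or executing it.
var payload = '/* window.lazyFeature = function () { console.log("hi"); }; */';

function activate(text) {
  // Strip the comment wrapper and evaluate at the last possible minute
  eval(text.replace(/^\s*\/\*/, '').replace(/\*\/\s*$/, ''));
}

activate(payload);    // parse + execute happen now, on demand
window.lazyFeature(); // "hi"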

In fact, some Googlers released a library, Module Server, that allows you to do some of this dynamically. It works with lots of module formats. And technically, you could just use it to see how it decides to break up your files, and then switch over to fully static files after you get that insight. They presented on it at JSConf.eu 2012.

So instead of using a microjs cross-domain communication library that your coworker hacked together, just delay loading EasyXDM until you need to do cross-domain form POSTs.

Don’t penalize modern users

I’m all for progressive enhancement, and I have to support IE6 in our primary application. However, it pains me when modern-browser users have to pay a performance price for the sins of others. It’s a good idea to try to support some level of “conditional builds” or “profile builds.” In the AMD world, you can use the has.js integration or, if you’re feeling especially dirty, a build pragma. Third parties have also written some pretty nifty tools for doing this as a plugin.

One of the best tools for this that I’ve seen is AMD-feature. It allows you to use a set of known supported features to load the best-fitting build for the current user. This can be great on mobile. You can silently switch out jQuery with Zepto (assuming you stick to the shared subset). You can add and remove polyfills for the right users. If 20% of your JavaScript is loading for 3% of your users, something is backwards.

define({
    'dropdown': [
        {
            isAvailable: function(){
                // test if we are on iOS (simplified sketch)
                return /iP(ad|hone|od)/.test(navigator.userAgent);
            },

            implementation: 'src/dropdown-ios'
        },
        {
            isAvailable: function(){
                // if we end up here, we're not on iOS,
                // so we can just return true.
                return true;
            },
            // Naturally this is simplified and doesn't actually
            // imply Android
            implementation: 'src/dropdown-android'
        }
    ]
});

// In your code, you would load your feature like this:

define(['feature!dropdown'], function(dropdown){

    // The variable 'dropdown' now contains
    // the right implementation - no matter
    // what platform the code is executed on,
    // and you can just do this:
    var myDropdown = new dropdown();
});

One less jpeg

Lots of people like repeating this one. I think Paul Irish coined it (naturally, it was actually Adam J Sontag), but the idea is that if you loaded one less JPEG on your site, you could fit in quite a bit of unshaved JavaScript in its place. Consider this the next time you are sacrificing readability or compatibility for file size.

[Image: one less JPEG]

Requests matter

File size aside, the balance of a fast JS deployment lies somewhere between the number of requests and the cacheability of those requests. It’s often alright to sacrifice the cacheability of a small script if you can inline it without causing an additional request. The exact balance is not one that I could possibly nail down, but you can probably think of a file in your application that is dynamic enough, and small enough, that it might make sense to just print it inline in your page.

Package all the pieces together

Fonts and Icons

These days, these two are synonymous. I really like using fonts as icons and have done so with great success. We try to find appropriate unicode characters to map to the icons, but it can sometimes be a stretch. Drew Wilson’s Pictos Server is an incredible way to get going with this technique, though I might suggest buying a font pack in the end for maximum performance (so you can package it with your application).

[Image: Pictos icons]

First, we inline fonts as data URIs for supporting browsers. Then we fall back to referencing separate files (at the cost of a request), and then we fall back to images (as separate requests). This means we end up with different builds of our CSS files. Each CSS build includes only one of the techniques, so no user is penalized by the way another browser might need fonts. The Filament Group has a tool for this called Grunticon. I’d highly recommend this technique. For every modern browser, you have a single request for all styles and icons, with no additional weight from old IEs that don’t support data URIs.

CSS Files

It’s typically the case that updates to JavaScript files necessitate changes to CSS as well. So these files usually have the same update times. For that reason it’s pretty safe to package them together.

So, as part of our build step, we first build our necessary CSS for our package into a file (for Bazaarvoice: styles are dependencies of templates, which are dependencies of views, which are dependencies of the separate module packages we’re loading, so this is an automatic step). Then we read this file in, minify it, and inject it as a string in our main JavaScript file. Because we have control over when the templates are rendered, we can just inject the CSS into a style tag before rendering the template. We have to render on the server side occasionally as well, and in those cases I would recommend against this technique, to avoid a flash of unstyled content.

var css   = '#generated{css:goes-here;}';
var head  = document.head || document.getElementsByTagName('head')[0];
var style = document.createElement('style');

style.type = 'text/css';

if (style.styleSheet) {
  style.styleSheet.cssText = css;
}
else {
  style.appendChild(document.createTextNode(css));
}

// Only assume you have a <head> if you control
// the outer page.
head.appendChild(style);

Since we inline the fonts and icons into our CSS files, and then inline the CSS into our JS file (of which only 1 is injected on load), we end up with a single packaged app that contains fonts, icons, styles, and application logic. The only other request will be necessary media and the data (we’ll get to those).

You may notice that we now have quite a few combinations of packages. Yep. If we have 3 ways to load fonts/icons, multiplied by the number of build profiles we chose to create (mobile, oldIE, touch, etc.), we can get to 10-20 combinations fast. I consider this a really good thing. When you generate them, have some consistent way of naming them, and you’ll be able to serve each user the exact app they need, rather than a lot of extra weight meant for other users.

Quick Note: Old IEs can be fickle with inlining a lot of CSS. Just test your stuff and if it breaks, just fall back to link tag injection for oldIEs.

The Scout File

This post actually started out as a means to solidify this term. Turns out I am a bit more long-winded than I anticipated.

The Scout File or Scout Script is the portion of JavaScript that decides which package needs to be loaded. It kicks off every process that can happen in parallel, has a low cache time, and is as small as possible.

It gets its name from being a small entity that looks out of the cache from time to time to warn everybody else that things have changed. It’s ‘scouting’ for an app update and gathering data.

// Simplified example of a scout file
(function () {
  // Feature test some stuff
  var features = {
    svg: Modernizr.svg,
    touch: Modernizr.touch
  };

  // The async script injection fanfare
  var script = document.createElement('script');
  var fScript = document.getElementsByTagName('script')[0];

  var baseUrl = '//cool-cdn.com/bucket/' + __BUILDNUM__ + '/';


  // Build up a file url based on the features of this user
  var featureString = '';
  for (var i in features) {
    if (features.hasOwnProperty(i) && features[i]) {
      featuresString += '-' + i;
    }
  }
  var package = 'build' + featureString + '.js'

  // Set the URL on the script
  script.src = baseUrl + package;

  // Inject the script
  fScript.parentNode.insertBefore(script, fScript);

  // Start loading the data that you know you can grab right away
  // JSONP is small and easy to kick off for this.
  var dataScript = document.createElement('script');

  // Create a JSONP Url based on some info we have.
  // We'll assume localstorage for this example
  // though a cookie or url param might be safer.

  window.appInitData = function (initialData) {
    // Get it to the core application when it eventually
    // loads or if it's already there.
    // A global is used here for ease of example
    window.comeGetMe = initialData;
  };

  // If we're on a static site, the url might tell us
  // the data we need, and the user cookie might customize
  // it. Simplified.
  dataScript.src = '//api.mysite.com' +
                   document.location.pathname +
                   '?userid=' +
                   localStorage.getItem('userid') +
                   '&callback=appInitData';

  // Inject it
  fScript.parentNode.insertBefore(dataScript, fScript);
})();

If you’re a third party or have little control over the pages you’re injected on, you’ll probably use a file. Otherwise, the code should be small enough and dynamic enough to warrant inlining on a page.

Build apps into self-contained folders

When you build your application, you end up with a set of static files in a folder. Take this folder of files and assign a build number to it. Then upload this to a distributed content delivery network. S3 with CloudFront on top of it is an easy choice. The Grunt S3 Plugin is a good way to do this with the Grunt toolchain. Bazaarvoice has an Akamai contract, so we tend to use them, but the idea is that you are getting your built files onto servers that are geographically close to your users. It’s easy and cheap. Don’t skimp! Latency is king.

Now that you have an app on a static CDN, make sure it gets served gzipped (where appropriate; grunt-s3 can help with this), and then set the cache headers on your built files to forever. Any changes get pushed as a different set of built files in a totally different folder, so existing files are guaranteed to never change. The only exception to this rule is the Scout File, which lives outside of the build folders in the root directory.

The scout file for our third-party app is a very small JS file that contains a build number and a bit of JavaScript to determine the build profile that needs to be loaded. It also contains the minimum amount of code to determine the initial data that we’re going to need for a page. It doesn’t have jQuery, or really any dependencies, it just does exactly what it needs to do. This file is cached for about 5 minutes (should be relatively short, but close to the average session length).

Parallelizing the initial data request

Many people use each of their models to make separate requests for data once the app is loaded. Unfortunately, this is terrible for performance. Not only are there multiple requests, but they can’t be fired off until the BIG app files are loaded and executed. We want to parallelize the loading of our app and our data. This is going to be tough for some folks, but it’s a huuuge performance win.

We use node.js to run our models at build time. We feed in each of the “page types” that we know how to handle. For each of these page types, each model registers its intent to load data, and we build up a hash of data that is needed for each page type and stick that into the scout file.

Then we had our API folks create a batch API, so we can make multiple data requests at once. We use this hash of needed data for each page type (we have fewer than 10 page types, and you probably do too) in order to fire off a single request for the data that all the models will need, before they are loaded. Unfortunately, the way to do this changes drastically based on your framework, but it’s worth your time!
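
Shape-wise, the result is something like this sketch. The manifest, the page-type detection, and the batch endpoint are all hypothetical, and appInitData is the same JSONP callback from the scout file above:

// Generated at build time by running the models in node.js:
// the data each page type will ask for.
var NEEDED_DATA = {
  product:  ['reviews', 'ratings', 'questions'],
  category: ['ratings']
};

// One batched JSONP request instead of one request per model
var pageType = 'product'; // assume this was detected earlier
var dataScript = document.createElement('script');
dataScript.src = '//api.mysite.com/batch' +
                 '?requests=' + encodeURIComponent(NEEDED_DATA[pageType].join(',')) +
                 '&callback=appInitData';
document.getElementsByTagName('head')[0].appendChild(dataScript);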

Statically generate your container pages and CDN them too

If you aren’t rendering templates on the server, then there’s likely no reason you shouldn’t be statically compiling all of your page shells at their appropriate URLs and uploading them to a static CDN along with your scripts. This is a huge performance improvement.

Distributing the HTML to geographically close servers can be a big win for getting to your actual content more quickly. If you are uploading your static HTML pages to the static CDN along with your JS application, your HTML files can become your Scout File. Put a small cache time on each static HTML page and inline the contents that you would have put in a scout file. This serves the same purpose as before, except we’ve saved a request. The only thing that isn’t highly cached on a close-by server is the data, and we’re already loading that in parallel with our app if we’ve followed the previous instructions.

This means the main URL for your site is just a CNAME to a CloudFront URL. Doesn’t that just sound nice? Talk about good uptime! Of course, that means the dynamic parts of your site would come from a subdomain like api.mysite.com or similar. The reduced latency of your initial HTML can be a very nice performance win, since you’ve inlined a scout file to immediately load the rest of the app in parallel.

The smart peeps at Nodejitsu put out Blacksmith to help with static site generation a while back, but there are plenty of options. Many apps are single-page apps with only an index.html file anyway, so you can skip the static generation altogether.

All this together

The goal in all of this is to:

  • geographically cache anything that’s static, not just images and jQuery.
  • cache your app until it changes, but not much longer.

The folder structure I normally see is something like:

# Simplified for Ease of Eyes
| S3-Bucket-1
  |- app
     |- b1234
        |- build.js
        |- build-svg.js
        |- build-optional.css
     |- b1235
        |- build.js
        |- build-svg.js
        |- build-optional.css
     |- b1236
        |- build.js
        |- build-svg.js
        |- build-optional.css
  |- index.html # or scout.js

The index.html file is the only thing that changes, everything else is just added. If we’re a third party, it’d be the scout.js file since we’d be included in someone else’s markup. Everything else has a 30yr cache header. We can upload our build into a folder, verify it, and then switch the build number in the scout file.

// Simplification of the above process
var build = '__BUILD__'; // Replaced at build time
injectApp('/app/' + build + '/build.js');

Deploying a new version of the app becomes “updating one variable.” This means that every user on the site will have a fully updated app in the amount of time you cached your scout file for. In our case it’s 5 minutes. It’s a pretty good trade off for us. We get lifetime caching for our big files and media, but have a very quick turn around time for critical fixes and consistent roll-outs. It also means that if we ever need to roll back, it’s a single variable change to get people fully back on the old code. Clean up old builds as you feel is necessary.

Other media requests

Naturally, you’ll have some logo images or promo images to load as part of the app. These should probably just be ImageOptim’d and sprited as best as possible. However, there is usually a second class of media on a site: thumbnails, previews, avatars, and such. For these files, I’d suggest a mechanism to lazy-load them. Make sure you’re doing smart things with scroll event handlers (hint: throttle the hell out of them); you don’t want to load 50 avatars if the user is 1000px away from that part of your app. Just be smart about this stuff. It’s not really my intent to cover this portion of app performance, since it’s not entirely related to deployment.
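
For the throttling bit, here’s a bare-bones sketch (loadNearbyMedia is a stand-in for whatever lazy-loading check you write):

function throttle(fn, wait) {
  var last = 0;
  return function () {
    var now = +new Date();
    if (now - last >= wait) {
      last = now;
      fn.apply(this, arguments);
    }
  };
}

function loadNearbyMedia() {
  // ...find media within ~1000px of the viewport and set its src...
}

// Check at most every 150ms, no matter how often scroll fires
window.addEventListener('scroll', throttle(loadNearbyMedia, 150), false);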

Wrap Up

There’s nothing that surprising about these techniques. Everything that could possibly be statically generated is statically generated, and thrown out on edge-cached servers. Every piece of functionality that isn’t needed on page load, isn’t loaded on page load. Everything that is needed is loaded in parallel right away. Everything is cached forever, save for the scout file and the data request (you can save recent requests in local storage though!).

You aren’t left with much else to optimize. You are always loading and executing only the minimum amount of JavaScript, and caching it for the maximum amount of time. Naturally, the more common tips (not going overboard with external libraries, paying attention to render performance, serving HTML with the page response) can all change performance, usually for the better, but this architecture fits well with many of today’s more app-like deployments.

There’s something really comforting about exposing a minimal dynamic API that needs to be fast and having everything else served out of memory from nearby static servers. You should totally try it.