Lodestar Gambit Released

At long last I’m happy to announce that Lodestar Gambit is finally available on Google Play. I’ve written a little bit about the sound and path-scripting code in the game, and I’ll try to add some more in the near future.

Any downloads/feedback/etc. are greatly appreciated! It’s available here.


Styling in React and Atomic CSS

React.js certainly seems to have ascended as a major front-end tool over the last year or so. As best practices continue to be ironed out, most elements of web development seem to be fitting into the new ecosystem nicely. However, discussing CSS in the context of React still stirs up some vigorous debate. CSS best practices in React applications are a fairly divisive topic, particularly since React’s own developers seem to eschew “best practices.” Essentially there are two camps: 1) moving towards component-level styles and declaring CSS directly in components, or 2) a more traditional approach of keeping styles in separate, global stylesheets (including using preprocessors like SASS/LESS/etc.).

I encourage you to listen to the presentation above, but in summary it advocates putting your component CSS inside the component itself as opposed to an external stylesheet. There are a myriad of reasons to do so, but essentially it fixes some key, long-standing issues with typical CSS practices (these are taken from the presentation above):

  1. Everything is global
  2. Dependencies are hard to manage
  3. Difficult to remove dead code/unused rules
  4. Sharing constants (class names, tags) is tough
  5. Separate minification process
  6. Non-deterministic resolution (async loading causes potentially different styles)
  7. Breaks isolation (teams can override components/modules from other teams, often by accident)

Now many of these issues can be mitigated with best practices, preprocessors, or what have you, but those are all band-aids over the inherently global nature of CSS. No matter how good your development process is, the same issues keep cropping up. Sharing code with other teams, using third-party libraries, etc. ensures that regardless of your or your team’s skill with CSS, some things are just out of your control, and you should insulate yourself as much as possible.

But isn’t the alternative inline styling? And inline styling is bad! First, what I (and others) are advocating isn’t exactly inline styling; it’s defining your styles inside of your React components (it’s more than a semantic difference, I promise). Nevertheless, grouping your CSS with your markup IS bad in a typical web app, so for completeness’ sake let’s identify why it is considered such a bad practice, and why that doesn’t apply in this situation:

Breaks DRY: I might have to change the same rule 20 times just to edit every button/margin/whatever! The entire point of CSS is to avoid presentational HTML!

Anytime you find yourself repeating a common style in React, you should be asking yourself why it isn’t in its own component. I’m not saying you won’t have minor duplication here and there, but using components liberally to enforce DRY principles should eliminate most duplication of styles.
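
As a quick, hypothetical sketch of what I mean (the Button component and its style values are mine, not from any real codebase):

var React = require('react');

/* The button style is declared exactly once, inside the one component
   that owns it. Every consumer reuses <Button/> instead of re-declaring
   the same CSS rules. */
var buttonStyle = {
  padding: '8px 16px',
  borderRadius: '4px',
  backgroundColor: '#4a90d9',
  color: '#fff'
};

var Button = React.createClass({
  render: function() {
    return <button style={buttonStyle}>{this.props.label}</button>;
  }
});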

Hard to Maintain: I need to search through markup to find the rules that I want to change!

This isn’t much of an issue if you are thoughtful about how you separate components. More importantly, it is much simpler to find the rule you need to change inside a specific component than to search through several global stylesheets/SASS files and figure out the proper specificity.

Specificity Nightmare: If I have inline styles AND external sheets, specificity issues are sure to creep up on me!

Removing the global namespace of CSS makes it MUCH easier to reason about how styles are being applied to the markup. The only global CSS that should still be present is really just thematic elements/constants like background-color, etc.

No Separation of Concerns: But we are mixing the presentation with structure/behavior!

We are just separating our concerns in a different (and more logical) way. React separates concerns between components, not technologies like HTML/CSS/JS. More succinctly, we control complexity by dividing our application into self-contained components that box everything they need to render in one place, not by forcing a separation between intrinsically mixed page assets.

Cacheability: I can’t cache my CSS if it’s embedded in the markup!

We can still cache components, but see below for a way to address this issue.

The critical point here isn’t that we are just going back to putting styles in the markup, it is that components themselves aren’t just markup. Components are entirely self-contained independent views, and in order for them to operate effectively (and for us to reason about them easily) they have to be able to render themselves in entirety. Cascading global CSS disrupts the barrier components have from their surrounding context, irrevocably weakening their ability to fulfill their purpose. Having CSS in a global namespace instead of the components themselves is like letting a team of artists only draw an outline while you scramble to fill everything in afterwards with a box of Crayolas.

Additionally, this practice isn’t really inline styling. Common CSS rules used within a component can still be pulled out and referenced with a variable, making it easy to change a component’s style on the fly. Large style blocks can be generated inside their own functions, stored in their own variables, etc., so there are no problems with readability. And app-level styles like color themes can still live in a traditional stylesheet. Generating styles programmatically rather than placing them directly in an attribute isn’t just syntactic sugar, but a path to more granular and effective organization.
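
For example (another hypothetical sketch; all names are mine), a larger style block can live in its own function and be computed per render:

/* Styles are plain JS objects, so they can be generated, shared,
   and branched on props like any other value. */
function panelStyle(isActive) {
  return {
    border: '1px solid #ccc',
    padding: '12px',
    backgroundColor: isActive ? '#eef6ff' : '#fff'
  };
}

var Panel = React.createClass({
  render: function() {
    return <div style={panelStyle(this.props.isActive)}>{this.props.children}</div>;
  }
});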

While I truly think this way is more effective (particularly for large teams), we haven’t really solved all of the issues with CSS. We still cannot cache our styles between pages without external sheets. Also, if we are working across multiple teams and importing other React components, we may still have duplication of styles that we can’t really help. Finally, tons of inline styles can hugely increase the size of the DOM. While none of these are huge issues (and I think the tradeoffs are worth it for getting rid of the major CSS problems above), they are annoyances that we have to deal with. Luckily, adding one more piece to the puzzle can help mitigate each of these.

Atomic CSS is a framework that seeks to address many of the issues we are trying to solve with React components, just scoped to CSS itself. This article goes over the reasoning and main benefits behind a modular CSS approach (although it is a bit dated in syntax, the ideas are the same). Adopting a tool like Atomic CSS fixes many of the remaining issues with putting our CSS inside the component, such as cacheability of resources and repetition of styles across components (again, mostly when importing components from other teams or third parties).

React even solves many of the problems people have with Atomic CSS itself. The major criticism of Atomic (or any extremely granular CSS technique) is how difficult it is to extricate from the markup and change when a design requires it, but with a component-based application this is no longer an issue! React makes it much easier to confine our related Atomic classes inside one component, and we can even address readability/syntax complaints about Atomic with modules like classnames. There aren’t any problems with losing the self-documenting nature of our CSS either, since the components do all of the self-documenting we need. The combination of Atomic CSS with React gives us an incredibly modular CSS toolset that can be used in every component without fear of losing cacheability, breaking DRY, or becoming too difficult to maintain.
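
Here’s a hedged sketch of that combination (the atomic class names are illustrative of Atomic CSS’s general naming style rather than copied from its docs; classnames is the npm module of that name):

var cx = require('classnames');

var Alert = React.createClass({
  render: function() {
    /* All of this component's atomic classes are confined to one spot,
       and toggling them is plain JS instead of string concatenation. */
    var classes = cx('P-10', 'Fz-14', {
      'C-red': this.props.isError,
      'C-green': !this.props.isError
    });
    return <div className={classes}>{this.props.message}</div>;
  }
});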

Atomic CSS isn’t for everyone, but for large teams it can be a great tool for avoiding the massive style bloat that is sure to develop over time. It doesn’t even need to be Atomic CSS specifically; any granular CSS framework can go a long way toward improving productivity and performance. However, even if something like Atomic doesn’t look appealing (there is a bit of a learning curve), component-based styles still offer some tremendous advantages. I hope this post will at least continue to stir up the discussion of how best to implement our CSS styles in React applications.


Better Animations in React

Animating content in React can be a bit tricky, especially if you are trying to rely on the ReactCSSTransitionGroup addon that is part of the react/addons package. The crux of the issue is that React’s DOM diffing doesn’t quite work when you need to animate an element leaving the page, especially if that element is being replaced by a similar component (or the same component with different data). In this case, instead of diffing the current DOM, you need an entirely new component to be rendered while a ‘leave’ animation is applied to the old component/DOM node. ReactCSSTransitionGroup seeks to do this for you by applying animation-leave and animation-enter classes to the two nodes, but since it relies on the transitionEnd event, this can be pretty frustrating.
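
For context, typical usage looks something like this (a minimal sketch against the react/addons build of the era; the item shape and transitionName are mine):

var React = require('react/addons');
var ReactCSSTransitionGroup = React.addons.CSSTransitionGroup;

var List = React.createClass({
  render: function() {
    /* Keyed children added to or removed from the group get
       'fade-enter'/'fade-leave' classes applied, driving the CSS animation. */
    return (
      <ReactCSSTransitionGroup transitionName="fade">
        {this.props.items.map(function(item) {
          return <div key={item.id}>{item.text}</div>;
        })}
      </ReactCSSTransitionGroup>
    );
  }
});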

The transitionEnd event (in my experience) is simply not reliable enough to use in a user-facing application. It can fail to fire if the user changes tabs, if there is any sort of error, or for a myriad of other reasons. When it fails to fire, React won’t remove the old node from the page, even as it adds the new one. The result is a broken-looking experience at best, and an unusable page at worst.

The solution to this is to use a different mechanism to signal that the node should be removed from the DOM. I’ve been using a library adapted from Khan Academy’s TimeoutTransitionGroup, which uses simple timeouts to signal animation completion. The downside is that you must manually set the timeout to match the animation duration, but that is a small price to pay for an infinitely more reliable mechanism.
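
Usage mirrors ReactCSSTransitionGroup, just with explicit durations (a sketch based on Khan Academy’s published component; the prop names follow their version, so double-check against the source):

var React = require('react');
/* Khan Academy's drop-in replacement for ReactCSSTransitionGroup */
var TimeoutTransitionGroup = require('./TimeoutTransitionGroup');

var FadeList = React.createClass({
  render: function() {
    /* Enter/leave classes are removed after a fixed timeout rather than
       on transitionEnd; keep these values in sync with the CSS durations. */
    return (
      <TimeoutTransitionGroup transitionName="fade"
                              enterTimeout={500}
                              leaveTimeout={500}>
        {this.props.children}
      </TimeoutTransitionGroup>
    );
  }
});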

It would be great if ReactCSSTransitionGroup were more robust. Maybe adding a fallback timeout event or the ability to specify your own events for node removal would help, but I guess that is what the underlying TransitionGroup library is for.


Oculus Connect

I had the good fortune to be selected to attend Oculus Connect this past weekend, and it was truly an awesome experience. Listening to John Carmack and Michael Abrash speak on their experiences as developers/researchers was both informative and entertaining. I had only watched recordings of John’s talks at events like QuakeCon, but by seeing him in person (both on stage and just talking to people around the conference) I was able to truly appreciate his enthusiasm for VR.

I was also able to grab a demo with the Crescent Bay prototype just before I had to leave. I’ve been following the development of VR for a while, and having owned a DK1 and DK2 I doubted I would be surprised by any new iterations of the Rift (at least coming so soon after the DK2). Trying Crescent Bay, however, was akin to putting on the DK1 for the first time. While the DK1 showed me the potential for VR and a glimpse of what could be achieved, Crescent Bay was the first time I actually experienced the level of immersion (presence) VR enthusiasts describe as the Holy Grail of virtual reality.

After showing up for my slotted demo time and waiting in line for a few minutes, I was eventually led into a room with several identical booths lining each side, reminding me of a large set of dressing rooms in a big department store. I was shown into one of the small rooms (about 6 x 6 or so) by a rather breathless attendant. She told me she had just tried the demo herself for the first time, so we talked for a few seconds while I let her catch her breath. The room was fairly bare: a small mat in the center, the Rift against one wall, and a small camera set up against the other. The attendant told me the mat was just so I could tell where I was, then handed me the Rift and told me to adjust it to my face. She tightened a few straps and the demos soon began.

I think there were thirteen demos or so, lasting about 30 seconds to 1 minute each, and since I’m not going to remember all of them, here are some standouts in no particular order:

  • Standing in front of a full-size alien on what appeared to be the moon. It reminded me of some sort of Pixar creature from its animations/mannerisms and the way it was styled. At one point it reached out its hand and, without thinking, I tried to shake it as a friendly “hello.” I think this one was my favorite because of the way the alien’s eyes followed me as I moved around and how it made me feel like it was actually interested in me.

  • A tiny city made of what at first looked like a child’s toys. I bent down for a closer look and could see small “people” dancing around, a fire truck putting out a fire, and several other moving pieces. I enjoyed watching the tiny bustle of the city immensely and wanted to pick up/place a car or two.

  • A huge T-rex walking down a large hallway and roaring at me. The scale of the animal was incredible, and I looked up at it with awe as it stepped over me and continued on its way.

  • A slow-motion, futuristic battle with a large robot on what I assume was Earth. This was the “Showdown” demo talked about earlier in the conference (video here) and was the grand finale of the whole demo reel. The demo places you in the middle of a group of futuristic soldiers shooting at a large robot, all in the middle of a typical city setting. As soon as the demo started I jumped out of the way of an oncoming bullet. Later on, a car in front of me exploded, sending it hurtling over me. As it passed over I looked up to see a passenger twisting like a rag doll from the explosion.

These are just a few of the demos I got to experience, but they were all incredible. Each one was buttery smooth, likely due to the 90 Hz refresh rate of the new display (which really made a big difference). There was no screen door effect to speak of, just a barely-noticeable film. I’m not sure if this was just the increased resolution, but I felt like there was some type of diffuser (although I could very well be wrong). 360-degree head tracking worked great: I turned in every direction, and the only time I lost tracking was when I bent down under the camera to look closely at the tiny town demo. The actual HMD was much lighter than the DK2, which helped with all of the frantic looking around I was doing (with no motion blur to speak of). The integrated audio also seemed well done (minus all the chatter going on outside the booth from excited developers). The only con was that a considerable amount of light seeped in from the bottom, but in my haste to put it on I doubt I had it fitted as well as it could have been.

I think the demos really highlighted how VR is potentially so much more than just a tool for games. Yes, games in VR will be amazing, but so many of the demos were just experiences. Seeing massive dinosaurs, meeting a friendly alien, and being shrunken down to a microscopic size (one demo I didn’t mention) all show that VR is going to be a device that permeates many different fields. Academia, early education, and a myriad of other fields are all potential applications. I’m willing to bet that in the future, gaming will only be seen as the catalyst that launched VR into a huge variety of markets.

All in all the whole experience was excellent, and I feel fortunate to have been able to attend. I got to see some fantastic demos (from Oculus and indie devs alike) and listen to leaders in the field. Hopefully this time next year I’ll be able to attend again and discuss fresh experiences with CV1!


Debugging in Node.js Part 2: Untangling Asynchronous Events

In the last post we looked at a basic set of debugging tools that will probably help with about 90% of the bugs you encounter. For the tougher remaining 10%, I’ll go over some more specialized tools and techniques that help with some of the more unique aspects of Node (read: asynchronous events). While print statements and breakpoints (and stepping through the code) are helpful when you have an easily repeatable error, the asynchronous nature of Node can make many errors seem almost impossible to diagnose. By default, an error thrown from an event handler will produce an incredibly useless stack trace, leaving you to wonder at what point your application actually went off the rails. Hopefully some of these tools will keep you from breaking a couple of keyboards in frustration.

Long Stack Traces

All errors thrown by the V8 engine have the same basic stack trace API. Modules such as the Node stack-trace module help expose this API, but what can you really do with it? The long-stack-trace module (or longjohn, a fork of long-stack-trace) is one answer. Long stack traces go beyond the basic stack trace you get with vanilla Node events, and instead allow your stack traces to span asynchronous events. An example:

/* GET home page. */
router.get('/', function(req, res) {
  function someAsyncHandler() {
     throw new Error('Oh no! Event Error!');
  }

  setTimeout(someAsyncHandler, 1000);

  res.render('index', { title: 'Express' });
});

We get the following trace:

/blab/routes/index.js:7
  throw new Error('Oh no! Event Error!');
       ^
Error: Oh no! Event Error!
    at someAsyncHandler [as _onTimeout] (/Users/danielhood/Dev/Workspace/blab/routes/index.js:7:12)
    at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)

Not very helpful, is it? That’s because when the error is thrown, we only print the trace of the current stack frame (the stack gets reset by any asynchronous operation). However, adding the longjohn or long-stack-trace module wraps these asynchronous functions in order to store the current stack at the instant of registration. It then concatenates those saved frames onto the stack when an error is actually thrown. As a result, we get much better looking and more useful stack traces (in this case I’m using longjohn):

/Users/danielhood/Dev/Workspace/blab/node_modules/longjohn/dist/longjohn.js:185
        throw e;
              ^
Error: Oh no! Event Error!
    at someAsyncHandler (/Users/danielhood/Dev/Workspace/blab/routes/index.js:8:12)
    at listOnTimeout (timers.js:112:15)
---------------------------------------------
    at router.get.res.render.title (/Users/danielhood/Dev/Workspace/blab/routes/index.js:11:3)
    at next_layer (/Users/danielhood/Dev/Workspace/blab/node_modules/express/lib/router/route.js:103:13)
    at Route.dispatch (/Users/danielhood/Dev/Workspace/blab/node_modules/express/lib/router/route.js:107:5)
    at /Users/danielhood/Dev/Workspace/blab/node_modules/express/lib/router/index.js:205:24
    at proto.process_params (/Users/danielhood/Dev/Workspace/blab/node_modules/express/lib/router/index.js:269:12)
    at next (/Users/danielhood/Dev/Workspace/blab/node_modules/express/lib/router/index.js:199:19)
    ....//stack trace continues...

Note that the line after the break is where we originally set the timeout function. This way we can see how our asynchronous events propagated. Remember though, this method should only be used for debugging. There is a reason that the Node core doesn’t generate these Error objects constantly, and that is because they are pretty expensive. If you used these modules in a production environment, expect to have some serious performance issues.
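
For reference, enabling longjohn is just a require at your entry point, and gating it on the environment keeps the cost out of production (a minimal sketch; async_trace_limit is one of longjohn’s documented options):

/* Only collect async stack frames outside production -- it's expensive. */
if (process.env.NODE_ENV !== 'production') {
  var longjohn = require('longjohn');
  longjohn.async_trace_limit = 5; // async frames to keep (-1 for unlimited)
}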

Domains and the TryCatch Module

Domains are a part of the Node.js core (introduced in v0.8) that help developers deal with errors in their applications instead of letting them cause a crash. Domains let you declare a single point of exit for any uncaught errors, or namespace different errors in ways that allow you to define only a few specific event handlers. For example, if we modify our code above:

var domain = require('domain');
var newDomain = domain.create();

/*An error handler for this domain*/
newDomain.on('error', function(err) {
  console.log('Error handled!');
});
/* GET home page. */
router.get('/', function(req, res) {

  function someAsyncHandler() {
     throw new Error('Oh no! Event Error!');
  }

  /* Any error that bubbles up from this run function will be caught
  by our domain */
  newDomain.run(function() {
    setTimeout(someAsyncHandler, 1000);
  });

  res.render('index', { title: 'Express' });
});

Now instead of crashing our program, we just get the message “Error handled!” one second after each request. By using Domains we can:

  • Simplify error handling
  • Create single points of exit for our application
  • Gracefully handle errors so that we don’t upset our users

Domains can also be used to store contexts for particular sessions (e.g. creating a new domain per HTTP connection), or to intercept errors from different modules (instead of using a typical callback) so that all errors can be handled in the same place. There are many good uses for Domains, so check out some more examples.
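
For instance, here is the per-connection pattern sketched as Express middleware (adapted from the pattern in the Node docs; the handler details are mine):

var domain = require('domain');

/* Give every request its own domain so any error in that request's
   async chain lands in a single handler instead of crashing the process. */
app.use(function(req, res, next) {
  var reqDomain = domain.create();

  /* Cover events emitted by the req/res objects as well. */
  reqDomain.add(req);
  reqDomain.add(res);

  reqDomain.on('error', function(err) {
    console.error('Request failed:', err.stack);
    res.statusCode = 500;
    res.end('Internal Server Error');
  });

  /* Run the rest of the middleware chain inside the domain. */
  reqDomain.run(next);
});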

If you want an easy abstraction over Domains (or just want to use them for error handling), the trycatch module is a solid choice. Like the longjohn or long-stack-trace module, trycatch allows us to print long stack traces after a thrown error. Instead of wrapping every asynchronous call like the former modules, it wraps every try function inside of its own domain. If an error is thrown, trycatch uses the Domain functionality demonstrated above to give us access to the full stack trace. The trycatch module is incredibly easy to use (a single function with try and catch functions as arguments), so it’s a great drop-in tool for grabbing long stack traces. For the sake of completeness, here is a quick example from our code above:

var trycatch = require('trycatch')
trycatch.configure({
    'long-stack-traces': true
})
/* GET home page. */
router.get('/', function(req, res) {

  function someAsyncHandler() {
     throw new Error('Oh no! Event Error!');
  }

  /* Any error that bubbles up from this function will be caught
  by trycatch's internal domain */
  trycatch(function() {
    setTimeout(someAsyncHandler, 1000);
  }, function(err) {
    console.log("Error Handled!", err.stack);
  });
  res.render('index', { title: 'Express' });
});

This results in a stack trace similar to what we had before. Note that the trycatch module also allows you to color-code your output.

Zones

Zones are a fairly recent addition to the Node.js ecosystem. They allow the creation of execution contexts (similar to continuation-local-storage) with the addition of long stack traces, layered exception handling, and a bunch of cool features you should check out. I’m going to hold off on going in-depth into Zones since they are relatively new and I’m unsure about how “Zone.js overrides all asynchronous functions in the browser with custom implementations,” but they are definitely a feature to watch.

The Future With The AsyncListener API

So far we have looked at modules that help us debug asynchronous events either by collecting stack frames (expensive) or by using Domains. Several developers have expressed a need for a more generic API that isn’t quite as bulky as Domains yet is still performant. More specifically, Domains:

  1. Can’t be turned off. That is, once you attach a domain, it will always catch errors (even if your module is included in some larger app).

  2. Can’t easily be used in a highly modular app. By their very nature, Domains aren’t the most modular of tools. Attaching a domain to a piece of code automatically keeps any errors from bubbling up past it. Thus using Domains in one part of an app means that anything built on top of it will be stuck using the same error-handling paradigm. And, because of (1), there really isn’t a way to mitigate this issue.

  3. Can’t scale to large applications easily. This is really a result of (1) and (2), but trying to introduce Domains in a large project with several developers is going to cause problems. They likely aren’t going to appreciate you attaching a domain to some part of the app (without discussing it) and changing how errors are dealt with.

The AsyncListener API is an answer to these issues. Essentially, the goal of this API is to allow users to, as the name suggests, attach a listener to any type of asynchronous operation. This, in turn, allows the easy creation of modules like long-stack-trace and trycatch without the massive performance overhead of saving stack frames or the unwieldiness of using Domains. This API isn’t set to release until Node v0.12 (and is still in a bit of flux), but there is a working polyfill if you want to try it out now. Once this API lands, expect to see many more asynchronous tracing and logging modules in the npm registry.

Continuation-Local Storage

Before I dive into CLS there are two important points you should remember:

  1. CLS can be used for much more than debugging–it allows for easier attachment of data along your call chain (for example, instead of attaching a bunch of properties to the req/res objects).

  2. It is relatively new, and much of the underlying AsyncListener API is still in flux. Currently, it uses the polyfill.

That being said, continuation-local-storage can be a pretty interesting tool for debugging and logging issues. Essentially, it allows us to attach meta-data to a particular chain of execution, including asynchronous calls. We don’t have to wrap our whole chain in a particular listener (like a domain), and we can use it as a place to store additional data. If you picture a chain of callbacks/events as something like a thread (in a more traditional multi-threaded language), then CLS is conceptually similar to thread-local storage.

From the docs:

A simple rule of thumb is anywhere where you might have set a property on the request or response objects in an HTTP handler, you can (and should) now use continuation-local storage.

This works great in our example Express app. In our outer app.js file we have:

/* Grab the CLS module and initialize our new context */
var cls = require('continuation-local-storage');
var session = cls.createNamespace('mainSession');

app.use(function(req,res,next){
    req.db = db;

    /* bind all of the event handlers related to the req/res objects
        to our current session. */
    session.bindEmitter(req);
    session.bindEmitter(res);

    session.run(function() {
      /* we can even set session variables */
      session.set('userName', 'testPerson');
      /* Continue executing. All under our 'mainSession' context */
      next();
    });

});

And in our routes index.js file:

var cls = require('continuation-local-storage');
/* GET home page. */
router.get('/', function(req, res) {

  var session = cls.getNamespace('mainSession');
  /* get the user from our CLS session */
  console.log(session.get('userName')); //prints 'testPerson'

  res.render('index', { title: 'Express' });
});

So this is a great way to isolate individual execution contexts in our applications (at least it is better than overloading the req/res objects all the time), but how does this help with debugging our apps? CLS is more of a tool that can be used to build other debugging modules. Using it, you can easily record both synchronous and asynchronous calls in order to print long stack traces, pass objects to your error constructors for more detailed messages, or just implement basic logging across critical or problematic code. All of this comes with more flexibility and less of a performance impact than Domains.
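
As a tiny sketch of that idea (the namespace is carried over from the example above; the requestId key and log helper are hypothetical):

var cls = require('continuation-local-storage');

/* A logger that tags messages with whatever request id the current
   CLS context carries -- no need to thread it through every call. */
function log(message) {
  var session = cls.getNamespace('mainSession');
  var requestId = session && session.get('requestId');
  console.log('[' + (requestId || 'no-request') + '] ' + message);
}

/* In middleware: session.set('requestId', someUniqueId);
   then anywhere in that request's async chain: */
log('querying the database'); // => "[<id>] querying the database"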

As the AsyncListener API matures, expect to see more tools appear that make debugging Node’s asynchronous events easier. Core components such as the experimental tracing module will slowly improve the overall efficacy of Node debugging and performance tuning. Until then, the tools mentioned here (and others like them) will be the extent of our debugging toolbox.

