How to pass variables to a Docker container when building a Node app

Environment variables are declared with the ENV instruction and referenced in the Dockerfile as either $VARIABLE_NAME or ${VARIABLE_NAME}.

Passing variables at build-time

The ENV instruction sets an environment variable to a given value. Variables set using ENV persist when a container is run from the resulting image. For example:

FROM node:9

ENV NODE_ENV development

The Dockerfile also lets you specify arguments at build time. The ARG instruction defines a variable that users can pass to the builder:

FROM node:9

ARG PORT
ARG NODE_ENV

When building a Docker image from the command line, you can set those values using --build-arg:

 docker build --tag webapp --build-arg PORT=3000 --build-arg NODE_ENV=development .

Executing commands using the shell

And here is the secret ingredient: forward the build arguments into environment variables with ARG and ENV, then use the shell to pick an npm script based on $NODE_ENV:

FROM node:9

ARG PORT=3000
ARG NODE_ENV=development
ENV PORT $PORT
ENV NODE_ENV $NODE_ENV

RUN mkdir -p /usr/app
WORKDIR /usr/app
ADD . .

RUN npm install
RUN /bin/bash -c '[[ "${NODE_ENV}" == "production" ]] && npm run build:prod || npm run build:dev'

EXPOSE $PORT
CMD ["npm", "run", "start"]

Finally, you expose the port number and start the HTTP server.
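At runtime, the Node app can read the values persisted with ENV through process.env. Here is a minimal sketch (the resolveConfig helper and the fallback defaults are illustrative, not part of the Dockerfile above):

```javascript
// Illustrative: resolve runtime config from the variables Docker baked in
// with ENV, falling back to development defaults when they are unset.
function resolveConfig(env) {
    return {
        port: env.PORT || 3000,
        nodeEnv: env.NODE_ENV || 'development'
    };
}

const config = resolveConfig(process.env);
console.log('Starting in ' + config.nodeEnv + ' mode on port ' + config.port);
```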

That’s it! Thanks for reading and happy Dockering :)

How to create a Data Container Component in React

One pattern I’ve used quite a lot while working with React at the BBC and Discovery Channel is the Data Container pattern. It became popular in the last couple of years thanks to libraries like Redux and React Komposer.


The idea is simple. When you build UI components in React you feed data into them via containers. Inside those containers you may need to access different data sources, filter data, handle errors, etc. So data containers help you build data-driven components and separate them into two categories: Data components and Presentational components.

  • A Presentational component is mainly concerned with the view; it doesn’t specify how the data is loaded or mutated. It receives data and callbacks exclusively via props.
  • A Data component talks to the data sources and provides the data and behaviour to the Presentational component. It’s usually generated using a higher-order function, such as connect() or createContainer().

There are two ways to implement this pattern: inheritance or composition.

  1. Inheritance: a React component class extends a Data Container component class.
  2. Composition: a React component is injected into the Data Container (React Komposer uses this approach).

I recommend composition over inheritance as a design principle because it gives you more flexibility.

Code Example

Let’s say you want to display a list of notifications and you have two components: NotificationsContainer and NotificationsList.

First you need to fetch the data and add it to the NotificationsContainer:

import React, {createElement} from 'react';
import PropTypes from 'prop-types';
import https from 'https';
import DataStore from '/path/to/DataStore';

export default function createContainer(SubComponent, subComponentProps) {

    class DataContainer extends React.Component {

    class DataContainer extends React.Component {

        constructor(props) {
            super(props);
            this.dataSourceUrl = props.dataSourceUrl;
            this.state = {
                data: null,
                error: null
            };
        }

        componentDidMount() {
            this.setInitialData();
        }

        setInitialData() {
            // is assumed to be the DataStore key
            if (DataStore.hasData( {
                this.setState({
                    data: DataStore.getData(
                });
            } else {
                this.fetchData();
            }
        }

        fetchData() {
            https.get(this.dataSourceUrl, res => {
                let chunkedData = '';

                res.on('data', data => {
                    chunkedData += data;
                });

                res.on('end', () => {
                    this.setState({
                        data: chunkedData
                    });
                });

                res.on('error', error => {
                    this.setState({error});
                });
            });
        }

        render() {
            return createElement(
                SubComponent,
                Object.assign({}, subComponentProps, this.state)
            );
        }
    }

    DataContainer.propTypes = {
        name: PropTypes.string,
        dataSourceUrl: PropTypes.string
    };

    return DataContainer;
}
Then you need to create a NotificationsList component that receives the data as a prop:

import React from 'react';
import PropTypes from 'prop-types';
// NotificationListItem is assumed to live alongside this component
import NotificationListItem from './NotificationListItem';

class NotificationsList extends React.Component {

    render() {
        const listItems = || [];

        return (
            <ul>
                {, index) => {
                    return <NotificationListItem key={index} item={item} index={index} />;
                })}
            </ul>
        );
    }
}

NotificationsList.propTypes = {
    data: PropTypes.object,
    error: PropTypes.object
};

export default NotificationsList;

And, finally, you need to create and render the data container:

import React from 'react';
import NotificationsList from './NotificationsList';
import createContainer from './createContainer';

export default class HomePage extends React.Component {

    render() {
        const NotificationsContainer = createContainer(
            NotificationsList, {
                propName: 'propValue'
            }
        );

        return (
            <NotificationsContainer name="notifications" />
        );
    }
}
If you are looking for something a bit more advanced, similar to what I was using at the BBC, then check out this nice little project called Second. Or, if you are building a more complex app and need to manage state or map components to multiple containers, then you should consider using Redux. Here’s a great presentation about React/Redux.

For those using React 16.3, keep an eye on the following projects: react-waterfall and unistore. They are data stores built on top of the new Context API.

If you don’t want to miss any of my articles, follow me on twitter @fedecarg

Website performance monitoring tool

Monitoring systems allow you to track changes to your front-end code base over time, catching any regression issues and measuring the ongoing effects of performance optimisation work. Easy-to-use dashboards are a must when it comes to monitoring the state of your web apps. Companies like Calibre and SpeedCurve offer this as a professional service, but not everyone can afford them.

Meet SpeedTracker

SpeedTracker is an open-source (MIT-licensed), self-hosted solution for monitoring your website’s performance, developed by Eduardo Bouças. It runs on top of WebPageTest, periodically testing your website and visualising how the various performance metrics evolve over time.

SpeedTracker provides clean charts and graphs that can help you identify possible problem areas.


Check out the demo here:

WebPageTest is an incredibly useful resource for any web developer, but the information it provides becomes much more powerful when monitored regularly, rather than at isolated events. Web application monitoring is not just for detecting downtime, it also gives you additional insight into performance trends during peak load times, as well as by time of day, and day of the week.


For me, the best thing about SpeedTracker is that it runs on GitHub: data from WebPageTest is pushed to a repository, private or public, and the dashboard can be served from GitHub Pages with HTTPS baked in for free.

SpeedTracker also allows you to define performance budgets for any metric you monitor and to receive an alert, by e-mail or Slack message, when a budget is overrun.
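Conceptually, a performance budget is just a threshold check on a metric. The sketch below is a hypothetical illustration of the idea, not SpeedTracker’s actual code (the function and metric names are made up):

```javascript
// Hypothetical budget check: compare a measured metric against its budget
// and return an overrun message, the way budget alerting conceptually works.
function checkBudget(metric, measured, budget) {
    if (measured > budget) {
        return metric + ' over budget: ' + measured + 'ms (budget ' + budget + 'ms)';
    }
    return null; // within budget, nothing to report
}

console.log(checkBudget('loadTime', 3200, 3000));
// → loadTime over budget: 3200ms (budget 3000ms)
```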

For instructions on how to install this tool, visit the following GitHub repo:


Node.js: How to mock the imports of an ES6 module

The package mock-require is useful if you want to mock require statements in Node.js. It has a simple API that allows you to mock anything, from a single exported function to a standard library. Here’s an example:


config.js exports an init function:

function init() {
    // ...
}

module.exports = init;


content.js (the module under test) imports it:

import config from '../../config.js';

function load() {
    // ...
}

module.exports = load;


import {assert} from 'chai';
import sinon from 'sinon';
import mockRequire from 'mock-require';

describe('My module', () => {

    let module; // module under test
    let configMock;

    beforeEach(() => {
        configMock = {
            init: sinon.stub().returns("foo")
        };

        // mock es6 import (tip: use the same import path)
        mockRequire("../../config.js", configMock);

        // require es6 module
        module = require("../../../app/services/content.js");
    });

    afterEach(() => {
        // remove all registered mocks
        mockRequire.stopAll();
    });

    describe('Initialisation', () => {

        it('should have a load function', () => {
            assert.isFunction(module);
        });
    });
});


JavaScript: Retrieve and paginate JSON-encoded data

I’ve created a jQuery plugin that allows you to retrieve a large data set in JSON format from a server script and load the data into a list or table with client side pagination enabled. To use this plugin you need to:

Include jquery.min.js and jquery.paginate.min.js in your document:

<script type="text/javascript" src="jquery.min.js"></script>
<script type="text/javascript" src="jquery.paginate.min.js"></script>

Include a small css to skin the navigation links:

<style type="text/css">
a.disabled {
    text-decoration: none;
    color: black;
    cursor: default;
}
</style>
Define an ID on the element you want to paginate, for example “listitems”. If you have more than 10 child elements and want to avoid displaying them before the JavaScript executes, you can hide the element by default:

<ul id="listitems" style="display:none"></ul>

Place a div in the place you want to display the navigation links:

Finally, include an initialization script at the bottom of your page like this:

$(document).ready(function() {
    $.getJSON('data.json', function(data) {
        var items = [];
        $.each(data.items, function(i, item) {
            items.push('<li>' + item + '</li>');
        });
        $('#listitems').append(items.join(''));
        $('#listitems').paginate({itemsPerPage: 5});
    });
});

You can fork the code on GitHub or download it.

JavaScript: Asynchronous Script Loading and Lazy Loading

Most of the time remote scripts are included at the end of an HTML document, right before the closing body tag. This is because browsers are single-threaded, and when they encounter a script tag they halt all other processes until they download and parse the script. By including scripts at the end, you allow the browser to download and render all page elements, style sheets and images without any unnecessary delay. Also, if the browser renders the page before executing any script, you know that all page elements are already available to retrieve.

However, websites like Facebook use a more advanced technique: they include scripts dynamically via DOM methods. This technique, which I’ll briefly explain here, is known as “Asynchronous Script Loading”.

Let’s take a look at the script that Facebook uses to download its JS library:

(function () {
    var e = document.createElement('script');
    e.src = '';
    e.async = true;
    document.getElementById('fb-root').appendChild(e);
}());
When you dynamically append a script to a page, the browser does not halt other processes, so it continues rendering page elements and downloading resources. The best place to put this code is right after the opening body tag. This allows Facebook initialization to happen in parallel with the initialization of the rest of the page.

Facebook also makes non-blocking loading of the script easy to use by providing the fbAsyncInit hook. If this global function is defined, it will be executed when the library is loaded.

window.fbAsyncInit = function () {
    FB.init({
        appId: 'YOUR APP ID',
        status: true,
        cookie: true,
        xfbml: true
    });
};

Once the library has loaded, Facebook checks the value of window.fbAsyncInit.hasRun and if it’s false it makes a call to the fbAsyncInit function:

if (window.fbAsyncInit && !window.fbAsyncInit.hasRun) {
    window.fbAsyncInit.hasRun = true;
    window.fbAsyncInit();
}

Now, what if you want to load multiple files asynchronously, or you need to include a small amount of code at page load and then download other scripts only when needed? Loading scripts on demand is called “Lazy Loading”. Many libraries exist specifically for this purpose; however, you only need a few lines of JavaScript to do it.

Here is an example:

$L = function (c, d) {
    for (var b = c.length, e = b, f = function () {
            if (!(this.readyState
                    && this.readyState !== "complete"
                    && this.readyState !== "loaded")) {
                this.onload = this.onreadystatechange = null;
                --e || d();
            }
        }, g = document.getElementsByTagName("head")[0], i = function (h) {
            var a = document.createElement("script");
            a.async = true;
            a.src = h;
            a.onload = a.onreadystatechange = f;
            g.appendChild(a);
        }; b;) i(c[--b]);
};

The best place to put this code is inside the head tag. You can then use the $L function to asynchronously load your scripts on demand. $L takes two arguments: an array of script URLs (c) and a callback function (d).

var scripts = [];
scripts[0] = '';
scripts[1] = '';
$L(scripts, function () {
    console.log("ga and jquery scripts loaded");
});

$L([''], function () {
    console.log("facebook script loaded");
    window.fbAsyncInit.hasRun = true;
    FB.init({
        appId: 'YOUR APP ID',
        status: true,
        cookie: true,
        xfbml: true
    });
});

You can see this script in action here (right click -> view page source).

Google Page Speed: Web Performance Best Practices

When you profile a web page with Page Speed, it evaluates the page’s conformance to a number of different rules. These rules are general front-end best practices you can apply at any stage of web development. Google provides documentation for each of the rules, so whether or not you run the Page Speed tool, you can refer to these pages at any time.

The best practices are grouped into five categories that cover different aspects of page load optimization:

  • Optimizing caching: Keeping your application’s data and logic off the network altogether
  • Minimizing round-trip times: Reducing the number of serial request-response cycles
  • Minimizing request size: Reducing upload size
  • Minimizing payload size: Reducing the size of responses, downloads, and cached pages
  • Optimizing browser rendering: Improving the browser’s layout of a page

Web Performance Best Practices

ActiveRecord: JavaScript ORM Library

Aptana has just released a beta version of ActiveRecord.js, an ORM JavaScript library that implements the ActiveRecord pattern. It works with AIR and other environments:

ActiveRecord.js is a single file, MIT licensed, relies on no external JavaScript libraries, supports automatic table creation, data validation, data synchronization, relationships between models, life cycle callbacks and can use an in memory hash table to store objects if no SQL database is available.


var User = ActiveRecord.define('users', {
    username: '',
    email: ''
});
var ryan = User.create({
    username: 'ryan',
    email: ''
});

var Article = ActiveRecord.define('articles', {
    name: '',
    body: '',
    user_id: 0
});
var a = Article.create({
    name: 'Announcing ActiveRecord.js',
    user_id: // links the article to its author
});
a.set('name', 'Announcing ActiveRecord.js!!!');;

a.getUser() == ryan; // true
ryan.getArticleList()[0] == a; // true


Building desktop Linux applications with JavaScript

During his keynote presentation at OSCON last year, Ubuntu founder Mark Shuttleworth described application extensibility as an important enabler of innovation and user empowerment. Citing the Firefox web browser and its rich ecosystem of add-ons as an example, Shuttleworth suggested that the Linux community could deliver a lot of extra value by making scriptable automation and plugin capabilities available pervasively across the entire desktop stack.

Mark Shuttleworth also described his strategy for accelerating the adoption of Linux. He discussed the importance of extensibility in open platforms, contemplated the challenges of adapting conventional software methodologies so that they can be used for community-driven development, and contended that the open source software community has the potential to deliver a user experience which exceeds that of Apple’s Mac OS X platform.

Ryan Paul: Building desktop Linux apps with JavaScript