Node.js, Tools, Web Apps, Web Services

Website performance monitoring tool

Monitoring systems let you track changes to your front-end code base over time, catching regressions and measuring the ongoing effect of any performance optimisation work. Easy-to-use dashboards are a must when it comes to monitoring the state of your web apps. Companies like Calibre or SpeedCurve offer this as a professional service, but not everyone can afford them.

Meet SpeedTracker

SpeedTracker is an open-source (MIT-licensed), self-hosted website performance monitoring tool developed by Eduardo Bouças. It runs on top of WebPageTest, performing periodic performance tests on your website and visualising how the various performance metrics evolve over time.

SpeedTracker provides clean charts and graphs that can help you identify possible problem areas.


Check out the demo here:

WebPageTest is an incredibly useful resource for any web developer, but the information it provides becomes much more powerful when collected regularly rather than in isolated one-off runs. Web application monitoring is not just for detecting downtime; it also gives you insight into performance trends during peak load, by time of day, and by day of the week.


For me, the best thing about SpeedTracker is that it runs entirely on a GitHub repository: data from WebPageTest is pushed to the repository, which can be public or private, and the dashboard can be served from GitHub Pages with HTTPS baked in for free.

SpeedTracker also allows you to define performance budgets for any metric you want to monitor and to receive an alert when a budget is exceeded, either by e-mail or as a Slack message.
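A budget entry in the repository's configuration might look something like the sketch below. The field names here are illustrative only, reconstructed from memory of the project's profile format; check the SpeedTracker README for the real schema.

```
# Illustrative only — field names approximate SpeedTracker's profile schema.
budgets:
  - metric: loadTime     # metric to watch (hypothetical name)
    max: 3000            # alert when load time exceeds 3000 ms
  - metric: SpeedIndex
    max: 2000
```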

For instructions on how to install this tool, visit the following GitHub repo:


Deployment, Linux, Open-source, Security, Tools

Check whether your web server is correctly configured

Last year Zone-H reported a record 1.5 million website defacements; 1 million of the affected sites were running Apache.

When it comes to configuring a web server, some people tend to turn everything on by default. Developers are happy because the functionality that they wanted is available without any extra configuration, and there is a reduction in support calls due to functionality not working out-of-the-box. This has proven to be a major source of problems for security in general. A web server should start off with total restriction and then access rights should be applied appropriately.

You can check whether your web server is correctly configured by using Nikto, a great open-source vulnerability scanner able to check for quite a large number of web server vulnerabilities. From their site:

“Nikto is an Open Source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 6400 potentially dangerous files/CGIs, checks for outdated versions of over 1200 servers, and version specific problems on over 270 servers. It also checks for server configuration items such as the presence of multiple index files, HTTP server options, and will attempt to identify installed web servers and software. Scan items and plugins are frequently updated and can be automatically updated.”

I’m going to run a default scan by just supplying the target host:

$ cd nikto-2.1.4
$ ./ -h localhost

- ***** SSL support not available (see docs for SSL install) *****
- Nikto v2.1.4
+ Target IP:
+ Target Hostname:    localhost.localdomain
+ Target Port:        80
+ Start Time:         2011-12-12 13:06:59
+ Server: Apache
+ No CGI Directories found (use '-C all' to force check all possible dirs)
+ 6448 items checked: 0 error(s) and 0 item(s) reported on remote host
+ End Time:           2011-12-12 13:08:07 (68 seconds)
+ 1 host(s) tested

By looking at the last section of the Nikto report, I can see that there are no issues that need to be addressed.
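A one-off scan only tells you about today's configuration; to catch regressions you could schedule Nikto from cron and keep the reports. The path below is hypothetical, but `-output` and `-Format` are standard Nikto options:

```
# Illustrative crontab entry: weekly scan, every Monday at 03:00, saved as HTML
0 3 * * 1  /opt/nikto-2.1.4/ -h localhost -output /var/log/nikto-weekly.html -Format htm
```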

Tools like Nikto and Skipfish serve as a foundation for professional web application security assessments. And since each scanner covers a different set of checks, combining several of them gives you broader coverage.


Open-source, Tools

Command-line memcached stat reporter

Nicholas Tang wrote a nice little Perl script that shows a top-like display of basic memcached stats for a list of servers. You can specify thresholds, and the display turns red when a value exceeds them. You can also choose the refresh/sleep interval, and whether to show instantaneous (per-second) stats or lifetime stats.

To install it you only need to download the script and make it executable:

$ curl > ~/bin/memcache-top
$ chmod +x ~/bin/memcache-top
$ memcache-top --sleep 3 --instances,,

Here’s some sample output:

memcache-top v0.6       (default port: 11211, color: on, refresh: 3 seconds)

INSTANCE                USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  READ/s  WRITE/s
                        88.9%   69.7%   1661    0.9ms   0.3     47      13.9K   9.8K
                        88.8%   69.9%   2121    0.7ms   1.3     168     17.6K   68.9K
                        88.9%   69.4%   1527    0.7ms   1.7     48      14.4K   13.6K
AVERAGE:                84.7%   72.9%   1704    1.0ms   1.3     69      13.5K   30.3K

TOTAL:          19.9GB/ 23.4GB          20.0K   11.7ms  15.3    826     162.6K  363.6K
(ctrl-c to quit.)
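The HIT % column is simply get_hits as a share of all gets, computed from the counters that memcached's `stats` command exposes. You can reproduce the calculation with awk; the two counter values below are made-up samples (in practice you would pipe the real `stats` output from the server into awk):

```shell
# Derive a hit percentage from raw memcached "stats" counters.
# The two STAT lines are hard-coded sample values standing in for
# real output such as `printf 'stats\r\nquit\r\n' | nc host 11211`.
printf 'STAT get_hits 697\nSTAT get_misses 303\n' |
awk '/get_hits/ {h=$3} /get_misses/ {m=$3}
     END {printf "HIT %%: %.1f\n", 100 * h / (h + m)}'
```

For the sample counters this prints HIT %: 69.7, matching the first instance row above.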

Project Home

Agile Development, Deployment, Java, Linux, PHP, Tools, Web Apps

Managing Multiple Build Environments

Last updated: 3 March, 2010

One of the challenges of Web development is managing multiple build environments. Most applications pass through several environments before they are released: a local development environment, a shared development environment, a system integration environment, a user acceptance environment and a production environment.

Automated Builds

Automated builds provide a consistent method for building applications and give other developers feedback about whether the code was successfully integrated or not. There are different types of builds: continuous builds, integration builds, release builds and patch builds.

A source control system is the main point of integration for source code. When your team works on separate parts of the code base, you have to ensure that your checked in code doesn’t break the Integration build. That’s why it is important that you run your unit tests locally before checking in code.

Here is a recommended process for checking code into source control:

  • Get the latest code from source control before running your tests
  • Verify that your local build compiles and passes all the unit tests before checking in code
  • Use hooks to run a build after a transaction has been committed
  • If the Integration build fails, fix the issue immediately, because you are now blocking other developers from integrating their code

Hudson can help you automate these tasks. It’s extremely easy to install and can be configured entirely from a Web UI. Also, it can be extended via plug-ins and can execute Phing, Ant, Gant, NAnt and Maven build scripts.

Build File

We need to create a master build file that contains the actions we want to perform. This script should make it possible to build the entire project with a single command line.

First we need to separate the source from the generated files, so our source files will live in the “src” directory and all the generated files in the “build” directory. By default Ant uses build.xml as the name for a build file; this file is usually located in the project root directory.

Then, you have to define whatever environments you want:
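The example that belongs here is one properties file per environment, named after it. A minimal sketch, with hypothetical keys and values:

```
# build/
db.url = jdbc:mysql://localhost/app
debug  = true

# build/
db.url = jdbc:mysql://
debug  = false
```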


Build files tend to contain the same actions:

  • Delete the previous build directory
  • Copy files
  • Manage dependencies
  • Run unit tests
  • Generate HTML and XML reports
  • Package files

The target element is used as a wrapper for a sequence of actions. A target has a name so that it can be referenced from elsewhere, either externally from the command line or internally via the “depends” attribute or the “antcall” task. Here’s a basic build.xml example:

<?xml version="1.0" encoding="iso-8859-1"?>
<project name="project" basedir="." default="main">

    <target name="init"></target>
    <target name="test"></target>
    <target name="test-selenium"></target>
    <target name="profile"></target>
    <target name="clean"></target>
    <target name="build" depends="init, test, profile, clean"></target>
    <target name="package"></target>

</project>

The property element allows the declaration of properties, which are like user-definable variables available for use within an Ant build file. Properties can be defined either inside the build file or in a standalone properties file. For example:

<?xml version="1.0" encoding="iso-8859-1"?>
<project name="project" basedir="." default="main">

    <property file="${basedir}/build/" />
    <property file="${basedir}/build/${build.env}.properties" />

</project>

The core idea is to use property files whose names match the environment names, and then select the right one via the build.env property. You should also provide a file with default values. The following example describes a typical Ant build file; of course, it can easily be modified to suit your needs.

<?xml version="1.0" encoding="iso-8859-1"?>
<project name="project" basedir="." default="main">

    <property file="${basedir}/build/" />
    <condition property="build.env" value="${build.env}" else="local">
        <isset property="build.env" />
    </condition>
    <property file="${basedir}/build/${build.env}.properties" />

    <property environment="env" />
    <condition property="env.BUILD_ID" value="${env.BUILD_ID}" else="">
        <isset property="env.BUILD_ID" />
    </condition>

    <target name="init">
        <echo message="Environment: ${build.env}"/>
        <echo message="Hudson build ID: ${env.BUILD_ID}"/>
        <echo message="Hudson build number: ${env.BUILD_NUMBER}"/>
        <echo message="SVN revision: ${env.SVN_REVISION}"/>
        <tstamp>
            <format property="build.datetime" pattern="dd-MMM-yy HH:mm:ss"/>
        </tstamp>
        <echo message="Build started at ${build.datetime}"/>
    </target>

    <target name="test">
    </target>

    <target name="clean">
        <delete dir="${build.dir}/files/${build.env}"/>
        <delete dir="${build.dir}/packages/${build.env}"/>
        <mkdir dir="${build.dir}/files/${build.env}"/>
        <mkdir dir="${build.dir}/packages/${build.env}"/>
    </target>

    <target name="build" depends="init, test, profile, clean">
    </target>

</project>

Using ant -Dname=value lets you define property values on the Ant command line. These properties can then be used within your build file like any normal property: ${name} expands to value.

$ ant build -Dbuild.env=development

There are different ways to target multiple environments. I hope I have covered enough of the basic functionality to get you started.

Linux, Security, Tools

Apache HTTP DoS tool released

Yesterday an interesting HTTP DoS tool was released. The tool performs a denial-of-service attack against Apache (and some other servers; see the link below) by exhausting available connections. While there are a lot of DoS tools available today, this one is particularly interesting because it holds connections open by sending incomplete HTTP requests to the server.
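Attacks of this kind, which hold connections open with partial requests, can be partially mitigated at the server. On Apache 2.2.15 and later, for example, mod_reqtimeout can drop clients whose headers or body trickle in too slowly; the values below are the module's documented defaults, shown here as an illustration rather than tuned advice:

```
# Drop connections whose request headers or body arrive too slowly
<IfModule mod_reqtimeout.c>
    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
</IfModule>
```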

More info here

Programming, Tools

Google Page Speed: Web Performance Best Practices

When you profile a web page with Page Speed, it evaluates the page’s conformance to a number of different rules. These rules are general front-end best practices you can apply at any stage of web development. Google provides documentation of each of the rules, so whether or not you run the Page Speed tool, you can refer to these pages at any time.

The best practices are grouped into five categories that cover different aspects of page load optimization:

  • Optimizing caching: Keeping your application’s data and logic off the network altogether
  • Minimizing round-trip times: Reducing the number of serial request-response cycles
  • Minimizing request size: Reducing upload size
  • Minimizing payload size: Reducing the size of responses, downloads, and cached pages
  • Optimizing browser rendering: Improving the browser’s layout of a page
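Several of these rules translate directly into server configuration. For the caching category, for instance, an Apache mod_expires fragment like the following (content types and lifetimes illustrative) gives static assets far-future expiry headers so repeat visits skip the network entirely:

```
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png "access plus 1 year"
    ExpiresByType text/css  "access plus 1 month"
    ExpiresByType application/x-javascript "access plus 1 month"
</IfModule>
```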

Web Performance Best Practices

Open-source, PHP, Tools, Web Apps

TypeFriendly: A Documentation And User Manual Builder

TypeFriendly is a documentation generation script written in PHP 5. It was designed to be easy to use, letting you see first results within minutes of starting work. The script contains everything you need to write clear, multilingual documentation for your project, so you do not have to code everything on your own.

The most important features of TypeFriendly:

  1. Modular documentation structure – the docs are generated from text files, and the structure and navigation are derived from the file names.
  2. Simple syntax – the text is written in intuitive, clean Markdown syntax.
  3. Multilingual support and tools – TypeFriendly allows you to create your manuals in many language versions. It also contains a tool that shows whether the derived languages are up to date.
  4. Configurable output formats – currently, TypeFriendly is able to generate the documentation in XHTML (many pages) and XHTML (single page). A third format – metadata – is still under development; it will make it possible to import the docs into a database, for example to build an on-line version with user comments.
  5. Various add-ons, such as syntax highlighting, references and class description fields.
  6. Navigation generators.
  7. It is portable – it works under Linux, FreeBSD and Windows; all you need is a PHP interpreter.

TypeFriendly is distributed under the terms of GNU General Public License 3, which means that you can use, modify and share it for free.



Source Code