Check whether your web server is correctly configured

Last year Zone-H reported a record number of 1.5 million website defacements. One million of those websites were running Apache.

When it comes to configuring a web server, some people tend to turn everything on by default. Developers are happy because the functionality that they wanted is available without any extra configuration, and there is a reduction in support calls due to functionality not working out-of-the-box. This has proven to be a major source of problems for security in general. A web server should start off with total restriction and then access rights should be applied appropriately.
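
As a rough illustration of that deny-by-default approach (using Apache 2.2 syntax; the document root path is only a placeholder), the main configuration can lock down the entire filesystem and then grant access only where it is needed:

<Directory />
    Options None
    AllowOverride None
    Order deny,allow
    Deny from all
</Directory>

<Directory /var/www/example>
    Order allow,deny
    Allow from all
</Directory>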

You can check whether your web server is correctly configured by using Nikto, a great open source vulnerability scanner that can test for a large number of web server vulnerabilities. From their site:

“Nikto is an Open Source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 6400 potentially dangerous files/CGIs, checks for outdated versions of over 1200 servers, and version specific problems on over 270 servers. It also checks for server configuration items such as the presence of multiple index files, HTTP server options, and will attempt to identify installed web servers and software. Scan items and plugins are frequently updated and can be automatically updated.”

I’m going to run a default scan by just supplying the IP of the target:

$ cd nikto-2.1.4
$ ./nikto.pl -h 127.0.0.1

- ***** SSL support not available (see docs for SSL install) *****
- Nikto v2.1.4
---------------------------------------------------------------------------
+ Target IP:          127.0.0.1
+ Target Hostname:    localhost.localdomain
+ Target Port:        80
+ Start Time:         2011-12-12 13:06:59
---------------------------------------------------------------------------
+ Server: Apache
+ No CGI Directories found (use '-C all' to force check all possible dirs)
+ 6448 items checked: 0 error(s) and 0 item(s) reported on remote host
+ End Time:           2011-12-12 13:08:07 (68 seconds)
---------------------------------------------------------------------------
+ 1 host(s) tested

By looking at the last section of the Nikto report, I can see that there are no issues that need to be addressed.

Tools like Nikto and Skipfish serve as a foundation for professional web application security assessments. Remember, the more tools you use, the better.

Managing Multiple Build Environments

Last updated: 3 March, 2010

One of the challenges of Web development is managing multiple build environments. Most applications pass through several environments before they are released: a local development environment, a shared development environment, a system integration environment, a user acceptance environment, and a production environment.

Automated Builds

Automated builds provide a consistent method for building applications and are used to give other developers feedback about whether the code was successfully integrated or not. There are different types of builds: Continuous builds, Integration builds, Release builds and Patch builds.

A source control system is the main point of integration for source code. When your team works on separate parts of the code base, you have to ensure that your checked-in code doesn’t break the Integration build. That’s why it is important to run your unit tests locally before checking in code.

Here is a recommended process for checking code into source control:

  • Get the latest code from source control before running your tests
  • Verify that your local build is building and passing all the unit tests before checking in code
  • Use hooks to run a build after a transaction has been committed (see the post-commit sketch after this list)
  • If the Integration build fails, fix the issue because you are now blocking other developers from integrating their code
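
As an example of the hook mentioned above, here is a minimal sketch of an SVN post-commit hook that asks a Hudson job to run the Integration build after each commit (the Hudson URL, job name and token are placeholders, and the job must have “Trigger builds remotely” enabled):

#!/bin/sh
# hooks/post-commit -- trigger the Integration build on Hudson
# Subversion passes the repository path and the committed revision as arguments
REPOS="$1"
REV="$2"

wget -q -O /dev/null "http://hudson.example.com/job/project-integration/build?token=SECRET"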

Hudson can help you automate these tasks. It’s extremely easy to install and can be configured entirely from a Web UI. Also, it can be extended via plug-ins and can execute Phing, Ant, Gant, NAnt and Maven build scripts.
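
For example, assuming you have downloaded hudson.war, you can try Hudson out with a single command and then point your browser at http://localhost:8080:

$ java -jar hudson.war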

Build File

We need to create a master build file that contains the actions we want to perform. This script should make it possible to build the entire project with a single command line.

First we need to separate the source from the generated files, so our source files will live in the “src” directory and all the generated files in the “build” directory. By default Ant uses build.xml as the name for a build file; this file is usually located in the project root directory.

Then, you have to define whatever environments you want:

project/
    build/
        files/
            local/
            development/
            integration/
            production/
        packages/
            development/
                project-development-0.1-RC.noarch.rpm
            integration/
            production/
        default.properties
        local.properties
        development.properties
        production.properties
    src/
        application/
            config/
            controllers/
            domain/
            services/
            views/
        library/
        public/
    tests/
    build.xml

Build files tend to contain the same actions:

  • Delete the previous build directory
  • Copy files
  • Manage dependencies
  • Run unit tests
  • Generate HTML and XML reports
  • Package files

The target element is used as a wrapper for a sequence of actions. A target has a name, so that it can be referenced from elsewhere, either externally from the command line or internally via the “depends” attribute or an “antcall” task. Here’s a basic build.xml example:

<?xml version="1.0" encoding="iso-8859-1"?>
<project name="project" basedir="." default="main">

    <target name="init"></target>
    <target name="test"></target>
    <target name="test-selenium"></target>
    <target name="profile"></target>
    <target name="clean"></target>
    <target name="build" depends="init, test, profile, clean"></target>
    <target name="package"></target>

</project>

The property element declares properties, which are like user-definable variables available for use within an Ant build file. Properties can be defined either inside the buildfile or in a standalone properties file. For example:

<?xml version="1.0" encoding="iso-8859-1"?>
<project name="project" basedir="." default="main">

    <property file="${basedir}/build/default.properties" />
    <property file="${basedir}/build/${build.env}.properties" />
    ...

</project>

The core idea is to use property files whose names match the environment names, and to select one of them through a custom build.env property. You should also provide a file with default values; because Ant properties are immutable, the environment-specific file is loaded before the defaults so that its values take precedence. The following example describes a typical Ant build file; of course, it can easily be modified to suit your own needs.

<?xml version="1.0" encoding="iso-8859-1"?>
<project name="project" basedir="." default="main">

    <property file="${basedir}/build/default.properties" />
    <condition property="build.env" value="${build.env}" else="local">
        <isset property="build.env" />
    </condition>
    <property file="${basedir}/build/${build.env}.properties" />

     <property environment="env" />
     <condition property="env.BUILD_ID" value="${env.BUILD_ID}" else="">
         <isset property="env.BUILD_ID" />
     </condition>

    <target name="init">
        <echo message="Environment: ${build.env}"/>
        <echo message="Hudson build ID: ${env.BUILD_ID}"/>
        <echo message="Hudson build number: ${env.BUILD_NUMBER}"/>
        <echo message="SVN revision: ${env.SVN_REVISION}"/>
        <tstamp>
            <format property="build.datetime" pattern="dd-MMM-yy HH:mm:ss"/>
        </tstamp>
        <echo message="Build started at ${build.datetime}"/>
    </target>

    <target name="test">
        ...
    </target>

    <target name="clean">
        <delete dir="${build.dir}/files/${build.env}"/>
        <delete dir="${build.dir}/packages/${build.env}"/>
        <mkdir dir="${build.dir}/files/${build.env}"/>
        <mkdir dir="${build.dir}/packages/${build.env}"/>
    </target>

    <target name="build" depends="init, test, profile, clean">
        ...
    </target>
    ...

</project>

Using ant -Dname=value lets you define values for properties on the Ant command line. These properties can then be used within your build file like any normal property: ${name} will be replaced by value.

$ ant build -Dbuild.env=development

There are different ways to target multiple environments. I hope I have covered enough of the basic functionality to get you started.

Increase speed and reduce bandwidth usage

Apache’s mod_deflate module provides the DEFLATE output filter that allows output from your server to be compressed before being sent to the client over the network.

There are two ways of enabling gzip compression:

  1. Using Apache’s mod_deflate
  2. Using output buffering

Encoding the output and setting the appropriate headers manually makes the code more portable. Keep in mind that there are hundreds of Linux distributions, each differing slightly or significantly from the next. For portability, the application should not make assumptions about the OS or the configuration involved.

Using Apache

1. Enable mod_deflate

Debian/Ubuntu:

$ a2enmod deflate
$ /etc/init.d/apache2 force-reload

2. Configure mod_deflate

$ nano /etc/apache2/mods-enabled/deflate.conf

#
# mod_deflate configuration
#
<IfModule mod_deflate.c>
 AddOutputFilterByType DEFLATE text/plain
 AddOutputFilterByType DEFLATE text/html
 AddOutputFilterByType DEFLATE text/xml
 AddOutputFilterByType DEFLATE text/css
 AddOutputFilterByType DEFLATE application/xml
 AddOutputFilterByType DEFLATE application/xhtml+xml
 AddOutputFilterByType DEFLATE application/rss+xml
 AddOutputFilterByType DEFLATE application/javascript
 AddOutputFilterByType DEFLATE application/x-javascript

 DeflateCompressionLevel 9

 BrowserMatch ^Mozilla/4 gzip-only-text/html
 BrowserMatch ^Mozilla/4\.0[678] no-gzip
 BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

 DeflateFilterNote Input instream
 DeflateFilterNote Output outstream
 DeflateFilterNote Ratio ratio
</IfModule>
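
After reloading Apache you can verify that compression is actually being applied by requesting a page with an Accept-Encoding header and inspecting the response headers, for example:

$ curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://localhost/ | grep -i "content-encoding"

If mod_deflate is working you should see a Content-Encoding: gzip header in the output.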

Using output buffering

Create a gzip compressed string in your bootstrap file:

try {
    $frontController = Zend_Controller_Front::getInstance();
    if (isset($_SERVER['HTTP_ACCEPT_ENCODING']) && strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip') !== false) {
        ob_start();
        $frontController->dispatch();
        $output = gzencode(ob_get_contents(), 9);
        ob_end_clean();
        header('Content-Encoding: gzip');
        echo $output;
    } else {
        $frontController->dispatch();
    }
} catch (Exception $e) {
    if (Zend_Registry::isRegistered('Zend_Log')) {
        Zend_Registry::get('Zend_Log')->err($e->getMessage());
    }
    $message = $e->getMessage() . "\n\n" . $e->getTraceAsString();
    /* trigger event */
}

Reference

Use mod_deflate to compress Web content delivered by Apache

Apache HTTP DoS tool released

Yesterday an interesting HTTP DoS tool was released. The tool performs a denial-of-service attack on Apache (and some other servers; see the link below) by exhausting available connections. While there are a lot of DoS tools available today, this one is particularly interesting because it holds connections open while sending incomplete HTTP requests to the server.

More info here

Redis: The Lamborghini of Databases

Antonio Cangiano wrote:

Redis is a key-value database written in C. It can be used like memcached, in front of a traditional database, or on its own thanks to the fact that the in-memory datasets are not volatile but instead persisted on disk. Redis provides you with the ability to define keys that are more than mere strings, as well as being able to handle multiple databases, lists, sets and even basic master-slave replication. It’s amazingly fast. On an entry level Linux box, Redis has been benchmarked performing 110,000 SET operations, and 81,000 GETs, per second.

Despite being a very young project, it already has client libraries for several languages: Python and PHP, Erlang, and Ruby. Salvatore Sanfilippo, the creator of Redis, has implemented a Twitter clone known as Retwis to showcase how you can use Redis and PHP to build applications without the need for a database like MySQL or any SQL query.
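
To get a rough feel for the data types mentioned above, here is what a few commands look like with the bundled redis-cli client (the key names are made up, loosely in the style of Retwis):

$ redis-cli SET uid:1:username "antirez"
$ redis-cli LPUSH uid:1:posts "my first post"
$ redis-cli LRANGE uid:1:posts 0 -1
$ redis-cli SADD uid:1:followers 2

A user’s timeline maps naturally onto a Redis list, and a set is a convenient way to store followers without worrying about duplicates.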

Salvatore will soon be publishing a beginner’s article based on the PHP Twitter clone he wrote.

Full article: Introducing Redis: a fast key-value database

Fabric: Simple Pythonic Deployment Tool

Here’s the thing: you’re developing a server-deployed application (it could be a web application, but it doesn’t have to be), and you’re probably deploying to more than one server. Even if you have just one server to deploy to, it still gets tiresome in the long run to build your project, fire up your favorite SFTP utility, upload your build, log in to the server with SSH, possibly stop the server, deploy the build, and finally start the server again.

What we’d like to do is build, upload, and deploy our application with a single command. Fabric is a tool that, at its core, logs into a number of hosts over SSH and executes a set of commands, possibly uploading or downloading files along the way.
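
As a minimal sketch of what that can look like (the host names, paths and build command are placeholders), a fabfile.py might contain something like this:

# fabfile.py -- a minimal deployment sketch using the Fabric API
from fabric.api import env, local, put, run, sudo

env.hosts = ['web1.example.com', 'web2.example.com']

def deploy():
    # build the project locally, e.g. with the Ant build file described earlier
    local('ant build -Dbuild.env=production')
    # upload the package to each host in env.hosts
    put('build/packages/production/project.tar.gz', '/tmp/project.tar.gz')
    # unpack it and restart the web server
    run('tar -xzf /tmp/project.tar.gz -C /var/www/project')
    sudo('/etc/init.d/apache2 restart')

Running “fab deploy” then executes the task once for every host listed in env.hosts.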

Fabric Project Site

Command Line Kung Fu Developer

Want to become a Linux guru, or just look like one? Then visit Command-Line-Fu.

From the site:

Command-Line-Fu is the place to record those command-line gems that you return to again and again. Delete that bloated snippets file you’ve been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on and discussed. Voting is also encouraged so the best float to the top.

You can subscribe to the feeds or follow the latest commands on Twitter. Every new command is wrapped in a tweet and posted to the Command-Line-Fu account.

Command-Line-Fu