Check whether your web server is correctly configured

Last year Zone-H reported a record 1.5 million website defacements. 1 million of those sites were running Apache.

When it comes to configuring a web server, some people tend to turn everything on by default. Developers are happy because the functionality they want is available without extra configuration, and support calls about features not working out of the box are reduced. This has proven to be a major source of security problems. A web server should start off fully restricted, with access rights then granted as appropriate.

You can check whether your web server is correctly configured by using Nikto, a great open source vulnerability scanner that can test for a large number of web server vulnerabilities. From its site:

“Nikto is an Open Source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 6400 potentially dangerous files/CGIs, checks for outdated versions of over 1200 servers, and version specific problems on over 270 servers. It also checks for server configuration items such as the presence of multiple index files, HTTP server options, and will attempt to identify installed web servers and software. Scan items and plugins are frequently updated and can be automatically updated.”

I’m going to run a default scan by just supplying the IP of the target:

$ cd nikto-2.1.4
$ ./nikto.pl -h 127.0.0.1

- ***** SSL support not available (see docs for SSL install) *****
- Nikto v2.1.4
---------------------------------------------------------------------------
+ Target IP:          127.0.0.1
+ Target Hostname:    localhost.localdomain
+ Target Port:        80
+ Start Time:         2011-12-12 13:06:59
---------------------------------------------------------------------------
+ Server: Apache
+ No CGI Directories found (use '-C all' to force check all possible dirs)
+ 6448 items checked: 0 error(s) and 0 item(s) reported on remote host
+ End Time:           2011-12-12 13:08:07 (68 seconds)
---------------------------------------------------------------------------
+ 1 host(s) tested

By looking at the last section of the Nikto report, I can see that there are no issues that need to be addressed.

Tools like Nikto and Skipfish serve as a foundation for professional web application security assessments. Remember, the more tools you use, the better.

Command-line memcached stat reporter

Nicholas Tang wrote a nice little Perl script that shows a basic memcached top display for a list of servers. You can specify thresholds, for instance, and the display will turn red when they are exceeded. You can also choose the refresh/sleep time, and whether to show immediate (per-second) stats or lifetime stats.

To install it you only need to download the script and make it executable:

$ curl http://memcache-top.googlecode.com/files/memcache-top-v0.6 > ~/bin/memcache-top
$ chmod +x ~/bin/memcache-top
$ memcache-top --sleep 3 --instances 10.50.11.3,10.50.11.4,10.50.11.5

Here’s some sample output:

memcache-top v0.6       (default port: 11211, color: on, refresh: 3 seconds)

INSTANCE                USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  READ/s  WRITE/s
10.50.11.3:11211        88.9%   69.7%   1661    0.9ms   0.3     47      13.9K   9.8K
10.50.11.4:11211        88.8%   69.9%   2121    0.7ms   1.3     168     17.6K   68.9K
10.50.11.5:11211        88.9%   69.4%   1527    0.7ms   1.7     48      14.4K   13.6K
AVERAGE:                84.7%   72.9%   1704    1.0ms   1.3     69      13.5K   30.3K   

TOTAL:          19.9GB/ 23.4GB          20.0K   11.7ms  15.3    826     162.6K  363.6K
(ctrl-c to quit.)

Project Home
http://code.google.com/p/memcache-top/

Managing Multiple Build Environments

Last updated: 3 March, 2010

One of the challenges of Web development is managing multiple build environments. Most applications pass through several environments before they are released: a local development environment, a shared development environment, a system integration environment, a user acceptance environment and a production environment.

Automated Builds

Automated builds provide a consistent method for building applications and are used to give other developers feedback about whether the code was successfully integrated or not. There are different types of builds: Continuous builds, Integration builds, Release builds and Patch builds.

A source control system is the main point of integration for source code. When your team works on separate parts of the code base, you have to ensure that your checked-in code doesn’t break the Integration build. That’s why it is important to run your unit tests locally before checking in code.

Here is a recommended process for checking code into source control:

  • Get the latest code from source control before running your tests
  • Verify that your local build is building and passing all the unit tests before checking in code
  • Use hooks to run a build after a transaction has been committed
  • If the Integration build fails, fix the issue because you are now blocking other developers from integrating their code
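The steps above can be sketched as a small shell gate. The svn and ant commands are assumptions based on the tools this post mentions; substitute your own version control and build commands:

```shell
# Hypothetical pre-commit gate: update, test locally, then commit.
checkin() {
    svn update         || return 1   # get the latest code from source control
    ant test           || return 1   # verify the local build passes the unit tests
    svn commit -m "$1"               # check in only if the tests passed
}
```

If the Integration build still breaks after the commit, fixing it immediately keeps the rest of the team unblocked.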

Hudson can help you automate these tasks. It’s extremely easy to install and can be configured entirely from a Web UI. Also, it can be extended via plug-ins and can execute Phing, Ant, Gant, NAnt and Maven build scripts.

Build File

We need to create a master build file that contains the actions we want to perform. This script should make it possible to build the entire project with a single command line.

First, we need to separate the source from the generated files, so our source files will live in the “src” directory and all the generated files in the “build” directory. By default Ant uses build.xml as the name for a build file; this file is usually located in the project root directory.

Then, you have to define whatever environments you want:

project/
    build/
        files/
            local/
            development/
            integration/
            production/
        packages/
            development/
                project-development-0.1-RC.noarch.rpm
            integration/
            production/
        default.properties
        local.properties
        development.properties
        production.properties
    src/
        application/
            config/
            controllers/
            domain/
            services/
            views/
        library/
        public/
    tests/
    build.xml

Build files tend to contain the same actions:

  • Delete the previous build directory
  • Copy files
  • Manage dependencies
  • Run unit tests
  • Generate HTML and XML reports
  • Package files

The target element is used as a wrapper for a sequence of actions. A target has a name, so that it can be referenced from elsewhere, either externally from the command line or internally via the “depends” attribute or the “antcall” task. Here’s a basic build.xml example:

<?xml version="1.0" encoding="iso-8859-1"?>
<project name="project" basedir="." default="main">

    <target name="init"></target>
    <target name="test"></target>
    <target name="test-selenium"></target>
    <target name="profile"></target>
    <target name="clean"></target>
    <target name="build" depends="init, test, profile, clean"></target>
    <target name="package"></target>

</project>

The property element allows the declaration of properties, which act as user-definable variables within an Ant build file. Properties can be defined either inside the build file or in a standalone properties file. For example:

<?xml version="1.0" encoding="iso-8859-1"?>
<project name="project" basedir="." default="main">

    <property file="${basedir}/build/default.properties" />
    <property file="${basedir}/build/${build.env}.properties" />
    ...

</project>
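For illustration, the two loaded files might look like this; all keys shown here are hypothetical examples, and values in the environment file override the defaults because Ant keeps the first definition of a property it sees only when the default file is loaded after the environment file (here the default file is loaded first, so later files only fill in keys it does not define):

```properties
# build/default.properties — defaults shared by every environment (hypothetical keys)
build.dir=${basedir}/build
app.debug=false

# build/development.properties — settings specific to the development environment
app.debug=true
```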

The core idea is to use property files whose names match the environment names, selected through a custom build.env property. You should also provide a file with default values. The following example describes a typical Ant build file; of course, it can easily be modified to suit your needs.

<?xml version="1.0" encoding="iso-8859-1"?>
<project name="project" basedir="." default="main">

    <property file="${basedir}/build/default.properties" />
    <condition property="build.env" value="${build.env}" else="local">
        <isset property="build.env" />
    </condition>
    <property file="${basedir}/build/${build.env}.properties" />

     <property environment="env" />
     <condition property="env.BUILD_ID" value="${env.BUILD_ID}" else="">
         <isset property="env.BUILD_ID" />
     </condition>

    <target name="init">
        <echo message="Environment: ${build.env}"/>
        <echo message="Hudson build ID: ${env.BUILD_ID}"/>
        <echo message="Hudson build number: ${env.BUILD_NUMBER}"/>
        <echo message="SVN revision: ${env.SVN_REVISION}"/>
        <tstamp>
            <format property="build.datetime" pattern="dd-MMM-yy HH:mm:ss"/>
        </tstamp>
        <echo message="Build started at ${build.datetime}"/>
    </target>

    <target name="test">
        ...
    </target>

    <target name="clean">
        <delete dir="${build.dir}/files/${build.env}"/>
        <delete dir="${build.dir}/packages/${build.env}"/>
        <mkdir dir="${build.dir}/files/${build.env}"/>
        <mkdir dir="${build.dir}/packages/${build.env}"/>
    </target>

    <target name="build" depends="init, test, profile, clean">
        ...
    </target>
    ...

</project>

Using ant -Dname=value lets you define values for properties on the Ant command line. These properties can then be used within your build file like any normal property: ${name} expands to value.

$ ant build -Dbuild.env=development

There are different ways to target multiple environments. I hope I have covered enough of the basic functionality to get you started.

Apache HTTP DoS tool released

Yesterday an interesting HTTP DoS tool was released. The tool performs a denial-of-service attack on Apache (and some other; see below) servers by exhausting available connections. While there are a lot of DoS tools available today, this one is particularly interesting because it holds connections open while sending incomplete HTTP requests to the server.

More info here

Google Page Speed: Web Performance Best Practices

When you profile a web page with Page Speed, it evaluates the page’s conformance to a number of different rules. These rules are general front-end best practices you can apply at any stage of web development. Google provides documentation of each of the rules, so whether or not you run the Page Speed tool, you can refer to these pages at any time.

The best practices are grouped into five categories that cover different aspects of page load optimization:

  • Optimizing caching: Keeping your application’s data and logic off the network altogether
  • Minimizing round-trip times: Reducing the number of serial request-response cycles
  • Minimizing request size: Reducing upload size
  • Minimizing payload size: Reducing the size of responses, downloads, and cached pages
  • Optimizing browser rendering: Improving the browser’s layout of a page
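As a quick first step on the caching category, you can inspect which caching-related headers your server currently sends for a page or asset. The URL below is a placeholder; substitute your own:

```shell
# Print only the caching-related response headers (URL is a placeholder).
# The trailing "|| true" keeps the pipeline from failing when none are present.
curl -sI http://example.com/ | grep -iE 'cache-control|expires|etag|last-modified' || true
```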

Web Performance Best Practices

TypeFriendly: A Documentation And User Manual Builder

TypeFriendly is a documentation generation script written in PHP5. It was designed to be easy to use and lets you achieve first results within a couple of minutes of starting work. The script contains everything you need to write clear, multilingual documentation for your project, so that you do not have to code everything on your own.

The most important features of TypeFriendly:

  1. Modular documentation structure – it is generated from text files and the structure and navigation are generated from the file names.
  2. Simple syntax – the text is written in intuitive and clean Markdown syntax.
  3. Multilingual support and tools – TypeFriendly allows you to create your manuals in many language versions. It also contains a tool that shows whether the derived languages are up-to-date.
  4. Configurable output formats – currently, TypeFriendly is able to generate the documentation as XHTML (many pages) and XHTML (single page). There is also a third format – metadata – still under development. It will allow the docs to be imported into a database in order to build an on-line version with, for example, user comments.
  5. Various add-ons such as syntax highlighting, references, class description fields.
  6. Navigation generators.
  7. It is portable – works under Linux, FreeBSD and Windows. All you need is a PHP interpreter.

TypeFriendly is distributed under the terms of GNU General Public License 3, which means that you can use, modify and share it for free.

Demo
http://static.invenzzia.org/docs/tf/0_1/book/en/index.html

Screenshots

http://www.invenzzia.org/en/projects/typefriendly/screenshots

Source Code
http://svn.invenzzia.org/browser/TypeFriendly/trunk/

Website
http://www.invenzzia.org/en/projects/typefriendly

Is this the best open source CMS ever created?

Meet TYPOlight, a powerful Web content management system that specializes in accessibility (back end and front end) and uses XHTML and CSS to generate W3C/WAI compliant pages.

Accessibility

A growing number of countries around the world have introduced legislation which either directly addresses the need for websites to be accessible to people with disabilities, or which addresses the more general requirement for people with disabilities not to be discriminated against. TYPOlight does not treat accessibility as just an additional feature and is thoroughly accessible.

Web 2.0

PHP 5 and Ajax are modern “Web 2.0” technologies that you can find in a lot of contemporary applications. TYPOlight has a solid code base built on the object-oriented programming features of PHP 5 and can therefore be considered future-proof software. To ensure back end accessibility, every Ajax feature includes a graceful fallback in case JavaScript is disabled.

Page features

  • Different page types
  • Multiple websites in one tree
  • Manual or timed publication
  • Hidden pages
  • Password-protected pages

Editing features

  • Clipboard feature
  • Edit multiple records
  • Built-in rich text editor (TinyMCE)
  • Different content elements and modules
  • Multilingual spellchecker
  • Insert tags (similar to server side includes)
  • Manual or timed publication

File manager

  • Multiple file uploads
  • Image thumbnails and file preview
  • Edit uploaded files with the source editor
  • File operation permissions
  • Copy, move, rename files or folders
  • Delete folders recursively

Form generator

  • Automatic input validation
  • Store uploaded files on the server
  • Send form data via e-mail
  • Send uploaded files as e-mail attachment

Search engine

  • Automatic page indexing
  • Search indexing on protected pages
  • Phrase search, wildcard search, AND/OR search
  • Search result caching and pagination

Full feature list

  • Intuitive user interface
  • Accessible XHTML strict output
  • Meets W3C/WAI requirements
  • Web 2.0 support (mootools-based)
  • Live update service
  • Accessible administration area
  • Multiple back end languages and themes
  • Generates search engine friendly URLs
  • Multi-language support
  • Powerful permission system
  • Versioning and undo management
  • Advanced search and sorting options
  • Front end output 100% template based
  • Automatic e-mail encryption (spam protection)
  • Supports SMTP in addition to PHP’s mail function
  • Supports multiple websites in one tree
  • Supports GZip compression
  • Print articles as PDF

System features

  • Open Source (LGPL)
  • Web-based administration
  • Platform independent
  • Over 150 third party extensions
  • Multilingual documentation
