JavaScript: Retrieve and paginate JSON-encoded data

I’ve created a jQuery plugin that allows you to retrieve a large data set in JSON format from a server script and load the data into a list or table with client-side pagination enabled. To use this plugin you need to:

Include jquery.min.js and jquery.paginate.min.js in your document:

<script type="text/javascript" src="js/jquery.min.js"></script>
<script type="text/javascript" src="js/jquery.paginate.min.js"></script>

Include a small CSS rule to skin the navigation links:

<style type="text/css">
a.disabled {
    text-decoration: none;
    color: black;
    cursor: default;
}
</style>

Define an ID on the element you want to paginate, for example: “listitems”. If you have more than 10 child elements and you want to avoid displaying them before the JavaScript executes, you can hide the element by default:

<ul id="listitems" style="display:none"></ul>

Place a div where you want the navigation links to be displayed.

Finally, include an initialization script at the bottom of your page like this:

$(document).ready(function() {
    $.getJSON('data.json', function(data) {
        var items = [];
        $.each(data.items, function(i, item) {
            items.push('<li>' + item + '</li>');
        });
        $('#listitems').append(items.join(''));
        $('#listitems').paginate({itemsPerPage: 5});
    });
});
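
Under the hood, client-side pagination boils down to slicing the item array one page at a time. Here is a minimal sketch of that idea (illustrative only, not the plugin’s actual code; the plugin also handles DOM updates and the navigation links):

```javascript
// Minimal sketch of client-side pagination: return one page of items.
function getPage(items, itemsPerPage, page) {
    var start = (page - 1) * itemsPerPage;
    return items.slice(start, start + itemsPerPage);
}

var items = ['a', 'b', 'c', 'd', 'e', 'f', 'g'];
console.log(getPage(items, 5, 1)); // ['a', 'b', 'c', 'd', 'e']
console.log(getPage(items, 5, 2)); // ['f', 'g']
```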

    You can fork the code on GitHub or download it.

    Building a RESTful Web API with PHP and Apify

    Apify is a small and powerful open source library that delivers new levels of developer productivity by simplifying the creation of RESTful architectures. You can see it in action here. Web services are a great way to extend your web application, however, adding a web API to an existing web application can be a tedious and time-consuming task. Apify takes certain common patterns found in most web services and abstracts them so that you can quickly write web APIs without having to write too much code.

    Apify exposes APIs similar to those of the Zend Framework, so if you are familiar with the Zend Framework, you already know how to use Apify. Take a look at the UsersController class.

    Building a RESTful Web API

    In Apify, Controllers handle incoming HTTP requests, interact with the model to get data, and direct domain data to the response object for display. The full request object is injected via the action method and is primarily used to query for request parameters, whether they come from a GET or POST request, or from the URL.

    Creating a RESTful Web API with Apify is easy. Each action results in a response, which holds the headers and document to be sent to the user’s browser. You are responsible for generating the response object inside the action method.

    class UsersController extends Controller
    {
        public function indexAction($request)
        {
            // 200 OK
            return new Response();
        }
    }

    The response object describes the status code and any headers that are sent. The default response is always 200 OK, however, it is possible to overwrite the default status code and add additional headers:

    class UsersController extends Controller
    {
        public function indexAction($request)
        {
            $response = new Response();
    
            // 401 Unauthorized
            $response->setCode(Response::UNAUTHORIZED);
    
            // Cache-Control header
            $response->setCacheHeader(3600);
    
            // ETag header
            $response->setEtagHeader(md5($request->getUrlPath()));
    
            // X-RateLimit header
            $limit = 300;
            $remaining = 280;
            $response->setRateLimitHeader($limit, $remaining);
    
            // Raw header
            $response->addHeader('Edge-control: no-store');
    
            return $response;
        }
    }

    Content Negotiation

    Apify supports sending responses in HTML, XML, RSS and JSON. In addition, it supports JSONP, which is JSON wrapped in a custom JavaScript function call. There are 3 ways to specify the format you want:

    • Appending a format extension to the end of the URL path (.html, .json, .rss or .xml)
    • Specifying the response format in the query string. This means a format=xml or format=json parameter for XML or JSON, respectively, which will override the Accept header if there is one.
    • Sending a standard Accept header in your request (text/html, application/xml or application/json).
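
The precedence between these three mechanisms can be sketched as follows (illustrative JavaScript, not part of Apify; the function name is my own): a format extension wins over the query string, which wins over the Accept header.

```javascript
// Illustrative sketch of the content-negotiation order described above.
function resolveFormat(urlPath, query, acceptHeader) {
    var match = urlPath.match(/\.(html|json|rss|xml)$/);
    if (match) return match[1];                 // 1. extension wins
    if (query.format) return query.format;      // 2. then ?format=...
    var accept = acceptHeader || '';            // 3. then Accept header
    if (accept.indexOf('application/json') !== -1) return 'json';
    if (accept.indexOf('application/xml') !== -1) return 'xml';
    return 'html';                              // default
}

resolveFormat('/users.json', {}, null);                         // 'json'
resolveFormat('/users', {format: 'xml'}, 'application/json');   // 'xml'
```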

    The acceptContentTypes method indicates that the request only accepts certain content types:

    class UsersController extends Controller
    {
        public function indexAction($request)
        {
            // only accept JSON and XML
            $request->acceptContentTypes(array('json', 'xml'));
    
            return new Response();
        }
    }

    Apify will render the error message according to the format of the request.

    class UsersController extends Controller
    {
        public function indexAction($request)
        {
            $request->acceptContentTypes(array('json', 'xml'));
    
            $response = new Response();
            if (! $request->hasParam('api_key')) {
                throw new Exception('Missing parameter: api_key', Response::FORBIDDEN);
            }
            $response->api_key = $request->getParam('api_key');
    
            return $response;
        }
    }

    Request

    GET /users.json

    Response

    Status: 403 Forbidden
    Content-Type: application/json
    {
        "code": 403,
        "error": {
            "message": "Missing parameter: api_key",
            "type": "Exception"
        }
    }

    Resourceful Routes

    Apify supports REST style URL mappings where you can map different HTTP methods, such as GET, POST, PUT and DELETE, to different actions in a controller. This basic REST design principle establishes a one-to-one mapping between create, read, update, and delete (CRUD) operations and HTTP methods:

    HTTP Method  URL Path    Action   Used for
    GET          /users      index    display a list of all users
    GET          /users/:id  show     display a specific user
    POST         /users      create   create a new user
    PUT          /users/:id  update   update a specific user
    DELETE       /users/:id  destroy  delete a specific user
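
The table above amounts to a simple dispatch rule. Here is a sketch of the idea in JavaScript (illustrative only, not Apify’s actual router):

```javascript
// Illustrative dispatch sketch of the RESTful mapping above:
// each HTTP method + URL path pair resolves to a controller action.
function restfulAction(method, path) {
    var isCollection = /^\/users$/.test(path);
    var isMember = /^\/users\/\d+$/.test(path);
    if (method === 'GET' && isCollection) return 'index';
    if (method === 'GET' && isMember) return 'show';
    if (method === 'POST' && isCollection) return 'create';
    if (method === 'PUT' && isMember) return 'update';
    if (method === 'DELETE' && isMember) return 'destroy';
    return null; // no matching route
}

restfulAction('GET', '/users');     // 'index'
restfulAction('GET', '/users/42');  // 'show'
```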


    If you wish to enable RESTful mappings, add the following line to the index.php file:

    try {
        $request = new Request();
        $request->enableUrlRewriting();
        $request->enableRestfulMapping();
        $request->dispatch();
    } catch (Exception $e) {
        $request->catchException($e);
    }

    The RESTful UsersController for the above mapping will contain 5 actions as follows:

    class UsersController extends Controller
    {
        public function indexAction($request) {}
        public function showAction($request) {}
        public function createAction($request) {}
        public function updateAction($request) {}
        public function destroyAction($request) {}
    }

    By convention, each action should map to a particular CRUD operation in the database.

    Building a Web Application

    Building a web application can be as simple as adding a few methods to your controller. The only difference is that each method returns a view object.

    class PostsController extends Controller
    {
        /**
         * route: /posts/:id
         *
         * @param $request Request
         * @return View|null
         */
        public function showAction($request)
        {
            $id = $request->getParam('id');
            $post = $this->getModel('Post')->find($id);
            if (! isset($post->id)) {
                return $request->redirect('/page-not-found');
            }
    
            $view = $this->initView();
            $view->post = $post;
            $view->user = $request->getSession()->user;
    
            return $view;
        }
    
        /**
         * route: /posts/create
         *
         * @param $request Request
         * @return View|null
         */
        public function createAction($request)
        {
            $view = $this->initView();
            if ('POST' !== $request->getMethod()) {
                return $view;
            }
    
            try {
                $post = new Post(array(
                    'title' => $request->getPost('title'),
                    'text'  => $request->getPost('text')
                ));
            } catch (ValidationException $e) {
                $view->error = $e->getMessage();
                return $view;
            }
    
            $id = $this->getModel('Post')->save($post);
            return $request->redirect('/posts/' . $id);
        }
    }

    The validation is performed inside the Post entity class. An exception is thrown if any given value causes the validation to fail. This allows you to easily implement error handling for the code in your controller.

    Entity Class

    You can add validation to your entity class to ensure that the values sent by the user are correct before saving them to the database:

    class Post extends Entity
    {
        protected $id;
        protected $title;
        protected $text;
    
        // sanitize and validate title (optional)
        public function setTitle($value)
        {
            $value = htmlspecialchars(trim($value), ENT_QUOTES);
            if (empty($value) || strlen($value) < 3) {
                throw new ValidationException('Invalid title');
            }
            $this->title = $value;
        }
    
        // sanitize text (optional)
        public function setText($value)
        {
            $this->text = htmlspecialchars(strip_tags($value), ENT_QUOTES);
        }
    }

    Routes

    Apify provides a slimmed down version of the Zend Framework router:

    $routes[] = new Route('/posts/:id',
        array(
            'controller' => 'posts',
            'action'     => 'show'
        ),
        array(
            'id'         => '\d+'
        )
    );
    $routes[] = new Route('/posts/create',
        array(
            'controller' => 'posts',
            'action'     => 'create'
        )
    );

    HTTP Request

    GET /posts/1

    Incoming requests are dispatched to the controller “Posts” and action “show”.

    Feedback

    • If you encounter any problems, please use the issue tracker.
    • For updates follow @fedecarg on Twitter.
    • If you like Apify and use it in the wild, let me know.

    JavaScript: Asynchronous Script Loading and Lazy Loading

    Most of the time remote scripts are included at the end of an HTML document, right before the closing body tag. This is because browsers are single threaded and when they encounter a script tag, they halt any other processes until they download and parse the script. By including scripts at the end, you allow the browser to download and render all page elements, style sheets and images without any unnecessary delay. Also, if the browser renders the page before executing any script, you know that all page elements are already available to retrieve.

    However, websites such as Facebook use a more advanced technique: they include scripts dynamically via DOM methods. This technique, which I’ll briefly explain here, is known as “Asynchronous Script Loading”.

    Let’s take a look at the script that Facebook uses to download its JS library:

    (function () {
        var e = document.createElement('script');
        e.src = 'http://connect.facebook.net/en_US/all.js';
        e.async = true;
        document.getElementById('fb-root').appendChild(e);
    }());

    When you dynamically append a script to a page, the browser does not halt other processes, so it continues rendering page elements and downloading resources. The best place to put this code is right after the opening body tag. This allows Facebook initialization to happen in parallel with the initialization on the rest of the page.

    Facebook also makes non-blocking loading of the script easy to use by providing the fbAsyncInit hook. If this global function is defined, it will be executed when the library is loaded.

    window.fbAsyncInit = function () {
        FB.init({
            appId: 'YOUR APP ID',
            status: true,
            cookie: true,
            xfbml: true
        });
    };

    Once the library has loaded, Facebook checks the value of window.fbAsyncInit.hasRun and if it’s false it makes a call to the fbAsyncInit function:

    if (window.fbAsyncInit && !window.fbAsyncInit.hasRun) {
        window.fbAsyncInit.hasRun = true;
        fbAsyncInit();
    }

    Now, what if you want to load multiple files asynchronously, or you need to include a small amount of code at page load and then download other scripts only when needed? Loading scripts on demand is called “Lazy Loading”. Many libraries exist specifically for this purpose; however, you only need a few lines of JavaScript to do it.

    Here is an example:

    $L = function (c, d) {
        for (var b = c.length, e = b, f = function () {
                if (!(this.readyState
                		&& this.readyState !== "complete"
                		&& this.readyState !== "loaded")) {
                    this.onload = this.onreadystatechange = null;
                    --e || d()
                }
            }, g = document.getElementsByTagName("head")[0], i = function (h) {
                var a = document.createElement("script");
                a.async = true;
                a.src = h;
                a.onload = a.onreadystatechange = f;
                g.appendChild(a)
            }; b;) i(c[--b])
    };

    The best place to put this code is inside the head tag. You can then use the $L function to asynchronously load your scripts on demand. $L takes two arguments: an array (c) and a callback function (d).

    var scripts = [];
    scripts[0] = 'http://www.google-analytics.com/ga.js';
    scripts[1] = 'http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.js';
    
    $L(scripts, function () {
        console.log("ga and jquery scripts loaded");
    });
    
    $L(['http://connect.facebook.net/en_US/all.js'], function () {
        console.log("facebook script loaded");
        window.fbAsyncInit.hasRun = true;
        FB.init({
            appId: 'YOUR APP ID',
            status: true,
            cookie: true,
            xfbml: true
        });
    });

    You can see this script in action here (right click -> view page source).

    Collective Wisdom from the Experts

    I’ve finally had a chance to read a book I bought a while ago called “97 Things Every Software Architect Should Know – Collective Wisdom from the Experts”. Not the shortest title for a book, but very descriptive. I bought this book at the OSCON Conference in Portland last year. It’s an interesting book and I’m sure anyone involved in software development would benefit from reading it.

    More than 40 architects, including Neal Ford and Michael Nygard, offer advice for communicating with stakeholders, eliminating complexity, empowering developers, and many more practical lessons they’ve learned from years of experience. The book offers valuable information on key development issues that go way beyond technology. Most of the advice given is from personal experience and is good for any project leader involved with software development, no matter their job title. However, keep in mind that this is a compilation book, so don’t expect in-depth information or theoretical knowledge about architecture design and software engineering.

    Here are some extracts from the book:

    Simplify essential complexity; diminish accidental complexity – By Neal Ford

    Frameworks that solve specific problems are useful. Over-engineered frameworks add more complexity than they relieve. It’s the duty of the architect to solve the problems inherent in essential complexity without introducing accidental complexity.

    Chances are your biggest problem isn’t technical – By Mark Ramm

    Most projects are built by people, and those people are the foundation for success and failure. So, it pays to think about what it takes to help make those people successful.

    Communication is King – By Mark Richards

    Every software architect should know how to communicate the goals and objectives of a software project. The key to effective communication is clarity and leadership.

    Keeping developers in the dark about the big picture or why decisions were made is a clear recipe for disaster. Having the developers on your side creates a collaborative environment whereby decisions you make as an architect are validated. In turn, you get buy-in from developers by keeping them involved in the architecture process.

    Architecting is about balancing – By Randy Stafford

    When we think of architecting software, we tend to think first of classical technical activities, like modularizing systems, defining interfaces, allocating responsibility, applying patterns, and optimizing performance. Architects also need to consider security, usability, supportability, release management, and deployment options, among other things. But these technical and procedural issues must be balanced with the needs of stakeholders and their interests.

    Software architecting is about more than just the classical technical activities; it is about balancing technical requirements with the business requirements of stakeholders in the project.

    Skyscrapers aren’t scalable – By Michael Nygard

    We cannot easily add lanes to roads, but we’ve learned how to easily add features to software. This isn’t a defect of our software processes, but a virtue of the medium in which we work. It’s OK to release an application that only does a few things, as long as users value those things enough to pay for them.

    Quantify – By Keith Braithwaite

    The next time someone tells you that a system needs to be “scalable” ask them where new users are going to come from and why. Ask how many and by when? Reject “Lots” and “soon” as answers. Uncertain quantitative criteria must be given as a range: the least, the nominal, and the most. If this range cannot be given, then the required behavior is not understood.

    Some simple questions to ask: How many? In what period? How often? How soon? Increasing or decreasing? At what rate? If these questions cannot be answered then the need is not understood. The answers should be in the business case for the system and if they are not, then some hard thinking needs to be done.

    Architects must be hands on – By John Davies

    A good architect should lead by example. He or she should be able to fulfill any of the positions within the team, from wiring the network and configuring the build process to writing the unit tests and running benchmarks. It is perfectly acceptable for team members to have more in-depth knowledge in their specific areas, but it’s difficult to imagine how team members can have confidence in their architect if the architect doesn’t understand the technology.

    Use uncertainty as a driver – By Kevlin Henney

    Confronted with two options, most people think that the most important thing to do is to make a choice between them. In design (software or otherwise), it is not. The presence of two options is an indicator that you need to consider uncertainty in the design. Use the uncertainty as a driver to determine where you can defer commitment to details and where you can partition and abstract to reduce the significance of design decisions.

    You can purchase “97 Things Every Software Architect Should Know” from Amazon.

    NoSQL solutions: Membase, Redis, CouchDB and MongoDB

    Each database has specific use cases and every solution has a sweet spot in terms of data, hardware, setup and operation. Here are some of the most popular key-value and document data stores:

    Key-value

    Membase

    • Developed by members of the memcached core team.
    • Simple (key value store), fast (low, predictable latency) and elastic (effortlessly grow or shrink a cluster).
    • Extensions are possible through a plug-in architecture (full-text search, backup, etc).
    • Supports Memcached ASCII and Binary protocols (uses existent Memcached libraries and clients).
    • Guarantees data consistency.
    • High-speed failover (server failures recoverable in under 100ms).
    • User management, alerts and logging and audit trail.

    Redis

    • Developed by Salvatore Sanfilippo; its development has been sponsored by VMware since 2010.
    • Very fast. Non-blocking I/O. Single threaded.
    • Data is held in memory but can be persisted by writing it to disk asynchronously.
    • Values can be strings, lists or sets.
    • Built-in support for master/slave replication.
    • Distributes the dataset across multiple Redis instances.

    Document-oriented

    The major benefit of using a document database comes from the fact that, while it has all the benefits of a key/value store, you aren’t limited to just querying by key. However, document-oriented databases and MapReduce aren’t appropriate for every situation.

    CouchDB

    • High read performance.
    • Supports bulk inserts.
    • Good for consistent master-master replica databases that are geographically distributed and often offline.
    • Good for intense versioning.
    • Android, MeeGo and WebOS include services for syncing locally stored data with a CouchDB non-relational database in the cloud.
    • Better than MongoDB at durability.
    • Uses REST as its interface to the database. It doesn’t have “queries” but instead uses “views”.
    • Makes heavy use of the file system cache (so more RAM is always better).
    • The database must be compacted periodically.
    • Conflicts on transactions must be handled by the programmer manually (e.g. if someone else has updated the document since it was fetched, then CouchDB relies on the application to resolve versioning issues).
    • Scales through asynchronous replication but lacks an auto-sharding mechanism. Reads are distributed to any server while writes must be propagated to all servers.

    MongoDB

    • High write performance. Good for systems with very high update rates.
    • It has the flexibility to replace a relational database in a wider range of scenarios.
    • Supports auto-sharding.
    • More oriented towards master/slave replication.
    • Compaction of the database is not necessary.
    • Both CouchDB and MongoDB support map/reduce operations.
    • Supports dynamic ad hoc queries via a JSON-style query language.
    • The pre-filtering provided by the query attribute doesn’t have a direct counterpart in CouchDB. It also allows post-filtering of aggregated values.
    • Relies on language-specific database drivers for access to the database.

    Links

    OSCON 2010, The O’Reilly Open Source Convention

    A couple of weeks ago I attended the O’Reilly Open Source Convention (OSCON) in Portland. OSCON has hundreds of sessions and activities focused on all aspects of open source software. I met some great people, the talks were good and I saw some promising ideas and technologies.

    Workshops attended

    • Android for Java Developers
      Marko Gargenta (Marakana)
    • Building a NoSQL Data Cloud
      Krishna Sankar (Cisco Systems Inc)
    • Building Native Mobile Apps Using Open Source
      Kevin Whinnery (Appcelerator)

    Sessions attended

    • Building Mobile Apps with HTML, CSS, and JavaScript
      Jonathan Stark (Jonathan Stark Consulting)
    • Open Source Tool Chains for Cloud Computing
      Mark Hinkle (Zenoss), John Willis (Opscode, Inc.), Alex Honor
    • Doctor, I Have a Problem with My Innovation.
      Rolf Skyberg (eBay, Inc.)
    • Ingex: Bringing Open Source to the Broadcast Industry
      By Brendan Quinn (BBC R&D)
    • membase.org: The Simple, Fast, Elastic NoSQL Database
      Matt Ingenthron (NorthScale, Inc.)
    • Introducing WebM: High Quality, Royalty-Free, Open Source Video
      John Koleszar (Google, Inc.)
    • Whiskey, Tango, Foxtrot: Understanding API Activity
      Clay Loveless (Mashery)
    • Deploying an Open Source Private Cloud On a Shoe String Budget
      Louis Danuser (AT&T Labs, Inc.)
    • Eucalyptus: The Open Source Infrastructure for Cloud Computing
      Shashi Mysore (Eucalyptus Systems Inc.)
    • Hadoop, Pig, and Twitter
      Kevin Weil (Twitter, Inc.)
    • Mahout: Mammoth Scale Machine Learning
      Robin Anil (Apache Software Foundation)
    • BlackBerry development for Web Application Developers
      Kevin Falcone (Best Practical Solutions)
    • Practical Concurrency
      Tim Bray (Google, Inc.)
    • Scribe – Moving Data at Massive Scale
      Robert Johnson (Facebook)
    • Make Open Easy
      Dan Bentley (Google)

    Implementing Dynamic Finders and Parsing Method Expressions

    Most ORMs support the concept of dynamic finders. A dynamic finder looks like a normal method invocation, but the method itself doesn’t exist; instead, it’s generated dynamically and processed via another method at runtime.

    A good example of this is Ruby. When you invoke a method that doesn’t exist, it raises a NoMethodError exception, unless you define “method_missing”. Rails ActiveRecord::Base class implements some of its magic thanks to this method. For example, find_by_title(title) and find_by_title_and_date(title, date) are turned into:

    find(:first, :conditions => ["title = ?", title])
    find(:first, :conditions => ["title = ? AND date = ?", title, date])

    What’s nice about Ruby is that the language allows you to define methods dynamically using the “define_method” method. That’s how Rails defines each dynamic finder in the class after it is first invoked, so that future attempts to use it do not run through the “method_missing” method.
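
In JavaScript you can approximate Ruby’s method_missing with an ES6 Proxy. The following is a hypothetical sketch (the names are my own, not from any library) that traps find_by_* calls and turns them into a conditions map:

```javascript
// Hypothetical dynamic-finder sketch using a Proxy.
// Any find_by_<field>[_and_<field>...] call is trapped and converted
// into a conditions object passed to the supplied find() function.
function repository(find) {
    return new Proxy({}, {
        get: function (target, name) {
            var m = /^find_by_(.+)$/.exec(String(name));
            if (!m) return target[name];
            var fields = m[1].split('_and_');
            return function () {
                var args = arguments;
                var conditions = {};
                fields.forEach(function (field, i) {
                    conditions[field] = args[i];
                });
                return find(conditions);
            };
        }
    });
}

// Echo the conditions to show what each finder call was translated into:
var books = repository(function (conditions) { return conditions; });
books.find_by_title('Example');                         // {title: 'Example'}
books.find_by_title_and_date('Example', '2010-03-23');  // {title: 'Example', date: '2010-03-23'}
```

Unlike Ruby’s define_method, this sketch re-parses the name on every call; a real implementation would cache the parsed fields.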

    Method Expressions

    GORM, Grails ORM library, introduces the concept of dynamic method expressions. A method expression is made up of the prefix such as “findBy” followed by an expression that combines one or more properties. Grails takes advantage of Groovy features to provide dynamic methods:

    findByTitle("Example")
    findByTitleLike("Exa%")

    Method expressions can also use a boolean operator to combine two criteria:

    findAllByTitleLikeAndDateGreaterThan("Exampl%", '2010-03-23')

    In this case we are using AND in the middle of the query to make sure both conditions are satisfied, but you could equally use OR:

    findAllByTitleLikeOrDateGreaterThan("Exampl%", '2010-03-23')

    Parsing Method Expressions

    MethodExpressionParser is a PHP library for parsing method expressions. It’s designed to quickly and easily parse method expressions and construct conditions based on attribute names and arguments.

    Description

    [finderMethod]([attribute][expression][logicalOperator])?[attribute][expression]

    Expressions

    • LessThan: Less than the given value
    • LessThanEquals: Less than or equal to a given value
    • GreaterThan: Greater than a given value
    • GreaterThanEquals: Greater than or equal to a given value
    • Like: Equivalent to a SQL like expression
    • NotEqual: Negates equality
    • IsNotNull: Not a null value (doesn’t require an argument)
    • IsNull: Is a null value (doesn’t require an argument)

    Examples

    findByTitleAndDate('Example', date('Y-m-d'));
    SELECT * FROM book WHERE title = ? AND date = ?
    
    findByTitleOrDate('Example', date('Y-m-d'))
    SELECT * FROM book WHERE title = ? OR date = ?
    
    findByPublisherOrTitleAndDate('Name', 'Example', date('Y-m-d'))
    SELECT * FROM book WHERE publisher = ? OR (title = ? AND date = ?)
    
    findByPublisherInAndTitle(array('Name1', 'Name2'), 'Example')
    SELECT * FROM book WHERE publisher IN (?, ?) AND title = ?
    
    findByTitleLikeAndDateNotNull('Examp%')
    SELECT * FROM book WHERE title LIKE ? AND date IS NOT NULL

    findByIdOrTitleAndDateNotNull(1, 'Example')
    SELECT * FROM book WHERE (id = ?) OR (title = ? AND date IS NOT NULL)

    Example 1:

    findByTitleLikeAndDateNotNull('Examp%');

    Outputs:

    array
      0 =>
        array
          0 =>
            array
              'attribute' => string 'title'
              'expression' => string 'Like'
              'format' => string '%s LIKE ?'
              'placeholders' => int 1
              'argument' => string 'Examp%'
          1 =>
            array
              'attribute' => string 'date'
              'expression' => string 'NotNull'
              'format' => string '%s IS NOT NULL'
              'placeholders' => int 0
              'argument' => null

    Example 2:

    findByTitleAndPublisherNameOrTitleAndPublisherName('Title', 'a', 'Title', 'b');

    Outputs:

    array
      0 =>
        array
          0 =>
            array
              'attribute' => string 'title'
              'expression' => string 'Equals'
              'format' => string '%s = ?'
              'placeholders' => int 1
              'argument' => string 'Title'
          1 =>
            array
              'attribute' => string 'publisher_name'
              'expression' => string 'Equals'
              'format' => string '%s = ?'
              'placeholders' => int 1
              'argument' => string 'a'
      1 =>
        array
          0 =>
            array
              'attribute' => string 'title'
              'expression' => string 'Equals'
              'format' => string '%s = ?'
              'placeholders' => int 1
              'argument' => string 'Title'
          1 =>
            array
              'attribute' => string 'publisher_name'
              'expression' => string 'Equals'
              'format' => string '%s = ?'
              'placeholders' => int 1
              'argument' => string 'b'

    See more examples: Project Wiki

    Usage

    class EntityRepository
    {
        private $methodExpressionParser;
    
        // Return a single instance of MethodExpressionParser
        public function getMethodExpressionParser() {
        }
    
        // Finder methods
        public function findBy($conditions) {
            var_dump($conditions);
        }
        public function findAllBy($conditions) {
            var_dump($conditions);
        }
    
        // Invoke finder methods
        public function __call($method, $args) {
            if ('f' === $method[0]) {
                try {
                    $result = $this->getMethodExpressionParser()->parse($method, $args);
                    $finderMethod = key($result);
                    $conditions = $result[$finderMethod];
                } catch (MethodExpressionParserException $e) {
                    $message = sprintf('%s: %s()', $e->getMessage(), $method);
                    throw new EntityRepositoryException($message);
                }
                return $this->$finderMethod($conditions);
            }
    
            $message = 'Invalid method call: ' . __METHOD__;
            throw new BadMethodCallException($message);
        }
    }

    Performance

    PHP doesn’t allow you to define methods dynamically, which means that every time you invoke a finder method the parser has to search, extract and map all the attribute names and expressions. To avoid this performance overhead you can cache the attribute names. For example:

    class EntityRepository
    {
        private $methodExpressionParser;
        private $classMetadata;
    
        // Return a single instance of MethodExpressionParser
        public function getMethodExpressionParser() {
        }
    
        // Return a single instance of ClassMetadata
        public function getClassMetadata() {
        }
    
        // Invoke finder methods
        public function __call($method, $args) {
            if ('f' === $method[0]) {
                $parser = $this->getMethodExpressionParser();
                $classMetadata = $this->getClassMetadata();
                try {
                    $finderMethod = $parser->determineFinderMethod($method);
                    if ($classMetadata->hasMissingMethod($method)) {
                        $attributes = $classMetadata->getMethodAttributes($method);
                        $conditions = $parser->map($args, $attributes);
                    } else {
                        $expressions = substr($method, strlen($finderMethod));
                        $attributes = $this->extractAttributeNames($expressions);
                        $conditions = $parser->map($args, $attributes);
                        $classMetadata->setMethodAttributes($method, $attributes);
                    }
                } catch (MethodExpressionParserException $e) {
                    $message = sprintf('%s: %s()', $e->getMessage(), $method);
                    throw new EntityRepositoryException($message);
                }
                return $this->$finderMethod($conditions);
            }
    
            $message = 'Invalid method call: ' . __METHOD__;
            throw new BadMethodCallException($message);
        }
    }

    The Expression objects are lazy-loaded, depending on the expressions found in the method name.
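    As a sketch, the lazy loading could work along these lines; the factory class and its caching strategy are illustrative assumptions, not the library’s actual implementation:

```php
<?php
// Hypothetical sketch of lazy loading: an Expression object is only
// instantiated the first time its keyword appears in a method name,
// then reused on subsequent calls.
abstract class Expression
{
}

class EqualsExpression extends Expression
{
}

class ExpressionFactory
{
    private $instances = array();

    public function getExpression($keyword)
    {
        if (!isset($this->instances[$keyword])) {
            $class = $keyword . 'Expression';  // e.g. "Equals" -> "EqualsExpression"
            $this->instances[$keyword] = new $class();
        }
        return $this->instances[$keyword];
    }
}

$factory = new ExpressionFactory();
$a = $factory->getExpression('Equals');
$b = $factory->getExpression('Equals');
var_dump($a === $b);  // the same cached instance
```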

    Extensibility

    The MethodExpressionParser class was designed with extensibility in mind, allowing you to add new Expressions to the library.

    abstract class Expression {
    }
    class EqualsExpression extends Expression {
    }
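    For example, the skeleton above might be fleshed out and extended with a custom expression like this. The getOperator() method is an assumption for illustration; the excerpt doesn’t show the base class API:

```php
<?php
// Hypothetical sketch of adding a new Expression to the library.
// getOperator() is illustrative only: the real base class API is
// not shown in the excerpt above.
abstract class Expression
{
    abstract public function getOperator();
}

class EqualsExpression extends Expression
{
    public function getOperator()
    {
        return '=';
    }
}

// A custom expression added by the developer
class LessThanExpression extends Expression
{
    public function getOperator()
    {
        return '<';
    }
}

$expr = new LessThanExpression();
echo $expr->getOperator();  // prints "<"
```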

    Source Code

    Browse source code:
    http://fedecarg.com/repositories/show/expressionparser

    Check out the current development trunk with:

    $ svn checkout http://svn.fedecarg.com/repo/Zf/Orm

    Sky Named Britain’s Most Admired Company

    Based on a survey of thousands of managers and investment analysts, Management Today has named BSkyB as Britain’s Most Admired Company for 2009. BSkyB is the youngest company ever to win the award.

    BSkyB beat off competition from the superstore giant Tesco, which took second place. Johnson Matthey took the third slot, with Cadbury, GlaxoSmithKline and Rolls-Royce trailing at numbers four, five and six.

    BSkyB led the sector in all of the criteria laid down by the organizers, coming out top in “quality of goods and services”, “quality of marketing” and “capacity to innovate”.

    Most Admired Top 20, 2009

    (Last year’s position in brackets)

    1 (4)     BSkyB 72.25
    2 (5)     Tesco 71.38
    3 (2)     Johnson Matthey 71.00
    4 (18)    Cadbury 70.40
    5 (19)    GlaxoSmithKline 70.00
    6 (7)     Rolls-Royce 69.96
    7 (26)    BP 67.08
    8 (11)    BG Group 67.03
    9 (1)     Diageo 65.83
    10 (47)   Cobham 65.75
    11 (3)    Unilever 65.0
    12 (52)   BAE Systems 64.9
    13 (51)   Ultra Electronics 64.7
    14 (154)  Centrica 64.4
    14 (24)   Royal Dutch Shell 64.4
    16 (81)   Admiral 63.9
    16 (17)   Capita Group 63.9
    18 (27)   Sainsbury 63.8
    19 (55)   Balfour Beatty 63.1
    20 (29)   Marks & Spencer 62.9
    

    BSkyB is a great company to work for, filled with talented people. Congratulations on this prestigious award!

    Links

    Management Today
    Sky News

    Command-line memcached stat reporter

    Nicholas Tang wrote a nice little Perl script that shows a basic memcached top display for a list of servers. You can specify thresholds, for instance, and the display will turn red when they are exceeded. You can also choose the refresh/sleep time, and whether to show immediate (per-second) stats or lifetime stats.

    To install it you only need to download the script and make it executable:

    $ curl http://memcache-top.googlecode.com/files/memcache-top-v0.6 > ~/bin/memcache-top
    $ chmod +x ~/bin/memcache-top
    $ memcache-top --sleep 3 --instances 10.50.11.3,10.50.11.4,10.50.11.5
    

    Here’s some sample output:

    memcache-top v0.6       (default port: 11211, color: on, refresh: 3 seconds)
    
    INSTANCE                USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  READ/s  WRITE/s
    10.50.11.3:11211        88.9%   69.7%   1661    0.9ms   0.3     47      13.9K   9.8K
    10.50.11.4:11211        88.8%   69.9%   2121    0.7ms   1.3     168     17.6K   68.9K
    10.50.11.5:11211        88.9%   69.4%   1527    0.7ms   1.7     48      14.4K   13.6K
    AVERAGE:                84.7%   72.9%   1704    1.0ms   1.3     69      13.5K   30.3K   
    
    TOTAL:          19.9GB/ 23.4GB          20.0K   11.7ms  15.3    826     162.6K  363.6K
    (ctrl-c to quit.)
    

    Project Home
    http://code.google.com/p/memcache-top/

    Managing Multiple Build Environments

    Last updated: 3 March, 2010

    One of the challenges of Web development is managing multiple build environments. Most applications pass through several environments before they are released. These environments typically include a local development environment, a shared development environment, a system integration environment, a user acceptance environment and a production environment.

    Automated Builds

    Automated builds provide a consistent method for building applications and are used to give other developers feedback about whether the code was successfully integrated or not. There are different types of builds: Continuous builds, Integration builds, Release builds and Patch builds.

    A source control system is the main point of integration for source code. When your team works on separate parts of the code base, you have to ensure that your checked in code doesn’t break the Integration build. That’s why it is important that you run your unit tests locally before checking in code.

    Here is a recommended process for checking code into source control:

    • Get the latest code from source control before running your tests
    • Verify that your local build is building and passing all the unit tests before checking in code
    • Use hooks to run a build after a transaction has been committed
    • If the Integration build fails, fix the issue because you are now blocking other developers from integrating their code

    Hudson can help you automate these tasks. It’s extremely easy to install and can be configured entirely from a Web UI. Also, it can be extended via plug-ins and can execute Phing, Ant, Gant, NAnt and Maven build scripts.

    Build File

    We need to create a master build file that contains the actions we want to perform. This script should make it possible to build the entire project with a single command.

    First we need to separate the source from the generated files, so our source files will be in the “src” directory and all the generated files in the “build” directory. By default Ant uses build.xml as the name for a build file; this file is usually located in the project root directory.

    Then, you have to define whatever environments you want:

    project/
        build/
            files/
                local/
                development/
                integration/
                production/
            packages/
                development/
                    project-development-0.1-RC.noarch.rpm
                integration/
                production/
            default.properties
            local.properties
            development.properties
            production.properties
        src/
            application/
                config/
                controllers/
                domain/
                services/
                views/
            library/
            public/
        tests/
        build.xml
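    The per-environment property files listed above might contain entries along these lines (all names and values here are illustrative only):

```
# development.properties (illustrative values)
build.dir=${basedir}/build
app.env=development
db.host=dev-db.example.com
db.name=project_dev
```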

    Build files tend to contain the same actions:

    • Delete the previous build directory
    • Copy files
    • Manage dependencies
    • Run unit tests
    • Generate HTML and XML reports
    • Package files

    The target element is used as a wrapper for a sequence of actions. A target has a name, so that it can be referenced from elsewhere, either externally from the command line or internally via the “depends” attribute or an “antcall” task. Here’s a basic build.xml example:

    <?xml version="1.0" encoding="iso-8859-1"?>
    <project name="project" basedir="." default="main">
    
        <target name="init"></target>
        <target name="test"></target>
        <target name="test-selenium"></target>
        <target name="profile"></target>
        <target name="clean"></target>
        <target name="build" depends="init, test, profile, clean"></target>
        <target name="package"></target>
    
    </project>

    The property element allows the declaration of properties which are like user-definable variables available for use within an Ant build file. Properties can be defined either inside the buildfile or in a standalone properties file. For example:

    <?xml version="1.0" encoding="iso-8859-1"?>
    <project name="project" basedir="." default="main">
    
        <property file="${basedir}/build/default.properties" />
        <property file="${basedir}/build/${build.env}.properties" />
        ...
    
    </project>

    The core idea is to use property files whose names match the environment names, selected via a custom build.env property. You should also provide a file with default values. The following example describes a typical Ant build file; of course, it can easily be modified to suit your needs.

    <?xml version="1.0" encoding="iso-8859-1"?>
    <project name="project" basedir="." default="main">
    
        <property file="${basedir}/build/default.properties" />
        <condition property="build.env" value="${build.env}" else="local">
            <isset property="build.env" />
        </condition>
        <property file="${basedir}/build/${build.env}.properties" />
    
        <property environment="env" />
        <condition property="env.BUILD_ID" value="${env.BUILD_ID}" else="">
            <isset property="env.BUILD_ID" />
        </condition>
    
        <target name="init">
            <echo message="Environment: ${build.env}"/>
            <echo message="Hudson build ID: ${env.BUILD_ID}"/>
            <echo message="Hudson build number: ${env.BUILD_NUMBER}"/>
            <echo message="SVN revision: ${env.SVN_REVISION}"/>
            <tstamp>
                <format property="build.datetime" pattern="dd-MMM-yy HH:mm:ss"/>
            </tstamp>
            <echo message="Build started at ${build.datetime}"/>
        </target>
    
        <target name="test">
            ...
        </target>
    
        <target name="clean">
            <delete dir="${build.dir}/files/${build.env}"/>
            <delete dir="${build.dir}/packages/${build.env}"/>
            <mkdir dir="${build.dir}/files/${build.env}"/>
            <mkdir dir="${build.dir}/packages/${build.env}"/>
        </target>
    
        <target name="build" depends="init, test, profile, clean">
            ...
        </target>
        ...
    
    </project>

    Using ant -Dname=value lets you define values for properties on the Ant command line. These properties can then be used within your build file like any normal property: ${name} will be replaced with value.

    $ ant build -Dbuild.env=development
    

    There are different ways to target multiple environments. I hope I have covered enough of the basic functionality to get you started.