Web Server response time is the amount of time it takes for a web server to respond to a request from a browser.
No matter how optimized your web pages are for speed, if your web server response time is slow, your pages will display slowly. Google recommends that you "reduce your server response time to under 200ms".
As a website gets more traffic, it uses more server resources. A website that was once quick, crisp and fast becomes slow and sluggish as traffic goes up.
Think the speed of your website doesn’t matter?
A 1-second delay in page load time yields:
- 11% fewer page views
- 16% decrease in customer satisfaction
- 7% loss in conversions (source: Aberdeen Group)
Amazon found this to be true, reporting increased revenue of 1% for every 100 milliseconds improvement to their site speed. (source: Amazon)
So did Walmart, who found a 2% increase in conversions for every 1 second of improvement.
There are many factors that can add to the execution time of every single page request on the server: the server software, the server configuration, the packages loaded, and the hosting plan, among others, can all contribute to a slow server.
In my own development experience, however, I have found that, all other factors remaining constant, database access is a common cause of extended server response times, if not outright server timeouts.
5 Tips for Speeding Up Web Server Response Time
Below, I will outline some best practices that you, as a web developer working with relational databases, can implement to significantly speed up server response while still handling complex logic on the server per request.
1. Reduce the number of database queries per page request
Database queries take up significant server resources, including memory and package loading. This eats into server execution time, especially if you loop over database queries during a single request.
If there is vital information that you need at every server request, and that information does not change, you can use alternative means of storing it, such as the session or a cache. Reading data from the session or cache costs the server far fewer resources than performing a database query.
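As a minimal sketch of this idea, the snippet below caches rarely-changing site settings in process memory with a time-to-live, so most requests never touch the database. The `settings` table, the `get_settings` helper, and the 300-second TTL are all assumptions made for illustration:

```python
import sqlite3
import time

_cache = {}
CACHE_TTL = 300  # seconds; an assumed expiry for this sketch

def get_settings(conn):
    """Return site settings, hitting the database only on a cache miss."""
    entry = _cache.get("settings")
    if entry and time.time() - entry["at"] < CACHE_TTL:
        return entry["data"]  # served from memory, no database trip
    rows = conn.execute("SELECT key, value FROM settings").fetchall()
    data = dict(rows)
    _cache["settings"] = {"data": data, "at": time.time()}
    return data
```

In a real application the cache would live in the session store or a shared cache like Redis rather than a module-level dict, but the principle is the same: pay the query cost once, not on every request.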
2. Load table rows into memory and iterate over the array.
Often you need to return a response to the view as an array of objects filtered in a particular manner or format. A tempting, and common, practice is to perform queries one row at a time, filtering each result to see whether it belongs in the array returned to the view.
For just 2 or 10 rows this would work fine. But for anything more, your server may well time out before the response for the view is ready.
The way out is to load the database rows into memory in chunks of, say, 10,000 at a time, and then iterate over the array, filtering by your criteria as necessary. The processor needs only fractions of a millisecond to iterate over a 10,000-element array, far less than the cost of making 10,000 separate database calls.
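The chunked approach above can be sketched like this. The `users(id, score)` schema, the `high_scoring_ids` helper, and the score threshold are hypothetical, chosen just to show the pattern of fetching in bulk and filtering in memory:

```python
import sqlite3

CHUNK_SIZE = 10_000

def high_scoring_ids(conn, threshold=50):
    """Fetch rows in chunks and filter in memory,
    instead of issuing one query per candidate row."""
    cur = conn.execute("SELECT id, score FROM users")
    matches = []
    while True:
        chunk = cur.fetchmany(CHUNK_SIZE)  # one fetch per 10,000 rows
        if not chunk:
            break
        matches.extend(uid for uid, score in chunk if score > threshold)
    return matches
```

The chunk size keeps memory bounded: you never hold more than 10,000 rows at once, yet you avoid the per-query overhead of row-at-a-time access.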
3. Perform query joins on multiple tables instead of querying each table separately.
Even for a simple login request on the server, more often than not you are going to need to query more than one table before a user can log in. The login credentials might be correct, but you probably also want to check whether the user has been suspended, deactivated, archived, or has exceeded limits before they can see their dashboard.
For a database designed around an ORM, chances are high that this information lives in separate tables rather than one. You are probably going to have to query three tables or even more.
A common, sloppy but easy practice in development is to begin with the login table and only query the other tables if the credentials are correct. The argument is that you can simply respond with an error message when the credentials are wrong.
But what if they are right? The username and password pair match, and you still have to make more database calls afterward. Those extra trips eat into execution time if you are making many round trips to the database under heavy traffic.
The way out is to perform query joins. Join all the tables you need for authentication into one query using SQL INNER JOINs on unique IDs, then perform your logic on the resulting row(s) in memory.
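A sketch of the single-query approach follows. The three-table schema (`users`, `account_status`, `limits`), the column names, and the `authenticate` helper are assumptions standing in for whatever your ORM generated; the point is that one INNER JOIN replaces three round trips:

```python
import sqlite3

def authenticate(conn, username, password_hash):
    """Check credentials, suspension, and limits in a single query."""
    row = conn.execute(
        """
        SELECT u.id, s.suspended, l.exceeded
        FROM users u
        INNER JOIN account_status s ON s.user_id = u.id
        INNER JOIN limits l ON l.user_id = u.id
        WHERE u.username = ? AND u.password_hash = ?
        """,
        (username, password_hash),
    ).fetchone()
    if row is None:
        return None  # bad credentials
    user_id, suspended, exceeded = row
    if suspended or exceeded:
        return None  # account blocked
    return user_id
```

All the post-login checks now run against data already sitting in memory, in one trip to the database instead of three.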
4. Perform bulk insert/update SQL operations instead of single inserts/updates one at a time.
Most simple CRUD applications just involve insertion of single records at a time. An example is the case of user sign up on your website. In that case you might not realize a significant difference in execution when using bulk insert queries.
However, say you are importing users from a CSV file into a database for a mailing/autoresponder application. You could easily have 10,000 users to import from the CSV file uploaded by the client, where every single line in the file represents a row in the database table.
Inserting one line at a time will likely hit the server timeout well before the 50th user, if not much sooner. This is an ideal situation for building a single SQL query that inserts all the records in just one trip to the database. You'll be amazed at how fast it runs; you'll think you just got a quad-core processor!
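The CSV-import scenario can be sketched as below. The `subscribers` table and the `import_subscribers` helper are hypothetical; the technique is to parse all rows first, then hand them to one `executemany` call inside a single transaction:

```python
import csv
import io
import sqlite3

def import_subscribers(conn, csv_text):
    """Parse an uploaded CSV and insert every row in one bulk operation."""
    rows = [(r["name"], r["email"])
            for r in csv.DictReader(io.StringIO(csv_text))]
    with conn:  # one transaction for the whole bulk insert
        conn.executemany(
            "INSERT INTO subscribers (name, email) VALUES (?, ?)", rows
        )
    return len(rows)
```

Wrapping the insert in one transaction matters as much as the bulk statement itself: committing after every row forces the database to flush to disk thousands of times instead of once.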
5. Perform wildcard matching on the database and not on the result set.
Almost all modern web applications give the ability to search data in some way.
Even if you are not writing another Google, you will probably need to search users based on certain keywords in your system. Say you want users with email addresses ending in @yahoo.com, whatever you plan to do with them! Or you only want to list watches for sale tagged with the location 'San Francisco Bay Area'.
For a database with well under 50,000 records, you could load everything into memory, search the array, and filter as appropriate. But this approach fails terribly when your tables span 10,000,000+ rows.
Most, if not all, SQL databases let you perform wildcard matching in the database itself using the LIKE operator. That way, only rows matching your criteria are returned, in a single trip to the database. Remember to also use the LIMIT clause to cap the number of rows returned, so you don't load 10M rows into memory and run out of resources.
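Tying this to the earlier @yahoo.com example, here is a minimal sketch; the `users(id, email)` schema and the `find_yahoo_users` helper are assumptions for illustration:

```python
import sqlite3

def find_yahoo_users(conn, limit=100):
    """Let the database do the wildcard match and cap the result set,
    instead of loading every row into application memory."""
    return conn.execute(
        "SELECT id, email FROM users WHERE email LIKE ? LIMIT ?",
        ("%@yahoo.com", limit),
    ).fetchall()
```

Passing the pattern as a bound parameter rather than string-concatenating it also protects the query against SQL injection.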
A Final Thought…
There are various ways to speed up your web server for faster web applications; I have only highlighted those related to the database, and there may well be ways to optimize these even further.
It is also important to note that your slow server could be the compound effect of many other factors.
What do you think about reducing web server response time by optimizing database queries? Please share your thoughts below.