
About Us

We're a London based startup that develops Construct 2, software that lets you make your own computer games!


Making a Fast Website

by Tom | 29th February 2012

It’s always been important to make your website fast. Not only is it obvious that visitors are going to prefer it, but it’s now well known that Google uses loading speed as a ranking metric. The initial page load of your website is perhaps the most important: the longer it takes, the more visitors are going to press back and find an alternative. A slow website is something that will frustrate visitors, so it’s important to take it out of the equation.

Jakob Nielsen is an authority on usability and has studied response times extensively. He notes that response times of:

  • Up to 100 milliseconds is perceived as an instantaneous response.
  • Up to 1 second will be noticed by the user, but their flow of thought remains uninterrupted.
  • Over 10 seconds is also mentioned, but this shouldn’t apply to normal websites, which are the focus of this blog. If your site takes longer than 10 seconds to load, something is probably critically wrong!

He makes a point of noting that these numbers haven’t changed over the years. A lot of research related to the Internet is going to expire quite rapidly but the numbers for fundamental qualities like response times appear to be hardwired.

It’s hard to get a page load under 1 second, but it’s the sweet spot people should be aiming for. Under one second the user’s thought flow remains intact which is crucial if you are trying to sell the user an idea or a product. It’s also going to reflect well on your product and services – if the website is fast it’s likely your product is going to share similar qualities as well (especially if your product is the website).

If you’re still not convinced, there’s some interesting data out there if you dig around. Marissa Mayer (Currently VP of Location and Local Services at Google) spoke at Web 2.0 about a test they ran where they concluded a 500ms increase in page load time on Google led to 20% less traffic and revenue.

Greg Linden (ex Senior Manager & Principal at Amazon.com) ran A/B tests on Amazon where they delayed page loads in 100ms increments and found that “even very small delays would result in substantial and costly drops in revenue”. According to Kohavi and Longbotham (2007), every 100ms increase in load time decreased sales by 1%.

The effects aren’t just financial either according to some researchers. Slow web pages:

  • Lower perceived credibility (Fogg et al. 2001)
  • Lower perceived quality (Bouch, Kuchinsky, and Bhatti 2000)
  • Increase user frustration (Ceaparu et al. 2004)
  • Increase blood pressure (Scheirer et al. 2002)
  • Reduce flow rates (Novak, Hoffman, and Yung 2000)
  • Reduce conversion rates (Akamai 2007)
  • Increase exit rates (Nielsen 2000)
  • Are perceived as less interesting (Ramsay, Barbesi, and Preece 1998)
  • Are perceived as less attractive (Skadberg and Kimmel 2004)
(Source: http://www.websiteoptimization.com/speed/tweak/psychology-web-performance/)

So now we know it’s important…

Where do we start?

A good starting point when measuring your website’s speed is to establish the minimum overheads. A good way to calculate this is by measuring ping times to your server with this tool. A ping is a round trip to the server, and response times vary with the distance between the server and the client as well as the quality of the network in between. Our server is located in New York, which is reflected in the chart below with a blazingly fast average round trip time of 5.5 milliseconds. Places further away, like Europe, see a ping of around 100 milliseconds, which for the distance is still very impressive. You’ll also notice that countries like China are the slowest: they have a much poorer miles-per-millisecond rating, probably due to a lower quality network. (The ‘Round Trip Miles’ figure is obviously a simplistic estimate, which will have an impact on ‘Miles per ms’, but for the purposes of this chart it sufficiently illustrates the point and is nonetheless interesting to calculate!)

I’ve also added a theoretical physical minimum response time, based on the time taken to cover the distance at the speed of light down a fibre optic. The speed of light in a vacuum is 670,616,629 mph, but this is reduced to “only” around 445,000,000 mph down a fibre optic due to the effect of refraction. Obviously geographical obstacles, indirect routing paths, and real network hardware will increase the time. However, it provides a useful theoretical minimum: there’s no point trying to get your ping time below the direct fibre-optic time, because it’s impossible!
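As a sanity check, the theoretical minimum for London in the table below falls straight out of that fibre-optic figure (a back-of-envelope sketch):

```python
# Theoretical minimum round-trip time at the fibre-optic speed quoted above
FIBRE_SPEED_MPH = 445_000_000

def min_rtt_ms(round_trip_miles: float) -> float:
    """Milliseconds for light in fibre to cover the round-trip distance."""
    return round_trip_miles / FIBRE_SPEED_MPH * 3_600_000  # hours -> ms

print(round(min_rtt_ms(6_981), 1))  # London's 6,981 mile round trip: ~56.5 ms
```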

City                 Average Response (ms)   Theoretical Minimum (ms)   Round Trip Distance (miles)   Miles per ms (higher is better)
New York (USA)       5.5                     0                          0                             -
London (England)     77.2                    56.5                       6,981                         90.4
Sydney (Australia)   231                     160.6                      19,856                        86
Moscow (Russia)      125.4                   75.4                       9,326                         74.4
Sao Paulo (Brazil)   129.9                   77.2                       9,542                         73.5
Beijing (China)      315.3                   109.2                      13,500                        42.9

It’s quite amazing when you think about it - a message being transmitted and returned over at least 7,000 miles of network in 77 milliseconds.

These are the inescapable minimum times it will take visitors from around the world to load your webpage. You have very little influence over these numbers.

Useful Free Tools

When working on speeding up your website there are a few excellent tools you can use for free that can help you measure your progress.

Pingdom Website Speed Test
http://tools.pingdom.com/fpt/

This is a great tool that I use a lot to measure our website’s loading time, with options to test from different servers as well. One thing worth noting is that social networking ‘share’ or ‘like’ boxes (like we have) will make the results appear a lot slower than they actually are. Some of the CDNs involved seem to have highly variable response times, and they often make AJAX requests that continue to run after your page has finished loading, which some tools include when measuring page load time. So when testing a website I tend to pick a page that doesn’t have any of these social buttons on it. This isn’t really cheating: I think the base page load time is the most important, and the delayed load of social buttons is generally out of your control and not the meat of the content the visitor is most interested in.

YSlow

http://developer.yahoo.com/yslow/

Developed by Yahoo, this free tool lets you know what areas on your website can be changed to improve your page load time. It’s also very useful for showing you if you’ve set your caches up correctly! It can be installed as a Chrome extension too.

How to make your website faster

We’ve worked quite hard to make sure our website loads fast. The YSlow page describes a lot of techniques in great detail and is an excellent resource. I’m not going to try to write a replacement for YSlow’s guide as they are far more knowledgeable than I and go into far more detail, but instead I will just give an overview of what I consider the most important techniques and my experiences with implementing them.

The most obvious – page size

This is the most obvious factor, but it’s often overlooked: the more data you have on your page, the longer it will take to transmit. This is partly why I’m generally against the use of a CMS (content management system) where possible. I’ve observed a lot of websites that are bloated with excessive HTML and JS includes; some approach 1MB of raw HTML code, which is insanity. Our HTML5 game engine page is probably in the realm of ‘normal’, and its raw HTML is only 15kb. If you hand control of your content over to a CMS you also lose a lot of control over the code, which can severely increase your page size.

Serve your pages compressed

In IIS there’s a simple switch (under ‘Compression’) which allows you to serve content from your server to the client in compressed form. There are virtually no downsides to using this on modern servers. The benefits are obvious: if we use YSlow to analyse our homepage and expand the CSS tab, we can see the 7.7k CSS file is sent gzipped at a total size of 1.7k. That’s about 22% of its original size, so far less data needs to be transmitted. Compression algorithms work excellently on text, especially highly repetitive text; CSS, HTML and JS by nature have a lot of repeating chunks inside them, which makes them compress very efficiently.
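The effect is easy to reproduce. As a rough sketch (plain Python, nothing IIS-specific), compressing some repetitive CSS-like text shows the kind of reduction involved:

```python
import gzip

# Repetitive CSS-like text; real stylesheets have similarly repeated structure
css = ".btn { margin: 0; padding: 4px; color: #333; }\n" * 200
raw = css.encode("utf-8")
packed = gzip.compress(raw)

# The gzipped payload is a small fraction of the raw size
print(f"{len(raw)} bytes raw, {len(packed)} bytes gzipped")
```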

GZip compression is also well supported, and according to HTTP-compression.com ALL common web browsers support it. Internet Explorer has had support for this since version 4.0.

If your server isn’t an antique it shouldn’t make any noticeable impact on performance either (except perhaps in some edge cases which I’ve yet to see or hear about).

Put your Javascript at the bottom

Loading Javascript files will block other downloads on the page. The HTTP specification recommends that browsers download up to 2 resources in parallel from each hostname (Edit: chrisacky on Hacker News correctly points out that a lot of modern browsers exceed this guideline; for example, Chrome and Firefox allow 6 connections per hostname). However, when your browser is downloading JS it will block all other downloads, even those on different hostnames. Putting your Javascript at the top (in the head tag) can create seemingly sluggish behaviour and a perceived slower loading time, since it takes longer for anything else to render on-screen.

It’s best to put your script includes at the bottom of the page, just before the closing body tag. This can create design problems for some websites (again, another problem with CMSs) and may not be as simple as just moving them, but it’s advisable where possible. The HTML5 games Construct 2 exports use this technique to ensure the game loads last, after the rest of the page is ready. If you specially design your scripts so they can load in any order (not just the order they’re listed in the HTML), you can also look into the async and defer attributes, but this can increase difficulty even further in some cases.

Use Sprite Sheets

Every time your browser makes an HTTP request there are overheads made to making the request itself. If you have a page with a dozen or so small icons like this page: http://www.scirra.com/html5-game-engine you are making a dozen or so HTTP requests with their associated overheads!

This is why it’s best to use a sprite sheet: http://static2.scirra.net/images/store-icons.png

All the images in one file mean there’s only one HTTP request, so the cost of the overhead is only paid once. You can use this image as a background image for a div, for example as follows:

.store-icon {
    width: 32px;
    height: 32px;
    background: url(http://static2.scirra.net/images/store-icons.png);
}

<div class="store-icon" style="background-position: 0 -576px"></div>

I’ve found that using inline CSS for positioning the background can be more maintainable than defining it inside a CSS class. There is little difference though so I wouldn’t think it matters either way.

Sprite sheets are good for decorative elements. Their disadvantage is that they are very unfriendly to Google image search, but then again, who cares about that for these elements?

Sprite sheets work best for uniform elements, that is, images that all have the same dimensions, which are easy to maintain. Beware of adding different-sized images to your sprite sheets: they can become fiddly to maintain compared to standalone images.

Another benefit of sprite sheets is using them for mouseover images. If all your images and mouseover images are in one sprite sheet, they load at the same time, so when the mouseover image is needed the switch is instantaneous; with a separate image file, the user may experience a few hundred milliseconds’ delay while the mouseover image loads.
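The background-position arithmetic for a uniform vertical sheet is trivial; the -576px in the example above is just the 18th 32px tile (18 × 32 = 576). A quick sketch:

```python
TILE_HEIGHT = 32  # icon height in pixels, matching the .store-icon class above

def sprite_offset(index: int) -> str:
    """CSS background-position for the index-th icon in a vertical sprite sheet."""
    return f"0 -{index * TILE_HEIGHT}px"

print(sprite_offset(18))  # 0 -576px
```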

Cookieless Domains and Multiple Domains

If you’re hosting your images on the same domain as your webpage and you have cookies of some description (don’t forget sessions still use a cookie!) then with every request to image files, CSS files, JS etc the cookie data is also sent with the request. This adds to the amount of data that needs to be uploaded from the client.

We host almost all our static content on the domain Scirra.net, which doesn’t use cookies at all. This makes loading the images faster than if they were hosted on Scirra.com.

As mentioned before, a browser is recommended to download only 2 files from each host in parallel. Multiple cookieless domains therefore allow more downloads in parallel, improving your page load time. We’ve set up 4 subdomains, static1.scirra.net to static4.scirra.net.

Setting up these domains is remarkably simple. You set them up on your domain and make them all point to the same directory on your server. Now if you have 12 static resources on your pages you can distribute them evenly over the static domains which will help with parallel loading of static resources.

One important thing to note is that if you load static1.scirra.net/images/construct2-splash.jpg on one page and then load it with the static2 subdomain on another page the browser will reload the image and not take advantage of the cache! Ensure that once a resource is loaded on one subdomain it is permanently assigned that subdomain. Although it is accessible on other subdomains it is only used on one so the browser can cache the response.

To solve this issue on our website, I wrote a small function, `AllocateStaticPath`. When we want to put a picture on our website we put it in our code as:

<img src="<%=CommonFunctions.AllocateStaticPath("/images/construct2-splash.jpg")%>" width="672" height="359" />

We then define the function:

/// <summary>
/// Turns a path for a static resource into its related static URL (static[1-4].scirra.net).
/// </summary>
/// <param name="resource">Original URL (eg "/images/picture.png")</param>
/// <returns>Full URL to resource</returns>
public static string AllocateStaticPath(string resource)
{
    // Get the root static domain (//static$.scirra.net)
    string returnPath = Settings.CookielessDomainImagePrefix;
    resource = resource.ToLower();

    // Add missing slash if one doesn't exist
    if (resource.Substring(0, 1) != "/")
        resource = "/" + resource;

    var splitpath = resource.Split('/');
    var chars = splitpath[splitpath.Length - 1].ToCharArray();
    var seed = 0;
    for (int i = 0; i < chars.Length; i++)
        seed += chars[i];
    var random = new Random(seed);

    // Set the static ID subdomain
    returnPath = returnPath.Replace("$", random.Next(1, 5).ToString());

    return returnPath + resource;
}

This function uses the characters of the file name to build a seed value, from which a random number 1–4 is picked. As the seed is the same every time for a given filename, the same number is returned every time. It then returns the assigned static URL and renders the image like this:

<img src="//static1.scirra.net/images/construct2-splash.jpg" width="672" height="359" />
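For illustration, here’s the same deterministic-allocation idea sketched in Python. It uses a simple modulo rather than .NET’s seeded Random, so it won’t pick the same subdomain numbers as the C# version, but the key property is identical: the same filename always maps to the same subdomain, so the browser cache stays warm.

```python
def allocate_static_path(resource: str, domains: int = 4) -> str:
    """Deterministically map a static resource to one of N cookieless subdomains."""
    resource = resource.lower()
    if not resource.startswith("/"):
        resource = "/" + resource
    filename = resource.rsplit("/", 1)[-1]
    # Sum the character codes of the filename; the same file always
    # produces the same seed, hence the same subdomain.
    seed = sum(ord(c) for c in filename)
    subdomain = seed % domains + 1
    return f"//static{subdomain}.scirra.net{resource}"

print(allocate_static_path("/images/construct2-splash.jpg"))
```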

As you can see, this is incredibly easy to maintain once it’s all set up, and it yields real benefits in page loading speed. You should notice that static resources end up pretty evenly distributed across the static domains on each page. You may be unlucky and find a bias towards one domain on important pages like your homepage, but these can be manually tweaked.

We found that for our website 4 static domains seems to be most effective. You may find that you need more or less (I would caution against more than 4 generally though as you will start to suffer slowdowns from too many DNS lookups).

One interesting trick is not to specify `http://` at the start of your static URLs, but simply a double forward slash `//`. This is a very useful and little-known URI feature that selects the protocol of the page currently being viewed, very handy if you’re switching between HTTPS and HTTP as it uses whichever is current and doesn’t trigger security warnings!

Always specify your image sizes

It’s all too easy not to add width and height attributes to every image, but they are incredibly important for perceived loading time! If an image’s dimensions are not known to the browser rendering the page, the browser will keep reflowing elements as it works out how big each image actually is. By specifying the dimensions, the browser can reserve that fixed space as it renders, which stops content jumping around the page. Although this has no effect on actual page load time, it has a big impact on the usability of your website and its perceived loading time.

External JS

It’s important to keep your Javascript in external files where possible, since it allows the browser to cache the scripts and not load them on every page load!

It sounds pretty simple, but often it is not. Sometimes scripts need to contain variables that differ for each user, which makes it seem difficult to put them in external files. A lot of websites I’ve seen have huge swathes of Javascript code on their pages for this very reason. We solved this by rendering global variables directly to the page and having our external Javascript files reference those variables when needed. This allows us to cache the Javascript files.

Caches

Check that browsers correctly cache static resources. Aggressive caching is best, combined with an incrementing querystring parameter on each resource URL to force browsers to re-download a file when you change it. This prevents static resources being re-fetched on every page view.
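A minimal sketch of the querystring trick (a hypothetical helper, assuming a site-wide version number you bump whenever static files change):

```python
STATIC_VERSION = 7  # assumed site-wide constant, bumped on each static-file change

def versioned_url(url: str, version: int = STATIC_VERSION) -> str:
    """Append a cache-busting querystring so aggressively cached files can be refreshed."""
    separator = "&" if "?" in url else "?"
    return f"{url}{separator}v={version}"

print(versioned_url("/css/newmaster.css"))  # /css/newmaster.css?v=7
```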

Content Delivery Networks

We have not utilised a CDN yet, but are looking into it. CDNs are well documented and incorporate a lot of the benefits described above of their own accord. CDNs are geographically dispersed, which means there is less geographical bias in loading times for static resources.

Other tweaks

There’s a host of other tweaks you can make, depending on how obsessive you are about the speed of your page! It’s important to remember, however, that you reach a point where the time you put into new changes outweighs any gained benefit. It can also cost you a lot of time in future maintenance.

A Fast Server

We recently moved server, away from a 2 core 2.2GHz system with 2GB of RAM to a 4 core 3.2GHz server with 8GB of RAM. We also now use a licensed SQL Server, which removes the hardware restrictions of the Express edition, and we’ve noticed pages loading a lot quicker and our CPU no longer maxing out during peak times! When you’re nearing your server’s processing limits it’s important to upgrade to better hardware to give your site breathing space for growth. More RAM also means more of your application and database can permanently sit in memory.

The slowest webpage in the world is a page that never loads. Although I appreciate the comedy value when people congratulate startups for breaking under the ‘Hacker News’ or ‘Reddit’ effect (when the site gains so much traffic it goes down), it really is a bit of a disaster for a startup: they missed out on a boatload of visitors and potential customers/press for the sake of saving a few extra dollars a month on server costs. It’s important to have a server that can cope with a sudden influx of traffic, so do upgrade when your natural daily server load starts to approach its capacity. Don’t wait until it’s too late!

The Results

Below I’ve printed out the page load results between us and some competitors. A couple of points to note are that:

  • Pingdom happens to use a NYC server to test response from which is where our server is located, however other test subjects show significant geographical bias in different regions
  • We’ve used the homepages of competitors as test pages because homepages are arguably the most important pages
  • Social buttons add a large amount of time to page loads. As these don’t form the meat of the content, we’ve stripped them from our page for the tests (no other competitor’s homepage has these social buttons).
Page             Page Size   Amsterdam, NL   Dallas, USA   NYC, USA   Average
scirra.com?t=1   530kb       1.18s           0.81s         0.5s       0.83s
gamesalad.com    1.5mb       2.45s           0.72s         1.1s       1.40s
yoyogames.com    1.4mb       2.21s           3.12s         2.13s      2.47s
stencyl.com      747kb       3.88s           1.29s         1.9s       2.36s

Tests were run at 3.30pm GMT+1, 29 Feb, from Pingdom.com.

Why?

These performance tweaks might all seem like a time drain, but I firmly believe they aren’t. If your website loads faster than your competitors’ you have a distinct advantage. It may be hard to measure, but I’m convinced this advantage exists. Visitors will be more likely to visit more pages on your website, more likely to enjoy the experience, and less likely to lose their train of thought, which is valuable when pitching an idea or product to prospective customers and users.

Startups should be leading the field, they have the agility and knowledge to execute these techniques and gain another much needed competitive edge!


Comments

Diggetydog 3,968 rep

Nice... I am all about fast

Wednesday, February 29, 2012 at 6:02:49 PM
Dobandon 6,366 rep

I love web development, really good blog. Thanks!

Wednesday, February 29, 2012 at 6:11:47 PM
anthonykojima 10.0k rep

Really interesting. Thanks a lot for this article!

Wednesday, February 29, 2012 at 6:27:19 PM
cjr1974 5,109 rep

Thanks for the info that was a good read!

Wednesday, February 29, 2012 at 6:49:57 PM
Velojet 20.7k rep

Superb advice here, thanks, Tom! The best is that it's soundly based in your own practical experience. The comparison chart of competitors is another plus to Scirra!

Wednesday, February 29, 2012 at 6:50:22 PM
Kyatric 67.1k rep

Thanks for the insightful practical article.
I'll be sure to read it again and follow the links to YSlow guide and other ressources you gave when in the process of building my site.

Wednesday, February 29, 2012 at 8:48:02 PM
shinkan 33.8k rep

...every 100ms increase in load time decreased sales by 1%.... i'm sorry but for me that kind of talk it's pure theoretical bull shit. If every man on this planet had same internet connection then maybe it could be true but reality is not so sweet. If you have fast broaband you don't really care about how big is someones website. Few month's ago I had 8Mb/s and on every page i've been, never saw any delays. Type web page adress, press enter, website open. After moving to different house I can only use mobile connection duo to my location. For example opening scirra page now takes me good 2-6 seconds if theres a nice wheather outside. Opening google takes around 3 seconds and so what? I should go somewhere else because their website opens for me in 2.9s instead of 3s?
Point is, if someone want's to visit specific page he/she will.

Wednesday, February 29, 2012 at 9:59:54 PM
shinkan 33.8k rep

...besides that, quite interesting article @Tom :)

Wednesday, February 29, 2012 at 10:01:59 PM
Tom 48.7k rep

@shinkan these are probably large sample sizes and well designed and executed A/B tests conducted by Amazon. It's easy to look at individuals responses but when you deal with large data sets it's quite possible to observe these sorts of results.

The point you mention about "if someone want's to visit specific page he/she will" is interesting though, on a site like Amazon a user is far more likely to find an alternative if the site is too slow for them, online shoppers are usually not very loyal and will discover alternatives very quickly.

On the other hand a site like ours the user will have more loyalty, there's less known alternatives therefore you could argue that a site like ours would be less sensitive to page speed decreases over something like Amazon.

Wednesday, February 29, 2012 at 10:05:57 PM
shinkan 33.8k rep

@Tom yes, but how can you tell from people visiting your page, who's interested in buying and who is only looking?

Wednesday, February 29, 2012 at 10:18:11 PM
Kyatric 67.1k rep

Agreed with @Tom and @shinkan 's comments.

Amazon's figures stand in a "business perspective", according to the nature of the website tested and in parallel with its competitors.
In the case of Scirra's, or "your game website" indeed loyalty of niche users will come into play.

But I believe you can link the "every 100ms increase in load time decreased sales by 1%" with the reputation and the impact on the user.

A "mainstream" user coming to an indy website, I believe will be less inclined to make a payment on paypal for the product if the site feels "cheap" and "slow quality".
I know that's something that would make me think about it twice (despite being used to slow connexion and havign already taken "risks" sending money to other indy developpers in the past).

It's more a psychological impact than any figures you can quantify, and apply to only a fraction of the users that would "think" like me.
Still in a niche market, user's retention is important and you can't afford to waste potential customers.

Anyway, good point/discussion guys, thanks again for the article.

Wednesday, February 29, 2012 at 10:24:13 PM
ericdfields 2,032 rep

When specifying image sizes, I'm assuming in the CSS rather than width & height attributes? Is one better than the other for page load time? Just checking…

Thursday, March 01, 2012 at 2:48:59 AM
gtmetrix 2,062 rep

At the risk of self linking, check out http://gtmetrix.com/ to get Google pagespeed and Yahoo Yslow breakdowns in nice and easy format. =)

Thursday, March 01, 2012 at 2:57:36 AM
johnstalcup 2,037 rep

Analyzing http://www.scirra.com/?t=1 page load:

Your footer-bottom.png is 109kb, resave it as a 45kb JPG (visually identical on the page, you're not using the transparency you included since your page background is all white anyway)

Your home-buttons.png is 30kb and is a very plain text highlight effect that you should be doing in CSS for 0kb.

You should be concatenating jquery.min.js, jquery.colorbox-min.js, common.min.js, and alert.js into a single HTTP request.

You should be concatenating newmaster.css, home.css, colorbox.css and alert.css into a single HTTP request.

None of your secondary files are beginning loading till after your ENTIRE html file is sent to the client. This means you are not flushing your output buffer till you have completed with your template processing. Flush the buffer a couple of times along the way so that the browser can start loading your images and CSS in parallel with the second half of your HTML text.

Several of your images aren't starting to load until after jquery.colorbox-min.js is done loading. It is potentially costing you 150ms.

Thursday, March 01, 2012 at 8:12:07 AM
zjane 2,061 rep

Agreed with Kyatric comment,
Also come to think of it, many businesses are failing due to slow servers and loosing alot of potential clients. Though getting fast internet at first time may be expensive its good and helps you stay on the safe side like the saying goes" better safe than sorry"

Thursday, March 01, 2012 at 9:12:09 AM
