
Low Hanging Fruits in Front End Performance Optimization

Photo by Amos Bar-Zeev on Unsplash

I conduct Rails performance audits for a living. Clients usually approach me with a request to speed up the backend, i.e., optimize a bottleneck API endpoint or tune the database queries. After the initial research, it often turns out that tweaking the frontend will have a greater impact on perceived performance than fine-tuning the backend.

In this blog post, I describe the often-overlooked techniques that can significantly improve your web app’s overall performance.

These tips apply to all web stacks: Ruby on Rails, NodeJS, Python Django, or Elixir Phoenix. It does not matter whether you render HTML on the server or serve an API consumed by a JavaScript SPA framework. It all comes down to transferring bytes over HTTP. Frontend performance optimization is all about making this process as efficient as possible.

Why is frontend performance critical for your website’s success?

I guess that developers often disregard the frontend performance because it doesn’t directly affect the infrastructure costs. Rendering the unoptimized website is offloaded to the visitor’s desktop or mobile device and cannot be measured using backend monitoring tools.

Developers usually work on top-notch desktop computers with a high-speed internet connection. They do not experience poor performance themselves. The UX of visiting your landing page on a 15-inch MacBook Pro with a fiber connection cannot be compared to an old Android device on a shaky 3G network.

A typical web app issues dozens of requests on initial load. Only a few are backend-related, i.e., website HTML, API calls, etc. The majority are static assets: JavaScript libraries, images, stylesheets. Fine-tuning the frontend-related requests will give a much greater return than shaving a couple of hundred milliseconds off a database query.

Googlebot measures the performance of your website, and it directly affects your SEO ranking. Since July 2019, Googlebot has used a “mobile-first” approach when assessing websites.

You might not care about frying the CPU and wasting the bandwidth of your mobile users. Maybe landing a sweet spot in Google search results should convince you to focus on your frontend performance?

Test in your client’s shoes

“If you want to write fast websites, use slow internet.”

You should regularly throttle the internet speed during the development process to experience first-hand how your app will behave for most users.

On macOS, you can use the Network Link Conditioner to do it:

Simulate mobile network on a desktop computer

Also, both Firefox and Chrome developer tools offer the option to throttle the internet speed in the Network tab:

Chrome network throttle setting

Firefox network throttle setting

Maybe the internal demos of the new features should also be done on the throttled network? Everyone in the company should have the chance to see how the app really works for most users.


Discovering frontend issues is usually more straightforward than finding backend ones. You don’t even need admin access to the website. By definition, the frontend issues are in the frontend. You can scan and diagnose every website out there. I use the following tools to perform the initial scan:

Google PageSpeed Insights

Google Chrome Lighthouse

WebPageTest

FastOrSlow

There’s no reason why ANY website shouldn’t score top on each of those tools. Read on if your score is anywhere below 90%.

Abot for Slack FastOrSlow score

Abot for Slack WebPageTest score

Abot for Slack Google speed score

The Abot landing page is a dynamic Rails website getting a top performance rating

Client-side caching

Correctly configuring client-side caching is the most critical frontend optimization. I’ve seen it misconfigured in multiple production apps so far. Webpack comes with a great mechanism to leverage client-side caching easily: content digests. The production assets generation process must be configured to append an MD5 digest of the file’s contents to the filename.
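The mechanism is simple enough to sketch in a few shell commands (file name and contents are illustrative; in practice Webpack’s `[contenthash]` placeholder or the Rails asset pipeline does this for you):

```shell
# Content-based fingerprinting in a nutshell: the digest is computed from
# the file's bytes, so the name changes only when the content changes.
printf 'console.log("hello");\n' > application.js
digest=$(md5sum application.js | awk '{print $1}')
cp application.js "application-${digest}.js"
ls application-*.js
```

Because the digest is deterministic, rebuilding an unchanged file produces the same name, and the browser keeps using its cached copy.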

It means that in the production environment, the application.js file becomes application-5bf4f97...95c2147.js. The suffix is derived from the file contents, so it is guaranteed to change if the file changes. You must add the correct cache-control header to make sure that once downloaded, the file will persist in the browser cache:

cache-control: public, max-age=31536000, immutable

The immutable directive tells the browser that the file will never change, so it can skip revalidating it even when the user explicitly refreshes the page.

If you’re using NGINX as a reverse proxy, you can use the following directive:

location ~* \.(?:ico|css|js|gif|jpe?g|png|woff2)$ {
  add_header Cache-Control "public, max-age=31536000, immutable";
  try_files $uri =404;
}
I’ve seen many apps using ETag and Last-Modified headers instead of Cache-Control. An ETag is also generated based on the file contents, but the client has to talk to the server to confirm that the cached version is still correct. It means that on every page visit, the browser has to issue a request to validate its cache contents and wait for a 304 Not Modified response. This completely unnecessary network roundtrip can be avoided if you add a Cache-Control header.
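The ETag exchange looks roughly like this (headers abbreviated; the digest value is illustrative):

```
# First visit – full download, the server attaches a validator:
HTTP/1.1 200 OK
ETag: "5bf4f97"

# Every subsequent visit – the browser must still ask:
GET /assets/application.js HTTP/1.1
If-None-Match: "5bf4f97"

HTTP/1.1 304 Not Modified
```

With Cache-Control and a fingerprinted filename, none of these revalidation requests are sent at all.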

Limit bandwidth usage

Nowadays, websites are just MASSIVE. It often takes multiple MBs to render a static landing page. Let me point out the most common mistakes that inflate page weight and how they can be resolved.

Compress and resize images

There’s no excuse for serving uncompressed images on your website. You must make sure to process all your images with an image-compression tool. There’s often no perceivable difference for images processed with lossy compression, and it usually means a ~70% size reduction.

Resizing an image to the size that it actually needs is often overlooked. To check it, visit your website using Firefox on a large desktop screen, right-click the image, and select View image info. You’ll see what dimensions the image needs vs. how large it is now:

Checking an image’s real vs. rendered dimensions in Firefox

Make sure first to resize the image and only then compress it. Otherwise, you might lose quality.

Defer images loading

You should defer the loading of the images that are not visible in the initial viewport. During the initial load, dozens of requests are competing for network throughput. Delaying the transfer of unnecessary images will leave more resources for necessary assets like CSS stylesheets etc.

There are plenty of JavaScript libraries that offer this feature. Including them means additional bandwidth usage, so I prefer to keep things simple and use the native loading="lazy" HTML attribute.
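Enabling it is a one-attribute change (the src value is a placeholder):

```html
<!-- Browsers that support loading="lazy" fetch the image only when it
     approaches the viewport; older browsers simply ignore the attribute. -->
<img src="screenshot.png" loading="lazy" alt="App screenshot"
     width="800" height="450">
```

Setting explicit width and height alongside it also prevents layout shifts when the image finally loads.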

It has decent browser support. Have a look at how it affected one of my blog posts:

Without lazy-loaded images

Lazy loading for images enabled

As you can see, adding loading="lazy" to all the images eliminated ten requests and over 250 KB of transfer on the initial load. That’s a massive deal for slower internet connections!

Enough with the GIFs already…

GIFs are HUGE! I understand you want to showcase a fancy UI on your landing page, but maybe you could use a lazy-loaded movie clip instead? A 10 MB GIF can be converted to a 250 KB MP4 file… Twitter automatically converts GIFs to MP4 files, so I’d trust them on this one.
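Assuming you have ffmpeg available, the conversion is a single command (demo.gif stands in for your real screen capture; the first command just synthesizes a sample clip so the steps are runnable end to end):

```shell
# Synthesize a short sample GIF (stand-in for your real screen capture)
ffmpeg -y -loglevel error -f lavfi \
  -i testsrc=duration=2:size=320x240:rate=10 demo.gif

# Convert to MP4: yuv420p plus even dimensions keep the file playable in
# all major browsers; +faststart allows playback to begin before the
# whole file has downloaded.
ffmpeg -y -loglevel error -i demo.gif -movflags +faststart -pix_fmt yuv420p \
  -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" demo.mp4

ls -l demo.gif demo.mp4
```

Serve the result with a muted, autoplaying, looping `<video>` tag and you get the GIF experience at a fraction of the weight.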

Cherry-pick and measure dependencies size

Many frontend libraries offer a modular approach to including them in your application. For example, Bootstrap allows you to customize the build to include only the components you need.
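With Bootstrap’s Sass source, for example, you can import just the parts a page uses instead of the whole framework (a sketch; the paths assume the npm bootstrap package and Bootstrap 5’s file layout, and the required core imports must come first per Bootstrap’s docs):

```scss
// app.scss – required core first, then only the components you need
@import "bootstrap/scss/functions";
@import "bootstrap/scss/variables";
@import "bootstrap/scss/mixins";
@import "bootstrap/scss/root";
@import "bootstrap/scss/reboot";
@import "bootstrap/scss/buttons";
```

Everything you don’t import never reaches the compiled stylesheet, so the saving is free.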

Some popular libraries have lightweight alternatives. Chrome DevTools now suggests them, so make sure to run it against your application.

Reconsider 3rd party dependencies

Overusing externally hosted 3rd party JavaScript libraries is the simplest way to kill the performance of your website.

Dropping in yet another externally hosted script means an extra DNS lookup, connection, and download that you don’t control.
