BBC World Service and Web Performance – By Chris Hinds – Nov, 2020

BBC World Service & Web Performance

The BBC World Service publishes news stories in over 40 languages globally. Stories are written by journalists around the world in their native language instead of using translations. World Service covers everything from local to global news and content is delivered in multiple formats, including text, video and audio.

As mentioned in Moving BBC Online to the cloud, the frontend (and many backend services) powering the World Service websites was previously written mostly in PHP and hosted in BBC-owned data centres. Over the last couple of years, teams within BBC Online have been working tirelessly to migrate their services to the cloud, and the BBC World Service has nearly completed this transition.

Over the past 12 months we’ve migrated our pages, which are spread across 41 discrete sites, from a legacy PHP monolith to a new React-based application. This application is called Simorgh: an open-source, isomorphic single-page application developed by the World Service languages team.

What do we mean by “single page application” and “isomorphic”?

Single Page Application (SPA)

A single page application (SPA) is a web application that runs entirely in the browser, removing the need to refresh or reload the page. This creates an outstanding user experience that feels close to that of a native mobile application. Some common services you may use on a daily basis make use of this technology, including Gmail, GitHub, Facebook and Google Maps.


Isomorphic

An isomorphic (sometimes referred to as “universal”) app is a web app that can run on both the server and the client. The first request to a page is rendered on the web server, delivering server-side-rendered HTML to the reader’s browser. Once the rendered page reaches the client and the JavaScript has been downloaded and parsed, the browser takes control and treats subsequent page views as a single page application. In React this is handled via the hydrate function, which “hydrates” the client-side DOM with the data that was used to render the page on the server. React performs a diff between the server-side render and the client-side render; for the most part these will be identical, so the reader does not notice this phase, but from this point React is in control of the page rendering in the browser.
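The flow above can be sketched with a simplified stand-in, not React itself: the component, the data and the hydrate() signature here are all illustrative. The point is that the server and the client render the same markup from the same data, and hydration only attaches behaviour on top of what was already sent.

```javascript
// A "component": a pure function from data to markup, usable on either side.
function renderGreeting(name) {
  return `<p id="greeting">Hello, ${name}</p>`;
}

// Server side: render to a string and send it in the first HTML response.
const serverHtml = renderGreeting('World Service');

// Client side: re-render with the same data and diff against the server
// output, as React's hydrate() does, before taking over event handling.
function hydrate(serverMarkup, clientMarkup) {
  const matches = serverMarkup === clientMarkup;
  // If the markup differed, React would log a warning and patch the DOM;
  // either way the client render is now in control of the page.
  return { matches, interactive: true };
}

const result = hydrate(serverHtml, renderGreeting('World Service'));
```

Because both renders start from the same data, the diff is a no-op in the common case, which is why readers don't perceive the handover.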

Simorgh is the rendering platform built by the BBC World Service web team using the technologies described above. What made Simorgh challenging to build wasn’t the technology we used, but instead the specific requirements of BBC World Service. When building Simorgh to replace our dated PHP solution we had to bear in mind the following:

  • Performance — The websites must be as performant as they can be. Many of our readers are on lower-end smartphones or feature phones, on networks with low bandwidth, high data costs, slow connections and patchy coverage.
  • Accessibility — The BBC aims to provide a fully accessible web platform, ensuring that anyone can access our websites using any assistive technology.
  • Support for multiple languages — BBC World Service currently supports 41 different languages. Each language site has its own editorial team, and from the outside this is seen as 41 separate websites.
  • Huge volumes of traffic — The World Service currently serves 31m weekly readers. Simorgh, despite sitting behind many different caching/routing layers, renders on average 1 million unique pages per day, with an average of 11 million daily renders across the 41 languages.
  • First class AMP support — Offering AMP variants of all supported pages. This allows us to move away from the previously separate AMP rendering system, which was built on an internal Ruby-based framework for static rendering.

We are planning to post a dedicated writeup on the history of Simorgh and the technologies chosen in the near future so keep an eye out for that.

Simorgh currently supports twelve different page types.

These pages may appear visually similar, but our internal content management system treats them as different page types. One of the biggest advantages we have seen from rebuilding our platform is the ability to reuse code as much as possible. Many of these pages share the same code and React components. These components are located in our open-source React component library, documented in Storybook.

We knew that we wanted to focus on improving web performance for the World Service, but this was difficult with the previous PHP platform. So one of the main goals of the migration was to build a platform that enables rapid iteration, allowing us to make changes and shorten the feedback loop for improving those pages.

Web performance wasn’t the number one goal when we created Simorgh, but as we followed best practice in developing the new platform we did see vast improvements compared to the old one. Some of this can be attributed to early design decisions such as no render-blocking JavaScript, minimal layout shift and server-side rendering.

We released pages in batches, grouped by language, and the first language we released onto the new platform saw huge performance gains in many areas:

  • Lighthouse performance score saw a 292% increase, from 24 to 94
  • Lighthouse best practice score saw a 27% increase, from 79 to 100
  • Total number of requests dropped by 85%, down from 112 to 17
  • Blocking JS requests dropped by 100%, from 9 to 0
  • JS requests dropped by 79%
  • Total page weight is now 60% smaller than before
  • JS size dropped by 61%
  • DOM Content Loaded is 85% faster, at just 0.4s down from 2.6s
  • Visually complete time dropped by 62%, down to just 1.8s from the previous 4.7s

So as you can see, we have already made a great improvement in our frontend web performance, but we won’t stop here. A large proportion of the BBC World Service audience are on slower 2G and 3G networks, using lower-end, budget-friendly Android handsets or even feature phones. In some of our supported regions network coverage is patchy at best; some readers may only have network access whilst travelling to work, or even only whilst at work. We must continue to make improvements wherever we can, so that our pages are among the most accessible web pages in the news category, both in terms of accessibility requirements and performance.

This video demonstrates the performance improvement between the old and new platforms.

Since the migration we have already released a number of new features that aim to improve performance; perhaps the most notable was the lazy loading of social embeds (tweets, Instagram posts and YouTube videos).

Social embeds are often a key part of telling a story. We have found that many of our journalists add a number of social embeds to each page. For instance, one language always embeds 2–3 YouTube videos at the bottom of each story. When looking into the performance metrics for these pages we noticed upwards of 500KB of JavaScript (more than the entire Simorgh application) being loaded by YouTube, and some of this JavaScript was actually blocking the rendering of our page as it was being parsed. In one extreme example the Time to Interactive (TTI) was 12s.

This content had to be on the page as it was part of the onward journeys experience. However, not every reader scrolls down to where these embeds are rendered, so why should they have to download the extra JavaScript, and spend extra data allowance, on social embeds they may never interact with?

The Solution?

Lazy loading of third-party content. We already do this with any images that are outside the viewport, so why not for social embeds? A quick pull request later and we were lazy loading social embeds: no new library, no increase in JS size, just an existing feature of the platform put to new use. Soon after releasing we saw a wide variety of results, as these depended on where the social embeds were in the story and how many a given story had.
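Simorgh's real implementation lives in its component library, so this is only a minimal stand-alone sketch of the same pattern using IntersectionObserver. The `data-embed-src` attribute, the `rootMargin` value and the helper names are illustrative assumptions, not the production API.

```javascript
// Pure decision helper: load only once the placeholder is in (or near) view.
function shouldLoad(entry) {
  return entry.isIntersecting;
}

// Read the real third-party embed URL stored on the placeholder element.
function embedSrcFor(placeholder) {
  return placeholder.dataset.embedSrc;
}

// Browser-only wiring; returns null outside the browser (e.g. during SSR).
function observeEmbeds() {
  if (typeof document === 'undefined' || typeof IntersectionObserver === 'undefined') {
    return null;
  }
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (shouldLoad(entry)) {
        const iframe = document.createElement('iframe');
        iframe.src = embedSrcFor(entry.target);
        obs.unobserve(entry.target);   // load once, then stop watching
        entry.target.replaceWith(iframe); // placeholder becomes the embed
      }
    }
  }, { rootMargin: '250px' }); // begin loading just before it scrolls into view
  document.querySelectorAll('[data-embed-src]').forEach((el) => observer.observe(el));
  return observer;
}
```

The key property is that the third-party JavaScript is only ever requested for embeds the reader actually approaches, which is what turns it into a data saving for readers who never scroll that far.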

In most cases we saw a 10–15% improvement in TTI, as well as a reduction in (if not the elimination of) render-blocking time. Where I was most impressed, though, was in the story mentioned earlier: we had taken the TTI from 12s down to 6s. 6s is still a long time, but this was a story with many different social embeds, so something of a worst-case scenario. In any case, a 50% improvement from just a few lines of code is phenomenal. This kind of change would not have been possible, at least not so quickly, on the previous platform.

Now that the migration is complete we are in a position to start making more improvements to web performance and changes to the platform. Before we can make many meaningful improvements to the application we need to be able to monitor web performance.

There are two common ways of monitoring web performance:

Synthetic Testing

Synthetic testing is great for catching regressions during the development lifecycle. We use Lighthouse, SpeedCurve and WebPageTest to measure our web page performance.

RUM (Real User Monitoring)

RUM testing is a method of capturing performance metrics from our users. RUM is generally more expensive in comparison to synthetic testing, however it provides a vital look into how real users are experiencing our site.

We use a combination of synthetic and RUM monitoring for Simorgh. During development, Lighthouse runs on every pull request/feature branch. Lighthouse tests a subset of pages and, for the most part, looks at the Accessibility, PWA and Best Practices audits.

Lighthouse is also used in our continuous delivery pipeline. After we deploy to the test environment, we run Lighthouse against the environment and can choose to fail the build if the audits fail. This same test will then also run against the live environment once the deployment is complete.

SpeedCurve runs daily tests against a smaller subset of URLs. SpeedCurve essentially wraps WebPageTest and Lighthouse, providing a fantastic UI on top of those underlying tools. These tests give us an insight into the performance of our pages from different regions around the world.

A recent initiative from Google is Core Web Vitals. The idea behind these metrics is that they are a way to determine and monitor the user experience of your site. Google collects the metrics from popular sites itself and publishes them in the CrUX (Chrome User Experience Report) dataset. These metrics include things like Time to First Byte, First Input Delay and Cumulative Layout Shift.

Through a new package in our component library we are now able to collect these same metrics ourselves if the user has opted into performance tracking via the BBC cookies settings page.

This is a fairly cheap way (in comparison to procuring a third-party tool) for us to collect real user metrics. RUM is very important for the BBC World Service, as our users are situated all around the world, using different devices with different capabilities on a wide variety of networks. Getting this sort of coverage with synthetic testing alone would be impossible. This new data will allow us to start making informed decisions about where we need to improve our web pages, directly affecting the readers’ experience.
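Consent-gated collection of this kind is commonly built on Google's open-source `web-vitals` package. The sketch below is an illustration of that pattern, not the BBC's internal package: the consent flag, endpoint and payload shape are all assumptions.

```javascript
// Build a beacon payload only if the reader has opted in to performance
// tracking; return null otherwise so nothing is sent.
function buildBeacon(metric, consented) {
  if (!consented) return null;
  return JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
}

// Browser wiring (requires the `web-vitals` package, v1 API):
//
//   import { getCLS, getFID, getLCP, getTTFB } from 'web-vitals';
//
//   const report = (metric) => {
//     const body = buildBeacon(metric, userHasOptedIn()); // hypothetical consent helper
//     if (body) navigator.sendBeacon('/performance', body); // illustrative endpoint
//   };
//
//   getCLS(report); getFID(report); getLCP(report); getTTFB(report);

const sample = buildBeacon({ name: 'CLS', value: 0.02, id: 'v1-123' }, true);
```

Keeping the consent check inside the payload builder means an opted-out reader costs nothing beyond the function call: no serialisation and no network request.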

We hope to publish a dedicated post in the near future about how we collect and use Web Vitals.

It’s been a busy period for many teams at the BBC this year, but we are seeing the light at the end of the tunnel. The World Service migration has been a great success so far: we have moved to a modern, open-source platform that is faster than ever, both in terms of product and feature iteration and in web performance.

Our journey has only just begun. Simorgh represents a new beginning for the BBC World Service, and we will continue to improve the performance and accessibility of our news web pages for our global audiences.
