Written by Michel Fortin. Originally published on seoplus.com.
Traditional websites are evolving. Not only are they becoming more dynamic, but the way we develop and optimize them is becoming more dynamic, too. The reason is simple: as the world shifts to mobile, the demand for more dynamic content, services, and experiences grows along with it.
Digital transformation has been an accepted business practice since 2013. Even The Wall Street Journal declared in 2018 that every company is now a tech company. The world had been poised for quite some time to transition to a remote workforce.
Obviously, the COVID-19 pandemic has accelerated this transformation. Some companies have embraced a remote work policy, even closing their physical offices permanently. Others have adopted a mobile-driven, remote-first, or hybrid-work model.
For example, seoplus+ is an award-winning digital marketing agency in Ottawa, Canada. We serve clients all across Canada and from around the world. Many of our clients have seen major growth online, particularly after COVID hit. Some have become fully remote as well.
So it made logical sense when, in June 2021, seoplus+ became a work-from-anywhere company. Many large enterprise-level companies are leading the way, too. Firms switching to remote work in whole or in part include Amazon, Capital One, Facebook, Ford, Google, HubSpot, Microsoft, Shopify, Verizon, and many others. And the list keeps growing.
In short, gone are the days of being stuck at some centralized head office or headquarters. In fact, you can say that businesses and their practices are becoming more and more “headless.” (The pun is quite intentional. You’ll soon understand why.)
The Pros and Cons of Content Management
When a growing number of traditional, brick-and-mortar companies adopt a mobile approach, we can safely say it’s no mere trend. More importantly, watching where they’re going is also a great indicator of where digital marketing is heading.
For this reason, the websites we develop, the tools we use to develop those websites, and the practices we engage in to promote those websites, must adapt and evolve as well.
Web development and digital marketing are becoming increasingly mobile, decentralized, and dynamic. Think of the growth in cloud computing, software as a service (SaaS), marketing automation, and above all, content management.
When I started online in the late 80s, I hardcoded all my pages in HTML. They were simple black text on white backgrounds with a few blue hyperlinks. Simple to make but tedious. By the mid-90s, I started using server-side includes (SSI). This process allowed me to reuse common parts of the website without having to recode repeated sections every time.
Then, the introduction of client-side scripting languages like JavaScript made it easier for me to create pages. It also made them faster and a little more dynamic, too. Some early browsers called it dynamic HTML, or DHTML, to differentiate it from standard HTML.
A few years later, the introduction of the content management system (CMS) vastly improved website deployment, not to mention enjoyment. With the increase in speed, efficiency, and performance, it’s no wonder there’s hypergrowth in the use of CMSes. WordPress, the world’s leading CMS, powers 40% of all websites today and accounts for 65% of the CMS market.
The real value of a CMS comes from the ability to manage the content, hence the name. But another key benefit is that it also manages the output of that content without the need to code it. Using a WYSIWYG interface, we can design websites, not just develop them.
But this added benefit creates its own set of problems.
Where CMSes Can Become Counterproductive
The output, often referred to as the “presentation layer,” is composed of three parts: server-side static assets (files), database content, and client-side scripts. As CMSes grow (and the demand for content and better experiences grows), each of these three parts inevitably grows as well, adding more to that layer.
The result is a heavier payload and, consequently, a slower website.
When someone visits a site, the server pulls the content from the database while the browser downloads all the files. The browser then assembles them, parses them, renders them, and paints the penultimate document. (There’s a lot more going on, of course. But for the sake of simplicity, those are the general steps.)
I said penultimate because client-side scripts can still modify the document’s objects (the DOM) to make the page, or parts of it, dynamic and interactive. This step is usually the fastest because, after the resources load from the source server, the rest happens on the client’s side and doesn’t depend on bandwidth.
(Some files load sequentially or synchronously, others load together or asynchronously. Either way, scripts cannot execute until the intended resources are loaded.)
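To make that sequencing concrete, here is a minimal sketch in TypeScript (the script URL is a placeholder) of loading a script asynchronously so it doesn’t block rendering, and only using it once it has finished loading:

```typescript
// Minimal sketch: load a third-party script asynchronously so it doesn't block
// rendering, and only use it once it has finished loading. Placeholder URL.
function loadScript(src: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const script = document.createElement("script");
    script.src = src;
    script.async = true; // download in parallel with the rest of the page
    script.onload = () => resolve();
    script.onerror = () => reject(new Error(`Failed to load ${src}`));
    document.head.appendChild(script);
  });
}

// The widget code cannot run until its script resource has loaded.
loadScript("https://example.com/widget.js")
  .then(() => console.log("Script loaded; safe to initialize the widget now"))
  .catch(console.error);
```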
But as we add more and more files and data, things slow down, negatively affecting how quickly users can load and experience the web page.
So in an effort to improve performance and speed, the penultimate document, assembled as a whole or in part, is cached at various points around the web. This way, a user’s browser will pull up the nearest cached version instead of downloading new copies of the files and reassembling everything with every visit.
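At the origin, this caching behaviour is typically signalled with Cache-Control headers. Here is a rough sketch using Node’s built-in http module; the paths and durations are illustrative, not a recommendation:

```typescript
// Rough sketch: telling browsers and CDNs how long they may reuse a cached
// response instead of downloading a fresh copy. Paths and durations are illustrative.
import http from "node:http";

http
  .createServer((req, res) => {
    if (req.url?.startsWith("/assets/")) {
      // Fingerprinted static files: cache for a year and treat as immutable
      res.setHeader("Cache-Control", "public, max-age=31536000, immutable");
    } else {
      // HTML documents: let shared caches (CDNs) hold them briefly, then revalidate
      res.setHeader("Cache-Control", "public, s-maxage=300, stale-while-revalidate=60");
    }
    res.end("...response body...");
  })
  .listen(3000);
```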
It works. But it, too, has its limitations.
In many cases, it can also make things worse.
Going From Bloated to Boring
Mobile devices have become the chief method of browsing the Internet. Almost 60% of the traffic on the Internet is through mobile phones — and over 75% if you consider other mobile devices such as tablets, handheld consoles, digital cameras, and smart devices.
Influenced in large part by the explosive growth of mobile usage, Google introduced new algorithms this year that will rank your site based on a variety of additional factors related to the site’s user experience.
Google’s page experience algorithm measures a set of performance metrics covering loading speed, visual stability, and interactivity (i.e., how long it takes the page to load, stop shifting around, and respond to user input). Google calls these Core Web Vitals. Other page experience signals include security (HTTPS), safe browsing, mobile responsiveness, and the absence of intrusive interstitials.
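If you want to see these metrics for yourself, Google’s open-source web-vitals package can report them from real browsers. A minimal sketch follows; the function names reflect recent versions of the library (when this article was written, the interactivity metric was FID, since replaced by INP, and older versions used getLCP-style names):

```typescript
// Minimal sketch using Google's open-source "web-vitals" package (npm install web-vitals).
// Function names follow recent versions of the library; the interactivity metric
// was FID when this article was written and has since been replaced by INP.
import { onCLS, onINP, onLCP } from "web-vitals";

// Largest Contentful Paint: how long until the main content is visible
onLCP((metric) => console.log("LCP:", metric.value, "ms"));

// Cumulative Layout Shift: how much the layout moves around while loading
onCLS((metric) => console.log("CLS:", metric.value));

// Interaction to Next Paint: how quickly the page responds to user input
onINP((metric) => console.log("INP:", metric.value, "ms"));
```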
Ultimately, what it means is that your pages will rank a little better if their user experience is better than your competitors’. The challenge is that, other than adding server processing power and caching, we have reached a point where there are only so many technical speed optimizations we can make.
So the only solution left at this point is to scale back.
We reduce, minify, remove, compress. Rinse and repeat.
The issue is that this process can reach a point of diminishing returns. Many web designers and developers have adopted a minimalist approach, but if we keep stripping pages down, the result is a slow return to those simple white pages with black text and blue links. In an effort to improve the user experience, we hinder it.
So what’s a better solution?
Decoupling: The Evolution of The CMS
Fueled by the increase in digital transformation, remote work, and mobile usage, the demand for speed and performance is growing at a rapid pace. For example, the cloud as-a-service market is projected to reach $927.51 billion by 2027.
In turn, the demand for faster and higher-performing digital experiences will push the need for better websites. But how do you create fast websites that are still helpful? How do you create dynamic content that’s not stripped down, overoptimized, or bland?
Enter headless content management.
A traditional CMS is a monolithic system that dictates the entire website’s functionality, from the backend to the frontend. Each end is inextricably tied to the other. One small change in the backend shows up in the frontend.
The frontend is called the “head.” It’s where the content is rendered and experienced by the user. The backend is where the content is created, stored, and managed. That’s the “body”. In traditional CMSes, the body and the head are connected or “coupled,” so to speak.
Now, the term “headless” comes from the process of “decoupling” the two ends. In other words, the backend (i.e., the content along with all its necessary components) is separate from the frontend (i.e., the rendering of those components in the user’s browser).
The content is accessible but does not come with a built-in frontend or presentation layer. It is (almost entirely) independent from the user’s device, operating system, bandwidth speed, screen size, and geographic location.
The CMS becomes less of an all-in-one content publishing tool and more of a database — a content-driven repository where you can create, edit, store, and manage content. And that’s it.
What is a Headless CMS?
Because the content is rendered on the user’s end in a separate process, the result is a vastly faster load time and a better page experience. The CMS becomes a true CMS, if you will, because it only manages the content, not how it’s published or presented.
The best way to think of it is like an application. An API (application programming interface) supplies data (in this case, to an application on a user’s device, even if it’s a web page), which is rendered separately and independently.
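For example, here is a minimal sketch of a frontend asking a headless WordPress backend for content through its built-in REST API (the domain is a placeholder, and the interface only types the fields used):

```typescript
// Minimal sketch of "content as data": the frontend asks the headless backend
// for content over an API and decides how to render it. The endpoint is
// WordPress's built-in REST API; the domain is a placeholder.
interface WpPost {
  id: number;
  link: string;
  title: { rendered: string };
  excerpt: { rendered: string };
}

async function fetchLatestPosts(): Promise<WpPost[]> {
  const res = await fetch("https://example.com/wp-json/wp/v2/posts?per_page=5");
  if (!res.ok) throw new Error(`CMS request failed: ${res.status}`);
  return res.json();
}

// The same data could feed a web page, a mobile app, or any other "head."
fetchLatestPosts().then((posts) =>
  posts.forEach((post) => console.log(post.title.rendered, "->", post.link))
);
```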
This decoupled CMS, or “headless CMS,” is gaining popularity because it offers far greater flexibility, not to mention speed. By treating the content like data, it allows easier collaboration, greater scalability, and faster development times.
There’s no need to accommodate different browsers, operating systems, mobile devices, connection speeds, and so on. Instead, it unifies data in one place, simplifies (and speeds up) the creation of content, and makes it faster to deploy.
Plus, it’s more secure as the content is completely separate from the presentation layer. By removing the many access points between the server and the client, it reduces the number of opportunities for hackers to exploit.
Does that mean that the traditional CMS is dead?
Not any time soon, of course. Traditional CMSes still power the majority of the Internet, and many of them are sufficient for most simple, standalone websites. But for websites that require speed, agility, interactivity, and scalability, headlessness is a viable solution.
Moreover, many headless environments still use a traditional CMS as their backend. In the case of WordPress, for example, one can still access the backend to write and manage content. But the frontend simply goes unused.
Instead of the CMS’s frontend, a separate client-side frontend pulls content from the CMS. This frontend is coded (often by hand) using a framework, most of which are based on JavaScript. There are quite a few of them: Gatsby, Next.js, React, Angular, Nuxt.js, Vue.js, Node.js, and many more.
Each one has its own set of benefits depending on what you want to achieve. It depends on things like user-generated content (UGC), ecommerce capability, real-time interactivity, distributable applications, embeddable systems, and so on.
But for the most part, simply decoupling a traditional CMS can get the job done.
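As a hedged illustration of what that frontend work can look like, here is a sketch of a Next.js page that pulls posts from a decoupled WordPress backend at build time; the domain and the post shape are placeholders for your own setup:

```tsx
// Hedged sketch of the frontend side: a Next.js page (pages router) pre-rendered
// at build time from content pulled out of a decoupled WordPress backend.
import type { GetStaticProps } from "next";

interface WpPost {
  id: number;
  title: { rendered: string };
}

export const getStaticProps: GetStaticProps<{ posts: WpPost[] }> = async () => {
  const res = await fetch("https://example.com/wp-json/wp/v2/posts?per_page=5");
  const posts: WpPost[] = await res.json();
  // Regenerate the static page in the background at most once per minute
  return { props: { posts }, revalidate: 60 };
};

export default function Blog({ posts }: { posts: WpPost[] }) {
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id} dangerouslySetInnerHTML={{ __html: post.title.rendered }} />
      ))}
    </ul>
  );
}
```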
Does a Headless CMS Affect Rankings?
The important thing to remember is that using a headless CMS is not necessary. And to be fair, it does have its drawbacks. The lack of out-of-the-box control over the output, along with the need for a JavaScript framework and custom code to manage the presentation layer, can be a significant impediment for some.
The good news is that you don’t have to dump your current content management system if you ever decide to go headless. You can still use it in many cases. If you do, the speed and performance benefits may help rankings, whether directly or indirectly.
(For example, if you use WordPress, a popular option is a plugin that decouples your CMS and uses a prebuilt Gatsby framework to deliver the frontend.)
Search engine optimization for headless architectures is often referred to as “JavaScript SEO.” But JavaScript SEO is really just a form of technical SEO. In other words, it’s focused on making sure the website is crawlable and indexable.
The issue is renderability. When a page loads in a user’s browser, JavaScript renders the final output the user sees. But Google doesn’t render the page the same way a user does. While it does use its own rendering engine, the results may vary.
The goal, therefore, is to optimize the version Google sees and not just the version users see. For this reason, there are several ways to check how search engines render websites. There’s also the option to pre-render pages for search engine crawlers specifically.
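One common pre-rendering pattern (sometimes called dynamic rendering) is to detect crawler user agents on the server and hand them already-rendered HTML. Here is a rough sketch using Express; the bot list and the getPrerenderedHtml() helper are hypothetical stand-ins for your own setup:

```typescript
// Rough sketch of "dynamic rendering": crawlers get pre-rendered HTML, regular
// users get the normal JavaScript app. The bot list and getPrerenderedHtml()
// are hypothetical stand-ins.
import express from "express";

const app = express();
const BOT_PATTERN = /googlebot|bingbot|duckduckbot|baiduspider/i;

// Hypothetical helper: would return cached, fully rendered markup for a URL
async function getPrerenderedHtml(path: string): Promise<string> {
  return `<html><body><!-- pre-rendered markup for ${path} --></body></html>`;
}

app.use(async (req, res, next) => {
  const userAgent = req.headers["user-agent"] ?? "";
  if (BOT_PATTERN.test(userAgent)) {
    res.type("html").send(await getPrerenderedHtml(req.originalUrl));
  } else {
    next(); // fall through to the regular JavaScript-driven frontend
  }
});

app.listen(3000);
```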
It goes without saying that, if you have robots directives that block crawlers from accessing your JavaScript (such as disallowing JS files or the folders containing them), you’re shooting yourself in the foot. So be careful. It seems obvious, but this is a mistake we commonly encounter.
Other JavaScript SEO Considerations
Three key areas also need attention:
- Link Behaviour
- Interactive Content
- Crawl Budget
1. Link Behaviour
Rendered links should behave like traditional links. As long as they appear in the DOM as HTML (i.e., using anchor tags and attributes), Google will find them, crawl them, and follow them. But JavaScript-only links can be a little tricky.
If the links need human interaction to appear or activate (so-called “pseudo links” triggered by onclick events), Google may ignore them. This can negate important internal-linking SEO benefits by skipping pages or disregarding anchor text.
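To illustrate the difference, here is a small sketch in plain TypeScript: the first link is a real anchor element with an href that crawlers can discover and follow, while the second only navigates through a click handler (the URLs are placeholders):

```typescript
// A crawlable link: a real <a> element with an href in the rendered DOM,
// which search engines can discover and follow.
const realLink = document.createElement("a");
realLink.href = "/services/seo";
realLink.textContent = "SEO services";

// A risky "pseudo link": navigation only happens inside a click handler,
// so there is no href for a crawler to find or follow.
const pseudoLink = document.createElement("span");
pseudoLink.textContent = "SEO services";
pseudoLink.addEventListener("click", () => {
  window.location.href = "/services/seo";
});

document.body.append(realLink, pseudoLink);
```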
2. Interactive Content
Similarly, watch out for content that appears only after a user interacts with the page. Event-based content is fine, as long as it is not relevant for SEO purposes, does not contribute to the page’s overall quality, and does not impede the user’s experience.
The reason is simple: Google may not see this content and therefore will not index it much less rank it. Google is lazy. When it crawls a page, it will not press all the buttons and pull all the levers just to see what’s there. Plus, too much hidden content may be considered “cloaking,” which is frowned upon.
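Here is a short sketch of the distinction, with placeholder element IDs and a hypothetical endpoint: content fetched only after a click may never be seen by a crawler, whereas content already rendered into the DOM and merely toggled remains part of the page:

```typescript
// Risky for SEO: this content doesn't exist anywhere until someone clicks,
// so a crawler that never clicks may never index it. The endpoint is hypothetical.
document.querySelector("#load-specs")?.addEventListener("click", async () => {
  const res = await fetch("/api/product-specs");
  document.querySelector("#specs")!.innerHTML = await res.text();
});

// Safer: the content is rendered into the DOM up front and merely toggled,
// so it is part of the page whether or not anyone interacts with it.
const details = document.querySelector<HTMLElement>("#shipping-details");
document.querySelector("#toggle-shipping")?.addEventListener("click", () => {
  if (details) details.hidden = !details.hidden;
});
```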
3. Crawl Budget
Finally, keep in mind that search engines only want to do three things on your site: crawl, fetch, and leave. This is because they only have a certain budgeted amount of time and resources to spend. After all, search engines like Google have trillions of other pages to crawl and fetch.
Typical HTML websites are no problem. But headless websites add an extra step: crawl, render, fetch, and leave. That extra step can become quite resource-intensive.
If the site takes a lot of resources to load, or if your JavaScript hinders a page’s ability to load (i.e., it blocks the rendering process or slows it down), Google may simply leave before fetching everything it’s supposed to.
So keep the above three things in mind. There are many testing tools out there that can help identify these issues and offer suggestions on how to fix them.
So Then, What is Headless SEO?
I prefer to call it “headless SEO” because JavaScript SEO alone does not consider other things such as content, keywords, internal linking, HTML tags, meta and structured data, and more.
Moreover, not all JS-powered websites use headless CMSes.
So essentially, headless SEO consists of:
- JavaScript SEO (or technical SEO);
- And on-page SEO (or rendered page SEO).
(Of course, there’s also off-page SEO such as backlinks, digital PR, content marketing, map listings, and so on. It’s still important. But headless SEO really boils down to the other two.)
Most on-page SEO best practices apply to headless websites just as they do to static HTML sites. But much like JavaScript SEO, it’s important to be aware of how the content shows up in the frontend and whether it’s rendered correctly for both users and search engines.
With most headless CMSes, the core content of a page you create in the backend will show up the same on the frontend. So most on-page SEO elements, such as HTML tags, headings, links, images, and so on, can be managed there.
But non-core page content may be affected.
For example, if you use WordPress, many of the plugins will lose their functionality in a headless environment. Specifically, any plugins that affect the presentation layer will become useless, including any SEO plugins.
(Admin plugins or plugins that only affect the backend will remain functional. One benefit is that running fewer plugins improves the overall performance of the CMS.)
However, some plugins like Yoast SEO offer a headless option through WordPress’ REST API. REST, or “representational state transfer,” is a set of rules that allows access to data through a series of basic actions (like “GET,” “POST,” “PUT,” and “DELETE”).
What it means is that SEO data provided by the plugin, which is not part of the core content, can be accessed, retrieved, and injected into the page by JavaScript. Other plugins offer similar functionality, such as making head tags available through the REST API. Otherwise, they must be manually coded into the presentation layer.
But whether you use a plugin or not, simply ensure that the framework supports these HTML tags and renders them. In general, this includes any optimized HTML tags that are not part of the core content, such as meta tags and structured data.
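As a rough sketch of what that can look like on the client, here is one way to inject SEO head tags from data the CMS exposes over its API; the SeoData field names are illustrative, not the exact shape any particular plugin returns:

```typescript
// Rough sketch: injecting SEO head tags on the client from data the CMS exposes
// over its API. The SeoData shape is illustrative, not any plugin's exact output.
interface SeoData {
  title: string;
  metaDescription: string;
  canonical: string;
  schema: object; // structured data to emit as JSON-LD
}

function applySeo(seo: SeoData): void {
  document.title = seo.title;

  const description = document.createElement("meta");
  description.name = "description";
  description.content = seo.metaDescription;
  document.head.appendChild(description);

  const canonical = document.createElement("link");
  canonical.rel = "canonical";
  canonical.href = seo.canonical;
  document.head.appendChild(canonical);

  // Structured data goes in as a JSON-LD script tag
  const jsonLd = document.createElement("script");
  jsonLd.type = "application/ld+json";
  jsonLd.textContent = JSON.stringify(seo.schema);
  document.head.appendChild(jsonLd);
}
```

In practice, many frameworks handle this at build time or on the server instead, which avoids depending on Google rendering the JavaScript before it can see the tags.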
Other than that, other on-page SEO best practices apply. For example:
- Create original, quality content users want and find useful.
- Structure the content in a logical, easy-to-read manner.
- Make sure that it is factual, transparent, and reliable.
- Use relevant, supporting visuals where appropriate.
- Incorporate topical keywords throughout the content.
- Add relevant keywords within landmarks (like headings).
- Link content together to create topical relationships.
- Finally, consider how the content meets the user’s intent.
This applies to any website, headless or not.
Don’t Lose Your Head Over It
There are three key benefits to a headless CMS: First, it offers a content-first approach rather than a frontend-first one. Second, and definitely the reason it has become so popular, it improves the user experience by improving speed and performance.
Third, it improves SEO.
Content is by far the most important aspect of ranking well. By focusing on the content and improving the user experience, a headless CMS not only makes it easier for users to consume that content but also makes it more appealing to search engines.
There are some drawbacks too, and chief among them is the lack of functionality that traditional CMSes offer. For this reason, more frontend work is needed to implement SEO (this time, the pun wasn’t intended).
Additionally, the number-one issue to be aware of is that JavaScript must render the page before the content can be found, fetched, and used. This applies to search engines as much as it does to users. The thing to remember is that Google may do it differently and at different times.
But once that’s covered, then the benefits of a headless CMS — benefits to the author, the user, and the search engine — may far outweigh any of the drawbacks.