Javascript SEO: What is SSR and CSR? Advantages and Disadvantages



Post by mstlucky8072 »

As a reader of this article, you already know that websites are built from three components: HTML, CSS and JavaScript. You also know that, as technology has advanced, front-end development technologies have advanced with it, and websites have become almost indistinguishable from mobile applications. We owe this progress to the many modern front-end libraries, with AngularJS, Vue.js and React as the flag bearers.



While advanced, app-like websites are attractive and useful for users, the same cannot be said for search engines. Crawling and indexing websites whose content is rendered with JavaScript is a rather complex process for them. Let's take a closer look at that process.

What is JavaScript SEO?
JavaScript SEO is, in essence, all the work done to ensure that websites whose content is largely rendered with JavaScript are crawled, indexed and ranked smoothly by search engines.

Modern front-end libraries appear on websites in two basic ways, depending on whether the HTML version of the page is generated in the browser or on the server. This is exactly where we SEO specialists start to pay attention. Let's get to know these two methods a little more closely:

Client-side rendering (CSR):
In this technique, which came into our lives with modern browsers, the website responds to a connection request with a very small HTML document. This response contains only the JavaScript code that will create the page's content; until that JavaScript is executed, the page is effectively blank. The browser downloads and runs these JavaScript files, builds the content and presents the web page to us.
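A minimal sketch can make this concrete. The HTML shell, the `#root` element and the `renderApp` function below are all invented for illustration; they stand in for whatever a real CSR framework would generate:

```javascript
// Sketch: the kind of near-empty HTML shell a CSR site returns.
// The page names and IDs here are illustrative, not from the article.
const initialHtml = `<!DOCTYPE html>
<html>
  <head><title>My Shop</title></head>
  <body>
    <div id="root"></div>          <!-- empty until the script runs -->
    <script src="/bundle.js"></script>
  </body>
</html>`;

// Roughly what /bundle.js would do in the browser: build the content itself.
function renderApp(products) {
  return products.map(p => `<li>${p}</li>`).join("");
}

// A crawler that does not execute JavaScript sees none of the content:
console.log(initialHtml.includes("Red Shoes")); // false
// The content only exists after client-side rendering:
console.log(renderApp(["Red Shoes", "Blue Hat"]));
```

The key point for SEO is that the initial response carries no indexable text; everything the user eventually reads is produced by the script.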



Server-side rendering (SSR):
It can basically be defined as doing on the server the job the browser would otherwise do. For each connection request that reaches it, the server returns the entire page content as readable HTML. Even before any JavaScript runs, we see readable content, although the page does not yet have the dynamic behaviour the JavaScript provides. The scripts then execute, and the page gains the advantages offered by modern front-end libraries.

What Does Google Think About JavaScript?
Google holds an event called I/O every year, where it presents its latest technologies and innovations, and GoogleBot is naturally among them. For years, GoogleBot crawled billions of pages on the internet using the rendering engine of Google Chrome version 41, which had a hard time understanding and rendering modern JavaScript libraries. As a definitive solution to this problem, Google announced at I/O 2018 that it was working on having GoogleBot render web pages with the current version of Google Chrome. In May 2019, it officially announced the switch to the then-current Chrome version, v74.

The current GoogleBot can now understand pages built with JavaScript. However, another problem arises here: executing JavaScript is very expensive. Imagine for a moment that you are Google; isn't it quite annoying that power consumption rises and crawling efficiency falls as the processing load grows? These technological developments are therefore not very economical for Google. Now let's see how Google manages this cost.

How Does Google Crawl JavaScript Sites?
Google uses a method called two-stage crawling to crawl pages built with JavaScript. So what is this method? Let's explain it briefly.
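The post breaks off here, but the two-stage idea it names can be sketched in outline: in the first wave the raw HTML is indexed without running any JavaScript; the page then waits in a render queue until resources are available, and the rendered result is indexed in a second wave. Everything below (function and variable names, the queue shape) is an invented illustration, not Google's actual implementation:

```javascript
// Illustrative sketch of two-stage crawling; all names are hypothetical.
function crawl(rawHtml, renderFn) {
  // Stage 1: index the raw HTML immediately, without executing JavaScript.
  const firstWaveIndex = rawHtml;

  // Stage 2: the page enters a render queue; when rendering resources
  // free up, the JavaScript is executed and the result is indexed too.
  const renderQueue = [rawHtml];
  const secondWaveIndex = renderFn(renderQueue.shift());

  return { firstWaveIndex, secondWaveIndex };
}

// For a CSR page, the raw HTML is nearly empty; content appears only
// after rendering, i.e. only in the second wave.
const raw = '<div id="root"></div>';
const result = crawl(raw, html => html.replace("</div>", "Hello SEO</div>"));
console.log(result.firstWaveIndex.includes("Hello SEO"));  // false
console.log(result.secondWaveIndex.includes("Hello SEO")); // true
```

This is why, on a purely client-rendered site, freshly published content can take longer to show up in search: it is only visible to the second wave.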