Hi,
I’ve noticed that at the moment the collection data for an item is rendered from the template on the client side (using JavaScript).
What I would like is to have the data included in the markup itself (currently, the HTML markup of each page contains only the template, not the content).
View the page source and search for any text from the post content: you will notice that the content is not in the HTML (only the template is). The content is generated dynamically on the client side.
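To make the distinction concrete, here is a minimal sketch of how client-side template rendering typically works (hypothetical names and placeholder syntax, not SiteJet’s actual code): the HTML shipped to the browser contains only placeholders, and JavaScript fills them in after the page loads.

```javascript
// Hypothetical client-side renderer: the server ships only this template
// string; the real content arrives later as JSON and is injected by JS.
const template = `
  <article>
    <h1>{{title}}</h1>
    <p>{{body}}</p>
  </article>`;

// Replace each {{key}} placeholder with the matching field from `data`.
function renderTemplate(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in data ? data[key] : match
  );
}

// In the browser this would run after fetching the post's data:
const post = { title: "My Post", body: "Actual article text." };
const html = renderTemplate(template, post);
// A crawler that does not execute JS only ever sees `template`,
// never `html`.
```

The point is that the final `html` string exists only in the browser’s memory; a crawler fetching the raw page gets the `template` with the `{{…}}` placeholders still in place.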
From an SEO perspective, I think it is important to have the content inside the page itself.
I don’t want to end up with 100 blog posts that all have the same HTML content and differ only in their meta tags.
What you have in the <head> part are tags and meta tags that describe what the page is about: title, description, other meta tags, and so on.
Now, what you have inside the <body> tag is the page content, structured using HTML tags. Basically, this is where your article content goes, wrapped in the appropriate tags like h2, h3, h4, p, article, and so on.
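As an illustration of what fully rendered markup looks like, here is a hedged server-side sketch (a hypothetical `renderPage` helper and made-up post data, not anything SiteJet generates) that puts both the meta tags and the article content into the HTML before it is sent to the browser:

```javascript
// Hypothetical server-side sketch: build the final HTML so the article
// content is already inside <body> when a crawler fetches the page.
function renderPage(post) {
  return `<!DOCTYPE html>
<html>
<head>
  <title>${post.title}</title>
  <meta name="description" content="${post.description}">
</head>
<body>
  <article>
    <h1>${post.title}</h1>
    <p>${post.body}</p>
  </article>
</body>
</html>`;
}

const page = renderPage({
  title: "Marketing Basics",
  description: "A short intro to marketing.",
  body: "Article text containing the keyword marketing.",
});
// The keyword now appears in the meta tags AND in the body content,
// so even a crawler that does not run JS can read it.
```

With markup like this, the describing tags in `<head>` and the actual content in `<body>` are both present in the raw HTML response.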
In SiteJet, on any Collection Item page, all the appropriate meta tags are added inside the <head>, but the Collection Item content is not inside the <body> tag; it is added when the page is loaded by the browser, using JS.
Basically, when the Google Bot indexes your page, it reads the existing HTML (head + body); it doesn’t run/interpret the JS on the page.
This approach is fine as long as your website is not content-focused.
For instance, if you want to rank better for a specific keyword in your article, like “marketing”, you will need to use this keyword not only in the meta tags but also in the content (with the appropriate tags like h1, h2, p, and so on).
Thanks for raising your concern in such a detailed manner. You’re right that the actual blog post content is loaded asynchronously, and your concern is about the possible effect on Google rankings. I can assure you that there is no reason to worry about it. We can see an increasing trend toward SPAs on the web, so modern search engine robots are aware of this and actually run JS when indexing. Otherwise they wouldn’t be able to index SPAs and other JS-driven websites at all.
I think the Sitejet Blog is a good example that proves this, as it is also built with Sitejet Collections. Let’s pick this article as an example. If we search for its exact URL on Google, we’ll see the article in the SERP as follows:
The description shows actual text from the blog post that is not within the meta description, so the actual post content is indexed. You can also search for a part of the text and find the article.
Hi @malte,
Thanks for your reply … I was aware that Google added support for SPAs (however, this is not yet universal across other search engines like Bing and DuckDuckGo) … but I guess most people only care about Google … and yes, everyone else is slowly catching up.