Page speed is one of the few things you can tweak as a website owner to improve the experience of your site’s users.

According to reports, Google has begun testing a speed report tool within Google Search Console. While it is not yet available to all users, Google is slowly rolling it out to the general public.

Where can you find these reports?

The speed report is found under “Enhancements”. It helps web administrators find sections and URLs that have page speed issues. While it is still listed as experimental, it promises to be a very useful tool for monitoring the overall speed performance of a site.

What info does it provide?

The report starts with a split between mobile and desktop speed. You can then dive deeper into checking how many pages are “slow”, “moderate”, and “fast”. From there, Google provides an even deeper view by listing the pages organised by their status.

According to Google’s blog, the report uses real-world data from the Chrome User Experience Report as well as data from “a simulated load of a page on a single device and fixed set of network conditions”.

Data at the moment is limited; in fact, most of the sites we monitor are not yet covered. We presume that marrying the two data types and presenting them to the general public may take some effort and time. Nonetheless, once it is fully operational, this tool and its sophisticated data sources will give web owners a better grasp of how their sites are actually performing.

Backlinking has always been a controversial topic in the world of digital marketing. Before Google set loose the Penguin Spam Filter in April 2012, ranking on Google was ‘easy money’. Website owners could buy their way into the top ranking positions by adding more and more links. Top websites had link profiles full of ‘garbage’ or ‘spam’ links. Link farming was big business: websites with the same intent of gaining rankings would exchange links with one another (even if they were unrelated) just for a ranking boost. SEO was largely about creating spam links to manipulate rankings.

Google caught on to this gamification of rankings, and the Google Penguin Spam Filter is now part of Google’s Core Algorithm. Working in real time, it can penalise a website as soon as spam links appear.

The effectiveness of Penguin and the new ranking criteria that have been added (mobile-friendliness, page speed, etc.) have somewhat overshadowed links as an important ranking factor. This raises the question, “Is backlinking still important in 2019?”

In the first episode of Google’s #AskGoogleWebmasters series, Webmaster Trends Analyst John Mueller addressed whether linking out is good for SEO.

In the video, Mueller explained that linking out is still important as a reference, especially if it helps a user find more information on a topic or check sources. The importance of backlinks was further underlined by his warning against link schemes, links in advertisements, and links within user-generated content (such as the comments section). To some degree, their importance is confirmed by the methods Google has created to detect unnatural link building schemes and the severity of the penalties when caught using them.

The Takeaway. 

Links are critical for your site, and you should regard outbound links as being just as important as inbound links.

Always make sure that you are linking for relevance and credibility. Ask the following questions: Are your links useful to your readers? Do they point to reputable sources? 

Should you continue with Link Building Activities? 

Yes, ranking still relies heavily on how many reputable sites link to you. The operative word here is “reputable”. Just as you should only link out to relevant and credible sites, make sure the sites linking to you are of similar quality.

How can you gain reputable and credible links? 

Quality content is what naturally attracts links, so make sure that you generate content that fulfils the needs of your readers. Instead of spending money on ‘link building schemes’, consider shifting strategy by investing in quality content creation. While websites can be built overnight, creating a long-standing brand following is a marathon that requires sustained investment and effort in content development and relationship-building.

Here are some links to help you learn more about link building:

Linkbuilding Tips for 2019 – https://www.linkresearchtools.com/case-studies/link-building-techniques/

Creating the best content for link building – https://searchengineland.com/how-to-choose-the-best-content-format-for-link-building-305519

Link investigation and link building tools – https://searchengineland.com/link-building-tools-you-may-not-know-about-303699

On September 1, the search engine juggernaut Google will no longer support the use of the noindex directive in the robots.txt file. Listing noindex in robots.txt has been common practice, but it is an ‘unpublished rule’ that professionals in the industry have been relying on for a long time.

Google posted: “In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we’re retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019. For those of you who relied on the noindex indexing directive in the robots.txt file, which controls crawling, there are a number of alternative options.”

This move to ‘outlaw’ the directive was a result of Google’s latest update on open-sourcing its production robots.txt parser. Upon examination of their parser library, they also found some unsupported robots.txt rules. They reported, “Since these rules were never documented by Google, naturally, their usage in relation to Googlebot is very low. Digging further, we saw their usage was contradicted by other rules in all but 0.001% of all robots.txt files on the internet. These mistakes hurt websites’ presence in Google’s search results in ways we don’t think webmasters intended.”

Google listed the following alternatives to using this directive:

(1) Noindex in robots meta tags: Supported both in the HTTP response headers and in HTML, the noindex directive is the most effective way to remove URLs from the index when crawling is allowed (see the example after this list).

(2) 404 and 410 HTTP status codes: Both status codes mean that the page does not exist, which will drop such URLs from Google’s index once they’re crawled and processed.

(3) Password protection: Unless markup is used to indicate subscription or paywalled content, hiding a page behind a login will generally remove it from Google’s index.

(4) Disallow in robots.txt: Search engines can only index pages that they know about, so blocking the page from being crawled often means its content won’t be indexed. While the search engine may also index a URL based on links from other pages, without seeing the content itself, we aim to make such pages less visible in the future.

(5) Search Console Remove URL tool: The tool is a quick and easy method to remove a URL temporarily from Google’s search results.
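To illustrate alternative (1) above, the same noindex signal can be placed either in the page’s HTML or in its HTTP response. A minimal sketch follows; exactly how you add a response header depends on your server or CMS.

In the page’s HTML head:

  <meta name="robots" content="noindex">

Or as an HTTP response header:

  X-Robots-Tag: noindex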

Why should we care about this? This announcement is a clean-up of hearsay that had become standard practice. While it may seem irrelevant on the scale of a single website, this update, coupled with the open-sourcing of the Google robots.txt parser, will likely affect the whole ecosystem of the internet and the search industry in the years to come. If you are using this directive on your sites, take it out and replace it with the alternatives mentioned above. If you want to learn more about this, visit this blog.

Other related resources can be found here: https://support.google.com/webmasters/answer/6332384?ref_topic=1724262

A website can be likened to a book. It has a cover, a table of contents and numerous pages. While some books, and even some websites, stay the same, most websites are constantly changed and updated with new content.

Following the same book analogy, search engines like Google, Bing, and DuckDuckGo are the librarians that identify, catalogue, sort, and organise all the books into meaningful order so people can find information quickly.

While crawl spiders follow algorithms to go through your site’s content efficiently, there are still some things that may affect how well your site is indexed.

We will share some best practices that will help improve your site’s “crawlability”.

1. Allow Spiders to crawl your site.

Review your robots.txt, .htaccess and sitemaps. It is critical that all your important content is crawlable. Review your robots.txt to see if you are blocking any important pages. On the other end, make sure to disallow crawling of unimportant pages; search engines do not need to access and crawl pages like logins or 404 pages.
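As a minimal robots.txt sketch (the blocked paths below are hypothetical examples, not rules every site needs):

  # Allow everything except low-value pages
  User-agent: *
  Disallow: /wp-login.php
  Disallow: /cart/

  # Point crawlers to your XML sitemap
  Sitemap: https://www.example.com/sitemap.xml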

You can also block Google’s spiders from indexing a page with this meta tag: <meta name="googlebot" content="noindex">. Alternatively, you can send a noindex HTTP response header, such as "X-Robots-Tag: noindex", to de-index a page.

Lastly, keep your XML sitemap submission up to date whenever you make big changes to your site. Make sure you submit it in Google Search Console under Crawl > Sitemaps.
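For reference, a bare-bones XML sitemap entry looks like this (the URL and date are hypothetical):

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>https://www.example.com/about-us</loc>
      <lastmod>2019-07-01</lastmod>
    </url>
  </urlset>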

2. Keep it Simple with HTML.

While Google’s spiders have grown in sophistication, pages written in HTML are the easiest type of pages to index. If you have a large site with thousands of pages, it can be to your benefit to lessen the load on your servers by avoiding heavy JavaScript, Flash or XML pages. Use small HTML files whenever possible to optimise both load and crawlability.

3. Are you creating redirection loops? Audit Redirects.

Review your site for 301 or 302 redirect chains. These chains are very inefficient for search engines and can even create redirect loops. Limit redirects to no more than two in a row to avoid locking search engines in crawling loops.
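For example, on an Apache server (the paths are hypothetical), a two-hop chain can be flattened so every legacy URL points straight at its final destination:

  # Inefficient: the crawler has to follow two hops
  Redirect 301 /old-page /newer-page
  Redirect 301 /newer-page /final-page

  # Better: point both legacy URLs directly at the final page
  Redirect 301 /old-page /final-page
  Redirect 301 /newer-page /final-page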

4. Review HTTP Errors and Fix Them.

Look for HTTP errors as well as duplicate page errors on your site, and spend time fixing these issues to keep your site free of crawl errors.

Make sure that you use rel="canonical" to tell bots which version of a page is the main one. This is important if you serve different versions of a page, such as separate mobile URLs. For separate mobile URLs, the usual setup is to keep the desktop page as the canonical version, add a rel="alternate" link on it pointing to the mobile URL, and point the mobile page's rel="canonical" back to the desktop page.
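A minimal sketch for a site with separate desktop and mobile URLs (both URLs are hypothetical):

  <!-- On the desktop page: https://www.example.com/page -->
  <link rel="canonical" href="https://www.example.com/page">
  <link rel="alternate" media="only screen and (max-width: 640px)" href="https://m.example.com/page">

  <!-- On the mobile page: https://m.example.com/page -->
  <link rel="canonical" href="https://www.example.com/page">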

5. Do you have dynamic content? Review URL Parameters.

If your chosen CMS generates a lot of dynamic URLs, it can hinder search engines from crawling all your content and may also create duplicate content errors. To tell Googlebot how your CMS parameterises your content, go to Google Search Console under Crawl > URL Parameters. This helps ensure that the generated pages do not waste the crawl budget of the search bots.
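As an illustration (hypothetical URLs), a single category page can spawn many parameterised addresses that all serve roughly the same content:

  https://www.example.com/shoes
  https://www.example.com/shoes?sort=price-asc
  https://www.example.com/shoes?sessionid=8f3a

Declaring how parameters such as sort or sessionid behave in the URL Parameters tool, or adding a rel="canonical" tag on each variant that points at the clean URL, helps keep Googlebot from treating every variant as a separate page.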

6. Do you have content in multiple languages? Use hreflang tag.

If you have content in different languages, make sure you use hreflang tags to mark these pages correctly. This ensures that your local-language content is found by search engines and that you do not create duplicate content errors. Even if you are just using a single language on your site, you should still set an hreflang tag for the site’s language.
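A minimal sketch, placed in the head of each language version of a page (the URLs and language codes are hypothetical):

  <link rel="alternate" hreflang="en-au" href="https://www.example.com/en-au/">
  <link rel="alternate" hreflang="fr" href="https://www.example.com/fr/">
  <link rel="alternate" hreflang="x-default" href="https://www.example.com/">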

Find out more on how to set-up multiple languages tags here: https://support.google.com/webmasters/answer/189077?hl=en

If you have any questions on this topic or on search engine optimisation, comment on this article or drop us a message here.

 

A short guide on how to use 301 redirects to migrate your old website to a new site without losing traffic

What is a 301 redirect?

A 301 redirect is simply a piece of code telling browsers and website spiders where you have permanently moved your old web pages, or where they can find similar information. It may seem an irrelevant task for websites with fewer than 10 pages; however, these lines of code can make or break a site with thousands of pages migrating to a new design.
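Under the hood, a 301 is just an HTTP response. A simplified sketch of what a browser or spider sees when it requests a moved page (the URLs are hypothetical):

  GET /old-page HTTP/1.1
  Host: www.example.com

  HTTP/1.1 301 Moved Permanently
  Location: https://www.example.com/new-page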

Why is a 301 necessary for site migration?

While information on the internet is movable and thus temporary, it is critical to make sure that information you previously published can still be found by search engines, especially if you have renamed the URL. Changing any URL without a proper redirection will render the old address “dead”, returning a 404 “does not exist” error.

Losing old links to a 404 (page not found) error is like demolishing a well-established bridge to a town without giving commuters due notice or alternative routes to reach their destination.

Organisations usually move to a new site in the hope of growing their market share, yet migration can instead be the reason they lose traffic overnight. A brand’s following may not actually diminish, but the number of visitors reaching your web pages can be disrupted as the signals pointing to your old pages break during the changeover.

Below are some technical tips on 301 redirections that can help you transition smoothly to a new website without losing relevant traffic:

Document your current site’s structure

Make sure to audit the structure of your site and take note of all your internal and external links before you move over. Use tools like Screaming Frog to get a comprehensive list of your existing pages, images and links, as well as all the meta information of your pages.

Develop your new website structure

Once you have a comprehensive record of how your site is crawled by search engines, you have two options for structuring your new site.

Option 1 – Use the existing URLs as the basis of your new site structure and mirror the URL structure exactly on the new site.

This means that you keep your pages and URLs one-to-one, with no alterations to the way the URLs are written. For example, if you have “website.com/about-us”, your new page will be “website.com/about-us”, not “website.com/about” or “website.com/aboutus”.

Option 2 –  Use the current structure as a guide to match your new pages.

Most of the time, there is a need to alter the current URLs. This often happens if you want to change the way information is organised on your site or if you are moving to a new domain altogether. In this scenario, you will need to match the current URLs with your new set of URLs using 301 redirections.

For example, if you have a blog entry about “how to buy cars” at the URL carscars.com/how-to-buy-cars, you will need to 301 redirect this page to your new URL “carscars.com/a-guide-on-buying-cars” or to your new domain “carbuyer.com/how-to-buy-cars”; otherwise your visitors will not be able to find the page through the old link.
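On an Apache server, the mapping above can be written in .htaccess roughly like this (a sketch only; your CMS or hosting platform may provide its own redirect mechanism):

  # The post was renamed on the same domain
  Redirect 301 /how-to-buy-cars /a-guide-on-buying-cars

  # Or the post moved to a new domain entirely
  Redirect 301 /how-to-buy-cars https://carbuyer.com/how-to-buy-cars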

Side notes:

  • Do not simply 301 redirect all pages to your new homepage.

It can be tempting to shortcut this step by redirecting every old link to the new home page. It seems like an easy fix, but doing this will make your site confusing to web spiders and, more importantly, frustrating to your users.

  • Match pages logically

If you do not have an apples-to-apples match between new pages and old pages, you can still 301 redirect old pages to similar pages that carry the same kind of information. However, accuracy in matching information is important: you do not want to send users and website spiders to a page on shoes for men if your old page was all about shoes for women.

  • Create a useful 404 page

Instead of simply providing a generic notice, use the opportunity to offer related links or a search bar that visitors can use to find similar information on your site. Aim to create a 404 page that turns a missing page into a user satisfaction opportunity.
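On Apache, routing visitors to such a custom page is a one-line .htaccess rule (the file name is a hypothetical example):

  # Serve a helpful custom page for anything that no longer exists
  ErrorDocument 404 /custom-404.html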

Afterthoughts:

After you have implemented your 301 redirects on the new site, make sure to keep an eye on 404 links through Google Analytics and to re-crawl your site for 404 pages that may appear in the future. It is good practice to keep 404 pages to a minimum, whether you are migrating to a new site or not.

More info on 301 redirects

https://yoast.com/create-301-redirect-wordpress/
https://www.bruceclay.com/blog/how-to-properly-implement-a-301-redirect/

More info on 404 pages

http://charlesriverinteractive.com/check-404-errors-right-way/

 

The internet is a galaxy of web pages dominated by WordPress sites. Like Coca-Cola, WordPress is a household name among Content Management Systems. Reports claim that WordPress powers about 60% of CMS-based websites, and that these sites make up about 30 percent of the entire internet. While WordPress sites are made to be easy to use, they can slow down as you continue to build and add new content. Real-world page speed is an important metric both for usability and SEO, so it is critical to routinely make adjustments to your site.


Below is a process that can help you improve your WordPress site page speed:

Get a baseline number
It will be hard for you to track improvement if you do not have a starting point. You can use the following tools to find the number you will improve on:
https://tools.pingdom.com/
https://gtmetrix.com/

Check your engines
Make sure you have the latest updates for your WP core and all your plugins. While this is more of a web security task, updates are often created not only to prevent vulnerabilities but also to patch code issues. Your site will undoubtedly benefit from this activity.
Although unrelated to patching, it is likewise important that you install SSL on your site. Google has started to push sites to use SSL as a basic security feature, and adding encryption will prevent your site from being flagged as “not secure”.

Serve Small Images
Images are the most common reason a site slows down. Audit your images and see if you can resize them; often, image dimensions are larger than needed. If dimensions are not the issue, sift through your images and review their file sizes. High-quality images often mean larger files, and larger files affect load speed. Fortunately, there are a number of tools (Tinypng.com and others) that can compress images without noticeably degrading display quality.

Implement Browser caching
Browser caching is one of the items most often recommended by Google PageSpeed Insights. Enabling it lets you tell your visitors’ browsers how long they should keep specific resources on their devices, which makes repeat visits load faster because many resources are simply read from the local cache. For example, you can set a cache lifetime of 2 weeks for images and longer periods for items that rarely change, such as style sheets. Your developer can set this up directly in the .htaccess file or via a plugin like W3 Total Cache.
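As a rough sketch, assuming an Apache server with mod_expires enabled (the lifetimes simply mirror the example above and should be tuned to your own site):

  <IfModule mod_expires.c>
    ExpiresActive On
    # Keep images for 2 weeks
    ExpiresByType image/jpeg "access plus 2 weeks"
    ExpiresByType image/png "access plus 2 weeks"
    # Style sheets that rarely change can be cached longer
    ExpiresByType text/css "access plus 1 month"
  </IfModule>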

Go Mini

Extra lines of code, even blank ones, add bytes to your files, and those added bytes add load time. Minify your HTML, CSS style sheets, and JavaScript to further speed up your site. This is a technical task that competent developers can accomplish easily, but it is very important to review all pages after minification to ensure that they still function properly.

Some parting notes:
There is a balance between speed and display or functionality. It would be great to have both, but if you must choose, in some cases it is better to sacrifice a bit of page speed for higher-quality images and functionality. A simple HTML site will load faster than a sophisticated image-heavy site, but it may not fulfil the needs of the customer.


Our team at NFT provides page speed updates with our website maintenance services. Contact us today to learn more.