Permanent 301 Redirect Setup in .htaccess

Redirect old domain to new domain

RewriteEngine on
RewriteCond %{HTTP_HOST} ^example\.com$ [NC,OR]
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://example.org/$1 [L,R=301]

Force Redirect to NON-www version

RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

Force Redirect to WWW version

RewriteEngine on
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]

Redirect from HTTPS to HTTP

RewriteEngine on
RewriteCond %{HTTPS} on
RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Redirect from HTTP to HTTPS

RewriteEngine on
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Redirect files with a certain extension ( e.g. .aspx to .php )

RewriteEngine On
RewriteCond %{REQUEST_URI} \.aspx$ [NC]
RewriteRule ^(.*)\.aspx$ /$1.php [R=301,L]

Redirect Individual Files

Considering Two Situations

  1. Redirecting a File within the Same Domain
  2. Redirecting a File to Another Domain

Redirect File within the Same Domain

Redirect 301 /file1.htm /file2.htm

Redirect File to Another Domain

Redirect 301 /file1.php http://example.org/file2.php

Net Promoter Score

Customers rate on a 0 – 10 scale; the Net Promoter Score itself ranges from -100 to +100

NPS ( Net Promoter Score ) = % Promoters – % Detractors

Promoters

  • (score 9-10) are loyal enthusiasts

Passives

  • (score 7-8) are satisfied but unenthusiastic customers

Detractors

  • (score 0-6) are unhappy customers
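
A quick worked example, using hypothetical survey figures:

  • 200 responses: 120 score 9-10, 50 score 7-8, 30 score 0-6
  • % Promoters = 120 / 200 = 60%
  • % Detractors = 30 / 200 = 15%
  • NPS = 60 – 15 = 45 ( Passives count towards the total responses but not in the formula )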

WordPress Robots.txt Sample

WordPress Robots.txt

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/

Adding Sitemaps to WordPress Robots.txt

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/

Sitemap: http://www.example.com/post-sitemap.xml
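
If the site exposes more than one sitemap, each can be listed on its own Sitemap: line ( the file names below are illustrative ):

Sitemap: http://www.example.com/post-sitemap.xml
Sitemap: http://www.example.com/page-sitemap.xml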

Explanation

Allowing all Bots

  • Allowing all Bots to Crawl
User-agent: *
Disallow:

Not Allowing any Bots

  • Not Allowing any Bots to Crawl
User-agent: *
Disallow: /

Block a Folder

User-agent: *
Disallow: /Folder/

Block a File

User-agent: *
Disallow: /file.html

Block a page and/or a directory named private

User-agent: *
Disallow: /private

Block All Sub Folders starting with private

User-agent: *
Disallow: /private*/

Block URLs ending with .asp

User-agent: *
Disallow: /*.asp$

Block URLs which include a Question Mark (?)

User-agent: *
Disallow: /*?*

Block a File Type

User-agent: *
Disallow: /*.jpeg$

Block all Paginated pages which don't end with "?"

  • http://www.example.com/blog/? ( Allowed )
  • http://www.example.com/blog/?page=2 ( Blocked )

This helps block paginated / query-string pages from crawling while still allowing URLs that end with a bare "?"

User-agent: *
Disallow: /*? # block URL that includes ?
Allow: /*?$ # allow URL that ends in ?

Using Hash

# The hash ( # ) is used for comments; everything after it on a line is ignored by crawlers

Bots / User Agents

Top 10 Bots

  • bingbot
  • Googlebot
  • Googlebot Mobile
  • AhrefsBot
  • Baidu
  • MJ12bot
  • proximic
  • A6
  • ADmantX
  • msnbot/2.0b

Individual Crawl rules for each Bot

User-Agent: Googlebot
Allow: /

User-Agent: Googlebot-Mobile
Allow: /

User-Agent: msnbot
Allow: /

User-Agent: bingbot
Allow: /

# Adsense
User-Agent: Mediapartners-Google
Disallow: / 

# Blekko
User-Agent: ScoutJet
Allow: / 

User-Agent: Yandex
Allow: / 

# CommonCrawl
User-agent: ccbot
Allow: / 

User-agent: baiduspider
Allow: / 

User-agent: DuckDuckBot
Allow: / 

User-Agent: *
Disallow: /

Cross Domain Tracking [ Google Analytics ]

Cross Domain Tracking in Google Analytics is used to track the same Visitor across different domains

Usage

Tracking the same visitor navigating across

  1. Various Domain names
  2. Various Sub Domains
  3. http and https of the same/different domain
  4. 3rd Party Shopping Carts
  5. IFrame

Linking

( Example )

Consider two domains :

  1. example-1.com : Primary Domain
  2. example-2.com : Secondary Domain

Primary Domain

<script>

(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXXXXX-Y', 'auto', {'allowLinker': true});
ga('require', 'linker');
ga('linker:autoLink', ['example-2.com'] );
ga('send', 'pageview');

</script>

Secondary Domain

<script>

(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXXXXX-Y', 'auto', {'allowLinker': true});
ga('require', 'linker');
ga('linker:autoLink', ['example-1.com'] );
ga('send', 'pageview');

</script>

Three or More Domains

ga('linker:autoLink', ['example-2.com', 'example-3.com'] );
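
The line above extends the linker:autoLink call on the primary domain. As a sketch, following the same pattern as the two-domain example, each secondary domain uses the same property ID and lists the other domains in its own autoLink call – e.g. on example-2.com ( example-3.com is an assumed third domain ):

ga('create', 'UA-XXXXXXX-Y', 'auto', {'allowLinker': true});
ga('require', 'linker');
ga('linker:autoLink', ['example-1.com', 'example-3.com'] );
ga('send', 'pageview');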


SEO Cannibalisation

SEO Cannibalisation is the process of consolidating many similar pages, which dilute SEO Value, into a single page.

Similar Pages

  • Many Pages with exactly the same Page Titles
  • Many Pages with content targeting the same keyword

Reason for SEO Cannibalisation

Have too many similar pages?

  • It could confuse Google
  • It could dilute the SEO Value
  • It could decrease the overall Conversion Rate
  • It could confuse Website visitors
  • It lacks focus

So we need to do Keyword Cannibalisation

SEO Dilution

Issues with having too many similar Pages:

  • Internal Linking – Linking to various pages with the same Anchor Text dilutes SEO value
  • Backlinks – Backlinks spread across various similar pages carry less overall value than if they all pointed to a single page
  • Content Quality – Ideas and research about a single topic divided across many different pages dilute content quality
  • Conversion Rate – Similar pages convert at different rates, so the overall conversion rate ends up lower than it could be

Consolidation

Considering the similar pages:

Choosing THE Best Page to Cannibalise ( the page to keep and consolidate into )

A Page with

  • Higher Conversion Rate
  • High Quality Content
  • High Quality Backlinks

Consolidate SEO Value

  • Consolidate Content from the other similar pages into The Single Page
  • Update Internal Links & their Anchor Text to point to The Single Page
  • Update Backlinks from other sites to The Single Page where possible
  • Use a 301 Redirect from the other pages to The Single Page ( example below )
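
For instance, using the same Redirect 301 syntax shown earlier in this document ( the paths below are hypothetical ):

Redirect 301 /old-similar-post-1/ /the-single-page/
Redirect 301 /old-similar-post-2/ /the-single-page/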

Advantages of SEO Cannibalisation

  • Google is no longer confused
  • Solves the issue of SEO Dilution
  • Website Visitors are no longer confused
  • Improves the Overall Conversion Rate

It's Good to FOCUS!!!