After close to a decade of Google penalties, manipulative SEO techniques are effectively dead. Yet many SEO professionals still linger around the basics, whether out of caution or sheer habit, and forgo the massive opportunities on-page SEO offers.

To concentrate on the oft-overlooked elements of on-page SEO, we will skip keyword optimisation, title optimisation, and the significance of tags. Instead, we address a list of problems and solutions for improving search engine rankings using on-page elements in non-traditional ways.

To illustrate, compare a website to a physical store. Off-page SEO is the equivalent of PR efforts and reputation management, while on-page SEO focuses on what the store contains – the cash register, the shelves, and so forth. Each element matters to a different degree, but together they are crucial to the store’s success.

Business owners often insist on quotes for off-page work only, usually focused on content, without any on-page changes being made. While they are confident they have done all that’s necessary, analysis regularly reveals amateur coding and markup capable of adversely impacting rankings.

Advanced implementation of on-page techniques empowers a website and can have an almost immediate effect on rankings, much as a good web host like domains4less.co.nz/web-hosting does. Off-page efforts take far longer to yield results and depend on too many variables outside your control.

Internal link structure

It is common knowledge that About and Contact Us pages typically have strong PR and Domain Authority scores, even without external links. This is because of internal links: these pages appear in the site-wide menus, so nearly every page on the site links to them.

Crawlers have a focused objective, which we can serve if we identify it precisely. When a site is crawled, the bot navigates across pages via these internal links, gleaning information until the crawl is over or it times out.

Improving the internal link structure is an effective way to optimise on-page elements, and it almost always improves rankings by the next crawl. The well-known principle here is that anchors tell stories: read the anchors on a given site consecutively and the site’s theme is revealed.

Problem – Without a well-defined internal link structure, the search algorithm assigns less-relevant search phrases to the site’s priority landing pages. The result is a homepage attributed with most of the phrases we would prefer to rank for, yet few of them make it to the top of the SERPs.

Solution – Where the site has a blog, start by linking around 20-30 posts to the significant landing pages. Each subsequent month, add 15-20 new posts that delve deeper into the main subject of each landing page, and link from these new posts to older posts and vice versa. This internal linking system should highlight the principal intersections in the site, using old indexed pages and newer pages alike.

Make your content interesting and of excellent quality so that the links are effective. As long as anchors link from relevant pages and describe the target page, they are fine to use.
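To illustrate, descriptive internal anchors in HTML might look like the lines below; the URLs and anchor text are placeholders, not pages from any real site:

<!-- In a new blog post: a descriptive anchor pointing to the priority landing page -->
<a href="http://www.example.com/services/web-design/">professional web design services</a>

<!-- In the landing page or an older post: a link back to the supporting post -->
<a href="http://www.example.com/blog/choosing-a-web-designer/">how to choose a web designer</a>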

Next, locate the strongest pages on the website and link from them to the most important landing pages (where possible). This builds on existing strength to support crucial assets. Scrapebox is a good tool for this: simply type in the site’s URL and click Start Harvesting to list all pages on the site, then check the strength and popularity of each by clicking Check PageRank.

Confusing root folder

This is one of the most overlooked elements of a website, and one very few people pay regular attention to. Google crawls a site for only a few seconds at a time, so it is important to make the most of that crawl time.

Problem – Site builders, optimisers, and site owners continually dump files into the root folder and subfolders at random. Every file carries some weight, and the main issue is that these junk files dilute the relevant information: old file versions, trial files, unused DOC/PDF files, backup directories, temporary files, and multimedia files (which should sit in designated directories).

These files may no longer be in use, but they are still considered when your site is crawled, either preventing you from making the most of your allocated crawl resources or, worse, diluting the important information found on the site.

Solution – To clean up unused files, create a directory named “old-files” and move all unused files into it. In addition, sort media files into sub-directories, updating every address in the code wherever it is used. Making the changes and ensuring no link is broken is enough; Xenu Link Sleuth is a great tool for this step. Finally, update the robots.txt file with this directive: Disallow: /old-files/
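For reference, the corresponding robots.txt block would look like this, assuming the archive directory is named old-files as above:

User-agent: *
Disallow: /old-files/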

Duplicate content

Penalties are hardly ever the result of duplicate content, except where the aim is to manipulate the search engine. Duplicate content will instead attract a devaluation, and its effects are negligible compared with an outright penalty.

Each search query collects thousands of indexed results, and Google gives access to the first 1,000 (100 pages of search results). People mostly check the first few pages of results, based on the common belief that relevant content is only found on those pages. Duplicate or similar content is therefore an issue for the Google algorithm, and Google does what it can to enhance the user experience.

The list of 1,000 is sorted by relevance and authority. Other filters aim to keep the quality high, and one of them is geared towards recognising similar and duplicate content.

Problem – Duplicate content forces your site to work much harder. In theory, duplicated content could still make it to the first page of search results if other ranking factors outweigh the competition; it will simply require more time and money.

Solution – Taking the common scenarios one problem and solution at a time:

Let us assume the homepage URL has been written in various ways: www.site.com, site.com/index.php, www.site.com/index.php, and site.com.

Solution – Start by putting the following lines in the .htaccess file, replacing domain with your site’s domain name; replace php with html as needed:

RewriteEngine On

# Redirect requests for /index.php to the root URL
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.php
RewriteRule ^index\.php$ / [L,R=301]

# Redirect the www version to the non-www version
RewriteCond %{HTTP_HOST} ^www\.domain\.com$ [NC]
RewriteRule ^(.*)$ http://domain.com/$1 [L,R=301]

Finally, set the preferred form of the address in Google Webmaster Tools (the www prefix is optional).

In another scenario, you can have both a secure version and a regular version of the site – https and http.

Solution – When shifting from regular display (http) to secure display (https), insert the following code into the .htaccess file, replacing domain with your site’s domain name:

RewriteEngine On

# Redirect any request that is not already on port 443 to the https version
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^(.*)$ https://www.domain.com/$1 [NC,R=301,L]

Then again, there is the case of duplicate titles and page descriptions. Google offers you this data and expects you to do something about it.

Solution – Go to Google Webmaster Tools, review the duplicate titles and descriptions it reports, and rewrite them on the site so each page carries unique ones. While there, examine any other apparent recommendations.
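As a hypothetical illustration, each page’s head section should carry its own title and description along these lines (the page name and wording are placeholders):

<!-- Unique title and description for one specific page -->
<title>Blue Widgets Prices and Specifications | Example Store</title>
<meta name="description" content="Compare prices and specifications across our range of blue widgets, with free delivery on every order." />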

Internal pages with parameters are another problem. Many sites serve several versions of a URL for tracking and analytics, and there are other reasons why multiple URLs might lead to the same page. Google partly helps to sort this out with a Webmaster Tools feature.

Solution – Define, ahead of time, the parameters Google should use to determine whether duplicate content is present and how to treat those addresses. Defining the parameters early matters because, once such URLs are indexed, it takes a while before Google does away with them completely.
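Until those variants drop out of the index, a canonical tag on each parameterised version can also point Google back to the clean URL. In this sketch, the page path and the utm_source tracking parameter are placeholders:

<!-- Served at http://www.example.com/page/?utm_source=newsletter -->
<link rel="canonical" href="http://www.example.com/page/" />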

  • Find duplicate content using Copyscape and deal with it accordingly.
  • Eliminate look-alike content by unifying similar pages. Where possible, create more thorough, longer-form content that covers a topic with a sub-topic on each page. Make sure any page retired in the process points to the page that now holds the more robust information about the topic.
  • Use rel="canonical" to tell Google that an internal page is a duplicate of another and which version is preferred. Some example code is shown below:

<link href="http://www.example.com/canonical-version-of-page/" rel="canonical" />

Conclusion

On-page SEO is vital to search rankings, and Google provides a comprehensive set of tools that let any website owner or webmaster keep things under control. This is what separates the pros from the minnows, so use these tips today and soar. See you on page one.