Complete SEO Audit Guide for 2018



August 29th, 2017 / Clayton C

Conducting a comprehensive SEO audit is no small task, but there are very good reasons for making this an annual project. Chief among them is the fact that search engines – especially Google – are constantly adjusting their algorithms and ranking factors, which means what worked in your SEO strategy last year might not work this year.

Regular audits help future-proof your website and keep you informed of its status, because you always have a benchmark to refer to.

The digital touchpoint is also critical in almost every sale, so a risk mitigation strategy for the gateway to your business – your website – is integral to its sustained success.

A technical site audit is the essential starting point when reviewing a site and optimising it for performance. The first principle is to do no harm, so you need to know and understand a website before you make any changes. Performing an initial SEO audit provides a benchmark, and regular audits then give you an opportunity to adjust your SEO strategy: making adjustments that will see you benefiting again from any algorithmic and guideline changes made by search engines. It is also a chance to identify problems with your site structure, content, and other technical issues which could affect your visibility on search engine results pages (SERPs).

Audits identify issues early and let you plan and budget for the future.

Read on to learn what issues you should be looking for and what measures you should be putting in place, all updated for the coming year in line with the latest standards.

Tools

There are numerous free and premium tools you can use when doing an SEO audit, although premium tools will always give you access to richer information. Some popular tools include:

  • Screaming Frog’s SEO Spider
  • Google Search Console
  • Bing Webmaster Tools
  • Google Analytics

At a minimum we use all of the above.

Getting Started

Before starting on the actual audit, there are a few things you need to do, and check.

First try to access your site in a browser using http:// (no www), https:// (still no www), and then using only the www version. All variations should end up loading only one preferred domain/URL. Next do a manual site search in both Google and Bing: type site:yoursitename.com into both search engines. Look at how many results are returned and compare that with what is shown in Google Search Console, Bing Webmaster Tools, Google Analytics, and the site crawl you’ll be doing later. Also scan the results to check that it is only your brand/site that is being returned, and that the home page appears somewhere on the first page of results.
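If you prefer to spot-check these domain variations programmatically, here is a minimal sketch that follows each variant and reports where it ends up. It assumes the third-party Python requests library, and example.com and the preferred URL are placeholders for your own domain.

# Minimal sketch: confirm every domain variant resolves to one preferred URL.
# Assumes the third-party "requests" library; example.com is a placeholder domain.
import requests

PREFERRED = "https://www.example.com/"  # your chosen preferred domain/URL

VARIANTS = [
    "http://example.com/",
    "https://example.com/",
    "http://www.example.com/",
    "https://www.example.com/",
]

for url in VARIANTS:
    resp = requests.get(url, allow_redirects=True, timeout=10)
    hops = " -> ".join(f"{r.status_code} {r.url}" for r in resp.history)
    flag = "OK" if resp.url == PREFERRED else "CHECK"
    print(f"[{flag}] {url} -> {resp.url} ({hops or 'no redirects'})")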

Now, using your favourite crawl tool, initiate a full site crawl. Depending on the size of your website, this might take some time, but you will use the results of the crawl throughout your audit. In this article I will be using Screaming Frog’s SEO Spider, but you could use any of the other popular crawlers if they can do the following:

  • Highlight missing Google Analytics code
  • Highlight missing Google Tag Manager code
  • Highlight use of schema markup
  • Highlight pages blocked by robots.txt
  • Highlight client (4xx) and server (5xx) errors
  • Provide extra info on redirects, specifically the number of redirects for each URL, and the status code of the redirect
  • Highlight broken internal and external links

If you are using Screaming Frog, select Configuration from the main menu, followed by Custom>>Search. Add the following filters:

Screenshot of the Screaming Frog Custom Search panel
  1. To check for missing Google Analytics code, add analytics\.js to the blank field of your next available filter, and change the dropdown to “Does Not Contain”.
  2. To check for missing Google Tag Manager code, add <iframe src="//www.googletagmanager.com/ to the blank field of your next available filter, and change the dropdown to “Does Not Contain”.
  3. To check for schema markup, add itemtype="http://schema\.org/ to the blank field of your next available filter, and change the dropdown to “Contains”.

 

Results for all of these custom searches – and any others you add – can be accessed via the Custom tab in the main panel of Screaming Frog.

  1. Review any pages that do not include the Google Analytics code and the Google Tag Manager code, and add the code to these pages if necessary (a spot-check sketch follows this list).
  2. We will return to the custom search for schema markup later.
  3. The default configuration (Configuration>>Spider) should be fine.
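As a rough alternative to the custom search filters above, the minimal sketch below fetches a handful of pages and reports whether each of the three markers is present. It assumes the requests library, the URL list is a placeholder, and sites that load Google Analytics via gtag.js or structured data via JSON-LD will need different markers.

# Minimal sketch: replicate the three custom-search filters outside Screaming Frog.
# Assumes the "requests" library; the URL list is a placeholder for your own pages.
import requests

MARKERS = {
    "Google Analytics": "analytics.js",          # sites using gtag.js need a different marker
    "Google Tag Manager": "//www.googletagmanager.com/",
    "Schema markup": 'itemtype="http://schema.org/',  # JSON-LD markup will not match this
}

urls = ["https://www.example.com/", "https://www.example.com/about/"]

for url in urls:
    html = requests.get(url, timeout=10).text
    for name, marker in MARKERS.items():
        status = "found" if marker in html else "MISSING"
        print(f"{url}: {name} {status}")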

 

Finally, open up the Google Search Console and Bing Webmaster Tools in your browser and check the following:

  • Are there any new messages relating to your site’s health? You will refer back to these as you work through the audit, making sure you don’t miss any issues.
  • Check the Index Status in Search Console. Look for any sudden, sharp changes to the number of pages indexed, and also compare the number of indexed pages with what your crawl tool and manual site search report.
  • Check that you have specified a Preferred domain under Site Settings in Search Console.

Looking for Technical Issues

SSL Usage

Google confirmed as far back as 2014 that they consider the use of SSL certificates a positive ranking factor. In 2017 they ramped up their efforts to encourage the use of SSL certificates by marking HTTP pages as “not secure” when accessed using the Google Chrome browser. This latest move might not affect your rankings directly, but it could result in a drop in traffic, with customers reluctant to use non-secure websites – even if you don’t collect any sensitive information from your customers.

Screenshot of the Screaming Frog Protocol tab

1. If you already have an SSL certificate in place, ensure that all your HTTP pages redirect to HTTPS. You can check this in Screaming Frog by selecting the Protocol tab and changing the Filter to HTTP. Look for pages with a status code other than 301 or 302 and update them to use the HTTPS protocol.

How to compile an Insecure Content report in Screaming Frog

2. Check for mixed content issues using Screaming Frog by selecting Reports>>Insecure Content in the main menu. This will create a CSV document listing all uses of mixed content. You can ignore all HREF types if you followed the previous step, focusing instead on other types of content such as images. Again, this won’t necessarily affect your ranking, but it could result in “Not secure” warnings in Chrome.

Verify Sitemaps

Sitemaps are an essential part of your website because they help search engines understand how your site content is organised, making it more efficient for them to crawl and index. But it is quite easy to forget about updating your sitemap when updating your website structure; either adding or removing pages, or even just changing file names.

  1. Check that sitemaps have been submitted via the Google Search Console and Bing Webmaster Tools.
  2. Compare how many URLs were submitted to how many were indexed, then look for any warnings, and also check what the last crawl date is.
  3. If you’re not sure the sitemap is still correct, generate a new one using Screaming Frog, or an online generator. Remember to submit your new sitemap to Google and Bing once you have added it to your server.
  4. Only include indexable pages in your sitemap (the sketch below shows a quick way to spot-check this).

If your site has separate mobile and desktop versions, or is also available in different languages under the same domain, indicate this in your sitemap using rel="alternate" tags.
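If you want a quick, code-based sanity check of the sitemap you have just submitted, the minimal sketch below parses sitemap.xml and spot-checks the listed URLs. It assumes the requests library, the sitemap location is a placeholder, and the noindex test is only a rough heuristic.

# Minimal sketch: parse sitemap.xml, count the listed URLs, and spot-check that
# they return 200 and are not marked noindex. Assumes the "requests" library;
# the sitemap URL is a placeholder. A sitemap index file would list child
# sitemaps here instead of pages, so adapt accordingly.
import xml.etree.ElementTree as ET
import requests

SITEMAP_URL = "https://www.example.com/sitemap.xml"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)
urls = [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]
print(f"{len(urls)} URLs listed in the sitemap")

for url in urls[:20]:  # spot-check the first 20; check everything for a full audit
    resp = requests.get(url, timeout=10)
    # Rough heuristic only: looks for a robots meta tag and the word "noindex"
    lowered = resp.text.lower()
    noindex = 'name="robots"' in lowered and "noindex" in lowered
    if resp.status_code != 200 or noindex:
        print(f"CHECK {url}: status {resp.status_code}, possible noindex={noindex}")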

Check robots.txt

Another file that is often overlooked when it comes to SEO audits, is the robots.txt file. This small, plain text file has the power to control which user-agents (robots) can crawl your site, and even how they crawl your site. The latter could be anything from which pages and directories to ignore, to instructing them to wait a specific period of time before crawling.

Review your robots.txt file during every SEO audit to ensure you aren’t blocking any user-agents from crawling pages and directories you want indexed. KeyCDN have a detailed list of the 10 most popular user-agents, including a mention of the different user-agents used by Google.

  1. The filename is case-sensitive, with user-agents only recognising robots.txt – all lowercase.
  2. The file should be placed in the main directory of your website, and of any sub-domains you have set up.
  3. Use the disallow directive to block/hide the following types of content/pages/directories:
  • Pages with duplicate content
  • Pagination pages
  • Dynamic product and service pages
  • Account pages
  • Admin pages
  • Shopping carts
  • Chats
  • Thank you pages
  4. In Screaming Frog you can check if any pages are blocked by selecting the Response Codes tab, and changing the Filter to Blocked by Robots.txt. The Googlebot is now able to render JavaScript, so always check that you aren’t blocking any JavaScript in your robots.txt file (the sketch after this list shows how to test specific URLs programmatically).
  5. Your XML sitemap should be listed in your robots.txt file.
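Because robots.txt mistakes are easy to make and costly, it helps to test a few representative URLs directly. The sketch below uses Python’s standard-library robots.txt parser; the URLs are placeholders for pages you want crawled and pages you want blocked.

# Minimal sketch: confirm that pages you want indexed are crawlable and that
# private areas are blocked. Uses only the standard library; URLs are placeholders.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

should_be_allowed = ["https://www.example.com/", "https://www.example.com/blog/"]
should_be_blocked = ["https://www.example.com/admin/", "https://www.example.com/cart/"]

for url in should_be_allowed:
    if not rp.can_fetch("Googlebot", url):
        print(f"WARNING: {url} is blocked for Googlebot")

for url in should_be_blocked:
    if rp.can_fetch("Googlebot", url):
        print(f"WARNING: {url} is NOT blocked for Googlebot")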

Use of Schema Markup

The use of schema markup (or structured data) on your website is optional, but there are many benefits to using it, some of which can influence your rankings. Structured data doesn’t only give search engines valuable information relating to your business and content, it can also influence how you appear on SERPs. This, in turn, can lead to higher click-through rates (CTR), which can positively influence your ranking.

Google SERP showing use of Schema markup
  1. Check if any of your pages include schema markup (a spot-check sketch follows this list). At the very least you should be using the following schema types:
    • Organization
    • WebSite
    • BreadcrumbList
    • SiteNavigationElement
  2. Speak to your developer about adding relevant schema markup to your website. You can also do this yourself using Structured Data Markup Helper or Google Tag Manager.
  3. Use the Google Search Console (Search Appearance>>Structured Data) to see if there are any errors in your structured data, and correct as necessary.
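If your schema markup is implemented as JSON-LD (the format Google Tag Manager and most plugins produce), the minimal sketch below pulls it out of a page and confirms it parses, printing each @type it finds. It assumes the requests library, the URL is a placeholder, and microdata (itemtype attributes) will not show up here.

# Minimal sketch: extract JSON-LD blocks from a page, check they parse, and list
# the schema types declared. Assumes the "requests" library; URL is a placeholder.
import json
import re
import requests

url = "https://www.example.com/"
html = requests.get(url, timeout=10).text

blocks = re.findall(
    r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    html,
    re.DOTALL | re.IGNORECASE,
)

for block in blocks:
    try:
        data = json.loads(block)
    except ValueError as exc:
        print(f"Invalid JSON-LD on {url}: {exc}")
        continue
    items = data if isinstance(data, list) else [data]
    for item in items:
        if isinstance(item, dict):
            print(f"{url}: found schema type {item.get('@type')}")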

Detecting Crawl Errors

Screaming Frog makes it quite easy to detect crawl errors, with the ability to export reports showing all pages that returned 4xx and 5xx response codes. 4xx are client errors, with “404 Not Found” being the most common, while 5xx are server errors. But both mean there are certain parts of your website that are not crawled.

Screenshot of Screaming Frog Client Error report
  1. Select Bulk Export>>Response Codes from the main menu of Screaming Frog. Export reports for both Client Error (4xx) Inlinks and Server Error (5xx) Inlinks.
  2. The reports will list problematic URLs, along with the response code (Status Code). Identify the cause, and fix.
  3. Many client errors can be fixed simply by updating certain links, or through the use of valid 301 redirects.
  4. Server errors can be fixed by your developer, with assistance from your hosting provider.

 

A poorly maintained SEO strategy can have a significant impact on your business, and recovering from some faults takes longer than others. So ideally you would be performing a full SEO audit at least every six months. But with some planning, there are some checks you can carry out more frequently, with crawl errors being one.

And this doesn’t need to be done using Screaming Frog, or any other premium SEO tools. The data provided by the Google Search Console is sufficient for finding and fixing crawl errors on a weekly basis.

Google Search Console showing crawl errors
  1. Crawl Errors, found under the Crawl menu in Search Console, highlights two types of errors: Site Errors and URL Errors.
  2. Site Errors only include extra info if there have been any within the past 90 days, and these highlight:
    • DNS Errors – most DNS errors don’t affect Google’s ability to crawl and index your site, but they could be affecting your visitors. Work with your host and developer to identify root causes if this is happening too frequently.
    • Server Connectivity – often caused by misconfigured or overloaded servers. Use Fetch as Google first to see if Google is able to return your homepage without problems.
    • Robots.txt Fetch – this occurs when a robots.txt file exists, but Google was not able to access it. Check that your robots.txt file isn’t misconfigured, and that your host isn’t blocking valid user-agents.
  3. URL Errors highlight specific pages with errors, with common errors being:
    • Server Errors – Google couldn’t access these pages due to specific server errors. Look at the Status Code and the frequency before taking action; many of these errors are temporary.
    • Soft 404 – these are often caused by pages with so little content on them that Google assumes they are meant to be 404 pages. They are safe to ignore if there aren’t too many, and they aren’t critical pages. Otherwise investigate further and see whether they need more content, or to be redirected to other pages.
    • Not found – mostly hard 404 errors, these are pages that don’t exist anymore, but are either still linked to from your website, listed in your sitemap, or linked to from an external site. Clicking on each URL will show you which of these apply, and fixing internal links should be as simple as updating links and your sitemap. You don’t necessarily want to lose any benefit external links bring, so either contact the site operator to update their link, or add a 301 redirect.
    • Access Denied – commonly caused by Google trying to crawl pages that require a login. These pages should include the noindex and nofollow directives, or better yet, use the disallow directive in your robots.txt file.

Checking for crawl errors on a weekly basis means you can always ensure that all areas of your site that need to be indexed are being crawled without any problems. And it also makes it easier to address server issues that might also be impacting on visitors.

Fix Faulty Redirects

Use Screaming Frog to export a report listing all redirects on your website: navigate to Reports in the main menu, then select Redirect Chains. Open the report in Excel and look for the following (a quick spot-check script follows the list):

Screenshot of Screaming Frog Redirect report
  • All URLs with more than one redirect: a redirect chain. These are often caused by changes to the site structure, along with a move to using SSL. One URL is redirected to another URL, and later the second (newer) URL is redirected to a third URL. Although a redirect is almost invisible to visitors, redirect chains can affect site speed, and create crawl errors. Fixing redirect chains is often as simple as removing the middle redirects: everything between the first and the final URL. But it helps to carefully analyse each chain – if any of the URLs in the middle of the chain show up in search results, create a new redirect from that URL to the final URL.
  • Redirect loops, where URL one redirects to URL two, which redirects back to URL one. Fix this either by removing the redirect completely, or by only removing the second redirect.
  • 302 Redirects, also known as temporary redirects. Unless the redirect truly is only meant to be temporary, change these to 301 redirects.
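For a quick spot-check outside Screaming Frog, the minimal sketch below follows each redirecting URL, flags chains with more than one hop, and catches loops. It assumes the requests library, and the URL list is a placeholder for addresses taken from your own redirect report.

# Minimal sketch: flag redirect chains, loops, and temporary (302) redirects.
# Assumes the "requests" library; the URL list is a placeholder from your crawl.
import requests

redirecting_urls = [
    "http://www.example.com/old-page/",
    "http://example.com/products/",
]

for url in redirecting_urls:
    try:
        resp = requests.get(url, allow_redirects=True, timeout=10)
    except requests.TooManyRedirects:
        print(f"LOOP: {url} never resolves to a final URL")
        continue
    if len(resp.history) > 1:
        chain = " -> ".join(f"{r.status_code} {r.url}" for r in resp.history)
        print(f"CHAIN ({len(resp.history)} hops): {chain} -> {resp.status_code} {resp.url}")
    elif resp.history and resp.history[0].status_code == 302:
        print(f"302 (temporary) redirect: {url} -> {resp.url}")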

Reexamining Site Structure

Having a site structure that is well-organised – and easy to understand – is important for both SEO and user experience (UX). Structured navigation, with clearly named files, folders, and links, not only makes it easier for users to find what they are looking for, it also makes it easier for search engines to crawl and organise your content. And the golden rule – since the very early days of the internet – has always been that important content can be found within four clicks.

Screenshot of Screaming Frog Site Structure details
  1. While crawling your site, Screaming Frog analyses your site structure. This data can be accessed in the right-hand panel on the main screen, under the Site Structure tab.
  2. Look at the Top 20 URLs returned for your site, along with the categories you have assigned for each. Are these the most important pages/categories of your website, and are the names logical and clear?
  3. Next look at the Number of URI, which indicates the number of internal pages that can be accessed from each top URL/Category. They should be fairly similar, and any large differences need to be investigated.
  4. Next look at the Depth breakdown, which shows how many clicks it takes to reach certain pages. Investigate pages that are very deep to ensure that they don’t contain any important content.
  5. Select the URI tab from the main Screaming Frog panel, and change the Filter to Over 115 Characters. Although search engines don’t impose strict penalties for long URLs, you may lose credit for file names longer than five words. But long URLs could also indicate files that are positioned too far from the main page, buried inside multiple subfolders. This is quite common when using WordPress as your CMS, and setting a URL structure that not only creates a folder for each category, but also for each year, month, and date. You do want to use subfolders to better organise your content, but fewer is definitely better.
  6. The Filter option under the URI tab in Screaming Frog can also help you identify other undesirables in your URLs, including duplicates and URLs that use uppercase characters and underscores (the sketch after this list automates some of these checks). If necessary, use hyphens to separate words, not underscores, spaces, or any other characters. Using uppercase characters can result in duplicate page issues, with crawlers sometimes interpreting /Resources/ as a different URL to /resources/.
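The same URL hygiene rules are easy to automate. The minimal sketch below flags URLs that are overly long, deeply nested, or use uppercase characters and underscores; the URL list is a placeholder for addresses exported from your crawl, and the thresholds simply mirror the guidelines above.

# Minimal sketch: flag long, deeply nested, uppercase, or underscore-heavy URLs.
# Uses only the standard library; the URL list is a placeholder from your crawl.
from urllib.parse import urlparse

urls = [
    "https://www.example.com/Resources/2017/08/29/some_very_long_post_title/",
    "https://www.example.com/services/seo-audits/",
]

for url in urls:
    path = urlparse(url).path
    depth = len([seg for seg in path.split("/") if seg])
    issues = []
    if len(url) > 115:
        issues.append("over 115 characters")
    if depth > 4:
        issues.append(f"{depth} folders deep")
    if any(c.isupper() for c in path):
        issues.append("uppercase characters")
    if "_" in path:
        issues.append("underscores")
    if issues:
        print(f"CHECK {url}: {', '.join(issues)}")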

Mobile Optimisations

As the amount of internet activity via mobile devices has increased, so too has search engine optimisation switched from optimising your site in general, to optimising specifically for mobile devices. While Google’s recommended design pattern for mobile devices is responsive design, you won’t be penalised if the mobile version of your website is served dynamically, or via a separate URL. Usability and accessibility are more important.

And optimising for mobile matters now more than ever. Google is testing a mobile-first index, with plans to switch to this index in 2018. This means that in the future, how your site ranks on Google will also be determined by its performance, accessibility, and overall experience on mobile devices. If your competitors offer a better mobile experience, they could end up appearing above you on search engine results pages.

 

Mobile Friendliness

Both Bing and Google provide easy-to-use mobile-friendliness tests, but the Google tool provides more insight than that of Bing. In addition to testing whether the website is easy to use on a mobile device, Google also tests the site speed which, while always an important consideration, is critical on mobile devices. The Google tool will return some basic info, with the option to request a more detailed report that shows what should be fixed, and how much time this will cut from the loading time.

Optimise & Compress Images

Optimising and compressing images is recommended for all sites, and you can use TinyPNG (it also supports JPEG) to rapidly reduce image file size without affecting the quality. To enable compression on your server, you can either:

Screenshot of cPanel Software panel
  • do it via cPanel: if your host uses cPanel as their server control panel, you can log in, scroll down to the Software section and select Optimize Website. This will allow you to enable compression throughout the site, or only for specific media types.
  • If you don’t use cPanel, you can enable compression on most servers by adding the following to your .htaccess file:

<ifModule mod_gzip.c>
# Enable gzip compression for text-based files (requires the mod_gzip module)
mod_gzip_on Yes
mod_gzip_dechunk Yes
mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
mod_gzip_item_include handler ^cgi-script$
mod_gzip_item_include mime ^text/.*
mod_gzip_item_include mime ^application/x-javascript.*
mod_gzip_item_exclude mime ^image/.*
mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
</ifModule>

Start Using AMP

Accelerated Mobile Pages (AMP) is a publishing format introduced by Google in early 2016. It helps blog and news publishers create optimised versions of their content that load almost instantly on mobile devices – effectively spring-loaded pages on mobile. AMP-optimised content is easily identified in mobile search results, and the benefit to mobile users is that AMP pages don’t only load much faster, they also use less data. While the use of AMP is not a direct ranking factor for Google (though speed is), you could still benefit through higher CTRs and an improved user experience.

Example of Google SERPs using AMP

AMP is not meant to be used throughout your website, only on blog content and news, and more recently e-commerce. And if your blog uses WordPress, implementing AMP is as simple as installing and configuring a single plug-in. For any other uses, including e-commerce, it would be better to involve your developer.

Issues with your Accelerated Mobile Pages will be highlighted in the Google Search Console, under Search Appearance. Most of these will probably relate to structured data, and while AMP doesn’t impact on SEO directly, it would still be worthwhile to correct any structured data issues since they could affect how your AMP results display in search engine results pages.

Identifying Page-Level Issues

Analysing Page Titles

There are several important SEO elements to look at when it comes to page titles, including:

  • Keyword usage
  • Length
  • Duplication
  • Similarity to H1

Select the Page Titles tab from the main panel of Screaming Frog, and cycle through the different Filters available.

 

  1. Missing – self-explanatory, any pages listed here do not have a page title at all. Fix these immediately, following the guidelines outlined in the steps that follow.
  2. Duplicate – a common cause of duplicate page titles is very long content that has been split over multiple pages, as well as the archives on CMSs like WordPress. Fix this by either adding page numbers to the end of the page title, or, in the case of blog archives in WordPress, adding “noindex” to your robots meta tag.
    Screenshot of the Yoast SEO noindex setting

    If you’re using the Yoast SEO plugin in WordPress, this can be done by simply enabling a single setting. Pages with “noindex” in the robots meta tag are not indexed by search engines, so adding this to your archive index only affects the index, not the content listed in the index.
  3. Character/Pixel length – this is split over four filters that reveal which of your page titles are either too long or too short. The primary concern with page titles that are too long is that they will be cut off on SERPs – aim for 55-60 characters, and less than 568 pixels. What is displayed in Google search results is determined by the number of pixels, and because some characters are wider than others, it is possible to have a page title that is 55 characters long, but still just too wide to display fully. Too few characters could mean your page title is too vague, and doesn’t send a clear enough signal about the page content to both search engines and prospective customers. Some page titles are naturally long (think of your blog content), so use your own judgement in determining what to trim and what to leave as is.
  4. Same as H1 – Moz recommends page titles that use the Primary Keyword – Secondary Keyword | Brand Name format. While strict adherence to this format isn’t always possible (especially when it comes to blog posts), all page titles should still manage to include the brand name, and a relevant keyword close to the beginning. Switching to a format similar to this will immediately eliminate any warnings about H1 tags being the same as the page title.
  5. Multiple – most occurrences of multiple page titles are accidental, because they offer no benefit. But since Google and other user-agents only recognise the first occurrence of the <title> tag, there is a chance that it could negatively affect your visibility and ranking – especially if it is the wrong page title, or not properly optimised.
  6. All – use the All filter to audit all your page titles for keyword usage, remembering that keywords closer to the beginning of the page title carry more weight. Use the Export button to the right of the filter to export the report in CSV format, so that you can more easily navigate through the report in Excel (the sketch below shows one way to scan the exported file).
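As a starting point for that review, the minimal sketch below reads the exported CSV and flags missing, overlong, and duplicate titles. The column names ("Address", "Title 1") are assumptions and may differ between crawler versions, so adjust them to match your export.

# Minimal sketch: scan an exported page-title CSV for missing, long, and duplicate
# titles. Column names are assumed and may need adjusting to match your export.
import csv
from collections import defaultdict

titles = defaultdict(list)

with open("page_titles.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        url = row.get("Address", "")
        title = (row.get("Title 1") or "").strip()
        if not title:
            print(f"MISSING title: {url}")
        elif len(title) > 60:
            print(f"LONG title ({len(title)} characters): {url}")
        titles[title].append(url)

for title, pages in titles.items():
    if title and len(pages) > 1:
        print(f"DUPLICATE title '{title}' used on {len(pages)} pages")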

Analysing Meta Descriptions

Once you have identified and fixed problems relating to page titles on your website, you can use Screaming Frog to perform a similar analysis of meta descriptions on all pages. The Meta Description tab is right next to the Page Titles tab, and the Filters and issues you want to look for are almost identical.

Screenshot of Screaming Frog Advanced configuration options
  1. Try running a new crawl in Screaming Frog after correcting any issues found when analysing page titles. First check Configuration>>Spider>>Advanced to ensure that Respect noindex is enabled. This will eliminate any duplicate alerts caused by your archive index.
  2. Even though keywords in the meta description aren’t as important as they once were, they are still relevant. Optimise the full description for people reading it in SERPs, so that it is easy for them to see whether the page is relevant to what they were searching for. A better CTR can influence your visibility and ranking.
  3. Again eliminate any instances of multiple meta descriptions accidentally added to your HTML.

H1 and H2 Usage

These are no longer a critical SEO signal, but including one or two keywords within an <H1> tag is optimal. Though the impact is diminishing, we still find that in limited circumstances an H1 can have a high impact.

  1. Use H1 and H2 to add structure to your pages, and to make them more readable. For the most part, H1 will more or less match your page title, without the inclusion of your brand name. Make sure it is descriptive, and reads like a headline; it is a useful reminder to visitors of what the page content is about, without them having to look at the page title.
  2. Use H2 to split content into smaller pieces, and to guide visitors to the section(s) they are most interested in. If you think of H1 as the title of a book, then H2s are the chapter headings. They should be descriptive, but avoid using too many.

Unnecessary Meta Tags

Certain meta tags have become unnecessary as browsers and search engines have matured. Including them won’t result in a direct penalty, but any unnecessary code increases file size, which can impact the page loading speed. And this is a ranking factor for both Google and Bing.

  1. The meta tags with the most value are meta content type, meta description, meta viewport, and title.
  2. The robots meta tag is only necessary if using any of the no directives. If the robots meta tag is not present, user-agents will simply index and follow, unless instructed otherwise by your robots.txt file. Additionally, the use of noodp and noydir serves no purpose since the DMOZ and Yahoo directories shut down.
  3. Even if the benefit of social media on SEO is still being debated, including meta tags for Open Graph and Twitter Cards at least helps you control how your content appears when shared to Facebook or Twitter. At the very least you will benefit from marketing.
  4. Meta tags like keywords, author, and refresh (use a server-side redirect instead) are no longer necessary, and mostly just increase your file size.
  5. Work with your developer when deciding which other meta tags to ditch, so that you don’t inadvertently get rid of something necessary or essential for your website.

Find Images With Missing Alt Text

Alt text on images not only helps user-agents understand what the images are, it also makes your website a lot more accessible to visitors with disabilities.

  1. Select Bulk Export from the main menu of Screaming Frog, and then Images>>Images Missing Alt Text. Open the CSV file in Excel to see which pages contain images without any alt text. The report highlights the page URL, along with the image URL. Fix this by adding alt text to relevant images, making sure the text is descriptive, and maybe includes a keyword or two.
  2. Not all images require alt text, so for those rare occasions when you happen to include an image more for design than anything else (and it can’t be called via CSS), add an empty alt attribute (see the spot-check sketch after the example below):

<img src="exampleimage.png" alt="">
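For a quick programmatic spot-check of a single page, the minimal sketch below uses Python’s standard-library HTML parser to list images with missing or empty alt text. It assumes the requests library and a placeholder URL; remember that an empty alt attribute can be intentional on purely decorative images, as noted above.

# Minimal sketch: list <img> tags with missing or empty alt text on one page.
# Assumes the "requests" library; the URL is a placeholder. Empty alt text may
# be intentional for decorative images, so treat the output as a review list.
from html.parser import HTMLParser
import requests

class AltChecker(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if not attributes.get("alt"):
                print(f"Missing/empty alt: {attributes.get('src')}")

url = "https://www.example.com/"
AltChecker().feed(requests.get(url, timeout=10).text)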

Uncovering Content Issues

Although there is some overlap in content issues looked at during an SEO audit and a content audit, doing one doesn’t exempt you from doing the other. That is because a content audit requires you to consider your content strategy too, which falls outside of the scope of an SEO audit.

 

Searching for Duplicate Content

There are two types of duplicate content:

  • duplicate content you have created yourself, either accidentally, or by posting some of your content to multiple domains, or through content syndication, and
  • copied or plagiarised content, which happens when another site operator copies content from your site and uses it on their site. The copied content is either used verbatim, or with so few changes made to it that it still resembles the original.

Duplicate content can affect the quality of search results, so in the absence of any signal indicating which version is the original, Google will try to determine this themselves. This becomes a problem if they happen to choose the wrong version. Manual actions are generally only applied to content that is obviously being used to manipulate or deceive.

  1. Copyscape offers two tools for finding duplicate content on the internet: the original Copyscape compares content found on your website to content found elsewhere, while Siteliner only looks for duplicate copy on your own website. Test both tools out using the free service offered, but the premium version is recommended for more comprehensive checks.
  2. The recommended fix for valid duplicate content is to first identify any duplicate content caused by domain issues. This would either be pages accessible with or without the www prefix, or secure and non-secure versions of the same content. Fix this by redirecting the old page(s), and by checking that you have set a preferred domain in the Google Search Console. You can also set a permanent redirect by adding the following lines to your .htaccess file:

    # Ensure mod_rewrite is active for the rules below
    RewriteEngine On

    # Redirect non-www to https + www
    # http://yourdomain.com  becomes  https://www.yourdomain.com
    RewriteCond %{HTTP_HOST} !^www\.
    RewriteRule .* https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    # Redirect non-https to https
    # http://www.yourdomain.com  becomes  https://www.yourdomain.com
    RewriteCond %{HTTPS} off
    RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

  3. For other duplicate content you can add the rel=canonical tag to the original piece of content. Additionally, you may want to consider adding noindex and nofollow directives to the duplicate content.
  4. For unauthorised use or copying of your content, you should first try contacting the site operator and asking them to remove your content, failing which you can ask Google to remove the content from search results.

Assessing Content Quality

Evaluating the quality of the content on each page of your website is best done as part of your content audit. However, there are a few things you can do during your SEO audit in order to identify pages that require immediate attention.

Screenshot of Screaming Frog Internal tab
  1. Select the Internal tab from the main Screaming Frog panel. Change the Filter to HTML, and then export the report as a CSV file.
  2. Open the file in Excel, and sort using the Word Count column. You can also hide or delete any rows with a 301 or 404 Status Code, and pages using the noindex directive.
  3. Highlight pages that don’t use the same layout as the rest of your site. Site elements usually found on most pages include navigation and footers. If your blog uses elements not found on regular pages – such as a sidebar – highlight any blog URLs using a different colour.
  4. Load any regular page of your website in your browser, and do a rough count of words in the common elements: header, navigation, and footers. Do the same for your blog, but include elements only found on your blog: sidebar(s), entry-meta (author, date, categories, etc.), author bios, etc.
  5. Screaming Frog includes all of this in the Word Count figure it pulls, so you need to take this into account when looking at the word count on each page.
  6. Word count on its own is not a ranking factor, but higher word counts usually equate to quality, in-depth content. And quality content attracts links, lower bounce rates, and higher CTRs.
  7. Flag any pages with word counts lower than 500 (the sketch after this list shows a quick way to do this). The goal isn’t to bump these pages up to 500 or more words, but rather to see whether the content is enough, and that it has been properly optimised. This will be done in conjunction with an assessment of the performance of these pages, and where they rank in SERPs.
  8. Even though you checked keyword usage while analysing page title issues, now would be a good opportunity to check again, looking at titles, H1 and H2. A more thorough analysis of keyword usage would be done as part of your content audit.
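To speed up the word-count review described above, the minimal sketch below reads the exported internal HTML CSV, subtracts a rough estimate of the words that appear in shared page elements, and flags thin pages. The column names ("Address", "Word Count", "Status Code") and the boilerplate estimate are assumptions; adjust them to match your export and your template.

# Minimal sketch: flag pages whose adjusted word count falls below 500.
# Column names and the boilerplate estimate are assumptions for illustration.
import csv

BOILERPLATE_WORDS = 150  # rough count of words in the header, navigation, and footer
THRESHOLD = 500

with open("internal_html.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row.get("Status Code") in ("301", "404"):
            continue  # skip redirected and missing pages, as suggested above
        try:
            words = int(row.get("Word Count") or 0)
        except ValueError:
            continue
        adjusted = words - BOILERPLATE_WORDS
        if adjusted < THRESHOLD:
            print(f"Thin content ({adjusted} words excluding shared elements): {row.get('Address')}")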

The perfect way to end off your SEO audit is with a link audit; it is something that can be done on its own, but works equally well as a way to round off an in-depth SEO audit.

 

Internal Links

The easiest way to analyse all internal links is by exporting an inlink report from Screaming Frog. Select Bulk Export from the main navigation, then All Inlinks. The CSV file might be quite large, so be patient when opening it in Excel.

Screenshot of Screaming Frog Inlinks report
  1. Begin by identifying and fixing any broken links. Broken links not only impact on UX, but too many broken links can result in search engines viewing your site as abandoned or poorly maintained.
  2. Next look for any 302 redirects, and establish whether they should remain as is, or be changed to 301 redirects.
  3. Now check to see if all internal links make use of anchor text. Though not required, there is value in including it, and it also makes it easier for visitors and search engines to understand the context of the link. At the same time check to see that the anchor text is logical – and possibly descriptive – and that there is no apparent keyword stuffing.
  4. Back in Screaming Frog, select the Internal tab in the main panel, scroll to the right until you find the columns for Inlinks, Outlinks, and External Outlinks. Sort the External Outlinks column, and check any pages with a high number. Google no longer frowns on a high number of links, but it is still worth checking to see if they are all necessary.

External Links

Follow the same process as described for Internal Links, this time selecting All Outlinks under Bulk Export.

  1. Again you should start by first looking for and fixing broken links. Look for 4xx, 5xx, and 9xx status codes: 5xx status codes usually indicate a temporary server problem, so first see if this is the case before removing any links that return a 5xx status code.
  2. Make sure all links make use of anchor text, and that the text is somewhat descriptive of what is being linked to. Including the name of the website or business you are linking to is better than vague text. Search engine user-agents use this to determine relevancy, while it also gives visitors a good idea of where they will end up. Make sure the link is to the page you actually want to send visitors to, not simply the home page.
  3. If your site includes affiliate links, make sure there aren’t too many, and that they all include the nofollow directive:

<a href="http://www.google.com/" rel="nofollow">Google</a>

Backlinks

Also known as inbound links, backlinks are links to your site from other domains. While backlinks generally help your rankings, low-quality or spammy backlinks have the potential to harm your site’s visibility in SERPs. The Google Search Console generates a report of sites linking to your site (Links to Your Site under Search Traffic), but you should always combine this with a backlink report from other tools. You can export a similar report from Bing Webmaster Tools, found under Reports & Data>>Inbound Links. The tools best suited to this are all paid solutions – Open Site Explorer, Ahrefs, Majestic, and SEMrush – but the results will be superior to anything free.

  1. Combine all the backlink reports you generate in order to create a detailed master list. Remove duplications before you start analysing.
  2. You are looking for broken links, links from sites that have no relevance to your industry, and excessive links from a single domain (especially if they all point to a single page, or from a single page).
  3. Also look at the anchor text used, which can be a good indicator of harmful practices.
  4. The first step in correcting all of these is to contact the business or site operator. For broken links, you either want to give them the correct URL to use or ask them to remove the link. For all other issues, you either want all links removed, or all excessive links removed.
  5. Unfortunately, not everyone will act on your message, and for that, you have the Disavow Links services offered by both Google and Bing. Be careful when using this option, because any mistakes you make could affect valuable links to your site.

Conclusion

Conducting a full SEO audit can take up to 40 hours, and even longer if you’ve never done one before, or have a site with thousands of pages. But it is an essential task if you want to understand your website, then maintain and improve your search engine rankings. And once you have done your first full SEO audit and set a benchmark for the future, you will find that it is possible to break it down into smaller components for future audits, with some elements of SEO requiring regular monitoring and maintenance, while others can be done quarterly, or even every six months. Although this article used Screaming Frog extensively, you should be able to check many of the points covered here using any premium SEO tool.

And because websites should be optimised for humans and not search engines, once you have concluded your SEO audit, why not dive straight into a content audit?




ABOUT THIS ARTICLE

How to perform an in-depth technical SEO audit in 2018. An SEO audit is a comprehensive process and we detail several factors for you.






*Predikkta has sourced several external independent global tools to analyze websites. These tools may on occasion not reflect the internal website analytics, but they are recognised global tools and provide accurate comparative results for measurement against competitors.

**The views in this article are those of the author