Google Search Console is a valuable tool for website owners to track the performance of their site in Google search results. It can also be used to identify and fix errors that may be preventing your site from being indexed or ranking well in search results.
In this article, we’ll discuss common Google Search Console errors and provide some tips on how to fix them.
What Are Google Search Console Errors?
Google Search Console is a free tool provided by Google that allows website owners to monitor their site’s performance in search engine results pages (SERPs). It provides valuable insights into how your website is performing, how Google crawls and indexes your site, and which potential errors could be affecting your website’s SEO.
To see the errors that have occurred on your website, go to the Pages tab under the Indexing section in Search Console’s left sidebar. This page has a section called Why pages aren’t indexed.
The errors listed there are problems that Google detected, which prevent the indexing of the pages where the issues occurred.
Let’s get started with the first error you may encounter:
Server error (5xx)
A server error (5xx) in Google Search Console means that Googlebot couldn’t access your URL. These errors occur when the server fails to process a request, so the page cannot be displayed. 5xx status codes indicate server-side problems, which include the following:
- Server overload: If your server is overloaded, it may not be able to handle the traffic from Googlebot. This can happen if you’re experiencing a sudden spike in traffic, or if your server is not properly configured to handle large amounts of traffic.
- Misconfiguration: If your server is misconfigured, it may not be able to understand the requests from Googlebot. This can happen if your server’s security settings are too restrictive, or if your server’s DNS settings are incorrect.
- Software bug: There may be a bug in your website’s software that is preventing Googlebot from accessing your pages. This can happen if there is a problem with your website’s code, or if your website’s plugins are not compatible with Googlebot.
These types of errors may require expertise in software development, but let’s see what you can do to fix them.
How To Fix
To fix a server error in Google Search Console, you will need to identify the cause of the error and take steps to correct it. Here are some things you can do:
- Check with your hosting provider: If you’re experiencing server errors, it’s important to contact your hosting provider to identify and resolve the issue. Your hosting provider may be able to provide you with more information about the server-side errors and help you resolve the issue.
- Check your server logs: Your server logs may provide you with more information about the server errors and help you identify the root cause of the issue. Look for any errors or warnings in your server logs and investigate them further.
- Check your website configuration: Server errors can also be caused by misconfigured website settings, such as a misconfigured .htaccess or nginx file or a problem with the server software. Check your website configuration and make sure everything is set up correctly.
- Update your website’s software: If you think that there may be a bug in your website’s software, you can try updating your software to the latest version. This may fix the problem.
Once you have fixed the cause of the server error, you will need to wait for Googlebot to recrawl your pages. This may take a few days. Once Googlebot has recrawled your pages, the server error should be resolved.
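As a quick first check before digging into logs, you can see which status code your server returns to a Googlebot-like request. Below is a minimal Python sketch using the third-party requests library; the URL is a placeholder for one of your flagged pages.

```python
import requests

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

# Placeholder -- substitute a page flagged with a 5xx error in your report.
url = "https://example.com/some-page"

try:
    resp = requests.get(url, headers={"User-Agent": GOOGLEBOT_UA}, timeout=10)
    if resp.status_code >= 500:
        print(f"{resp.status_code} server error at {url} -- check your server logs")
    else:
        print(f"{resp.status_code} returned for {url}")
except requests.RequestException as exc:
    # Connection failures (timeouts, DNS problems) can also surface as
    # server errors for Googlebot.
    print(f"Request to {url} failed: {exc}")
```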
Redirect Errors
Redirect errors are one of the types of errors that can appear in your Google Search Console. These errors occur when a redirect is set up incorrectly, leading to problems with the user experience and website performance. There are different types of redirect errors, including:
- A redirect chain that was too long: This error occurs when there are too many redirects in a chain, causing the page load time to be slow and negatively impacting user experience. For example, if Page A redirects to Page B, which redirects to Page C, which redirects to Page D, this creates a redirect chain that can be too long.
- A redirect loop: This error occurs when a redirect leads back to the original page, creating an infinite loop. For example, if Page A redirects to Page B, which redirects back to Page A, this creates a redirect loop.
- A redirect URL that eventually exceeded the max URL length: This error occurs when a redirect URL grows too long for the server to process. This can happen when each hop in a chain appends extra parameters, inflating the URL until it exceeds the maximum length.
- A bad or empty URL in the redirect chain: This error occurs when a redirect URL is incorrect or empty, leading to issues with the user experience and website performance. For example, if Page A redirects to an empty URL or a URL that doesn’t exist, this creates a bad or empty URL in the redirect chain.
How To Fix
Here are some solutions to fix each of the errors mentioned above, respectively:
- Reduce the number of redirects: Trim the chain so that each URL redirects directly to its final destination, and ensure that each remaining redirect is set up correctly.
- Remove redirect loops: Identify the redirect loops and remove or fix the offending redirect.
- Make shorter URLs: Shorten the redirect URL and remove any unnecessary parameters.
- Remove bad URLs: Identify the bad or empty URL in the chain and remove or fix the offending redirect.
After you’ve fixed the errors, you can manually visit the affected pages to check that everything is OK. After a few days, Google should recrawl and index them.
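Rather than clicking through every hop by hand, a short script can walk a redirect chain for you and flag loops or chains that are too long. This is a rough sketch using the Python requests library; the starting URL is a placeholder.

```python
import requests
from urllib.parse import urljoin

MAX_HOPS = 5  # keep chains well below Google's limit (roughly 10 hops)

def walk_redirects(url):
    """Follow a redirect chain one hop at a time, flagging loops and long chains."""
    seen = []
    while True:
        if url in seen:
            print("Redirect loop detected at:", url)
            return
        seen.append(url)
        if len(seen) > MAX_HOPS:
            print("Redirect chain too long:", " -> ".join(seen))
            return
        # allow_redirects=False lets us inspect each hop individually.
        resp = requests.head(url, allow_redirects=False, timeout=10)
        print(resp.status_code, url)
        if 300 <= resp.status_code < 400:
            location = resp.headers.get("Location")
            if not location:
                print("Bad redirect: missing Location header")
                return
            url = urljoin(url, location)  # resolve relative Location values
        else:
            return  # final destination reached

# Placeholder -- replace with a redirecting URL from your report.
walk_redirects("https://example.com/old-page")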
URL Blocked by robots.txt
When you encounter a URL blocked by robots.txt error in your Google Search Console, it means that Googlebot (Google’s web crawler) cannot access a specific page on your website because it has been blocked by your robots.txt file.
The robots.txt file is a standard used by webmasters to communicate with web crawlers and tell them which pages or parts of their site should be crawled and indexed.
The most common reasons why a page is blocked by the robots.txt file include:
- The page is intentionally blocked: This can happen when a webmaster wants to keep a certain page or section of their website private or hidden from search engines.
- The robots.txt file has been set up incorrectly: This can happen when the robots.txt file contains errors or is not configured correctly, causing search engines to incorrectly interpret which pages should be blocked.
- The page has been inadvertently blocked: This can happen when a webmaster unintentionally blocks a page that they want to be crawled and indexed.
How To Fix
To fix this error, you should first review your robots.txt file and ensure that it’s set up correctly. You can use the robots.txt tester tool in Google Search Console to check if any URLs are being blocked that shouldn’t be. If you find that a page is being blocked unintentionally, you can update your robots.txt file to allow search engines to crawl and index that page.
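You can also test URLs against your live robots.txt from the command line. The sketch below uses Python’s built-in robots.txt parser; the domain and paths are placeholders for your own site.

```python
from urllib.robotparser import RobotFileParser

# Placeholder domain -- point this at your own robots.txt file.
robots = RobotFileParser("https://example.com/robots.txt")
robots.read()  # fetches and parses the live robots.txt

# Placeholder paths -- use URLs flagged in your Search Console report.
for url in ["https://example.com/", "https://example.com/private/page"]:
    allowed = robots.can_fetch("Googlebot", url)
    print("allowed" if allowed else "BLOCKED", url)
```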
URL marked noindex Error
When a URL is marked as noindex in Google Search Console, it means that you have explicitly instructed Google not to index that particular page. This is typically done using the “noindex” tag in the HTML code of the page, which tells search engines not to include the page in their index.
How To Fix
To fix the issue, remove the noindex tag from the page’s HTML and request that Google recrawl the page using the URL Inspection tool in Google Search Console (the successor to the old Fetch as Google tool). Once recrawled, the page can be indexed and appear in search results.
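If you’re not sure where the directive is coming from, note that noindex can be set either in the HTML or in an X-Robots-Tag HTTP header. Here is a small Python sketch (using the requests library) that checks both places; the URL is a placeholder.

```python
import re
import requests

# Placeholder -- replace with the page flagged in Search Console.
URL = "https://example.com/some-page"

resp = requests.get(URL, timeout=10)

# noindex can be sent as an HTTP response header...
header = resp.headers.get("X-Robots-Tag", "")
if "noindex" in header.lower():
    print("noindex found in X-Robots-Tag header:", header)

# ...or as a robots meta tag in the HTML.
for tag in re.findall(r"<meta[^>]+>", resp.text, flags=re.I):
    if re.search(r'name=["\']?robots', tag, flags=re.I) and "noindex" in tag.lower():
        print("noindex found in meta tag:", tag)
```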
Soft 404
A soft 404 is a type of error that can appear in Google Search Console when a web page is not found but the server does not return a proper 404 error status code. Instead, the server returns a 200 OK status code, which indicates that the page was found, even though it’s not the page the user was looking for.
How To Fix
To fix a soft 404 error in Google Search Console, you should first identify the pages that are generating it. You can do this in the same Page indexing (Pages) report mentioned earlier, which lists the URLs flagged as soft 404 errors.
Once you’ve identified the pages that are generating the error, you should take steps to fix them. Depending on the cause of the error, this may involve:
- redirecting users to a different page
- updating the page with new content
- removing the page altogether.
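A simple way to confirm whether your server produces soft 404s is to request a path that cannot exist and inspect the status code it returns. The sketch below does exactly that with the Python requests library; the domain is a placeholder.

```python
import uuid
import requests

# Placeholder domain -- replace with your own site. The random path
# guarantees we are requesting a page that does not exist.
url = f"https://example.com/{uuid.uuid4().hex}-does-not-exist"

resp = requests.get(url, timeout=10, allow_redirects=False)
if resp.status_code == 200:
    print("Possible soft 404: a missing page returned 200 OK")
elif resp.status_code in (404, 410):
    print(f"OK: missing pages correctly return {resp.status_code}")
else:
    print(f"Missing page returned {resp.status_code}; check your error handling")
```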
Blocked due to unauthorized request (401)
When you see a Blocked due to unauthorized request (401) error in Google Search Console, it means that Googlebot was unable to access your website’s content because the request was unauthorized. This error can occur for a variety of reasons, including issues with authentication, permissions or server configuration.
How To Fix
To solve this error, you should ensure that the affected pages are accessible to Googlebot. Googlebot does not log in, so this usually means updating your website’s authentication settings so that any content you want indexed is served without requiring credentials.
Blocked due to access forbidden (403)
When you see a Blocked due to access forbidden (403) error in Google Search Console, it means that Googlebot was unable to access your website’s content because access was forbidden. This error can occur for a variety of reasons, including issues with permissions or server configuration.
How To Fix
To fix the issue, you should ensure that the affected pages are accessible to Googlebot. This may involve updating your website’s permission settings or server configuration to grant Googlebot access to the content.
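For both the 401 and 403 cases, it helps to confirm how your pages respond to an anonymous, crawler-style request, since Googlebot never logs in. Here is a small Python sketch using the requests library; the URLs are placeholders for your flagged pages.

```python
import requests

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

# Placeholders -- replace with the pages flagged with 401/403 errors.
for url in ["https://example.com/", "https://example.com/members/guide"]:
    # No cookies and no credentials: this mirrors an anonymous crawler request.
    resp = requests.get(url, headers={"User-Agent": GOOGLEBOT_UA}, timeout=10)
    if resp.status_code in (401, 403):
        print(f"{resp.status_code}  {url}  <- blocked for anonymous requests")
    else:
        print(f"{resp.status_code}  {url}")
```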
Not found (404)
When you see a Not found (404) error in Google Search Console, it means that Googlebot was unable to find the requested URL on your website. This error can occur for a variety of reasons, including deleted pages, broken links, or typos in the URL.
How To Fix
To fix this issue, you should either:
- restore the missing content, or
- create a new page with a similar URL to replace the missing content.
If the missing content cannot be restored, you should consider:
- redirecting the old URL to a new, relevant page on your website.
- returning a 410 Gone HTTP status code instead of a 404 Not Found status code.
This tells Google that the page or resource has been permanently removed and should not be indexed or displayed in search results.
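To make the two options concrete, here is a toy Python server built on the standard library that issues a 301 redirect for moved content and a 410 Gone for removed content. The paths are hypothetical; on a real site you would configure this in your web server or CMS rather than in application code.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/old-page":
            # Content moved: permanently redirect to the replacement page.
            self.send_response(301)
            self.send_header("Location", "/new-page")
            self.end_headers()
        elif self.path == "/retired-page":
            # Content permanently removed: tell crawlers not to come back.
            self.send_response(410)
            self.end_headers()
        else:
            # Everything else is genuinely missing.
            self.send_response(404)
            self.end_headers()

# Run this script, then try http://localhost:8000/old-page in a browser.
HTTPServer(("localhost", 8000), Handler).serve_forever()
```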
Crawled – currently not indexed
When you see the message Crawled – currently not indexed in Google Search Console, it means that Googlebot has crawled the URL but has not yet added it to its index. This can happen for a variety of reasons, such as low page quality, duplicate content or low authority.
How To Fix
If you want a page that has been crawled but not indexed to appear in Google search results, there are several steps you can take:
- Improve the page quality by adding original, high-quality content that is relevant to your target audience.
- Make sure that the page does not contain duplicate content.
- Build up the page’s authority by acquiring backlinks from other high-authority websites.
- Check for technical issues and fix any problems that are preventing Google from indexing the page.
- Use Google Search Console to request that Google index the page. Note that this does not guarantee that the page will be indexed, but it can help speed up the process.
Discovered – currently not indexed
When you see the message Discovered – currently not indexed in Google Search Console, it means that Google has found the URL (through links or your sitemap, for example) but has not yet crawled or indexed it. This is different from the “Crawled – currently not indexed” message, which indicates that Googlebot has already crawled the URL.
If you want a page that has been discovered but not indexed to appear in Google search results, you can apply the same steps listed above.
Alternate page with proper canonical tag
When you have an alternate page with a proper canonical tag, it means that you have multiple versions of the same content, but you have specified which version you want to be considered the primary or canonical version. This can be useful for a variety of reasons, such as when you have different versions of a page for different regions or languages.
How To Fix
To fix the issue, you should ensure that the canonical tag is correctly implemented, that the content on both versions of the page is consistent, and that any other potential issues are addressed.
Duplicate without user-selected canonical
When Google detects duplicate content on your website without a user-selected canonical URL, it means that there are multiple versions of the same content on your website, but you have not specified which version should be considered the primary or canonical version. This can confuse search engines and affect your search engine rankings.
How To Fix
- Determine the primary version of the content: Identify which version of the content you want to be considered the primary or canonical version. This may be the version that has the most traffic, the version that you want to rank highest in search engines, or the version that you’ve optimized for SEO.
- Implement a canonical tag: Once you’ve identified the primary version of the content, implement a canonical tag on all other versions of the content, pointing to the primary version. This tells search engines which version of the content is the most important and should be considered for ranking purposes (a verification sketch follows this list).
- Remove duplicate content: If you have duplicate content that isn’t necessary, such as pages with similar content or content that’s been copied from other websites, consider removing it entirely. This can help to eliminate confusion for search engines and improve your search engine rankings.
- Use 301 redirects: If you’ve moved content from one URL to another and want to redirect traffic from the old URL to the new one, use 301 redirects. This tells search engines that the old URL has been permanently moved to the new URL and ensures that users are automatically redirected to the correct version of the content.
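To verify the canonical tags from step 2, a short script can print the canonical URL each duplicate declares, so you can confirm they all point to the same primary page. This sketch uses the Python requests library and a deliberately simple regex; the URLs are placeholders.

```python
import re
import requests

# Placeholder duplicates -- replace with the URLs from your report.
duplicates = [
    "https://example.com/product",
    "https://example.com/product?ref=newsletter",
]

for url in duplicates:
    html = requests.get(url, timeout=10).text
    # Naive pattern: assumes rel appears before href inside the link tag.
    match = re.search(
        r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)',
        html, re.I)
    print(url, "->", match.group(1) if match else "NO CANONICAL TAG")
```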
Page with redirect
One of the Google Search Console errors is Page with redirect. If a URL that is not canonical redirects to another page, it will not be indexed by Google. Whether the target URL of the redirect is indexed depends on Google’s evaluation of the page’s quality and any potential issues it may have.
How To Fix
To solve the issue of a non-canonical URL that redirects to another page, you should follow these steps:
- Check that the redirect is set up correctly: Make sure that the redirect is properly configured and is not returning any errors. Check that the redirect is a 301 redirect, which is the preferred redirect type for SEO purposes (see the check after this list).
- Add a canonical tag to the target URL: To let Google know that the target URL is the preferred version of the page, add a canonical tag to the target URL’s HTML code. The canonical tag should point to the URL of the preferred page.
- Remove or fix any duplicate content: If the non-canonical URL and the target URL have duplicate content, you should remove or fix the duplicate content. Duplicate content can cause confusion for both users and search engines.
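For the first step, a single-hop check is enough to confirm the redirect type. This Python sketch (requests library, placeholder URL) shows the status code and target without following the redirect:

```python
import requests

# Placeholder -- replace with the redirecting URL from your report.
resp = requests.head("https://example.com/old-page",
                     allow_redirects=False, timeout=10)
print(resp.status_code, "->", resp.headers.get("Location"))
# 301 is the preferred status for SEO; 302 tells Google the move is temporary.
```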
Conclusion
In conclusion, understanding the Google Search Console errors that can hurt your website’s SEO performance is essential for every website owner. You should identify them correctly and take action to fix them. In this guide, we have discussed these common errors and provided tips to resolve them.
Thank you for reading.