
New Site Crawl: Rebuilt to Find More Issues on More Pages, Faster Than Ever!

Posted by Dr-Pete

First, the good news — as of today, all Moz Pro customers have access to the new version of Site Crawl, our entirely rebuilt deep site crawler and technical SEO auditing platform. The bad news? There isn't any. It's bigger, better, faster, and you won't pay an extra dime for it.

A moment of humility, though — if you've used our existing site crawl, you know it hasn't always lived up to your expectations. Truth is, it hasn't lived up to ours, either. Over a year ago, we set out to rebuild the back-end crawler, but we quickly realized that what we wanted was an entirely re-imagined crawler, front and back, with the best features we could offer. Today, we launch the first version of that new crawler.

Code name: Aardwolf

The back end is entirely new. Our completely rebuilt "Aardwolf" engine crawls twice as fast, while digging much deeper. For larger accounts, it can support up to ten parallel crawlers, for actual speeds of up to 20X the old crawler. Aardwolf also fully supports SNI sites (including Cloudflare), correcting a major shortcoming of our old crawler.
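
If you're curious what parallel, SNI-aware fetching looks like in practice, here's a minimal sketch in Python (illustrative only; this isn't Aardwolf's actual code, and the URLs are placeholders). Modern TLS libraries send the SNI hostname automatically during the handshake, which is what lets a crawler reach sites behind providers like Cloudflare that serve many domains from shared IPs:

```python
import concurrent.futures
import requests  # on modern Python, requests' TLS stack sends SNI automatically

def fetch(url):
    """Fetch one URL and record its status code and fetch time."""
    try:
        resp = requests.get(url, timeout=10, allow_redirects=False)
        return url, resp.status_code, resp.elapsed.total_seconds()
    except requests.RequestException as exc:
        return url, None, str(exc)

# Placeholder URLs; a real crawler would pull these from a frontier queue
urls = ["https://example.com/", "https://example.com/blog/"]

# Up to ten workers, mirroring the parallel-crawler idea described above
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    for url, status, fetch_time in pool.map(fetch, urls):
        print(url, status, fetch_time)
```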

View/search *all* URLs

One major limitation of our old crawler was that you could only see pages with known issues. Click on "All Crawled Pages" in the new crawler, and you'll be brought to a list of every URL we crawled on your site during the last crawl cycle:

You can sort this list by status code, total issues, Page Authority (PA), or crawl depth. You can also filter by URL, status codes, or whether or not the page has known issues. For example, let's say I just wanted to see all of the pages crawled for Moz.com in the "/blog" directory...

I just click the [+], select "URL," enter "/blog," and I'm on my way.

Do you prefer to slice and dice the data on your own? You can export your entire crawl to CSV, with additional data including per-page fetch times and redirect targets.
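
If you do export, a few lines of pandas (to pick one common tool) can replicate the in-app filtering shown above. The column names below are assumptions, so check them against the header of your actual export:

```python
import pandas as pd

# Column names here are assumptions; verify against your actual CSV export
crawl = pd.read_csv("moz-site-crawl-export.csv")

# All crawled pages under /blog, sorted by status code, then fetch time
blog_pages = (
    crawl[crawl["URL"].str.contains("/blog", na=False)]
        .sort_values(["Status Code", "Fetch Time"])
)
print(blog_pages[["URL", "Status Code", "Fetch Time", "Redirect Target"]])
```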

Recrawl your site immediately

Sometimes, you just can't wait a week for a new crawl. Maybe you relaunched your site or made major changes, and you have to know quickly if those changes are working. No problem, just click "Recrawl my site" from the top of any page in the Site Crawl section, and you'll be on your way...

Starting at our Medium tier, you’ll get 10 recrawls per month, in addition to your automatic weekly crawls. When the stakes are high or you're under tight deadlines for client reviews, we understand that waiting just isn't an option. Recrawl allows you to verify that your fixes were successful and refresh your crawl report.

Ignore individual issues

As many customers have reminded us over the years, technical SEO is not a one-size-fits-all task, and what's critical for one site is barely a nuisance for another. For example, let's say I don't care about a handful of overly dynamic URLs (for many sites, it's a minor issue). With the new Site Crawl, I can just select those issues and then "Ignore" them (see the green arrow for location):

If you make a mistake, no worries — you can manage and restore ignored issues. We'll also keep tracking any new issues that pop up over time. Just because you don't care about something today doesn't mean you won't need to know about it a month from now.

Fix duplicate content

Under "Content Issues," we've launched an entirely new duplicate content detection engine and a better, cleaner UI for navigating that content. Duplicate content is now automatically clustered, and we do our best to consistently detect the "parent" page. Here's a sample from Moz.com:

You can view duplicates by the total number of affected pages, PA, and crawl depth, and you can filter by URL. Click the arrow in the far-right column to see all of the pages in the cluster (shown in the screenshot). Click anywhere else in the table row to get a full profile, including the source page where we found that link.
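
Moz hasn't published the internals of the new detection engine, but the general idea of duplicate clustering can be sketched in a few lines: fingerprint each page's normalized text, group pages that share a fingerprint, and pick the shallowest URL as the "parent." This is a toy illustration with placeholder data; a production engine would use shingling or SimHash-style fingerprints to catch near-duplicates as well:

```python
import hashlib
from collections import defaultdict

def fingerprint(html_text: str) -> str:
    """Crude content fingerprint: hash the normalized text.
    (A real engine would use shingling/SimHash for near-duplicates.)"""
    normalized = " ".join(html_text.lower().split())
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

# pages: (url, crawl depth, page text) -- placeholder data
pages = [
    ("https://example.com/a", 1, "Same content here."),
    ("https://example.com/a?sort=asc", 2, "Same  content HERE."),
]

clusters = defaultdict(list)
for url, depth, text in pages:
    clusters[fingerprint(text)].append((depth, url))

for cluster in clusters.values():
    cluster.sort()  # shallowest crawl depth first
    parent, duplicates = cluster[0], cluster[1:]
    print("parent:", parent[1], "| duplicates:", [u for _, u in duplicates])
```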

Prioritize quickly & tactically

Prioritizing technical SEO problems requires deep knowledge of a site. In the past, in the interest of simplicity, I fear that we've misled some of you. We attempted to give every issue a set priority (high, medium, or low), when the difficult reality is that what's a major problem on one site may be deliberate and useful on another.

With the new Site Crawl, we decided to categorize crawl issues tactically, using five buckets:

  • Critical Crawler Issues
  • Crawler Warnings
  • Redirect Issues
  • Metadata Issues
  • Content Issues

Hopefully, you can already guess what some of these contain. Critical Crawler Issues still reflect issues that matter first to most sites, such as 5XX errors and redirects to 404s. Crawler Warnings represent issues that might be very important for some sites, but require more context, such as meta NOINDEX.

Prioritization often depends on scope, too. All else being equal, one 500 error may be more important than one duplicate page, but 10,000 duplicate pages is a different matter. Go to the bottom of the Site Crawl Overview page, where we've attempted to balance priority and scope to surface your top three issues to fix:
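
To make that concrete, here's one illustrative way to combine severity and scope. Our actual weighting isn't something we've published, so the bucket weights and issue counts below are invented: weight each bucket, multiply by the number of affected pages, and surface the top three.

```python
# Invented severity weights per bucket -- illustrative only
BUCKET_WEIGHT = {
    "Critical Crawler Issues": 5,
    "Crawler Warnings": 3,
    "Redirect Issues": 2,
    "Metadata Issues": 2,
    "Content Issues": 1,
}

# (issue type, bucket, affected page count) -- placeholder data
issues = [
    ("5XX Error", "Critical Crawler Issues", 3),
    ("Duplicate Content", "Content Issues", 10000),
    ("Missing Description", "Metadata Issues", 450),
]

# Score = severity weight x scope; highest scores first
scored = sorted(issues, key=lambda i: BUCKET_WEIGHT[i[1]] * i[2], reverse=True)
for name, bucket, count in scored[:3]:
    print(f"{name} ({bucket}): {count} pages affected")
```

Note how scope dominates here: 10,000 low-weight duplicate pages outrank three high-weight server errors, which is exactly the tension the Overview tries to balance for you.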

Moving forward, we're going to be launching more intelligent prioritization, including grouping issues by folder and adding data visualization of your known issues. Prioritization is a difficult task and one we haven't helped you do as well as we could. We're going to do our best to change that.

Dive in & tell us what you think!

All existing customers should have access to the new Site Crawl as of earlier this morning. Even better, we've been crawling existing campaigns with the Aardwolf engine for a couple of weeks, so you'll have history available from day one! Stay tuned for a blog post tomorrow on effectively prioritizing Site Crawl issues, and a webinar on Friday at 9am Pacific.


