AMA Interview with John Mueller, SEO & Webmaster Trends Analyst at Google (2018)

Summary of AMA Interview with John Mueller, SEO & Webmaster Trends Analyst at Google

May 15, 2022 | Written By: Alastair Martin | Reviewed By: James Smith

SEO AMA: Is there any benefit in implementing paywall markup on a website with a loginwall? How do bots/spammers affect trends and search volume? What do you do to filter out those effects?

Q: Can we get confirmation that these links will not be discounted under Google's mobile-first indexing, or should webmasters prioritize keeping important links visible on load?

A: Just purely for usability reasons I'd always make important UI elements visible. Technically, we do understand that you sometimes have to make some compromises on mobile layouts (the important part is that it's loaded on the page, and not loaded on interaction with the page), but practically speaking, what use is search ranking if users run away when they land on your pages and don't convert? :-)
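
To make the "loaded on the page, not loaded on interaction" distinction concrete, here is a minimal sketch (the page templates and names are hypothetical, not from the interview): content shipped in the initial HTML can be indexed even if it is visually collapsed, while content fetched only after a tap never reaches the crawler.

```python
# Hedged sketch with hypothetical templates: content in the initial HTML vs.
# content loaded only on user interaction. The crawler indexes what is in the
# loaded page; it does not click buttons to trigger extra fetches.

INDEXABLE_ACCORDION = """
<section>
  <button aria-expanded="false" data-toggle="#details">Product details</button>
  <!-- Present in the HTML on load, just visually collapsed: indexable. -->
  <div id="details" hidden>
    <p>Full specification text lives here and is part of the loaded page.</p>
  </div>
</section>
"""

NON_INDEXABLE_ON_CLICK = """
<section>
  <button onclick="fetch('/api/details').then(r => r.text())
                    .then(html => details.innerHTML = html)">Product details</button>
  <!-- Empty until the user taps: this text never appears for the crawler. -->
  <div id="details"></div>
</section>
"""

if __name__ == "__main__":
    print("accordion ships the text:", "specification text" in INDEXABLE_ACCORDION)    # True
    print("on-click ships the text:", "specification text" in NON_INDEXABLE_ON_CLICK)  # False
```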

Q: How do bots/spammers affect trends and search volume? What do you do to filter out those effects?

A: The internet is filled with bots; we have to deal with it, just like you have to deal with it (filter, block, etc.) - it can get pretty involved, complicated, expensive... Heck, you might even be pointing some of those at our services, for all I know :-/. For the most part, it seems like our teams are getting the balance right for filtering & blocking, though sometimes I see "captcha" reports from people doing fairly reasonable (though complicated) queries, which I think we should allow more easily.
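
Mueller is describing Google's side of the problem, but site owners face the same filtering task in their own logs and analytics. A hedged sketch of one common approach (my own illustration; the function names and blocklist are assumptions): drop self-declared crawlers, and verify hits that claim to be Googlebot with a reverse-then-forward DNS check.

```python
# Hedged sketch of one site-side approach (not Google's internal method):
# confirm that a hit claiming to be Googlebot really comes from Google by
# doing a reverse DNS lookup, then checking the forward lookup matches the IP.
import socket

def is_verified_googlebot(ip: str) -> bool:
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)          # reverse DNS
    except socket.herror:
        return False
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return socket.gethostbyname(hostname) == ip        # forward-confirm
    except socket.gaierror:
        return False

def keep_log_line(ip: str, user_agent: str) -> bool:
    """Keep a log line for analytics only if it isn't an obvious bot."""
    ua = user_agent.lower()
    if "googlebot" in ua:
        return is_verified_googlebot(ip)
    # Assumed minimal blocklist of self-identifying crawlers; real lists are far longer.
    return not any(token in ua for token in ("bot", "crawler", "spider"))
```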

Q: If you are creating a site that is heavy in SVG animation, and the only real difference that any crawler can detect is the h1 and meta data used on the page (for the purpose of the site I don't think we need text content on there), how would you go about making it more search friendly?

A: We're not going to "interpret" the images there, so we'd have to make do with the visible text on the page, and further context (eg, in the form of links to that page) to understand how it fits in. With puns, especially without a lot of explanation, that's going to make ranking really, really hard (how do you know to search for a specific pun without already knowing it?). What I'd recommend there is to find a way to get more content onto the page. A simple way to do that could be to let people comment. UGC brings some overhead in terms of maintenance & abuse, but could also be a reasonable way to get more text for your pages.

Q: Crawl map visualizations: how close are they to what Google actually sees when crawling, parsing, and indexing a website? Nowadays, a few SEO tools are putting their focus on crawl maps, and I am wondering whether that method of presenting nodes and edges is too simplified and old school (related to the old PageRank patents), or whether they are useful and help us understand internal architecture. What are your thoughts? Thanks

A: FWIW I haven't seen crawl map visualizations in a long time internally. It might be that some teams make them, but you might have heard that the web is pretty big, and mapping the whole thing would be pretty hard :-). I love the maps folks have put together, the recent one for paginated content was really neat. I wouldn't see this as something that's equivalent to what Google would do, but rather as a tool for you to better understand your site, and to figure out where you can improve crawling of it. Getting crawling optimized is the foundation of technical SEO, and can be REALLY HARD for bigger sites, but it also makes everything much smoother in the end.
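
If you want to build such a node-and-edge map for your own site, a minimal sketch might look like the following (my own illustration, not a Google tool; it assumes the third-party requests and beautifulsoup4 packages): a breadth-first crawl of internal links that records every page and link for later visualization.

```python
# Minimal breadth-first crawl-map sketch for your own site.
# Requires the third-party packages: requests, beautifulsoup4.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl_map(start_url: str, max_pages: int = 50):
    """Return (nodes, edges) for internal links reachable from start_url."""
    domain = urlparse(start_url).netloc
    seen, edges = {start_url}, []
    queue = deque([start_url])
    while queue and len(seen) <= max_pages:
        page = queue.popleft()
        try:
            html = requests.get(page, timeout=10).text
        except requests.RequestException:
            continue
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            target = urljoin(page, a["href"]).split("#")[0]
            if urlparse(target).netloc != domain:
                continue                      # keep the map to internal links
            edges.append((page, target))      # edge: page links to target
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen, edges

# nodes, edges = crawl_map("https://example.com/")
```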

Q: When might we expect the API to be updated for the new Search Console?

A: Oy, I hear you and ask regularly as well :-). We said it wouldn't be far away, so I keep pushing :-)

Q: When are you going to bring the extended data set to the existing API version? What new features are planned for the new API version?

A: As I understand it, the first step is to expand the Search Analytics API to include the longer timeframe. Past that I'd like to have everything in APIs, but unfortunately we have not managed to clone the team, so it might take a bit of time. Also, I'd like to have a TON of things (as a proxy for all the requests we see), so we have to prioritize our time somehow :-). At any rate, I really love what some products are doing with the existing API data, from the Google Sheets script for archiving data to fancy tools like Botify & Screaming Frog. Seeing how others use the API data to help folks make better websites (not just to create more weekly reports that nobody reads) is extremely helpful in motivating the team to spend more time on the API.
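
For reference, archiving data from the existing Search Analytics API that Mueller mentions can be as simple as the sketch below (a hedged example: it assumes you already have an OAuth 2.0 access token and a verified property, and uses the third-party requests package).

```python
# Hedged sketch: pull query-level data from the Search Analytics API for archiving.
# Assumes an OAuth 2.0 access token with read access to a verified property.
import requests

def fetch_search_analytics(site_url: str, access_token: str,
                           start_date: str, end_date: str) -> list[dict]:
    endpoint = (
        "https://www.googleapis.com/webmasters/v3/sites/"
        f"{requests.utils.quote(site_url, safe='')}/searchAnalytics/query"
    )
    body = {
        "startDate": start_date,          # e.g. "2018-01-01"
        "endDate": end_date,              # e.g. "2018-03-31"
        "dimensions": ["query"],
        "rowLimit": 1000,
    }
    resp = requests.post(endpoint,
                         headers={"Authorization": f"Bearer {access_token}"},
                         json=body, timeout=30)
    resp.raise_for_status()
    return resp.json().get("rows", [])

# rows = fetch_search_analytics("https://example.com/", token, "2018-01-01", "2018-03-31")
```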

Q: Is it necessary to set an x-default for your alternate lang tags? Or is it enough to set an alternate lang for every language/site?

A: No, you don't need x-default. You don't need rel-alternate-hreflang. However, it can be really useful on international websites, especially where you have multiple countries with the same language. It doesn't change rankings, but helps to get the "right" URL swapped in for the user.
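
For sites that do use it, a full annotation set is straightforward to generate; the sketch below (example URLs and locales are hypothetical) emits one alternate per language/region plus the optional x-default fallback.

```python
# Hedged sketch: generate rel="alternate" hreflang link tags for one page,
# including the optional x-default fallback. URLs and locales are hypothetical.
ALTERNATES = {
    "en-gb": "https://example.com/uk/shoes/",
    "en-us": "https://example.com/us/shoes/",
    "de-de": "https://example.com/de/schuhe/",
    "x-default": "https://example.com/shoes/",   # optional catch-all version
}

def hreflang_tags(alternates: dict[str, str]) -> str:
    return "\n".join(
        f'<link rel="alternate" hreflang="{lang}" href="{url}" />'
        for lang, url in alternates.items()
    )

print(hreflang_tags(ALTERNATES))
```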

Q: We see a difference in indexed pages between the old and the new Search Console ( https://ibb.co/kZShvS https://ibb.co/gZTY27 ). Is this a bug, or is there another reason?

A: If it's just a matter of multiple languages, we can often guess the language of the query and the better-fitting pages within your site. Eg, if you search for "blue shoes" we'll take the English page (it's pretty obvious), and for "blaue schuhe" we can get the German one. However, if someone searches for your brand, then the language of the query isn't quite clear. Similarly, if you have pages in the same language for different countries, then hreflang can help us there.

Q: Is Google trying to move away from link-based authority, or will links always play some part in the overall algorithm?

A: I still click on links a lot when browsing, so I guess there's that :). SEOs often focus on links too much; it doesn't always help their sites in the long run, and it's probably not the most efficient way to promote a website.

Q: Will not updating content make the site lose its ranking?

A: Updating content doesn't automatically make it rank better. Some pages change all the time, others stay pretty much the same for years, all of them can be very relevant to users.

Q: Does Google Search rely on data sources not produced by the crawl for organic ranking purposes? What might some of those be?

A: We use a lot of signals(tm). I'm not sure what you're thinking of, but for example, we use the knowledge graph (which comes from various places, including Wikipedia) to try to understand entities on a page. This isn't so much a matter of crawling, but rather trying to map what we've seen to other concepts we've seen, and to try to find out how they could be relevant to people's queries.

Q: How is keyword cannibalization seen by Google? People believe that having multiple pages about the same topic confuses search engines and that hurts their chances of ranking.

A: We just rank the content as we get it. If you have a bunch of pages with roughly the same content, it's going to compete with each other, kinda like a bunch of kids wanting to be first in line, and ultimately someone else slips in ahead of them :). Personally, I prefer fewer, stronger pages over lots of weaker ones - don't water your site's value down.
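
One practical way to act on the "fewer, stronger pages" advice, sketched below with hypothetical URLs, is to pick the strongest page and point the near-duplicates at it via rel=canonical or 301 redirects.

```python
# Hedged sketch (hypothetical URLs): consolidate near-duplicate pages onto one
# canonical URL, either by emitting a rel=canonical tag or a 301 redirect map.
CANONICAL = "https://example.com/blue-running-shoes/"
DUPLICATES = [
    "https://example.com/blue-running-shoes-2020/",
    "https://example.com/best-blue-running-shoes/",
]

def canonical_tag(target: str) -> str:
    return f'<link rel="canonical" href="{target}" />'

def redirect_map(duplicates: list[str], target: str) -> dict[str, tuple[int, str]]:
    """301 map you could translate into your server or CDN configuration."""
    return {src: (301, target) for src in duplicates}

print(canonical_tag(CANONICAL))
print(redirect_map(DUPLICATES, CANONICAL))
```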

Q: I've been seeing a lot of inconsistencies between different markets of Google, e.g. Google.co.za operates differently than Google.co.uk, etc. Will all markets eventually be the same? Are the smaller markets like Google.co.za a bit delayed? Will they ever be on the same level as the larger ones? I'm asking in terms of the Google algorithm and ranking factors / penalties.

A: For the most part, we try not to have separate algorithms per country or language. It doesn't scale if we have to do that. It makes much more sense to spend a bit more time on making something that works across the whole of the web. That doesn't mean that you don't see local differences, but often that's just a reflection of the local content which we see.

The main exception here is fancy, newer search features that rely on some amount of local hands-on work. That can include things like rich results (aka rich snippets aka rich cards), which sometimes don't get launched everywhere when we start, and sometimes certain kinds of one-boxes that rely on local content & local policies to be aligned before launch. I'd love for us to launch everything everywhere at the same time (living in Switzerland, which seems to be the last place anyone - not just Google - wants to launch stuff, grr), but I realize that it's better to get early feedback and to iterate faster, than it would be to wait until everything's worked out, and then launch something globally that doesn't really work the way we wanted.

Q: I am trying to do a split test on my site by showing a new version of a page to 10% of traffic and the original page to 90% of traffic. For Googlebot, do we split at runtime in the same proportion, or push all of Googlebot to the original design so as not to confuse it?

A: We need to write up some A/B testing guidelines, this came up a lot recently :). Ideally you would treat Googlebot the same as any other user-group that you deal with in your testing. You shouldn't special-case Googlebot on its own, that would be considered cloaking. Practically speaking, sometimes it does fall into one bucket (eg, if you test by locale, or by user-agent hash, or whatnot), that's fine too. For us, A/B testing should have a limited lifetime, and the pages should be equivalent (eg, you wouldn't have "A" be an insurance affiliate, and the "B" a cartoon series). Depending on how you do A/B testing, Googlebot might see the other version, so if you have separate URLs, make sure to set the canonical to your primary URLs. Also, FWIW Googlebot doesn't store & replay cookies, so make sure you have a fallback for users without cookies.
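
A minimal sketch of "treat Googlebot the same as any other user group" (my own illustration; the bucketing key, split, and names are assumptions): assign variants deterministically from a stable visitor key rather than from the user-agent, and let cookieless clients, including Googlebot, fall back to the original.

```python
# Hedged sketch of deterministic 90/10 bucketing that never special-cases Googlebot.
# Bucketing key, percentages, and variant names are hypothetical.
import hashlib

def assign_variant(cookie_id: str | None, test_share: float = 0.10) -> str:
    if cookie_id is None:
        # Googlebot doesn't store cookies, and neither do some users:
        # both simply fall into the original experience.
        return "original"
    digest = hashlib.sha256(cookie_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100            # stable 0-99 bucket per visitor
    return "new_design" if bucket < test_share * 100 else "original"

# The same function runs for every request; the crawler is never detected or
# routed separately, which would be cloaking.
```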

Q: The historical standard for paraphrase or quote attribution arose from an industry where the content was first purchased and then shared (ie: libraries) - content creators were paid before attribution. On the web, the content is shared without purchase but with Google Search, attribution led to visits, offering a different form of payment (often ads). But with voice search, the likelihood of attribution yielding a visit is very low. How does Google intend to encourage healthy content production on the web given the disruption to the attribution funding model which has powered content creation since, well, the beginning of the printed word?

A: I'll let you in on a secret: the health of the web-ecosystem is something that pretty much everyone on the search side thinks about, and when we (those who talk more with publishers) have stories to tell about how the ecosystem is doing, they listen up. Google's not in the web for the short-term, we want to make sure that the web can succeed together for the long-run, and that only works if various parties work together. Sure, things will evolve over time, the web is fast-moving, and if your website only makes sense if nothing elsewhere changes, then that will be rough. As I see the SEO/web-ecosystem, there are a lot of people who are willing to put in the elbow-grease to make a difference, to make changes, to bring their part of the world a step further, and not just cement existing systems more into place. (Personally, it's pretty awesome to see :-))
