Monday, December 29, 2008
Well, it says First Click Free but turns out to be...
What webmasters will do is just disallow Googlebot from accessing their premium content.
Then how will users be able to find premium content on Google?
Saturday, December 27, 2008
Japanese WMC Blog launched
We just launched a new Webmaster Central Blog in Japanese. For those of you who feel more comfortable reading Japanese, are interested in webmaster-related information from Google, or even want to learn about issues specific to our region and language, we hope you enjoy the Japanese version of the Webmaster Central Blog :D
Written by Naoko Imai, Search Quality Team
Thursday, December 25, 2008
Feliz Navidad from the Spanish Webmaster Central team!
It's been both a pleasure and a great opportunity for us to share our knowledge and hear your feedback. A few of this year's highlights:
For the blog, we had:
- Matt Cutts talking to us in a 3 part interview (see part 1, part 2, and part 3).
- A series of videos explaining how to use Webmaster Tools (See all parts: 1, 2, 3, 4, 5, and 6)
- The links series and the 404 series.
- Google's SEO Starter Guide in Spanish.
- SMX Madrid 2008.
- Google Search Masters 2008 in Mexico. We even have some video footage from this conference, including a session about the Help Group :)
- Congreso de Webmasters 2008.
This is us, several members of the Spanish Webmaster Central team:

From left to right: Cristina, Alvar, Rebecca, and Esperanza in Google's Dublin office, with a holiday touch :)
Written by Alvar López, Search Quality Team
Wednesday, December 24, 2008
Wishing you and your site a happy holiday!
Every day we see new people commenting and joining the discussion. This holiday season we'll try to update our blog to accommodate your growing needs. Always feel free to let us know how we're doing (especially if we publish a typo! :), because first and foremost and everywhere in the middle, we're trying to improve for you.
Happy holidays from all of us at Webmaster Central.
Written by Maile Ohye, Developer Programs Tech Lead
Monday, December 22, 2008
Quick and easy tips for the holiday rush
Verify that your site is indexed by Google (and is returned in search results)
Check your snippet content and page titles with the site: command [site:example.com] -- do they look accurate and descriptive for users? Ideally, each title and snippet should be unique in order to reflect that each URL contains unique content. If anything is missing or you want more details, you can also use the Content Analysis tool in Webmaster Tools. There you can see which URLs on your site show duplicate titles or meta descriptions.
Label your images accurately
Don't miss out on potential customers! Because good 'alt' text and descriptive filenames help us better understand images, make sure you change non-descriptive file names [001.jpg] to something more accurate [NintendoWii.jpg]. Image Search is one of our largest search properties, so you should take advantage of it.
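For example, an image tag for a product photo might look something like this (the filename and alt text here are just illustrative):

<img src="NintendoWii.jpg" alt="Nintendo Wii console bundle with one controller" />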
Know what Google knows (about your site)
Check for crawl errors and learn the top queries that bring traffic to your site through Webmaster Tools. See our diagnostics checklist.
Have a plan for expiring and temporary pages
Make sure to serve accurate HTTP status codes. If you no longer sell a product, serve a 404. If you have changed a product page to a new URL, serve a 301 to redirect the old page to the new one. Keeping your site up-to-date can help bring more targeted traffic your way.
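As a rough sketch, on an Apache server you could serve that 301 with a one-line mod_alias rule in your .htaccess file (the paths below are placeholders for your own URLs):

# Permanently redirect a moved product page to its new URL
Redirect 301 /products/old-widget.html http://www.example.com/products/new-widget.html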
Increase foot traffic too
If your website directs customers to a brick-and-mortar location, make sure you claim and double check your business listing in Google Local.
Usability 101
Test the usability of your checkout process with various browsers. Ask yourself if a user can get from product page to checkout without assistance. Is your checkout button easy to find?
Tell us where to find all of your web pages
If you upload new products faster than Google crawls your site, make sure to submit a Sitemap and include 'last modification' and 'change frequency' information. A Sitemap can point Googlebot to your new or hard-to-find content.
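For illustration, a minimal Sitemap entry carrying both pieces of information might look like this (the URL and values are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/products/new-widget.html</loc>
    <lastmod>2008-12-22</lastmod>
    <changefreq>daily</changefreq>
  </url>
</urlset>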
Manage your sitelinks
Your site may be triggering Sitelinks in the search results, so check the links and make sure the destination pages are fully functional. Remember: in Webmaster Tools you can remove any sitelinks that you don't think users will find useful.
Don't forget to check out these additional resources:
- Read our recently released SEO Starter Guide.
- Watch our Tutorials for Webmasters.
- Find out what information Google has about your website in Webmaster Tools.
- Get your other questions answered in our Webmaster Help Center.
- Ask your last-minute questions in the Webmaster Help Forum.
- Countdown to 2009!
Thursday, December 18, 2008
Sitemap Submission Made Simple

Sitemap file formats supported by Google
Part of what makes the web so interesting is that there are so many different kinds of content out there. Do you use videos on your website? If so, send us a Video Sitemap file so that we can send you visitors to those videos! Do you host source-code samples? Submit a Code Search Sitemap! Here are the various kinds of Sitemap files that Google supports at the moment:
- XML Sitemap files for web pages - Use these files to submit all of your web pages (this is the preferred format for web pages). While not all search engines may support the Sitemap types listed below, the XML Sitemap for web pages is supported by all of the search engines behind sitemaps.org.
- RSS 2.0 and Atom 1.0 feeds for web pages - Many blogs create these automatically.
- Text files with web page URLs - If you can't automatically create one of the above formats, you can create a text file with your URLs in it.
- XML Sitemap files for Video Search - Videos on your website can be indexed and made available for Google Video Search.
- Media-RSS feeds for Video Search - mRSS feeds are used by various other systems, we can use these for Google Video Search as well.
- XML Sitemap files for Google Code Search - If you make programming samples or code available to your users, you can submit these for Google Code Search.
- XML Sitemap files for mobile web pages - Using this kind of format allows us to recognize content that has been optimized for mobile devices (please note that there was recently a small change in the format).
- XML Sitemap files for geo-data - If you have geographic data on your website in the form of KML or GeoRSS files, please let us know about these files.
- XML Sitemap files for News - News websites can submit their news content in this special Sitemap format (please note that you must first register with Google News before these files are processed).
If you have multiple Sitemap files that you wish to submit to Google, you can include up to 1,000 of these in an XML Sitemap Index file. If you have more than 1,000 Sitemap files, you can just submit multiple Sitemap Index files - we'd love to take them all!
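As a sketch, a small Sitemap Index file referencing two Sitemap files could look like this (the filenames are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.example.com/sitemap-pages.xml</loc>
    <lastmod>2008-12-18</lastmod>
  </sitemap>
  <sitemap>
    <loc>http://www.example.com/sitemap-videos.xml</loc>
  </sitemap>
</sitemapindex>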
Submitting your Sitemap files to Google
Once you have your Sitemap files ready and available on your server, all that's left is making sure that the search engines can find them. Google supports three simple ways to submit Sitemap files:
- Using Google Webmaster Tools
Submitting your Sitemap files through Google Webmaster Tools is the preferred way of letting us know about them. The main advantage of doing it this way is that you'll always have direct feedback about how your Sitemap files were downloaded (were we able to reach your server?), how they were recognized (were they in the right format?) and what happened to the web pages listed in them (how many were indexed?). To submit your Sitemap files, make sure that your website is verified in Webmaster Tools, then go to "Sitemaps" in Webmaster Tools and enter the file name of your Sitemap(s).
Sometimes it makes sense to keep your Sitemap file on a different server or domain name. To submit Sitemap files like that, you must verify ownership of both sites in Webmaster Tools and submit the Sitemap on the appropriate site. For instance, if your Sitemap file for http://www.example.com is kept on http://sitemap-files.example.com/, then you need to verify ownership of both sites and then submit the Sitemap file under http://sitemap-files.example.com (even though the URLs listed in it are for http://www.example.com). For more information, please see our Help Center topic on submitting Sitemap files for multiple sites.
- Listing Sitemap files in the robots.txt file
Another way of submitting a Sitemap file is to specify its URL in your robots.txt file. If you use this method, the Sitemap file will be found by all search engines that support the Sitemaps protocol (although not all of them support the extensions listed above). Since you can specify the full URL of your Sitemap file in the robots.txt file, this method also allows you to store your Sitemap file on a different domain. Keep in mind that while Sitemap files submitted this way are processed on our side, they will not be automatically listed in your Webmaster Tools account. In order to receive feedback on your files, we recommend adding them manually to your account as well.
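For example, a robots.txt file pointing to a Sitemap hosted on a different domain might look like this (the hostnames are placeholders):

User-agent: *
Disallow:

Sitemap: http://sitemap-files.example.com/sitemap.xml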
- Using an HTTP "ping"
If your Sitemap files are generated automatically, a convenient way to submit (and re-submit) them is to access the "ping" URL for Google Sitemaps. This URL includes the URL of your Sitemap file. For more information on the "ping" URL for your website, please see the Help Center article on updating a Sitemap. Feel free to "ping" this URL whenever you update your Sitemap file - we'll know to pick it up and process it again. If you also have your Sitemap file registered in Webmaster Tools, we'll update the status there as well. This method is also valid if your Sitemap file is kept on a different server, but you must still verify both sites in Webmaster Tools, as previously mentioned.
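As an illustration, pinging us about a Sitemap at http://www.example.com/sitemap.xml means fetching a URL along these lines, with your Sitemap's URL encoded into the sitemap query parameter (see the Help Center for the exact address to use for your site):

http://www.google.com/webmasters/tools/ping?sitemap=http%3A%2F%2Fwww.example.com%2Fsitemap.xml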
Search engines that are members of sitemaps.org support a similar way of submitting general web Sitemap files.
We hope these simplifications make it even easier for you to send us your Sitemap files!
Posted by John Mueller, Webmaster Trends Analyst, Google Zürich
Wednesday, December 17, 2008
Webmaster Tools in 40 languages!
In our recent Webmaster Tools launch, we went live in 14 new languages, bringing our total language support count to 40! With the launch of Bulgarian, Catalan, Croatian, Filipino, Greek, Indonesian, Lithuanian, Latvian, Portuguese (Portugal), Slovak, Slovenian, Serbian, Ukrainian and Vietnamese, Webmaster Tools joins Google products such as Google.com, AdWords, Gmail and Toolbar to reach the 40 Language Initiative (Google's company-wide initiative to make sure Google products are available in the 40 languages read by more than 98% of Internet users).
Our team is very excited to reach so many of you by offering our tools in 40 languages. At the same time, both the Google Localization and Webmaster Tools teams know that there's more room for improvement in the features and quality of our service. We hope to hear your input in the comments below, especially on the linguistic quality of our new languages.
Written by Kidus Asfaw, Google Localization
Sunday, December 14, 2008
Has anyone implemented this and seen results yet? ...
Thursday, December 11, 2008
Message Center info through our API
Recently we mentioned some updates in the Webmaster Tools GData API: we've just launched a whole new API, the Message Center GData API, as part of the Webmaster Tools API. The Message Center is the way that Google communicates important issues regarding their sites to webmasters—for example, if there's a problem crawling your site, or if someone has requested a change in crawl rate. Until now it was only possible to access these messages through the Message Center section of Webmaster Tools; now you can also use GData to access them as a feed. This way you don't need to continually check your messages in Webmaster Tools: you can retrieve the messages feed automatically and be informed as soon as possible of any critical issues regarding your site.
What can I do?
The Message Center GData API lets you retrieve all messages, mark the messages as read or unread, and delete messages. You can do these tasks using the provided Java client libraries, or you can create your own client code based on the protocol information.
- Retrieve messages: The messages feed contains all the messages sent to your account. These messages have important information about your verified sites. Examples of messages include infinite spaces warnings and crawl rate change notifications.
- Mark messages as read or unread: In order to keep track of new communications from Google, you can mark your messages as read or unread, the same way that you would manage your inbox. If you retrieve a single message, this message will be automatically marked as read.
- Delete messages: It's possible to delete messages using the GData API. However, be careful: if you delete a message through the API, it will also be deleted in your Webmaster Tools account, as both interfaces share the same data.
How do I do it?
You can download code samples in Java for all these new features. These samples provide simple ways to use the messages feed. The following snippet shows how to retrieve the messages feed in a supported language and print all the messages:
// Connect with the service and authenticate
WebmasterToolsService service
    = new WebmasterToolsService("exampleCo-exampleApp-1");
try {
  service.setUserCredentials(USERNAME, PASSWORD);
} catch (AuthenticationException e) {
  System.out.println("Username or password invalid");
  return;
}

// Retrieve the messages feed
MessagesFeed messages;
try {
  URL feedUrl;
  if (USER_LANGUAGE == null) {
    feedUrl = new URL(MESSAGES_FEED_URI);
  } else {
    feedUrl = new URL(MESSAGES_FEED_URI + "?hl=" + USER_LANGUAGE);
  }
  messages = service.getFeed(feedUrl, MessagesFeed.class);
} catch (IOException e) {
  System.out.println("There was a network error.");
  return;
} catch (ServiceException e) {
  System.out.println("The service is not available.");
  return;
}

// Print the messages feed, flagging unread messages
System.out.println(messages.getTitle().getPlainText());
for (MessageEntry entry : messages.getEntries()) {
  if (entry.getRead()) {
    System.out.print("   \t");
  } else {
    System.out.print("new\t");
  }
  System.out.print(entry.getDate().toUiString() + "\t");
  System.out.println(entry.getSubject());
}
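Deleting a message follows the standard GData pattern of deleting an entry. Here's a rough sketch reusing the messages feed from the snippet above (remember that deleting through the API also deletes the message in your Webmaster Tools account):

// A sketch: delete the first message in the feed, if there is one.
// Be careful: this also removes it from your Webmaster Tools account,
// as both interfaces share the same data.
try {
  if (!messages.getEntries().isEmpty()) {
    MessageEntry first = messages.getEntries().get(0);
    first.delete(); // standard GData deletion via the entry's edit link
  }
} catch (IOException e) {
  System.out.println("There was a network error.");
} catch (ServiceException e) {
  System.out.println("The message could not be deleted.");
}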
Where do I get it?
If you want to know more about GData, you may want to start by checking out the GData website. The homepage of the Webmaster Tools GData API contains a section on the messages feed, with details about the protocol. You can also download the sample Message Center client from the GData download site. It will show you how to use all the Message Center GData API features.
Written by Javier Tordable, Software Engineer
Monday, December 8, 2008
Reintroducing your English Webmaster Help Google Guides
Also in Mountain View: Evan, Jessica, and Nate.
Guides in Zürich, Switzerland: John Mueller and Balázs.
Written by Reid Yokoyama, Search Quality
Friday, December 5, 2008
Friend Connect now available in beta to everyone
So to any of you who were interested in Friend Connect after our first announcement, and to all of the newer readers out there: go ahead and give Friend Connect a try.
Written by Mendel Chuang, Product Marketing Manager
One place for changing your site's settings
The settings that have been moved to the new Settings page are:
1. Geographic Target
2. Preferred domain control
3. Opting in to enhanced image search
4. Crawl rate control

Changing a Setting
Whenever you change a setting, you will be given an option to save or cancel the change.

Please note: The Save/Cancel option is provided on a per-setting basis, so if you change multiple settings, you'll have to click the Save button associated with each setting.
Expiration of a setting
Some of the settings are time-bounded. That is, your setting will expire after a certain time period. For example, the crawl rate setting has an expiration period of 90 days. After this period, it's automatically reset to the default setting. Whenever you visit the Settings page, you can view the date that your setting will expire underneath the setting name.

That's all there is to it!
We always like adding features and making our interface clearer based on your suggestions, so keep them coming! Please share your feedback (or ask questions) in the Webmaster Help Forum.
Written by Jonathan Simon, Webmaster Trends Analyst and Nanda Kishore, Webmaster Tools Intern
Thursday, December 4, 2008
A new look for our Webmaster Help Group
Googlers strongly believe in dogfooding our own products. We manage our work schedules with Google Calendar, publish our blogs on Blogger, and store scads of documentation on Google Sites. So, ever since we launched our first Webmaster Help Group, we've been using Google Groups to facilitate conversations about Webmaster Tools and web search issues.
Today, however, I'm thrilled to announce that our English and Polish Help Groups are getting a makeover. And the changes are more than just skin-deep. Our new Help Forums should make it easier for you to find answers, share resources with others, and have your participation acknowledged.
You can read more about the changes on the Official Google Blog, and then check it out for yourself: English, Polish.
Q: What will happen to the old English and Polish Help Groups?
A: While our old groups are now closed to new posts, they will still be available in read-only mode in case you want to reference any of your favorite posts from the good old days. Many of the most frequently-asked questions (and answers!) have already been transferred to our new Help Forums.
Q: If I was a member of the old group, will I automatically be a member of the new forum?
A: We won't be "transferring" membership from the old groups to the new, so even if you were a member of our Google Groups forum, you'll still need to join the new forum in order to participate. Nicknames and user profiles are also managed separately, so you're welcome to recreate your Google Groups profile in our new forum, or reinvent yourself.
Q: What about the Webmaster Help Groups in other languages?
A: They'll be moving to the new Help Forum format in 2009. Specific dates will be announced in each of the groups as they get closer to their moving date.
Feel free to post any other questions about the new Help Forums in the comments below.
Written by Susan Moskwa, Webmaster Trends Analyst
More control of Googlebot's crawl rate
Googlebot determines the range of crawl rate values available to you in Webmaster Tools, based on our understanding of your server's capabilities. This range may vary from one site to another and across time, based on several factors. Setting the crawl rate to a lower-than-default value may affect the coverage and freshness of your site in Google's search results; however, setting it to a higher value than the default won't improve your coverage or ranking. If you do set a custom crawl rate, the new rate will be in effect for 90 days, after which it resets to Google's recommended value.
Thursday, November 6, 2008
Hi, I have a new question, however I believe it should be...
the question and also discussions
I have seen that a lot of site owners have reported the same problems. For instance, we have a site that is now not showing for phrases or words prevalent in the domain name, title, etc., where other terms that are less related, but still important, are showing. Opposed to this, another site that only recently went live is now surpassing the site in question for similar search phrases. Can anyone advise? I have assumed that relevance to the domain name, when searching for that name or phrase, should count, and for every other site and account it seems to be true, whereas the site in question is not highly profiled and, if it wasn't listed on Google Maps, would not be found for its name within the domain name. The site in question has had a Sitemap submitted and has been verified, and we also have it in Google Analytics. The site shows for many terms, but the puzzle is that it's not high for its most expected term. Also, when we type in the full domain name www.thesite.com.au (not with site:), the site shows, however it only showed two times, where any other site would dominate for the better part (now it is the other way). Are there any issues or things I should consider, or who should I talk to? This would help us understand more and give us the opportunity to adjust and build a better profile.
Wednesday, November 5, 2008
RE: "we sometimes run into situations where import...
That would make content private (by the publisher's CHOICE), correct?
Saturday, November 1, 2008
Some things I'll probably never understand
Friday, October 31, 2008
FCF has been around, though perhaps not well known...
I don't know if it's possible to truly secure this (especially with Google's insisting that there be no per-individual limit on FCF access). Chances are that the reader knows the "title" of the restricted content. Clearly the reader knows the site's URL. All the reader has to do, as eddie said, is search Google for that title with a "site:" phrase. It doesn't matter what fancy-pants crypto, Web Services, etc. Google might build for FCF -- hacking around FCF will always be fairly easy to do. So the value proposition of FCF is inversely proportional to the cost & difficulty of paying for the restricted content. WSJ wants $103 for an annual subscription, so naturally people will try to abuse that more than if they only asked for $10 per year, or were actually willing to sell a seven-day pass for $1.99.
Thursday, October 30, 2008
John- Can you tell us how long we have to adhere t...
Can you tell us how long we have to adhere to these guidelines before being penalized?
If penalized, would it be just the offending pages or the entire site?
Thanks
Saturday, October 25, 2008
great idea, but unfortunately as is, poor implemen...
the main question is: what will keep people from using the "site:" search modifier, or faking the referrer, to see my entire site's premium content for free? it doesn't have to be a perfect solution, but, as is, it's just too easy to be useful.
one idea is to make FCF only available to users logged in to google. google would then pass an anonymous reference to the google user ID to the content provider's site, and the owner of the site can decide how many FCFs to give each user (1, 3, 5, 10, who knows). google would make a web service available to verify that the user ID being supplied was recently active in a google search. of course this locks the user even more into the googlesphere, and restricts opening up FCF to users from other SEs. now if everyone was on openID that problem too would be solved.
using the referrer alone is certainly not protective enough to be useful.
Thursday, October 23, 2008
I agree it may be enough for some types of site in...
I'm suggesting that the entire model of allowing access based on referer is flawed, and suggesting a model by which IMO a better result could be achieved.
FCF is based on the idea that users want to be able to access stuff fully when they click on it (From Google, and by extension from any search engine or directory - why stop there?). I fundamentally don't think that this model is sustainable or securable.
I'm suggesting that it would be a better outcome for everyone if searchers were simply able to distinguish between these results, and choose to filter based on whether they would tolerate signup or pay-for links.
Someone has to pay for the web, and not all content is freely available. I think people will be much more understanding about this (and hence links will convert better to customers) if it's clear to them that the information will cost or require signup before they click on the link.
> Subscription/paid sites need signups to be sustainable, and
> for a key aspect of their business cannot rely on such
> weak "security".
It could be enough for many sites to just show the registration box to, say, 80% of the people, not caring about all the different ways power users would see the content, as it's not really about "securing" the site from users (after all, Google searchers can see it anyway; though the case may be different if you want to "secure" your site from non-Googlebot crawling). It's more about deceiving, uhm, I mean convincing enough people to send around the link they found on Google believing that the content will show for their friend, and then the friend will see the payment request instead of what was intended to be sent.
Beth Ann, 1) You don't need to spoof a referer to "...
1) You don't need to spoof a referer to "beat" this; all you need to do is do a Google search for
"site: www.premium-content-here.com" and then open all the links in a new tab. You then get the referer in a perfectly legitimate way, and get access to all the FCF content. Not difficult.
2) How long do you think it would be before browser versions / plugins / mods appear with a "Via Google" button that reloads the page with a Google referer header? Various plugins and browsers already exist to alter these headers (mainly for debugging purposes), and don't require any specialist knowledge.
Subscription/paid sites need signups to be sustainable, and for a key aspect of their business cannot rely on such weak "security".
IMO, a better approach to this would be for search engines to support metadata about content (via the Sitemap) that specifies whether it's free or requires signup. These flags would appear in the search results, and searchers could specify whether they want free/pay/subscribe sites in their results. Robust authentication of googlebot could allow webmasters to permit indexing of a full article whilst retaining it as premium content for customers.
Cloaking content in this way would be far less damaging, as users would be aware that they were about to view a pay / subscription link, and would have the option to filter their search results to restrict these sites if they wished.
If extra processing & development costs are an issue, Google could potentially charge businesses a small amount per clickthrough on links cloaked like this - in the manner of Adwords. The customer would be well aware that the link was pay / register beforehand, so the conversion rates would be high. Google searchers would of course want to be confident that search rankings were not otherwise affected by this, and would be able to filter those results if desired.
Deep / hidden web indexed, customers happy that they are in control of their search results. Businesses happy that their content can be found and conversion rates are high. Google making more money.
Everyone is happy, the world becomes a better place. Free cake for everyone. Steve hailed as saviour of teh interwebs.
Now I'm off to solve the Middle East. What's all the fuss about?
Wednesday, October 22, 2008
@beth ann: The real problem are not those few peop...
My feelings on this "service" are mixed, but I've ...
Tuesday, October 21, 2008
B1, Google's cache can be instructed to ignore you...
<meta name="robots" content="noarchive">
For premium content this would seem like a good idea anyway, but it doesn't solve all the issues.
I run a subscription-based website and I also sell...
Any subscription-based website admin/author will tell you that they are continually trying to find the happy medium of giving enough teaser (public) material to show what the site is about, its quality and depth, etc, but not giving it all away. After all, your aim is to have people pay to read the rest of it.
This balance can and is already achieved by admins selecting some info for public viewing and restricting the rest.
I'm a big fan of Google and the technology it has introduced over the years, but I don't think this suggestion in its current form will be embraced because Google is asking to circumvent this crucial and delicate balance.
I might be more interested in this if Google's search engine would not display the FCF content in its search results (won't this show up in the cached area?), yet still have the search engine match any terms that the bot may have found in the FCF page. Also, I don't think users should be given any FCF privileges - only the Googlebot, and only if it doesn't reveal exactly what it saw. If a user is interested in the content, then let the website decide whether to let them in or not - maybe with a free trial if they sign up, or a discount on a new subscription, or whatever the admin thinks is best.
Many thanks to Google though for at least sharing with us what goes on inside its incubator and giving us an opportunity to comment. How many other titans out there are so open and engaging? Thank you Google, I appreciate that.
- BC.
OK so Google wants me to agree to make all my prem...
WOW what a deal!
David
this is essentially what webmasterworld and google...
Here's (I think) a better solution which avoids mo...
----
1) Update the sitemap spec to support the following true/false flags for each page
* Free View (Is it cost free to view the full page)
* Free Preview (Is it cost free to view a preview of the page)
* Registration View (Is registration required to view the page)
* Registration Preview (Is registration required to view a preview of the page)
* Cache Preview (Should a preview of the page be cached)
* Cache page (Should the full page be cached)
(Depending on the model you require, you could base this on pages or articles - which would require some other modifications to the sitemap spec. This could easily be done without breaking for existing sitemaps.)
For each sitemap, indicate if SSL is supported for the URLs spidered (or alternate URLs, etc)
2) Equip googlebot with a client certificate to identify itself.
When googlebot spiders a site, the site can decide if it wants to let Google index the full article or not. If they are a big enough site to support SSL, they can validate the client is Google with higher certainty.
3) In the search listings, show flags depending on whether the information is free to view/preview and whether registration is required. Allow users to filter search results on these criteria, and setup default preferences.
Penalise sites if they are reported (and confirmed) as lying about the cost and registration flags.
Result: You can index the hidden web in a way allowed by the content authors, but still enable the end users to be in control of the type of sites their search will return.
Webmasters could build their registration and authentication system based on the rules in their sitemap file, so everything would be automatically up-to-date and in agreement between themselves and the search engines.
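For illustration, a page entry under this hypothetical scheme might look something like the following (the element names are invented for the sake of the example, since this spec change doesn't exist):

<url>
  <loc>http://www.example.com/articles/premium-article.html</loc>
  <access:freeView>false</access:freeView>
  <access:freePreview>true</access:freePreview>
  <access:registrationView>true</access:registrationView>
  <access:cachePage>false</access:cachePage>
</url>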
Just pop the cheque in the post ;)
Monday, October 20, 2008
This is not something Google is "creating" just en...
What it boils down to, is will this increase registration conversion rates at sites?
There are two sides to this: first off, if showing the "first click for free" does increase the conversion ratio, then you should do that for everyone who visits your site, not just those coming from Google results.
Second off, if you decide to do this only for Google users, people are going to spoof their referer ID to get to your content for free, not just for the first click, but for everywhere this is enabled on your site, which seems like it would reduce registration rates.
I see absolutely no reason a website should choose to do this.
There seems to be some confusion here on this issu...
*However*, if you allow googlebot to index the whole page, but only show a limited page to browsers (whether referred from Google or not), then it is cloaking.
Google would possibly (probably?) argue that it's okay as long as the googlebot indexed page has the same content as the page seen by users *when they click through from Google*
I wonder why anybody would want to implement this. Maybe some of the above enthusiasts could explain what good they see here?
I added my thoughts here: http://blogoscoped.com/a...
Looks like a new term should be coined for the th...
Also, it's a pity Google has changed its stance on cloaking. Time to update the Webmaster Guidelines?
Cloaking refers to the practice of presenting different content or URLs to users and search engines. Serving up different results based on user agent may cause your site to be perceived as deceptive and removed from the Google index.
Some examples of cloaking include:
* Serving a page of HTML text to search engines, while showing a page of images or Flash to users.
* Serving different content to search engines than to users.
Am I reading this right? Website owners are now exp...
Website owners are now expected to reward users for using Google to navigate their site rather than the Website's own navigation.
A search for "site: www.restricted-content-here.com" and then opening all the documents in a new tab would seem to provide all "first click free" content, and when people figure this out, the only way those sites will make any money / signups is by having their valuable content unavailable through this system, and only having teaser content for free - Just like it is now.
Further to that, with this proposal, Google doesn't seem to be providing code for owners to use to filter these users, or treating the search traffic differently in any way. While it's not rocket science to implement, implying that this is a Google service seems dubious, as Google don't seem to be actually doing *anything* here.
I can see the benefit to general web users (In the short term perhaps), but what's the incentive for website owners to actually implement this? Are they going to be penalised in the search rankings if they don't?
I'm just a bit baffled - am I missing something important being offered here? I assume I must be, as Google usually offers well thought through products.