Tuesday, October 21, 2008

great info man...

B1, Google's cache can be instructed to ignore your page by using a meta tag in the document:

<meta name="robots" content="noarchive">

For premium content this would seem like a good idea anyway, but it doesn't solve all the issues.

I run a subscription-based website and I also sell a plugin for WordPress that provides subscription services, so I've wrestled with this subject a lot.

Any subscription-based website admin/author will tell you that they are continually trying to find the happy medium of giving away enough teaser (public) material to show what the site is about, its quality and depth, etc., without giving it all away. After all, your aim is to have people pay to read the rest of it.

This balance can be, and already is, achieved by admins selecting some information for public viewing and restricting the rest.

I'm a big fan of Google and the technology it has introduced over the years, but I don't think this suggestion in its current form will be embraced, because Google is asking sites to circumvent this crucial and delicate balance.

I might be more interested in this if Google's search engine did not display the FCF content in its search results (won't it show up in the cached copy?) yet still matched any terms the bot found on the FCF page. Also, I don't think users should be given any FCF privileges: only the Googlebot, and only if it doesn't reveal exactly what it saw. If a user is interested in the content, let the website decide whether to let them in, perhaps with a free trial if they sign up, a discount on a new subscription, or whatever the admin thinks is best.

Many thanks to Google, though, for at least sharing with us what goes on inside its incubator and giving us an opportunity to comment. How many other titans out there are so open and engaging? Thank you Google, I appreciate that.

- BC.

OK, so Google wants me to agree to make all my premium content free to anyone who either 1) uses Google's site-specific search instead of the navigation I provide, or 2) fakes the referrer.

WOW what a deal!

David

This is essentially what WebmasterWorld and Google have been doing since the former erected a paywall.

Here's (I think) a better solution which avoids most of these problems. Implement this, and pay me a commission on revenue (just kidding), or give me a job and I'll implement it for you (kidding a bit less ;) ).

----

1) Update the sitemap spec to support the following true/false flags for each page (a sketch of what such an entry might look like follows this list):

* Free View (Is it cost-free to view the full page?)

* Free Preview (Is it cost-free to view a preview of the page?)

* Registration View (Is registration required to view the page?)

* Registration Preview (Is registration required to view a preview of the page?)

* Cache Preview (Should a preview of the page be cached?)

* Cache Page (Should the full page be cached?)

(Depending on the model you require, you could base this on pages or articles, which would require some other modifications to the sitemap spec. This could easily be done without breaking existing sitemaps.)

For each sitemap, indicate whether SSL is supported for the URLs spidered (or alternate URLs, etc.).
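
To make this concrete, here's a rough sketch of how a site could emit such an entry. The "access" namespace URI and the element names are invented for illustration; nothing like this exists in the real sitemap spec (that's the proposal).

    import xml.etree.ElementTree as ET

    # Hypothetical namespace for the proposed access-control flags; the
    # URI and element names are made up for this sketch.
    SM_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
    FCF_NS = "http://example.com/schemas/access-flags/0.1"

    ET.register_namespace("", SM_NS)
    ET.register_namespace("access", FCF_NS)

    urlset = ET.Element(f"{{{SM_NS}}}urlset")
    url = ET.SubElement(urlset, f"{{{SM_NS}}}url")
    ET.SubElement(url, f"{{{SM_NS}}}loc").text = "https://example.com/articles/42"

    # The six true/false flags from the list above, one element each.
    flags = {
        "freeView": "false",            # the full page is not free
        "freePreview": "true",          # but a preview is
        "registrationView": "true",     # registering unlocks the full page
        "registrationPreview": "false", # no registration needed for the preview
        "cachePreview": "true",         # the preview may be cached
        "cachePage": "false",           # the full page may not
    }
    for name, value in flags.items():
        ET.SubElement(url, f"{{{FCF_NS}}}{name}").text = value

    print(ET.tostring(urlset, encoding="unicode"))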

2) Equip Googlebot with a client certificate to identify itself.

When Googlebot spiders a site, the site can decide whether it wants to let Google index the full article or not. If the site is big enough to support SSL, it can validate with higher certainty that the client really is Google.
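
Server-side, that check might look roughly like this. It assumes Google published a CA certificate for signing Googlebot client certificates, which it doesn't today (that's the point of the proposal); the file names and common name below are placeholders.

    import socket
    import ssl

    # Require a client certificate and verify it against the CA we assume
    # Google would publish. "googlebot-ca.pem" and the common name are
    # placeholders; no such certificate actually exists.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.pem", keyfile="server.key")
    context.verify_mode = ssl.CERT_REQUIRED   # clients without a cert are rejected
    context.load_verify_locations(cafile="googlebot-ca.pem")

    with socket.create_server(("0.0.0.0", 8443)) as server:
        with context.wrap_socket(server, server_side=True) as tls:
            conn, addr = tls.accept()         # handshake verifies the client cert
            subject = dict(pair[0] for pair in conn.getpeercert()["subject"])
            # Serve the full article only to the verified crawler; the
            # matching rule here is illustrative only.
            if subject.get("commonName") == "googlebot.example":
                conn.sendall(b"full article\n")
            else:
                conn.sendall(b"preview only\n")
            conn.close()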

3) In the search listings, show flags depending on whether the information is free to view/preview and whether registration is required. Allow users to filter search results on these criteria and to set up default preferences.

Penalise sites if they are reported (and confirmed) as lying about the cost and registration flags.

Result: You can index the hidden web in a way the content authors allow, while still letting end users control the kinds of sites their searches return.

Webmasters could build their registration and authentication system based on the rules in their sitemap file, so everything would be automatically up-to-date and in agreement between themselves and the search engines.
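
For example, the site's own auth layer could read the same (hypothetical) flags back out of the sitemap, so the rules live in exactly one place; a sketch reusing the invented namespace from above:

    import xml.etree.ElementTree as ET

    SM = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
    FCF = "{http://example.com/schemas/access-flags/0.1}"  # invented, as above

    def load_access_rules(sitemap_path):
        """Read the per-URL flags so the auth layer and the search
        engines share a single source of truth."""
        rules = {}
        for url in ET.parse(sitemap_path).getroot().iter(SM + "url"):
            rules[url.findtext(SM + "loc")] = {
                flag: url.findtext(FCF + flag) == "true"
                for flag in ("freeView", "registrationView")
            }
        return rules

    def may_view_full_page(rules, loc, registered):
        r = rules.get(loc)
        if r is None:
            return False  # not listed in the sitemap: deny by default
        # A real site would also check paid subscriptions here.
        return r["freeView"] or (registered and r["registrationView"])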

Just pop the cheque in the post ;)