
Robot Exclusion Protocol (REP)

Robot Exclusion Protocol (REP) is an important part of search engine optimization (SEO). It gives website owners the power to restrict search engines from crawling or indexing parts of a website, or the whole site. Not all search engines follow the REP, but the big three (Google, Yahoo and Bing) have adopted a strategy of working together to support it. The most important question for those who are new to SEO will be: why would anyone want to restrict search engines from indexing or crawling their site? Well, there can be several reasons, some of which are:

If you are building a new website, you don't want search engines to crawl or index a site that is barely complete, has broken links, or has nothing more than an "under construction" page.
Your site is going through a maintenance phase in which a few of its sections show an "under construction" page.
Some content on your site is duplicated but is important and can't be removed. You don't want search engines to index the duplicate content, as this can harm your page/site ranking.

Now let's come to the main point: how is REP applied? When we talk about REP we are actually talking about several things: robots.txt, XML sitemaps, robots meta tags and the nofollow link attribute.

robots.txt
As the name suggests, this is a simple text file which should reside in the root folder of your website. The robots.txt standard is almost always applied at the site-wide level. robots.txt is the way to tell crawlers what not to crawl; it has nothing to do with indexing. robots.txt is not an absolute requirement for every website, but its use is highly encouraged, as it can play an important role in SEO issues such as content duplication.

robots.txt directives
In this section we will discuss the various robots.txt directives:

User-agent
Allow
Disallow
Sitemap
Crawl-delay

Let's first talk about the wildcards used in robots.txt: * and $. These are the only two wildcards used in robots.txt, and they work much as they do in regular expressions: '*' matches any number of characters, and '$' matches the end of a URL.

User-agent
This directive indicates which crawler(s) the directives that follow apply to. It is always followed by one or more Allow or Disallow directives.
User-agent: googlebot
User-agent: *

The first line targets Google's crawler only; the second is the general form, meaning that the directives that follow apply to all crawlers.

MAJOR SEARCH BOTS - SPIDER NAMES:
Google: googlebot
MSN Search: msnbot
Bing Search: msnbot
Yahoo: yahoo-slurp
Ask/Teoma: teoma
GigaBlast: gigabot
Scrub The Web: scrubby
DMOZ Checker: robozilla
Nutch: nutch
Alexa/Wayback: ia_archiver
Baidu: baiduspider
Google Image: googlebot-image
Yahoo MM: yahoo-mmcrawler
MSN PicSearch: psbot
SingingFish: asterias
Yahoo Blogs: yahoo-blogs/v3.9

Allow
The Allow directive tells web crawlers that the specified resources can be crawled.
User-agent: *
Allow: /images/

This means all crawlers are allowed to crawl the images/ directory of your site.

Disallow
As the name suggests, the Disallow directive prohibits crawlers from crawling specified resources of your site. It is the original directive of the protocol.
User-agent: *
Disallow: /private/

This prohibits all crawlers from crawling the private/ directory of the site.
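The two wildcards described earlier can also be used inside Disallow rules. Here is a short sketch; the paths are hypothetical, and wildcard support is limited to crawlers (such as Google, Yahoo and Bing) that honor * and $:

User-agent: *
Disallow: /*.pdf$
Disallow: /*?sessionid=

The first rule blocks any URL ending in .pdf; the second blocks any URL containing a sessionid parameter, a common source of duplicate content.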

Sitemap
A sitemap is an important part of your site from a search engine's perspective, because a crawler can get all the links to crawl from one single page. The Sitemap directive of robots.txt helps crawlers locate your sitemap.
Sitemap: http://widwebway.com/sitemap.xml

The location of the Sitemap directive is not mandated; it can appear anywhere within the robots.txt file.

Crawl-delay
Only Yahoo, Bing, MSN and Ask/Teoma use the Crawl-delay directive. It tells their crawlers how frequently they may request new content. Its syntax is Crawl-delay: xx, where "xx" is the minimum delay in seconds between successive crawler accesses. Yahoo's default crawl-delay value is 1 second. If the crawl rate is a problem for your server, you can set the delay to 5 or 20 seconds, or whatever value is comfortable for your server. Setting a crawl-delay of 20 seconds for Yahoo-Blogs/v3.9 would look something like:
User-agent: Yahoo-Blogs/v3.9
Crawl-delay: 20
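Putting these directives together, a complete robots.txt for the examples in this section might look like the following sketch (the paths and the sitemap URL reuse the values shown above; adjust them to your own site). Note that a crawler obeys only the most specific User-agent group that matches it, so rules are repeated in the agent-specific group rather than inherited from the * group:

User-agent: *
Disallow: /private/
Allow: /images/

User-agent: Yahoo-Blogs/v3.9
Disallow: /private/
Crawl-delay: 20

Sitemap: http://widwebway.com/sitemap.xml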

Robots Meta Directives
Unlike robots.txt, robots meta directives operate at the page level only and provide indexing instructions to the search engines that obey them. There are two types of meta directives:

Those that are part of the HTML page. Those that the web server sends as HTTP headers.

HTML Meta Directives
The HTML meta directives, and what each one does, are listed below:
Noindex: instructs search engines not to index this page.
Nofollow: instructs search engines not to follow any outbound links on this page.
Nosnippet: instructs search engines not to show a snippet (text excerpt) for this page in search results.
Noarchive: instructs search engines not to show a cached link for this page.
Follow: allows search engines to follow outbound links (the default).
Index: the default, implied directive that says "index this page."
More than one directive can be combined in a single meta tag:
<meta name="robots" content="noindex, nofollow">

HTTP Header Directives
HTTP header directives are important because not all web documents are HTML pages. Search engines index a wide variety of documents, including PDF files and Word documents. If you are using an Apache web server, you could add the following line to your .htaccess file:
Header set X-Robots-tag "noindex, noarchive, nosnippet"
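Because the X-Robots-Tag header is sent by the server, it can be scoped to specific file types. As a sketch for an Apache server with mod_headers enabled (a hypothetical configuration that applies the directives only to PDF files):

<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, noarchive, nosnippet"
</FilesMatch>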

nofollow Link Attribute
The basic idea is that links marked with the nofollow attribute will not pass any link juice to the spammer sites.
<a href="http://widwebway.com/" rel="nofollow">Web Development Company</a>

The Robots Exclusion Protocol is widely used, but the protocol is purely advisory. It relies on the cooperation of the web robot, so marking an area of a site out of bounds with robots.txt does not guarantee privacy.

What is a 'landing page' in Google Analytics?


Question: Q: How do you make a page a landing page in Google? Answer: A 'landing page' in Google Analytics refers to the FIRST page a visitor 'lands on' when arriving from another website. So if they search for 'Brooklyn Office Space' and they land on your HOME PAGE, it gets counted (for that visit) as a landing page. If they land deeper in your site, then that first page is the landing page. Now, the second definition of a 'landing page' is a page that you SPECIFICALLY create to attract Google traffic. So, for instance, you might create a page with a TITLE of 'Brooklyn Office Space' or 'Brooklyn Industrial Space,' and thoroughly optimize the page with a good TITLE, META DESCRIPTION, keyword-heavy content, H1, etc., plus a link from your home page. That page would then 'attract' searches from Google and thereby be a landing page in both senses of the word: 1) a page specifically designed to attract Google searches, and 2) the first page that people 'land' on when they enter your website. The landing pages of a website are simply the pages through which people enter the website, so it is very important to determine the top landing pages of your website before you proceed further. Below are some data limitations that may make you rethink your trust in the top landing pages report:
1. The report section displays only entrances, bounces and bounce rate.
2. The data is not sufficient to provide conversion data.
3. Revenue generated by the landing pages cannot be displayed.

Other than the traditional methods, there are ways to analyze landing pages from an ROI perspective. One way is to analyze the landing page from an existing Google Analytics report; the other is to segment the landing page using custom reports. Here's how to do it using the available reports in Google Analytics: in GA, go to Visitors > Map Overlay and then use the dimension drop-down menu to select Landing Page.

This report helps us see how these landing pages performed when visitors signed up for a quote or bought products. The Map Overlay Landing Page report is handy, and if the built-in report does not work for you, you can customize your own. Custom reports always give a better way to analyze the insights.

Now we can compare whatever we want. The Goal and E-commerce data are separated here, but we can pack it all into one tab if we would like. By adding this to the dashboard we get greater insight into visitor traffic and site performance, and that is, indeed, good to go!

The Value Of Landing Pages


Monday, August 31, 2009 | 12:09 PM
Imagine that we're launching a brand new advertising campaign for our new e-commerce website that sells empanadas, my favorite food. The structure of the website is simple. We have a homepage, a few category pages that list empanadas by type (baked, fried, etc.), and hundreds of individual pages for each type of empanada (ham and cheese, steak, chicken, veggie, etc.).
Website structure


Given this site design and our goal to sell as many empanadas as possible, let's look at this question: Which type of landing page (home, category, or product) leads people to purchase more empanadas? To answer it, we'll use two Google Analytics features, Custom Reports and Advanced Segments, to find out exactly, in dollars, which is the best type of page. And to perform this analysis we need one of two things: 1. e-commerce or 2. goals with a goal value. Searching for the answer in Landing Pages First go to the Content > Landing Pages.


This report is naturally a good place to start but it only gives us three metrics: Entrances, Bounces and Bounce Rate. I want to know dollar amount, not bounce rate. To get the value of each landing page we have to create a custom report. Step 1) Create the Custom Report Go to Custom Reporting and create the following report: Dimension: Landing Page Metrics: Entrances, Abandonment Rate, Goal Completed and Value per visitor


Great. Now I know the average value of any visitor who starts on these pages. On average, the value per landing page is $0.07. This means that for all people who arrive at my website, each person will on average buy $0.07 worth of empanadas. Not much, huh? However, as you can see, some pages have a consistently much better conversion rate than others. For example, my home page -- /home.html -- gives me a per-visit value of $0.10. I'd like to compare that with my other two page types: product and category. We could go through this list and pick out one by one which is better, or write a regular expression in the search filter box, but an easier and more flexible way to identify these pages is via Advanced Segments. Step 2) Create the Advanced Segment Take a minute to think about the layout of your website. Is there a unique identifier that lets you segment your landing page types? If there isn't, then ask your webmaster what you can do to get around this problem. In our example, remember that our website is very simple. Every empanada page contains the word empanada.html, every category page contains category.html, and the home page is home.html. To begin with, let's create a category segment. Create the "Category" Advanced Segment 1. Go to Advanced Segments > Create New. 2. Dimension: Landing Page 3. Contains "category.html" 4. Name it "Visits that land on Category." 5. Save and Apply to report Ouch! Visitors that land on my category pages spend an average of $0.04. Much worse than the average of $0.07. Now let's compare with what happens when a user lands on an individual empanada product page. It's the same process as above except we use Landing Page Contains "empanada.html."

Create the "Empanada" Advanced Segment 1. Go to Advanced Segments>Create New. 2. Dimension: Landing Page 3. Contains "empanada.html" 4. Name it "Visits that land on empanada." 5. Save and Apply to report Here is what we get:

Wow! Visits that see a product page before anything else spend $0.30 on average. That's over 7 times the value of the category landing pages. Which pages should we use? Our empanada pages, of course! We no longer have to guess which page is best. Even if we have hundreds of different types of empanadas, we can calculate to the penny the potential value of focusing our advertisements on products. Yeah, that's nice, but how do I do the same for my website? The above is a great example of full-circle analytics: set up goals, then create the reports and segments you need to analyze the success of those goals. We chose to look at landing pages, but after you have goals, reports and segments in place, you can do most analyses. Here are the key takeaways: 1. Most importantly, your URLs must have a unique identifier (like our ?type=empanadas) so you can segment by page type, AND you need either an e-commerce implementation or a goal value. 2. Instead of thinking home, category, and product, think home, broad, or specific.

Usually, the more specific and focused the landing pages, the better. 3. If you don't run an e-commerce website, don't worry: you can do the same analysis. For e-commerce websites it's much easier to calculate an exact dollar return, but we can also use goal value to calculate user value. So, if you don't sell a product, your goal might be to have users fill out a contact form. If for every 100 users that fill out the form you gain 5 leads that over a month spend an average of $100 each, then the value of your form is (5 x $100) / 100 = $5 per form completed. This goal value can then be used to calculate landing page value.

How to Use Custom Variables in Google Analytics


By Craig Buckler | June 7, 2011 | Traffic Analysis | Web Tech
It's easy to track campaigns, ecommerce transactions and JavaScript events in Google Analytics. Custom variables offer a further level of control which permits you to segment all visitor data, e.g.:

monitoring pages viewed by members and non-members
discovering which products are bought by new and existing customers
categorizing content by topic.

A single custom variable is set using the following code:

_gaq.push(["_setCustomVar", index, name, value, scope]);

where:
index (required): You can set up to five custom variables on any page, so the index is an integer between 1 and 5. I recommend keeping things simple: define five or fewer custom variables per website and assign a consistent index to each. It's possible to create more if they are spread over multiple pages, but it can lead to confusion.

name (required): The custom variable name.
value (required): The custom variable value. It's possible to set numeric values, but data is passed and treated as a string.
scope (optional): An integer between 1 and 3, where:

1 is visitor level: the custom variable data persists for every visit and page viewed by an individual.
2 is session level: the custom variable data persists during the single visit made by an individual.
3 is pageview level: the custom variable data only persists during the current pageview.

For example, the following visitor-level variable could be set after the user registers and logs in for the first time. It would only need to be set once because it would remain associated with the user (even when they log out of our system):

_gaq.push(["_setCustomVar", 1, "Member", "yes", 1]);

Perhaps we now want to segment members by the number of months they've been registered (up to a maximum of 12 months). We could set a session-level variable whenever they log in:

_gaq.push(["_setCustomVar", 2, "RegisteredFor", Math.min(months, 12) + " months", 2]);

If we want to track which topics are of interest to users, we could set a pageview-level variable:

_gaq.push(["_setCustomVar", 3, "Topic", "JavaScript", 3]);

Custom variables are not sent immediately and should normally be set prior to calling _trackPageview(). They are also sent when a custom event occurs with _trackEvent(), but that's not a situation you can rely on. Our full code could therefore be:

var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXXXX-X']);

// set custom variables
_gaq.push(["_setCustomVar", 1, "Member", "yes", 1]);
_gaq.push(["_setCustomVar", 2, "RegisteredFor", Math.min(months, 12) + " months", 2]);
_gaq.push(["_setCustomVar", 3, "Topic", "JavaScript", 3]);

// track page view
_gaq.push(['_trackPageview']);

(function() {
  var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
  ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
  var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();

Custom Variable Reporting


The easiest way to view custom variables in Google Analytics is the Visitors > Custom Variables report. However, Advanced Segments in My Customizations offers a greater level of analysis. Click Create new custom segment, then expand the Visitors section of the Dimensions box. Define a segment by dragging a custom variable name or value to the dimension box. The following screenshot defines a new segment named Logged In which looks for data where the Member variable is set to "yes":

Once the segment has been saved, you can open any report and click the All Visits tab at the top-right. You can restrict it to show members only by ticking the Logged In checkbox:

How is a Custom Report Different From an Advanced Segment?


The main thing here is to think of custom reports as reports, and advanced segments as filters. (NOTE: an important and hopefully not too confusing distinction: there is a separate function within Google Analytics that is actually called a filter, but it's different from an advanced segment -- if you're not familiar with it, ignore it for now and go back and check out some of these articles, read our power user's guide to Google Analytics hacks, which contains some great examples of clever use of filters, or take a look at this documentation.)

What is a Google Analytics Custom Report?


Think of custom reports as a default view of certain data that you're building or customizing to be displayed just the way you want it. For instance you can create a report to see your goal completions and unique visitors per keyword:

This is similar to a number of the reports you can quickly click through to in Google Analytics, such as the keyword report, top content report, etc. Basically, what you are doing with a custom report is creating a default template for a report that includes (and discards) exactly the metrics and dimensions you want. Here is an overview video from Google on custom reporting:

What is a Google Analytics Advanced Segment?


An advanced segment is more like a filter or additional layer of data on top of one of your reports (even a custom report, if you like). By default Google Analytics shows you a nice list of possible advanced segments:

All visits
New visitors
Returning visitors
Paid search traffic
Non-paid search traffic
Search traffic
Direct traffic
Referral traffic
Visits with conversions
Mobile traffic
Non-bounce visits

These filters allow you to create a different view of your campaign. You can filter your data and look only at the behavior of a specific segment. Here we see a great example of a custom report with an advanced segment layered on top: our conversions and unique visitors by keyword report has a filter applied so that only non-paid search traffic appears in this custom view:

Or you can add layers to your report to see not only a high-level overview, but also how different segments of your traffic have performed:

For a more multimedia view of how advanced segments work, here is a video from Google:

So What Can I Do with Custom Reports & Advanced Segments?


As you can see above, you can get some powerful insights from custom reports, advanced segments, or custom reports combined with advanced segments. The best way to use these tools is to understand both the types of insights that are available and the problems you are looking to solve. If you're doing lead gen for a B2B software company, you don't need e-commerce segments or custom reports (no matter how nifty or valuable they are to e-commerce analytics users). As you create a custom report, think about:

What are my KPIs?
What are my core sources?
How can I segment those sources into meaningful sub-segments?

You can then include things like goals, goal value, unique visitors, page views -- whatever metrics are core to your business -- as your metrics in these reports. You then want to identify major traffic sources, and meaningful sub-segments of those sources. Once you have a solid understanding of the fundamentals behind custom reports and advanced segments you'll be able to create great reports that offer you specific insights into how to improve your own online marketing efforts.

New Version of Google Analytics Includes Upgraded User Interface



Those of you who are familiar with Google Analytics and have been using the free service to monitor your site's web activity may have noticed a few changes to the look and feel of the site this month. The changes, which were released the first week of October, can be seen by clicking the New Version link in the upper right corner of the screen, next to the account name. The most notable difference between the old version and the latest version of Google Analytics is the simplified site navigation. Now there are three basic areas webmasters can go to for data: Home, Standard Reporting, and Custom Reporting. In the Home section, site owners can quickly see how many visitors they are getting, what countries these visitors represent and the overall bounce rate of their site. Standard Reporting is where site owners will find all the canned reports they were used to seeing in previous versions of Google Analytics, including: number of visits, number of pageviews, pages/visit, average time on site, etc. Custom Reporting is the area in which site owners can build their own reports and access reports they've previously created and saved.

Official: Google Analytics Gets Social Engagement Reporting


Jun 29, 2011 at 4:47pm ET by Daniel Waisberg


Google Analytics has just announced a new set of reports (and functionality) that will enable websites to track social interaction with their content. This comes as a welcome addition to the new Google +1 button, as it now enables one to measure the impact of social interactions in and outside websites (either through a Facebook like, +1 or LinkedIn share inside the website, or a +1 on search results). The new reports can be found in the Visitor section (make sure you are using the new Google Analytics) and include the following:

The Social Engagement report shows site behavior changes for visits that include clicks on any social sharing actions. +1 is added automatically, but other sharing buttons should be added through coding; see below for how to define them. This allows website owners to understand whether visitors who share behave differently from visitors who do not, or whether different types of sharers behave differently.

The Social Actions report shows the number of social actions (+1 clicks, Tweets, etc) taken on the site. This can be helpful to prioritize which share buttons should be in the header of an article, for example:

The Social Pages report shows the pages on the site driving the highest number of social actions. This is very useful for learning which content is viral and what your visitors really like to read, to the point of sharing it with their friends.

This change is so meaningful that Google went the extra mile to create Social Interaction Tracking, a new tracking function used for social tracking only. Basically, the syntax is as follows:

_trackSocial(network, socialAction, opt_target, opt_pagePath)

1. network: name of the social network (google, facebook, twitter, digg, etc.)
2. socialAction: type of action (like, tweet, send, stumble)
3. opt_target: subject of the action being taken. Optional; defaults to the URL being shared (document.location.href). Can be manually set to anything: a different URL (if they're sharing content that points to another URL), an entity (e.g. a product name or article name), or a content ID.
4. opt_pagePath: the page on which the action occurred. Optional; defaults to the URI where the sharing took place (document.location.pathname). Can be manually set (like a virtual pagename).

For a more technical overview of how to implement this tag for Facebook and Twitter, visit the code site article. As concluded on the Google Analytics launch post (link above): Social reporting is just getting started. As people continue to find new ways to interact across the Web, we look forward to new reports that help business owners understand the value that social actions are providing to their business. So +1 to data!
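As an illustration of the _trackSocial syntax above, here is a minimal sketch for reporting Facebook likes into the async queue. It assumes the Facebook JavaScript SDK is loaded on the page and that _gaq is already set up as in the earlier snippets:

// Subscribe to the Facebook "like" event and forward it to Google Analytics.
FB.Event.subscribe('edge.create', function(targetUrl) {
  // network, socialAction, opt_target (the URL that was liked)
  _gaq.push(['_trackSocial', 'facebook', 'like', targetUrl]);
});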

Custom Variables
Custom variables are name-value pair tags that you can insert in your tracking code in order to refine Google Analytics tracking. With custom variables, you can define additional segments to apply to your visitors other than the ones already provided by Analytics. This document describes custom variables and how you can set them up.
1. Overview
2. Using Custom Variables
   1. Example Use Cases
   2. Usage Guidelines

Overview
You'll get the most out of custom variables if you understand the basic visitor interaction model used in Google Analytics. In this model, the visitor interacts with your content over a period of time, and the engagement with your site is broken down into a hierarchy. The diagram illustrates this model for a single visitor to your site, where each block represents the number of user sessions and interactions from that particular user. Each level in this model is defined as follows:

Visitor: the client that visits the site, such as the browser or mobile phone operated by a person.

Session: the period of time during which the visitor is active on the site.
Page: activity on the user's behalf which sends a GIF request to the Analytics servers. This is typically characterized by a pageview, but it can include:
- a pageview
- an event (e.g. a click on a movie button)

Each of these three levels of interaction defines a specific scope of visitor engagement. This distinction is important to custom variables because each custom variable is restricted to a particular scope. For example, you might want to know the number of sessions where visitors removed an item from their shopping cart. For that particular case, you would define the custom variable at the session level, so that the entire session for that visitor is flagged as one in which items were removed from online carts.

Using Custom Variables


Because you can set up a variety of custom variables to track user activity for your site, you'll most commonly create your own JavaScript utilities to manage them. Your script will use the basic method for creating a custom variable as follows:
_setCustomVar(index, name, value, opt_scope)

This method accepts four parameters:

index: The slot for the custom variable. Required. This is a number whose value can range from 1 to 5, inclusive. A custom variable should be placed in one slot only and not be re-used across different slots.
name: The name for the custom variable. Required. This is a string that identifies the custom variable and appears in the top-level Custom Variables report of the Analytics reports.
value: The value for the custom variable. Required. This is a string that is paired with a name. You can pair a number of values with a custom variable name. The value appears in the table list of the UI for a selected variable name. Typically, you will have two or more values for a given name. For example, you might define a custom variable name gender and supply male and female as two possible values.
opt_scope: The scope for the custom variable. Optional. As described above, the scope defines the level of user engagement with your site. It is a number whose possible values are 1 (visitor-level), 2 (session-level), or 3 (page-level). When left undefined, the custom variable scope defaults to page-level interaction.

The following code snippet illustrates how you might set a custom variable for tracking those visits where users removed items from their shopping cart. Here, the _setCustomVar() method is called just before a _trackEvent() method, so that it gets delivered in the GIF request sent by the _trackEvent() method. It uses the name Items Removed with the value Yes in order to define that activity from the website users. In addition, it would make sense to also set a default custom variable with the name Items Removed and the value No. In this way, you would have a count of visits where items were removed from shopping carts, and a count of visits that didn't include item removal.

Async Snippet (recommended)


_gaq.push(['_setCustomVar',
   1,               // This custom var is set to slot #1. Required parameter.
   'Items Removed', // The name acts as a kind of category for the user activity. Required parameter.
   'Yes',           // The value of the custom variable. Required parameter.
   2                // Sets the scope to session-level. Optional parameter.
]);
_gaq.push(['_trackEvent',
   'Shopping',      // category of activity
   'Item Removal'   // action
]);

Once you have set up custom variables, you can use the _deleteCustomVar(index) method to remove them.

Example Use Cases


Custom variables can be implemented in many different ways, depending upon your website model and business needs. The examples explore different use cases, where each one illustrates a different level of scope:

Page-level Custom Variables
Session-level Custom Variables
Visitor-level Custom Variables

Page-level Custom Variables


Use page-level custom variables to define a collection of page-level activities by your users. For example, suppose you manage the website for an online newspaper, where visitors view many different articles. While it is easy to determine which particular articles are the most popular, you can now also use custom variables to determine which sections of the newspaper are popular. This is done by setting a custom variable at the page level for each article, where the section for that article is set as a custom variable. For example, you might have sections such as Life & Style, Opinion, and Business. You could set a custom variable to track all your articles by section. Async Snippet (recommended)
_gaq.push(['_setCustomVar',
   1,              // This custom var is set to slot #1. Required parameter.
   'Section',      // The top-level name for your online content categories. Required parameter.
   'Life & Style', // Sets the value of "Section" to "Life & Style" for this particular article. Required parameter.
   3               // Sets the scope to page-level. Optional parameter.
]);

Let's continue with this example and suppose that you not only want to tag the section for a particular article, but also the sub-section. For example, the Life & Style section of your newspaper might have a number of sub-sections as well, such as Food & Drink, Fashion, and Sports. So, for a particular article, you can track both the section and the sub-section. You could set an additional custom variable to track all your articles by sub-section. Async Snippet (recommended)
_gaq.push(['_setCustomVar',
   2,             // This custom var is set to slot #2. Required parameter.
   'Sub-Section', // The 2nd-level name for your online content categories. Required parameter.
   'Fashion',     // Sets the value of "Sub-Section" to "Fashion" for this particular article. Required parameter.
   3              // Sets the scope to page-level. Optional parameter.
]);

In this example, you set two simultaneous page-level custom variables for a single page. For any single page, you can track up to five custom variables, each with a separate slot. This means that you could assign three additional custom variables on this same page. For all articles on your website, you can set up an array of page-level custom variables to track them by a variety of sections and sub-sections. For more information on how to correctly use page-level custom variables, see Usage Guidelines below.

Session-level Custom Variables


Use session-level custom variables to distinguish different visitor experiences across sessions. For example, if your website offers users the ability to log in, you can use a custom variable scoped to the session level for user login status. In that way, you can segment visits by those from logged-in members versus anonymous visitors. Async Snippet (recommended)
_gaq.push(['_setCustomVar',
   1,           // This custom var is set to slot #1. Required parameter.
   'User Type', // The name of the custom variable. Required parameter.
   'Member',    // Sets the value of "User Type" to "Member" or "Visitor" depending on status. Required parameter.
   2            // Sets the scope to session-level. Optional parameter.
]);

Suppose you wanted to track both the user type and whether a purchase attempt occurred in a given session. If we assume that every page offers the user the ability to log in, we would want to reserve slot #1 for the User Type custom variable and use another slot for the purchase attempt: Async Snippet (recommended)
_gaq.push(['_setCustomVar',
   2,                   // This custom var is set to slot #2. Required parameter.
   'Shopping Attempts', // The name of the custom variable. Required parameter.
   'Yes',               // The value of the custom variable. Required parameter.
                        // (you might set this value by default to No)
   2                    // Sets the scope to session-level. Optional parameter.
]);


Visitor-level Custom Variables

Use visitor-level custom variables to distinguish categories of visitors across multiple sessions. For example, if your website offers premium content to paying subscribers, you can set a visitor-level custom variable to analyze which users are paying members, at which level of payment, and which users are using the free level of service for the site. You would likely set this custom variable as a one-time function, since the value would persist for the life of the visitor cookie. Async Snippet (recommended)
_gaq.push(['_setCustomVar',
   1,             // This custom var is set to slot #1. Required parameter.
   'Member Type', // The name of the custom variable. Required parameter.
   'Premium',     // The value of the custom variable. Required parameter.
                  // (possible values might be Free, Bronze, Gold, and Platinum)
   1              // Sets the scope to visitor-level. Optional parameter.
]);


Usage Guidelines
This section describes the differences between the different types of custom variables and how to use them correctly:

Types of Custom Variables
Use Caution When Mixing Different Variable Types
Recommended Practices

Types of Custom Variables


The summary below defines the key characteristics of the different variable types. Bear in mind that there are certain restrictions when the same slots are used by different variables. The total combined length of any custom variable name and value may not exceed 64 bytes. Keep in mind that this is not equivalent to 64 characters: because names and values are URI-encoded when stored, some characters use more than one byte. For example, = is stored as %3D and uses 3 bytes rather than 1. To get a list of URI-encoded values, search the web for "URL encoding reference."

Page-level
- Duration: a single pageview, event, or transaction call.
- When sharing a slot with other variables: the last page-level variable to be called on a page is the one applied to that page.
- Number allowed: for any web property (collection of pages), many unique page-level variables can be set and slots can be re-used; limited only by the number of hits in a given session. For any single page, you can set up to 5 simultaneous custom variables.

Session-level
- Duration: the current session of the visitor.
- When sharing a slot with other variables: the last session-level variable called in a session is the one used for that session. Example: if login=false for slot #1 at the beginning of the session and login=true for slot #1 later on, the session is set to true for login. It over-rides any previously set page-level variable called in the same session. Example: if slot #1 is first used for category=sports and then for login=true in a session, category=sports will not be recorded for that session.
- Number allowed: for any web property, you can create as many distinct session-level custom variables as can be defined with the 64-byte key-value pair limit. For any given user session, you can set up to 5 session-level variables.

Visitor-level
- Duration: the current session and all future sessions for the life of the visitor cookie.
- When sharing a slot with other variables: the last value set for a visitor is the one applied to the current and future sessions.
- Number allowed: for any web property, you can create up to five distinct visitor-level variables.

Use Caution When Mixing Different Variable Types


Generally it is not recommended to mix the same custom variable slot with different types, as it can lead to strange metric calculations. When you use multiple page-, session-, and visitor-level custom variables for your web property, you need to carefully determine the re-use of slots. If a situation arises on your website where a page-level and a session-level custom variable use the same slot at the same time, only the one set last will be recorded. The following scenarios illustrate a mix of page-, session-, and visitor-level variables set by a single user on the same browser. In each example, the slot is indicated by the number in parentheses and S: indicates the scope of the variable.

Case 1 - Final Session-Level Variable Takes Precedence


Here the final page re-uses a session-level custom variable in slot 1, so it receives precedence.

Visit 1:
- Page 1: slot (1), S: page-level, section=opinion
- Page 2: slot (1), S: session-level, login=true
- Page 3: slot (1), S: session-level, converted=true

The report for visits would be:


# visits for section=opinion: 0
# visits for login=true: 0
# visits for converted=true: 1
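For reference, the Case 1 sequence above corresponds to calls like the following sketch, using the async syntax shown earlier (the slot, names and values come straight from the table above):

// Page 1: page-level variable in slot 1
_gaq.push(['_setCustomVar', 1, 'section', 'opinion', 3]);
_gaq.push(['_trackPageview']);

// Page 2: session-level variable re-uses slot 1
_gaq.push(['_setCustomVar', 1, 'login', 'true', 2]);
_gaq.push(['_trackPageview']);

// Page 3: another session-level variable in slot 1; this is the value reported for the visit
_gaq.push(['_setCustomVar', 1, 'converted', 'true', 2]);
_gaq.push(['_trackPageview']);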

Case 2 - Initial Visitor-Level Variable Takes Precedence


Here slot 1 is first used by a visitor-level custom variable in visit 1, followed by a page-level custom variable in visit 3. In this order, the visitor-level variable does not over-write the page-level variable.

Visit 1:
- Page 1: slot (1), S: visitor-level, gender=male

Visit 2:
- Page 1: slot (2), S: session-level, converted=false

Visit 3:
- Page 1: slot (1), S: page-level, section=opinion

The report for visits would be:


# visits for gender=male: 2
# visits for converted=false: 1
# visits for section=opinion: 1

Recommended Practices

- Do not use duplicate key names across slots.
- You have up to 5 simultaneous custom variables for use in a single request (e.g. a pageview or event call). The sum of all your custom variables cannot exceed 5 in any given request (i.e. you cannot have 5 visitor-level and 5 session-level custom variables set simultaneously).
- Call the _setCustomVar() function when it can be set prior to a pageview or event GIF request. In certain cases this might not be possible, and you will need to send another _trackPageview() request after setting the custom variable. This is typically only necessary in situations where the user triggers a session- or visitor-level custom variable and it is not possible to bundle that method with a pageview, event, or ecommerce tracking call.
- Use a slot matrix to track large numbers of custom variables. If you have complex tracking requirements, where you have a mix of page- and session-level variables that might collide, you should build a slot matrix to ensure that session-level variables do not inadvertently over-ride page-level variables.
- Consider using Event Tracking for certain applications, rather than custom variables. For example, suppose you have an online music store and you want to track login sessions, purchase-attempt sessions, and sessions where music samples were played. It would make sense to use Event Tracking to track the number of attempts to play music rather than using session-level variables. Here, you could use the 4th (value) parameter of the event tracking call to pass in session data from your own cookies (see the sketch after this list).
- Don't use session-level variables to track behavior you can track with page-level variables. For example, suppose you track login status and shopping-attempt status by session, and your site offers a "Members' special" page that you also want to track. Since a page-level custom variable will show the number of visits to that particular variable, you will already have available the number of visits that included that page at least once.
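Here is a minimal sketch of the event-tracking alternative mentioned above for the online music store; the category, action, label and cookie-derived count are hypothetical names, not part of any standard:

// Track a music-sample play as an event instead of a session-level custom variable.
var playsThisSession = 3; // e.g. read from your own cookie
_gaq.push(['_trackEvent',
  'Music',            // category
  'Play Sample',      // action
  'sample-track-01',  // label (hypothetical track id)
  playsThisSession    // value: numeric session data passed in the 4th parameter
]);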

Search queries
The Search Queries page provides information about Google Web Search queries that have returned pages from your site. In addition, you can also see information about the pages on your site that were seen most often in search results (top pages). To specify the period for which you want to see data, use the calendar dropdowns above the graph. (By default, we'll show the last 30 days of data, along with how the daily average for the selected period compares with the daily average for the previous period.) To view the Search Queries page:
1. On the Webmaster Tools home page, click the site you want.
2. On the left-hand menu, click Your site on the web, and then click Search queries.
Search queries data includes the following:

Queries: The total number of search queries that returned pages from your site over the given period. (These numbers can be rounded, and may not be exact.)
Query: A list of the top search queries that returned pages from your site.
Impressions: The number of times pages from your site were viewed in search results, and the percentage increase/decrease in the daily average impressions compared to the previous period. (The number of days per period defaults to 30, but you can change it at any time.)
Clicks: The number of times your site's listing was clicked in search results for a particular query, and the percentage increase/decrease in the average daily clicks compared to the previous period.
CTR (clickthrough rate): The percentage of impressions that resulted in a click to your site, and the number of percentage points increase or decrease in the daily average CTR compared to the previous period. For example, if your CTR went from 40% to 30%, this column would show a change of -10.
Average position: The average position of your site on the search results page for that query, and the change compared to the previous period. Green indicates that your site's average position is improving. To calculate average position, we take into account the ranking of your site for a particular query (for example, if a query returns your site as the #1 and #2 result, then the average position would be 1.5).

Filtering query data


By default, the Search Queries page shows combined query stats for all search types. To filter the data several ways, click Filters. For example, you can:

See stats for specific Google search properties. Click All and then click Image, Mobile (smartphone) (for example, iPhone, Palm, Android), Mobile, Video, or Web. The same query can appear in several different views, so the combined number of queries for each search type may not match the number of queries shown for All.
See stats for queries containing (or excluding) a certain word or phrase. Select Containing (or Not containing) in the Queries list and then type the search terms to include or exclude.
See stats for starred queries only.
Filter by location.
Exclude search queries that generate very little traffic (fewer than 10 impressions or clicks).

To see additional information about a query, such as the position of your page on the Google search results page and the URL of the page returned by the search query, click the query. The Query Details page provides a list of pages on your site that appeared in search results for that query, along with impressions, clicks, and CTR. In addition, the Position column shows how often your site appeared in a specific position in search results. For example, if Position 1 has 36 impressions, it means that there were 36 searches for the query in which your site was the very first site listed in search results. Note: The data on the Query Details page reflects any filters you set on the main Search Queries page.

How to use Search Queries data


This data can provide valuable information about your site. We recommend the following steps:

Review the Query list for expected keywords. If keywords you expect to see don't appear, your site may not have enough useful content relevant to those keywords. If unexpected words (like "Viagra" or "casino") appear, it's likely that your site has been hacked.

Compare Impressions and CTR to identify how you can improve your content. (Tip: Sort by Change to see queries with significant new activity.) There are several steps you can take to make your content appear more compelling so that users click your site in search results pages. Your page title appears in the results, so make sure it's relevant and accurate. Google can display the text in your pages' meta descriptions in search results, so review your meta descriptions.

If you have an AdWords account, review the Query list for keyword ideas. (Looking for more ideas? Check out the Publisher's Guide to Toolbar.)

About Search Queries data


The data we display may differ from the data displayed in other tools, such as Google Analytics. Possible reasons for this include:

Webmaster Tools does some additional data processing (for example, to eliminate duplicates and visits from robots) that may cause your stats to differ from stats listed in other sources.
Some tools, such as Google Analytics, track traffic only from users who have enabled JavaScript in their browser.
Some tools define "keywords" differently. For example, the Keywords tool in Google AdWords displays the total number of user queries for that keyword across the web. The Webmaster Tools Search Queries page shows the total number of keyword search queries in which your page's listing was seen in search results, and this is a smaller number.
There may be a lag between when the numbers are calculated and when they are visible to webmasters; although data gets published in intervals, we are continually collecting it.

If you can no longer see a search query you saw recently, make sure you haven't filtered the results by country or type of search. Webmaster Tools aggregates query information and displays search queries once the count of each query reaches a certain threshold. Your logs may show a particular query as having a high rank for a certain day or period, yet that query may not appear on the Search Queries page. If the query continues to be a top referrer, however, it will move to the top of our aggregate results and will appear on the Search Queries page. Also, Webmaster Tools stats show only search queries from Google. Your log files may combine search results from all search engines.

"Impressions: The number of times pages from your site were viewed in search results, and the percentage increase/decrease in the daily average impressions compared to the previous period. (The number of days per period defaults to 30, but you can change it at any time.)"

Search engine optimization, or SEO, is the art of placing your website in the first few pages of a search engine's results for a strategically defined set of keywords. In simple words, SEO helps your website appear on the first page of a search engine like Google when someone searches for your product or service.

On-Page Optimization Tips


1. Avoid the following things, which can get you in trouble with search engines:
- Don't use hidden text or hidden links.
- Don't employ cloaking or sneaky JavaScript redirects.
- Don't load up your pages with irrelevant words.
- Don't create multiple pages, sub-domains, or domains with substantially duplicate content.

2. Did you know that every page of your website stands on its own? Every page should have a unique title, description, and keywords tag. The description tag should describe ONLY that page. The keywords tag should include keywords for just that page. Include 5-6 keywords, including the main keyword phrase and synonyms of that keyword phrase. Don't make the mistake of including every keyword that could possibly describe what your site is about in your keywords tag. Make your keywords meta tag specific for each page. The keywords tag holds very little importance anyway, but be sure to make it page specific. FOCUS! (A combined example of these tags follows these tips.)

3. Do you have a site map on your website? In Google's Terms of Service, they suggest that you use a site map, so set one up immediately! There are many excellent programs that will create site maps for you. Put a link to your site map on every single page of your site, and link to your other pages using link text that describes those pages. This link building service includes Google Sitemap Generator Setup, absolutely FREE.

4. Stay away from little keywords. When you're looking for terms to optimize for, especially primary key phrases, it is generally a good idea to stay away from little keywords such as "the," "and," "or," "for," etc.

5. Does a keyword in the domain name make sense? People often ask how important it is to include the keywords they want to rank for in their domain name. The answer is that it's not essential, but it helps a lot.

6. Make the title 3 to 9 words! Page title elements are normally 3-9 words (60-80 characters) maximum in length, no fluff, straight and to the point. This is what shows up in most search engine results as a link back to your page. Make sure your page title element (title tag) is relevant to the content on the page.

7. Describe your site in META! The META description tag usually consists of 25 to 30 words or less, using no more than 160 to 180 characters total (including spaces). The META description also shows up in many search engine results as a summary of your site. Make sure your META description tag is relevant to the content on the page.

8. Insert keywords in the META keywords tag. For those search engines that are META enabled, the META keywords tag used to be one of the most important areas after the page title and page description. It has been so abused by marketers and consumers alike that there is now very little weight given to it. Don't fret over your META keywords tag. Utilize keywords and keyword phrases from your title element, META description tag, heading tag and first one or two paragraphs of visible content. Try to limit it to 15 to 20 words if possible.

9. Don't forget about heading tags. At least one heading tag <h1> should appear at the top of your page and be well written using prime keywords and keyword phrases. You can use CSS to control the appearance of the heading tags.

10. Do you know what Alt text is? Alt text is the line of text you see pop up when you place your cursor over an image. It also displays a text representation of the image when the user has images turned off in their browser (this is the intended behavior). It is highly recommended that you utilize this area, as it is required under accessibility laws and is indexed by the search engines.

11. Alt in Internet Explorer is different than in other browsers. Internet Explorer (IE) will display alt text when you hover your cursor over an element that utilizes the alt attribute. This is incorrect behavior, as the alt text is designed to be displayed when the user has their images turned off while browsing. Other browsers such as Opera and Mozilla will not display the alt text on hover.

12. Traits of Alt. The alt attribute should not be stuffed with keywords or phrases. The alt text should mirror the content of the image. If it is a graphic header, then your alt attribute should mirror the text in the graphic header. Alternative text values should not exceed 80 characters in length. If more than 70-80 characters are required, use the longdesc attribute as an alternative to alt text. Make sure your alt attribute is relevant to the content of that image.

13. Avoid graphic links. Many web sites utilize graphic representations of links. These are visually appealing, but the text in the image cannot be indexed by the spiders.

14. Content is king. Content (visible copy) weighs heavily and is considered one of the primary areas of search engine optimization and marketing. Your content should be written in a way that grabs the user's attention while utilizing your targeted keywords and keyword phrases. There is a method to the placement of the keywords and keyword phrases that will help your web site gain better placement in the search engines. Balance is essential, and creating that balance takes knowledge and experience.

15. Add content regularly. You should make it your goal to add at least one new page of content daily if possible. If not, then once a week is acceptable. You want to keep your website content fresh and give your visitors something to come back for on their next visit. Stale website content may not perform as well as fresh website content. Utilize last-modified dates on your pages so that visitors to your site know when the page was last modified and how fresh the content is.

16. Name files with keywords. Instead of naming your file pagename.asp, you would name it keywordphrase.asp or page-name.asp. Always use hyphens (-) to separate the words in your file names, and use all lower case for file names; this includes images too.

17. Name directories appropriately. Be descriptive when naming directories. Don't get carried away, but make sure at least one keyword or keyword phrase appears in the directory name. Don't forget to use hyphens (-) to separate the words.

18. Don't stuff keywords in the title. Don't stuff a bunch of keywords into your title separated by commas. It is one of the most unprofessional practices, and it doesn't work well for scan-friendliness.

19. Define which products have the highest profit margin. A product that is searched for less but has a higher profit margin will be easier to rank for in the search engines and will yield higher revenue.

20. Use only one <h1> tag per page, and use your keyword phrase in the tag. Use it toward the top of the page. Make sure it captures your visitors' attention as soon as they land on your site. Also, make sure your first paragraph or the first words you use are interesting and designed to hold your visitors' attention. If you don't grab their attention and HOLD IT in the very beginning, they'll hit the back button and go back to the search results . . . it's as simple as that.

21. Another tip for freshening up some of your title tags: think about turning some of your titles into a question. Asking a question is a great attention getter.
- Where on earth would you go if you had your pilot license?
- What would you do if your career was terminated?
- When should you submit your web pages?
By asking a question, you create more response because it makes the reader think.

22. Did you know that there is some relevancy advantage for larger sites once you have around 100 pages or more indexed? Something seems to happen around the 100-page mark. Remember to focus on building quality content of genuine value to your readers.

23. How fast do your pages load? Did you know that if your page loads too slowly, you could be deterring some search engine robots from crawling your website? Try to ensure your page loads in under 30 seconds (or even faster). If your graphics are too bulky, check out a service called http://www.optiview.com to accelerate your page loading speed.

24. Prominence is best described as how close to the beginning of a specific area a keyword appears. A keyword or keyword phrase that appears closer to the top of the page or area may often be considered mildly more relevant.

25. Next time you are working on an important sales letter, remember to spend at least 50% of your time working on that all-important headline text. This includes both the title tag and your main <h1> header on the page. Little changes to heading text can make huge impacts with a little thought. Here is a variety of free validators you can use to check your CSS style sheets or HTML validation, plus several other unique free tools from the W3.org: http://www.w3.org/QA/Tools/

26. How compelling are your META descriptions?
- Do you remember to use your important keyword phrase in the description?
- Are you taking time to write unique descriptions for each of your pages?
- How about a call to action? Do you remember to include good calls to action in your description?
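Pulling several of these tips together (a unique, focused title; a page-specific META description and keywords tag; a single <h1>; descriptive alt text; and hyphenated, keyword-based file names), a minimal page skeleton might look like the sketch below. The keyword phrase, file names and link targets are hypothetical placeholders, not taken from any real site.

<!DOCTYPE html>
<html>
<head>
  <title>Brooklyn Office Space - Affordable Lofts and Suites</title>
  <meta name="description" content="Browse affordable Brooklyn office space, lofts and suites available for lease. Request a quote today.">
  <meta name="keywords" content="brooklyn office space, brooklyn lofts, office space for lease">
</head>
<body>
  <h1>Brooklyn Office Space</h1>
  <p>The opening paragraph uses the main keyword phrase naturally and is written to hold the visitor's attention.</p>
  <img src="brooklyn-office-space.jpg" alt="Loft office space in Brooklyn">
  <p><a href="site-map.html">Site map</a></p>
</body>
</html>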

51 SEO points for OnPage Optimization



1. Place your keyword phrases in the title tag; keep the title under 65 characters.
2. Place the most important keyword phrase close to the beginning of the page title.
3. Put your main keywords in the keywords meta tag.
4. Write a good description for the meta description tag; the description must be unique to each page.
5. Keep the meta description short and meaningful; write only one or two sentences.
6. Target the most important competitive keyword phrase on the home page.
7. Target one or two keyword phrases per page.
8. Use only one H1 header per page.
9. Place the most important keyword in the H1 tag (see the markup sketch after this list for points 1 to 9).
10. Use H2 and H3 for sub-headers where required.
11. Use bold, italic or underline on your keyword phrases for extra weight in the content.
12. Use bulleted lists to make content easier to read.
13. Use ALT attributes for images so that crawlers know what the images are about.
14. Don't use Flash on your website, because crawlers can't read Flash.
15. Keep your website's navigation as simple as possible.
16. Use text-based navigation.
17. Use CSS to create navigation menus instead of JavaScript.
18. Use keyword phrases in file names; use hyphens (-) to separate the words.
19. Create a valid robots.txt file.
20. Create an HTML sitemap for crawlers and users.
21. Create an XML sitemap for Google's crawler.
22. Add text links to other pages in the footer of the site.
23. Use keyword phrases in anchor text.
24. Link all the pages to each other.
25. Use keyword-rich breadcrumb navigation to help search engines understand the structure of your site.
26. Add a feedback form and link to it from all pages.
27. Add a bookmark button.
28. Add a subscription form on every page to grow your mailing list.
29. Add an RSS feed button so that users can subscribe easily.
30. Add social media sharing buttons.
31. Use images on every page, but don't forget the ALT attribute.
32. Use videos on your site that are related to your niche.
33. Write informative, fresh, unique, useful content for your site.
34. Keep page content between 300 and 500 words.
35. Keep keyword density between 3% and 5%.
36. Don't copy content from other websites; fresh and unique content is the key to success.
37. Add deep links to related articles.
38. Regularly update your website with fresh content.
39. Use CSS to improve the look of your website.
40. Write your content for humans, not for robots.
41. Buy a country-level domain name if your website targets local users.
42. Use a good keyword suggestion tool to find good keyword phrases for your website.
43. Use a 301 redirect to point http://www.domainname.com to http://domainname.com (or the other way round; pick one canonical version).
44. Buy local hosting for your website if it targets local visitors.
45. Use keyword-rich URLs instead of dynamic URLs.
46. Break long articles into paragraphs.
47. Add a full contact address on the contact page, with a map and directions.
48. Validate XHTML and CSS at http://validator.w3.org/.
49. Don't use hidden text or hidden links.
50. Avoid graphic-only links, because text inside an image cannot be indexed by spiders.
51. Don't create multiple pages with the same content.
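To make points 1 to 9 concrete, here is a minimal, hypothetical page skeleton; the domain, title text and description are made-up placeholders, not taken from any real site:

<head>
  <!-- Points 1-2: keyword phrase first, whole title under 65 characters -->
  <title>Running Shoes for Beginners | Example Store</title>
  <!-- Points 4-5: one unique, meaningful sentence per page -->
  <meta name="description" content="Compare lightweight running shoes for beginners and find the right fit for your first 5K.">
  <!-- Point 3: a handful of main keywords only -->
  <meta name="keywords" content="running shoes, beginner running shoes">
</head>
<body>
  <!-- Points 8-9: exactly one H1, carrying the main keyword phrase -->
  <h1>Running Shoes for Beginners</h1>
  ...
</body>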

A List of All On-Page and Off-Page SEO Optimization Steps

1. Website analysis = A website analysis is a must for every website owner, and if you are an SEO you must know how to do one. It is more than a checklist on which you tick off whether everything (content, programming, competition and so on) works fine and according to plan.

2. Keyword Research = Keyword research is necessary for a website. It helps you find the best keywords, which in turn bring only targeted traffic from the search engines.

3. Bold, Italic effect to main keywords = These tags show the emphasis given to a particular word. This kind of tagging helps the bots understand what the owner wants to tell the user and what the main key points of the page are, which can earn it a higher ranking in the SERPs.

4. Canonicalization = URL duplication, if not resolved, can harm your website. It happens when you do not redirect the unnecessary pages that show the same data, which search engine bots may treat as duplicate content.

5. Competition Analysis = It is a must to understand the market and your competitors. To top the competition you have to analyse every step your competitors have taken so far and their probable future plans; this helps a lot in performing better in the market.

6. CSS Validation = Your website must look good not only from the outside but also from the inside, to the bots that crawl your site, so make sure your CSS is 100% correct and validate it with a CSS validator.

7. Google Base Feeds = Better known as FeedBurner, this will help you gain more subscribers from Google Reader, which leads to more and better traffic.

8. H Tags Optimization (e.g. H1, H2, H3) = An important part of SEO which helps you show the robots what is more important on your page and what is less important.

9. HTML Code Clean Up & Optimization = HTML also needs to be optimised to give the best results. Do not create a website bloated with code; keep pages as small as you can so load times improve, which also helps them rank better in the SERPs.

10. Image Optimization = This refers to using alt attributes properly. Using an alt attribute with every image will also help you get traffic from image search.

11. Hyperlink Optimization = The anchor text used in hyperlinks to your website helps you rank better for that text, so try to use keywords as the anchor text when creating links, internal or external.

12. In-depth Site Analysis = A deep analysis of your own website is necessary. It helps you detect all the errors and other malfunctions on your site.

13. Link Validation = All links must be correct and must not be broken.

14. Meta Description Tags Optimization = Your meta tags help you rank better, but you must also know how to manage them, as there are limits and rules you must follow to get the maximum benefit from them.

15. Meta Keywords Tags Optimization = Just like the meta description tag, the meta keywords tag is also important. Most people think it does not help in any way, but it really does.

16. Navigation & Design Optimization = Your navigation bar, or any other navigation system, must be clear and user friendly, and a good design that attracts the user's eye is a must.

17. PR Sculpting = It is a way to get PR from websites which already have good PR. One can receive the PR juice of well-ranking websites that link to you.

18. Robots.txt Optimization = It is a file which tells the search engine bots which places to crawl and which not to.

19. Text Modification Optimization = Text optimisation also plays an important role in ranking better in the SERPs. Apply the keyword density rule to gain more benefit from your pages.

20. Title Tag Optimization = The title is the most important tag; a good one attracts not only users but also the search bots.

21. URL Rewrite = URL rewriting helps increase the visibility of your site by making URLs readable in human language, which search engines also strongly recommend.

22. W3C Validation = With the help of the W3C validators you can check that your website is up to date and working correctly according to the latest markup rules.

23. Broken Links Checking = It is necessary to find every error on your website, even a broken link. These kinds of errors can occur internally or externally and can damage your reputation with both users and bots.

24. Directory Submissions = Directory submission helps you set the category of your website and earn good, free one-way backlinks, which help your website get indexed faster.

25. Extraction of Site URLs (Link Level Depth) = Extracting all of a site's URLs, level by level, helps in detecting duplicate titles and meta tags on those pages.

26. Internal Link Structuring = Internal link structure refers to how you link the other pages of your website to related pages; these links also serve as internal backlinks.

27. Link Building (Link Bait) = It is a procedure of reciprocal linking with websites which already have good PR, in order to share the link juice.

28. One-way Links (PR4 or Greater) = This step may include any of the off-page optimisation steps for getting one-way backlinks; sometimes they are paid and sometimes they are free.

29. Site Backlink Count = Try to get more backlinks every day; more backlinks mean better rankings and more traffic, which leads to more income.

30. Local Search Engine Optimization = If your business is local, then local search engine optimisation can help you a lot. Submitting to local directories and search engine apps will increase your chances of appearing in local searches.

31. Customer Review Submission = This is just like testimonials but on a larger scale; it includes testimonials, customer feedback, comments and so on. You can show these on many pages of your website, which will increase customers' trust in you.

32. hCard Integration = hCard is one of several open microformat standards; it represents vCard properties and values in semantic HTML or XHTML.

33. Testimonial Submission = Try to get more testimonials from reputable customers of yours; this will increase your goodwill.

34. Local Search Engine Optimization = This helps in getting a better response from internet users if you have a local business. It is helpful for small businesses such as shoemakers, restaurants, pizza shops, marts, etc.

35. Google Webmaster Tools Account Setup & Monitoring = It will help you understand the flaws and errors in your website, along with many other facilities you can use to make your website better.

36. Installing Usability Tools on the Website = If there are any tools your customers could use, provide them.

37. Optimization for Multiple Browsers = Making a website look good in all web browsers is necessary, because you do not want to lose users just because your layout does not render correctly in some browser.

38. Article Submission = It earns you a good backlink and also some visitors from good article websites.

39. Blog Comments on Relevant Blogs = Commenting on relevant blogs increases your popularity as well as providing good backlinks; just make sure that you do not spam.

40. Blog Designing for the Website = Your website's blog must look attractive and must also be similar to your website, so that users are not confused about its ownership.

41. Classified Submission = This is a free facility offered by many websites, mostly forums, for promoting your website, your products or any service you provide for free; just do not spam there.

42. Creating Promotional Pages on HubPages, Squidoo, etc. = Hub pages are good sources for attracting more users; keeping the content you submit there relevant can bring more users to your website.

43. Facebook and Twitter Marketing = Social media marketing (SMM) on social networks like Facebook and Twitter can also help you spread the word about your website.

44. Integration of Page Bookmarking Tools = You can also bookmark your bookmarks, creating a link wheel between them.

45. Integration of Page Sharing Tools = Sharing tools help you increase your link popularity through users who like to share information with others using your sharing buttons.

46. Paid Submission = On good websites people usually buy paid links on targeted pages, from which they get a backlink, PR juice and traffic as well.

47. Photo Sharing = Sharing images on websites like Flickr, Photobucket, etc. helps increase traffic through image searches.

48. PPT Submission = Uploading PPT presentations or PDF files containing good information also attracts users' attention.

49. Press Release = Any small or major change should be announced to the market to make your customers aware of it; press releases do this job.

50. RSS Feeds = RSS feeds let readers read your content without any fancy layout or irritating ads. Many users use RSS feed directories to find the content they like to read, and these sites also keep track of every post you publish, which also counts as backlinks.

51. Social Bookmarking = Bookmarking is one of the best ways to get good one-way dofollow backlinks; all you need is a good list of bookmarking websites.

52. Video Submission = A good video can make a big impression on your users; submitting to video websites like YouTube, Metacafe or Vimeo can bring more targeted traffic.

53. Article Writing = It is the same as blog writing, but the articles you write can also be submitted to other article websites or article directories.

54. Blog Writing = Providing good content that users like to read about your website, your service or your product, whichever the case may be, keeps your users in contact with you.

55. Press Release Writing = Press releases help you look more professional and help spread the word about your site.

56. Website Spell Check = You must keep your content correct, including from the point of view of spelling and grammar.

57. XML Site Map Creation & Submission = An XML sitemap helps bots crawl your website without running into the difficulties usually created by JavaScript, HTML errors and other malfunctions (see the generation sketch after this list).

58. HTML Sitemap for Users = An HTML sitemap helps users find all the links available on a website so they can go straight to the page they want.

59. Log File Analysis = By keeping track of who is visiting and when, you can tailor content or SEO to those regions. It also helps you track the visits of search engine bots, so you know when to expect them again.

60. Google, Yahoo & Bing Site Map Creation = XML sitemaps help your site get indexed much faster by letting the bots know about all the URLs of your website.

61. Deep Indexing Recommendations = Deep indexing helps you earn more links to deep pages from your own website; even as internal links they still help a lot.

62. Check Search Engine Road Blocks = Check whether there is any problem preventing robots from crawling your website; you can check this using Fetch as Googlebot in Google Webmaster Tools.
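Item 57 above mentions XML sitemap creation. As a rough sketch only (the page list and URLs are placeholders; real sites would normally pull the list from a CMS or database, or use an existing generator), a minimal sitemap in the sitemaps.org format can be produced with a few lines of PHP:

<?php
// Hypothetical list of pages to include.
$pages = array(
    'http://www.example.com/',
    'http://www.example.com/about/',
    'http://www.example.com/products/running-shoes/',
);

header('Content-Type: application/xml; charset=utf-8');

echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
foreach ($pages as $url) {
    echo "  <url>\n";
    echo '    <loc>' . htmlspecialchars($url) . "</loc>\n";
    echo '    <lastmod>' . date('Y-m-d') . "</lastmod>\n";
    echo "  </url>\n";
}
echo "</urlset>\n";

Submitting the resulting file in Google Webmaster Tools, or referencing it from robots.txt with a Sitemap: line, lets crawlers find it.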

Black hat SEO is both a myth and a reality we have to face sooner or later as SEO practitioners. While I abide by probably one of the strictest SEO codes of ethics around, and SEOptimise is a clean white hat SEO company itself, we still can't deny that there is black hat SEO. The sheer existence of black hat SEO techniques must be acknowledged for several reasons. As Rishi Lakhani noted on his new SEO blog: you need it at least to know what to avoid, or to know how competitors who perform worse than you still manage to outrank your site. The good news is: most black hat SEO techniques can be used in a clean, ethical white hat way as well.

They are like knives: you can slice bread with a knife, but you can kill with it as well. It's your decision how you use the knife. Also consider the problem with the overall perception of the SEO industry: your hat can be whiter than snow and people will still treat you as the guy with the virtual knife. Personally I think black hat SEO is for the weak. The black hat logic goes: when you can't win the game, you have to cheat. It's the same dilemma as in sports, though: when everybody cheats, how are you going to win? That's why reputable and successful SEO experts don't have to use it. OK, long story short, here are the 30 black hat techniques you can use ethically as well. Take note how I am explaining only the positive way of using each technique. I do not advocate the use of any of them in their original black hat context. Use these knives as kitchen knives:

[Image: black hat, by googlisti]
1. Hidden text: Create modern CSS-based websites with jQuery effects. They often hide large portions of text in layers, to be displayed on click or mouse-over for usability reasons. Example: CSS pagination.
2. IP delivery: Offer the proper localised content to visitors coming from a country-specific IP address. Offer the user a choice, though. Shopping.com does a great job here.
3. 301 redirects: Redirect outdated pages to their newer versions or to your homepage. When moving to a new domain, use them as well, of course.
4. Throw-away domains: Create exact-match micro sites for short-term popular keywords and abandon them when the trend subsides. Something like tigerwoodssexrehab.com.
5. Cloaking: Hide the heavy Flash animations from Google; show the text-only version optimised for accessibility and findability.
6. Paid links: Donate to charities, software developers and the like. Many of them display links to those who donate.
7. Keyword stuffing: Tags and folksonomy. Keyword-stuff by adding several tags, or let your users do the dirty work via UGC tagging (folksonomy); every major social site does that.
8. Automatically generated keyword pages: Some shopping search engines create pages from each Google search query and assign the appropriate products to each query. You can do that as well if you have enough content.
9. Misspellings: Define and correct the misspelled term and/or redirect to the correct version.
10. Scraping: Create mirrors for popular sites. Offer them to the respective webmasters. Most will be glad to pay less.
11. Ad-only pages: Create all-page ads (interstitials) and show them before users see content, like many old media do.
12. Blog spam: Don't spam yourself, get spammed! Install a WordPress blog without Akismet spam protection. Then create a few posts about Mesothelioma, for example, a very profitable keyword. Then let spammers comment-spam it or even add posts (via TDO Mini Forms). Last but not least, parse the comments for your keyword and outgoing links. If they contain the keyword, publish them and remove the outgoing links, of course. Bot-generated user content, so to say.
13. Duplicate content on multiple domains: Offer your content under a Creative Commons license with attribution.
14. Domain grabbing: Buy old authority domains that failed and revive them instead of putting them up for sale.
15. Fake news: Create real news on official-looking sites for real events. You can even do it in print. Works great for all kinds of activism-related topics.
16. Link farm: Create a legit blog network of flagship blogs. A full-time pro blogger can manage 3 to 5 high-quality blogs by her- or himself.
17. New exploits: Find them, report them, blog about them. You break the story and thus you get all the attention and links. Dave Naylor is excellent at it.
18. Brand jacking: Write a bad review for a brand that has disappointed you or destroys the planet, or set up a "brand X sucks" page and let consumers voice their concerns.
19. Rogue bots: Spider websites and make their webmasters aware of broken links and other issues. Some people may be thankful enough to link to you.
20. Hidden affiliate links: In fact, hiding affiliate links is good for usability and can be even more ethical than showing them. example.com/ref?id=87233683 is far worse than just example.com. Also, unsuspecting web users will copy your ad to forums and the like, which might break their TOS. The only thing you have to do is disclose the affiliate link as such. I prefer to use [ad] (on Twitter for example) or [partner-link] elsewhere. This way you can strip the annoying ref IDs and achieve full disclosure at the same time.
21. Doorway pages: Effectively, doorway pages could also be called landing pages. The only difference is that doorway pages are worthless crap, while landing pages are streamlined to suffice on their own. Common to both is that they are highly optimised for organic search traffic. So instead of making your doorway pages just a place to get skipped, optimise them as landing pages and make the users convert right there.
22. Multiple subdomains: Multiple subdomains for one domain can serve an ethical purpose. Just think of blogspot.com or wordpress.com; they create multiple subdomains from UGC. This way they can rank several times for a query. You can offer subdomains to your users as well.
23. Twitter automation: There is nothing wrong with Twitter automation as long as you don't overdo it. Scheduling and repeating tweets, even automatically tweeting RSS feeds from your or other blogs, is perfectly OK as long as the Twitter account has a real person attending it who tweets manually as well. Bot accounts can be ethical too, provided they are useful not only to yourself. A bot collecting news about Haiti in the aftermath of the earthquake would be perfectly legit, if you ask me.
24. Deceptive headlines: Tabloids use them all the time; black hat SEOs do too. There are ethical use cases for deceptive headlines, though. Satire is one, of course, and simple humour as well. For instance, I could end this list at 24 items and still declare this post a list of 30 items anyway. That would be a good laugh. I've done that in the past, but in a more humorous post.
25. Google bowling: The bad thing about Google bowling is that you hurt sites you don't like. You could reverse that: reverse Google bowling would mean that you push sites of competitors you like, to make those you dislike disappear below them. In a way we do that all the time, linking out to the competition, the good guys of SEO, who then outrank the ugly sites we like a lot less.
26. Invisible links: You'd never use invisible links on your sites, would you? You liar! You have. Most free web counters and statistics tools use them. Statcounter is a good example. So when you embed them on your site, you use invisible links.
27. Different content for search engines than users: Do you use WordPress? Then you have the nofollow attribute added to your comment links. This way the search engine gets different content than the user: the user sees and clicks a link, while a search bot sees a "no trespassing" sign instead. In white hat SEO it's often called PageRank sculpting. Most social media add-ons do that by default.
28. Hacking sites: While crackers hack sites, security experts warn site owners that they have vulnerabilities. Both discover the same issues. Recently I got an email from someone who warned me to update my WordPress installation. That was a grand idea, I thought.
29. Slander linkbait: Pulling a Calacanis-like "SEO is bullshit" is quite common these days. Why not do it the other way around? The anti-SEO thing doesn't work that well anymore unless you are as famous as Robert Scoble. In contrast, a post dealing with "100 Reasons to Love SEO Experts" might strike a chord by now.
30. Map spam: Instead of faking multiple addresses all over the place just to appear on Google Maps and Local, why don't you simply create an affiliate network of real-life small business owners with shops and offices who, for a small amount of money, are your representatives there? All they need to do is collect your mail from Google and potential clients.

PageRank
A PageRank results from a mathematical algorithm based on the webgraph: the graph created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or usa.gov. The rank value indicates the importance of a particular page. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself. If there are no links to a web page, there is no support for that page.
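For reference, one commonly quoted formulation of this recursion is the following, where d is the damping factor (typically around 0.85), N the total number of pages, M(p_i) the set of pages linking to p_i, and L(p_j) the number of outbound links on page p_j:

PR(p_i) = \frac{1 - d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)}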

What is a spamment?
A spam comment (or spamment) is a comment whose purpose is to list as many links as possible to promote a webpage or product. The most important element of a spamment is the link. The entire purpose of a spamment is not to get users to click on the link, but to make search engines increase the ranking of a webpage by the collective weight of hundreds of thousands of spamments, all pointing to the same URL.

Why is current anti-spam technology lacking?


Often software relies on a blacklist of URLs or IPs to protect the commenting system. I herein call such blacklists literal because they require the identification string either whole or modified with wildcards. Literal blacklists have two major shortcomings:
1. It is difficult to add new spam URLs as they arrive and to remove redundant URLs as they are changed.
2. You need to know what to block before you block it; the process is reactive rather than proactive.
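To make the contrast concrete, here is a minimal sketch of a literal blacklist check in PHP (the file name and form field are assumptions, chosen to mirror the keyword-based code shown later):

<?php
// Load one blocked domain or URL per line.
$blocked = array_filter(array_map('trim', file('include/literal-blacklist.txt')));

// The URL supplied with the comment form.
$given_url = isset($_POST['url']) ? $_POST['url'] : '';

foreach ($blocked as $entry) {
    // A literal match: the exact blacklisted string must appear in the URL.
    if ($entry !== '' && stripos($given_url, $entry) !== false) {
        echo 'Comment blocked.';
        exit;
    }
}
// Otherwise continue processing the comment.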

Keyword-based blacklisting is the future


Keyword-based blacklisting is the most viable option for future-proof spam protection. It uses a newline-delimited blacklist to see if a banned string appears anywhere within the URLs submitted with the comment. It searches only links so that a commenter won't be locked out simply for using a banned string.

Literal blacklists require the domain name (maybe even the full URL) of each spam link in order to effectively block the spamments:
http://www.penismedical.com
http://www.altpenis.com/
http://www.enlargepenisguide.com/
http://www.allabout-penis-enlargement.com
http://www.Penis-Devices.com/pumps.html
http://www.penis-enlargement-planet.com
and so on, ad nauseam.

A keyword-based blacklist can block all of those URLs with the single entry penis, and using enlarge blocks many more. You can replace tens of literal entries with a few keywords while actually improving security and lessening the blacklist's footprint on the server, and because it's case-insensitive, all variants of capitalisation are caught.
penis enlarge

And what's more, keywords let you block spam without even knowing the URL. If spammers try to send you any URLs containing penis or enlarge, they are automatically blocked.

Are there disadvantages to keyword-based blacklisting?


A keyword can easily be found in a non-spam URL, like rape in www.westerngrapes.com. Sometimes it is better to block a fragment of the URL than have an innocent user locked out, though remember that an unreasonable lock-out is the exception rather than the rule. The keywords you end up putting in the blacklist are often those that only occur in spam-related or otherwise unpleasant URLs. To help make sure that the chance of blocking innocent users is reduced, the blacklist operates depending on the complexity of its entries. A one-word entry like porn will match any instance of porn in the URLs, while a full URL like http://www.PenisDevices.com/pumps.html will match only that.

How do I implement it?


Here is the PHP code I use for keyword-based blacklists, pulled from my CMS, Writer's Block. The code assumes that you have a newline-delimited list of keywords at include/blacklist.txt.
// Remove ## comments ## from blacklist.txt.
$blacklist = preg_replace('/##.+?##/', '', file_get_contents('include/blacklist.txt'));

// Split the newline-delimited list into individual keywords.
$keywords = preg_split('/\s+/', trim($blacklist), -1, PREG_SPLIT_NO_EMPTY);

// Build one case-insensitive alternation pattern, escaping each keyword.
$escaped     = array_map(function ($k) { return preg_quote($k, '~'); }, $keywords);
$BlockedUrls = '~' . implode('|', $escaped) . '~i';

// Get all hrefs from the comment body and put them into an array.
preg_match_all('/href\s*=\s*["\']?([^"\' >]+)/i', $_POST['comment'], $urls_in_text);

// Stringify the array of URLs and append the URL field to the end.
$given_urls = implode(' ', $urls_in_text[1]) . ' ' . $_POST['url'];

// Search the string for blocked fragments and stop the script if any are found.
// (preg_match() replaces the old eregi(), which was removed in PHP 7.)
if (!empty($keywords) && preg_match($BlockedUrls, $given_urls)) {
    echo 'Comment blocked.';
    exit();
}
// If not exited, continue with the script.

I put the text of the comment and the input of the URL field into a single string because it is faster to run the regex once on a single string than to loop multiple calls over an array. Perl-compatible regular expressions (the preg_* functions) are used throughout; the old POSIX extended functions such as eregi() are slower and were removed in PHP 7.

Permalink
A permalink (a portmanteau of "permanent link") is a URL that points to a specific blog or forum entry after it has passed from the front page to the archives. Because a permalink remains unchanged indefinitely, it is less susceptible to link rot. Most modern weblogging and content-syndication software systems support such links. Other types of websites use the term permanent links, but the term permalink is most common within the blogosphere. Permalinks are often simply stated so as to be human-readable.

TrustRank
TrustRank is a link analysis technique described in a paper by Stanford University and Yahoo! researchers for semi-automatically separating useful webpages from spam.[1] Many Web spam pages are created only with the intention of misleading search engines. These pages, chiefly created for commercial reasons, use various techniques to achieve higher-than-deserved rankings on the search engines' result pages. While human experts can easily identify spam, it is too expensive to manually evaluate a large number of pages. One popular method for improving rankings is to artificially increase the perceived importance of a document through complex linking schemes. Google's PageRank and similar methods for determining the relative importance of Web documents have been subjected to manipulation.

The TrustRank method calls for selecting a small set of seed pages to be evaluated by an expert. Once the reputable seed pages are manually identified, a crawl extending outward from the seed set seeks out similarly reliable and trustworthy pages. TrustRank's reliability diminishes with increased distance between documents and the seed set. The researchers who proposed the TrustRank methodology have continued to refine their work by evaluating related topics, such as measuring spam mass.

Web scraping (also called web harvesting or web data extraction) is a computer software technique of extracting information from websites. Usually, such software programs simulate human exploration of the World Wide Web by either implementing low-level Hypertext Transfer Protocol (HTTP), or embedding a fully-fledged web browser, such as Internet Explorer or Mozilla Firefox. Web scraping is closely related to web indexing, which indexes information on the web using a bot and is a universal technique adopted by most search engines. In contrast, web scraping focuses more on the transformation of unstructured data on the web, typically in HTML format, into structured data that can be stored and analyzed in a central local database or spreadsheet. Web scraping is also related to web automation, which simulates human browsing using computer software. Uses of web scraping include online price comparison, weather data monitoring, website change detection, research, web mashup and web data integration.
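As a toy illustration of the idea (not any particular product's method; the URL is a placeholder), a few lines of PHP can fetch a page and turn part of its unstructured HTML into structured data, here the anchor text and target of every link:

<?php
// Fetch the page (placeholder URL) and parse it, ignoring minor markup errors.
$html = file_get_contents('http://www.example.com/');
libxml_use_internal_errors(true);

$doc = new DOMDocument();
$doc->loadHTML($html);

// Extract every link's anchor text and href into a structured array.
$rows = array();
foreach ($doc->getElementsByTagName('a') as $link) {
    $rows[] = array(
        'text' => trim($link->textContent),
        'href' => $link->getAttribute('href'),
    );
}

print_r($rows);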

Content spam
These techniques involve altering the logical view that a search engine has over the page's contents. They all aim at variants of the vector space model for information retrieval on text collections.

Keyword stuffing


Keyword stuffing involves the calculated placement of keywords within a page to raise the keyword count, variety, and density of the page. This is useful to make a page appear to be relevant for a web crawler in a way that makes it more likely to be found. Example: A promoter of a Ponzi scheme wants to attract web surfers to a site where he advertises his scam. He places hidden text appropriate for a fan page of a popular music group on his page, hoping that the page will be listed as a fan site and receive many visits from music lovers. Older versions of indexing programs simply counted how often a keyword appeared, and used that to determine relevance levels. Most modern search engines have the ability to analyze a page for keyword stuffing and determine whether the frequency is consistent with other sites created specifically to attract search engine traffic. Also, large webpages are truncated, so that massive dictionary lists cannot be indexed on a single webpage.

Hidden or invisible text


Unrelated hidden text is disguised by making it the same color as the background, using a tiny font size, or hiding it within HTML code such as "no frame" sections, alt attributes, zero-sized DIVs, and "no script" sections. People screening websites for a search-engine company might temporarily or permanently block an entire website for having invisible text on some of its pages. However, hidden text is not always spamdexing: it can also be used to enhance accessibility.

Meta-tag stuffing


This involves repeating keywords in the Meta tags, and using meta keywords that are unrelated to the site's content. This tactic has been ineffective since 2005.

Doorway pages


"Gateway" or doorway pages are low-quality web pages created with very little content but are instead stuffed with very similar keywords and phrases. They are designed to rank highly within the search results, but serve no purpose to visitors looking for information. A doorway page will generally have "click here to enter" on the page.

Scraper sites


Scraper sites are created using various programs designed to "scrape" search-engine results pages or other sources of content and create "content" for a website.[5] The specific presentation of content on these sites is unique, but is merely an amalgamation of content taken from other sources, often without permission. Such websites are generally full of advertising (such as pay-per-click ads[5]), or they redirect the user to other sites. It is even feasible for scraper sites to outrank original websites for their own information and organization names.

Article spinning


Article spinning involves rewriting existing articles, as opposed to merely scraping content from other sites, to avoid penalties imposed by search engines for duplicate content. This process is undertaken by hired writers or automated using a thesaurus database or a neural network.

Link spam


Link spam is defined as links between pages that are present for reasons other than merit.[6] Link spam takes advantage of link-based ranking algorithms, which gives websites higher rankings the more other highly ranked websites link to it. These techniques also aim at influencing other link-based ranking techniques such as the HITS algorithm.

Link-building software


A common form of link spam is the use of link-building software to automate the search engine optimization process.

Link farms


Link farms are tightly-knit communities of pages referencing each other, also known facetiously as mutual admiration societies[7].

Hidden links


Putting hyperlinks where visitors will not see them to increase link popularity. Highlighted link text can help rank a webpage higher for matching that phrase.

Sybil attack


A Sybil attack is the forging of multiple identities for malicious intent, named after the famous multiple personality disorder patient "Sybil" (Shirley Ardell Mason). A spammer may create multiple web sites at different domain names that all link to each other, such as fake blogs (known as spam blogs).

Spam blogs


Spam blogs are blogs created solely for commercial promotion and the passage of link authority to target sites. Often these "splogs" are designed in a misleading manner that will give the effect of a legitimate website but upon close inspection will often be written using spinning software or very poorly written and barely readable content. They are similar in nature to link farms.

Page hijacking


Page hijacking is achieved by creating a rogue copy of a popular website which shows contents similar to the original to a web crawler but redirects web surfers to unrelated or malicious websites.

Buying expired domains

Some link spammers monitor DNS records for domains that will expire soon, then buy them when they expire and replace the pages with links to their pages. See Domaining. However Google resets the link data on expired domains. Some of these techniques may be applied for creating a Google bomb that is, to cooperate with other users to boost the ranking of a particular page for a particular query.

Cookie stuffing


Cookie stuffing involves placing an affiliate tracking cookie on a website visitor's computer without their knowledge, which will then generate revenue for the person doing the cookie stuffing. This not only generates fraudulent affiliate sales, but also has the potential to overwrite other affiliates' cookies, essentially stealing their legitimately earned commissions.

Using world-writable pages


Main article: forum spam Web sites that can be edited by users can be used by spamdexers to insert links to spam sites if the appropriate anti-spam measures are not taken. Automated spambots can rapidly make the user-editable portion of a site unusable. Programmers have developed a variety of automated spam prevention techniques to block or at least slow down spambots.

Spam in blogs


Spam in blogs is the placing or solicitation of links randomly on other sites, placing a desired keyword into the hyperlinked text of the inbound link. Guest books, forums, blogs, and any site that accepts visitors' comments are particular targets and are often victims of drive-by spamming where automated software creates nonsense posts with links that are usually irrelevant and unwanted.

Comment spam


Comment spam is a form of link spam that has arisen in web pages that allow dynamic user editing such as wikis, blogs, and guestbooks. It can be problematic because agents can be written that automatically randomly select a user edited web page, such as a Wikipedia article, and add spamming links.[8]

Wiki spam


Wiki spam is a form of link spam on wiki pages. The spammer uses the open editability of wiki systems to place links from the wiki site to the spam site. The subject of the spam site is often unrelated to the wiki page where the link is added. In early 2005, Wikipedia implemented a default "nofollow" value for the "rel" HTML attribute. Links with this attribute are ignored by Google's PageRank algorithm. Forum and wiki admins can use these to discourage wiki spam.
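For reference, a link carrying that attribute looks like this (placeholder URL):

<a href="http://www.example.com/" rel="nofollow">example</a>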

Referrer log spamming


Referrer spam takes place when a spam perpetrator or facilitator accesses a web page (the referee), by following a link from another web page (the referrer), so that the referee is given the address of the referrer by the person's Internet browser. Some websites have a referrer log which shows which pages link to that site. By having a robot randomly access many sites enough times, with a message or specific address given as the referrer, that message or Internet address then appears in the referrer log of those sites that have referrer logs. Since some Web search engines base the importance of sites on the number of different sites linking to them, referrer-log spam may increase the search engine rankings of the spammer's sites. Also, site administrators who notice the referrer log entries in their logs may follow the link back to the spammer's referrer page.

Other types of spamdexing


Mirror websites
A mirror site is the hosting of multiple websites with conceptually similar content but using different URLs. Some search engines give a higher rank to results where the keyword searched for appears in the URL.

URL redirection


URL redirection is the taking of the user to another page without his or her intervention, e.g., using META refresh tags, Flash, JavaScript, Java or Server side redirects.

Cloaking
Cloaking refers to any of several means to serve a page to the search-engine spider that is different from that seen by human users. It can be an attempt to mislead search engines regarding the content on a particular web site. Cloaking, however, can also be used to ethically increase accessibility of a site to users with disabilities or provide human users with content that search engines aren't able to process or parse. It is also used to deliver content based on a user's location; Google itself uses IP delivery, a form of cloaking, to deliver results. Another form of cloaking is code swapping, i.e., optimizing a page for top ranking and then swapping another page in its place once a top ranking is achieved.

Keyword Cannibalization: Definition and Solution


Posted by Elmer W. Cagape on Jan 13, 2008 in Keywords
Keyword cannibalization happens when certain pages within your site compete against each other for certain keywords. From the word cannibal: your pages are eating the popularity of other pages to gain rankings. Sometimes it's an unavoidable occurrence, because as we develop page content we sometimes have an inherent intention to rank for keywords regardless of which pages they appear on. Instead of mapping content and pages to keywords, we think it's not so bad to place important keywords on all pages. Another situation is that when we have broad content, we tend to bundle it into one page instead of focusing specific content on separate pages. For example, suppose I sell different kinds of shoes (running shoes, kids' shoes, training shoes, trail-walking shoes, basketball shoes) and build one page that contains all of the featured products. Mapping a keyword to all of the pages will not make any one page representative enough for that keyword. As mentioned in Rand's explanation, these are the implications:

Internal Anchor Text: since you're pointing to so many different pages with the same subject, you can't concentrate the value of internal anchor text on one target.
External Links: if 4 sites link to one page on snowboards, 3 sites link to another of your snowboard pages and 6 sites link to yet another snowboard page, you've split up your external link value among three pages rather than consolidating it into one.
Content Quality: after 3 or 4 pages of writing about the same primary topic, the value of your content is going to suffer. You want the best possible single page to attract links and referrals, not a dozen bland, replicated pages.
Conversion Rate: if one page is converting better than the others, it's a waste to have multiple, lower-converting versions targeting the same traffic. If you want to do conversion tracking, use a multiple-delivery testing system (either A/B or multivariate).

So the concept of our solution to the shoe shop example is to create a specific page for each distinct product. A category will be created for each type of shoe, and each category page shall list only the products belonging to that category. For example, basketball shoes shall be a category whose page will only consist of adidas, Nike, Reebok and other brands that have basketball shoes. The same can be said of the other categories. All of these categories must be listed on one page which is devoted to a more generic keyword phrase such as "shoes" or "sneakers". More specific keywords, such as "nike running shoes", shall be mapped to the relevant category page. What if we have already implemented a site that has keyword cannibalization issues? I'd also take Rand's advice: employ 301s liberally. When working with clients, I like to identify all the pages in the architecture with this issue and determine the best page to point them to, then use a 301 on every cannibalizing page to a single version. This not only ensures that visitors all arrive at the right page, but also that the link equity and relevance built up over time directs the engines to the most relevant and highest-ranking-potential page for the query.
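A minimal sketch of the "use a 301 on every cannibalizing page" advice in plain PHP, with placeholder URLs (a server-level rewrite rule would achieve the same thing):

<?php
// Placed at the top of a cannibalizing page, before any output:
// permanently redirect visitors and search engines to the chosen canonical page.
header('HTTP/1.1 301 Moved Permanently');
header('Location: http://www.example.com/snowboards/');
exit;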

Improve Your SEO By Checking Keyword Density of Your Posts


Article Written by Daman on February 3rd, 2012


Keyword density has now become one of the most important factors in search engine optimization (SEO). Keywords matter a great deal for a website or blog. Search engine giants like Google and Bing use algorithms to identify the keywords of an article. Many bloggers write very useful, high-quality posts but do not care about keywords or keyword density, yet keywords are essential for a website to rank higher in search engines. Keyword density and keyword placement are among the most important factors in SEO.

It is true that Google now requires high-quality, original content, but without target keywords your content goes nowhere. I have often noticed bloggers writing posts without caring about keywords or keyword density. They may get good traffic, but only for a few days or weeks; after that, they lose it all.

What is Keyword Density?


Keyword density is the number of times a keyword or phrase appears in a web page or post, expressed as a percentage of the total number of words on that page.

Why Keywords are Important to Rank Higher in SE?


As I already mentioned above, keywords are essential if you want to rank higher in search engines. This is because search engines rank your website on the basis of its keywords. If you do not use proper keywords, your website will not get good traffic.

Methods to Check Keyword Density
Method 1: Manually


1. Formula for calculating keyword density
If you want to check keyword density manually, you can do it with a simple formula:
Keyword density = (number of times the keyword appears / total number of words in the post) * 100
For example, if you have written a post of 2,000 words and you are aiming for a keyword density of 1% to 3%, then with 30 repetitions of the keyword:
30 (times the keyword appears) / 2,000 (total words in the post) * 100 = 1.5%, so your keyword density is 1.5%.
2. Formula for calculating the density of a key phrase
A common version multiplies by the number of words in the phrase:
Keyphrase density = (number of times the phrase appears * number of words in the phrase / total number of words in the post) * 100
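The same calculation in a few lines of PHP, as a rough sketch only; the helper name, sample text and keyword are made up:

<?php
// Percentage of the words in $text accounted for by occurrences of $keyword.
function keyword_density($text, $keyword) {
    $total_words = str_word_count(strip_tags($text));
    $hits        = substr_count(strtolower($text), strtolower($keyword));
    $words_in_kw = str_word_count($keyword);

    return $total_words > 0 ? ($hits * $words_in_kw / $total_words) * 100 : 0;
}

// Example with a made-up snippet of post text and a two-word phrase.
$post_body = 'Lightweight running shoes help beginners; choose running shoes that fit well.';
echo round(keyword_density($post_body, 'running shoes'), 2) . '%';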

Method 2: Using WordPress Plugins


If you are using WordPress as your CMS, you may prefer a WordPress plugin to calculate keyword density. If you are tired of calculating it manually, there are several WordPress plugins that let you check the keyword density of your posts.

WordPress Plugins to Check Keyword Density:


SEO Tool Keyword Density Checker Keyword Statistics BloggerHigh SEO

What should be the Optimum Keyword Density for a Post?

Now the most important part: what percentage should your keyword density be? Whenever I write a post I never cross the 1% to 3% limit, because using more than 3% keywords may irritate your visitors and makes your post look ugly. If you cross the limit, the Google bot may conclude that you are a spammer using unnecessary keywords to rank higher. This can hurt your search engine rankings, and naturally your traffic will suffer as well.

What is Keyword Stuffing?


As I mentioned above, using too many keywords can hurt your search engine rankings. Keyword stuffing is an unethical search engine optimization technique; it occurs when a web page is stuffed with keywords. It used to help pages rank higher in Google, but since the Google Panda update this trick is no longer useful. When a post is stuffed with too many keywords, the Google bot may treat you as a spammer, which can affect your ranking. So, friends, I hope you enjoyed this post on how to check the keyword density of a post manually or with WordPress plugins, and I hope it helps you rank higher in search engines. If you have any doubts, please leave a comment.

Enterprise-class features delivered on Google's world-class platform.


"Enterprise-class" shouldn't mean "experts only." Google Analytics has made it easy for non-specialists and specialists alike, across your organization, to practice performance focused marketing.

Advertising ROI

Goals
Track sales and conversions. Measure your site engagement goals against threshold levels that you define.

Integrated with AdWords and AdSense


Optimize your AdWords performance with post-click data on your keywords, search queries, match type and more. AdSense reports show publishers which site content generates the most revenue.

Complete campaign tracking capabilities


Track email campaigns, banner ads, offline ads and more.

Ecommerce reporting
Trace transactions to campaigns and keywords, get loyalty and latency metrics, and identify your best revenue sources.

Cross Channel and Multimedia Tracking

Mobile Tracking
Track mobile websites, mobile apps and web-enabled mobile devices, including both high end and non-javascript enabled phones.

Internal Site Search


Understand visitor intent, find out what your customers are really looking for and speed up time to conversion.

Flash, video and social network application tracking


Track usage of your Ajax, Flash, social networking and Web 2.0 applications.

Customized Reporting

Advanced Segmentation
Isolate and analyze subsets of your traffic. Select from predefined custom segments such as "Paid Traffic" and "Visits with Conversions" or create new custom segments with a flexible, easy-to-use segment builder. Apply segments to current or historical data and compare segment performance side by side in reports.

Custom Reports

Create, save, and edit custom reports that present the information you want to see organized in the way you want to see it. A drag and drop interface lets you select the metrics you want and define multiple levels of sub-reports. Once created, each custom report is available for as long as you want it.

Dashboards
No more digging through reports. Put all the information you need on a custom Dashboard that you can email to others.

API and developer platform


Export data, create integrations, and develop client applications with the Google Analytics Data Export API. Customize Google Analytics tracking with the Google Analytics Tracking API.

Advanced Analysis Tools


Perform advanced data analysis with pivot tables, multiple dimensions and filtering features. Fast, on-the-fly tools let you dig deeper and manipulate data right in the report tables.

Analytics Intelligence
Google Analytics monitors your reports and automatically alerts you of significant changes in data patterns. You can also set up custom alerts to notify you when specific thresholds are reached.

Custom Variables
Custom variables allow you to define multiple, and even simultaneous, tracking segments based on hits, session or visit level data. Custom variables provide you the power and flexibility to customize Google Analytics and collect the unique site data most important to your business.

Data Export
Export your data with the Google Analytics Data Export API or email and export your data directly from the Google Analytics interface into Excel, CSV, PDF and tab delimited files.

Sharing and Communicating

Email reports
Schedule or send ad-hoc personalized report emails that contain exactly the information you want to share.

Sophisticated administrator and user controls


Control how sensitive data is shared and which reports are available to users on your account.

Visualizing Data

Motion Charts
Motion Charts add sophisticated multi-dimensional analysis to most Google Analytics reports. Select metrics for the x-axis, y-axis, bubble size, and bubble color and view how these metrics interact over time. Choose the metrics you want to compare and expose data relationships that would be difficult to see in traditional reports.

Geo Targeting
Identify your most lucrative geographic markets.

Funnels
Visualize your conversion funnel. Fix leaks by seeing which pages result in lost opportunities and where your would-be customers go.

Spark lines
Thumbnail size graphics save you clicks and summarize the data in your report.

Score cards
See summary metrics in the context of historical or site average data.

Google Integration and Reliability


1st party cookie
Google Analytics has always exclusively used 1st party cookies to ensure reliable tracking and protect visitor privacy.

Google data center and collection methodology


Google Analytics runs on the same globally renowned infrastructure that powers Google, maximizing data integrity and privacy.

What Are Doorway Pages?


by SEW Staff, March 1, 2007
Webmasters are sometimes told to submit "bridge" pages or "doorway" pages to search engines to improve their traffic. Doorway pages are created to do well for particular phrases. They are also known as portal pages, jump pages, gateway pages, entry pages, and by other names as well.

Doorway pages are easy to identify in that they have been designed primarily for search engines, not for human beings. This page explains how these pages are delivered technically, and some of the problems they pose.

Low Tech Delivery

There are various ways to deliver doorway pages. The low-tech way is to create and submit a page that is targeted toward a particular phrase. Some people take this a step further and create a page for each phrase and for each search engine. One problem with this is that these pages tend to be very generic. It's easy for people to copy them, make minor changes, and submit the revised page from their own site in hopes of mimicking any success. Also, the pages may be so similar to each other that they are considered duplicates and automatically excluded by the search engine from its listings.
Another problem is that users don't arrive at the goal page. Say they did a search for "golf clubs," and the doorway page appears. They click through, but that page probably lacks detail about the clubs you sell. To get them to that content, webmasters usually propel visitors forward with a prominent "Click Here" link or with a fast meta refresh command. By the way, this gap between the entry and the goal page is where the names "bridge pages" and "jump pages" come from. These pages either "bridge" or "jump" visitors across the gap.
Some search engines no longer accept pages using fast meta refresh, to curb abuses of doorway pages. To get around that, some webmasters submit a page, and then swap it on the server with the "real" page once a position has been achieved. This is "code-swapping," which is also sometimes done to keep others from learning exactly how the page ranked well. It's also called "bait-and-switch." The downside is that a search engine may revisit at any time, and if it indexes the "real" page, the position may drop. Another note here: simply taking meta tags from a page ("meta jacking") does not guarantee a page will do well. In fact, sometimes resubmitting the exact page from another location does not gain the same position as the original page. There are various reasons why this occurs, which go beyond the scope of this article, but the key point to understand is that you aren't necessarily finding any "secrets" by viewing source code, nor are you necessarily giving any away.

Agent Delivery
The next step up is to deliver a doorway page that only the search engine sees. Each search engine reports an "agent" name, just as each browser reports a name. The advantage to agent name delivery is that you can send the search engine to a tailored page yet direct users to the actual content you want them to see. This eliminates the entire "bridge" problem altogether. It also has the added benefit of "cloaking" your code from prying eyes.

Well, not quite. Someone can telnet to your web server and report their agent name as being from a particular search engine. Then they see exactly what you are delivering. Additionally, some search engines may not always report the exact same agent name, specifically to help keep people honest.
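For illustration only, agent-name delivery boils down to branching on the reported User-Agent header. A minimal PHP sketch follows; the bot-name check and file names are assumptions, and, as noted above, agents are easily spoofed and this kind of cloaking is treated as spam by search engines:

<?php
// Branch on the reported agent name. User agents are trivially spoofed,
// and serving crawlers different content risks a spam penalty.
$agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

if (stripos($agent, 'Googlebot') !== false) {
    include 'page-for-crawlers.html';   // hypothetical tailored page
} else {
    include 'page-for-visitors.html';   // hypothetical normal page
}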

IP Delivery / Page Cloaking


Time for one more step up. Instead of delivering by agent name, you can also deliver pages to the search engines by IP address, assuming you've compiled a list of them and maintain it. Everyone and everything that accesses a site reports an IP address, which is often resolved into a host name. For example, I might come into a site while connected to AOL, which in turn reports an IP of 199.204.222.123 (FYI, that's not real, just an example). The web server may resolve the IP address into an address: wwtb03.proxy.aol.com, for example.

If you deliver via IP address, you guarantee that only something coming from that exact address sees your page. Another term for this is page cloaking, with the idea that you have cloaked your page from being seen by anyone but the search engine spiders.
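In script form, IP delivery looks much like agent delivery, except that the check is made against a maintained list of crawler addresses. A minimal sketch in PHP; the addresses and file names below are placeholders, not real crawler IPs.

<?php
// Minimal sketch of IP delivery (placeholder addresses and file names).
$ip = $_SERVER['REMOTE_ADDR'];
// gethostbyaddr($ip) could additionally reverse-resolve the address to a host name.
$crawlerIps = array('192.0.2.10', '192.0.2.11'); // a maintained list of crawler IPs
if (in_array($ip, $crawlerIps)) {
    include 'search-engine-page.html'; // seen only by the listed crawler addresses
} else {
    include 'visitor-page.html';       // seen by everyone else
}
?>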

Doorway page
From Wikipedia, the free encyclopedia

Doorway pages are web pages that are created for spamdexing, that is, for spamming the index of a search engine by inserting results for particular phrases with the purpose of sending visitors to a different page. They are also known as bridge pages, portal pages, jump pages, gateway pages, entry pages and by other names. Doorway pages that redirect visitors without their knowledge use some form of cloaking.


Explanation

If a visitor clicks through to a typical doorway page from a search engine results page, in most cases they will be redirected with a fast meta refresh command to another page. Other forms of redirection include the use of JavaScript and server-side redirection, either through the .htaccess file or from the server configuration file. Some doorway pages may be dynamic pages generated by scripting languages such as Perl and PHP.

Doorway pages are often easy to identify in that they have been designed primarily for search engines, not for human beings. Sometimes a doorway page is copied from another high-ranking page, but this is likely to cause the search engine to detect the page as a duplicate and exclude it from the search engine listings. Because many search engines give a penalty for using the meta refresh command,[1] some doorway pages just trick the visitor into clicking on a link to get them to the desired destination page, or they use JavaScript for redirection.

More sophisticated doorway pages, called content-rich doorways, are designed to gain high placement in search results without using redirection. They incorporate at least a minimum amount of design and navigation similar to the rest of the site to provide a more human-friendly and natural appearance. Visitors are offered standard links as calls to action.

Landing pages are regularly confused with doorway pages in the literature. The former are content-rich pages to which traffic is directed within the context of pay-per-click campaigns and to maximize SEO campaigns.
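The fast meta refresh mentioned above is normally just a single tag in the doorway page's head section; a minimal sketch with a placeholder destination URL:

<head>
<!-- redirects the visitor to the destination page after 0 seconds -->
<meta http-equiv="refresh" content="0; url=http://www.example.com/real-page.html">
</head>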

Cloaking

Another form of doorway page uses a method called cloaking: the server shows one version of the page to the visitor and a different version to the crawlers, using server-side scripts. It decides whether the request comes from a bot or a visitor based on the IP address and/or user-agent.

Construction

A content-rich doorway page must be constructed in a search-engine-friendly (SEF) manner, otherwise it may be construed as search engine spam, possibly resulting in the page being banned from the index for an undisclosed amount of time. These types of doorways utilize (but are not limited to) the following:

Title-attributed images for keyword support
Title-attributed links for keyword support

See also

Cloaking
Keyword stuffing
Page hijacking
Spamdexing

Search Engine Optimization Training

No matter whether you are an MBA or qualified in any other field, if you want to pursue a career as an SEO expert, our professional SEO training will transform you into a versatile SEO professional. Our professional SEO training will ensure you a stable career, a handsome income and consistent growth.

Course Contents

Basics of SEO

Introduction to SEO
SEO is an important tool for any web marketing campaign. It helps to place your website at a position higher than the millions of other websites in response to a search query. We will explain all the necessary information you need to know about search engine optimization, such as:

What is search engine optimization?
How SEO works for websites
Differences in the ranking criteria of major search engines

Types of Search Engine Optimization

On Page Optimization
Off Page Optimization

On Page Optimization
On page optimization is the foremost step for any SEO strategy. It will not only help you to rank higher but will also enhance the overall readability of your website. We teach you all the methods and tricks for effective on page optimization.

SEO On Page Optimization Tools

Competitors Analysis
Keyword Research
Keyword Placement
Title Creation
Meta Tag Creation
Meta Description Creation
Content Optimization
Keyword Density
URL Structure Analysis
Content Creation
Image Optimization
Sitemap Creation
Use of robots.txt
Doorway Pages
Invisible Text
Cloaking

SEO Off Page Optimization Tools

Search Engine Submission
Directory Submission
Article Submission
Press Release Submission
Forums Posting
Link Building
  o One-Way
  o Two-Way / Reciprocal / Link Exchange
  o Three-Way
Blogs
  o Blogs Creation
  o Blogs Submission
  o Blogs Commenting
Posting Free Classifieds
Google Mapping/Listing
Social Bookmarking
RSS Feeds
Video Optimization
Link Building

SEO Tools

Google Keyword Tool
Word Tracker
Keyword Spy
Keywords Position Checker
Keyword Density Checker
Google Analytics
Stats Counter

Indexing Versus Caching

Caching: when a search engine bot or crawler visits your page and reads its content, or finds any update on the site, this process is called "caching".

Indexing: after reading the updated elements, the search engine stores those new elements in its database. This storage process is called "indexing".

Cached means Google has saved a copy of the page (images and all) that can be viewed even if your site no longer exists. Indexed means your site is in Google's database.

What is organic and inorganic Search Engine Optimisation?

More than 90% of web users rely on search engines to find information. Search engines provide two types of listings:

Organic - A free listing generated by search engines; once your website achieves a good listing it will usually stay there.
Inorganic (pay per click) - A paid listing you get until your advertising dollar runs out.

Organic search engine optimisation (WebPartners): organic SEO and keywords implemented by a search engine specialist. Keywords are what people search for on Google. Your website appears beside relevant search results when people search. Once you achieve a good listing, you will primarily stay there.

Inorganic search engine optimisation (AdWords - pay per click): you bid for keywords. Keywords are what people search for on Google. Your ad appears beside relevant search results when people search. You pay every time someone clicks on your advertisement, and you get listed until your advertising dollar runs out.

Organic vs. inorganic

Organic listings:
Free
Have more credibility
90% of all searchers on Google look at and click through on listings from the organic area

Inorganic listings:
Pay per click
Offer you a chance to compete on specific search terms
Can help you generate business

To set up an inorganic campaign, all you need to do is set up an account with Google (www.google.co.nz/adwords) and follow the instructions.

A place for both. At WebPartners we feel there is a place for both organic and inorganic (or pay per click) listings. We are, however, advocates for achieving good organic listings as:
Your credibility as a trustworthy source is increased
The cost per conversion is dramatically reduced

Organic SEO:
Almost free of cost
Suited to all kinds of businesses
No immediacy of results
Time consuming
Long-term impact
Targeted traffic

Inorganic SEO:
Paid search marketing
Immediate results
Usually short-term impact
Easy to understand
Not affordable for all kinds of businesses

http://www.webconfs.com/15-minute-seo.php http://www.vaughns-1-pagers.com/internet/google-ranking-factors.htm

What is On-Page Optimization in SEO

In on-page SEO we work directly on the website, and the changes made to the website can be seen instantly on the web. The following are some of the different on-page activities we do for the betterment of a site, to get the maximum traffic and benefit from search engines and visitors.

On-Page SEO Activities
1) Write a unique, keyword-rich Title Tag
2) Meta Description
3) Meta Keywords
4) Easy URL structure with keywords
5) Search engine friendly website design
6) Breadcrumb navigation
7) Keyword research
8) H1, H2, H3, H4 tags (include your main keywords)
9) Effective use of the robots.txt file
10) XML & HTML sitemap
11) Alt tags for images
12) Internal linking of web pages
13) Choose the best file names with your keywords
14) Use of no-follow links
15) Your mobile site version
16) Keep your code as simple as you can
17) Keyword density
18) Remove canonical problems (if they exist)
19) Google Video Sitemap creation
20) URL optimization
21) Footer link optimization
22) Structure of head, left and right link optimization as per search engine guidelines
23) Keyword analysis

What is canonical error?

What is a Canonical Error? When your home page opens with both "http://www.site.com" and "http://site.com", this is known as a canonical error. Sometimes the canonical URL (the best URL version) is also referred to as the "preferred domain".

Canonicalization (methods to fix this problem)
1. Set your preferred domain (using Google Webmaster Tools)
2. Specify the canonical link for each version of the page. For example:

<link rel="canonical" href="http://www.canonical.com" />

Add this extra information to the <head> section of the non-canonical URLs.
3. Indicate your canonical (preferred) URLs by including them in a Sitemap
4. Indicate how you would like Google to handle dynamic parameters (by using Parameter Handling)
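In addition to the methods above, a common way to fix the www/non-www canonical error on an Apache server is a 301 rule in the .htaccess file. A minimal sketch, assuming mod_rewrite is enabled and using the same placeholder domain:

# Force the www version of the domain with a 301 (permanent) redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^site\.com$ [NC]
RewriteRule ^(.*)$ http://www.site.com/$1 [R=301,L]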

14 Best Practices and Tips for Video Optimization SEO

Video optimization as part of your search engine optimization efforts can be a great way for you to expose your brand to users who may otherwise not have been familiar with your brand, your product(s) or your service(s). With the launch of Universal Search from Google, you can expect to see more and more video results occupying the search engine results that are served up by Google. Marketers and site owners are continuing to experiment with video as a means of intercepting their target markets. As a result, we are seeing more video-driven content surface on the Web. Just how popular is video content on the Web? Well, depending on the study or data you review, the results vary. The thing is, as is the case with a lot of the content on the Web, a lot of the video out there is useless and lacking in informative content. Don't get me wrong, there are a lot of great "How To" videos out there, but there is also a lot of video footage that is really just noise and an interruption to those of us looking for useful, relevant information. According to websiteoptimization.com, the growth of online video has been quite explosive.

Site              | Unique Visitors (1000s) | Year-over-year growth (June 2007)
1. YouTube        | 51,378                  | 162%
2. Google Video   | 17,759                  | 137%
3. AOL Video      | 15,687                  | 114%
4. Yahoo! Video   | 15,473                  | 405%
5. vids.myspace.com | 15,281                | 69%
6. MSN Video      | 11,967                  | 8%

With this type of growth, the need for online video optimization is a must. As a result, we have compiled a few best practices that you should consider when looking to optimize your online video.

Video Optimization Best Practices and Tips

1. Make sure that your video clips are relevant and informative - For starters, ensure that your video provides useful and informative information. Videos that demonstrate step-by-step procedures are great; videos that express an opinion about a specific topic can be useful too. Videos that have nothing to do with your brand or service offering, or are general or vague in nature, will just confuse your audience. Save the viewer the trouble and don't just upload a video for the sake of uploading a video.
2. SEO Video Optimization Fundamental Tip #1: Give your video a catchy title - Video can be used to bring visitors to your site. One way to get users to view your video is to give it a catchy title that contains a related key phrase that is relevant to your product, service or brand.
3. Use video as a portal to other content on your site - Upload a couple of videos to portals like YouTube and provide links back to related content and other videos on your site.
4. SEO Video Optimization Fundamental Tip #2: Optimize your video for important key phrases - You might want to optimize your video for terms users are likely to be searching for. Tag your video with these terms, and consider naming the file name of the video with these terms in mind.
5. Provide transcripts of your videos - Good old HTML content is still a favorite with the search engines. If you want your video to rank well, you need to give the search engines something to index and rank. Surround your videos with on-page copy that can be indexed by the search engines.
6. Keep your videos to five minutes or less - There is nothing worse than a boring video that goes on and on. If you have video content of long duration, consider breaking it up into smaller pieces and tag each accordingly. Not only does this make for better viewing pleasure, it also keeps the user looking for more.
7. Make use of a video sitemap - For video that is native to your own website, make sure that users and search engine spiders can find your video content. The easiest way to do this is through the use of a video sitemap on your site (see the sketch after this list). Use important keywords in the anchor text links to your videos featured on your video sitemap.
8. SEO Video Optimization Fundamental Tip #3: Tag your videos - Tag your videos with key phrases that are reflective of the content.
9. Brand your video with your logo - Video is a great tool to generate brand awareness with your prospects. Take advantage of this by incorporating your brand in your videos.
10. SEO Video Optimization Fundamental Tip #4: Remember inbound linking factors - Link to videos using important keywords in anchor text.
11. Offer the option to embed your video - Allow other users access to the coding that will allow them to embed your video on their website or their blog. Think viral marketing.
12. SEO Video Optimization Fundamental Tip #5: Add descriptive meta data - Optimize your video for relevant keywords and include a keyword-rich description of your video content.
13. Allow users to rate your video - Videos that receive higher ratings from users are the ones that users tend to favorite and save. You can bet that the search engines will pay attention to this when ranking these videos.
14. Syndicate your video - Submit your video to RSS.

Video optimization is becoming a more important, mainstream aspect of search engine optimization. The best way to optimize your video content is to think about the user and who you want to engage with your video. Be smart about it; don't just post a video for the sake of posting a video. Consider your audience, their language and their needs.
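For the video sitemap mentioned in tip 7, Google accepts a standard XML sitemap extended with video tags. A minimal sketch with placeholder URLs and text; check Google's current video sitemap documentation for the full set of supported tags before deploying:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>http://www.example.com/videos/golf-swing-tips.html</loc>
    <video:video>
      <video:thumbnail_loc>http://www.example.com/thumbs/golf-swing-tips.jpg</video:thumbnail_loc>
      <video:title>Golf Swing Tips</video:title>
      <video:description>A short step-by-step video on improving your golf swing.</video:description>
      <video:content_loc>http://www.example.com/videos/golf-swing-tips.mp4</video:content_loc>
    </video:video>
  </url>
</urlset>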

How to setup a 301 Redirect :


The 301 Permanent Redirect is the most efficient and search engine friendly method for redirecting websites. You can use it in several situations, including:

to redirect an old website to a new address
to set up several domains pointing to one website
to enforce only one version of your website (www. or no-www)
to harmonize a URL structure change

There are several ways to set up a 301 Redirect; below I will cover the most used ones.

PHP Single Page Redirect
In order to redirect a single page to a new address, simply enter the code below at the top of the page's PHP file (for example, index.php).

<?php
// Send the 301 (permanent) status code, then the new location, and stop execution
header("HTTP/1.1 301 Moved Permanently");
header("Location: http://www.newdomain.com/page.html");
exit();
?>

301 redirect is the most efficient and Search Engine Friendly method for webpage redirection. It's not that hard to implement and it should preserve your search engine rankings for that particular page. If you have to change file names or move pages around, it's the safest option. The code "301" is interpreted as "moved permanently".
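.htaccess Single Page Redirect
If the site runs on Apache, the same permanent redirect can also be declared in the .htaccess file without any PHP; a minimal sketch using placeholder paths and the same example domain:

# Permanently redirect one old page to its new address
Redirect 301 /old-page.html http://www.newdomain.com/page.html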

ALT Tag Optimization An Important SEO Component


ALT tag optimization was once among the most effective SEO methods. Search engine optimizers tried to include their targeted keywords in the alt tag text in order to reinforce the relevancy of a webpage and get higher rankings. As of today, however, all the major search engines, including Google, Yahoo and MSN, have stopped using the ALT tag as a decisive ranking factor when determining the relevancy of a webpage. That doesn't mean, of course, that ALT tags have become totally useless. Although it's not as powerful as it used to be, ALT tag optimization still remains an important on-page SEO technique.

What is an ALT Tag?


An ALT tag (also referred to as an image tag; strictly speaking it is the alt attribute of the <img> tag) is textual information related to a graphic image, used to indicate its relevance and avoid indexing problems. The thing is that search engines don't perceive websites the way we do. They cannot read images; that's why it is important to use ALT tags to tell the search engine spiders what this or that picture is about. Basically, an ALT tag is a name for a picture or graphic image.
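As a simple illustration (the file name and wording are placeholders), an image with a descriptive, keyword-relevant ALT text looks like this in HTML:

<!-- the alt text tells the spider what the picture shows -->
<img src="images/red-leather-golf-bag.jpg" alt="Red leather golf bag">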

ALT Tag Optimization


ALT tag optimization is the process of writing ALT tags in a way that describes the image and also includes relevant keywords. That is basic SEO that still holds true even though ALT tag text is no longer a decisive ranking factor. There are a number of reasons to continue using alt tags for SEO. First of all, despite the fact that the big three search engines have stopped using alt tags to determine the relevancy of a webpage, there are search engines that still do. By optimizing your alt tags you can rank better on the search engines that continue to employ alt tags as a ranking factor. Another reason to go on with alt tag optimization is that search engine ranking algorithms change on a regular basis, and alt tags may acquire more weight again. Alt tags are also important for the image search services provided by Google and other search engines. Whenever users search for images, it's important to have targeted keywords in the descriptions to provide them with more relevant results. Just like with any SEO technique, it's important to keep balance and use ALT tag optimization with a certain degree of moderation. You shouldn't stuff your picture descriptions with keywords, for this will have an adverse effect on the experience of your visitors, especially the visually impaired ones. You should remember that there are Internet users who have web pages read aloud to them by special software, and long ALT tags will completely spoil their impression of your website. It's also critical to only use keywords that are relevant to the image in order to provide for higher relevancy. ALT tag optimization is just one component of an SEO strategy that alone won't much affect your rankings, but it can effectively reinforce other optimization efforts, leading to better overall results.

The Basics of Anchor Text Optimization


An anchor text, as the name suggests, is the name given to text that carries a hyperlink to a webpage. For instance, take a look at the following sentence: Get information on search marketing here. In the above sentence, clicking on the text 'search marketing here' will take you to the index page of this website. This means the text 'search marketing here' carries a hyperlink to the index page and can be considered an anchor text.

What is anchor text optimization? Anchor text optimization is the process of optimizing anchor texts using keywords to get higher rankings for those keywords in the SERPs. Basically, any anchor text optimization campaign would involve using keywords to link to pages having related content. All major search engines place a huge level of importance on anchor texts in comparison to mere links. When we place hyperlinks on keywords and use them to link to related pages, the search bot considers this as additional value to the page. For instance, when the anchor text 'search engine optimization' is used to link to a page containing information on search engine optimization, the search bot considers this as an additional indication that the webpage is relevant.

How to optimize for anchor text: Anchor text optimization can be done internally as well as externally. Internal anchor text optimization involves linking to web pages within a website from other pages using related keywords as anchor texts. Similarly, external anchor text optimization involves getting inbound links from websites, directories, forums and blogs using the necessary keywords as anchor texts.

Internal Anchor Text Optimization
Internal anchor text optimization refers to optimization within a website. For instance, if a webpage has a hyperlink to another webpage, then instead of using generic words as the link text (anchor text) one can make use of specific and related keywords. For instance, let's take the example of a website that offers information on search engine optimization, especially regarding 'link exchange' and 'anchor text optimization'. Here the page that offers information on 'link exchange' can be linked from the homepage as "Get link exchange information here" with the hyperlink placed on the keyword 'link exchange' rather than on a generic word such as 'here'. Similarly, the page that offers information on anchor text optimization can be linked from the homepage with the hyperlink placed on 'anchor text optimization' rather than on a generic word.
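In HTML terms the difference is simply which words sit inside the link; a minimal sketch with a placeholder URL:

<!-- generic anchor text: tells the search bot little about the target page -->
<a href="http://www.example.com/link-exchange.html">click here</a>

<!-- optimized anchor text: the keyword itself carries the hyperlink -->
<a href="http://www.example.com/link-exchange.html">link exchange information</a>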

External Anchor Text Optimization
External anchor text optimization involves optimizing the anchor text in the inbound links that a site gets. External anchor text optimization can be achieved by the following methods:
1.) Link Exchange - Exchanging links with related websites and making use of proper anchor text on inbound links is the best way to optimize anchor text. A link from a related site with an anchor text that is again related to the destination page is considered more important by all search engines.
2.) Directory submissions - Submitting your website to directories is a great way to get quality incoming links. Directory links are generally given more importance by search bots and are crawled more often. Some good directories in which you can consider submitting your site are the Yahoo! Directory, DMOZ, LookSmart, Zeal, joint.com, business.com, allthebizz.com etc.
3.) Participating in forums and blogs - Participating in forums and blogs by leaving keyword-rich links is also a great method of anchor text optimization, although it is advised not to spam forums/blogs as that might create a bad impression for the website and, of course, the links will be removed.
4.) Article Submission - If you have expertise in your field or area along with some moderate writing skills, you can go in for article submission. There are many websites today that accept expert articles and, to return the favor, allow you to advertise your website/product/service. You can place back-links using the specific keywords that you wish to optimize. One major advantage of submitting articles is the fact that good articles get reprinted quickly, leading to a good deal of exposure. Some good websites to which you can submit articles include sitereference.com, goarticles.com, workoninternet.com, amazines.com etc.

Keyword Cloud is a visual depiction of the keywords used on a website; keywords having higher density are depicted in larger fonts. Ideally, your main keywords should appear in larger fonts at the start of the cloud.

Keyword Density is the percentage of occurrences of your keywords relative to the rest of the text on your webpage. It is important for your main keywords to have the correct keyword density to rank well in search engines. Keyword density is the percentage of times a keyword or phrase appears on a web page compared to the total number of words on the page. In the context of search engine optimization, keyword density can be used as a factor in determining whether a web page is relevant to a specified keyword or keyword phrase. Many SEO experts consider the optimum keyword density to be 1 to 3 percent; using a keyword more than that could be considered search spam.

The formula to calculate keyword density on a web page for SEO purposes is (Nkr / Tkn) * 100, where Nkr is how many times you repeated a specific keyword and Tkn is the total number of words in the analyzed text. This results in a keyword density value. When calculating keyword density, be sure to ignore HTML tags and other embedded tags which will not actually appear in the text of the page once it is published. When calculating the density of a keyword phrase, the formula is (Nkr * Nwp / Tkn) * 100, where Nwp is the number of words in the phrase. So, for example, for a page about search engine optimization where that phrase is used four times and there are four hundred words on the page, the keyword phrase density is (4*3/400)*100, or 3 percent.
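The formula above can be scripted directly. A minimal sketch in PHP; the function name is ours, and HTML tags are stripped before counting, as recommended:

<?php
// Keyword phrase density: (Nkr * Nwp / Tkn) * 100
function keyword_phrase_density($html, $phrase) {
    $text = strtolower(strip_tags($html));             // ignore HTML and embedded tags
    $tkn  = str_word_count($text);                     // Tkn: total words in the analyzed text
    $nwp  = str_word_count($phrase);                   // Nwp: number of words in the phrase
    $nkr  = substr_count($text, strtolower($phrase));  // Nkr: times the phrase appears
    return $tkn > 0 ? ($nkr * $nwp / $tkn) * 100 : 0;
}
// Example from the text: a 400-word page using a 3-word phrase 4 times gives 3 (percent).
?>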

Link Wheel Service:


Link Wheel is a very powerful link building concept because it tries to imitate the natural linking pattern of the internet, and search engines always love natural link building. Rather than getting links from unrelated websites and dubious sources, a link wheel tries to get its links from related niches on the internet. Moreover, link wheels have the ability to multiply your link count over a period of time. Previously, network websites were created in an effort to boost the ranking of particular websites, but the problem with such websites was that they easily raised suspicion because they looked very artificial: they did not follow any link wheel pattern, but all of them just pointed to the one website they targeted. Link building through a link wheel is a refined version of network websites. It has proven to be the best link building strategy.

Understanding How a Link Wheel Works: Link wheels are nothing but a group of interconnected websites which try to improve the ranking of a target website by linking to it. For example, if you have 50 websites in your link wheel and the website you are targeting is "yourdomain.com", then:

Site 1 links to Site 2 and back to "yourdomain.com"
Site 2 links to Site 3 and back to "yourdomain.com"
Site 3 links to Site 4 and back to "yourdomain.com"
...

Site 13 links to Site 14 and back to the "yourdomain.com" Site 14 links to Site 1 and back to the "yourdomain.com"

Graphical Representation of Link Wheel Plan

Yourdomain.com is the website whose ranking we are trying to enhance here. A number of companies try to make use of Web 2.0 technology to build link wheels. They make use of platforms such as Squidoo, Quizilla, Blogger, Weebly, etc. Over a period of time such strategies started losing their popularity because they were unable to produce the desired link wheel effect. SubmitEdge, on the other hand, tries to approach the link wheel in a different and more powerful way. SubmitEdge has come up with a new link wheel strategy, which makes use of several tools and platforms rather than just a single platform. We make use of Web 2.0 properties, blog posts, article submissions, press releases, etc. Using the above platforms, a link wheel with a single gap is created. The gap or break within the wheel reproduces the natural linking pattern of the internet. To the search engines it will look very natural even though it is an artificially created wheel. SubmitEdge has done thorough research in this field and has tested several websites to ensure the effectiveness of the concept it has developed. From what SubmitEdge has learnt through its research, our link wheel strategy can never go wrong. We have the ultimate link wheel strategy for taking your website to the most vied-for top positions in all the major search engines. With much development and testing, we have found this to be the ultimate link wheel strategy for getting your URL onto the first page of all major search engines.

SEO Title Tag


The first and most important part of your on-page SEO is the title tag (<title></title>). Many people who outsource or create a site in a WYSIWYG editor completely forget about the last of the meta tags that still gives some quality ranking love from search engines. The benefits of using optimized title tags are three-fold:
1. A user searching for your keyword will see your site's link highlighted in the search engines if your page's title is the same as the phrase they searched for. This drastically increases click-through and can even give you more traffic than those who rank above you if their title tags are not optimized.
2. Increase your rankings on the search engines.
3. Help the engines distinguish between pages that might look similar.

Higher Click through Rates


Search engine optimization isn't just about showing up number one on search engines. Rather, it's about getting all the traffic that you deserve from the search engines. If you rank #6 for "free hats" and you and your competitors forget to include that phrase in the page's title tag, chances are the person doing the search won't see much difference between your site and the others. However, if you were to change your website's title text to target your most important keyword phrase, "free hats", then when someone completes a search for "free hats", they'd see your site show up in bold. This technique will greatly increase the user's desire to view your site first, as your site looks much more relevant and targeted.

Better Rankings
All too often, people believe that the title tag is a place to list the business and domain name of the website. This is wrong, and it wastes one of the easiest ways you can tell the search engines what a page is about and how they should categorize it. While humans might not notice the title tag, search engines certainly do. Use this opportunity to choose the most important keyword that you want to go after and get the free ranking boost that so many websites are missing out on. If you still want to include your domain or the name of the company, do it after your keyword, followed by a dash (e.g. "free hats - hatsemporium.com") to show that your keyword is the most important.
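Putting that advice into markup, using the article's own example keyword and domain:

<head>
<!-- keyword first, then the brand/domain name -->
<title>Free Hats - hatsemporium.com</title>
</head>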

Help the Engines Distinguish your Pages


It's not easy being a search engine. They crawl the web day and night, taking information from the web and trying to categorize it in a useful manner so that users can find what they're looking for. Make their job easier. Post clearly what the topic of each page is, using title tags, and help the search engine to distinguish one page from another. You may have two pages that are quite similar, and it may require a little thought to point out how they differ. Don't make the search engines figure it out for themselves, because they might make a mistake. Instead, make the decision for them. Spell the differences out and help your rankings in the process. This is just one strategy for avoiding the duplicate content penalty, which we'll be getting into in greater depth later.

RSS Feeds advantages and disadvantages


Really Simple Syndication (RSS) is a tool useful for saving or retaining updated information from websites that you frequently visit or that are your favorites. RSS utilizes XML code which continuously scans the content of a website in search of new information, and then transmits the updates by feeding the information to subscribers.

Advantages: RSS gives benefits to both readers (users) and web publishers.

It gives you the latest updates. Whether it is about the weather, new music, a software upgrade, local news, or a new posting from a rarely-updated site, you learn about the latest as soon as it comes out.
It gives the power of subscription to the user. Users are given a free hand over which websites to subscribe to in their RSS aggregators, and they can change this at any time they decide differently.
It saves on surfing time. Since an RSS feed provides a summary of the related article, it saves the user time by helping them decide which items to prioritize when reading or browsing the net.
It is spam free. Unlike email subscriptions, RSS does not make use of your email address to send updates, thus your privacy is kept safe from spam mails.
Unsubscribing is hassle-free. Unlike email subscriptions, where the user is asked questions on why she/he is unsubscribing and then asked to confirm it, all you have to do is delete the RSS feed from your aggregator.
It can be used as an advertising or marketing tool. Users who subscribe to or syndicate product websites receive the latest news on products and services without the website sending spam mail. This is advantageous to both the web user and the website owner, since advertising becomes targeted; those who are actually interested in the products are kept posted.

Disadvantages: The disadvantages of RSS use are brought about by its being a new technology and by some user-preference concerns.

Some users prefer receiving email updates over an RSS feed.
Graphics and photos do not appear in all RSS feeds. For conciseness and ease of publication, RSS feeds do not display the photos from the original site when announcing the update, except in some web-based aggregators.
The identity of the source website can be confusing. Since RSS feeds do not display the actual URL or name of the website, it can sometimes get confusing which feed a user is actually reading.
Publishers cannot determine how many users are subscribed to their feed or the frequency of their visits. Moreover, they would not know the reasons why users unsubscribe, which could be important in improving their advertising.
RSS feeds create higher traffic and demands on the server. Most readers still prefer the whole update over a brief summary of the entry, so they still access the site.
Since it is a new technology, many sites still do not support RSS.

It is easy to subscribe to an entertainment RSS feed: just click the button that indicates the RSS feed. You usually have to copy the URL into your aggregator, but some RSS feeds automatically download into your reader. You can unsubscribe from an RSS feed at any time. The great thing about RSS feeds is that you're always updated with the latest news and happenings. RSS feeds are revolutionizing the way information consumers get their content. Instead of being bombarded with a plethora of useless information, consumers can now select and reject the material that goes into their consciousness.

Off Page Optimization


Off Page Optimisation is work that you undertake outside of your website to improve your SEO rankings and visibility to your customers. Off Page Optimisation is focused on digital marketing of your website over the internet, in particular on link building work and Social Media Marketing. A website should have more and more backlinks from various quality websites for SEO ranking. Similarly, Social Media is a very important tool to market a website over the internet.

What are the most important methods for Off Page Optimization? Off Page Optimization usually consists of different link building methods:

Directory Submission
Social Bookmarking Submission
Blog Posting
Blog Hosting
Article Submission
Press Release Submission
Forum Posting
Footer Links
Affiliate Marketing
Blog Commenting
Review Posting
RSS Feed Submission
CSS Submission
Video Submission
Profile Creation
Link Wheel
Social Media Optimization

What is meant by Link Wheel in SEO - How to benefit from it?


SEO or Search Engine Optimization has become a critical component of a properly developed website. In order to appear ahead of competitors when probable customers look for a product or service in a major search engine like Google, it has become necessary to optimize websites for those keywords from both an On Page and an Off Page point of view. We are well aware of what is meant by On Page Optimization and Off Page Optimization.

Off Page Optimization consists mostly of submissions for building backlinks for a website. A Link Wheel is one such Off Page technique by which the desired results can be achieved. It is considered that a link wheel helps websites rank for a keyword within a short span of time. Under a link wheel, blogs are created on different Web 2.0 websites and are used to link with each other in the form of a link circle. For a link wheel, at least three blogs should be created on different Web 2.0 platforms such as Blogger, WordPress, TypePad, LiveJournal and so on. In the case of a three-blog link wheel, each post in a blog consists of three links: 1) the URL of the blog post itself, 2) the targeted URL of the website, and 3) the URL of the next blog created on another platform; in this way the cycle rotates on. Make sure the first blog contains the link of the second, the second contains the link of the third, and so on. A link wheel can consist of as few as 3 blogs, and there is no limit on the maximum number that can be created. If 10 blogs are created and linked in a similar pattern, it may be called a 10 link wheel. As the number of blogs in the link wheel increases, the pattern of linking becomes more complex. This makes the link structure look more natural to the Google crawler, and the maximum benefit can be achieved.

On page and Off-Page Optimization techniques


Here are a few things to do for on-page optimisation:
* Keyword Research and Analysis
* Page Specific Meta Tag Creation
* Title Tag (Page Title Optimisation)
* Alt Tag Optimization
* Anchor Text Optimization
* Content Placement
* Keyword Density
* Implementation of 301 Permanent Redirect
* Implementation of Google Analytics
* Search Engine Verification from Google, Yahoo and MSN
* Submission of XML Sitemap
* Creation of Proper Link Structure
* Optimization of Search Engine Essential Files (robots.txt, urllist.txt, sitemap.xml)

Here are a few things to do for off-page optimisation:
* Directory Submission
* Social Bookmarking Submission
* Blog Posting
* Blog Hosting
* Article Submission
* Press Release Submission
* Forum Posting
* Footer Links
* Affiliate Marketing
* Blog Commenting
* Review Posting
* RSS Feed Submission
* CSS Submission
* Video Submission
* Profile Creation
* Link Wheel
* Social Media Optimization

Google Penalty Advice


In this article, we provide free SEO advice to assist in recovery from a Google penalty exploring the most common causes. We offer a useful penalty checklist allowing you to check your website for any non-compliance with Google's Webmaster Guidelines. This page was last updated in March 2011.

Finding the Causes of a Sudden Drop in Ranking


To check for Google penalties with any degree of certainty can be difficult. For example, if your website experiences a sudden reduction in ranking for its main keyword terms it can be caused solely by a Google algorithm change or search results (SERP) update.

When any algorithm change or Google SERP update is released, there are always winners and losers, and when a sudden drop in rankings is experienced, Google penalties are often incorrectly blamed. However, where the traffic reduction from Google non-paid search is very extreme (visible in Google Analytics under traffic sources > search engines > Google), then a penalty is much more likely. There are a growing number of Google filters now built into the Google algorithm which aim to detect violations of the Google Webmaster Guidelines in order to help maintain the quality of Google's search results (SERPs) for any given query. One such algorithmic filter is thought to be capable of causing this kind of massive drop in Google traffic.

Link Devaluation Effects


When considering the cause of a ranking reduction, it's worth noting that Google continually applies link devaluation to links from various non-reputable sources that it considers spammers are exploiting to artificially raise the ranking of their sites. Paid links are a particular target for Google's spam team, hence continual Google algorithm tweaks are being made in an effort to combat link spam and link buying. When link devaluation is applied, as it has been with reciprocal links as well as links from many paid text link advertisements, low quality web directories and link farms, reductions in Google ranking may occur, affecting the recipient site of the links. The severity of the ranking reduction is usually proportional to the website's reliance on that particular type of linking. There's no doubt that do-follow blog links and low quality web directory links have also been devalued, and that this has led to reduced website rankings for sites which got a significant number of backlinks or site-wide links from do-follow blogs or low quality directories. In addition, backlinks from sites with an unrelated theme are also experiencing Google devaluation - so if your site relies heavily on these links, then it too may experience a sudden drop in Google rankings.

If you suspect a Google penalty, it first makes sense to check whether any Google algorithm changes have been made which could be the cause of the problem. SEO forum posts reflecting algorithm changes usually appear on the SEO Chat Forum soon after the effects of any update are felt. However, if your website suffers a sudden and dramatic fall in ranking and no Google algorithm changes have been made, then a Google penalty or filter may be the cause, especially if you have been embarking on activities which might have contravened the Google Webmaster Guidelines. The most severe Google penalties lead to total website de-indexing, and where the SEO misdemeanour is serious a site ban may be imposed by Google, accompanied by a Page Rank reduction to 0 or a greyed-out Google Toolbar Page Rank indication. Google filters are less extreme, with a site remaining indexed, but can still be extremely damaging to a company's profits. Whatever the cause, recovery from a Google penalty or filter is a challenge, and our SEO checklist will help identify likely causes and reasons for any sudden reduction in Google ranking or major drop in SERP position for your main keywords.

Initial Test for a Penalty


When a penalty is suspected, start by checking with Google the number of URLs it has indexed. This can be accomplished by using the site:yourdomainname.com command within a Google search window. If no URLs are indexed then there is a high probability of a Google penalty, especially if your site used to be indexed. Another indicator of a Google penalty is ceasing to rank for your own company name, where previously you ranked well for your own brand name. The exception to this rule is a new website with few backlinks, which may not be Google indexed since it is still waiting to be crawled.

Not all Google penalties result in a loss of Page Rank. For example, various Google filters can be triggered by unnatural irregularities in backlinks (detected by the automated Google algorithm) or by excessive link buying or reciprocal link exchange, particularly when using similar keyword-optimised anchor text in your links. Another good indication that a site is under penalty is to take a unique paragraph of text from a popular page on the affected site and search for it in Google. If the page doesn't come back as #1, yet the page is still showing as cached using cache:www.mydomain.com/page.htm, then this is a good indication that a penalty or SERP filter has been placed on the domain. To avoid a Google penalty or SERP filter, take particular care when embarking on any link building programme. In particular, avoid reciprocal link exchange becoming the mainstay of your SEO campaign, and if you must buy links, ensure that they are not placed in the footer of the site you're listed on or under sub-headings such as 'Our Sponsors', 'Our Partners', or 'Featured Sites'. Links placed naturally within text content are much more sensible and less detectable. If you suspect your website has received a Google penalty, you should check your site for non-compliance with Google's Webmaster Guidelines and (if appropriate) request website reconsideration from your Webmaster Tools account. Interestingly, in a recent move by Google, some websites which are in violation of Google's Webmaster Guidelines or terms of service may receive an e-mail from Google advising them to clean up their act, warning of a penalty and website de-indexing. When the breach of Google's terms (e.g. link spam or hidden text) is removed from the offending site, Google will usually automatically clear the penalty and re-index the site, as many so-called penalties are actually 'filters' triggered by irregularities found by Google's algorithm.

Google Penalty Checklist


If your website has suffered a Google penalty, we provide some free SEO advice to help identify the cause and solve the problem below. Once you have identified the cause, we suggest watching the Google reconsideration tips video to help prepare a successful reconsideration request to Google. For further assistance with Google penalties contact us for professional help.

Linking to banned sites

Run a test on all outbound links from your site to see if you are linking to any sites which have themselves been Google banned. These will be sites which are Google de-listed and show Page Rank 0 with a greyed out Toolbar Page Rank indicator.

Linking to bad neighbourhoods Check that you are not linking to any bad neighbourhoods (neighborhoods - US spelling), link farms or doorway pages. Bad neighbourhoods include spam sites and doorway pages, whilst link farms are sites consisting predominantly of links to other sites, with no original or useful content. If in doubt, we recommend quality checking all of your outbound links to external sites using the Bad Neighborhood detection tool. Whilst this SEO tool isn't perfect, it may spot "problem sites". Another good tip is to do a Google search for the HTML homepage title of sites that you link to. If the sites don't come up in the top 20 of the Google SERP, then they are almost certainly low trust domains and linking to them should be avoided.

Automated query penalty Google penalties can sometimes be caused by using automated query tools which make use of Google's API, particularly when such queries are made from the same IP address that hosts your website. These tools break Google's terms of service (as laid out in their Webmaster Guidelines). Google allows certain automated queries into its database using its analytic tools and when accessing through a registered Google API account. Unauthorised types of automated query can cause problems, particularly when used excessively.

Over optimization penalties and Google filters Link over-optimisation is the single most common cause of a Google penalty. Poor SEO techniques such as aggressive link building using the same keywords in link anchor text often causes problems, especially where there has been a high link accrual rate and the website is relatively new. When managing link building campaigns, always vary the link text used and incorporate a variety of different keyword terms. Use a backlink anchor text analyser tool to check backlinks for sufficient keyword spread. Optimising for high paying (often abused) keywords like "Viagra" can further elevate risk, so mix in some long tail keywords into the equation. For brand new domains, be sensible and add a few one way backlinks a week and use deep linking to website internal pages, rather than just homepage link building. Above all, always vary your link anchor text to incorporate different keywords, your brand name and URL - not just incorporating variations on the same keyword!

There is strong evidence that Google has recently introduced some new automatic over optimisation filters into their algorithm. These seem to have the effect of applying a penalty to a page which has been over optimised for the same keyword by link building. See Google filters for more information or contact KSL Consulting for assistance (fees apply).

Website cross linking & detectable link schemes If you run more than one website and the Google penalty hits all sites at the same time, check the interlinking (cross linking) between those sites. Extensive interlinking of websites, particularly if they are on the same C Class IP address (same ISP) can be viewed as "link schemes" by Google, breaking their terms of service. The risks are even higher where site A site wide links to site B and site B site wide links back to site A. In addition, link schemes offering paid link placement in the footer section of webpages (even on high Page Rank pages) are detectable search engine spam and are best avoided. Site-wide links such as those offered in a Blogroll should also be avoided, particularly if keyword optimised. The reality is that site wide links do little to increase site visibility in the Google SERP, nor do they improve Page Rank more than a single link, as Google only counts one link from a site to another. KSL Consulting also believe that Yahoo! now applies a similar policy. There is some evidence that the extensive use of unnatural looking site-wide links can lower the Google trust of a site, which can subsequently reduce ranking.

Duplicate Content problems Whilst duplicate content in its own right is not thought to trigger Google penalties, it can be responsible for the non-indexation of website content and for placing all duplicate web pages into Google's supplemental index, which results in pages not ranking in the Google SERP. This can result in significant and sometimes sudden traffic loss to a site, similar to that caused by a penalty. Google will not index duplicate content and any site which utilises large amounts of content (like news feeds/articles) featured elsewhere on the web will likely suffer as a result.

Hidden text or links Remove any hidden text in your content and remove any hidden keywords. Such content may be hidden from view using CSS or alternatively, text may have been coded to be the same colour as the page background, rendering it invisible. These risky SEO techniques often lead to a Google penalty or web site ban and should be removed immediately. The same applies to hidden links, which Matt Cutts has openly stated break Google's webmaster guidelines.

Keyword stuffing (spamming) Remove excessive keyword stuffing in your website content (unnatural repetitions of the same phrase in body text). Always use natural, well written web copywriting techniques.

Check for Malware Problems It is worthwhile carrying out a check to see if Google has blacklisted your site as unsafe for browsing. To assess whether this is the case visit www.google.com/safebrowsing/diagnostic?site=mydomain.co.uk, replacing 'mydomain.co.uk' with your domain.

Automated page redirects Check for the use of automated browser redirects on any of your pages. Meta refresh and JavaScript automated redirects often result in Google penalties, as the pages using them are perceived to be doorway pages. This technique is especially risky if the refresh time is less than 5 seconds. To avoid Google penalties, use a 301 redirect or a mod_rewrite rule instead of these methods; this involves setting up a .htaccess file on an Apache Linux web server.

Link buying or selling Check for any paid links (i.e. link buying of text links from known link suppliers or text link brokers). There is significant evidence that link buying can hurt rankings, and Matt Cutts (head of Google's spam team) mentions this on his Google SEO blog. Matt states that Google will also devalue links from companies selling text links, such that they offer zero value to the recipient in terms of improving website ranking or Page Rank. In the past, Google has applied a Page Rank penalty to known link sellers and to many low quality directories who "sell Page Rank".

Reciprocal link building campaigns Excessive reciprocal linking may trigger a Google penalty or cause a SERP filter to be applied when the same or very similar link anchor text is used and large numbers of reciprocal links are added in a relatively short time, leading to a high link accrual rate. We recommend that reciprocal linking be restricted to companies you have some business relationship with, rather than being done solely for SEO benefit. For example two companies who work together may need to link to one another legitimately. Lots of reciprocal link building with low quality sites or websites which have an unrelated theme is not recommended. This can lead to a Backlink Over Optimisation Penalty (known as a BLOOP to SEO experts!). This causes a sudden

drop in SERP ranking (often severe). To avoid the problem, reciprocal link exchange should only be used in moderation as part of a more sustainable SEO strategy which also builds quality one way links to original website content.

Paid links on Commercial Directories Some leading online web directories offer paid links on multiple directory pages. These can be keyword optimised anchor text and search engine accessible (I.E. they have no "nofollow" tag). If you have optimised the same keyword elsewhere in your SEO campaign, adding hundreds of links from directories all with the same or similar anchor text can cause serious problems, as Google monitors link accrual rate as part of its algorithm. In extreme cases we've seen these kinds of directory links trigger a Google filter.

Thin Affiliates and "Made for Adsense" sites It's a well known fact that Google dislikes affiliate sites with thin content and the same applies to "made to Adsense" sites. Always make sure affiliate sites have quality original content if you don't want to get them filtered out of the search results when someone completes a Google spam report. We have had personal experience of affiliate sites acquiring a Google penalty, so don't spend time and money on SEO without the right content.

Content Feeds and I-Frames Whilst content feeds (including RSS) are widely used on the web, there is some evidence that pulling in large amounts of duplicate content through such feeds may have an adverse effect on ranking. In particular, the use of I-frames to pull in affiliate content should be avoided where possible. Consider the use of banners and text links as an alternative.

Same Registrant Domains As Google has access to the WHOIS records for domains and is known to use this information, it is possible that a penalty applied to one website may reduce the ranking of other websites with the same registrant, although most filters only affect one domain.

Check Google Webmaster Guidelines Read the Google Webmaster Guidelines and check website compliance in all respects. Since early 2007, Google may alert webmasters via the Google Webmaster Console if it feels they might have unknowingly broken the guidelines, advising them that their site has been removed from Google for a set period of time due to breaking one or more of Google's Webmaster Guidelines. However, blatant spam or significant breaches of Google's rules will often result in a site being banned or filtered from the SERP, with no Webmaster Console notification. Where notification of a violation of Google's guidelines is received, it usually encourages the webmaster to correct the problem(s) and then submit a Google re-inclusion request (now referred to as a 'reconsideration request' in Webmaster Tools). From my experience, after this is done the website will usually regain its original ranking in around 14 days, assuming that all violations of Google's terms and conditions have been resolved.

Google Webmaster Tools According to Matt Cutts's Blog, Google is improving webmaster communication with respect to banned sites and penalties. Google is now informing some (but not all) webmasters the cause of a website ban or penalty, via their excellent new Webmaster Console. In addition, a Google re-inclusion request can be made from the same interface. For this reason, if you've been hit by a web site ban or penalty, it is worthwhile signing up for Google Webmaster Tools and uploading an XML Sitemap onto your site and then to check site status in the Google Webmaster Console. This is an easy 15 minute job and may help to identify the cause and fix for the problem!

Preparing Your Site for Google Reconsideration Google recently prepared a Google reconsideration video tutorial on how to create a good reconsideration request, including tips on what Google look for when assessing the reinclusion of any website. The video tutorial is presented by actual members of Google's reconsideration team and is very helpful to any webmaster looking to successfully prepare a reconsideration request.

Google SERP Filters


May 2010 Update - There is clear evidence that over-optimising a single keyword by adding too many backlinks and site-wide links can trigger a Google filter whereby the recipient page of these links no longer ranks in the organic SERP for the keyword being optimised. Affected pages appear to remain Google indexed and cached. The Google Trust Rank of the website may be slightly affected, leading to a ranking reduction for other keywords. Interestingly, though, affected websites can retain ranking for other long-tail keywords which have not been over-optimised, particularly on pages which have not been subject to aggressive link building but may have one or two decent natural links. One other fact worth noting is that affected pages seem to have high keyword density, to the point of being over-optimised. In some cases, changes to increase page keyword density for the problem keyword may have been made shortly before the Google filter was applied. In the cases observed, the websites still rank for their company name and pages still show in the Google index (using the site:domain.com command). However, picking a sentence of text from the affected page and searching for it in Google returned no results. It is therefore fair to assume that the filtered page was all but removed from the index as far as its ability to rank - even for long-tail keywords - although it still showed as being Google cached (cache:domain.com/page). To assess whether your website is affected by a Google SERP filter, do a site-wide backlink anchor text analysis using Majestic SEO (free) or a paid SEO tool like SEOmoz Linkscape, and check that the spread of keywords used in links to your page looks natural. Check your keyword density too, excluding meta tags. Google is tightening up on link spam in a big way; be warned! Contact KSL Consulting for more help or advice (consultancy fees apply).
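The anchor text check can be rough-and-ready. As a hedged illustration only (this is not part of Majestic SEO, Linkscape or any other tool mentioned above), the Python sketch below tallies the anchor text column of a hypothetical CSV backlink export; the "anchor_text" column name is an assumption, since each tool labels its export differently.

import csv
from collections import Counter

def anchor_text_spread(csv_path):
    # Count how often each anchor text appears in the exported backlink report.
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            anchor = (row.get("anchor_text") or "").strip().lower()  # assumed column name
            if anchor:
                counts[anchor] += 1
    if not counts:
        print("no anchor text found")
        return
    total = sum(counts.values())
    for anchor, links in counts.most_common(10):
        print(f"{anchor!r}: {links} links ({links / total:.0%})")

# Example: anchor_text_spread("backlinks.csv")

If a single commercial keyword accounts for the bulk of your links, the profile looks exactly like the kind of over-optimisation the filter described above appears to target.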

Check for a Total Google Website Ban


If you've used unethical black hat SEO techniques your website could be Google banned and consequently totally de-indexed. If your site no longer shows any pages indexed when the site:www.yourdomain.com command is used in Google (and it was previously indexed), then your site may have received the most extreme form of penalty - a total Google ban. Check for possible causes using the free SEO advice contained in our penalty checklist above.

Google Penalty Recovery Strategy


Recovering from a Google penalty normally involves fixing the cause of the problem and then waiting for Google to remove any over-optimisation penalties or SERP filters. Full recovery of Google ranking may take around 2-3 months after all website problems are corrected, although we have seen penalty recovery in a matter of weeks following full and thorough resolution of the Google Webmaster Guidelines infringements. The Google algorithm can automatically remove penalties if the affected website is still Google indexed. To check whether a particular website is still Google indexed, refer to our Google indexing page. If your website has been Google de-indexed and has lost PageRank, then you will need to make a Google re-inclusion request. Where the reason for the penalty is clear, it helps to provide details of any changes you've made to correct violations of the Google Webmaster Guidelines.

The best recovery strategy from any Google penalty is to thoroughly familiarise yourself with Google Webmaster Guidelines and also check the SEO Chat Forum for threads surrounding any recent Google algorithm changes and to evaluate recent changes made to your website prior to the sudden drop in Google ranking. Don't forget to check your link building strategy as poor SEO often causes Google penalties. Start by removing any reciprocal links to low quality websites, or sites having no relevance to your website theme.

Preparing for a Google Re-Inclusion (Reconsideration) Request


We recommend you start by watching the Google reconsideration tips video. If your site has been de-indexed due to a Google penalty, correct the problem and then apply to be re-included in the Google index by submitting a Google re-inclusion request from your Webmaster Tools account. More information about this is provided in Google Webmaster Help. Google refers to this process as making a "reconsideration request", which is now submitted from your Webmaster Tools login.

How long does site reconsideration take?


By submitting a reconsideration request to Google you enter the queue for the manual review process whereby your site is manually checked for violations of Google's Webmaster Guidelines. This can take several weeks. At the end of the process, an Inbox message is usually sent to the Webmaster to confirm that the reconsideration has been processed. This will be visible by logging into Webmaster Tools and then checking your Inbox under 'Messages'.

A typical message merely says that the domain in question has been reviewed and that it will be considered for re-inclusion in the Google SERP, assuming that all violations of Google's Webmaster Guidelines have been corrected. Following this, if the domain does not recover from the penalty, the webmaster should undertake a more thorough website audit, paying particular attention to backlink anchor text, recent backlink acquisition rate and the removal of any known paid links.

Majestic SEO can help in this regard by producing graphical representations of link trends. Note however that Majestic's backlink data may not be quite up to date. If you'd like more help with solving suspected Google penalties or any other Google ranking problem, contact KSL Consulting for professional SEO advice at affordable prices.

What is Link building?


Link building is a combination of techniques and strategies that earn a website backlinks (inbound links) from other websites. These links are typically do-follow, which allows users of one website to click through and visit another. Link building is an integral part of SEO, and optimizers all over the world agree that good website visibility is nearly impossible without it. You can use multiple link building techniques such as social bookmarking, directory submissions, link wheels, blog commenting, forum posting and contextual links, as well as other services from linkbuilding-service.info.

Why link building is crucial?


If you ask this question of any webmaster or link building expert, the answer comes down to three simple points:
1. Higher search engine rankings
2. Enhanced website popularity
3. Quality traffic
But our link building experts add: "Achieving higher search engine rankings might be easy; maintaining those search engine positions for a long time is the difficult job." That is where most link building experts lag behind us, and it is also one of the reasons why we have been able to keep long-term clients.

Pay-Per-Click (PPC): a step-by-step guide


In my last post, PPC 101: Pay-Per-Click basics, I described PPC as a form of marketing advertisers could use to bring targeted audiences to their sites. This guide will detail the steps you would take to actually create a PPC ad campaign.

1. Go to Google Adwords and create an account if you don't already have one. You may use other search engines, but if you're new to this, I suggest starting with the most popular, Google Adwords.
2. Perform research on keywords for your site.
3. Create a list of the keywords you want your ad to be displayed for when they are searched.
4. Choose how much you're willing to pay per click. This is known as your bid amount.
5. Choose or create a landing page for your ads. This is where you want your targeted audience to arrive. If possible, it should be a page developed specifically for your campaign and usually not your home page.
6. Write your ads. Stick with text ads for now; for each ad you will need a headline (max 50 characters), a description (max 200 characters), and the URL for your landing page.
7. Activate your Google Adwords campaign!
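As a small illustration of step 6, the Python sketch below checks ad copy against the character limits quoted above. It is purely illustrative and not tied to any Adwords API, and the 50/200 figures are simply the numbers given in this guide; the limits Google actually enforces change over time.

MAX_HEADLINE = 50      # headline limit quoted in step 6
MAX_DESCRIPTION = 200  # description limit quoted in step 6

def check_ad(headline, description):
    # Return a list of problems, or a confirmation that the copy fits the stated limits.
    problems = []
    if len(headline) > MAX_HEADLINE:
        problems.append(f"headline is {len(headline)} characters (max {MAX_HEADLINE})")
    if len(description) > MAX_DESCRIPTION:
        problems.append(f"description is {len(description)} characters (max {MAX_DESCRIPTION})")
    return problems or ["ad copy fits the stated limits"]

print(check_ad("Wholesale Flowers Delivered",
               "Fresh wholesale flowers shipped overnight. Order online today."))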

Those are the steps to create your PPC campaign, but you are not done! Just like any ad campaign, you need to monitor your performance closely. If this is your first dive into online ads, you'll find that you probably need some fine-tuning almost immediately. Luckily, unlike traditional ads, you can be as agile as you want. To be candid, you can almost chalk up your first campaign as a learning experience, and if you somehow make a few dollars in the process, then that's a bonus. A final tip: the parts that advertisers should evaluate (and constantly re-evaluate) are steps 2 and 3. Getting the keywords right is an art and a science. Even if you eventually get your keyword list to what you think is perfect, it's probably time to look again.

8 Tips To Make PPC Work With Link Building Efforts


Nov 24, 2009 at 8:11am ET by Julie Joyce In this crazy digital era where even people like my poor old gospel-singing father look online for almost everything (usually guns and cashews), you really need to understand how to use as many types of advertising as possible in order to improve your efforts and expand your reach. Since link building is all over the SEO news right now, it's being touted as the fix that will open the doors of the site to vast wealth and bring loads and loads of converting traffic. However, as with any form of marketing, nothing should truly stand alone. If you're just doing paid ads, you should consider link building. If you think that having lovely titles is enough to rank well, perhaps you should consider getting into social media. The key today is to embrace all sorts of different ideas and market yourself in the ways that matter to your audience, while using everything you have learned from one format to do better in another. Thus, we'll talk about how to make PPC and link development work together, instead of at cross-purposes, which I've seen all too often.

1. Create linkable content on your site. This ups the chance of a PPC conversion turning into a good link. Remember, a PPC user may not just stay on the landing page. Your entire site should have great and enticing content on it.

2. Create sticky landing pages. Make sure your landing page for PPC is sticky enough to generate inbound links from the people who click on an ad, go to your site, then link to it for whatever reason (great content, fantastic user experience, amazing product, etc.). This is no different from any other "content is king" idea, but sometimes PPC ads just aren't written, or maintained, with link building in mind. Maybe the person in charge of the ads has no clue that your PPC landing page has the power to garner a link, so always point it out. Any visibility is a chance to get a link, period. Don't waste it.

3. Make your copy memorable. Create PPC copy that people will remember, even if they don't click right now. They may be back later, and they may even give you a link. PPC copy is a lot like Twitter tweets in that you need to say a lot in a small amount of space, or no one's going to care. Now's the time to get your main keywords out there, tell a user why he or she should click, and make the connection that will stick in someone's mind.

4. Visualize your anchor text.

Think of how you would word your ideal anchor text for the landing page and include those words in your ad. I'm sure that you know what your most important keywords are, but many people don't actually put those words in the actual ad. If your desired anchor text for a link is "wholesale flowers", then don't just add it as a keyphrase that will trigger the ad to be shown. Use it in the title and/or the ad description. The more exact connections you make, the higher the chance of getting something good (like a link) out of it later.

5. Build up internal page visibility. The beauty of sending a PPC user to a landing page and not the home page is that you get a chance to generate a link to a subpage. This is an opportunity not to be missed. Depending upon your site, deep links can be very, very difficult to get without a lot of hard work. If there are internal pages that aren't generating decent links but are important to you, create some ads landing on those pages, and perhaps you'll start seeing links show up soon.

6. Don't be deceptive. Yes, this should go without saying, but too often I've clicked on an ad that wasn't at all what I was looking for but, due to some clever copy, I thought it would be. Not only does this waste money, it wastes time and it irritates users. The only link you're getting out of deception is most likely going to be one that trashes your name. Although that might be part of a natural link profile, just don't do it.

7. Make use of the capability for quick testing. In general, PPC changes are super fast, and paid ads make an excellent arena for testing purposes. Test different PPC content to see what converts, and carry this over into your link building efforts through content changes, new page names, and so forth.

8. Use your PPC analytics. PPC platforms give you analytics, so use them to figure out which phrases convert. Converting phrases make darned good anchor text. Try to alter your content to boost your visibility for those terms organically, as well. Don't take converting copy lightly; perhaps you can figure out what's causing the conversions and replicate it on other pages.

At the risk of alienating individuals who rely on PPC for their business, it is sometimes used as a band-aid that covers up all the poor SEO and lack of content on a site. Even if you can't rank organically, you can still buy advertising space. Successful online marketing is about taking all the available chances to get your name out there. If people know where to find you, they may link to you. It's very simple. Every paid ad sends a user to a page on your site; that is a chance not only to convert, but to generate a link.

There's a lot that you can apply to both PPC and link building through use of the other, so if you're not making use of these arenas in conjunction with one another, perhaps you should consider it.

Robots.txt generator: http://webtools.live2support.com/se_robots.php - a webmaster tool that generates a robots.txt file for search engines (allow and disallow rules, plus adding user agents to disallow).

Introduction to "robots.txt" There is a hidden, relentless force that permeates the web and its billions of web pages and files, unbeknownst to the majority of us sentient beings. I'm talking about search engine crawlers and robots here. Every day hundreds of them go out and scour the web, whether it's Google trying to index the entire web, or a spam bot collecting any email address it could find for less than honorable intentions. As site owners, what little control we have over what robots are allowed to do when they visit our sites exist in a magical little file called "robots.txt." "Robots.txt" is a regular text file that through its name, has special meaning to the majority of "honorable" robots on the web. By defining a few rules in this text file, you can instruct robots to not crawl and index certain files, directories within your site, or at all. For example, you may not want Google to crawl the /images directory of your site, as it's both meaningless to you and a waste of your site's bandwidth. "Robots.txt" lets you tell Google just that. Creating your "robots.txt" file So lets get moving. Create a regular text file called "robots.txt", and make sure it's named exactly that. This file must be uploaded to the root accessible directory of your site, not a subdirectory (ie: http://www.mysite.com but NOT http://www.mysite.com/stuff/). It is only by following the above two rules will search engines interpret the instructions contained in the file. Deviate from this, and "robots.txt" becomes nothing more than a regular text file, like Cinderella after midnight. Now that you know what to name your text file and where to upload it, you need to learn what to actually put in it to send commands off to search engines that follow this protocol (formally the "Robots Exclusion Protocol"). The format is simple enough for most intents and purposes: a USERAGENT line to identify the crawler in question followed by one or more DISALLOW: lines to disallow it from crawling certain parts of your site.

1) Here's a basic "robots.txt":

User-agent: *
Disallow: /

With the above declared, all robots (indicated by "*") are instructed not to index any of your pages (indicated by "/"). Most likely not what you want, but you get the idea.

2) Let's get a little more discriminatory now. While every webmaster loves Google, you may not want Google's Image bot crawling your site's images and making them searchable online, if only to save bandwidth. The declaration below will do the trick:

User-agent: Googlebot-Image
Disallow: /

3) The following disallows all search engines and robots from crawling select directories and pages:

User-agent: *
Disallow: /cgi-bin/
Disallow: /privatedir/
Disallow: /tutorials/blank.htm

4) You can conditionally target multiple robots in "robots.txt." Take a look at the below:

User-agent: *
Disallow: /

User-agent: Googlebot
Disallow: /cgi-bin/
Disallow: /privatedir/

This is interesting - here we declare that crawlers in general should not crawl any part of our site, EXCEPT for Google, which is allowed to crawl the entire site apart from /cgi-bin/ and /privatedir/. So the rules of specificity apply, not inheritance.

5) There is a way to use Disallow: to essentially turn it into "Allow all", and that is by not entering a value after the colon (:):

User-agent: *
Disallow: /

User-agent: ia_archiver
Disallow:

Here I'm saying all crawlers should be prohibited from crawling our site, except for Alexa, which is allowed.

6) Finally, some crawlers now support an additional field called "Allow:", most notably Google. As its name implies, "Allow:" lets you explicitly dictate what files/folders can be crawled. However, this field is currently not part of the "robots.txt" protocol, so my recommendation is to use it only if absolutely needed, as it might confuse some less intelligent crawlers. Per Google's FAQs for webmasters, the below is the preferred way to disallow all crawlers from your site EXCEPT Google:

User-agent: *
Disallow: /

User-agent: Googlebot
Allow: /

Robots.txt is a text (not HTML) file you put on your site to tell search robots which pages you would like them not to visit. One way to tell search engines which files and folders on your Web site to avoid is with the Robots metatag. But since not all search engines read metatags, the Robots metatag can simply go unnoticed. A better way to inform search engines of your wishes is to use a robots.txt file.

What Is Robots.txt? Robots.txt is a text (not HTML) file you put on your site to tell search robots which pages you would like them not to visit. Robots.txt is by no means mandatory for search engines, but generally search engines obey what they are asked not to do. It is important to clarify that robots.txt is not a way of preventing search engines from crawling your site (i.e. it is not a firewall, or a kind of password protection); putting up a robots.txt file is something like putting a "Please do not enter" note on an unlocked door - you cannot prevent thieves from coming in, but the good guys will not open the door and enter. That is why we say that if you have really sensitive data, it is too naive to rely on robots.txt to protect it from being indexed and displayed in search results. The location of robots.txt is very important. It must be in the main directory because otherwise user agents (search engines) will not be able to find it - they do not search the whole site for a file named robots.txt. Instead, they look first in the main directory (i.e. http://mydomain.com/robots.txt) and if they don't find it there, they simply assume that this site does not have a robots.txt file and therefore they index everything they find along the way. So, if you don't put robots.txt in the right place, do not be surprised that search engines index your whole site. The concept and structure of robots.txt was developed more than a decade ago; if you are interested in learning more about it, visit http://www.robotstxt.org/ or go straight to the Standard for Robot Exclusion, because in this article we will deal only with the most important aspects of a robots.txt file. Next we will continue with the structure of a robots.txt file. Structure of a Robots.txt File

The structure of a robots.txt file is pretty simple (and barely flexible) - it is an endless list of user agents and disallowed files and directories. Basically, the syntax is as follows:

User-agent:
Disallow:

User-agent: names the search engine crawler a record applies to, and Disallow: lists the files and directories to be excluded from indexing. In addition to User-agent: and Disallow: entries, you can include comment lines - just put the # sign at the beginning of the line:

# All user agents are disallowed to see the /temp directory.
User-agent: *
Disallow: /temp/

The Traps of a Robots.txt File When you start making complicated files - i.e. you decide to allow different user agents access to different directories - problems can start if you do not pay special attention to the traps of a robots.txt file. Common mistakes include typos and contradicting directives. Typos are misspelled user agents, misspelled directories, missing colons after User-agent and Disallow, etc. Typos can be tricky to find, but in some cases validation tools help. The more serious problem is with logical errors. For instance:

User-agent: *
Disallow: /temp/

User-agent: Googlebot
Disallow: /images/
Disallow: /temp/
Disallow: /cgi-bin/

The intention here is to block /temp/ for everyone and to block /images/, /temp/ and /cgi-bin/ for Googlebot. The trap is that records do not combine: a crawler follows only the single record that applies to it. Googlebot, for example, will use its own record and ignore the general one, so anything listed only under "User-agent: *" and not repeated under "User-agent: Googlebot" remains open to Googlebot (and some less careful crawlers simply stop at the first record that seems to match, which makes the outcome even harder to predict). In the example above /temp/ is correctly repeated in both records, but forget that repetition and Googlebot will happily crawl /temp/ while you think you have told it not to. You see, the structure of a robots.txt file is simple, but serious mistakes can still be made easily.

Tools to Generate and Validate a Robots.txt File Having in mind the simple syntax of a robots.txt file, you can always read it yourself to see if everything is OK, but it is much easier to use a validator, like this one: http://tool.motoricerca.info/robots-checker.phtml. These tools report common mistakes like missing slashes or colons which, if not detected, compromise your efforts. For instance, if you have typed:

User agent: *
Disallow: /temp/

this is wrong because there is no hyphen between "User" and "agent", so the syntax is incorrect. In those cases when you have a complex robots.txt file - i.e. you give different instructions to different user agents, or you have a long list of directories and subdirectories to exclude - writing the file manually can be a real pain. But do not worry - there are tools that will generate the file for you. What is more, there are visual tools that allow you to point and select which files and folders are to be excluded. And even if you do not feel like buying a graphical tool for robots.txt generation, there are online tools to assist you. For instance, the Server-Side Robots Generator offers a dropdown list of user agents and a text box for you to list the files you don't want indexed. Honestly, it is not much of a help, unless you want to set specific rules for different search engines, because in any case it is up to you to type the list of directories, but it is better than nothing.
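The checks such validators run are not magic. As a rough illustration only (this is not the code behind any of the tools named above), the Python sketch below flags lines that are neither comments nor recognisable "Field: value" pairs, which is enough to catch the "User agent" typo shown above.

KNOWN_FIELDS = {"user-agent", "disallow", "allow", "sitemap", "crawl-delay"}

def lint_robots_txt(text):
    # Report lines that are not blank lines, comments or known directives.
    problems = []
    for number, line in enumerate(text.splitlines(), start=1):
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # blank lines and comments are fine
        if ":" not in stripped:
            problems.append(f"line {number}: missing colon in {stripped!r}")
            continue
        field = stripped.split(":", 1)[0].strip().lower()
        if field not in KNOWN_FIELDS:
            problems.append(f"line {number}: unrecognised field {field!r}")
    return problems

print(lint_robots_txt("User agent: *\nDisallow: /temp/"))
# ["line 1: unrecognised field 'user agent'"]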

A Standard for Robot Exclusion Table of contents: Status of this document Introduction Method Format Examples Example Code Author's Address

Status of this document This document represents a consensus on 30 June 1994 on the robots mailing list (robots-request@nexor.co.uk), between the majority of robot authors and other people with an interest in robots. It has also been open for discussion on the Technical World Wide Web mailing list (www-talk@info.cern.ch). This document is based on a previous working draft under the same title. It is not an official standard backed by a standards body, or owned by any commercial organisation. It is not enforced by anybody, and there is no guarantee that all current and future robots will use it. Consider it a common facility the majority of

robot authors offer the WWW community to protect WWW server against unwanted accesses by their robots. The latest version of this document can be found on http://www.robotstxt.org/wc/robots.html. Introduction WWW Robots (also called wanderers or spiders) are programs that traverse many pages in the World Wide Web by recursively retrieving linked pages. For more information see the robots page. In 1993 and 1994 there have been occasions where robots have visited WWW servers where they weren't welcome for various reasons. Sometimes these reasons were robot specific, e.g. certain robots swamped servers with rapid-fire requests, or retrieved the same files repeatedly. In other situations robots traversed parts of WWW servers that weren't suitable, e.g. very deep virtual trees, duplicated information, temporary information, or cgi-scripts with side-effects (such as voting). These incidents indicated the need for established mechanisms for WWW servers to indicate to robots which parts of their server should not be accessed. This standard addresses this need with an operational solution. The Method The method used to exclude robots from a server is to create a file on the server which specifies an access policy for robots. This file must be accessible via HTTP on the local URL "/robots.txt". The contents of this file are specified below. This approach was chosen because it can be easily implemented on any existing WWW server, and a robot can find the access policy with only a single document retrieval. A possible drawback of this single-file approach is that only a server administrator can maintain such a list, not the individual document maintainers on the server. This can be resolved by a local process to construct the single file from a number of others, but if, or how, this is done is outside of the scope of this document. The choice of the URL was motivated by several criteria: The filename should fit in file naming restrictions of all common operating systems. The filename extension should not require extra server configuration. The filename should indicate the purpose of the file and be easy to remember. The likelihood of a clash with existing files should be minimal.

The Format The format and semantics of the "/robots.txt" file are as follows:

The file consists of one or more records separated by one or more blank lines (terminated by CR, CR/NL, or NL). Each record contains lines of the form "<field>:<optionalspace><value><optionalspace>". The field name is case insensitive. Comments can be included in the file using UNIX Bourne shell conventions: the '#' character is used to indicate that preceding space (if any) and the remainder of the line up to the line termination is discarded. Lines containing only a comment are discarded completely, and therefore do not indicate a record boundary. The record starts with one or more User-agent lines, followed by one or more Disallow lines, as detailed below. Unrecognised headers are ignored.

User-agent The value of this field is the name of the robot the record is describing access policy for. If more than one User-agent field is present, the record describes an identical access policy for more than one robot. At least one field needs to be present per record. The robot should be liberal in interpreting this field. A case insensitive substring match of the name without version information is recommended. If the value is '*', the record describes the default access policy for any robot that has not matched any of the other records. It is not allowed to have multiple such records in the "/robots.txt" file.

Disallow The value of this field specifies a partial URL that is not to be visited. This can be a full path, or a partial path; any URL that starts with this value will not be retrieved. For example, Disallow: /help disallows both /help.html and /help/index.html, whereas Disallow: /help/ would disallow /help/index.html but allow /help.html. An empty value indicates that all URLs can be retrieved. At least one Disallow field needs to be present in a record.

The presence of an empty "/robots.txt" file has no explicit associated semantics; it will be treated as if it was not present, i.e. all robots will consider themselves welcome.

Examples The following example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/" or "/tmp/", or /foo.html:

# robots.txt for http://www.example.com/
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
Disallow: /tmp/ # these will soon disappear
Disallow: /foo.html

This example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/", except the robot called "cybermapper":

# robots.txt for http://www.example.com/
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space

# Cybermapper knows where to go.
User-agent: cybermapper
Disallow:

This example indicates that no robots should visit this site further:

# go away
User-agent: *
Disallow: /

Example Code Although it is not part of this specification, some example code in Perl is available in norobots.pl. It is a bit more flexible in its parsing than this document specifies, and is provided as-is, without warranty.
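To make the matching rules above concrete, here is a deliberately minimal Python sketch of a parser that follows this old standard: records separated by blank lines, '#' comments, case-insensitive field names, substring matching on the robot name and prefix matching on Disallow paths. It ignores the many refinements real crawlers have added since (Allow, wildcards, sitemaps), so treat it only as an illustration of the 1994 rules.

def parse_records(text):
    # Split the file into (user-agents, disallow-prefixes) records.
    records, agents, disallows = [], [], []
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()      # drop comments
        if not line:
            if raw.strip().startswith("#"):
                continue                         # comment-only lines do not end a record
            if agents or disallows:
                records.append((agents, disallows))
                agents, disallows = [], []
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            agents.append(value.lower())
        elif field == "disallow":
            disallows.append(value)
    if agents or disallows:
        records.append((agents, disallows))
    return records

def allowed(records, robot_name, path):
    # Pick the record naming this robot (substring match), else the '*' record.
    robot_name = robot_name.lower()
    chosen = None
    for agents, disallows in records:
        if any(agent != "*" and agent in robot_name for agent in agents):
            chosen = disallows
            break
    if chosen is None:
        for agents, disallows in records:
            if "*" in agents:
                chosen = disallows
                break
    if chosen is None:
        return True                              # no matching record: everything may be retrieved
    return not any(rule and path.startswith(rule) for rule in chosen)

example = """User-agent: *
Disallow: /cyberworld/map/

User-agent: cybermapper
Disallow:
"""
records = parse_records(example)
print(allowed(records, "cybermapper", "/cyberworld/map/area1.html"))       # True
print(allowed(records, "SomeOtherBot/1.0", "/cyberworld/map/area1.html"))  # False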

Redirect to a New Page Place the following HTML redirect code between the <HEAD> and </HEAD> tags of your HTML code.

<meta HTTP-EQUIV="REFRESH" content="0; url=http://www.yourdomain.com/index.html">

HTML <div> Tag A section in a document that will be displayed in green: <div style="color:#00FF00"> <h3>This is a header</h3> <p>This is a paragraph.</p> </div>

Definition and Usage The <div> tag defines a division or a section in an HTML document. The <div> tag is often used to group block-elements to format them with styles. The <div> tag is supported in all major browsers.

Using ALT And TITLE Attributes When you're trying to create effective page content that will appeal to both human visitors and search engine spiders, you need to get the most out of every page element. One way to do this is to use ALT and TITLE attributes wherever you can. They increase your site's usability level and promotion possibilities if you clearly understand where to put them, when to use them, and why.

Understand Their Purpose The ALT attribute is designed to be an alternative text description for images. ALT text displays before the image is loaded (if it's loaded at all) in the major browsers, and instead of the image in text-based browsers like Lynx. ALT is a required attribute for images and can only be used in image tags because its specific purpose is to describe images. In contrast, you can use the TITLE attribute for just about any page element, but it isn't required for any of them. Use it to describe links, tables, individual table rows, and other structural HTML elements. It is more versatile than the ALT attribute, and many search engine ranking algorithms read the text in TITLE attributes as regular page content.

Consider the benefits of good ALT and TITLE text:
They enhance the browsing experience of visitors with disabilities.
They increase your page's keyword density score and relevancy for your targeted keywords. (Calculate your page's keyword density with Page Primer and find out if you've used your keywords too little - or too often.)
They provide valuable information to all your visitors through descriptive link titles and descriptive text about other page elements.

Multiple Benefits From One Addition ALT and TITLE attributes are simple to add to your code. And remember that ALT is required for images: HTML Toolbox will alert you if your images don't have ALT text descriptions.

Not sure how to add an ALT or TITLE to your HTML tags? Try these examples:

<img src="cafeteria.jpg" height="200" width="200" alt="UAHC campers enjoy a meal in the camp cafeteria">
<table width="100" border="2" title="Henry Jacobs Camp summer 2003 schedule">
<a href="page1.html" title="HS Jacobs - a UAHC camp in Utica, MS">Henry S. Jacobs Camp</a>
<form name="application" title="Henry Jacobs camper application" method=" " action=" ">

Note that in each example, we used either the ALT or TITLE attribute to provide more information about the page element. ALT describes the image (a photo of the camp cafeteria), and the TITLE information in the table, link, and form describes the purpose of the element while using targeted keywords like UAHC camp and Henry Jacobs Camp. Used this way, ALT and TITLE do double duty and boost both your promotion efforts and your site's accessibility. Search engine spiders love the keywords. Visitors with images turned off or who use text browsers, screen readers and other assistive technologies will appreciate your effort to orient them on your Web page and explain the purpose of elements like tables, forms, and links.

How Browsers Display Them In the past, some webmasters used the ALT text description as a sort of pop-up tooltip on images, to add funny captions to family photos, or to advertise a sale on certain products. That was because the TITLE attribute wasn't widely supported. It is now, and Netscape 6.x browsers interpret the W3C specifications rather strictly - ALT text no longer "pops up" when you run your mouse over an image. You have to use a TITLE attribute for that. That isn't the case with other browsers. Here's what to expect from both attributes when you roll your mouse over page elements:

Explorer 5.x and up - ALT: displays for images if the image doesn't load. TITLE: displays on mouseover; displays correctly for all other page elements.
Netscape 4.x - ALT: displays for images on mouseover. TITLE: ignored on all page elements.
Netscape 6.x and up - ALT: only displays if the image fails to load or images are turned off. TITLE: displays on mouseover for all page elements.
Opera - ALT: displays if images are turned off or not loaded. TITLE: displays on mouseover, both as a pop-up tooltip and in the browser's status bar.

Be careful not to make your ALT and TITLE text too long because all Netscape browsers display it on one line. If you try to stuff several sentences of text into a single attribute, your Netscape visitors may only be able to read a portion of it.

The Rules of ALT The "Rules of ALT" is a short set of guidelines for how to use alternate text in your images. These rules will help make your images -- and your site as a whole -- more effective.

Always use the ALT attribute in <IMG ...> and <AREA ...> tags, and in image type <INPUT ...> tags. There is no situation where you should not use ALT with these tags.

If the image is purely decorative, i.e. if it has no informational value and is there only for the visual presentation of the page, then use ALT="". If the image is intended as a bullet, use ALT="*". If it is intended as a "horizontal rule" use ALT="----------".

If the picture is a substitute for text, such as the name of a company in a logo, a picture of a signature, or a navigation aid such as an arrow pointing to the "home page", then put that information in ALT. So, for example, if the top of your page has a logo which reads "The Sarah Schoenfeld Company" then use ALT="The Sarah Schoenfeld Company". (It is also appropriate in this situation to put the logo image in an <H1 ...> element.) In these situations it is not desirable to describe the picture (e.g. "Logo of The Sarah Schoenfeld Company") because the important information about the picture (the name of the company) is already in the alternate text.

If the image is in fact a picture of something which cannot be substituted for with text, provide a brief description of the contents, such as "picture of my dog Zoe". If the picture illustrates a concept, describe the concept, such as "College students in the 1980's wore their backpacks over one shoulder."

If a lengthy description is needed to describe the image, use the LONGDESC attribute to point to the URL of another page which has the lengthy description. Because LONGDESC is not yet sufficiently supported, also follow the image with a "D-link". A D-link is a standard anchor link with contents consisting of the capital letter "D", like this: <A HREF="barchart.html">D</A>

In image maps, all <AREA ...> tags should have ALT attributes. Many text and aural browsers can use this information to construct a non-graphical set of links. Unfortunately, this capability is not built into many browsers yet, particularly Netscape and MSIE with image auto-loading turned off. Therefore it is still best to create a text-based set of links below the image map. Done properly, this will not detract from the appearance of the page.

Do not use ASCII art, such as --> to create an arrow. If the picture is an arrow to indicate "next page", then use ALT="next page".

A more complete set of rules can be found at http://www.w3.org/TR/WDWAI-PAGEAUTH/

Grey-Hat SEO Tactics The following tactics fall in the grey area between legitimate tactics and search engine spam. They include tactics such as cloaking, paid links, duplicate content and a number of others. Unless you are on the correct side of this equation, these tactics are not recommended. Remember: even if the search engines cannot detect these tactics when they are used as spam, your competitors will undoubtedly be on the lookout and will report your site to the engines in order to eliminate you from the competition. It is definitely worth noting that, while it may be tempting to enlist grey-hat and black-hat SEO tactics in order to rank well, doing so stands a very good chance of getting your website penalized. There are legitimate methods for ranking a website well on the search engines. It is highly recommended that webmasters and SEOs put in the extra time and effort to properly rank a website, ensuring that the site will not be penalized down the road or even banned from the search engines entirely.

Grey-Hat SEO Tactics: Cloaking There are times when cloaking is considered a legitimate tactic by users and search engines alike. Basically, if there is a logical reason why you should be allowed to present different information to the search engines than to the visitor (if you have content behind a "members only" area, for example), you are relatively safe. Even so, this tactic is very risky, and it is recommended that you contact each search engine, present your reasoning, and allow them the opportunity to approve its use. Arguably, another example of a site legitimately using cloaking is when the site is mainly image-based, such as an art site. In this event, provided that the text used to represent the page accurately describes the page and the image(s) on it, this could be considered a legitimate use of cloaking. As cloaking has often been abused, if other methods such as adding visible text to the page are possible, they are recommended. If there are no other alternatives, it is recommended that you contact the search engine prior to adopting this tactic and explain your argument. There is more information on cloaking on our black-hat SEO tactics page.

Paid Links The practice of purchasing links on websites solely for the increase in link popularity they can bring has grown steadily over the last year or so, with link auction sites such as LinkAdage making this practice easier. (You can read more about LinkAdage on our SEO resources page.) When links are purchased as pure advertising, the practice is considered legitimate, while the practice of purchasing links only for the increase in link popularity is considered an abuse, and efforts will be made to either discount the links or penalize the site (usually the seller's, though not always). As a general rule, if you are purchasing links you should do so for the traffic that they will yield and consider any increase in link popularity to be an "added bonus". You can read more about purchasing links and where to do so on our SEO resources page.

Duplicate Content Due primarily to the increase in popularity of affiliate programs, duplicate content on the web has become an increasingly significant problem for both search engines and search engine users alike, with the same or similar sites dominating the top positions in the search engine results pages. To address this problem, many search engines have added filters that seek out pages with the same or very similar content and eliminate the duplicates. Even when duplicate content is not detected by the search engines, it is often reported by competitors and the site's rankings penalized. There are times when duplicate content is considered legitimate by both search engines and visitors, and that is on resource sites. A site that exists primarily as an index of articles on a specific subject matter will not be penalized for posting articles that occur elsewhere on the net, though the weight it is given as additional content will likely not be as high as a page of unique content. If you find competitors using these tactics, it is not unethical to report them to the search engines. You are helping yourself, the search engines, and the visitors by ensuring that only legitimate companies, providing real information and content, appear at the top of the search engines.

White-Hat SEO Tactics Beanstalk uses only white-hat SEO tactics and as such - we're very pleased to see you here. If you're interested in hiring a white-hat SEO firm, please visit our SEO services page or, better yet, call us toll free at 877-370-9750 for more information on how we can help you. If you're a do-it-yourselfer, keep reading ... here are the tactics you can use to stay out of trouble. What Is White-Hat SEO? Any SEO tactic that maintains the integrity of your website and the SERPs (search engine results pages) is considered a "white-hat" search engine optimization tactic. These are the only tactics that we will use whenever applicable and which enhance, rather than detract from, your website and its rankings.

White-Hat SEO Tactics: Internal Linking By far one of the easiest ways to stop your website from ranking well on the search engines is to make it difficult for search engines to find their way through it. Many sites use some form of script to enable fancy drop-down navigation, etc. Many of these scripts cannot be crawled by the search engines, resulting in unindexed pages. While many of these effects add visual appeal to a website, if you are using scripts or some other form of navigation that will hinder the spidering of your website, it is important to add text links to the bottom of at least your homepage linking to all your main internal pages, including a sitemap to your internal pages.

Reciprocal Linking Exchanging links with other webmasters is a good way (not the best, but good) of attaining additional incoming links to your site. While the value of reciprocal links has declined a bit over the past year, they certainly still have their place. A VERY important note is that if you do plan on building reciprocal links, it is important to make sure that you do so intelligently. Random reciprocal link building, in which you exchange links with any and virtually all sites that you can, will not help you over the long run. Link only to sites that are related to yours, whose content your visitors will be interested in, and preferably which contain the keywords that you want to target. Building relevancy through association is never a bad thing, unless you're linking to bad neighborhoods (penalized industries and/or websites). If you currently undertake, or plan to undertake, reciprocal link building, you know how time-consuming this process can be. A useful tool that can speed up the process is PRProwler. Essentially this tool allows you to find related sites with high PageRank, weeding out many of the sites that would simply be a waste of time to even visit. You can read more about PRProwler on our search engine positioning tools page.

Content Creation Don't confuse "content creation" with doorway pages and the like. When we recommend content creation we are discussing creating quality, unique content that will be of interest to your visitors and which will add value to your site. The more content-rich your site is, the more valuable it will appear to the search engines, your human visitors, and other webmasters, who will be far more likely to link to your website if they find you to be a solid resource on their subject. Creating good content can be very time-consuming; however, it will be well worth the effort in the long run. As an additional bonus, these new pages can be used to target additional keywords related to the topic of the page.

Writing For Others You know more about your business than those around you, so why not let everyone know? Whether it be in the form of articles, forum posts, or a spotlight piece on someone else's website, creating content that other people will want to read and post on their sites is one of the best ways to build links to your website that don't require a reciprocal link back.

Site Optimization The manipulation of your content, wording, and site structure for the purpose of attaining high search engine positioning is the backbone of SEO and the search engine positioning industry. Everything from creating solid title and meta tags to tweaking the content to maximize its search engine effectiveness is key to any successful optimization effort. That said, it is of primary importance that the optimization of a website not detract from the message and quality of content contained within the site. There's no point in driving traffic to a site that is so poorly worded that it cannot possibly convey the desired message and which thus cannot sell. Site optimization must always maintain the salability and solid message of the site while maximizing its exposure on the search engines.

Black-Hat SEO Tactics Webmasters constantly attempt to "trick" the search engines into ranking sites and pages based on illegitimate means. Whether this is through the use of doorway pages, hidden text, interlinking, keyword spamming or other means, these techniques are meant only to trick a search engine into placing a website high in the rankings. Because of this, sites using black-hat SEO tactics tend to drop from these positions as fast as they climb (if they do climb at all). The following tactics are not listed to help you "trick" the search engines but rather to warn you against these tactics should you hear they are used by other SEOs (this is not to say that all other search engine optimization experts use these tactics, just that some do and you should be warned against them). Due to the sheer number of tricks and scripts used against search engines, they could not possibly all be listed here. Below you will find only some of the most common black-hat tactics. Many SEOs and webmasters have simply modified the tactics below in hopes that the new technique will work. Truthfully, they may, but not forever and probably not for long.

Black-Hat SEO Tactics: Keyword Stuffing This is probably one of the most commonly abused forms of search engine spam. Essentially, this is when a webmaster or SEO places a large number of instances of the targeted keyword phrase on a page in the hope that the search engine will read this as relevant. To offset the fact that this text generally reads horribly, it will often be placed at the bottom of a page and in a very small font size. An additional tactic that is often associated with this practice is hidden text, which is covered below.

Hidden Text Hidden text is text that is set to the same color as the background, or very close to it. While the major search engines can easily detect text set to the same color as a background, some webmasters will try to get around this by creating an image file the same color as the text and setting the image file as the background. While undetectable at this time by the search engines, this is blatant spam, and websites using this tactic are usually quickly reported by competitors and blacklisted.

Cloaking In short, cloaking is a method of presenting different information to the search engines than a human visitor would see. There are too many methods of cloaking to possibly list here, and some of them are still undetectable by the search engines. That said, which methods still work and for how long is rarely set in stone, and like hidden text, when one of your competitors figures out what is being done (and don't think they aren't watching you if you're holding one of the top search engine positions) they can and will report your site, and it will get banned.

Doorway Pages Doorway pages are pages added to a website solely to target a specific keyword phrase or phrases, and they provide little in the way of value to a visitor. Generally the content on these pages provides no information, and the page is only there to promote a phrase in the hope that once a visitor lands there, they will go to the homepage and continue on from there. Often, to save time, these pages are generated by software and added to a site automatically. This is a very dangerous practice. Not only are many of the methods of injecting doorway pages banned by the search engines, but a quick report to the search engine of this practice and your website will simply disappear, along with all the legitimate ranks you have attained with your genuine content pages.

Redirects Redirecting, when used as a black-hat tactic, is most commonly brought in as a complement to doorway pages. Because doorway pages generally have little or no substantial content, redirects are sometimes applied to automatically move a visitor to a page with actual content, such as the homepage of the site. As quickly as the search engines find ways of detecting such redirects, the spammers uncover ways around detection. That said, the search engines figure them out eventually, and your site will be penalized. That, or you'll be reported by a competitor or a

disgruntled searcher.

Duplicate Sites A throwback tactic that rarely works these days. When affiliate programs became popular, many webmasters would simply create a copy of the site they were promoting, tweak it a bit, and put it online in hopes that it would outrank the site it was promoting and capture its sales. As the search engines would ideally like to see unique content across all of their results, this tactic was quickly banned, and the search engines have methods for detecting and removing duplicate sites from their index. If the site is changed just enough to avoid automatic detection, with hidden text or the like, you can once again be reported to the search engines and be banned that way.

Interlinking As incoming links became more important for search engine positioning, the practice of building multiple websites and linking them together to build the overall link popularity of them all became common. This tactic is more difficult to detect than others when done "correctly" (we cannot give the method for "correct" interlinking here as it's still undetectable at the time of this writing and we don't want to provide a means to spam engines). This tactic is difficult to detect from a user standpoint unless you end up with multiple sites in the top positions on the search engines, in which case it is likely that you will be reported.

Reporting Your Competitors While this may seem a bit off, the practice of reporting competitors that you find using the tactics noted above, or other search engine spam tactics, is entirely legitimate and shouldn't be considered at all unethical. When we take on search engine positioning clients this is always incorporated into our practices when applicable (which happily is not that often). When a competitor uses unfair tactics to beat you, it is entirely fair to report them. If you have competitors that you feel are using illegitimate tactics to beat you on the search engines, feel free to visit our "Report Spam" page for links to where to go on the major search engines to report spam results and sites to them. Just make sure your own site is clean when you do.

The Role of the Webmaster

A Webmaster is a person who either: Creates and manages the information content (words and pictures) and organization of a Web site Manages the computer server and technical programming aspects of a Web site Or does both.

Companies advertising for a Webmaster vary in their use of the term. In a smaller company, a Webmaster typically "does it all." In a larger company, a Webmaster tends to be someone with either a writing and/or graphics design background who has acquired Web site creation skills (mainly knowledge and experience with HTML) or a more technical person with some programming skills. The "technical" Webmaster runs the server (for example, by managing the creation and authorization associated with file systems) and writes programs or PERL scripts required by the Web site. In a very large corporation, there may be a Webmaster team of people at the top of the corporation who establish the overall corporate Web design and policies, arrange the necessary technical resources (working with the people who provide the corporation its network infrastructure), and supervise the design of the corporation's Web site (which is often done by an outside firm). At division and product levels, there may be additional Webmasters who organize and develop the Web content and programming for their division or product. In addition, there is likely to be an interrelated effort to create a Web design, organization, and content for the corporation's intranet.

At a small corporation, the Webmaster may be in charge of creating the site and putting it on a separate company's server or setting up one within the company. The Web design and creation may be done initially by an outside Web design firm that turns the finished site over to the company's in-house Webmaster to maintain and perhaps add content within the established design. And if you are a firm that specializes in creating Web sites, you may refer to the overall producer or art director as the Webmaster for a site. Obviously, this term (and job) is still defining itself. A Webmaster is what a company says one is. In general, almost any Webmaster would be expected to know the Hypertext Markup Language (HTML) and have a good understanding of why a company should want a Web site.

The role of the Webmaster requires knowledge in several different areas. As a webmaster, you are responsible for the information base of a particular site or organization. Webmaster duties typically will include editorial responsibility for the content, quality and style of the site, in collaboration with the area authors on the team. This will include finding, creating and installing tools to create web content and check consistency; development and enforcement of the house style, including liaison with graphic artists; and the development of interactive web applications.

Typical areas of responsibility for a Webmaster include: HTML Authoring. Including an understanding of HTML 2.0, 3.2, 4, Dynamic HTML, and other extensions; e.g. tables, frames, server-push/client-pull, server-side includes, etc., as well as an appreciation for browser compatibility issues. CGI Scripting. Typically including (but not necessarily limited to) Perl, C and UNIX shell scripts. Basic Graphic Design Capability. Able to produce attractive Web pages that are effective within the limitations of the delivery medium. Internet Awareness. A general appreciation for the issues concerning the Internet and World Wide Web (download time/bandwidth, content-driven pages, graphics vs text, browser compatibility - colors, resolution, etc). General UNIX and PC (MS-Windows) Awareness. Basic TCP/IP and Networking (e.g. Service ports, Name Servers, Email, USENET, HTTP, FTP, etc). Graphic Design Skills. A knowledge of graphics applications and techniques (e.g. Photoshop, Fractal Painter, 3D Modelling) and the ability to apply these in effective ways within the constraints imposed by the nature of the Internet. Customer Awareness. The ability to manage a relationship with a customer and work within specified requirements. WWW Server Configuration (e.g. NCSA, CERN, Apache, Netscape Commercial Server - including NSAPI)

What is RSS?

RSS is an abbreviation of either Really Simple Syndication or Rich Site Summary; it depends on who you talk to. RSS is the new standard for distributing news and information via the Internet. The information is published via RSS news feeds. What is an RSS News Feed? An RSS news feed is a list of topics made available from a website using RSS. To read the feed, the user installs a newsreader and enters the URL of the news feed. RSS news feeds are a replacement for email newsletters. Eventually, everyone will be reading their news via an RSS news aggregator. This way you have full control over what you receive, without the need to visit websites. The RSS way of reading news also helps to protect you from spam, because you can keep your email address private.

Benefits of RSS Feeds

1) Benefits to RSS

RSS streamlines communication between publishers and readers. Since RSS surged in popularity, webmasters have been experimenting with RSS feeds to deliver content in new and innovative ways. Typically, RSS feeds contain news headlines and content summaries. The summaries contain just enough information without overwhelming the reader with superfluous detail; if readers are interested and want more, they can click on the item in the feed to visit the website containing the full details. RSS readers aggregate multiple feeds, making it easy for individuals to quickly scan the information contained in each one. Feeds are generally themed, allowing users to opt in to feeds that interest them. The big benefit of RSS is that individuals opt in to content of interest, fully controlling the flow of information they receive. If the quality of the content in a feed declines, users simply remove the feed from their RSS reader and will not receive any further updates from that source. The RSS reader acts as an aggregator, allowing users to view and scan multiple content streams in a timely fashion.

RSS is a great supplemental communication method that does not burden the publisher with maintaining subscriber lists or following strict privacy guidelines. Feeds are assembled according to the user's choices, which removes the burdens placed on publishers of email newsletters: publishers no longer need to be concerned with spam, privacy policies, and age guidelines.

Publishers using RSS as a communication vehicle are able to create keyword-rich, themed content, establishing trust, reputation, and ongoing communication with current and prospective customers.

What Kind of Information Can Be Delivered in RSS Feeds?

Blog Feed. Many blogs are catalogued in an RSS feed, with each blog entry summarized as a feed item. This makes it easy for visitors to scan blog posts for items of interest.

Article Feed. Articles are often placed into feeds to alert readers when new articles and content are available. The feed entry is typically an article summary or introduction; readers can then decide whether the article is of interest and read further.

Forum Feed. Many forums now have add-ons that allow participants to receive forum posts via RSS. These feeds often show the latest discussion topics; if users are interested, they simply click through to the forum to participate in the discussion. As a topic is updated, they will see new entries in the RSS feed.

Schedule Feed. Schools, clubs and organizations often use feeds to communicate meeting times, places and upcoming events. Such feeds are often used to publicize events and to notify the community of schedule changes or meeting agendas.

Discounts / Specials Feed. Retail and online stores have begun using RSS feeds to deliver their latest specials and discounted offers. Some online retailers have taken this a step further, allowing users to create their own feeds based on keywords or phrases. For example, this service generates a URL that can be entered into a news reader, and the feed is updated each time an item matching the specified criteria or keywords is added to Amazon - Amazon Search Feed - http://www.oxus.net/amazon/

Ego / News Monitoring. Companies or individuals interested in receiving headline news based on a specific brand or keyword can use RSS feeds to monitor news sources.

For example, users can use the following tool to create a feed that delivers filtered news from Google News: they will only receive items related to a specific keyword or phrase they set up - http://www.justinpfister.com/gnewsfeed.php

Industry-Specific RSS Feed Uses

Technical professionals in specific industries have also developed RSS feeds as a way to market, promote or communicate within their industries. In many cases, this has expanded their reach and increased communication with current and prospective customers and clients.

RSS feeds can be used by realtors to communicate the time and location of open houses, announce new property listings or promote decreased mortgage rates. Content feeds can also be used by universities to communicate sports scores or event schedules. Computer service professionals can create feeds to notify clients of potential security breaches, virus risks or outbreaks. Ultimately, RSS can be molded to meet the communication needs of many sectors. Consider how RSS can benefit your business and supplement your communication needs.

2) Benefits to Using RSS

RSS stands for Really Simple Syndication. It is a method developed to allow webmasters and online publishers to syndicate their content easily to other webmasters. For example, if you use RSS to publish your ezine, other webmasters can easily use RSS to display your ezine/content on their websites automatically. Without RSS, if a webmaster saw an interesting article and wanted to display it on his site, he would need to copy the article and paste it into an HTML file on his own website - a time-consuming and tedious process. With RSS, all that cutting and pasting can be eliminated and the process can be automated with RSS software.

RSS is relatively new. The latest wave started in 2004 (although it has been around since the late 90s, it was only used by the real geeks until then). One news release from the summer of 2004 is quoted as saying, "Google is considering renewing support for the popular RSS Web publishing format in some of its services. This marks the latest twist in a burgeoning standards war over technology that could change how people read the news." Even giants such as Microsoft, eBay and Yahoo embrace the RSS syndication format. Yet many webmasters and ezine publishers are still doing it the old way. Companies that don't yet have an RSS approach in place might need to re-evaluate the way they are doing business, because their competitors will probably have already implemented some level of RSS support on their web sites.

An innovative marketing channel has opened up, allowing you to market your products and services. With more and more people using RSS, the market potential is enormous. RSS has so far proven itself to be a preferred method of distributing news and information online. Those in the know subscribe to RSS feeds to read news online and to receive the latest updates from their preferred publishers. If you publish a newsletter, you now have another conduit for distributing your news to thousands of readers. Who doesn't want that?

Not only is RSS easy to use, it also brings you free targeted traffic without much additional effort. For your business to be successful, you need traffic to your website, and lots of it. You need to reach the people who are interested in your products and services; it doesn't matter how good those products and services are if no one visits your site. One solution for driving traffic to your site is SEO: you can spend countless hours on it, or spend a large part of your budget to hire a search optimization expert. Or you can use RSS to achieve similar results. With little effort you can add and display your articles on hundreds of websites - automatically. What could be better than automatic feeds while you sleep? You can stop the tedious article submissions. What's more, with RSS you'll be adding fresh, relevant content to your website - change the content daily if you prefer. It really is simple. Yahoo now even has a free RSS directory.

How to Reduce Bounce Rate?

Bounce rate can be defined as how many visitors leave your site without seeing a second page. This is an important metric for your website's performance: it does not matter how many hits, page views or unique visitors you get if they all leave immediately.

How to measure bounce rate? You can install Google Analytics to monitor it. If 60 out of 100 visitors leave without seeing a second page (usually within a given time, such as 30 minutes, defined by cookies), then your bounce rate is 60%. A bounce rate below 40% is good, 40%-65% is OK, and anything above that should be looked into. What you want to achieve through your site also defines what a good bounce rate is for you, and a high bounce rate is not always bad. For a search engine like Google, a 100% bounce rate may actually be good.

How to reduce bounce rate?

1. Choose your niche. The more focused you are, the lower the bounce rate will be. Don't write about civil wars and cinema gossip on the same site.
2. Update your site regularly. Regular visitors may bounce too: people often visit their favorite sites to check for new content, and if there is nothing new they will bounce. Give them email and RSS feed subscription options so they won't contribute to the bounce rate.
3. Write in-depth articles. People often land on particular pages through search engines. If you don't have comprehensive information about the given topic, they will bounce.
4. Write more related articles. If viewers like one of your articles, they will want to read more on the same topic on your website. Try to write several related articles on any given topic.
5. Monitor the search terms. Sometimes Google sends you visitors for a not-so-matching query. It is not your fault, but you can record these queries and try to provide updated content on those topics; you will then perform better in Google and your visitors will also be happier.
6. Give internal links. List related articles at the end of each article, link to relevant previous articles from within the article, and categorize and tag articles.
7. List popular posts in the sidebar.
8. Have better page navigation. Often people get lost and don't have a clue how to reach parent sections. Give them navigation breadcrumbs at the top, like Home >> Category name >> Article name (see the markup sketch after this list), and link to the previous and next articles. This helps people navigate back and forth and explore.
9. Use homepage real estate. Many sites have hundreds of useful articles, but the homepage is static and gives no clue about the hidden content. Redesign your homepage to include teasers and point to the important sections of your site, and make it dynamic so it shows the latest updates automatically. Note, however, that not everyone lands on your site through the homepage, so you also need to focus on internal linking and context-based menus on all pages of your site.
10. Have a clear and simple menu. Google does this well. Don't squeeze every page of your site into the menu; have a context-based, dynamic menu for each page. A menu should be self-explanatory - don't design it badly to look like a flash gimmick.

Now a quick quiz: I assume that websites with one specific kind of content will always have lower bounce rates. Can you guess which?
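Returning to tip 8 above, here is a minimal sketch of what breadcrumb markup near the top of an article page might look like. The class name, URLs and page titles are placeholder assumptions, not a prescribed structure.

<!-- Breadcrumb trail near the top of an article page; &raquo; renders as the >> separator -->
<div class="breadcrumbs">
  <a href="/">Home</a> &raquo;
  <a href="/category/web-development/">Web Development</a> &raquo;
  How to Reduce Bounce Rate
</div>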


Should I Use an HTML or an XML Sitemap on My Site?

First of all, let's clarify the difference between the two types of sitemaps for those who don't know it. An HTML sitemap is a page within your site that links to all the other important pages on your site. You can see an example of this on my Archives page; I call it Archives, but it is nothing more than an HTML sitemap. An XML sitemap does pretty much the same thing, but instead of using HTML (which a browser can interpret like any other page) it uses XML, a markup language for encoding documents. This means the XML sitemap will only be useful to (and make sense for) search bots, while the HTML sitemap will also be visible and useful to human visitors.

In my opinion most websites should have an HTML sitemap. Why? For two main reasons: it helps search engines crawl and understand your site, and it helps human visitors browse your site more efficiently. Why do I say most and not all websites? Because some specific types might not need it. Very small sites or online stores, for example, might not need an HTML sitemap because finding the individual pages there is straightforward. For content-based websites like blogs, however, an HTML sitemap is really helpful.

Now, if you are worried about having too many links on a single page, you could use a sectioned HTML sitemap, where the initial page links to all the months and each month's page then links to the individual posts. However, I believe this structure is less functional than having all the posts on a single page, like I do. Try to find a specific post on one of those sectioned sitemaps and you'll see how much time you waste going back and forth between pages until you find what you are looking for. On my archives, on the other hand, you just need to browse around until you find the right post.

Google does recommend that webmasters keep fewer than 100 links per page (including sitemap pages), but I believe this is not a strict policy. In other words, as long as your page with over 100 links exists to help your visitors, Google should be fine with it. This is the case with my archives: you can see that it has a PR 5, and if you search on Google for "daily blog tips archives" it appears as the first result, meaning that Google is fine with it. I use the Clean Archives WordPress plugin to create that page automatically.

What about XML sitemaps? In my opinion such a sitemap is not necessary, but it might be useful in some situations. Why don't I think it is necessary? Because if you craft your website with an efficient structure (i.e., a flat hierarchy of pages, clear navigation and sound permalinks), Google should have no problem crawling and indexing all your pages correctly. You can test this with the site:domain.com operator on Google, which reveals how many pages Google currently indexes from your site. Daily Blog Tips, for example, has around 1,700 pages indexed by Google. I have 1,425 posts published; if you add the secondary pages (e.g., pages beyond the homepage, category and archive pages) the number should get close to 1,700, so yes, Google is indexing all my pages right now.

Now, in what situations do I think one should use an XML sitemap? Exactly when one is having crawling or indexation problems. If Google were only indexing part of my pages, say 500 out of 1,700, then I would consider using an XML sitemap. Similarly, if my newest posts were taking too long to get indexed by Google (if at all), I would also add an XML sitemap to try to solve the problem.

Summing up: I believe most sites should use an HTML sitemap, because they are useful both for search engines and for human visitors (and if you don't want a page with more than 100 links, just break your sitemap into sub-sections). As for XML sitemaps, I believe they are useful if you are having crawling or indexation problems. Obviously, adding an XML sitemap to a healthy site won't do any harm, but I am not sure how much good it will do either.
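If you do decide to add one, here is a minimal sketch of an XML sitemap in the sitemaps.org format. The URLs, dates, change frequencies and priorities are placeholder assumptions; only the loc element is required for each URL.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal XML sitemap following the sitemaps.org protocol; lastmod, changefreq and priority are optional hints -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2024-01-01</lastmod>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.example.com/archives/</loc>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>

Such a file is typically saved as sitemap.xml in the site root and can be submitted to the search engines through their webmaster tools.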

Dofollow and Nofollow

If other webmasters link to us with a normal (dofollow) backlink, we get a share of their PageRank: the link is counted by Google and our site's ranking improves. If, on the other hand, a site owner links back to us using the nofollow attribute, the link is not counted by Google and our website's ranking stays the same.
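As a minimal illustration (the URL and anchor text are placeholders), the only difference between the two links below is the rel="nofollow" attribute.

<!-- A normal ("dofollow") backlink: search engines follow it and it can pass ranking credit -->
<a href="http://www.example.com/">Example Site</a>

<!-- A nofollow backlink: rel="nofollow" asks search engines not to pass ranking credit -->
<a href="http://www.example.com/" rel="nofollow">Example Site</a>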
