We assess competition in a phrase or market by analysing the normalised backlink data, ignoring on-page factors.
<EDIT> – If you want this process automated for you, please read the following post – https://the.domain.name/blog/competition-analysis-in-search-engines/ </EDIT>
The presumption is that on-page factors are generally going to be “good enough” to compete in a marketplace and are a prerequisite to delivering successful SERP rankings. A further presumption is that the quantity and collective quality of the links is the reason a site ranks, and the lack of them is the reason a site does not.
The following is my methodology for assessing a marketplace: the link growth required, and the rate at which that growth can and should occur, to deliver the necessary and expected rankings.
The process below runs per keyword, ultimately leading to groupings that, when combined, form a marketplace; marketplaces in turn form an industry:
Texas Holdem —> Poker —> Gambling
Keyword —> Marketplace —> Industry
For each keyword, gather the top N results.
In this example I believe N is best set at 20, though I have seen great results with N = 10, and the value decreases once N = 30 or more. Further testing is required to refine the perfect N, but I suspect the “right” number differs between marketplaces and industries. Time will tell.
Collect all N results for the keyphrase in question from the Google version relevant to the geographic search.
N.B. Google.com results for $keyword will differ from Google.co.uk results for the same $keyword, and so on.
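A rough sketch of the gathering step in Python – fetch_serp is a placeholder for whatever SERP source you use (an API, a rank-tracker export, a scrape), not a real library:

```python
from typing import List

def fetch_serp(keyword: str, tld: str) -> List[str]:
    """Placeholder: return result URLs in ranked order from your own
    SERP source (API, rank-tracker export, scrape)."""
    raise NotImplementedError("plug in your own SERP data source")

def gather_top_n(keyword: str, google_tld: str = "co.uk", n: int = 20) -> List[str]:
    # n = 20 has worked best for me; 10 also gives good results,
    # and the value drops off at 30 or more.
    return fetch_serp(keyword, tld=google_tld)[:n]
```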
For each result, split it into domain and full URL.
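With Python's standard library this split is trivial; note the naive domain extraction below only strips a leading www., and a proper public-suffix lookup (e.g. the tldextract package) is more robust:

```python
from urllib.parse import urlparse

def split_result(url: str) -> tuple[str, str]:
    """Split one SERP result into (domain, full URL)."""
    host = urlparse(url).netloc.lower()
    domain = host[4:] if host.startswith("www.") else host  # naive
    return domain, url

print(split_result("https://www.example.co.uk/poker/texas-holdem"))
# ('example.co.uk', 'https://www.example.co.uk/poker/texas-holdem')
```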
Collect a full backlink report for the whole domain. This should come from the historical index, as all links, whether deleted, aged or new, are important.
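As a sketch, the shape of one row in that report might look like the following – every field name here is my own stand-in, to be mapped onto whatever your link-data provider (e.g. Majestic's Historic Index) actually exports:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BacklinkRow:
    """One row of a whole-domain backlink report. Field names are my own;
    map them to your provider's export."""
    source_url: str
    source_ip: str             # needed for C-class normalisation below
    ttf_category: str          # Topical Trust Flow category of the source
    first_seen: date
    last_seen: Optional[date]  # None if the link is still live
    deleted: bool              # deleted links matter too, so keep them
```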
The links are then normalised to include only those from separate and distinct C-class subnets.
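Continuing the BacklinkRow sketch above, the normalisation is just de-duplication on the first three octets of each source IP:

```python
def c_class(ip: str) -> str:
    """'93.184.216.34' -> '93.184.216' (the /24, i.e. C-class, subnet)."""
    return ".".join(ip.split(".")[:3])

def normalise(rows: list[BacklinkRow]) -> list[BacklinkRow]:
    """Keep one link per distinct C-class subnet (the earliest seen)."""
    kept: dict[str, BacklinkRow] = {}
    for row in sorted(rows, key=lambda r: r.first_seen):
        kept.setdefault(c_class(row.source_ip), row)
    return list(kept.values())
```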
The growth of those links is then plotted over time, showing both the speed of link growth and link loss.
This is combined with the same data for all the other sites in the N results, collectively plotted over time and displayed together.
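A minimal charting sketch with pandas and matplotlib, treating each first-seen date as a gain and each last-seen date as a loss, then drawing the cumulative net line per site:

```python
import pandas as pd
import matplotlib.pyplot as plt

def plot_net_links(rows, label):
    """Cumulative net normalised links (gains minus losses) per month."""
    events = []
    for r in rows:
        events.append((pd.Timestamp(r.first_seen).to_period("M"), +1))
        if r.last_seen is not None:
            events.append((pd.Timestamp(r.last_seen).to_period("M"), -1))
    s = pd.Series([v for _, v in events], index=[p for p, _ in events])
    s.groupby(level=0).sum().sort_index().cumsum().plot(label=label)

# One line per ranked site, plus the target site, on shared axes:
# for name, rows in sites.items():
#     plot_net_links(rows, name)
# plt.ylabel("net C-class links"); plt.legend(); plt.show()
```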
Separately, the same style of graph is drawn, both for an individual site and for all sites in N, splitting the links into Topical Trust Flow categories. If my belief is correct, a few Topical Trust Flow categories will dominate over the others for each industry and marketplace.
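Checking that belief is a simple tally, assuming each row carries the ttf_category field from the sketch above:

```python
from collections import Counter

def dominant_categories(rows, top_k=5):
    """Normalised links per Topical Trust Flow category; if the theory
    holds, a few categories dwarf the rest."""
    return Counter(r.ttf_category for r in rows).most_common(top_k)
```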
The target site is the site that we wish to rank. Its backlink data is also gathered, plotted against the above results on all occasions, and shown in conjunction with the average (likely to be the mean) of the results.
The client site, which is likely to fall below both the mean and the median, is then placed onto the graph. The existing N sites are projected forward in time at their current link growth rates, as is the average of them all.
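A simple linear projection is enough here (I'm assuming a six-month window for the “current” rate – adjust to taste):

```python
def project_forward(cumulative, months_ahead=12, window=6):
    """Extend a site's cumulative monthly link counts forward at its
    recent average growth rate. `cumulative` is oldest-first."""
    recent = cumulative[-window:]
    rate = (recent[-1] - recent[0]) / max(len(recent) - 1, 1)
    return [cumulative[-1] + rate * m for m in range(1, months_ahead + 1)]
```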
N.B. Outliers – sites more than 25% above or below the average – can be excluded from the graphs if required. However, it may be more beneficial for the user of the system to sort the sites into categories. With retail sites, for example, I often found they fell into one of a few groups:
- Traditional bricks-and-mortar retailers with an online presence – e.g. (UK-specific) Tesco, John Lewis
- Behemoths of online commerce – e.g. eBay and Amazon
- Niche-specific online retailers
- Local retailers
It may be that the outliers fall into one of the top two groups (traditional bricks-and-mortar stores or behemoths of online commerce) while a client's target site falls into one of the smaller groups.
As such, removing the outliers may actually mean removing irrelevant competition from the analysis.
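A sketch of both options – mechanical exclusion (reading the ±25% band as the definition of an inlier) and hand-labelled categories (all the labels below are hypothetical):

```python
def split_outliers(totals: dict, band: float = 0.25):
    """Split sites into inliers (within +/-25% of the mean normalised
    link total) and outliers (everything else)."""
    mean = sum(totals.values()) / len(totals)
    lo, hi = mean * (1 - band), mean * (1 + band)
    inliers = {s: t for s, t in totals.items() if lo <= t <= hi}
    return inliers, {s: t for s, t in totals.items() if s not in inliers}

# Or categorise by hand and keep only the groups the client competes in:
# categories = {"tesco.com": "bricks-and-mortar",
#               "amazon.co.uk": "behemoth",
#               "nichestore.co.uk": "niche"}   # hypothetical labels
# relevant = {s: t for s, t in totals.items()
#             if categories.get(s) in {"niche", "local"}}
```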
The target site is then plotted forward in time with its link growth requirements: always exceeding the average growth rate (so it can catch up), but never breaking away to the point where it exceeds the highest link growth rate among the ranked sites by more than a predetermined percentage (nominally set at 25%). Average link loss should also be factored in, as losses will occur in line with the prior, time-normalised results.
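Expressed as a band rather than a single number, with average monthly link loss added back so the figures are gross links to build:

```python
def growth_band(competitor_rates, cap_pct=0.25, avg_monthly_loss=0.0):
    """(floor, ceiling) for the target's gross monthly link growth:
    floor = market average (must be exceeded to catch up),
    ceiling = fastest competitor plus the nominal 25% margin.
    Average monthly link loss is added to both."""
    avg = sum(competitor_rates) / len(competitor_rates)
    floor = avg + avg_monthly_loss
    ceiling = max(competitor_rates) * (1 + cap_pct) + avg_monthly_loss
    return floor, ceiling
```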
This should make it possible to plot the expected ability to rank over time, and the point at which those rankings should show.
E.g. it may take 12 months of extensive link building to deliver rankings in marketplace A for site X, yet only 6 months in marketplace B with site Y.
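With linear projections, the “when” falls out of simple arithmetic. Using made-up figures: a target on 500 net links gaining 120 a month, against a market average of 1,400 gaining 40 a month, catches up in (1400 − 500) / (120 − 40) ≈ 11 months – roughly the 12-month scenario above:

```python
def months_to_catch_up(target_total, target_rate, market_total, market_rate):
    """Months until the target's cumulative net links overtake the market
    average, assuming both keep their current linear rates."""
    if target_rate <= market_rate:
        return None  # never catches up at current rates
    gap = market_total - target_total
    return max(gap, 0) / (target_rate - market_rate)

print(months_to_catch_up(500, 120, 1400, 40))  # 11.25
```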
When multiple keywords are brought together in the same graphing and analysis, we can define the marketplace requirements to rank. In time, if I can gather market data on the value of businesses, the system can also tell us the expected rewards for any particular industry, alongside the links required to get there.
It becomes an automated RoI calculator, with timescales for the investment to deliver returns.
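In its simplest form that calculator is just the sketch below – every input an estimate, which is exactly why the market-value data mentioned above matters:

```python
def roi_timescale(links_needed, cost_per_link, expected_annual_reward,
                  months_to_rank):
    """Back-of-envelope RoI: total link-building spend, and the months
    until the post-ranking reward has paid it back."""
    spend = links_needed * cost_per_link
    payback = months_to_rank + spend / (expected_annual_reward / 12)
    return spend, payback
```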