Dictionary of Technical SEO and Marketing Terms

As with any field of work, online marketing has its own terminology, much of it shortened into shorthand. This dictionary offers a reference to the technical terms and vernacular used in online marketing.


.htaccess
An Apache configuration file that is read at runtime.
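
As an illustration, a minimal .htaccess might enable URL rewriting and redirect an old path to a new one (the paths below are hypothetical, and mod_rewrite is assumed to be available):

    # Redirect an old page to its new location with a 301
    RewriteEngine On
    RewriteRule ^old-page\.html$ /new-page.html [R=301,L]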

301 Redirect
A method of redirecting clients to another URL. A 301 redirect is performed by returning the HTTP status code 301 'Moved Permanently' and setting a Location HTTP header.
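
For illustration, the raw HTTP response for a 301 redirect might look like this (the URL is a placeholder):

    HTTP/1.1 301 Moved Permanently
    Location: https://www.example.com/new-page/

In Apache, the same redirect can be configured with a single directive such as: Redirect 301 /old-page/ https://www.example.com/new-page/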

A/B Testing
Also known as split testing, two versions of a page are made with one or more variations between them. Interaction with the two versions can be measured to indicate which version converts better for desired outcomes.

This can be an evolutionary process where continual small changes can drastically improve conversion rates over time.

Above the Fold
The part of a SERP that is visible without scrolling. This part of the page has special significance, as click-through rates for results above the fold are higher than those below the fold, regardless of ranking position.

Absolute Link / Absolute URL
A hyperlink that contains the full address of a web resource, including protocol, host and path.
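
For example (using example.com as a placeholder), an absolute link spells out the full address, whereas a relative link is resolved against the current URL:

    <a href="https://www.example.com/learn/dictionary/">absolute link</a>
    <a href="/learn/dictionary/">relative link, resolved against the current host</a>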

Adsense
An ad-serving platform owned by Google that connects Adwords advertisers with 3rd party publishers, giving advertisers greater reach across the web and giving publishers a new way to monetise their traffic and content.

Adwords
The monetisation of Google's search engine results and other 3rd party sites. Advertisers bid for visibility for (usually) targeted traffic and pay for it, typically on a cost-per-click basis. The advertiser aims to receive a return on their investment by bidding appropriately for the traffic, while Google is compensated for providing the traffic, ensuring it is targeted, and filtering out fraud. This symbiotic relationship extends to Adsense, which provides the same targeting for advertisers, but on 3rd party websites.

Affiliate Marketing
An affiliate marketer (affiliate) delivers targeted leads to another company, usually in exchange for a percentage of sales. A major challenge for affiliates is attracting targeted traffic and having a USP that attracts interest (and subsequently sales).

Algorithm
Algo for short, a computational process that takes input and produces a desired output. In the context of search engines, this involves taking the entirety of the crawled web, its media and connections, and creating a search engine index that can serve answers to users of the search engine. Modern search engines also use 3rd party structured data as a supplementary ontology; Google's Knowledge Graph is an example of this.

AlltheWeb
A previously prominent search engine that has since been purchased and absorbed into Yahoo. AlltheWeb was known for having an index size comparable to Google's in the late 1990s and early 2000s.

Alt Text
Alternative text, used in HTML attributes for images as a description of an image. Useful for crawlers/indexers that do not use image recognition or would generally find a textual description helpful, especially so if the crawler does not intend to fetch image resources.
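
A simple illustration (the filename and wording are hypothetical):

    <img src="/images/red-widget.jpg" alt="A red widget photographed on a white background">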

Analytics
The practice of analysing data about visitors to a website, with the intention of increasing desired outcomes based on that data, such as money made per visitor, time spent on site, or number of page views. Analytics allows you to identify weaknesses in the functionality of your site that can be addressed to improve those outcomes.

Anchor Text
The visible text component of an HTML link. Search engines use this text to infer the meaning of the content at the link's destination. For online marketers, descriptive anchor text helps search engines better understand their content.
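
For example (the URL is a placeholder), the anchor text below tells both users and crawlers what to expect at the destination:

    <a href="https://www.example.com/guides/keyword-research/">keyword research guide</a>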

Apache
The most popular web server software on today's web, first released in 1995 and still in active development. Today's web is mainly served by Apache, IIS and NGINX.

More info on Apache

API
Application Programming Interface, a programmatic way to access a service that provides a standardised (and expected) method of input and output.

ASN
Autonomous System Number, a globally unique 32-bit number that identifies a network on the Internet.

Authority
The concept of some things being more important than others. In reference to search engines and search engine marketing, the web has presented a low barrier of entry for a great number of people to produce content on practically any subject. With this in mind, search engines require a way to designate relative importance to informational sources. Pagerank is a good example of an algorithm where authority is an emergent feature.

An online marketer would seek to be linked to from authoritative sources in order for this authority to be conferred onto them.

B2B
Business to Business, products and services provided by a business that has a target market of other businesses.

B2C
Business to Consumer, products and services provided by a business that has a target market of consumers, rather than other businesses.

Backlink
A hyperlink on another website that points to a page. When page A links to page B, the link on page A is referred to as a backlink of page B.

Bad Neighbourhood
An IP address or IP subnet that is shown to host websites of low quality, malware, sources of email spam and generally unsavoury characteristics that you would preferably not want to associate your website with.

Crawlers have been known to ignore entire subnets of IPs in 'bad neighbourhoods', as the resources on those IPs are of little or no value to the end user.

Behavioural Targeting
Advertisement targeting based on known past actions of the potential customer in question. Typically a cookie is served with any advertising, allowing the tracking of a user over time. Observing the user's actions and their potential intent, and then re-serving adverts based on these past actions, is a proven method of increasing sales.

Bing
A search engine run by Microsoft. Bing is relatively popular in the USA and the 2nd most prominent search engine in the English-speaking world.

Black Hat SEO
A term given for tactics of promoting websites in search engines that most likely breach the search engine's guidelines, and would result in a search engine penalty if they were detected. These tactics are suited for short term monetary gains rather than a longer term strategy of promoting and branding a website.

Blog
Abbreviated from weblog, a diary-style publication often ordered by date (and some categorisation). Blogs became popular in the early 2000s when content management systems lowered the barrier of entry for non-technical people to publish their content online. Wordpress is by far the most popular content management system for blogging.

Blogosphere
The collective term for all blogs on the web.

Blogroll
A list of links associated with a particular blog, chosen by the author because of their topical or social relevance to them.

Bookmark
A saved link that is stored somewhere for easy reference and remembrance. Most often bookmarks can be saved within the browser and accessed via a toolbar, though some websites provide bookmarking services also.

Bot
Synonymous with crawler or robot, a user agent that crawls web pages.

Breadcrumb Navigation
A visual cue for users regarding the information architecture of a website, showing where the page resides in the site's navigational hierarchy. For instance, you are roughly on a 3rd level page on this site. The 1st level is the home page, the 2nd level is the /learn/ collection of pages, and the 3rd level is this dictionary.

Broken Link
Or dead link, refers to hyperlinks that no longer point to the resource they were originally intended to point to. Typically a link gets broken because of content being moved, deleted, or the domain that the content resides under no longer resolving. See Cool URIs don't change.

Cache
A copy of data, often in reference to the browser, a caching server or a search engine. For browsers, a cache is kept to forego unnecessary refetching of content over the web, improving the user's experience. Search engines like Google offer a cache so that when a web page is no longer available, the search engine user can still browse the contents of the page.

Canonical URL
A distinct URL chosen in cases where multiple URLs may serve the same content; the canonical URL is the defining URL among all potential URLs pointing to the same resource.

This helps prevent search engine crawlers and indexers from getting confused by (or even penalising) a website for serving the same content over multiple URLs. Declaring a canonical URL across the copies removes any ambiguity about which URL represents the content.
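
The canonical URL is usually declared with a link element in the head of each duplicate page; a hypothetical example:

    <link rel="canonical" href="https://www.example.com/widgets/red-widget/">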

ccTLD
Country Code Top Level Domain, TLDs that are associated with a specific geographic area.

CIDR
Classless Inter-Domain Routing, the prefix-based method of allocating IP addresses and routing that replaced the older class-based system, written in notation such as 192.0.2.0/24.

Citation Flow (CF)
A Majestic metric that measures the relative link authority of a page or domain by evaluating its prominence in a similar way to Pagerank. The resultant score is between 0 and 100.

Class C
With regard to diversity of IP addresses, the four octets of an IP address were traditionally referred to as classes; for example, the IP 1.2.3.4 has a class A of 1 and a class C of 3.

From a link building perspective, it was always boilerplate advice to new SEOs to acquire links from different class Cs, because having a large concentration of links from the same IP or same class C range would appear unnatural.

A more up to date version of this advice would be to acquire links from different subnets and different hosting providers, to avoid any potential footprints of unnatural link building.

Networking gurus will disapprove of using the terminology of classes as Internet routing is now classless, though the term remains used in marketing circles.

Click Fraud
Nefarious clicks of an advertisement that result in increased cost for the advertiser and possibly undeserved profit to a publisher. Typically click fraud is intended to either exhaust the funds of a competing advertiser or to fraudulently acquire revenue as a publisher. Any reputable advertisement platform will attempt to actively and reactively filter out such clicks.

Client Side
Technology that is used on the user-end of a client-server architecture, typically referring to browsers, HTML and Javascript.

Cloaking
Returning different data to different clients. A typical example of cloaking is providing human users with one version of a page and serving search engines a different version. This practice isn't always nefarious; sometimes a webmaster may cloak so that search engines are served a more textual representation of a resource, which they can programmatically parse more easily.

Clustering
In reference to search engine results, the process of grouping matches for a query by domain, so the search engine results are not overcrowded by a particular domain. Dominant websites like Amazon and eBay may overcrowd particular product terms with thousands of good matches, but the search engine may feel that user experience is improved by offering some alternative domains in its top results.

CMS
Content Management System, software used to create, manage and publish content online; Wordpress is a well-known example.

Comment Spam
A relatively old technique, usually via automation, that posts comments containing one or more links to a target website. The primary motivation is to increase the number of backlinks pointing to a page/domain and ultimately increase its rankings. The tactic is considered ineffectual in today's market.

Large volumes of comments could be posted by reverse engineering the comment systems of popular content management systems.

Content
Media published onto the web, content typically refers to the unique text on an HTML page, but can also refer to other media such as images.

Content-Type
The HTTP header that declares the media type of the content at a particular URL. This allows clients to understand what kind of data is present before attempting to parse it.

For example, a content type of text/html is typically served for HTML documents, image/jpeg for JPEG images and so on.
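
As they would appear in an HTTP response (illustrative values):

    Content-Type: text/html; charset=UTF-8
    Content-Type: image/jpeg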

Conversion
Used to describe a desired outcome; for example, a user who clicked on an ad campaign and then purchased an item would be considered a conversion for that campaign.

Cookie
A small piece of data stored on the client side, used for session management and longer-term tracking of users.
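
Cookies are set by the server in a Set-Cookie response header and sent back by the client on subsequent requests; a hypothetical exchange (the cookie name and value are made up):

    Set-Cookie: session_id=abc123; Path=/; Max-Age=86400
    Cookie: session_id=abc123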

Crawl Budget
See crawl depth

Crawl Depth
The depth to which a search engine crawler is willing to go before it stops. Because the web is potentially infinite in nature, including URLs with sessions, redirect loops and low quality content, crawlers assign a crawl budget to each domain so that they can allocate their resources fairly across the web in its entirety. Factors affecting crawl depth mainly depend on the authority of the site, the size of the site and the freshness of pages within it. Ideally, you would want a crawler to crawl all your pages at least once in order for them to be indexed.

Crawl Frequency
See also crawl depth. More authoritative pages and pages with continually refreshed content will be crawled more frequently by search engines. The home pages of news websites are a good example of pages that will be frequently crawled. Archive websites are an example of sites that will rarely be re-crawled, due to their archival nature meaning the content should never change.

CSS
Cascading Style Sheets, the coding convention used to style HTML.

CTR
Clickthrough Rate, the ratio of actual clicks to potential clicks (impressions), often in reference to adverts or SERPs. If a query for a SERP gets 100 searches a month and 20 people click on the 1st result, then the CTR of the 1st result is 20%.

Dedicated Server
Server hardware that is dedicated to a single owner, rather than being a shared server or a 'virtual' dedicated server.

Deep Link
Vernacular describing a backlink that points to a non-home page of a domain, the 'deep' meaning deeper within the site than the home page, more often in reference to the deepest pages targeting longer tail phrases.

Delisted / Delisting
Referring to URLs or domains that have been removed from a search engine index, usually due to the site or page being hit with an algorithmic or manual penalty. Technical issues with hosting or DNS can also cause pages to temporarily be delisted.

Directory
A collection of curated websites, often within a topic, that acts as a resource for people interested in that topic.

Directories were the original primary means of finding anything on the web, though the progression of search engine algorithms and exponential growth of online content has given search engines much greater prominence.

Some directories still carry a lot of authority in their given niche.

DMOZ
One of the largest manually edited directories to have existed on the web, now no longer being maintained. For new websites, DMOZ was one of the first destinations for acquiring an authoritative link for an online marketer.

DNS
Domain Name System, the system used for translating more memorable person-friendly hostnames to less memorable IP addresses.

DNSSEC
A specification used to enhance the security, data integrity and origin authenticity of DNS records by digitally signing the data so clients can verify its accuracy.

For DNSSEC to work for a domain, the registry of the TLD, the registrar and finally the delegated nameservers must all support DNSSEC.

Domain Age
Some SEOs believe that an older website will have more prominence to a search engine, all other things being equal. The logic behind this may be that an older domain costs money to keep alive and therefore must be of some value, as opposed to a domain with a one-year registration, which is more likely to be involved in slash-and-burn tactics.

Domain Name
A string of alphanumeric characters, dots and dashes used to find and access resources on the Internet. The DNS system translates a client-supplied domain name into an IP address that can be located on the Internet.

Doorway Page
A page optimised for a particular search term or funnel of traffic.

Duplicate Content
Content that is the same or nearly the same as other pages on the same domain or other domains. Modern search engines are quite good at detecting duplicate content, ranging from the ability to strip out the template of pages within a domain, to comparing millions of documents for similarities. Duplicate content is typically something you would want to avoid on your site, as it offers no unique selling point for end users and gives a search engine no reason to rank the page highly.

Editorial Link
A naturally acquired backlink earned without solicitation, typically due to good value content.

EPP
Extensible Provisioning Protocol, the protocol that domain registries and domain registrars use to communicate with each other for the provisioning and maintenance of domain names. Messages are formatted as XML.

External Link
Vernacular to describe a link within your domain that links to a different domain. Emphasis is placed on these kinds of links because links confer authority, and the destination is (often) outwith the control of the webmaster. Ensuring external links still point to the content originally linked to should be an ongoing process for a webmaster.

Favicon
An icon that appears in a client browser's address bar and alongside bookmarks. Favicons can provide an eye-catching and memorable way for users to remember your website.
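
Browsers request /favicon.ico by default, but the icon can also be declared explicitly in the page head, for example:

    <link rel="icon" href="/favicon.ico">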

FFA
Free For All link pages, a relatively archaic type of web page where huge lists of links would be placed on a page acting as a general resource on a topic for users. Typically these pages are considered low quality nowadays, while the effort of maintaining such a list would be relatively high if the webmaster wished to avoid having dead links or links to resources that have drastically changed their content.

Filter Bubble
Through personalisation of content, it is proposed that tailoring content to fit a user's search and viewing history can result in them seeing a narrower focus of opinions and topics than they would otherwise see without personalisation. With a potential snowball effect over time, this could lead people to harden their opinions and views on things without competing and different views. Presumably, the concept lends itself to the idiom 'living in a bubble'.


Flash
A client-side technology that is approaching obsolescence and declining in use. Flash provided a richer media experience on some websites, but was well known for being difficult for search engine crawlers to spider, due to the lack of text or any mechanism to understand what the Flash app was really about.

Forum
Online discussion software that was especially prominent on the early web. The emergence of social media and blogging has taken away mindshare from forums in recent years.

Frames
An HTML element that can seamlessly display multiple web pages under the one URL. Historically, search engine spiders had issues correctly crawling sites using frames.

Fresh Content / Content Freshness
Vernacular to describe content that is recently created or regularly updated. Search engine crawlers tend to visit more when they see new original content, or pages that are regularly updated.

FTP
File Transfer Protocol, a protocol used to transfer files between hosts. FTP transmits data unencrypted; SFTP is the more secure alternative.

gccTLD
Generic Country Code Top Level Domain, TLDs that were originally assigned to be geographic specific, but have become globally popular over time.

.tv is an example of a gccTLD, as it was originally designated to the country Tuvalu, but for obvious reasons was found to be marketable for television related content.

Geographical Targeting or Geotargeting
Major search engines often redirect end users to their local version of a search engine, typically at the country level. They also use geographical relevance as a ranking factor, and will tend to rank local products and services higher for a query than non-local sites.

Google
The largest search engine in the world in terms of popularity, market cap and index size.

Google Bombing
A tactic to make a particular page rank well in Google for a given search term by pointing hundreds of links at it. This is most effective when the term is not competitive and so has little ranking pressure associated with it. An infamous example of Google Bombing that brought the practice into the public spotlight was the term "miserable failure" during the 2003 US election campaign, resulting in a page about George W Bush ranking number one for the term.

Google Dance
The historical name given to a search engine update, when the results would 'dance' into different positions after an index update by Google, which happened more or less once a month. Over time, Google changed their indexing from a monthly update to a continually happening process, meaning results went into a continual flux.

Google Webmaster Tools
A toolset provided by Google to help webmasters optimise their site, where users can give Google instructive pointers about their site, and Google can provide instructive pointers in return, such as listing crawling issues or other aspects that can impact the crawlability or indexability of a website.

Googlebot
The user agent attributed to Google's search engine crawlers. Because user-agents are easily spoofed and competing webmasters like to check for cloaking, the presence of Googlebot as the user-agent should not be taken as surety that it is indeed the Google crawler.

See Google's guidance on a reverse DNS lookup to validate a true Googlebot request.
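
A minimal sketch of that reverse-then-forward DNS check in Python, assuming the requesting IP address has been taken from your logs (the address below is just a placeholder):

    import socket

    def is_real_googlebot(ip):
        """Reverse-resolve the IP, check the hostname's domain, then forward-resolve it back."""
        try:
            host = socket.gethostbyaddr(ip)[0]        # reverse DNS lookup
            if not host.endswith((".googlebot.com", ".google.com")):
                return False
            return socket.gethostbyname(host) == ip   # forward-confirm the hostname
        except (socket.herror, socket.gaierror):
            return False

    # Placeholder IP; a real check would use the client IP from the request or log line.
    print(is_real_googlebot("192.0.2.1"))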

gTLD
Generic Top-Level Domain, TLDs that are not associated with a specific geographic area.

Hidden Text
Text that is not visible in the browser but would otherwise be seen by a crawler. In the days when keyword prominence and density were more of a factor, it was often the case that the visible text of a page would be person-friendly and devoid of keyword-rich text, while hidden text containing keyword lists was added for the search engine crawler to notice. Modern search engines parse Javascript and CSS and can often easily detect this kind of behaviour.

Hilltop Algorithm
An adaptation of the Pagerank algorithm, that takes a seed list of topic-specific expert documents and uses them as qualified expert citations, ranking pages by their relatedness and distance from those expert pages.

Hit
The act of someone loading up a URL on a given website, the resultant request is deemed a 'hit'. Also called an 'impression' or 'page view'.

HTML
Hypertext Markup Language, the code used to structure web pages that is rendered by web browsers.

HTTP
Hypertext Transfer Protocol, the protocol used to deliver pages on the world wide web.

ICANN
Internet Corporation for Assigned Names and Numbers, the body responsible for maintaining the DNS root zone. Website: ICANN

IDN
Internationalized Domain Name, a domain name containing characters beyond the traditional ASCII character set.

If-Modified-Since (304 Not Modified)
An HTTP request header sent from client to server that supplies a date and time. If the content requested has not been modified since the supplied date and time, a 304 Not Modified HTTP response should be returned, without any content. This has the benefit of saving bandwidth and resources for both server and client.

Search engines actively encourage this behaviour as it allows their crawler to spend more time on finding new content, rather than re-processing unchanged content.
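
An illustrative exchange (the URL and date are placeholders):

    GET /learn/dictionary/ HTTP/1.1
    Host: www.example.com
    If-Modified-Since: Mon, 01 Jan 2018 00:00:00 GMT

    HTTP/1.1 304 Not Modified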

IIS
Internet Information Services, a web server that is often used on Windows servers.

Inbound Link
Or backlink, links that point towards a page, typically intended to mean links from outwith the domain of the target page. As links are an important factor in search engine algorithms, marketers pay close attention to the quantity, quality and relevance of inbound links.

Indexed Pages
Pages that have been crawled and added to a search engine's index. Once a page has been indexed, it has the chance to appear in search engine results.

Information Architecture
The structural layout of information in a website is important for user experience (UX) and search engine crawlability, and pre-emptively designing where information will be placed is an important aspect of building a large site. Information architecture is the planning and actual layout of information on a site. Visually, at a simple level, most websites can be thought of as a single home page, with crow's-feet links to inner pages and so forth.

Intuitive and easy access to content is important in order for search engines to find it, and especially important for users who have to navigate through it.

Information Retrieval
The scientific field of storing and retrieving documents in a useful manner. At the core level, any system will aim for high precision (returning the 'correct' documents) and high recall (returning all the correct documents). In search engines, results tend to be limited to the first several hundred matches.

Internal Link
Vernacular to describe a link within a page of a domain that points to another page within that domain.

Invisible Web
The part of the web that is not indexed by search engines or otherwise easily found.

The web is almost infinite given the potential permutations of all host names, protocols, URLs and query strings.

Search engines assign crawl budgets to websites to avoid getting stuck in infinite loops; they need to allocate a fair amount of resource to each domain they wish to crawl within a reasonable timeframe. Crawl budgets are typically based on a website's relative authority compared to other websites.

Because of finite resources and time, and the potentially infinite number of URLs on the web, there is an invisible web of countless URLs that are not easily found.


Javascript
A client-side (and now also server-side) technology best known for its use in web browsers, allowing for richer, more dynamic website experiences.

Keyword
Or 'phrase' (or query), the vernacular used for search engine queries that marketers target in order to acquire targeted traffic. A site about widgets would want to identify and rank for widget related keywords.

Keyword Density
A relatively obsolete metric that search engines used to measure the topical relatedness of a web page. Non-stopwords and phrases with high keyword density would be considered relevant to the page. An overly high keyword density could sometimes result in a penalty. Modern search engines are very good at understanding synonyms, concepts and inferring the topic of a page by looking at the domain name, page titles, other pages within the domain and links pointing to the page.

Keyword Research
The process of identifying keywords that you wish to rank for on search engines. There are many keyword research tools that help achieve this goal.

Knowledge Graph
Google's ontology of structured information, built in part on the Freebase dataset. The Knowledge Graph is a graph network of interrelated entities, attributes and events that helps understand the relatedness of concepts. Some search engine queries will return Google's Knowledge Graph data as an answer to the user's query.

Landing Page
The page a user lands on after clicking on a link, often used in context to a user clicking on a SERP.

Link
Short for hyperlink, an HTML element that links one page to another, often accompanied by descriptive anchor text.

Link Bait
Content that is designed to attract backlinks. This can take the form of controversial content or opinion, or offering a product/service at reduced cost.

Link Building
A process of acquiring links to a page or domain. Link building can essentially be split into two objectives.

1) Acquiring links from targeted web pages that'll deliver human traffic
2) Acquiring links with the aim of achieving better search engine rankings

Link Churn
See link rot

Link Exchange
A scheme of exchanging links, often reciprocally, with the intention of boosting either traffic or search engine prominence.

Link Juice
A term generalising the potency of a particular backlink, or the combined effect of all backlinks, and their potential effect on search engine rankings.

Link Popularity
A more antiquated term in reference to the volume of links pointing to a given page or domain. Modern search engines do not use raw link popularity as a positive ranking metric; the authority and relevance of a link are considered to be of more importance.

Link Profile / Backlink Profile
A summation of a domain or page's entire backlink list. Speaking of a link profile often relates to the quality and source of links, and whether it appears to be a natural link profile in the cases of aggressive link building.

Backlink tools such as Majestic will offer various aggregate datapoints regarding a link profile, such as choice of anchor texts, contextual relevance of links and types of links.

Link Rot
Refers to the natural decay of a backlink profile whereby, over time, links and backlink pages will gradually get deleted due to content being moved, deleted, or the domain that the content resides under no longer resolving. See Cool URIs don't change.

Link Sculpting
The act of intentionally allowing and disallowing pages to be followed and indexed by search engine crawlers in order to preserve any inferred prominence for pages and keywords that a webmaster wishes to rank for, rather than diluting the prominence on pages that have no ranking value.

Link Velocity
The rate of acquisition of new links from link building and links gained from 3rd parties. Acquiring links at too fast a rate, certainly in relation to the websites you're competing against, is considered a potential risk that can result in a penalty or in the links being discounted as a positive ranking factor.

Local Search Results
Non-organic search results that are returned in response to a geographic specific query, or inferred geographic query (based on the user's known location).

Log Files
Metadata about each visit to a website, which includes but is not limited to the date, the page requested, the page response, the user's IP address, the user's user agent and the referring page, if any.
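
An illustrative line in Apache's 'combined' log format, showing those fields in order (all values are made up):

    192.0.2.1 - - [01/Jan/2018:12:00:00 +0000] "GET /learn/dictionary/ HTTP/1.1" 200 5120 "https://www.example.com/learn/" "Mozilla/5.0 (compatible; ExampleBot/1.0)"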

Long Tail
Search engine queries containing many words that individually have a low volume of traffic, but whose permutations add up to an appreciable amount of traffic. Years ago, marketers realised that there is only a small, finite number of SERPs available for high-competition terms, sometimes beyond the budget of a project. Targeting the lower volume terms (which carry the same or even more definite intent by the searcher) provides new and sometimes untapped opportunities to find targeted traffic.

Majestic
A link intelligence tool that provides one of the largest non-search engine indexes for webmasters to use to gather information on backlink data and page metadata.

Manual Penalty
A search engine penalty that has been applied to a page or domain by a person working for the search engine, as opposed to their algorithms. Modern search engines tend to want to solve problems algorithmically, but often use people to tune their algorithm, review edge cases and review appeals by webmasters about why their site is penalised.

Manual Review
A review done by a search engine employee to evaluate whether a site is breaking the search engine's guidelines.

Meta Description
An HTML element that helps provide a summary of a web page, and is often shown by search engines as the description for the page within a SERP.
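
Declared in the head of a page; a hypothetical example shown alongside the title element:

    <title>Dictionary of Technical SEO and Marketing Terms</title>
    <meta name="description" content="A reference to the technical terms and vernacular used in online marketing.">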

Meta Keywords
A relatively ineffectual HTML element that used to be a ranking factor. The keywords element would be populated with keywords in order to give crawlers a better understanding of the page. This soon became a 'noisy' ranking factor and is not used much, if at all, by major search engines, as other factors provide a more reliable assessment.

Metadata
A summary of information describing a longer piece of information. There are Meta elements in the HTML specification that help describe web documents, such as a short description of it, or the language used.

MFA
Made For Adsense site, a site whose primary motivation for existing is to receive revenue from Adsense. Typically synonymous with lower value sites that are heavily SEO'd to target specific keywords and their resultant traffic.

Mindshare
The relative awareness and popularity of a product in a given niche.

Mirror Site
An exact copy of a website, typically used for redundancy, load-balancing and better connectivity.

Natural Language Processing
The concept of using programmatic means to turn human natural language into a machine readable form. Natural Language Processing aims to accurately uncover the sense of words in the context they are used.

Negative Keyword
A keyword that is excluded from searches. On Google, you can use negative keywords by prepending them with a minus sign.

For example, a search for 'buy vps -cheap' would find documents ranking for buy and vps together, that do not include the word cheap (and potentially synonyms and stems).

Negative SEO
The idea of using means to negatively impact the rankings of a website. An example of negative SEO would be to take a competitor's backlinks and email the site owners telling them their site has moved and the backlink should be updated.

NGINX
Pronounced "engine X", a web server released in 2004 that is continuing to grow in popularity, and is second only to Apache in active usage.

Niche
A subject matter or market for a product.

Nofollow
A robots meta tag directive (also usable as a rel attribute value on individual links) that tells crawlers not to follow the hyperlinks found on a particular page, or a particular hyperlink.
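
Illustrative usage, page-wide via the robots meta tag or per link via the rel attribute (the URL is a placeholder):

    <meta name="robots" content="nofollow">
    <a href="https://www.example.com/" rel="nofollow">example link</a>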

Nominet
The domain name registry for .uk domain names

nTLD
New Top Level Domain, in reference to new gTLD and ccTLD domains that have arisen in the last several years.

Organic Search Results
Search engine results that are not paid for and do not appear as a custom response to queries; the more traditional '10 blue links'. Google Adwords, Knowledge Graph and Google Local would not be considered organic results. If a marketer were measuring how well they rank for a given phrase in response to their SEO efforts, they would want to ignore all non-organic results and focus purely on the positioning of their website versus all other organic results.

PageRank
The defining algorithm that helped make Google the world-class search engine that it is today. Pagerank is a logarithmic scoring of the importance of web pages based on a random walk across the links of the web, with the intention of many iterations of the algorithm resulting in pages ranking more highly when they are linked to from authoritative high-Pagerank pages. Pagerank is still in use today but is incorporated into a much more sophisticated algorithm. See the original published paper on Pagerank.
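
A minimal sketch of the iterative calculation in Python over a toy three-page graph. The graph, the damping factor of 0.85 and the fixed iteration count are illustrative assumptions, not Google's implementation:

    # Toy link graph: each page maps to the pages it links out to (hypothetical data).
    links = {
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
    }

    damping = 0.85
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}       # start with a uniform distribution

    for _ in range(50):                               # iterate until the scores settle
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share             # pass a share of rank along each outlink
        rank = new_rank

    print(rank)   # pages linked to from high-ranking pages end up with higher scores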

Panda Algorithm
Vernacular for the part of Google's algorithm that attempts to reduce the number of low quality content sites appearing in its search results. When put into practice, a number of very well-known 'content farms' were hit with massive reductions in traffic from Google.

PBN
Private Backlink Network or Private Blog Network, an acronym used to describe a network of sites under your influence that is used to create backlinks. The advantage of having a link network under your control is that you retain control over your links, their surrounding content and how long they stay there.

Penalty
An instance of a page or domain having its search engine visibility reduced, typically due to it being flagged algorithmically, though sometimes manual intervention can result in a penalty.

Modern search engines often give some indication that a penalty is being applied and how to fix the issue, but not always.

Often marketers misdiagnose lower search engine rankings as a penalty, when they can be a result of other factors such as technical issues preventing a search engine crawler from fetching content from a site, prominent authoritative backlinks being lost, algorithm changes and/or competitors being preferred.

Penguin Algorithm
Vernacular for the Google algorithms that focus on the backlink profile of a domain or page.

Personalization
Many users remain logged in while performing searches, and search engines often tailor the ordering of their results based on their search history. Geo-location is one major factor of personalisation; another is your preference for clicking particular websites over others.

PPC
Pay Per Click, a pricing method that has been used since the early days of the Internet, where a publisher is paid for every click on an advertiser's advert. Google's Adwords uses a pay-per-click model.

Query
Words that a search engine end-user uses to get search engine results about a given topic. Query is synonymous with question, where search engine results are providing the answer.

Query Refinement
Sometimes search engines will attempt to auto-correct queries, or provide alternate query suggestions to better help the user get the information they are looking for.

RAA
The Registrar Accreditation Agreement that registrars enter into with ICANN in order to sell domain names under generic top level domains.

Reciprocal Links
Links given to each other by two or more parties, under the premise that a rising tide lifts all ships. Reciprocal links were in vogue when link popularity was considered a central ranking factor.

Redirect
The act of redirecting a user from one URL to another. This can be done via several means, usually through a 301 or 302 HTTP redirect, but can also be done with Javascript or the meta refresh element.
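
The client-side alternatives look like this (the URL is a placeholder):

    <meta http-equiv="refresh" content="0; url=https://www.example.com/new-page/">
    <script>window.location.href = "https://www.example.com/new-page/";</script>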

Referrer
The page on which a user clicked a link in order to arrive at their current page. Browsers typically forward this information to the destination page as part of the HTTP request for the page, in the Referer header.

Registrar
A registry accredited company that can acquire and manage domain names for one or more TLDs on your behalf. You can buy domain names directly from registrars or through 3rd party resellers like TDN and hosting providers.

Registry
The maintainer of zone files, administration and disputes for domains under a given TLD. Examples of registries include Verisign for .com and Nominet for .uk domains. Typically you will buy domains via a registrar accredited by the registry (or possibly a reseller of the registrar, like TDN) who can acquire and manage the domain name for you.

Relative Link / Relative URL
A hyperlink that contains a relative reference to another web resource, in relation to the current URL. For instance, the internal links of a site may be all relative for brevity's sake.

Reputation Management
Ensuring that a brand has no negative connotations in prominent locations where existing or potential customers may see it, such as in related SERPs or on social networks. For the former, someone adept at reputation management may be able to remove or push down the negative results and be compensated for their efforts.

RIR
Regional Internet Registry, registries that are responsible for the assignment of Internet number resources, namely IP addresses.

There are five regional RIRs, delegated by IANA to administer resources for their local area, which in turn delegate blocks of addresses to LIRs (Local Internet Registries):


  • AFRINIC — The African continent

  • APNIC — Eastern Asia and Oceania

  • ARIN — North America

  • LACNIC — Latin America and the Caribbean

  • RIPE — Europe, the Middle East, Russia and Central Asia

Robots.txt
An advisory protocol, adhered to by all major search engines, that indicates to crawlers which parts of a site they are or are not allowed to access.
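
A short illustrative robots.txt (the paths are hypothetical):

    User-agent: *
    Disallow: /admin/

    Sitemap: https://www.example.com/sitemap.xml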

ROI
Return On Investment, after all is said and done, the amount of money you receive for your efforts should be higher than the amount you spent to get there. This is your ROI.

Sandbox
Vernacular describing pages that end up in a staging area outside of the main search engine index, that stay there for a period of time until the search engine trusts the content enough.

SEM
Search Engine Marketing, an extension of SEO to encompass Pay Per Click (PPC) management and additional search engine data sources like local listings, knowledge graph listings and shopping listings.

SEO
Search Engine Optimisation, the occupation of making content accessible and easily understandable to search engine crawlers.

SERP
Search Engine Result Placement (and variants), the results returned by a search engine for a query; shorthand to describe the results for a query or results in general. Often used to refer to the position of a domain within them.

Server Side
Technology that is used on the server-end of a client-server architecture, typically referring to databases, web servers and programming languages used to serve content.

Shared Hosting
Historically one of the most popular methods of hosting, the hosting provider would allocate an amount of disk, bandwidth and domain hosting slots to you in a shared hosting package. Often, hundreds or thousands of different users and domains would all reside on the same IP address, though often there is an add-on option of buying a dedicated IP address.

Sitemap (XML)
File or files that list URLs available for crawling on a website. These are helpful for providing a compact way of listing all the URLs available on a site without having to actually crawl the site itself.
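
A minimal XML sitemap listing a single URL (the values are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/learn/dictionary/</loc>
        <lastmod>2018-01-01</lastmod>
      </url>
    </urlset>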

Social Media
Platforms where communication between users is the primary function, Facebook, Twitter and LinkedIn are popular examples of social media.

Soft 404
When a page that does not exist does not return an HTTP 404 status code (often a 200 OK instead), it is a soft 404.

This is an issue for search engines, as they cannot tell from the status code alone whether the page actually exists. The search engine then has to attempt to evaluate the content and see if it is a duplicate (compared to all other soft 404s).

Search engines are known to request a URL containing a random string to determine whether a site is returning soft 404s or not.

Having this issue on a site can unnecessarily consume crawl budget.

spam
Unsolicited email often sent in large volume; in reference to search engine results, the subject of adversarial information retrieval. Search engines aim to provide 100% precision and recall in their results, and order the results algorithmically with the aim of satisfying the user's query. Although subjective, one definition of spam here could be any search engine result that does not satisfy the user's requirement for their search. Another definition may be in reference to an online marketer's intentional manipulation of the ordering of results.

Spider
Synonymous with crawler, a spider is a program that collects documents from the web. Googlebot and MJ12bot are well known spiders.

Spider Trap
A technical issue that sends spiders into a potentially infinite loop. Early web crawlers were prone to getting stuck on particular websites or groups of websites, though this is generally no longer an issue. Session IDs were one way in which a spider would repeatedly find new URLs on a website, but discover no actual new content.

The concept of crawl budgets means that a webmaster should take care in preventing such issues, as generally you want a crawler to find all your content without exhausting the computational budget allocated to your site.

Static Content
Content that is not created on-the-fly, and is served as-is from the storage device.

Stemming
A process of removing suffixes from words to reach a common normalised 'stem' version of the word. An example: 'stemming', 'stemmers' and 'stemmed' can all be normalised to the word 'stem'.
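
A deliberately naive sketch of suffix stripping in Python; real stemmers such as the Porter stemmer use far more involved rules:

    def naive_stem(word):
        """Strip a few common English suffixes, then undouble a trailing consonant."""
        for suffix in ("ing", "ers", "ed", "er", "s"):
            if word.endswith(suffix) and len(word) - len(suffix) >= 3:
                word = word[:-len(suffix)]
                break
        if len(word) > 3 and word[-1] == word[-2]:    # e.g. 'stemm' -> 'stem'
            word = word[:-1]
        return word

    print([naive_stem(w) for w in ["stemming", "stemmers", "stemmed"]])
    # -> ['stem', 'stem', 'stem']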

Stop Words
Words that frequently appear in a language but offer no real contextual meaning to a text. Historically, search engines would strip out these words and ignore them when present in search engine queries. Nowadays, with a greater emphasis on natural language processing, stop words are not ignored.

Supplemental Results
Backfill search engine results that would otherwise not appear prominently for mainstream search queries. The backfill tends to be populated with duplicate or near-duplicate documents that are inferred to offer no more value than the documents they are similar to.

TDN
The Domain Name

Title
An important HTML element of a page that appears at the top of a browser window and as the clickable link in search engine results. Titles should always be unique and descriptive in order to rank well.

TLD
Top Level Domain, the last label in a fully qualified domain name.

Topical Trust Flow (TTF)
Also see Trust Flow, topical trust flow creates topic specific Trust Flow scores for pages and domains. There are over 700 topics that Majestic uses.

Trust Flow (TF)
Similar to the Hilltop Algorithm in nature, Trust Flow is a score from 0 to 100 based on an iterative algorithm that Majestic uses to assess a page's distance from a seed list of authoritative websites on given topics.

UGC
User Generated Content, content that is provided by users and not a website owner. Blog comment sections, guest posts and entire social networks can be considered user generated content.

UI
User Interface, the front end of a website visible in a browser or app that users interact with.

Unique Visitor
Visitors to a website, ignoring multiple page views by the same users or return visits. There is no perfect way to define a unique visit, but two methods would be to record unique IP address and user agent combinations, or to use cookies. Because people may own and use many devices in a given day, it's relatively easy to double-count the same user.

Universal Search
A term incorporating all search engine results together in a search engine result page, including but not limited to organic results, ads, local, knowledge graph, images and video results.

URL
Uniform Resource Locator, a string of characters that uniquely identifies a resource on the Internet.

User-Agent Spoofing
An HTTP request that does not use the user-agent a server would expect for that client. For example, it is trivial to create an HTTP request that uses the Googlebot user agent, which can sometimes reveal some poor man's cloaking.
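
For example (the target URL is a placeholder), curl can send any User-Agent string, so a request claiming to be Googlebot proves nothing on its own:

    curl -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" https://www.example.com/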

UX
User Experience, the overall experience of a user visiting a web site or app. A well thought out user interface (UI), compelling products or content, friendly interactions with staff and good information architecture should invariably result in a good user experience.

VPS
Virtual Private Server, an allocation of server resources that on the software level appear to be dedicated, but are actually shared between more than one user.

With a VPS instance, the user tends to have a dedicated IPv4 address.

White Hat SEO
A term given for tactics of promoting websites in search engines that most likely fall within the search engine's guidelines. A white hat SEO may argue that their role is merely to help search engine spiders access their site, rather than any intended manipulation of search engine results.

WHOIS
A protocol for querying databases that store data pertaining to the owners of Internet resources such as domains and IP space. In the context of online marketing, varied and public WHOIS records for the domains of backlinks imply that links have been acquired from neutral 3rd parties.

Widgets
A generic term used in online marketing referring to search phrases, without referring to a specific phrase. Often you may want to discuss SEO in generic terms and referring to variations of [widgets], like [red widgets], [blue widgets] or [widgets in oregon] helps remove unnecessary detail.

Wordpress
The most popular content management system used on the web today.

Are we missing a term you'd like to know more about? Let us know and we'll add it in