How do you create copy that works well for search engines?

Previously I’ve been learning about writing effective copy. I now want to learn more about how to best write for search engine optimisation. I used a great ebook titled “How to Create Compelling Content that Ranks Well in Search Engines” by Copyblogger to help me with this.

One of the first aspects raised in “How to Create Compelling Content” is a basic understanding of the three major components that power search engines:

  • Crawling – This is all about search engine “spiders” that crawl the web for content. These are actually bits of computer code that find information on a web page, “read” it, and then tirelessly continue along their journey by following links from your page to other pages. The spider will return from time to time to look for changes to the original page. This means that there will be opportunities to change the way a search engine sees and assesses your content.
  • Indexing – The spider is not just casually browsing content, it’s storing the content it finds in a giant database. This is called indexing. The spider’s goal is to save every bit of content it crawls for the future benefit of searchers. It’s also gauging how relevant that content is to the words that searchers use when they want to find an answer to something.
  • Ranking – Ultimately it’s about how the engine decides to deliver the most relevant results to searchers. The search engine algorithm which decides on the results follows a very complex set of rules. Copyblogger explains these rules as “the ground rules for a duel between your content and other content that might satisfy a searcher’s keyword query.”

Copyblogger then goes on to explain the importance of doing some keyword research upfront. What are the words and phrases that people use to find the information they’re looking for? These are the five key things to bear in mind in relation to keyword research:

  • Research tools – Google has a good, free keyword tool and there are similar tools out there such as Keyword Tool and Ubersuggest.
  • Get specific – Even though we often talk about keywords, in most cases it will be specific (short) phrases that are relevant. For example, “new car deals” or “best car discounts”.
  • Strength in numbers – It’s important to look at the relative popularity of a specific keyword among search terms. You want to make sure that enough people use your phrase or keyword when thinking about a specific topic. If you’re trying to rank in a very competitive sector, a less competitive keyword combination that can rank for an easier phrase might be preferable.
  • Highly relevant – This feels like the main point when doing keyword research: “Make sure that the search terms you are considering are highly relevant to your ultimate goal.”
  • Content resource – The key question here is whether a particular keyword phrase can support the development of content that readers perceive as value-adding. Copyblogger breaks this down into the following aspects: (1) satisfies the preliminary needs of the site visitor (2) acts as the first step in your sales or action cycle and (3) prompts people to link to it.

The book then goes into more of the nitty gritty by highlighting “Five SEO copywriting elements that matter”:

  1. Title – With the title of your content, the critical thing is to make sure that the keywords you’re targeting are included in your title. Also, the closer to the front of the title your keywords are, the better. I’ve included some more points on how to best optimise your title in Fig. 1 and 2 below.
  2. Meta-Description – Copyblogger makes a good point by stressing that SEO copywriting isn’t just about ranking. It’s also about what your content looks like on a search engine results page (“SERP”). The meta description of your content will generally be the “snippet” copy for the search result below the title, which influences whether a person decides to read your content (and whether she clicks). As with the title, it’s best to lead the meta description with your keyword phrase. You also want to keep the meta description under 165 characters so the full description is visible in the search result. See Fig. 3 below for some examples of effective meta-descriptions.
  3. Content – For search optimisation purposes, your content should be on topic and strongly focus on the subject matter of the desired keyword phrases. It’s generally accepted that very brief content may have a harder time ranking than a page with more substantial content. So you’ll want to have a content body length of at least 300 words.
  4. Keyword frequency – There’s a clear difference between “keyword frequency” and “keyword density”. Keyword frequency is the number of times your targeted keyword phrase appears on the page. In contrast, keyword density is the ratio of those keywords to the rest of the words on the page. Copyblogger explains how keyword frequency affects ranking and that keyword density might not. I guess it’s a case of using common sense when writing content, checking the frequency of your keywords against the rest of the content. A keyword density greater than 5.5% could find you guilty of what’s called “keyword stuffing”, which tends to make Google think you’re trying to game their system.
  5. Linking out – Search engines are keen that your content is well connected with other content and pages, hence why linking out is important from an SEO perspective. Copyblogger provides some good tips with respect to linking out (see Fig. 4 below).
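
The distinction in point 4 between keyword frequency and keyword density is easy to check programmatically. Below is a minimal Python sketch of such a check; the 5.5% stuffing threshold and the “Ford Focus discounts” phrase come from the text above, while the tokenisation and function name are my own simplifications.

```python
import re

def keyword_stats(text, phrase, density_limit=5.5):
    """Count how often a keyword phrase appears (frequency) and what share
    of the total word count it represents (density, as a percentage)."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    phrase_words = phrase.lower().split()
    # Frequency: number of positions where the full phrase occurs in sequence.
    frequency = sum(
        words[i:i + len(phrase_words)] == phrase_words
        for i in range(len(words) - len(phrase_words) + 1)
    )
    # Density: keyword words as a percentage of all words on the page.
    density = 100.0 * frequency * len(phrase_words) / len(words) if words else 0.0
    return frequency, density, density > density_limit

body = ("Looking for Ford Focus discounts? Our Ford Focus discounts page "
        "lists every deal we could find on a new Ford Focus.")
frequency, density, stuffed = keyword_stats(body, "Ford Focus discounts")
print(frequency, round(density, 1), stuffed)
```

On this deliberately over-optimised sample copy the phrase appears twice in only 21 words, so the density check flags it as likely keyword stuffing.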

Main learning point: I’ve learned that getting your copy right is extremely important from an SEO perspective. This starts with being clear about the ultimate goal that you’re trying to achieve through your content, making sure this is reflected in your keyword phrase and, subsequently, in the title and body of the actual content.

Fig. 1 – Optimising the title of your content for SEO – Adapted from:

  • Have an alternative title in the title tag – It’s important that your CMS or blogging software allows you to serve an alternate title in the title tag (which is the snippet of code Google pulls to display a title in search results) rather than the headline that appears on the page.
  • Try to keep title length under 72 characters – Keeping your title length under 72 characters will ensure the full title is visible in a search result, increasing the likelihood of a click-through.
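
The two character limits mentioned so far (72 characters for the title, 165 for the meta description) lend themselves to a simple automated check. A minimal sketch, assuming only those limits from the text; the function name and warning format are my own:

```python
TITLE_LIMIT = 72   # characters of a title typically visible in a search result
META_LIMIT = 165   # characters of a meta description typically shown as the snippet

def check_lengths(title, meta_description):
    """Return warnings for a title or meta description that risks being
    truncated on a search engine results page."""
    warnings = []
    if len(title) > TITLE_LIMIT:
        warnings.append(f"title is {len(title)} chars (limit {TITLE_LIMIT})")
    if len(meta_description) > META_LIMIT:
        warnings.append(f"meta description is {len(meta_description)} chars (limit {META_LIMIT})")
    return warnings

print(check_lengths(
    "Ford Focus: 3 ways to get the best discount",
    "Compare the latest Ford Focus discounts and find out how to "
    "negotiate the best deal on a new car.",
))
```

The sample title and snippet both fit within the limits, so the check returns no warnings.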

Fig. 2 – Sample titles, optimised for SEO:

For example, let’s say the keyword phrase that I’m looking to target is “Ford Focus discounts”; sample titles could then look something like this:

“Three ways to get the best discount on your Ford Focus”

“Why getting an incredible discount on a new Ford Focus is easy”

Both titles contain my keyword phrase, but the keywords might not be in the best location for ranking or even for quick-scanning searchers compared with regular readers. By using an alternate title tag, I can enter a more search-optimized title for Google and searchers only, such as:

“Ford Focus: 3 ways to get the best discount”

“Getting discounts on a Ford Focus is easy”

Fig. 3 – Examples of effective meta-descriptions – Taken from: 


Fig. 4 – Best practices with respect to linking out – Taken from:
  • Link to relevant content fairly early in the body copy
  • Link to relevant pages approximately every 120 words of content
  • Link to relevant interior pages of your site or other sites
  • Link with naturally relevant anchor text
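
The “approximately every 120 words” guideline above translates into a simple editing target. A quick sketch under that assumption (the function name is my own):

```python
def suggested_link_count(body_text, words_per_link=120):
    """Suggest how many outbound links a piece of body copy should carry,
    based on roughly one link per 120 words of content."""
    word_count = len(body_text.split())
    # Even very short copy benefits from at least one relevant link.
    return max(1, word_count // words_per_link)

draft = "word " * 360   # stand-in for a 360-word draft
print(suggested_link_count(draft))  # 360 // 120 = 3
```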

Find similar fashion through Cortexica visual search

Last week, I went to a great talk by Alex Semenzato, who works as a Business Development Manager at Cortexica and is the founder of FashTech. In his talk, Alex explained the visual search technology developed by Cortexica. He discussed this technology in the context of fashion products, making the case for how visual search can really change the way we find out about fashion products and trends.

Especially given that fashion is such a visual product, it was very interesting to hear how visual search can drive product discovery. Because of its visual nature, I can imagine that it’s much easier to explain what you’re looking for through images than through text. This is what I learned from Alex Semenzato’s talk:

  1. Find similar – The main proposition behind Cortexica’s findSimilar™ software is that “you can shop any look just by taking a picture”. Users can take a picture on their mobiles of a design pattern or look that they like and use the visual search functionality in the client app to find either the exact item or the most similar option(s) within the retailer’s database. One big caveat though: the quality of your visual search results is very dependent on the products available in the database of the retailer whose app you’re using. For instance, when I did a visual search through the Zalando app, the most relevant results that the app returned didn’t get close to the look that I was searching for (see Fig. 1 below). In comparison, the results that the Macy’s app returned felt more relevant (see Fig. 1 below).
  2. Matching – Alex explained that the matching of users’ pictures against fingerprints in the retailer’s database takes into account things such as colour pattern, texture and – eventually – shape. The technical challenge is to really get this mix right when doing the matching against available items in a retailer’s database. For example, the search technology needs to understand the different textures that a fabric like denim can have. It will be interesting to see how Cortexica’s competitors such as Snap Fashion, Chic Engine and ASAP54 compare in this respect.
  3. Big data – In his presentation, Alex talked about potential B2C opportunities around Cortexica’s visual search capability. “Big data” was the first thing that he mentioned. Sometimes it feels like I can’t go to a presentation without at least one person mentioning the words “big data”, but being able to measure demand makes a lot of sense in the context of visual search and fashion. One could use the analytics around products searched for (and bought) to gauge demand and to aid with product on-boarding. However, as a member of the audience rightly pointed out, the value of past data can be quite limited in the world of fashion, where it’s all about today’s trends. Alex also talked about using the data generated from visual searches in relation to merchandising solutions, associating similar items with the main product that one wants to promote.
  4. Things to watch out for – With the previous point about data opportunities come questions around data protection and data ownership. I would like to find out more about visual search and aspects like data usage and ownership: can I just use the data generated from ‘street style images’? Will the retailer own the images that I took and any associated data? Also, after having had a play with the functionality, I wondered how to get the user experience around visual search right, especially if users don’t discover the type of product or look that they were looking for. For example, how do you keep users engaged if their retailer or publisher app doesn’t return the desired results?

Main learning point: I really enjoyed Alex Semenzato’s talk about the visual search capability developed by Cortexica. It seems like a very logical and intuitive way to discover new products and I can see the visual aspect working particularly well in relation to fashion products. Given that this is a relatively new technology, there are a few things which still need to pan out: the use of data and the overarching discovery experience for the user. Cortexica has definitely created an interesting piece of technology which can benefit consumers and retailers alike.

Fig. 1 – Using Cortexica “Find Similar” visual search through the Zalando app

The image that I searched on:



The “most relevant” results that I got back on Zalando’s app:



The “most relevant” results that I got back on Macy’s app:



Related links for further learning:


Facebook Graph – Can it really take on Google?

With the amount of data that Facebook has on its users and their activities, I guess it came as no surprise when they recently launched Facebook Graph.

One of the first questions raised was whether Facebook is now looking to take on Google when it comes to search. In essence, Facebook Graph generates a variety of results (e.g. people, places, interests, etc.) all based on the social data available through your network on Facebook.

An obvious first comparison would be with Google+ and it triggered me to think a bit more about what Facebook Graph entails and how it compares to Google+:

  1. Facebook uses the data it’s already got – I thought this post on Fast Company explains Facebook Graph pretty well: “Graph Search leverages Facebook’s social data to pinpoint any combination of people, places, photos and interests. It is designed to field queries such as “photos of my best friend and my mom” or “friends of friends who like my favorite band and live in Palo Alto” or “Indian restaurants in Palo Alto that friends from India like.” In essence, all Facebook Graph does is use the social data it already has. In contrast, the launch of Google+ signified a venture into a fairly new area for Google, with it having to build a new social platform almost from the ground up.
  2. Facebook Graph has its (search) limitations – It was interesting that Facebook founder Mark Zuckerberg said that “We wouldn’t suggest people come and do web searches on Facebook, that’s not the intent” at the launch of Facebook Graph. Indeed, Graph is no Google when it comes to web search; searches on Graph are limited to data that are either public or visible to you. Also, as the aforementioned Fast Company article points out, if one of your friends has wrongly labelled a certain picture, it’s just a case of tough luck with Facebook Graph.
  3. Different algorithms – Whereas Google’s search algorithms are predominantly based on keywords and links, Facebook Graph takes into account social data around “likes” and “check-ins”. Consequently, the search results that Graph returns are likely to be a lot more personalised and authentic than Google’s. As I mentioned under point 1. above, Facebook has an almost endless amount of social data at its disposal which Google will struggle to compete with. Unlike Google, Facebook Graph enables users to search by using combined phrases such as “My friends who like cycling and have recently been to France.”

Main learning point: the main question I asked myself after having done this brief comparison of Facebook Graph and Google (Plus) was: “is it really fair to compare the two?” Google has clearly established itself as a very reliable web search platform, whilst Facebook Graph is clearly concentrating on “social search”. Having said that, I can see Google+ eventually suffering from Facebook Graph, mostly due to Facebook’s head start when it comes to social data. Facebook Graph, however, is currently only available in beta and it might not hit the dizzying heights that Facebook itself has. Facebook users might not sign up to Zuckerberg’s grand ‘one stop shop’ vision and might prefer to search through Google …

Related links for further learning:

Social search: adding a ‘personal’ element to searching

Searching through Google has become completely ingrained in the way we look for information online. It provides search results based on a search term entered by the user. However, Google’s big search engine rival Bing and Facebook have recently tied up to create “social search”.

In a nutshell, this means that Bing will add “social signals”, based on people’s “Likes” on Facebook, to its search results. Users (provided they’re signed in on Facebook) will have the option of filtering search results by what their friends have “liked” on Facebook. For instance, when I search for “Dutch pancake places in London”, I will be able to see those restaurants “liked” by my Facebook friends. It’s a good way of ‘personalising’ my search results since a good number of my Dutch friends are likely to know a thing or two about pancakes.

Similarly, with the new “Profile Search”, Bing will return search results that have a greater “social proximity” to the user. Typically, when I search for “Pete Smith” in Google, I’ll get about 7 million results. With Bing’s new Profile Search, the results will have been filtered to show my Facebook friends and my friends’ friends, thus increasing the likelihood that I end up with the Pete Smith I was looking for.

This new search functionality will kick in if Facebook users are logged into Facebook when they reach Bing or when they have “cookies” of data storing their basic Facebook information on their PCs or other devices. As a result, even when you’re not signed in (but have not unchecked the “keep me logged in” box) other users can still search Bing using your social graph. I can well imagine that this might lead to embarrassing situations in certain cases …

Even though I haven’t had a chance to test the new search functionality myself (Bing is looking for a UK release later this year), these are the things I’ve learned thus far:

  1. “Social search” adds a whole new dimension to search – It makes search results much more personal and relevant.
  2. The tie-in with Facebook sounds promising – Even though it’s early stages, this functionality has the potential to be extremely beneficial, both to users and brands.
  3. More privacy related issues are looming – With people becoming increasingly aware that an ‘online identity’ can be hard to shake off, the same goes for search results surfacing all kinds of personal information and content.

Main learning point: this link-up between Facebook and Bing introduces a new, exciting element to search. The ability to personalise one’s search results looks to become an important driver for search in the future. Some initial bugs still need to be ironed out, but I can imagine that future iterations of social search will revolutionise the way we search.

Related links for further learning:

Speeding up search: Google Instant

Towards the end of last week I learned about the launch of Google Instant, which aims to speed up the search process by providing suggested search terms as soon as the user starts typing in a search query. For instance, I started typing in “br” and the search bar started showing four suggestions, varying from “british airways” to “britains got talent”.

With this new, dynamic way of generating results, Google claims that typical searchers will save 2 to 5 seconds on every search query. With Instant, Google clearly tries to differentiate itself from competing search engines such as Bing and Yahoo!, which offer ‘search before you type’ functionality. The main things I learned with regard to Google Instant:

  1. Dynamic results – Relevant search results are generated whilst the user is typing in a query.
  2. Predictions – Google Instant predicts the rest of a search query even before the user has finished typing.
  3. Scroll through predictions – The user can scroll through the predictions (see learning point 2.) and see the results for each prediction when scrolling down.
  4. It’s aggressive! – I can imagine that Google Instant might not be for everyone, especially if you’d rather not have suggestions presented to you that are completely irrelevant to what you’re looking for!

What does Google Instant add to the existing search engines out there? The main value added lies in the speed with which search results are generated and the ability to modify search queries on the fly. I guess Google Instant will be particularly appealing to people who search very frequently, e.g. researchers. For Google it will be another product that helps differentiate itself from its competitors.

Main learning point: a distinction can be made between “searching before you type” versus “searching as you type”. This ability to generate instant, dynamic results will help to speed up and simplify searches.

Related links for further learning: