
Search Engine Database Schemas

Updated on November 2, 2009

This tutorial is an extension to my tutorial on how to make a search engine

I'm going to explain the database schema of a basic web search engine. These are the fundamentals; you can add your own details for the special needs of your project.


Schema of Main Search Engine Index

The main index of the search engine is simple. It just maps key phrases to web pages including a score of how well the key phrase or keyword relates to that web page.

This is the exact schema I use at Secret Search Engine Labs for the main index:

keywordid int(11)

pageid int(11)

score float

There are two indexes on this table. The first is on keywordid and score; that's how we do searches, with a select query on keywordid sorted on score.

The second index is on pageid and is only needed if you are going to do live updates to the index. When a page is re-indexed we first delete all entries for that page by doing a lookup on pageid. We then add the new data for that page.

At one time I had an index on keywordid and pageid and updated the page-keyword information in place, but after some testing it proved significantly faster to just delete everything via the pageid index and then add the new data.
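The main index and its two indexes can be sketched like this, using SQLite for illustration (the original uses MySQL-style int(11)/float columns); the function names are mine, not from the article:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE main_index (keywordid INTEGER, pageid INTEGER, score REAL)")
# Index 1: serves searches -- look up by keyword, ordered by score.
con.execute("CREATE INDEX idx_keyword_score ON main_index (keywordid, score)")
# Index 2: serves live updates -- find all rows belonging to a page.
con.execute("CREATE INDEX idx_page ON main_index (pageid)")

def reindex_page(pageid, keyword_scores):
    """Re-index a page: delete all its old rows via the pageid index,
    then insert the fresh keyword/score data (faster than updating in place)."""
    con.execute("DELETE FROM main_index WHERE pageid = ?", (pageid,))
    con.executemany("INSERT INTO main_index VALUES (?, ?, ?)",
                    [(kw, pageid, s) for kw, s in keyword_scores])

def search(keywordid, limit=10):
    """A search is a select on keywordid, sorted on score, best first."""
    return con.execute(
        "SELECT pageid, score FROM main_index "
        "WHERE keywordid = ? ORDER BY score DESC LIMIT ?",
        (keywordid, limit)).fetchall()

reindex_page(1, [(7, 0.9), (8, 0.4)])
reindex_page(2, [(7, 0.6)])
print(search(7))  # [(1, 0.9), (2, 0.6)]
```

Re-indexing a page again simply repeats the delete-then-insert, so both lookups stay cheap.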

Alternative Method

The above index is great for doing updates on the fly; you can search the index and update it at the same time. It's slow for searches though, and the following method can be used to speed things up if you don't need real-time updates of the index.

To speed up searches, you need to be able to fetch everything related to a single keyword really fast. This can be done by making a separate database row for every keyword with only a big text field that holds the page-score information in a list format.

keywordid int(11)

scores text

The data in the scores field includes all the scores for that keyword as a pageid=score;pageid=score;pageid=score; string. Even though this may seem inefficient at first, it's a lot faster to read the data from disk sequentially than to look up every pageid=score pair in a separate row in the database.

The drawback here is that you have to maintain two indexes. The first one for building and updating the index and the second one, which is generated from the first one, for doing the searches.
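Packing and unpacking that scores string is straightforward; here's a minimal sketch (the function names are illustrative, not from the article):

```python
def pack_scores(page_scores):
    """List of (pageid, score) tuples -> 'pageid=score;pageid=score;' string."""
    return "".join(f"{pid}={score};" for pid, score in page_scores)

def unpack_scores(packed):
    """'pageid=score;...' string -> list of (pageid, score) tuples."""
    pairs = [p for p in packed.split(";") if p]
    return [(int(pid), float(score))
            for pid, score in (p.split("=") for p in pairs)]

packed = pack_scores([(17, 0.9), (23, 0.75), (42, 0.5)])
print(packed)                  # 17=0.9;23=0.75;42=0.5;
print(unpack_scores(packed))   # [(17, 0.9), (23, 0.75), (42, 0.5)]
```

Serving a search then means reading one row, unpacking it, and taking the top entries, instead of scanning many index rows.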

Using Files for the Search Index

The second index can also be done using files, by creating one file per keyword. You make the keyword your filename and put the pageid=score;pageid=score; data inside the file.

By using the keyword as the filename, the filesystem becomes your index. This means that if you have more than a couple of thousand keywords you need to split them up into separate directories, as most filesystems start to slow down when there are too many files in the same directory.
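One common way to do the directory split is to shard on a hash prefix of the keyword; the two-character md5 prefix here is my assumption, not the article's scheme:

```python
import hashlib
import os
import tempfile

def keyword_path(root, keyword):
    # Shard directory = first two hex chars of md5(keyword), giving 256 buckets.
    shard = hashlib.md5(keyword.encode("utf-8")).hexdigest()[:2]
    return os.path.join(root, shard, keyword)

def write_scores(root, keyword, packed_scores):
    """Store the 'pageid=score;...' data in one file per keyword."""
    path = keyword_path(root, keyword)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(packed_scores)

def read_scores(root, keyword):
    """A search becomes a single sequential file read."""
    with open(keyword_path(root, keyword)) as f:
        return f.read()

root = tempfile.mkdtemp()
write_scores(root, "fish", "17=0.9;23=0.75;")
print(read_scores(root, "fish"))  # 17=0.9;23=0.75;
```

Note that if keyphrases can contain characters your filesystem dislikes, you'd want to sanitize or hash the full filename too.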


The Keyword Table

The main index has a field named keywordid. This maps into the keyword table, where all keywords and keyphrases are stored.

This is a somewhat simplified version of the database schema used over at the lab.

keywordid int(11)

keyword varchar(50)

This simply maps written keywords, like "fish", and keyphrases, like "New York", to an id. Using the id instead of the keyword in the main index makes it a lot smaller, especially for long keyphrases like "social bookmarking sites".

Over at Secret Search Engine Labs I use two additional fields for registering how many words are in the keyphrase (1-4) and how many times the keyphrase occurs in the index. This information is used for statistics and for some of the filtering algorithms, but it is not needed for a basic search engine.
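The one operation the indexer needs against this table is "give me the id for this keyword, creating it if it's new"; a sketch in SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE keywords ("
            "keywordid INTEGER PRIMARY KEY AUTOINCREMENT, "
            "keyword VARCHAR(50) UNIQUE)")

def keyword_id(keyword):
    """Return the id for a keyword or keyphrase, inserting it on first sight."""
    row = con.execute("SELECT keywordid FROM keywords WHERE keyword = ?",
                      (keyword,)).fetchone()
    if row:
        return row[0]
    cur = con.execute("INSERT INTO keywords (keyword) VALUES (?)", (keyword,))
    return cur.lastrowid

print(keyword_id("fish"))       # 1
print(keyword_id("New York"))   # 2
print(keyword_id("fish"))       # 1 -- already known, same id
```

The UNIQUE constraint on keyword keeps duplicates out even if two indexer processes race; in that case you'd catch the constraint error and re-select.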

The Page Table

The main index also has a field named pageid. This, as you probably guessed, maps to information about a single web page in the pages table.

Here's the page table database schema abbreviated:

pageid int(11)

urlid int(11)

title varchar(70)

descr text

fdate int(10)

finterval int(11)

The pageid is the id used in the main index and in the links table to identify a page. urlid is an index into a table containing all urls. title and descr are used together with urlid to create the entry for the page when displaying a search engine results page (SERP).

fdate is the time and date of the last page fetch and finterval tells us how often to fetch the page. These fields are used to trigger refetches of pages.
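Finding pages due for a refetch is then a single query: a page is due once fdate + finterval has passed. A sketch, assuming fdate is a unix timestamp and finterval is in seconds:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pages (pageid INTEGER PRIMARY KEY, urlid INTEGER, "
            "title VARCHAR(70), descr TEXT, fdate INTEGER, finterval INTEGER)")

now = int(time.time())
day = 86400
con.executemany("INSERT INTO pages VALUES (?, ?, ?, ?, ?, ?)", [
    (1, 10, "Fresh page", "", now,           7 * day),  # fetched just now
    (2, 11, "Stale page", "", now - 8 * day, 7 * day),  # a day overdue
])

# Pages whose refetch interval has elapsed:
due = con.execute("SELECT pageid FROM pages WHERE fdate + finterval <= ?",
                  (now,)).fetchall()
print(due)  # [(2,)] -- only the overdue page needs refetching
```

For a large table you'd index this (for instance on fdate) so the crawler doesn't scan every page on each scheduling pass.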

At the lab I also cache some data about the page that could be generated from the other tables. It's just so much faster, when displaying info about a page, to read the cached info instead of doing several data-intensive queries.

The Links Table

To be able to score pages well you have to use anchor text from external links. That means I have to keep a table with links between pages.

Here's the database schema for the links table, with the more obscure fields left out:

linkid int(11)

urlid int(11)

anchor varchar(80)

spage int(11)

dpage int(11)

Every link has a source page, spage, which is the page where the link is located and a destination page, dpage, which is the page you will land on if you click the link.

Anchor text, anchor, is recorded for every link and is used by the ranking algorithm when building the index.

The urlid is not strictly needed in the links table but is currently used by my link parser as the unique id to tell one link from another, and that's why it's included.

There are three indexes on this table: on linkid, on dpage and on spage. We need to look up a link by its id, of course; we need to find all inbound links to a page, using dpage; and we need to be able to find all outgoing links from a page by querying spage.
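The two page-centric lookups can be sketched like this (SQLite for illustration, sample links invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE links (linkid INTEGER PRIMARY KEY, urlid INTEGER, "
            "anchor VARCHAR(80), spage INTEGER, dpage INTEGER)")
con.execute("CREATE INDEX idx_dpage ON links (dpage)")  # inbound lookups
con.execute("CREATE INDEX idx_spage ON links (spage)")  # outbound lookups

con.executemany("INSERT INTO links VALUES (?, ?, ?, ?, ?)", [
    (1, 100, "best fish recipes", 5, 9),  # page 5 links to page 9
    (2, 101, "fish cooking tips", 6, 9),  # page 6 links to page 9
    (3, 102, "about us",          9, 5),  # page 9 links back to page 5
])

# Anchor text of all inbound links to page 9 -- the ranking signal:
inbound = [a for (a,) in con.execute(
    "SELECT anchor FROM links WHERE dpage = ?", (9,))]
print(inbound)

# All pages that page 9 links out to:
outgoing = con.execute("SELECT dpage FROM links WHERE spage = ?", (9,)).fetchall()
print(outgoing)  # [(5,)]
```

When building the main index, the inbound anchor texts for a page are tokenized and scored much like the page's own content.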

Other Tables

In the database we use at the lab there are a couple more tables for specialized functions. There's a sites table where the robots.txt file and cookies are cached, there's a fetchqueue table where pages wait in line to be fetched from the web, and there are tables for configuration, for filters, and for keeping track of new urls that someone wants added to the index.

Most of this is basic data management and not so much high-flying search technology, so I'll leave it for a later tutorial.

Your questions and comments are most welcome in the comments section below! If there is interest I'll add more information to the tutorial.

Comments


    • sbyholm (author), 6 years ago from Finland

      I don't do classical analytics as I am the search engine not the logger :) A real simple analytics schema logging hits could be

      Time:IP:URL:User Agent:Browser:OS

      you can then expand on that to make more complex analysis

    • Zkezemz, 6 years ago

      Good stuff! I was actually working on a search engine's log analysis and found this useful. Do we have any specific schema for the search analytics?

    • sbyholm (author), 7 years ago from Finland

      liran, I have thought of that, especially for when the table or traffic grows so much that you would need several servers. There's a slight problem though: you often need to access the table both by keyword and by page number. At least when updating all keywords for a specific page, you first need to delete all keywords belonging to that specific page and then add the new ones.

    • liran570, 7 years ago

      Hi sbyholm.

      Thank you for this tutorial! :)

      I wonder: what about saving the keywords table (or maybe also the keyworksId_pagesId table) not in one big table, but in many tables, and using some hash algorithm to know where to expect to find each keyword (or a keyworksId_pagesId connection)?

      For example:

      Instead of using one "keywords" table, use "keywords_0", "keywords_1", "keywords_2" ....... "keywords_f" and the hash algorithm will be to do an md5 on the keyword and take the first character as the hash index (0,1,2,3...c,d,e,f)
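The sharding scheme liran570 describes can be sketched in a few lines; this is just an illustration of the comment's idea, not something used by the article's engine:

```python
import hashlib

def shard_table(keyword):
    """Pick one of 16 tables, keywords_0 .. keywords_f, by md5 of the keyword.

    The first hex character of the digest is the shard index, so the same
    keyword always lands in the same table, and load spreads roughly evenly.
    """
    first_hex = hashlib.md5(keyword.encode("utf-8")).hexdigest()[0]
    return f"keywords_{first_hex}"

print(shard_table("fish"))
print(shard_table("New York"))
```

Taking two hex characters instead of one would give 256 shards; the same trick works for sharding across servers as well as across tables.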

    • sbyholm (author), 8 years ago from Finland

      Hello Scott,

      To make searches faster, the first thing to check is that you have a good database index on your search engine index table (the table that maps words to results). Then make sure you have enough RAM on your server. If all else fails, do as I do and cache the first 100 search results for every keyword in a separate table. Less than 1% of surfers will ever search beyond that.

    • Scott, 8 years ago

      Hello,

      This is a good discussion that you have posted. I am trying to rewrite a search result script for my search engine script where I want the search result faster to fetch the search words from the database. Do you think you can even give a detail script coding sample for this?

      Thanks.

    • grillrepair, 8 years ago from florida

      You know, I do not always like to consciously remember how uneducated I am. i do not need to know that i do not know how it works, i just need it to work. incidentally, i have pretty good rankings on your search engine, i guess it is working well! google does seem to have a few bugs in that department, though.

    • sbyholm (author), 8 years ago from Finland

      Add your questions, comments and ideas for new tutorials here

