Search Engine Database Schemas

This tutorial is an extension of my tutorial on how to make a search engine.

I'm going to explain the database schema of a basic web search engine. These are the fundamentals; you can add your own details for the specific needs of your project.


Schema of Main Search Engine Index

The main index of the search engine is simple. It just maps key phrases to web pages, along with a score of how well the key phrase or keyword relates to that web page.

This is the exact schema I use at Secret Search Engine Labs for the main index:

keywordid int(11)
pageid int(11)
score float

There are two indexes on this table. The first index is on keywordid and score; that's how we do searches, with a select query on keywordid sorted by score.

The second index is on pageid and is only needed if you are going to do live updates to the index. When a page is re-indexed, we first delete all entries for that page by doing a lookup on pageid. We then add the new data for that page.
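As a rough sketch, assuming MySQL, the table and its two indexes could look like this (the table and index names are illustrative, not the exact ones used at the lab):

CREATE TABLE main_index (
    keywordid INT(11) NOT NULL,   -- id of the keyword or keyphrase
    pageid    INT(11) NOT NULL,   -- id of the web page
    score     FLOAT   NOT NULL,   -- how well the keyword relates to the page
    KEY idx_keyword_score (keywordid, score),   -- used when searching
    KEY idx_page (pageid)                       -- used when re-indexing a page
);

-- A search is then a single range scan on the first index
-- (the keyword id 123 is just an example value):
SELECT pageid, score
FROM main_index
WHERE keywordid = 123
ORDER BY score DESC
LIMIT 10;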

At one time I had an index on keywordid and pageid and updated the page-keyword information in place, but after some testing it turned out to be significantly faster to just delete everything via the pageid index and then add the new data.
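The delete-then-insert update described above could then be sketched like this (the values are illustrative):

-- Remove everything we know about page 456...
DELETE FROM main_index WHERE pageid = 456;

-- ...and insert the freshly computed keyword scores for it.
INSERT INTO main_index (keywordid, pageid, score)
VALUES (123, 456, 0.8),
       (124, 456, 0.3);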

Alternative Method

The above index is great for doing updates on the fly; you can both search the index and update it at the same time. It's slow for doing searches though, and the following method can be used to speed things up if you don't need real-time updates of the index.

To speed up searches, you need to be able to fetch everything related to a single keyword really fast. This can be done by creating a separate database row for every keyword, with just one big text field that holds the page-score information in list format.

keywordid int(11)
scores text

The data in the scores field holds all the scores for that keyword as a pageid=score;pageid=score;pageid=score; string. Even though this may seem inefficient at first, it's a lot faster to read the data from disk sequentially than to look up every pageid=score pair from a separate row in the database.
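A minimal sketch of this read-optimized table, again assuming MySQL and an illustrative table name:

CREATE TABLE keyword_scores (
    keywordid INT(11) NOT NULL PRIMARY KEY,
    scores    TEXT    NOT NULL   -- "pageid=score;pageid=score;..."
);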

The drawback here is that you have to maintain two indexes: the first one for building and updating the index, and the second one, generated from the first, for doing the searches.
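One way to generate the second table from the first, using the sketched names above and MySQL's GROUP_CONCAT function:

INSERT INTO keyword_scores (keywordid, scores)
SELECT keywordid,
       GROUP_CONCAT(CONCAT(pageid, '=', score)
                    ORDER BY score DESC SEPARATOR ';')
FROM main_index
GROUP BY keywordid;
-- Note: the group_concat_max_len setting may need to be raised for
-- keywords with many pages, or the string will be truncated.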

Using Files for the Search Index

The second index can also be done using files, by creating one file per keyword. You make the keyword the filename and put the pageid=score;pageid=score; data inside the file.

By using the keyword as the filename, the filesystem becomes your index. This means that if you have more than a couple of thousand keywords you need to split them up into separate directories, as most filesystems start to slow down when there are too many files in the same directory.


The Keyword Table

The main index has a field named keywordid. This maps to the keyword table, where all keywords and keyphrases are stored.

This is a somewhat simplified version of the database schema used over at the lab.

keywordid int(11)
keyword varchar(50)

This simply maps written keywords, like "fish", and keyphrases, like "New York", to an id. Using the id instead of the keyword in the main index makes it a lot smaller, especially for long keyphrases like "social bookmarking sites".
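A sketch of the keyword table, with a unique index so the indexer can quickly look up or create the id for a phrase (the names and constraints are illustrative):

CREATE TABLE keywords (
    keywordid INT(11)     NOT NULL AUTO_INCREMENT PRIMARY KEY,
    keyword   VARCHAR(50) NOT NULL,
    UNIQUE KEY idx_keyword (keyword)
);

-- Resolving a keyphrase to its id when indexing or searching:
SELECT keywordid FROM keywords WHERE keyword = 'social bookmarking sites';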

Over at Secret Search Engine Labs I use two additional fields for registering how many words are in the keyphrase (1-4) and how many times the keyphrase appears in the index. This information is used for statistics and for some of the filtering algorithms but is not needed for a basic search engine.

The Page Table

The main index also has a field named pageid. This, as you probably guessed, maps to information about a single webpage in the pages table.

Here's the page table database schema, abbreviated:

pageid int(11)
urlid int(11)
title varchar(70)
descr text
fdate int(10)
finterval int(11)

The pageid is the id used in the main index and in the links table to identify a page. urlid is just an index into a table containing all urls. title and descr are used together with urlid to create the entry for the page when displaying a search engine results page (SERP).
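As a sketch, again assuming MySQL and illustrative names:

CREATE TABLE pages (
    pageid    INT(11)     NOT NULL AUTO_INCREMENT PRIMARY KEY,
    urlid     INT(11)     NOT NULL,   -- index into the url table
    title     VARCHAR(70) NOT NULL,
    descr     TEXT,                   -- description shown on the SERP
    fdate     INT(10)     NOT NULL,   -- time of the last fetch
    finterval INT(11)     NOT NULL    -- how often to refetch the page
);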

fdate is the time and date of the last page fetch, and finterval tells us how often to fetch the page. These fields are used to trigger refetches of pages.
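Assuming fdate is stored as a unix timestamp and finterval as a number of seconds, pages that are due for a refetch could be picked up like this:

SELECT pageid, urlid
FROM pages
WHERE fdate + finterval <= UNIX_TIMESTAMP();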

At the lab I also cache some data about the page that could be generated from the other pages. It's just so much faster, when displaying info about a page, to read the cached info instead of doing several data-intensive queries.

The Links Table

To be able to score pages well, you have to use anchor text from external links. That means I have to keep a table of links between pages.

Here's the database schema for the links table, with the more obscure fields left out:

linkid int(11)
urlid int(11)
anchor varchar(80)
spage int(11)
dpage int(11)

Every link has a source page, spage, which is the page where the link is located, and a destination page, dpage, which is the page you will land on if you click the link.

Anchor text, anchor, is recorded for every link and is used by the ranking algorithm when building the index.

The url is not necessarily needed in the links table but is currently used by my link parser as the unique id to separate one link from another, and that's why it's included.

There are three indexes on this table: on linkid, on dpage and on spage. We need to look up a link from its id of course, we need to find all inbound links to a page using dpage, and we need to be able to find all outgoing links from a page by querying spage.
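A sketch of the links table with those three indexes, plus the two typical lookups (the names and the page id 456 are illustrative):

CREATE TABLE links (
    linkid INT(11)     NOT NULL AUTO_INCREMENT PRIMARY KEY,
    urlid  INT(11)     NOT NULL,   -- url of the link, used by the parser as a unique id
    anchor VARCHAR(80) NOT NULL,   -- anchor text, used by the ranking algorithm
    spage  INT(11)     NOT NULL,   -- source page, where the link is located
    dpage  INT(11)     NOT NULL,   -- destination page, where the link points
    KEY idx_dpage (dpage),
    KEY idx_spage (spage)
);

-- All inbound links to a page, with their anchor text:
SELECT spage, anchor FROM links WHERE dpage = 456;

-- All outgoing links from a page:
SELECT dpage FROM links WHERE spage = 456;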

Other Tables

In the database we use at the lab there are a couple more tables for specialized functions. There's a sites table where the robots.txt file and cookies are cached, there's a fetchqueue table where pages wait in line to be fetched from the web, and there are tables for configuration, for filters and for keeping track of new urls that someone wants added to the index.

Most of this is basic data management and not so much high-flying search technology, so I'll leave it for a later tutorial.

Your questions and comments are most welcome in the comments section below! If there is interest I'll add more information to the tutorial.

8 comments


sbyholm 7 years ago from Finland Author

Add your questions, comments and ideas for new tutorials here



grillrepair 6 years ago from florida

You know, I do not always like to consciously remember how uneducated I am. I do not need to know that I do not know how it works, I just need it to work. Incidentally, I have pretty good rankings on your search engine, I guess it is working well! Google does seem to have a few bugs in that department, though.


Scott 6 years ago

Hello,

This is a good discussion that you have posted. I am trying to rewrite a search result script for my search engine script where I want the search result faster to fetch the search words from the database. Do you think you can even give a detail script coding sample for this?

Thanks.



sbyholm 6 years ago from Finland Author

Hello Scott,

To make searches faster, the first thing to check is that you have a good database index on your search engine index table (the table that maps words to results). Then make sure you have enough RAM on your server. If all else fails, do as I do and cache the first 100 search results for every keyword in a separate table. Less than 1% of surfers will ever search beyond that.


liran570 5 years ago

Hi sbyholm.

Thank you for this tutorial! :)

I wonder.. What about saving the keywords table (or maybe also the keywordsId_pagesId table) not within one big table, but within many tables, and using some hash algorithm to know where to expect to find each keyword (or a keywordsId_pagesId connection)?

For example:

Instead of using one "keywords" table, use "keywords_0", "keywords_1", "keywords_2" ....... "keywords_f" and the hash algorithm will be to do an md5 on the keyword and take the first character as the hash index (0,1,2,3...c,d,e,f)



sbyholm 5 years ago from Finland Author

liran, I have thought of that. Especially when the table or traffic grows so much that you would need several servers. There's a slight problem though, and that is that you often need to access the table both based on keyword and based on pageid. At least when updating all keywords for a specific page you need to first delete all keywords belonging to that specific page and then add the new ones.


Zkezemz 5 years ago

Good stuff! I was actually working on a search engine's log analysis and found this useful. Do we have any specific schema for the search analytics?



sbyholm 5 years ago from Finland Author

I don't do classical analytics as I am the search engine, not the logger :) A really simple analytics schema for logging hits could be:

Time:IP:URL:User Agent:Browser:OS

You can then expand on that to do more complex analysis.
