Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed to do was submit the address of a page, or URL, to the various engines, which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[2]
The process involves a search engine spider downloading a page and storing it on the search engine's own server. A second program, known as an indexer, extracts information about the page, such as the words it contains, where those words are located, and any weight given to specific words, as well as all the links the page contains. Those links are then placed into a scheduler for crawling at a later date.
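As a rough illustration, the sketch below walks through this spider, indexer, and scheduler pipeline in Python. The names used here (LinkAndTextParser, crawl, the example seed URL) are illustrative assumptions, not the implementation of any particular search engine, which would handle each stage at far greater scale.

```python
# Minimal sketch of a spider -> indexer -> scheduler pipeline.
# Names and the example URL are illustrative assumptions only.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkAndTextParser(HTMLParser):
    """Collects hyperlinks and visible words from an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.words.extend(data.split())


def crawl(seed_url, max_pages=10):
    """Spider: downloads pages. Indexer: records word positions and links.
    Scheduler: a queue of links to be crawled at a later date."""
    scheduler = deque([seed_url])   # links queued for a later crawl
    index = {}                      # word -> list of (url, position)
    seen = set()

    while scheduler and len(seen) < max_pages:
        url = scheduler.popleft()
        if url in seen:
            continue
        seen.add(url)

        # Spider step: fetch the page and keep its contents for indexing.
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except OSError:
            continue

        # Indexer step: extract words with their positions, plus outgoing links.
        parser = LinkAndTextParser()
        parser.feed(html)
        for position, word in enumerate(parser.words):
            index.setdefault(word.lower(), []).append((url, position))

        # Scheduler step: queue extracted links for crawling later.
        for link in parser.links:
            scheduler.append(urljoin(url, link))

    return index


if __name__ == "__main__":
    # The seed URL is a placeholder; point it at a real site to try the sketch.
    results = crawl("https://example.com/")
    print(f"Indexed {len(results)} distinct words")
```

In this sketch the "weight" mentioned above is omitted for brevity; a fuller indexer might, for example, give extra weight to words appearing in titles or headings.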