What are Search Engines?
A search engine is a software program or website whose primary function is gathering and reporting information available on the Internet, or a portion of it. Search engines are essentially massive databases that cover wide swaths of the Internet; they are online services that allow users to scan its contents to find websites or specific information of interest to them. Major Internet search engines include Google, Bing, Yahoo, Ask.com, and AOL.
The main functions of this software involve gathering data, evaluating or sorting it, and presenting the relevant data to users for a given query. Accordingly, search engines have the following main components:
Crawling: Search engines use special software known as a spider, crawler, or bot, which "crawls" the Internet gathering information. The spider crawls over web pages, cataloging the words and links (and following those links to other websites), and then creates an index or listing of "key search words" that online users can use to find the pages they are looking for.
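As a rough sketch of the crawling idea, the Python snippet below fetches a page, extracts its links, and queues those links for later visits. The seed URL and page limit are arbitrary placeholders for illustration; this is a toy model, not how any real search engine's crawler works.

```python
# A minimal crawling sketch: fetch pages, record their text, follow their links.
# The seed URL and max_pages limit are illustrative assumptions only.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href targets of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, max_pages=5):
    """Breadth-first crawl starting from seed_url, up to max_pages pages."""
    to_visit, seen, pages = [seed_url], set(), {}
    while to_visit and len(pages) < max_pages:
        url = to_visit.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except Exception:
            continue  # skip pages that fail to load or are not text
        pages[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        # Resolve relative links and queue them for later visits
        to_visit.extend(urljoin(url, link) for link in parser.links)
    return pages

if __name__ == "__main__":
    fetched = crawl("https://example.com")
    print(f"Fetched {len(fetched)} page(s)")
```

Real crawlers add many refinements this sketch omits: politeness delays, robots.txt rules, deduplication, and distributed fetching across thousands of machines.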
Indexing: Once the spiders have collected the data, it is processed and stored by the search engine in a way that makes it easy for people to access. All the relevant information about a webpage is stored in a database known as the index.
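As an illustration of indexing, the following sketch builds a simple inverted index: a mapping from each word to the set of pages that contain it. The tokenizer and sample pages are made up for the example; production indexes store far richer information, such as word positions and anchor text.

```python
# A minimal sketch of building an inverted index from crawled pages.
# The tokenizer is deliberately naive; real engines normalize text far more carefully.
import re
from collections import defaultdict

def tokenize(text):
    """Lowercase the text and split it into alphanumeric word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(pages):
    """Map each word to the set of page URLs that contain it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in tokenize(text):
            index[word].add(url)
    return index

# Toy stand-in for pages collected by a crawler
pages = {
    "https://example.com/a": "Search engines crawl and index the web",
    "https://example.com/b": "Spiders follow links between web pages",
}
index = build_index(pages)
print(sorted(index["web"]))  # both pages contain the word "web"
```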
Processing Queries: When a request for information comes into the search engine (hundreds of millions do each day), the engine retrieves from its index all the documents that match the query. A match is determined when the term or phrase is found on the page in the manner specified by the user.
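The sketch below shows one simple way query processing can work against an inverted index: split the query into words and intersect the sets of pages containing each word, which corresponds to an all-terms (AND) match. The small hand-written index is a stand-in for one built during indexing.

```python
# A minimal sketch of query processing against a toy inverted index.
import re

index = {
    "web":   {"https://example.com/a", "https://example.com/b"},
    "crawl": {"https://example.com/a"},
    "pages": {"https://example.com/b"},
}

def tokenize(text):
    """Lowercase the text and split it into alphanumeric word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def process_query(query, idx):
    """Return the pages that contain every word in the query (an AND match)."""
    words = tokenize(query)
    if not words:
        return set()
    result = set(idx.get(words[0], set()))
    for word in words[1:]:
        result &= idx.get(word, set())
    return result

print(process_query("web pages", index))  # only the page containing both words
```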
Ranking Results: Once the search engine has determined which results match the query, the engine's algorithm (a set of mathematical rules used to score and sort results) runs calculations on each of the results to determine which is most relevant to the given query. Search engine algorithms are very complex and closely guarded secrets.
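Because the real ranking formulas are secret, the sketch below only illustrates the general idea with a naive term-frequency score: it counts how often the query words appear on each matched page and sorts the pages by that count. Actual engines combine many more signals, such as link structure and freshness.

```python
# A toy ranking sketch: score matched pages by how often the query terms occur.
# This is an illustrative stand-in, not any search engine's actual algorithm.
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def rank(query, matched_pages):
    """Sort pages by the total number of query-term occurrences in their text."""
    query_words = set(tokenize(query))
    scores = {}
    for url, text in matched_pages.items():
        counts = Counter(tokenize(text))
        scores[url] = sum(counts[w] for w in query_words)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Toy pages that matched the query "web"
matched = {
    "https://example.com/a": "web crawlers index the web and the web grows",
    "https://example.com/b": "spiders follow links between web pages",
}
for url, score in rank("web", matched):
    print(score, url)
```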