Individual Search Engines
Search engines are web sites that collect and organize content from all over the Internet. Individual search engines compile their own searchable databases automatically by machine. Programs, often referred to as spiders or robots, crawl through web pages and search for documents containing specified keywords or keyword groups. Another program, called an indexer, reads these documents and creates an index based on the words contained in each document. The search engine's results are then ranked in order of relevancy.
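The crawl-index-rank pipeline described above can be sketched in a few lines of Python. This is a toy model, not any real engine's implementation: the pages, URLs, and the word-count scoring are all illustrative assumptions standing in for a spider, an indexer, and a relevancy ranker.

```python
import re
from collections import defaultdict

# Hypothetical in-memory "web": URL -> (page text, outgoing links).
PAGES = {
    "a.html": ("search engines index web pages", ["b.html"]),
    "b.html": ("spiders crawl pages and follow links", ["a.html", "c.html"]),
    "c.html": ("an indexer builds an index of words", []),
}

def crawl(start):
    """Spider: follow links from a start page, collecting every reachable document."""
    seen, frontier = set(), [start]
    while frontier:
        url = frontier.pop()
        if url in seen:
            continue
        seen.add(url)
        frontier.extend(PAGES[url][1])
    return seen

def build_index(urls):
    """Indexer: map each word to the set of documents containing it."""
    index = defaultdict(set)
    for url in urls:
        for word in re.findall(r"\w+", PAGES[url][0].lower()):
            index[word].add(url)
    return index

def search(index, query):
    """Rank documents by how many query words they contain (a crude relevancy score)."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for url in index.get(word, ()):
            scores[url] += 1
    return sorted(scores, key=lambda u: (-scores[u], u))

index = build_index(crawl("a.html"))
print(search(index, "index pages"))  # a.html matches both words, so it ranks first
```

Real engines replace the word-count score with far more sophisticated signals (link analysis, freshness, and so on), but the three stages are the same.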
Although search engine is really a general class of programs, the term is often used to describe systems like Google, Bing, and AltaVista that enable users to search for documents on the World Wide Web and USENET newsgroups.
Meta Search Engines
Unlike individual search engines, meta search engines do not own databases of web pages. Meta search engines are web sites that simultaneously search databases maintained by other individual search engines and/or web directories to get their listings. After collecting the results, meta search engines remove duplicate links and combine the search results into a single merged list.
Directories
Directories are collections of Internet sites organized by subject. Users click on a topic of interest and then browse through the list of resources in that category. Directories are constructed and maintained by human beings, rather than by the automated computer programs used to create search engines.
Best of the Web
Academic and Professional Search Engines
Academic and professional directories are usually created and maintained by subject experts to support the needs of researchers.
100 Time-Saving Search Engines for Serious Scholars
Deep Web Search Engines
The deep web (or invisible web or hidden web) is the name given to web pages that cannot be indexed or crawled by traditional search engines. These pages are typically stored in databases on websites and can be retrieved only by searching from within the sites themselves. Some deep web sites restrict database access to members or subscribers; others limit access to only part of their pages. Although traditional search engines cannot retrieve content from the deep web, it is estimated that 95% of the deep web can be accessed through specialized searches.
The WWW Virtual Library