Feep! Search gets data for its index from a variety of sources:
API documentation for many languages and libraries is loaded via devdocs.io. DevDocs is an open-source, offline-capable documentation viewer with scrapers contributed by many authors.
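A minimal sketch of pulling one DevDocs documentation set is below. The manifest and content URLs are assumptions based on what the DevDocs web app fetches, not an officially documented API, and the crawler shown is not Feep!'s actual loader.

```python
"""Sketch: downloading a DevDocs documentation set (URLs are assumptions)."""
import requests

DEVDOCS_MANIFEST = "https://devdocs.io/docs.json"                 # assumed list of doc sets
DEVDOCS_CONTENT = "https://documents.devdocs.io/{slug}/db.json"   # assumed path -> HTML mapping


def fetch_doc_set(slug: str) -> dict:
    """Download a single documentation set as a mapping of page path to HTML."""
    resp = requests.get(DEVDOCS_CONTENT.format(slug=slug), timeout=60)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    manifest = requests.get(DEVDOCS_MANIFEST, timeout=60).json()
    print(f"{len(manifest)} documentation sets listed")
    pages = fetch_doc_set(manifest[0]["slug"])
    print(f"first set contains {len(pages)} pages")
```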
Hacker News is a discussion forum covering software development (among other topics). Data is loaded by scraping the HN API and is updated (hopefully) weekly.
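For illustration, here is a small sketch of pulling recent items from the official Hacker News Firebase API (documented at github.com/HackerNews/API); Feep!'s actual crawler is not shown here.

```python
"""Sketch: incrementally fetching items from the official Hacker News API."""
import requests

HN_API = "https://hacker-news.firebaseio.com/v0"


def fetch_item(item_id):
    """Fetch one story or comment; returns None for deleted/missing items."""
    return requests.get(f"{HN_API}/item/{item_id}.json", timeout=30).json()


def fetch_new_items(last_seen, limit=100):
    """Yield items created since the last crawl, newest ID first."""
    max_item = requests.get(f"{HN_API}/maxitem.json", timeout=30).json()
    for item_id in range(max_item, max(last_seen, max_item - limit), -1):
        item = fetch_item(item_id)
        if item:
            yield item


if __name__ == "__main__":
    for item in fetch_new_items(last_seen=0, limit=10):
        print(item.get("type"), item.get("title") or item.get("text", "")[:60])
```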
Feep! Search uses Kiwix to load a subset of pages from Wikipedia, as curated by WikiProject Computing (wikipedia_en_computer_nopic, updated monthly).
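Kiwix distributes these selections as ZIM archives. A minimal sketch of reading articles out of one with python-libzim follows; the local file name and article path are illustrative, and the exact path layout depends on the archive version.

```python
"""Sketch: reading pages from a Kiwix ZIM archive with python-libzim."""
from libzim.reader import Archive

# Hypothetical local copy of the WikiProject Computing selection.
zim = Archive("wikipedia_en_computer_nopic.zim")
print(f"{zim.entry_count} entries, main page: {zim.main_entry.title}")

# Look up one article by its path inside the archive and decode its HTML.
entry = zim.get_entry_by_path("A/Hash_table")  # path format is an assumption
html = bytes(entry.get_item().content).decode("utf-8")
print(entry.title, len(html), "bytes of HTML")
```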
The following MediaWiki instances are scraped periodically using a custom crawler:
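A rough sketch of how such a crawler might enumerate and fetch pages through the standard MediaWiki API is below; the wiki URL is a placeholder and this is not Feep!'s actual crawler.

```python
"""Sketch: walking a MediaWiki instance via its public API."""
import requests


def crawl_wiki(api_url):
    """Walk list=allpages and fetch rendered HTML for each page via action=parse."""
    session = requests.Session()
    params = {"action": "query", "list": "allpages", "aplimit": 50, "format": "json"}
    while True:
        data = session.get(api_url, params=params, timeout=30).json()
        for page in data["query"]["allpages"]:
            parsed = session.get(api_url, params={
                "action": "parse", "pageid": page["pageid"],
                "prop": "text", "format": "json",
            }, timeout=30).json()
            yield page["title"], parsed["parse"]["text"]["*"]
        if "continue" not in data:
            break
        params.update(data["continue"])  # follow API continuation tokens


if __name__ == "__main__":
    for title, html in crawl_wiki("https://wiki.example.org/w/api.php"):  # placeholder URL
        print(title, len(html))
        break
```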
Data for Stack Exchange sites is loaded via the Stack Exchange data dumps (hosted on the Internet Archive) and updated approximately quarterly. The following sites are currently included in the index:
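Each per-site dump is a 7z archive containing XML files such as Posts.xml. The sketch below streams questions out of an extracted Posts.xml; the file name is illustrative and the loader is a simplification of whatever Feep! actually runs.

```python
"""Sketch: streaming questions from a Stack Exchange data-dump Posts.xml."""
import xml.etree.ElementTree as ET


def iter_questions(posts_xml):
    """Yield (title, body_html, tags) for each question row in Posts.xml."""
    for _, elem in ET.iterparse(posts_xml, events=("end",)):
        if elem.tag == "row":
            if elem.get("PostTypeId") == "1":  # PostTypeId 1 = question
                yield elem.get("Title"), elem.get("Body"), elem.get("Tags")
            elem.clear()  # keep memory flat while streaming large dumps


if __name__ == "__main__":
    for title, body, tags in iter_questions("stackoverflow.com-Posts.xml"):
        print(title, tags)
        break
```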
The Feep! index also includes pages that have been linked to but whose contents we do not yet have. These appear on results pages with the message:
This page may be related to your search, but no description is available because it has not yet been crawled.
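One way to picture this is a link-only index record that falls back to the placeholder message when no crawled body exists; the field names below are hypothetical, not Feep!'s actual schema.

```python
"""Sketch: a link-only (uncrawled) result falling back to the placeholder snippet."""
from __future__ import annotations
from dataclasses import dataclass

UNCRAWLED_MESSAGE = (
    "This page may be related to your search, but no description is "
    "available because it has not yet been crawled."
)


@dataclass
class IndexedPage:
    url: str
    title: str | None = None   # may be unknown for link-only entries
    body: str | None = None    # None until the page is actually crawled

    def snippet(self) -> str:
        """Return the result snippet, or the placeholder for uncrawled pages."""
        return self.body[:200] if self.body else UNCRAWLED_MESSAGE


print(IndexedPage(url="https://example.com/unseen").snippet())
```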