Usually it would refer to freely available software, which developers could 'fork' or build upon, but in this case it means 'open'/unencrypted websites that web crawlers traverse. Google does this for its search engine, and the Wayback Machine does it for archiving.
Say you have a website domain, and on that domain there are several URLs to pages within that domain. The bot will look at each of those pages, run whatever procedure, and keep going until all 'available' pages are noted. You might have, say, one page that isn't directly linked to anywhere; that wouldn't be detected by the crawler. If your Facebook page cannot be accessed without cookies or other clearance, you should be fine too (provided said website isn't providing a backdoor for the NSA). There's a rough sketch of that traversal below.
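To make the idea concrete, here's a minimal breadth-first crawler sketch in Python. It assumes the third-party `requests` and `beautifulsoup4` packages, and `example.com` is just a placeholder domain; real crawlers also respect robots.txt, rate limits, etc. The point it illustrates is that a crawler only ever discovers pages something else links to.

```python
# Minimal same-domain crawler: follow links breadth-first until no new
# pages turn up. Pages with no inbound links, or pages that require
# cookies/login, are never discovered.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def crawl(start_url: str) -> set[str]:
    domain = urlparse(start_url).netloc
    seen = {start_url}          # every URL we've discovered so far
    queue = deque([start_url])  # URLs still waiting to be fetched
    while queue:
        url = queue.popleft()
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # unreachable page; skip it
        soup = BeautifulSoup(resp.text, "html.parser")
        for tag in soup.find_all("a", href=True):
            link = urljoin(url, tag["href"])
            # stay on the same domain and don't revisit pages
            if urlparse(link).netloc == domain and link not in seen:
                seen.add(link)
                queue.append(link)
    return seen


# Placeholder domain, for illustration only:
# print(crawl("https://example.com/"))
```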
Saturday, June 15, 2013 11:54:01 AM
DrProfessor, it's software that's been produced with the source code available, sometimes built from contributed parts under a license requiring the source code to remain available: Linux, BSD and the like...