A robots.txt file is a plain text file saved at the root of a website's server. It tells search engine crawlers whether and when they may visit a website's subpages and include them in their index. In this way, certain subpages can be kept out of the search results.
For example, a robots.txt file can keep a website's archives out of the search results. Some search engines, however, choose to ignore robots.txt. If a subpage must truly be hidden from search engines, it should be password protected.
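The archive example above could look like the following robots.txt sketch; the `/archives/` path is a hypothetical placeholder for wherever a site keeps its archive pages:

```
# Applies to all crawlers
User-agent: *
# Ask crawlers not to visit anything under /archives/ (hypothetical path)
Disallow: /archives/
```

The file must be reachable at the site root (e.g. `https://example.com/robots.txt`) for crawlers to find it. Note that `Disallow` is a request, not an enforcement mechanism, which is why password protection remains the only reliable way to hide a page.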