A robots.txt file tells search engine crawlers which pages of your website they can access. The most common reason for blocking a crawler is to avoid overloading your site with requests, which could slow it down or cause it to crash.
You can use robots.txt to stop crawlers working through unimportant pages of your site, but it's worth remembering that this does not necessarily stop those pages appearing in SERPs: a blocked page can still be indexed and shown to users if other sites link to it.
Robots.txt can be useful for preventing pages on your site from timing out or crashing under crawler load, but it shouldn't be used on pages that you want consumers to find, such as the homepage of your website.
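As a rough sketch, a minimal robots.txt looks like the lines below. The /cart/ and /search/ paths are placeholders for whatever low-value sections your own site has, and the sitemap URL is an example:

User-agent: *
Disallow: /cart/
Disallow: /search/

Sitemap: https://www.example.com/sitemap.xml

The User-agent: * line means the rules apply to all crawlers, and each Disallow line asks them not to request URLs under that path. Because robots.txt only manages crawling, a page you need removed from search results should instead use a noindex directive and stay crawlable so search engines can see it.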