MetaMoJi Crawler

About MetaMoJi Crawler

MetaMoJi Crawler is a robot (web crawler) developed and operated by MetaMoJi Corporation. It indexes web pages in order to support research and development of new web services that make use of publicly accessible information on the Internet. We take great care to avoid causing trouble while crawling, but if you prefer not to have your pages crawled, you can block the crawler by following the steps described below.

How to Block Crawling

MetaMoJi Crawler complies with the Robots Exclusion Protocol and the Robots META tag. If you wish to avoid being crawled, do either of the following.

Block with The Robots Exclusion Protocol

Access by web crawlers can be controlled by placing a text file named “robots.txt” in the root directory of the web server. For example, if your web pages are served from http://www.yourserver.yourdomain/, the file must be accessible at http://www.yourserver.yourdomain/robots.txt. To block MetaMoJi Crawler from all content, write the following:

User-agent: MetaMoJiCrawler
Disallow: /

robots.txt consists of lines beginning with “User-agent:” or “Disallow:”. After “User-agent:”, specify the name of the crawler whose access you wish to control. After “Disallow:”, specify the path you wish to prohibit access to.
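The effect of such rules can be checked with Python's standard-library robots.txt parser. This is a minimal sketch: it parses the example rules above directly from a string (rather than fetching a live robots.txt) and asks whether two user agents may fetch a page. The URL is the placeholder from the example above.

```python
from urllib.robotparser import RobotFileParser

# The example rules from above: block MetaMoJiCrawler from all paths.
rules = """\
User-agent: MetaMoJiCrawler
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# MetaMoJiCrawler is blocked everywhere; crawlers not named in any
# rule group remain allowed.
print(parser.can_fetch("MetaMoJiCrawler", "http://www.yourserver.yourdomain/page.html"))  # False
print(parser.can_fetch("OtherBot", "http://www.yourserver.yourdomain/page.html"))  # True
```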

Block with The Robots META tag

If you are not the web site administrator and cannot place robots.txt on the server, you can block web crawlers with the Robots META tag instead. With this mechanism, a crawler’s behavior is controlled by META tags embedded in HTML files. Access by web crawlers can be blocked by placing the following tag in the <head> section of each HTML file you wish to protect:

<meta name="robots" content="noindex,nofollow">

This META tag tells crawlers not to index the page (noindex) and not to follow links from it (nofollow).
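A crawler that honors the Robots META tag must find it while parsing the page. The sketch below shows one way to do that with Python's standard-library HTML parser; the HTML snippet and the class name RobotsMetaParser are illustrative, not part of MetaMoJi Crawler itself.

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the directives from a <meta name="robots"> tag, if any."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            content = attrs.get("content", "")
            # Directives are comma-separated, case-insensitive.
            self.directives = [d.strip().lower() for d in content.split(",")]

html = '<html><head><meta name="robots" content="noindex,nofollow"></head></html>'
parser = RobotsMetaParser()
parser.feed(html)
print(parser.directives)  # ['noindex', 'nofollow']
```

A well-behaved crawler would skip indexing when "noindex" appears in the collected directives, and skip link extraction when "nofollow" appears.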

Contact Information

If you have any questions regarding this matter, please contact us at the following:

MetaMoJi Corporation