Making `wget` not save the page

Solution 1:

You can redirect the output of wget to /dev/null (or NUL on Windows):

wget http://www.example.com -O /dev/null

The page is still fully downloaded (bandwidth is used), but it is discarded instead of being saved to disk.
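If you also want to suppress wget's progress messages, the -q flag can be combined with this (the URL is just a placeholder):

wget -q http://www.example.com -O /dev/null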

Solution 2:

If you don't want to save the file, and you have accepted the solution of downloading the page to /dev/null, presumably you are using wget not to fetch and parse the page contents.

If your real need is to trigger some remote action, or to check that the page exists, it would be better to avoid downloading the HTML body at all.

Play with wget's options to retrieve only what you really need, e.g. the HTTP headers or the request status:

  • if you need to check that the page is OK (i.e., that the status returned is 200), you can do the following (a scripted version of this check follows the list):

    wget --no-cache --spider http://your.server.tld/your/page.html
    
  • if you want to parse the headers returned by the server, do the following:

    wget --no-cache -S http://your.server.tld/your/page.html
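
Since wget exits with a non-zero status when the spider check fails, the result is easy to use from a script. A minimal sketch, using the same placeholder URL as above:

    if wget -q --spider http://your.server.tld/your/page.html; then
        echo "page is OK"
    else
        echo "page is not reachable"
    fi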
    

See the wget man page for further options.
Also consider lynx as an alternative to wget.


Solution 3:

If you also want to print the result to the console, you can do:

wget -qO- http://www.example.com
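Here -q silences wget's own messages and -O- writes the document to standard output, so the content can be piped straight into another tool. For instance (grepping for the title is just an illustration):

wget -qO- http://www.example.com | grep -i "<title>"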

Solution 4:

The --delete-after option tells wget to delete the local file once the download has completed:

$ wget http://www.somewebsite.com -O foo.html --delete-after
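--delete-after is more commonly combined with recursive retrieval, for instance to prefetch pages through a caching proxy without keeping local copies (the host below is a placeholder):

$ wget -r -nd --delete-after http://your.server.tld/some/page/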


Solution 5:

Another alternative is to use a tool like curl, which by default outputs the remote content to stdout instead of saving it to a file.
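For example, the equivalent of the wget commands above (-s silences curl's progress meter and -o /dev/null discards the body):

curl -s http://www.example.com -o /dev/null

Without -o, curl prints the page to stdout, matching the behavior of Solution 3's wget -qO-.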
