Why would curl and wget result in a 403 Forbidden?

An HTTP request sent by a browser may contain headers that curl or wget do not set by default. For example:

  • Cookie: this is the most likely reason why a request would be rejected; I have seen this happen on download sites. Given a cookie key=val, you can set it with the -b key=val (or --cookie key=val) option for curl.
  • Referer (sic): when clicking a link on a web page, most browsers send the current page as the referrer. It should not be relied on, but even eBay failed to reset a password when this header was absent. So yes, it may happen. The curl options for this are -e URL and --referer URL.
  • Authorization: this is becoming less popular now because sites have no control over the appearance of the username/password dialog, but it is still possible. It can be set in curl with the -u user:password (or --user user:password) option.
  • User-Agent: some requests will yield different responses depending on the User-Agent header. This can be used in a good way (serving the real download rather than a list of mirrors) or in a bad way (rejecting user agents that do not start with Mozilla, or that contain Wget or curl). See the combined example after this list.

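For illustration, here is how these headers could be combined in a single curl invocation. Every value below (the cookie, the referrer URL, the credentials, the user agent, and the target URL) is a placeholder to be replaced with what your browser actually sends:

curl -b 'key=val' \
     -e 'https://example.com/download-page' \
     -u 'user:password' \
     --user-agent 'Mozilla/5.0' \
     'https://example.com/file'
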
You can normally use your browser's developer tools (both Firefox and Chrome support them) to read the headers it sends. If the connection is not encrypted (that is, not using HTTPS), you can also use a packet sniffer such as Wireshark for this purpose.
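
For the packet sniffer route, here is a minimal sketch using tshark (Wireshark's command-line companion), assuming an unencrypted connection on interface eth0; it prints each header line of the captured HTTP requests (interface names and filter options may vary between tshark versions):

tshark -i eth0 -Y 'http.request' -T fields -e http.request.line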

Besides these headers, websites may also trigger actions behind the scenes that change state. For example, when a page is opened, a request may be performed in the background to prepare the download link, or a redirect may happen on the page. These actions typically make use of JavaScript, but there may also be a hidden frame to facilitate them.
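
To replay such a flow with curl, you can fetch the page first while storing its cookies, then request the file with those cookies. A sketch with placeholder URLs: -v prints each request's headers, -L follows redirects, and -c/-b write and read a cookie jar:

curl -v -L -c cookies.txt 'https://example.com/download-page'
curl -v -L -b cookies.txt -e 'https://example.com/download-page' 'https://example.com/file'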

If you are looking for a method to easily fetch files from a download site, have a look at plowdown, included with plowshare.
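
Its basic invocation takes the URL of the download page (the URL here is a placeholder):

plowdown 'https://example.com/file/abc123'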


I tried all of the above with no luck; then I used the browser's developer tools to get the User-Agent string. Once I added the following, it succeeded:

--user-agent="Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"
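
The equivalent option for wget is --user-agent (or -U); the target URL here is a placeholder:

wget --user-agent="Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36" 'https://example.com/file'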

Just to add to the above answers: you can use the "Copy as cURL" feature present in Chrome developer tools (since v26.0) and Firebug (since v1.12). You can access it by right-clicking the request row in the Network tab.
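
The generated command bundles all the headers the browser sent. Its exact contents vary per request, but it looks roughly like this (all values here are illustrative):

curl 'https://example.com/file' \
  -H 'User-Agent: Mozilla/5.0' \
  -H 'Referer: https://example.com/download-page' \
  -H 'Cookie: session=abc123' \
  --compressed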
