How do I extract all the external links from a web page and save them to a file?

You will need two tools, lynx and awk. Try this:

$ lynx -dump http://www.google.com.br | awk '/http/{print $2}' > links.txt
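A note on why the awk pattern works: lynx -dump ends its output with a numbered References section, one line per link in the form "   1. http://...", so $2 is the URL. Matching a bare /http/ can also catch body text that merely mentions a URL, so anchoring the pattern to the reference format is safer. A sketch; the exact indentation may vary between lynx versions:

$ lynx -dump http://www.google.com.br | awk '/^ *[0-9]+\. http/{print $2}' > links.txt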

If you need the lines numbered, use the nl command. Try this:

$ lynx -dump http://www.google.com.br | awk '/http/{print $2}' | nl > links.txt
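If the same link appears several times on the page, you can deduplicate with sort -u before numbering (sort -u sorts the list and keeps one copy of each line):

$ lynx -dump http://www.google.com.br | awk '/http/{print $2}' | sort -u | nl > links.txt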

Here's an improvement on lelton's answer: you don't need awk at all, since lynx has some useful options for this.

lynx -listonly -nonumbers -dump http://www.google.com.br

If you want the links numbered:

lynx -listonly -dump http://www.google.com.br
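Since the question asks for external links specifically, you can drop same-site URLs with grep -v. A rough sketch, assuming the page's own host is www.google.com.br (adjust the pattern for the site you are scraping):

lynx -listonly -nonumbers -dump http://www.google.com.br | grep -v '://www\.google\.com\.br' > links.txt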