Is there any command-line, generic HTTP proxy (like Squid)?
Both Perl and Python (and probably Ruby as well) have simple kits that you can use to quickly build simple HTTP proxies.
In Perl, use HTTP::Proxy. Here's the 3-line example from the documentation. Add filters to filter, log or rewrite requests or responses; see the documentation for examples.
```perl
use HTTP::Proxy;
my $proxy = HTTP::Proxy->new( port => 3128 );
$proxy->start;
```
In Python, use SimpleHTTPServer (this is Python 2; the module was merged into http.server in Python 3). Here's some sample code lightly adapted from effbot. Adapt the do_GET method (or others) to filter, log or rewrite requests or responses.
```python
import SocketServer
import SimpleHTTPServer
import urllib

class Proxy(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_GET(self):
        self.copyfile(urllib.urlopen(self.path), self.wfile)

httpd = SocketServer.ForkingTCPServer(('', 3128), Proxy)
httpd.serve_forever()
```
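That example is Python 2. A rough Python 3 equivalent of the same idea, assuming the same port 3128, uses http.server, socketserver and urllib.request (a sketch, not production code):

```python
import http.server
import socketserver
import urllib.request

class Proxy(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        # For proxy-style requests the client puts the absolute URL in
        # the request line, so self.path is a full http://... URL.
        with urllib.request.urlopen(self.path) as upstream:
            self.send_response(upstream.status)
            for name, value in upstream.getheaders():
                # Skip hop-by-hop headers; urllib has already undone
                # any chunked transfer encoding for us.
                if name.lower() not in ("transfer-encoding", "connection"):
                    self.send_header(name, value)
            self.end_headers()
            self.copyfile(upstream, self.wfile)  # stream the body through

def serve(port=3128):
    # Blocks forever; point the browser's HTTP proxy at localhost:3128.
    with socketserver.ThreadingTCPServer(("", port), Proxy) as httpd:
        httpd.serve_forever()
```

As before, override do_GET (and its siblings) to filter, log or rewrite traffic.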
This may not be the best solution, but any proxy you use will listen on a specific host:port, so the netcat solution will still work, albeit you'll have to pick apart the proxy metadata to make sense of the capture.
The easiest way to do this might be to use any random anonymizing proxy out there and just channel all the traffic through netcat: set your browser's proxy to localhost:port and have netcat forward the data to the real proxy.
If you want a local proxy, then a SOCKS5 proxy via ssh -D <port> localhost is probably your easiest option. Obviously, you need to tell your browser to use a "socks" proxy rather than an "http" proxy.
So, something like this (assuming your local machine accepts incoming ssh connections):
```
ssh -fN -D 8000 localhost
nc -l 8080 | tee capturefile | nc localhost 8000
```
Naturally, that'll only work for one browser connection attempt, and then exit, and I have not attempted to forward the return data to the browser, so you'll need your full
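To get around both limitations (one-shot, no return path), the same tap can be written as a small bidirectional relay. The sketch below uses Python instead of nc; the defaults mirror the pipeline above (listen on 8080, forward to the real proxy on localhost:8000, log to capturefile), but all of those are just assumed values to adjust:

```python
import socket
import threading

def pump(src, dst, logfile, tag):
    # Copy bytes one way, appending each chunk to the capture file with a
    # direction marker (>>> client-to-proxy, <<< proxy-to-client).
    with open(logfile, "ab") as log:
        while True:
            data = src.recv(4096)
            if not data:
                break
            log.write(tag)
            log.write(data)
            log.write(b"\n")
            log.flush()
            dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)  # propagate EOF to the other side
    except OSError:
        pass

def start_relay(listen, target, logfile="capturefile"):
    # Listen on `listen`, forward every connection to `target`, and return
    # the bound port. All work happens in background daemon threads.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(listen)
    srv.listen(5)

    def accept_loop():
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection(target)
            threading.Thread(target=pump,
                             args=(client, upstream, logfile, b">>> "),
                             daemon=True).start()
            threading.Thread(target=pump,
                             args=(upstream, client, logfile, b"<<< "),
                             daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv.getsockname()[1]
```

Something like start_relay(("", 8080), ("localhost", 8000)) then plays the role of the nc pipe, except each new browser connection gets its own pair of forwarding threads and replies flow back to the browser.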