How do I scrape only the <body> tag off of a website

I'd suggest taking advantage of the HTML Agility Pack to do the HTML parsing/manipulation.

You can easily select the body like this:

using HtmlAgilityPack;

var webGet = new HtmlWeb();
var document = webGet.Load(url);
var body = document.DocumentNode.SelectSingleNode("//body");
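
From there, reading the markup is just a property access: body.OuterHtml includes the <body> tag itself, while body.InnerHtml gives you only what's between the tags (the variable names below are just for illustration):

string bodyWithTag = body.OuterHtml;    // "<body ...> ... </body>"
string bodyContents = body.InnerHtml;   // everything between the tags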

Still, the simplest/fastest (though least accurate) method:

// Ordinal comparison is the right choice for markup; culture rules don't apply to tags.
int start = response.IndexOf("<body", StringComparison.OrdinalIgnoreCase);
int end = response.LastIndexOf("</body>", StringComparison.OrdinalIgnoreCase);
return response.Substring(start, end - start + "</body>".Length);

Obviously, if there's JavaScript in the HEAD tag like...

document.write("<body>");

Then you'll end up with a little more than you wanted.
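
If you do go this route, it's also worth guarding against pages where one of the tags is missing, since IndexOf/LastIndexOf return -1 in that case. A minimal defensive sketch (the ExtractBody name is mine, not from the snippet above):

public static string ExtractBody(string response)
{
    int start = response.IndexOf("<body", StringComparison.OrdinalIgnoreCase);
    int end = response.LastIndexOf("</body>", StringComparison.OrdinalIgnoreCase);

    // Fall back to the full page if either tag is missing or out of order.
    if (start == -1 || end == -1 || end < start)
        return response;

    return response.Substring(start, end - start + "</body>".Length);
}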


I think your best option is to use a lightweight HTML parser (something like Majestic 12, which, based on my tests, is roughly 50-100% faster than HTML Agility Pack) and only process the nodes you're interested in (anything between <body> and </body>). Majestic 12 is a little harder to use than HTML Agility Pack, but if you're looking for performance then it will definitely help you!

This will get you the closest to what you're asking for, but you will still have to download the entire page; I don't think there is a way around that. What you save on is generating the DOM nodes for all the other content (aside from the body). You will still have to parse it, but you can skip the entire content of any node you're not interested in processing.

Here is a good example of how to use the M12 parser.

I don't have a ready example of how to grab the body, but I do have one that only grabs the links, and with a little modification it will get there. Here is the rough version:

GrabBody(ParserTools.OpenM12Parser(_response.BodyBytes));
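
_response here is whatever object holds the downloaded page in my code; if you don't have one, a plain WebClient will get you the raw bytes M12 expects (a minimal sketch -- the url variable is assumed):

using System.Net;

byte[] bodyBytes;
using (var client = new WebClient())
{
    // M12 parses raw bytes, so there's no need to decode the response to a string first.
    bodyBytes = client.DownloadData(url);
}

GrabBody(ParserTools.OpenM12Parser(bodyBytes));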

You need to open the M12 parser first (the example project that comes with M12 has comments that detail exactly how all of these options affect performance, and they do!):

public static HTMLparser OpenM12Parser(byte[] buffer)
{
    HTMLparser parser = new HTMLparser();
    parser.SetChunkHashMode(false);
    parser.bKeepRawHTML = false;
    parser.bDecodeEntities = true;
    parser.bDecodeMiniEntities = true;

    // Mini entities only need explicit initialisation when full entity
    // decoding is off, so with the settings above this guard never fires.
    if (!parser.bDecodeEntities && parser.bDecodeMiniEntities)
        parser.InitMiniEntities();

    parser.bAutoExtractBetweenTagsOnly = true;
    parser.bAutoKeepScripts = true;
    parser.bAutoMarkClosedTagsWithParamsAsOpen = true;
    parser.CleanUp();
    parser.Init(buffer);
    return parser;
}

Parse the body:

public void GrabBody(HTMLparser parser)
{
    // The parser returns tokens of type HTMLchunk -- warning: do NOT destroy
    // the chunk until parsing ends, because HTMLparser re-uses the object.
    HTMLchunk chunk = null;

    // we parse until the returned chunk is null, indicating the end of input
    while ((chunk = parser.ParseNext()) != null)
    {
        switch (chunk.oType)
        {
            // matched open tag, ie <a href="">
            case HTMLchunkType.OpenTag:
                if (chunk.sTag == "body")
                {
                    // Start generating the DOM node (as shown in the previous example link)
                }
                break;

            // matched close tag, ie </a>
            case HTMLchunkType.CloseTag:
                break;

            // matched normal text
            case HTMLchunkType.Text:
                break;

            // matched HTML comment, that's the stuff between <!-- and -->
            case HTMLchunkType.Comment:
                break;
        }
    }
}
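
To turn that skeleton into something that actually captures the body, one simple approach is to buffer the raw chunk HTML between the open and close tags. This sketch is mine rather than the original example's, and it assumes you set parser.bKeepRawHTML = true in OpenM12Parser so that chunk.oHTML carries each chunk's raw markup (check the flag comments in the M12 sample project to confirm):

using System.Text;

public string GrabBodyHtml(HTMLparser parser)
{
    var body = new StringBuilder();
    bool insideBody = false;

    HTMLchunk chunk = null;
    while ((chunk = parser.ParseNext()) != null)
    {
        if (chunk.oType == HTMLchunkType.OpenTag && chunk.sTag == "body")
        {
            insideBody = true;    // start collecting from the next chunk on
            continue;
        }

        if (chunk.oType == HTMLchunkType.CloseTag && chunk.sTag == "body")
            break;                // everything after </body> is irrelevant

        // Assumes bKeepRawHTML = true so oHTML holds the chunk's raw markup.
        if (insideBody)
            body.Append(chunk.oHTML);
    }

    return body.ToString();
}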

Generating the DOM nodes is tricky, but the Majestic12ToXml class will help you do that. Like I said, this is by no means equivalent to the 3-liner you saw with HTML Agility Pack, but once you get the tools down you will be able to get exactly what you need for a fraction of the performance cost and in probably just as many lines of code.