Separate output file for every URL given in a Scrapy spider's start_urls list

I'd implement a more explicit approach (not tested):

  • configure the list of possible categories in settings.py:

    CATEGORIES = ['Arts', 'Business', 'Computers']
    
  • define your start_urls based on the setting:

    # the crawler settings are not available at class-definition time,
    # so import the project's settings module directly;
    # "myproject" is a placeholder for your project package
    from myproject import settings

    start_urls = ['http://www.dmoz.org/%s' % category for category in settings.CATEGORIES]
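
    # alternatively, build the requests in start_requests(), where the
    # crawler settings are available via self.settings
    # (assumes "import scrapy" at the top of the module):
    def start_requests(self):
        for category in self.settings['CATEGORIES']:
            yield scrapy.Request('http://www.dmoz.org/%s' % category)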
    
  • add a category Field to the Item class; a minimal sketch, assuming an item along the lines of the tutorial's DmozItem:
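
    import scrapy

    class DmozItem(scrapy.Item):
        title = scrapy.Field()     # existing fields are placeholders
        link = scrapy.Field()
        category = scrapy.Field()  # filled in by the spider from response.url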

  • in the spider's parse method, set the category field according to the current response.url, e.g.:

    def parse(self, response):
        ...
        # self.settings exposes the project settings from inside the spider
        item['category'] = next(category for category in self.settings['CATEGORIES'] if category in response.url)
        ...
    
  • in an item pipeline, open an exporter for every category and pick which exporter to use based on item['category'] (the class name and output paths below are placeholders):

    from scrapy.exporters import XmlItemExporter

    class CategoryExportPipeline:

        def open_spider(self, spider):
            # one file and one exporter per configured category;
            # assumes the output/ directory already exists
            self.files = {}
            self.exporters = {}
            for category in spider.settings['CATEGORIES']:
                f = open('output/%s.xml' % category, 'wb')
                exporter = XmlItemExporter(f)
                exporter.start_exporting()
                self.files[category] = f
                self.exporters[category] = exporter

        def close_spider(self, spider):
            # finish each export and close the underlying files
            for category, exporter in self.exporters.items():
                exporter.finish_exporting()
                self.files[category].close()

        def process_item(self, item, spider):
            # route the item to the exporter matching its category
            self.exporters[item['category']].export_item(item)
            return item
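
    # finally, enable the pipeline in settings.py; the module path is a
    # placeholder, adjust it to your project layout:
    ITEM_PIPELINES = {
        'myproject.pipelines.CategoryExportPipeline': 300,
    }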
    

You would probably need to tweak it a bit to make it work, but I hope you get the idea: store the category inside the item being processed, then choose the file to export to based on the item's category value.

Hope that helps.