Using Python Requests with JavaScript pages

While Selenium might seem tempting and useful, it has one main problem that can't be fixed: performance. Because it emulates every single thing a browser does, it needs a lot more power. Even PhantomJS does not compete with a simple request. I recommend using Selenium only when you really need to click buttons. If you only need JavaScript, I recommend PyQt (check https://www.youtube.com/watch?v=FSH77vnOGqU to learn it).

However, if you do want to use Selenium, I recommend Chrome over PhantomJS. Many users have problems with PhantomJS, where a website simply does not work in Phantom. Chrome can be headless (non-graphical) too!

First, make sure you have installed ChromeDriver, which Selenium depends on to drive Google Chrome.

Then, make sure you have Google Chrome version 60 or higher by opening chrome://settings/help in the address bar.

Now, all you need is the following code:

from selenium.webdriver.chrome.options import Options
from selenium import webdriver

chrome_options = Options()
chrome_options.add_argument("--headless")

driver = webdriver.Chrome(options=chrome_options)  # older Selenium versions use the chrome_options= keyword instead

If you do not know how to use Selenium, here is a quick overview:

driver.get("https://www.google.com") #Browser goes to google.com

Finding elements: Use either the ELEMENT (singular) or ELEMENTS (plural) form of these methods. Examples:

driver.find_element_by_css_selector("div.logo-subtext") #Find your country in Google. (singular)
  • driver.find_element(s)_by_css_selector(css_selector) # Every element that matches this CSS selector
  • driver.find_element(s)_by_class_name(class_name) # Every element with the following class
  • driver.find_element(s)_by_id(id) # Every element with the following ID
  • driver.find_element(s)_by_link_text(link_text) # Every link element with the full link text
  • driver.find_element(s)_by_partial_link_text(partial_link_text) # Every link element with partial link text
  • driver.find_element(s)_by_name(name) # Every element where name=argument
  • driver.find_element(s)_by_tag_name(tag_name) # Every element with the tag name argument

Ok! I found an element (or elements list). But what do I do now?

Here are the methods and properties you can use on an element elem:

  • elem.tag_name # The tag name, e.g. "button" for a <button> element.
  • elem.get_attribute("id") # Returns the ID of an element.
  • elem.text # The inner text of an element.
  • elem.clear() # Clears a text input.
  • elem.is_displayed() # True for visible elements, False for invisible elements.
  • elem.is_enabled() # True for an enabled input, False otherwise.
  • elem.is_selected() # Is this radio button or checkbox element selected?
  • elem.location # A dictionary representing the X and Y location of an element on the screen.
  • elem.click() # Click elem.
  • elem.send_keys("thelegend27") # Type thelegend27 into elem (useful for text inputs)
  • elem.submit() # Submit the form in which elem takes part.

Special commands:

  • driver.back() # Click the Back button.
  • driver.forward() # Click the Forward button.
  • driver.refresh() # Refresh the page.
  • driver.quit() # Close the browser including all the tabs.
  • foo = driver.execute_script("return 'hello';") # Execute javascript (COULD TAKE RETURN VALUES!)

Good news: there is now a requests module that supports javascript: https://pypi.org/project/requests-html/

from requests_html import HTMLSession

session = HTMLSession()

r = session.get('http://www.yourjspage.com')

r.html.render()  # this call executes the JS in the page (the first run downloads a Chromium binary it uses for rendering)

As a bonus this wraps BeautifulSoup, I think, so you can do things like

r.html.find('#myElementID', first=True).text

which returns the text content of the element as you'd expect. (Without first=True, find() returns a list of every matching element.)


You are going to have to make the same request (using the Requests library) that the JavaScript is making. You can use any number of tools (including the developer tools built into Chrome and Firefox) to inspect the HTTP request coming from the JavaScript, and then simply make that request yourself from Python.
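For example, once the browser's Network tab shows you the endpoint the page's JavaScript calls, you can reproduce it with Requests. The URL, parameters, and header below are placeholders for whatever you actually observe:

```python
import requests

# Hypothetical endpoint and parameters observed in the browser's Network tab:
url = "http://www.yourjspage.com/api/items"
params = {"page": 1, "q": "shoes"}
headers = {"X-Requested-With": "XMLHttpRequest"}  # mimic the in-page XHR

session = requests.Session()

# Preparing the request shows the exact URL the JavaScript would have hit,
# without sending anything over the network yet:
prepared = session.prepare_request(
    requests.Request("GET", url, params=params, headers=headers)
)
print(prepared.url)  # http://www.yourjspage.com/api/items?page=1&q=shoes

# To actually fetch it (endpoints like this usually return JSON):
# response = session.send(prepared, timeout=10)
# data = response.json()
```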