python – How to use Selenium in Databricks, move downloaded files to mounted storage, and keep Chrome and ChromeDriver versions in sync?

Here is a guide to installing Selenium, Chrome, and ChromeDriver on Databricks. It also moves a file downloaded via Selenium to your mounted storage. Each numbered step should go in its own cell.

  1. Install Selenium
%pip install selenium
  1. Do your imports
import pickle as pkl
from datetime import datetime
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
  1. Download the latest ChromeDriver to the DBFS root storage /tmp/. The curl command gets the latest ChromeDriver version number and stores it in the version variable. (When you embed these commands in the init script heredoc later, note the escape before the $.)
%sh
version=`curl -sS https://chromedriver.storage.googleapis.com/LATEST_RELEASE`
wget -N https://chromedriver.storage.googleapis.com/${version}/chromedriver_linux64.zip -O /tmp/chromedriver_linux64.zip

  1. Unzip the file to a new folder in the DBFS root /tmp/. I tried a non-root path and it did not work.
%sh
unzip /tmp/chromedriver_linux64.zip -d /tmp/chromedriver/
  1. Get the latest Chrome build and install it.
%sh
sudo curl -sS -o - https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
sudo echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list
sudo apt-get -y update
sudo apt-get -y install google-chrome-stable
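Since the whole point is keeping Chrome and ChromeDriver in sync, it is worth verifying after install that their major versions match. A minimal sketch — the helper functions are my own, and the commented-out `subprocess` calls assume the install paths used in the steps above:

```python
import re
import subprocess

def major_version(version_output):
    """Extract the major version from output like 'Google Chrome 114.0.5735.90'."""
    match = re.search(r"(\d+)\.\d+\.\d+", version_output)
    if not match:
        raise ValueError(f"No version found in: {version_output!r}")
    return int(match.group(1))

def versions_in_sync(chrome_output, driver_output):
    """ChromeDriver is compatible when its major version matches Chrome's."""
    return major_version(chrome_output) == major_version(driver_output)

# On the cluster you would feed it the real output, e.g.:
# chrome = subprocess.run(["google-chrome", "--version"],
#                         capture_output=True, text=True).stdout
# driver = subprocess.run(["/tmp/chromedriver/chromedriver", "--version"],
#                         capture_output=True, text=True).stdout
# print(versions_in_sync(chrome, driver))
```

If the check fails, pin the ChromeDriver version downloaded in step 3 to match whatever Chrome build apt installed.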

** Steps 3–5 can be combined into one command. You can also wrap the following in a shell script and use it as a cluster init script. This is especially useful with job clusters, which run on transient clusters, because init scripts run on every node rather than just the driver. The script also installs Selenium, so you can skip step 1. Paste it into one cell in a new notebook, run it, then point your cluster's init script setting at the file you created under dbfs:/init/. Every time the cluster (or transient job cluster) spins up, Chrome, ChromeDriver, and Selenium are installed on all nodes before your job begins to run.

%sh
# Writes the init script to dbfs:/init/. The file name (selenium-install.sh) is arbitrary -- use any name you like.
# Note the escape before each $ so it is expanded when the init script runs, not when the heredoc is written.
mkdir -p /dbfs/init/
cat > /dbfs/init/selenium-install.sh <<EOF
#!/bin/bash
echo "Install Chrome and ChromeDriver"
version=\$(curl -sS https://chromedriver.storage.googleapis.com/LATEST_RELEASE)
wget -N https://chromedriver.storage.googleapis.com/\${version}/chromedriver_linux64.zip -O /tmp/chromedriver_linux64.zip
unzip /tmp/chromedriver_linux64.zip -d /tmp/chromedriver/
sudo curl -sS -o - https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
sudo echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list
sudo apt-get -y update
sudo apt-get -y install google-chrome-stable
pip install selenium
EOF
cat /dbfs/init/selenium-install.sh
  1. Configure your storage account. This example is for Azure Data Lake Storage Gen2 (ABFS) authenticated with a service principal.
service_principal_id = "YOUR_SP_ID"
service_principal_key = "YOUR_SP_KEY"
tenant_id = "YOUR_TENANT_ID"
directory = "https://login.microsoftonline.com/" + tenant_id + "/oauth2/token"
configs = {"fs.azure.account.auth.type": "OAuth",
       "fs.azure.account.oauth.provider.type": "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
       "fs.azure.account.oauth2.client.id": service_principal_id,
       "fs.azure.account.oauth2.client.secret": service_principal_key,
       "fs.azure.account.oauth2.client.endpoint": directory,
       "fs.azure.createRemoteFileSystemDuringInitialization": "true"}
  1. Configure your mounting location and mount.
mount_point = "/mnt/container-data/"
mount_point_main = "/dbfs/mnt/container-data/"
container = "container-data"
storage_account = "adlsgen2"
storage = "abfss://" + container + "@" + storage_account + ".dfs.core.windows.net/"
utils_folder = mount_point + "utils/selenium/"
raw_folder = mount_point + "raw/"

if not any(mount_point in mount_info for mount_info in dbutils.fs.mounts()):
    dbutils.fs.mount(
        source = storage,
        mount_point = mount_point,
        extra_configs = configs)
    print(mount_point + " has been mounted.")
else:
    print(mount_point + " was already mounted.")
print(f"Utils folder: {utils_folder}")
print(f"Raw folder: {raw_folder}")
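One caveat with the mount check above: a substring test like `mount_point in mount_info` can match more than intended (e.g. "/mnt/data" also matches "/mnt/data-archive"). An exact-match helper is safer; this sketch is my own, assuming you first extract the `mountPoint` strings from `dbutils.fs.mounts()`:

```python
def is_mounted(mount_points, mount_point):
    """Return True if mount_point exactly matches an existing mount.

    mount_points: list of mount point strings, e.g. ["/mnt/container-data"].
    Trailing slashes are stripped so "/mnt/x/" and "/mnt/x" compare equal.
    """
    target = mount_point.rstrip("/")
    return any(p.rstrip("/") == target for p in mount_points)

# Usage on the cluster:
# existing = [m.mountPoint for m in dbutils.fs.mounts()]
# if not is_mounted(existing, mount_point):
#     dbutils.fs.mount(source=storage, mount_point=mount_point, extra_configs=configs)
```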
  1. Create a method for instantiating the Chrome browser. I need to load a cookies file that I placed in my utils folder, which points to /mnt/container-data/utils/selenium. Make sure the browser arguments are set (--no-sandbox, --headless, --disable-dev-shm-usage), or Chrome will not start on the cluster.
def init_chrome_browser(download_path, chrome_driver_path, cookies_path, url):
    """
    Instantiates a Chrome browser.

    Parameters
    ----------
    download_path : str
        The download path to place files downloaded from this browser session.
    chrome_driver_path : str
        The path of the ChromeDriver executable binary.
    cookies_path : str
        The path of the cookie file to load in (.pkl file).
    url : str
        The URL address of the page to initially load.

    Returns
    -------
    The instantiated browser object.
    """
    options = Options()
    prefs = {'download.default_directory': download_path}
    options.add_experimental_option('prefs', prefs)
    options.add_argument('--no-sandbox')
    options.add_argument('--headless')
    options.add_argument('--disable-dev-shm-usage')
    print(f"{datetime.now()}    Launching Chrome...")
    browser = webdriver.Chrome(service=Service(chrome_driver_path), options=options)
    print(f"{datetime.now()}    Chrome launched.")
    print(f"{datetime.now()}    Loading cookies...")
    browser.get(url)  # cookies can only be added for the domain currently loaded
    cookies = pkl.load(open(cookies_path, "rb"))
    for cookie in cookies:
        browser.add_cookie(cookie)
    browser.refresh()  # reload the page so the cookies take effect
    print(f"{datetime.now()}    Cookies loaded.")
    print(f"{datetime.now()}    Browser ready to use.")
    return browser
  1. Instantiate the browser. Set the download location to the DBFS root file system /tmp/downloads. Make sure the cookies path is prefixed with /dbfs so the full cookies path looks like /dbfs/mnt/...
browser = init_chrome_browser(
    download_path="/tmp/downloads",
    chrome_driver_path="/tmp/chromedriver/chromedriver",
    cookies_path="/dbfs" + utils_folder + "cookies.pkl",
    url="https://www.example.com"  # replace with the site you need to visit
)
  1. Do your navigating and any downloads you need.

  1. OPTIONAL: Examine your download location. In this example, I downloaded a CSV file and search the download folder until I find that file format.
import os

for root, directories, filenames in os.walk('/tmp'):
    if any(".csv" in s for s in filenames):
        print(root)
        print(filenames)
  1. Copy the file from the DBFS root /tmp to your mounted storage (/mnt/container-data/raw/). You can rename the file during this operation. Note that you can only access the root file system with the file: prefix when using dbutils.
dbutils.fs.cp("file:/tmp/downloads/file1.csv", f"{raw_folder}file2.csv")
