Automation & Scripting

Automation is arguably Python’s most popular use case for non-developers. A simple script can replace hours of manual data entry, file renaming, or repetitive clicking through websites with a few seconds of execution.


Web scraping involves automatically extracting data from websites.

  • BeautifulSoup: For parsing HTML and extracting text.
  • Selenium / Playwright: For controlling a real browser (useful for sites that require login or render content with JavaScript); a short Playwright sketch follows the scraper example below.
scraper.py
import requests
from bs4 import BeautifulSoup

res = requests.get("https://news.ycombinator.com/")
soup = BeautifulSoup(res.text, "html.parser")

# Get the titles of the top stories
for link in soup.find_all("span", class_="titleline"):
    print(link.text)
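
Note that requests + BeautifulSoup only sees the raw HTML the server sends back. For pages that require login or build their content with JavaScript, drive a real browser instead. Here is a minimal Playwright sketch; the target URL is just a placeholder (install with pip install playwright, then playwright install chromium):

browser_scraper.py
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)  # No visible window
    page = browser.new_page()
    page.goto("https://example.com/")  # Placeholder URL
    print(page.title())  # Title as seen after JavaScript has run
    browser.close()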

Professional automation scripts shouldn’t just print plain text. The rich library allows you to create beautiful progress bars, tables, and formatted logs.

beautiful_output.py
from rich.console import Console
from rich.table import Table

console = Console()

# Define the table and its columns, then fill in the rows
table = Table(title="System Status")
table.add_column("Service", style="cyan")
table.add_column("Status", style="green")
table.add_row("Database", "Online")
table.add_row("API", "Online")

console.print(table)
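
The same library can wrap any loop in a live progress bar via rich.progress.track. A minimal sketch; the time.sleep call here just stands in for real work:

progress_demo.py
import time
from rich.progress import track

# track() renders a progress bar as the loop advances
for _ in track(range(50), description="Processing files..."):
    time.sleep(0.05)  # Placeholder for actual processing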

Instead of running your script manually, you can tell Python to run it on a schedule (e.g., “every morning at 8:00 AM”). The lightweight schedule library handles this with a readable, chainable syntax.

scheduler.py
import schedule
import time

def job():
    print("Checking for new emails...")

# Run job() every day at 08:00
schedule.every().day.at("08:00").do(job)

while True:
    schedule.run_pending()
    time.sleep(60)  # Check once a minute

Python is an excellent “glue” language because it can talk to your operating system’s shell.

Using the subprocess module, you can run any system command (like git, docker, or ls) and capture its output directly into a Python variable.
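
For example, here is a minimal sketch that captures the output of git status (it assumes git is installed and the script runs inside a repository):

glue.py
import subprocess

# Run the command and capture stdout as text instead of bytes
result = subprocess.run(
    ["git", "status", "--short"],
    capture_output=True,
    text=True,
)
print(result.stdout)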


A quick reference of recommended libraries by task:

Task               Recommended Library
HTTP Requests      requests or httpx
File Handling      pathlib and shutil
UI / Formatting    rich
Browser Control    playwright
Parsing HTML       beautifulsoup4