When Madison AI began, artificial intelligence was just breaking into the public conversation. While AI was being hailed as the future, government workflows were still stuck in the past: bogged down by manual processes, repetitive reporting, and inefficient documentation.

The idea for Madison AI didn’t start with me. It came from Dave Solaro, the Assistant County Manager for Washoe County, and Erica Olsen, the CEO of my company. They recognized a major problem: county employees were spending too much time writing staff reports and not enough time doing the jobs they were actually trained for—like engineering, planning, and infrastructure development.

That’s where I came in. My job was to build a system that could use AI to streamline reporting, reduce redundant work, and free up skilled employees to focus on their expertise.

Automating the Mundane, Enhancing Efficiency

Washoe County staff reports required a huge amount of manual effort: pulling data from various sources, reformatting information, and drafting documents. The process was slow, repetitive, and a poor use of time for employees with specialized training.

Code: Scraping Legislative Data for Report Automation


import os
import time
import logging

import requests
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")

def setup_driver():
    """Configure a headless Chrome driver suitable for running unattended."""
    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    options.add_argument("--disable-gpu")
    options.add_argument("--no-sandbox")
    return webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)

def extract_legislative_data(url):
    driver = setup_driver()
    driver.get(url)

    try:
        # Expand every collapsed agenda section so its PDF links enter the DOM.
        expand_buttons = driver.find_elements(By.CLASS_NAME, "expand-button")
        for button in expand_buttons:
            button.click()
            time.sleep(0.5)  # give the page a moment to render the expanded section

        pdf_links = driver.find_elements(By.TAG_NAME, "a")
        os.makedirs("downloads", exist_ok=True)

        for link in pdf_links:
            href = link.get_attribute("href")
            if href and href.endswith(".pdf"):
                # Build a filesystem-safe name from the link text, falling back
                # to the URL's basename when the link has no visible text.
                label = link.text.strip().replace(" ", "_") or os.path.basename(href).removesuffix(".pdf")
                file_name = f"downloads/{label}.pdf"
                # Download with requests instead of shelling out to wget, so the
                # script is portable and failures raise a catchable exception.
                response = requests.get(href, timeout=30)
                response.raise_for_status()
                with open(file_name, "wb") as f:
                    f.write(response.content)
                logging.info(f"Downloaded: {file_name}")

    except Exception as e:
        logging.error(f"Error occurred: {e}")

    finally:
        driver.quit()
        logging.info("Scraping complete.")

if __name__ == "__main__":
    extract_legislative_data("https://washoelegislation.gov/data")

OCR: Turning Scanned Documents into Usable Reports

Many critical government documents weren’t even searchable—they were scanned PDFs. Instead of making employees manually retype them, I implemented Azure Computer Vision OCR to extract and format content for easy inclusion in reports.

Code: OCR for Staff Report Automation


import time
import logging
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import ApiKeyCredentials

logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")

SUBSCRIPTION_KEY = "your-subscription-key"
ENDPOINT = "https://your-vision-endpoint.cognitiveservices.azure.com/"

# The key must be passed as the Ocp-Apim-Subscription-Key header, not positionally.
credentials = ApiKeyCredentials(in_headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY})
client = ComputerVisionClient(ENDPOINT, credentials)

def extract_text_from_pdf(pdf_url):
    try:
        # Submit the document to the asynchronous Read API; the operation ID
        # comes back in the Operation-Location response header.
        response = client.read(pdf_url, raw=True)
        operation_id = response.headers["Operation-Location"].split("/")[-1]

        # Poll until the operation leaves the not_started/running states.
        while True:
            result = client.get_read_result(operation_id)
            if result.status not in [OperationStatusCodes.not_started, OperationStatusCodes.running]:
                break
            time.sleep(2)

        if result.status == OperationStatusCodes.succeeded:
            extracted_text = [line.text for page in result.analyze_result.read_results for line in page.lines]
            return "\n".join(extracted_text)

        logging.warning("OCR process did not succeed.")
        return ""

    except Exception as e:
        logging.error(f"OCR request failed: {e}")
        return ""

if __name__ == "__main__":
    pdf_text = extract_text_from_pdf("https://washoe.gov/documents/meeting_minutes.pdf")
    logging.info(f"Extracted OCR Text: {pdf_text[:500]}...")

AI-Driven Search & Summarization

Instead of manually digging through multiple sources, county employees needed a way to quickly find and summarize relevant information. I built an Azure AI Search indexer and integrated Azure OpenAI to provide automated summaries of lengthy reports.

Code: AI Search Index for Reports


import logging
import requests

logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")

SEARCH_ENDPOINT = "https://your-search-endpoint.search.windows.net"
SEARCH_INDEX = "staff-reports-index"
API_KEY = "your-api-key"

HEADERS = {
    "Content-Type": "application/json",
    "api-key": API_KEY
}

# One document per staff report: full-text search over title and summary,
# date filtering/sorting, and department facets for narrowing results.
INDEX_DEFINITION = {
    "name": SEARCH_INDEX,
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {"name": "title", "type": "Edm.String", "searchable": True},
        {"name": "summary", "type": "Edm.String", "searchable": True},
        {"name": "date", "type": "Edm.DateTimeOffset", "filterable": True, "sortable": True},
        {"name": "department", "type": "Edm.String", "facetable": True}
    ],
    "suggesters": [
        {"name": "sg", "searchMode": "analyzingInfixMatching", "sourceFields": ["title", "summary"]}
    ]
}

def create_search_index():
    try:
        url = f"{SEARCH_ENDPOINT}/indexes?api-version=2021-04-30-Preview"
        response = requests.post(url, headers=HEADERS, json=INDEX_DEFINITION)

        if response.status_code == 201:
            logging.info("Search index created successfully.")
        elif response.status_code == 409:
            # The service returns 409 Conflict when the index already exists.
            logging.info("Search index already exists.")
        else:
            logging.error(f"Failed to create search index: {response.text}")

    except requests.exceptions.RequestException as e:
        logging.error(f"Network error: {e}")

if __name__ == "__main__":
    create_search_index()
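
The index above covers the search half; the summaries came from Azure OpenAI. The sketch below shows one way that call could look using the current openai Python SDK's AzureOpenAI client. The endpoint, API version, deployment name, prompt, and token limit are all illustrative assumptions rather than the production configuration.

Code: Summarizing a Report with Azure OpenAI


from openai import AzureOpenAI  # pip install openai

# Placeholder credentials; substitute your own Azure OpenAI resource.
client = AzureOpenAI(
    api_key="your-azure-openai-key",
    api_version="2024-02-01",
    azure_endpoint="https://your-openai-endpoint.openai.azure.com/",
)

def summarize_report(report_text, deployment="gpt-4o"):
    """Ask an Azure OpenAI chat deployment for a short plain-language summary."""
    response = client.chat.completions.create(
        model=deployment,  # the deployment name configured in Azure, not a raw model ID
        messages=[
            {"role": "system", "content": "You summarize county staff reports in plain language."},
            {"role": "user", "content": f"Summarize this report in 3-5 sentences:\n\n{report_text}"},
        ],
        max_tokens=300,
        temperature=0.2,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_report("Full report text goes here."))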

Why Did I Build This?

Because government employees shouldn’t have to spend their valuable time writing reports when AI can automate the bulk of it. Because when Dave and Erica approached me with this project, I knew that AI could cut through inefficiencies and free up skilled workers to focus on real impact.

“Fine. I’ll do it myself.”