
Aniket
Dec 1, 2025

Creating a custom tool for Opal AI in Google Cloud using Python SDK

I had the opportunity to participate in the Opal AI Hackathon challenge, where we built a custom tool using Optimizely's Opal Python SDK.

This article walks you through building the tool in Python and deploying it securely on Google Cloud Run. The goal of this tool is to enable an Opal Agent to instantly create a fully detailed Azure DevOps (ADO) User Story from a simple request.

The Core: The Optimizely Opal Tools SDK

Optimizely provides a variety of SDKs to connect to their services. For an Opal Agent to use your service, it first needs a blueprint defining what your tool does, what inputs it needs, and how to execute it. This blueprint is called the Tool Manifest, exposed via a standardized `/discovery` endpoint.

The Python Opal Tools SDK abstracts away the complexity of managing this API contract. By simply decorating a standard Python function with `@tool`, the SDK automatically handles:

  1. Generating the required OpenAPI-compatible Discovery Manifest at `/discovery`.

  2. Routing incoming `POST` requests to the correct function.

  3. Validating and parsing the JSON input based on your Pydantic model.
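
With the service running locally, you can sanity-check the generated manifest with a quick request to `/discovery`. This is a sketch only: the manifest's exact schema is defined by the Opal Tools SDK, so inspect the response rather than assuming a shape.

```python
# Sanity-check the auto-generated manifest of a locally running service.
# Sketch: the manifest schema is defined by the Opal Tools SDK version in use.
import json
import urllib.request

def discovery_url(base_url: str) -> str:
    """Build the /discovery URL from a service base URL."""
    return base_url.rstrip("/") + "/discovery"

def fetch_manifest(base_url: str = "http://localhost:8000") -> dict:
    """Fetch and decode the discovery manifest from a running service."""
    with urllib.request.urlopen(discovery_url(base_url), timeout=5) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(fetch_manifest(), indent=2))
```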

Tool Service Code (main.py)

The main.py file is the entry point of the service; it uses FastAPI for routing and the Opal SDK for tool definition. You can define a single application instance that hosts multiple tool endpoints. In this case, I built a single tool endpoint.

import os
import base64
import httpx
from fastapi import FastAPI
from pydantic import BaseModel, Field
from opal_tools_sdk import ToolsService, tool
from dotenv import load_dotenv

# --- Configuration ---
# In production, load these from os.environ for security
ADO_ORG = "rightpoint"
ADO_PROJECT = "Optimizely-Opal-Challenge-2025"
load_dotenv()
ADO_PAT = os.environ.get("ADO_PAT")

tag = "opal-2025"

# Encode PAT for Basic Auth
auth_str = f":{ADO_PAT}"
b64_auth = base64.b64encode(auth_str.encode()).decode()
HEADERS = {
    "Authorization": f"Basic {b64_auth}",
    "Content-Type": "application/json-patch+json"
}

# --- App Setup ---
app = FastAPI()
# This initializes the /discovery endpoint automatically
service = ToolsService(app)

# --- Parameters Model ---
class UserStoryParams(BaseModel):
    title: str = Field(..., description="The title of the user story")
    description: str = Field(..., description="Detailed description of the user story")
    acceptance_criteria: str | None = Field(None, description="Acceptance criteria for the story")

# --- The Tool Definition ---
@tool(
    name="create_ado_user_story",
    description="Creates a new User Story in Azure DevOps with a title and description.",
)
async def create_ado_user_story(params: UserStoryParams):
    """
    Creates a User Story in Azure DevOps.
    """
    url = f"https://dev.azure.com/{ADO_ORG}/{ADO_PROJECT}/_apis/wit/workitems/$User%20Story?api-version=7.1"

    # Azure DevOps requires a JSON Patch document
    payload = [
        {
            "op": "add",
            "path": "/fields/System.Title",
            "value": params.title
        },
        {
            "op": "add",
            "path": "/fields/System.Description",
            "value": params.description
        },
        {
            "op": "add",
            "path": "/fields/System.Tags",
            "value": tag
        }
    ]

    if params.acceptance_criteria:
        payload.append({
            "op": "add",
            "path": "/fields/Microsoft.VSTS.Common.AcceptanceCriteria",
            "value": params.acceptance_criteria
        })

    response = None
    try:
        # Use httpx.AsyncClient for non-blocking I/O inside an async function
        async with httpx.AsyncClient(timeout=30.0) as client:
            response = await client.post(url, headers=HEADERS, json=payload)
            response.raise_for_status()
            data = response.json()

        # Return a plain dictionary; the SDK serializes it as the tool's response.
        return {
            "status": "success",
            "id": data.get("id"),
            "link": data.get("_links", {}).get("html", {}).get("href"),
            "message": f"User Story #{data.get('id')} created successfully."
        }
    except httpx.RequestError as e: 
        # Handle all httpx communication errors (DNS, connection, etc.)
        response_text = "No response body available." if response is None else response.text
        print(f"HTTPX Request Error: {str(e)}\nDetails: {response_text}")
        return {
            "status": "error",
            "message": f"Connection/Request Error: {str(e)}",
            "details": response_text
        }
    except Exception as e:
        # Handle all other exceptions (including raise_for_status errors)
        response_text = "N/A"
        if response is not None and response.text:
            response_text = response.text
            
        print(f"Unexpected Error: {str(e)}\nResponse Body: {response_text}")
        return {
            "status": "error",
            "message": f"An unexpected error occurred: {str(e)}",
            "details": response_text
        }

# Run locally for testing
if __name__ == "__main__":
    import uvicorn
    # CRITICAL: Ensure you are running this command, which uses the uvicorn async server.
    uvicorn.run(app, host="0.0.0.0", port=8000)

 

When writing tools for high-performance cloud environments like Cloud Run, it's essential to use asynchronous (async) code. This prevents a single network request (like waiting for the slow ADO API response) from blocking the entire Python process, allowing the server to handle dozens of requests simultaneously.

We achieve this by defining the function as `async def` and using the asynchronous HTTP client, `httpx`.
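To see the difference concretely, here is a small, self-contained sketch (using `asyncio.sleep` as a stand-in for the slow ADO round trip) showing three simulated calls completing concurrently rather than back-to-back:

```python
import asyncio
import time

async def fake_ado_call(i: int) -> int:
    await asyncio.sleep(0.2)  # stand-in for network latency
    return i

async def run_concurrently() -> float:
    """Run three simulated slow calls at once and return the elapsed time."""
    start = time.perf_counter()
    results = await asyncio.gather(*(fake_ado_call(i) for i in range(3)))
    assert results == [0, 1, 2]
    return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = asyncio.run(run_concurrently())
    # Three 0.2 s "calls" finish in roughly 0.2 s total, not 0.6 s.
    print(f"elapsed: {elapsed:.2f}s")
```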

The tool constructs the ADO JSON Patch payload, uses an injected Personal Access Token (PAT) for authentication, and executes the asynchronous network call.

 

Deployment: Building on Google Cloud Run

To make your tool publicly accessible to Optimizely Opal, we deploy it as a serverless container on Google Cloud Run.

Deployment Files

We use a `requirements.txt` to manage dependencies and a `Dockerfile` for the deployment.

a) Dependencies (`requirements.txt`): This file lists the packages that must be installed for the application to run.

fastapi==0.110.0
uvicorn==0.27.1
httpx==0.27.0
optimizely-opal.opal-tools-sdk
pydantic
python-dotenv

b) Container Blueprint (`Dockerfile`):

# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.12-slim

# Allow statements and log messages to immediately appear in the Knative logs
ENV PYTHONUNBUFFERED True

# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./

# Install production dependencies.
RUN pip install --no-cache-dir -r requirements.txt

# Run the web service on container startup using the uvicorn ASGI server.
# Cloud Run injects the PORT environment variable at runtime.
CMD exec uvicorn main:app --host 0.0.0.0 --port $PORT

This Dockerfile explicitly defines the startup command and binds to the port Cloud Run provides. (Note that it runs as root by default; for additional hardening, add a non-root `USER`.)

Deployment Steps (gcloud CLI)

  1. Secure the PAT: Upload your Azure DevOps PAT to Google Cloud Secret Manager instead of hardcoding it in the source.

  2. Build and Deploy: Use the `gcloud` CLI to build the container from source and deploy, mapping the secret to the `ADO_PAT` environment variable.

Enable APIs (if necessary): Note that billing must be enabled on your Google Cloud project for some of these services to work.
gcloud services enable cloudbuild.googleapis.com run.googleapis.com secretmanager.googleapis.com


Build and run the application:
gcloud run deploy opal-ado-tool \
    --region us-central1 \
    --source . \
    --allow-unauthenticated \
    --set-secrets ADO_PAT=ADO_PAT:latest  # assumes the secret is named ADO_PAT

Integrating the Tool into Optimizely Opal

Once deployed, your service provides a public URL (e.g., `https://opal-ado-tool-xyz.run.app`).

  1. Register Discovery Endpoint: In the Optimizely Opal UI, register the tool using the public URL appended with `/discovery`.

  2. Agent Workflow: Configure an Agent Workflow step to first synthesize the necessary User Story parameters (Title, Description, Acceptance Criteria) from an unstructured request, and then feed that structured JSON output directly into the `create_ado_user_story` tool.
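
Conceptually, the hand-off between the workflow step and the tool is just structured JSON matching the tool's parameter model. A minimal sketch (the field values below are invented examples; the exact request envelope Opal sends is defined by the platform):

```python
import json

# Example output a workflow step might synthesize from an unstructured
# request (illustrative values only; the envelope format is platform-defined).
synthesized = {
    "title": "Add login audit trail",
    "description": "As an admin, I want a log of login events so I can review access.",
    "acceptance_criteria": "Login events are persisted with user and timestamp.",
}

payload = json.dumps(synthesized)

# On arrival, the service's Pydantic model validates exactly these fields.
parsed = json.loads(payload)
assert set(parsed) == {"title", "description", "acceptance_criteria"}
```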

By bridging the gap between your conversational AI input and your crucial development systems, you empower your Optimizely agents to become powerful, action-oriented contributors to your product delivery lifecycle.

 
