Unlock Team Efficiency with a Custom AI Agent

Make internal knowledge searchable, actionable, and available on demand.

by Jayson Miller

Answering the same internal questions again and again? From “Where’s the campaign tracker?” to “How do I format this deck?”, knowledge gets stuck in silos. AI agents can fix that. With a simple setup, you can build a smart assistant that gives teams instant access to the answers they need.

Imagine being able to ask about campaign history, tool instructions, or brand guidelines and get fast, accurate answers without searching through files or messaging coworkers. With just a few lines of code, you can build an internal agent that does exactly that.

What This Agent Can Do

  • Content and Brand Guidelines – Make tone, formatting, language, and visuals easy to find and follow
  • Campaign History – Keep a record of what’s been done, what worked, and what didn’t
  • Tool and Workflow How-Tos – Help users understand internal tools, forms, and processes
  • Company-wide Changes – Support transitions with clear, centralized answers

These agents reduce repetitive questions, improve onboarding, and unlock knowledge that would otherwise stay buried.

So how do you actually create one of these agents? It doesn’t take much. In fact, with just a few core pieces—a text file, an API key, and a small Python script—you can have a working internal agent ready to answer questions. Below I’ll show you how to put it together.

The Simplest Working Version

Here’s a step-by-step breakdown of how to build a basic internal agent using the OpenAI SDK, a simple text file, and FastAPI to handle interactions. This approach can be adapted to use other models or frameworks as needed.

Step 1: Define the App

from fastapi import FastAPI

app = FastAPI()

This initializes a FastAPI app, which acts as the system that listens for questions and sends back answers.

Step 2: Load Your API Key

from dotenv import load_dotenv
import os

load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")

Your API key is like a personal passcode: it authorizes your app to call OpenAI’s services, so treat it like a password. This snippet loads it from a .env file, which keeps the secret out of your source code. You’ll need to create a .env file in your project folder and add:

OPENAI_API_KEY=your_api_key_here

Step 3: Load Internal Knowledge

with open("knowledge.txt", "r", encoding="utf-8") as f:
    context = f.read()

This loads your internal content from a file called knowledge.txt. You can name this file anything you want. It should include the content you want the agent to reference—things like process docs, campaign summaries, brand guidelines, or onboarding materials.
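If your internal content is spread across several documents rather than one file, you could concatenate them into a single context string. Here is a minimal sketch; the `docs` folder name and the `load_knowledge` helper are hypothetical, so point it at wherever your files actually live:

```python
from pathlib import Path

def load_knowledge(folder: str = "docs") -> str:
    """Concatenate every .txt file in a folder into one context string."""
    parts = []
    for path in sorted(Path(folder).glob("*.txt")):
        # Label each section with its filename so answers can point back
        # to the document they came from
        parts.append(f"## {path.name}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)

context = load_knowledge()
```

Keeping each file under its own heading also makes it easier to spot which document the model is drawing on when you review its answers.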

Step 4: Add a System Prompt

system_prompt = (
    "Use the following document to answer the user's question:\n"
    f"{context}"
)

The system prompt sets the stage for the model, telling it what knowledge to use when answering questions. Think of it like the agent’s job description.

Step 5: Initialize the Model

from openai import OpenAI
openai = OpenAI()  # reads OPENAI_API_KEY from the environment automatically

This connects your app to OpenAI’s language model. You can substitute other models here if needed, like Claude, Gemini, or LLaMA.

Step 6: Define Input Structure

from pydantic import BaseModel

class ChatRequest(BaseModel):
    question: str
    history: list[str] = []

This defines the format your app expects when someone sends a question. It allows users to send a single question or include earlier messages as context. Note that this simple version replays every history entry as a user message; to preserve a true back-and-forth, you would also store the assistant’s replies with their own role.
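To see the validation in action, here is a short sketch of how this model handles an incoming request body, assuming Pydantic v2 (which FastAPI uses under the hood via `model_validate`):

```python
from pydantic import BaseModel, ValidationError

class ChatRequest(BaseModel):
    question: str
    history: list[str] = []

# A valid request body parses into a typed object,
# and the optional history defaults to an empty list
req = ChatRequest.model_validate({"question": "Where is the campaign tracker?"})
print(req.question)
print(req.history)

# A malformed body is rejected before it ever reaches your handler
try:
    ChatRequest.model_validate({"question": 42, "history": "oops"})
except ValidationError as e:
    print("rejected:", len(e.errors()), "validation errors")
```

FastAPI performs this validation automatically on every request to the endpoint, returning a 422 response when the body doesn’t match.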

Step 7: Create a Function to Handle Requests

@app.post("/chat")
def chat(request: ChatRequest):
    messages = [
        {"role": "system", "content": system_prompt},
        *[{"role": "user", "content": m} for m in request.history],
        {"role": "user", "content": request.question},
    ]

    response = openai.chat.completions.create(
        model="gpt-4",
        messages=messages
    )

    return {"answer": response.choices[0].message.content}

This function tells your agent what to do when it receives a question. It combines the system prompt, any previous chat history, and the new question—then sends that to the model and returns the model’s response.
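The message assembly can be seen in isolation, without calling the API. This sketch uses made-up history and question values to show the shape of the payload the model receives:

```python
# A shortened stand-in for the real system prompt built in Step 4
system_prompt = "Use the following document to answer the user's question:\n..."

history = ["What tools do we use for email campaigns?"]
question = "Where are the login details documented?"

# Same construction as the /chat handler: system prompt first,
# then prior messages in order, then the new question
messages = [
    {"role": "system", "content": system_prompt},
    *[{"role": "user", "content": m} for m in history],
    {"role": "user", "content": question},
]

for m in messages:
    print(m["role"], "->", m["content"][:40])
```

Because the full knowledge file is embedded in the system prompt, every request re-sends it; for large documents you would eventually want retrieval instead, but for a modest knowledge base this is fine.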

The Full Working Code

from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI
from dotenv import load_dotenv
import os

load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
openai = OpenAI()  # reads OPENAI_API_KEY from the environment automatically

app = FastAPI()

with open("knowledge.txt", "r", encoding="utf-8") as f:
    context = f.read()

system_prompt = (
    "Use the following document to answer the user's question:\n"
    f"{context}"
)

class ChatRequest(BaseModel):
    question: str
    history: list[str] = []

@app.post("/chat")
def chat(request: ChatRequest):
    messages = [
        {"role": "system", "content": system_prompt},
        *[{"role": "user", "content": m} for m in request.history],
        {"role": "user", "content": request.question},
    ]

    response = openai.chat.completions.create(
        model="gpt-4",
        messages=messages
    )

    return {"answer": response.choices[0].message.content}

Running the Agent

To run this agent locally, create a virtual environment in your terminal and install the required libraries: FastAPI, Uvicorn, python-dotenv, OpenAI, and Pydantic. Include your API key in a .env file and your internal content in a knowledge.txt file.
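The setup might look like this on macOS or Linux (Windows uses `.venv\Scripts\activate` instead of `source`):

```shell
# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install the libraries used in this guide
pip install fastapi uvicorn python-dotenv openai pydantic
```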

Then launch the app using a command like:

uvicorn main:app --reload

Once it's running, you can interact with the agent through a script, curl command, or by connecting it to a simple frontend.
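For example, a quick test with curl might look like this, assuming the server is running on uvicorn’s default port 8000:

```shell
curl -X POST http://127.0.0.1:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"question": "What is our brand tone of voice?", "history": []}'
```

The response comes back as JSON with a single `answer` field, matching the dictionary returned by the `/chat` handler.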

Put Your Knowledge to Work

An internal agent isn't just about saving time—it can change how teams interact with information. Instead of repeating answers, rewriting onboarding emails, or searching through outdated slides, people can ask questions like “What’s the latest version of our sales pitch?”, “Where are the design specs from the Q2 launch?”, or “How should I write this in our tone of voice?” and get clear, instant answers.

This guide outlined one way to get started, but it’s not the only way. You can swap in a different model, adjust the behavior, or expand your content. The core idea is simple: structure your knowledge in a way that makes it usable, accessible, and smart.

Once that’s in place, the rest becomes easier to build. Start small. Improve over time. You don’t need a massive system to make a real difference.