Module 1: Overview

MGMT 675: Generative AI for Finance

Kerry Back, Rice University

From Makers to Checkers

What we’re working towards is that every employee will have their own personalized AI assistant; every process is powered by AI agents, and every client experience has an AI concierge.

Workers would shift from being creators of reports or software updates, or "makers" … to "checkers" or managers of the AI agents doing that work.

Derek Waldron, JP Morgan Chief Analytics Officer — CNBC interview, September 30, 2025

CFOs Are Going All In on AI

Deloitte Q4 2025 CFO Signals Survey — 200 CFOs at $1B+ companies

87%

say AI will be extremely or very important to finance operations

54%

prioritize integrating AI agents in finance

50%

cite digital transformation as #1 priority

49%

prioritize automating processes for higher-value work

2%

say AI won’t be important

Case Study: HPE’s “Alfred”

Marie Myers, CFO of Hewlett Packard Enterprise (#143 on Fortune 500)

The Problem

  • Weekly 90-minute operational review required 100+ slides
  • Hundreds of hours of preparation across business units
  • No time left for forward-looking analysis

The AI Solution

  • Built “Alfred” — AI agents that pull, reconcile, and analyze data automatically
  • 90% of manual prep eliminated; cycle time reduced 40%, costs down 25%
  • 3,000+ finance employees being reskilled to build their own agents

“The goal is for finance professionals to become masters of their own destiny rather than casualties of automation.” — Marie Myers

“How HPE’s CFO used AI to transform the 100-slide Monday meeting,” Fortune, Feb. 12, 2026.

Something Big is Happening

The person who walks into a meeting and says “I used AI to do this analysis in an hour instead of three days” is going to be the most valuable person in the room. Not eventually. Right now.

Spend one hour a day experimenting with AI … If you do this for the next six months, you will understand what’s coming better than 99% of the people around you.

Matt Shumer, CEO of HyperWrite AI — February 2026

AI Agents

Chatbot

%%{init: {'theme': 'base', 'themeVariables': {'fontSize': '42px'}, 'flowchart': {'nodeSpacing': 100, 'rankSpacing': 140, 'padding': 28, 'useMaxWidth': true}}}%%
flowchart LR
  U["<b>👤 User</b>"] ---|"prompts & replies"| C["<b>💬 Chatbot</b>"]
  C ---|"API calls"| L["<b>🧠 LLM</b>"]

  style U fill:#eff6ff,stroke:#3b82f6,stroke-width:2px,color:#0f172a,font-size:56px,padding:24px
  style C fill:#dbeafe,stroke:#3b82f6,stroke-width:2px,color:#0f172a,font-size:56px,padding:24px
  style L fill:#fef3c7,stroke:#f59e0b,stroke-width:2px,color:#0f172a,font-size:56px,padding:24px

  linkStyle default stroke:#3b82f6,stroke-width:4px

The chatbot sends three things to the LLM with each request:

  1. User prompt — what you just typed
  2. System prompt — hidden instructions that define the LLM’s behavior
  3. Conversation history — all prior messages, so the LLM has context
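
Those three pieces map directly onto the message list a chatbot assembles for each API call. The sketch below uses the common OpenAI-style `role`/`content` convention as an illustration; other providers use similar structures.

```python
# Minimal sketch of the payload a chatbot builds for each request.
# Field names follow the common OpenAI-style chat format (an assumption,
# not the only convention).

def build_messages(system_prompt, history, user_prompt):
    """Combine the three pieces into one request payload."""
    return (
        [{"role": "system", "content": system_prompt}]   # 2. hidden instructions
        + history                                        # 3. prior turns, for context
        + [{"role": "user", "content": user_prompt}]     # 1. what you just typed
    )

history = [
    {"role": "user", "content": "What is EBITDA?"},
    {"role": "assistant", "content": "Earnings before interest, taxes, ..."},
]

messages = build_messages(
    "You are a helpful finance tutor.", history,
    "How is it different from net income?",
)
print(len(messages))  # 4 messages: system + 2 history turns + new user prompt
```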

Agent

%%{init: {'theme': 'base', 'themeVariables': {'fontSize': '42px'}, 'flowchart': {'nodeSpacing': 100, 'rankSpacing': 140, 'padding': 28, 'useMaxWidth': true}}}%%
flowchart LR
  U["<b>👤 User</b>"] ---|"request & result"| A["<b>🤖 Agent</b>"]
  A ---|"API calls"| L["<b>🧠 LLM</b>"]
  A ---|"executes"| T["<b>🔧 Tool</b>"]

  style U fill:#eff6ff,stroke:#3b82f6,stroke-width:2px,color:#0f172a,font-size:56px,padding:24px
  style A fill:#dbeafe,stroke:#3b82f6,stroke-width:2px,color:#0f172a,font-size:56px,padding:24px
  style L fill:#fef3c7,stroke:#f59e0b,stroke-width:2px,color:#0f172a,font-size:56px,padding:24px
  style T fill:#f0fdf4,stroke:#22c55e,stroke-width:2px,color:#0f172a,font-size:56px,padding:24px

  linkStyle default stroke:#3b82f6,stroke-width:4px

The agent programmatically controls communication between the LLM and the tool. The system prompt includes “You have a tool …” so the LLM knows what it can call. Multiple rounds may occur before the agent returns output to the user.
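
A sketch of what "You have a tool …" can look like in practice. The `sql_query` tool name and its schema are invented for illustration, not a real API:

```python
# Hypothetical tool description an agent might splice into the system prompt.
# The "sql_query" name and its parameter schema are illustrative assumptions.
TOOLS = [
    {
        "name": "sql_query",
        "description": "Run a read-only SQL query against the prices database.",
        "parameters": {"query": "string, the SQL to execute"},
    }
]

def make_system_prompt(tools):
    """Render tool descriptions into the system prompt the LLM will see."""
    lines = ["You are a data analyst. You have the following tools:"]
    for t in tools:
        lines.append(f"- {t['name']}: {t['description']}")
    lines.append('To call a tool, reply with JSON: {"tool": ..., "query": ...}')
    return "\n".join(lines)

print(make_system_prompt(TOOLS))
```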

How Tool Calls Work

The LLM does not run code directly. When it decides a tool is needed, it returns a structured message:

{
  "tool": "sql_query",
  "query": "SELECT ticker, close FROM prices WHERE date = '2024-12-31'"
}

The agent’s execution layer reads that message, runs the actual tool, and sends the result back to the LLM. The LLM reads the result and decides what to do next.
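
That read-and-dispatch step can be sketched as a small routing function. The tool registry and the stubbed `run_sql` below are assumptions for illustration:

```python
import json

# Stub tool implementation -- a real agent would query an actual database.
def run_sql(query):
    return [("AAPL", 250.42)]  # placeholder result

# Registry mapping tool names the LLM may emit to functions the agent can run.
TOOL_REGISTRY = {"sql_query": lambda call: run_sql(call["query"])}

def dispatch(llm_message):
    """Parse the LLM's structured message and run the named tool."""
    call = json.loads(llm_message)
    tool = TOOL_REGISTRY[call["tool"]]
    result = tool(call)
    # The agent would now send `result` back to the LLM as a new message.
    return result
```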

An agent can use multiple tools in sequence — query a database, pass results to Python for analysis, generate a chart, and assemble everything into a PowerPoint deck — all from a single instruction.


The Agent Loop

  1. 🎯 Plan — The LLM reads your request and breaks it into steps
  2. ⚙️ Execute — The agent calls a tool: runs code, queries a database, writes a file
  3. 👁️ Observe — The agent checks the result of the action
  4. 🔄 Iterate — If the result is unsatisfactory, the agent adjusts its plan and tries again

When an agent encounters an error, it doesn’t stop and ask you what to do. It diagnoses the problem, revises its approach, and tries again. This autonomous problem-solving is what separates agents from simple automation scripts.
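
The four steps reduce to a loop like the sketch below. The `llm` and `run_tool` callables stand in for a real API client and tool executor, and the `type`/`final_answer` message shape is an assumption for illustration:

```python
def agent_loop(request, llm, run_tool, max_steps=10):
    """Plan -> execute -> observe -> iterate, until the LLM signals it is done.

    `llm` and `run_tool` are stand-ins for a real model API and tool layer.
    """
    messages = [{"role": "user", "content": request}]
    for _ in range(max_steps):
        action = llm(messages)                 # Plan: the LLM decides the next step
        if action["type"] == "final_answer":
            return action["content"]           # done -- return the result to the user
        try:
            result = run_tool(action)          # Execute: run the chosen tool
            observation = f"result: {result}"  # Observe: capture what happened
        except Exception as err:
            observation = f"error: {err}"      # Observe a failure instead of stopping
        # Iterate: feed the observation back so the LLM can revise its plan
        messages.append({"role": "tool", "content": observation})
    return "gave up after max_steps"
```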

Examples of Agent Tools

  • 🐍 Python Environment
  • 🗄️ Database Server
  • 🌐 Web Browser / Search
  • 📁 File System
  • 🔌 API Calls

An agent may be connected to multiple tools. For example, it might pull data from a database server and then send the results to a Python environment for analysis.

Past, Present, and Future of Personal Computing

Example: Rice Data Portal

The Rice Data Portal is an agent.

  • LLM: ChatGPT — interprets your natural-language question and writes a SQL query
  • Tool: A database server in the cloud holding stock-market and financial data
  • Agent: An app running elsewhere in the cloud that coordinates communication between ChatGPT and the database

You type a question (“Show me AAPL closing prices for 2024”). The agent sends it to ChatGPT, which returns a SQL query. The agent runs the query against the database and displays the answer. If the query fails, ChatGPT rewrites it and tries again — multiple rounds, no human intervention.
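
That rewrite-and-retry behavior can be sketched with a local SQLite database standing in for the cloud database and a stub function standing in for ChatGPT. The table schema and the stub's queries are invented for illustration:

```python
import sqlite3

def answer_question(question, write_sql, conn, max_attempts=3):
    """Ask the LLM (write_sql) for SQL; on failure, feed the error back and retry."""
    feedback = ""
    for _ in range(max_attempts):
        sql = write_sql(question, feedback)      # LLM drafts (or fixes) the query
        try:
            return conn.execute(sql).fetchall()  # success: return rows to the user
        except sqlite3.Error as err:
            feedback = f"Query failed: {err}. Please rewrite it."
    raise RuntimeError("could not produce a working query")

# Demo with an in-memory database and a stub "LLM" that fixes its first mistake.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (ticker TEXT, date TEXT, close REAL)")
conn.execute("INSERT INTO prices VALUES ('AAPL', '2024-12-31', 250.42)")

def stub_llm(question, feedback):
    if not feedback:
        return "SELECT tickr, close FROM prices"  # first draft has a typo
    return "SELECT ticker, close FROM prices WHERE date = '2024-12-31'"

print(answer_question("Show me AAPL closing prices for 2024", stub_llm, conn))
# [('AAPL', 250.42)]
```

The key design choice is that the error message itself becomes the next prompt, so each retry is informed by what went wrong rather than being a blind repeat.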