
Pulse - Fundamentals

What It Is (General)

Pulse is Palm's AI intelligence layer — a natural language interface that lets treasury teams interact with their cash data through conversation rather than dashboards alone. It encompasses the Palm Chat (customer-facing), Palm MCP (internal), AI Digest (homepage), and future agentic capabilities.


What It Means for Our ICP

How Treasury Teams Think About It

Treasury teams don't think in terms of "AI assistants" — they think in terms of getting answers faster. The questions they ask (why did this forecast change? what's my cash position? which accounts need attention?) currently require either manual dashboard navigation or asking Palm's engineering team directly.

Origin Story (Palm Internal, 2026-03-09):

"A lot of the questions we were getting were basically just us having to be the treasurers' analysts. The idea was: how can we allow LLMs to take care of all of these requests?" — Art

Architecture: Two Products

Palm MCP (Internal Only)

  • Model Context Protocol server — a standardized connection between LLMs and Palm's data layer
  • Used via Claude, Cursor, or any MCP-compatible client
  • Restricted to @usepalm.com domain
  • Built during hackathon, incrementally improved
  • Analogy: "If REST APIs are apps talking to data, then MCPs are LLMs talking to data natively"

Palm Chat (Customer-Facing)

  • In-platform chat interface with full conversation history
  • First externally-exposed Python service (everything else is Go) — chosen for better AI/ML tooling
  • MCP tools imported directly as Python functions (no inter-service calls currently)
  • Thread-based conversation management via Treasury API
  • Generative AI capabilities handled by separate Python service
  • Planned Slack integration via /ask slash command (same backend)

Data Layer: BigQuery + dbt

  • BigQuery is the data warehouse
  • dbt transforms data into silver and gold layers (no bronze — data arrives in good state)
  • MCP accesses gold layer ~80-90% of the time
  • Silver layer used for more complex/creative queries
  • Key tables: fact_transactions, fact_account_balances, dim_forecast_periods, dim_transaction_period_summary
  • Every table exposed through chat must have customer_public_id field
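Because every chat-exposed table carries a `customer_public_id` column, any query can be scoped to one customer the same way. A minimal sketch of that convention (the `scoped_query` helper and `GOLD_TABLES` set are illustrative assumptions, not Palm's code; in production the filter is enforced by row-level security, not string building):

```python
# Hypothetical helper: build a query against a gold-layer table,
# always filtered on customer_public_id. Table names come from the doc.
GOLD_TABLES = {
    "fact_transactions",
    "fact_account_balances",
    "dim_forecast_periods",
    "dim_transaction_period_summary",
}

def scoped_query(table: str, customer_public_id: str, limit: int = 100) -> str:
    """Return a SELECT restricted to one customer's rows."""
    if table not in GOLD_TABLES:
        raise ValueError(f"unknown table: {table}")
    return (
        f"SELECT * FROM {table} "
        f"WHERE customer_public_id = '{customer_public_id}' "
        f"LIMIT {limit}"
    )
```

The uniform column name is what makes a single, universal filter possible; per Palm's security model, the real enforcement happens at the service-account layer rather than in generated SQL.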

Schema-First Workflow (Key Design Decision)

The single most impactful technical decision: forcing the LLM to retrieve table schemas before writing any query.

"During the hackathon, I quickly realized that the LLM has this tendency to just hallucinate on table names. Sometimes I would also hallucinate with the LLM — 'this forecasting balance currency table sounds like a cool name, but it really didn't exist.'" — Art

How it works:

  1. LLM calls list_tables to see what exists
  2. LLM calls get_schema for relevant tables (with column descriptions)
  3. Only then can LLM call execute_query
  4. If the LLM misunderstands the question, querying real data helps it self-correct
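The gating described above can be sketched as a small session object. The three tool names (`list_tables`, `get_schema`, `execute_query`) come from the doc; the `SchemaFirstSession` class and its gating logic are an illustrative assumption about how the ordering might be enforced:

```python
# Sketch: a session that refuses to execute a query against a table
# whose schema has not been fetched first, preventing hallucinated
# table names from ever reaching the warehouse.
class SchemaFirstSession:
    def __init__(self, catalog: dict):
        self.catalog = catalog            # table name -> column names
        self.seen_schemas = set()         # tables whose schema was fetched

    def list_tables(self) -> list:
        return sorted(self.catalog)

    def get_schema(self, table: str) -> list:
        cols = self.catalog[table]        # KeyError if the name was hallucinated
        self.seen_schemas.add(table)
        return cols

    def execute_query(self, table: str, sql: str) -> str:
        if table not in self.seen_schemas:
            raise PermissionError("call get_schema before execute_query")
        return f"running on {table}: {sql}"
```

Structuring the tools this way means a hallucinated table name fails loudly at `get_schema` time, rather than producing a silently wrong query.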

Security Model

Principle: Security at infrastructure level, not prompt level.

"Doing security on this level [prompt] is kind of stupid... LLMs can very easily bypass system prompts. That's why we actually do security in the infrastructure layer." — Art

GCP Service Account Impersonation:

  • Per-customer service accounts with row-level security
  • Filter enforced on customer_public_id — cannot be bypassed
  • No metadata leakage (row counts, table metadata blocked)
  • Internal Palm users have a separate service account that can bypass customer isolation
  • Query validation: only SELECT or WITH statements allowed (infrastructure-enforced, not just prompt-enforced)

Future complexity: Entity-level ACLs would require per-customer-per-entity service accounts — acknowledged as a scaling challenge.
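The "only SELECT or WITH" rule above can be approximated in a few lines. This standalone check is an illustrative sketch, not Palm's implementation, where the guarantee comes from the infrastructure layer rather than string inspection:

```python
# Sketch of the statement-type check: accept only read-only statements
# that begin with SELECT or WITH, and reject anything multi-statement.
def is_read_only(sql: str) -> bool:
    stmt = sql.strip().rstrip(";")
    if ";" in stmt:                      # crude multi-statement guard
        return False
    words = stmt.split(None, 1)
    return bool(words) and words[0].upper() in {"SELECT", "WITH"}
```

A check like this is a useful belt-and-suspenders layer, but as the doc stresses, the real defense is the per-customer service account: even a query that slips past validation cannot read another customer's rows.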

Context Management

  • Last 20 messages kept in full (including tool calls, inputs, outputs)
  • Older messages summarized
  • "User summary" persisted after first 5 messages, across last 10 threads
  • No retrieval/RAG yet — just summarization
  • Mixing cross-customer data pollutes context window (removing "view as customer" feature)
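The windowing policy above (last 20 messages verbatim, older messages summarized) can be sketched as follows; the `summarize` placeholder stands in for what would be an LLM call in production, and the function shape is an assumption:

```python
# Sketch: keep the most recent KEEP_FULL messages in full and collapse
# everything older into a single summary entry at the front.
KEEP_FULL = 20

def build_context(messages, summarize=lambda ms: f"[summary of {len(ms)} older messages]"):
    if len(messages) <= KEEP_FULL:
        return list(messages)
    older, recent = messages[:-KEEP_FULL], messages[-KEEP_FULL:]
    return [summarize(older)] + recent
```

Because there is no retrieval/RAG layer yet, everything the model sees must fit in this window, which is also why mixing cross-customer data (the "view as customer" feature) pollutes it.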

AI Digest (Homepage)

A structured, LLM-generated daily summary for the homepage:

Fixed sections (dynamic ordering based on user preference and recent activity):

  • Summary (always on top)
  • Account balance activity
  • Accounts needing attention
  • Notable transactions
  • Confirmed forecasts
  • Chat bar for deeper exploration

How it works:

  1. System prompt specifies sections and what to analyze
  2. User feedback (customize button) added to context
  3. LLM creates a plan (like plan mode in Cursor/Claude)
  4. LLM executes plan, produces structured JSON with public IDs
  5. Data validated (references checked against actual records)
  6. Treasury API does final lookup, returns structured format for frontend
  7. Everything saved to database
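The validation step (checking the LLM's public IDs against actual records before the Treasury API lookup) can be sketched like this; the digest field names (`sections`, `items`, `public_id`) are illustrative assumptions about the JSON shape:

```python
# Sketch: collect any public IDs in the LLM's structured digest output
# that do not correspond to a real record, so they can be rejected
# before the final Treasury API lookup.
def dangling_ids(digest: dict, known_ids: set) -> list:
    missing = []
    for section in digest.get("sections", []):
        for item in section.get("items", []):
            pid = item.get("public_id")
            if pid is not None and pid not in known_ids:
                missing.append(pid)
    return missing
```

This is the digest's equivalent of the schema-first workflow: the LLM's output is never trusted to reference real entities until the references are checked against the database.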

Scheduled Agent Workflows (Demo/Prototype)

  • Users can define recurring analysis workflows in natural language
  • Agent executes on schedule, produces PDF/Excel output
  • First agent type: "data report agent"
  • Future agent types could include execution agents (suggestions for what to do)
  • Reuses existing report page UI components
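Since this is still a demo/prototype, no schema is given in the doc; one plausible way to model a natural-language-defined recurring workflow is a small record type. Every field name below is a hypothetical illustration, not Palm's actual data model:

```python
# Hypothetical model of a scheduled "data report agent": a natural-language
# instruction, a schedule, and an output format (PDF/Excel per the doc).
from dataclasses import dataclass

@dataclass
class ScheduledAgent:
    name: str
    instructions: str    # the workflow, defined in natural language
    cron: str            # e.g. "0 9 1 * *" for 09:00 on the 1st of each month
    output_format: str   # "pdf" or "excel"

agent = ScheduledAgent(
    name="capex-audit",
    instructions="Flag transactions likely miscategorized as CapEx.",
    cron="0 9 1 * *",
    output_format="pdf",
)
```

Future execution agents would presumably extend a model like this with proposed actions, but per the doc the first agent type is read-only reporting.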

Why Palm Chat Over Just MCP

| Concern       | MCP                            | Palm Chat                      |
| ------------- | ------------------------------ | ------------------------------ |
| Control       | No visibility into queries     | Full logging                   |
| Evaluation    | Can't validate responses       | Can analyze all interactions   |
| Features      | Limited to data queries        | Can build new features on top  |
| Accessibility | Requires Claude/Cursor setup   | In-platform, accessible to all |
| Cost          | User pays (their LLM provider) | Palm pays (our tokens)         |

AI Strategy Vision: Three Pillars

Palm's AI Strategy (Source: 2026-03-10)

  1. Correctness — Ensuring AI outputs are accurate and trustworthy
  2. Governance — Treasury policy compliance (e.g., investment limits, FX hedging rules)
  3. Hyper-personalization — Customer-specific institutional knowledge baked into AI behavior

"SaaS has never been a subject for hyper personalization or hyper customization. Now we finally live in an era where you can build hyper customized SaaS." — Emma

Proactive vs Reactive AI

The key insight from ON's CS engagement: the highest-value AI interactions are the ones users DON'T initiate. Giannis's categorization accuracy analysis for Amanda — using Palm MCP to find CapEx miscategorizations — only happened because CS proactively ran it. Amanda didn't know to ask for it.

This validates the shift from reactive chat to proactive scheduled analysis:

  • Agent runs on schedule (e.g., monthly categorization accuracy audit)
  • Surfaces findings as AI Digest blocks or Slack notifications
  • User doesn't need to know what to ask — the system tells them what matters

Starter Templates / Skills

Generalizing customer-specific process docs (like Giannis's CapEx analysis workflow) into reusable "starter templates" for new customers. Giannis already has 3 customer process docs in Notion Document Hub (bank fee analysis, account funding, CapEx analysis). These could become Palm-provided skill templates for onboarding.

How They Talk About It

  • "Pulse" — the overall AI capability brand name
  • "Palm MCP" — the internal data access layer
  • "Palm Chat" — the customer-facing conversational interface
  • "AI Digest" — the homepage summary feature
  • "Agents" — scheduled or triggered automated workflows
  • "Schema-first workflow" — the pattern of forcing schema lookup before query

Tools & Systems

  • BigQuery — Data warehouse (dbt silver + gold layers)
  • Firebase — User authentication (tokens used in both Go and Python services)
  • GCP Service Accounts — Per-customer data isolation
  • Python (Pydantic/FastAPI) — First externally-exposed Python service for AI capabilities
  • Go — Everything else in the platform (Treasury API)
