
AI

Lead scoring you wrote yourself, not a black box you hope is right.

TexAu does not ship a packaged "AI scoring model." Instead, the AI Column lets you write a prompt and a system prompt, pass in any of the row's column data, and return a 0–100 score, a classification, or a routing decision — per row, on demand. You define the rubric. The model evaluates each row against it. BYOK is supported (GPT, Claude, Gemini).
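As a rough illustration of the idea, here is a minimal sketch of how a rubric prompt might be filled in with a row's column values. The field names, rubric wording, and system prompt are all hypothetical, not TexAu's actual schema.

```python
# Hypothetical sketch: a rubric (system prompt) plus a per-row prompt
# template. Field names and rubric text are illustrative assumptions.

SYSTEM_PROMPT = (
    "You are a lead-scoring assistant. Score each lead 0-100 against the "
    "rubric. Respond with only the integer score."
)

PROMPT_TEMPLATE = (
    "Rubric: a great lead is a B2B SaaS company, 50-500 employees, "
    "recently funded.\n"
    "Lead: {name} at {company} ({industry}), headcount {headcount}."
)

def render_prompt(row: dict) -> str:
    """Fill the prompt template with this row's column values."""
    return PROMPT_TEMPLATE.format(**row)

# One row from the table, as the model would see it (headcount is assumed).
row = {"name": "Sarah Mitchell", "company": "Clearbit",
       "industry": "SaaS", "headcount": 180}
print(render_prompt(row))
```

The rendered prompt is what the model evaluates for that row; changing the rubric text changes the scoring behavior with no retraining step.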

Name              Company   Email  Industry    ICP Score
Sarah Mitchell    Clearbit  -      SaaS        94
Marcus Chen       Plaid     -      Fintech     81
Jessica Alvarez   Lattice   -      HR Tech     72
David Park        Stripe    -      Fintech     88
Rachel Thompson   Gong      -      Sales Tech  65
Brian Kowalski    Mixmax    -      Sales Tech  41

What you get

The capabilities, concrete.

  • You write the rubric

    A prompt describes what a great lead looks like. A system prompt sets the tone, output format, and guardrails. No training pipeline, no monthly re-train cycle.

  • Personalized per row

    Pass any combination of enriched fields — industry, headcount, tech stack, post engagement, recent funding — into the prompt as variables. The output is grounded in the row, not in a vibe.

  • Any output shape

    Return a 0–100 score, a tier (A/B/C), a routing decision, or a one-line reason. Whatever your downstream system needs.

  • Same tool over API and MCP

    The exact prompt that runs in your TexAu table also runs as a REST endpoint and as an MCP tool. One source of truth.

  • BYOK

    Bring your OpenAI, Anthropic, or Gemini key. Your tokens, your billing, your privacy. Or pay 1 credit per row through TexAu.
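To make the "any output shape" point concrete, here is a minimal sketch of the same per-row score reshaped into a tier and a routing decision. The thresholds and queue names are illustrative assumptions, not defaults.

```python
# Hypothetical sketch: one score, three output shapes (score, tier, route).
# Thresholds and route names are illustrative assumptions.

def to_tier(score: int) -> str:
    """Collapse a 0-100 score into an A/B/C tier."""
    if score >= 80:
        return "A"
    if score >= 60:
        return "B"
    return "C"

def to_route(score: int) -> str:
    """Turn the score into a routing decision for a downstream system."""
    return "assign-to-AE" if to_tier(score) == "A" else "nurture-sequence"

print(to_tier(94), to_route(94))  # Sarah Mitchell in the sample table
print(to_tier(41), to_route(41))  # Brian Kowalski
```

In practice the model can return the tier or route directly if the prompt asks for it; this sketch just shows that all three shapes derive from the same per-row evaluation.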

How it works

The shape of the workflow.

Right-click a column. Pick AI Column. Write the prompt and system prompt. Pass the variables you want the model to see. Run it on the table, on a schedule, over the API, or from your MCP-connected agent. Edit the prompt, rerun — no training loop required.
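For the API path, the request might look something like the sketch below. The endpoint shape, field names, and auth header are assumptions for illustration, not TexAu's documented schema.

```python
import json

# Hypothetical sketch of a request body for running the same prompt over the
# API. Every key name here is an assumption, not TexAu's documented schema.

payload = {
    "tool": "ai-column",
    "system_prompt": "Score each lead 0-100 against the rubric. Return JSON.",
    "prompt": "Lead: {name} at {company} ({industry}).",
    "variables": {"name": "Marcus Chen", "company": "Plaid",
                  "industry": "Fintech"},
}

body = json.dumps(payload)
# e.g. requests.post(API_URL, data=body, headers={"Authorization": API_KEY})
print(body)
```

The point of the sketch: the prompt and variables are the payload, so the table run, the scheduled run, and the API run all share one definition.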

FAQ

Frequently asked.

Does TexAu train a model on my closed-won deals?
No. We deliberately do not ship a packaged "trained on your closed-won" scorer — the data shape and label quality required to make that honest are rare. Instead, the AI Column gives you a prompt-driven scoring surface you control end to end. If you have a separate model you want to call, you can wire it in via a custom HTTP action and surface the score the same way.
Is the score explainable?
Yes — return the score and the reasoning in the same prompt. Most teams ask the model to return JSON like { score, top_reasons }. The reasons are whatever the prompt asks for.
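A minimal sketch of reading that kind of reply, assuming the prompt enforced a `{ score, top_reasons }` JSON shape; the reply string here stands in for a real model response.

```python
import json

# Hypothetical sketch: parsing a { score, top_reasons } reply. The reply
# string is a stand-in for a real model response, not captured output.

reply = '{"score": 81, "top_reasons": ["Fintech ICP match", "200+ headcount"]}'

def parse_score(raw: str) -> tuple[int, list[str]]:
    """Extract the numeric score and the model's stated reasons."""
    data = json.loads(raw)
    return int(data["score"]), list(data["top_reasons"])

score, reasons = parse_score(reply)
print(score, reasons[0])
```

Writing the score and the reasons back to separate columns keeps the explanation next to the number it justifies.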
How is this different from AI Columns?
Same primitive, different framing. The AI Column is the underlying tool. "Custom scoring" is one of the most common things teams use it for. Same goes for summarization, classification, lead routing, draft generation, etc.

Run it on your motion

See custom scoring with the AI Column in your own workspace.

Free to start. No demo required. Five minutes to first run.