About the Role:
We’re building AI agents that give supply chain operators real-time intelligence about data quality, risk, and reliability - before bad data causes a costly downstream decision. These aren’t dashboards or BI wrappers. They’re agents that understand BOM hierarchies, supplier data feeds, and demand signals deeply enough to reason about failure modes, score data risk, and generate validation and remediation playbooks automatically.
The Problem You’ll Own:
Every large manufacturer depends on data from contract manufacturers, logistics partners, and internal systems - purchase orders, inventory records, demand forecasts, BOM structures. That data is almost always incomplete, inconsistently formatted, or wrong in ways invisible until someone makes a multi-million dollar decision on top of it.
The traditional answer is a validation team running SQL checks. The AI answer is an agent that reads the data specification, understands the business process it supports, identifies every field-level failure mode, scores severity and risk, and writes the validation rule set and remediation playbook - automatically, at a depth and speed no analyst can match.
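The scoring-and-playbook step described above can be sketched in a few lines. This is a minimal illustration only - the `FailureMode` fields, rule strings, and example values are assumptions for demonstration, not the product's actual schema:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    field: str
    description: str
    severity: int       # 1-10: downstream impact if the bad value is acted on
    occurrence: int     # 1-10: how often the failure appears in the feed
    detectability: int  # 1-10: 10 = hardest to catch before use

def risk_priority(fm: FailureMode) -> int:
    """Classic FMEA Risk Priority Number: SEV x OCC x DET."""
    return fm.severity * fm.occurrence * fm.detectability

def build_playbook(modes: list[FailureMode]) -> list[dict]:
    """Rank failure modes by RPN and emit a validation-rule stub for each."""
    ranked = sorted(modes, key=risk_priority, reverse=True)
    return [
        {"field": fm.field,
         "rpn": risk_priority(fm),
         "rule": f"validate `{fm.field}`: {fm.description}"}
        for fm in ranked
    ]
```

In practice the failure-mode list would come from the agent's reading of the data specification; here it is just the prioritization logic that turns scored failure modes into an ordered remediation playbook.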
What You’ll Do:
+ Design and spec AI agents for supply chain data intelligence: FMEA automation, validation rule generation, data quality scoring, anomaly detection, and remediation workflow design.
+ Translate raw manufacturer and supplier data specifications, BOM governance documents, and data dictionaries into structured LLM context, few-shot examples, and domain grounding.
+ Own the data governance framework: define what good data looks like across supply chain domains and build the severity/occurrence/detectability rubrics agents use to prioritize problems.
+ Build working LLM prototypes: use Claude, Lovable, or similar tools to prototype quickly, gather feedback, and communicate requirements.
+ Run customer discovery workshops onsite: map data landscapes, identify highest-risk data elements, return with a prioritized validation roadmap and working proof-of-concept.
+ Build the enterprise deployment playbook: COE model, wave rollout approach, training materials, and executive narrative for scaling across business units and geographies.
What This Is Not:
+ Not a data engineering role - you are building the intelligence layer, not the infrastructure beneath it.
+ Not a pure PM role - you will build prototypes; if you haven’t yet spent time building with LLMs, this will be a stretch.
+ Not a consulting engagement - you are building a product and a company, not delivering a project and moving on.
+ Not for someone who needs a team beneath them - you are the first hire; the second hire reports to you.
+ Not a role where domain knowledge substitutes for technical ability, or vice versa - both are required.
In Your First 90 Days:
Days 1-30: Shadow 2-3 customer supply chain teams; document data feeds, failure modes, and manual workarounds; produce a prioritized problem inventory.
Days 31-60: Ship a working AI agent that processes a real customer data spec, scores risk using FMEA logic, and outputs a structured validation playbook.
Days 61-90: Write the product spec for v1, define the COE rollout model, identify the first three expansion accounts, present the go-to-market roadmap to the founding team.
Why Now:
Supply chain data governance has been an unsolved problem for decades. SQL rules don’t scale, don’t adapt, and don’t reason about business context. LLMs do. The companies that will own this market combine deep supply chain domain expertise with LLM-native tooling. That combination is rare. We’re building it - and you are the person who makes it real.
Compensation & Logistics:
Salary: Competitive with senior roles at early-stage companies
Equity: Meaningful early-stage equity grant - you are a foundational hire
Location: San Francisco; travel required for customer deployments (~15-25%)
Benefits: Full medical, dental, vision; learning budget
Requirements
This role exists at an intersection almost no one occupies. You need all four of these:
Supply Chain Fluency: Deep experience in global supply chain - manufacturing, semiconductor, or high-tech OEM. You understand BOM hierarchies, supplier data feeds, excess and obsolete exposure, and why a wrong delivery commitment field causes chaos in a production plan.
AI / LLM Practitioner: Built and shipped AI agents using Claude or other LLMs. Can write Python end-to-end: read a spec, call an LLM, parse structured output, write results back. Resourceful and obsessively practical.
Data Governance Instinct: Think naturally in validation rules, failure modes, and risk scores. Familiar with FMEA methodology - SEV/OCC/DET scoring - applied to data quality.
Enterprise Deployment: Deployed programs at enterprise scale. Know how to build a COE from scratch, manage cross-BU stakeholders, and write the executive deck that gets a VP to say yes.
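The end-to-end loop named in the AI / LLM Practitioner requirement - read a spec, call an LLM, parse structured output, write results back - can be sketched as follows. `llm_complete` is a hypothetical stub standing in for a real API call (e.g. via an LLM provider's SDK), so the sketch runs without credentials; the field names in the canned response are illustrative:

```python
import json

def llm_complete(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call; returns canned JSON
    so this sketch is runnable without an API key."""
    return json.dumps([
        {"field": "delivery_commit_date",
         "failure_mode": "date earlier than PO creation date"}
    ])

def failure_modes_from_spec(spec_text: str) -> list[dict]:
    """Ask the model for field-level failure modes as JSON and parse them."""
    prompt = (
        "Read this data specification and list field-level failure modes "
        "as a JSON array of {field, failure_mode} objects:\n" + spec_text
    )
    return json.loads(llm_complete(prompt))

modes = failure_modes_from_spec("delivery_commit_date: ISO-8601 date, required")
```

The point is the shape of the loop - prompt construction from a spec, a model call, and strict parsing of structured output - rather than any particular provider or schema.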
Background That Maps Well:
10-20 years of experience with meaningful time in supply chain operations or analytics, plus product management, consulting, or technical program management.
Hands-on experience at a large enterprise - semiconductor, electronics manufacturing, or high-tech OEM - dealing with manufacturer or supplier data quality problems firsthand.
Direct LLM experience (Anthropic Claude, OpenAI GPT, or similar) - professional, internal tooling, or serious personal project.
Data validation, quality scorecard, or governance framework design experience for structured enterprise data.
COE or program deployment experience: stood up a new capability, defined the operating model, managed the rollout.
Familiarity with FMEA methodology applied to data - SEV/OCC/DET scoring and RPN prioritization - is a significant differentiator.
About the Company
We’re building systems that continuously validate data and business processes across large enterprise environments. Enterprises run on multiple systems: ERP (e.g., SAP), APIs, internal tools, and data platforms (Databricks, Snowflake, Postgres). Inconsistencies in data - whether from external vendors, internal processes, or data migrations - break workflows. When AI is layered on top, those failures scale.
We build the layer that:
+ Prevents inconsistent data entry
+ Detects inconsistencies across systems
+ Validates business logic in real time
+ Enables AI-driven workflows to run safely and reliably
We’re already live at a Fortune 100 AI company and launching at Fortune 500-scale companies in healthcare and financial services.
