We are seeking a Principal AI Security Architect to define and lead the AI security, governance, trust, and compliance architecture for our enterprise-grade Agentic AI platform. This platform powers AI assistants that reason, plan, and deliver outcomes, and therefore requires robust security, privacy, auditability, and responsible AI frameworks. The role involves designing end-to-end AI security guardrails, establishing responsible AI policies, enforcing model governance, and ensuring compliance with evolving AI regulatory and industry standards, all while maintaining agility in a startup-like environment.

Key Responsibilities:

AI Security Architecture & Guardrails
- Architect security controls, guardrails, and policy enforcement layers for LLM-driven agents and workflows.
- Define mechanisms for real-time prompt filtering, output moderation, and tool access restrictions to prevent abuse or unsafe behavior.
- Design secure multi-tenant agent runtime environments (sandboxing, isolation, permissions) for enterprise deployments.
- Implement dynamic policy enforcement for agent tool usage and sensitive data handling.

Responsible AI & Governance
- Establish a Responsible AI framework for fairness, bias detection, hallucination control, and ethical AI usage in agentic workflows.
- Define and enforce AI model governance policies, including model versioning, explainability, and approval workflows.
- Build auditability pipelines to track model prompts, outputs, and decision-making chains (critical for compliance and forensics).
- Collaborate with legal, compliance, and risk teams to align with AI regulatory standards (EU AI Act, NIST AI RMF, ISO/IEC 42001).

Data Privacy & Compliance
- Architect privacy-preserving AI systems, including data minimization, PII redaction, encryption (at rest and in transit), and secure embedding storage.
- Ensure regional data residency and cross-border compliance (GDPR, HIPAA, CCPA).
- Design mechanisms for secure API integrations with enterprise systems (OAuth2, JWT, zero-trust patterns).
- Implement audit trails and tamper-proof logging for sensitive agent activity.

AI Threat Modeling & Risk Management
- Lead threat modeling for AI agents, covering prompt injection, data exfiltration, adversarial inputs, and model poisoning attacks.
- Design AI-specific intrusion detection and anomaly detection pipelines for agent workflows.
- Define risk scoring frameworks for agents, tools, and knowledge sources used within the platform.

Trust, Explainability & Transparency
- Build explainability frameworks to trace agent decisions (reasoning chains, tool invocation logs).
- Enable trust dashboards that let customers audit model performance, decisions, and compliance adherence.
- Incorporate AI transparency reporting (e.g., usage logs, fairness audits) as part of platform deliverables.

Leadership & Collaboration
- Partner with platform architects, backend engineers, and ML teams to embed security and governance into every layer of the AI stack.
- Provide technical leadership and mentorship to engineers on AI security patterns and best practices.
- Serve as the subject matter expert for internal and external security/compliance reviews, audits, and certifications.

Please note: This is a hybrid role based in San Mateo, CA or Bellevue, WA, requiring an in-office presence three days per week (Tuesday through Thursday).