AI Compliance Engineer (Responsible AI)
SnowHeap LLC
- Sharjah
- Contract
- Full-time
Location: Remote (MENA/EU time zones). Candidates must be able to align at least 80% of their working hours with UAE time, depending on their location. Optional Dubai meetups.
Please note: We are unable to provide visa sponsorship for this role at this time.
Tasks
- Define and run SnowHeap’s AI governance program: policies, control library, risk register, exception handling, and sign-offs (from ideation to production).
- Map laws and frameworks (EU AI Act, GDPR/PDPL/DIFC DPL, NIST AI RMF, ISO/IEC 42001 & 27001, SOC 2) to concrete technical controls in our products and client projects.
- Build an evaluation harness for LLMs/agents: golden sets, scenario tests, adversarial probes, offline evals, and online A/Bs; track hallucination, safety, bias, privacy leakage, robustness, cost, and latency.
- Implement guardrails (PII detection, jailbreak/prompt-injection defenses, output filters, content safety) and wire them into pipelines (LangChain/LangGraph, CrewAI/Agno).
- Stand up audit-ready telemetry: data lineage, prompt/response logging with redaction, model cards, decision traces, and approval workflows (LangSmith/observability tools).
- Partner with Security/Privacy on DPIAs/TRA, retention, DLP, key management, access controls, and vendor risk (OpenAI/Anthropic terms, Azure/GCP/AWS).
- Lead red-teaming exercises; coordinate incident response playbooks for model failures and safety regressions.
- Review prompts, fine-tunes, and datasets for policy compliance; curate evaluation datasets and “go/no-go” acceptance criteria.
- Coach engineers, sales, and clients; write crisp docs and checklists; run internal trainings and readiness reviews.
- Contribute to proposals and client audits; turn compliance into a product advantage.
Requirements
- 4+ years in Security/Privacy/Compliance, ML governance, or safety engineering, with 2+ years on LLM products.
- Strong grasp of LLM stacks: OpenAI & Azure OpenAI, Claude, Agno, CrewAI, LangChain/LangGraph/LangSmith.
- Hands-on model evaluation: building test sets, rubric-based scoring, offline/online evals, statistical analysis; familiarity with tools or libraries for evals/observability.
- Working knowledge of privacy & AI risk (GDPR/PDPL/DIFC DPL, EU AI Act concepts, NIST AI RMF), and how to turn them into safeguards, SOPs, and controls.
- Context engineering expertise: ability to design, test, and audit prompt chains, context windows, and memory architectures for compliance, safety, and explainability.
- Solid Python, including Pydantic (TypeScript nice to have); able to review PRs and add compliance checks to CI/CD.
- Cloud/MLOps fluency: one of AWS/GCP/Azure; containers, secrets, monitoring, access controls.
- Excellent writing and stakeholder skills; can say “no” with rationale and ship a safer “yes”.
Nice to have
- ISO 27001/ISO 42001/SOC 2 implementation or audit experience.
- Prior red-teaming of LLMs (prompt-injection, data exfiltration, harmful content).
- Experience in regulated domains (financial services, healthcare, public sector).
- Arabic or UAE market experience.
Benefits
- High-ownership role shaping SnowHeap’s AI governance and product roadmap.
- Remote-first across MENA/EU time zones; flexible hours.
- Competitive compensation with performance bonus.
- Fast career growth: build the function, then lead it.
In your application, please include:
- Links to eval frameworks, safety work, or red-team write-ups you’ve done.
- Example policies/checklists you authored (redacted is fine).
- GitHub or snippets showing eval harnesses, guardrails, or LangSmith/LangGraph workflows.