Principal AI Security Engineer

Candescent

Software Engineering, Data Science
Atlanta, GA, USA
Posted on Apr 1, 2026

Candescent is a forward-thinking technology company transforming how financial institutions deliver Intelligent Banking experiences. We power and connect digital banking, account opening, and branch solutions—creating seamless engagement across digital, remote, and in-person channels.

Our Experience-Led, Intelligence-Driven approach combines human-centered design with data, automation, and cloud-based innovation. Built on an API-first architecture, our extensible ecosystem enables institutions to adapt quickly, integrate easily, and unlock new opportunities for growth—turning every customer interaction into a moment of clarity, confidence, and connection.

Role Summary
We are seeking a Principal AI Security Engineer to own the security of how we adopt and integrate third-party artificial intelligence and large language model (LLM) services across the enterprise. This is a practitioner role for someone with a strong security engineering foundation who has developed meaningful expertise in AI/ML security risks — or who is actively building that expertise and is ready to own it as their primary charter.
As an enterprise consumer of AI services, our risk surface centers on how we connect to and use external AI providers — securing API integrations, controlling data exposure, governing adoption of AI tools across the organization, and ensuring AI usage aligns with our regulatory obligations. This role is not focused on building or training AI models.
Key Responsibilities and Deliverables
Secure AI Integration
  • Define and maintain secure integration patterns for third-party AI and LLM services, including API security, authentication, secrets management, and data-in-transit protections.
  • Establish and enforce input/output controls, prompt handling standards, and data classification guardrails for AI-enabled applications.
  • Evaluate the security posture of AI service providers as part of third-party and vendor risk processes.
  • Develop guidance for the secure adoption of agentic AI tools and multi-agent integrations, including scope containment and human oversight controls.
AI Security Governance
  • Build and maintain an AI security risk framework aligned to the organization's regulatory obligations: GLBA, PCI DSS 4.0.1, DORA ICT third-party risk, and NYDFS 23 NYCRR 500.
  • Establish governance controls for enterprise AI adoption, including standards for approved AI services, data handling requirements, and shadow AI detection.
  • Align internal AI security controls to emerging frameworks — NIST AI RMF and ISO/IEC 42001 — and advise on the organization's readiness as regulatory expectations evolve.
Threat Identification & Engineering Controls
  • Identify and mitigate AI-specific risks including prompt injection, model manipulation, data leakage, adversarial inputs, and AI-enabled social engineering.
  • Partner with security operations to build detection and response capabilities for AI-integrated systems.
  • Monitor the evolving AI threat landscape and translate emerging risks into practical engineering and governance responses.
Cross-Functional Partnership
  • Work with engineering, product, and cloud platform teams to embed security-by-design into AI-enabled applications and integrations.
  • Communicate AI security risks and recommendations clearly to both technical peers and non-technical leadership.
  • Contribute to security awareness and internal education on AI risk for engineering and business teams.
Requirements
  • Bachelor’s degree in Computer Science, Information Security, Engineering, or a related technical discipline, or equivalent practical experience.
  • 7+ years of experience in security engineering, application security, cloud security, or a closely related discipline.
  • Hands-on experience securing cloud-native environments and API-based integrations (AWS, Azure, or GCP).
  • Solid understanding of authentication, authorization, secrets management, and data protection in distributed systems.
  • Ability to assess technical risk and translate findings into actionable engineering controls and governance language.
  • Working knowledge of AI/ML security risks relevant to an enterprise consumer context: prompt injection, data leakage, insecure API integration, shadow AI, model output manipulation, and AI supply chain risk.
  • Familiarity with OWASP LLM Top 10 and MITRE ATLAS as applied threat frameworks.
  • Experience with or meaningful exposure to securing integrations with LLM service providers (e.g., Azure OpenAI, AWS Bedrock, Google Vertex AI, Anthropic, OpenAI).
  • Demonstrated engagement with AI security as an area of active professional focus — through applied work, research, certifications, or equivalent.
Security Foundations (any of the following)
  • CISSP — Certified Information Systems Security Professional
  • CCSP — Certified Cloud Security Professional
  • Cloud platform security certification: AWS Security Specialty, AZ-500 (Azure), or Google Professional Cloud Security Engineer
  • ISSAP — Information Systems Security Architecture Professional (for candidates with a strong architecture focus)
Preferred
  • Familiarity with at least one of the following as it applies to AI or third-party technology risk: GLBA, PCI DSS 4.0.1, NYDFS 23 NYCRR 500, DORA.
  • Experience working in regulated financial services or an equivalently controlled environment is a plus, not a requirement.
  • CAISP — Certified AI Security Professional (Practical DevSecOps) — hands-on, lab-based; currently the most technically rigorous AI security credential available
  • CAISS — Certified AI Security Specialist (Ampcus Cyber / ISACA chapters) — workshop-based; widely available through ISACA and ISC² chapter networks
  • AIGP — Artificial Intelligence Governance Professional (IAPP) — governance and compliance focus; particularly relevant to regulatory alignment work
  • ISO/IEC 42001 Lead Implementer or Lead Auditor — appropriate for candidates with a governance and risk management emphasis
Additional Information
  • Must be legally authorized to work in the U.S. without sponsorship.
  • Hybrid role based in our Atlanta office.

Statement to Third Party Agencies
To ALL recruitment agencies: Candescent only accepts resumes from agencies on the preferred supplier list. Please do not forward resumes to our applicant tracking system, Candescent employees, or any Candescent facility. Candescent is not responsible for any fees or charges associated with unsolicited resumes.