Director of Secure AI & Application Development
Candescent
Candescent is the leading cloud-based digital banking solutions provider for financial institutions. We are transforming digital banking with intelligent, cloud-powered solutions that connect account opening, digital banking, and branch experiences. Our advanced technology and developer tools enable seamless, differentiated customer journeys that elevate trust, service, and innovation. Success here requires flexibility in a fast-paced environment, a client-first mindset, and a commitment to delivering consistent, reliable results as part of a performance-driven, values-led team. With team members around the world, Candescent is an equal opportunity employer.
Job Summary
The Director of Secure AI & Application Development at Candescent will lead the strategic direction, development, and execution of the enterprise-wide application security program, with a specialized focus on AI/ML security for SaaS products serving regulated enterprises. This role is responsible for embedding security into the software development lifecycle (SDLC) and AI development lifecycle (AIDLC), partnering with engineering, product, data science, AI/ML engineering, and infrastructure teams to ensure secure design, development, and deployment of AI-powered applications. The ideal candidate will be a visionary leader with deep technical expertise in both application security and AI/ML security, strong business acumen, regulatory compliance expertise, and a proven track record of building and scaling secure development practices in complex SaaS and AI-driven environments.
Key Responsibilities and Deliverables
Strategic Leadership
- Define and drive the application and AI/ML security strategy aligned with Candescent’s business and risk objectives for regulated enterprise clients.
- Lead the development and execution of secure SDLC and AI development lifecycle (AIDLC) practices across all engineering and data science teams.
- Serve as a trusted advisor to senior leadership on application and AI/ML security risks, model governance, emerging trends, and mitigation strategies.
- Establish AI security governance frameworks that meet regulatory requirements (EU AI Act, NIST AI RMF, ISO 42001).
- Develop security strategies for AI supply chain, third-party AI integrations, and LLM/GenAI implementations.
Program Development & Execution
- Build and mature the application security program, including threat modeling, secure coding, code reviews, and security testing across traditional applications and AI/ML systems.
- Develop and maintain security standards, policies, and guidelines for application development and AI model development, training, and deployment.
- Oversee the integration of security tools (SAST, DAST, SCA, IAST, RASP) and AI security tools (model scanning, adversarial testing, data poisoning detection, model monitoring) into CI/CD and ML pipelines.
- Implement MLSecOps practices and secure AI pipeline architectures.
- Establish data governance and privacy controls for AI training data, including PII handling and data lineage tracking.
- Create security frameworks for model versioning, model registry security, and secure model serving.
Collaboration & Enablement
- Partner with DevOps, Engineering, Data Science, ML Engineering, and Product teams to ensure security is embedded early and continuously.
- Lead security champions programs and training initiatives for developers and data scientists to foster a security-first culture with specialized AI security awareness.
- Collaborate with GRC, Risk, and Compliance teams to ensure regulatory and policy alignment specific to AI regulations and industry-specific requirements (HIPAA, SOC 2, GDPR, CCPA for AI systems).
- Work closely with customer-facing teams to address client security requirements and regulatory audit needs.
- Partner with legal and compliance teams on AI ethics, explainability, and bias mitigation from a security perspective.
Risk Management & Incident Response
- Identify and prioritize application and AI/ML security risks through assessments, penetration testing, red teaming of AI models, and threat intelligence.
- Conduct AI-specific risk assessments including adversarial attacks, model poisoning, prompt injection, and data exfiltration risks.
- Lead response efforts for application-related and AI/ML security incidents and vulnerabilities.
- Provide executive-level reporting on application and AI security posture, KPIs, and risk metrics with regulatory reporting capabilities.
- Manage third-party AI vendor security assessments and AI supply chain risk.
- Develop incident response playbooks specific to AI security incidents (model theft, data poisoning, adversarial attacks).
Qualifications and Experience
- 10+ years of experience in information security, with at least 5 years in application security leadership roles and 2+ years working with AI/ML security.
- Deep understanding of modern application architectures (e.g., microservices, containers, APIs, cloud-native) and AI/ML architectures (model training pipelines, inference endpoints, vector databases, LLM deployments).
- Hands-on experience with secure coding practices, threat modeling, and vulnerability management including AI/ML specific threat modeling (OWASP ML Top 10, MITRE ATLAS).
- Proficiency with security tools such as SAST, DAST, SCA, and container security platforms plus AI security tools (model scanning, adversarial robustness testing, data validation frameworks).
- Strong knowledge of OWASP Top 10, OWASP ML Top 10, OWASP LLM Top 10, CWE, CVE, and secure development frameworks.
- Experience working in Agile/DevOps and MLOps environments and integrating security into CI/CD and ML pipelines.
- Proven ability to lead cross-functional teams and influence at all levels of the organization.
- Deep understanding of regulatory compliance requirements for AI systems and SaaS products serving highly regulated industries.
- Experience with AI model governance, explainability requirements, and bias detection/mitigation.
- Knowledge of prompt engineering security, LLM guardrails, and GenAI security best practices.
- Strong background in data privacy, data governance, and secure data handling for ML training datasets.
Preferred Distinctions
- Advanced degree in Computer Science, Cybersecurity, or related field.
- Industry certifications such as CISSP, CSSLP, OSWE, or GIAC GWAPT.
- Experience with cloud security (AWS, Azure, GCP) and infrastructure-as-code security.
- Familiarity with regulatory frameworks (e.g., SOC 2, ISO 27001, PCI-DSS, HIPAA).
- Experience building or scaling a security champions program.
- Public speaking or thought leadership in the application security community.
Statement to Third Party Agencies
To ALL recruitment agencies: Candescent only accepts resumes from agencies on the preferred supplier list. Please do not forward resumes to our applicant tracking system, Candescent employees, or any Candescent facility. Candescent is not responsible for any fees or charges associated with unsolicited resumes.