What if the biggest threat to your AI product isn’t bad actors, but the guardrails you missed to build?
As AI capabilities explode, product leaders are facing security risks that don’t show up in traditional testing. In this session, you’ll hear real-world examples of how targeted prompting exposed vulnerabilities, even in systems considered “safe”, and understand the real work behind guardrails that actually scale.
In this webinar, Okta’s Engineering Security Director Arun Kumar Elengovan dives into:
– How prompting reveals hidden security gaps before users ever see them – Surprising edge cases and model behaviors that highlighted the need for stronger guardrails
– Balancing usability, velocity, and security without slowing teams down
– The real-world playbook for responsible AI: checkpoints, governance, and workflows
– Designing scalable AI governance frameworks that protect innovation rather than restrict it
You’ll leave with tactical frameworks to identify vulnerabilities earlier, build safety mechanisms that scale, and strengthen your AI’s reliability across the entire product lifecycle.
Join us for new conversations with leading product executives every week. Roll through the highlights of this week’s event below, then head on over to our Events page to see which product leaders will be joining us next week.
Show Notes:
- Data poisoning is a powerful and economically viable attack on AI systems, making data pipeline security foundational.
- AI systems fail both inadvertently (such as data drift, bias) and adversarially (such as poisoning, prompt injection).
- Traditional security approaches are often insufficient for AI due to the architectural blending of code and data in large language models.
- Prompt injections are a critical risk—both direct (user-driven) and indirect (embedded in data sources) attacks must be addressed.
- Large language models lack a true code-data boundary, treating all input as tokens, which creates systemic vulnerabilities to manipulation.
- Inference-time attacks like model extraction, evasion, and membership inference exploit models without needing training data access.
- Real-world attacks leverage browser environments and prompt injection chains to exfiltrate sensitive data without user awareness.
- Many AI vulnerabilities are architectural, not just bugs, so guardrails must match fundamental system constraints.
- Social engineering and psychological tactics, such as urgency, authority, and flattery, work well on language models because they aim to please users.
- Jailbreak templates and multi-turn manipulation techniques, like crescendo and skeleton key attacks, can subvert safety mechanisms.
- Encoding techniques, such as base64 or leetspeak, can bypass word-level filters, making pattern-matching insufficient for AI safety.
- Guardrails need to focus on both immediate and cumulative threats, with vigilance for multi-turn conversations.
- Effective AI security requires continuous monitoring, as risk posture changes and new attacks require regular red teaming.
- Responsible AI is about continuous improvement, not theoretical perfection—safety practices must be regularly measured and iterated.
- Responsible AI failures often stem from speed-over-safety, scale amplification, complexity, and regulatory lag.
- Key categories of responsible AI risk include fabrication (hallucinations), bias, abuse, privacy, and security.
- Red teaming operationalizes responsible AI principles, uncovering risks proactively and enabling prioritized mitigation.
- Frameworks like OWASP LLM Top 10 and ATLAS help with risk mapping, but practical, programmable guardrails (such as Nemo) are essential.
- Foundational security controls remain vital: least privilege, API security, audit logging, and sandboxing are still best practices.
- Layered defense (defense in depth) is optimal, as vulnerabilities exist at every stage, requiring technical, administrative, and organizational controls.
About the speaker
About the host
Products that Count is a 501(c)3 nonprofit that helps everyone build great products. It celebrates product excellence through coveted Awards that inspire 500,000+ product managers and honor great products and the professionals responsible for their success. It accelerates the career and rise to the C-suite of >30% of all Product Managers globally by providing exceptional programming – including award-winning podcasts and popular newsletters – for free. It acts as a trusted advisor to all CPOs at Fortune 1000, and publishes key insights from innovative companies, like Capgemini, SoFi, and Amplitude, that turn product success into business success.