In this editorial, Konfer Product Lead Ganesh Rajan considers the tradeoffs that come with responsible AI. At Konfer, he works at the forefront of this subject, ensuring that AI-generated results can be understood and traced back to their decision-making mechanisms so that people can trust the AI they are working with. In a world of rapid innovation, how can the latest tech maintain a high level of trust?


The rapid progress of artificial intelligence (AI), particularly generative AI, has heightened both the need and the urgency for robust regulations addressing concerns around ethics, transparency, and trust.

The public availability of powerful generative AI capabilities, such as ChatGPT, DALL-E, and NotionAI, is making such discussions necessary. These tools have transformed content creation by producing vast amounts of high-quality content with minimal human intervention, enhancing efficiency and lowering costs for industries including marketing, journalism, and creative writing. However, many observers have also raised concerns about potential misuse, including fake news, deepfakes, and the spread of misinformation.

Achieving a balance between promoting innovation and ensuring responsible AI usage entails understanding and documenting generative models’ capabilities and limitations, and formulating guidelines for their ethical application. This balancing act requires collaboration among product developers, service providers, and users alike.

The Role of Copyright Laws in AI-Generated Content

The emergence of AI-generated content challenges conventional copyright frameworks, which were originally designed for human authorship.

Legal systems must evolve to safeguard intellectual property rights while encouraging innovation. Continuous engagement among legal experts, AI developers, and stakeholders is necessary to ensure equitable and adaptive copyright laws.

Several recent rulings have tackled challenges related to AI-generated content and copyright laws. Some jurisdictions have contemplated granting copyright protection to AI-generated works under specific conditions. Others have deemed them ineligible due to the absence of human authorship. Striking the right balance will be vital in determining the future of content creation and reuse.

From Principles to Action: Making Existing AI Regulations Actionable

Some regulations are already in place. The US Federal Reserve’s SR 11-7 guidance on model risk management is mandatory for all models deployed in organizations within the Federal Reserve’s purview. Other efforts, including the EU AI Act (which is proceeding towards adoption within the EU), the NIST AI Risk Management Framework (AI RMF) in the US, and Singapore’s Model AI Governance Framework, to name a few, underscore the global interest in managing AI’s ramifications. These regulations and frameworks are designed to create a foundation for responsible AI practices across the globe.

To build on this foundation and cultivate a responsible AI ecosystem, developers and product managers must address some key concerns. These include making regulations actionable and measurable, integrating them into the AI lifecycle, and assigning clear roles and responsibilities within organizations. This will ensure a robust framework for AI development and deployment that aligns with the ethical guidelines and standards set forth by these regulatory bodies.

Make regulations actionable

To make regulations actionable for AI, it is crucial to break high-level regulatory principles down into smaller, concrete steps. Product leaders need to collaborate with the organization’s policy and compliance teams, AI developers, and business leaders to translate the relevant policy and regulatory specifications into a set of serviceable requirements.
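
As a concrete illustration, here is a minimal sketch of how a high-level principle such as “AI decisions must be explainable” might be decomposed into trackable requirements. The IDs, regulation label, and structure are all hypothetical, not taken from any actual regulatory text:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """One concrete, serviceable requirement derived from a high-level principle."""
    req_id: str     # internal tracking ID (hypothetical scheme)
    source: str     # the regulation or principle it traces back to
    statement: str  # the obligation, phrased as a testable step

# Decomposing "AI decisions must be explainable" into concrete steps.
EXPLAINABILITY_REQS = [
    Requirement("EXPL-1", "EU AI Act (draft)",
                "Log model version and input features for every prediction"),
    Requirement("EXPL-2", "EU AI Act (draft)",
                "Produce a feature-attribution report for any prediction on request"),
    Requirement("EXPL-3", "EU AI Act (draft)",
                "Document intended use and known limitations in a model card"),
]
```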

Make regulations measurable

By making regulatory requirements measurable, organizations are better able to assess their progress towards conformance and compliance, and to demonstrate their dedication to responsible AI development. To achieve this, product leaders must establish clear metrics and benchmarks against which organizations can evaluate their AI systems. This requires them to collaborate not just with teams inside their organization but also with external stakeholders and product peers at similar organizations. Creating a standard way to measure AI compliance will lead to environments conducive to responsible AI innovation and development.
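
One way such metrics and benchmarks could be encoded is sketched below. The metric names and threshold values are illustrative assumptions, not figures drawn from any regulation:

```python
# Illustrative benchmarks; metric names and values are assumptions.
THRESHOLDS = {
    "demographic_parity_gap": 0.05,   # upper bound: smaller is better
    "prediction_log_coverage": 0.99,  # lower bound: larger is better
}
UPPER_BOUND_METRICS = {"demographic_parity_gap"}

def evaluate_compliance(measured: dict) -> dict:
    """Compare measured values against benchmarks; report pass/fail per metric."""
    results = {}
    for metric, threshold in THRESHOLDS.items():
        value = measured.get(metric)
        if value is None:
            results[metric] = False  # unmeasured counts as non-compliant
        elif metric in UPPER_BOUND_METRICS:
            results[metric] = value <= threshold
        else:
            results[metric] = value >= threshold
    return results

print(evaluate_compliance({"demographic_parity_gap": 0.03,
                           "prediction_log_coverage": 0.997}))
# -> {'demographic_parity_gap': True, 'prediction_log_coverage': True}
```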

Incorporate regulations into the AI lifecycle workflow

Integrating regulations into the AI lifecycle workflow is vital for guaranteeing responsible AI development from design to retirement. Product leaders need to incorporate regulatory concerns into the AI system lifecycle via the measurable requirements described above, starting with the AI development workflow. That way, product leaders can plan for possible risks and ethical issues with the technology before they arise.
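
Continuing the hypothetical sketch above, one pattern is to run the measurable checks as a gate at each lifecycle stage, blocking a model’s promotion when any requirement fails. The stage names here are illustrative:

```python
# Hypothetical lifecycle gate; reuses evaluate_compliance() from the
# earlier sketch. Stage names are assumptions, not a standard.
LIFECYCLE_STAGES = ["design", "development", "validation", "deployment", "retirement"]

def gate(stage: str, check_results: dict) -> None:
    """Block promotion past a lifecycle stage if any compliance check fails."""
    failures = [m for m, passed in check_results.items() if not passed]
    if failures:
        raise RuntimeError(f"'{stage}' gate blocked; failing checks: {failures}")
    print(f"'{stage}' gate passed")

# For example, before promoting a model out of validation:
# gate("validation", evaluate_compliance(measured_metrics))
```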

Assign roles and responsibilities within an enterprise

Setting clear roles and responsibilities within an organization is a key part of putting AI regulations into practice. Clearly identifying, documenting, and communicating who is responsible for each part of AI regulation compliance allows the product leader and the organization to manage AI systems effectively and ensure they follow legal and ethical rules.
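
In the same hypothetical sketch, ownership can be recorded alongside each requirement so that unassigned obligations are easy to flag; the team names and requirement IDs below are illustrative:

```python
# Sketch of documented ownership, reusing the hypothetical requirement
# IDs from the earlier sketch; team names are assumptions.
OWNERS = {
    "EXPL-1": "ML platform engineering",
    "EXPL-2": "Data science",
    "EXPL-3": "Product management",
}

def unowned(requirement_ids):
    """Flag requirements that no one has been assigned to."""
    return [r for r in requirement_ids if r not in OWNERS]

print(unowned(["EXPL-1", "EXPL-2", "EXPL-3", "EXPL-4"]))  # -> ['EXPL-4']
```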

By addressing these key considerations, product leaders drive their organizations closer to a sustainable and responsible AI ecosystem, one that champions both innovation and adherence to ethical standards.

About the speaker
Ganesh Rajan, Vice President, Product Line Management, A10 Networks

Ganesh is a Product Management executive who brings both engineering development and product management experience from his roles over the last 20+ years, the last 12+ of which have mostly been in product management leadership. He has worked in established companies as well as early-stage startups; in a couple of cases, he was one of the co-founders. Ganesh has had entrepreneurial and growth experiences in these companies, taking products from 0 to 1 and in some cases turning around their technical direction. At present, Ganesh is at a startup, Konfer, Inc., which is building solutions to enable continuous AI governance, helping enterprises develop and deploy Responsible AI. Previously, he was Vice President, Product Line Management at A10 Networks (ATEN), a public company.
