Responsible AI: Scale & Speed with sustainable innovation


By Harsh Kaur,
Regional Account Manager, Trend Micro & Member of the ISACA Emerging Trends Working Group and Founding Member of CyberGurukul
April 8, 2023: “Main Hulk hoon… dishoom dishoom karke Big Bad Wolf se aapko bachaunga.” (“I am the Hulk… I will dishoom-dishoom the Big Bad Wolf and save you.”) In the black-and-white world of my adorable 4-year-old nephew Meer, he imagines himself as the Avenger who can, and will, save his loved ones from any adversary.
Well, a big salute to the spirit of our brave little saviour and his dishoom dishoom stories - and to the superpowers that fuel his imagination.
When we look for such “Avengers” in the real world, one that comes to mind is artificial intelligence (AI). From commuting with Google Maps, to social networking, to e-payments for daily financial transactions, to chatbots that attend to our queries 24/7 and much more, we simply can’t imagine our lives without AI helping us conquer our daily challenges, can we?
Indeed, AI has permeated all dimensions of modern computing and become increasingly entwined in our daily routines. Organisations today leverage AI to enable and secure business at scale, at speed and with confidence. From gaining operational efficiency and elevating user experience to augmenting cybersecurity guardrails, AI functions are at the core of business transformation and innovation projects.
The anti-climax
However, AI has also been at the heart of many security, privacy and ethics debates. The value-agnostic design of AI may not differentiate between legitimate and malicious use, and its data-dependent nature can create new points of vulnerability. As a result, bad actors – the villains my nephew would recognise – have greater incentives to target these algorithms and to expand the scale, scope and complexity of their attacks.
This has compelled us to delve into tough ethical questions about AI, including:
- Can we really trust AI?
- Are the benefits of artificial intelligence available to all?
- How do we guard against the weaponization of AI?
- Who is responsible for a system’s outcomes? And, above all,
- What regulations and audit measures are in place to assess it?
And the hero enters
Great power does attract greater responsibility. To address these concerns and unlock a new wave of growth, AI must become progressive, inclusive, trustworthy, honest, transparent and accountable. In this way, we can bridge the gap between AI and humans, making AI humane, responsible and a means of promoting sustainable growth. This is not only the right thing to do; it also allows enterprises to build digital trust with their stakeholders, which, according to ISACA, delivers a range of business benefits.
Specific to India, this discussion has gained even more momentum with the ‘Make AI in India and make AI work for India’ push. As per IDC, the Indian artificial intelligence market is forecast to reach US$7.8 billion by 2025, growing at a CAGR of 20.2%.
To bridge the gap between promise and practice, the Union Budget 2023 focuses heavily on establishing a strong AI ecosystem in India and training skilled professionals. This includes building three Centres of Excellence for artificial intelligence to research and develop practical, modern applications in agriculture, health, sustainable cities and more.
While India is racing to become the next AI hub, it will also need to gear up to regulate this new-age technology. Preparing for this era of digital scrutiny will be the need of the hour, and navigating ethics, trust and governance will be at the heart of delivering responsible innovation – the stepping stones to building digital trust.
Hero in Action
As regulation grows, the tools that promote moral AI come into sharper focus. Many concur that morals and responsibility are learned behaviours (just as my little nephew is learning today). Applying this concept will therefore require intervention at multiple levels and granular layers, including focusing on:
- Why - The Objective: Define and articulate principles and a mission shaped by the organisation’s goals, risk management framework, compliance obligations, governance structure and digital trust framework.
- What - The Cornerstones: Develop transparent, explainable AI workflows and tools that support principles such as fairness, traceability, robustness, trust and privacy.
- How - The Process: Create algorithm assessments with clear success criteria, incentives and training measures.
- Who - The People: Train all employees on responsible AI principles and success metrics, including identification of roles, expectations and accountability.
- Where and When - The Lifecycle: Implement a series of quantitative and qualitative checks throughout the AI lifecycle, from data collection and design through deployment and use, to help identify AI bias and gaps before you scale (a minimal sketch of one such check follows this list).
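To make the idea of a “quantitative check” concrete, here is a minimal sketch of one common fairness measure: demographic parity, i.e., whether a model grants favourable outcomes at similar rates across groups. Everything in it is illustrative rather than drawn from any specific framework or product: the function name, the sample data and the 0.1 tolerance are all hypothetical assumptions.

```python
# Minimal sketch of one quantitative fairness check: demographic parity.
# Assumes binary predictions (1 = favourable outcome) and one sensitive
# attribute per record; names, data and the 0.1 threshold are illustrative.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group rates), where gap is the largest difference
    in favourable-outcome rates between any two groups."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        favourable[group] += int(pred == 1)
    rates = {g: favourable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example audit: hypothetical loan-approval predictions checked against
# an applicant attribute before the model is scaled to production.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(f"Per-group approval rates: {rates}")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance; a real programme would set its own
    print("Gap exceeds tolerance - flag for review before scaling.")
```

A check like this is cheap enough to run at every stage of the lifecycle, which is exactly the point: bias is far easier to remediate at data collection or design time than after deployment.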
The Finale
With the rise of AI, geography has become history and imagination the new reality. Yet as AI has evolved, it has proved a double-edged sword for cyber defense and digital trust strategies.
To combat the dark side of AI – and instead leverage its superpowers to ride the next wave of digital maturity and build digital trust – we must ensure that ethics and human rights define the way forward and safeguard against AI abuse and misuse.
Morals need to be learned and trust will then be earned.
ISACA (www.isaca.org) is a global community advancing individuals and organizations in their pursuit of digital trust.