Why neglecting AI ethics is such risky business – and how to do AI right



Nearly 80 years ago, in July 1945, MH Hasham Premji founded Western India Vegetable Products Limited in Amalner, a town in the Jalgaon district of Maharashtra, India, located on the banks of the Bori River. The company began as a manufacturer of cooking oils.

In the 1970s, the company pivoted to IT and changed its name to Wipro. Over the years, it has grown to become one of India’s biggest tech companies, with operations in 167 countries, nearly a quarter of a million employees, and revenue north of $10 billion. The company is led by executive chairman Rishad Premji, grandson of the original founder.


Kiran Minnasandram, VP and CTO of Wipro FullStride Cloud

Image: Wipro

Today, Wipro describes itself as a “leading global end-to-end IT transformation, consulting, and business process services provider.” In this exclusive interview, ZDNET spoke with Kiran Minnasandram, VP and CTO of Wipro FullStride Cloud.

Also: Forget SaaS: The future is Services as Software, thanks to AI

He spearheads strategic technology initiatives and leads the development of forward-looking solutions. His primary role is to drive innovation and empower organizations with state-of-the-art technology.

With a focus on cloud computing, he designs and implements advanced cloud architectures that transform how businesses operate, optimizing operations, enhancing scalability, and fostering the flexibility to propel clients forward on their digital journeys.

Also: 7 leadership lessons for navigating the AI turbulence

As you might imagine, AI has become a major focus for the company. In this interview, we had the opportunity to discuss the importance of AI ethics and sustainability as they pertain to the future of IT.

Let’s dig in.

Company values

ZDNET: How do you define ethical AI, and why is it critical for businesses today?

Kiran Minnasandram: Ethical AI not only complies with the law but also aligns with the values we hold dear at Wipro. Everything we do is rooted in four pillars.

AI must be aligned with our values around the individual (privacy and dignity), society (fairness, transparency, and human agency), and the environment. The fourth pillar, technical robustness, encompasses legal compliance and safety.

ZDNET: Why do many businesses struggle with AI ethics, and what are the key risks they should address?

KM: The struggle often comes from the lack of a common vocabulary around AI. This is why the first step is to set up a cross-organizational strategy that brings together technical teams as well as legal and HR teams. AI is transformational and requires a corporate approach.

Second, organizations need to understand what the key tenets of their AI approach are. This goes beyond the law and encompasses the values they want to uphold.

Also: Is your business AI-ready? 5 ways to avoid falling behind

Third, they can develop a risk taxonomy covering the risks they foresee, including legal alignment, security, and the impact on the workforce.

ZDNET: How does AI adoption impact corporate sustainability goals, both positively and negatively?

KM: AI adoption has had, and will continue to have, a significant impact on corporate sustainability goals. On the positive side, AI can enhance operational efficiency by optimizing supply chains and improving resource management through more precise monitoring of energy and carbon consumption, as well as by streamlining data collection for regulatory reporting.

For example, AI can be used by manufacturing or logistics companies to optimize transportation routes, leading to reduced carbon emissions.
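
To make that concrete, here is a minimal, illustrative sketch (not Wipro's tooling) of the idea: a greedy nearest-neighbour heuristic shortens a delivery route, and an assumed emissions factor converts the kilometres saved into CO2 saved. The distances and the emissions figure are hypothetical.

```python
# Illustrative only: shorten a delivery route, then translate km saved into CO2 saved.

def nearest_neighbour_route(dist, start=0):
    """Greedy route over a symmetric distance matrix (list of lists, in km)."""
    unvisited = set(range(len(dist))) - {start}
    route, here = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[here][j])
        route.append(nxt)
        unvisited.remove(nxt)
        here = nxt
    return route

def route_length(dist, route):
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

# Hypothetical depot-and-three-stops distances in km.
dist = [
    [0, 12, 19, 8],
    [12, 0, 7, 15],
    [19, 7, 0, 11],
    [8, 15, 11, 0],
]

naive = [0, 1, 2, 3]                       # visit stops in the order listed
optimised = nearest_neighbour_route(dist)  # greedy re-ordering
km_saved = route_length(dist, naive) - route_length(dist, optimised)
CO2_PER_KM = 0.9  # assumed kg CO2 per km for a delivery truck
print(f"Route {optimised}: ~{km_saved * CO2_PER_KM:.1f} kg CO2 saved per run")
```

Production route optimizers use far stronger solvers, but the emissions arithmetic works the same way.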

Also: 5 quick ways to tweak your AI use for better results – and a safer experience

Conversely, the rapid development and deployment of AI are driving increased energy consumption and carbon emissions, as well as substantial water usage for cooling data centers. Training large AI models demands significant computational power, resulting in a larger carbon footprint.

Environmental impact 

ZDNET: How should enterprises balance the drive for AI innovation with environmental responsibility?

KM: As a starting point, enterprises will need to establish clear policies, principles, and guidelines on the sustainable use of AI. This creates a baseline for decisions around AI innovation and enables teams to make the right choices around the type of AI infrastructure, models, and algorithms they will adopt.

Additionally, enterprises need to establish systems to effectively track, measure, and monitor environmental impact from AI usage and demand this from their service providers.

We have worked with clients to evaluate current AI policies, engage internal and external stakeholders, and develop new principles around AI and the environment, and then to train and educate employees across functions so that this thinking is embedded in everyday processes.

Also: Want to win in the age of AI? You can either build it or build your business with it

By creating more transparency and accountability, companies can drive meaningful AI innovation while remaining cognizant of their environmental commitments. A significant number of cross-industry and cross-stakeholder groups are being set up to support enterprises in exploring the environmental dilemmas, measurement requirements, and impacts associated with AI innovation.

With an incredibly fast-moving agenda, learning from others and collaborating on a global stage is critical. Wipro has led various collaborative global efforts on AI and the environment alongside our clients. We are well-placed to help our clients navigate the regulatory landscape.

ZDNET: How are global regulations evolving to address ethical AI and sustainability concerns?

KM: AI has never existed in isolation. Privacy, consumer protection, security, and human rights legislation all apply to AI. In fact, data protection regulators play a key role in safeguarding individuals from the harms of AI. Consumer protection plays a key role when it comes to algorithmic pricing, for example, and non-discrimination legislation can support cases of algorithmic discrimination.

It is very important for organizations to understand how existing legislation applies to AI and upskill the workforce on how to embed legal protection, privacy, and security into the adoption of AI.

In addition to existing legislation, some AI-specific laws are being enacted. In Europe, the EU AI Act governs how AI products are placed on the market: the riskier the product, the more controls need to be wrapped around it.

In the US, individual states are legislating around AI, especially in the context of labor management, which is arguably one of the most complex areas of AI deployment.

Biggest misconception

ZDNET: What are the biggest misconceptions about AI ethics and sustainability, and how can businesses overcome them?

KM: The biggest misconception is that it is challenging to bring innovation and responsibility together. The reality is that responsible AI is the key to unlocking AI progress as it provides long-term sustainable innovation.

Also: How businesses are accelerating time to agentic AI value

Ultimately, companies and consumers will choose the products they trust. So, trust is the cornerstone for AI deployment. Companies that bring together innovation and trust are going to have a competitive edge.

ZDNET: How does Wipro FullStride Cloud support companies in aligning AI with ESG (environmental, social, and governance) goals?

KM: We start by developing responsible AI frameworks that ensure fairness, transparency, and accountability within AI models. We also leverage AI to track and report ESG metrics, and we support Green AI initiatives such as tools to measure and reduce AI's carbon footprint.

Also: AI agents aren’t just assistants: How they’re changing the future of work today

On the infrastructure side, we work with clients to optimize workloads and make energy-efficient use of data centers. We also work on industry-specific AI solutions for sectors like healthcare, finance, and manufacturing to meet ESG goals.

ZDNET: What are the most effective ways cloud solutions can reduce AI’s environmental footprint?

KM: Cloud solutions can support energy-efficient data centers by using renewables, optimizing cooling, and incorporating carbon-aware computing. AI model optimization is also possible through less energy-intensive techniques such as federated learning and model pruning.
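
As a rough illustration of the model-optimization point, here is a minimal sketch of magnitude pruning using PyTorch's built-in pruning utilities; the toy model and the 30% sparsity target are assumptions for the example, not a recommendation.

```python
# Minimal sketch: zero out the smallest-magnitude weights to shrink compute needs.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Prune the 30% smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

zeros = sum(int((m.weight == 0).sum()) for m in model.modules()
            if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in model.modules()
            if isinstance(m, nn.Linear))
print(f"Sparsity: {zeros / total:.0%}")  # roughly 30% of weights removed
```

Sparser models need less compute at inference time, which is where much of an AI system's lifetime energy use accrues.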

Also: As AI agents multiply, IT becomes the new HR department

You can align resources more closely with demand by using serverless and auto-scaling solutions to avoid over-provisioning. Cloud providers now offer carbon tracking and reporting dashboards, allowing you to measure and optimize your footprint. With multi-cloud and edge computing, you can further reduce data movement and process AI closer to the source.
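
One way to picture carbon-aware computing in code: defer a flexible batch job until grid carbon intensity drops below a threshold. The intensity feed below is a stand-in for illustration; a real deployment would query a cloud provider's or grid operator's API, and the threshold is an assumption.

```python
# Illustrative carbon-aware scheduling: run flexible work when the grid is "greener".
import random
import time

def grid_carbon_intensity() -> float:
    """Hypothetical feed returning gCO2 per kWh for the current region."""
    return random.uniform(150, 450)

def run_when_green(job, threshold=250.0, poll_seconds=1, max_polls=30):
    for _ in range(max_polls):
        intensity = grid_carbon_intensity()
        if intensity <= threshold:
            print(f"Running at {intensity:.0f} gCO2/kWh")
            return job()
        time.sleep(poll_seconds)
    print("No green window found; running anyway")
    return job()

run_when_green(lambda: sum(i * i for i in range(1_000_000)))  # placeholder workload
```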

Leveraging the cloud

ZDNET: How can cloud infrastructure be leveraged to embed ethical considerations into AI development?

KM: Cloud infrastructure offers powerful tools to help embed ethical considerations into AI development. Built-in AI ethics toolkits can support bias detection and fairness testing by identifying imbalances in training data and models. Cloud platforms also offer diversity-aware training tools to help ensure datasets are representative and inclusive, which is critical for developing responsible AI systems.
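
As a minimal illustration of what a fairness test checks, independent of any particular vendor toolkit, the sketch below compares selection rates across two groups and reports the demographic-parity gap; the predictions and group labels are made up.

```python
# Illustrative fairness check: compare selection rates across groups.
from collections import defaultdict

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]          # model decisions (1 = selected)
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

totals, positives = defaultdict(int), defaultdict(int)
for pred, grp in zip(predictions, groups):
    totals[grp] += 1
    positives[grp] += pred

rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap = {gap:.2f}")
# A gap well above ~0.1 is a common (if rough) trigger for deeper review.
```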

Also: The CTO vs. CMO AI power struggle – who should really be in charge?

You can also take advantage of cloud-based AI frameworks that offer explainability and transparency features to better understand how models make decisions. Secure and privacy-preserving AI development is supported through capabilities like differential privacy and encrypted processing, enabling responsible data handling from end to end.
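
For the privacy-preserving piece, here is a small sketch of the Laplace mechanism that underlies many differential-privacy implementations: noise calibrated to sensitivity and an epsilon budget is added to a count before it is released. The dataset and the epsilon value are illustrative.

```python
# Illustrative differential privacy: release a noisy count instead of the exact one.
import numpy as np

def private_count(values, predicate, epsilon=1.0, sensitivity=1.0):
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [34, 29, 41, 52, 38, 27, 45, 60]
print(f"Noisy count of users over 40: {private_count(ages, lambda a: a > 40):.1f}")
```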

Cloud services can further support ethical AI through automated compliance monitoring, helping ensure adherence to regulations such as GDPR and CCPA. Tools for model drift testing and hallucination detection are also available, making it easier to continuously monitor model performance and flag inaccurate or unreliable outputs over time.
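
Drift testing can be as simple as a statistical comparison between training data and live traffic. The sketch below uses a two-sample Kolmogorov-Smirnov test on a single feature; the synthetic data and the significance threshold are assumptions for the example.

```python
# Illustrative drift check: compare a feature's training vs. production distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.4, scale=1.1, size=5_000)  # shifted on purpose

stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={stat:.3f}); trigger review or retraining")
else:
    print("No significant drift detected")
```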

ZDNET: Why do some organizations struggle to measure AI’s sustainability impact, and how can cloud-based tools help?

KM: Many organizations struggle to measure AI’s sustainability impact due to the absence of standard metrics. Without a universal framework to quantify environmental effects, it becomes difficult to benchmark progress or compare across initiatives. Cloud-based tools can help bridge this gap by offering customizable dashboards and models that track carbon output across the AI lifecycle, from development through deployment.
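
One practical way to start closing that measurement gap is to instrument individual training or inference jobs. The sketch below assumes the open-source codecarbon package and uses a placeholder workload; a real pipeline would wrap actual training runs and ship the estimates to a dashboard.

```python
# Illustrative per-job emissions tracking, assuming the codecarbon package is installed.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="demo-training-job")
tracker.start()
try:
    _ = sum(i * i for i in range(5_000_000))  # placeholder for a real training run
finally:
    emissions_kg = tracker.stop()  # estimated kg CO2eq for the tracked span
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```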

Also: Navigating AI-powered cyber threats in 2025: 4 expert security tips for businesses

Real-time monitoring presents another challenge, as energy consumption associated with AI workloads can fluctuate significantly. Static reporting methods often miss these variations. Cloud platforms can offer dynamic, real-time tracking tools that adjust to shifting workloads and provide a more accurate view of energy usage.

Additionally, fragmented data visibility across cloud, on-premises, and edge environments complicates sustainability assessments. Cloud-native solutions can aggregate data from multiple sources into a single view, improving transparency and decision-making.

Some of AI’s environmental costs remain hidden. These extend beyond training to inference, storage, and compute scaling. Cloud tools can surface these lesser-known impacts by analyzing end-to-end usage patterns.

Regulatory and compliance gaps also add complexity, especially as ESG (environmental, social, and governance) reporting requirements vary by region. Cloud services can help manage this by automating region-specific compliance tracking.

Finally, cloud-based analytics can assist in navigating the trade-offs between cost, model performance, and sustainability, offering insights that support more balanced, responsible AI development.

ZDNET: What concrete steps can organizations take to improve AI transparency and accountability?

KM: First, train the workforce to use AI responsibly. Encourage employees to work with AI in a safe space where they can query and interrogate it.

Also: How Nvidia is helping upskill educators and prep students for the AI age

Second, set up a governance structure for AI, encompassing all aspects of the business, from procurement to HR, CISO, and risk management.

ZDNET: How does AI bias emerge, and what role do cloud-based frameworks play in mitigating it?

KM: Bias in AI can come from several sources, including training data that is unrepresentative or contains historical prejudices, as well as errors and inconsistencies in human-labeled datasets. A model trained on poor data may make decisions skewed toward particular cultural, corporate, or societal ethical frameworks, leading to inconsistent outcomes.

Also: AI for the world, or just the West? How researchers are tackling Big Tech’s global gaps

Legacy AI models trained on outdated assumptions and historical data may continue to propagate past biases. AI may also struggle with diverse dialects, regional contexts, or cultural nuances.

Cloud-based frameworks can help mitigate this by monitoring compliance with diverse regional regulations and ensuring fair AI model development through validation across diverse economic, social, and demographic groups. Cloud-based adaptive training processes can also rebalance datasets to prevent power-dynamic biases.
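
Rebalancing can be as simple as reweighting examples so under-represented groups carry proportionally more influence during training. The sketch below computes inverse-frequency weights; the group labels and counts are invented for illustration.

```python
# Illustrative rebalancing: inverse-frequency weights so each group contributes equally.
from collections import Counter

groups = ["urban"] * 800 + ["rural"] * 150 + ["remote"] * 50
counts = Counter(groups)
n, k = len(groups), len(counts)

weights = {g: n / (k * c) for g, c in counts.items()}
print(weights)  # e.g. {'urban': ~0.42, 'rural': ~2.22, 'remote': ~6.67}
```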

ZDNET: What governance strategies should enterprises implement to ensure responsible AI usage?

KM: The most important thing is to have a governance framework. Some organizations may have a separate AI governance structure, while others, like Wipro, embed it within their existing governance construct.

Also: The best free AI courses and certificates in 2025

It is very important to involve every corner of the organization. AI impact assessments are useful tools to embed legal protection, privacy, and robustness in the deployment of AI from the inception stage.

AI issues

What do you think about the growing emphasis on ethical and sustainable AI? Has your organization implemented any frameworks or policies to ensure responsible AI development?

How are you approaching the environmental impact of AI workloads, and are you using any cloud-based tools to help measure or reduce that footprint?

Do you think global regulations are keeping pace with AI innovation, or are companies being left to navigate the gray areas on their own? Let us know in the comments below.

Get the morning’s top stories in your inbox each day with our Tech Today newsletter.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.




