Artificial Intelligence: driving innovation while safeguarding ethics and privacy – Cisco Blogs


The phone in your hand, the network it runs on, the public transportation app you’re using to commute… It’s likely that each of them contains a touch of Artificial Intelligence (AI) to make your experience more efficient, seamless, secure, and personalized.

At Cisco, we develop innovative services that advance healthcare, enhance education, enable remote work, improve government services to citizens, expand accessibility, and strengthen cybersecurity. We deliver technological advances that help us go beyond what is manually possible to secure a more sustainable future for all. For example, Cisco has invested in AI technologies such as virtual assistants, noise removal, real-time translation from English into 100+ languages, and speech enhancement in collaboration solutions such as Webex.

In line with Cisco’s purpose, we choose to use technology responsibly to power an inclusive future for all. We recognize that ethical issues arise with the design and use of AI, which is why it’s so important to proactively identify and address these risks. As with any technology, the issue lies mostly in how it is used or implemented. Yet there is a lot we can do as businesses to get it right from the very beginning, at the design stage, by building in privacy, security, and human rights.

Regulating AI

The European Commission recently published a proposal to regulate artificial intelligence. It would set a new legal framework to foster the development and uptake of AI while ensuring a high level of protection for public interests and for individuals’ fundamental rights and freedoms as enshrined in Union law.

There is no doubt that AI needs to be guided by regulation, both to ensure the technologies can be trusted and to make sure they never compromise fundamental rights such as privacy, equality, safety, and security.

At Cisco, we believe AI can be a positive force in powering an inclusive future for all and should only be used where it benefits the user and society. We fully support the European Commission’s proposal to ban subliminal techniques and uses that might exploit vulnerable persons. Furthermore, we are pleased to see the Commission’s approach to ensuring the quality of data sets and algorithms so as to remove potential discrimination; again, making sure technology is developed and used to benefit users and society and to power a more inclusive future.

Defining “High-Risk AI”

We are pleased to see that the Commission’s proposal supports the twin goals of promoting AI and promoting trust in the technology, and that it takes a proportionate approach in targeting high-risk applications.

Given the relative novelty of the technology and the rapid pace of its development, the proposed definitions of “high-risk AI” will no doubt require further attention to provide greater clarity for end users, businesses, and researchers.

In cases where AI could – without the right approach – present potential risks for citizens, we agree that there is a role for policy makers to set rules with a view to maximizing trust.

The necessary focus on facial recognition and biometric identification

Much of the media commentary on the Commission’s proposal has focused on how it approaches biometric identification and surveillance in public spaces. We believe the Commission’s proposal is a good step towards shaping a trusted use of AI, and we support its cautious approach to facial recognition and biometric identification.

At Cisco, we have decided to make use of facial recognition only in applications where we systematically ensure the user gives prior explicit consent to be identified, such as simplifying access to workstations in Cisco offices, just as you would to unlock your phone.

We believe privacy is a fundamental human right, and our technology reflects that. We don’t think of privacy as an add-on. On the contrary, we strive to build in privacy and security from the very start, in the way we design our products. That is why the limited uses of biometric identification in our solutions require the user to opt in, rather than having it enabled by default.

We ensure that any data is collected, stored, and processed in line with data protection laws, and Cisco’s policies go beyond that: we apply privacy engineering and our secure development lifecycle from the outset in the development of our products, services, and enterprise data processing activities.

Collaboration

The history of AI development has been one of collaboration – among many different actors from businesses, universities, research organizations, and beyond – and between countries around the world.

It will be important to further promote that approach if Europe is to realize its potential in AI. Ongoing collaboration between policy makers and businesses will be essential as this nascent technology develops further and new use cases, opportunities and challenges are unearthed.

Collaboration will also be needed between Europe and other parts of the world that share its values and ambitions on AI. We see the ongoing assessment of both the United States and the European Union’s AI strategies and rules as a bright window of opportunity to align and collaborate. As the US and the EU look at reviving transatlantic collaboration, they should seize this opportunity to take concrete actions and build stronger ties around shared values.

Trust and AI regulation

At Cisco, we understand just how important it is to build and maintain trust in our technology. We are actively and consistently taking action to identify the most responsible way to develop and realize the potential of AI, and to build in strong safeguards on security, human rights, and privacy at all stages of our technology’s development and operation.

Cisco looks forward to working with the European Commission, Parliament, and Council on this important agenda so that AI helps uphold well-being, fairness, accountability, privacy, responsibility, justice, and sustainability.
