Cisco’s vision for AI policies that bolster beneficial uses and uphold human rights
It’s hard to go a day without hearing this: while artificial intelligence (AI) and machine learning (ML) are powering incredible new innovations, they are also creating new challenges. Because AI and ML can have a consequential impact on individuals and society, these technologies demand clear governance over how they are developed, deployed, and used.
In this blog post, I take stock of the global policy landscape and initiatives to regulate AI, and describe how Cisco upholds key principles in its AI-based offerings.
A Big Year Ahead for Global Artificial Intelligence Policy
The year 2022 is shaping up to be a momentous one for global policymaking on artificial intelligence. In the European Union (EU), policymakers are advancing their work on the AI Act, the world’s first initiative to apply broad-based regulation to AI systems, with a focus on high-risk AI; meetings across the EU institutions took place in March 2022. They are also considering updates to the EU’s product liability regime to address AI-specific concerns.
In the United States, the White House is leading an effort to develop an “AI Bill of Rights” in parallel with federal agency work on an AI Risk Management Framework. China is building on recent work related to ethical norms for AI and a trustworthy AI assessment framework, looking to incorporate similar concepts from the proposed EU regulations.
And the United Kingdom government will set out its own thinking on regulating AI while backing a new effort to establish global standards and best practices for AI development. These activities, together with other AI-related initiatives around the globe, will make for a busy year as industry, government, academia, and civil society come together to define the best policy approaches.
What Constitutes “High-Risk” AI?
The debate on how best to regulate AI technologies is not new, but it is far from over. One of the main points of contention in the European proposal is that the bulk of the provisions in the EU AI Act would apply to AI systems deemed to be “high-risk”. These include systems used in the context of law enforcement, as well as systems that serve as safety components in regulated products like medical devices or machinery.
But precisely what counts as a “high-risk” AI system continues to be debated, including the extent to which digital infrastructure controlled by AI systems should be considered “high-risk.” We fully support the notion of increasing regulatory obligations as risks increase. Doing so requires recognizing the vast category of “low-risk” ways that AI and ML are used across the industry, for example to manage traffic flows, devices, and security threats via Cisco DNA Center or Cisco Secure Network Analytics.
We will continue working to ensure that the AI Act and other forthcoming regulations recognize these risk distinctions and tailor their regimes accordingly.
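To make the tiered approach concrete, here is a minimal sketch of obligations scaling with risk, assuming a simplified four-tier model loosely based on the proposed AI Act; the tier names and obligation lists are illustrative simplifications, not the regulation’s actual text:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers, loosely modeled on the proposed EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g., safety components, law enforcement
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # e.g., traffic management, threat detection

# Illustrative mapping: regulatory obligations grow as risk grows.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited outright"],
    RiskTier.HIGH: [
        "conformity assessment",
        "risk management system",
        "human oversight",
        "logging and traceability",
    ],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.MINIMAL: [],  # no new obligations beyond existing law
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the regulatory obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]

# A network-analytics model that flags anomalous traffic would likely sit
# in the minimal tier; a biometric identification system would not.
print(obligations_for(RiskTier.MINIMAL))  # []
print(obligations_for(RiskTier.HIGH))     # ['conformity assessment', ...]
```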
And What About Facial Recognition?
If there is one application of machine learning that has been a global flashpoint in policy circles, it is facial recognition. Whether in the context of law enforcement, surveillance, judicial systems, or commercial applications, policymakers and advocates have expressed valid and salient concerns about this technology’s implications for human rights, privacy, security, efficacy, and fairness.
As in the case of defining “high-risk,” precise scoping is crucial when it comes to establishing policy. Within Webex by Cisco, for example, we use facial recognition to identify individuals participating in Webex meetings and display their name labels, to provide background blur and background replacement, and to optimize visual layouts on screen.
We never use facial recognition without customer and end-user opt-in, and we use a layered set of techniques and mitigations to ensure privacy, security, and accuracy. The facial recognition features are designed to make Webex meetings more inclusive, secure, and personalized, and to power the future of hybrid work.
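As a minimal sketch of what that layered, dual opt-in gating can look like in code (the names and structure here are hypothetical illustrations, not Webex’s actual implementation or API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentRecord:
    """Hypothetical per-user consent state (illustrative names only)."""
    customer_opted_in: bool  # organization-level (admin) opt-in
    user_opted_in: bool      # individual end-user opt-in

def facial_recognition_allowed(consent: ConsentRecord) -> bool:
    """The feature runs only when BOTH the customer and the end user opt in."""
    return consent.customer_opted_in and consent.user_opted_in

def match_to_directory(face_embedding) -> str:
    """Placeholder for a lookup against enrolled, consenting users."""
    return "Jane Doe"

def label_participant(consent: ConsentRecord, face_embedding) -> Optional[str]:
    """Produce a name label only when recognition is permitted; otherwise
    default to no label rather than identifying the user."""
    if not facial_recognition_allowed(consent):
        return None  # privacy-preserving default
    return match_to_directory(face_embedding)

# With only the organization opted in, no label is produced.
partial = ConsentRecord(customer_opted_in=True, user_opted_in=False)
assert label_participant(partial, face_embedding=None) is None
```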
We believe it is possible to craft policy frameworks that support these kinds of capabilities in a business context while applying increased controls and restrictions on the use of facial recognition in other contexts. We will be working with policymakers on understanding and drawing these lines as policies are further developed.
Transparency, Fairness, Accountability, Privacy, Security, and Reliability
Recognizing our responsibility to ensure beneficial uses of AI and ML, we recently announced our Responsible AI Framework, which is grounded in our principles of transparency, fairness, accountability, privacy, security, and reliability.
The Framework establishes design controls for AI and ML model development that are integrated with Cisco’s existing processes for security by design, privacy by design, and human rights by design; supports AI incident reporting; and relies on a governance committee of senior executives to oversee responsible AI at Cisco. This Framework addresses the key issues that, in our experience, underlie concerns across many AI policy and regulatory initiatives worldwide.
Our recently released Meraki Video (MV) Intelligence Training provides an example of how these principles and the Framework are incorporated into our design process. The Intelligence Training feature gives customers fine-grained control over whether and how their Meraki cameras contribute video footage to help us train the machine learning model that powers Meraki Video object recognition.
MV Intelligence Training was designed according to rigorous principles for training data quality, and it incorporates industry-leading security and privacy features. Putting our principles into practice, MV was deliberately designed without facial recognition or any other form of biometric recognition, because such identification in video security solutions carries a high potential for misuse, and the difficulty of performing it accurately can prevent it from being used safely.
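To illustrate the kind of fine-grained, opt-in control over training contributions described above, here is a minimal sketch; the class and field names are hypothetical and do not reflect the actual Meraki dashboard API:

```python
from dataclasses import dataclass, field

@dataclass
class TrainingContribution:
    """Hypothetical per-customer training opt-in settings (names illustrative)."""
    enabled: bool = False                            # default: contribute nothing
    cameras: set[str] = field(default_factory=set)   # explicit per-camera allow-list

    def may_contribute(self, camera_id: str) -> bool:
        """Footage feeds model training only when the customer has enabled
        contribution AND explicitly listed this camera."""
        return self.enabled and camera_id in self.cameras

# Usage: the customer opts in two lobby cameras and nothing else.
settings = TrainingContribution(enabled=True, cameras={"lobby-1", "lobby-2"})
assert settings.may_contribute("lobby-1")
assert not settings.may_contribute("warehouse-3")  # never opted in
```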
In our recent contribution <link to comments> to a White House consultation about AI-powered biometrics, we go into more depth about the safeguards we implement across our portfolio when using AI-powered biometric identification.
We will continue working with governments and other stakeholders around the world to shape AI policy and regulatory regimes to advance state-of-the-art responsible AI. As we leverage and iterate on our Responsible AI Framework, we will continue to channel what we learn into the process of developing policy frameworks that bolster beneficial uses of AI while mitigating harm.