Valuing privacy and inclusion in software design
The following is an excerpt from the 2022 Cisco Purpose Report, published on December 8, 2022.
As the stewards of the data that make modern life possible, technology companies must earn customers’ and users’ trust. This means being mindful of how our products are made and used and taking steps to address potential negative impacts. Cisco strives to design and build technology in ways that respect human rights, promote inclusion, and protect privacy and security—so that everyone can benefit from a more connected world.
When it comes to Powering an Inclusive Future for All, respect for human rights is a fundamental innovation principle. Privacy, security, and inclusion must be central in the design methodology, especially when it comes to artificial intelligence and machine learning. Training data sets often determine how product design and user experience take shape—the adage “goodness in, goodness out” certainly applies. Developers must take steps to ensure these data sets are robust, diverse, and representative of all users. Failing to do so can result in inaccuracies, disappointing user experiences, and unintended bias.
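One way teams can act on this is to audit how well a training set covers the conditions their users actually bring to the product before any model is trained. The sketch below is purely illustrative and not drawn from the report: it assumes a hypothetical list of per-image metadata records and a hypothetical attribute such as lighting condition, and simply flags subgroups that fall below a chosen share of the data.

```python
from collections import Counter

def audit_representation(samples, attribute, min_share=0.05):
    """Report each subgroup's share of the data for one metadata
    attribute and flag groups below a minimum share threshold.

    `samples` is a list of per-image metadata dicts; `attribute`
    is the metadata key to audit (names here are hypothetical).
    """
    counts = Counter(s[attribute] for s in samples if attribute in s)
    total = sum(counts.values())
    report = {}
    for group, n in sorted(counts.items()):
        share = n / total
        report[group] = {
            "count": n,
            "share": round(share, 3),
            "underrepresented": share < min_share,
        }
    return report

# Made-up metadata: three lighting conditions, one clearly underrepresented.
dataset = (
    [{"lighting": "bright"}] * 700
    + [{"lighting": "dim"}] * 280
    + [{"lighting": "backlit"}] * 20
)
for group, stats in audit_representation(dataset, "lighting").items():
    print(group, stats)
```

A check like this only surfaces gaps in whatever attributes are recorded, so the choice of which attributes to track is itself part of designing for representativeness.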
Take, for example, Webex virtual backgrounds, which are designed to hide users’ surroundings to enhance privacy, security, professionalism, and fun. Early research versions of this feature, which reflected the state of the art at the time, did not perform well for certain hair textures and styles, or under certain lighting conditions. In some cases, they inadvertently filtered out portions of a user’s appearance. Our engineers recognized that broader, more diverse training data sets, representative of the Webex user base, were required, and addressed this during the design phase, before the feature was released. By applying data that was anonymized, ethically sourced, user-contributed, open-source, provided with explicit consent, and otherwise respectful of individual privacy, we made the training data more robust. This resulted in more representative images and algorithms, and a much better, more inclusive user experience for all.
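The report does not describe the evaluation mechanics, but the kind of check implied here is measuring segmentation quality separately for each user subgroup rather than relying on a single overall average. The following is a minimal, hypothetical sketch of that idea: it computes intersection-over-union for a person mask per group, with synthetic data and invented group names standing in for real evaluation sets.

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection-over-union between two boolean person masks."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return intersection / union if union else 1.0

def iou_by_group(predictions, labels, groups):
    """Average IoU per subgroup, so a drop for any one group is
    visible instead of being hidden inside one overall average."""
    scores = {}
    for pred, true, group in zip(predictions, labels, groups):
        scores.setdefault(group, []).append(iou(pred, true))
    return {g: float(np.mean(v)) for g, v in scores.items()}

# Tiny synthetic example: two groups of 4x4 masks, one poor prediction.
rng = np.random.default_rng(0)
labels = [rng.random((4, 4)) > 0.5 for _ in range(6)]
predictions = [m.copy() for m in labels]
predictions[5] = np.zeros((4, 4), dtype=bool)  # simulate a failure case
groups = ["group_a"] * 3 + ["group_b"] * 3
print(iou_by_group(predictions, labels, groups))
```

Reporting the metric per group makes an inclusivity gap show up as a concrete number that can block release, which is the practical effect the Webex example describes.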
In 2022, we built on the learnings from the Webex team and launched our Responsible AI Framework based on six principles: Transparency, Fairness, Accountability, Privacy, Security, and Reliability. Our Responsible AI Working Group continuously drives adherence to these principles by putting new technologies through Responsible AI Impact Assessments, offering guidance on how to manage risk to human rights, and providing accountability via incident reporting of human rights, privacy, and security concerns.
To learn more about the progress we’re making to power a more inclusive future, visit our Cisco ESG Reporting Hub, where you can read our 2022 Cisco Purpose Report.