Valuing privacy and inclusion in software design
The following is an excerpt from the 2022 Cisco Purpose Report, published on December 8, 2022.
As the stewards of the data that make modern life possible, technology companies must earn customers’ and users’ trust. This means being mindful of how our products are made and used and taking steps to address potential negative impacts. Cisco strives to design and build technology in ways that respect human rights, promote inclusion, and protect privacy and security—so that everyone can benefit from a more connected world.
When it comes to Powering an Inclusive Future for All, respect for human rights is a fundamental innovation principle. Privacy, security, and inclusion must be central in the design methodology, especially when it comes to artificial intelligence and machine learning. Training data sets often determine how product design and user experience take shape—the adage “goodness in, goodness out” certainly applies. Developers must take steps to ensure these data sets are robust, diverse, and representative of all users. Failing to do so can result in inaccuracies, disappointing user experiences, and unintended bias.
Take, for example, Webex virtual backgrounds, which are designed to hide users’ surroundings to enhance privacy, security, professionalism, and fun. Early research versions of this feature, which reflected the state of the art at the time, did not perform well for certain hair textures, hairstyles, and lighting conditions. In some cases, they inadvertently filtered out portions of a user’s appearance. Our engineers recognized that broader, more diverse training data sets, representative of the Webex user base, were required, and addressed this during the design phase, before the feature was released. By using training data that was anonymized, ethically sourced, contributed by users with explicit consent, drawn from open-source collections, and otherwise respectful of individual privacy, we made the data set more robust. The result was more representative images and algorithms, and a much better, more inclusive user experience for all.
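One concrete way teams catch the kind of representation gap described above is a simple audit of how a training set breaks down across annotation groups before model training begins. The sketch below is illustrative only (the group tags, threshold, and function name are hypothetical, not Cisco's actual tooling): it flags any group whose share of the data falls below a minimum fraction.

```python
from collections import Counter

def audit_representation(labels, min_share=0.10):
    """Flag groups whose share of a labeled data set is below min_share.

    labels: iterable of group tags (e.g., hair-texture or lighting annotations)
    min_share: minimum acceptable fraction per group (assumed threshold)
    Returns {group: share} for every underrepresented group.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {
        group: count / total
        for group, count in counts.items()
        if count / total < min_share
    }

# Hypothetical annotation tags for a segmentation training set:
# 1,000 images, heavily skewed toward one hair texture.
tags = ["coily"] * 40 + ["straight"] * 900 + ["wavy"] * 60
flagged = audit_representation(tags, min_share=0.10)
# flagged reports "coily" (4%) and "wavy" (6%) as underrepresented,
# signaling that more diverse data is needed before training.
```

An audit like this is only a starting point; it assumes the data is already annotated with the relevant group labels, which is itself the hard part of making a data set representative.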
In 2022, we built on the learnings from the Webex team and launched our Responsible AI Framework based on six principles: Transparency, Fairness, Accountability, Privacy, Security, and Reliability. Our Responsible AI Working Group continuously drives adherence to these principles by putting new technologies through Responsible AI Impact Assessments, offering guidance on how to manage risk to human rights, and providing accountability via incident reporting of human rights, privacy, and security concerns.
To learn more about the progress we’re making to power a more inclusive future, visit our Cisco ESG Reporting Hub, where you can read our 2022 Cisco Purpose Report.