4 ways to ask hard questions about emerging tech risks
Start with your core values
Your organization’s core values spell out the behaviors it expects of itself and of all its employees. They can also serve as a guide to what not to do. Google’s “Don’t be evil,” which became Alphabet’s “Do the right thing,” was intended to guide the organization at times when other organizations were less scrupulous.
Core values are a starting point, but you also need to examine each proposed action and initiative, whether built in-house or bought off-the-shelf, to explore where each good intention may lead. The common advice is to start small, with lower-complexity and lower-risk projects, and build experience before taking on larger, more impactful initiatives. You can also borrow Amazon’s technique of asking whether a decision or action is reversible or irreversible: if it’s reversible, there’s clearly less risk in moving ahead.
Interrogate transformative technology
This means going beyond the typical business and technical questions related to a project and, where needed, asking legal and ethical questions as well. While innovation often gets non-productive pushback due to internal politics (for instance, “not invented here” syndrome), a productive form of pushback is asking probing questions: What’s the impact of mistakes? Will an AI-informed decision simply be wrong, or could it become catastrophically wrong? What level of careful piloting or real-world testing can help address the unknowns and lower the risk? What’s an acceptable level of risk when it comes to cybersecurity, society, and opportunity?
Non-profits such as the Future of Life Institute study transformative technologies such as AI and biotechnology with the goal of steering them toward benefiting life and away from extreme large-scale risks. These and other organizations can be valuable resources for raising awareness of the risks at hand.
Establish guardrails at the organizational level
While guardrails may not be applicable to the global AI military arms race, they can be beneficial at a more granular level, within specific use cases and industries. Guardrails in the form of responsible procurement practices, guidelines, targeted recommendations, and regulatory initiatives are widespread, and much is already available. Lawmakers are also stepping up: the recent EU AI Act proposes different rules for different risk levels, with the aim of reaching an agreement by the end of this year.
A simple guardrail at the organizational level is to craft your own corporate use policy and to sign on to industry agreements as appropriate. For AI and other areas, a corporate use policy can educate users about potential risk areas, and hence manage risk, while still encouraging innovation.