An intelligent application future hinges on using responsible AI
As enthusiasm for AI builds and regulation gains momentum, investing in “doing generative AI” responsibly and ethically is not just the right thing to do — it will give companies a competitive advantage. Responsible AI helps mitigate operational, financial and competitive risks. Plus, data shows that companies leveraging responsible AI practices may be better positioned to attract and retain talent.
Unfortunately, there’s no industry standard for what responsible AI should look like. Stanford researchers found that companies building AI test their models against different benchmarks and testing methods, such as TruthfulQA or ToxiGen. This complicates efforts to compare models and to analyze the risks and limitations of the models that enterprises are deploying or using. Improving AI posture can start now. Here are a few things for leaders to consider:
Start with dynamic and practical frameworks to close the information gap
Understanding generative AI and how it works is key to defining responsible AI and improving AI practices. Many CISOs are nervous about deploying AI due to how broadly it expands their attack surface and risk exposure. Until they are more comfortable with what they can do to defend against these risks, their organizations are unable to pursue valuable generative AI use cases. That’s why some colleagues began developing an AI security framework designed to help CISOs understand AI system components and associated security risks.
This isn’t limited to CISOs. Leaders excited about generative AI also need to understand where things can go wrong if they don’t build responsibly. As the stakes grow, leaders must get clear on the distinct financial, environmental and ethical risks that come with this space. But they can’t commit to responsible innovation without first identifying what that actually looks like for their organization. For example, per that same Stanford study, vulnerabilities in foundation models are getting more complex: researchers keep finding new strategies for eliciting harmful behavior from models, strategies that most general red-teaming efforts do not cover.
Make security and safety the number-one priority
The core tenets of responsible generative AI can be broken down into three main categories: ethical, governance and design.
- Ethical AI means that models and data follow the cultural values of those developing and deploying them. In most contexts, that means they are human-centric, fair and safe. AI should be created to help people, prioritize the human experience and promote equitable outcomes.
- Governance imposes ongoing accountability and observability on AI-based applications. This means that a person or group retains responsibility for outcomes and can make any needed changes over time. The AI must comply with the data privacy, copyright, intellectual property and AI-model regulations of the jurisdictions in which it operates. It should minimize the collection of private information, track how that data is used and anticipate the need to delete it in the future.
- Responsibly designed AI means that AI is built to be interpretable and explainable to the greatest extent possible. It is efficient (e.g., using smaller models that require less compute at training or inference) and therefore more sustainable. The system should be built to be resilient to technical interruptions or adversarial attacks.
Security and safety are central to each pillar. Controls that protect customer data and company IP are table stakes, as is resiliency to technical interruptions or adversarial attacks. Organizations must deploy AI systems with encryption, network controls, data governance and auditing so that the entire AI workflow is protected and monitored for vulnerabilities or breaches. Organizations that neglect these controls risk system infiltration, data breaches or exfiltration, business disruption and more, with massive legal, financial, strategic and reputational damage on the line from these types of attacks.
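As one concrete illustration of the auditing piece, the minimal sketch below wraps a model call with an access check and an audit trail. It is illustrative only: `call_model` and `is_authorized` are hypothetical stand-ins for whatever model endpoint and access-control check an organization actually uses.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")

def audited_completion(user_id: str, prompt: str, call_model, is_authorized) -> str:
    """Wrap a model call with an access check and an audit trail."""
    # Access control: block the request before it ever reaches the model.
    if not is_authorized(user_id):
        audit_log.warning(json.dumps({"user": user_id, "event": "denied"}))
        raise PermissionError("User is not authorized for this AI workflow")

    response = call_model(prompt)

    # Audit: log a hash of the prompt rather than the raw text, keeping an
    # auditable trail without copying sensitive data into the logs.
    audit_log.info(json.dumps({
        "user": user_id,
        "time": datetime.now(timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_chars": len(response),
    }))
    return response
```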
Curate, screen and layer to create custom models best suited for the organization
While generative AI technology continues to evolve, thoughtful regulation is necessary to uphold responsible AI development across the board. In the interim, there are a few steps companies can take to improve their AI security posture and lay the groundwork for better, more responsible practices:
- Curate. Companies can opt to use smaller, focused sets of data for training or fine-tuning that are carefully adjusted to remove risks specific to their business. AI is only as good as the data it uses, so good data hygiene and governance are the essential first step for curation.
- Screen. AI and ML teams can apply filtering and risk-based scoring (e.g., toxicity labeling) to both the prompt and the output; screening both sides catches risky inputs as well as risky responses.
- Layer. Teams can limit what actually reaches the base model through prompt engineering and grounding. Retrieval-augmented generation (RAG) is a common and cost-effective way to constrain responses to a provided corpus of text, which acts as a grounding layer between users and the model (a sketch combining screening and layering follows this list).
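To make the screen and layer steps concrete, here is a minimal sketch of a guarded RAG pipeline. The names `toxicity_score`, `retrieve_passages` and `call_model` are hypothetical placeholders for whatever classifier, vector store and model endpoint a team actually uses, and the threshold is illustrative rather than a recommendation.

```python
TOXICITY_THRESHOLD = 0.5  # risk-based cutoff; tune per use case

def answer_with_guardrails(question: str, toxicity_score, retrieve_passages, call_model) -> str:
    # Screen the prompt before it ever reaches the model.
    if toxicity_score(question) > TOXICITY_THRESHOLD:
        return "This request cannot be processed."

    # Layer: retrieval-augmented generation grounds the model in an approved
    # corpus instead of letting it answer from open-ended training data.
    passages = retrieve_passages(question, top_k=3)
    prompt = (
        "Answer using only the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        "Context:\n" + "\n".join(passages) + f"\n\nQuestion: {question}"
    )
    response = call_model(prompt)

    # Screen the output as well, since generations can still drift.
    if toxicity_score(response) > TOXICITY_THRESHOLD:
        return "The generated response was withheld by the safety filter."
    return response
```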
From an enterprise use perspective, intentional model choice should underscore all of this. Shared, off-the-shelf offerings are, in many ways, insufficient for the enterprise in regard to responsible AI principles. For example, consider how a generalized model trained on broad, web-based data would fare in a healthcare setting. Just as a person on the street would be unlikely to understand doctors’ reports, a large general-purpose model isn’t customized to the level necessary to help doctors with specialized tasks.
Consider two healthcare tasks: recommending post-operative care protocols and predicting post-operative hospital readmittance. A custom model could be trained on curated health records to generate post-operative instructions based on other patients’ histories. To do this, patient names and other identifying information must be suppressed to ensure privacy. Since physicians often have their own unique way of giving instructions, RAG could be employed to further tailor the language of the response. Doctors should also review all generated instructions for accuracy and make any necessary changes before they go to the patient. The model could then continue to be tuned with updated, verified instructions from physicians.
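As one illustration of that de-identification step, the sketch below strips common identifiers from a free-text record before it is used for training or retrieval. It is a deliberately minimal, regex-based example with invented patterns; real clinical de-identification requires far more, such as named-entity models, expert review and validation against regulatory requirements.

```python
import re

# Placeholder patterns for a few common identifier formats (illustrative only).
PATTERNS = {
    "[NAME]": re.compile(r"\b(?:Mr\.|Mrs\.|Ms\.|Dr\.)\s+[A-Z][a-z]+\b"),
    "[MRN]": re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def deidentify(record: str) -> str:
    """Replace common identifiers in a free-text record with placeholder tokens."""
    for placeholder, pattern in PATTERNS.items():
        record = pattern.sub(placeholder, record)
    return record

print(deidentify("Mr. Smith, MRN: 00123456, discharged 04/12/2024. Call 555-123-4567."))
# -> "[NAME], [MRN], discharged [DATE]. Call [PHONE]."
```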
Separately, hospital readmission is typically treated as a classification problem rather than a generative one. It can be handled by a far less complex model that offers greater transparency and explainability at far less cost.
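As a rough sketch of what that simpler approach can look like, the example below frames readmission as a classification task over tabular features. The feature names and data are invented purely for illustration; any real model would be trained on properly curated, de-identified records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: age, length of stay (days), number of prior admissions.
X = np.array([
    [72, 9, 3],
    [55, 3, 0],
    [64, 7, 2],
    [48, 2, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = readmitted within 30 days

model = LogisticRegression().fit(X, y)

# Coefficients are directly inspectable, which is part of the transparency
# argument for choosing a simpler model over a generative one here.
print(dict(zip(["age", "length_of_stay", "prior_admissions"], model.coef_[0])))
print(model.predict_proba([[60, 5, 1]]))  # predicted probability of readmission
```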
Identifying a model that makes sense for the business objective (avoiding the harmful environmental impacts associated with creating and using unnecessarily large hyperscale models) is central to responsible, secure AI use.
Responsible controls for tomorrow and beyond
Generative AI is moving fast, but there’s no need to break things. The advancement of generative AI relies on building trust in intelligent applications through responsible practices in the deployment and use of the technology.
As regulation begins to crystallize, companies must stay ahead. Maintain data hygiene and governance best practices to ensure that outputs are high-quality and reflect the technology’s intended use. Keep the ethical, governance and design principles at the center of all responsible AI deployment decisions with the goal of creating robust and resilient systems that maintain accountability and trust.