5 ways to adopt responsible generative AI practices at work

Being able to catch up with a busy Slack channel once a day could also improve productivity and work-life balance, but those who make the plans and decisions should take responsibility for making sure AI-generated summaries, action items, and timescales are accurate.

AI tools that summarize calls with customers and clients can help managers supervise and train staff. That might be as useful for financial advisors as for call center workers, but tools that monitor employee productivity need to be used with empathy to avoid concerns about workplace surveillance.

User feedback and product reviews are helpful, but the sheer volume can be overwhelming, and nuggets of useful information might be buried pages deep.

Generative AI can classify, summarize, and categorize responses to give aggregate feedback that’s easier to absorb. In the long term, it’s easy to imagine a personal shopping assistant that suggests items you’d want to buy and answers questions about them, rather than leaving you to scroll through pages of reviews and comments. But again, businesses will need to be cautious about introducing tools that might surface offensive or defamatory opinions, or be too enthusiastic about filtering out negative reactions.

Generative AI tools can also read and summarize long documents and use the information to draft new ones. There are already tools like Docugami that promise to extract due dates and deliverables from contracts, and international law firm Allen & Overy is trialing a platform to help with contract analysis and regulatory compliance. Generating semi-structured documents like MoUs, contracts, or statements of work may speed up business processes and help you standardize some business terms programmatically, but expect to need a lot of flexibility and oversight.
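To make the contract-extraction idea concrete, here’s a minimal sketch using the OpenAI Python SDK. The model name, prompt, and JSON shape are illustrative assumptions, not a description of how Docugami or any other vendor actually works:

```python
# Minimal sketch: ask a hosted LLM to pull deliverables and due dates
# out of a contract as structured JSON. Assumes the official OpenAI
# Python SDK (pip install openai) and OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

def extract_obligations(contract_text: str) -> dict:
    """Ask the model for deliverables and due dates as structured JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use the model your vendor agreement covers
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": (
                 "Extract every deliverable and its due date from the contract. "
                 'Respond as JSON: {"obligations": '
                 '[{"deliverable": "...", "due_date": "YYYY-MM-DD"}]}'
             )},
            {"role": "user", "content": contract_text},
        ],
    )
    return json.loads(response.choices[0].message.content)
```

Even with structured output, treat the result as a first pass for a human to verify; a missed clause or an invented date is exactly why the oversight caveat above matters.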

5. Get over writer’s block, spruce up designs

You don’t have to turn your whole writing process over to an AI just to get help with brainstorming, copywriting, and creating images or designs. Office 365 and Google Docs will soon let you ask generative AI to create documents, emails, and slideshows, so you’ll want a policy on how these are reviewed for accuracy before they’re shared with anyone. Again, start with more constrained tasks and internal uses that you can monitor.

Generative AI can suggest what to write in customer outreach emails, thank-you messages, or warnings about logistical issues, right inside your email client or in a CRM like Salesforce, Zoho, or Dynamics 365, either as part of the platform or through a third-party tool. There’s also a lot of interest in using AI for marketing, but there are brand risks too. Treat these options only as a way to get started, not as the final version to send.

AI-generated text might not be perfect, but if you have a lot of blanks to fill, it’s likely better than nothing. Shopify Magic, for instance, can take basic product details and write consistent, SEO-tuned product descriptions for an online storefront, and once you have something, you can improve on it. Similarly, Reddit and LinkedIn use Azure Vision Services to create captions and alternative text for images to improve accessibility when members don’t add those themselves (a minimal captioning sketch follows below). If you have a large video library for training, auto-generated summaries might help employees make the most of their time.

Image generation from text can be extremely powerful, and tools like the new Microsoft Designer app put image diffusion models in the hands of business users who might balk at using a Discord server to access Midjourney and don’t have the expertise to use a Stable Diffusion plugin in Photoshop. But AI-generated images are also controversial, with issues ranging from deepfakes and uncanny valley effects to the source of training data and the ethics of using the works of known artists without compensation. Organizations will want a very clear policy on using generated images to avoid the more obvious pitfalls.
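To make the captioning use case concrete, here’s a minimal sketch using Microsoft’s azure-cognitiveservices-vision-computervision package. The endpoint, key, and image URL are placeholders, and this illustrates the general capability rather than how Reddit or LinkedIn integrate it:

```python
# Minimal sketch: auto-generate a caption that could serve as alt text.
# Assumes pip install azure-cognitiveservices-vision-computervision;
# endpoint, key, and image URL below are placeholders.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    CognitiveServicesCredentials("<your-key>"),              # placeholder key
)

description = client.describe_image(
    "https://example.com/product-photo.jpg",  # placeholder image URL
    max_candidates=1,
)
for caption in description.captions:
    # Surface the confidence score so a human can decide whether to keep it.
    print(f"{caption.text} (confidence: {caption.confidence:.2f})")
```

Keeping the confidence score visible fits the theme of this article: a low-confidence caption is a prompt for a person to write one, not something to publish automatically.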

Finding your own uses

As you can see, there are opportunities to benefit from generative AI in everything from customer support and retail to logistics and legal services: anywhere you want a curated interaction with a reliable information source.

To use it responsibly, start with natural language processing use cases such as classification, summarization, and text generation in non-customer-facing scenarios, where the output is reviewed by humans who have the expertise to spot and correct errors and false information. Look for an interface that makes that review easy and natural, rather than one that just accepts suggestions. It’ll be tempting to save time and money by skipping human involvement, but the damage to your business could be significant if what’s generated is inaccurate, irresponsible, or offensive.
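One way to keep that review step from being skipped is to build it into the workflow itself. The sketch below is purely illustrative; generate_draft() and send_message() are hypothetical stand-ins for a real model call and real delivery code:

```python
def generate_draft(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return f"[model-drafted reply to: {prompt}]"

def send_message(body: str, approved_by: str) -> None:
    # Hypothetical stand-in for real delivery code (email, CRM, etc.).
    print(f"Sent (approved by {approved_by}): {body}")

def review_and_send(prompt: str, reviewer: str) -> None:
    """AI drafts; nothing goes out until a named reviewer approves or edits it."""
    draft = generate_draft(prompt)
    print(f"--- Draft for review by {reviewer} ---\n{draft}")
    verdict = input("Approve (a), edit (e), or reject (anything else)? ").strip().lower()
    if verdict == "a":
        send_message(draft, approved_by=reviewer)
    elif verdict == "e":
        send_message(input("Corrected text: "), approved_by=reviewer)
    else:
        print("Draft discarded; nothing was sent.")
```

The point of the design is that approval is an explicit action tied to a named person, so accountability for what goes out stays with a human rather than the model.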

Many organizations are worried about leaking data into the models in ways that might help competitors. Google, Microsoft, and OpenAI have already published data usage policies saying that the data and prompts one company uses will only train that company’s own model, not the core model supplied to every customer. But you’ll still want guidance on what information staff can copy into public generative AI tools.
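That guidance can be reinforced with a simple technical guardrail. The sketch below is illustrative only; the patterns are far from exhaustive, and a real deployment would pair a policy like this with a proper data loss prevention tool:

```python
# Illustrative sketch: scrub obviously sensitive strings from a prompt
# before it leaves the organization. Patterns here are examples only;
# a real policy would cover names, account numbers, codenames, and more.
import re

REDACTIONS = {
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",     # email addresses
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",         # US social security numbers
    r"\b(?:\d[ -]?){13,16}\b": "[CARD]",       # likely payment card numbers
}

def scrub(prompt: str) -> str:
    """Replace obviously sensitive substrings before sending to a public tool."""
    for pattern, placeholder in REDACTIONS.items():
        prompt = re.sub(pattern, placeholder, prompt)
    return prompt

print(scrub("Summarize this: jane.doe@acme.com disputed a charge on 4111 1111 1111 1111"))
```

A filter like this catches careless mistakes, not determined misuse, which is why the written guidance for staff still matters.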

Vendors also say that users own the input and output of the models, which is a good idea in theory but may not reflect the complexity of copyright and plagiarism concerns with generative AI. Tools like ChatGPT don’t include citations, so you don’t know whether the text they return is correct or copied from someone else. Paraphrasing isn’t exactly plagiarism, but misappropriating an original idea or insight from someone else isn’t a good look for any business.

It’s also important for organizations to develop AI literacy and help staff become familiar with using and evaluating the output of generative AI. Start small, with areas that aren’t critical, and learn from what works.


