What Does Closed-Door Meeting With AI Industry Leaders Mean for Business?


Some of the United States’ top tech executives and generative AI development leaders met with senators last Wednesday in a closed-door, bipartisan meeting about possible federal regulations for generative artificial intelligence. Elon Musk, Sam Altman, Mark Zuckerberg, Sundar Pichai and Bill Gates were some of the tech leaders in attendance, according to reporting from the Associated Press. TechRepublic spoke to business leaders about what to expect next in terms of government regulation of generative artificial intelligence and how to remain flexible in a changing landscape.


AI summit included tech leaders and stakeholders

Each participant had three minutes to speak, followed by a group discussion led by Senate Majority Leader Chuck Schumer and Republican Sen. Mike Rounds of South Dakota. The goal of the meeting was to explore how federal regulations might respond to the benefits and challenges of rapidly developing generative AI technology.

Musk and former Google CEO Eric Schmidt discussed concerns about generative AI posing existential threats to humanity, according to the Associated Press’ sources inside the room. Gates discussed AI’s potential to help solve problems of hunger, while Zuckerberg focused on the question of open source vs. closed source AI models. IBM CEO Arvind Krishna pushed back against the idea of AI licenses. CNN reported that NVIDIA CEO Jensen Huang was also present.

All of the forum attendees raised their hands in support of the government regulating generative AI, CNN reported. While no specific federal agency was named to take on the task of regulating generative AI, several attendees suggested the National Institute of Standards and Technology.

Although the meeting included civil rights and labor group representatives, some senators were dissatisfied that it skewed toward tech moguls. Sen. Josh Hawley, R-Mo., who supports licensing for certain high-risk AI systems, called the meeting a “giant cocktail party for big tech.”

“There was a lot of care to make sure the room was a balanced conversation, or as balanced as it could be,” Deborah Raji, a researcher at the University of California, Berkeley who specializes in algorithmic bias and attended the meeting, told the AP. (Note: TechRepublic contacted Senator Schumer’s office for comment about this AI summit and had not received a reply by the time of publication.)

U.S. regulation of generative AI is still developing

So far, the U.S. federal government has issued suggestions for AI makers, including watermarking AI-generated content and putting guardrails against bias in place. Companies including Meta, Microsoft and OpenAI have attached their names to the White House’s list of voluntary AI safety commitments.

Many states have bills or legislation in place or in progress related to a variety of applications of generative AI. Hawaii has passed a resolution that “urges Congress to begin a discussion considering the benefits and risks of artificial intelligence technologies.”

Questions of copyright

Copyright is another factor being considered in legal rules around AI. In February, the U.S. Copyright Office determined that AI-generated images cannot be copyrighted, although parts of stories created with AI art generators can be.

Raul Martynek, chief executive officer of data center solutions provider DataBank, emphasized that copyright and privacy are “two very clear problems stemming from generative AI that legislation could mitigate.” Generative AI consumes massive amounts of energy, along with data about people and copyrighted works.

“Given that states from California to New York to Texas are forging ahead with state privacy legislation in the absence of unified federal action, we may soon see the U.S. Congress act to bring the U.S. on par with other jurisdictions that have more comprehensive privacy legislation,” said Martynek.

SEE: The European Union’s AI Act bans certain high-risk practices such as using AI for facial recognition. (TechRepublic) 

He brought up the case of Barry Diller, chairman and senior executive of media conglomerate IAC, who suggested companies using AI content should share revenue with publishers.

“I can see privacy and copyright as the two issues that could be regulated first when it ultimately happens,” Martynek said.

Ongoing AI policy discussions

In May 2023, the Biden-Harris administration created a roadmap for federal investments in AI development, made a request for public input on the topic of AI risks and benefits, and produced a report on the problems and advantages of AI in education.

“Can Congress work to maximize AI’s benefits, while protecting the American people — and all of humanity — from its novel risks?” Schumer wrote in June.

“The policymakers must ensure vendors realize if their service can be used for a darker purpose and likely provide the legal path for accountability,” said Rob T. Lee, a technical consultant to the U.S. government and chief curriculum director and faculty lead at the SANS Institute, in an email to TechRepublic. “Trying to ban or control the development of services could hinder innovation.”

He compared artificial intelligence to biotech or pharmaceuticals, industries that can be harmful or beneficial depending on how they are used. “The key is not stifling innovation while ensuring ‘accountability’ can be created,” Lee said.

Generative AI’s impact on cybersecurity for businesses

Generative AI will impact cybersecurity in three main ways, Lee suggested:

  • Data integrity problems.
  • Conventional crimes such as theft or tax evasion.
  • Vulnerability exploits such as ransomware.

“Even if policymakers get involved more — all of the above will still occur,” he said.

“The value of AI is overstated and not well understood, but it is also attracting a lot of investment from both good actors and bad actors,” Blair Cohen, founder and president of identity verification firm AuthenticID, said in an email to TechRepublic. “There is a lot of discussion over regulating AI, but I am sure the bad actors won’t follow those regulations.”

On the other hand, Cohen said, AI and machine learning may also be critical to defending against malicious use of the hundreds or thousands of digital attack vectors open today.

Business leaders should stay up to date on cybersecurity in order to protect against both AI-driven and traditional digital threats. Lee noted that the speed at which generative AI products are being developed creates its own dangers.

“The data integrity side of AI will be a challenge, and vendors will be rushing to get products to market (and) not putting appropriate security controls in place,” Lee said.

Policymakers might learn from corporate self-regulation

With large companies self-regulating some of their uses of generative AI, the tech industry and governments will learn from each other.

“So far, the U.S. has taken a very collaborative approach to generative AI legislation by bringing in the experts to workshop needed policies and even simply learn more about generative AI, its risk and capabilities,” said Dan Lohrmann, field chief information security officer at digital solutions provider Presidio, in an email to TechRepublic. “With companies now experimenting with regulation, we are likely to see legislators pull from their successes and failures when it comes time to develop a formal policy.”

Considerations for business leaders working with generative AI

Regulation of generative AI will move “reasonably slowly” while policymakers learn about what generative AI can do, Lee said.

Others agree that the process will be gradual. “The regulatory landscape will evolve gradually as policymakers gain more insights and expertise in this area,” predicted Cohen.

64% of Americans want generative AI to be regulated

In a survey published in May 2023, global customer experience and digital solutions provider TELUS International found that 64% of Americans want generative AI algorithms to be regulated by the government. In addition, 40% of Americans do not believe companies using generative AI in their platforms are doing enough to stop bias and false information.

Businesses can benefit from transparency

“Importantly, business leaders should be transparent and communicate their AI policies publicly and clearly, as well as share the limitations, potential biases and unintended consequences of their AI systems,” said Siobhan Hanna, vice president and managing director of AI and machine learning at TELUS International, in an email to TELUS International’s partner TechRepublic.

Hanna also suggested that business leaders should have human oversight over AI algorithms, be sure that the information conveyed by generative AI is appropriate for all audiences and address ethical problems through third-party audits.

“Business leaders should have clear standards with quantitative metrics in place measuring the accuracy, completeness, reliability, relevance and timeliness of its data and its algorithms’ performance,” Hanna said.
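Standards like these can be enforced with simple automated checks. The following is a rough sketch of that idea, not a TELUS International tool: the record fields, thresholds and helper functions are all hypothetical, chosen only to show how metrics such as completeness and timeliness could be quantified per data batch.

```python
from datetime import datetime, timezone

def completeness(records, required_fields):
    """Fraction of records in which every required field is populated."""
    if not records:
        return 0.0
    complete = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )
    return complete / len(records)

def timeliness(records, max_age_days, now=None):
    """Fraction of records updated within the last max_age_days."""
    if not records:
        return 0.0
    now = now or datetime.now(timezone.utc)
    fresh = sum(
        1 for r in records
        if (now - r["updated_at"]).days <= max_age_days
    )
    return fresh / len(records)

# Example batch: one complete, recent record and one stale record
# with a missing email field.
records = [
    {"name": "Ada", "email": "ada@example.com",
     "updated_at": datetime(2023, 9, 1, tzinfo=timezone.utc)},
    {"name": "Grace", "email": "",
     "updated_at": datetime(2022, 1, 1, tzinfo=timezone.utc)},
]

scores = {
    "completeness": completeness(records, ["name", "email"]),
    "timeliness": timeliness(records, max_age_days=30,
                             now=datetime(2023, 9, 10, tzinfo=timezone.utc)),
}
print(scores)  # {'completeness': 0.5, 'timeliness': 0.5}
```

Tracking scores like these over time, and alerting when they dip below an agreed threshold, is one way to turn a qualitative policy into a measurable one.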

How businesses can be flexible in the face of uncertainty

It is “incredibly challenging” for businesses to keep up with changing regulations, said Lohrmann. Companies should consider using GDPR requirements as a benchmark for their policies around AI if they handle personal data at all, he said. No matter what regulations apply, guidance and norms around AI should be clearly defined.

“Keeping in mind that there is no widely accepted standard in regulating AI, organizations need to invest in creating an oversight team that will evaluate a company’s AI projects not just around already existing regulations, but also against company policies, values and social responsibility goals,” Lohrmann said.

When decisions are finalized, “Regulators will likely emphasize data privacy and security in generative AI, which includes protecting sensitive data used by AI models and safeguarding against potential misuse,” Cohen said.


