Can Security Experts Leverage Generative AI Without Prompt Engineering Skills?

Professionals across industries are exploring generative AI for various tasks — including creating information security training materials — but will it truly be effective? Brian Callahan, senior lecturer and graduate program director in information technology and web sciences at Rensselaer Polytechnic Institute, and Shoshana Sugerman, an undergraduate student in the same program, presented the results of their experiment on this topic at ISC2 Security Congress in Las Vegas in October. Their experiment involved creating cyber training…

Generative AI in Security: Risks and Mitigation Strategies

Generative AI became tech’s fiercest buzzword seemingly overnight with the release of ChatGPT. Two years later, Microsoft is using OpenAI foundation models and fielding questions from customers about how AI changes the security landscape. Siva Sundaramoorthy, senior cloud solutions security architect at Microsoft, often answers these questions. The security expert provided an overview of generative AI — including its benefits and security risks — to a crowd of cybersecurity professionals at ISC2 Security Congress in Las Vegas…

Apple Joins Voluntary U.S. Government Commitment to AI Safety

Apple is the latest addition to the list of public U.S. companies that made voluntary commitments to AI regulations, the White House announced on July 26. The commitments, first announced in September 2023, include vows to publicly disclose AI capabilities, to watermark AI content and more. These commitments set a public standard for the country’s largest AI makers in an effort to reduce deception and other novel, unsafe practices that could stem from realistic-looking AI…

OpenAI Secrets Stolen in 2023 After Internal Forum Was Hacked

The online forum OpenAI employees use for confidential internal communications was breached last year, anonymous sources have told The New York Times. Hackers lifted details about the design of the company’s AI technologies from forum posts, but they did not infiltrate the systems where OpenAI actually houses and builds its AI. OpenAI executives announced the incident to the whole company during an all-hands meeting in April 2023, and also informed the board of directors. It…

OpenAI, Anthropic AI Research Reveals More About How LLMs Affect Security and Bias

Because large language models operate using neuron-like structures that may link many different concepts and modalities together, it can be difficult for AI developers to adjust their models’ behavior. If you don’t know which neurons connect which concepts, you won’t know which neurons to change. On May 21, Anthropic published a remarkably detailed map of the inner workings of the fine-tuned version of its Claude AI, specifically the Claude 3 Sonnet…
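To make the problem concrete, here is a minimal toy sketch of why behavior edits need a concept-to-unit map: a tiny random network stands in for an LLM layer, a probe locates the hidden units that respond to a "concept," and only then can those units be ablated. The network, probe inputs and ablation step are illustrative assumptions, not Anthropic's dictionary-learning method.

```python
# Toy sketch: find the hidden units most tied to a "concept", then ablate them.
# A tiny random MLP stands in for an LLM layer (illustrative assumption only).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))    # input -> hidden weights
W2 = rng.normal(size=(8, 4))     # hidden -> output weights

def forward(x, ablate=()):
    h = np.maximum(x @ W1, 0.0)  # hidden activations (ReLU)
    for i in ablate:             # zero out the selected hidden units
        h[i] = 0.0
    return h @ W2

# Probe inputs that do / don't carry the "concept" (extra energy in dims 0-3).
concept = rng.normal(size=(200, 16)); concept[:, :4] += 3.0
baseline = rng.normal(size=(200, 16))

h_concept = np.maximum(concept @ W1, 0.0).mean(axis=0)
h_baseline = np.maximum(baseline @ W1, 0.0).mean(axis=0)
top_units = np.argsort(h_concept - h_baseline)[-2:]   # units most linked to the concept

x = rng.normal(size=16); x[:4] += 3.0                 # one input carrying the concept
print("concept-linked units:", top_units)
print("output before ablation:", forward(x).round(2))
print("output after  ablation:", forward(x, ablate=top_units).round(2))
```

Without the probing step, there is no principled way to pick which units to zero out, which is the gap Anthropic's interpretability map aims to close.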

Some Generative AI Company Employees Pen Letter Wanting ‘Right to Warn’ About Risks

Some current and former employees of OpenAI, Google DeepMind and Anthropic published a letter on June 4 asking for whistleblower protections, more open dialogue about risks and “a culture of open criticism” in the major generative AI companies. The Right to Warn letter illuminates some of the inner workings of the few high-profile companies that sit in the generative AI spotlight. OpenAI holds a distinct status as a nonprofit trying to “navigate massive risks” of…

Combatting Deepfakes in Australia: Content Credentials is the Start

There is growing consensus on how to address the challenge of deepfakes, generated through technologies such as AI, in media and business. Earlier this year, Google announced that it was joining the Coalition for Content Provenance and Authenticity as a steering committee member — other organisations in the C2PA include OpenAI, Adobe, Microsoft, AWS and the RIAA. With growing concern about AI misinformation and deepfakes, IT professionals will want to pay close attention to the…
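For readers unfamiliar with the mechanics, the sketch below shows the general idea behind content provenance: binding claims about an asset to its exact bytes so that later edits are detectable. It is a simplified, assumption-laden stand-in (a shared-key HMAC and an ad hoc manifest), not the actual C2PA/Content Credentials manifest format or certificate-based signing.

```python
# Simplified illustration of tamper-evident provenance metadata.
# NOT the real C2PA manifest or signing scheme; a shared demo key stands in
# for certificate-based signatures.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # hypothetical key for the sketch

def make_manifest(asset_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims to the exact bytes of an asset."""
    manifest = {"asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
                "claims": claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(asset_bytes: bytes, manifest: dict) -> bool:
    """Reject the manifest if the asset or the claims were altered."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and unsigned["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest())

image = b"...raw image bytes..."
m = make_manifest(image, {"generator": "ExampleAI v1", "created_with_ai": True})
print(verify(image, m))         # True: asset and claims intact
print(verify(image + b"x", m))  # False: asset was modified after signing
```

The real standard adds signed certificate chains and an embedded manifest store, but the detect-any-change property it provides is the same one this toy demonstrates.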

OpenAI's GPT-4 Can Autonomously Exploit 87% of One-Day Vulnerabilities

The GPT-4 large language model from OpenAI can exploit real-world vulnerabilities without human intervention, a new study by University of Illinois Urbana-Champaign researchers has found. Other models and tools the researchers tested, including GPT-3.5, open-source LLMs and vulnerability scanners, were not able to do this. A large language model agent — an advanced system based on an LLM that can take actions via tools, reason, self-reflect and more — running on GPT-4 successfully exploited 87% of “one-day” vulnerabilities when provided with…
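The study's key ingredient is the agent loop rather than the raw model, so a toy version of that loop may help. The stubbed "model," the single benign lookup tool and the placeholder identifier below are assumptions for illustration only; the researchers' GPT-4-based agent and its tooling are not reproduced here.

```python
# Toy sketch of an LLM-agent loop (reason -> act via a tool -> observe -> repeat).
# The "model" is a hard-coded stub and the only tool is a harmless lookup.
from typing import Callable

def lookup(term: str) -> str:
    """A benign stand-in tool the agent may call."""
    kb = {"CVE-2024-XXXX": "placeholder advisory entry"}
    return kb.get(term, "no entry found")

TOOLS: dict[str, Callable[[str], str]] = {"lookup": lookup}

def stub_model(history: list[str]) -> str:
    """Stands in for an LLM: decides the next action from the transcript."""
    if not any(line.startswith("OBSERVATION") for line in history):
        return "ACTION lookup CVE-2024-XXXX"
    return "FINAL summarised what the lookup returned"

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"TASK {task}"]
    for _ in range(max_steps):
        decision = stub_model(history)
        history.append(decision)
        if decision.startswith("FINAL"):
            return decision
        _, tool_name, arg = decision.split(" ", 2)  # e.g. "ACTION lookup CVE-..."
        history.append(f"OBSERVATION {TOOLS[tool_name](arg)}")
    return "stopped: step limit reached"

print(run_agent("summarise the advisory for CVE-2024-XXXX"))
```

The loop structure, not any single completion, is what lets an agent read an advisory, pick a tool, inspect the result and decide the next step, which is why the agent succeeded where a bare model prompt did not.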

NVIDIA GTC Keynote: Blackwell Architecture Will Accelerate AI Products in Late 2024

NVIDIA’s newest GPU platform is the Blackwell (Figure A: the NVIDIA Blackwell architecture, image via NVIDIA), which companies including AWS, Microsoft and Google plan to adopt for generative AI and other modern computing tasks, NVIDIA CEO Jensen Huang announced during the keynote at the NVIDIA GTC conference on March 18 in San Jose, California. Blackwell-based products will enter the market from NVIDIA partners worldwide in late 2024. Huang announced a long lineup of additional…

Microsoft’s Security Copilot Enters General Availability

Microsoft Security Copilot, also referred to as Copilot for Security, will be in general availability starting April 1, the company announced today. Microsoft revealed that pricing for Security Copilot will start at $4/hr, calculated based on usage. At a press briefing on March 7 at the Microsoft Experience Center in New York, we saw how Microsoft positions Security Copilot as a way for security personnel to get real-time assistance with their work and…
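Because billing is usage-based, a back-of-the-envelope estimate can help with budgeting. The sketch below assumes the $4 hourly rate applies to each provisioned compute unit for every hour of the month; the unit count is hypothetical and the billing granularity is an assumption beyond what the announcement states.

```python
# Rough monthly cost estimate for usage-based $4/hr pricing.
# Assumptions: rate is billed per provisioned compute unit per hour,
# units stay provisioned all month; the unit count is hypothetical.
HOURLY_RATE_USD = 4.0
provisioned_units = 3      # hypothetical provisioning level
hours_in_month = 730       # average hours per month

monthly_estimate = HOURLY_RATE_USD * provisioned_units * hours_in_month
print(f"Estimated monthly cost: ${monthly_estimate:,.2f}")  # $8,760.00
```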
