- This simple Gmail trick gave me another 15GB of storage for free - and I didn't lose any files
- Why Oura Ring 4 is ZDNET's product of the year - besting Samsung, Apple, and others in 2024
- Infostealers Dominate as Lumma Stealer Detections Soar by Almost 400%
- How to generate your own music with the AI-powered Suno
- What is an IT consultant? Roles, types, salaries, and how to become one
Spotlight on DeepKeep.ai
DeepKeep, the leading provider of AI-Native Trust, Risk, and Security Management (TRiSM), empowers large corporations that rely on AI, GenAI, and LLM technologies to manage risk and protect growth. Our model-agnostic, multi-layer platform ensures AI security and trustworthiness from the R&D phase of machine learning models through to deployment. This includes comprehensive risk assessment, prevention, detection, monitoring and mitigation.
“DeepKeep’s technology and vision ensure the responsible and secure development, deployment, and use of AI technologies,” says Rony Ohayon, CEO and Founder of DeepKeep. “We provide AI-native security and trustworthiness that safeguard AI throughout its entire lifecycle, allowing businesses to adopt AI confidently while protecting commercial and consumer data.”
DeepKeep Dashboard:
AI is becoming essential for businesses and everyday life. In 2023, 35% of businesses adopted AI, and 90% of leading businesses supported and invested in AI for competitive advantage. As the adoption of LLMs and generative AI surges across diverse applications and industries, organizational attack surfaces expand, introducing unique threats and weaknesses. New risks associated with LLMs go beyond traditional cyber-attacks and include Prompt Injection, Jailbreak, and PII Leakage, as well as the lack of trustworthiness due to biases, fairness, and vulnerabilities.
Gartner’s new TRiSM category is a perfect fit for DeepKeep, as it ensures AI model governance, trustworthiness, fairness, reliability, robustness, efficacy, and data protection. This includes solutions and techniques for model interpretability and explainability, AI data protection, model operations, and adversarial attack resistance.
DeepKeep’s unique use of Generative AI to secure Generative AI sets it apart from competitors like Hidden Layer and Robust Intelligence. We leverage GenAI to protect LLMs and computer vision models throughout the entire AI lifecycle. Our AI-native security solutions ensure businesses adopt AI safely, protecting both commercial and consumer data.
DeepKeep’s expertise includes computer vision models, large language models (LLM) and multimodal scenarios. We prioritize implementing both trustworthiness and security to enable synergies equaling more than the sum of the parts, and also address both digital and physical threats, such as facial recognition and object detection, to ensure comprehensive protection.
DeepKeep raised $10M in seed funding in a round led by Canadian-Israeli VC Awz Ventures. Our roadmap includes expanding into multilingual natural language processing (NLP). As we collaborate with multinational companies globally, there is growing demand for support in multiple languages, with an initial focus on Japanese, driven by our partnerships with Japanese firms.
DeepKeep recently conducted an extensive evaluation of Meta’s LlamaV2 7B LLM, summarized with the following weaknesses and strengths:
- The LlamaV2 7B model is highly susceptible to both direct and indirect Prompt Injection (PI) attacks, with a majority of test attacks succeeding when exposing the model to contexts containing injected prompts.
- The model is vulnerable to Adversarial Jailbreak attacks, provoking responses that violate ethical guidelines, with tests revealing a significant reduction in the model’s refusal rate under such scenarios.
- The model is highly susceptible to Denial-of-Service (DoS) attacks, with prompts containing transformations like word replacement, character substitution, and order switching leading to excessive token generation.
- The model demonstrateד a high propensity for data leakage across diverse datasets, including finance, health, and generic PII.
- The model has a significant tendency to hallucinate, challenging its reliability.
- The model often opts out of answering questions related to sensitive topics like gender and age, suggesting it was trained to avoid potentially sensitive conversations rather than engage with them in an unbiased manner.
DeepKeep’s evaluation of data leakage and PII management demonstrates the model’s struggle to balance user privacy with the utility of information provided. However, Meta’s LlamaV2 7B LLM shows a remarkable ability to identify and decline harmful content, boasting a 99% refusal rate in our tests. Yet, our investigations into hallucinations indicate a significant tendency to fabricate responses, challenging its reliability. Overall, the LlamaV2 7B model showcases strengths in task performance and ethical commitment, with areas for improvement in handling complex transformations, addressing bias, and enhancing security against sophisticated threats.
Dr. Rony Ohayon is the CEO and Founder of DeepKeep, the leading provider of AI-Native Trust, Risk, and Security Management (TRiSM). He has 20 years of experience within the high-tech industry with a rich and diverse career spanning development, technology, academia, business, and management. He has a Ph.D. in Communication Systems Engineering from Ben-Gurion University, a Post-Doctorate from ENST France, an MBA, and more than 30 registered patents in his name. Rony was the CEO and Founder of DriveU, where he oversaw the inception, establishment, and management. Additionally, he founded LiveU, a leading technology solutions company for broadcasting, managing, and distributing IP-based video content, where he also served as CTO until the company was acquired. In the education realm, Rony was a senior faculty member at the Faculty of Engineering at Bar-Ilan University (BIU), where he founded the field of Computer Communication and taught courses about algorithms, distributed computing, and cybersecurity in networks.
About the Author
Dan K. Anderson, CEO and Co-Founder Mark V Security.
Dan currently serves as a vCISO and On-Call Roving reporter for CyberDefense Magazine. Dan has spent his life developing and implementing communications between systems and developing systems and applications in Military, Healthcare, and Mining. He has a background in Electrical Engineering and Chemistry with emphasis in Healthcare Informatics, BSEE, MS Computer Science, MBA Entrepreneurial focus, and has specialized in Information Security and Assurance, earning his Certified Information System Auditor (CISA), Certified in Risk and Information Systems Control (CRISC), both from the Information Systems Audit and Control Association (ISACA). Additional certifications include: Certified Business Continuity Lead Auditor (CBCLA), Certified Ethical Hacker (C|EH), Payment Card Industry Internal Security Assessor (ISA and PCIP), and Information Technology Infrastructure Library (ITIL v3). Winner Top Global CISO of the year 2023.
Dan has worked for Healthcare IT Vendors such as Cerner, GE, and IDX, and consults globally in Information Systems Security, Regulatory Compliance, Information Systems Audit, and Intellectual Property Assurance.
Some of Dan’s work includes consulting premier teaching hospitals such as Stanford Medical Center, Harvard’s Boston Children’s Hospital, University of Utah Hospital, and large Integrated Delivery Networks such as Sutter Health, Catholic Healthcare West, Kaiser Permanente, Veteran’s Health Administration, and Intermountain Healthcare.
Dan is a Board member, Past President, and Academic Liaison Director of the Utah chapter of the Information Systems Audit and Control Association, (ISACA), a Board member of UtahSec.org, a Board member and Past President of FBI Infragard Salt Lake City Chapter, member of FBI Citizen’s Academy Alumni Association, and member of the Security Technical Committee of Health Level Seven (HL7). Board Member, Center for Excellence in Higher Education Program Advisory Committee. Board Member, Utah Valley University Cyber Security Program Community Advisory Board. Board Member University of Utah Eccles School of Business Masters in Information Systems (MSIS) Program Advisory Board. Member BlackHat Network team. Healthcare Customer Advisory Board Member, Proofpoint. IEEE 2612 Cyber Medical Device Conformance founding member. 2023 Winner Global CISO of the Year.
Dan has served in positions as President, CEO, CIO, CISO, CTO, and Director for various companies, is currently CEO and Co-Founder of Mark V Security, Chief Information Security Officer, and Senior Management Consultant for Spectra Consulting Group, Current Cyber Advisor Board member for Graphite Health, and Former Chief Information Security and Privacy Officer for Lifescan Global, Inc.
In his spare time Dan has previously volunteered as an Ice Hockey coach for over 14 years in various youth hockey associations in Utah, High School-Midget Major AA travel teams, earning USA Hockey’s highest coaching level 5 Master Coach. Current volunteer efforts in building the future of infosec security professionals through University Board work, involvement in the local hacking scene, and mentoring students and co-workers.
Dan lives in Littleton, Colorado and Salt Lake City, Utah
Dan can be reached online at (EMAIL, TWITTER, etc..) dan.anderson@markvsecurity, @Z0lton, and at [email protected]