In this contributed article, Ashley Leonard, president and CEO of Syxsense, reflects on some of the most pertinent issues affecting the adoption of generative AI in security. These include the question of who owns the AI output, how to conduct quality assurance to mitigate unwanted results, and companies’ overall preparedness to manage workforce displacement. The article also pulls real-life scenarios from across the industry and provides considerations to help businesses navigate generative AI adoption without missing out on the technology altogether.
Three Considerations Before Adding Generative AI Capabilities to Your Security Stack
Find out what drives today’s most successful platform businesses
In ‘The enterprise guide to platform thinking: What it can do for your business’, we explore how to achieve platform success and realize the full spectrum of business, operational, and customer value platforms can deliver. Inside, we dive into: – How platform thinking has evolved over the last five years, and the role that cultural, […]
The Buyer’s Guide to Cloud Security Solutions for Startups
Cloud security is a critical consideration; there’s no denying that. From the intellectual property and software services you sell to sensitive data on customers and users, any threat can put your reputation on the line. While public cloud resources are key enablers of business, they come with the adoption of a shared responsibility model that […]
Generative AI governance: How to reduce information security risks
Generative AI tools, particularly Large Language Models (LLMs) such as ChatGPT, offer immense potential for solving all kinds of business problems, from creating documents to generating code. They can also introduce security risks in two novel ways: leaking information and introducing code vulnerabilities. This article explores the ways these challenges often arise across organizations and […]
Navigating Data Lake Challenges: Governance, Security, and GDPR Compliance
In this contributed article, Coral Trivedi, Product Manager at Fivetran, discusses how enterprises can get the most value from a data lake. The article covers automation, security, pipelines, and GDPR compliance issues.
NetSPI Debuts ML/AI Penetration Testing, a Holistic Approach to Securing Machine Learning Models and LLM Implementations
NetSPI, the global leader in offensive security, today debuted its ML/AI Pentesting solution to bring a more holistic and proactive approach to safeguarding machine learning model implementations. The first-of-its-kind solution focuses on two core components: identifying, analyzing, and remediating vulnerabilities in machine learning systems such as Large Language Models (LLMs), and providing grounded advice and real-world guidance to ensure security is considered from ideation to implementation.
Concerned About Migrating to the Cloud?
Watch this video to learn more about cloud services and how a cloud service solution just might be the answer to all your challenges when it comes to the storage, accessibility, capacity, and security of your data and applications.
Splunk Introduces New AI Offerings to Accelerate Detection, Investigation and Response Across Security and Observability
Splunk Inc. (NASDAQ: SPLK), the cybersecurity and observability leader, today announced Splunk AI, a collection of new AI-powered offerings to enhance its unified security and observability platform. Splunk AI combines automation with human-in-the-loop experiences, so organizations can drive faster detection, investigation and response while controlling how AI is applied to their data. Leaning into its lineage of data visibility and years of innovation in AI and machine learning (ML), Splunk continues to enrich the customer experience by delivering domain-specific insights through its AI capabilities for security and observability.
7 Key Security Criteria for Data Tools: A Buyer’s Guide
In this contributed article, Petr Nemeth, founder and CEO of Dataddo, discusses how data security is becoming a major concern for buyers of data tools. Here are seven criteria to help evaluate the suitability of any data tool for your data stack.
WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks
SlashNext published a research report detailing a unique module based on ChatGPT that was created by cybercriminals with the explicit intent of leveraging generative AI for nefarious purposes. The findings have widespread implications for the security community: bad actors are not only manipulating generative AI platforms like ChatGPT for malicious purposes, but also creating entirely new platforms based on the same technology, specifically designed to do their bidding.