Security News
In short, layering generative AI over your existing enterprise content demands strict attention to information sensitivity labelling, classification and governance. It is vital to get information governance under tight control before letting AI search and generative services loose on your information.
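As a loose illustration of that principle, the minimal sketch below shows how content might be gated by its classification label before being handed to an AI indexing step. The field names (such as sensitivity_label) and the permitted-label policy are assumptions for illustration, not drawn from any specific governance product.

```python
# Minimal sketch: gate documents by sensitivity label before AI indexing.
# Label values, field names, and policy here are illustrative assumptions.
from dataclasses import dataclass

# Labels assumed safe to expose to an AI search/generation index (example policy).
ALLOWED_LABELS = {"public", "internal"}

@dataclass
class Document:
    doc_id: str
    text: str
    sensitivity_label: str  # e.g. "public", "internal", "confidential", "restricted"

def filter_for_ai_indexing(documents: list[Document]) -> list[Document]:
    """Return only documents whose sensitivity label permits AI indexing."""
    permitted = []
    for doc in documents:
        if doc.sensitivity_label.lower() in ALLOWED_LABELS:
            permitted.append(doc)
        else:
            # In a real deployment this decision would be logged for governance review.
            print(f"Skipping {doc.doc_id}: label '{doc.sensitivity_label}' not permitted")
    return permitted

if __name__ == "__main__":
    corpus = [
        Document("hr-001", "Employee handbook", "internal"),
        Document("fin-042", "Quarterly board pack", "confidential"),
    ]
    for doc in filter_for_ai_indexing(corpus):
        print("Indexing:", doc.doc_id)
```

The point of the gate is simply that unlabelled or highly classified material never reaches the AI index; any real implementation would hook into the organisation's actual labelling scheme.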
A threat actor is using a PowerShell script that was likely created with the help of an artificial intelligence system such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's Copilot. In the observed attack chain, accessing a malicious shortcut file triggered PowerShell to run a remote script.
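To make the mechanics concrete, here is a small, hypothetical detection sketch from the defender's side: it flags Windows shortcut (.lnk) files whose raw contents suggest a PowerShell command that fetches and runs a remote script. The crude byte scan and the marker lists are assumptions for illustration; real tooling parses the LNK format properly rather than scanning bytes.

```python
# Illustrative sketch (not from the reported incident): flag .lnk files whose
# contents hint at a PowerShell download-and-run command.
from pathlib import Path

# Substrings that, together, commonly indicate a remote-script cradle (assumed list).
EXECUTION_HINTS = [b"iex", b"invoke-expression", b"downloadstring", b"-enc"]

def is_suspicious_lnk(path: Path) -> bool:
    """Return True if the shortcut's bytes contain PowerShell, a URL, and an execution hint."""
    data = path.read_bytes()
    # LNK string data is often stored as UTF-16LE; dropping NUL bytes lets this
    # simplistic scan match ASCII-range text in either encoding.
    text = data.replace(b"\x00", b"").lower()
    has_powershell = b"powershell" in text
    has_url = b"http://" in text or b"https://" in text
    has_exec_hint = any(hint in text for hint in EXECUTION_HINTS)
    return has_powershell and has_url and has_exec_hint

if __name__ == "__main__":
    for lnk in Path(".").rglob("*.lnk"):
        if is_suspicious_lnk(lnk):
            print("Suspicious shortcut:", lnk)
```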
The need for vCISO services is growing. SMBs and SMEs are dealing with more third-party risk, tighter regulatory demands and more stringent cyber insurance requirements than ever before. However,...
According to a recent Gartner survey, widespread GenAI adoption has resulted in a scramble to provide audit coverage for potential risks arising from the technology's use. In this Help Net Security video, Thomas Teravainen, a Research Specialist at Gartner, discusses how AI-related risks have seen the biggest increases in audit plan coverage in 2024.
Under the MoU, the U.K. and U.S. will now "align their scientific approaches" and work together to "accelerate and rapidly iterate robust suites of evaluations for AI models, systems, and agents." This action upholds the commitments established at the first global AI Safety Summit last November, where governments from around the world accepted their role in safety testing the next generation of AI models. The MoU primarily relates to moving forward on plans made by the AI Safety Institutes in the U.K. and U.S. The U.K.'s research facility was launched at the AI Safety Summit with three primary goals: evaluating existing AI systems, performing foundational AI safety research, and sharing information with other national and international actors.
Hackers are using Facebook advertisements and hijacked pages to promote fake artificial intelligence services, such as Midjourney, OpenAI's Sora and ChatGPT-5, and DALL-E, to infect unsuspecting users with password-stealing malware. In one of the cases seen by researchers at Bitdefender, a malicious Facebook page impersonating Midjourney amassed 1.2 million followers and remained active for nearly a year before it was eventually taken down.
New research has found that artificial intelligence (AI)-as-a-service providers such as Hugging Face are susceptible to two critical risks that could allow threat actors to escalate privileges,...
The study also found that AI integration into cybersecurity is not just a concept but a practical reality for many, with 67% of respondents stating that they have tested AI specifically for security purposes. On readiness, 48% of professionals expressed confidence in their organization's ability to execute a strategy for leveraging AI in security, with 28% feeling reasonably confident and 20% very confident.
AI deepfakes were not on the risk radar of organisations just a short time ago, but in 2024 they are rising up the ranks. Aon's Global Risk Management Survey, for example, does not mention them, though organisations are concerned about business interruption or damage to their brand and reputation, which could be caused by AI. Huber said the risk of AI deepfakes is still emergent and is morphing as AI itself changes at a rapid pace.