Security News

Experts from security firm F5 have argued that cyber criminals are unlikely to send new armies of generative AI-driven bots into battle with enterprise security defences in the near future, because proven social engineering attack methods will be easier to mount using generative AI. The release of generative AI tools such as ChatGPT has caused widespread fear that the democratization of powerful large language models could help bad actors around the world supercharge their efforts to hack businesses and steal or hold sensitive data hostage. F5, a multicloud security and application delivery provider, tells TechRepublic that generative AI will drive growth in the volume and capability of social engineering attacks in Australia, as threat actors deliver a higher volume of better-quality attacks to trick IT gatekeepers.

76% of cybersecurity professionals believe the world is very close to encountering malicious AI that can bypass most known cybersecurity measures, according to Enea. AI is anticipated to bolster threat detection and vulnerability assessments, with intrusion detection and prevention identified as the domain most likely to benefit. Deep learning for detecting malware in encrypted traffic holds the most promise, with 48% of respondents anticipating a positive impact. Cost savings emerged as the top KPI for measuring the success of AI-enhanced defenses, while 72% of respondents believe AI automation will play a key role in alleviating cybersecurity talent shortages.

In today's rapidly evolving technological landscape, the integration of Artificial Intelligence (AI) and Large Language Models (LLMs) has become ubiquitous across various industries. This wave of...

"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords "MyHotKeyHandler," "Keylogger," and "macOS," this is a message from ChatGPT...

It's widely believed that Jaan Tallinn, the wealthy longtermist who co-founded the most prominent centers for the study of AI safety, has made dismissive noises about climate change because he thinks it pales in comparison with far-future unknown unknowns like risks from AI. The technology historian David C. Brock calls these fears "wishful worries," that is, "problems that it would be nice to have, in contrast to the actual agonies of the present." A signal moment came when Timnit Gebru, a co-leader of Google's AI ethics team, was dismissed for pointing out the risks of developing ever-larger AI language models.

92% of AI team leaders at leading-edge organizations said their AI initiatives are generating value, according to Wallaroo. "Leading edge ML enterprises have a number of lessons to teach other organizations embarking on their own ML production journeys," said Vid Jain, CEO of Wallaroo.

IBM has unveiled the next evolution of its managed detection and response service offerings with new AI technologies, including the ability to automatically escalate or close up to 85% of alerts, helping to accelerate security response timelines for clients. The managed services are delivered by IBM Consulting's global team of security analysts via IBM's advanced security services platform, which applies multiple layers of AI and contextual threat intelligence from the company's vast global security network, helping automate away the noise while quickly escalating critical threats.
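To make the concept concrete, here is a purely hypothetical Java sketch of threshold-based triage, the general pattern behind automatically closing or escalating alerts. The Alert record, the riskScore field, and the 0.8 cutoff are invented for illustration and do not reflect IBM's actual implementation.

```java
import java.util.List;

// Hypothetical illustration of AI-assisted alert triage; all names,
// scores, and thresholds here are invented for the example.
public class AlertTriageSketch {
    record Alert(String id, double riskScore) {}

    static String triage(Alert alert) {
        // High-risk detections go to a human analyst; low-risk alerts are
        // closed automatically, which is how a service can dispose of a
        // large share of its queue without manual review.
        return alert.riskScore() >= 0.8 ? "ESCALATE" : "AUTO_CLOSE";
    }

    public static void main(String[] args) {
        List<Alert> queue = List.of(
                new Alert("a-101", 0.93),
                new Alert("a-102", 0.12));
        queue.forEach(a -> System.out.println(a.id() + " -> " + triage(a)));
    }
}
```

In practice, a cutoff like this would be tuned continuously against analyst feedback so that auto-closed alerts rarely contain true positives.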

Elections around the world are facing an evolving threat from foreign actors, one that involves artificial intelligence. Countries trying to influence each other's elections entered a new era in 2016, when the Russians launched a series of social media disinformation campaigns targeting the US presidential election.

The TorchServe flaws discovered by the Oligo Security research team can lead to unauthorized server access and remote code execution on vulnerable instances. Due to insecure deserialization in the SnakeYAML library, attackers can upload a model with a malicious YAML file to trigger remote code execution.
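The underlying vulnerability class is easy to illustrate. Below is a minimal Java sketch, assuming the snakeyaml library is on the classpath; it is not TorchServe's actual code. It contrasts SnakeYAML's default loader, which resolves global tags into arbitrary Java objects, with a SafeConstructor-based loader that only yields plain maps, lists, and scalars. The YAML content shown is invented for the example.

```java
import org.yaml.snakeyaml.LoaderOptions;
import org.yaml.snakeyaml.Yaml;
import org.yaml.snakeyaml.constructor.SafeConstructor;

public class YamlLoadingSketch {
    public static void main(String[] args) {
        // Stand-in for attacker-supplied content bundled with an uploaded model.
        String untrusted = "model_name: resnet18\nbatch_size: 4\n";

        // UNSAFE pattern: the default Yaml() resolves global !!tags, so a
        // crafted file (e.g. one referencing a gadget class such as
        // javax.script.ScriptEngineManager) can instantiate arbitrary
        // classes on the classpath during load.
        Yaml unsafe = new Yaml();
        Object risky = unsafe.load(untrusted);

        // SAFER pattern: SafeConstructor restricts output to plain YAML
        // types (maps, lists, scalars). SnakeYAML 2.x requires the
        // LoaderOptions argument; older 1.x versions take none.
        Yaml safe = new Yaml(new SafeConstructor(new LoaderOptions()));
        Object parsed = safe.load(untrusted);
        System.out.println(parsed);
    }
}
```

Restricting deserialization to data-only types, as SafeConstructor does, is the standard mitigation for this class of bug, and upgrading to a patched TorchServe release addresses the reported flaws.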

The AI security center's establishment follows an NSA study that identified securing AI models from theft and sabotage as a major national security challenge, especially as generative AI technologies emerge with immense transformative potential for both good and evil. Nakasone said it would become "NSA's focal point for leveraging foreign intelligence insights, contributing to the development of best practices, guidelines, principles, evaluation, methodology and risk frameworks" for both AI security and the goal of promoting the secure development and adoption of AI within "our national security systems and our defense industrial base."