Security News
Microsoft is offering up to $15,000 to bug hunters who pinpoint vulnerabilities of Critical or Important severity in its AI-powered "Bing experience". "The new Microsoft AI bounty program comes as a result of key investments and learnings over the last few months, including an AI security research challenge and an update to Microsoft's vulnerability severity classification for AI systems," says Lynn Miyashita, a technical program manager with the Microsoft Security Response Center.
Professors at the University of South Australia and Charles Sturt University have developed an algorithm to detect and intercept man-in-the-middle (MitM) attacks on unmanned military robots. MitM attacks are a type of cyberattack in which the data traffic between two parties (in this case, the robot and its legitimate controllers) is intercepted either to eavesdrop or to inject false data into the stream.
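The paper's detection algorithm isn't reproduced here, but the injection half of the problem is easy to illustrate: if every command carries a message authentication code under a key the attacker lacks, forged or altered traffic fails verification. Below is a minimal Python sketch of that idea; the shared key, command format, and function names are hypothetical assumptions, not the researchers' design.

```python
# Illustrative sketch only: authenticate each command with an HMAC so that
# tampered or forged packets fail verification at the receiver.
import hmac
import hashlib

SHARED_KEY = b"pre-provisioned-secret"  # assumed to be provisioned out of band

def sign_command(payload: bytes) -> bytes:
    """Controller side: append an HMAC-SHA256 tag to the command payload."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_command(message: bytes) -> bytes:
    """Robot side: split off the 32-byte tag and check it; reject on mismatch."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("possible man-in-the-middle: authentication failed")
    return payload

# A packet modified in transit raises, signalling interception:
msg = sign_command(b"MOVE 10 0")
assert verify_command(msg) == b"MOVE 10 0"
```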
Companies are losing revenue in the fight against malicious bot attacks, according to a survey by Kasada. Despite spending millions of dollars on traditional bot management solutions, companies are still financially impacted by bot attacks.
Microsoft announced a new AI bounty program focused on the AI-driven Bing experience, with rewards reaching $15,000. In scope are the AI-powered Bing experiences on bing.com in the browser and the AI-powered Bing integration in Microsoft Edge, including Bing Chat for Enterprise.
In this post I'm going to focus specifically on data security and how your team can ensure a safe Copilot rollout. Microsoft relies heavily on sensitivity labels to enforce DLP policies, apply encryption, and broadly prevent data leaks.
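As a rough illustration of what a DLP rule does once a label or policy is in place, the sketch below scans outbound text for sensitive-data patterns and blocks on a match. This is a conceptual stand-in, not the Microsoft Purview API; the pattern names and regexes are illustrative assumptions.

```python
# Conceptual sketch of a DLP-style check (not Microsoft's implementation):
# scan content for sensitive-data patterns before it is shared or surfaced.
import re

DLP_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_sensitive_data(text: str) -> list[str]:
    """Return the names of every sensitive-data pattern found in `text`."""
    return [name for name, pattern in DLP_PATTERNS.items() if pattern.search(text)]

draft = "Customer SSN is 123-45-6789, card 4111 1111 1111 1111."
hits = scan_for_sensitive_data(draft)
if hits:
    print(f"Blocked: content matched DLP rules {hits}")
```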
Experts from security firm F5 have argued that cyber criminals are unlikely to send new armies of generative AI-driven bots into battle with enterprise security defences in the near future, because proven social engineering attack methods will be easier to mount using generative AI. The release of generative AI tools, such as ChatGPT, has caused widespread fears that democratization of powerful large language models could help bad actors around the world supercharge their efforts to hack businesses and steal or hold sensitive data hostage. F5, a multicloud security and application delivery provider, tells TechRepublic that generative AI will result in a growth in social engineering attack volumes and capacity in Australia, as threat actors deliver a higher volume of better quality attacks to trick IT gatekeepers.
76% of cybersecurity professionals believe the world is very close to encountering malicious AI that can bypass most known cybersecurity measures, according to Enea. AI is anticipated to bolster threat detection and vulnerability assessments, with intrusion detection and prevention identified as the domain most likely to benefit from AI. Deep learning for detecting malware in encrypted traffic holds the most promise, with 48% of cybersecurity professionals anticipating a positive impact from AI. Cost savings emerged as the top KPI for measuring the success of AI-enhanced defenses, while 72% of respondents believe AI automation will play a key role in alleviating cybersecurity talent shortages.
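Because encrypted payloads can't be inspected directly, models of this kind typically learn from flow metadata such as packet sizes and timing. The sketch below shows the general shape of that approach on synthetic data, with a random forest standing in for the deep-learning models the respondents have in mind; every feature and number here is an assumption for illustration.

```python
# Minimal sketch of metadata-based traffic classification: encrypted payloads
# are opaque, so the model learns from side-channel features instead.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Features per flow: mean packet size, packet-size variance, mean inter-arrival time.
benign = rng.normal(loc=[800, 100, 0.05], scale=[100, 30, 0.01], size=(500, 3))
malware = rng.normal(loc=[300, 250, 0.20], scale=[80, 60, 0.05], size=(500, 3))

X = np.vstack([benign, malware])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```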
In today's rapidly evolving technological landscape, the integration of Artificial Intelligence (AI) and Large Language Models (LLMs) has become ubiquitous across various industries. This wave of...
"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords "MyHotKeyHandler," "Keylogger," and "macOS," this is a message from ChatGPT...
It's widely believed that Jaan Tallinn, the wealthy longtermist who co-founded the most prominent centers for the study of AI safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from AI. The technology historian David C. Brock calls these fears "wishful worries", that is, "problems that it would be nice to have, in contrast to the actual agonies of the present." A signal moment came when Timnit Gebru, a co-leader of Google's AI ethics team, was dismissed for pointing out the risks of developing ever-larger AI language models.