Security News

Mitigating AI security risks
2024-02-15 16:50

Webinar: It has become possible to swiftly and inexpensively train, validate and deploy AI models and applications, yet while we embrace innovation, are we aware of the security risks?

AI outsourcing: A strategic guide to managing third-party risks
2024-02-15 06:00

Outsourcing AI services has become a go-to strategy for many companies hoping to leverage the power of AI without investing in in-house development. At the same time, AI systems often process large volumes of sensitive data.

#AI
AI PC shipments are expected to surpass 167 million units by 2027
2024-02-15 04:00

Shipments of AI PCs - personal computers with specific system-on-a-chip capabilities designed to run generative AI tasks locally - are expected to grow from nearly 50 million units in 2024 to more than 167 million in 2027, according to IDC. By the end of the forecast, IDC expects AI PCs will represent nearly 60% of all PC shipments worldwide. "As we enter a new year, the hype around generative AI has reached a fever pitch, and the PC industry is running fast to capitalize on the expected benefits of bringing AI capabilities down from the cloud to the client," said Tom Mainelli, group VP, Devices and Consumer Research at IDC.
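
For scale, the quoted figures work out to roughly 50% growth per year. A minimal back-of-the-envelope sketch, using the rounded numbers from the article rather than IDC's underlying data:

```python
# Implied compound annual growth rate for the IDC AI PC forecast cited above.
# Inputs are the rounded figures quoted in the article, not IDC's raw data.

start_units = 50_000_000   # approx. AI PC shipments in 2024
end_units = 167_000_000    # forecast AI PC shipments in 2027
years = 2027 - 2024        # three years of growth

cagr = (end_units / start_units) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~49.5% per year
```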

#AI
How are state-sponsored threat actors leveraging AI?
2024-02-14 16:17

Microsoft and OpenAI have identified attempts by various state-affiliated threat actors to use large language models to enhance their cyber operations. Just as defenders do, threat actors are leveraging AI to boost their efficiency and continue to explore all the possibilities these technologies can offer.

Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks
2024-02-14 14:39

Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber operations.

Fabric: Open-source framework for augmenting humans using AI
2024-02-14 05:30

Fabric is an open-source framework, created to enable users to granularly apply AI to everyday challenges. "I created it to enable humans to easily augment themselves with AI. I believe it's currently too difficult for people to use AI. I think there are too many tools, too many websites, and too few practical use cases that combine a problem with a solution. Fabric is a way of addressing those problems," Daniel Miessler, the creator of Fabric, told Help Net Security.
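
The "pattern" idea at the core of Fabric is simple: a reusable prompt is paired with whatever input is at hand and sent to a model. The sketch below illustrates that concept only; it is not Fabric's actual code or API, and the pattern texts, function names and stubbed model call are placeholders.

```python
# Conceptual sketch of the prompt-"pattern" idea behind tools like Fabric.
# NOT Fabric's actual implementation: pattern texts, names and the stubbed
# LLM call are placeholders for whichever model client you actually use.

PATTERNS = {
    "summarize": "You are a concise assistant. Summarize the input in three bullet points.",
    "extract_action_items": "List every action item found in the input, one per line.",
}


def call_llm(system_prompt: str, user_input: str) -> str:
    """Placeholder for a real LLM call (hosted API, local model, etc.)."""
    return f"[model output for pattern '{system_prompt[:30]}...' on input '{user_input[:30]}...']"


def apply_pattern(pattern_name: str, user_input: str) -> str:
    """Combine a named, reusable pattern with everyday input and query the model."""
    return call_llm(PATTERNS[pattern_name], user_input)


if __name__ == "__main__":
    notes = "Meeting notes: ship the beta on Friday; Alice to draft the release notes."
    print(apply_pattern("extract_action_items", notes))
```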

Cybercriminals get productivity boost with AI
2024-02-14 04:30

An unintended side effect of this growth is an ever-expanding attack surface that, combined with easily accessible and criminally weaponized generative AI tools, has increased the need for highly secure remote identity verification. "Generative AI has provided a huge boost to threat actors' productivity levels: these tools are relatively low cost, easily accessed, and can be used to create highly convincing synthesized media such as face swaps or other forms of deepfakes that can easily fool the human eye as well as less advanced biometric solutions. This only serves to heighten the need for highly secure remote identity verification," says Andrew Newell, Chief Scientific Officer, iProov.

NIST Establishes AI Safety Consortium
2024-02-13 14:40

The National Institute of Standards and Technology established the AI Safety Institute on Feb. 7 to determine guidelines and standards for AI measurement and policy. An interesting omission on the list of U.S. AI Safety Institute members is the Future of Life Institute, a global nonprofit with investors including Elon Musk, established to prevent AI from contributing to "extreme large-scale risks" such as global war.

Protecting against AI-enhanced email threats
2024-02-13 05:30

According to a report from Abnormal Security, generative AI is likely behind the significant uptick in the volume and sophistication of email attacks on organizations, with 80% of security leaders stating that their organizations have already fallen victim to AI-generated email attacks. Even though humans are still better at crafting effective phishing emails, AI is immensely helpful to cyber crooks: even less-skilled attackers can use it to craft credible, customized emails free of the grammar mistakes, spelling errors and nonsensical requests that often give phishing away.

Microsoft tests Windows 11 ‘Super Resolution’ AI-upscaling for gamers
2024-02-12 21:23

Microsoft is testing a new "Automatic Super Resolution" AI-assisted upscaling feature that increases the video and image quality of supported games while also making them run more smoothly. First spotted by Windows sleuth PhantomOfEarth, the feature appears in the first preview of Windows 11 24H2 in the Canary and Dev channels.