Security News
Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber...
Fabric is an open-source framework, created to enable users to granularly apply AI to everyday challenges. "I created it to enable humans to easily augment themselves with AI. I believe it's currently too difficult for people to use AI. I think there are too many tools, too many websites, and too few practical use cases that combine a problem with a solution. Fabric is a way of addressing those problems," Daniel Miessler, the creator of Fabric, told Help Net Security.
This growth's unintended side effect is an ever-expanding attack surface that, coupled with the availability of easily accessible and criminally weaponized generative AI tools, has increased the need for highly secure remote identity verification. "Generative AI has provided a huge boost to threat actors' productivity levels: these tools are relatively low cost, easily accessed, and can be used to create highly convincing synthesized media such as face swaps or other forms of deepfakes that can easily fool the human eye as well as less advanced biometric solutions. This only serves to heighten the need for highly secure remote identity verification," says Andrew Newell, Chief Scientific Officer, iProov.
The National Institute of Standards and Technology established the AI Safety Institute on Feb. 7 to determine guidelines and standards for AI measurement and policy. An interesting omission on the list of U.S. AI Safety Institute members is the Future of Life Institute, a global nonprofit with investors including Elon Musk, established to prevent AI from contributing to "extreme large-scale risks" such as global war.
According to a report from Abnormal Security, generative AI is likely behind the significant uptick in the volume and sophistication of email attacks on organizations, with 80% of security leaders stating that their organizations have already fallen victim to AI-generated email attacks. Even though humans are still better at crafting effective phishing emails, AI is immensely helpful to cyber crooks: even less-skilled hackers can use it to easily craft credible, customized emails free of the grammar and spelling mistakes and nonsensical requests that often give phishing away.
Microsoft is testing a new "Automatic Super Resolution" AI-assisted upscaling feature that increases the video and image quality of supported games while also making them run more smoothly. As first discovered by Windows sleuth PhantomOfEarth, Microsoft is now testing an Automatic Super Resolution feature as part of its first preview of Windows 11 24H2 in the Canary and Dev channels.
As senior director and global head of the office of the chief information security officer at Google Cloud, Nick Godfrey oversees educating employees on cybersecurity as well as handling threat detection and mitigation. We conducted an interview with Godfrey via video call about how CISOs and other tech-focused business leaders can allocate their finite resources, getting buy-in on security from other stakeholders, and the new challenges and opportunities introduced by generative AI. Since Godfrey is based in the United Kingdom, we asked his perspective on UK-specific considerations as well.
"Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We're putting the fraudsters behind these robocalls on notice," said FCC Chairwoman Jessica Rosenworcel.While currently, State Attorneys Generals can target the outcome of an unwanted AI-voice generated robocall-such as the scam or fraud they are seeking to perpetrate-this action now makes the act of using AI to generate the voice in these robocalls itself illegal, expanding the legal avenues through which state law enforcement agencies can hold these perpetrators accountable under the law.
Nearly half of businesses reported a growth in synthetic identity fraud, while biometric spoofs and counterfeit ID fraud attempts also increased, according to AuthenticID. Consumers and businesses alike are facing new challenges in today's digital existence, from considering the ramifications of digital identity to grappling with the use and prevalence of new tools like generative AI. Meanwhile, the explosion of AI has pushed identity fraud into a new frontier that could drive a global shift in the coming year. 68% of people said the threat of identity fraud and scams impacts how they make purchases, open accounts, and do business.
Latio Application Security Tester is an open-source tool that uses OpenAI to scan code for security and health issues from the CLI. It lets you send code changes to OpenAI directly, without copy-pasting into ChatGPT or setting up the perfect prompt.
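The general idea behind such CLI-based AI code review can be sketched as follows: collect the local diff, wrap it in a security-review prompt, and send it to OpenAI's chat completions API. This is a minimal illustration of the pattern, not Latio's actual implementation; the prompt wording and model name are assumptions.

```python
# Sketch of CLI-driven AI code review: gather the git diff and submit it to
# OpenAI for a security-focused review. Prompt text and model are assumed.
import subprocess

SYSTEM_PROMPT = (
    "You are a security reviewer. Examine the following code diff for "
    "vulnerabilities, unsafe patterns, and general code-health issues. "
    "Report each finding with a severity and a suggested fix."
)

def get_diff() -> str:
    """Return the uncommitted diff of the current git repository."""
    return subprocess.run(
        ["git", "diff"], capture_output=True, text=True, check=True
    ).stdout

def build_messages(diff: str) -> list[dict]:
    """Build a chat-completion message list asking for a review of `diff`."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"```diff\n{diff}\n```"},
    ]

if __name__ == "__main__":
    # Requires the `openai` package and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini", messages=build_messages(get_diff())
    )
    print(resp.choices[0].message.content)
```

Keeping the prompt construction separate from the API call, as above, also makes it easy to swap models or pipe the same diff to a different backend.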