Security News

Fake Facebook MidJourney AI page promoted malware to 1.2 million people
2024-04-05 16:47

Hackers are using Facebook advertisements and hijacked pages to promote fake artificial intelligence services, such as Midjourney, OpenAI's Sora and ChatGPT-5, and DALL-E, to infect unsuspecting users with password-stealing malware. In one of the cases seen by researchers at Bitdefender, a malicious Facebook page impersonating Midjourney amassed 1.2 million followers and remained active for nearly a year before it was eventually taken down.

AI-as-a-Service Providers Vulnerable to PrivEsc and Cross-Tenant Attacks
2024-04-05 14:08

New research has found that artificial intelligence (AI)-as-a-service providers such as Hugging Face are susceptible to two critical risks that could allow threat actors to escalate privileges and mount cross-tenant attacks against other customers.

Security pros are cautiously optimistic about AI
2024-04-05 04:30

The study also found that AI integration into cybersecurity is not just a concept but a practical reality for many, with 67% of respondents stating that they have tested AI specifically for security purposes. On readiness, 48% of professionals expressed confidence in their organization's ability to execute a strategy for leveraging AI in security, with 28% feeling reasonably confident and 20% very confident.

AI Deepfakes Rising as Risk for APAC Organisations
2024-04-04 15:29

AI deepfakes were not on the risk radar of organisations just a short time ago, but in 2024 they are rising up the ranks. Aon's Global Risk Management Survey, for example, does not mention them, though organisations are concerned about business interruption and damage to their brand and reputation, both of which AI could cause. Huber said the risk of AI deepfakes is still emergent, and it is morphing as AI changes at a rapid pace.

When AI attacks
2024-04-04 08:56

Six steps for security and compliance in AI-enabled low-code/no-code development
2024-04-04 05:00

AI is quickly transforming how individuals create their own apps, copilots, and automations. The first shift is that production environments no longer host dozens or hundreds of apps but tens or hundreds of thousands of apps, automations, and connections, all built by users of varying technical backgrounds.

Google Cloud/Cloud Security Alliance Report: IT and Security Pros Are ‘Cautiously Optimistic’ About AI
2024-04-03 16:00

The C-suite is more familiar with AI technologies than its IT and security staff, according to a report from the Cloud Security Alliance commissioned by Google Cloud. The report, published on April 3, addressed whether IT and security professionals fear AI will replace their jobs, the benefits and challenges of the rise of generative AI, and more.

Why AI forensics matters now
2024-04-02 04:00

In this Help Net Security video, Sylvia Acevedo, who serves on the Boards of Qualcomm and Credo, discusses why companies should invest in forensic capabilities and why forensics will be such an important topic as AI continues to be integrated into infrastructures and workflows. In an era where AI is becoming increasingly integral to business operations, the lack of comprehensive education and training in AI forensics poses a significant threat.

It's surprisingly difficult for AI to create just a plain white image
2024-03-31 11:38

My research colleague, data scientist Cody Nash, ran into one such limitation when he pondered "Can AI Create a White Painting?". All Nash wanted from the AI was an image of a plain, pure white background; in color-code lingo, the color #FFFFFF or RGB(255, 255, 255).
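
For reference, producing a pure white image programmatically is trivial by comparison; below is a minimal sketch in Python using Pillow (the library choice and the 512x512 size are assumptions for illustration, not details from the article):

    from PIL import Image

    # A plain white canvas: every pixel set to RGB(255, 255, 255), i.e. #FFFFFF.
    width, height = 512, 512
    white = Image.new("RGB", (width, height), color=(255, 255, 255))
    white.save("plain_white.png")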

AI abuse and misinformation campaigns threaten financial institutions
2024-03-29 05:30

Though generative AI offers financial firms remarkable business and cybersecurity utility, cyberthreats relating to GenAI in financial services are a consistent concern, according to FS-ISAC. Threat actors can use generative AI to write malware and exfiltrate data, and more skilled cybercriminals could pull information out of, or inject contaminated data into, the large language models that underpin GenAI tools. Relying on corrupted GenAI outputs can expose financial institutions to severe legal, reputational, or operational consequences.