Security News
Following in the footsteps of WormGPT, threat actors are advertising yet another cybercrime generative artificial intelligence tool, dubbed FraudGPT, on various dark web marketplaces and Telegram channels. "This is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc.," Netenrich security researcher Rakesh Krishnan said in a report published Tuesday.
The details are scant (the article is based on a heavily redacted contract), but the New York subway authority is using an "AI system" to detect people who don't pay the subway fare. Joana Flores, an MTA spokesperson, said the AI system doesn't flag fare evaders to the New York police, but she declined to comment on whether that policy could change.
74% of respondents using generative AI say it is already improving their ability to increase revenue, and the average daily time savings from generative AI is 33%, the equivalent of 2.5 hours every workday or 13 hours each workweek. Companies' top concern about not using generative AI is falling behind competitors, but hurdles such as security concerns, the lack of a cohesive AI strategy, and the lack of internal policies to govern generative AI prevent adoption.
In March, the U.K. government released a white paper promoting the country as a place to "turbocharge growth" in AI. According to the white paper, 500,000 people in the U.K. are employed in the AI industry, and AI contributed £3.7 billion to the national economy in 2022. In response, on July 18, the independent research body Ada Lovelace Institute, in a lengthy report, called for a more "robust domestic policy" to regulate AI through legislation that clarifies and organizes the U.K.'s effort to promote AI as an industry.
Threat actors are already engaging in rigorous discussions of how language models can be used for everything from identifying zero-day exploits to crafting spear-phishing emails. Threat exposure management firm Flare has identified more than 200,000 OpenAI credentials currently being sold on the dark web in the form of stealer logs.
Seven US artificial intelligence giants - Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI - have publicly committed to "help move toward safe, secure, and transparent development of AI technology." Among their commitments: testing the security of their AI systems before launch, and sharing knowledge about AI risk management best practices among themselves and with the government.
"BundleBot is abusing the dotnet bundle, self-contained format that results in very low or no static detection at all," Check Point said in a report published this week, adding it is "commonly distributed via Facebook Ads and compromised accounts leading to websites masquerading as regular program utilities, AI tools, and games." The malware arrives as a .NET single-file, self-contained application that, in turn, incorporates a DLL file whose responsibility is to fetch a password-protected ZIP archive from Google Drive.
AI is about to make this issue much more complicated, and could drastically expand the types of laws that can be enforced in this manner. Some legal scholars predict that computationally personalized law and its automated enforcement are the future of law.
A new research report explores emerging trends that software organizations need to consider as part of their security strategy, and risks associated with the use of existing open source software in application development. In particular, as modern software development increasingly adopts distributed architectures and microservices alongside third party and open source components, the report tracks the astonishing popularity of ChatGPT's API, how current large language model-based AI platforms are unable to accurately classify malware risk in most cases, and how almost half of all applications make no calls at all to security-sensitive APIs in their code base.