Security News
Research suggests there will be just 250m users of AI enabled applications and services this year, a number which will double by 2027 and hit 1bn by 2029 as companies find new, more innovative ways to harness the technology. While the number of native AI applications currently available is around 2,000, a lot more are in the pipeline.
Prompt-Guard-86M, introduced by Meta last week in conjunction with its Llama 3.1 generative model, is intended "To help developers detect and respond to prompt injection and jailbreak inputs," the social network giant said. So makers of AI models build filtering mechanisms called "Guardrails" to catch queries and responses that may cause harm, such as those revealing sensitive training data on demand, for example.
Apple is the latest addition to the list of public U.S. companies that made voluntary commitments to AI regulations, the White House announced on July 26. The addition of Apple is "Further cementing these commitments as cornerstones of responsible AI innovation," the White House stated in a press release.
The new tool the research project is unleashing on deepfakes, called "MISLnet", evolved from years of data derived from detecting fake images and video with tools that spot changes made to digital video or images. These may include the addition or movement of pixels between frames, manipulation of the speed of the clip, or the removal of frames.
X has quietly begun training its Grok AI chat platform using members' public posts without first alerting anyone that it is doing it by default. To avoid being left out of the game, X quietly began to train its Grok AI chat platform by using users' posts without asking for permission or making an announcement about the change.
A Spanish-speaking cybercrime group named GXC Team has been observed bundling phishing kits with malicious Android applications, taking malware-as-a-service offerings to the next level. The phishing kit is priced anywhere between $150 and $900 a month, whereas the bundle including the phishing kit and Android malware is available on a subscription basis for about $500 per month.
The seemingly paradoxical solution to these growing threats is further development and research into more sophisticated offensive AI. Plato's adage, "Necessity is the mother of invention," is an apt characterization of cybersecurity today, where new AI-driven threats drive the innovation of more advanced security controls. While developing more sophisticated offensive AI tools and techniques is far from morally commendable, it continues to emerge as an inescapable necessity.
While sysadmins recognize AI's potential, significant gaps in education, cautious organizational adoption, and insufficient AI maturity hinder widespread implementation, leading to mixed results and disruptions in 16% of organizations, according to Action1. "Our findings indicate that, despite some trial and error in AI implementation among sysadmins, organizations generally approach AI cautiously. Implementation projects are predominantly focused on a few IT areas, and even among those that have been implemented, results are mixed. This underscores the fact that AI technology still needs time to mature and evolve before AI-driven solutions become more widespread and practical."
As AI-generated deepfake attacks and identity fraud become more prevalent, companies are developing response plans to address these threats, according to GetApp. Much like phishing attack preparation, it appears that companies are looking to run simulations of attacks to increase preparedness as a majority of respondents work in companies where this is already implemented.
The hybrid multicloud strategies that many Australian enterprises have adopted over the last decade could be made more complex by new AI applications. The only solutions could be rationalisation...