Security News

Salesforce announced a rebrand of its Einstein 1 Data Cloud and new capabilities for the Einstein generative AI assistant for CRM at the Dreamforce conference held in San Francisco on Tuesday, Sept. 12. Salesforce's Einstein 1 Data Cloud metadata framework will be integrated within the Einstein 1 Platform.

The same digital automation tools that have revolutionized workflows for developers are creating an uphill battle regarding security. From data breaches and cyberattacks to compliance concerns, the stakes have never been higher for enterprises to establish a robust and comprehensive security strategy.

In the context of generative AI, having properly defined user roles to control who can access the AI system, train models, input data, and interpret outputs has become a critical security requirement. You might grant data scientists the authority to train models, while other users might only be permitted to use the model to generate predictions.

Deepfake videos use AI and deep learning techniques to create highly realistic but fake or fabricated content. The most effective evaluation of deepfake technology can be made when watching videos in which the "Deepfaked" person is a celebrity or individual whom the viewer is visually familiar with.

Unlike General AI, Narrow AI is a specialized form of AI that is tuned for very specific tasks. In cybersecurity, Narrow AI can analyze activity data and logs, searching for anomalies or signs of an attack.

The National Cyber Security Centre provides details on prompt injection and data poisoning attacks so organizations using machine-learning models can mitigate the risks. Large language models used in artificial intelligence, such as ChatGPT or Google Bard, are prone to different cybersecurity attacks, in particular prompt injection and data poisoning.

ChatGPT has attracted hundreds of millions of users and was initially praised for its transformative potential. Concerns for safety controls and unpredictability have landed it on IT leaders' list of apps to ban in the workplace.

Google is launching a beta version of SynthID, a tool that identifies and watermarks AI-generated images. The tool will initially be available to a limited number of customers that use Imagen, Google's cloud-based AI model for generating images from text.

Block Unwanted Calls With AI for Just $50 Until Labor Day Sale Ends 11:59 PM PST 9/4 AI-powered personalization rescues you from unwanted calls and texts, providing protection and stopping spam calls and texts waste your time. If your business has been plagued by unwanted callers, it's time to give yourself a break.

Data from the human vs. machine challenge could provide a framework for government and enterprise policies around generative AI. OpenAI, Google, Meta and more companies put their large language models to the test on the weekend of August 12 at the DEF CON hacker conference in Las Vegas. The Generative Red Team Challenge organized by AI Village, SeedAI and Humane Intelligence gives a clearer picture than ever before of how generative AI can be misused and what methods might need to be put in place to secure it.