Security News

IBM, Salesforce and More Pledge to White House List of Eight AI Safety Assurances
2023-09-13 14:32

Assurances include watermarking, reporting about capabilities and risks, investing in safeguards to prevent bias and more. Some of the largest generative AI companies operating in the U.S. plan to watermark their content, a fact sheet from the White House revealed on Friday, July 21.

Privacy concerns cast a shadow on AI’s potential for software development
2023-09-13 03:00

Organizations are optimistic about AI, but AI adoption requires attention to privacy and security, productivity, and training, according to GitLab. "According to the GitLab Global DevSecOps Report, only 25% of developers' time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60% of developers' day-to-day work. To realize AI's full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software, not just developers, to benefit from the efficiency boost."

Dreamforce 2023: Salesforce Expands Einstein AI and Data Cloud Platform
2023-09-12 14:34

Salesforce announced a rebrand of its Einstein 1 Data Cloud and new capabilities for the Einstein generative AI assistant for CRM at the Dreamforce conference held in San Francisco on Tuesday, Sept. 12. Salesforce's Einstein 1 Data Cloud metadata framework will be integrated within the Einstein 1 Platform.

Strategies for harmonizing DevSecOps and AI
2023-09-12 04:30

The same digital automation tools that have revolutionized workflows for developers are creating an uphill battle regarding security. From data breaches and cyberattacks to compliance concerns, the stakes have never been higher for enterprises to establish a robust and comprehensive security strategy.

3 ways to strike the right balance with generative AI
2023-09-07 05:00

In the context of generative AI, having properly defined user roles to control who can access the AI system, train models, input data, and interpret outputs has become a critical security requirement. You might grant data scientists the authority to train models, while other users might only be permitted to use the model to generate predictions.

#AI
Emerging threat: AI-powered social engineering
2023-09-06 04:30

Deepfake videos use AI and deep learning techniques to create highly realistic but fake or fabricated content. The most effective evaluation of deepfake technology can be made when watching videos in which the "Deepfaked" person is a celebrity or individual whom the viewer is visually familiar with.

Everything You Wanted to Know About AI Security but Were Afraid to Ask
2023-09-04 11:29

Unlike General AI, Narrow AI is a specialized form of AI that is tuned for very specific tasks. In cybersecurity, Narrow AI can analyze activity data and logs, searching for anomalies or signs of an attack.

UK’s NCSC Warns Against Cybersecurity Attacks on AI
2023-09-01 18:35

The National Cyber Security Centre provides details on prompt injection and data poisoning attacks so organizations using machine-learning models can mitigate the risks. Large language models used in artificial intelligence, such as ChatGPT or Google Bard, are prone to different cybersecurity attacks, in particular prompt injection and data poisoning.

ChatGPT on the chopping block as organizations reevaluate AI usage
2023-08-31 03:30

ChatGPT has attracted hundreds of millions of users and was initially praised for its transformative potential. Concerns for safety controls and unpredictability have landed it on IT leaders' list of apps to ban in the workplace.

Google launches tool to identify AI-generated images
2023-08-30 09:35

Google is launching a beta version of SynthID, a tool that identifies and watermarks AI-generated images. The tool will initially be available to a limited number of customers that use Imagen, Google's cloud-based AI model for generating images from text.