Security News

Energized by the hype around generative AI, enterprises are aggressively pursuing practical applications of this new technology while remaining cautious about the risks, according to ISG. ISG research shows 85% of companies surveyed believe investments in generative AI within the next 24 months are important or critical. While organizations see the potential of generative AI, they don't yet fully know how to handle the risks.

Some of the United States' top tech executives and generative AI development leaders met with senators last Wednesday in a closed-door, bipartisan meeting about possible federal regulations for generative artificial intelligence. TechRepublic spoke to business leaders about what to expect next in terms of government regulation of generative artificial intelligence and how to remain flexible in a changing landscape.

Bing Chat, the well-known ChatGPT-powered chatbot that lets users converse on a wide range of topics with various personalities, is experiencing connectivity issues worldwide. BleepingComputer can confirm Bing Chat is not working in Asia and the United States.

Of the DevOps and SecOps leaders surveyed, 97% are using the technology today, with 74% reporting they feel pressure to use it despite identified security risks. While DevOps and SecOps respondents hold similar outlooks on generative AI in most cases, there are notable differences with regard to adoption and productivity.

Uniphore then feeds that data into U-Capture, its conversational AI automation tool. U-Capture builds on Red Box's data recording capabilities.

Assurances include watermarking, reporting on capabilities and risks, investing in safeguards to prevent bias, and more. Some of the largest generative AI companies operating in the U.S. plan to watermark their content, a fact sheet from the White House revealed on Friday, July 21.

Organizations are optimistic about AI, but AI adoption requires attention to privacy and security, productivity, and training, according to GitLab. "According to the GitLab Global DevSecOps Report, only 25% of developers' time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60% of developers' day-to-day work. To realize AI's full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software, not just developers, to benefit from the efficiency boost."

Salesforce announced a rebrand of its Einstein 1 Data Cloud and new capabilities for the Einstein generative AI assistant for CRM at the Dreamforce conference held in San Francisco on Tuesday, Sept. 12. Salesforce's Einstein 1 Data Cloud metadata framework will be integrated within the Einstein 1 Platform.

The same digital automation tools that have revolutionized workflows for developers are creating an uphill battle regarding security. From data breaches and cyberattacks to compliance concerns, the stakes have never been higher for enterprises to establish a robust and comprehensive security strategy.

In the context of generative AI, properly defined user roles that control who can access the AI system, train models, input data, and interpret outputs have become a critical security requirement. For example, you might grant data scientists the authority to train models, while other users are permitted only to use the model to generate predictions.
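The role-based access pattern described above can be sketched in a few lines. The following is a minimal illustration, not tied to any particular product; the role names and permission set are hypothetical, chosen to mirror the example of data scientists who may train models while other users may only generate predictions.

```python
from enum import Enum, auto

class Permission(Enum):
    """Actions a user might take against a generative AI system."""
    TRAIN_MODEL = auto()
    INPUT_DATA = auto()
    GENERATE_PREDICTIONS = auto()
    INTERPRET_OUTPUTS = auto()

# Hypothetical role-to-permission mapping. In a real deployment this
# would come from an identity provider or policy store, not a dict.
ROLE_PERMISSIONS = {
    "data_scientist": {
        Permission.TRAIN_MODEL,
        Permission.INPUT_DATA,
        Permission.GENERATE_PREDICTIONS,
        Permission.INTERPRET_OUTPUTS,
    },
    "analyst": {
        Permission.GENERATE_PREDICTIONS,
        Permission.INTERPRET_OUTPUTS,
    },
}

def is_allowed(role: str, permission: Permission) -> bool:
    """Return True if the given role holds the given permission.

    Unknown roles receive no permissions (deny by default).
    """
    return permission in ROLE_PERMISSIONS.get(role, set())

# Example checks: a data scientist may train; an analyst may not.
print(is_allowed("data_scientist", Permission.TRAIN_MODEL))       # True
print(is_allowed("analyst", Permission.TRAIN_MODEL))              # False
print(is_allowed("analyst", Permission.GENERATE_PREDICTIONS))     # True
```

The deny-by-default behavior for unrecognized roles is the key design choice here: an AI system should fail closed, so that a misconfigured or missing role grants no access rather than all access.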