Security News
Currently, the value of generative AI, like ChatGPT and DALL-E, is lopsided in favor of threat actors. Threat actors adding generative AI to their attack arsenals is an eventuality, and the focus now needs to be on how we will defend against this new threat.
ChatGPT - the Large Language Model developed by OpenAI and based on the GPT-3 natural language generator - is generating ethical chatter. Like CRISPR's impact on biomedical engineering, ChatGPT slices and dices, creating something new from scraps of information and injecting fresh life into the fields of philosophy, ethics and religion.
ChatGPT from OpenAI is a conversational chatbot recently released in preview mode for research purposes. It takes natural language as input and aims to solve problems, answer follow-up questions, or even challenge assertions, depending on what you ask.
The security shop's research team said it has already seen Russian cybercriminals on underground forums discussing OpenAI workarounds so that they can bring ChatGPT to the dark side. We'd have thought ChatGPT would be most useful for coming up with emails and other messages to send people to trick them into handing over their usernames and passwords, but what do we know? Some crooks may also find the AI model helpful for generating malicious code and suggesting techniques to deploy it.
Google is calling EU cybersecurity founders
Google announced that the Google for Startups Growth Academy: Cybersecurity program now accepts applications from EU companies.
Rackspace ransomware attack was executed using a previously unknown security exploit
The MS Exchange exploit chain recently revealed by CrowdStrike researchers is how the Play ransomware gang breached the Rackspace Hosted Exchange email environment, the company confirmed last week.
You can ask ChatGPT to write code, but the results can be mixed. A common task for any SecOps analyst is processing specific log files, grepping for certain patterns, and exporting the results to gain meaningful insight into an incident or issue.
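For illustration, below is a minimal sketch of the kind of log-triage script an analyst might prompt ChatGPT to draft. The auth.log path, the failed-login regex, and the CSV output name are assumptions made for this example, not details drawn from any specific incident.

```python
# Minimal sketch of a log-triage helper of the sort an analyst might ask
# ChatGPT to draft. LOG_PATH, PATTERN, and the CSV name are hypothetical.
import csv
import re
from collections import Counter

LOG_PATH = "auth.log"  # assumed input file for the example
PATTERN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def extract_failed_logins(path: str):
    """Yield (username, source_ip) pairs for failed SSH login lines."""
    with open(path, encoding="utf-8", errors="replace") as handle:
        for line in handle:
            match = PATTERN.search(line)
            if match:
                yield match.group(1), match.group(2)

def export_summary(events, out_path: str = "failed_logins.csv") -> None:
    """Write a per-source-IP count of failed logins to a CSV file."""
    counts = Counter(ip for _, ip in events)
    with open(out_path, "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["source_ip", "failed_attempts"])
        for ip, total in counts.most_common():
            writer.writerow([ip, total])

if __name__ == "__main__":
    export_summary(extract_failed_logins(LOG_PATH))
```

In practice the model can produce something close to this on the first try, but the regex and edge-case handling usually still need review by the analyst.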
Within a few weeks of ChatGPT going live, participants in cybercrime forums, some with little or no coding experience, were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks. In one example, the Python code combined various cryptographic functions, including code signing, encryption, and decryption.
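For context, here is a benign sketch of what a script combining those kinds of building blocks could look like. It is an illustration under assumed inputs, not the code seen on the forums, and it relies on the third-party cryptography package for the encrypt/decrypt step.

```python
# Benign illustration of the cryptographic building blocks described above
# (a signing step plus symmetric encryption/decryption). Keys and payload
# are made up for the example; requires "pip install cryptography".
import hashlib
import hmac

from cryptography.fernet import Fernet

def sign(data: bytes, key: bytes) -> str:
    """Return an HMAC-SHA256 digest, standing in for a code-signing step."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def encrypt(data: bytes, key: bytes) -> bytes:
    """Symmetric encryption (Fernet: AES-128-CBC with HMAC-SHA256)."""
    return Fernet(key).encrypt(data)

def decrypt(token: bytes, key: bytes) -> bytes:
    """Reverse of encrypt(); raises InvalidToken if the data was tampered with."""
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    fernet_key = Fernet.generate_key()
    mac_key = b"demo-signing-key"     # assumed key for the example
    payload = b"example payload"
    token = encrypt(payload, fernet_key)
    assert decrypt(token, fernet_key) == payload
    print("signature:", sign(payload, mac_key))
```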
The NYC Department of Education has banned the use of ChatGPT by students and teachers in New York City schools, citing serious concerns that its use could hamper learning and spread misinformation. Microsoft is reportedly planning to integrate ChatGPT into Bing to give its search engine an edge over competitors like Google Search.
As with any new technology, the development and deployment of ChatGPT is likely to have a significant impact on the field of cybersecurity. In many ways, ChatGPT and other AI technologies hold great promise for improving the ability of organizations and individuals to defend against cyber threats.
For even the most skilled hackers, it can take at least an hour to write a script to exploit a software vulnerability and infiltrate their target. Soon, a machine may be able to do it in mere seconds. When OpenAI released its ChatGPT tool last week, allowing users to interact with an artificial intelligence chatbot, computer security researcher Brendan Dolan-Gavitt wondered whether he could instruct it to write malicious code. So, he asked the model to solve a simple capture-the-flag challenge.