Mitigating the risks of artificial intelligence compromise
The number of cyberattacks directed at artificial intelligence continues to increase. Hackers are no longer simply planting malicious bugs within code: their techniques have become increasingly complex, allowing them to tamper with systems and "weaponize" AI against the organizations leveraging it for their operations.
There are four typical elements to consider when it comes to ML. The first is data sets: the data supplied to a device or machine so it can function, analyze, and make decisions based on the information it receives.
To secure a system, any deployed algorithm must be tailored to the specific problem being solved, so that it aligns with the chosen model and the nature of the data provided.
Starting with the data-set element of a system, a component such as a Trusted Platform Module (TPM) can sign data and verify that anything provided to the machine was communicated from a trusted source.
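The flow can be sketched with a simple sign-then-verify check. This is an illustrative stand-in only: a real TPM keeps the signing key inside the hardware module, whereas here a shared HMAC secret (a hypothetical `TRUSTED_KEY`) stands in for the TPM-held key to show why tampered data is rejected.

```python
import hashlib
import hmac

# Hypothetical stand-in for a TPM-held key. In a real deployment the
# private key never leaves the TPM; a shared secret is used here only
# to illustrate the sign/verify flow.
TRUSTED_KEY = b"device-provisioning-secret"

def sign_dataset(data: bytes, key: bytes = TRUSTED_KEY) -> str:
    """Produce an attestation tag for a data set before it is sent."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_dataset(data: bytes, tag: str, key: bytes = TRUSTED_KEY) -> bool:
    """Accept the data only if its tag checks out, i.e. it was signed
    by the trusted source and not modified in transit."""
    return hmac.compare_digest(sign_dataset(data, key), tag)

batch = b"sensor readings: 21.4, 21.7, 21.5"
tag = sign_dataset(batch)
assert verify_dataset(batch, tag)             # untampered data is accepted
assert not verify_dataset(batch + b"!", tag)  # modified data is rejected
```

The design point is that the machine never has to trust the network path: only data carrying a valid tag from a provisioned source feeds the model.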
If bad or inaccurate data is supplied, deviations in the model can be prevented by applying trusted-computing principles focused on cyber resiliency, network security, sensor attestation, and identity.
Should a hacker exploit the system, each compromised layer's key and measurement will differ from those of the other layers in the system, mitigating the risk by keeping data sealed and preventing its disclosure.
News URL: https://www.helpnetsecurity.com/2022/10/27/compromise-artificial-intelligence/