
How can AI be made more secure and trustworthy?
2021-12-20 06:45

Two things are needed: a proper understanding of what AI is capable of and how it should be used, and improvements to the security of AI itself. To understand how machine learning works and how to use it properly, it is important to bear in mind that although some ML models are very complex, systems incorporating ML are still just the product of combining an understanding of a domain with its data.

Model evasion attacks essentially exploit the fact that a model's decision boundaries are very complex and its capability to interpolate between samples is limited, leaving "gaps" that an attacker can exploit.

To understand how this attack works, consider the fact that model training processes are designed to find an optimal decision boundary between classes.
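The idea can be sketched with a minimal, hypothetical linear model (the weights, threshold, and sample below are illustrative, not from the article): once the decision boundary is known, the smallest perturbation that flips the prediction moves directly against the score gradient, crossing the boundary while barely changing the input.

```python
import numpy as np

# Hypothetical linear model: flag x as "malicious" when w.x + b > 0.
w = np.array([2.0, -1.0])
b = -0.5

def predict(x):
    return int(np.dot(w, x) + b > 0)

# A sample the model currently flags as malicious.
x = np.array([1.0, 0.5])

# Evasion sketch: the minimal perturbation crossing the decision
# boundary points along -w (the negative score gradient).
score = np.dot(w, x) + b
delta = -(score + 1e-3) * w / np.dot(w, w)  # just past the boundary
x_adv = x + delta

print(predict(x))      # 1: flagged as malicious
print(predict(x_adv))  # 0: small perturbation evades detection
```

For a deep model the boundary is not known in closed form, which is exactly why its complexity leaves exploitable gaps: attackers estimate the gradient instead, but the principle is the same.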

To perform a confidentiality attack, an adversary sends optimized sets of queries to the target model in order to uncover how the model works, or to reconstruct the model from its responses.
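As a toy illustration of such an extraction attack (the "secret" parameters below are invented for the sketch), a black-box linear model that returns a confidence score can be reconstructed exactly with just d + 1 carefully chosen queries:

```python
import numpy as np

# Hypothetical target: a black-box linear model the adversary can
# only query; w_secret and b_secret are never exposed directly.
w_secret = np.array([0.7, -1.2, 0.4])
b_secret = 0.3

def query(x):
    return float(np.dot(w_secret, x) + b_secret)

# Extraction: query the origin and each basis vector. The origin
# reveals the bias; each basis vector then reveals one weight.
d = 3
b_est = query(np.zeros(d))
w_est = np.array([query(e) - b_est for e in np.eye(d)])

print(w_est, b_est)  # matches the secret parameters exactly
```

Real models need far more queries and yield only approximations, but the sketch shows why returning rich outputs (scores rather than labels) makes extraction much cheaper.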

Many threats against ML models are real but ML practitioners don't necessarily even consider them, since most of the effort used to develop models focuses on the improvement of model performance.

Simpler models are often more robust, but trade-offs should naturally be considered on a case-by-case basis.


News URL

https://www.helpnetsecurity.com/2021/12/20/secure-ai/

#AI