IBM Open-Sources the Adversarial Robustness Toolbox to Help AI Systems Withstand Adversarial Attacks

2022.03.18


Planning & Editing | Natalie

Translators | Liu Zhiyong, Ma Zhuoqi

Editors | Debra Chen, Emily

AI Front's guide: Deep neural networks have achieved breakthrough progress on cognitive tasks, but they are easily affected by "adversarial" attacks: slightly altered image, text, or audio data can deceive these systems into misjudgments. Adversarial attacks already pose a real threat to the deployment of AI in security-critical applications. To address this problem, IBM recently introduced an open-source tool, the Adversarial Robustness Toolbox, to help protect artificial intelligence systems against adversarial attacks. For more quality content, follow the WeChat public account "AI Front" (ID: ai-front).

IBM recently announced the launch of the Adversarial Robustness Toolbox for AI developers. Released as an open-source code library, it includes attack agents, defense applications, and benchmarking tools that enable developers to integrate baked-in resilience against adversarial attacks. Everything AI developers need to defend deep neural networks against adversarial attacks, and to ensure their models can withstand real-world testing, can be found in this open-source toolbox.

IBM says this is the industry's first library of defenses against adversarial attacks. According to Sridhar Muppidi, Chief Technology Officer of IBM Security:

Some existing defenses against adversarial AI are tied to a particular platform. To address this, the IBM team designed the Adversarial Robustness Toolbox to be platform-agnostic: whether developers code and develop in Keras or TensorFlow, they can apply the same library to build their defense systems.

The tool acts like a comprehensive martial arts coach for AI: it evaluates a DNN's resilience, teaches it customized defense techniques, and provides an internal layer of protection against attacks (an anti-virus layer, so to speak).

Adversarial example attacks

Artificial intelligence (AI) has made great progress in recent years, and modern AI systems have reached near-human performance on cognitive tasks such as object recognition, video labeling, speech-to-text conversion, and machine translation. Most of these breakthroughs are based on deep neural networks (DNNs). DNNs are complex machine learning models whose structure bears some similarity to the interconnected neurons in the brain. They can handle high-dimensional inputs (for example, high-resolution images with millions of pixels), represent those inputs at different levels of abstraction, and relate the resulting features to high-level semantic concepts.

Deep neural networks have an intriguing property: although they are usually highly accurate, they are vulnerable to so-called "adversarial" attacks. An adversarial example is an input signal (for example, a picture) that has been deliberately modified so that the deep neural network produces the response the attacker wants. Figure 1 gives an example: the attacker adds a small amount of adversarial noise to a picture of a giant panda, causing the deep neural network to misclassify it as a monkey. Usually, the goal of an adversarial example is to make the neural network produce a misclassification, or a specific incorrect prediction.

Figure 1: An adversarial example (right) is obtained by adding adversarial noise (middle) to the original input (left). Although the added noise is nearly imperceptible, the deep neural network misclassifies the image as a "monkey" instead of a "giant panda".
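For intuition, noise of the kind shown in Figure 1 is often crafted with the fast gradient sign method (FGSM) of Goodfellow et al., which nudges every pixel a tiny step in the direction that most increases the model's loss. A minimal sketch, assuming the gradient of the loss with respect to the input has already been computed (the function name and epsilon value are illustrative):

```python
import numpy as np

def fgsm_perturb(x, loss_grad, eps=0.007):
    """One FGSM step: add a nearly imperceptible perturbation
    in the direction that most increases the model's loss."""
    x_adv = x + eps * np.sign(loss_grad)  # loss_grad = dLoss/dx at x
    return np.clip(x_adv, 0.0, 1.0)       # keep pixels in a valid range
```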

Adversarial attacks pose a real threat to the deployment of AI in security-critical applications. Nearly undetectable alterations to images, video, speech, and other data can be used to confuse AI systems. Such adversarial examples can be crafted even if the attacker does not know the architecture of the deep neural network or have access to its parameters. More worryingly, adversarial attacks can be mounted in the physical world: instead of manipulating the pixels of a digital image, attackers can evade face recognition systems by wearing specially designed glasses, or defeat the visual recognition systems in autonomous vehicles by occluding traffic signs.

In 2016, a group of students from Carnegie Mellon University designed glasses that successfully fooled a face recognition algorithm into identifying the wearer as an entirely different person.

The well-known tech outlet TNW (The Next Web) reported on this kind of voice-system vulnerability earlier this year: hackers can deceive speech-to-text systems in targeted ways, for example by sneaking voice commands into your favorite song so that a smart voice assistant empties your bank account. The hackers do not even need to pick a specific song from your playlist; they can sit on public transport or in your office, pretend to be listening to music, and quietly embed the attack signal.

The threats posed by adversarial attacks include spoofing GPS signals to mislead navigation, attacking vehicle systems, and disguising a ship's ID to deceive AI-driven satellites. AI systems vulnerable to such attacks include driverless cars and military drones; if their security is compromised, either could become a weapon in a hacker's hands.

In fact, every DNN needs the ability to defend itself against attacks; otherwise, like a computer without virus protection, it is easily put at risk.

The open-source Adversarial Robustness Toolbox

The Adversarial Robustness Toolbox, launched by IBM Research in Ireland, is an open-source software library that helps researchers and developers defend deep neural networks against adversarial attacks, making AI systems more secure. The tool was officially released at RSA by Dr. Sridhar Muppidi (IBM Fellow, Vice President and Chief Technology Officer of IBM Security) and Koos Lodewijkx (Vice President and CTO of Security Operations and Response (SOAR)). You can also view and use the project on GitHub:
https://github.com/ibm/adversarial-robustness-toolbox

The Adversarial Robustness Toolbox is designed to support researchers and developers both in creating new defense techniques and in deploying practical defenses for real-world AI systems. Researchers can use it to benchmark newly designed defenses against the current state of the art. For developers, the toolbox provides interfaces that support composing individual methods, used as building blocks, into comprehensive defense systems.

The open-source library is written in Python, the most common language for developing, testing, and deploying deep neural networks. It includes state-of-the-art algorithms for crafting adversarial examples as well as methods for hardening deep neural networks against adversarial attacks.
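As a concrete illustration, here is a minimal sketch of crafting adversarial examples with the toolbox's Python API. The module paths follow recent releases of the library and may differ in the first release, and the tiny Keras model is a throwaway example:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Sequential

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import KerasClassifier
from art.utils import load_mnist

# ART's KerasClassifier expects graph-mode TensorFlow.
tf.compat.v1.disable_eager_execution()

# Load MNIST through ART's convenience utility.
(x_train, y_train), (x_test, y_test), min_pix, max_pix = load_mnist()

# A deliberately simple Keras model to attack.
model = Sequential([
    Flatten(input_shape=(28, 28, 1)),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128)

# Wrap the model so ART's attacks and defenses can drive it.
classifier = KerasClassifier(model=model, clip_values=(min_pix, max_pix))

# Craft adversarial test images with the fast gradient sign method.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_test_adv = attack.generate(x=x_test)
```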

With this open-source toolbox, developers can protect deep neural networks in three steps (a combined sketch follows the list):

Measuring model robustness. First, assess the robustness of a given deep neural network. A straightforward way to do this is to measure the loss of accuracy on adversarial inputs. Other approaches measure how much the network's internal representations and output vary when the input is changed slightly.

Model hardening. Second, "harden" the deep neural network to make it more robust against adversarial inputs. Common approaches include preprocessing the network's inputs, augmenting the training data with adversarial examples, and changing the network architecture to prevent adversarial signals from propagating through its internal representations.

Runtime detection. Finally, runtime detection methods can be applied to flag inputs that an attacker may have tampered with. These methods typically try to exploit the abnormal activations that adversarial inputs cause in a deep neural network's internal layers.
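Continuing the sketch above, the three steps map onto the library roughly as follows. The module path for AdversarialTrainer assumes a recent release, and the detection step is a hand-rolled, confidence-based illustration rather than a specific toolbox API:

```python
from art.defences.trainer import AdversarialTrainer

# Step 1 -- measure robustness: compare accuracy on clean
# and adversarial test inputs.
def accuracy(clf, x, y):
    preds = clf.predict(x)
    return np.mean(np.argmax(preds, axis=1) == np.argmax(y, axis=1))

print("clean accuracy:      ", accuracy(classifier, x_test, y_test))
print("adversarial accuracy:", accuracy(classifier, x_test_adv, y_test))

# Step 2 -- harden the model: augment training with adversarial
# examples (adversarial training), one of the approaches listed above.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, nb_epochs=3, batch_size=128)
print("hardened adversarial accuracy:",
      accuracy(classifier, x_test_adv, y_test))

# Step 3 -- runtime detection (illustrative only): flag inputs whose
# prediction confidence collapses, a crude proxy for the abnormal
# internal activations that adversarial inputs tend to cause.
probs = classifier.predict(x_test_adv)
suspicious = np.max(probs, axis=1) < 0.5   # threshold is arbitrary
print("flagged inputs:", int(np.sum(suspicious)))
```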

Getting started

Visit the open-source project at:

https://github.com/ibm/adversarial-robustness-toolbox

You can start using the Adversarial Robustness Toolbox now! The current release ships with comprehensive documentation and tutorials to help researchers and developers get started quickly. The research team is preparing a more detailed white paper describing the methods implemented in the open-source library.

The first release of the Adversarial Robustness Toolbox supports deep neural networks implemented in TensorFlow and Keras; subsequent releases will support other popular frameworks such as PyTorch or MXNet. Currently, the toolbox focuses on improving the adversarial robustness of visual recognition systems; future versions will cover other data modes such as speech, text, or time series.

The IBM research team hopes the Adversarial Robustness Toolbox project will advance research and development on the adversarial robustness of deep neural networks, and make the deployment of AI in real-world applications more secure. If you have experience using the toolbox, or any suggestions for improving it, please share them with the research team!

