Security Researchers Raise Concerns Over Flaws in Machine Learning


Today it is difficult to build effective cybersecurity defenses without relying on innovative technologies like machine learning and artificial intelligence, and machine learning is one of the fastest-growing trends in the field. But machine learning and AI bring cyber threats of their own. Unlike traditional software, where flaws in design and source code account for most security issues, in AI systems vulnerabilities can exist in the images, audio files, text, and other data used to train and run machine learning models.

What is machine learning?

Machine learning, a subset of AI, is helping business organizations analyze threats and respond to attacks and security incidents. It also helps automate the boring, tedious tasks previously carried out by under-skilled security teams. Google, for example, now uses machine learning to analyze threats against mobile endpoints running Android, and to detect and remove malware from infected handsets.

What are adversarial attacks? 

Adversarial attacks are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they are like optical illusions for machines. They follow a familiar pattern: each new class of technology brings its own class of attack. As web applications with database backends replaced static websites, SQL injection attacks became prevalent. The widespread adoption of browser-side scripting languages gave rise to cross-site scripting attacks. Buffer overflow attacks exploit the way programming languages such as C handle memory allocation to overwrite critical variables and execute malicious code on target computers.
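To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the best-documented adversarial attack techniques. The toy classifier, input shape, and epsilon value are illustrative assumptions for this example, not details from Adversa's report.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the model's loss, so the change stays imperceptible to a
# human but the prediction can flip. The model below is a toy stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for a deployed image classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()

def fgsm_attack(image: torch.Tensor, label: torch.Tensor, epsilon: float) -> torch.Tensor:
    """Return a copy of `image` perturbed to increase the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage: a random "image" with an arbitrary ground-truth label.
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])
x_adv = fgsm_attack(x, y, epsilon=0.03)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())  # may differ
```

The perturbation budget epsilon bounds how far any single pixel moves, which is why adversarial examples can look identical to the original image to a human observer while still fooling the model.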

Security flaws linked with machine learning and AI 

Security researchers at Adversa, a Tel Aviv-based start-up focused on security for artificial intelligence (AI) systems, have published a report finding that many machine learning systems are vulnerable to adversarial attacks: imperceptible manipulations that cause models to behave erratically.

According to the researchers at Adversa, machine learning systems that process visual data account for most of the work on adversarial attacks, followed by systems for analytics, language processing, and autonomy. Web developers who are integrating machine learning models into their applications should take note of these security issues, warned Alex Polyakov, co-founder and CEO of Adversa.

“There is definitely a big difference in so-called digital and physical attacks. Now, it is much easier to perform digital attacks against web applications: sometimes changing only one pixel is enough to cause a misclassification,” Polyakov told The Daily Swig.
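As a rough illustration of the single-pixel scenario Polyakov describes, the sketch below brute-forces one pixel at a time until the model's prediction flips. Published one-pixel attacks typically use smarter search strategies such as differential evolution; the classifier here is again a hypothetical toy, not the setup from Adversa's report.

```python
import torch
import torch.nn as nn

# Toy stand-in classifier, as in the FGSM sketch above.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()

@torch.no_grad()
def one_pixel_attack(image: torch.Tensor, label_to_flip: int, value: float = 1.0):
    """Try overwriting each pixel in turn; return the first misclassifying image."""
    _, _, height, width = image.shape
    for i in range(height):
        for j in range(width):
            candidate = image.clone()
            candidate[0, :, i, j] = value  # change one pixel across all channels
            if model(candidate).argmax(dim=1).item() != label_to_flip:
                return candidate, (i, j)   # prediction flipped
    return None, None                      # no single-pixel flip found

# Usage: try to flip whatever the model currently predicts for a random input.
x = torch.rand(1, 3, 32, 32)
adversarial, pixel = one_pixel_attack(x, label_to_flip=model(x).argmax(dim=1).item())
```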
