Article 29 February 2024

AI implementation: not just for tech giants, but also for the manufacturing industry

In November 2022, the AI chatbot ChatGPT became available for public use. After a period of unprecedented popularity, the upgraded model, GPT-4, followed in March 2023. But amid this momentum, a countermovement also emerged. Just weeks after the launch of GPT-4, more than a thousand tech leaders, including Elon Musk, signed an open letter calling on AI labs to take a six-month pause in AI research and development.

Critics argued the technology was evolving too fast. Italy became the first Western country to temporarily ban ChatGPT, after which the European Union and China announced they were working on AI regulations.

When we consider responsible AI, we tend to think of its impact on the tech industry: how should AI be regulated, and what developments will new regulations bring? But AI is also proving a valuable force in other sectors, such as manufacturing. The technology can ‘unlock’ precious time and resources by automating workloads, informing business decisions and streamlining operations.

Self-aware

For many companies in the manufacturing industry, the issue is not whether AI becomes self-aware, but whether they understand the advice and decisions of their AI models and can detect malware. Organisations are, after all, increasingly integrating and relying on AI models.

As for the role AI plays in the manufacturing industry: it can simplify work by flagging potential machine maintenance needs before the service team has to be called in, optimising business processes along the way.
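The idea behind such predictive maintenance can be sketched in a few lines. The following toy example is an illustrative assumption, not any vendor's actual system: it flags a machine when a rolling average of sensor readings drifts outside a tolerance band around the expected baseline.

```python
# Hypothetical predictive-maintenance sketch. The class name, baseline and
# tolerance values are all illustrative assumptions for this article.
from collections import deque

class MaintenanceMonitor:
    def __init__(self, baseline: float, tolerance: float, window: int = 5):
        self.baseline = baseline           # expected sensor reading
        self.tolerance = tolerance         # allowed deviation from baseline
        self.readings = deque(maxlen=window)

    def add_reading(self, value: float) -> bool:
        """Record a reading; return True if maintenance should be scheduled."""
        self.readings.append(value)
        avg = sum(self.readings) / len(self.readings)
        return abs(avg - self.baseline) > self.tolerance

monitor = MaintenanceMonitor(baseline=1.0, tolerance=0.2)
for v in [1.0, 1.1, 1.2, 1.4, 1.5]:   # vibration slowly creeping upward
    alert = monitor.add_reading(v)
print(alert)  # True: the rolling average has drifted past the band
```

A real system would of course use a trained model rather than a fixed threshold, but the principle is the same: the machine asks for service before it fails.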

Cycle

When it comes to responsible AI, there are two considerations to take into account. The first is practical: does AI make the right decisions for the right reasons? Explainable AI is crucial to understanding why an algorithm makes the decisions it does, even when those decisions turn out to be ineffective. There is often a cycle in which machine learning (ML) feeds the AI system and the AI in turn produces more data for the ML model. Flawed reasoning can pollute the output, resulting in unusable data and unreliable decision-making, so understanding this cycle is important.
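The cycle described above can be made concrete with a toy simulation. This is a deliberately simplified sketch (the numbers are assumptions, not measurements): a "model" that estimates a mean is retrained each round on data it generated itself, and a small systematic flaw compounds round after round.

```python
# Toy illustration of a polluted ML/AI feedback loop, not a real system.
# A model with a small bias generates data, then retrains on that data;
# the error accumulates instead of averaging out.
import random

random.seed(0)
true_mean = 10.0
estimate = true_mean
bias_per_round = 0.05  # assumed small flaw in the model's reasoning

for _ in range(20):
    # the model generates data around its current (slightly biased) estimate
    data = [random.gauss(estimate + bias_per_round, 0.1) for _ in range(100)]
    # the next model is trained on that self-generated data
    estimate = sum(data) / len(data)

print(f"true mean: {true_mean}, estimate after 20 rounds: {estimate:.2f}")
# the estimate drifts to roughly 11, a full unit away from the truth
```

Each round's error is tiny, which is exactly why explainability matters: without insight into *why* the model decides as it does, the drift goes unnoticed until the output is unusable.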

The other side of the coin, ethics, focuses more on the cybersecurity concerns surrounding AI. Ransomware poses a significant problem for any AI system: when malware corrupts the data in an AI system and disrupts the algorithm, it can snowball into disastrous consequences, such as irreparable damage to products.

Intrusion

The more autonomous and intelligent AI systems become, the greater the risk that a malicious party can intrude and damage a system without disabling it completely. That makes it less likely the intrusion will be detected and repaired in time; the lack of human intervention gives malware more opportunity to go unnoticed.

Companies should therefore take cybersecurity seriously to use AI safely and responsibly. That requires software with proper security measures and strict segregation of duties and authority for each task and user. Bringing the practical and ethical aspects of AI together in this way results in responsible use, and AI can then be deployed to simplify business decision-making.
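Segregation of duties can be sketched as a simple permission table. The roles and actions below are assumed for illustration only; the key property is that no single role can both propose a model update and approve it.

```python
# Minimal segregation-of-duties sketch (assumed roles/actions, not a real
# framework): each role may perform only the actions assigned to it.
PERMISSIONS = {
    "data_engineer": {"ingest_data", "clean_data"},
    "ml_engineer": {"train_model", "propose_update"},
    "reviewer": {"approve_update"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the role is explicitly granted the action."""
    return action in PERMISSIONS.get(role, set())

print(is_allowed("ml_engineer", "propose_update"))  # True
print(is_allowed("ml_engineer", "approve_update"))  # False: needs a reviewer
```

In practice this lives in an identity-and-access-management system rather than a dictionary, but the design choice is the same: an attacker who compromises one account still cannot complete a sensitive change alone.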

With each update, ChatGPT has shown itself to be more effective and capable, which has contributed to its growing popularity. Despite ongoing uncertainty about the unknown, the chances of a pause in the development of new AI tools seem slim. It is essential that we do not hesitate to understand how this technology works at a deep level, harness its full potential, and protect AI just as rigorously against malicious attacks from unwanted parties.

Source: Kevin Miller for Computable.nl
