The last few years have taught us that modern manufacturing must cope with disruption and be ready to pivot quickly. Agility depends on data flowing in real time: with it, we can adapt to disrupted products and disrupted supply chains, even if we change manufacturing location or method.
Here’s how we become more agile:
● Digital strategy should align with business strategy
● Datasets need to be accurate and well defined
● Insights, decisions, and outcomes need to be mapped to those datasets
● AI can help us unlock the value of big data
Agility is essential
Manufacturers have talked about agility for decades, promoting the idea of flexible manufacturing, particularly in high-mix, low-to-medium-volume environments. Agility is essential if manufacturers are to deliver solutions that suit consumers and brands. Indeed, Industry 4.0 describes an agile manufacturing ecosystem that works with a lot size of one, with a digital thread that defines the entire supply chain and manufacturing process.
The numerous disruptions of the last five years (trade war, pandemic, component shortages, geopolitics, etc.) have shown just how much the industry needs this agility. And while many companies, particularly those in EMS (Electronic Manufacturing Services), talked a good game, few had built agility into their business or operational strategy.
In fact, the opposite has often been true: over decades, the EMS industry sought to drive economies of scale and an increasing dependence on low-cost labor, all while moving inventory out of the supply chain. This has created rigid manufacturing footprints that thrive on little change and have little ability to adapt at speed. These just-in-time supply chains have proved brittle, leaving no room for the just-in-case scenarios we’ve seen recently.
We’ve also had a decade of ‘talking the talk’ around Industry 4.0 and very few ‘walking the walk’! We need to use digital tools to create a more agile work environment that is less dependent on labor and hence more efficient, adaptable, and reliable. In short, few have seen a measurable digital dividend as yet.
So, what has that got to do with operational analytics?
A lot, it turns out. The role of operational analytics is to generate intelligence that drives insights, leading to better and faster decisions. Those decisions, in turn, drive better outcomes: better quality and reliability, greater efficiency and profitability, and a more robust and agile operational model. The process is simple: data, insight, value.
Data everywhere
We’re not short of data! One thing we have done well in the first decade of the fourth industrial revolution is to figure out how to connect machines and harvest the massive amounts of data available; some might say we now have too much data. Data accuracy is essential, but we’ll come to that in more detail later.
If our process starts with data, it ends with value:
● Data (contextualized) produces intelligence
● Intelligence drives insights
● Insight drives value through better decision making
An example might be the closed loop between an SPI (Solder Paste Inspection) machine and a solder paste printer on a typical SMT (Surface Mount Technology) line. In the past, many errors occurred because of the print quality of the solder paste printer. These would result in poor quality after reflow, low first-pass yield, and inefficiencies. Introducing SPI to the process allowed us to stop the line if there was an issue, but it only acted as a stop signal.
By creating a closed loop, the SPI can use images of the board with the printed solder paste (data) to determine whether the right amount of solder paste is present in the right place (insight). It then uses that insight to decide how much to adjust the printer (speed, alignment, and pressure) to get the best result, reducing waste and increasing yield and reliability to drive incremental value.
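To make that loop concrete, here is a minimal sketch in Python of how an SPI measurement might be turned into a printer adjustment. The structures, parameter names, and correction factors are illustrative assumptions rather than any vendor’s actual interface.

```python
from dataclasses import dataclass

# Illustrative settings a solder paste printer might expose (hypothetical names).
@dataclass
class PrinterSettings:
    pressure: float   # squeegee pressure
    offset_x: float   # stencil alignment offset (mm)
    offset_y: float   # stencil alignment offset (mm)
    speed: float      # print speed (mm/s)

# Illustrative summary of what an SPI system reports for a board.
@dataclass
class SpiMeasurement:
    volume_ratio: float  # measured paste volume / target volume (1.0 = nominal)
    shift_x: float       # deposit offset from pad centre (mm)
    shift_y: float

def adjust_printer(settings: PrinterSettings, m: SpiMeasurement) -> PrinterSettings:
    """Closed-loop correction: turn an SPI measurement (data and insight)
    into a printer adjustment (decision). Gains are assumed values."""
    new = PrinterSettings(**vars(settings))
    # Too little paste -> raise pressure slightly; too much -> reduce it.
    if m.volume_ratio < 0.9:
        new.pressure *= 1.05
    elif m.volume_ratio > 1.1:
        new.pressure *= 0.95
    # Systematic deposit shift -> nudge alignment in the opposite direction.
    new.offset_x -= 0.5 * m.shift_x
    new.offset_y -= 0.5 * m.shift_y
    return new

# Example: SPI reports slightly low paste volume and a small shift in X.
print(adjust_printer(PrinterSettings(6.0, 0.0, 0.0, 40.0),
                     SpiMeasurement(volume_ratio=0.85, shift_x=0.03, shift_y=0.0)))
```

The specific gains matter less than the shape of the loop: measurement in, decision out, on every print cycle.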
In this case, we know the outcome or value we are trying to achieve and can work back to understand the dataset needed to gain insight and make a good decision promptly. We could get numerous parameters from the SMT line; some deliver value, and some may not. Hence, the first phase of any operational analytics strategy is to understand what data we need and how to use it to deliver value.
The quality of the data is also critical in this example. Not all SPI systems are created equal, and if the image is not accurate or accurately processed, errors can slip through, and false calls can slow the entire process.
Analyzing data with AI just got easier
Like Industry 4.0, we’ve been talking up AI (Artificial Intelligence) for some time. Right now, everyone’s being dazzled, and occasionally disappointed, by the skills of OpenAI’s ChatGPT and other chatbots from Google and Microsoft. Undoubtedly, AI has a massive part to play in our future, whether doing our children’s homework or figuring out how to optimize a factory or even an entire manufacturing ecosystem.
What these AI systems show us, often vividly, is the importance of the learning derived from datasets. If you use unreliable data, you’ll get unpredictable insight, driving flawed decisions. Let’s return to the example of the inspection system used to adjust the line. AI could be used to manage the enormous amount of data being derived from the system, but we need to be especially careful about which datasets we use to train that AI. Hence, we need to ensure that those developing these systems have deep domain knowledge of both the manufacturing process and what separates good data from bad. We must also ensure we use the best possible inspection solution with the highest definition and most accurate image.
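As a simple illustration of that curation step, a filter like the sketch below could decide which inspection records are trustworthy enough to reach the training set. The field names and thresholds are hypothetical; the domain expert supplies the real ones.

```python
# Hypothetical pre-training filter: keep only inspection records a domain
# expert would trust, so the model never learns from unreliable data.
def is_trustworthy(record: dict) -> bool:
    return (
        record.get("image_resolution_um", 999) <= 15            # high-definition image
        and record.get("focus_score", 0.0) >= 0.9                # image not blurred
        and record.get("label_confirmed_by_operator", False)     # human-verified label
    )

def curate(records: list[dict]) -> list[dict]:
    """Return only the records that pass the domain-knowledge checks."""
    return [r for r in records if is_trustworthy(r)]
```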
There is no doubt that AI will be a game-changer for the use of data, particularly on the factory floor. Like many, we are working hard in this area. The factory floor can give us hundreds of signals at any moment. AI will help us process, prioritize and manage those signals to generate better insight, outcomes, and value. We are on a fast ramp in the performance of AI and its application, but we will need to be careful about how we train our AI systems and how we monitor and manage their performance.
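As a rough illustration of that prioritization, the sketch below scores how far each signal’s latest reading has drifted from its recent baseline and surfaces the worst offenders first. The signal names and the simple z-score heuristic are assumptions for illustration, not a description of any particular product.

```python
from statistics import mean, stdev

def prioritize(history: dict[str, list[float]], latest: dict[str, float]) -> list[tuple[str, float]]:
    """Rank signals by how anomalous their latest reading is versus recent history."""
    scores = {}
    for name, values in history.items():
        mu, sigma = mean(values), stdev(values)
        scores[name] = abs(latest[name] - mu) / sigma if sigma else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example with two hypothetical line signals: a reflow temperature excursion
# ranks above a placement rate that is still within its normal band.
history = {"reflow_zone3_temp": [238, 239, 240, 239], "placement_rate": [980, 990, 985, 995]}
latest = {"reflow_zone3_temp": 251.0, "placement_rate": 990.0}
print(prioritize(history, latest))
```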
Taking care of data
It’s worth thinking about the best practices for the management of data. With the volume of data generated in a factory, it is easy to see how data volumes can quickly become unwieldy and expensive in terms of storage. Here are a few best practices and things to consider in terms of data management:
● Backup and recovery – it is essential to have a regular backup plan and backups in multiple locations to ensure that if and when a data breach or failure occurs, recovery can happen quickly and seamlessly.
● Data locations – consider if you plan to store data on-premise, in the cloud, or perhaps both. Within this decision will be considerations around data security and access. Multiple locations should add protection against loss but will also add cost.
● Security and access – encryption is vital, as much of the data stored may be confidential or include your or your customer’s IP. The same is true of access: ensure that access is restricted to those who need the data and have the appropriate security clearance. As a rule of thumb, any data being transmitted should be encrypted (see the sketch after this list). Managing access is part of any data security system.
● Data management – using the correct tools and systems can help optimize the data storage needed while creating more efficient workflows.
● Compliance and regulation – over the last decade, various laws and regulations have been implemented to protect privacy and ensure data is properly collected, stored, and shared. Ensure you are current and compliant with the rules and legislation of the regions where you operate and store or transfer data.
● Stay up to date – regular audits of your data process should allow you to stay on top of what is happening regarding technology, regulation, and specifically with your data. Knowing what data is being accessed and used and what is accumulating without providing insight or value is essential.
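For the encryption point above, a minimal sketch using the widely available cryptography package shows the basic idea of never storing or transmitting records in the clear. In practice, the hard part is key management; here the key is generated inline purely for illustration.

```python
from cryptography.fernet import Fernet

# In production the key would come from a managed secret store,
# not be generated next to the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"line": "SMT-2", "first_pass_yield": 0.987}'
token = cipher.encrypt(record)     # what gets stored or transmitted
original = cipher.decrypt(token)   # only holders of the key can recover it
assert original == record
```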
Whose data is it anyway?
Now that we can use big data, we must consider whose hands the power should be in and who needs which data. In a typical brand/EMS relationship, the data required by the brand will differ from what the contract manufacturer needs. Typically, the brand needs product data affecting traceability, reliability, recalls, and supply chain transparency. Data concerning manufacturing performance, on the other hand, is more typically used by the manufacturing company. But sometimes, these lines are blurred.
Hence, sharing data in an open and safe environment is important. Trusted data must be available to drive custom dashboards, reports, and notifications for every stakeholder. In some cases, data access must be gated so only those needing sensitive information can view it. Lastly, the ability to drill down into each data field can be extremely valuable for understanding an issue’s root cause and finding the right solution.
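One simple way to gate that access is to filter each record down to the fields a stakeholder’s role is entitled to see before it reaches their dashboard. The roles and field names below are illustrative assumptions.

```python
# Hypothetical mapping of stakeholder roles to the data fields they may see.
VISIBLE_FIELDS = {
    "brand": {"serial_number", "traceability", "recall_status", "supplier"},
    "ems_operations": {"serial_number", "first_pass_yield", "cycle_time", "downtime"},
}

def view_for(role: str, record: dict) -> dict:
    """Return only the fields this role is allowed to view."""
    allowed = VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"serial_number": "SN1234", "first_pass_yield": 0.98,
          "recall_status": "none", "cycle_time": 31.5}
print(view_for("brand", record))           # traceability-oriented view
print(view_for("ems_operations", record))  # performance-oriented view
```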
Five years ago, everyone was talking about the ‘glass factory’ concept, where customers and the operational team could see exactly what was happening on each line and for each product. Now, thanks to recent component shortages and supply chain disruptions, people are more excited about the idea of a ‘glass pipeline’, which provides real-time transparency into each part of the supply chain.
Focus on the end game
The bottom line is that data needs to serve the business’s strategy rather than the other way around! If we can design and plan the outcomes we need, such as a more efficient and sustainable supply chain and manufacturing ecosystem, we can map them to the data we need to collect to drive those outcomes.
And if we take a more open-minded approach in that design, we can create analytics that is as adaptable and agile as our businesses need to be!