Richard Hill

Category: Manufacturing Systems

  • Model a business: simulate industrial processes

    There comes a point when your spreadsheet models of business processes just don’t cut it. You observe some complexity that is just too difficult to explore. You have questions that remain unanswered because you can’t do the analysis. One way of tackling this is to model a business so that you can simulate it and gain some useful insight.

    We are going to look at an everyday approach to creating a model of an industrial process or service. We shall consider how we can ask questions of the model and use that to improve our understanding.

    With this understanding we shall then look at building a simple tool to simulate the operation of the process and thus, model a business. This simulation will produce results that we can use to experiment with different scenarios so that when we go back to the business, we can take more informed actions.

    First, we need to understand the system.

    Understanding business operations to model a business

    Manufacturing facilities vary in both size and complexity. One factory might have two or three areas where different processes take place. Another factory might contain hundreds of machines.

    Each of the factories will have evolved to cope with the manufacture of different products, different mixes of order types, different customer demand profiles, varying quality of raw materials, unpredictable machine breakdowns and so on.

    The list of possible interruptions to a neatly ordered continuous flow of efficient output is endless.

    Shop floor supervisors manage these variations using their analytical skills and experience. At some point during the working week, they’ll be required to answer the following questions:

    1. When will order X be finished?
    2. How much stock is tied-up in the factory?
    3. What is the utilisation of the work centre?

    These questions might be asked by different stakeholders.

    Question 1 probably comes from the customer, via the sales department, perhaps because the order is late.

    Question 2 might come from purchasing who are concerned with re-order quantities for input materials. Or it might be the accountants who are assessing cashflow.

    Question 3 is certainly asked by the accountants, so that they can put a measure on the production potential of a factory. But it is also posed by the planners who want to find additional production capacity for more customer orders.

    In a smaller organisation these roles may be undertaken by the same person. In larger companies the functions will be separate departments. Whatever the size of the facility, the questions are the same. The answers are likely to be the same also.

    When faced with such questions, there are too many variables at play to make a reasoned judgement. Such answers start with “it depends”.

    Attempting to quantify the lateness of an order is dependent upon the jobs in front of the late order, the reliability of the process, whether the operator is working at peak performance, the quality of the tooling and raw materials, etc.

    If the process in question is fed by, or feeds into, other processes, the opportunities for error are compounded. This leads to the use of estimates which might be generous and therefore build inefficiencies into how we manage the overall operations.

    What we need is a model of the facility. This model captures the essential characteristics of the business unit and lets us change some of those characteristics so that we can see what the effects of those changes might be.

    Our supervisor might have had an idea to reduce the batch sizes of their orders, but not felt able to try it out as their machine utilisation measures might drop.

    If something went wrong and an order was late, the change initiated by the supervisor might be cited as the cause of the reduction in output.

    But if that change could be applied to a model, that has no physical connection to the real facility, perhaps we could learn more about how the system behaves. If we understand the system better, we stand to make better decisions in the future.

    This practice is referred to as simulation and it has long been the preserve of industrial mathematicians, or scientists who study operational research. Such work creates a lot of value for organisations, by creating models and allowing production personnel to experiment with different strategies.

    However, these mathematical approaches are often inaccessible and significant training is required to interpret the models.

    We can often obtain much of the benefit of simulation without the need for advanced mathematical skills, and this is the approach that we shall take in this article.

    Everything is a queue

    Let’s assume that you visit the local supermarket to buy a few items. You select your items and make your way to the checkouts to pay for the shopping.

    There are a number of checkouts in the supermarket but for some reason only one of the checkouts is operational. You are not the only customer in the store, and there are three other people already at the checkout, waiting in a queue to be served.

    When they have been through the checkout, it will be your turn to be served. Fig. 1 illustrates the scenario.

    Fig. 1. Supermarket queueing with one checkout.

    We are going to assume that each person and their shopping in the queue represents one job.

    Just for a moment, think about your answers to the following questions:

    1. When will your job (you and your shopping) be finished?
    2. How many jobs are there in the queue?
    3. What capacity of the checkout is utilised?

    You may recognise these questions from earlier. What was your answer to Question 1?

    Since we don’t know how long it takes to process any of the shopping, we would have to say “it depends”.

    It depends on how much shopping each person in the queue has; this might range from a hand basket to an over-laden trolley.

    Question 2 is a little simpler. We know that each person and their shopping is classed as a job, so we just count the number of jobs in the queue. If there are three jobs waiting in front of you, there must be a job in progress at the checkout, which makes four jobs.

    And then there is you, bringing the total to five.

    And what about Question 3?

    When thinking about utilisation we need to consider potential interruptions such as:

    • the checkout operator being changed at the end of a shift;
    • a request to a customer services supervisor for a price check, because the barcode on an item is unreadable;
    • a power cut causing the till to stop working.

    If there is a queue of customers, and there are no disruptions to the actual process, we can assume that the checkout is kept busy. Once the queue becomes zero (all the jobs have been processed), or there is an interruption, the checkout becomes idle and the utilisation drops to zero.
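    This view of utilisation can be pinned down as busy time divided by available time. Here is a minimal stdlib sketch (the function name is ours, purely for illustration):

```python
def utilisation(busy_minutes, total_minutes):
    """Fraction of the available time that the checkout spends serving jobs."""
    return busy_minutes / total_minutes

# A checkout busy for 450 minutes of an 8-hour (480-minute) shift
print(utilisation(450, 480))  # 0.9375
```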

    Now that we have a basic representation of our supermarket checkout in place, let’s see how we can alter the performance of the system.

    The supermarket manager realises that if customers have to queue for too long they may become frustrated, or even leave the store without making a purchase. This is not good for business, so another checkout is opened up as in Fig. 2.

    Fig. 2. Supermarket queueing with two checkouts.

    Now, you approach the checkouts and find that there are two checkouts working. Each checkout is processing one job, with one further job waiting in each queue. You are free to join either queue.

    Let’s assume that it takes the same amount of time to process each job. If that is the case, since both queues are shorter, you will have to queue for less time before your job is processed. However, the utilisation of the checkouts falls unless more jobs arrive behind you.

    We can thus deduce that there is some form of relationship between the number of available checkouts, the number of jobs to be processed, and the overall time taken to process an individual job.
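    That relationship can be sketched in a few lines of Python. The function below is our own simplified illustration, not from any library: it assumes all jobs are already queueing and serves them first-come, first-served.

```python
import heapq

def total_wait(service_times, num_servers):
    """Total minutes spent queueing when the jobs already in the queue
    are served first-come, first-served by identical checkouts."""
    free_at = [0.0] * num_servers  # when each checkout next becomes free
    heapq.heapify(free_at)
    waited = 0.0
    for service in service_times:
        start = heapq.heappop(free_at)   # earliest available checkout
        waited += start                  # time this job spent queueing
        heapq.heappush(free_at, start + service)
    return waited

jobs = [15, 15, 15, 15, 15]      # five jobs of 15 minutes each
print(total_wait(jobs, 1))       # 150.0 minutes of queueing in total
print(total_wait(jobs, 2))       # 60.0 minutes with a second checkout
```

    Note that a second checkout cuts the total queueing by more than half here; the exact gain depends on how the jobs interleave, which is why simulation is so useful.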

    If the supermarket manager had such a model, they could experiment with the optimum number of checkouts to service their customer demand patterns. This would help them allocate the correct number of checkout staff for busy periods, while reducing the instances of checkouts being idle during quieter times.

    The model would permit them to plan for seasonal adjustments in shoppers’ behaviours.

    But, if the model can be executed quickly, it could also be a tool to explore a scenario that is unfolding – such as a large influx of customers that were unexpected – and this is where modelling and simulation can become a powerful tool for the management of business operations.

    Modelling an industrial process

    We shall now consider an industrial scenario. A joinery company produces wooden window frames. Each of the frames is cut from lengths of timber that are shaped and cut to length by a machine.

    The company receives orders of varying quantities of windows, which means that varying numbers of timber lengths are required from the first machine. The company only cuts timber lengths for orders and does not make products to put into stock.

    Each order is considered to be a job. Just as was the case with the shopper and their variable amount of shopping, each job can vary in size.

    Each job must then spend a certain amount of time waiting in a queue, before being processed by the machine. The total time that the timber is in the system is queueing time + processing time.

    Both the queueing time and the processing time are dependent upon the size of the respective order.
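    Under first-come, first-served processing this can be computed directly. A minimal sketch (the function name and figures are ours, for illustration only):

```python
def times_in_system(order_sizes, minutes_per_unit):
    """Total time (queueing + processing) for each order on a single
    machine that processes orders first-come, first-served."""
    totals = []
    queue_ahead = 0.0  # processing time of all earlier orders
    for size in order_sizes:
        processing = size * minutes_per_unit
        totals.append(queue_ahead + processing)
        queue_ahead += processing
    return totals

# three orders of 10, 40 and 20 timber lengths at 1.5 minutes per length
print(times_in_system([10, 40, 20], 1.5))  # [15.0, 75.0, 105.0]
```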

    We can see now that the model for creating lengths of timber window frame is essentially the same as our first supermarket model.

    We have jobs, a queue, and a processing station, where the actual work gets done. This scenario is illustrated in Fig. 3.

    Fig. 3. Queueing model for a single industrial machine.

    We can now start to ask questions of this model. For instance, what impact will a longer queue have on a) resource utilisation, and b) the overall time that a job spends in the system?

    A longer queue suggests that there will be less interruption to flow, so the utilisation will be higher.

    However, the longer the queue the more time that a particular job takes to be completed, so the delivery time is longer.

    The next stage is to build a simulation so that we can verify our thoughts.

    Designing a process simulation to model a business

    We now have an illustration of how we can model a single industrial process. That model is part of the initial specification of a simulation that we can execute. The simulation will execute a virtual production run, and that will give us an idea of how the model can perform.

    The simulation allows us to change different parameters of the model, without incurring the cost or disruption of moving physical plant around.

    So far, our model describes:

    • a process of material conversion, where lengths of timber are given a profile and then cut into shorter lengths that are suitable for window frames;
    • a single machine that performs the operations described above;
    • each job is processed one at a time. Multiple jobs cannot be processed simultaneously;
    • jobs arrive for processing and wait to be processed in a queue;
    • a job that has been processed is deemed to be complete and exits the system.

    We now need some more information to allow us to build the simulation.

    First, we should describe the rate at which jobs arrive for processing.

    Second, we need to specify the time taken to process a job.

    Third, we need to consider whether there is any variation in the size of a job. For this first example we shall assume that each job requires roughly the same amount of time to process. We shall explore variable job sizes later.

    There are many different simulation tools that can be used to build queueing models. We shall be using “Ciw” (which is Welsh for “queue”).

    Ciw is a simulation framework written in Python and, as such, is free to acquire and use. It can be installed with pip (‘pip install ciw’), or just type ‘ciw python’ into a search engine to find it.

    Within Ciw, there are three parameters that are of relevance to our industrial process model.

    1. arrival_distributions: this is the rate at which jobs arrive to be processed. We shall assume that the jobs arrive approximately every 10 minutes, or six times per hour;
    2. service_distributions: this is the time that each job spends being processed, or the time taken to do the shaping and cutting to length of the timber by the server (machine). We shall assume that each job takes 15 minutes;
    3. number_of_servers: this represents the number of machines at a workstation. In our example, we have one machine, or one server.

    It is important to note at this point the difference between parameters that are static, and those that might vary.

    For instance, for a given simulation we can assume that the number of machines (servers) doesn’t alter, so we give it the value of 1 as we want to investigate the scenario with one machine.

    However, while we can say that jobs arrive at a rate of six times per hour, or every 10 minutes, that isn’t strictly realistic.

    Sometimes there are interruptions to the deliveries. A forklift truck might drop the timber when loading it from the lorry, or there may be a physical blockage preventing the wood being placed next to the machine.

    Similarly, the time taken to process the timber won’t always take 15 minutes. This is just an approximation that – on average – takes 15 minutes.

    Sometimes the timber might blunt the cutting blades of the machine and it will take longer to finish the operation.

    Conversely, when the tooling is new or freshly sharpened the machining time will be less than 15 minutes.

    We want our simulation to take account of these variances and we do this by specifying a distribution function. This tells the simulation to use a range of values, whose mean is the arrival rate that we are suggesting.

    So, for a mean arrival interval of 10 minutes, the simulation will generate a set of values that vary, with a mean of 10 minutes.

    This allows the simulation to be more realistic as it will take account of naturally occurring variations in waiting and processing times.
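    Python’s standard library can illustrate what such a distribution looks like. Ciw’s Exponential distribution behaves like random.expovariate in the sketch below: individual gaps vary widely, but their mean settles on the value we specify.

```python
import random

random.seed(1)
# sample 10,000 inter-arrival gaps at a rate of 0.1 per minute
# (a mean gap of 10 minutes)
samples = [random.expovariate(0.1) for _ in range(10_000)]
mean_gap = sum(samples) / len(samples)
print(round(min(samples), 1), round(max(samples), 1))  # gaps vary widely
print(round(mean_gap, 1))  # yet the mean is close to 10 minutes
```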

    We are now ready to build the simulation.

    Building the simulation in Ciw

    Create a new text file called:

    timber_conversion.py

    We shall enter some snippets of code now to quickly create a simulation to produce some results. Try not to worry about some of the details just yet as they will be explained later.

    What is important is to execute a simulation so that we can start to understand the timber conversion process better. First, we specify the arrival and service distributions, along with the quantity of servers:

    import ciw

    N = ciw.create_network(
        # jobs arrive every 10 minutes, or 6 times per hour
        arrival_distributions=[ciw.dists.Exponential(0.1)],
        # jobs take 15 minutes to process, which is 4 jobs completed per hour
        service_distributions=[ciw.dists.Exponential(0.067)],
        # the number of machines available to do the processing
        number_of_servers=[1]
    )

    You might have noticed that the value contained in

    [ciw.dists.Exponential(0.1)]

    does not seem to relate to an arrival rate of 6 times per hour. The distribution function requires a per-minute rate, so we take the arrival rate of 6 (arrivals per hour) and divide it by 60 (the number of minutes in an hour), giving 6/60 = 0.1.

    Similarly, for the service time, the rate of processing per hour is 4 and is represented as 4/60 = 0.067.
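    The conversion is worth capturing in one place. A small helper (the name is our own) makes the arithmetic explicit:

```python
def per_minute_rate(events_per_hour):
    """Convert an hourly rate into the per-minute rate that an
    Exponential distribution parameterised in minutes expects."""
    return events_per_hour / 60

print(per_minute_rate(6))            # 0.1 - six arrivals per hour
print(round(per_minute_rate(4), 3))  # 0.067 - four services per hour
```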

    The next piece of code to add is:

    ciw.seed(1)
    Q = ciw.Simulation(N)
    # run the simulation for one shift (8 hours = 480 minutes)
    Q.simulate_until_max_time(480)

    This is an instruction to tell the computer to create a simulation and to run it for a simulated time of one shift (8 hours/480 minutes).

    That is all that is required to create the simulation. However, there are no instructions to tell the computer to report the results. The following program code does this:

    recs = Q.get_all_records()
    waitingtimes = [r.waiting_time for r in recs]
    servicetimes = [r.service_time for r in recs]
    avg_waiting_time = sum(waitingtimes) / len(waitingtimes)
    print('Avg. wait time: ', avg_waiting_time)
    avg_service_time = sum(servicetimes) / len(servicetimes)
    print('Avg. processing time: ', avg_service_time)
    print('Avg. machine utilisation %: ',
          Q.transitive_nodes[0].server_utilisation)

    There are three results that are reported (look for the ‘print’ keyword).

    First, the average waiting time in minutes for each job.

    Second, the average time taken to process each job in minutes.

    Finally, the average utilisation of the machine (server) as a percentage.

    When you execute your simulation you should see the following results in the console:

    Avg. wait time: 51.51392337104136
    Avg. processing time: 12.643780078229085
    Avg. machine utilisation: 0.9969939361643851

    This tells us that on average, a job took nearly 13 minutes to process and had to wait approximately 52 minutes in the queue. The machine was operating for most of the time (99.7% utilisation).

    This is excellent for a shopfloor supervisor who has to report the percentage of time that a machine spends idle.

    Hardly any downtime for the machine in this situation.

    However, let’s use the simulation to start investigating different scenarios.

    We shall now explore the effect of increasing the number of machines from one to two.

    Edit the following line to increase the number of servers (machines) to 2:

    number_of_servers=[2]

    If we execute the simulation again, we observe the following results:

    Avg. wait time: 8.79660702065997
    Avg. processing time: 14.249724856289776
    Avg. machine utilisation: 0.6827993160305518

    We can see that the addition of an extra machine has dramatically reduced the wait time from 52 minutes to around 9 minutes. The utilisation of the two resources has also fallen to 68%, meaning that machining resources are idle for approximately 32% of the shift.

    While there is a reduction in waiting time, and therefore the overall lead time to delivery of a product, there is the additional capital cost of extra plant. Depending on how the machine is operated, there may also be extra labour required to run both machines at the same time.

    The shopfloor supervisor has a conversation with the company owner and it is clear that there is no cash with which to purchase another machine. The next course of action is to try and increase the output of the timber conversion process.

    The service time is 15 minutes, which means that 4 jobs per hour are processed.

    What difference would it make if we could process 5 jobs per hour?

    Edit the following line to reflect a service rate of 5 jobs per hour (5/60 ≈ 0.083, rounded here to 0.08):

    service_distributions=[ciw.dists.Exponential(0.08)]

    Here are the results:

    Avg. wait time: 26.20588722740488
    Avg. processing time: 11.300597865271746
    Avg. machine utilisation: 0.9485495709367999

    The machine utilisation remains high, but the waiting time is roughly half what it was with a service time of 15 minutes. This illustrates that there is a significant benefit to be had from even small reductions in the service time of a process.

    Such thinking is central to “lean manufacturing” techniques, where potential opportunities for the removal of waste are identified.

    There might be some different tooling that enables the timber to be cut at a faster rate, or there might be a better way of organising the material so that the cutting-to-length operation is optimised for the fewest cuts.

    Confidence

    Once we have built a simulation, it is important that we are confident that it represents the situation that we are modelling.

    If we look at the results we have observed so far, what do we notice about the average processing time?

    We have obtained three different values: 12.6, 14.2 and 11.3 minutes. This is a significant range of values and it suggests that the simulation might not be taking a sufficient number of scenarios into account.

    For a given scenario, there is a period at the start when the simulation queue is empty, then only partially loaded, before a steady state of operation is achieved. Similarly, towards the end of a simulation there will be a number of jobs that remain unfinished.

    When we report the statistics of how the process has performed, we are collecting the data for jobs that have been completed.

    Depending on the time required to ‘wind-up’ and ‘wind-down’ a simulation, there could be a disproportionate effect on the performance that we observe. This would decrease our confidence in the ability of the simulation to serve as a tool for experimentation.

    We deal with this in two ways. First, we run the simulation for a longer time and then report only the performance from the system once it is in a steady state of operation.

    For our 8 hour shift, we could add an hour before the start and at the end for warm-up and cool-down periods.

    Second, we can run the simulation many times, altering a number (called a ‘seed’) so that each run has some variation introduced into it.

    Create a new file called

    timber_conversion_2.py

    and enter the following code:

    import ciw

    N = ciw.create_network(
        # jobs arrive every 10 minutes, or 6 times per hour
        arrival_distributions=[ciw.dists.Exponential(0.1)],
        # jobs take 15 minutes to process, which is 4 jobs completed per hour
        service_distributions=[ciw.dists.Exponential(0.067)],
        # the number of machines available to do the processing
        number_of_servers=[1]
    )

    runs = 1000  # this is the number of simulation runs
    average_waits = []
    average_services = []
    utilisations = []

    for trial in range(runs):
        ciw.seed(trial)  # change the seed for each run
        Q = ciw.Simulation(N)
        # run the simulation for one shift (8 hours = 480 minutes)
        # plus 2 hours (120 minutes) of warm-up and cool-down
        Q.simulate_until_max_time(600, progress_bar=True)
        recs = Q.get_all_records()
        # only report jobs that arrived after the warm-up hour
        # and before the cool-down hour
        waits = [r.waiting_time for r in recs
                 if 60 < r.arrival_date < 540]
        average_waits.append(sum(waits) / len(waits))
        services = [r.service_time for r in recs
                    if 60 < r.arrival_date < 540]
        average_services.append(sum(services) / len(services))
        utilisations.append(Q.transitive_nodes[0].server_utilisation)

    print('Number of simulation runs: ', runs)
    print('Avg. wait time: ', sum(average_waits) / len(average_waits))
    print('Avg. processing time: ',
          sum(average_services) / len(average_services))
    print('Avg. machine utilisation: ',
          sum(utilisations) / len(utilisations))

    Execute the code and you will observe the following results:

    Number of simulation runs: 1000
    Avg. wait time: 115.69878479543915
    Avg. processing time: 14.87316389724181
    Avg. machine utilisation: 0.8560348867271905

    You can now edit the variable ‘runs’ (set to 1000 above) to change the number of times that the simulation executes.

    As the value of ‘runs’ increases the statistics start to stabilise, which gives us confidence in the results that the simulation provides. This is regarded as good practice for the modelling and simulation of systems.
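    The stabilising effect of extra runs can be seen without Ciw at all. In this stdlib sketch (names and figures are ours), each ‘run’ is the mean of 100 exponential service times with a 15-minute mean:

```python
import random

def one_run(seed, n_jobs=100):
    """One replication: the mean of n_jobs exponential service times
    with a 15-minute mean; a stand-in for a single simulation run."""
    rng = random.Random(seed)
    return sum(rng.expovariate(1 / 15) for _ in range(n_jobs)) / n_jobs

single = one_run(0)  # one run on its own is noisy
averaged = sum(one_run(seed) for seed in range(200)) / 200
print(round(single, 2), round(averaged, 2))  # the average sits near 15
```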

    Conclusion

    We have looked at the application of queueing to the modelling of an industrial process. Our queueing model helps us understand the system better, and it also helps specify the various parameters that are important to include in our analysis.

    This specification can then be used with a simulation tool. We have used Ciw to quickly construct a simulation that represents our queueing model.

    As the simulation runs we collect summary statistics that can help us understand the inter-relationships between parameters such as job arrival rates, processing times and the number of resources available to do the work.

    We can then explore different scenarios by changing these parameters and this helps us understand what the limits of the system might be. Exploring different situations via simulations is an inexpensive and quick way to find the limits of a system, or to identify new possibilities.

    For example, you might want to find ways of increasing the output of a factory temporarily to complete a particular rush order for an important customer.

    You know that you can increase capacity by adding another shift or by buying new plant. But you might want to know how many additional operators you need to bring in to complete the extra work. You’ll also want to see how this might impact the rest of the orders for other customers.

    You might not be able to buy, install and commission new plant quickly enough, but a simulation can give you a good idea as to whether you should out-source some of the work or not.

    An example of using simulation strategically is to consider the potential impact of the sales team’s forecast for the next quarter; you could use this forecast to investigate the demands that would be made on your business resources and see what resources you might need.

    If you need to, you’ll be in a much stronger position to justify the acquisition of new plant or additional staffing.

    Model a business yourself

    Using the program code from above, experiment with different values.

    You can change the parameters for the number of simulation runs for instance, but you can also change the ‘shift length’; this refers to the amount of simulated time that the program executes.

    Simulation code allows us to try out different values quickly, to see what the different effects might be. This is convenient when we have a specific question to answer.

    However, we often need to perform deeper analysis of a simulation model, and in such cases it is useful to record the effects of our changes.

    Try to adopt good practice by recording the values that you change, noting the effects of these changes in a table. This habit will help you when your models increase in complexity.

    Some good questions to ask of this model could be:

    1. What is the effect on machine utilisation as the arrival rate of jobs declines?
    2. How would you find an optimum set of values to ensure that the system is balanced?

    When you start to build simulations, you quickly gain a deeper appreciation of the dynamics of systems. An important part of simulation is being able to discover, and then communicate the results of your simulation.

    Using the program code above plus the details available in Ciw documentation, develop some additional information to report.

    For example, it would be interesting to see what the average length of the queue is before the machine.

    This will then tell us what the total inventory that is being processed amounts to (Work in Progress, or WIP).

    The code above currently reports the average (mean) of a set of values. Enhance the reporting to include additional summary statistics such as standard deviation.
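    Python’s statistics module is a convenient starting point. The waiting-time values below are made-up placeholders; substitute the list collected from your own runs:

```python
import statistics

waiting_times = [51.5, 8.8, 26.2, 33.0, 47.1]  # illustrative values only

print('mean  :', round(statistics.mean(waiting_times), 2))
print('median:', statistics.median(waiting_times))
print('stdev :', round(statistics.stdev(waiting_times), 2))
```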

     


  • Lead time

    It doesn’t take long for a conversation on the shop floor of a manufacturing plant to mention lead time. There always seems to be something or other that will affect it, usually negatively. But what does lead time mean exactly?

    Our need to predict the future, and then plan resources to fulfil the future demands, requires some estimation of the time taken to manufacture an item. The time that elapses from the receipt of an order, to the delivery of the product, is generally referred to as the lead time.

    However, the daily conversation on the shop floor is more likely to be about internal movement times: the time that a component takes to get from one process to another. The overall lead time, which is what the planners are concerned with, emerges as a consequence of all the individual ‘process’ lead times being added together.

    This is more of an issue for job shop manufacturing than for flow-line production. Job shops use their plant in different ways depending on the make-up of each order, which means that predictable (and short) lead times are of paramount importance. If we can’t estimate the lead time with any accuracy, either the delivery will be late or the overtime bill will increase.

    The operations on the shop floor tend to be reactive, and the conditions can change at a moment’s notice. This makes planning extremely challenging, since any update to a plan can be quickly overshadowed by yet another change that has occurred in production.

    Thus, we “put fat into the system”, allowing for potential variations and as a consequence, lengthening lead times. We don’t want to do this as planning for a longer lead time means that we shall have more materials (Work in Progress, WIP) in the system, which also means more cash that is committed.

    In flow-line production, the batch size is effectively one. Each product is processed one at a time, before it is passed to the subsequent process centre. In a job shop we process batches that are often much greater than one “to achieve efficiencies”, which is usually an accounting-driven measure of machine utilisation. The shortest lead time is achieved with a batch size of one. As we increase the batch size, the lead time increases. Thus, if we want to increase the responsiveness of a job shop, we need to look at methods that can reduce the batch sizes.
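    The batch-size effect can be made concrete with a little arithmetic. In this sketch (our own simplification: identical stations, no setup times), a batch that moves between stations as one unit is compared with one-piece flow:

```python
def batch_lead_time(batch_size, minutes_per_item, stations=3):
    """Lead time when the whole batch moves between stations together:
    every station must finish the full batch before it moves on."""
    return stations * batch_size * minutes_per_item

def one_piece_lead_time(n_items, minutes_per_item, stations=3):
    """Lead time when items move on individually (batch size of one):
    the line fills like a pipeline, then emits one item per cycle."""
    return (stations + n_items - 1) * minutes_per_item

print(batch_lead_time(20, 2))      # 120 minutes for a batch of 20
print(one_piece_lead_time(20, 2))  # 44 minutes with one-piece flow
```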

  • Production planning

    Experienced shop floor supervisors and production managers can often find themselves at odds with the production planning function in a manufacturing business. Tensions emerge between the organisational desire to trust the principles of Materials Requirements Planning (MRP), which is often the core module for determining works orders to be manufactured, and the hard-won experience of managing materials through workstations.

    The theory sounds fine; assign a lead-time to each component part, enter the due date for the finished product into the system, along with an order quantity, and the planning system will back-calculate the date by which the material is released to the factory.

    Factory supervisors would argue that the assumption that lead-times remain constant is fatally flawed. The planners might reply that if the production schedules were adhered to, everything would work as intended.

    Taking a scientific approach to production planning, by constructing simulations of manufacturing systems, we can observe that any variation in actual lead-times can wreak havoc on the performance of the overall system. A system where material is pushed into the factory on the basis of a due date and fixed assumptions about process lead-times is extremely sensitive to line stoppages, operator absence and material shortages.

    Simulation illustrates that there is a direct relationship between lead-time and the quantity of Work-in-Progress. If the WIP increases, so does the lead-time. If you keep pushing material into the system because that is what the MRP software says, the WIP will increase, and therefore so will the lead-time. Orders that become overdue just get added to the list of works orders, and the cycle continues until other measures such as overtime are taken. Production planning can become a nightmare.
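    This relationship is captured by Little’s law: lead time equals WIP divided by throughput. A minimal sketch (the function name and figures are ours):

```python
def lead_time_hours(wip_jobs, throughput_per_hour):
    """Little's law: average lead time = WIP / throughput."""
    return wip_jobs / throughput_per_hour

print(lead_time_hours(40, 4))  # 10.0 hours
print(lead_time_hours(80, 4))  # 20.0 hours: double the WIP, double the lead time
```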

    The issue here is that MRP does not take into account the WIP levels for a given system. It assumes that constant lead-times imply a constant level of WIP.

    Staff on the shop floor realise this, though it isn’t always that intuitive how to solve the problem.

    Kanban is often hailed as the solution, as part of a Lean implementation. WIP is explicitly controlled at each workstation in a Kanban line. The material cannot be released for processing until a Kanban card becomes available – it is pulled through the system rather than pushed as with MRP. As soon as Kanban is installed, a dramatic reduction in WIP is observed immediately, which is good news until a stoppage occurs. Lean systems use this “threat” to have everyone focus their attention on the stoppage to resolve the problem, with the aim of eradicating the stoppage permanently.
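    The pull mechanism can be sketched as a fixed pool of cards that authorise each release. This is a minimal illustration; the class and job names are mine:

```python
from collections import deque

class KanbanCell:
    """A workstation whose WIP is capped by a fixed number of cards."""

    def __init__(self, cards):
        self.free_cards = cards
        self.in_progress = deque()

    def release(self, job):
        # Material is only pulled in when a card is free.
        if self.free_cards == 0:
            return False
        self.free_cards -= 1
        self.in_progress.append(job)
        return True

    def complete(self):
        # Finishing a job frees its card, authorising the next release.
        job = self.in_progress.popleft()
        self.free_cards += 1
        return job

cell = KanbanCell(cards=2)
print(cell.release("job-1"), cell.release("job-2"))  # True True
print(cell.release("job-3"))                         # False: no card free
cell.complete()
print(cell.release("job-3"))                         # True: a card returned
```

    The cap on cards is the cap on WIP; no amount of enthusiasm on the shop floor can push a third job into a two-card cell.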

    However, this still doesn’t always rest easy with the shop floor supervisor, who only truly settles when the bottleneck process is kept running.

    In a manufacturing system, the bottleneck governs the output of the system as a whole, and should therefore be utilised as much as possible. The way to do this is to ensure that there is a suitably sized buffer of jobs in front of the bottleneck to keep it going. Starving the bottleneck is starving the factory of capacity.

    The granular control of WIP at every workstation can therefore be too restrictive for some production lines, especially where setup times are lengthy or stoppages are more frequent.

    Maintaining a constant level of WIP for the system as a whole, rather than between individual workstations, is the approach referred to as CONWIP. Since CONWIP does not control the individual transit of material between workstations, that material is permitted to flow freely within the factory.

    The emergent effect is that it queues at the entry to the bottleneck, which is exactly what the production manager wants. This keeps utilisation of the slowest process as high as possible, while still restricting the flow of new material into the system, since unchecked releases would adversely affect on-time delivery of finished goods.
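    The emergent queuing can be seen in a toy discrete-time simulation. This is a sketch under deliberately simplified assumptions: one admission per tick, at most one completion per station per tick, and made-up completion probabilities:

```python
import random

random.seed(1)

CAP = 12                 # CONWIP cap: total jobs allowed in the system
rates = [0.9, 0.5, 0.8]  # per-tick completion probability; station 1 is the bottleneck
queues = [0, 0, 0]       # jobs waiting at each station

for _ in range(10_000):
    # Admit new material only while the total WIP is under the cap.
    if sum(queues) < CAP:
        queues[0] += 1
    # Each station completes at most one job per tick, probabilistically.
    for i, rate in enumerate(rates):
        if queues[i] and random.random() < rate:
            queues[i] -= 1
            if i + 1 < len(queues):
                queues[i + 1] += 1

# Without any per-station control, WIP accumulates in front of the
# bottleneck, keeping it fed while the cap holds total inventory down.
print(queues)
```

    Nothing in the admission rule mentions the bottleneck, yet the buffer forms in front of it anyway; that is the emergent behaviour the text describes.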

    WIP control is a fundamental concept for the management of a production facility. IIoT can help enable WIP management by monitoring the utilisation of the current system bottleneck, and controlling the release of new material into the system in response to natural variations in process cycle time. This is particularly important for manufacturing systems that need to deliver mass-customisation for customers.

  • Digital manufacturing: start small

    Digital manufacturing: start small

    While you will find a relentless justification for the need to apply science to manufacturing management in these articles, it would be rather naive to think that just looking after the mathematics will solve all of the challenges. Digital transformation is complex and is enabled through people. People need leadership, particularly in organisations, and if you are to be successful at delivering your vision of digital manufacturing, there needs to be someone at the front who knows how to enact change.

    Pilot schemes are an effective way of introducing potentially disruptive practices to an organisation. It is vital to demonstrate success at a small scale, so that there is evidence that the change works within your organisation’s culture.

    The mathematics of analytics is not always difficult, and simple tools like spreadsheets can take the brunt of the daily workload. Training staff to apply this thinking to their activities can take time, but gets better with practice on your digital manufacturing journey.

    One effective way of ensuring that staff in the pilot become engaged is to make sure that either the measures of the improvement can be explicitly linked to their efforts, or that their efforts are directly measured.

    In the same way that an SPC chart can show a machinist when the tool has lost its edge, an individual’s performance on activities can be measured as above, within, or below some control limits. This data is an essential ingredient of a successful digital manufacturing ecosystem.
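    A minimal sketch of such a check, using Shewhart-style limits of the mean plus or minus three standard deviations; the function names and sample figures are illustrative:

```python
from statistics import mean, stdev

def control_limits(samples):
    # Shewhart-style limits: mean +/- 3 standard deviations.
    centre = mean(samples)
    spread = 3 * stdev(samples)
    return centre - spread, centre + spread

def classify(samples, value):
    lower, upper = control_limits(samples)
    if value > upper:
        return "above"
    if value < lower:
        return "below"
    return "within"

history = [10.1, 9.8, 10.0, 10.2, 9.9, 10.0, 10.1, 9.9]
print(classify(history, 12.5))   # above: worth investigating
print(classify(history, 10.05))  # within: normal variation
```

    Points that fall outside the limits signal a change worth investigating, while points within them are treated as normal variation.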

    Reporting such results enables staff to direct their efforts to the most pressing priorities, while also engendering a culture of continuous improvement.

    The linking of production activity to the monitoring of manufacturing objectives helps develop a culture whereby operations are of interest to, and studied by, all staff, rather than left entirely to the production planning department.

  • Manufacturing science

    Manufacturing science

    What place is there in the factory for any talk of science?

    In fact, it all sounds a bit technical, too theoretical, and probably of no use in practice. Theory doesn’t take account of machine breakdowns, interruptions to material supplies, or machinists not turning up for their shift.

    Science is often taken to mean mathematics, which of course can be considered technical, convoluted, theoretical, abstract, essential or elegant, depending upon your viewpoint.

    A mathematician might argue that the detail of manufacturing operations is too much of a distraction. Therefore, you need to use abstraction to build a simplified, holistic model. That model might then provide insight into how the individual operations inter-relate to produce the results that are being achieved.

    A production manager might counter this by maintaining that “the devil is in the detail”. The minutiae removed from the mathematical model are of prime importance. In fact, the only way to understand the whole factory model is through experience and the subsequent intuition that it develops.

    When pressed, most factory staff accept that some degree of mathematics is useful, from the basic accounting and measurement of operations, through to monitoring, reporting and even forecasting.

    And so the “complex stuff” might be relegated into the realms of simulation and modelling. Or the latest software package for that matter (including the desire to become a Data Scientist by using R or Python to solve everything).

    But science is much more fundamental than the production of mathematical equations, the purchase of new software, or the acquisition of new programming language skills. It is about posing hypotheses, and then using some method to try to disprove them, until an improvement is observed.

    Isn’t that what a production manager does when they attempt to implement Kanban, or lean methods? In fact, the lean movement (and Statistical Process Control before it) relies on the awareness and application of some fundamental mathematics to help managers take rational decisions.

    It is through the understanding and intuition that is acquired by applying scientific principles to manufacturing that progress can be made, and that previous beliefs can be identified as incorrect.

    It is interesting that a casual conversation with a production supervisor or manager will elicit that they have problems with meeting deadlines when the work-in-progress levels increase.

    So why do the same businesses persist with ERP software systems, which specifically assume that lead-times are constant, irrespective of the amount of inventory in the system?

    I’ve witnessed organisations that have thrown out Kanban, as it creates conflicts with the works orders that are generated by the ERP system. Unfortunately for them, that’s the point: there is something inherently wrong with the software that they are choosing to persist with.

    These are the sorts of situations where manufacturing science can help. The mindset of science will create staff who are open to change, ready to question tradition, and be able to acquire whatever essential mathematical skills are required to do the job.

  • IIoT: Technical vs. people skills

    IIoT: Technical vs. people skills

    You want your factory to be IIoT enabled. You’ve seen the videos and read the case studies. It’s obvious: IIoT technologies are central to your digital transformation.

    But where do you start? How do you start?

    The technologies of IIoT implicitly demand people with technical skills. While we travel through an early adoption phase, some plant can produce the data we need, but we’ll probably have to augment other plant so that it can do the same.

    IIoT lends itself to the technophiles; even though the barriers to entry are lowering, if you want to fasten a data reporting capability onto a machine tool, you’ll need to know what you are doing.

    If we consider the area where IIoT is flourishing currently – condition monitoring and predictive maintenance – then the domain is populated by technical people, with technical skills, doing technical things.

    Some installations are moving to a service-oriented model, where the manufacturer does not get involved with the IIoT at all. The IIoT installer takes care of data monitoring, analytics, reporting and communication, and merely produces processed data to be consumed by the client.

    If we want to transform a factory, we shall need to think much more broadly than a predictive maintenance solution. The complex interplay of multiple machines, material handling equipment, finishing plant, assembly lines, etc., suggests that there will be an imperative for the rapid up-skilling of existing staff to become more technical.

    But we know that technology projects often fail when the focus is on the technology itself. Of course, it is the potential of the technology that justified the transformation project in the first place; however, people are still central to the operations and they need to be brought along with the change if the change is to stick.

    So, IIoT implementation initiatives need a people-centric approach to lead the development of people-skills. Lean is a good way of approaching IIoT adoption as it focuses on the principles of efficiency, supporting the development of appropriate behaviours.

    With such behaviours in place, IIoT can be ‘relegated’ to a technology that serves what people really need.

  • Modelling robots and Cyber Physical Systems

    Modelling robots and Cyber Physical Systems

    Webots (https://cyberbotics.com) is a simulation environment for the design and modelling of robotic systems. Since robotics invariably results in some physical actuation, Webots can also be used to model Cyber-Physical Systems, and being open source, the software is free to use and experiment with.

    Webots is particularly suited to newcomers to robotic systems, though it can still be used more formally in industrial scenarios, via links to the Robot Operating System (ROS – https://www.ros.org).

    If you need to develop a robot, to physically control a process, or you are just curious, Webots is a good place to start.

    I use Webots for teaching both robotic systems and cyber-physical systems, usually in the context of digital manufacturing/Industry 4.0. You can model entire systems at an abstract level, or focus on the detail of sensors interacting with each other.

    Some more reasons to use Webots can be found here: https://cyberbotics.com/#webots

    To get a working installation of Webots, you should visit the excellent documentation that is located at: https://cyberbotics.com/doc/guide/installation-procedure#installation-on-macos

  • IIoT Kanban – not so easy

    IIoT Kanban – not so easy

    Any factory floor supervisor knows that the more raw material/components/work-in-progress that is pumped into a manufacturing system, the longer the lead-time. Put another way, the order is delivered late, and subsequent orders cannot be planned with any certainty as you don’t know when they will be ready. This is not good for business.

    MRP attempted to deal with planning by assuming a fixed lead-time for each stage of production. This didn’t work either, though it doesn’t stop manufacturers persisting with MRP-based information systems, as they cannot find a better solution.

    Kanban, from Japan, directly deals with the issue of work-in-progress levels, by controlling them directly. Each Kanban card represents a unit of work to be produced. When you run out of cards, you cannot release more material into the system, even though you could increase the utilisation of a machine, or reduce your wastage percentages.

    This discipline can feel counter-intuitive at first, especially when a machine is sitting idle and you just know that you could get ahead with one extra batch.

    The effects of too much WIP are elegantly demonstrated by a sausage machine. If you put too much sausage meat into the grinder, the sausages are uneven and misshapen. If you carry on adding meat, the grinder just blocks up.

    When you get the flow of material into the grinder balanced with the application of the sausage skin, everything works nicely.

    The consequences of letting WIP go rampant are clear when there are tangible, physical products. But what about processes that do not result in a product?

    What can we do with information ‘products’?

    Kanban is also developing a following with project managers who would like to follow similar principles; manage the workload at any given time to avoid blockages and breakdowns.

    More experience is required here, as it can be more challenging to estimate the amount of work in a given task than it is to know the machining cycle time of a particular component.

    But, the use of IIoT to capture data does help build a corpus of information upon which to predict future durations for tasks, irrespective of whether there is any physical product involved or not.
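    A naive version of such a predictor is a rolling mean over recently observed durations. This is only a sketch; the window size and figures are illustrative, and a production system would use something richer:

```python
def predict_duration(history, window=5):
    # Naive forecast: the mean of the most recent observed durations.
    recent = history[-window:]
    return sum(recent) / len(recent)

# Durations (minutes) captured by IIoT for previous runs of a task:
observed = [32, 35, 31, 40, 33, 34, 36]
print(predict_duration(observed))  # 34.8
```

    Even a crude estimate like this improves as the corpus of captured durations grows, whether the task produces a physical product or not.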

    A lot of the promise of digital manufacturing is the ability to delegate coordination, optimisation and decision-making to the machines. This means that the machines will have to be aware of their surroundings so that they can make judgements that do not violate the WIP protocols of a system.

    This means that digital manufacturing is more than IIoT equipment. It needs architectures and conceptual thinking to ensure that the necessary behaviours are in place to replicate and eventually replace human-centric interventions.

    We may need an agent-oriented view of our systems as these attempt to map behaviours into functionality, and a fundamental aspect of such behaviours is that they are communicated between agents socially.

    Kanban’s apparent simplicity actually disguises a set of complex behaviours that humans take for granted.

    The machines have a way to go yet.