One factor that motivates industrial organisations to adopt Internet of Things (IoT) technologies is the potential to monitor, control and coordinate processes remotely. This opens up new possibilities for collaboration across manufacturing sites, and can have a substantial positive impact on the supply chains of collaborating entities, which may be separate business organisations but which nonetheless depend on serving each other's needs.
The degree of integration that is feasible between processes is in part determined by the frequency of monitoring that is required to optimise a process or group of processes.
For instance, if a system is measuring ambient environmental temperature, an hourly update is likely to produce more than enough data to support whatever decisions need to be taken.
In contrast, a machine tool shaping components that form part of larger assemblies may report tool wear via the loading on its drive motors, monitored through electric current draw. In this case, hourly reporting may be meaningless: a much more frequent stream of reported data is required to identify the point in time at which the tool starts to lose its sharpness and requires replacement.
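The contrast between the two cases can be sketched with a toy example. The numbers below are entirely illustrative (a synthetic current-draw series and a hypothetical alarm threshold), but they show how a wear event that develops between hourly samples is invisible to coarse monitoring yet obvious at a higher reporting rate:

```python
# Sketch: why reporting frequency matters for tool-wear detection.
# All values are illustrative; a real system would read motor current
# from a sensor rather than a synthetic list.

# One reading per minute over two hours (amps drawn by the drive motor).
# A worn tool causes a gradual rise in current starting around minute 70.
current_draw = [5.0] * 70 + [5.0 + 0.1 * i for i in range(50)]

WEAR_THRESHOLD = 6.0  # hypothetical alarm level in amps

# Coarse monitoring: one sample per hour (minutes 0 and 60 only).
hourly = current_draw[::60]
hourly_alarm = any(a > WEAR_THRESHOLD for a in hourly)

# Fine monitoring: every minute.
minutely_alarm_at = next(
    (i for i, a in enumerate(current_draw) if a > WEAR_THRESHOLD), None
)

print(hourly_alarm)       # False: both hourly samples look healthy
print(minutely_alarm_at)  # 81: wear detected at minute 81
```

Both monitoring regimes see the same process; only the sampling rate differs, and the hourly view simply never observes the rising current.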
A second issue in relation to time exists if we consider the potential of collaboration across different geographies, which is feasible with the use of networked sensors (and is typically the basis of a more sophisticated Cyber Physical System or CPS).
In such a case, the monitoring and reporting frequency of the data is important, but there is also an assumption that the data reported from each of the sensors is recorded at the same time. The synchronisation of internal clocks in embedded microprocessors can quickly become an issue when we want to integrate many data-producing devices.
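One common way to reason about clock disagreement is a Cristian-style exchange: the local system records when it sends a request and when the reply arrives, and assumes the remote device's reported clock reading corresponds to the midpoint of the round trip. The sketch below is a simplification (it assumes symmetric network delay, and the timestamps are made-up numbers), but it illustrates how an offset between two device clocks can be estimated:

```python
def estimate_offset(request_time, server_time, response_time):
    """Estimate a remote clock's offset from the local clock.

    request_time / response_time are local timestamps taken around a
    query; server_time is the remote device's reported clock reading.
    Assumes the network delay is symmetric (a simplification).
    """
    round_trip = response_time - request_time
    # Assume the remote reading corresponds to the round-trip midpoint.
    local_midpoint = request_time + round_trip / 2
    return server_time - local_midpoint

# Illustrative numbers: we query a sensor whose clock runs fast, and the
# round trip takes 0.2 s of local time.
offset = estimate_offset(0.0, 2.6, 0.2)
print(offset)  # ~2.5: the sensor's clock is about 2.5 s ahead of ours
```

A positive offset means the remote clock is ahead, and readings from that device can be corrected by subtracting the offset before data from multiple devices is combined.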
Depending on the design of the data-producing system, different latencies may affect the times at which data is reported, analysed and posted into a repository. For some processes this is more critical than for others; the example of temperature recording is clearly more tolerant than that of the concurrent monitoring of machine tool wear.
Our assumption is that we are building systems that sense so that we can do something interesting with the data, and this is usually some form of analytics. Whether the analytics is performed local to the source of the data, or perhaps more "downstream" upon a database or data warehouse, there is still the issue of data integrity: how do we protect against recording data that is inadvertently mislabelled with timestamps that are not synchronised?
Each situation needs to be considered on an individual basis, but it is generally prudent to record a sequence of activities after data capture, so that an analytics function can observe the chain of events that followed a reported measurement. This might include recording the times at which the data was sent, received, processed and archived, for instance.