The United Manufacturing Hub at its core is a Helm Chart for Kubernetes consisting of several microservices and open source 3rd party applications, such as Node-RED and Grafana. This Helm Chart can be deployed in various environments, from edge devices and virtual machines to managed Kubernetes offerings. Large-scale deployments typically combine several of these deployment options.
In this chapter, we’ll explore the various microservices and applications that make up the United Manufacturing Hub, and how they work together to help you extract, contextualize, store, and visualize data from your shop floor.
```mermaid
flowchart
subgraph UMH["United Manufacturing Hub"]
style UMH fill:#47a0b5
subgraph UNS["Unified Namespace"]
style UNS fill:#f4f4f4
kafka["Apache Kafka"]
mqtt["HiveMQ"]
console["Console"]
kafka-bridge
mqtt-kafka-bridge["mqtt-kafka-bridge"]
click kafka "./microservices/core/kafka"
click mqtt "./microservices/core/mqtt-broker"
click console "./microservices/core/console"
click kafka-bridge "./microservices/core/kafka-bridge"
click mqtt-kafka-bridge "./microservices/core/mqtt-kafka-bridge"
mqtt <-- MQTT --> mqtt-kafka-bridge <-- Kafka --> kafka
kafka -- Kafka --> console
end
subgraph custom["Custom Microservices"]
custom-microservice["A user provied custom microservice in the Helm Chart"]
custom-application["A user provided custom application deployed as Kubernetes resources or as a Helm Chart"]
click custom-microservice "./microservices/core/custom"
end
subgraph Historian
style Historian fill:#f4f4f4
kafka-to-postgresql
timescaledb[("TimescaleDB")]
factoryinsight
umh-datasource
grafana["Grafana"]
redis
click kafka-to-postgresql "./microservices/core/kafka-to-postgresql"
click timescaledb "./microservices/core/database"
click factoryinsight "./microservices/core/factoryinsight"
click grafana "./microservices/core/grafana"
click redis "./microservices/core/redis"
kafka -- Kafka ---> kafka-to-postgresql
kafka-to-postgresql -- SQL --> timescaledb
timescaledb -- SQL --> factoryinsight
factoryinsight -- HTTP --> umh-datasource
umh-datasource --Plugin--> grafana
factoryinsight <--RESP--> redis
kafka-to-postgresql <--RESP--> redis
end
subgraph Connectivity
style Connectivity fill:#f4f4f4
nodered["Node-RED"]
barcodereader
sensorconnect
click nodered "./microservices/core/node-red"
click barcodereader "./microservices/community/barcodereader"
click sensorconnect "./microservices/core/sensorconnect"
nodered <-- Kafka --> kafka
barcodereader -- Kafka --> kafka
sensorconnect -- Kafka --> kafka
end
subgraph Simulators
style Simulators fill:#f4f4f4
mqtt-simulator["IoT sensors simulator"]
packml-simulator["PackML simulator"]
opcua-simulator["OPC-UA simulator"]
click mqtt-simulator "./microservices/community/mqtt-simulator"
click packml-simulator "./microservices/community/packml-simulator"
click opcua-simulator "./microservices/community/opcua-simulator"
mqtt-simulator -- MQTT --> mqtt
packml-simulator -- MQTT --> mqtt
opcua-simulator -- OPC-UA --> nodered
end
end
subgraph Datasources
plc["PLCs"]
other["Other systems on the shopfloor (MES, ERP, etc.)"]
barcode["USB barcode reader"]
ifm["IO-link sensor"]
iot["IoT devices"]
plc -- "Siemens S7, OPC-UA, Modbus, etc." --> nodered
other -- " " ----> nodered
ifm -- HTTP --> sensorconnect
barcode -- USB --> barcodereader
iot <-- MQTT --> mqtt
%% at the end for styling purposes
nodered <-- MQTT --> mqtt
end
subgraph DataSinks["Data sinks"]
umh-other["Other UMH instances"]
other-systems["Other systems (cloud analytics, cold storage, BI tools, etc.)"]
kafka <-- Kafka --> kafka-bridge
kafka-bridge <-- Kafka ----> umh-other
factoryinsight -- HTTP ----> other-systems
end
```
Simulators
The United Manufacturing Hub includes several simulators to generate data during development and testing.
Microservices
iotsensorsmqtt simulates data in three different MQTT topics, providing a simple way to test and visualize MQTT data streams.
packml-simulator simulates a PackML machine which sends and receives MQTT messages.
opcua-simulator simulates an OPC-UA server, which can be used to test the connectivity of OPC-UA clients and to generate sample data for them.
Data connectivity microservices
The United Manufacturing Hub includes microservices that extract data from the shop floor and push it into the Unified Namespace. Additionally, you can deploy your own microservices or third-party solutions directly into the Kubernetes cluster using the custom microservice feature. To learn more about third-party solutions, check out our extensive tutorials on our learning hub.
Microservices
sensorconnect automatically reads out IO-Link Master and their connected sensors, and pushes the data to the message broker.
barcodereader connects to USB barcode reader devices and pushes the data to the message broker.
Node-RED is a versatile tool with many community plugins and allows access to machine PLCs or connections with other systems on the shopfloor. It plays an important role and is explained in the next section.
Node-RED: connectivity & contextualization
Node-RED is not just a tool for connectivity, but also for stream processing and data contextualization. It is often used to extract data from the message broker, reformat the event, and push it back into a different topic, such as the UMH datamodel.
In addition to the built-in microservices, third-party contextualization solutions can be deployed similarly to data connectivity microservices. For more information on these solutions, check out our extensive tutorials on our learning hub.
Microservices
Node-RED is a programming tool that can wire together hardware devices, APIs, and online services.
Unified Namespace
At the core of the United Manufacturing Hub lies the Unified Namespace, which serves as the central source of truth for all events and messages occurring on your shop floor. The Unified Namespace is implemented using two message brokers: HiveMQ for MQTT and Apache Kafka. MQTT is used to receive data from IoT devices on the shop floor because it excels at handling a large number of unreliable connections. On the other hand, Kafka is used to enable communication between the microservices, leveraging its large-scale data processing capabilities.
The data between both brokers is bridged automatically by the mqtt-kafka-bridge microservice, allowing you to send data to MQTT and process it reliably in Kafka.
For more information on the Unified Namespace feature and how to use it, check out the detailed description of the Unified Namespace feature.
Microservices
HiveMQ is an MQTT broker used for receiving data from IoT devices on the shop floor. It excels at handling large numbers of unreliable connections.
Apache Kafka is a distributed streaming platform used for communication between microservices. It offers large-scale data processing capabilities.
mqtt-kafka-bridge is a microservice that bridges messages between MQTT and Kafka, allowing you to send data to MQTT and process them reliably in Kafka.
kafka-bridge is a microservice that bridges messages between multiple Kafka instances.
console is a web-based user interface for Kafka, which provides a graphical view of topics and messages.
Historian / data storage and visualization
The United Manufacturing Hub stores events according to our datamodel. These events are automatically stored in TimescaleDB, an open-source time-series SQL database. From there, you can access the stored data using Grafana, a visualization and analytics software. With Grafana, you can perform on-the-fly data analysis by executing simple min, max, and avg on tags, or extended KPI calculations such as OEE. These calculations can be selected in the umh-datasource microservice.
For more information on the Historian or Analytics feature and how to use it, check out the detailed description of the Historian feature or the Analytics features.
Microservices
kafka-to-postgresql stores data in selected topics from the Kafka broker in a PostgreSQL compatible database such as TimescaleDB.
TimescaleDB is an open-source time-series SQL database.
factoryinsight provides REST endpoints to fetch data and calculate KPIs.
umh-datasource is a Grafana plugin providing access to factoryinsight.
redis is an in-memory data structure store, used as a cache.
Custom Microservices
The Helm Chart allows you to add your own microservices or Docker containers to the United Manufacturing Hub. These can be used, for example, to connect with third-party systems or to analyze the data. Additionally, you can deploy any other third-party application as long as it is available as a Helm Chart, Kubernetes resource, or Docker Compose (which can be converted to Kubernetes resources).
1 - Helm Chart
This page describes the Helm Chart of the United Manufacturing Hub and the
possible configuration options.
Helm is a package manager for Kubernetes that simplifies the
installation, configuration, and deployment of applications and services.
A Helm chart contains all the necessary Kubernetes manifests, configuration files,
and dependencies required to run a particular application or service. One of the
main advantages of Helm is that it lets you define the configuration of the
installed resources in a single YAML file, called values.yaml. Helm provides great documentation on how to achieve this at https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing
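For example, a minimal values.yaml override might look like the sketch below; the key shown is one of the documented options from the sections that follow, while the repository, chart, and release names in the install command are assumptions:

```yaml
# values.yaml - your overrides are merged on top of the chart defaults
_000_commonConfig:
  datainput:
    enabled: true  # documented key, see the Data input section below
```

You would then pass the file to Helm, for example with helm install united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub -f values.yaml.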
The Helm Chart of the United Manufacturing Hub is composed of both custom
microservices and third-party applications. If you want a more in-depth view of
the architecture of the United Manufacturing Hub, you can read the Architecture overview page.
Helm Chart structure
Custom microservices
The Helm Chart of the United Manufacturing Hub is composed of the following
custom microservices:
- barcodereader: reads the input from a barcode reader and sends it to the MQTT broker for further processing.
- customMicroservice: a template for deploying any number of custom microservices.
- factoryinput: provides REST endpoints for MQTT messages.
- factoryinsight: provides REST endpoints to fetch data and calculate KPIs.
- grafanaproxy: provides a proxy to the backend services.
- MQTT Simulator: simulates sensors and sends the data to the MQTT broker for further processing.
- kafka-bridge: connects Kafka brokers on different Kubernetes clusters.
- kafkatopostgresql: stores the data from the Kafka broker in a PostgreSQL database.
- TimescaleDB: an open-source time-series SQL database.
Configuration options
The Helm Chart of the United Manufacturing Hub can be configured by setting
values in the values.yaml file. This file has three main sections that can be
used to configure the applications:
- customers: contains the definition of the customers that will be created during the installation of the Helm Chart. This section is optional, and it's used only by factoryinsight and factoryinput.
- _000_commonConfig: contains the basic configuration options to customize the United Manufacturing Hub, and it's divided into sections that group applications with similar scope, like the ones that compose the infrastructure or the ones responsible for data processing. This is the section that should be mostly used to configure the microservices.
- _001_customMicroservices: used to define the configuration of custom microservices that are not included in the Helm Chart.
After those three sections, there are the specific sections for each microservice,
which contain their advanced configuration. This is the so-called Danger Zone,
because the values in those sections should not be changed, unless you absolutely
know what you are doing.
When a parameter contains . (dot) characters, it means that it is a nested
parameter. For example, in the tls.factoryinput.cert parameter the cert
parameter is nested inside the tls.factoryinput section, and the factoryinput
section is nested inside the tls section.
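For example, the tls.factoryinput.cert parameter corresponds to the following YAML structure (the certificate content is a placeholder):

```yaml
tls:
  factoryinput:
    cert: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
```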
Customers
The customers section contains the definition of the customers that will be
created during the installation of the Helm Chart. It’s a simple dictionary where
the key is the name of the customer, and the value is the password.
For example, the following snippet creates two customers:
```yaml
customers:
  customer1: password1
  customer2: password2
```
Common configuration options
The _000_commonConfig contains the basic configuration options to customize the
United Manufacturing Hub, and it’s divided into sections that group applications
with similar scope.
The following table lists the configuration options that can be set in the
_000_commonConfig section:
_000_commonConfig section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| datainput | The configuration of the microservices used to input data. | | | |
The _000_commonConfig.datasources section contains the configuration of the
microservices used to acquire data, like the ones that connect to a sensor or
simulate data.
The following table lists the configuration options that can be set in the
_000_commonConfig.datasources section:
datasources section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| barcodereader | The configuration of the barcodereader microservice. | | | |
The _000_commonConfig.dataprocessing.nodered section contains the configuration
of the nodered microservice.
The following table lists the configuration options that can be set in the
_000_commonConfig.dataprocessing.nodered section:
nodered section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the nodered microservice is enabled. | bool | true, false | true |
| defaultFlows | Whether the default flows should be used. | bool | true, false | false |
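Putting these parameters together, a values.yaml snippet that enables Node-RED without the default flows might look like this sketch:

```yaml
_000_commonConfig:
  dataprocessing:
    nodered:
      enabled: true
      defaultFlows: false
```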
Infrastructure
The _000_commonConfig.infrastructure section contains the configuration of the
microservices responsible for connecting all the other microservices, such as the
MQTT broker and the
Kafka broker.
The following table lists the configuration options that can be set in the
_000_commonConfig.infrastructure section:
infrastructure section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| | The private key of the certificate for the Kafka broker | string | Any | -----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY----- |
| tls.barcodereader.sslKeyPassword | The encrypted password of the SSL key for the barcodereader microservice. If empty, no password is used | string | Any | "" |
| tls.barcodereader.sslKeyPem | The private key for the SSL certificate of the barcodereader microservice | string | Any | -----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY----- |
| tls.barcodereader.sslCertificatePem | The private SSL certificate for the barcodereader microservice | string | Any | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- |
| tls.kafkabridge.sslKeyPasswordLocal | The encrypted password of the SSL key for the local kafkabridge broker. If empty, no password is used | string | Any | "" |
| tls.kafkabridge.sslKeyPemLocal | The private key for the SSL certificate of the local kafkabridge broker | string | Any | -----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY----- |
| tls.kafkabridge.sslCertificatePemLocal | The private SSL certificate for the local kafkabridge broker | string | Any | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- |
| tls.kafkabridge.sslCACertRemote | The CA certificate for the remote kafkabridge broker | string | Any | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- |
| tls.kafkabridge.sslCertificatePemRemote | The private SSL certificate for the remote kafkabridge broker | string | Any | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- |
| tls.kafkabridge.sslKeyPasswordRemote | The encrypted password of the SSL key for the remote kafkabridge broker. If empty, no password is used | string | Any | "" |
| tls.kafkabridge.sslKeyPemRemote | The private key for the SSL certificate of the remote kafkabridge broker | string | Any | -----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY----- |
| tls.kafkadebug.sslKeyPassword | The encrypted password of the SSL key for the kafkadebug microservice. If empty, no password is used | string | Any | "" |
| tls.kafkadebug.sslKeyPem | The private key for the SSL certificate of the kafkadebug microservice | string | Any | -----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY----- |
| tls.kafkadebug.sslCertificatePem | The private SSL certificate for the kafkadebug microservice | string | Any | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- |
| tls.kafkainit.sslKeyPassword | The encrypted password of the SSL key for the kafkainit microservice. If empty, no password is used | string | Any | "" |
| tls.kafkainit.sslKeyPem | The private key for the SSL certificate of the kafkainit microservice | string | Any | -----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY----- |
| tls.kafkainit.sslCertificatePem | The private SSL certificate for the kafkainit microservice | string | Any | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- |
| tls.kafkastatedetector.sslKeyPassword | The encrypted password of the SSL key for the kafkastatedetector microservice. If empty, no password is used | string | Any | "" |
| tls.kafkastatedetector.sslKeyPem | The private key for the SSL certificate of the kafkastatedetector microservice | string | Any | -----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY----- |
| tls.kafkastatedetector.sslCertificatePem | The private SSL certificate for the kafkastatedetector microservice | string | Any | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- |
| tls.kafkatopostgresql.sslKeyPassword | The encrypted password of the SSL key for the kafkatopostgresql microservice. If empty, no password is used | string | Any | "" |
| tls.kafkatopostgresql.sslKeyPem | The private key for the SSL certificate of the kafkatopostgresql microservice | string | Any | -----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY----- |
| tls.kafkatopostgresql.sslCertificatePem | The private SSL certificate for the kafkatopostgresql microservice | string | Any | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- |
| tls.kowl.sslKeyPassword | The encrypted password of the SSL key for the kowl microservice. If empty, no password is used | string | Any | "" |
| tls.kowl.sslKeyPem | The private key for the SSL certificate of the kowl microservice | string | Any | -----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY----- |
| tls.kowl.sslCertificatePem | The private SSL certificate for the kowl microservice | string | Any | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- |
| tls.mqttkafkabridge.sslKeyPassword | The encrypted password of the SSL key for the mqttkafkabridge microservice. If empty, no password is used | string | Any | "" |
| tls.mqttkafkabridge.sslKeyPem | The private key for the SSL certificate of the mqttkafkabridge microservice | string | Any | -----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY----- |
| tls.mqttkafkabridge.sslCertificatePem | The private SSL certificate for the mqttkafkabridge microservice | string | Any | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- |
| tls.nodered.sslKeyPassword | The encrypted password of the SSL key for the nodered microservice. If empty, no password is used | string | Any | "" |
| tls.nodered.sslKeyPem | The private key for the SSL certificate of the nodered microservice | string | Any | -----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY----- |
| tls.nodered.sslCertificatePem | The private SSL certificate for the nodered microservice | string | Any | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- |
| tls.sensorconnect.sslKeyPassword | The encrypted password of the SSL key for the sensorconnect microservice. If empty, no password is used | string | Any | "" |
| tls.sensorconnect.sslKeyPem | The private key for the SSL certificate of the sensorconnect microservice | string | Any | -----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY----- |
| tls.sensorconnect.sslCertificatePem | The private SSL certificate for the sensorconnect microservice | string | Any | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- |
Data storage
The _000_commonConfig.datastorage section contains the configuration of the
microservices used to store data. Specifically, it controls the following
microservices:
If you want to specifically configure one of these microservices, you can do so
in their respective sections in the Danger Zone.
The following table lists the configurable parameters of the
_000_commonConfig.datastorage section.
datastorage section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the data storage microservices | bool | true, false | true |
| db_password | The password for the database. Used by all the microservices that need to connect to the database | string | Any | changeme |
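For example, a sketch that keeps data storage enabled and replaces the default password (the value is a placeholder):

```yaml
_000_commonConfig:
  datastorage:
    enabled: true
    db_password: a-strong-password  # placeholder, choose your own
```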
Data input
The _000_commonConfig.datainput section contains the configuration of the
microservices used to input data. Specifically, it controls the following
microservices:
If you want to specifically configure one of these microservices, you can do so
in their respective sections in the danger zone.
The following table lists the configurable parameters of the
_000_commonConfig.datainput section.
datainput section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the data input microservices | bool | true, false | false |
MQTT Bridge
The _000_commonConfig.mqttBridge section contains the configuration of the
mqtt-bridge microservice,
responsible for bridging MQTT brokers in different Kubernetes clusters.
The following table lists the configurable parameters of the
_000_commonConfig.mqttBridge section.
mqttBridge section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the mqtt-bridge microservice | bool | true, false | false |
| localSubTopic | The topic that the local MQTT broker subscribes to | string | Any valid MQTT topic | ia/factoryinsight |
| localPubTopic | The topic that the local MQTT broker publishes to | string | Any valid MQTT topic | ia/factoryinsight |
| oneWay | Whether to enable one-way communication, from local to remote | bool | true, false | true |
| remoteSubTopic | The topic that the remote MQTT broker subscribes to | string | Any valid MQTT topic | ia |
| remotePubTopic | The topic that the remote MQTT broker publishes to | string | Any valid MQTT topic | ia/factoryinsight |
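Based on the parameters above, a values.yaml sketch for a one-way bridge could look like this:

```yaml
_000_commonConfig:
  mqttBridge:
    enabled: true
    localSubTopic: ia/factoryinsight
    localPubTopic: ia/factoryinsight
    oneWay: true
```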
Kafka Bridge
The _000_commonConfig.kafkaBridge section contains the configuration of the
kafka-bridge microservice,
responsible for bridging Kafka brokers in different Kubernetes clusters.
The following table lists the configurable parameters of the
_000_commonConfig.kafkaBridge section.
Kafka State Detector
The _000_commonConfig.kafkaStateDetector section contains the configuration
of the kafka-state-detector
microservice, responsible for detecting the state of the Kafka broker.
The following table lists the configurable parameters of the
_000_commonConfig.kafkaStateDetector section.
kafkastatedetector section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the kafka-state-detector microservice | bool | true, false | false |
Debug
The _000_commonConfig.debug section contains the debug configuration for all
the microservices. These values should not be enabled in production.
The following table lists the configurable parameters of the
_000_commonConfig.debug section.
debug section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enableFGTrace | Whether to enable the foreground trace | bool | true, false | false |
Tulip Connector
The _000_commonConfig.tulipconnector section contains the configuration of
the tulip-connector
microservice, responsible for connecting a Tulip instance with the United
Manufacturing Hub.
The following table lists the configurable parameters of the
_000_commonConfig.tulipconnector section.
tulipconnector section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the tulip-connector microservice | bool | true, false | false |
| domain | The domain name pointing to your cluster | string | Any valid domain name | tulip-connector.changme.com |
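For example, a sketch that enables the connector under your own domain (the domain is a placeholder):

```yaml
_000_commonConfig:
  tulipconnector:
    enabled: true
    domain: tulip-connector.example.com  # placeholder domain
```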
Custom microservices configuration
The _001_customConfig section contains a list of custom microservices
definitions. It can be used to deploy any application of your choice, which can
be configured using the following parameters:
Custom microservices configuration parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| name | The name of the microservice | string | Any | example |
| image | The image and tag of the microservice | string | Any | hello-world:latest |
| enabled | Whether to enable the microservice | bool | true, false | false |
| imagePullPolicy | The image pull policy of the microservice | string | Always, IfNotPresent, Never | Always |
| env | The list of environment variables to set for the microservice | object | Any | [{name: LOGGING_LEVEL, value: PRODUCTION}] |
| port | The internal port of the microservice to target | int | Any | 80 |
| externalPort | The host port on which to expose the internal port | int | Any | 8080 |
| probePort | The port to use for the liveness and startup probes | int | Any | 9091 |
| startupProbe | The interval in seconds for the startup probe | int | Any | 200 |
| livenessProbe | The interval in seconds for the liveness probe | int | Any | 500 |
| statefulEnabled | Create a PersistentVolumeClaim for the microservice and mount it in /data | bool | true, false | false |
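A sketch of a single entry using these parameters; treating _001_customConfig as a list of such objects is an assumption based on this section's description:

```yaml
_001_customConfig:
  - name: my-connector  # hypothetical microservice
    image: hello-world:latest
    enabled: true
    imagePullPolicy: IfNotPresent
    env:
      - name: LOGGING_LEVEL
        value: PRODUCTION
    port: 80
    externalPort: 8080
    probePort: 9091
    statefulEnabled: false
```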
Danger zone
The next sections contain a more advanced configuration of the microservices.
Usually, changing the values of the previous sections is enough to run the
United Manufacturing Hub. However, you may need to adjust some of the values
below if you want to change the default behavior of the microservices.
Everything below this point should not be changed, unless you know what you are doing.
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| initChownData.enabled | Whether to enable the initChownData job, to reset data ownership at startup | bool | true, false | true |
| persistence.enabled | Whether to enable persistence | bool | true, false | true |
| persistence.size | The size of the persistent volume | string | Any | 5Gi |
| podDisruptionBudget.minAvailable | The minimum number of available pods | int | Any | 1 |
| service.port | The port of the Service | int | Any | 8080 |
| service.type | The type of Service to expose | string | ClusterIP, LoadBalancer | LoadBalancer |
| serviceAccount.create | Whether to create a ServiceAccount | bool | true, false | false |
| testFramework.enabled | Whether to enable the test framework | bool | true, false | false |
datasources
The datasources section contains the configuration of the datasources
provisioning. See the
Grafana documentation
for more information.
```yaml
datasources.yaml:
  apiVersion: 1
  datasources:
    - name: umh-v2-datasource
      # <string, required> datasource type. Required
      type: umh-v2-datasource
      # <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
      access: proxy
      # <int> org id. will default to orgId 1 if not specified
      orgId: 1
      url: "http://united-manufacturing-hub-factoryinsight-service/"
      jsonData:
        customerID: $FACTORYINSIGHT_CUSTOMERID
        apiKey: $FACTORYINSIGHT_PASSWORD
        baseURL: "http://united-manufacturing-hub-factoryinsight-service/"
        apiKeyConfigured: true
      version: 1
      # <bool> allow users to edit datasources from the UI.
      isDefault: false
      editable: false
    # <string, required> name of the datasource. Required
    - name: umh-datasource
      # <string, required> datasource type. Required
      type: umh-datasource
      # <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
      access: proxy
      # <int> org id. will default to orgId 1 if not specified
      orgId: 1
      url: "http://united-manufacturing-hub-factoryinsight-service/"
      jsonData:
        customerId: $FACTORYINSIGHT_CUSTOMERID
        apiKey: $FACTORYINSIGHT_PASSWORD
        serverURL: "http://united-manufacturing-hub-factoryinsight-service/"
        apiKeyConfigured: true
      version: 1
      # <bool> allow users to edit datasources from the UI.
      isDefault: true
      editable: false
```
envValueFrom
The envValueFrom section contains the configuration of the environment
variables to add to the Pod, from a secret or a configmap.
grafana envValueFrom section parameters
| Parameter | Description | Value from | Name | Key |
| --- | --- | --- | --- | --- |
| FACTORYINSIGHT_APIKEY | The API key to use to authenticate to the Factoryinsight API | secretKeyRef | factoryinsight-secret | apiKey |
| FACTORYINSIGHT_BASEURL | The base URL of the Factoryinsight API | secretKeyRef | factoryinsight-secret | baseURL |
| FACTORYINSIGHT_CUSTOMERID | The customer ID to use to authenticate to the Factoryinsight API | secretKeyRef | factoryinsight-secret | customerID |
| FACTORYINSIGHT_PASSWORD | The password to use to authenticate to the Factoryinsight API | secretKeyRef | factoryinsight-secret | password |
env
The env section contains the configuration of the environment variables to add
to the Pod.
grafana env section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS | List of plugin identifiers to allow loading even if they lack a valid signature | | | |
extraInitContainers
The extraInitContainers section contains the configuration of the extra init
containers to add to the Pod.
The init-plugins container is used to install the default plugins shipped with
the UMH version of Grafana without the need to have an internet connection.
See the documentation
for a list of the plugins.
The initContainer section contains the configuration for the init containers.
By default, the hivemqextensioninit container is used to initialize the HiveMQ
extensions.
This section gives an overview of the microservices that can be found in the
United Manufacturing Hub.
There are several microservices that are part of the United Manufacturing Hub.
Some of them compose the core of the platform, and are mainly developed by the
UMH team, with the addition of some third-party software. Others are maintained
by the community, and are used to extend the functionality of the platform.
2.1 - Core
This section contains the overview of the Core components of the United
Manufacturing Hub.
The microservices in this section are part of the Core of the United Manufacturing
Hub. They are mainly developed by the UMH team, with the addition of some
third-party software. They are used to provide the core functionality of the
platform.
2.1.1 - Cache
The technical documentation of the redis microservice,
which is used as a cache for the other microservices.
The cache in the United Manufacturing Hub is Redis, a
key-value store that is used as a cache for the other microservices.
How it works
Recently used data is stored in the cache to reduce the load on the database.
All the microservices that need to access the database will first check if the
data is available in the cache. If it is, it will be used, otherwise the
microservice will query the database and store the result in the cache.
By default, Redis is configured to run in standalone mode, which means that it
will only have one master node.
You shouldn’t need to configure the cache manually, as it’s configured
automatically when the cluster is deployed. However, if you need to change the
configuration, you can do it by editing the redis section of the Helm
chart values file.
You can consult the Bitnami Redis chart
for more information about the available configuration options.
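As a sketch, a small override of the redis section could look like this; the keys come from the Bitnami Redis chart and should be verified against its documentation:

```yaml
redis:
  architecture: standalone  # the standalone mode described above
  master:
    persistence:
      size: 8Gi  # example size, adjust to your needs
```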
Environment variables
| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| ALLOW_EMPTY_PASSWORD | Allow empty password | bool | true, false | false |
| BITNAMI_DEBUG | Specify if debug values should be set | bool | true, false | false |
| REDIS_PASSWORD | Redis password | string | Any | Random UUID |
| REDIS_PORT | Redis port number | int | Any | 6379 |
| REDIS_REPLICATION_MODE | Redis replication mode | string | master, slave | master |
| REDIS_TLS_ENABLED | Enable TLS | bool | true, false | false |
2.1.2 - Database
The technical documentation of the database microservice,
which stores the data of the application.
The database microservice is the central component of the United Manufacturing
Hub and is based on TimescaleDB, an open-source relational database built for
handling time-series data. TimescaleDB is designed to provide scalable and
efficient storage, processing, and analysis of time-series data.
You can find more information on the datamodel of the database in the
Data Model section, and read
about the choice to use TimescaleDB in the
blog article.
How it works
When deployed, the database microservice will create two databases, with the
related usernames and passwords:
- grafana: This database is used by Grafana to store the dashboards and other data.
- factoryinsight: This database is the main database of the United Manufacturing Hub. It contains all the data that is collected by the microservices.
There is only one parameter that usually needs to be changed: the password used
to connect to the database. To do so, set the value of the db_password key in
the _000_commonConfig.datastorage
section of the Helm chart values file.
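For example (the value is a placeholder):

```yaml
_000_commonConfig:
  datastorage:
    db_password: a-strong-password  # placeholder, choose your own
```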
Environment variables
| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| BOOTSTRAP_FROM_BACKUP | Whether to bootstrap the database from a backup or not. | int | 0, 1 | 0 |
| PATRONI_KUBERNETES_LABELS | The labels to use to find the pods of the StatefulSet. | | | |
| PATRONI_KUBERNETES_NAMESPACE | The namespace in which the StatefulSet is deployed. | string | Any | united-manufacturing-hub |
| PATRONI_KUBERNETES_POD_IP | The IP address of the pod. | string | Any | Random IP |
| PATRONI_KUBERNETES_PORTS | The ports to use to connect to the pods. | string | Any | [{"name": "postgresql", "port": 5432}] |
| PATRONI_NAME | The name of the pod. | string | Any | united-manufacturing-hub-timescaledb-0 |
| PATRONI_POSTGRESQL_CONNECT_ADDRESS | The address to use to connect to the database. | string | Any | $(PATRONI_KUBERNETES_POD_IP):5432 |
| PATRONI_POSTGRESQL_DATA_DIR | The directory where the database data is stored. | string | Any | /var/lib/postgresql/data |
| PATRONI_REPLICATION_PASSWORD | The password to use to connect to the database as a replica. | string | Any | Random 16 characters |
| PATRONI_REPLICATION_USERNAME | The username to use to connect to the database as a replica. | string | Any | standby |
| PATRONI_RESTAPI_CONNECT_ADDRESS | The address to use to connect to the REST API. | string | Any | $(PATRONI_KUBERNETES_POD_IP):8008 |
| PATRONI_SCOPE | The name of the cluster. | string | Any | united-manufacturing-hub |
| PATRONI_SUPERUSER_PASSWORD | The password to use to connect to the database as the superuser. | string | Any | Random 16 characters |
| PATRONI_admin_OPTIONS | The options to use for the admin user. | string | Comma separated list of options | createrole,createdb |
| PATRONI_admin_PASSWORD | The password to use to connect to the database as the admin user. | string | Any | Random 16 characters |
| PGBACKREST_CONFIG | The path to the configuration file for pgBackRest. | string | Any | /etc/pgbackrest/pgbackrest.conf |
| PGDATA | The directory where the database data is stored. | string | Any | $(PATRONI_POSTGRESQL_DATA_DIR) |
| PGHOST | The directory of the running database | string | Any | /var/run/postgresql |
2.1.3 - Factoryinsight
The technical documentation of the Factoryinsight microservice, which exposes
a set of APIs to access the data from the database.
Factoryinsight is a microservice that provides a set of REST APIs to access the
data from the database. It is particularly useful to calculate the Key
Performance Indicators (KPIs) of the factories.
How it works
Factoryinsight exposes REST APIs to access the data from the database or calculate
the KPIs. By default, it’s only accessible from the internal network of the
cluster, but it can be configured to be
accessible from the external network.
The APIs require authentication, which can be either Basic Auth or a Bearer
token. Both of these can be found in the Secret factoryinsight-secret.
You shouldn’t need to configure Factoryinsight manually, as it’s configured
automatically when the cluster is deployed. However, if you need to change the
configuration, you can do it by editing the factoryinsight section of the Helm
chart values file.
Environment variables
| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| CUSTOMER_NAME_{NUMBER} | Specifies a user for the REST API. Multiple users can be set | string | Any | "" |
| CUSTOMER_PASSWORD_{NUMBER} | Specifies the password of the user for the REST API | string | Any | "" |
| DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false |
| DRY_RUN | If enabled, data won't be stored in the database | bool | true, false | false |
| FACTORYINSIGHT_PASSWORD | Specifies the password for the admin user for the REST API | string | Any | Random UUID |
| FACTORYINSIGHT_USER | Specifies the admin user for the REST API | string | Any | factoryinsight |
| INSECURE_NO_AUTH | If enabled, no authentication is required for the REST API. Not recommended for production | bool | true, false | false |
| LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
| MICROSERVICE_NAME | Name of the microservice. Used for tracing | string | Any | united-manufacturing-hub-factoryinsight |
| POSTGRES_DATABASE | Specifies the database name to use | string | Any | factoryinsight |
| POSTGRES_HOST | Specifies the database DNS name or IP address | string | Any | united-manufacturing-hub |
| POSTGRES_PASSWORD | Specifies the database password to use | string | Any | changeme |
| POSTGRES_PORT | Specifies the database port | int | Valid port number | 5432 |
| POSTGRES_USER | Specifies the database user to use | string | Any | factoryinsight |
| REDIS_PASSWORD | Password to access the redis sentinel | string | Any | Random UUID |
| REDIS_URI | The URI of the Redis instance | string | Any | united-manufacturing-hub-redis-headless:6379 |
| SERIAL_NUMBER | Serial number of the cluster. Used for tracing | string | Any | default |
| VERSION | The version of the API used. Each version also enables all the previous ones | int | Any | 2 |
2.1.4 - Grafana
The technical documentation of the grafana microservice,
which is a web application that provides visualization and analytics capabilities.
The grafana microservice is a web application that provides visualization and
analytics capabilities. Grafana allows you to query, visualize, alert on and
understand your metrics no matter where they are stored.
It has a rich ecosystem of plugins that allow you to extend its functionality
beyond the core features.
How it works
Grafana is a web application that can be accessed through a web browser. It
lets you create dashboards that can be used to visualize data from the database.
Thanks to some custom datasource plugins,
Grafana can use the various APIs of the United Manufacturing Hub to query the
database and display useful information.
Kubernetes resources
Deployment: united-manufacturing-hub-grafana
Service:
External LoadBalancer: united-manufacturing-hub-grafana at
port 8080
The technical documentation of the kafka-bridge microservice,
which acts as a communication bridge between two Kafka brokers.
Kafka-bridge is a microservice that connects two Kafka brokers and forwards
messages between them. It is used to connect the local broker of the edge computer
with the remote broker on the server.
How it works
This microservice has two modes of operation:
- High Integrity: This mode is used for topics that are critical for the user. It is guaranteed that no messages are lost. This is achieved by committing the message only after it has been successfully forwarded to the other broker. Usually all the topics are forwarded in this mode, except for processValue, processValueString and raw messages.
- High Throughput: This mode is used for topics that are not critical for the user. They are forwarded as fast as possible, but it is possible that messages are lost, for example if the receiving broker struggles to keep up. Usually only the processValue, processValueString and raw messages are forwarded in this mode.
Kubernetes resources
Deployment: united-manufacturing-hub-kafkabridge
Secret:
Local broker: united-manufacturing-hub-kafkabridge-secrets-local
You can configure the kafka-bridge microservice by setting the following values
in the _000_commonConfig.kafkaBridge
section of the Helm chart values file.
The topic map is a list of objects, each object represents a topic (or a set of
topics) that should be forwarded. The following JSON schema describes the
structure of the topic map:
{
"$schema": "http://json-schema.org/draft-07/schema",
"type": "array",
"title": "Kafka Topic Map",
"description": "This schema validates valid Kafka topic maps.",
"default": [],
"additionalItems": true,
"items": {
"$id": "#/items",
"anyOf": [
{
"$id": "#/items/anyOf/0",
"type": "object",
"title": "Unidirectional Kafka Topic Map with send direction",
"description": "This schema validates entries, that are unidirectional and have a send direction.",
"default": {},
"examples": [
{
"name": "HighIntegrity",
"topic": "^ia\\..+\\..+\\..+\\.(?!processValue).+$",
"bidirectional": false,
"send_direction": "to_remote" }
],
"required": [
"name",
"topic",
"bidirectional",
"send_direction" ],
"properties": {
"name": {
"$id": "#/items/anyOf/0/properties/name",
"type": "string",
"title": "Entry Name",
"description": "Name of the map entry, only used for logging & tracing.",
"default": "",
"examples": [
"HighIntegrity" ]
},
"topic": {
"$id": "#/items/anyOf/0/properties/topic",
"type": "string",
"title": "The topic to listen on",
"description": "The topic to listen on, this can be a regular expression.",
"default": "",
"examples": [
"^ia\\..+\\..+\\..+\\.(?!processValue).+$" ]
},
"bidirectional": {
"$id": "#/items/anyOf/0/properties/bidirectional",
"type": "boolean",
"title": "Is the transfer bidirectional?",
"description": "When set to true, the bridge will consume and produce from both brokers",
"default": false,
"examples": [
false ]
},
"send_direction": {
"$id": "#/items/anyOf/0/properties/send_direction",
"type": "string",
"title": "Send direction",
"description": "Can be either 'to_remote' or 'to_local'",
"default": "",
"examples": [
"to_remote",
"to_local" ]
}
},
"additionalProperties": true },
{
"$id": "#/items/anyOf/1",
"type": "object",
"title": "Bi-directional Kafka Topic Map with send direction",
"description": "This schema validates entries, that are bi-directional.",
"default": {},
"examples": [
{
"name": "HighIntegrity",
"topic": "^ia\\..+\\..+\\..+\\.(?!processValue).+$",
"bidirectional": true }
],
"required": [
"name",
"topic",
"bidirectional" ],
"properties": {
"name": {
"$id": "#/items/anyOf/1/properties/name",
"type": "string",
"title": "Entry Name",
"description": "Name of the map entry, only used for logging & tracing.",
"default": "",
"examples": [
"HighIntegrity" ]
},
"topic": {
"$id": "#/items/anyOf/1/properties/topic",
"type": "string",
"title": "The topic to listen on",
"description": "The topic to listen on, this can be a regular expression.",
"default": "",
"examples": [
"^ia\\..+\\..+\\..+\\.(?!processValue).+$" ]
},
"bidirectional": {
"$id": "#/items/anyOf/1/properties/bidirectional",
"type": "boolean",
"title": "Is the transfer bidirectional?",
"description": "When set to true, the bridge will consume and produce from both brokers",
"default": false,
"examples": [
true ]
}
},
"additionalProperties": true }
]
},
"examples": [
{
"name":"HighIntegrity",
"topic":"^ia\\..+\\..+\\..+\\.(?!processValue).+$",
"bidirectional":true },
{
"name":"HighThroughput",
"topic":"^ia\\..+\\..+\\..+\\.(processValue).*$",
"bidirectional":false,
"send_direction":"to_remote" }
]
}
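Embedded in the Helm values, a single high-integrity entry might look like the sketch below; the key names remotebootstrapServer and topicmap are assumptions made for illustration, so check the chart's values.yaml for the authoritative names:

```yaml
_000_commonConfig:
  kafkaBridge:
    enabled: true
    remotebootstrapServer: ""  # assumed key; URL of the remote Kafka broker
    topicmap:  # assumed key; entries follow the JSON schema above
      - name: HighIntegrity
        topic: '^ia\..+\..+\..+\.(?!processValue).+$'
        bidirectional: true
```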
Environment variables
| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false |
| KAFKA_GROUP_ID_SUFFIX | Identifier appended to the Kafka group ID, usually a serial number | string | Any | default |
| KAFKA_SSL_KEY_PASSWORD_LOCAL | Password for the SSL key of the local broker | string | Any | "" |
| KAFKA_SSL_KEY_PASSWORD_REMOTE | Password for the SSL key of the remote broker | string | Any | "" |
| KAFKA_TOPIC_MAP | A JSON map of the Kafka topics that should be forwarded | | | |
| LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers. | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
| MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-kafka-bridge |
| REMOTE_KAFKA_BOOTSTRAP_SERVER | URL of the remote Kafka broker | string | Any valid URL | "" |
| SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default |
2.1.6 - Kafka Broker
The technical documentation of the kafka-broker microservice,
which handles the communication between the microservices.
The Kafka broker in the United Manufacturing Hub is RedPanda,
a Kafka-compatible event streaming platform. It’s used to store and process
messages, in order to stream real-time data between the microservices.
How it works
RedPanda is a distributed system that is made up of a cluster of brokers,
designed for maximum performance and reliability. It does not depend on external
systems like ZooKeeper, as it’s shipped as a single binary.
External NodePort: united-manufacturing-hub-kafka-external at
port 9094 for the Kafka API listener, port 9644 for the Admin API listener,
port 8083 for the HTTP Proxy listener, and port 8081 for the Schema Registry
listener.
You shouldn’t need to configure the Kafka broker manually, as it’s configured
automatically when the cluster is deployed. However, if you need to change the
configuration, you can do it by editing the redpanda
section of the Helm chart values file.
Environment variables
| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| HOST_IP | The IP address of the host machine. | string | Any | Random IP |
| POD_IP | The IP address of the pod. | string | Any | Random IP |
| SERVICE_NAME | The name of the service. | string | Any | united-manufacturing-hub-kafka |
2.1.7 - Kafka Console
The technical documentation of the kafka-console microservice,
which provides a GUI to interact with the Kafka broker.
Kafka-console uses Redpanda Console
to help you manage and debug your Kafka workloads effortlessly.
With it, you can explore your Kafka topics, view messages, list the active
consumers, and more.
How it works
You can access the Kafka console via its Service.
It’s automatically connected to the Kafka broker, so you can start using it
right away.
You can view the Kafka broker configuration in the Broker tab, and explore the
topics in the Topics tab.
Kubernetes resources
Deployment: united-manufacturing-hub-console
Service:
External LoadBalancer: united-manufacturing-hub-console at
port 8090
ConfigMap: united-manufacturing-hub-console
Secret: united-manufacturing-hub-console
Configuration
Environment variables
| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| LOGIN_JWTSECRET | The secret used to authenticate the communication to the backend. | string | Any | Random string |
2.1.8 - Kafka to Postgresql
The technical documentation of the kafka-to-postgresql microservice,
which consumes messages from a Kafka broker and writes them in a PostgreSQL database.
Kafka-to-postgresql is a microservice responsible for consuming kafka messages
and inserting the payload into a Postgresql database. Take a look at the
Datamodel to see how the data is structured.
This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists. This will happen automatically from version 0.9.12.
How it works
By default, kafka-to-postgresql sets up two Kafka consumers, one for the
High Integrity topics and one for the
High Throughput topics.
The graphic below shows the program flow of the microservice.
High integrity
The High integrity topics are forwarded to the database in a synchronous way.
This means that the microservice will wait for the database to respond with a
non error message before committing the message to the Kafka broker.
This way, the message is guaranteed to be inserted into the database, even though
it might take a while.
Most of the topics are forwarded in this mode.
The picture below shows the program flow of the high integrity mode.
High throughput
The High throughput topics are forwarded to the database in an asynchronous way.
This means that the microservice will not wait for the database to respond with
a non error message before committing the message to the Kafka broker.
This way, the message is not guaranteed to be inserted into the database, but
the microservice will try to insert the message into the database as soon as
possible. This mode is used for the topics that are expected to have a high
throughput.
You shouldn’t need to configure kafka-to-postgresql manually, as it’s configured
automatically when the cluster is deployed. However, if you need to change the
configuration, you can do it by editing the kafkatopostgresql section of the Helm
chart values file.
Environment variables
| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false |
| DRY_RUN | If set to true, the microservice will not write to the database | bool | true, false | false |
| KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092 |
| KAFKA_SSL_KEY_PASSWORD | Key password to decode the SSL private key | string | Any | "" |
| LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
| MEMORY_REQUEST | Memory request for the message cache | string | Any | 50Mi |
| MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-kafkatopostgresql |
| POSTGRES_DATABASE | The name of the PostgreSQL database | string | Any | factoryinsight |
| POSTGRES_HOST | Hostname of the PostgreSQL database | string | Any | united-manufacturing-hub |
| POSTGRES_PASSWORD | The password to use for PostgreSQL connections | string | Any | changeme |
| POSTGRES_SSLMODE | The sslmode to use for the PostgreSQL connection | string | Any | require |
| POSTGRES_USER | The username to use for PostgreSQL connections | string | Any | factoryinsight |
2.1.9 - MQTT Bridge
The technical documentation of the mqtt-bridge microservice,
which acts as a communication bridge between two MQTT brokers.
MQTT-bridge is a microservice that connects two MQTT brokers and forwards
messages between them. It is used to connect the local broker of the edge computer
with the remote broker on the server.
How it works
This microservice subscribes to topics on the local broker and publishes the
messages to the remote broker, while also subscribing to topics on the remote
broker and publishing the messages to the local broker.
You can configure the URL of the remote MQTT broker that MQTT-bridge should
connect to by setting the value of the remoteBrokerUrl parameter in the
_000_commonConfig.mqttBridge
section of the Helm chart values file.
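For example (the URL is a placeholder; the ssl:// scheme mirrors the LOCAL_BROKER_URL default below):

```yaml
_000_commonConfig:
  mqttBridge:
    remoteBrokerUrl: ssl://remote-broker.example.com:8883  # placeholder
```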
Environment variables
| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| BRIDGE_ONE_WAY | Whether to enable one-way communication, from local to remote | bool | true, false | true |
| INSECURE_SKIP_VERIFY_LOCAL | Skip TLS certificate verification for the local broker | bool | true, false | true |
| INSECURE_SKIP_VERIFY_REMOTE | Skip TLS certificate verification for the remote broker | bool | true, false | true |
| LOCAL_BROKER_SSL_ENABLED | Whether to enable SSL for the local MQTT broker | bool | true, false | true |
| LOCAL_BROKER_URL | URL for the local MQTT broker | string | Any | ssl://united-manufacturing-hub-mqtt:8883 |
| LOCAL_CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS |
| REMOTE_PUB_TOPIC | Remote MQTT topic to publish to | string | Any | ia/factoryinsight |
| REMOTE_SUB_TOPIC | Remote MQTT topic to subscribe to | string | Any | ia |
2.1.10 - MQTT Broker
The technical documentation of the mqtt-broker microservice,
which forwards MQTT messages between the other microservices.
The MQTT broker in the United Manufacturing Hub is HiveMQ
and is customized to fit the needs of the stack. It’s a core component of
the stack and is used to communicate between the different microservices.
How it works
The MQTT broker is responsible for receiving MQTT messages from the
different microservices and forwarding them to the
MQTT Kafka bridge.
Kubernetes resources
StatefulSet: united-manufacturing-hub-hivemqce
Service:
Internal ClusterIP:
HiveMQ local: united-manufacturing-hub-hivemq-local-service at
port 1883 (MQTT) and 8883 (MQTT over TLS)
VerneMQ (for backwards compatibility): united-manufacturing-hub-vernemq at
port 1883 (MQTT) and 8883 (MQTT over TLS)
VerneMQ local (for backwards compatibility): united-manufacturing-hub-vernemq-local-service at
port 1883 (MQTT) and 8883 (MQTT over TLS)
External LoadBalancer: united-manufacturing-hub-mqtt at
port 1883 (MQTT) and 8883 (MQTT over TLS)
If you want to add more extensions, or to change the configuration, visit
the HiveMQ documentation.
Environment variables
| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| HIVEMQ_ALLOW_ALL_CLIENTS | Whether to allow all clients to connect to the broker | bool | true, false | true |
2.1.11 - MQTT Kafka Bridge
The technical documentation of the mqtt-kafka-bridge microservice,
which transfers messages from MQTT brokers to Kafka Brokers and vice versa.
Mqtt-kafka-bridge is a microservice that acts as a bridge between MQTT brokers
and Kafka brokers, transferring messages from one to the other and vice versa.
This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists.
This will happen automatically from version 0.9.12.
Since version 0.9.10, it allows all raw messages, even if their content is not
in a valid JSON format.
How it works
Mqtt-kafka-bridge consumes topics from a message broker, translates them to
the proper format and publishes them to the other message broker.
You shouldn’t need to configure mqtt-kafka-bridge manually, as it’s configured
automatically when the cluster is deployed. However, if you need to change the
configuration, you can do it by editing the mqttkafkabridge section of the Helm
chart values file.
Environment variables
| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false |
| INSECURE_SKIP_VERIFY | Skip TLS certificate verification | bool | true, false | true |
| KAFKA_BASE_TOPIC | The Kafka base topic | string | Any | ia |
| KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092 |
| KAFKA_LISTEN_TOPIC | Kafka topic to subscribe to. Accepts regex values | string | Any | ^ia.+ |
| KAFKA_SENDER_THREADS | Number of threads used to send messages to Kafka | int | Any | 1 |
| LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
| MESSAGE_LRU_SIZE | Size of the LRU cache used to store messages. This is used to prevent duplicate messages from being sent to Kafka. | int | Any | 100000 |
| MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-mqttkafkabridge |
| MQTT_BROKER_URL | The MQTT broker URL | string | Any | united-manufacturing-hub-mqtt:1883 |
| MQTT_CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS |
| | Size of the LRU cache used to store raw messages. This is used to prevent duplicate messages from being sent to Kafka. | int | Any | 100000 |
| SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default |
2.1.12 - Node-RED
The technical documentation of the nodered microservice,
which wires together hardware devices, APIs and online services.
Node-RED is a programming tool for wiring together
hardware devices, APIs and online services in new and interesting ways. It
provides a browser-based editor that makes it easy to wire together flows using
the wide range of nodes in the Node-RED library.
How it works
Node-RED is a JavaScript-based tool that can be used to create flows that
interact with the other microservices in the United Manufacturing Hub or
external services.
You can enable the nodered microservice and decide if you want to use the
default flows in the _000_commonConfig.dataprocessing.nodered
section of the Helm chart values.
All the other values are set by default and you can find them in the
Danger Zone section of the Helm chart values.
Environment variables
| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| NODE_RED_ENABLE_SAFE_MODE | Enable safe mode, useful in case of broken flows | boolean | true, false | false |
| TZ | The timezone used by Node-RED | string | Any | Europe/Berlin |
2.1.13 - Sensorconnect
The technical documentation of the sensorconnect microservice,
which reads data from sensors and sends them to the MQTT or Kafka broker.
Sensorconnect automatically detects ifm gateways
connected to the network and reads data from the connected IO-Link
sensors.
How it works
Sensorconnect continuously scans the given IP range for gateways, making it
effectively a plug-and-play solution. Once a gateway is found, it automatically
downloads the IODD files for the connected sensors and starts reading the data at
the configured interval. Then it processes the data and sends it to the MQTT or
Kafka broker, to be consumed by other microservices.
If you want to learn more about how to use sensors in your assets, check out the
retrofitting section of the UMH Learn
website.
IODD files
The IODD files are used to describe the sensors connected to the gateway. They
contain information about the data type, the unit of measurement, the minimum and
maximum values, etc. The IODD files are downloaded automatically from
IODDFinder once a sensor is found, and are
stored in a Persistent Volume. If downloading from the internet is not possible,
for example in a closed network, you can download the IODD files manually and
store them in the folder specified by the IODD_FILE_PATH environment variable.
If no IODD file is found for a sensor, the data will not be processed, but sent
to the broker as-is.
You can configure the IP range to scan for gateways, and which message broker to
use, by setting the values of the parameters in the
_000_commonConfig.datasources.sensorconnect
section of the Helm chart values file.
The default values of the other parameters are usually good for most use cases,
but you can change them in the Danger Zone section of the Helm chart values file.
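As an illustration, a minimal sketch of that section could look as follows (the key names are assumptions based on the descriptions above; verify them against your chart version):

_000_commonConfig:
  datasources:
    sensorconnect:
      enabled: true              # deploy sensorconnect
      iprange: 192.168.10.1/24   # IP range to scan for ifm gateways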
Environment variables

| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| ADDITIONAL_SLEEP_TIME_PER_ACTIVE_PORT_MS | Additional sleep time between pollings for each active port | float | Any | 0.0 |
| ADDITIONAL_SLOWDOWN_MAP | JSON map of values that allows to slow down and speed up the polling time of specific sensors | string | Any | [] |
| DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false |
| DEVICE_FINDER_TIMEOUT_SEC | HTTP timeout in seconds for finding new devices | int | Any | 1 |
| DEVICE_FINDER_TIME_SEC | Time interval in seconds for finding new devices | int | Any | 20 |
| IODD_FILE_PATH | Filesystem path where to store IODD files | string | Any valid Unix path | /ioddfiles |
| IP_RANGE | The IP range to scan for new sensors | string | Any valid IP in CIDR notation | 192.168.10.1/24 |
| KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker. Port is required | string | Any | united-manufacturing-hub-kafka:9092 |
| KAFKA_SSL_KEY_PASSWORD | The encrypted password of the SSL key. If empty, no password is used | string | Any | "" |
| KAFKA_USE_SSL | Set to true to use SSL encryption for the connection to the Kafka broker | string | true, false | false |
| LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
| LOWER_POLLING_TIME_MS | Time in milliseconds to define the lower bound of time between sensor polling | int | Any | 20 |
| MAX_SENSOR_ERROR_COUNT | Amount of errors before a sensor is temporarily disabled | int | Any | 50 |
| MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-sensorconnect |
| MQTT_BROKER_URL | URL of the MQTT broker. Port is required | string | Any | united-manufacturing-hub-mqtt:1883 |
| MQTT_CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS |
| MQTT_PASSWORD | Password for the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
| POD_NAME | Name of the pod (used for tracing) | string | Any | united-manufacturing-hub-sensorconnect-0 |
| POLLING_SPEED_STEP_DOWN_MS | Time in milliseconds subtracted from the polling interval after a successful polling | int | Any | 1 |
| POLLING_SPEED_STEP_UP_MS | Time in milliseconds added to the polling interval after a failed polling | int | Any | 20 |
| SENSOR_INITIAL_POLLING_TIME_MS | Amount of time in milliseconds before starting to request sensor data. Must be higher than LOWER_POLLING_TIME_MS | int | Any | 100 |
| SUB_TWENTY_MS | Set to 1 to allow LOWER_POLLING_TIME_MS of under 20 ms. This is not recommended, as it might lead to the gateway becoming unresponsive until a manual reboot | int | 0, 1 | 0 |
| TEST | If enabled, the microservice will use a test IODD file from the filesystem with a mocked sensor. Only useful for development | string | true, false | false |
| TRANSMITTERID | Serial number of the cluster (used for tracing) | string | Any | default |
| UPPER_POLLING_TIME_MS | Time in milliseconds to define the upper bound of time between sensor polling | int | Any | 1000 |
| USE_KAFKA | If enabled, uses Kafka as a message broker | string | true, false | true |
| USE_MQTT | If enabled, uses MQTT as a message broker | string | true, false | false |
Slowdown map
The ADDITIONAL_SLOWDOWN_MAP environment variable allows you to slow down and
speed up the polling time of specific sensors. It is a JSON array of entries with
the following structure:
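For illustration, a hypothetical map could look like this (entries can match a sensor by serial number, URL, or product code, and a negative slowdown_ms speeds polling up; the exact field names may differ between versions):

[
  { "serialnumber": "000200610104", "slowdown_ms": -10 },
  { "url": "http://192.168.0.13", "slowdown_ms": 20 },
  { "productcode": "AL13500", "slowdown_ms": 20.01 }
]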
This section contains the overview of the community-supported components of
the United Manufacturing Hub used to extend the functionality of the platform.
The microservices in this section are not part of the Core of the United
Manufacturing Hub, either because they are still in development, are deprecated, or
are only supported by the community. They can be used to extend the functionality of the
platform.
It is not recommended to use these microservices in production as they might be
unstable or not supported anymore.
2.2.1 - Barcodereader
The technical documentation of the barcodereader microservice,
which reads barcodes and sends the data to the Kafka broker.
This microservice is still in development and is not considered stable for production use.
Barcodereader is a microservice that reads barcodes and sends the data to the Kafka broker.
How it works
Connect a barcode scanner to the system and the microservice will read the barcodes and send the data to the Kafka broker.
Environment variables

| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| ASSET_ID | The asset ID, which is used for the topic structure | string | Any | barcodereader |
| CUSTOMER_ID | The customer ID, which is used for the topic structure | string | Any | raw |
| DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false |
| INPUT_DEVICE_NAME | The name of the USB device to use | string | Any | Datalogic ADC, Inc. Handheld Barcode Scanner |
| INPUT_DEVICE_PATH | The path of the USB device to use. It is recommended to use a wildcard (for example, /dev/input/event*) or leave empty | string | Valid Unix device path | "" |
| KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092 |
| LOCATION | The location, which is used for the topic structure | string | Any | barcodereader |
| LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
| MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-barcodereader |
| SCAN_ONLY | Prevent message broadcasting if enabled | bool | true, false | false |
| SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default |
2.2.2 - Factoryinput
The technical documentation of the factoryinput microservice,
which provides REST endpoints for MQTT messages via HTTP requests.
This microservice is still in development and is not considered stable for production use
Factoryinput provides REST endpoints for MQTT messages via HTTP requests.
This microservice is typically accessed via grafana-proxy
How it works
The factoryinput microservice provides REST endpoints for MQTT messages via HTTP requests.
The main endpoint is /api/v1/{customer}/{location}/{asset}/{value}, with a POST
request method. The customer, location, asset and value are all strings and are
used to build the MQTT topic. The body of the HTTP request is used as the MQTT
payload.
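For example, a hypothetical count message for the asset machine1 of customer umh at location cologne could be sent as follows; the path builds the MQTT topic and the body becomes the payload:

POST /api/v1/umh/cologne/machine1/count
{
  "timestamp_ms": 1589788888888,
  "count": 1
}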
Kubernetes resources
Service: Internal ClusterIP: united-manufacturing-hub-factoryinput-service at port 80
Secret: factoryinput-secret
Configuration
Environment variables

| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| BROKER_URL | URL to the broker | string | Any | ssl://united-manufacturing-hub-mqtt:8883 |
| CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS |
| CUSTOMER_NAME_{NUMBER} | Specifies a user for the REST API. Multiple users can be set | string | Any | "" |
| CUSTOMER_PASSWORD_{NUMBER} | Specifies the password of the user for the REST API | string | Any | "" |
| DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false |
| FACTORYINPUT_PASSWORD | Specifies the password for the admin user for the REST API | string | Any | Random UUID |
| FACTORYINPUT_USER | Specifies the admin user for the REST API | string | Any | factoryinsight |
| LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
| MQTT_QUEUE_HANDLER | Number of queue workers to spawn | int | 0-65535 | 10 |
| MQTT_PASSWORD | Password for the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
| POD_NAME | Name of the pod (used for tracing) | string | Any | united-manufacturing-hub-factoryinput-0 |
| SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default |
| VERSION | The version of the API used. Each version also enables all the previous ones | int | Any | 1 |
2.2.3 - Grafana Proxy
The technical documentation of the grafana-proxy microservice,
which proxies requests from Grafana to the backend services.
This microservice is still in development and is not considered stable for production use
How it works
The grafana-proxy microservice serves an HTTP REST endpoint located at
/api/v1/{service}/{data}. The service parameter specifies the backend
service to which the request should be proxied, like factoryinput or
factoryinsight. The data parameter specifies the API endpoint to forward to
the backend service. The body of the HTTP request is used as the payload for
the proxied request.
Kubernetes resources
Deployment: united-manufacturing-hub-grafanaproxy
Service:
External LoadBalancer: united-manufacturing-hub-grafanaproxy-service at
port 2096
Configuration
Environment variables

| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false |

2.2.5 - MQTT Simulator
The technical documentation of the mqtt-simulator microservice, which simulates IoT sensors sending data to the MQTT broker.
How it works
The microservice publishes messages on the topic ia/raw/development/ioTSensors/,
creating a subtopic for each simulation. The subtopics are the names of the
simulations, which are Temperature, Humidity, and Pressure.
The values are calculated using a normal distribution with a mean and standard
deviation that can be configured.
You can change the configuration of the microservice by updating the config.json
file in the ConfigMap.
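The exact schema depends on the simulator version; a minimal hypothetical config.json could look like this, with one entry per simulated value and the mean and standard deviation of its normal distribution:

{
  "simulators": [
    { "name": "Temperature", "mean": 50.0, "standardDeviation": 5.0 },
    { "name": "Humidity", "mean": 40.0, "standardDeviation": 2.5 },
    { "name": "Pressure", "mean": 1000.0, "standardDeviation": 10.0 }
  ]
}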
2.2.6 - MQTT to Postgresql
The technical documentation of the mqtt-to-postgresql microservice,
which consumes messages from an MQTT broker and writes them in a PostgreSQL
database.
This microservice is deprecated and should not be used anymore in production.
Please use kafka-to-postgresql instead.
How it works
The mqtt-to-postgresql microservice subscribes to the MQTT broker and saves
the values of the messages on the topic ia/# in the database.
2.2.7 - OPCUA Simulator
The technical documentation of the opcua-simulator microservice,
which simulates OPCUA devices.
This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.
How it works
The OPCUA Simulator is a microservice that simulates OPCUA devices. You can read
the full documentation on the
GitHub repository.
You can then connect to the simulated OPC-UA server via Node-RED and read the
values of the simulated devices. Learn more about how to connect the OPC-UA
simulator to Node-RED in our guide.
You can change the configuration of the microservice by updating the config.json
file in the ConfigMap.
2.2.8 - PackML Simulator
The technical documentation of the packml-simulator microservice,
which simulates a manufacturing line using PackML over MQTT.
This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but it is enabled by default.
PackML MQTT Simulator is a virtual line that interfaces using PackML implemented
over MQTT. It implements the PackML state model and communicates
over MQTT topics as defined by environment variables. The simulator can run
with either a basic MQTT topic structure or SparkplugB.
You shouldn’t need to configure PackML Simulator manually, as it’s configured
automatically when the cluster is deployed. However, if you need to change the
configuration, you can do it by editing the packmlmqttsimulator section of the
Helm chart values file.
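As a sketch, the relevant values could look like this (the key names are assumptions mirroring the environment variables below; check the chart's values file):

packmlmqttsimulator:
  site: testLocation                     # ISA-95 site name of the line
  area: DefaultArea                      # ISA-95 area name of the line
  productionLine: DefaultProductionLine  # ISA-95 line name of the line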
Environment variables

| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| AREA | ISA-95 area name of the line | string | Any | DefaultArea |
| LINE | ISA-95 line name of the line | string | Any | DefaultProductionLine |
| MQTT_PASSWORD | Password for the MQTT broker. Leave empty if the server does not manage permissions | string | Any | INSECURE_INSECURE_INSECURE |
| MQTT_URL | Server URL of the MQTT server | string | Any | mqtt://united-manufacturing-hub-mqtt:1883 |
| MQTT_USERNAME | Username for the MQTT broker. Leave empty if the server does not manage permissions | string | Any | PACKMLSIMULATOR |
| SITE | ISA-95 site name of the line | string | Any | testLocation |
2.2.9 - Tulip Connector
The technical documentation of the tulip-connector microservice,
which exposes internal APIs, such as factoryinsight, to the internet.
Specifically designed to communicate with Tulip.
This microservice is still in development and is not considered stable for production use.
The tulip-connector microservice enables communication with the United
Manufacturing Hub by exposing internal APIs, like
factoryinsight, to the
internet. With this REST endpoint, users can access data stored in the UMH and
seamlessly integrate Tulip with a Unified Namespace and on-premise Historian.
Furthermore, the tulip-connector can be customized to meet specific customer
requirements, including integration with an on-premise MES system.
How it works
The tulip-connector acts as a proxy between the internet and the UMH. It
exposes an endpoint to forward requests to the UMH and returns the response.
You can enable the tulip-connector and set the domain for the ingress by editing
the values in the _000_commonConfig.tulipconnector
section of the Helm chart values file.
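A minimal sketch of that section (the enabled and domain keys are assumptions based on the description above; the domain shown is a placeholder):

_000_commonConfig:
  tulipconnector:
    enabled: true                         # deploy the tulip-connector
    domain: tulip-connector.example.com   # domain used for the ingress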
Environment variables

| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| FACTORYINSIGHT_PASSWORD | Specifies the password for the admin user for the REST API | string | Any | Random UUID |
| FACTORYINSIGHT_URL | Specifies the URL of the factoryinsight microservice | string | Any | http://united-manufacturing-hub-factoryinsight-service |
| MODE | Specifies the mode that the service will run in. Change only during development | string | dev, prod | prod |
2.3 - Grafana Plugins
This section contains the overview of the custom Grafana plugins that can be
used to access the United Manufacturing Hub.
2.3.1 - Umh Datasource V2
This page contains the technical documentation of the umh-datasource-v2 plugin,
which allows for easy data extraction from factoryinsight.
The plugin, umh-datasource-v2, is a Grafana data source plugin that allows you to fetch
resources from a database and build queries for your dashboard.
How it works
When creating a new panel, select umh-datasource-v2 from the Data source drop-down menu. It will then fetch the resources
from the database. The loading time may depend on your internet speed.
Select the resources in the cascade menu to build your query. DefaultArea and DefaultProductionLine are placeholders
for the future implementation of the new data model.
Only the available values for the specified work cell will be fetched from the database. You can then select which data value you want to query.
Next you can specify how to transform the data, depending on what value you selected.
All the custom tags will have the aggregation options available. For example, if you query a processValue:
Time bucket: lets you group data in a time bucket
Aggregates: common statistical aggregations (maximum, minimum, sum or count)
Handling missing values: lets you choose how missing data should be handled
Configuration
In Grafana, navigate to the Data sources configuration panel.
Select umh-v2-datasource to configure it.
Configurations:
Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
API Key: authenticates the API calls to factoryinsight.
Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.
2.3.2 - Umh Datasource
This page contains the technical documentation of the plugin umh-datasource, which allows for easy data extraction from factoryinsight.
This plugin is no longer maintained. Use our new plugin umh-datasource-v2 instead for data extraction from factoryinsight.
The umh datasource is a Grafana 8.x compatible plugin that allows you to fetch resources from a database
and build queries for your dashboard.
How it works
When creating a new panel, select umh-datasource from the Data source drop-down menu. It will then fetch the resources
from the database. The loading time may depend on your internet speed.
Select your query parameters Location, Asset and Value to build your query.
Configuration
In Grafana, navigate to the Data sources configuration panel.
Select umh-datasource to configure it.
Configurations:
Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
API Key: authenticates the API calls to factoryinsight.
Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.
2.3.3 - Factoryinput Panel
This page contains the technical documentation of the plugin factoryinput-panel, which allows for easy execution of MQTT messages inside the UMH stack from a Grafana panel.
This plugin is still in development and is not considered stable for production use
Below you will find a schematic of this flow through our stack.
3 - Datamodel
This page describes the data model of the UMH stack - from the message payloads up to database tables.
Raw Data
If you have events that you just want to send to the message broker / Unified Namespace without the need for it to be stored, simply send it to the raw topic.
This data will not be processed by the UMH stack, but you can use it to build your own data processing pipeline.
ProcessValue Data
If you have data that does not fit in the other topics (such as your PLC tags or sensor data), you can use the processValue topic. It will be saved in the database in the processValueTable or processValueStringTable and can be queried using factoryinsight or the umh-datasource Grafana plugin.
Production Data
In a production environment, you should first declare products using addProduct.
This allows you to create an order using addOrder. Once you have created an order,
send a state message to tell the database that the machine is working (or not working) on the order.
When the machine is ordered to produce a product, send a startOrder message.
When the machine has finished producing the product, send an endOrder message.
Send count messages if the machine has produced a product, but it does not make sense to give the product its ID. Especially useful for bottling or any other use case with a large amount of products, where not each product is traced.
Recommendation: Start with addShift and state and continue from there on
Modifying Data
If you have accidentally sent the wrong state or if you want to modify a value, you can use the modifyState message.
Unique Product Tracking
You can use uniqueProduct to tell the database that a new instance of a product has been created.
If the produced product is scrapped, you can use scrapUniqueProduct to change its state to scrapped.
3.1 - Messages
For each message topic you will find a short description of what the message is used for and which structure it has, as well as what structure the payload is expected to have.
Introduction
The United Manufacturing Hub provides a specific structure for messages/topics, each with its own unique purpose.
By adhering to this structure, the UMH will automatically calculate KPIs for you, while also making it easier to maintain
consistency in your topic structure.
3.1.1 - activity
This is part of our recommended workflow to create machine states. The data sent here will not be stored in the database automatically, as it will be required to be converted into a state. In the future, there will be a microservice which converts these automatically.
addOrder
A message is sent here each time a new order is added.
Content

| key | data type | description |
| --- | --- | --- |
| product_id | string | current product name |
| order_id | string | current order name |
| target_units | int64 | amount of units to be produced |
The product needs to be added before adding the order. Otherwise, this message will be discarded
One order is always specific to that asset and can, by definition, not be used across machines. For this case one would need to create one order and product for each asset (reason: one product might go through multiple machines, but might have different target durations or even target units, e.g. one big 100m batch gets split up into multiple pieces)
JSON
Examples
One order was started for 100 units of product “test”:
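A matching payload could be (the order_id is an illustrative value):

{
  "product_id": "test",
  "order_id": "test-order-1",
  "target_units": 100
}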
addParentToChild
This message can be emitted to add a child product to a parent product.
It can be sent multiple times, if a parent product is split up into multiple children or multiple parents are combined into one child. One example for this is when multiple parts are assembled into a single product.
detectedAnomaly messages are sent when an asset has stopped and the reason is identified.
This is part of our recommended workflow to create machine states. The data sent here will not be stored in the database automatically, as it will be required to be converted into a state. In the future, there will be a microservice, which converts these automatically.
processValue
If you have a lot of process values, we'd recommend not using /processValue as the topic, but appending the tag name as well, e.g., /processValue/energyConsumption. This will structure it better for usage in MQTT Explorer or for processing only certain process values.
For automatic data storage in kafka-to-postgresql both will work fine as long as the payload is correct.
Please be aware that the values may only be int or float; other characters are not valid, so make sure no quotation marks or anything else sneak in there. Also be cautious of the JavaScript toFixed() function, as it converts a float into a string.
Usage
A message is sent each time a process value has been prepared. The key has a unique name.
Content

| key | data type | description |
| --- | --- | --- |
| timestamp_ms | int64 | unix timestamp of message creation |
| <valuename> | int64 or float64 | Represents a process value, e.g. temperature |
Pre 0.10.0:
As <valuename> is either of type int64 or float64, you cannot use booleans. Convert them to integers as needed, e.g., true = 1, false = 0.
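For example, a processValue message carrying a hypothetical energyConsumption tag could look like this (values are illustrative):

{
  "timestamp_ms": 1589788888888,
  "energyConsumption": 123.4
}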
processValueString
A message is sent each time a process value has been prepared. The key has a unique name. This message is used when the datatype of the process value is a string instead of a number.
Content

| key | data type | description |
| --- | --- | --- |
| timestamp_ms | int64 | unix timestamp of message creation |
| <valuename> | string | Represents a process value, e.g. temperature |
JSON
Example
At the shown timestamp the custom process value “customer” had a readout of “miller”.
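A matching payload could be (the timestamp is illustrative):

{
  "timestamp_ms": 1588879689394,
  "customer": "miller"
}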
recommendation messages are action recommendations, which require concrete and rapid action in order to quickly eliminate efficiency losses on the shop floor.
Content

| key | data type | description |
| --- | --- | --- |
| uid | string | UniqueID of the product |
| timestamp_ms | int64 | unix timestamp of message creation |
| customer | string | the customer ID in the data structure |
| location | string | the location in the data structure |
| asset | string | the asset ID in the data structure |
| recommendationType | int32 | Type of the recommendation |
| enabled | bool | whether the recommendation is enabled |
| recommendationValues | map | Map of values based on which this recommendation is created |
| diagnoseTextDE | string | Diagnosis of the recommendation in German |
| diagnoseTextEN | string | Diagnosis of the recommendation in English |
| recommendationTextDE | string | Recommendation in German |
| recommendationTextEN | string | Recommendation in English |
JSON
Example
The demonstrator at the shown location has not been running for a while, so a recommendation is sent to either start the machine or specify a reason why it is not running.
{
  "UID": "43298756",
  "timestamp_ms": 15888796894,
  "customer": "united-manufacturing-hub",
  "location": "dccaachen",
  "asset": "DCCAachen-Demonstrator",
  "recommendationType": 1,
  "enabled": true,
  "recommendationValues": { "Threshold": 30, "StoppedForTime": 612685 },
  "diagnoseTextDE": "Maschine DCCAachen-Demonstrator steht seit 612685 Sekunden still (Status: 8, Schwellwert: 30)",
  "diagnoseTextEN": "Machine DCCAachen-Demonstrator is not running since 612685 seconds (status: 8, threshold: 30)",
  "recommendationTextDE": "Maschine DCCAachen-Demonstrator einschalten oder Stoppgrund auswählen.",
  "recommendationTextEN": "Start machine DCCAachen-Demonstrator or specify stop reason."
}
scrapCount
Here a message is sent every time products should be marked as scrap. It works as follows: a message with scrap and timestamp_ms is sent. Starting from the count that is directly before timestamp_ms, the existing counts are set to scrap step by step back in time, until a total of scrap products have been scrapped.
Content
timestamp_ms is the unix timestamp you want to go back from
scrap is the number of items to be considered as scrap.
You can specify a maximum of 24h to be scrapped to avoid accidents
(NOT IMPLEMENTED YET) If counts does not equal scrap, e.g. the count is 5 but only 2 more need to be scrapped, it will scrap exactly 2. Currently, it would ignore these 2. See also #125
(NOT IMPLEMENTED YET) If no counts are available for this asset, but uniqueProducts are available, they can also be marked as scrap.
JSON
Examples
Ten items were scrapped:
{
  "timestamp_ms": 1589788888888,
  "scrap": 10
}
Schema
{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
  "type": "object",
  "default": {},
  "title": "Root Schema",
  "required": ["timestamp_ms", "scrap"],
  "properties": {
    "timestamp_ms": {
      "type": "integer",
      "default": 0,
      "minimum": 0,
      "title": "The unix timestamp you want to go back from",
      "examples": [1589788888888]
    },
    "scrap": {
      "type": "integer",
      "default": 0,
      "minimum": 0,
      "title": "Number of items to be considered as scrap",
      "examples": [10]
    }
  },
  "examples": [
    { "timestamp_ms": 1589788888888, "scrap": 10 },
    { "timestamp_ms": 1589788888888, "scrap": 5 }
  ]
}
state
A message is sent here each time the asset changes status. Subsequent changes are not possible. Different statuses can also be process steps, such as "setup", "post-processing", etc. You can find a list of all supported states here.
Content

| key | data type | description |
| --- | --- | --- |
| state | uint32 | value of the state according to the link above |
| timestamp_ms | uint64 | unix timestamp of message creation |
JSON
Example
The asset has a state of 10000, which means it is actively producing.
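A matching payload could be (the timestamp is illustrative):

{
  "state": 10000,
  "timestamp_ms": 1589788888888
}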
uniqueProduct
A message is sent here each time a product has been produced or modified. A modification can take place, for example, due to a downstream quality control.
There are two cases of when to send a message under the uniqueProduct topic:
The exact product doesn't already have a UID (this is the case if it has not been produced at an asset incorporated in the digital shadow). Specify a placeholder asset = "storage" in the MQTT message for the uniqueProduct topic.
The product was produced at the current asset (it is now different from before, e.g. after machining or after something was screwed in). The newly produced product is always the "child" of the process. Products it was made out of are called the "parents".
Content

| key | data type | description |
| --- | --- | --- |
| begin_timestamp_ms | int64 | unix timestamp of start time |
| end_timestamp_ms | int64 | unix timestamp of completion time |
| product_id | string | product ID of the currently produced product |
| isScrap | bool | optional information whether the current product is of poor quality and will be sorted out. Considered false if not specified |
| uniqueProductAlternativeID | string | alternative ID of the product |
JSON
Example
The processing of product “Beilinger 30x15” with the AID 216381 started and ended at the designated timestamps. It is of low quality and due to be scrapped.
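A matching payload could be (the timestamps are illustrative):

{
  "begin_timestamp_ms": 1589788888888,
  "end_timestamp_ms": 1589788893888,
  "product_id": "Beilinger 30x15",
  "isScrap": true,
  "uniqueProductAlternativeID": "216381"
}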
3.2 - Database
The database stores the messages in different tables.
Introduction
We are using TimescaleDB, which is based on PostgreSQL and supports standard
relational SQL workloads, while also supporting time-series data.
This allows regular SQL queries to be used alongside time-series processing and storage.
PostgreSQL has proven itself reliable over the last 25 years, so we are happy to use it.
If you want to learn more about database paradigms, please refer to the knowledge article about that topic.
It also includes a concise video summarizing what you need to know about different paradigms.
Our database model is designed to represent a physical manufacturing process. It keeps track of the following data:
The state of the machine
The products that are produced
The orders for the products
The workers’ shifts
Arbitrary process values (sensor data)
The producible products
Recommendations for the production
Please note that our database does not use a retention policy. This means that your database can grow quite fast if you save a lot of process values. Take a look at our guide on enabling data compression and retention in TimescaleDB to customize the database to your needs.
A good method to check your db size would be to use the following commands inside postgres shell:
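A sketch of such commands, assuming the default factoryinsight database and a TimescaleDB 2.x installation (the hypertable_size function is not available in older versions):

-- overall size of the factoryinsight database
SELECT pg_size_pretty(pg_database_size('factoryinsight'));
-- size of a single hypertable, e.g. processValueTable
SELECT pg_size_pretty(hypertable_size('processvaluetable'));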
countTable

CREATE TABLE IF NOT EXISTS countTable
(
    timestamp TIMESTAMPTZ NOT NULL,
    asset_id SERIAL REFERENCES assetTable (id),
    count INTEGER CHECK (count > 0),
    UNIQUE (timestamp, asset_id)
);
-- creating hypertable
SELECT create_hypertable('countTable', 'timestamp');
-- creating an index to increase performance
CREATE INDEX ON countTable (asset_id, timestamp DESC);
processValueStringTable
This table stores process values, for example the toner level of a printer or the flow rate of a pump. It has a closely related table for storing number values, processValueTable.

CREATE TABLE IF NOT EXISTS processValueStringTable
(
    timestamp TIMESTAMPTZ NOT NULL,
    asset_id SERIAL REFERENCES assetTable (id),
    valueName TEXT NOT NULL,
    value TEXT NULL,
    UNIQUE (timestamp, asset_id, valueName)
);
-- creating hypertable
SELECT create_hypertable('processValueStringTable', 'timestamp');
-- creating an index to increase performance
CREATE INDEX ON processValueStringTable (asset_id, timestamp DESC);
-- creating an index to increase performance
CREATE INDEX ON processValueStringTable (valuename);
3.2.6 - processValueTable
processValueTable contains process values.
Usage
This table stores process values, for example the toner level of a printer or the flow rate of a pump. It has a closely related table for storing string values, processValueStringTable.

CREATE TABLE IF NOT EXISTS processValueTable
(
    timestamp TIMESTAMPTZ NOT NULL,
    asset_id SERIAL REFERENCES assetTable (id),
    valueName TEXT NOT NULL,
    value DOUBLE PRECISION NULL,
    UNIQUE (timestamp, asset_id, valueName)
);
-- creating hypertable
SELECT create_hypertable('processValueTable', 'timestamp');
-- creating an index to increase performance
CREATE INDEX ON processValueTable (asset_id, timestamp DESC);
-- creating an index to increase performance
CREATE INDEX ON processValueTable (valuename);
stateTable

CREATE TABLE IF NOT EXISTS stateTable
(
    timestamp TIMESTAMPTZ NOT NULL,
    asset_id SERIAL REFERENCES assetTable (id),
    state INTEGER CHECK (state >= 0),
    UNIQUE (timestamp, asset_id)
);
-- creating hypertable
SELECT create_hypertable('stateTable', 'timestamp');
-- creating an index to increase performance
CREATE INDEX ON stateTable (asset_id, timestamp DESC);
3.2.11 - uniqueProductTable
uniqueProductTable contains unique products and their IDs.

CREATE TABLE IF NOT EXISTS uniqueProductTable
(
    uid TEXT NOT NULL,
    asset_id SERIAL REFERENCES assetTable (id),
    begin_timestamp_ms TIMESTAMPTZ NOT NULL,
    end_timestamp_ms TIMESTAMPTZ NOT NULL,
    product_id TEXT NOT NULL,
    is_scrap BOOLEAN NOT NULL,
    quality_class TEXT NOT NULL,
    station_id TEXT NOT NULL,
    UNIQUE (uid, asset_id, station_id),
    CHECK (begin_timestamp_ms < end_timestamp_ms)
);
-- creating an index to increase performance
CREATE INDEX ON uniqueProductTable (asset_id, uid, station_id);
3.3 - States
States are the core of the database model. They represent the state of the machine at a given point in time.
States Documentation Index
Introduction
This documentation outlines the various states used in the United Manufacturing Hub software stack to calculate OEE/KPI and other production metrics.
State Categories
Active (10000-29999): These states represent that the asset is actively producing.
Material (60000-99999): These states represent that the asset has issues regarding materials.
Operator (140000-159999): These states represent that the asset is stopped because of operator related issues.
Planning (160000-179999): These states represent that the asset is stopped as it is planned to stop (planned idle time).
Process (100000-139999): These states represent that the asset is in a stop, which belongs to the process and cannot be avoided.
Unknown (30000-59999): These states represent that the asset is in an unspecified state.
Glossary
OEE: Overall Equipment Effectiveness
KPI: Key Performance Indicator
Conclusion
This documentation provides a comprehensive overview of the states used in the United Manufacturing Hub software stack and their respective categories. For more information on each state category and its individual states, please refer to the corresponding subpages.
3.3.1 - Active (10000-29999)
These states represent that the asset is actively producing
10000: ProducingAtFullSpeedState
This asset is running at full speed.
Examples for ProducingAtFullSpeedState
WS_Cur_State: Operating
PackML/Tobacco: Execute
20000: ProducingAtLowerThanFullSpeedState
Asset is producing, but not at full speed.
Examples for ProducingAtLowerThanFullSpeedState
WS_Cur_Prog: StartUp
WS_Cur_Prog: RunDown
WS_Cur_State: Stopping
PackML/Tobacco: Stopping
WS_Cur_State: Aborting
PackML/Tobacco: Aborting
WS_Cur_State: Holding
WS_Cur_State: Unholding
PackML/Tobacco: Unholding
WS_Cur_State: Suspending
PackML/Tobacco: Suspending
WS_Cur_State: Unsuspending
PackML/Tobacco: Unsuspending
PackML/Tobacco: Completing
WS_Cur_Prog: Production
EUROMAP: MANUAL_RUN
EUROMAP: CONTROLLED_RUN
Currently not included:
WS_Prog_Step: all
3.3.2 - Unknown (30000-59999)
These states represent that the asset is in an unspecified state
30000: UnknownState
Data for that particular asset is not available (e.g. connection to the PLC is disrupted)
Examples for UnknownState
WS_Cur_Prog: Undefined
EUROMAP: Offline
40000: UnspecifiedStopState
The asset is not producing, but the reason is unknown at the time.
Examples for UnspecifiedStopState
WS_Cur_State: Clearing
PackML/Tobacco: Clearing
WS_Cur_State: Emergency Stop
WS_Cur_State: Resetting
PackML/Tobacco: Resetting
WS_Cur_State: Held
EUROMAP: Idle
Tobacco: Other
WS_Cur_State: Stopped
PackML/Tobacco: Stopped
WS_Cur_State: Starting
PackML/Tobacco: Starting
WS_Cur_State: Prepared
WS_Cur_State: Idle
PackML/Tobacco: Idle
PackML/Tobacco: Complete
EUROMAP: READY_TO_RUN
50000: MicrostopState
The asset is not producing for a short period (typically around five minutes), but the reason is unknown at the time.
3.3.3 - Material (60000-99999)
These states represent that the asset has issues regarding materials.
60000: InletJamState
This machine does not perform its intended function due to a lack of material flow in the infeed of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several inlets, the condition of lack in the inlet refers to the main flow, i.e. to the material (crate, bottle) that is fed in the direction of the filling machine (central machine). The defect in the infeed is an extraneous defect, but because of its importance for visualization and technical reporting, it is recorded separately.
Examples for InletJamState
WS_Cur_State: Lack
70000: OutletJamState
The machine does not perform its intended function as a result of a jam in the good flow discharge of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several discharges, the jam in the discharge condition refers to the main flow, i.e. to the good (crate, bottle) that is fed in the direction of the filling machine (central machine) or is fed away from the filling machine. The jam in the outfeed is an external fault, but it is recorded separately because of its importance for visualization and technical reporting.
Examples for OutletJamState
WS_Cur_State: Tailback
80000: CongestionBypassState
The machine does not perform its intended function due to a shortage in the bypass supply or a jam in the bypass discharge of the machine, detected by the sensor system of the control system (machine stop). This condition can only occur in machines with two outlets or inlets, in which the bypass is in turn the inlet or outlet of an upstream or downstream machine of the filling line (packaging and palletizing machines). The jam/shortage in the auxiliary flow is an external fault, but it is recorded separately due to its importance for visualization and technical reporting.
Examples for the CongestionBypassState
WS_Cur_State: Lack/Tailback Branch Line
90000: MaterialIssueOtherState
The asset has a material issue, but it is not further specified.
Examples for MaterialIssueOtherState
WS_Mat_Ready (Information of which material is lacking)
PackML/Tobacco: Suspended
3.3.4 - Process (100000-139999)
These states represent that the asset is in a stop, which belongs to the process and cannot be avoided.
100000: ChangeoverState
The asset is in a changeover process between products.
Examples for ChangeoverState
WS_Cur_Prog: Program-Changeover
Tobacco: CHANGE OVER
110000: CleaningState
The asset is currently in a cleaning process.
Examples for CleaningState
WS_Cur_Prog: Program-Cleaning
Tobacco: CLEAN
120000: EmptyingState
The asset is currently being emptied, e.g. to prevent mold in food products over long breaks such as the weekend.
Examples for EmptyingState
Tobacco: EMPTY OUT
130000: SettingUpState
This machine is currently preparing itself for production, e.g. heating up.
Examples for SettingUpState
EUROMAP: PREPARING
3.3.5 - Operator (140000-159999)
These states represent that the asset is stopped because of operator related issues.
140000: OperatorNotAtMachineState
The operator is not at the machine.
150000: OperatorBreakState
The operator is taking a break.
This is different from a planned shift as it could contribute to performance losses.
Examples for OperatorBreakState
WS_Cur_Prog: Program-Break
3.3.6 - Planning (160000-179999)
These states represent that the asset is stopped as it is planned to stop (planned idle time).
160000: NoShiftState
There is no shift planned at that asset.
170000: NoOrderState
There is no order planned at that asset.
3.3.7 - Technical (180000-229999)
These states represent that the asset has a technical issue.
180000: EquipmentFailureState
The asset itself is defect, e.g. a broken engine.
Examples for EquipmentFailureState
WS_Cur_State: Equipment Failure
190000: ExternalFailureState
There is an external failure, e.g. missing compressed air.
Examples for ExternalFailureState
WS_Cur_State: External Failure
200000: ExternalInterferenceState
There is an external interference, e.g. the crane to move the material is currently unavailable.
210000: PreventiveMaintenanceStop
A planned maintenance action.
Examples for PreventiveMaintenanceStop
WS_Cur_Prog: Program-Maintenance
PackML: Maintenance
EUROMAP: MAINTENANCE
Tobacco: MAINTENANCE
220000: TechnicalOtherStop
The asset has a technical issue, but it is not specified further.