There are several microservices that are part of the United Manufacturing Hub. Some of them compose the core of the platform, and are mainly developed by the UMH team, with the addition of some third-party software. Others are maintained by the community, and are used to extend the functionality of the platform.
Microservices
- 1: Core
- 1.1: Cache
- 1.2: Database
- 1.3: Factoryinsight
- 1.4: Grafana
- 1.5: Kafka Bridge
- 1.6: Kafka Broker
- 1.7: Kafka Console
- 1.8: Kafka to Postgresql
- 1.9: MQTT Bridge
- 1.10: MQTT Broker
- 1.11: MQTT Kafka Bridge
- 1.12: Node-RED
- 1.13: Sensorconnect
- 2: Community
- 2.1: Barcodereader
- 2.2: Factoryinput
- 2.3: Grafana Proxy
- 2.4: Kafka State Detector
- 2.5: MQTT Simulator
- 2.6: MQTT to Postgresql
- 2.7: OPCUA Simulator
- 2.8: PackML Simulator
- 2.9: Tulip Connector
- 3: Grafana Plugins
- 3.1: Umh Datasource V2
- 3.2: Umh Datasource
- 3.3: Factoryinput Panel
1 - Core
The microservices in this section are part of the Core of the United Manufacturing Hub. They are mainly developed by the UMH team, with the addition of some third-party software. They are used to provide the core functionality of the platform.
1.1 - Cache
The cache in the United Manufacturing Hub is Redis, a key-value store that is used as a cache for the other microservices.
How it works
Recently used data is stored in the cache to reduce the load on the database. All the microservices that need to access the database first check if the data is available in the cache. If it is, it is used; otherwise, the microservice queries the database and stores the result in the cache.
By default, Redis is configured to run in standalone mode, which means that it will only have one master node.
Kubernetes resources
- StatefulSet: united-manufacturing-hub-redis-master
- Service:
  - Internal ClusterIP:
    - Redis: united-manufacturing-hub-redis-master at port 6379
    - Headless: united-manufacturing-hub-redis-headless at port 6379
    - Metrics: united-manufacturing-hub-redis-metrics at port 6379
- ConfigMap:
  - Configuration: united-manufacturing-hub-redis-configuration
  - Health: united-manufacturing-hub-redis-health
  - Scripts: united-manufacturing-hub-redis-scripts
- Secret: redis-secret
- PersistentVolumeClaim: redis-data-united-manufacturing-hub-redis-master-0
Configuration
You shouldn’t need to configure the cache manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the redis section of the Helm chart values file.
You can consult the Bitnami Redis chart for more information about the available configuration options.
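For example, a minimal override of the redis section might look like the sketch below. The keys come from the Bitnami Redis chart, and the values shown are only illustrative.

redis:
  # Illustrative overrides; see the Bitnami Redis chart for all available options.
  architecture: standalone
  master:
    persistence:
      size: 8Gi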
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
ALLOW_EMPTY_PASSWORD | Allow empty password | bool | true , false | false |
BITNAMI_DEBUG | Specify if debug values should be set | bool | true , false | false |
REDIS_PASSWORD | Redis password | string | Any | Random UUID |
REDIS_PORT | Redis port number | int | Any | 6379 |
REDIS_REPLICATION_MODE | Redis replication mode | string | master , slave | master |
REDIS_TLS_ENABLED | Enable TLS | bool | true , false | false |
1.2 - Database
The database microservice is the central component of the United Manufacturing Hub and is based on TimescaleDB, an open-source relational database built for handling time-series data. TimescaleDB is designed to provide scalable and efficient storage, processing, and analysis of time-series data.
You can find more information on the datamodel of the database in the Data Model section, and read about the choice to use TimescaleDB in the blog article.
How it works
When deployed, the database microservice will create two databases, with the related usernames and passwords:

- grafana: This database is used by Grafana to store the dashboards and other data.
- factoryinsight: This database is the main database of the United Manufacturing Hub. It contains all the data that is collected by the microservices.

Then, it creates the tables based on the database schema.
If you want to learn more about how TimescaleDB works, you can read the TimescaleDB documentation.
Kubernetes resources
- StatefulSet: united-manufacturing-hub-timescaledb
- Service:
  - Internal ClusterIP for the replicas: united-manufacturing-hub-replica at port 5432
  - Internal ClusterIP for the config: united-manufacturing-hub-config at port 8008
  - External LoadBalancer: united-manufacturing-hub at port 5432
- ConfigMap:
  - Patroni: united-manufacturing-hub-timescaledb-patroni
  - Post init: timescale-post-init
  - Postgres BackRest: united-manufacturing-hub-timescaledb-pgbackrest
  - Scripts: united-manufacturing-hub-timescaledb-scripts
- Secret:
  - Certificate: united-manufacturing-hub-certificate
  - Patroni credentials: united-manufacturing-hub-credentials
  - Users passwords: timescale-post-init-pw
- PersistentVolumeClaim:
  - Data: storage-volume-united-manufacturing-hub-timescaledb-0
  - WAL-E: wal-volume-united-manufacturing-hub-timescaledb-0
Configuration
There is only one parameter that usually needs to be changed: the password used to connect to the database. To do so, set the value of the db_password key in the _000_commonConfig.datastorage section of the Helm chart values file.
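For example, a minimal sketch of that section could look like this (only db_password is shown, and the value is a placeholder):

_000_commonConfig:
  datastorage:
    # Password used by the microservices to connect to the database
    db_password: changeme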
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
BOOTSTRAP_FROM_BACKUP | Whether to bootstrap the database from a backup or not. | int | 0, 1 | 0 |
PATRONI_KUBERNETES_LABELS | The labels to use to find the pods of the StatefulSet. | string | Any | {app: united-manufacturing-hub-timescaledb, cluster-name: united-manufacturing-hub, release: united-manufacturing-hub} |
PATRONI_KUBERNETES_NAMESPACE | The namespace in which the StatefulSet is deployed. | string | Any | united-manufacturing-hub |
PATRONI_KUBERNETES_POD_IP | The IP address of the pod. | string | Any | Random IP |
PATRONI_KUBERNETES_PORTS | The ports to use to connect to the pods. | string | Any | [{"name": "postgresql", "port": 5432}] |
PATRONI_NAME | The name of the pod. | string | Any | united-manufacturing-hub-timescaledb-0 |
PATRONI_POSTGRESQL_CONNECT_ADDRESS | The address to use to connect to the database. | string | Any | $(PATRONI_KUBERNETES_POD_IP):5432 |
PATRONI_POSTGRESQL_DATA_DIR | The directory where the database data is stored. | string | Any | /var/lib/postgresql/data |
PATRONI_REPLICATION_PASSWORD | The password to use to connect to the database as a replica. | string | Any | Random 16 characters |
PATRONI_REPLICATION_USERNAME | The username to use to connect to the database as a replica. | string | Any | standby |
PATRONI_RESTAPI_CONNECT_ADDRESS | The address to use to connect to the REST API. | string | Any | $(PATRONI_KUBERNETES_POD_IP):8008 |
PATRONI_SCOPE | The name of the cluster. | string | Any | united-manufacturing-hub |
PATRONI_SUPERUSER_PASSWORD | The password to use to connect to the database as the superuser. | string | Any | Random 16 characters |
PATRONI_admin_OPTIONS | The options to use for the admin user. | string | Comma separated list of options | createrole,createdb |
PATRONI_admin_PASSWORD | The password to use to connect to the database as the admin user. | string | Any | Random 16 characters |
PGBACKREST_CONFIG | The path to the configuration file for Postgres BackRest. | string | Any | /etc/pgbackrest/pgbackrest.conf |
PGDATA | The directory where the database data is stored. | string | Any | $(PATRONI_POSTGRESQL_DATA_DIR) |
PGHOST | The directory of the running database | string | Any | /var/run/postgresql |
1.3 - Factoryinsight
Factoryinsight is a microservice that provides a set of REST APIs to access the data from the database. It is particularly useful to calculate the Key Performance Indicators (KPIs) of the factories.
How it works
Factoryinsight exposes REST APIs to access the data from the database or calculate the KPIs. By default, it’s only accessible from the internal network of the cluster, but it can be configured to be accessible from the external network.
The APIs require authentication, which can be either Basic Auth or a Bearer token. Both of these can be found in the Secret factoryinsight-secret.
API documentation
Kubernetes resources
- Deployment: united-manufacturing-hub-factoryinsight-deployment
- Service:
  - Internal ClusterIP: united-manufacturing-hub-factoryinsight-service at port 80
  - External: Access factoryinsight outside the cluster
- Secret: factoryinsight-secret
Configuration
You shouldn’t need to configure Factoryinsight manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the factoryinsight section of the Helm chart values file.
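As a rough sketch, an override of that section might look like the following. The keys shown here are illustrative assumptions; check the Helm chart values file for the keys that actually exist.

factoryinsight:
  # Illustrative only; consult the chart's values file before changing anything.
  replicas: 2
  resources:
    requests:
      memory: 200Mi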
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
CUSTOMER_NAME_{NUMBER} | Specifies a user for the REST API. Multiple users can be set | string | Any | "" |
CUSTOMER_PASSWORD_{NUMBER} | Specifies the password of the user for the REST API | string | Any | "" |
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true , false | false |
DRY_RUN | If enabled, data won’t be stored in the database | bool | true , false | false |
FACTORYINSIGHT_PASSWORD | Specifies the password for the admin user for the REST API | string | Any | Random UUID |
FACTORYINSIGHT_USER | Specifies the admin user for the REST API | string | Any | factoryinsight |
INSECURE_NO_AUTH | If enabled, no authentication is required for the REST API. Not recommended for production | bool | true , false | false |
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
MICROSERVICE_NAME | Name of the microservice. Used for tracing | string | Any | united-manufacturing-hub-factoryinsight |
POSTGRES_DATABASE | Specifies the database name to use | string | Any | factoryinsight |
POSTGRES_HOST | Specifies the database DNS name or IP address | string | Any | united-manufacturing-hub |
POSTGRES_PASSWORD | Specifies the database password to use | string | Any | changeme |
POSTGRES_PORT | Specifies the database port | int | Valid port number | 5432 |
POSTGRES_USER | Specifies the database user to use | string | Any | factoryinsight |
REDIS_PASSWORD | Password to access the redis sentinel | string | Any | Random UUID |
REDIS_URI | The URI of the Redis instance | string | Any | united-manufacturing-hub-redis-headless:6379 |
SERIAL_NUMBER | Serial number of the cluster. Used for tracing | string | Any | default |
VERSION | The version of the API used. Each version also enables all the previous ones | int | Any | 2 |
1.4 - Grafana
The grafana microservice is a web application that provides visualization and analytics capabilities. Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored.
It has a rich ecosystem of plugins that allow you to extend its functionality beyond the core features.
How it works
Grafana is a web application that can be accessed through a web browser. It lets you create dashboards that can be used to visualize data from the database.
Thanks to some custom datasource plugins, Grafana can use the various APIs of the United Manufacturing Hub to query the database and display useful information.
Kubernetes resources
- Deployment: united-manufacturing-hub-grafana
- Service:
  - External LoadBalancer: united-manufacturing-hub-grafana at port 8080
- ConfigMap: united-manufacturing-hub-grafana
- Secret: grafana-secret
- PersistentVolumeClaim: united-manufacturing-hub-grafana
Configuration
Grafana is configured through its user interface. The default credentials are found in the grafana-secret Secret.
The Grafana installation that is provided by the United Manufacturing Hub is shipped with a set of preinstalled plugins:
- ACE.SVG by Andrew Rodgers
- Button Panel by CloudSpout LLC
- Button Panel by UMH Systems Gmbh
- Discrete by Natel Energy
- Dynamic Text by Marcus Olsson
- FlowCharting by agent
- Pareto Chart by isaozler
- Pie Chart (old) by Grafana Labs
- Timepicker Buttons Panel by williamvenner
- UMH Datasource by UMH Systems Gmbh
- UMH Datasource v2 by UMH Systems Gmbh
- Untimely by factry
- Worldmap Panel by Grafana Labs
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
FACTORYINSIGHT_APIKEY | The API key to use to authenticate to the Factoryinsight API | string | Any | Base64 encoded string |
FACTORYINSIGHT_BASEURL | The base URL of the Factoryinsight API | string | Any | united-manufacturing-hub-factoryinsight-service |
FACTORYINSIGHT_CUSTOMERID | The customer ID to use to authenticate to the Factoryinsight API | string | Any | factoryinsight |
FACTORYINSIGHT_PASSWORD | The password to use to authenticate to the Factoryinsight API | string | Any | Random UUID |
GF_PATHS_DATA | The path where Grafana will store its data | string | Any | /var/lib/grafana/data |
GF_PATHS_LOGS | The path where Grafana will store its logs | string | Any | /var/log/grafana |
GF_PATHS_PLUGINS | The path where Grafana will store its plugins | string | Any | /var/lib/grafana/plugins |
GF_PATHS_PROVISIONING | The path where Grafana will store its provisioning configuration | string | Any | /etc/grafana/provisioning |
GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS | List of plugin identifiers to allow loading even if they lack a valid signature | string | Comma separated list | umh-datasource,umh-factoryinput-panel,umh-v2-datasource |
GF_SECURITY_ADMIN_PASSWORD | The password of the admin user | string | Any | Random UUID |
GF_SECURITY_ADMIN_USER | The username of the admin user | string | Any | admin |
1.5 - Kafka Bridge
Kafka-bridge is a microservice that connects two Kafka brokers and forwards messages between them. It is used to connect the local broker of the edge computer with the remote broker on the server.
How it works
This microservice has two ways of operation:
- High Integrity: This mode is used for topics that are critical for the user. It is guaranteed that no messages are lost. This is achieved by committing the message only after it has been successfully inserted into the database. Usually all the topics are forwarded in this mode, except for processValue, processValueString and raw messages.
- High Throughput: This mode is used for topics that are not critical for the user. They are forwarded as fast as possible, but it is possible that messages are lost, for example if the database struggles to keep up. Usually only the processValue, processValueString and raw messages are forwarded in this mode.
Kubernetes resources
- Deployment: united-manufacturing-hub-kafkabridge
- Secret:
  - Local broker: united-manufacturing-hub-kafkabridge-secrets-local
  - Remote broker: united-manufacturing-hub-kafkabridge-secrets-remote
Configuration
You can configure the kafka-bridge microservice by setting the following values in the _000_commonConfig.kafkaBridge section of the Helm chart values file.
kafkaBridge:
  enabled: true
  remotebootstrapServer: ""
  topicmap:
    - bidirectional: false
      name: HighIntegrity
      send_direction: to_remote
      topic: ^ia\..+\..+\..+\.((addMaintenanceActivity)|(addOrder)|(addParentToChild)|(addProduct)|(addShift)|(count)|(deleteShiftByAssetIdAndBeginTimestamp)|(deleteShiftById)|(endOrder)|(modifyProducedPieces)|(modifyState)|(productTag)|(productTagString)|(recommendation)|(scrapCount)|(startOrder)|(state)|(uniqueProduct)|(scrapUniqueProduct))$
    - bidirectional: false
      name: HighThroughput
      send_direction: to_remote
      topic: ^ia\..+\..+\..+\.(processValue).*$
Topic Map schema
The topic map is a list of objects, each object represents a topic (or a set of topics) that should be forwarded. The following JSON schema describes the structure of the topic map:
{
"$schema": "http://json-schema.org/draft-07/schema",
"type": "array",
"title": "Kafka Topic Map",
"description": "This schema validates valid Kafka topic maps.",
"default": [],
"additionalItems": true,
"items": {
"$id": "#/items",
"anyOf": [
{
"$id": "#/items/anyOf/0",
"type": "object",
"title": "Unidirectional Kafka Topic Map with send direction",
"description": "This schema validates entries, that are unidirectional and have a send direction.",
"default": {},
"examples": [
{
"name": "HighIntegrity",
"topic": "^ia\\..+\\..+\\..+\\.(?!processValue).+$",
"bidirectional": false,
"send_direction": "to_remote"
}
],
"required": [
"name",
"topic",
"bidirectional",
"send_direction"
],
"properties": {
"name": {
"$id": "#/items/anyOf/0/properties/name",
"type": "string",
"title": "Entry Name",
"description": "Name of the map entry, only used for logging & tracing.",
"default": "",
"examples": [
"HighIntegrity"
]
},
"topic": {
"$id": "#/items/anyOf/0/properties/topic",
"type": "string",
"title": "The topic to listen on",
"description": "The topic to listen on, this can be a regular expression.",
"default": "",
"examples": [
"^ia\\..+\\..+\\..+\\.(?!processValue).+$"
]
},
"bidirectional": {
"$id": "#/items/anyOf/0/properties/bidirectional",
"type": "boolean",
"title": "Is the transfer bidirectional?",
"description": "When set to true, the bridge will consume and produce from both brokers",
"default": false,
"examples": [
false
]
},
"send_direction": {
"$id": "#/items/anyOf/0/properties/send_direction",
"type": "string",
"title": "Send direction",
"description": "Can be either 'to_remote' or 'to_local'",
"default": "",
"examples": [
"to_remote",
"to_local"
]
}
},
"additionalProperties": true
},
{
"$id": "#/items/anyOf/1",
"type": "object",
"title": "Bi-directional Kafka Topic Map with send direction",
"description": "This schema validates entries, that are bi-directional.",
"default": {},
"examples": [
{
"name": "HighIntegrity",
"topic": "^ia\\..+\\..+\\..+\\.(?!processValue).+$",
"bidirectional": true
}
],
"required": [
"name",
"topic",
"bidirectional"
],
"properties": {
"name": {
"$id": "#/items/anyOf/1/properties/name",
"type": "string",
"title": "Entry Name",
"description": "Name of the map entry, only used for logging & tracing.",
"default": "",
"examples": [
"HighIntegrity"
]
},
"topic": {
"$id": "#/items/anyOf/1/properties/topic",
"type": "string",
"title": "The topic to listen on",
"description": "The topic to listen on, this can be a regular expression.",
"default": "",
"examples": [
"^ia\\..+\\..+\\..+\\.(?!processValue).+$"
]
},
"bidirectional": {
"$id": "#/items/anyOf/1/properties/bidirectional",
"type": "boolean",
"title": "Is the transfer bidirectional?",
"description": "When set to true, the bridge will consume and produce from both brokers",
"default": false,
"examples": [
true
]
}
},
"additionalProperties": true
}
]
},
"examples": [
{
"name":"HighIntegrity",
"topic":"^ia\\..+\\..+\\..+\\.(?!processValue).+$",
"bidirectional":true
},
{
"name":"HighThroughput",
"topic":"^ia\\..+\\..+\\..+\\.(processValue).*$",
"bidirectional":false,
"send_direction":"to_remote"
}
]
}
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library, do not enable in production | string | true , false | false |
KAFKA_GROUP_ID_SUFFIX | Identifier appended to the kafka group ID, usually a serial number | string | Any | default |
KAFKA_SSL_KEY_PASSWORD_LOCAL | Password for the SSL key of the local broker | string | Any | "" |
KAFKA_SSL_KEY_PASSWORD_REMOTE | Password for the SSL key of the remote broker | string | Any | "" |
KAFKA_TOPIC_MAP | A JSON map of the Kafka topics that should be forwarded | JSON | See below | {} |
KAKFA_USE_SSL | Enables the use of SSL for the kafka connection | string | true , false | false |
LOCAL_KAFKA_BOOTSTRAP_SERVER | URL of the local kafka broker, port is required | string | Any valid URL | united-manufacturing-hub-kafka:9092 |
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers. | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-kafka-bridge |
REMOTE_KAFKA_BOOTSTRAP_SERVER | URL of the remote kafka broker | string | Any valid URL | "" |
SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default |
1.6 - Kafka Broker
The Kafka broker in the United Manufacturing Hub is RedPanda, a Kafka-compatible event streaming platform. It’s used to store and process messages, in order to stream real-time data between the microservices.
How it works
RedPanda is a distributed system that is made up of a cluster of brokers, designed for maximum performance and reliability. It does not depend on external systems like ZooKeeper, as it’s shipped as a single binary.
Read more about RedPanda in the official documentation.
Kubernetes resources
- StatefulSet: united-manufacturing-hub-kafka
- Service:
  - Internal ClusterIP (headless): united-manufacturing-hub-kafka
  - External NodePort: united-manufacturing-hub-kafka-external at port 9094 for the Kafka API listener, port 9644 for the Admin API listener, port 8083 for the HTTP Proxy listener, and port 8081 for the Schema Registry listener
- ConfigMap: united-manufacturing-hub-kafka
- Secret: united-manufacturing-hub-kafka-sts-lifecycle
- PersistentVolumeClaim: datadir-united-manufacturing-hub-kafka-0
Configuration
You shouldn’t need to configure the Kafka broker manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the redpanda section of the Helm chart values file.
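For example, a sketch of an override of the redpanda section might look like this. The keys come from the Redpanda Helm chart, and the values are illustrative.

redpanda:
  # Illustrative overrides; see the Redpanda Helm chart for the full list of options.
  storage:
    persistentVolume:
      size: 20Gi
  resources:
    memory:
      container:
        max: 2Gi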
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
HOST_IP | The IP address of the host machine. | string | Any | Random IP |
POD_IP | The IP address of the pod. | string | Any | Random IP |
SERVICE_NAME | The name of the service. | string | Any | united-manufacturing-hub-kafka |
1.7 - Kafka Console
Kafka-console uses Redpanda Console to help you manage and debug your Kafka workloads effortlessly.
With it, you can explore your Kafka topics, view messages, list the active consumers, and more.
How it works
You can access the Kafka console via its Service.
It’s automatically connected to the Kafka broker, so you can start using it right away. You can view the Kafka broker configuration in the Broker tab, and explore the topics in the Topics tab.
Kubernetes resources
- Deployment: united-manufacturing-hub-console
- Service:
  - External LoadBalancer: united-manufacturing-hub-console at port 8090
- ConfigMap: united-manufacturing-hub-console
- Secret: united-manufacturing-hub-console
Configuration
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
LOGIN_JWTSECRET | The secret used to authenticate the communication to the backend. | string | Any | Random string |
1.8 - Kafka to Postgresql
Kafka-to-postgresql is a microservice responsible for consuming kafka messages and inserting the payload into a Postgresql database. Take a look at the Datamodel to see how the data is structured.
This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists. This will happen automatically from version 0.9.12.
How it works
By default, kafka-to-postgresql sets up two Kafka consumers, one for the High Integrity topics and one for the High Throughput topics.
The graphic below shows the program flow of the microservice.
High integrity
The High integrity topics are forwarded to the database in a synchronous way. This means that the microservice will wait for the database to respond with a non-error message before committing the message to the Kafka broker. This way, the message is guaranteed to be inserted into the database, even though it might take a while.
Most of the topics are forwarded in this mode.
The picture below shows the program flow of the high integrity mode.
High throughput
The High throughput topics are forwarded to the database in an asynchronous way. This means that the microservice will not wait for the database to respond with a non-error message before committing the message to the Kafka broker. This way, the message is not guaranteed to be inserted into the database, but the microservice will try to insert it as soon as possible. This mode is used for the topics that are expected to have a high throughput.
The topics that are forwarded in this mode are processValue, processValueString and all the raw topics.
Kubernetes resources
- Deployment: united-manufacturing-hub-kafkatopostgresql
- Secret: united-manufacturing-hub-kafkatopostgresql-certificates
Configuration
You shouldn’t need to configure kafka-to-postgresql manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the kafkatopostgresql section of the Helm chart values file.
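As a rough sketch, an override of that section might look like the following. The keys shown are illustrative assumptions, so check the Helm chart values file for the keys that actually exist.

kafkatopostgresql:
  # Illustrative only; consult the chart's values file before changing anything.
  enabled: true
  replicas: 1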
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true , false | false |
DRY_RUN | If set to true, the microservice will not write to the database | bool | true , false | false |
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092 |
KAFKA_SSL_KEY_PASSWORD | Key password to decode the SSL private key | string | Any | "" |
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
MEMORY_REQUEST | Memory request for the message cache | string | Any | 50Mi |
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-kafkatopostgresql |
POSTGRES_DATABASE | The name of the PostgreSQL database | string | Any | factoryinsight |
POSTGRES_HOST | Hostname of the PostgreSQL database | string | Any | united-manufacturing-hub |
POSTGRES_PASSWORD | The password to use for PostgreSQL connections | string | Any | changeme |
POSTGRES_SSLMODE | If set to true, the PostgreSQL connection will use SSL | string | Any | require |
POSTGRES_USER | The username to use for PostgreSQL connections | string | Any | factoryinsight |
1.9 - MQTT Bridge
MQTT-bridge is a microservice that connects two MQTT brokers and forwards messages between them. It is used to connect the local broker of the edge computer with the remote broker on the server.
How it works
This microservice subscribes to topics on the local broker and publishes the messages to the remote broker, while also subscribing to topics on the remote broker and publishing the messages to the local broker.
Kubernetes resources
- StatefulSet: united-manufacturing-hub-mqttbridge
- Secret: united-manufacturing-hub-mqttbridge-secrets
- PersistentVolumeClaim: united-manufacturing-hub-mqttbridge-claim
Configuration
You can configure the URL of the remote MQTT broker that MQTT-bridge should connect to by setting the value of the remoteBrokerUrl parameter in the _000_commonConfig.mqttBridge section of the Helm chart values file.
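For example, a minimal sketch of that section could look like this (the broker address is a placeholder for your own server):

_000_commonConfig:
  mqttBridge:
    # URL of the remote MQTT broker that MQTT-bridge connects to
    remoteBrokerUrl: ssl://mqtt.example.com:8883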
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
BRIDGE_ONE_WAY | Whether to enable one-way communication, from local to remote | bool | true , false | true |
INSECURE_SKIP_VERIFY_LOCAL | Skip TLS certificate verification for the local broker | bool | true , false | true |
INSECURE_SKIP_VERIFY_REMOTE | Skip TLS certificate verification for the remote broker | bool | true , false | true |
LOCAL_BROKER_SSL_ENABLED | Whether to enable SSL for the local MQTT broker | bool | true , false | true |
LOCAL_BROKER_URL | URL for the local MQTT broker | string | Any | ssl://united-manufacturing-hub-mqtt:8883 |
LOCAL_CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS |
LOCAL_PUB_TOPIC | Local MQTT topic to publish to | string | Any | ia |
LOCAL_SUB_TOPIC | Local MQTT topic to subscribe to | string | Any | ia/factoryinsight |
MQTT_PASSWORD | Password for the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
REMOTE_BROKER_SSL_ENABLED | Whether to enable SSL for the remote MQTT broker | bool | true , false | true |
REMOTE_BROKER_URL | URL for the remote MQTT broker | string | Any | ssl://united-manufacturing-hub-mqtt.united-manufacturing-hub:8883 |
REMOTE_CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS |
REMOTE_PUB_TOPIC | Remote MQTT topic to publish to | string | Any | ia/factoryinsight |
REMOTE_SUB_TOPIC | Remote MQTT topic to subscribe to | string | Any | ia |
1.10 - MQTT Broker
The MQTT broker in the United Manufacturing Hub is HiveMQ and is customized to fit the needs of the stack. It’s a core component of the stack and is used to communicate between the different microservices.
How it works
The MQTT broker is responsible for receiving MQTT messages from the different microservices and forwarding them to the MQTT Kafka bridge.
Kubernetes resources
- StatefulSet: united-manufacturing-hub-hivemqce
- Service:
  - Internal ClusterIP:
    - HiveMQ local: united-manufacturing-hub-hivemq-local-service at port 1883 (MQTT) and 8883 (MQTT over TLS)
    - VerneMQ (for backwards compatibility): united-manufacturing-hub-vernemq at port 1883 (MQTT) and 8883 (MQTT over TLS)
    - VerneMQ local (for backwards compatibility): united-manufacturing-hub-vernemq-local-service at port 1883 (MQTT) and 8883 (MQTT over TLS)
  - External LoadBalancer: united-manufacturing-hub-mqtt at port 1883 (MQTT) and 8883 (MQTT over TLS)
- ConfigMap:
  - Configuration: united-manufacturing-hub-hivemqce-hive
  - Credentials: united-manufacturing-hub-hivemqce-extension
- Secret: united-manufacturing-hub-hivemqce-secret-keystore
- PersistentVolumeClaim:
  - Data: united-manufacturing-hub-hivemqce-claim-data
  - Extensions: united-manufacturing-hub-hivemqce-claim-extensions
Configuration
Most of the configuration is done through the XML files in the ConfigMaps. The default configuration should be sufficient for most use cases.
The HiveMQ installation of the United Manufacturing Hub comes with these extensions:
- RBAC file extension to manage the authentication and authorizations rules for the broker.
- Prometheus extension to expose metrics to Prometheus
- Heartbeat extension to allow for readiness checks
If you want to add more extensions, or to change the configuration, visit the HiveMQ documentation.
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
HIVEMQ_ALLOW_ALL_CLIENTS | Whether to allow all clients to connect to the broker | bool | true , false | true |
1.11 - MQTT Kafka Bridge
Mqtt-kafka-bridge is a microservice that acts as a bridge between MQTT brokers and Kafka brokers, transferring messages from one to the other and vice versa.
This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists. This will happen automatically from version 0.9.12.
Since version 0.9.10, it allows all raw messages, even if their content is not in a valid JSON format.
How it works
Mqtt-kafka-bridge consumes topics from a message broker, translates them to the proper format and publishes them to the other message broker.
Kubernetes resources
- Deployment: united-manufacturing-hub-mqttkafkabridge
- Secret:
  - Kafka: united-manufacturing-hub-mqttkafkabridge-kafka-secrets
  - MQTT: united-manufacturing-hub-mqttkafkabridge-mqtt-secrets
Configuration
You shouldn’t need to configure mqtt-kafka-bridge manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the mqttkafkabridge section of the Helm chart values file.
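As a rough sketch, an override of that section might look like the following. The key shown is an illustrative assumption, so check the Helm chart values file for the keys that actually exist.

mqttkafkabridge:
  # Illustrative only; consult the chart's values file before changing anything.
  enabled: true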
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true , false | false |
INSECURE_SKIP_VERIFY | Skip TLS certificate verification | bool | true , false | true |
KAFKA_BASE_TOPIC | The Kafka base topic | string | Any | ia |
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092 |
KAFKA_LISTEN_TOPIC | Kafka topic to subscribe to. Accept regex values | string | Any | ^ia.+ |
KAFKA_SENDER_THREADS | Number of threads used to send messages to Kafka | int | Any | 1 |
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
MESSAGE_LRU_SIZE | Size of the LRU cache used to store messages. This is used to prevent duplicate messages from being sent to Kafka. | int | Any | 100000 |
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-mqttkafkabridge |
MQTT_BROKER_URL | The MQTT broker URL | string | Any | united-manufacturing-hub-mqtt:1883 |
MQTT_CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS |
MQTT_PASSWORD | Password for the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
MQTT_SENDER_THREADS | Number of threads used to send messages to MQTT | int | Any | 1 |
MQTT_TOPIC | MQTT topic to subscribe to. Accept regex values | string | Any | ia/# |
POD_NAME | Name of the pod. Used for tracing | string | Any | united-manufacturing-hub-mqttkafkabridge-Random-ID |
RAW_MESSSAGE_LRU_SIZE | Size of the LRU cache used to store raw messages. This is used to prevent duplicate messages from being sent to Kafka. | int | Any | 100000 |
SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default |
1.12 - Node-RED
Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways. It provides a browser-based editor that makes it easy to wire together flows using the wide range of nodes in the Node-RED library.
How it works
Node-RED is a JavaScript-based tool that can be used to create flows that interact with the other microservices in the United Manufacturing Hub or external services.
See our guides for Node-RED to learn more about how to use it.
Kubernetes resources
- StatefulSet: united-manufacturing-hub-nodered
- Service:
  - External LoadBalancer: united-manufacturing-hub-nodered-service at port 1880
- ConfigMap:
  - Configuration: united-manufacturing-hub-nodered-config
  - Flows: united-manufacturing-hub-nodered-flows
- Secret: united-manufacturing-hub-nodered-secrets
- PersistentVolumeClaim: united-manufacturing-hub-nodered-claim
Configuration
You can enable the nodered microservice and decide if you want to use the default flows in the _000_commonConfig.dataprocessing.nodered section of the Helm chart values.
All the other values are set by default and you can find them in the Danger Zone section of the Helm chart values.
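For example, a minimal sketch of that section could look like this (the exact key names may differ, so check the Helm chart values file):

_000_commonConfig:
  dataprocessing:
    nodered:
      # Enable the Node-RED microservice and choose whether to load the default flows
      enabled: true
      defaultFlows: false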
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
NODE_RED_ENABLE_SAFE_MODE | Enable safe mode, useful in case of broken flows | boolean | true , false | false |
TZ | The timezone used by Node-RED | string | Any | Berlin/Europe |
1.13 - Sensorconnect
Sensorconnect automatically detects ifm gateways connected to the network and reads data from the connected IO-Link sensors.
How it works
Sensorconnect continuously scans the given IP range for gateways, making it effectively a plug-and-play solution. Once a gateway is found, it automatically downloads the IODD files for the connected sensors and starts reading the data at the configured interval. Then it processes the data and sends it to the MQTT or Kafka broker, to be consumed by other microservices.
If you want to learn more about how to use sensors in your assets, check out the retrofitting section of the UMH Learn website.
IODD files
The IODD files are used to describe the sensors connected to the gateway. They contain information about the data type, the unit of measurement, the minimum and maximum values, etc. The IODD files are downloaded automatically from IODDFinder once a sensor is found, and are stored in a Persistent Volume. If downloading from the internet is not possible, for example in a closed network, you can download the IODD files manually and store them in the folder specified by the IODD_FILE_PATH environment variable.
If no IODD file is found for a sensor, the data will not be processed, but sent to the broker as-is.
Kubernetes resources
- StatefulSet: united-manufacturing-hub-sensorconnect
- Secret:
  - Kafka: united-manufacturing-hub-sensorconnect-kafka-secrets
  - MQTT: united-manufacturing-hub-sensorconnect-mqtt-secrets
- PersistentVolumeClaim: united-manufacturing-hub-sensorconnect-claim
Configuration
You can configure the IP range to scan for gateways, and which message broker to use, by setting the values of the parameters in the _000_commonConfig.datasources.sensorconnect section of the Helm chart values file.
The default values of the other parameters are usually good for most use cases, but you can change them in the Danger Zone section of the Helm chart values file.
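For example, a minimal sketch of the sensorconnect section could look like this (the exact key names may differ, so check the Helm chart values file):

_000_commonConfig:
  datasources:
    sensorconnect:
      enabled: true
      # IP range to scan for ifm gateways, in CIDR notation
      iprange: 192.168.10.1/24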
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
ADDITIONAL_SLEEP_TIME_PER_ACTIVE_PORT_MS | Additional sleep time between pollings for each active port | float | Any | 0.0 |
ADDITIONAL_SLOWDOWN_MAP | JSON map of values, allows to slow down and speed up the polling time of specific sensors | JSON | See below | [] |
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false |
DEVICE_FINDER_TIMEOUT_SEC | HTTP timeout in seconds for finding new devices | int | Any | 1 |
DEVICE_FINDER_TIME_SEC | Time interval in seconds for finding new devices | int | Any | 20 |
IODD_FILE_PATH | Filesystem path where to store IODD files | string | Any valid Unix path | /ioddfiles |
IP_RANGE | The IP range to scan for new sensors | string | Any valid IP in CIDR notation | 192.168.10.1/24 |
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker. Port is required | string | Any | united-manufacturing-hub-kafka:9092 |
KAFKA_SSL_KEY_PASSWORD | The encrypted password of the SSL key. If empty, no password is used | string | Any | "" |
KAFKA_USE_SSL | Set to true to use SSL encryption for the connection to the Kafka broker | string | true , false | false |
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
LOWER_POLLING_TIME_MS | Time in milliseconds to define the lower bound of time between sensor polling | int | Any | 20 |
MAX_SENSOR_ERROR_COUNT | Amount of errors before a sensor is temporarily disabled | int | Any | 50 |
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-sensorconnect |
MQTT_BROKER_URL | URL of the MQTT broker. Port is required | string | Any | united-manufacturing-hub-mqtt:1883 |
MQTT_CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS |
MQTT_PASSWORD | Password for the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
POD_NAME | Name of the pod (used for tracing) | string | Any | united-manufacturing-hub-sensorconnect-0 |
POLLING_SPEED_STEP_DOWN_MS | Time in milliseconds subtracted from the polling interval after a successful polling | int | Any | 1 |
POLLING_SPEED_STEP_UP_MS | Time in milliseconds added to the polling interval after a failed polling | int | Any | 20 |
SENSOR_INITIAL_POLLING_TIME_MS | Amount of time in milliseconds before starting to request sensor data. Must be higher than LOWER_POLLING_TIME_MS | int | Any | 100 |
SUB_TWENTY_MS | Set to 1 to allow LOWER_POLLING_TIME_MS of under 20 ms. This is not recommended as it might lead to the gateway becoming unresponsive until a manual reboot | int | 0, 1 | 0 |
TEST | If enabled, the microservice will use a test IODD file from the filesystem to use with a mocked sensor. Only useful for development. | string | true, false | false |
TRANSMITTERID | Serial number of the cluster (used for tracing) | string | Any | default |
UPPER_POLLING_TIME_MS | Time in milliseconds to define the upper bound of time between sensor polling | int | Any | 1000 |
USE_KAFKA | If enabled, uses Kafka as a message broker | string | true, false | true |
USE_MQTT | If enabled, uses MQTT as a message broker | string | true, false | false |
Slowdown map
The ADDITIONAL_SLOWDOWN_MAP environment variable allows you to slow down and speed up the polling time of specific sensors. It is a JSON array of values, with the following structure:
[
{
"serialnumber": "000200610104",
"slowdown_ms": -10
},
{
"url": "http://192.168.0.13",
"slowdown_ms": 20
},
{
"productcode": "AL13500",
"slowdown_ms": 20.01
}
]
2 - Community
The microservices in this section are not part of the Core of the United Manufacturing Hub, either because they are still in development, deprecated, or only supported by the community. They can be used to extend the functionality of the platform.
It is not recommended to use these microservices in production as they might be unstable or not supported anymore.
2.1 - Barcodereader
This microservice is still in development and is not considered stable for production use.
Barcodereader is a microservice that reads barcodes and sends the data to the Kafka broker.
How it works
Connect a barcode scanner to the system and the microservice will read the barcodes and send the data to the Kafka broker.
Kubernetes resources
- Deployment: united-manufacturing-hub-barcodereader
- Secret: united-manufacturing-hub-barcodereader-secrets
Configuration
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
ASSET_ID | The asset ID, which is used for the topic structure | string | Any | barcodereader |
CUSTOMER_ID | The customer ID, which is used for the topic structure | string | Any | raw |
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true , false | false |
INPUT_DEVICE_NAME | The name of the USB device to use | string | Any | Datalogic ADC, Inc. Handheld Barcode Scanner |
INPUT_DEVICE_PATH | The path of the USB device to use. It is recommended to use a wildcard (for example, /dev/input/event* ) or leave empty | string | Valid Unix device path | "" |
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092 |
LOCATION | The location, which is used for the topic structure | string | Any | barcodereader |
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers. | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-barcodereader |
SCAN_ONLY | Prevent message broadcasting if enabled | bool | true , false | false |
SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default |
2.2 - Factoryinput
This microservice is still in development and is not considered stable for production use
Factoryinput provides REST endpoints for MQTT messages via HTTP requests.
This microservice is typically accessed via grafana-proxy
How it works
The factoryinput microservice provides REST endpoints for MQTT messages via HTTP requests.
The main endpoint is /api/v1/{customer}/{location}/{asset}/{value}, with a POST request method. The customer, location, asset and value are all strings, and are used to build the MQTT topic. The body of the HTTP request is used as the MQTT payload.
Kubernetes resources
- StatefulSet: united-manufacturing-hub-factoryinput
- Service:
  - Internal ClusterIP: united-manufacturing-hub-factoryinput-service at port 80
- Secret: factoryinput-secret
Configuration
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
BROKER_URL | URL to the broker | string | all | ssl://united-manufacturing-hub-mqtt:8883 |
CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS |
CUSTOMER_NAME_{NUMBER} | Specifies a user for the REST API. Multiple users can be set | string | Any | "" |
CUSTOMER_PASSWORD_{NUMBER} | Specifies the password of the user for the REST API | string | Any | "" |
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true , false | false |
FACTORYINPUT_PASSWORD | Specifies the password for the admin user for the REST API | string | Any | Random UUID |
FACTORYINPUT_USER | Specifies the admin user for the REST API | string | Any | factoryinsight |
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
MQTT_QUEUE_HANDLER | Number of queue workers to spawn | int | 0-65535 | 10 |
MQTT_PASSWORD | Password for the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
POD_NAME | Name of the pod. Used for tracing | string | Any | united-manufacturing-hub-factoryinput-0 |
SERIAL_NUMBER | Serial number of the cluster. Used for tracing | string | Any | default |
VERSION | The version of the API used. Each version also enables all the previous ones | int | Any | 1 |
2.3 - Grafana Proxy
This microservice is still in development and is not considered stable for production use
How it works
The grafana-proxy microservice serves an HTTP REST endpoint located at /api/v1/{service}/{data}. The service parameter specifies the backend service to which the request should be proxied, like factoryinput or factoryinsight. The data parameter specifies the API endpoint to forward to the backend service. The body of the HTTP request is used as the payload for the proxied request.
Kubernetes resources
- Deployment: united-manufacturing-hub-grafanaproxy
- Service:
  - External LoadBalancer: united-manufacturing-hub-grafanaproxy-service at port 2096
Configuration
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true , false | false |
FACTORYINPUT_BASE_URL | URL of factoryinput | string | Any | http://united-manufacturing-hub-factoryinput-service |
FACTORYINPUT_KEY | Specifies the password for the admin user for factoryinput | string | Any | Random UUID |
FACTORYINPUT_USER | Specifies the admin user for factoryinput | string | Any | factoryinput |
FACTORYINSIGHT_BASE_URL | URL of factoryinsight | string | Any | http://united-manufacturing-hub-factoryinsight-service |
MICROSERVICE_NAME | Name of the microservice. Used for tracing | string | Any | united-manufacturing-hub-factoryinput |
SERIAL_NUMBER | Serial number of the cluster. Used for tracing | string | Any | default |
VERSION | The version of the API used. Each version also enables all the previous ones | int | Any | 1 |
2.4 - Kafka State Detector
How it works
Kubernetes resources
- Deployment: united-manufacturing-hub-kafkastatedetector
- Secret: united-manufacturing-hub-kafkastatedetector-secrets
Configuration
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
ACTIVITY_ENABLED | Controls whether to check the activity of the Kafka broker | string | true , false | true |
ANOMALY_ENABLED | Controls whether to check for anomalies in the Kafka broker | string | true , false | true |
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true , false | false |
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092 |
KAFKA_SSL_KEY_PASSWORD | Key password to decode the SSL private key | string | Any | "" |
KAKFA_USE_SSL | Enables the use of SSL for the kafka connection | string | true , false | false |
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-kafkastatedetector |
SERIAL_NUMBER | Serial number of the cluster. Used for tracing | string | Any | default |
2.5 - MQTT Simulator
This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.
The IoTSensors MQTT Simulator is a microservice that simulates sensors sending data to the MQTT broker. You can read the full documentation on the GitHub repository.
How it works
The microservice publishes messages on the topic ia/raw/development/ioTSensors/, creating a subtopic for each simulation. The subtopics are the names of the simulations, which are Temperature, Humidity, and Pressure.
The values are calculated using a normal distribution with a mean and standard deviation that can be configured.
Kubernetes resources
- Deployment: united-manufacturing-hub-iotsensorsmqtt
- ConfigMap: united-manufacturing-hub-iotsensors-mqtt
Configuration
You can change the configuration of the microservice by updating the config.json file in the ConfigMap.
2.6 - MQTT to Postgresql
If you landed here from Google, you might want to check out either the architecture of the United Manufacturing Hub or our knowledge website for more information on the general topics of IT, OT and IIoT.
This microservice is deprecated and should not be used anymore in production. Please use kafka-to-postgresql instead.
How it works
The mqtt-to-postgresql microservice subscribes to the MQTT broker and saves the values of the messages on the topic ia/# in the database.
2.7 - OPCUA Simulator
This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.
How it works
The OPCUA Simulator is a microservice that simulates OPCUA devices. You can read the full documentation on the GitHub repository.
You can then connect to the simulated OPCUA server via Node-RED and read the values of the simulated devices. Learn more about how to connect to the OPCUA simulator to Node-RED in our guide.
Kubernetes resources
- Deployment: united-manufacturing-hub-opcuasimulator-deployment
- Service:
  - External LoadBalancer: united-manufacturing-hub-opcuasimulator-service at port 46010
- ConfigMap: united-manufacturing-hub-opcuasimulator-config
Configuration
You can change the configuration of the microservice by updating the config.json file in the ConfigMap.
2.8 - PackML Simulator
This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but it is enabled by default.
PackML MQTT Simulator is a virtual line that interfaces using PackML implemented over MQTT. It implements the PackML state model and communicates over MQTT topics, as defined by environment variables. The simulator can run with either a basic MQTT topic structure or SparkPlugB.
How it works
You can read the full documentation on the GitHub repository.
Kubernetes resources
- Deployment: united-manufacturing-hub-packmlmqttsimulator
Configuration
You shouldn’t need to configure PackML Simulator manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the packmlmqttsimulator section of the Helm chart values file.
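As a rough sketch, an override of that section might look like the following. The key shown is an illustrative assumption, so check the Helm chart values file for the keys that actually exist.

packmlmqttsimulator:
  # Illustrative only; consult the chart's values file before changing anything.
  replicas: 1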
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
AREA | ISA-95 area name of the line | string | Any | DefaultArea |
LINE | ISA-95 line name of the line | string | Any | DefaultProductionLine |
MQTT_PASSWORD | Password for the MQTT broker. Leave empty if the server does not manage permissions | string | Any | INSECURE_INSECURE_INSECURE |
MQTT_URL | Server URL of the MQTT server | string | Any | mqtt://united-manufacturing-hub-mqtt:1883 |
MQTT_USERNAME | Username for the MQTT broker. Leave empty if the server does not manage permissions | string | Any | PACKMLSIMULATOR |
SITE | ISA-95 site name of the line | string | Any | testLocation |
2.9 - Tulip Connector
This microservice is still in development and is not considered stable for production use.
The tulip-connector microservice enables communication with the United Manufacturing Hub by exposing internal APIs, like factoryinsight, to the internet. With this REST endpoint, users can access data stored in the UMH and seamlessly integrate Tulip with a Unified Namespace and on-premise Historian. Furthermore, the tulip-connector can be customized to meet specific customer requirements, including integration with an on-premise MES system.
How it works
The tulip-connector acts as a proxy between the internet and the UMH. It exposes an endpoint to forward requests to the UMH and returns the response.
API documentation
Kubernetes resources
- Deployment: united-manufacturing-hub-tulip-connector-deployment
- Service:
  - Internal ClusterIP: united-manufacturing-hub-tulip-connector-service at port 80
- Ingress: united-manufacturing-hub-tulip-connector-ingress
Configuration
You can enable the tulip-connector and set the domain for the ingress by editing the values in the _000_commonConfig.tulipconnector section of the Helm chart values file.
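For example, a minimal sketch of that section could look like this (the exact key names may differ, so check the Helm chart values file, and replace the domain with your own):

_000_commonConfig:
  tulipconnector:
    enabled: true
    # Domain used by the Ingress
    domain: tulip-connector.example.com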
Environment variables
Variable name | Description | Type | Allowed values | Default |
---|---|---|---|---|
FACTORYINSIGHT_PASSWORD | Specifies the password for the admin user for the REST API | string | Any | Random UUID |
FACTORYINSIGHT_URL | Specifies the URL of the factoryinsight microservice. | string | Any | http://united-manufacturing-hub-factoryinsight-service |
FACTORYINSIGHT_USER | Specifies the admin user for the REST API | string | Any | factoryinsight |
MODE | Specifies the mode that the service will run in. Change only during development | string | dev, prod | prod |
3 - Grafana Plugins
3.1 - Umh Datasource V2
The plugin, umh-datasource-v2, is a Grafana data source plugin that allows you to fetch resources from a database and build queries for your dashboard.
How it works
When creating a new panel, select umh-datasource-v2 from the Data source drop-down menu. It will then fetch the resources from the database. The loading time may depend on your internet speed.
Select the resources in the cascade menu to build your query. DefaultArea and DefaultProductionLine are placeholders for the future implementation of the new data model.
Only the available values for the specified work cell will be fetched from the database. You can then select which data value you want to query.
Next, you can specify how to transform the data, depending on the value you selected. All the custom tags have aggregation options available. For example, if you query a processValue:
- Time bucket: lets you group data in a time bucket
- Aggregates: common statistical aggregations (maximum, minimum, sum or count)
- Handling missing values: lets you choose how missing data should be handled
Configuration
In Grafana, navigate to the Data sources configuration panel.
Select umh-v2-datasource to configure it.
Configurations:
- Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
- Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
- API Key: authenticates the API calls to factoryinsight. It can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.
3.2 - Umh Datasource
We are no longer maintaining this microservice. Use our new microservice, datasource-v2, instead for data extraction from factoryinsight.
The umh datasource is a Grafana 8.X compatible plugin, that allows you to fetch resources from a database and build queries for your dashboard.
How it works
When creating a new panel, select umh-datasource from the Data source drop-down menu. It will then fetch the resources from the database. The loading time may depend on your internet speed.
Select your query parameters Location, Asset and Value to build your query.
Configuration
In Grafana, navigate to the Data sources configuration panel.
Select umh-datasource to configure it.
Configurations:
- Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
- Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
- API Key: authenticates the API calls to factoryinsight. It can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.
3.3 - Factoryinput Panel
This plugin is still in development and is not considered stable for production use
Requirements
- A United Manufacturing Hub stack
- External IP or URL to the grafana-proxy
- In most cases it is the same IP address as your Grafana dashboard.
Getting started
For development, the steps to build the plugin from source are described here.
- Go to united-manufacturing-hub/grafana-plugins/umh-factoryinput-panel
- Install dependencies.
  yarn install
- Build plugin in development mode or run in watch mode.
  yarn dev
- Build plugin in production mode (not recommended due to Issue 32336).
  yarn build
- Move the resulting dist folder into your Grafana plugins directory.
  - Windows: C:\Program Files\GrafanaLabs\grafana\data\plugins
  - Linux: /var/lib/grafana/plugins
- Rename the folder to umh-factoryinput-panel.
- Enable development mode to load unsigned plugins.
- Restart your Grafana service.
Technical Information
Below you will find a schematic of this flow through our stack.