The United Manufacturing Hub is a comprehensive Helm Chart for Kubernetes,
integrating a variety of open source software, including notable third-party
applications such as Node-RED and Grafana. Designed for versatility, UMH is
deployable across a wide spectrum of environments, from edge devices to virtual
machines, and even managed Kubernetes services, catering to diverse industrial
needs.
The following diagram depicts the interaction dynamics between UMH’s components
and user types, offering a visual guide to its architecture and operational
mechanisms.
Management Console
The Management Console
of the United Manufacturing Hub is a robust web application designed to configure,
manage, and monitor the various aspects of Data and Device & Container
Infrastructures within UMH. Acting as the central command center, it provides a
comprehensive overview and control over the system’s functionalities, ensuring
efficient operation and maintenance. The console simplifies complex processes,
making it accessible for users to oversee the vast array of services and operations
integral to UMH.
Device & Container Infrastructure
The Device & Container Infrastructure
lays the foundation of the United Manufacturing Hub’s architecture, streamlining
the deployment and setup of essential software and operating systems across devices.
This infrastructure is pivotal in automating the installation process, ensuring
that the essential software components and operating systems are efficiently and
reliably established. It provides the groundwork upon which the Data Infrastructure
is built, embodying a robust and scalable base for the entire architecture.
Data Infrastructure
The Data Infrastructure is the heart of
the United Manufacturing Hub, orchestrating the interconnection of data sources,
storage, monitoring, and analysis solutions. It comprises three key components:
Data Connectivity: Facilitates the integration of diverse data sources into
UMH, enabling uninterrupted data exchange.
Unified Namespace (UNS): Centralizes and standardizes data within UMH into
a cohesive model, by linking each layer of the ISA-95 automation pyramid to the
UNS and assimilating non-traditional data sources.
Historian: Stores data in TimescaleDB, a PostgreSQL-based time-series
database, allowing real-time and historical data analysis through Grafana or
other tools.
The UMH Data Infrastructure leverages Industrial IoT to expand the ISA-95 Automation
Pyramid, enabling high-speed data processing using systems like Kafka. It enhances
system availability through Kubernetes and simplifies maintenance with Docker and
Prometheus. Additionally, it facilitates the use of AI, predictive maintenance,
and digital twin technologies.
Expandability
The United Manufacturing Hub is architecturally designed for high expandability,
enabling integration of custom microservices or Docker containers. This adaptability
allows users to establish connections with third-party systems or to implement
specialized data analysis tools. The platform also accommodates any third-party
application available as a Helm Chart, Kubernetes resource, or Docker Compose,
offering vast potential for customization to suit evolving industrial demands.
1 - Data Infrastructure
An overview of UMH’s Data Infrastructure, integrating and managing diverse data
sources.
The United Manufacturing Hub’s Data Infrastructure is where all data converges.
It extends the ISA-95 Automation Pyramid, the usual model for data flow in factory
settings. This infrastructure links each level of the traditional pyramid to the
Unified Namespace (UNS), incorporating extra data sources that the typical automation
pyramid doesn’t include. The data is then organized, stored, and analyzed to offer
useful information for frontline workers. Afterwards, it can be sent to a data
lake or analytics platform, where business analysts can access it for deeper insights.
It comprises three primary elements:
Data Connectivity:
This component includes an array of tools and services designed
to connect various systems and sensors on the shop floor, facilitating the flow
of data into the Unified Namespace.
Unified Namespace:
Acts as the central hub for all events and messages on the
shop floor, ensuring data consistency and accessibility.
Historian: Responsible
for storing events in a time-series database, it also provides tools for data
visualization, enabling both real-time and historical analytics.
Together, these elements provide a comprehensive framework for collecting,
storing, and analyzing data, enhancing the operational efficiency and
decision-making processes on the shop floor.
1.1 - Data Connectivity
Learn about the tools and services in UMH’s Data Connectivity for integrating
shop floor systems.
The Data Connectivity module in the United Manufacturing Hub is designed to enable
seamless integration of various data sources from the manufacturing environment
into the Unified Namespace. Key components include:
Node-RED:
A versatile programming tool that links hardware devices, APIs, and online services.
barcodereader:
Connects to USB barcode readers, pushing data to the message broker.
benthos-umh: A specialized version of benthos featuring an OPC UA plugin for
efficient data extraction.
sensorconnect:
Integrates with IO-Link Masters and their sensors, relaying data to the message broker.
These tools collectively facilitate the extraction and contextualization of data
from diverse sources, adhering to the ISA-95 automation pyramid model, and
enhancing the Management Console’s capability to monitor and manage data flow
within the UMH ecosystem.
1.1.1 - Barcodereader
This microservice is still in development and is not considered stable for production use.
Barcodereader is a microservice that reads barcodes and sends the data to the Kafka broker.
How it works
Connect a barcode scanner to the system and the microservice will read the barcodes and send the data to the Kafka broker.
What’s next
Read the Barcodereader reference
documentation to learn more about the technical details of the Barcodereader
microservice.
1.1.2 - Node-RED
Node-RED is a programming tool for wiring together
hardware devices, APIs and online services in new and interesting ways. It
provides a browser-based editor that makes it easy to wire together flows using
the wide range of nodes in the Node-RED library.
How it works
Node-RED is a JavaScript-based tool that can be used to create flows that
interact with the other microservices in the United Manufacturing Hub or
external services.
Read the Node-RED reference
documentation to learn more about the technical details of the Node-RED
microservice.
1.1.3 - Sensorconnect
Sensorconnect automatically detects ifm gateways
connected to the network and reads data from the connected IO-Link
sensors.
How it works
Sensorconnect continuously scans the given IP range for gateways, making it
effectively a plug-and-play solution. Once a gateway is found, it automatically
downloads the IODD files for the connected sensors and starts reading the data at
the configured interval. Then it processes the data and sends it to the MQTT or
Kafka broker, to be consumed by other microservices.
If you want to learn more about how to use sensors in your assets, check out the
retrofitting section of the UMH Learn
website.
IODD files
The IODD files are used to describe the sensors connected to the gateway. They
contain information about the data type, the unit of measurement, the minimum and
maximum values, etc. The IODD files are downloaded automatically from
IODDFinder once a sensor is found, and are
stored in a Persistent Volume. If downloading from the internet is not possible,
for example in a closed network, you can download the IODD files manually and
store them in the folder specified by the IODD_FILE_PATH environment variable.
If no IODD file is found for a sensor, the data will not be processed, but sent
to the broker as-is.
What’s next
Read the Sensorconnect reference
documentation to learn more about the technical details of the Sensorconnect
microservice.
1.2 - Unified Namespace
Discover the Unified Namespace’s role as a central hub for shop floor data in
UMH.
The Unified Namespace (UNS) within the United Manufacturing Hub is a vital module
facilitating the streamlined flow and management of data. It comprises various
microservices:
data-bridge:
Bridges data between MQTT and Kafka and between multiple Kafka instances, ensuring
efficient data transmission.
HiveMQ:
An MQTT broker crucial for receiving data from IoT devices on the shop floor.
Redpanda (Kafka):
Manages large-scale data processing and orchestrates communication between microservices.
Redpanda Console:
Offers a graphical interface for monitoring Kafka topics and messages.
The UNS serves as a pivotal point in the UMH architecture, ensuring data from shop
floor systems and sensors (gathered via the Data Connectivity module) is effectively
processed and relayed to the Historian and external Data Warehouses/Data Lakes
for storage and analysis.
1.2.1 - Data Bridge
Data-bridge is a microservice specifically tailored to adhere to the
UNS
data model. It consumes topics from a message broker, translates them to
the proper format and publishes them to the other message broker.
How it works
Data-bridge connects to the source broker, which can be either Kafka or MQTT,
and subscribes to the topics specified in the configuration. It then processes
the messages and publishes them to the destination broker, which can be either
Kafka or MQTT.
In the case where the destination broker is Kafka, messages from multiple topics
can be merged into a single topic, making use of the message key to identify
the source topic.
For example, subscribing to a topic using a wildcard, such as
umh.v1.acme.anytown..*, and a merge point of 4, will result in
messages from the topics umh.v1.acme.anytown.foo.bar,
umh.v1.acme.anytown.foo.baz, umh.v1.acme.anytown and umh.v1.acme.anytown.frob
being merged into a single topic, umh.v1.acme.anytown, with the message key
being the missing part of the topic name, in this case foo.bar, foo.baz, etc.
Here is a diagram showing the flow of messages:
The value of the message is not changed, only the topic and key are modified.
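The merge-point behaviour can be illustrated with a short, self-contained Python sketch. This is only a model of the logic described above, not the data-bridge implementation itself:

```python
def merge_topic(topic: str, merge_point: int):
    """Split a topic at the merge point: the first `merge_point` segments
    become the destination topic, the remainder becomes the message key."""
    parts = topic.split(".")
    destination = ".".join(parts[:merge_point])
    key = ".".join(parts[merge_point:]) or None  # None if nothing is left over
    return destination, key

for topic in ["umh.v1.acme.anytown.foo.bar",
              "umh.v1.acme.anytown.foo.baz",
              "umh.v1.acme.anytown",
              "umh.v1.acme.anytown.frob"]:
    print(merge_topic(topic, 4))
# ('umh.v1.acme.anytown', 'foo.bar')
# ('umh.v1.acme.anytown', 'foo.baz')
# ('umh.v1.acme.anytown', None)
# ('umh.v1.acme.anytown', 'frob')
```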
Another important feature is that it is possible to configure multiple data
bridges, each with its own source and destination brokers, and each with its
own set of topics to subscribe to and merge point.
The brokers can be local or remote and, in the case of MQTT, they can be secured
using TLS.
What’s next
Read the Data Bridge reference
documentation to learn more about the technical details of the data-bridge microservice.
1.2.2 - Kafka Broker
The Kafka broker in the United Manufacturing Hub is Redpanda,
a Kafka-compatible event streaming platform. It’s used to store and process
messages, in order to stream real-time data between the microservices.
How it works
Redpanda is a distributed system that is made up of a cluster of brokers,
designed for maximum performance and reliability. It does not depend on external
systems like ZooKeeper, as it’s shipped as a single binary.
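Since Redpanda is Kafka-compatible, any Kafka client can talk to it. Below is a minimal Python sketch using the kafka-python library; the bootstrap address united-manufacturing-hub-kafka:9092 and the topic are assumptions and may differ in your deployment:

```python
from kafka import KafkaProducer, KafkaConsumer

BOOTSTRAP = "united-manufacturing-hub-kafka:9092"  # assumed in-cluster address

# Produce a message to an example topic from this section.
producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
producer.send(
    "umh.v1.acme.anytown",
    key=b"foo.bar",
    value=b'{"timestamp_ms": 1700000000000, "temperature": 21.5}',
)
producer.flush()

# Consume it back, starting from the earliest offset.
consumer = KafkaConsumer(
    "umh.v1.acme.anytown",
    bootstrap_servers=BOOTSTRAP,
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.key, message.value)
```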
Read the Kafka Broker reference documentation
to learn more about the technical details of the Kafka broker microservice.
1.2.3 - Kafka Console
Kafka-console uses Redpanda Console
to help you manage and debug your Kafka workloads effortlessly.
With it, you can explore your Kafka topics, view messages, list the active
consumers, and more.
How it works
You can access the Kafka console via its Service.
It’s automatically connected to the Kafka broker, so you can start using it
right away.
You can view the Kafka broker configuration in the Broker tab, and explore the
topics in the Topics tab.
What’s next
Read the Kafka Console reference documentation
to learn more about the technical details of the Kafka Console microservice.
1.2.4 - MQTT Broker
The MQTT broker in the United Manufacturing Hub is HiveMQ
and is customized to fit the needs of the stack. It’s a core component of
the stack and is used to communicate between the different microservices.
How it works
The MQTT broker is responsible for receiving MQTT messages from the
different microservices and forwarding them to the
MQTT Kafka bridge.
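As a sketch of how a shop-floor device or script could publish a message to the broker, here is a minimal Python example using paho-mqtt; the broker host united-manufacturing-hub-mqtt, port 1883, and the topic are placeholders that may differ in your setup:

```python
import json
import paho.mqtt.client as mqtt

# Assumes paho-mqtt >= 2.0; with 1.x, create the client as mqtt.Client().
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("united-manufacturing-hub-mqtt", 1883)
client.loop_start()

payload = json.dumps({"timestamp_ms": 1700000000000, "temperature": 21.5})
client.publish("ia/raw/development/my-device/temperature", payload)  # illustrative topic

client.loop_stop()
client.disconnect()
```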
What’s next
Read the MQTT Broker reference documentation
to learn more about the technical details of the MQTT Broker microservice.
1.3 - Historian
Insight into the Historian’s role in storing and visualizing data within the
UMH ecosystem.
The Historian in the United Manufacturing Hub serves as a comprehensive data
management and visualization system. It includes:
kafka-to-postgresql-v2:
Archives Kafka messages adhering to the Data Model V2 schema into the database.
TimescaleDB:
An open-source SQL database specialized in time-series data storage.
Grafana:
A software tool for data visualization and analytics.
factoryinsight:
An analytics tool designed for data analysis, including calculating operational efficiency metrics like OEE.
Redis:
Utilized as an in-memory data structure store for caching purposes.
This structure ensures that data from the Unified Namespace is systematically
stored, processed, and made visually accessible, providing OT professionals with
real-time insights and analytics on shop floor operations.
1.3.1 - Cache
The cache in the United Manufacturing Hub is Redis, a
key-value store that is used as a cache for the other microservices.
How it works
Recently used data is stored in the cache to reduce the load on the database.
All the microservices that need to access the database first check whether the
data is available in the cache. If it is, it is used; otherwise, the
microservice queries the database and stores the result in the cache.
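This is the classic cache-aside pattern. A minimal Python sketch of it with redis-py is shown below; the host name and password are placeholders, and the database call is stubbed out:

```python
import json
import redis

cache = redis.Redis(host="united-manufacturing-hub-redis", port=6379,
                    password="changeme")  # placeholder host and credentials

def query_database(asset_id: int) -> dict:
    # Stand-in for the actual (expensive) database query.
    return {"id": asset_id, "name": f"asset-{asset_id}"}

def get_asset(asset_id: int) -> dict:
    key = f"asset:{asset_id}"
    cached = cache.get(key)                    # 1. check the cache first
    if cached is not None:
        return json.loads(cached)              # 2. cache hit
    result = query_database(asset_id)          # 3. cache miss: query the database
    cache.set(key, json.dumps(result), ex=60)  # 4. store the result with a TTL
    return result
```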
By default, Redis is configured to run in standalone mode, which means that it
will only have one master node.
What’s next
Read the Cache reference documentation
to learn more about the technical details of the cache microservice.
1.3.2 - Database
The database microservice is the central component of the United Manufacturing
Hub and is based on TimescaleDB, an open-source relational database built for
handling time-series data. TimescaleDB is designed to provide scalable and
efficient storage, processing, and analysis of time-series data.
You can find more information on the datamodel of the database in the
Data Model section, and read
about the choice to use TimescaleDB in the
blog article.
How it works
When deployed, the database microservice will create two databases, with the
related usernames and passwords:
grafana: This database is used by Grafana to store the dashboards and
other data.
factoryinsight: This database is the main database of the United Manufacturing
Hub. It contains all the data that is collected by the microservices.
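For illustration, connecting to the factoryinsight database from Python could look like the following sketch; the host name and credentials are placeholders, since the real ones are generated for your deployment:

```python
import psycopg2

conn = psycopg2.connect(
    host="united-manufacturing-hub",  # assumed TimescaleDB service name
    port=5432,
    dbname="factoryinsight",
    user="factoryinsight",
    password="changeme",              # placeholder; use the generated password
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()
```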
Read the Database reference documentation
to learn more about the technical details of the database microservice.
1.3.3 - Factoryinsight
Factoryinsight is a microservice that provides a set of REST APIs to access the
data from the database. It is particularly useful to calculate the Key
Performance Indicators (KPIs) of the factories.
How it works
Factoryinsight exposes REST APIs to access the data from the database or calculate
the KPIs. By default, it’s only accessible from the internal network of the
cluster, but it can be configured to be
accessible from the external network.
The APIs require authentication, which can be either Basic Auth or a Bearer
token. Both of these can be found in the Secret factoryinsight-secret.
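A hedged Python sketch of calling the API with Basic Auth is shown below; the base URL matches the default used by the Grafana datasource plugins, while the endpoint path and credentials are purely illustrative:

```python
import requests

BASE_URL = "http://united-manufacturing-hub-factoryinsight-service"
AUTH = ("factoryinsight", "changeme")  # placeholder; see factoryinsight-secret

# Hypothetical endpoint path, for illustration only; see the reference
# documentation for the actual API routes.
response = requests.get(f"{BASE_URL}/api/v2/factoryinsight/locations", auth=AUTH)
response.raise_for_status()
print(response.json())
```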
What’s next
Read the Factoryinsight reference documentation
to learn more about the technical details of the Factoryinsight microservice.
1.3.4 - Grafana
The grafana microservice is a web application that provides visualization and
analytics capabilities. Grafana allows you to query, visualize, alert on and
understand your metrics no matter where they are stored.
It has a rich ecosystem of plugins that allow you to extend its functionality
beyond the core features.
How it works
Grafana is a web application that can be accessed through a web browser. It
lets you create dashboards that can be used to visualize data from the database.
Thanks to some custom datasource plugins,
Grafana can use the various APIs of the United Manufacturing Hub to query the
database and display useful information.
What’s next
Read the Grafana reference documentation
to learn more about the technical details of the grafana microservice.
1.3.5 - Kafka to Postgresql V2
The Kafka to PostgreSQL v2 microservice plays a crucial role in consuming and
translating Kafka messages for storage in a PostgreSQL database. It aligns with
the specifications outlined in the Data Model v2.
How it works
Utilizing Data Model v2, Kafka to PostgreSQL v2 is specifically configured to
process messages from topics beginning with umh.v1.. Each new topic undergoes
validation against Data Model v2 before message consumption begins. This ensures
adherence to the defined data structure and standards.
Message payloads are scrutinized for structural validity prior to database insertion.
Messages with invalid payloads are systematically rejected to maintain data integrity.
The microservice then evaluates the payload to determine the appropriate table
for insertion within the PostgreSQL database. The decision is based on the data
type of the payload field, adhering to the following rules:
Numeric data types are directed to the tag table.
String data types are directed to the tag_string table.
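A minimal sketch of this routing rule (the table names come from the text above; everything else is illustrative, not the actual implementation):

```python
def target_table(value) -> str:
    """Pick the destination table based on the payload field type."""
    if isinstance(value, (int, float)):
        return "tag"          # numeric values go to the tag table
    if isinstance(value, str):
        return "tag_string"   # string values go to the tag_string table
    raise ValueError(f"unsupported payload type: {type(value).__name__}")

print(target_table(21.5))   # -> tag
print(target_table("OK"))   # -> tag_string
```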
What’s next
Read the Kafka to Postgresql v2
reference documentation to learn more about the technical details of the
Kafka to Postgresql v2 microservice.
1.3.6 - Umh Datasource V2
The plugin, umh-datasource-v2, is a Grafana data source plugin that allows you to fetch
resources from a database and build queries for your dashboard.
How it works
When creating a new panel, select umh-datasource-v2 from the Data source drop-down menu. It will then fetch the resources
from the database. The loading time may depend on your internet speed.
Select the resources in the cascade menu to build your query. DefaultArea and DefaultProductionLine are placeholders
for the future implementation of the new data model.
Only the available values for the specified work cell will be fetched from the database. You can then select which data value you want to query.
Next, you can specify how to transform the data, depending on the value you selected.
For example, all the custom tags have aggregation options available; if you query a processValue, you can set:
Time bucket: lets you group data in a time bucket
Aggregates: common statistical aggregations (maximum, minimum, sum or count)
Handling missing values: lets you choose how missing data should be handled
Configuration
In Grafana, navigate to the Data sources configuration panel.
Select umh-v2-datasource to configure it.
Configurations:
Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
API Key: authenticates the API calls to factoryinsight.
Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.
2 - Device & Container Infrastructure
Understand the automated deployment and setup process in UMH’s Device &
Container Infrastructure.
The Device & Container Infrastructure in the United Manufacturing Hub automates
the deployment and setup of the data infrastructure in various environments. It
is tailored for Edge deployments, particularly in Demilitarized Zones, to minimize
latency on-premise, and also extends into the Cloud to harness its functionalities.
It consists of several interconnected components:
Provisioning Server: Manages the initial bootstrapping of devices,
including iPXE configuration and ignition file distribution.
Flatcar Image Server: A central repository hosting various versions of
Flatcar Container Linux images, ensuring easy access and version control.
Customized iPXE: A specialized bootloader configured to streamline the
initial boot process by fetching UMH-specific settings and configurations.
First and Second Stage Flatcar OS: A two-stage operating system setup where
the first stage is a temporary OS used for installing the second stage, which
is the final operating system equipped with specific configurations and tools.
Installation Script: An automated script hosted at management.umh.app,
responsible for setting up and configuring the Kubernetes environment.
Kubernetes (k3s): A lightweight Kubernetes setup that forms the backbone
of the container orchestration system.
This infrastructure ensures a streamlined, automated installation process, laying
a robust foundation for the United Manufacturing Hub’s operation.
3 - Management Console
Delve into the functionalities and components of the UMH’s Management Console,
ensuring efficient system management.
The Management Console is pivotal in configuring, managing, and monitoring the
United Manufacturing Hub. It comprises a web application,
a backend API and the management companion agent, all designed to ensure secure and
efficient operation.
Web Application
The client-side Web Application, available at management.umh.app,
enables users to register, add, and manage instances, and monitor the
infrastructure within the United Manufacturing Hub. All communications between
the Web Application and the user’s devices are end-to-end encrypted, ensuring
complete confidentiality from the backend.
Management Companion
Deployed on each UMH instance, the Management Companion acts as an agent responsible
for decrypting messages coming from the user via the Backend and executing
requested actions. Responses are end-to-end encrypted as well, maintaining a
secure and opaque channel to the Backend.
Management Updater
The Updater is a custom Job run by the Management Companion, responsible for
updating the Management Companion itself. Its purpose is to automate the process
of upgrading the Management Companion to the latest version, reducing the
administrative overhead of managing UMH instances.
Backend
The Backend is the public API for the Management Console. It functions as a bridge
between the Web Application and the Management Companion. Its primary role is to
verify user permissions for accessing UMH instances. Importantly, the backend
does not have access to the contents of the messages exchanged between the Web
Application and the Management Companion, ensuring that communication remains
opaque and secure.
4 - Legacy
This section gives an overview of the legacy microservices that can be found
in older versions of the United Manufacturing Hub.
This section provides a comprehensive overview of the legacy microservices within
the United Manufacturing Hub. These microservices are currently in a transitional
phase, being maintained and deployed alongside newer versions of UMH as we gradually
shift from Data Model v1 to v2. While these legacy components are set to be deprecated
in the future, they continue to play a crucial role in ensuring smooth operations
and compatibility during this transition period.
4.1 - Factoryinput
This microservice is still in development and is not considered stable for production use.
Factoryinput provides REST endpoints for MQTT messages via HTTP requests.
This microservice is typically accessed via grafana-proxy.
How it works
The factoryinput microservice provides REST endpoints for MQTT messages via HTTP requests.
The main endpoint is /api/v1/{customer}/{location}/{asset}/{value}, with a POST
request method. The customer, location, asset, and value are all strings and are
used to build the MQTT topic. The body of the HTTP request is used as the MQTT
payload.
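A hedged Python sketch of such a request is shown below; the service URL, credentials, and path segments are placeholders chosen for illustration:

```python
import requests

BASE_URL = "http://united-manufacturing-hub-factoryinput-service"  # assumed name
AUTH = ("factoryinsight", "changeme")  # placeholder credentials

# customer / location / asset / value become part of the MQTT topic;
# the JSON body is forwarded as the MQTT payload.
response = requests.post(
    f"{BASE_URL}/api/v1/acme/anytown/line-1/count",
    json={"count": 1, "timestamp_ms": 1700000000000},
    auth=AUTH,
)
response.raise_for_status()
```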
What’s next
Read the Factoryinput reference
documentation to learn more about the technical details of the Factoryinput
microservice.
4.2 - Grafana Proxy
This microservice is still in development and is not considered stable for production use.
How it works
The grafana-proxy microservice serves an HTTP REST endpoint located at
/api/v1/{service}/{data}. The service parameter specifies the backend
service to which the request should be proxied, like factoryinput or
factoryinsight. The data parameter specifies the API endpoint to forward to
the backend service. The body of the HTTP request is used as the payload for
the proxied request.
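As an illustration, forwarding a request to factoryinput through the proxy could look like this sketch; the service URL and path are assumptions, and authentication is omitted:

```python
import requests

PROXY_URL = "http://united-manufacturing-hub-grafanaproxy-service"  # assumed name

# {service} = factoryinput, {data} = acme/anytown/line-1/count
response = requests.post(
    f"{PROXY_URL}/api/v1/factoryinput/acme/anytown/line-1/count",
    json={"count": 1, "timestamp_ms": 1700000000000},
)
print(response.status_code)
```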
What’s next
Read the Grafana Proxy reference
documentation to learn more about the technical details of the Grafana Proxy
microservice.
4.3 - Kafka Bridge
Kafka-bridge is a microservice that connects two Kafka brokers and forwards
messages between them. It is used to connect the local broker of the edge computer
with the remote broker on the server.
How it works
This microservice has two ways of operation:
High Integrity: This mode is used for topics that are critical for the
user. It is guaranteed that no messages are lost. This is achieved by
committing the message only after it has been successfully inserted into the
database. Usually all the topics are forwarded in this mode, except for
processValue, processValueString and raw messages.
High Throughput: This mode is used for topics that are not critical for
the user. They are forwarded as fast as possible, but it is possible that
messages are lost, for example if the database struggles to keep up. Usually
only the processValue, processValueString and raw messages are forwarded in
this mode.
What’s next
Read the Kafka Bridge reference documentation
to learn more about the technical details of the Kafka Bridge microservice.
4.4 - Kafka State Detector
This microservice is still in development and is not considered stable for production use.
How it works
What’s next
4.5 - Kafka to Postgresql
Kafka-to-postgresql is a microservice responsible for consuming Kafka messages
and inserting the payload into a PostgreSQL database. Take a look at the
Datamodel to see how the data is structured.
This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists. This will happen automatically from version 0.9.12.
How it works
By default, kafka-to-postgresql sets up two Kafka consumers, one for the
High Integrity topics and one for the
High Throughput topics.
The graphic below shows the program flow of the microservice.
High integrity
The High integrity topics are forwarded to the database in a synchronous way.
This means that the microservice will wait for the database to respond with a
non-error message before committing the message to the Kafka broker.
This way, the message is guaranteed to be inserted into the database, even though
it might take a while.
Most of the topics are forwarded in this mode.
The picture below shows the program flow of the high integrity mode.
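The essence of the high-integrity pattern can be sketched in Python as follows; it is only a model of the commit-after-insert behaviour, not the actual implementation, and the broker address, credentials, topic, table, and columns are placeholders:

```python
import json
import psycopg2
from kafka import KafkaConsumer

db = psycopg2.connect(host="united-manufacturing-hub", dbname="factoryinsight",
                      user="factoryinsight", password="changeme")  # placeholders

consumer = KafkaConsumer(
    "ia.acme.anytown.line-1.count",                        # illustrative topic
    bootstrap_servers="united-manufacturing-hub-kafka:9092",
    group_id="kafka-to-postgresql-sketch",
    enable_auto_commit=False,                              # commit manually, after the insert
)

for message in consumer:
    payload = json.loads(message.value)
    with db.cursor() as cur:
        # Illustrative table and columns, not the exact schema.
        cur.execute(
            "INSERT INTO counttable (timestamp, count) VALUES (to_timestamp(%s / 1000.0), %s)",
            (payload["timestamp_ms"], payload["count"]),
        )
    db.commit()        # only after the database confirms the insert ...
    consumer.commit()  # ... is the Kafka offset committed
```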
High throughput
The High throughput topics are forwarded to the database in an asynchronous way.
This means that the microservice will not wait for the database to respond with
a non-error message before committing the message to the Kafka broker.
This way, the message is not guaranteed to be inserted into the database, but
the microservice will try to insert the message into the database as soon as
possible. This mode is used for the topics that are expected to have a high
throughput.
Read the Kafka to Postgresql reference documentation
to learn more about the technical details of the Kafka to Postgresql microservice.
4.6 - MQTT Bridge
MQTT-bridge is a microservice that connects two MQTT brokers and forwards
messages between them. It is used to connect the local broker of the edge computer
with the remote broker on the server.
How it works
This microservice subscribes to topics on the local broker and publishes the
messages to the remote broker, while also subscribing to topics on the remote
broker and publishing the messages to the local broker.
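One direction of this bridging can be sketched with paho-mqtt as follows; host names and the topic filter are placeholders, and TLS, buffering, and the reverse direction are left out:

```python
import paho.mqtt.client as mqtt

# Assumes paho-mqtt >= 2.0; with 1.x, create the clients as mqtt.Client().
local = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
remote = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)

def forward(client, userdata, message):
    # Republish every message from the local broker unchanged on the remote one.
    remote.publish(message.topic, message.payload, qos=message.qos)

local.on_message = forward

remote.connect("remote-broker.example.com", 1883)  # placeholder remote broker
remote.loop_start()

local.connect("united-manufacturing-hub-mqtt", 1883)  # placeholder local broker
local.subscribe("ia/#")                                # illustrative topic filter
local.loop_forever()
```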
What’s next
Read the MQTT Bridge reference documentation
to learn more about the technical details of the MQTT Bridge microservice.
4.7 - MQTT Kafka Bridge
Mqtt-kafka-bridge is a microservice that acts as a bridge between MQTT brokers
and Kafka brokers, transferring messages from one to the other and vice versa.
This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists.
This will happen automatically from version 0.9.12.
Since version 0.9.10, it allows all raw messages, even if their content is not
in a valid JSON format.
How it works
Mqtt-kafka-bridge consumes topics from a message broker, translates them to
the proper format and publishes them to the other message broker.
What’s next
Read the MQTT Kafka Bridge
reference documentation to learn more about the technical details of the
MQTT Kafka Bridge microservice.
4.8 - MQTT Simulator
This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.
The IoTSensors MQTT Simulator is a microservice that simulates sensors sending data to the
MQTT broker. You can read the full documentation on the GitHub repository.
The microservice publishes messages on the topic ia/raw/development/ioTSensors/,
creating a subtopic for each simulation. The subtopics are the names of the
simulations, which are Temperature, Humidity, and Pressure.
The values are calculated using a normal distribution with a mean and standard
deviation that can be configured.
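A minimal Python listener for the simulated values might look like this; the broker host is a placeholder, while the topic prefix comes from the description above:

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, message):
    # e.g. topic ia/raw/development/ioTSensors/Temperature, payload b'23.18'
    print(message.topic, message.payload)

# Assumes paho-mqtt >= 2.0; with 1.x, create the client as mqtt.Client().
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("united-manufacturing-hub-mqtt", 1883)  # placeholder broker host
client.subscribe("ia/raw/development/ioTSensors/#")
client.loop_forever()
```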
What’s next
Read the IoTSensors MQTT Simulator reference
documentation to learn more about the technical details of the IoTSensors MQTT Simulator
microservice.
4.9 - MQTT to Postgresql
This microservice is deprecated and should not be used anymore in production.
Please use kafka-to-postgresql instead.
How it works
The mqtt-to-postgresql microservice subscribes to the MQTT broker and saves
the values of the messages on the topic ia/# in the database.
What’s next
Read the MQTT to Postgresql reference
documentation to learn more about the technical details of the MQTT to Postgresql
microservice.
4.10 - OPCUA Simulator
This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.
How it works
The OPCUA Simulator is a microservice that simulates OPCUA devices. You can read
the full documentation on the
GitHub repository.
You can then connect to the simulated OPCUA server via Node-RED and read the
values of the simulated devices. Learn more about how to connect the OPCUA
simulator to Node-RED in our guide.
What’s next
Read the OPCUA Simulator reference
documentation to learn more about the technical details of the OPCUA Simulator
microservice.
4.11 - PackML Simulator
This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but it is enabled by default.
PackML MQTT Simulator is a virtual line that interfaces using PackML implemented
over MQTT. It implements the following PackML State model and communicates
over MQTT topics as defined by environment variables. The simulator can run
with either a basic MQTT topic structure or SparkPlugB.
Read the PackML Simulator reference
documentation to learn more about the technical details of the PackML Simulator
microservice.
4.12 - Tulip Connector
This microservice is still in development and is not considered stable for production use.
The tulip-connector microservice enables communication with the United
Manufacturing Hub by exposing internal APIs, like
factoryinsight, to the
internet. With this REST endpoint, users can access data stored in the UMH and
seamlessly integrate Tulip with a Unified Namespace and on-premise Historian.
Furthermore, the tulip-connector can be customized to meet specific customer
requirements, including integration with an on-premise MES system.
How it works
The tulip-connector acts as a proxy between the internet and the UMH. It
exposes an endpoint to forward requests to the UMH and returns the response.
What’s next
Read the Tulip Connector reference
documentation to learn more about the technical details of the Tulip Connector
microservice.
4.13 - Grafana Plugins
This section contains the overview of the custom Grafana plugins that can be
used to access the United Manufacturing Hub.
4.13.1 - Umh Datasource
This page contains the technical documentation of the plugin umh-datasource, which allows for easy data extraction from factoryinsight.
We are no longer maintaining this microservice. Use our new microservice datasource-v2 instead for data extraction from factoryinsight.
The umh datasource is a Grafana 8.X compatible plugin that allows you to fetch resources from a database
and build queries for your dashboard.
How it works
When creating a new panel, select umh-datasource from the Data source drop-down menu. It will then fetch the resources
from the database. The loading time may depend on your internet speed.
Select your query parameters Location, Asset and Value to build your query.
Configuration
In Grafana, navigate to the Data sources configuration panel.
Select umh-datasource to configure it.
Configurations:
Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
API Key: authenticates the API calls to factoryinsight.
Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.
4.13.2 - Factoryinput Panel
This page contains the technical documentation of the plugin factoryinput-panel, which allows for easy execution of MQTT messages inside the UMH stack from a Grafana panel.
This plugin is still in development and is not considered stable for production use.