Data Model (v1)

This page describes the data model of the UMH stack - from the message payloads up to database tables.

flowchart LR AP[Automation Pyramid] --> C_d C_d --> UN_d UN_d --> H_d H_d --> DL[Data Lake] subgraph UNS[ ] subgraph C_d[ ] C_d_infoX["Connectivity\n(e.g., OPC UA)"] C_d_info["Time-Series data\nUnstructured / semi-structured data\nRelational data (master, operational, batch)"] C_d_infoX --- C_d_info end subgraph UN_d[ ] UN_d_infoX["Unified Namespace\n(e.g., MQTT, Kafka)"] UN_d_info["umh/v1/enterprise/site/area/productionLine/workCell/originID/_schema/schema_specific"] UN_d_infoX --- UN_d_info end subgraph H_d[ ] H_d_infoX["Historian\n(e.g., TimescaleDB)"] H_d_info[Table: asset\nTable: tag\nTable: tag_string] H_d_infoX --- H_d_info end end click C_d_infoX href "../features/connectivity" click UN_d_infoX href "./messages" click H_d_infoX href "./database" click C_d_info href "../features/connectivity" click UN_d_info href "./messages" click H_d_info href "./database"
The Data Infrastructure of the UMH consists of three components: Connectivity, Unified Namespace, and Historian (see also Architecture). Each component has its own standards and best practices, so a consistent data model across the building blocks needs to combine all of them.

If you would like to learn more about our data model & ADRs, check out our learn article.

Connectivity

Incoming data is often unstructured. Our standard therefore allows either conformant data in our _historian schema, or any kind of data in any other schema.

Our key considerations were:

  1. Event-driven architecture: We only look at changes, reducing network and system load
  2. Ease of use: We allow any data in, allowing OT & IT to process it as they wish (see the example after this list)
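
For illustration, a conformant message in the _historian schema follows the timestamp_ms plus key-value convention described later on this page; the topic and values below are made up for this example:

Topic: umh/v1/acme/cologne/packaging/_historian

{
  "timestamp_ms": 1702286893000,
  "temperature_c": 23.4
}

A message in any other schema (for example _local) may carry any payload and does not have to be JSON.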

Unified Namespace

The UNS employs MQTT and Kafka in a hybrid approach, utilizing MQTT for efficient data collection and Kafka for robust data processing. The UNS is designed to be reliable, scalable, and maintainable, facilitating real-time data processing and seamless integration or removal of system components.

These elements are the foundation for our data model in UNS:

  1. Incoming data based on OT standards: Data needs to be contextualized here not by IT people, but by OT people. They want to model their data (topic hierarchy and payloads) according to ISA-95, the Weihenstephaner Standard, Omron PackML, Euromap 84, or similar standards, and need, e.g., JSON payloads to better understand it.

  2. Hybrid Architecture: Combining MQTT’s user-friendliness and widespread adoption in Operational Technology (OT) with Kafka’s advanced processing capabilities. Topics and payloads cannot be fully interchanged between them due to limitations in MQTT and Kafka, so some trade-offs need to be made.

  3. Processed data based on IT standards: Data is sent to IT systems after processing and needs to adhere to their standards: the data inside the UNS needs to be easy to process, either for contextualization or for storing it in a Historian or Data Lake.

Historian

We chose TimescaleDB as our primary database.

Key elements we considered:

  1. IT best practice: we use SQL and Postgres for easy compatibility, which led us to TimescaleDB
  2. Straightforward queries: we aim for simple SQL queries, so that everyone can build dashboards (see the example after this list)
  3. Performance: because of the time-series nature of the typical workload, the database layout is not fully optimized for usability; we made trade-offs that allow it to store millions of data points per second
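
To give a flavor of such a query, the sketch below computes an hourly average of a tag using TimescaleDB's time_bucket function and the get_asset_id_immutable helper, both described in the Historian section of this page; the enterprise and tag names are illustrative:

SELECT time_bucket('1 hour', timestamp) AS bucket, avg(value) AS avg_value
FROM tag
WHERE asset_id = get_asset_id_immutable('acme-corporation')
  AND name = 'temperature_c'
GROUP BY bucket
ORDER BY bucket;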

1 - Unified Namespace

Describes all available _schema and their structure

Topic structure

flowchart LR umh --> v1 v1 --> enterprise enterprise -->|Optional| site site -->|Optional| area area -->|Optional| productionLine productionLine -->|Optional| workCell workCell -->|Optional| originID originID -->|Optional| _schema["_schema (Ex: _historian, _analytics, _local)"] _schema -->_opt["Schema dependent context"] classDef mqtt fill:#00dd00,stroke:#333,stroke-width:4px; class umh,v1,enterprise,_schema mqtt; classDef optional fill:#77aa77,stroke:#333,stroke-width:4px; class site,area,productionLine,workCell,originID optional; enterprise -.-> _schema site -.-> _schema area -.-> _schema productionLine -.-> _schema workCell -.-> _schema click _schema href "#_schema" click umh href "#versioning-prefix" click v1 href "#versioning-prefix"

Versioning Prefix

The umh/v1 at the beginning is obligatory. It ensures that the structure can evolve over time without causing confusion or compatibility issues.

Topic Names & Rules

All parts of this structure, except for enterprise and _schema, are optional. They can consist of any letters (a-z, A-Z), numbers (0-9) and the symbols - and _. Be careful to avoid ., +, # or /, as these are special symbols in Kafka or MQTT. Ensure that your topic always begins with umh/v1, otherwise our system will ignore your messages.

Be aware that our topics are case-sensitive, therefore umh.v1.ACMEIncorperated is not the same as umh.v1.acmeincorperated.

Throughout this documentation we will use the MQTT syntax for topics (umh/v1); the corresponding Kafka topic names are the same, but with / replaced by .
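
For example, the same message would be found under the following topic names (the asset part of the topic is made up):

MQTT:  umh/v1/acme/cologne/packaging/_historian
Kafka: umh.v1.acme.cologne.packaging._historian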

OriginID

This part identifies where the data is coming from. Good options include the sender’s MAC address, hostname, or container ID. Examples for originID: 00-80-41-ae-fd-7e, E588974, e5f484a1791d

_schema

_historian

Messages tagged with _historian will be stored in our database and are available via Grafana.

_analytics

Messages tagged with _analytics will be processed by our analytics pipeline. They are used for the automatic calculation of KPIs and other statistics.

_local

This key may contain any data that you do not want to bridge to other nodes (it will, however, be MQTT-Kafka bridged on its own node).

For example, this could be data you want to pre-process on your local node and then put into another _schema. This data does not necessarily have to be JSON.

Other

Any other schema starting with an underscore (for example _images) will be forwarded by both the MQTT-Kafka and Kafka-Kafka bridges, but never processed or stored.

This data does not necessarily have to be JSON.

Converting other data models

Most data models already follow a location-based naming structure.

KKS Identification System for Power Stations

KKS (Kraftwerk-Kennzeichensystem) is a standardized system for identifying and classifying equipment and systems in power plants, particularly in German-speaking countries.

In a flow diagram, the designation is: 1 2LAC03 CT002 QT12

  • Level 0 Classification:

    Block 1 of a power plant site is designated as 1 in this level.

  • Level 1 Classification:

    The designation for the 3rd feedwater pump in the 2nd steam-water circuit is 2LAC03. This means:

    • Main group 2L: 2nd steam, water, gas circuit
    • Subgroup (2L)A: Feedwater system
    • Subgroup (2LA)C: Feedwater pump system
    • Counter (2LAC)03: third feedwater pump system

  • Level 2 Classification:

    For the 2nd temperature measurement, the designation CT002 is used. This means:

    • Main group C: Direct measurement
    • Subgroup (C)T: Temperature measurement
    • Counter (CT)002: second temperature measurement

  • Level 3 Classification:

    For the 12th immersion sleeve as a sensor protection, the designation QT12 is used. This means:

    • Main group Q: Control technology equipment
    • Subgroup (Q)T: Protective tubes and immersion sleeves as sensor protection
    • Counter (QT)12: twelfth protective tube or immersion sleeve

The above example refers to the 12th immersion sleeve at the 2nd temperature measurement of the 3rd feedwater pump in block 1 of a power plant site. Translating this into our data model could result in: umh/v1/nuclearCo/1/2LAC03/CT002/QT12/_schema

Where:

  • nuclearCo: Represents the enterprise or the name of the nuclear company.
  • 1: Maps to the site, corresponding to Block 1 of the power plant as per the KKS number.
  • 2LAC03: Fits into the area, representing the 3rd feedwater pump in the 2nd steam-water circuit.
  • CT002: Aligns with productionLine, indicating the 2nd temperature measurement in this context.
  • QT12: Serves as the workCell or originID, denoting the 12th immersion sleeve.
  • _schema: Placeholder for the specific data schema being applied (an example follows this list).
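
For instance, if this measurement point publishes time-series data to the _historian schema, a message could look like this (the payload format is described in the _historian section below; the value is illustrative):

Topic: umh/v1/nuclearCo/1/2LAC03/CT002/QT12/_historian

{
  "timestamp_ms": 1702286893000,
  "temperature_c": 287.5
}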

1.1 - _analytics

Messages for our analytics feature

Topic structure

flowchart LR topicStart["umh.v1..."] --> _analytics _analytics --> wo[work-order] wo --> wo-create[create] wo --> wo-start[start] wo --> wo-stop[stop] _analytics --> pt[product-type] pt --> pt-create[create] _analytics --> p[product] p --> p-add[add] p --> p-set-bad-quantity[setBadQuantity] _analytics --> s[shift] s --> s-add[add] s --> s-delete[delete] _analytics --> st[state] st --> st-add[add] st --> st-overwrite[overwrite] classDef mqtt fill:#00dd00,stroke:#333,stroke-width:4px; class umh,v1,enterprise,_analytics mqtt; classDef type fill:#00ffbb,stroke:#333,stroke-width:4px; class wo,pt,p,s,st type; classDef func fill:#8899dd,stroke:#333,stroke-width:4px; class wo-create,wo-start,wo-stop,pt-create,p-add,p-set-bad-quantity,s-add,s-delete,st-add,st-overwrite func; click topicStart href "../" click wo href "#work-order" click pt href "#product-type" click p href "#product" click s href "#shift" click wo-create href "#create" click wo-start href "#start" click wo-stop href "#stop" click pt-create href "#create-1" click p-add href "#add" click p-set-bad-quantity href "#set-bad-quantity" click s-add href "#add-1" click s-delete href "#delete" click st-add href "#add-2" click st-overwrite href "#overwrite"

Work Order

Create

Use this topic to create a new work order.

This replaces the addOrder message from our v0 data model.

Fields

  • external_work_order_id (string): The work order ID from your MES or ERP system.
  • product (object): The product being produced.
    • external_product_id (string): The product ID from your MES or ERP system.
    • cycle_time_ms (number) (optional): The cycle time for the product in milliseconds. Only include this if the product has not been previously created.
  • quantity (number): The quantity of the product to be produced.
  • status (number) (optional): The status of the work order. Defaults to 0 (planned).
    • 0 - Planned
    • 1 - In progress
    • 2 - Completed
  • start_time_unix_ms (number) (optional): The start time of the work order. Will be set by the corresponding start message if not provided.
  • end_time_unix_ms (number) (optional): The end time of the work order. Will be set by the corresponding stop message if not provided.

Example

{
  "external_work_order_id": "1234",
  "product": {
    "external_product_id": "5678"
  },
  "quantity": 100,
  "status": 0
}

Start

Use this topic to start a previously created work order.

Each work order can only be started once. Only work orders with status 0 (planned) and no start time can be started.

Fields

  • external_work_order_id (string): The work order ID from your MES or ERP system.
  • start_time_unix_ms (number): The start time of the work order.

Example

{
  "external_work_order_id": "1234",
  "start_time_unix_ms": 1719931704927
}

Stop

Use this topic to stop a previously started work order.

Stopping an already stopped work order will have no effect. Only work orders with status 1 (in progress) and no end time can be stopped.

Fields

  • external_work_order_id (string): The work order ID from your MES or ERP system.
  • end_time_unix_ms (number): The end time of the work order.

Example

{
  "external_work_order_id": "1234",
  "end_time_unix_ms": 1719931704927
}

Product Type

Create

Announce a new product type.

We recommend using the work-order/create message to create product types on the fly.

Fields

  • external_product_type_id (string): The product type ID from your MES or ERP system.
  • cycle_time_ms (number) (optional): The cycle time for the product in milliseconds.

Example

{
  "external_product_type_id": "5678",
  "cycle_time_ms": 60
}

Product

Add

Communicates the completion of part of a work order.

Fields

  • external_product_type_id (string): The product type ID from your MES or ERP system.
  • product_batch_id (string) (optional): Unique identifier for the product. This could for example be a barcode or serial number.
  • start_time_unix_ms (number): The start time of the product.
  • end_time_unix_ms (number): The end time of the product.
  • quantity (number): The quantity of the product produced.
  • bad_quantity (number) (optional): The quantity of bad products produced.

Example

{
  "external_product_type_id": "5678",
  "product_batch_id": "1234",
  "start_time_unix_ms": 1719931604927,
  "end_time_unix_ms": 1719931704927,
  "quantity": 100,
  "bad_quantity": 5
}

Set Bad Quantity

Modify the quantity of bad products produced.

Fields

  • external_product_type_id (string): The product type ID from your MES or ERP system.
  • end_time_unix_ms (number): The end time of the product, used to identify an existing product.
  • bad_quantity (number): The new quantity of bad products produced.

Example

{
  "external_product_type_id": "5678",
  "end_time_unix_ms": 1719931704927,
  "bad_quantity": 10
}

Shift

Add

Announce a new shift.

Fields

  • start_time_unix_ms (number): The start time of the shift.
  • end_time_unix_ms (number): The end time of the shift.

Example

{
  "start_time_unix_ms": 1719931604927,
  "end_time_unix_ms": 1719931704927
}

Delete

Delete a previously created shift.

Fields

  • start_time_unix_ms (number): The start time of the shift.

Example

{
  "start_time_unix_ms": 1719931604927
}

State

Add

Announce a state change.

Check out the state documentation for a list of available states.

Fields

  • state (number): The state of the machine.
  • start_time_unix_ms (number): The start time of the state.

Example

{
  "state": 10000,
  "start_time_unix_ms": 1719931604927
}

Overwrite

Overwrite one or more states between two times.

Fields

  • state (number): The state of the machine.
  • start_time_unix_ms (number): The start time of the state.
  • end_time_unix_ms (number): The end time of the state.
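
Example

A payload following the field list above could look like this (values are illustrative):

{
  "state": 20000,
  "start_time_unix_ms": 1719931604927,
  "end_time_unix_ms": 1719931704927
}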

1.2 - _historian

Messages for our historian feature

Topic structure

flowchart LR topicStart["umh.v1..."] --> _historian _historian --> |Optional| tagName _historian --> |Optional| tagGroup tagGroup --> tagName classDef mqtt fill:#00dd00,stroke:#333,stroke-width:4px; class umh,v1,enterprise,_historian mqtt; classDef optional fill:#77aa77,stroke:#333,stroke-width:4px; class site,area,productionLine,workCell,originID,tagGroup,tagName optional; tagGroup -.-> |1-N| tagGroup click topicStart href "../"

Message structure

Our _historian messages are JSON containing a Unix timestamp in milliseconds (timestamp_ms) and one or more key-value pairs. Each key-value pair will be inserted into the database at the given timestamp.


Examples:

{
    "timestamp_ms": 1702286893000,
    "temperature_c": 154.1
}

{
    "timestamp_ms": 1702286893000,
    "temperature_c": 154.1,
    "pressure_bar": 5,
    "notes": "sensor 1 is faulty"
}

If you use a boolean value, it will be interpreted as a number.

Tag grouping

Sometimes it makes sense to group data further. In the following example we have a CNC cutter emitting data about its head position. If we want to group this for easier access in Grafana, we can use two types of grouping, or combine both.

  1. Using Tags / Tag Groups in the Topic: This will result in 3 new database entries, grouped by head & pos.

    Topic: umh/v1/cuttingincorperated/cologne/cnc-cutter/_historian/head/pos

    {
      "timestamp_ms": 1670001234567,
      "x": 12.5,
      "y": 7.3,
      "z": 3.2
    }

    This method makes it very easy to monitor the data in tools like our Management Console or MQTT Explorer, as each new / is displayed as a level in a tree.

  2. Using JSON subobjects:

    Equivalently, we could also send to the topic umh/v1/cuttingincorperated/cologne/cnc-cutter/_historian:

    {
      "timestamp_ms": 1670001234567,
      "head": {
        "pos": {
          "x": 12.5,
          "y": 7.3,
          "z": 3.2
        }
      }
    }
    

    This is useful if the machine cannot send to multiple topics, but grouping is still desired.

  3. Combining both methods: Equivalently, we could also send:

    Topic: umh/v1/cuttingincorperated/cologne/cnc-cutter/_historian/head

    {
      "timestamp_ms": 1670001234567,
      "pos": {
        "x": 12.5,
        "y": 7.3,
        "z": 3.2
      }
    }
    

    This can be useful if we also want to monitor the cutter’s head temperature and other attributes, while still preserving most of the readability of the above method.

    {
      "timestamp_ms": 1670001234567,
      "pos": {
        "x": 12.5,
        "y": 7.3,
        "z": 3.2
      },
      "temperature": 50.0,
      "collision": false
    }
    

What’s next?

Find out how the data is stored and can be retrieved from our database.

2 - Historian

Describes databases of all available _schema

Custom PostgreSQL Functions

get_asset_id_immutable

This function is an optimized version of get_asset_id that is defined as immutable. It is the fastest of the three functions and should be used for all queries, except when you plan to manually modify values inside the asset table.

Example:

SELECT * FROM tag WHERE asset_id = get_asset_id_immutable(
                                '<enterprise>',
                                '<site>',
                                '<area>',
                                '<line>',
                                '<workcell>',
                                '<origin_id>'
                        ) LIMIT 1;

get_asset_id_stable

This function is an optimized version of get_asset_id that is defined as stable. It is a good choice over get_asset_id for all queries.

Example:

SELECT * FROM tag WHERE asset_id = get_asset_id_stable(
                                '<enterprise>',
                                '<site>',
                                '<area>',
                                '<line>',
                                '<workcell>',
                                '<origin_id>'
                        ) LIMIT 1;

[Legacy] get_asset_id

This function returns the id of the given asset. It takes a variable number of arguments, where only the first (enterprise) is mandatory. This function is only kept for compatibility reasons and should not be used in new queries; use get_asset_id_stable or get_asset_id_immutable instead.

Example:

SELECT * FROM tag WHERE asset_id = get_asset_id(
                                '<enterprise>',
                                '<site>',
                                '<area>',
                                '<line>',
                                '<workcell>',
                                '<origin_id>'
                        ) LIMIT 1;

get_asset_ids_stable

This function is an optimized version of get_asset_ids that is defined as stable. It is a good choice over get_asset_ids for all queries.

Example:

SELECT * FROM tag WHERE asset_id = ANY(get_asset_ids_stable(
                                '<enterprise>',
                                '<site>',
                                '<area>',
                                '<line>',
                                '<workcell>',
                                '<origin_id>'
                        )) LIMIT 1;

get_asset_ids_immutable

There is no immutable version of get_asset_ids, as the returned values will probably change over time.

[Legacy] get_asset_ids

This function returns the ids of the given assets. It takes a variable number of arguments, where only the first (enterprise) is mandatory. It is only kept for compatibility reasons and should not be used in new queries; use get_asset_ids_stable instead.

Example:

SELECT * FROM tag WHERE asset_id = ANY(get_asset_ids(
                                '<enterprise>',
                                '<site>',
                                '<area>',
                                '<line>',
                                '<workcell>',
                                '<origin_id>'
                        )) LIMIT 1;

2.1 - Analytics

How _analytics data is stored and can be queried
erDiagram asset { int id PK "SERIAL PRIMARY KEY" text enterprise "NOT NULL" text site "DEFAULT '' NOT NULL" text area "DEFAULT '' NOT NULL" text line "DEFAULT '' NOT NULL" text workcell "DEFAULT '' NOT NULL" text origin_id "DEFAULT '' NOT NULL" } product_type { product_type_id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY external_product_type_id TEXT NOT NULL cycle_time_ms INTEGER NOT NULL asset_id INTEGER REFERENCES asset(id) _ CONSTRAINT "external_product_asset_uniq UNIQUE (external_product_type_id, asset_id)" _ CHECK "(cycle_time_ms > 0)" } work_order { work_order_id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY external_work_order_id TEXT NOT NULL asset_id INTEGER NOT NULL REFERENCES asset(id) product_type_id INTEGER NOT NULL REFERENCES product_type(product_type_id) quantity INTEGER NOT NULL status INTEGER "NOT NULL DEFAULT 0, -- 0: planned, 1: in progress, 2: completed" start_time TIMESTAMPTZ end_time TIMESTAMPTZ _ CONSTRAINT "asset_workorder_uniq UNIQUE (asset_id, external_work_order_id)" _ CHECK "(quantity > 0)" _ CHECK "(status BETWEEN 0 AND 2)" _ UNIQUE "(asset_id, start_time)" _ EXCLUDE "USING gist (asset_id WITH =, tstzrange(start_time, end_time) WITH &&) WHERE (start_time IS NOT NULL AND end_time IS NOT NULL)" } product { product_type_id INTEGER REFERENCES product_type(product_type_id) product_batch_id TEXT asset_id INTEGER REFERENCES asset(id) start_time TIMESTAMPTZ end_time TIMESTAMPTZ NOT NULL quantity INTEGER NOT NULL bad_quantity INTEGER "DEFAULT 0" _ CHECK "(quantity > 0)" _ CHECK "(bad_quantity >= 0)" _ CHECK "(bad_quantity <= quantity)" _ CHECK "(start_time <= end_time)" _ UNIQUE "(asset_id, end_time, product_batch_id)" _ HYPERTABLE "create_hypertable('product', 'end_time', if_not_exists => TRUE)" _ INDEX "INDEX idx_products_asset_end_time ON product(asset_id, end_time DESC)" } shift { shift_id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY asset_id INTEGER REFERENCES asset(id) start_time TIMESTAMPTZ NOT NULL end_time TIMESTAMPTZ NOT NULL _ CONSTRAINT "shift_start_asset_uniq UNIQUE (start_time, asset_id)" _ CHECK "(start_time < end_time)" _ EXCLUDE "USING gist (asset_id WITH =, tstzrange(start_time, end_time) WITH &&)" } state { asset_id INTEGER REFERENCES asset(id) start_time TIMESTAMPTZ NOT NULL state INT NOT NULL _ CHECK "(state >= 0)" _ UNIQUE "(start_time, asset_id)" _ HYPERTABLE "create_hypertable('states', 'start_time', if_not_exists => TRUE)" _ INDEX "INDEX idx_states_asset_start_time ON states(asset_id, start_time DESC)" } asset ||--o{ work_order : "id" asset ||--o{ product_type : "id" asset ||--o{ product : "id" asset ||--o{ shift : "id" asset ||--o{ state : "id" work_order ||--o{ product_type : "product_type_id" product ||--o{ product_type : "product_type_id"

asset

This table holds all assets. An asset for us is the unique combination of enterprise, site, area, line, workcell & origin_id.

All keys except for id and enterprise are optional. In our example we have just started our CNC cutter, so its unique asset will get inserted into the database. The table already contains some data we inserted earlier, so the new asset will be inserted with id 8.

| id | enterprise          | site     | area       | line        | workcell | origin_id |
|----|---------------------|----------|------------|-------------|----------|-----------|
| 1  | acme-corporation    |          |            |             |          |           |
| 2  | acme-corporation    | new-york |            |             |          |           |
| 3  | acme-corporation    | london   | north      | assembly    |          |           |
| 4  | stark-industries    | berlin   | south      | fabrication | cell-a1  | 3002      |
| 5  | stark-industries    | tokyo    | east       | testing     | cell-b3  | 3005      |
| 6  | stark-industries    | paris    | west       | packaging   | cell-c2  | 3009      |
| 7  | umh                 | cologne  | office     | dev         | server1  | sensor0   |
| 8  | cuttingincorperated | cologne  | cnc-cutter |             |          |           |

work_order

This table holds all work orders. A work order is a unique combination of external_work_order_id and asset_id.

| work_order_id | external_work_order_id | asset_id | product_type_id | quantity | status | start_time           | end_time             |
|---------------|------------------------|----------|-----------------|----------|--------|----------------------|----------------------|
| 1             | #2475                  | 8        | 1               | 100      | 0      | 2022-01-01T08:00:00Z | 2022-01-01T18:00:00Z |
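
For illustration, a row like this could result from the following _analytics message sequence for the asset above (the product ID is taken from the product_type table below; timestamps are the table values converted to Unix milliseconds):

Topic: umh/v1/cuttingincorperated/cologne/cnc-cutter/_analytics/work-order/create

{
  "external_work_order_id": "#2475",
  "product": {
    "external_product_id": "desk-leg-0112"
  },
  "quantity": 100
}

Topic: umh/v1/cuttingincorperated/cologne/cnc-cutter/_analytics/work-order/start

{
  "external_work_order_id": "#2475",
  "start_time_unix_ms": 1641024000000
}

Topic: umh/v1/cuttingincorperated/cologne/cnc-cutter/_analytics/work-order/stop

{
  "external_work_order_id": "#2475",
  "end_time_unix_ms": 1641060000000
}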

product_type

This table holds all product types. A product type is a unique combination of external_product_type_id and asset_id.

| product_type_id | external_product_type_id | cycle_time_ms | asset_id |
|-----------------|--------------------------|---------------|----------|
| 1               | desk-leg-0112            | 10.0          | 8        |

product

This table holds all products.

| product_type_id | product_batch_id | asset_id | start_time           | end_time             | quantity | bad_quantity |
|-----------------|------------------|----------|----------------------|----------------------|----------|--------------|
| 1               | batch-n113       | 8        | 2022-01-01T08:00:00Z | 2022-01-01T08:10:00Z | 100      | 7            |

shift

This table holds all shifts. A shift is a unique combination of asset_id and start_time.

| shift_id | asset_id | start_time           | end_time             |
|----------|----------|----------------------|----------------------|
| 1        | 8        | 2022-01-01T08:00:00Z | 2022-01-01T19:00:00Z |

state

This table holds all states. A state is a unique combination of asset_id and start_time.

| asset_id | start_time           | state |
|----------|----------------------|-------|
| 8        | 2022-01-01T08:00:00Z | 20000 |
| 8        | 2022-01-01T08:10:00Z | 10000 |
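
For illustration, these two rows could result from state/add messages like the following (the topic matches the asset above; timestamps are the table values converted to Unix milliseconds):

Topic: umh/v1/cuttingincorperated/cologne/cnc-cutter/_analytics/state/add

{
  "state": 20000,
  "start_time_unix_ms": 1641024000000
}

{
  "state": 10000,
  "start_time_unix_ms": 1641024600000
}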

2.2 - Historian

How _historian data is stored and can be queried

Our database for the umh.v1 _historian data model currently consists of three tables. These are used for the _historian schema. We chose this layout to enable easy lookups based on the asset features, while maintaining separation between data and names. The split into tag & tag_string prevents accidental lookups of the wrong datatype, which might break queries such as aggregations, averages, and similar.

erDiagram asset { int id PK "SERIAL PRIMARY KEY" text enterprise "NOT NULL" text site "DEFAULT '' NOT NULL" text area "DEFAULT '' NOT NULL" text line "DEFAULT '' NOT NULL" text workcell "DEFAULT '' NOT NULL" text origin_id "DEFAULT '' NOT NULL" } tag { timestamptz timestamp "NOT NULL" text name "NOT NULL" text origin "NOT NULL" int asset_id FK "REFERENCES asset(id) NOT NULL" real value } tag_string { timestamptz timestamp "NOT NULL" text name "NOT NULL" text origin "NOT NULL" int asset_id FK "REFERENCES asset(id) NOT NULL" text value } asset ||--o{ tag : "id" asset ||--o{ tag_string : "id"

asset

This table holds all assets. An asset for us is the unique combination of enterprise, site, area, line, workcell & origin_id.

All keys except for id and enterprise are optional. In our example we have just started our CNC cutter, so its unique asset will get inserted into the database. The table already contains some data we inserted earlier, so the new asset will be inserted with id 8.

| id | enterprise          | site     | area       | line        | workcell | origin_id |
|----|---------------------|----------|------------|-------------|----------|-----------|
| 1  | acme-corporation    |          |            |             |          |           |
| 2  | acme-corporation    | new-york |            |             |          |           |
| 3  | acme-corporation    | london   | north      | assembly    |          |           |
| 4  | stark-industries    | berlin   | south      | fabrication | cell-a1  | 3002      |
| 5  | stark-industries    | tokyo    | east       | testing     | cell-b3  | 3005      |
| 6  | stark-industries    | paris    | west       | packaging   | cell-c2  | 3009      |
| 7  | umh                 | cologne  | office     | dev         | server1  | sensor0   |
| 8  | cuttingincorperated | cologne  | cnc-cutter |             |          |           |

tag

This table is a TimescaleDB hypertable. These tables are optimized to contain large amounts of data which are roughly sorted by time.

In our example we send data to umh/v1/cuttingincorperated/cologne/cnc-cutter/_historian/head using the following JSON:

{
  "timestamp_ms": 1670001234567,
  "pos": {
    "x": 12.5,
    "y": 7.3,
    "z": 3.2
  },
  "temperature": 50.0,
  "collision": false
}

This will result in the following table entries:

| timestamp     | name             | origin  | asset_id | value |
|---------------|------------------|---------|----------|-------|
| 1670001234567 | head_pos_x       | unknown | 8        | 12.5  |
| 1670001234567 | head_pos_y       | unknown | 8        | 7.3   |
| 1670001234567 | head_pos_z       | unknown | 8        | 3.2   |
| 1670001234567 | head_temperature | unknown | 8        | 50.0  |
| 1670001234567 | head_collision   | unknown | 8        | 0     |

The origin is a placeholder for a later feature, and currently defaults to unknown.

tag_string

This table is the same as tag, but for string data. Our CNC cutter also emits the G-code currently being processed, sent to umh/v1/cuttingincorperated/cologne/cnc-cutter/_historian:

{
  "timestamp_ms": 1670001247568,
  "g-code": "G01 X10 Y10 Z0"
}

Resulting in this entry:

| timestamp     | name   | origin  | asset_id | value          |
|---------------|--------|---------|----------|----------------|
| 1670001247568 | g-code | unknown | 8        | G01 X10 Y10 Z0 |
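
A query like the following could retrieve this value, using the get_asset_id_immutable helper described above (the asset arguments match the example asset):

SELECT timestamp, name, value
FROM tag_string
WHERE asset_id = get_asset_id_immutable('cuttingincorperated', 'cologne', 'cnc-cutter')
  AND name = 'g-code'
ORDER BY timestamp DESC
LIMIT 1;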

Data retrieval

SQL

  1. SSH into your instance.
  2. Open a PSQL session (see the sketch after this list).
  3. Select the umh_v2 database using \c umh_v2
  4. Execute any query against our tables.
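
One way to open a PSQL session is to exec into the TimescaleDB pod; the pod name and the postgres user below are assumptions for this sketch, while the namespace and kubeconfig path match the external-access instructions further down:

sudo kubectl exec -it <timescaledb-pod-name> --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -- psql -U postgres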

Example Queries

  • Get the number of rows in your tag table:
    SELECT COUNT(1) FROM tag;
    
  • Get the newest tag row for “umh/v1/umh/cologne”:
    SELECT * FROM tag WHERE asset_id=get_asset_id_immutable('umh', 'cologne') ORDER BY timestamp DESC LIMIT 1;
    
    The equivalent query, without using our helper, is:
    SELECT t.* FROM tag t, asset a WHERE t.asset_id=a.id AND a.enterprise='umh' AND a.site='cologne' ORDER BY t.timestamp DESC LIMIT 1;
    

get_asset_id_immutable(<enterprise>, <site>, <area>, <line>, <workcell>, <origin_id>) is a helper function to ease retrieval of the asset id.

All fields except <enterprise> are optional, and it will always return the first asset id matching the search.

Grafana

Follow our Data Visualization tutorial to get started.

External access (Datagrip, PGAdmin, …)

  1. SSH into your instance.

  2. Get the password of the kafkatopostgresqlv2 user

    sudo kubectl get secret timescale-post-init-pw --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -o json | jq -r '.data["1_set_passwords.sh"]' | base64 -d | grep "kafkatopostgresqlv2 WITH PASSWORD" | awk -F "'" '{print $2}'
    
  3. Use your preferred tool to connect to our umh_v2 database using the kafkatopostgresqlv2 user and the password from the above command (see the example below).

    (Screenshot: DataGrip)
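
    For example, with a local psql client, a connection could look like this (the host is your instance’s address; port 5432 is an assumption based on the PostgreSQL default):

    psql "postgresql://kafkatopostgresqlv2:<password>@<instance-address>:5432/umh_v2"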

3 - States

States are the core of the database model. They represent the state of the machine at a given point in time.

States Documentation Index

Introduction

This documentation outlines the various states used in the United Manufacturing Hub software stack to calculate OEE/KPI and other production metrics.

State Categories

Glossary

  • OEE: Overall Equipment Effectiveness
  • KPI: Key Performance Indicator

Conclusion

This documentation provides a comprehensive overview of the states used in the United Manufacturing Hub software stack and their respective categories. For more information on each state category and its individual states, please refer to the corresponding subpages.

3.1 - Active (10000-29999)

These states represent that the asset is actively producing

10000: ProducingAtFullSpeedState

This asset is running at full speed.

Examples for ProducingAtFullSpeedState

  • WS_Cur_State: Operating
  • PackML/Tobacco: Execute

20000: ProducingAtLowerThanFullSpeedState

Asset is producing, but not at full speed.

Examples for ProducingAtLowerThanFullSpeedState

  • WS_Cur_Prog: StartUp
  • WS_Cur_Prog: RunDown
  • WS_Cur_State: Stopping
  • PackML/Tobacco: Stopping
  • WS_Cur_State: Aborting
  • PackML/Tobacco: Aborting
  • WS_Cur_State: Holding
  • WS_Cur_State: Unholding
  • PackML/Tobacco: Unholding
  • WS_Cur_State: Suspending
  • PackML/Tobacco: Suspending
  • WS_Cur_State: Unsuspending
  • PackML/Tobacco: Unsuspending
  • PackML/Tobacco: Completing
  • WS_Cur_Prog: Production
  • EUROMAP: MANUAL_RUN
  • EUROMAP: CONTROLLED_RUN

Currently not included:

  • WS_Prog_Step: all

3.2 - Unknown (30000-59999)

These states represent that the asset is in an unspecified state

30000: UnknownState

Data for that particular asset is not available (e.g., the connection to the PLC is disrupted).

Examples for UnknownState

  • WS_Cur_Prog: Undefined
  • EUROMAP: Offline

40000: UnspecifiedStopState

The asset is not producing, but the reason is unknown at the time.

Examples for UnspecifiedStopState

  • WS_Cur_State: Clearing
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Emergency Stop
  • WS_Cur_State: Resetting
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Held
  • EUROMAP: Idle
  • Tobacco: Other
  • WS_Cur_State: Stopped
  • PackML/Tobacco: Stopped
  • WS_Cur_State: Starting
  • PackML/Tobacco: Starting
  • WS_Cur_State: Prepared
  • WS_Cur_State: Idle
  • PackML/Tobacco: Idle
  • PackML/Tobacco: Complete
  • EUROMAP: READY_TO_RUN

50000: MicrostopState

The asset is not producing for a short period (typically around five minutes), but the reason is unknown at the time.

3.3 - Material (60000-99999)

These states represent that the asset has issues regarding materials.

60000: InletJamState

This machine does not perform its intended function due to a lack of material flow in the infeed of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several inlets, the lack-in-the-inlet condition refers to the main flow, i.e. to the material (crate, bottle) that is fed in the direction of the filling machine (central machine). The defect in the infeed is an extraneous defect, but because of its importance for visualization and technical reporting, it is recorded separately.

Examples for InletJamState

  • WS_Cur_State: Lack

70000: OutletJamState

The machine does not perform its intended function as a result of a jam in the good-flow discharge of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several discharges, the jam-in-the-discharge condition refers to the main flow, i.e. to the good (crate, bottle) that is fed in the direction of, or away from, the filling machine (central machine). The jam in the outfeed is an external fault, but it is recorded separately because of its importance for visualization and technical reporting.

Examples for OutletJamState

  • WS_Cur_State: Tailback

80000: CongestionBypassState

The machine does not perform its intended function due to a shortage in the bypass supply or a jam in the bypass discharge of the machine, detected by the sensor system of the control system (machine stop). This condition can only occur in machines with two outlets or inlets, in which the bypass is in turn the inlet or outlet of an upstream or downstream machine of the filling line (packaging and palletizing machines). The jam/shortage in the auxiliary flow is an external fault, but it is recorded separately due to its importance for visualization and technical reporting.

Examples for the CongestionBypassState

  • WS_Cur_State: Lack/Tailback Branch Line

90000: MaterialIssueOtherState

The asset has a material issue, but it is not further specified.

Examples for MaterialIssueOtherState

  • WS_Mat_Ready (Information of which material is lacking)
  • PackML/Tobacco: Suspended

3.4 - Process (100000-139999)

These states represent that the asset is in a stop, which belongs to the process and cannot be avoided.

100000: ChangeoverState

The asset is in a changeover process between products.

Examples for ChangeoverState

  • WS_Cur_Prog: Program-Changeover
  • Tobacco: CHANGE OVER

110000: CleaningState

The asset is currently in a cleaning process.

Examples for CleaningState

  • WS_Cur_Prog: Program-Cleaning
  • Tobacco: CLEAN

120000: EmptyingState

The asset is currently emptied, e.g. to prevent mold for food products over the long breaks, e.g. the weekend.

Examples for EmptyingState

  • Tobacco: EMPTY OUT

130000: SettingUpState

This machine is currently preparing itself for production, e.g. heating up.

Examples for SettingUpState

  • EUROMAP: PREPARING

3.5 - Operator (140000-159999)

These states represent that the asset is stopped because of operator related issues.

140000: OperatorNotAtMachineState

The operator is not at the machine.

150000: OperatorBreakState

The operator is taking a break.

This is different from a planned shift as it could contribute to performance losses.

Examples for OperatorBreakState

  • WS_Cur_Prog: Program-Break

3.6 - Planning (160000-179999)

These states represent that the asset is stopped because it is planned to be stopped (planned idle time).

160000: NoShiftState

There is no shift planned at that asset.

170000: NoOrderState

There is no order planned at that asset.

3.7 - Technical (180000-229999)

These states represent that the asset has a technical issue.

180000: EquipmentFailureState

The asset itself is defective, e.g. a broken engine.

Examples for EquipmentFailureState

  • WS_Cur_State: Equipment Failure

190000: ExternalFailureState

There is an external failure, e.g. missing compressed air.

Examples for ExternalFailureState

  • WS_Cur_State: External Failure

200000: ExternalInterferenceState

There is an external interference, e.g. the crane to move the material is currently unavailable.

210000: PreventiveMaintenanceStop

A planned maintenance action.

Examples for PreventiveMaintenanceStop

  • WS_Cur_Prog: Program-Maintenance
  • PackML: Maintenance
  • EUROMAP: MAINTENANCE
  • Tobacco: MAINTENANCE

220000: TechnicalOtherStop

The asset has a technical issue, but it is not specified further.

Examples for TechnicalOtherStop

  • WS_Not_Of_Fail_Code
  • PackML: Held
  • EUROMAP: MALFUNCTION
  • Tobacco: MANUAL
  • Tobacco: SET UP
  • Tobacco: REMOTE SERVICE