Each configuration is counted as an additional indexed model in the Indexed Model Count.

System Information

MANAGE > System presents the current system-level information about:

  • The Search Server

  • The Application Server

  • The Database Server

You may click DOWNLOAD to obtain a text version of this information.

Application Server Information

Provides information about the application server, including disk utilization for Windows environments.

The MM Server Time presents both the local time and UTC time, and is a convenient reference for validating timestamps printed in logs (which are also based upon the server time of the server where the bridge ran).

Search Server Information

Provides the total number of indexed models, the number of models and objects currently being indexed, and the last time the index was updated. Also provides information about the Solr server.

Database Server Information

Provides information about the database server.

Analyze Statistics

Configuration Statistics

A simple, easy-to-read statistics report, to be used by any administrator wishing to obtain high-level statistics for a given configuration.

Steps

  1. Go to OBJECTS > Statistics in the banner.

The report is presented in the same order as in BROWSE > Categories, as defined in the MM installation directory/conf/MetadataExplorer.xml file. (See the Customization Tutorial for more details.)
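As a rough illustration, a script could read the category order from that file. The element and attribute names below are hypothetical, since the actual schema of MetadataExplorer.xml may differ:

```python
import xml.etree.ElementTree as ET

# Hypothetical structure; the real MetadataExplorer.xml schema may differ.
SAMPLE = """
<categories>
  <category name="Data Stores"/>
  <category name="Glossaries"/>
</categories>
"""

def category_order(xml_text: str) -> list:
    """List category names in document order."""
    return [c.get("name") for c in ET.fromstring(xml_text).iter("category")]
```

Whatever the real schema is, document order in the file is what determines the order of the report sections.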

Explore Further

You may download the report in CSV format.

Example

This report is for a specific configuration (the current one at the time of execution). Thus, comparing it with the Repository Statistics report can be misleading.

Repository Statistics

A simple, easy-to-read statistics report, to be used by any administrator wishing to obtain high-level statistics on the repository model and features used.

Steps

  1. Sign in as a user with at least the Application Administrator capability global role assignment.

  2. Either, from MANAGE > System:

    • Go to MANAGE > System in the banner.

    • Go to Scripts > Get repository statistics.

  3. Or, from MANAGE > Repository:

    • Go to MANAGE > Repository in the banner.

    • Right-click on the Repository root and select Operations > Get repository statistics.

  4. Click RUN OPERATION.

  5. Open the log of the operation when complete.

  6. Click Download Operation Files.

Example

Sign in as Administrator. Go to MANAGE > System in the banner. Go to Scripts > Get repository statistics. Click RUN OPERATION. Open the log of the operation when complete.

Click Download Operation Files.


The count here is independent of the number of versions. Thus, there are two configurations in this case, each with any number of versions.

In addition, "Naming Standards" is simply included in the Customer Models count, which covers all the models based upon model types defined in MANAGE > Metamodels.

Repository Configuration Statistics

A simple, easy-to-read statistics report scoped to the models from the configuration perspective, to be used by any administrator wishing to obtain configuration-level statistics on all the configurations and their models.

Steps

  1. Sign in as a user with at least the Application Administrator capability global role assignment.

  2. Either, from MANAGE > System:

    • Go to MANAGE > System in the banner.

    • Go to Scripts > Get repository configuration statistics.

  3. Or, from MANAGE > Repository:

    • Go to MANAGE > Repository in the banner.

    • Right-click on the Repository root and select Operations > Get repository configuration statistics.

  4. Click RUN OPERATION.

  5. Open the log of the operation when complete.

  6. Click Download Operation Files.

Example

Sign in as Administrator. Go to MANAGE > System in the banner. Go to Scripts > Get repository configuration statistics. Click RUN OPERATION. Open the log of the operation when complete.


Click Download Operation Files.

The A1 cell of the operation output file RepositoryConfigurationUsage.csv provides the parameter value for the =HYPERLINK function used in other columns (starting from F1). You can view a configuration architecture view by clicking on those columns. You need to provide your MM URL as the value of the A1 cell. By default, the MM URL is http://localhost:19980/MM. If you are running on port 11580, you'll need to change the A1 value to http://localhost:11580/MM. Replace the host name accordingly if your host name is not localhost.

Starting from the 2nd row, the CSV file shows configuration-level statistics for all models in the repository: their names, bridge types, model types, import server names, usage counts, the configurations in which they are used, etc.
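Updating the A1 cell by hand in a spreadsheet works fine; as a minimal scripted alternative, the sketch below replaces the first field of the first row with your MM URL (the remaining column contents are left untouched and may vary):

```python
import csv
import io

def set_mm_url(csv_text: str, mm_url: str) -> str:
    """Replace the A1 cell (first field of the first row) with the MM URL."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    if rows:
        rows[0][0] = mm_url
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows(rows)
    return out.getvalue()
```

Apply this to RepositoryConfigurationUsage.csv, then open the result in a spreadsheet so the =HYPERLINK formulas resolve against your server.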

System Statistics

To be used exclusively for support purposes when debugging database issues.

Steps

  1. Sign in as a user with at least the Application Administrator capability global role assignment.

  2. Either, from MANAGE > System:

    • Go to MANAGE > System in the banner.

    • Go to Scripts > Get system statistics.

  3. Or, from MANAGE > Repository:

    • Go to MANAGE > Repository in the banner.

    • Right-click on the Repository root and select Operations > Get system statistics.

  4. Click RUN OPERATION.

  5. Open the log of the operation when complete.

  6. Click Download Operation Files.

Example

Sign in as Administrator. Go to MANAGE > System in the banner. Go to Scripts > Get system statistics. Click RUN OPERATION. Open the log of the operation when complete.

Click Download Operation Files.

The Count of Users includes a system user which cannot be managed (and is not displayed) but is counted. Thus, while the count here is 38 Users, if you go to MANAGE > Users you will only see 37.

Configuration and Repository Search Statistics

A simple, easy-to-read statistics report, to be used by any administrator wishing to obtain high-level statistics on repository objects and the features used.

Steps

  1. Sign in as a user with at least the Application Administrator capability global role assignment.

  2. Go to MANAGE > Repository in the banner.

  3. Select either:

    • The Repository root

    • A configuration

  4. Go to the Search Statistics tab.

  5. From here you may select:

    • The PERIOD to present statistics for

    i. 24 hours

    ii. 7 days

    iii. 30 days

    iv. 6 months

    v. Custom

    • Restrict to SEARCHES with or without results.

    • LIMIT the number of different results.

    • The type of presentation with DISPLAY AS.

    i. Grid

    ii. Bar

    iii. Pie

    iv. Line

Example

Sign in as Administrator, go to MANAGE > Repository, select the Repository root and go to the Search Statistics tab.

Click on Pie.


Run Performance Script

The performance script allows you to assess the relative performance of your system.

Steps

  1. Sign in as a user with at least the Application Administrator capability global role assignment.

  2. Go to MANAGE > System in the banner.

  3. Click on Operations > Test Performance.

Example

Then save the log from the operation; the results are in the log:

[42/Test performance] 2023-10-06 13:14:01 OPERI_S0002 Started
operation: Test performance
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 MIR server
version: 11.1.0 build date: 2023-10-03 17:18:42.
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 JVM name: OpenJDK
Runtime Environment.
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 JVM vendor:
Eclipse Adoptium.
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 JVM version:
11.0.13+8.
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 JVM 32/64 bit:
64.
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 OS name: Windows
10.
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 OS version: 10.0.
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 OS patch: .
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 System locale:
Cp1252.
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 CPU architecture:
amd64.
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 CPU name: amd64.
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 Tesing
performance for the MIR XMI model....
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 Creating the MIR
XMI model external content....
[42/Test performance] 2023-10-06 13:14:01 OPER_S0210 Harvesting the
MIR XMI model....
[42/Import model version from XMI fi] 2023-10-06 13:14:01 OPERI_S0002
Started operation: Import model version from XMI file
[42/Import model version from XMI fi] 2023-10-06 13:14:01 OPER_S0147
Skipping checksum comparison: a new version will be created even if the
content did not change.
[42/Import model version from XMI fi] 2023-10-06 13:14:02 OPER_S0010
Storing imported model to repository at: [-1,1667]
[42/Import model version from XMI fi] 2023-10-06 13:14:02 REPO_I0066
Loading the model
[42/Import model version from XMI fi] 2023-10-06 13:14:02 REPO_I0067
Profiling the model: Version [[-1,1667]], model name []orcl]
[42/Import model version from XMI fi] 2023-10-06 13:14:03 REPO_I0068
Saving the model
[42/Import model version from XMI fi] 2023-10-06 13:14:03 OPERI_S0003
Operation completed.
[42/Test performance] 2023-10-06 13:14:05 OPER_S0210 Harvesting the
MIR XMI model... took 3 secs..
[42/Test performance] 2023-10-06 13:14:05 OPER_S0210 Browsing the MIR
XMI model....
[42/Test performance] 2023-10-06 13:14:05 OPER_S0210 Listing all
children.
[42/Test performance] 2023-10-06 13:14:05 OPER_S0210 Found 921
children.
[42/Test performance] 2023-10-06 13:14:05 OPER_S0210 Browsing the MIR
XMI model... took 0 secs..
[42/Test performance] 2023-10-06 13:14:05 OPER_S0210 Pausing 10
seconds for search indexer.
[42/Test performance] 2023-10-06 13:14:15 OPER_S0210 Searching in the
MIR XMI model....
[42/Test performance] 2023-10-06 13:14:15 OPER_S0210 Found 13 classes.
[42/Test performance] 2023-10-06 13:14:15 OPER_S0210 Searching in the
MIR XMI model... took 0 secs..
[42/Test performance] 2023-10-06 13:14:15 OPER_S0210 Tesing
performance for the MIR XMI model... took 3 secs..
[42/Test performance] 2023-10-06 13:14:15 OPER_S0210 Total run time:
13 sec..
[42/Test performance] 2023-10-06 13:14:15 OPERI_S0003 Operation
completed.
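The timing lines in the log above ("... took N secs") are the interesting part when comparing systems. They can be pulled out with a short script; this sketch assumes the exact log format shown above:

```python
import re

# Matches timing lines such as:
#   "OPER_S0210 Harvesting the MIR XMI model... took 3 secs.."
TOOK = re.compile(r"OPER_S0210 (.+?)\.* took (\d+) secs?")

def step_durations(log_text: str) -> dict:
    """Extract 'step name -> seconds' pairs from a Test performance log."""
    durations = {}
    for m in TOOK.finditer(log_text):
        durations[m.group(1).strip()] = int(m.group(2))
    return durations
```

Running it over the saved log yields entries like "Harvesting the MIR XMI model" -> 3, which can be compared across machines or releases.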

Configure Database Connection

You may configure the connection to the repository database here.

Steps

  1. Sign in as a user with at least the Application Administrator capability global role assignment.

  2. Go to MANAGE > System in the banner.

  3. Click Configure.

Enable/disable debug logging

You may turn on debug-level logging in order to better diagnose issues you may see reported in the logs, or to display internal information.

This setting applies to all subsequent actions which produce logs, whether manually invoked or scheduled. It also means that debug messages will appear in the Tomcat logs.

Debug logging is VERY verbose. Thus, be sure to turn off (Disabled) debug level logging as soon as you have captured your diagnostics. In addition, the debug logging is reset to Disabled when you restart the application server.

Debug level messages are not shown in MANAGE > System Log, nor are they part of the download there. Instead, when you enable debug logging, those messages are written to the Tomcat server logs on the application server machine. You may find these in the installation directory at

/data/logs/tomcat/

Setting debug-level logging also affects the attributes shown on the object page and the Properties panel, showing additional information like Object Id and Stable ID. Other internal information, like Lineage Options, which is only for internal debugging purposes, will also appear. Thus, debug logging should be disabled unless absolutely required.
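As an illustration, you could sift debug entries out of the Tomcat log directory with a small script. The "DEBUG" marker and the *.log file extension are assumptions here, since the actual log naming and layout depend on your Tomcat configuration:

```python
from pathlib import Path

def debug_lines(log_dir: str, marker: str = "DEBUG"):
    """Yield lines containing the debug marker from all *.log files
    under the Tomcat log directory (e.g. $MM_HOME/data/logs/tomcat)."""
    for log_file in sorted(Path(log_dir).glob("*.log")):
        for line in log_file.read_text(errors="replace").splitlines():
            if marker in line:
                yield line
```

Since debug logging is very verbose, filtering like this (or with grep) is usually preferable to reading the raw files.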

Steps

  1. Sign in as a user with at least the Application Administrator capability global role assignment.

  2. Go to MANAGE > System in the banner.

  3. Select Enable/Disable in the DEBUG LOGGING pull-down.

Enable/disable REST API

MetaKarta provides direct REST API call access.

You may enable this capability in MANAGE > System.

To prevent possible security vulnerabilities associated with this type of API, it is disabled by default. In addition, an interactive environment is provided to try out any API methods directly, but again it is disabled by default. You must enable these as part of the setup procedures.

More details may be found in the deployment guide.

To go to the interactive REST API environment to try out any API methods, select REST API from the avatar in the upper right corner of any page.

Steps

  1. Sign in as a user with at least the Application Administrator capability global role assignment.

  2. Go to MANAGE > System in the banner.

  3. Select Enable/Disable in the REST API pull-down.
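Once enabled, clients call the API over HTTP against your MM URL (http://localhost:19980/MM by default). The endpoint path and header below are purely hypothetical, shown only to illustrate the call shape; use the interactive REST API environment to discover the real method paths:

```python
from urllib.parse import urljoin
from urllib.request import Request

def build_request(mm_url: str, endpoint: str, token: str) -> Request:
    """Compose an authenticated GET request against the MM REST API.
    The endpoint path and auth header are illustrative assumptions."""
    url = urljoin(mm_url.rstrip("/") + "/", endpoint.lstrip("/"))
    return Request(url, headers={"Authorization": "Bearer " + token})

# Hypothetical endpoint, for illustration only:
req = build_request("http://localhost:19980/MM", "rest/example", "TOKEN")
```

If your server runs on a different port or host (e.g. port 11580), only the base URL changes.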

Upgrade

The Upgrade operation allows you to apply a software patch to the server from the UI. Each patch is in the ZIP format and is a collection of files and directories organized in the same manner as the home directory of the application server.

The most critical patches are officially delivered as cumulative patches.

You should inspect the currently running Operations and either let them finish or STOP them before the upgrade.

You may disable all schedules without deleting them by selecting the schedules you wish to set in the list and right-clicking.

MIMM Cumulative Patch Upgrade

Contains patches for the application server and UI, in the format:

MIMM-CumulativePatch-1010-20200423.zip

They are delivered in the patch ZIP file in the path tomcat/webapps. These cumulative patches include an updated MM.war file which will be automatically deployed by the Tomcat web application server once there are no active operations (e.g., database maintenance or model import).
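Since a cumulative patch ZIP mirrors the application server home directory, a quick pre-flight check can confirm it actually carries the expected tomcat/webapps/MM.war entry before you upload it. This is only a sketch, not an official validation tool:

```python
import zipfile

def contains_war(patch_zip: str) -> bool:
    """Return True if the patch ZIP carries an MM.war under
    tomcat/webapps, as cumulative patches normally do."""
    with zipfile.ZipFile(patch_zip) as zf:
        return any(name.endswith("tomcat/webapps/MM.war")
                   for name in zf.namelist())
```

A patch without that entry (e.g. a Look & Feel customization under conf/resources) is still valid; it simply will not redeploy the web application.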

However, the above upgrade process will NOT wait for all users to be logged out.

The date displayed is the date of the $MM_HOME/tomcat/webapps/MM.war file, which can be slightly older than the MetaKarta cumulative patch if that patch did not provide an updated .war file.

While Tomcat is re-deploying the new .war file, users may temporarily experience this error message:

Request to the server failed.

Some MetaKarta patches will require a database upgrade. If so, you must sign in, and then the log of that database upgrade is presented.

If you wish to see an older database upgrade log, it can be retrieved from the log files in the $MM_HOME/data/logs/tomcat folder.

Steps

  1. Sign in as a user with at least the Application Administrator capability global role assignment.

  2. Go to MANAGE > System in the banner.

  3. Click on Upgrade.

  4. Browse for the patch package.

If you cannot find the location using the Browse function you must configure (as part of the installation) the available paths to present to users. More details may be found in the deployment guide.

  5. Click UPGRADE.

Examples

Sign in as Administrator and go to MANAGE > System.

Click on Upgrade.

Browse for the patch package.

If you cannot find the location using the Browse function you must configure (as part of the installation) the available paths to present to users. More details may be found in the deployment guide.

Click UPGRADE.

MetaKarta Customization

Contains Look & Feel customizations, e.g., for the logo and banner of the application server. They are delivered in the patch ZIP file in the path conf/resources.

Use the same upgrade process as for the MetaKarta cumulative patches.

Steps

  1. Sign in as a user with at least the Application Administrator capability global role assignment.

  2. Go to MANAGE > System in the banner.

  3. Click Upgrade.

  4. Browse for the patch file (.ZIP) to apply.

If you cannot find the location using the Browse function you must configure (as part of the installation) the available paths to present to users. More details may be found in the deployment guide.

Software Upgrade Specialized Tasks

Migration to Hierarchical Metadata

The earlier metamodel/profile for JSON was too limited and incomplete, as it did not model both arrays and structures, and did not capture the root object, which could be an array. This situation has been corrected in the latest product. This change has impact in many places in both MIMB and MIMM.

In MIMB, JSON is used by almost all bridges except databases.

  • File System / Data Lakes

  • File System (Delimited, Fixed Width, Excel, XML, JSON, Avro, Parquet, ORC, COBOL Copybook)

  • AWS Simple Storage Service (S3) File System

  • Azure Data Lake Storage (ADLS) File System

  • Azure Blob Storage File System

  • Google Cloud Storage (GCS) File System

  • Kafka File System

  • Swift Object Store File System

  • NoSQL databases based on JSON

  • MongoDB

  • Azure Cosmos DB NoSQL Database

  • Cassandra

  • CouchDB

  • ETL/DI/BI bridges reading/writing JSON file based data stores

  • Data Mapping Script

  • any DI tool supporting hierarchies, like Talend DI jobs (many stitched to JSON files)

  • Hadoop HiveQL

In MIMM, JSON is also used with

  • Model Browser

  • Data Mapper

  • Data Flow

  • Configuration / Connection resolutions

  • And others.

In order to ensure proper migration with no loss of metadata and no loss of linking (documentation, mappings, stitching/data flow, etc.), this upgrade is performed automatically as part of the database upgrade to this latest version.

Migration of single model databases to multi-model databases

As part of the on-going improvements in MetaKarta, many import bridges are updated to import the source metadata as multi-models, rather than huge single models. This improvement ensures that there are no size limits when importing from exceedingly large sources.

A good example of this improvement is in the database-based bridges. For example, a Hadoop based data lake may have thousands of databases, each with many thousands of data elements (tables/columns). If imported as one model, this source becomes unwieldy. However, upgrading the bridge so that it imports each database as a separate model means the object is a multi-model, and each model within is of a manageable size.

In general, one is going to first migrate from an earlier major release of MetaKarta where database model objects were single-model only, to a newer version of the product where database model objects are multi-models. Thus, first you must follow the instructions to upgrade to the new version of MetaKarta. Then, you will have a number of database model objects which are single-model imported models that must be migrated.

The migration may be performed on a per-model basis, on a folder, or on the entire repository. This last option is the recommended route. The process will create new configurations based upon the existing ones, for each configuration which contains one or more single models that are migrated.

The elements which are migrated include:

  • The imported model itself is re-harvested as a multi-model

  • Term classifications

  • Semantic mappings

  • Stitchings

  • Custom Attributes

  • Comments and Social Curation

  • Table level Relationships.

As part of the migration from single to multi model, certain structural data elements will change:

  • Schema -> Database Schema

  • Database -> Database Server.

This means that any custom attribute on Schema or Database objects cannot migrate until those same custom attributes are also assigned to the new structural data element (either Database Schema or Database Server).
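The renaming above boils down to a simple mapping. This sketch only illustrates the Schema to Database Schema and Database to Database Server correspondence described here; it is not part of the product:

```python
# Structural data element renames in the single- to multi-model migration.
TYPE_MIGRATION = {
    "Schema": "Database Schema",
    "Database": "Database Server",
}

def migrated_scope(scope: list) -> list:
    """Return a custom attribute scope with old object types replaced
    by their multi-model equivalents; other types pass through."""
    return [TYPE_MIGRATION.get(t, t) for t in scope]
```

In practice you apply this mapping manually in MANAGE > Custom Attributes, as the steps below describe.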

Steps

  1. Go to MANAGE > Custom Attributes.

  2. Select a custom attribute from the list which applies to either Schema or Database.

  3. Click CHOOSE next to Scope and include either Database Schema or Database Server.

  4. Repeat for all the other custom attributes from the list which apply to either Schema or Database.

  5. Go to MANAGE > Repository

  6. Right-click on the Repository at the root of the tree and select Operations > Migrate to multi-model databases.

Example

For this example, we used a special import directly from Oracle with two schemas (Accounts Receivable and Accounts Payable) rather than separate models from SQL Server DDL.

We ensure that we are creating a single-model database model by NOT specifying "-multi-model" in the Miscellaneous bridge parameter.

Note, this model is stitched to the existing PAYTRANS and Staging DW data stores.

Mapped using a data mapping to the Staging DW model.

The ADDRESS table is mapped semantically from the Finance glossary.

The CUSTOMER table is classified.

The CUSTOMER table is commented on, certified, endorsed and warned, and has custom attribution.

Now, let's migrate. Go to MANAGE > Repository and Right-click on the Repository at the root of the tree and select Operations > Migrate to multi-model databases.

The log presents any issues found.

There is a folder structure with the older configuration, before migration.

In the new configuration, this new model is stitched to the existing PAYTRANS and Staging DW data stores.

Mapped using a data mapping to the Staging DW model.

The CUSTOMER table is mapped semantically from the Finance glossary.

The CUSTOMER table is classified.

The CUSTOMER table is commented on, certified, endorsed and warned, and has custom attribution.

Once you have verified that the migration was successful, you may delete the $$Database migration$$ folder which will remove the old single-model populated configurations.
