
TG System Administrator’s Guide

Version 2.1


Hardware and Software Requirements


Hardware Requirements

Actual hardware requirements vary based on your data size, workload, and the features you choose to install.

Component   Minimum                                          Recommended
CPU         1.8 GHz (64-bit processor) or faster multi-core  Dual-socket multi-core, 2.0 GHz (64-bit processors) or faster
Memory*     8 GB                                             ≥ 64 GB
Storage*    20 GB                                            ≥ 1 TB; RAID10 volumes for better I/O throughput; SSD storage recommended
Network     1 Gigabit Ethernet adapter                       10 Gigabit Ethernet adapter for inter-node communication

*Actual needs depend on data size. Consult our solution architects for an estimate of memory and storage needs.


  • The TigerGraph system is optimized to take advantage of multiple cores.

  • Performance is optimal when the memory is large enough to store the full graph and to perform computations.

  • The platform works well as a single node. For high availability or scaling, a multi-node configuration is available.

Certified Operating Systems

The TigerGraph Software Suite is built on 64-bit Linux and can run on a variety of Linux 64-bit distributions. The software has been tested on the operating systems listed below. When a range of versions is given, it has been tested on the two endpoints, oldest and newest. We continually evaluate the operating systems on the market and work to update our set of supported operating systems as needed. The TigerGraph installer installs its own copies of the Java JDK and GCC, accessible only to the TigerGraph user account, to avoid interfering with any other applications on the same server.

Operating System                   On-Premises Hosting   Java JDK Version   GCC Version (C/C++)
RedHat 6.5 to 6.9 (x64)            Yes                   1.8.0_141          4.8.2
RedHat 7.0 to 7.4 (x64)            Yes                   1.8.0_141          4.8.2
CentOS 6.5 to 6.9 (x64)            Yes                   1.8.0_141          4.8.2
CentOS 7.0 to 7.4 (x64)            Yes                   1.8.0_141          4.8.2
Ubuntu 14.04 / 16.04 / 18.04 LTS   Yes                   1.8.0_141          4.8.4
Debian 8 (jessie)                  Yes                   1.8.0_141          4.8.4

Additionally, we offer Amazon Machine Images (AMI) to run on Amazon EC2. Please contact us regarding recommended configurations.

Prerequisite Software


Before offline installation, the TigerGraph system needs a few basic software packages to be present:

  1. tar, to extract files from the offline package
  2. curl, an alternative way to send query requests to TigerGraph
  3. crontab, a basic OS job-scheduling module which TigerGraph relies on
  4. uuidgen, a tool that creates a universally unique identifier for the server
  5. ip, to configure the network
  6. ssh/sshd, to connect to the server
  7. more, a tool to display the License Agreement
  8. netstat, a basic OS tool to check the network status
  9. semanage, to manage the SELinux context of ssh
  10. sshpass, if you intend to use the password login method (P method) instead of the ssh key login method (K method) to install the TigerGraph platform

If they are not present, contact your system administrator to have them installed on your target system. For example, they can be installed with one of the following commands.

# CentOS or RedHat:
sudo yum install tar curl cronie iproute util-linux-ng net-tools coreutils openssh-clients openssh-server sshpass policycoreutils-python

# Ubuntu or Debian:
sudo apt install tar curl cron iproute util-linux uuid-runtime net-tools coreutils openssh-client openssh-server sshpass policycoreutils
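As a quick sanity check before starting the installer, you can verify that each prerequisite command is on the PATH. This is a minimal sketch of ours, not a TigerGraph tool; `sshd` is omitted because it normally lives in /usr/sbin rather than on a user's PATH.

```shell
# check_prereqs: print the names of any commands from the given list
# that are not found on the PATH (empty output means all are present).
check_prereqs() {
  missing=""
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
  done
  echo "$missing"
}

# The offline-install prerequisites listed above (sshpass is only
# needed for the password (P) login method):
check_prereqs tar curl crontab uuidgen ip ssh more netstat sshpass
```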


If you are running TigerGraph on a multi-node cluster, you must install, configure, and run the NTP (Network Time Protocol) daemon service. This service synchronizes system time among all cluster nodes.
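For example, on a systemd-based CentOS/RedHat node, NTP could be set up along these lines. This is a sketch only; package and service names vary by distribution (Ubuntu/Debian use `apt install ntp`, and newer distributions ship chrony instead).

```shell
# Install, enable, and start the NTP daemon on this node,
# then verify that it can reach its time-server peers.
sudo yum install -y ntp
sudo systemctl enable ntpd
sudo systemctl start ntpd
ntpq -p   # lists peers; at least one should be marked as selected (*)
```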


If you are running TigerGraph on a multi-node cluster, you must configure the iptables/firewall rules so that all TCP ports are open among all cluster nodes.
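As an illustration, with iptables this could look like the following sketch, where 10.0.1.0/24 stands in for your cluster's private subnet (an assumed placeholder; firewalld or ufw setups differ, so adjust to your environment).

```shell
# Accept all TCP traffic from the other cluster nodes (run on every node).
sudo iptables -A INPUT -p tcp -s 10.0.1.0/24 -j ACCEPT
# Persist the rule across reboots (CentOS/RedHat with the
# iptables-services package installed).
sudo service iptables save
```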


In an on-premises installation, the system is fully functional without a web browser. To run the optional browser-based TigerGraph GraphStudio User Interface or Admin Portal, you need the Google Chrome browser.


end of Hardware and Software Requirements



end of Platform Installation


Configuring a High Availability (HA) Cluster


A TigerGraph system with High Availability (HA) is a cluster of server machines which uses replication to provide continuous service when one or more servers are not available or when some service components fail. TigerGraph HA service provides load balancing when all components are operational, as well as automatic failover in the event of a service disruption. One TigerGraph server consists of several components (e.g., GSE, GPE, RESTPP).
The default HA configuration has a replication factor of 2, meaning that a fully-functioning system maintains two copies of the data, stored on separate machines.
In advanced HA setup, users can set a higher replication factor.

System Requirements

  • An HA cluster needs at least 3 server machines, which can be physical or virtual. This is true even if the system has only one graph partition.
  • For a distributed system with N partitions (where N > 1), the system must have at least 2N machines.
  • The same version of the TigerGraph software package is installed on each machine.
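The sizing rules above amount to: minimum machines = partitions × replication factor, with a floor of 3 for any HA cluster. A small illustrative helper (our own sketch, not a TigerGraph tool):

```shell
# min_machines PARTITIONS REPLICAS
# Minimum cluster size for an HA setup: partitions * replicas,
# but never fewer than 3 machines.
min_machines() {
  n=$(( $1 * $2 ))
  [ "$n" -lt 3 ] && n=3
  echo "$n"
}

min_machines 1 2   # non-distributed HA: still needs 3 machines
min_machines 3 2   # 3 partitions, replication factor 2 -> 6 machines
```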


  1. HA configuration should be done immediately after system installation and before deploying the system for database use.
  2. To convert a non-HA system to an HA system, the current version of TigerGraph requires that all the data and metadata be cleared, and all TigerGraph services be stopped. This limitation will be removed in a future release.


Starting from version 2.1, configuring an HA cluster is integrated into platform installation; see the TigerGraph Platform Installation Guide v2.1 for details.

(A) Install TigerGraph

Follow the instructions in the TigerGraph Platform Installation Guide v2.1 to install the TigerGraph system on your cluster.

In the instructions below, all the commands need to be run as the TigerGraph OS user (e.g., tigergraph), on the machine designated “m1” during the cluster installation.

(B) Stop the TigerGraph Service

Be sure you are logged in as the TigerGraph OS user on machine “m1”. Before setting up HA or changing the HA configuration, the current TigerGraph system must be fully stopped. If the system has any graph data, clear out the data (e.g., with “gsql DROP ALL”).

Stopping all TigerGraph services
gadmin stop ts3 -fy
gadmin stop all -fy
gadmin stop admin -fy

(C) Enable HA

After the cluster installation, create an HA configuration using the following command:

gadmin --enable ha

This command will automatically generate a configuration for a distributed (partitioned) database with an HA system replication factor of 2. Some individual components may have a higher replication factor.

Sample output:

Successful HA configuration
tigergraph@m1$ gadmin --enable ha
[FAB ][m3,m2] mkdir -p ~/.gium
[FAB ][m3,m2] scp -r -P 22 ~/.gium ~/
[FAB ][m3,m2] mkdir -p ~/.gsql
[FAB ][m3,m2] scp -r -P 22 ~/.gsql ~/
[FAB ][m3,m2] mkdir -p ~/.venv
[FAB ][m3,m2] scp -r -P 22 ~/.venv ~/
[FAB ][m3,m2] cd ~/.gium; ./
[RUN ] /home/tigergraph/.gsql/
[FAB ][m3,m2] mkdir -p /home/tigergraph/.gsql/
[FAB ][m3,m2] scp -r -P 22 /home/tigergraph/.gsql/all_log_cleanup /home/tigergraph/.gsql/
[FAB ][m3,m2] mkdir -p /home/tigergraph/.gsql/
[FAB ][m3,m2] scp -r -P 22 /home/tigergraph/.gsql/ /home/tigergraph/.gsql/
[FAB ][m1,m3,m2] /home/tigergraph/.gsql/
[FAB ][m1,m3,m2] rm -rf /home/tigergraph/tigergraph_coredump
[FAB ][m1,m3,m2] mkdir -p /home/tigergraph/tigergraph/logs/coredump
[FAB ][m1,m3,m2] ln -s /home/tigergraph/tigergraph/logs/coredump /home/tigergraph/tigergraph_coredump

If the HA configuration fails (e.g., if the cluster doesn’t satisfy the HA requirements), the command will stop running with a warning.

HA configuration failure
tigergraph@m1$ gadmin --enable ha
Detect config change. Please run 'gadmin config-apply' to apply.
ERROR:root: To enable HA configuration, you need at least 3 machines.
Enable HA configuration failed.

(D) [Optional] Configure Advanced HA

In this optional additional step, advanced users can run several “gadmin --set” commands to control the replication factor and manually specify the host machines for each TigerGraph component. The table below shows the recommended settings for each component. See the examples later in this section for different configuration cases.

Component           Configuration Key     Suggested Setting
ZooKeeper           zk.servers            3 or 5 hosts
Dictionary Server   dictserver.servers    3 or 5 hosts
Kafka               kafka.servers         same hosts as GPE
Kafka               kafka.num.replicas    2 or 3 replicas
GSE                 gse.servers           every host
GSE                 gse.replicas          2 replicas
GPE                 gpe.servers           every host
GPE                 gpe.replicas          2 replicas
REST                restpp.servers        every host


Example: there is a 3-machine cluster with machines m1, m2, and m3. Kafka, GPE, GSE, and RESTPP are all on m1 and m2, with replication factor 2. This is a non-distributed-graph HA setup.

Example: 3-machine non-distributed HA cluster
gadmin --set zk.servers m1,m2,m3
gadmin --set dictserver.servers m1,m2,m3
gadmin --set dictserver.base_ports 17797,17797,17797
gadmin --set kafka.servers m1,m2
gadmin --set kafka.num.replicas 2
gadmin --set gse.replicas 2
gadmin --set gpe.replicas 2
gadmin --set gse.servers m1,m2
gadmin --set gpe.servers m1,m2
gadmin --set restpp.servers m1,m2

(E) Install Package

Once the HA configuration is done, proceed to install the package from the
first machine (named “m1” in the cluster installation configuration).

gadmin pkg-install reset -fy


The table below shows how to configure the common setups. Note that if you are converting the system from another configuration, you must stop the old TigerGraph system first. Below, A, B, C, etc. refer to the steps in the section above, and X is the number of servers in the cluster.

Non-distributed graph with HA
Cluster configuration: Each server machine holds the complete graph.
  • For both initial installation and reconfiguration: (A) → B → C → D → E. While in D, set all replicas to X (the number of servers), e.g.,
    gpe.replicas = X
    gse.replicas = X
    restpp.replicas = X
  • Note: (A) means A is needed only in the initial installation.

Distributed graph without HA
Cluster configuration: Graph is partitioned among all the cluster servers.
  • Note: no HA is equivalent to replication factor 1.
  • For initial installation, skip B, C, D, and E.
  • For reconfiguration: B → C → D → E. While in D, set all replicas to 1, e.g.,
    gpe.replicas = 1
    gse.replicas = 1
    restpp.replicas = 1

Distributed graph with HA
Cluster configuration: Graph is partitioned with replication factor N; the number of partitions Y equals X/N.
  • For both initial installation and reconfiguration: (A) → B → C → D → E. While in D, set all replicas to N, e.g.,
    gpe.replicas = N
    gse.replicas = N
  • Note: (A) means A is needed only in the initial installation.

end of HA Configuration


Activating a System-Specific License

Version 1.1 to 2.1


This guide provides step-by-step instructions for activating or renewing a TigerGraph license by generating and installing a license key unique to that TigerGraph system. This document applies to both non-distributed and distributed systems. In this document, a cluster acting cooperatively as one TigerGraph database is considered one system.

A valid license key activates the TigerGraph system for normal operation. A license key has a built-in expiration date and is valid on only one system. Some license keys may apply other restrictions, depending on your contract. Without a valid license key, a TigerGraph system can perform certain administration functions, but database operations will not work.

To activate a new license, a user first configures their TigerGraph system. The user then collects the fingerprint of the TigerGraph system (the so-called license seed) using a TigerGraph-provided utility program. The collected materials are then sent to TigerGraph or an authorized agent via email or web form. TigerGraph certifies the license based on the collected materials and sends a license key back to the user. The user then installs the license key on their system using another TigerGraph command. A new license key (e.g., one with a later expiration) can be installed on a live system that already has a valid license; the installation process does not interrupt database operations.

If your system is currently using an older string-based license key which does not use a license seed, please contact TigerGraph for the procedure to upgrade to the new system-specific license type.

Step-by-Step Guide

Note: Before beginning the license activation process, the TigerGraph package must be installed on each server, and the TigerGraph system must be configured with gadmin.

  1. Collect the fingerprint of the whole TigerGraph system using the command tg_lic_seed, which can be executed on any machine in the system. The command tg_lic_seed packs all the collected data into a local file (named tigergraph_seed). When tg_lic_seed has completed successfully, it outputs the path of the collected data to the console.

    Collect Fingerprint of TigerGraph System

    $ tg_lic_seed

    seed file is ready at /home/tigergraph/tigergraph/tigergraph_seed

  2. Send the tigergraph_seed file to TigerGraph, either through our license activation web portal (preferred) or by email.

    If using email, please include the following information:

    1. Company/Organization name
    2. Contract number. If you do not know your contract number, please contact your sales representative.
  3. If the contract and license seed are in good order, a new license key file will be certified and sent back to you.
  4. Copy the license key file to a directory on the TigerGraph system where the TigerGraph linux user has read permission.
  5. To install the license key, run the command tg_lic_install, specifying the path to the license key file.

    Install License

    $ tg_lic_install
    Usage: tg_lic_install <license_path>

    If installation completes successfully, the message “install license successfully” will be displayed in the console. Otherwise, the message “failed to install license” will be displayed.

Checking License Information

After a license key has been installed successfully on a TigerGraph system, the information of the installed license is available via the following REST API:

Get License Information

$ curl -X GET "localhost:9000/showlicenseinfo"
{
  "message": "",
  "error": false,
  "version": {
    "schema": 0,
    "api": "v2"
  },
  "code": "",
  "results": [
    {
      "Days remaining": 10160,
      "Expiration date": "Mon Oct  2 04:00:00 2045\n"
    }
  ]
}

end of System-Specific License Activation



end of User Privileges and Authentication

LDAP Authentication


Lightweight Directory Access Protocol (LDAP) is an industry-standard protocol for accessing and maintaining directory information services across a network. Typically, LDAP servers are used to provide a centralized user authentication service. The TigerGraph system supports LDAP authentication by allowing a TigerGraph user to log in with an LDAP username and credentials. During the authentication process, the GSQL server connects to the LDAP server and requests the LDAP server to authenticate the user.

Supported Features

GSQL LDAP authentication supports any LDAP server that follows LDAPv3 protocol. StartTLS/SSL connection is also supported.

SASL authentication is not yet supported. Some LDAP servers are configured to require a client certificate upon connection. Client certificates are not yet supported in GSQL LDAP authentication.

Mapping Users From LDAP to GSQL

To manage user roles and privileges, the TigerGraph GSQL server employs two concepts: proxy user and proxy group.

Proxy User

A proxy user is a GSQL user created to correspond to an external LDAP user. When operating within GSQL, the external LDAP user’s roles and privileges are determined by the proxy user.

Proxy Group

A proxy group is a GSQL user group that is used to manage a group of proxy users who share similar properties/attributes in LDAP.

An existing LDAP user can log in to GSQL only when the user matches at least one of the existing proxy groups’ criteria. Once the criteria are satisfied, a proxy user is created for the LDAP user. The roles and privileges of the proxy user are at least as permissive as those of the proxy group(s) to which the user belongs. It is also possible to change the roles of a specific proxy user independently. When the roles and privileges of a proxy group change, the roles and privileges of all the proxy users belonging to this proxy group change accordingly.

Configure GSQL LDAP Authentication

To configure a TigerGraph system to use LDAP, there are two main configuration steps:

  1. Configure the LDAP Connection.
  2. Configure GSQL Proxy Groups and Users.

In order to choose and specify your LDAP configuration settings, you must understand some basic LDAP concepts. One reference for LDAP concepts is Basic LDAP Concepts.

Step 1 – Configure the LDAP Connection

To enable and configure LDAP, run three commands.

  1. Configure LDAP:

    gadmin --configure ldap

    The gadmin program will then prompt the user for the settings for several LDAP configuration parameters.

  2. Apply the configuration:

    gadmin config-apply

  3. Restart the gsql server:

    gadmin restart gsql -y

An example configuration is shown below.

Example of gadmin --configure ldap
$ gadmin --configure ldap
# Enable LDAP authentication: default false
security.ldap.enable [False]: true

# Configure LDAP server hostname: default localhost
security.ldap.hostname [localhost]:

# Configure LDAP server port: default 389
security.ldap.port [389]: 389

# Configure LDAP search base DN, the root node to start the LDAP search for user authentication: must specify
security.ldap.base_dn [dc=tigergraph,dc=com]: dc=tigergraph,dc=com

# Configure the LDAP search filter for user authentication: default (objectClass=*)
security.ldap.search_filter [(objectClass=*)]:

# Configure the username attribute name in LDAP server: default uid
security.ldap.username_attribute [uid]: uid

# Configure the DN of LDAP user who has read access to the base DN specified above. Empty if everyone has read access to LDAP data: default empty
security.ldap.admin_dn [cn=Manager,dc=tigergraph,dc=com]: cn=Manager,dc=tigergraph,dc=com

# Configure the password of the admin DN specified above. Needed only when admin_dn is specified: default empty
security.ldap.admin_password [secret]: secret

# Enable SSL/StartTLS for LDAP connection [none/ssl/starttls]: default none [starttls]: none

# Configure the truststore path for the certificates used in SSL: default empty [/tmp/ca_server.pkcs12]:

# Configure the truststore format [JKS/PKCS12]: default JKS [pkcs12]:

# Configure the truststore password: default changeit [test]:

# Configure to trust all LDAP servers (unsafe): default false [False]: false

Below is an explanation of each configuration parameter.

security.ldap.enable
Set to “true” to enable LDAP; “false” to disable LDAP.

security.ldap.hostname
Hostname of the LDAP server.

security.ldap.port
Port of the LDAP server.

security.ldap.base_dn
Base DN (Distinguished Name), needed for GSQL to perform the LDAP search.


security.ldap.search_filter
A search filter is optional. When configured, the search is performed only for the LDAP entries that satisfy the filter. The filter must strictly follow the LDAP filter format, i.e., the condition must be wrapped in parentheses, etc. A description of the different types of filters is available under LDAP Filters; the official specification for LDAP filter strings is RFC 4515.

security.ldap.username_attribute
This specifies the LDAP attribute to search when the GSQL server looks up usernames in the LDAP server upon login. For example, in the configuration shown above, when a user logs in with the “-u john” option, the GSQL server will search the “uid” attribute in LDAP to find “john” and check the credentials only after “john” is found.
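Before enabling LDAP in GSQL, it can help to confirm from the command line that the base DN, search filter, and username attribute really locate the user. A hedged sketch using the standard OpenLDAP client; the hostname, DNs, and password below are the example values from the configuration transcript above, not required values.

```shell
# Look up user "john" by the uid attribute under the configured base DN.
# An entry in the output means a GSQL search with the same settings
# will also find the user.
ldapsearch -x -H ldap://localhost:389 \
  -D "cn=Manager,dc=tigergraph,dc=com" -w secret \
  -b "dc=tigergraph,dc=com" \
  "(&(objectClass=*)(uid=john))"
```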

security.ldap.admin_dn & security.ldap.admin_password
These options are needed when the LDAP server is not publicly readable. In this case, the admin DN and its corresponding password must be specified in order for the GSQL server to connect to the LDAP server.

Secure connection protocol
When set to “none”, TigerGraph uses an insecure LDAP connection. This can be changed to a secure connection protocol: “starttls” or “ssl”.

Truststore path & password
When starttls or ssl is used, a truststore path as well as its password need to be configured.

Truststore format
Currently, the TigerGraph system supports two truststore formats: pkcs12 and jks.

Trust all LDAP servers
When this unsafe option is enabled, the GSQL server will blindly trust any LDAP server.

Step 2 – Configure GSQL Proxy Groups and Users

This section explains how to configure a GSQL proxy group in order to allow LDAP user authentication.

Configure Proxy Group

A GSQL proxy group is created by the CREATE GROUP command with a given proxy rule. For example, assume there is an attribute called “role” in the LDAP directory, and “engineering” is one of the “role” attribute values. We can create a proxy group with the proxy rule “role=engineering”. Different roles can then be assigned to the proxy group; an example is shown below. When a user logs in, the GSQL server searches for the user’s entry in the LDAP directory. If the user’s LDAP entry matches the proxy rule of an existing proxy group, a proxy user is created, and the user logs in as that proxy user.

# create a proxy group
CREATE GROUP developers PROXY "role=engineering" // Any user in LDAP with role=engineering is proxied to the group 'developers'

# grant role to proxy group
GRANT ROLE querywriter ON GRAPH computerNet TO developers

The SHOW GROUP command will display information about a group. The DROP GROUP command deletes the definition of a group.

# show the current groups
SHOW GROUP

# delete a proxy group
DROP GROUP developers

Only users with the admin or superuser role can create, show, or drop a group.

Proxy User

Nothing needs to be configured for a proxy user. As long as the proxy rule matches, the proxy user will be automatically created upon login. A proxy user is very similar to a normal user. The minor differences are that a proxy user cannot change their password in GSQL and that a proxy user comes with default roles inherited from the proxy group that they belong to.

end of LDAP Authentication

Single Sign-On


The Single Sign-On (SSO) feature in TigerGraph enables you to use your organization’s identity provider (IDP) to authenticate users to access the TigerGraph GraphStudio and Admin Portal UI. If your IDP supports the SAML 2.0 protocol, you should be able to integrate your identity provider with TigerGraph Single Sign-On.

Currently we have verified the following identity providers: Okta and Auth0.

In order to use Single Sign-On, you need to perform four steps:

  1. Configure your identity provider to create a TigerGraph application.
  2. Provide information from your identity provider to enable TigerGraph Single Sign-On.
  3. Create user groups with proxy rules to authorize Single Sign-On users.
  4. Change the password of the default tigergraph user to be other than the default, if you haven’t done so already.

We assume you already have TigerGraph up and running, and that you can access the GraphStudio UI through a web browser using the URL http://<tigergraph-machine-hostname>:14240. If you enabled an SSL connection, change http to https. If you changed the nginx port of the TigerGraph system, replace 14240 with the port you have set.

1 Configure Identity Provider

Here we provide detailed instructions for identity providers that we have verified. Please consult your IT or security department for how to configure the identity provider for your organization if it is not listed here.

After you finish configuring your identity provider, you will get an Identity Provider Single Sign-On URL, an Identity Provider Entity Id, and an X.509 certificate file. You need these 3 things to configure TigerGraph next.


Okta

After logging into Okta as the admin user, click the admin button at the top-right corner. Click Add Applications in the right menu, then click the Create New App button in the left toolbar.

In the pop-up window, choose SAML 2.0 and continue.

Input TigerGraph (or whatever application name you want to use) in App Name, and continue. Upload a logo if you like.

Enter the Assertion Consumer Service URL (called the Single sign on URL in Okta) and the SP Entity ID. Both are URLs in our case. You need to know the hostname of the TigerGraph machine: if you can visit the GraphStudio UI through a browser, the URL contains the hostname, which can be either an IP or a domain name. Both the Assertion Consumer Service URL and the SP entity id URL are derived from this hostname.


Scroll to the bottom for Group Attribute Statements. Usually you want to grant roles to users based on their user group. You can give a name to your attribute statement; here we use group. For the filter, we want to return all group attribute values of all users, so we use Regex .* as the filter. Continue after you set up everything.

In the final step, choose whether you want to integrate your app with Okta or not, then finish the wizard.

Now your Okta identity provider settings are finished. Click the View Setup Instructions button to gather the information you will need to set up TigerGraph Single Sign-On.

Here you want to save the Identity Provider Single Sign-On URL and the Identity Provider Issuer (usually known as the Identity Provider Entity Id). Download the certificate file as okta.cert, rename it as idp.cert, and put it somewhere on the TigerGraph machine. Let’s assume you put it under your home folder: /home/tigergraph/idp.cert. If you installed TigerGraph in a cluster, you should put it on the machine where the GSQL server is installed (usually it’s the machine whose alias is m1).

Finally, return to the previous page, open the assignments tab, and assign people or groups in your organization to access this application.

end of Okta configuration instructions. Jump to Step 2: Enable Single Sign-On for TigerGraph


Auth0

After logging into Auth0, use the left navigation bar to create a new client. In the pop-up window, enter TigerGraph (or whatever application name you want to use) in the Name input box, choose Single Page Web Application, and then create the client. Then, in the Shown Clients list, open the settings of your newly created TigerGraph client.

Scroll down to the bottom of the settings section, and click Show Advanced Settings. Open the certificates tab and download the certificate, choosing a suitable format in the chooser list. Rename the downloaded file as idp.cert and put it somewhere on the TigerGraph machine. Let’s assume you put it under your home folder: /home/tigergraph/idp.cert. If you installed TigerGraph in a cluster, you should put it on the machine where the GSQL server is installed (usually it’s the machine whose alias is m1).

Next, copy the text in the SAML Protocol URL text box. This is the Identity Provider Single Sign-On URL that will be used to configure TigerGraph in an upcoming step.

Scroll up to the top of the page and switch on the toggle that enables the SAML2 add-on for this client. In the pop-up window, enter the Assertion Consumer Service URL in the Application Callback URL input box. Scroll down to the end of the settings JSON code, click the debug button, and log in as any existing user in your organization in the pop-up login page.

If the login succeeds, the SAML response will be shown in decoded XML format. Scroll down to the attributes section. Here you will see some attribute names, which you will use to set proxy rules when creating groups in an upcoming configuration step.

Return to the previous pop-up window and copy the issuer value. This is the Identity Provider Entity Id that will be used to configure TigerGraph in an upcoming step.

Finally, scroll to the bottom of the pop-up window, save the add-on settings, and close the pop-up window.

end of Auth0 configuration instructions. Jump to Step 2: Enable Single Sign-On for TigerGraph

2 Enable Single Sign-On in TigerGraph

Prepare certificate and private key on TigerGraph machine

According to the SAML standard trust model, a self-signed certificate is acceptable. This is different from configuring an SSL connection, where a CA-authorized certificate is mandatory if the system goes to production.

There are multiple ways to create a self-signed certificate.  One example is shown below.

First, use the following command to generate a private key in PKCS#1 format and an X.509 certificate file. In the example below, the Common Name value should be your server hostname (IP or domain name).

Self-Signed Certificate generation example using openssl
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /home/tigergraph/sp-pkcs1.key -out /home/tigergraph/sp.cert

Generating a 2048 bit RSA private key
writing new private key to '/home/tigergraph/sp-pkcs1.key'
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:California
Locality Name (eg, city) []:Redwood City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:TigerGraph Inc.
Organizational Unit Name (eg, section) []:GLE
Common Name (e.g. server FQDN or YOUR name) []: tigergraph-machine-hostname
Email Address []

Second, convert your private key from PKCS#1 format to PKCS#8 format:

openssl pkcs8 -topk8 -inform pem -nocrypt -in /home/tigergraph/sp-pkcs1.key -outform pem -out /home/tigergraph/sp.pem

Finally, change the certificate and private key file to have permission 600 or less. (The tigergraph user can read or write the file; no other user has any permission.)

chmod 600 /home/tigergraph/sp.*
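You can quickly confirm the key is in the format gadmin expects: a PKCS#8 PEM begins with “BEGIN PRIVATE KEY”, whereas the PKCS#1 original begins with “BEGIN RSA PRIVATE KEY”. A small helper sketch of ours (the `is_pkcs8` name is not a TigerGraph command):

```shell
# is_pkcs8 FILE: succeed (exit 0) if FILE looks like an unencrypted
# PKCS#8 PEM private key, which is what the SSO configuration requires.
is_pkcs8() {
  head -n 1 "$1" | grep -q '^-----BEGIN PRIVATE KEY-----$'
}

if is_pkcs8 /home/tigergraph/sp.pem; then
  echo "sp.pem is PKCS#8"
else
  echo "sp.pem is not PKCS#8 -- rerun the openssl pkcs8 conversion"
fi
```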

Enable and configure Single Sign-On Using Gadmin

From a TigerGraph machine, run the following command: gadmin --configure sso.saml

Answering the questions is straightforward; an example is shown below.

configure sso.saml example
$ gadmin --configure sso.saml
Enter new values or accept defaults in brackets with Enter.

Enable SAML2-based SSO: default false
security.sso.saml.enable [False]: true

Hostname of TigerGraph system: default
security.sso.saml.sp.hostname []: tigergraph-machine-hostname

Path to host machine’s x509 Certificate filepath: default empty
security.sso.saml.sp.x509cert: /home/tigergraph/sp.cert

Path to host machine’s private key filepath. Require PKCS#8 format (start with “BEGIN PRIVATE KEY”).
security.sso.saml.sp.private_key: /home/tigergraph/sp.pem

Identity Provider Entity ID: default
security.sso.saml.idp.entityid []:

Single Sign-On URL: default
security.sso.saml.idp.sso.url []: http://identity.provider.single-sign-on.url

Identity Provider’s x509 Certificate filepath: default empty
security.sso.saml.idp.x509cert: /home/tigergraph/idp.cert

Sign AuthnRequests before sending to Identity Provider: default true
security.sso.saml.advanced.authn_request.signed [True]:

Require Identity Provider to sign assertions: default true
security.sso.saml.advanced.assertions.signed [True]:

Require Identity Provider to sign SAML responses: default true
security.sso.saml.advanced.responses.signed [True]: false

Sign Metadata: default true
security.sso.saml.advanced.metadata.signed [True]:

Signature algorithm [rsa-sha1/rsa-sha256/rsa-sha384/rsa-sha512]: default rsa-sha256
security.sso.saml.advanced.signature_algorithm [rsa-sha256]:

Authentication context (comma separate multiple values)
security.sso.saml.advanced.requested_authn_context [urn:oasis:names:tc:SAML:2.0:ac:classes:Password]:

Test servers with supplied settings? [Y/n] y

Success. All settings are valid
Save settings? [y/N] y


The reason we change security.sso.saml.advanced.responses.signed to false is that some identity providers (e.g., Auth0) don’t support signing assertions and responses at the same time. If your identity provider supports signing both, we strongly suggest you leave it as true.

After making the configuration settings, apply the config changes, and restart gsql.

$ gadmin config-apply
$ gadmin restart gsql -y

3 Create user groups with proxy rules to authorize Single Sign-On

In order to authorize Single Sign-On users, you need to create user groups in GSQL with proxy rules and grant roles on graphs to those user groups.

In TigerGraph Single Sign-On, we support two types of proxy rules: nameid equations and attribute equations. Attribute equations are more commonly used, because user group information is usually transferred as attributes in your identity provider's SAML assertions. In the Okta identity provider configuration example, it is transferred by an attribute statement. By granting roles to a user group, all users matching the proxy rule will be granted all the privileges of that role. If you want to grant some privilege to one specific Single Sign-On user, you can use a nameid equation to do so.

Single User Proxy

For example, suppose you want to create a user group SuperUserGroup containing only the user with a particular nameid, and grant the superuser role to that user. You can do so with the following command:

GSQL > GRANT ROLE superuser TO SuperUserGroup
Role "superuser" is successfully granted to user(s): SuperUserGroup
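
Note that the transcript above shows only the GRANT step. Assuming the nameid proxy syntax parallels the attribute example in the next section, the group itself would first be created with a proxy rule like this (the nameid value is a hypothetical placeholder):

```
GSQL > CREATE GROUP SuperUserGroup PROXY "nameid=some.admin@example.com"
```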

User Group Proxy

Suppose you want to create a user group HrDepartment which corresponds to the identity provider Single Sign-On users having the group attribute value “hr-department”, and want to grant the queryreader role to that group on the graph HrGraph:

GSQL > CREATE GROUP HrDepartment PROXY "group=hr-department"
GSQL > GRANT ROLE queryreader ON GRAPH HrGraph TO HrDepartment
Role "queryreader" is successfully granted to user(s): HrDepartment

4 Change Password Of Default User

Don't forget to enable user authorization in TigerGraph by changing the password of the default superuser tigergraph to something other than its default value. If you do not change the password, then every time you visit the GraphStudio UI, you will automatically be logged in as the superuser tigergraph.

GSQL > change password
New Password : ********
Re-enter Password : ********
Password has been changed.
GSQL > exit

Testing Single Sign-On

Now you have finished all configurations for Single Sign-On. Let’s test it.

Visit the GraphStudio UI in your browser. You should see a Login with SSO button on top of the login panel:

Clicking the button will navigate to your identity provider's login portal. If you have already logged in there, you will be redirected back to GraphStudio immediately. After about 10 seconds, the verification should finish, and you are authorized to use GraphStudio. If you haven't logged in at your identity provider yet, you will need to log in there. After logging in successfully, you will see your Single Sign-On username when you click the User icon at the upper right of the GraphStudio UI.

If, after redirecting back to GraphStudio, you return to the login page with the error message shown below, that means the Single Sign-On user doesn't have access to any graph. Please double-check your user group proxy rules and the roles you have granted to the groups.

If your Single Sign-On fails with the error message shown below, that means either some configuration is inconsistent between TigerGraph and your identity provider, or something unexpected happened.

You can check your GSQL log to investigate. First, find your GSQL log file with the following:

$ gadmin log | grep GSQL_LOG
GSQL : /home/tigergraph/tigergraph/dev/gdk/gsql/logs/GSQL_LOG

Then, grep the SAML authentication-related logs:

cat /home/tigergraph/tigergraph/dev/gdk/gsql/logs/GSQL_LOG | grep SAMLAuth
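
As a sketch of the filtering step, the pipeline below runs the same grep against a tiny stand-in log file; the log lines are invented for illustration, and real GSQL log entries will differ:

```shell
# Build a miniature stand-in for GSQL_LOG, then keep only the
# SAML-related error lines, as you would when debugging SSO.
log=$(mktemp)
printf '%s\n' \
  'I0101 SAMLAuth request sent to identity provider' \
  'E0101 SAMLAuth ERROR: invalid response signature' \
  'I0101 RESTPP heartbeat ok' > "$log"
errs=$(grep SAMLAuth "$log" | grep -i error)
echo "$errs"   # prints the single SAMLAuth ERROR line
rm -f "$log"
```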

Focus on the latest errors. Usually the text is self-descriptive. Follow the error message and try to fix TigerGraph's or your identity provider's configuration. If you encounter any errors that are not clear, please contact TigerGraph support.
end of Single Sign-On


end of Encrypting SSL Connections

Encrypting Data At Rest

Version 2.0 to 2.1

Last update:

The TigerGraph graph data store uses a proprietary encoding scheme which both compresses the data and obscures it unless the user knows the encoding/decoding scheme. In addition, the TigerGraph system supports integration with industry-standard methods for encrypting data when stored on disk ("data at rest").

Encryption Levels

Data at rest encryption can be applied at several different levels. A user can choose to use one or more levels.

Encryption Level | Description | TigerGraph Support
Hardware | Use specialized hard disks which perform automatic encryption on write and decryption on read (by authorized OS users). | Invisible to TigerGraph
Kernel-level file system | Use Linux built-in utilities to encrypt data. Root privilege required. | Invisible to TigerGraph
User-level file system | Use Linux built-in utilities and customized libraries to encrypt data. Root privilege is not required. | Invisible to TigerGraph

Kernel-level Encryption

File system encryption employs advanced encryption algorithms. Some tools allow the user to select from a menu of encryption algorithms. Encryption can be done either in kernel mode or in user mode. To run in kernel mode, superuser permission is required.

Since Linux 2.6, the kernel has included device-mapper, an infrastructure that provides a generic way to create virtual layers of block devices, with transparent block encryption using the kernel crypto API.

In Ubuntu, full-disk encryption is an option during the OS installation process. For other Linux distributions, the disk can be encrypted with dm-crypt. A commonly used front-end utility is cryptsetup, which is licensed under the GPL and is included with some distributions, such as Ubuntu.

User-Level Encryption

If root privilege is not available, a workaround is to use FUSE (Filesystem in User Space) to create a user-level filesystem running on top of the host operating system. While the performance may not be as good as running in kernel mode, there are more options available for customization and tuning.

Example 1: Kernel-mode file system encryption with dm-crypt

In this example, we use dm-crypt to provide kernel-mode file system encryption. The dm-crypt utility is widely available and offers a choice of encryption algorithms. It can also be set to encrypt various units of storage: a full disk, partitions, logical volumes, or individual files.

The basic idea of this solution is to create a file, map an encrypted file system to it, and mount it as a storage directory for TigerGraph with R/W permission only to authorized users.


Before you start, you will need a Linux machine on which

  • you have root permission,
  • the TigerGraph system has not yet been installed,
  • and you have sufficient disk space for the TigerGraph data you wish to encrypt. This may be on your local disk or on a separate disk you have mounted.


  1. Install cryptsetup (cryptsetup is included with Ubuntu, but users of other operating systems may need to install it, e.g., with yum).
  2. Install the TigerGraph system.
  3. Grant sudo privilege to the TigerGraph OS user.
  4. Stop all TigerGraph services with the following commands:

    gadmin stop -y

    gadmin stop admin -y
  5. Acting as the tigergraph OS user, run the following export commands to set variables. Replace the placeholders enclosed in angle brackets <…> with the values of your choice:

    # The username for the TigerGraph database system, for example: tigergraph
    export db_user='<username>'

    # The path of the encrypted file to be created for TigerGraph storage, for example: /home/tigergraph/secretfs
    export encrypted_file_path='<path-to-encrypted-file>'

    # The size of the encrypted file to be created (used by the dd command), for example: 60G
    export encrypted_file_size=<storage-size>

    # The password for the encrypted file, for example: DataAtRe5tPa55w0rd
    export encryption_password='<password>'

    # The root directory for TigerGraph, for example: $HOME/tigergraph
    export tigergraph_root="<tigergraph-root>"

    # Set the first available loop device for encrypted file mapping
    export loop_device=$(losetup -f)

  6. Create a file for TigerGraph data storage.

    dd of=$encrypted_file_path bs=$encrypted_file_size count=0 seek=1

  7. Change the permission of the file so that only the owner of the file (that is, only the tigergraph user who created the file in the previous step) will be able to access it:

    chmod 600 $encrypted_file_path

  8. Associate a loopback device with the file:

    sudo losetup $loop_device $encrypted_file_path

  9. Encrypt storage in the device. cryptsetup will use the Linux device mapper to create, in this case, /dev/mapper/tigergraph_gstore. Initialize the volume and interactively set the password you stored in encryption_password:

    sudo cryptsetup -y luksFormat $loop_device

    If you are trying to automate the process with a script running in a root TTY session, you may use the following command instead:

    echo "$encryption_password" | cryptsetup -y luksFormat $loop_device

  10. Open the partition, and create a mapping to /dev/mapper/tigergraph_gstore:

    sudo cryptsetup luksOpen $loop_device tigergraph_gstore

    If you are trying to automate the process with a script running in a root TTY session, you may use the following command instead:

    echo "$encryption_password" | cryptsetup luksOpen $loop_device tigergraph_gstore

  11. Clear the password from bash variables and bash history.

    The following commands may clear your previous bash histories as well. Instead, you may edit ~/.bash_history to selectively delete the related entries.

    unset encryption_password
    history -c
    history -w

  12. Create a file system and verify its status:

    sudo mke2fs -j -O dir_index /dev/mapper/tigergraph_gstore

  13. Mount the new file system to /mnt/secretfs:

    sudo mkdir -p /mnt/secretfs
    sudo mount /dev/mapper/tigergraph_gstore /mnt/secretfs

  14. Change the permission to 700 so that only the TigerGraph OS user ($db_user) has access to the file system:

    sudo chmod -R 700 /mnt/secretfs
    sudo chown -R $db_user:$db_user /mnt/secretfs

  15. Move the original TigerGraph files to the encrypted filesystem and make a symbolic link. If you wish to encrypt only the TigerGraph data store (called gstore), use the following commands:

    mv $tigergraph_root/gstore /mnt/secretfs/gstore
    ln -s /mnt/secretfs/gstore $tigergraph_root/gstore

    There are other TigerGraph files which you might also consider sensitive and wish to encrypt. These include the dictionary, Kafka data files, and log files. You could selectively identify files to protect, or you could encrypt the entire TigerGraph folder. In that case, simply move $tigergraph_root instead of $tigergraph_root/gstore.

    mv $tigergraph_root /mnt/secretfs/tigergraph
    ln -s /mnt/secretfs/tigergraph $tigergraph_root

TigerGraph's data is now stored in an encrypted filesystem. It will be automatically decrypted when the tigergraph user (and only this user) accesses it.
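
Two non-privileged pieces of this recipe can be sanity-checked without root, using throwaway paths; the sizes and names below are illustrative, not the values from the steps above:

```shell
# Step 6 analogue: dd with count=0 seek=1 creates a sparse file --
# full apparent size, but no blocks allocated until data is written.
f=$(mktemp)
dd of="$f" bs=1M count=0 seek=1 2>/dev/null
apparent=$(stat -c %s "$f")                  # 1048576 bytes
allocated=$(( $(stat -c %b "$f") * 512 ))    # far less than apparent
rm -f "$f"

# Step 15 analogue: move a data directory into the (normally encrypted)
# mount point and leave a symlink behind so existing paths keep working.
root=$(mktemp -d)      # stands in for $tigergraph_root
secret=$(mktemp -d)    # stands in for /mnt/secretfs
mkdir "$root/gstore"
echo graphdata > "$root/gstore/segment0"
mv "$root/gstore" "$secret/gstore"
ln -s "$secret/gstore" "$root/gstore"
cat "$root/gstore/segment0"    # reads through the symlink
```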

To automatically deploy this encryption solution, you may:

  1. Chain all the steps into a bash script.
  2. Remove all "sudo" prefixes, since the script will be running as root.
  3. Run the script as the root user after TigerGraph installation.

The setup scripts contain your encryption password. To follow good security procedures, do not leave your password in plaintext format in any files on your disk. Either remove the setup scripts or edit out the password.

Performance Evaluation

Encryption is usually CPU-bound rather than I/O-bound. If CPU usage remains below 100%, encryption should not cause much performance slowdown. A performance test using both small and large queries supports this prediction: for small (~1 sec) and large (~100 sec) queries, there is a ~5% slowdown due to filesystem encryption.

            | GSE Cold Start (read) | Load Data (write)
original    | 45s                   | 809s
encrypted   | 47s                   | 854s
% slowdown  | 4.4%                  | 5.6%

We used the TPC-H dataset with scale factor 10. The data size is 23 GB after loading into TigerGraph.

The write test (data loading) was done by running a loading job and then killing the GPE with SIGTERM (to exit gracefully) to ensure that all Kafka data is consumed.
The read test (GSE cold start) measures the time from "gadmin start gse" until "online" appears in "gadmin status gse".

Example 2: Encrypting Data on Amazon EC2

Major cloud service providers often provide their own methodologies for encrypting data at rest. For Amazon EC2, we recommend users start by reading the AWS Security Blog post:

How to Protect Data at Rest with Amazon EC2 Instance Store Encryption

In this section, we provide a simple example for configuring file system encryption for a TigerGraph system running on Amazon EC2. The steps are based on those given in the blog post above, with some additions and modifications.

The basic idea of this solution is to create a file, map an encrypted file system to it, and mount it as a storage directory for TigerGraph with permission only to authorized users.

Angle brackets <…> are used to mark placeholders which you should replace with your own values (without the angle brackets).


Make sure you have installed and configured the AWS CLI with keys locally.

Create an S3 Bucket

from Amazon Data-at-Rest blog
  1. Sign in to the S3 console and choose Create Bucket.

  2. In the Bucket Name box, type your bucket name and then choose Create.

  3. You should see the details about your new bucket in the right pane.

Configure IAM roles and permission for the S3 bucket

from Amazon Data-at-Rest blog
  1. Sign in to the AWS Management Console and navigate to the IAM console. In the navigation pane, choose Policies, then choose Create Policy. Choose the JSON tab, paste in the following JSON code, and then choose Review Policy. Name and describe the policy, and then choose Create Policy to save your work. For more details, see Creating Customer Managed Policies.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::<your-bucket-name>/LuksInternalStorageKey"
        }
      ]
    }

    The preceding policy grants read access to the bucket where the encrypted password is stored. This policy is used by the EC2 instance, which requires you to configure an IAM role. You will configure KMS permissions later in this post.

    (The following instructions have been updated since the original blog post.)

  2. "Select type of trusted entity": Choose AWS service.

  3. "Select the service that will use this role": Choose EC2, then choose Next: Permissions.

  4. Choose the policy you created in Step 1 and then choose Next: Review.

  5. On the Create role page, type your role name and a Role description, and choose Create role.
  6. The newly created IAM role is now ready. You will use it when launching new EC2 instances, which will have the permission to access the encrypted password file in the S3 bucket.

Create a KMS Key (optional)

If you don’t have a KMS key, you can create it first:

  1. From the IAM console, choose Encryption keys from the navigation pane.

  2. Select Create Key, and type in an alias for the key.

  3. For Step 2 and Step 3 of the wizard, see the AWS KMS documentation for advice.

  4. In Step 4: Define Key Usage Permissions, select the IAM role you created earlier.

  5. The role now has permission to use the key.

Encrypt a secret password with KMS and store it in the S3 bucket

from Amazon Data-at-Rest blog

Next, use KMS to encrypt a secret password. To encrypt text by using KMS, you must use the AWS CLI. The AWS CLI is installed by default on EC2 Amazon Linux instances, and you can install it on Linux, Windows, or Mac computers.

To encrypt a secret password with KMS and store it in the S3 bucket:

  • From the AWS CLI, type the following command to encrypt a secret password by using KMS (replace <your-region> with your region). You must have the right permissions in order to create keys and put objects in S3 (for more details, see Using IAM Policies with AWS KMS). In this example, the AWS CLI on Linux is used to encrypt and generate the encrypted password file.

aws --region <your-region> kms encrypt --key-id 'alias/<your-key-alias>' --plaintext '<your-password>' --query CiphertextBlob --output text | base64 --decode > LuksInternalStorageKey

aws s3 cp LuksInternalStorageKey s3://<your-bucket-name>/LuksInternalStorageKey

The preceding commands encrypt the password (Base64 is used to decode the cipher text) and write the result to a file called LuksInternalStorageKey. The command refers to the key by its alias (key name), alias/<your-key-alias>, which makes it easy to identify different keys. The file is then copied to the S3 bucket created earlier in this post.
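
The `--output text --query CiphertextBlob` form emits Base64 text, which `base64 --decode` turns back into raw bytes before writing the key file; the decrypt command later reverses this. The encode/decode round trip itself can be illustrated without an AWS account:

```shell
# Round-trip a sample secret through base64, mirroring how the KMS
# commands above shuttle binary ciphertext as text.
secret='DataAtRe5tPa55w0rd'
encoded=$(printf '%s' "$secret" | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"    # prints: DataAtRe5tPa55w0rd
```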

Configure EC2 with role and launch configurations

In this section, you launch a new EC2 instance with the new IAM role and a bootstrap script that executes the steps to encrypt the file system.

The script in this section requires root permission, and it cannot be run manually through an ssh tunnel or by an unprivileged user.

  1. In the EC2 console, launch a new instance (see this tutorial for more details). For the AMI, choose Amazon Linux AMI 2017.09.1 (HVM), SSD Volume Type. (If NOT using an Amazon Linux AMI, a script that installs Python, pip, and the AWS CLI needs to be added at the beginning.)

  2. In Step 3: Configure Instance Details:
    1. In IAM role, choose the IAM role you created earlier.
    2. In User Data, paste the following code block after replacing the placeholders with your values and appending your TigerGraph installation script.

Encryption bootstrap script


## Initial setup to be executed on boot
# Create an empty file. This file will be used to host the file system.
# In this example we create a <disk-size> (for example: 60G) file at <path-to-encrypted-file> (for example: /home/tigergraph/gstore_enc).
dd of=<path-to-encrypted-file> bs=<disk-size> count=0 seek=1

# Lock down normal access to the file.
chmod 600 <path-to-encrypted-file>

# Associate a loopback device with the file.
losetup /dev/loop0 <path-to-encrypted-file>

# Copy the encrypted password file from S3. The password is used to configure LUKS later on.
aws s3 cp s3://<your-bucket-name>/LuksInternalStorageKey .

# Decrypt the password from the file with KMS, save the secret password in LuksClearTextKey
LuksClearTextKey=$(aws --region <your-region> kms decrypt --ciphertext-blob fileb://LuksInternalStorageKey --output text --query Plaintext | base64 --decode)

# Encrypt storage in the device. cryptsetup will use the Linux
# device mapper to create, in this case, /dev/mapper/tigergraph_gstore.
# Initialize the volume and set an initial key.
echo "$LuksClearTextKey" | cryptsetup -y luksFormat /dev/loop0

# Open the partition, and create a mapping to /dev/mapper/tigergraph_gstore.
echo "$LuksClearTextKey" | cryptsetup luksOpen /dev/loop0 tigergraph_gstore

# Clear the LuksClearTextKey variable because we don’t need it anymore.
unset LuksClearTextKey

# Create a file system and verify its status.
mke2fs -j -O dir_index /dev/mapper/tigergraph_gstore

# Mount the new file system to /mnt/secretfs.
mkdir -p /mnt/secretfs
mount /dev/mapper/tigergraph_gstore /mnt/secretfs

# Create the TigerGraph OS user (set db_user earlier in this script, e.g., db_user=tigergraph)
adduser $db_user

# Change the permission so that only tigergraph has access to the file system
chmod -R 700 /mnt/secretfs
chown -R $db_user:$db_user /mnt/secretfs

# Install TigerGraph
# Run the one-command installation script with the TigerGraph root path under /mnt/secretfs

It may take a few minutes for the script to complete after system launch.

Then, you should be able to launch one or more EC2 machines with an encrypted folder under /mnt/secretfs that only the TigerGraph OS user can access.


Encryption is usually CPU-bound rather than I/O-bound. If CPU usage is below 100%, TigerGraph tests show no significant performance degradation.

end of Encrypting Data at Rest

TigerGraph Admin Portal

Version 2.1

Document Updated:


The TigerGraph Admin Portal is a browser-based dashboard which provides users with an overview of a running TigerGraph system, from both an application and an infrastructure point of view. It also allows users to configure the TigerGraph system through a user-friendly interface. This guide serves as an introduction and quick-start manual for the Admin Portal. As of June 2018, the Admin Portal is supported on the following browsers:

Browser Chrome Safari Firefox Opera
Supported versions 54.0+ 11.1+ 59.0+ 52.0+

Not all features are guaranteed to work on other browsers.

Please make sure to enable JavaScript and cookies in your browser settings.

Log On

The Admin Portal and GraphStudio share the same port (14240).

If you are logged in to one of the servers of your TigerGraph system, then you can use localhost for your <tigergraph_server_ip_address>.

The Admin Portal is on the admin page:

http://<tigergraph_server_ip_address>:14240/admin

If user authentication has been enabled, then users need to log in to access the Admin Portal.

If you are already at GraphStudio, simply click the Admin button at the right end of the top menu bar.

Page Layout

The Admin Portal has two pages: Dashboard and Configuration. Both pages have the same Header, Footer, and Navigation Menu.

The layout of the Admin Portal is responsive to screen size.  The layout will automatically adjust for devices with small screens like phones and tablets.

The full screen version of the Admin Portal is shown below, with the Dashboard page selected. The screenshot marks the page header, the page footer, and the navigation menu.

The mobile version is shown below:

Page Header

Clicking the notification icon in the page header will open a list of notifications. If a notification is too long, some of its content will be omitted:

To view the full text, you can click on a notification to open a popup window containing the full message and its severity:

There are three severity levels: info, warning and error.




Clicking the account icon in the page header will open the user menu:

You can switch between a dark theme and light theme. The light theme is shown in the Overview. The dark theme is shown below:

To sign out of the Admin Portal, click the Sign out button in the Account menu.

Clicking the help button will take you to the documentation page containing this guide.

You can navigate to GraphStudio by clicking the GraphStudio button.
The overall system status is always shown in the footer. This single indicator shows:

  • Green means all services are online.
  • Gray means one or more service statuses are unknown.
  • Red means one of the component services is offline.

Clicking on the button will show you the list of statuses for the services in the system:

You can start or stop services from the Admin Portal by using the rightmost buttons (NOTE: only a superuser can see these buttons).

Clicking the stop button will stop all of the services in the TigerGraph system.

Clicking the start button will start all of the services in the TigerGraph system. (NOTE: Because there is an interval between data collection periods, the real status of the system will not be reflected in the status section right away.)

Dashboard Page

The Dashboard page has three main parts: Overall Statistics, the Time Range Picker, and several Charts.

Overall Cluster Statistics

Just below the page header, there are four cards showing statistics of the system: the number of nodes, number of graphs, number of vertices, and number of edges.
These statistics are refreshed live. (The default refresh interval is 1 minute.)

Time Range Picker

The next card lets you set the time range to be used for the statistics in the charts below.

The leftmost input

lets you select the start time of the range.

The next input

lets you select the end time of the range. This has two options:

  1. “Now” means that the charts will be continually updated with the most recent data.
  2. “Custom” lets you select a fixed date.  The time range is historical, so the charts will be static.

The sliding bar on the right lets you fine tune the range. Click and drag an endpoint to adjust the start or end time.

Changing any of these selections will trigger a request for statistics data and the chart will be re-rendered accordingly.


Charts

Each chart displays some statistic or state information on the vertical axis and time on the horizontal axis.

There are two chart sections. The first section is GSQL Query Performance.  This lists all of the queries accessible to the current user. If you click on a query name, the display will expand to show detailed charts about that query. You can expand only one query panel at a time. The second section is Cluster Monitoring. This lists all of the machines within the TigerGraph cluster. Similar to the first section, you can only expand one panel at a time.

A Query Monitoring Panel includes three charts:

  • QPS (number of Queries completed per second)
  • Timeout (fraction of the query calls which timed out and therefore did not finish)
  • Latency (minimum, maximum, and average time to complete a query)

A Machine Monitoring Panel includes four charts. The first three charts break down the information among three processing-focused components (GPE, GSE, RESTPP). The last chart breaks down information among three components which may have large storage needs (GStore, log files, and Apache Kafka).

  • Service status: ON or OFF status for the given component
  • CPU Usage: percentage of available CPU time used by the given component
  • Memory Usage: GB used by the given component
  • Disk Usage: GB used by the given component

Configuration Page

Currently (as of v2.1), the Configuration page supports one configuration operation: updating the GraphStudio license key.

Additional configuration operations, which are currently only available from a Linux console, will be added in future releases.

Update GraphStudio License

An example of the GraphStudio License Update panel is shown below.  The panel displays the full information about your license, including the expiration date.

To apply a new license key, paste the key into the text box below “Enter GraphStudio license” and click the update button.

end of Admin Portal UI Guide

Managing TigerGraph Servers with gadmin

Contents of this Section:


TigerGraph Graph Administrator (gadmin) is a tool for managing TigerGraph servers. It has a self-contained help function and a man page, whose output is shown below for reference. If you are unfamiliar with the TigerGraph servers, please see

GET STARTED with TigerGraph v2.1


To see a listing of all the options or commands available for gadmin, run any of the following commands:

$ gadmin -h
$ man gadmin
$ info gadmin

After changing a configuration setting, it is generally necessary to run gadmin config-apply. Some commands invoke config-apply automatically. If you are not certain, just run config-apply.

Command Listing

Below is the man page for gadmin. Most of the commands are self-explanatory.

GADMIN(1) User Commands GADMIN(1)

gadmin – manual page for TigerGraph Administrator.

gadmin [options] COMMAND [parameters]

Version 1.0, Sept 19, 2017

gadmin is a tool for managing TigerGraph servers

-h, --help
show this help message and exit

--configure
invoke interactive (re)configuration tool. Options: single_dir:/xxx/yyy (deploy directory will be /xxx/yyy), or a keyword (e.g., 'gadmin --configure port' will configure any entry whose name contains the string 'port')

--set set one configuration

dump current configuration after parsing config files and command line options and exit

show what operation will be performed but don’t actually do it

the password to ssh to other nodes

-y, --yes
silently answer Yes to all prompts

-v, --verbose
enable verbose output

show gadmin version and exit

-f, --force
execute without performing checks

--wait wait for the last command to finish (e.g., snapshot)

Server status
gadmin status [gpe gse restpp dict,…]

IUM status
gadmin ium_status

Disk space of devices
gadmin ds [path]

Mount info of a path
gadmin mount {path}

Memory usage of TigerGraph components
gadmin mem [gse gpe restpp dict,…]

CPU usage of TigerGraph components
gadmin cpu [gse gpe restpp dict,…]

Check TigerGraph system prerequisites and resources
gadmin check

Show log of gpe, gse, restpp and issued fab commands
gadmin log [gse gpe restpp dict fab,…]

Get various information about gpe, gse and restpp
gadmin info [gse gpe restpp dict,…]

Software version(s) of TigerGraph components
gadmin version [gse gpe restpp dict,…]

Stop specified or all services
gadmin stop [gse gpe restpp dict,…]

Restart specified or all services
gadmin restart [gse gpe restpp dict,…]

Start specified or all services
gadmin start [gse gpe restpp dict,…]

Start the RESTPP loaders
gadmin start_restpp_loaders

Start the KAFKA loaders
gadmin start_kafka_loaders

Stop the RESTPP loaders
gadmin stop_restpp_loaders

Stop the KAFKA loaders
gadmin stop_kafka_loaders

Dump partial or full graph to a directory
gadmin dump_graph {gse, gpe [*, segment], all}, dir, separator

Snapshot gpe and gse
gadmin snapshot

Reset the kafka queues
gadmin reset

Show the available packages
gadmin pkg-info

Install new package to TigerGraph system
gadmin pkg-install

Update gpe, gse, restpp, dict, etc. without configuration change
gadmin pkg-update

Remove available packages or binaries from package pool
gadmin pkg-rm [files]

Apply new configuration. Note some modules may need to restart
gadmin config-apply [gse gpe restpp dict kafka zk]

Set a new license key
gadmin set-license-key <license_key_string>

Update the new graph schema
gadmin update_graph_config

Update components under a directory
gadmin update

Setup sync of all gstore data in multiple machines
gadmin setup_gstore_sync

Setup rate control of RESTPP loader
gadmin setup_restpploader_rate_ctl

Restart sync of all gstore data in multiple machines
gadmin gstore_sync_restart

Stop sync of all gstore data in multiple machines
gadmin gstore_sync_stop

For more information, updates, and news, visit the gadmin website.

The full documentation for gadmin is maintained as a Texinfo manual. If the info and gadmin programs are properly installed at your site, the command

info gadmin

should give you access to the complete manual.

TigerGraph Administrator. Sept 2017 GADMIN(1)


Checking the status of TigerGraph component servers:

Use “gadmin status” to report whether each of the main component servers is running (up) or stopped (off).  The example below shows the normal status when the graph store is empty and a graph schema has not been defined:


gadmin status

=== zk ===

[SUMMARY][ZK] process is up

[SUMMARY][ZK] /home/tigergraph/tigergraph/zk is ready

=== kafka ===

[SUMMARY][KAFKA] process is up

[SUMMARY][KAFKA] queue is ready

=== gse ===

[SUMMARY][GSE] process is off

[SUMMARY][GSE] id service has not been initialized

=== dict ===

[SUMMARY][DICT] process is up

[SUMMARY][DICT] dict server is ready

=== graph ===

[SUMMARY][GRAPH] graph has not been initialized

=== restpp ===

[SUMMARY][RESTPP] process is off

[SUMMARY][RESTPP] restpp has not been initialized

=== gpe ===

[SUMMARY][GPE] process is off

[SUMMARY][GPE] graph has not been initialized

=== glive ===

[SUMMARY][GLIVE] process is up

[SUMMARY][GLIVE] glive is ready

=== Visualization ===

[SUMMARY][VIS] process is up (WebServer:2254; DataBase:2255)

[SUMMARY][VIS] Web server is working

Stopping a particular server, such as the rest server (name is “restpp”):

$ gadmin stop restpp

Changing the retention size of queue to 10GB:

$ gadmin --set -f online.queue.retention_size 10

Updating the TigerGraph License Key

A TigerGraph license key is initially set up during the installation process. If you have obtained a new license key,  run the command

gadmin set-license-key <new_key>

to install your new key. You should then follow this with

gadmin config-apply

Example: Setting the license key



$ gadmin set-license-key <license_key>

[RUN  ] rm -rf /home/tigergraph/tigergraph_coredump
[RUN  ] mkdir -p /home/tigergraph/tigergraph/logs/coredump
[RUN  ] ln -s /home/tigergraph/tigergraph/logs/coredump /home/tigergraph/tigergraph_coredump

gadmin config-apply

[FAB ][2017-03-31 15:03:05] check_config

[FAB ][2017-03-31 15:03:06] update_config_all

Local config modification Found, will restart dict server and update configures.

[FAB ][2017-03-31 15:03:11] launch_zookeepers

[FAB ][2017-03-31 15:03:21] launch_gsql_subsystems:DICT

[FAB ][2017-03-31 15:03:22] gsql_mon_alert_on

Local config modification sync to dictionary successfully!


end of Managing TigerGraph Servers with gadmin

Back to Top

Backup and Restore


Introduction and Syntax

GBAR (Graph Backup And Restore) is an integrated tool for backing up and restoring the data and data dictionary (schema, loading jobs, and queries) of a single TigerGraph node. In Backup mode, it packs TigerGraph data and configuration information into a single file stored on local disk or in a remote AWS S3 bucket. Multiple backup files can be archived. Later, you can use Restore mode to roll back the system to any backup point. The tool can also be integrated easily with Linux cron to perform periodic backup jobs.
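For example, the cron integration might look like the crontab entry below. This is a sketch only: the schedule and log path are examples, and it assumes the gbar command is on the tigergraph user's PATH. The -y option (described later in this section) accepts the default answer to any prompt so the job never blocks.

```shell
# Hypothetical crontab entry for the tigergraph user (edit with: crontab -e).
# Runs a nightly backup at 2:30 AM tagged "daily"; -y suppresses interactive
# prompts. The log file location is an example only.
30 2 * * * gbar backup -t daily -y >> /home/tigergraph/gbar_daily.log 2>&1
```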

The current version of GBAR is intended for restoring the same machine that was backed up. For help with cloning a database (i.e., backing up machine A and restoring the database to machine B), please contact TigerGraph support.


Usage: gbar backup [options] -t <backup_tag>
gbar restore [options] <backup_tag>
gbar config
gbar list

-h, --help Show this help message and exit
-v Run with debug info dumped
-vv Run with verbose debug info dumped
-y Run without prompt
-t BACKUP_TAG Tag for backup file, required on backup

The -y option forces GBAR to skip interactive prompts by selecting the default answer. There are currently five situations that prompt:

  • During backup, if GBAR calculates that there is insufficient disk space to copy and then compress the graph data, it will ask: Do you want to continue?(y/N) The default answer is no.
  • At the start of restore, GBAR will always ask whether it is okay to stop and reset the TigerGraph services: (y/N)? The default answer is yes.
  • During restore, if the user provides a backup_tag rather than a full backup file name on the command line, and multiple files match that tag, GBAR will by default choose the latest one and ask: Do you want to continue?(y/N) The default answer is yes.
  • During restore, if GBAR calculates that there is insufficient disk space to copy the current graph data and then uncompress the archived data, it will ask: Do you want to continue?(y/N) The default answer is no.
  • After restore, the old gstore data will be left on disk. GBAR needs your confirmation to remove it, and will ask: Do you want to continue removing it?(y/N) The default answer is no.


gbar config

GBAR Config must be run before using GBAR backup/restore functionality. GBAR Config will open the following configuration template interactively in a text editor. Using the comments as a guide, edit the configuration file to set the configuration parameters according to your own needs.

# Configuration file for GBAR
# You can specify the storage method as either local or S3, or both

# Assign True if you want to store backup files on local disk
# Assign False otherwise, in this case no need to set path
store_local: False

# Assign True if you want to store backup files on AWS S3
# Assign False otherwise, in this case no need to set AWS key and bucket
store_s3: False
aws_access_key_id: YOUR_ACCESS_KEY
aws_secret_access_key: YOUR_SECRET_KEY

# The maximum timeout value to wait for the core modules (GPE/GSE) on backup.
# As a rough estimate, GPE & GSE backup throughput is about
# 2 GB per minute on HDD.
# You can set this value according to your gstore size.
# The interval string uses the format 1h2m3s, meaning 1 hour 2 minutes 3 seconds;
# 200m means 200 minutes.
# You can set it to 0 for endless waiting.
backup_core_timeout: 5h
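To illustrate the interval format, the small POSIX-shell helper below (our own sketch, not part of GBAR) converts an interval string such as 1h2m3s or 200m into seconds:

```shell
#!/bin/sh
# Sketch: convert a GBAR-style interval string ("1h2m3s", "200m", "0")
# into a total number of seconds.
interval_to_seconds() {
    rest=$1 total=0
    # Peel off the hours, minutes, and seconds components in turn.
    case $rest in *h*) total=$((total + ${rest%%h*} * 3600)); rest=${rest#*h} ;; esac
    case $rest in *m*) total=$((total + ${rest%%m*} * 60));   rest=${rest#*m} ;; esac
    case $rest in *s*) total=$((total + ${rest%%s*})) ;; esac
    echo "$total"
}
```

For example, the default backup_core_timeout of 5h corresponds to 18000 seconds.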


gbar backup -t <backup_tag>

The backup_tag acts as a filename prefix for the backup archive. The full name of the archive will be <backup_tag>-<timestamp>.tgz.

GBAR Backup performs a live backup, meaning that normal operations may continue while the backup is in progress. When GBAR backup starts, it sends a request to GADMIN, which then requests the GPE and GSE to create snapshots of their data. Per the request, the GPE and GSE store their data under GBAR’s own working directory. GBAR also directly contacts the Dictionary and obtains a dump of its system configuration information, and it records the TigerGraph system version. Then, GBAR compresses all the data and configuration information into a single file named <backup_tag>-<timestamp>.tgz. As the last step, GBAR copies that file to local storage or AWS S3, according to the Config settings, and removes all temporary files generated during the backup.
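Since the end product of a backup is a single compressed .tgz archive, it can be sanity-checked after the fact. The fragment below is an illustrative sketch (the backup directory path is an assumed example, and the function name is ours) that verifies an archive is a readable gzipped tar file:

```shell
#!/bin/sh
# Sketch: succeed only if the given file is a readable gzipped tar archive.
check_archive() {
    tar -tzf "$1" > /dev/null 2>&1
}

# Hypothetical usage, assuming archives are stored in /home/tigergraph/backups:
#   latest=$(ls -t /home/tigergraph/backups/daily-*.tgz | head -n 1)
#   check_archive "$latest" && echo "archive OK: $latest"
```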

The current version of GBAR Backup takes snapshots quickly, making it very likely that all the components (GPE, GSE, and Dictionary) are in a consistent state, but it does not fully guarantee consistency.

It is highly recommended that no data update be in progress when issuing the backup command. A no-write period of about 5 seconds is sufficient.

Backup does not save input message queues for REST++ or Kafka.


gbar restore <backup_tag>

Restore is an offline operation, requiring the data services to be temporarily shut down. The backup tag acts as a filename prefix. During restore, the user can provide either the tag (filename prefix) or the full filename, including the timestamp. When GBAR restore begins, it first searches for a backup file matching the backup_tag supplied on the command line. If multiple matching backup archives are found, GBAR will select the most recent one and ask the user for confirmation to continue. It then decompresses the backup file to a working directory. As the next step, GBAR compares the TigerGraph system version in the backup archive with the current system’s version, to make sure the backup archive is compatible with the current system. It then temporarily shuts down the TigerGraph servers (GSE, RESTPP, etc.). Then, GBAR makes a copy of the current graph data, as a precaution. If GBAR estimates that there is not sufficient disk space for the copy, it will display a warning and prompt the user to abort (unless the user has overridden the prompt with the -y option). Next, GBAR copies the backup graph data into the GPE and GSE and notifies the Dictionary to load the configuration data. When these actions are all done, GBAR restarts the TigerGraph servers.

The primary purpose of GBAR is to save snapshots of the data and configuration of a TigerGraph system, so that in the future the same system can be rolled back (restored) to one of the saved states. A key assumption is that Backup and Restore are performed on the same machine, and that the file structure of the TigerGraph software has not changed. Specific requirements are listed below.

Restore Requirements and Limitations

Restore is supported if the TigerGraph system has had only minor version updates since the backup.

  • TigerGraph version numbers have the format X.Y[.Z], where X is the major version number and Y is the minor version number.
  • Restore is supported if the backup archive and the current system have the same major version number AND the current system has a minor version number that is greater than or equal to the backup archive minor version number.
  • Backup archives from a 0.8.x system cannot be restored to a 1.x system.
  • Examples:

    Backup archive’s system version current system version Restore is allowed?
    0.8 1.0 NO – Major versions differ
    1.1 1.1 YES – Major and minor versions are the same
    1.1 1.2 YES – Major versions are the same; current minor version > archived minor version
    1.1 1.0 NO – Major versions are the same; current minor version < archived minor version
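The version rule above can be expressed as a small check. The helper below is a sketch (the function name is ours, not part of GBAR) that takes the backup archive's version and the current system's version and succeeds only when restore is allowed:

```shell
#!/bin/sh
# Sketch: restore is allowed iff the major versions match and the current
# minor version is >= the archived minor version (X.Y[.Z] version strings).
restore_allowed() {
    b_major=${1%%.*}; b_minor=${1#*.}; b_minor=${b_minor%%.*}
    s_major=${2%%.*}; s_minor=${2#*.}; s_minor=${s_minor%%.*}
    [ "$b_major" = "$s_major" ] && [ "$s_minor" -ge "$b_minor" ]
}
```

Applied to the examples in the table: restore_allowed 1.1 1.1 and restore_allowed 1.1 1.2 succeed, while restore_allowed 0.8 1.0 and restore_allowed 1.1 1.0 fail.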

Restore needs enough free space to accommodate both the old gstore and the gstore to be restored.

After restore, old gstore data will be left on disk by default. To remove the old data, either answer “Y” when Restore asks you, or remove it yourself after restore has completed and the system is running again.

List Backup Files

gbar list

This command lists all generated backup files in the storage location configured by the user. For each file, it shows the file’s full tag, its size in human-readable format, and its creation time.

GBAR Detailed Example

The following is a real example, showing the actual commands, the expected output, and the amount of time and disk space used for a given set of graph data. For this example, an Amazon EC2 instance was used, with the following specifications:

a single instance with 32 CPUs, 244 GB memory, and a 2 TB HDD.

Naturally, backup and restore time will vary depending on the hardware used.

GBAR Backup Operational Details

The flowchart below shows how GBAR processes a backup request.

To run a daily backup, we tell GBAR to back up with the tag name daily:

$ gbar backup -t daily
[SUMMARY] Retrieve TigerGraph system configuration...
[SUMMARY] Check TigerGraph system status...
[SUMMARY] Get TigerGraph version as 1.0
[SUMMARY] Issued snapshot command to GPE/GSE
[SUMMARY] Wait for GPE/GSE snapshot done...
[SUMMARY] GPE/GSE snapshot done in 37m11s
[SUMMARY] Backup DICT...
[SUMMARY] Compress backup data to daily-20171206031441.tgz...
[SUMMARY] Compress data done in 39m38s
[SUMMARY] Clean intermediate files...
[SUMMARY] Backup file daily-20171206031441.tgz size 64.4GB
[SUMMARY] Copy daily-20171206031441.tgz to local storage /home/tigergraph/backups...
[SUMMARY] Copy finished in 10m31s
Backup done in 1h37m43s.

The total backup process took about an hour and a half, and the generated archive is about 64 GB. Dumping the GPE/GSE data to disk took 37 minutes. Compressing the files into a single portable backup archive took another 40 minutes.

GBAR Restore Operational Details

The flowchart below shows how GBAR runs a restore job.

To restore from a backup archive, give GBAR the backup tag (e.g., daily); GBAR will choose the latest matching archive by default. To select a specific archive to restore, provide GBAR with a full archive name, such as daily-20171206031441. By default, restore will ask the user to approve at least two actions. If you want to pre-approve these actions, use the “-y” option, and GBAR will make the default choice for you.

$ gbar restore daily
[SUMMARY] Retrieve TigerGraph system configuration...
GBAR restore needs to reset TigerGraph system.
Do you want to continue?(y/N):y
[SUMMARY] Multiple backup points found for tag daily, will pick up the latest.
Will restore from the latest one daily-20171206031441.
Do you want to continue?(y/N):y
[SUMMARY] Restore to latest one daily-20171206031441
[SUMMARY] Backup file daily-20171206031441.tgz size 64.4GB
[SUMMARY] Copy daily-20171206031441.tgz to GBAR work dir...
[SUMMARY] Copy finished in 4m13s
[SUMMARY] Decompress daily-20171206031441.tgz with size 64.4GB...
[SUMMARY] Decompress done in 13m23s
[SUMMARY] Backup data with version 1.0 applicable to 1.0 system.
[SUMMARY] Stop TigerGraph system...
[SUMMARY] Move aside old GPE data...
[SUMMARY] Move aside old GSE data...
[SUMMARY] Snapshot old GDICT data...
[SUMMARY] Restore GPE data...
[SUMMARY] Restore IDS data...
[SUMMARY] Restore DICT...
[SUMMARY] Reset TigerGraph system...
[SUMMARY] Start TigerGraph system...
[SUMMARY] Reinstall all GSQL query...
[SUMMARY] Recompile all loading job...
[SUMMARY] Running post restore jobs...
Restore done in 21m33s.
GPE/GSE old data still saved on disk, you can remove it after TigerGraph system stable, or remove it right now.
Do you want to continue removing it?(y/N):n
GPE/GSE old data saved as /home/tigergraph/tigergraph/gstore/0/part-20171206032413 and /home/tigergraph/tigergraph/gstore/0/<part_id>/ids-20171206032413, you need to remove them manually.

For our test, GBAR took about 20 minutes to finish the restore job. Most of that time (13 minutes) was spent decompressing the backup archive.

Note that after the restore is done, GBAR prompts you to choose whether to remove the old data. Here we chose no; in that case, we need to remove the old gstore files manually later, for example after verifying that the restored system is functioning correctly.

Performance Summary

GStore size Backup file size Backup time Restore time
278 GB 64 GB 1.5 hours 20 minutes

end of Backup and Restore

Back to Top