Install tntapi cluster

Install and configure the tntapi component as follows:

  1. Install tntapi on the designated hosts. Tarantool will be installed automatically along with it.

    sudo apt-get update
    sudo apt-get install findface-tarantool-server
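
    To confirm that both packages are in place, you can optionally check the package status and the installed Tarantool version (a quick sanity check, not part of the official procedure):

    apt-cache policy findface-tarantool-server
    tarantool --version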
    
  2. Create tntapi shards on each tntapi host. To see how sharding works, consider a case study of a cluster environment with 4 database hosts and 4 shards created on each.

    Important

    When creating shards in large installations, observe the following rules:

    1. One tntapi shard can handle approximately 10,000,000 faces.

    2. The number of shards on a single host should not exceed the number of its physical processor cores minus 1.
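
    As a rough sizing aid, the two rules above can be turned into a quick estimate. The sketch below is illustrative only; the face total and core count are assumed figures that you must replace with your own:

    # Illustrative sizing only -- substitute your own figures
    TOTAL_FACES=40000000      # assumed total number of faces in the installation
    PHYSICAL_CORES=8          # assumed number of physical cores per database host
    echo "shards needed: $(( (TOTAL_FACES + 9999999) / 10000000 ))"   # ~10,000,000 faces per shard
    echo "per-host limit: $(( PHYSICAL_CORES - 1 ))"                  # physical cores minus 1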

  3. Disable the Tarantool example service autostart and stop the service. Do this on all 4 hosts.

    sudo systemctl disable tarantool@example && sudo systemctl stop tarantool@example
    
  4. Disable the shard created by default. Do this on all 4 hosts.

    sudo systemctl disable tarantool@FindFace
    
  5. On each of the 4 hosts, write a bash script shard.sh that automatically creates configuration files for all shards on that host. Use the script below as a base for your own code; it creates 4 shards listening on the ports 33001..33004 (tntapi) and 8001..8004 (HTTP).

    Important

    The script below creates configuration files based on the default configuration settings stored in the file /etc/tarantool/instances.enabled/FindFace.lua. We strongly recommend that you do not add or edit anything in the default file, except the maximum memory usage (memtx_memory) and the NTLS IP address required for tntapi licensing. The maximum memory usage should be set in bytes for each shard, depending on the number of faces the shard handles, at a rate of roughly 1,280 bytes per face.

    Open the configuration file:

    sudo vi /etc/tarantool/instances.enabled/FindFace.lua
    

    Edit the value according to the number of faces the shard handles. The value 1.2*1024*1024*1024 corresponds to 1,000,000 faces:

    memtx_memory = 1.2*1024*1024*1024,
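
    For instance, at roughly 1,280 bytes per face, the memory budget for a shard can be estimated with a one-liner (the face count is an assumed example):

    echo $(( 10000000 * 1280 ))   # ~12.8 GB for a shard handling 10,000,000 faces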
    

    Specify the NTLS IP address if NTLS is remote:

    FindFace.start("127.0.0.1", 8001, {license_ntls_server="192.168.113.2:3133"})
    
    #!/bin/sh
    set -e

    # Generate 4 shard configurations from the default FindFace.lua:
    # tntapi ports 33001..33004, HTTP ports 8001..8004.
    for I in `seq 1 4`; do
           TNT_PORT=$((33000+$I)) &&
           HTTP_PORT=$((8000+$I)) &&
           sed "
                   s#/opt/ntech/var/lib/tarantool/default#/opt/ntech/var/lib/tarantool/shard_$I#g;
                   s/listen = .*$/listen = '127.0.0.1:$TNT_PORT',/;
                   s/\"127.0.0.1\", 8001,/\"0.0.0.0\", $HTTP_PORT,/;
           " /etc/tarantool/instances.enabled/FindFace.lua > /etc/tarantool/instances.enabled/FindFace_shard_$I.lua;

           # Create the shard working directories and hand them over to the tarantool user
           mkdir -p /opt/ntech/var/lib/tarantool/shard_$I/snapshots
           mkdir -p /opt/ntech/var/lib/tarantool/shard_$I/xlogs
           mkdir -p /opt/ntech/var/lib/tarantool/shard_$I/index
           chown -R tarantool:tarantool /opt/ntech/var/lib/tarantool/shard_$I
           echo "Shard #$I initialized"
    done;
    

    Tip

    Download the example script.

  6. Run the script from the home directory.

    sudo sh ~/shard.sh
    
  7. Check that the configuration files have been created.

    ls /etc/tarantool/instances.enabled/
    
    ##example.lua FindFace.lua FindFace_shard_1.lua FindFace_shard_2.lua FindFace_shard_3.lua FindFace_shard_4.lua
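
    You can also check that the shard working directories created by the script are in place; shard_1 through shard_4 should appear among the entries:

    ls /opt/ntech/var/lib/tarantool/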
    
  8. Launch all 4 shards. Do this on each host.

    for I in `seq 1 4`; do sudo systemctl enable tarantool@FindFace_shard_$I; done;
    for I in `seq 1 4`; do sudo systemctl start tarantool@FindFace_shard_$I; done;
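
    Optionally, verify that the shards are listening on the expected ports (tntapi 33001..33004 and HTTP 8001..8004, as configured by the script above):

    sudo ss -tln | grep -E '3300[1-4]|800[1-4]'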
    
  9. Check the shards' status.

    sudo systemctl status tarantool@FindFace*
    

    You should get the following output:

    tarantool@FindFace_shard_3.service - Tarantool Database Server
    Loaded: loaded (/lib/systemd/system/tarantool@.service; disabled; vendor preset: enabled)
    Active: active (running) since Tue 2017-01-10 16:22:07 MSK; 32s ago
    ...
    tarantool@FindFace_shard_2.service - Tarantool Database Server
    Loaded: loaded (/lib/systemd/system/tarantool@.service; disabled; vendor preset: enabled)
    Active: active (running) since Tue 2017-01-10 16:22:07 MSK; 32s ago
    ...
    tarantool@FindFace_shard_1.service - Tarantool Database Server
    Loaded: loaded (/lib/systemd/system/tarantool@.service; disabled; vendor preset: enabled)
    Active: active (running) since Tue 2017-01-10 16:22:07 MSK; 32s ago
    ...
    tarantool@FindFace_shard_4.service - Tarantool Database Server
    Loaded: loaded (/lib/systemd/system/tarantool@.service; disabled; vendor preset: enabled)
    Active: active (running) since Tue 2017-01-10 16:22:07 MSK; 32s ago
    ...
    

    Tip

    You can view the tntapi logs by executing:

    sudo tail -f /var/log/tarantool/FindFace_shard_{1,2,3,4}.log
    
  10. On the findface-facenapi host, create a file tntapi_cluster.json containing the addresses and ports of all the shards. Distribute the available shards evenly across ~1024 cells, all in one line. Click here to see the file for 4 hosts with 4 shards on each.

    Tip

    You can create tntapi_cluster.json as follows:

    1. Create a file that lists all the shards, each shard on a new line (click here to view the example).

      sudo vi s.txt
      
    2. Run the script below (click here to view the script). As a result, a new file tntapi_cluster.json will be created, containing all the shards distributed evenly across 1024 cells.

    cat s.txt | perl -lane 'push(@s,$_); END{$m=1024; $t=scalar @s;for($i=0;$i<$m;$i++){$k=int($i*$t/$m); push(@r,"\"".$s[$k]."\"")} print "[[".join(", ",@r)."]]"; }' > tntapi_cluster.json
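
    For illustration, if s.txt contained four lines shard_A, shard_B, shard_C and shard_D (placeholders for the real shard addresses, whose exact format is shown in the linked example), the script would emit a single-line tntapi_cluster.json in which each entry occupies 256 consecutive cells out of 1024 (shown truncated):

    [["shard_A", "shard_A", ..., "shard_B", ..., "shard_C", ..., "shard_D"]]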
    
  11. Move tntapi_cluster.json to the directory /etc/.
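
    Assuming the file was generated in the current working directory, as in the previous step:

    sudo mv tntapi_cluster.json /etc/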

    Important

    You will have to uncomment and specify the path to tntapi_cluster.json when configuring the network.