This guide will walk you through the steps necessary to transfer a Unification validator node from one VPS to another. The genesis for this guide was NLHybrid, who published his fantastic article on How to setup a Unification node on Digital Ocean for just $5. Originally I created my validator node Big Boss Capital on AWS, following the guide on the Unification Medium blog. Both of these are great resources and should be used as a reference for any future setup.
After running my AWS node for 3 weeks, I noticed that its cost was becoming prohibitive. I was running a t2.micro with 500 GB of storage. Originally I had just used the default 10 GB of space provided by AWS, but it only took a few days for my storage to fill up and my node stopped operating. I then added far too much volume, and it became my largest cost. In retrospect, I should have tried to scale down my volume first before moving, but I think Digital Ocean will be cheaper in the long run.
Currently the Unification blockchain takes up 9.3 GB of space and is growing every day. For now, a 25 GB volume is large enough to store the whole blockchain. In the future, when I need to scale up, it's easy to add additional volume space with Digital Ocean.
Before you start with this guide, I implore you to set up two validator nodes on Testnet and try this operation there first. If you make a mistake while working with Mainnet, you could disrupt your delegation and end up in jail, or be unable to access your FUND for 30 days while you unbond. Use this link for Digital Ocean to get $100 of free credit and set up two nodes on Testnet first. DO NOT TRY THIS ON MAINNET FIRST!!!
This guide assumes that you have the following:
- Node A - A fully operating Unification validator node on the network you want to move from
- Node B - A fully synced node on the network. (Complete all steps in this guide up to Section 4.)
After completing these two requirements, the steps for transferring your validator node are as follows:
- Stop both nodes and add --halt-height to und service file
- Halt both nodes at a predetermined block height
- Delete Config folder and priv_validator_state.json in Node B
- SCP config folder and priv_validator_state.json from Node A to Node B
- Start Node B
In Depth Directions
The first step will be to edit your und service file to stop both Nodes at the same height. This will ensure that the chain data being transferred between both nodes is synced correctly.
Stop both nodes with systemctl
sudo systemctl stop und
Next, access the und.service file.
sudo nano /etc/systemd/system/und.service
Before you edit your service file, choose a block height to halt on. Find out what the latest block is with the Unification Block Explorer. I would suggest choosing a block that is at least 100 blocks ahead, to give you enough time to change both und.service files. For this guide, the halt block is designated as block XYZ.
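Instead of the block explorer, you can also query the node's own RPC status endpoint (Tendermint's default port 26657, which und exposes). The sketch below parses a trimmed sample of the JSON that `curl -s http://localhost:26657/status` returns on the node; the endpoint and field name are standard Tendermint, but the sample values here are made up.

```shell
# On the node itself you would run: curl -s http://localhost:26657/status
# A trimmed sample of the JSON that call returns:
STATUS='{"result":{"sync_info":{"latest_block_height":"123456","catching_up":false}}}'

# Extract the height without jq, using only grep:
printf '%s' "$STATUS" | grep -o '"latest_block_height":"[0-9]*"' | grep -o '[0-9]*'
# 123456
```

Add your safety margin (e.g. 100 blocks) to that number to get your halt height.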
Add the --halt-height flag to your service file. It should be placed between start and --home, as shown below.
[Unit]
Description=Unification Mainchain Validator Node

[Service]
User=centos
Group=centos
WorkingDirectory=/home/centos
ExecStart=/usr/local/bin/und start --halt-height XYZ --home=/home/centos/.und_mainchain
Restart=on-failure
RestartSec=5
LimitNOFILE=4096

[Install]
WantedBy=default.target
Save the changes to und.service by pressing Ctrl + X, then confirm with Y. Make sure to change und.service on both Node A and Node B.
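If you'd rather script the edit than use nano, a sed substitution can insert the flag. The sketch below rehearses it on a temporary copy of the ExecStart line so it can be tried safely; on the real nodes, point SERVICE_FILE at /etc/systemd/system/und.service and run the sed with sudo.

```shell
# Rehearse on a temp file; use /etc/systemd/system/und.service (with sudo) for real.
SERVICE_FILE=$(mktemp)
echo 'ExecStart=/usr/local/bin/und start --home=/home/centos/.und_mainchain' > "$SERVICE_FILE"

HALT_HEIGHT=123456   # placeholder -- substitute the block height you chose
sed -i "s|und start |und start --halt-height ${HALT_HEIGHT} |" "$SERVICE_FILE"

grep ExecStart "$SERVICE_FILE"
# ExecStart=/usr/local/bin/und start --halt-height 123456 --home=/home/centos/.und_mainchain
```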
Reload your systemctl with the updated changes to und.service for both Nodes
sudo systemctl daemon-reload
Follow the journal on both Node A and Node B to make sure both halt at the same block.
sudo journalctl -u und --follow
Watch the log to make sure that both nodes halt at the chosen height. Once they have, press Ctrl + C to return to the command line.
Now that both nodes are halted at the same block height, the next step is to delete the config directory and priv_validator_state.json on Node B.
Find out what the directory names and files are:
ls -a
We are interested in .und_mainchain. Navigate to that folder:
cd .und_mainchain
There should be two directories inside .und_mainchain, called config and data. Delete config (MAKE SURE THAT YOU DO THIS ON NODE B ONLY):
rm -r config
Now navigate to the data folder and delete priv_validator_state.json:
cd data
rm priv_validator_state.json
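Deleting the wrong node's config directory is unrecoverable, so it can be worth rehearsing the two deletions on a scratch copy of the .und_mainchain layout first. A sketch (the file names match this guide; run the real rm commands on Node B only):

```shell
# Rehearse the Node B deletions on a throwaway directory mirroring .und_mainchain.
UND_HOME=$(mktemp -d)/.und_mainchain
mkdir -p "$UND_HOME/config" "$UND_HOME/data"
touch "$UND_HOME/config/node_key.json" "$UND_HOME/data/priv_validator_state.json"

rm -r "$UND_HOME/config"                        # delete the config directory
rm "$UND_HOME/data/priv_validator_state.json"   # delete the validator state file

ls "$UND_HOME"
# data
```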
After deleting both, the next step is to transfer the corresponding files from Node A to Node B with SCP (Secure Copy Protocol). SCP is executed on your local machine, not in your Digital Ocean or AWS instance, so open up a new terminal. If you run SCP inside your instance it will not work.
Copy the file "foobar.txt" from a remote host to the local host
$ scp email@example.com:foobar.txt /some/local/directory
Copy the file "foobar.txt" from the local host to a remote host
$ scp foobar.txt firstname.lastname@example.org:/some/remote/directory
Copy the directory "foo" from the local host to a remote host's directory "bar"
$ scp -r foo email@example.com:/some/remote/directory/bar
Since we are using SSH, the "-i" flag needs to be added to designate where your private key is located on your local machine. If you followed Unification's official guide or NLHybrid's guide, it is most likely in ~/.ssh
“id_rsa” is the name for your private key in .ssh
First export the config directory to your local computer
scp -i ~/.ssh/id_rsa -r centos@[Node_A_Instance_IP]:/home/centos/.und_mainchain/config $HOME/[folder_export_name]
Next, export priv_validator_state.json to your local computer
scp -i ~/.ssh/id_rsa -r centos@[Node_A_Instance_IP]:/home/centos/.und_mainchain/data/priv_validator_state.json $HOME/[folder_export_name]
Now reverse the scp source and destination to upload the directory and file to Node B
scp -i ~/.ssh/id_rsa -r $HOME/[folder_export_name]/config centos@[Node_B_Instance_IP]:/home/centos/.und_mainchain/config
scp -i ~/.ssh/id_rsa -r $HOME/[folder_export_name]/priv_validator_state.json centos@[Node_B_Instance_IP]:/home/centos/.und_mainchain/data/
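Before starting Node B, it's worth confirming that the copied priv_validator_state.json is byte-identical on both machines; a truncated or corrupted copy of this file risks double-signing. A sketch of the check, demonstrated on a local stand-in file (on the real nodes, run sha256sum on each side and compare the two hashes):

```shell
# Create a stand-in for priv_validator_state.json and a copy of it.
ORIG=$(mktemp)
echo '{"height":"0","round":0,"step":0}' > "$ORIG"
cp "$ORIG" "$ORIG.copy"

# Hash both files; one distinct hash means the copy is intact.
sha256sum "$ORIG" "$ORIG.copy" | awk '{print $1}' | sort -u | wc -l
# 1
```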
At this point Node B is almost ready to take over as validator. Before you try to start your node, remove the --halt-height flag from your und.service file and run sudo systemctl daemon-reload again.
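Removing the flag is the inverse of the earlier edit and can also be scripted. Again sketched on a temporary copy; on the real node, run the sed against /etc/systemd/system/und.service with sudo.

```shell
# Rehearse on a temp file; use /etc/systemd/system/und.service (with sudo) for real.
SERVICE_FILE=$(mktemp)
echo 'ExecStart=/usr/local/bin/und start --halt-height 123456 --home=/home/centos/.und_mainchain' > "$SERVICE_FILE"

# Strip the --halt-height flag and its value from the ExecStart line.
sed -i 's| --halt-height [0-9]*||' "$SERVICE_FILE"

grep ExecStart "$SERVICE_FILE"
# ExecStart=/usr/local/bin/und start --home=/home/centos/.und_mainchain
```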
Once your und.service file is fixed, start your node and sync.
Congratulations, you have successfully transferred your validator node!