Our mission is to enable everyone to deploy and run resilient, secure and performant protocol infrastructure.
In the world of software, you usually need to decide between using a managed SaaS or running everything yourself in a self-hosted environment, handling all the operations needed to keep things running smoothly. At kuutamo we believe there is a better way: a hybrid, cloud-first way. A next-generation cloud. Our packaged services can be deployed anywhere: to any cloud, to bare metal, and to our users' own infrastructure. We aim to provide all the updates, monitoring and ops tooling needed, along with world-class SRE for protocol and infrastructure support services.
- Server/node: Any Linux OS
- Workstation/development machine: Any Linux OS
These are two different machines. The kneard manager, `kneard-mgr`, runs on your workstation and talks over SSH to your server/node. During install the server/node will be wiped and a fresh kuutamo near distribution will be installed onto it.
You will need a server with any Linux OS installed. You will need SSH root access with a key.
We have validated:
- OVH - Advance 1 Gen 2, 64GB RAM, 2 x 960GB NVMe, with Ubuntu. Before installing Ubuntu on the server, add your workstation SSH key.
- Latitude - c3.medium.x86, with Ubuntu. Before installing Ubuntu on the server, add your workstation SSH key.
- Install the Nix package manager, if you don't already have it. https://zero-to-nix.com/start/install is an excellent resource.
- Enable the `nix` command and flakes features:
$ mkdir -p ~/.config/nix/ && printf 'experimental-features = nix-command flakes\n' >> ~/.config/nix/nix.conf
- Trust pre-built binaries (optional):
$ printf 'trusted-substituters = https://cache.garnix.io https://cache.nixos.org/\ntrusted-public-keys = cache.garnix.io:CTFPyKSLcx5RMJKfLo5EEPUObbA78b0YQ2DTCJXqr9g= cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=\n' | sudo tee -a /etc/nix/nix.conf && sudo systemctl restart nix-daemon
- Alias `kneard-mgr` to a `nix run` command:
$ printf 'alias kneard-mgr="nix run --refresh github:kuutamolabs/near-staking-knd --"' >> ~/.bashrc && source ~/.bashrc
- Test the `kneard-mgr` command:
$ kneard-mgr --help
Answer ‘y’ to the four questions asked. After some downloading, you should see the help output:
Subcommand to run
Usage: kneard-mgr [OPTIONS] <COMMAND>
Commands:
generate-config Generate NixOS configuration
install Install Validator on a given machine. This will remove all data of the current system!
dry-update Upload update to host and show which actions would be performed on an update
update Update validator
rollback Rollback validator
proxy Proxy remote rpc to local
restart Schedule a restart in a window where no blocks or chunks are expected to be produced by the validator
ssh SSH into a host
system-info Get system info from a host
help Print this message or the help of the given subcommand(s)
Options:
--config <CONFIG> configuration file to load [env: KUUTAMO_CONFIG=] [default: kneard.toml]
--yes skip interactive dialogs by assuming the answer is yes
-h, --help Print help
-V, --version Print version
- New pool deployments can be done via the webapp UI Get Started flow at near.kuutamo.app (sign in with GitHub).
- Download the encrypted kuutamo app key file, the config file (`kneard.toml`) and an optional monitoring access token (`kuutamo-monitoring.token`) via the Manage button in the UI. Example files can be generated by `kneard-mgr generate-example`.
- Create a new directory and put these files in it.
[you@workstation:~/my-near-validator-1/]$ ls
my-pool.pool.devnet.zip kneard.toml
- In this directory run:
$ kneard-mgr install
- After the install finishes, you can connect to the node:
$ kneard-mgr ssh
- Follow the logs:
[root@validator-00:~]$ journalctl -u kuutamod.service
To update your node, run in the same directory:
$ kneard-mgr update
Although monitoring is not mandatory for deploying a node, it is highly recommended. By setting up monitoring, you can easily track metrics and identify potential issues.
To set up monitoring for your node, you can use the kuutamo monitoring token. The process is simple: obtain the token and configure your node to send metrics to the kuutamo monitor. By default, the token file name is `kuutamo-monitoring.token`, and the default monitoring server is https://mimir.monitoring-00-cluster.kuutamo.computer.
If you have multiple hosts in `kneard.toml` and want to use different tokens for each host, you can set the `kuutamo_monitoring_token_file` field for each host to point to the desired token file.
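As a sketch, per-host token files in `kneard.toml` might look like the fragment below. The `[hosts.<name>]` table layout, host names, and file paths here are illustrative assumptions, not taken from a real deployment:

```toml
# Hypothetical sketch: two hosts, each reading its own monitoring token.
# Host names and paths are illustrative assumptions.
[hosts.validator-00]
kuutamo_monitoring_token_file = "tokens/validator-00.token"

[hosts.validator-01]
kuutamo_monitoring_token_file = "tokens/validator-01.token"
```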
Alternatively, if you prefer to self-monitor the node, you can customize your monitoring server by setting the `self_monitoring_url`, `self_monitoring_username`, and `self_monitoring_password` fields of the host. The `self_monitoring_url` should implement Prometheus's Remote Write API.
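A self-monitoring setup could then look roughly as follows. Again, the host name, section layout, and endpoint values are illustrative assumptions; the URL must point at an endpoint that accepts Prometheus remote-write requests:

```toml
# Hypothetical sketch: push metrics to your own Prometheus-compatible endpoint.
# Host name and credential values are placeholders.
[hosts.validator-00]
self_monitoring_url = "https://prometheus.example.com/api/v1/write"
self_monitoring_username = "metrics-writer"
self_monitoring_password = "change-me"
```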