
Install Torq


Install Torq with Podman

Torq is just a simple binary, which means it can be run with Podman just like with any other container solution. We also publish a container on Docker Hub at lncapital/torq:latest.

Note: Only run with host networking when your server has a firewall and doesn't automatically open all ports to the internet. You don't want the database to be accessible from the internet!

Torq only requires a Postgres database with the TimescaleDB plugin. Use this command to run the database via host networking:

podman run -d --name torqdb --network=host -v torq_db:/var/lib/postgresql/data -e POSTGRES_PASSWORD="<YourPostgresPasswordHere>" timescale/timescaledb:latest-pg14

Then create your TOML configuration file and store it in ~/.torq/torq.conf (or another location of your choosing, but remember to change the location of the config variable below).

[db]
# Name of the database
#name = "torq"
# Name of the postgres user with access to the database
#user = "postgres"
# Password used to access the database
password = "runningtorq"
# Port of the database
#port = "5432"
# Host of the database
host = "<YourDatabaseHost>"

[torq]
# Password used to access the API and frontend
password = "<YourUIPassword>"
# Network interface to serve the HTTP API
#network-interface = "0.0.0.0"
# Port to serve the HTTP API
port = "<YourPort>"
# Specify different debug levels (panic|fatal|error|warn|info|debug|trace)
#debuglevel = "info"

[customize]
# Mempool custom URL (no trailing slash)
#mempool.url = "https://mempool.space"

You can see the full configuration options here: https://github.com/lncapital/torq/blob/main/docker/example-torq.conf

Then you can run Torq via host networking like this:

podman run -d --name torq --network=host -v ~/.torq/torq.conf:/home/torq/torq.conf lncapital/torq:latest --config=/home/torq/torq.conf start

You should now be able to see the Torq frontend running on port 8080.

Bitcoin Network: Be aware that when you try Torq on testnet, simnet or some other type of network, you must use the network switch when browsing the web interface. The network switch is the globe icon in the top left corner, next to the Torq logo.
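As a quick sanity check, you can confirm that both containers are running and that the frontend answers locally. This is a minimal sketch, assuming the container names used above and the default port 8080; adjust the port if you changed it in torq.conf.

# List the containers started above and confirm they are up
podman ps --filter name=torqdb --filter name=torq

# Confirm the Torq frontend/API responds locally (adjust the port if needed)
curl -I http://localhost:8080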


Torq Configuration options

Torq supports a TOML configuration file. The Docker Compose install script auto-generates this file. You can find an example configuration file at example-torq.conf.

It is also possible to skip the TOML configuration file entirely and use command-line parameters instead. The available parameters are:

--lnd.url: Host:Port of the LND node (example: "127.0.0.1:10009")
--lnd.macaroon-path: Path on disk to LND Macaroon (example: "~/.lnd/admin.macaroon")
--lnd.tls-path: Path on disk to LND TLS file (example: "~/.lnd/tls.cert")
--cln.url: Host:Port of the CLN node (example: "127.0.0.1:17272")
--cln.certificate-path: Path on disk to CLN client certificate file (example: "~/.cln/client.pem")
--cln.key-path: Path on disk to CLN client key file (example: "~/.cln/client-key.pem")
--cln.ca-certificate-path: Path on disk to CLN certificate authority file (example: "~/.cln/ca.pem")
--db.name: Name of the database (default: "torq")
--db.user: Name of the postgres user with access to the database (default: "postgres")
--db.password: Password used to access the database (default: "runningtorq")
--db.port: Port of the database (default: "5432")
--db.host: Host of the database (default: "localhost")
--torq.password: Password used to access the API and frontend (example: "C44y78A4JXHCVziRcFqaJfFij5HpJhF6VwKjz4vR")
--torq.network-interface: The network interface to serve the HTTP API (default: "0.0.0.0")
--torq.port: Port to serve the HTTP API (default: "8080")
--torq.pprof.path: When the pprof path is set, pprof is loaded when Torq boots (example: ":6060"). WARNING: pprof exposes internals of your app on whichever path you specify; be careful not to expose this publicly.
--torq.debuglevel: Specify different debug levels (panic|fatal|error|warn|info|debug|trace) (default: "info")
--torq.vector.url: Alternative URL for an alternative vector service implementation (default: "https://vector.ln.capital/")
--torq.cookie-path: Path to auth cookie file
--torq.no-sub: Start the server without subscribing to node data (default: "false")
--torq.auto-login: Allows logging in without a password (default: "false")
--customize.mempool.url: Mempool custom URL (no trailing slash) (default: "https://mempool.space")
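For example, instead of mounting a torq.conf, you could pass a handful of these parameters directly when starting the container. This is a minimal sketch of the Podman host-network setup from the article above, with placeholder values that you would replace with your own:

podman run -d --name torq --network=host lncapital/torq:latest \
  --db.host="localhost" \
  --db.password="runningtorq" \
  --torq.password="<YourUIPassword>" \
  --torq.port="8080" \
  start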


How to add a domain to Torq using Caddy.

In this help article, we will guide you through the process of adding a reverse proxy using Caddy as part of the Docker Compose setup for Torq. Caddy will automatically create and update TLS certificates, meaning you will have HTTPS ready to go out of the box.

Prerequisites

Before proceeding, please ensure that you have:
- Docker and Docker Compose installed on your server
- A registered domain name

Step 1: Stop Torq

Stop your Docker Compose stack before proceeding, either with stop-torq or docker-compose down.

Step 2: Add Caddy to the Docker Compose Configuration

Add the following configuration to your docker-compose file to set up the Caddy reverse proxy:

... the rest of the docker compose file from the quick install script above this point ...

  caddy:
    image: "lucaslorentz/caddy-docker-proxy:ci-alpine"
    environment:
      - CADDY_INGRESS_NETWORKS=caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    command:
      - reverse-proxy
      - --from
      - your.domain
      - --to
      - torq:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - caddy_data:/data
      - caddy_config:/config

volumes:
  torq_db:
  caddy_data:
  caddy_config:

NB: replace your.domain with your actual domain.

Step 3: Create the Required Caddy Folders

Add caddy_data and caddy_config folders inside the folder used for your Torq installation. By default, the Torq folder is located at ~/.torq. To create these folders, run the following commands if you are using the default location:

mkdir -p ~/.torq/caddy_data
mkdir -p ~/.torq/caddy_config

Step 4: Update Your DNS Settings

Add an A record in your domain's DNS settings pointing to the IP address of your server. This step may vary depending on your domain registrar, so refer to their documentation for specific instructions.

Step 5: Start Torq

You can then start Torq again using start-torq, or navigate to your .torq folder and run docker-compose up -d.

NB: If your domain is not yet pointing to your server's address (for example, before the DNS settings have propagated), you might need to restart the Caddy container (or run the Docker Compose startup again) once it does.
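After updating the DNS record, you can check from the server that the domain resolves and that Caddy is serving it over HTTPS. This is a minimal sketch, with your.domain standing in for your actual domain:

# Check that the A record resolves to your server's IP
dig +short your.domain

# Confirm Caddy answers over HTTPS and proxies to Torq
curl -I https://your.domain

# If the certificate was requested before DNS had propagated, restart the Caddy service
docker-compose restart caddy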


Infrastructure and node monitoring

When you use Torq as middleware, you can get very good insight into how well your Lightning node is functioning by exposing Prometheus and OpenTelemetry data.

To enable Prometheus you need to provide extra configuration:

torq.prometheus.path = "localhost:7070"

To enable OpenTelemetry, several options are required. Here is an example for Jaeger-HTTP:

otel.exporter.type = "otlpHttp"
otel.exporter.endpoint = "http://localhost:4318"
sampler.fraction = 0.1

And for Jaeger-gRPC:

otel.exporter.type = "otlpGrpc"
otel.exporter.endpoint = "http://localhost:4317"
sampler.fraction = 0.1

Note: Make sure OTLP is enabled in Jaeger with --collector.otlp.enabled=true

Prometheus is for real-time statistics. OpenTelemetry (Jaeger) is for tracing, so more for backtracking how things behaved at a certain point in time: how many executions there were and how long they took to complete. Grafana is for creating nice, insightful graphs. The Prometheus node exporter is for OS metrics like memory, CPU, disk space and so on.

Below is an example configuration showing how you could monitor your entire stack, including the OS. The example setup uses Podman with host networking. Whenever you use host networking, make sure you understand: you need a firewall!

File ~/prometheus.yml:

global:
  scrape_interval: 5s
  external_labels:
    monitor: 'monitoring-torq-stack'

scrape_configs:
  - job_name: 'prometheus-torq'
    metrics_path: '/metrics'
    static_configs:
      - targets:
          - 'localhost:7070'
  - job_name: 'prometheus-jaeger'
    metrics_path: '/metrics'
    static_configs:
      - targets:
          - 'localhost:14269'
  - job_name: 'prometheus-node'
    static_configs:
      - targets: ['localhost:9100']

rule_files:
  - '/alert.rules'

File ~/alert.rules:

groups:
  - name: generic.service_down
    rules:
      - alert: service_down
        expr: up == 0
        for: 30s
        annotations:
          summary: Instance is down

File ~/grafana.ini:

[paths]
logs = /log

[server]
root_url = http://localhost/grafana
serve_from_sub_path = true
router_logging = true

[auth.anonymous]
enabled = true
;org_name = torq.co
;org_role = Viewer

Boot the Grafana container:

podman run -d --name grafana -h grafana --network=host --restart=always -v /etc/localtime:/etc/localtime:ro -v grafanaVolume:/var/lib/grafana -v logVolume:/log -e "GF_SECURITY_ADMIN_PASSWORD=YOURSECUREPASSWORDGOESINHERE" -e "GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-worldmap-panel,grafana-piechart-panel,briangann-datatable-panel" -v ~/grafana.ini:/etc/grafana/grafana.ini:z docker.io/grafana/grafana

Boot the Prometheus container:

podman run -d --name prometheus -h prometheus --network=host --restart=always -v /etc/localtime:/etc/localtime:ro -v ~/prometheus.yml:/prometheus.yml:z -v ~/alert.rules:/alert.rules:z -v logVolume:/log docker.io/prom/prometheus --config.file=/prometheus.yml --web.route-prefix=/prometheus --web.external-url=http://localhost/prometheus

Boot the Jaeger container (OpenTelemetry):

podman run -d --name jaeger -h jaeger --network=host --restart=always -v /etc/localtime:/etc/localtime:ro -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 -e COLLECTOR_OTLP_ENABLED=true -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 -p 16686:16686 -p 4317:4317 -p 4318:4318 -p 14250:14250 -p 14268:14268 -p 14269:14269 -p 9411:9411 -v logVolume:/log docker.io/jaegertracing/all-in-one:1.45

Boot the Prometheus node exporter (OS metrics, Podman edition):

podman run -d --name prometheus-node-exporter -h prometheus-node-exporter --network=host --restart=always -v /etc/localtime:/etc/localtime:ro --pid="host" --net="host" -v "/:/host:ro,rslave" quay.io/prometheus/node-exporter:latest --path.rootfs=/host
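Once the containers are booted, a quick way to confirm the wiring is to query the endpoints that Prometheus scrapes in the configuration above. This is a minimal sketch, assuming the host-network ports used in this article and Prometheus's default listen port 9090:

# Torq metrics (torq.prometheus.path = "localhost:7070")
curl -s http://localhost:7070/metrics | head

# Jaeger admin endpoint scraped by the 'prometheus-jaeger' job
curl -s http://localhost:14269/metrics | head

# Node exporter OS metrics
curl -s http://localhost:9100/metrics | head

# Prometheus API behind the /prometheus route prefix set above
curl -s http://localhost:9090/prometheus/api/v1/targets | head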
