antitree / private-tor-network

Run an isolated instance of a tor network in Docker containers

Ignoring unsupported options: links

callmexss opened this issue

Docker version:

Docker version 18.06.1-ce, build e68fc7a

The issue:
When I run the command:

docker stack deploy --compose-file docker-compose.yml torstack

The terminal outputs this:

➜  private-tor-network git:(master) docker stack deploy --compose-file docker-compose.yml torstack
Ignoring unsupported options: links

Updating service torstack_relay (id: tffiq75zoc8grn3qa0wed1qwy)
Updating service torstack_exit (id: aeyhou3i5eck7pa8o8g6pioyw)
Updating service torstack_client (id: yq15cmssxeymvyihlx2fv0ia9)
Updating service torstack_hs (id: dzbjwbrag9135lg1am45wevmp)
Updating service torstack_web (id: kxoa2psvu64i3gfc9vzpidtk9)
Updating service torstack_da1 (id: 0661dlucl6cd9v3gwdhe6cpc5)
Updating service torstack_da2 (id: ezbgb6v528i0ofubjwthpxibg)
Updating service torstack_da3 (id: niij6h9rjdkvwa1rshono0p8g)

I googled around and found this on Stack Overflow:

The above answer is actually wrong. links: is not supported by docker stack deploy, see this link: https://docs.docker.com/compose/compose-file/#not-supported-for-docker-stack-deploy
ref: how to connect to container in docker stack deploy

Could you please make an upgrade?
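For reference, the fix on the compose side is usually just to delete the links: entries: in a v3 file deployed to swarm, services on the same network already resolve each other by service name via DNS. A hypothetical sketch (service names taken from the stack output above):

```yaml
# Hypothetical v3 sketch -- not the repo's actual file.
# In swarm mode, `links:` is simply dropped; services on the same
# network resolve each other by service name over built-in DNS.
version: "3"
services:
  da1:
    image: antitree/private-tor:latest
  relay:
    image: antitree/private-tor:latest
    # no `links: [da1]` needed; the hostname "da1" resolves anyway
```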

I'm also facing the same issue @callmexss @antitree

Yeah, "--links" is old and should be removed, but is it causing an issue or just showing the warning?

> Yeah, "--links" is old and should be removed, but is it causing an issue or just showing the warning?

The stack can be deployed, but the private tor network doesn't seem to come up.
I'm not very familiar with tor or docker, so I can only give a rough description of what I did...

First, I ran the deploy:

➜  private-tor-network git:(master) ✗ docker stack deploy --compose-file docker-compose.yml torstack
Ignoring unsupported options: links

Creating network torstack_default
Creating service torstack_relay
Creating service torstack_exit
Creating service torstack_client
Creating service torstack_hs
Creating service torstack_web
Creating service torstack_da1
Creating service torstack_da2
Creating service torstack_da3

Then I checked the stack's service tasks:

➜  private-tor-network git:(master) ✗ docker stack ps torstack                                      
ID                  NAME                IMAGE                         NODE                DESIRED STATE       CURRENT STATE           ERROR                       PORTS
rquqbiy21cp8        torstack_da3.1      antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
xr29xgok1f0u        torstack_da2.1      antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
ensw9omtro8l        torstack_da1.1      antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
tz9z38t4tr9r        torstack_hs.1       antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
sg3jll0xl2yn        torstack_web.1      nginx:latest                  lfish               Running             Running 2 minutes ago                               
m48h9na13l5q        torstack_hs.1       antitree/private-tor:latest   lfish               Shutdown            Failed 2 minutes ago    "task: non-zero exit (1)"   
c2f0hubjuntv        torstack_client.1   antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
7ae8k9nqxml4        torstack_exit.1     antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
i98stqwge6w1        torstack_relay.1    antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
wy7y3ddh7i67        torstack_hs.2       antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
g7xj67v41vg6        torstack_exit.2     antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
r3xjmgrx61az        torstack_relay.2    antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
tie0bodaeq7q        torstack_hs.3       antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
3e7jkoe80rod        torstack_exit.3     antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
2frxmxmecq6e        torstack_relay.3    antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
u1ubyw7bju49        torstack_hs.4       antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
s6d9dje1bvvv        torstack_exit.4     antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
rjn4eko26kvz        torstack_relay.4    antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
i4t98v7i1syu        torstack_exit.5     antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
sio72mxm49g0        torstack_relay.5    antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
wu7fpwdapg3j        torstack_exit.6     antitree/private-tor:latest   lfish               Running             Running 2 minutes ago                               
93q74i1sg1z1        torstack_relay.6    antitree/private-tor:latest   lfish               Running             Running 2 minutes ago  
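One of the hs tasks already shows Failed with a non-zero exit in that listing. A couple of commands (sketched here; the filter syntax should work on any recent Docker) can pull out the full error and the logs across all replicas of a service:

```shell
# Show untruncated error messages for tasks that have shut down:
docker stack ps torstack --no-trunc --filter "desired-state=shutdown"

# Aggregate the last lines of logs from every replica of the failing service:
docker service logs --tail 50 torstack_hs
```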

Then the services:

➜  private-tor-network git:(master) ✗ docker stack services torstack
ID                  NAME                MODE                REPLICAS            IMAGE                         PORTS
1hyy9yobaari        torstack_exit       replicated          6/6                 antitree/private-tor:latest   
a38kuebxs4iq        torstack_hs         replicated          4/4                 antitree/private-tor:latest   
avpdwb4nwtdz        torstack_web        replicated          1/1                 nginx:latest                  
ihaui4vbrk4s        torstack_da1        replicated          1/1                 antitree/private-tor:latest   
jldcyast5zr6        torstack_relay      replicated          6/6                 antitree/private-tor:latest   
ldui3pmc6v3a        torstack_da2        replicated          1/1                 antitree/private-tor:latest   
p9i5a6k89t5s        torstack_da3        replicated          1/1                 antitree/private-tor:latest   
w4mziouhly0c        torstack_client     replicated          1/1                 antitree/private-tor:latest   *:9050-9051->9050-9051/tcp

DA's log

➜  private-tor-network git:(master) ✗ docker logs --tail 10 torstack_da3.1.rquqbiy21cp8a8jovqqsyom87
Oct 31 02:54:55.000 [warn] We don't have enough votes to generate a consensus: 1 of 29
Oct 31 02:54:58.000 [notice] Time to fetch any signatures that we're missing.
Oct 31 02:55:00.000 [notice] Time to publish the consensus and discard old votes
Oct 31 02:55:00.000 [warn] Not enough info to publish pending ns consensus
Oct 31 02:55:00.000 [warn] Not enough info to publish pending microdesc consensus
Oct 31 02:55:00.000 [info] sr_state_update(): SR: State prepared for upcoming voting period (2018-10-31 02:55:00). Upcoming phase is commit (counters: 2 commit & 0 reveal rounds).
Oct 31 02:55:00.000 [info] Choosing expected valid-after time as 2018-10-31 03:00:00: consensus_set=0, interval=300
Oct 31 02:55:02.000 [info] run_connection_housekeeping(): Expiring wedged directory conn (fd 15, purpose 14)
Oct 31 02:55:02.000 [info] conn_close_if_marked(): Conn (addr "172.18.0.20", fd 15, type Directory, state 1) marked, but wants to flush 518 bytes. (Marked at src/or/main.c:1210)
Oct 31 02:55:02.000 [info] conn_close_if_marked(): We stalled too much while trying to write 518 bytes to address "172.18.0.20".  If this happens a lot, either something is wrong with your network connection, or something is wrong with theirs. (fd 15, type Directory, state 1, marked at src/or/main.c:1210).

HS's log

➜  private-tor-network git:(master) ✗ docker logs --tail 10 torstack_hs.1.tz9z38t4tr9reez4qeyt4k7f8
Oct 31 02:58:16.000 [info] circuit_n_chan_done(): Channel failed; closing circ.
Oct 31 02:58:16.000 [info] circuit_mark_for_close_(): Circuit 0 (id: 17) marked for close at src/or/circuitbuild.c:624 (orig reason: 8, new reason: 0)
Oct 31 02:58:16.000 [info] connection_or_note_state_when_broken(): Connection died in state 'connect()ing with SSL state (No SSL object)'
Oct 31 02:58:16.000 [info] circuit_build_failed(): Our circuit 0 (id: 17) died before the first hop with no connection
Oct 31 02:58:16.000 [info] circuit_free_(): Circuit 0 (id: 17) has been freed.
Oct 31 02:58:27.000 [info] run_connection_housekeeping(): Expiring wedged directory conn (fd -1, purpose 14)
Oct 31 02:58:27.000 [info] connection_free_minimal(): Freeing linked Directory connection [client reading] with 0 bytes on inbuf, 0 on outbuf.
Oct 31 02:58:27.000 [info] connection_edge_process_inbuf(): data from edge while in 'waiting for circuit' state. Leaving it on buffer.
Oct 31 02:58:27.000 [info] connection_edge_reached_eof(): conn (fd -1) reached eof. Closing.
Oct 31 02:58:27.000 [info] connection_free_minimal(): Freeing linked Socks connection [waiting for circuit] with 514 bytes on inbuf, 0 on outbuf.

WEB's log (no output)

➜  private-tor-network git:(master) ✗ docker logs --tail 10 torstack_web.1.sg3jll0xl2ynplr0q3wehm2wq

CLIENT's log

➜  private-tor-network git:(master) ✗ docker logs --tail 10 torstack_client.1.c2f0hubjuntvje6rr4867j4pg
Oct 31 03:18:39.000 [info] connection_edge_reached_eof(): conn (fd -1) reached eof. Closing.
Oct 31 03:18:39.000 [info] connection_free_minimal(): Freeing linked Socks connection [waiting for circuit] with 507 bytes on inbuf, 0 on outbuf.
Oct 31 03:20:19.000 [warn] Problem bootstrapping. Stuck at 5%: Connecting to directory server. (Connection timed out; TIMEOUT; count 20; recommendation warn; host 1EE96B28A0378FAE8965949198F7A0CE03C19662 at 172.18.0.20:7000)
Oct 31 03:20:19.000 [warn] 19 connections have failed:
Oct 31 03:20:19.000 [warn]  19 connections died in state connect()ing with SSL state (No SSL object)
Oct 31 03:20:19.000 [info] circuit_n_chan_done(): Channel failed; closing circ.
Oct 31 03:20:19.000 [info] circuit_mark_for_close_(): Circuit 0 (id: 20) marked for close at src/or/circuitbuild.c:624 (orig reason: 8, new reason: 0)
Oct 31 03:20:19.000 [info] connection_or_note_state_when_broken(): Connection died in state 'connect()ing with SSL state (No SSL object)'
Oct 31 03:20:19.000 [info] circuit_build_failed(): Our circuit 0 (id: 20) died before the first hop with no connection
Oct 31 03:20:19.000 [info] circuit_free_(): Circuit 0 (id: 20) has been freed.

EXIT's log

➜  private-tor-network git:(master) ✗ docker logs --tail 10 torstack_exit.1.7ae8k9nqxml4q4657iivz4hck  
Oct 31 03:21:39.000 [info] run_connection_housekeeping(): Expiring wedged directory conn (fd 13, purpose 14)
Oct 31 03:21:39.000 [info] conn_close_if_marked(): Conn (addr "172.18.0.18", fd 13, type Directory, state 1) marked, but wants to flush 507 bytes. (Marked at src/or/main.c:1210)
Oct 31 03:21:39.000 [info] conn_close_if_marked(): We stalled too much while trying to write 507 bytes to address "172.18.0.18".  If this happens a lot, either something is wrong with your network connection, or something is wrong with theirs. (fd 13, type Directory, state 1, marked at src/or/main.c:1210).
Oct 31 03:22:20.000 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Oct 31 03:22:20.000 [info] directory_send_command(): Downloading consensus from 172.18.0.20:9030 using /tor/status-vote/current/consensus-microdesc/00141F+039577+11CB06+20D57B+289A0C+2AEFEC+2C2526+2D5F71+2F536A+31F30F+3B5C3E+3E6B30+412D37+4A4B6E+52877C+58CD5A+5F5B1F+6236AE+63BA7C+64C364+660B2B+7361BB+75F9D4+7AF17A+82B537+833C8D+8368B4+849D37+865D3C+88905D+8E82D6+9B108E+9F487F+A2671C+A27E8A+A3BB64+A814CF+A935CD+AF6BC6+B08EB3+B0918B+B80B4E+B88C57+C72875+CBD43B+CF0DDF+D3F07D+DBA1A8+E77AE6+EAB5CC+EAC742+EAF448+EBEF88+EE0338.z
Oct 31 03:22:51.000 [info] run_connection_housekeeping(): Expiring wedged directory conn (fd 13, purpose 14)
Oct 31 03:22:51.000 [info] conn_close_if_marked(): Conn (addr "172.18.0.20", fd 13, type Directory, state 1) marked, but wants to flush 507 bytes. (Marked at src/or/main.c:1210)
Oct 31 03:22:51.000 [info] conn_close_if_marked(): We stalled too much while trying to write 507 bytes to address "172.18.0.20".  If this happens a lot, either something is wrong with your network connection, or something is wrong with theirs. (fd 13, type Directory, state 1, marked at src/or/main.c:1210).
Oct 31 03:23:37.000 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Oct 31 03:23:37.000 [info] directory_send_command(): Downloading consensus from 172.18.0.9:9030 using /tor/status-vote/current/consensus-microdesc/00141F+039577+11CB06+20D57B+289A0C+2AEFEC+2C2526+2D5F71+2F536A+31F30F+3B5C3E+3E6B30+412D37+4A4B6E+52877C+58CD5A+5F5B1F+6236AE+63BA7C+64C364+660B2B+7361BB+75F9D4+7AF17A+82B537+833C8D+8368B4+849D37+865D3C+88905D+8E82D6+9B108E+9F487F+A2671C+A27E8A+A3BB64+A814CF+A935CD+AF6BC6+B08EB3+B0918B+B80B4E+B88C57+C72875+CBD43B+CF0DDF+D3F07D+DBA1A8+E77AE6+EAB5CC+EAC742+EAF448+EBEF88+EE0338.z

RELAY's log

➜  private-tor-network git:(master) ✗ docker logs --tail 10 torstack_relay.1.i98stqwge6w1yeh0h7y3dtele
Oct 31 03:15:32.000 [info] update_consensus_networkstatus_downloads(): Launching ns standard networkstatus consensus download.
Oct 31 03:15:32.000 [info] directory_send_command(): Downloading consensus from 172.18.0.18:9030 using /tor/status-vote/current/consensus/00141F+039577+11CB06+20D57B+289A0C+2AEFEC+2C2526+2D5F71+2F536A+31F30F+3B5C3E+3E6B30+412D37+4A4B6E+52877C+58CD5A+5F5B1F+6236AE+63BA7C+64C364+660B2B+7361BB+75F9D4+7AF17A+82B537+833C8D+8368B4+849D37+865D3C+88905D+8E82D6+9B108E+9F487F+A2671C+A27E8A+A3BB64+A814CF+A935CD+AF6BC6+B08EB3+B0918B+B80B4E+B88C57+C72875+CBD43B+CF0DDF+D3F07D+DBA1A8+E77AE6+EAB5CC+EAC742+EAF448+EBEF88+EE0338.z
Oct 31 03:16:03.000 [info] run_connection_housekeeping(): Expiring wedged directory conn (fd 13, purpose 14)
Oct 31 03:16:03.000 [info] conn_close_if_marked(): Conn (addr "172.18.0.18", fd 13, type Directory, state 1) marked, but wants to flush 497 bytes. (Marked at src/or/main.c:1210)
Oct 31 03:16:03.000 [info] conn_close_if_marked(): We stalled too much while trying to write 497 bytes to address "172.18.0.18".  If this happens a lot, either something is wrong with your network connection, or something is wrong with theirs. (fd 13, type Directory, state 1, marked at src/or/main.c:1210).
Oct 31 03:17:45.000 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Oct 31 03:17:45.000 [info] directory_send_command(): Downloading consensus from 172.18.0.22:9030 using /tor/status-vote/current/consensus-microdesc/00141F+039577+11CB06+20D57B+289A0C+2AEFEC+2C2526+2D5F71+2F536A+31F30F+3B5C3E+3E6B30+412D37+4A4B6E+52877C+58CD5A+5F5B1F+6236AE+63BA7C+64C364+660B2B+7361BB+75F9D4+7AF17A+82B537+833C8D+8368B4+849D37+865D3C+88905D+8E82D6+9B108E+9F487F+A2671C+A27E8A+A3BB64+A814CF+A935CD+AF6BC6+B08EB3+B0918B+B80B4E+B88C57+C72875+CBD43B+CF0DDF+D3F07D+DBA1A8+E77AE6+EAB5CC+EAC742+EAF448+EBEF88+EE0338.z
Oct 31 03:18:16.000 [info] run_connection_housekeeping(): Expiring wedged directory conn (fd 13, purpose 14)
Oct 31 03:18:16.000 [info] conn_close_if_marked(): Conn (addr "172.18.0.22", fd 13, type Directory, state 1) marked, but wants to flush 507 bytes. (Marked at src/or/main.c:1210)
Oct 31 03:18:16.000 [info] conn_close_if_marked(): We stalled too much while trying to write 507 bytes to address "172.18.0.22".  If this happens a lot, either something is wrong with your network connection, or something is wrong with theirs. (fd 13, type Directory, state 1, marked at src/or/main.c:1210).

That's everything I did and the logs I got. From the logs, it seems the private network did not come together as a whole. I lack the specific knowledge here... I'm trying to learn more, but I still need help to make the tool work.
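A hypothetical spot-check that might narrow this down: from inside the client container, try fetching the consensus directly from one of the DA addresses that appears in the logs (172.18.0.20, DirPort 9030 per the download lines above). If this times out immediately, the problem is plain container-to-container reachability on the overlay network rather than tor itself.

```shell
# Hypothetical diagnostic -- address and port taken from the logs above.
docker exec -it $(docker ps -qf name=torstack_client) \
  wget -qO- http://172.18.0.20:9030/tor/status-vote/current/consensus
```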

Actually it is showing an error:
desktop:~/private-tor-network$ sudo docker stack deploy --compose-file docker-compose.yml torstack

Ignoring unsupported options: links

And one more thing: when I run sudo docker stack ps torstack, I get

Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
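That second error just means the CLI isn't talking to a swarm manager; docker stack commands only work once swarm mode is enabled. A minimal sketch (a single-node swarm is fine for testing):

```shell
# Enable swarm mode on this machine, making it a manager:
sudo docker swarm init

# Then the stack commands work:
sudo docker stack deploy --compose-file docker-compose.yml torstack
sudo docker stack ps torstack
```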

I am trying to set up a private tor network and I have 3 servers. How do I configure them as one DA server, one relay server, and one exit node server?
Please help me, I am a newbie at this.
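One possible way to spread the roles across three machines (not from this repo's docs; just a sketch) is a three-node swarm plus placement constraints. Run docker swarm init on one server, docker swarm join on the other two, and pin each service to a node. The hostnames below are placeholders:

```yaml
# Hypothetical sketch: pin each role to one of three swarm nodes.
# "node-da", "node-relay", "node-exit" are placeholder hostnames.
version: "3"
services:
  da1:
    image: antitree/private-tor:latest
    deploy:
      placement:
        constraints: ["node.hostname == node-da"]
  relay:
    image: antitree/private-tor:latest
    deploy:
      placement:
        constraints: ["node.hostname == node-relay"]
  exit:
    image: antitree/private-tor:latest
    deploy:
      placement:
        constraints: ["node.hostname == node-exit"]
```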
@callmexss @antitree

This is an issue with the quickstart instructions now that docker-compose doesn't allow v2 of the older docker-compose.yml format. I'll need to update it.

Thanks for the feedback. I've updated the docker-compose.yml file to v3 and changed the way volumes work. You can now follow the directions for using docker-compose up to build a test network.
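For anyone landing here later, the updated path is roughly (exact flags are a sketch, not from the repo's README):

```shell
# Pull the updated v3 compose file, then use plain docker-compose
# instead of `docker stack deploy`:
git pull
docker-compose up -d       # start the test network in the background
docker-compose logs -f     # watch the DAs vote and reach consensus
docker-compose down -v     # tear down, discarding the volumes
```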