torodb / stampede

The ToroDB solution to provide better analytics on top of MongoDB and make it easier to migrate from MongoDB to SQL

Home Page: https://www.torodb.com/stampede/



Issue in initial setup

jayadevanm opened this issue · comments

I am testing ToroDB in development before trying it in production. There are 3 databases in our MongoDB. Some of the collections are about 50 GB in size. When starting Stampede, there are quite a few messages like this:
Documents don't fit on <database>.<collection> Rolling back.
I ignore those. Then Stampede crashes, throwing errors like this:
Error while trying to prepare the schema... Refreshing metadata... FATAL TOROD - Fatal error while trying to refresh the metadata after an erroneuous attempt to prepare the schema ERROR LIFECYCLE - Error reported by com.torodb.torod.impl.sql.schema.Logic@3cf9486d. Stopping ToroDB Stampede
The dev database is not under any load. Does this have something to do with Stampede running out of memory/heap? Other info: torodb-stampede-1.0.0, MongoDB 3.2, CentOS 7.

Hi @jayadevanm, can you publish the whole stacktrace?

Also, can you enable stacktraces with this log4j2.xml file:

torodb-stampede --log4j2-file log4j2.xml
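(Note: the log4j2.xml attached to this comment is not reproduced on this page. As a rough sketch only, assuming a standard Log4j 2 configuration rather than the file that was actually attached, a configuration that enables DEBUG logging for ToroDB and prints full stacktraces might look like this; the com.torodb logger name and the levels are assumptions:)

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <!-- %ex{full} appends the complete stacktrace of any logged throwable -->
      <PatternLayout pattern="%d{HH:mm:ss.SSS} %-5level %marker '%thread' %logger{36} - %msg%n%ex{full}"/>
    </Console>
  </Appenders>
  <Loggers>
    <!-- Assumed logger name: turn on DEBUG for the ToroDB packages -->
    <Logger name="com.torodb" level="debug"/>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>

The path to this file is then passed with the --log4j2-file option shown above.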

Attaching output till Toro started replicating...
06:42:14.701 INFO LIFECYCLE 'StampedeService-STARTING' c.t.s.StampedeService - Starting up ToroDB Stampede
06:42:14.996 INFO BACKEND 'StampedeService-STARTING' c.t.b.p.PostgreSqlDbBackend - Configured PostgreSQL backend at localhost:5432
06:42:16.730 DEBUG BACKEND 'PostgreSqlBackendBundle-START c.t.b.AbstractBackendBundle - Starting low level backend service
06:42:16.855 INFO BACKEND 'PostgreSqlDbBackend-STARTING' c.t.b.AbstractDbBackendService - Created pool session with size 28 and level TRANSACTION_REPEATABLE_READ
06:42:16.921 INFO BACKEND 'PostgreSqlDbBackend-STARTING' c.t.b.AbstractDbBackendService - Created pool system with size 1 and level TRANSACTION_REPEATABLE_READ
06:42:16.949 INFO BACKEND 'PostgreSqlDbBackend-STARTING' c.t.b.AbstractDbBackendService - Created pool cursors with size 1 and level TRANSACTION_REPEATABLE_READ
06:42:16.994 DEBUG BACKEND 'PostgreSqlBackendBundle-START c.t.b.AbstractBackendBundle - Low level backend service started
06:42:16.994 DEBUG BACKEND 'PostgreSqlBackendBundle-START c.t.b.AbstractBackendBundle - Starting backend service
06:42:16.995 DEBUG BACKEND 'BackendServiceImpl-STARTING' c.t.b.s.BackendServiceImpl - Starting backend...
06:42:16.995 DEBUG BACKEND 'BackendServiceImpl-STARTING' c.t.b.s.BackendServiceImpl - Backend started
06:42:16.995 DEBUG BACKEND 'PostgreSqlBackendBundle-START c.t.b.AbstractBackendBundle - Backend service started
06:42:16.995 DEBUG BACKEND 'PostgreSqlBackendBundle-START c.t.b.AbstractBackendBundle - Validating database metadata
06:42:17.181 INFO BACKEND 'PostgreSqlBackendBundle-START c.t.b.m.AbstractSchemaUpdater - Schema 'torodb' not found. Creating it...
06:42:17.312 INFO BACKEND 'PostgreSqlBackendBundle-START c.t.b.m.AbstractSchemaUpdater - Schema 'torodb' created
06:42:17.322 INFO BACKEND 'PostgreSqlBackendBundle-START c.t.b.AbstractBackendBundle - Database metadata has been validated
06:42:17.522 WARN LIFECYCLE 'StampedeService-STARTING' c.t.s.StampedeService - Found that replication shard unsharded is not consistent.
06:42:17.522 WARN LIFECYCLE 'StampedeService-STARTING' c.t.s.StampedeService - Dropping user data.
06:42:17.570 INFO REPL-unsha 'StampedeService-STARTING' c.t.s.StampedeService - Consistent state has been set to 'false'
06:42:17.675 DEBUG TOROD 'SqlTorodBundle-STARTING' c.t.t.AbstractTorodBundle - Starting Torod sevice
06:42:17.703 DEBUG TOROD 'SqlTorodServer-STARTING' c.t.t.i.s.SqlTorodServer - Reading last used rids...
06:42:17.710 DEBUG TOROD 'SqlTorodBundle-STARTING' c.t.t.AbstractTorodBundle - Torod sevice started
06:42:17.717 INFO LIFECYCLE 'StampedeService-STARTING' .t.m.r.s.MongoDbShardingBundle - Starting replication from replica set named rs1
06:42:18.318 DEBUG REPL 'StampedeService-STARTING' c.t.m.r.g.AkkaDbClonerProvider - Using AkkaDbCloner with: {parallelLevel: 1, docsPerTransaction: 1000}
06:42:18.326 DEBUG MONGOD 'MongodServer-STARTING' c.t.m.c.MongodServer - Waiting for Torod server to be running
06:42:18.326 DEBUG MONGOD 'MongodServer-STARTING' c.t.m.c.MongodServer - MongodServer ready to run
06:42:18.328 INFO REPL 'MongoDbReplBundle-STARTING' c.t.m.r.MongoDbReplBundle - Starting replication service
06:42:18.330 DEBUG REPL 'ReplCoreBundle-STARTING' c.t.m.r.ReplCoreBundle - Starting oplog manager
06:42:18.331 DEBUG REPL 'OplogManager-STARTING' c.t.m.r.OplogManager - Starting OplogManager
06:42:18.357 DEBUG REPL 'OplogManager-STARTING' c.t.m.r.OplogManager - Started OplogManager
06:42:18.357 DEBUG REPL 'ReplCoreBundle-STARTING' c.t.m.r.ReplCoreBundle - Oplog manager started
06:42:18.361 DEBUG REPL 'TopologyService-STARTING' c.t.m.r.t.TopologyService - Starting topology service
06:42:18.362 DEBUG REPL 'Heartbeat-handler-STARTING' m.r.t.TopologyHeartbeatHandler - Starting up Heartbeat handler
06:42:19.022 DEBUG REPL 'topology-executor-0' m.r.t.TopologyHeartbeatHandler - Accepting the new replica set config (version is 1) without validating it first (not supported yet)
06:42:19.026 DEBUG REPL 'topology-executor-0' c.t.m.r.t.TopologyExecutor - Changing version from -1 to 1
06:42:19.027 DEBUG REPL 'topology-executor-0' m.r.t.TopologyHeartbeatHandler - Scheduling new heartbeats to nodes on config 1
06:42:19.046 DEBUG REPL 'Heartbeat-handler-STARTING' m.r.t.TopologyHeartbeatHandler - Heartbeat handler has been started up
06:42:19.060 INFO REPL 'topology-executor-0' c.t.m.r.t.TopologyCoordinator - Waiting for 2 pings from other members before syncing
06:42:19.061 DEBUG REPL 'TopologyService-STARTING' c.t.m.r.t.TopologyService - Waiting until topology is ready
06:42:19.177 INFO REPL 'topology-executor-0' c.t.m.r.t.TopologyCoordinator - Member dev:27017 is now in state RS_PRIMARY
06:42:20.061 INFO REPL 'topology-executor-0' c.t.m.r.t.TopologyCoordinator - Waiting for 1 pings from other members before syncing
06:42:20.062 DEBUG REPL 'TopologyService-STARTING' c.t.m.r.t.TopologyService - Waiting until topology is ready
06:42:21.062 INFO REPL 'topology-executor-0' c.t.m.r.t.TopologyCoordinator - Waiting for 1 pings from other members before syncing
06:42:21.062 DEBUG REPL 'TopologyService-STARTING' c.t.m.r.t.TopologyService - Waiting until topology is ready
06:42:22.068 INFO REPL 'topology-executor-0' c.t.m.r.t.TopologyCoordinator - syncing from: dev:27017
06:42:22.068 INFO REPL 'TopologyService-STARTING' c.t.m.r.t.TopologyService - Topology service started
06:42:22.132 DEBUG REPL 'ReplCoordinator-STARTING' c.t.m.r.ReplCoordinator - Starting replication coordinator
06:42:22.135 DEBUG REPL 'ReplCoordinator-STARTING' c.t.m.r.ReplCoordinator - loadStoredConfig() is not implemented yet
06:42:22.135 INFO REPL 'ReplCoordinator-STARTING' c.t.m.r.ReplCoordinator - Database is not consistent.
06:42:22.136 DEBUG REPL 'ReplCoordinator-STARTING' c.t.m.r.ReplCoordinator - Replication coordinator started
06:42:22.136 INFO REPL 'MongoDbReplBundle-STARTING' c.t.m.r.MongoDbReplBundle - Replication service started
06:42:22.138 INFO REPL 'repl-coord-starting-recovery' .r.ReplCoordinatorStateMachine - Starting RECOVERY mode
06:42:22.140 INFO LIFECYCLE 'StampedeService-STARTING' c.t.s.StampedeService - ToroDB Stampede is now running
06:42:22.143 INFO REPL 'RecoveryService' c.t.m.r.RecoveryService - Starting RECOVERY service
06:42:22.143 INFO REPL 'RecoveryService' c.t.m.r.RecoveryService - Starting initial sync
06:42:22.157 INFO REPL 'RecoveryService' c.t.m.r.ReplCoordinator - Consistent state has been set to 'false'
06:42:22.160 INFO REPL 'RecoveryService' c.t.m.r.RecoveryService - Using node dev:27017 to replicate from
06:42:22.160 DEBUG REPL 'RecoveryService' c.t.m.r.RecoveryService - Remote client obtained
06:42:22.185 INFO REPL 'RecoveryService' c.t.m.r.RecoveryService - Remote database cloning started
06:42:22.205 DEBUG TOROD 'RecoveryService' c.t.t.i.s.SqlWriteTransaction - Documents don't fit on torodb.oplog.replication. Rolling back
06:42:22.210 DEBUG unclassifi 'RecoveryService' c.t.c.r.SmartRetrier - Exception catched on the replier
com.torodb.core.transaction.RollbackException: com.torodb.torod.impl.sql.IncompatibleSchemaException
    at com.torodb.torod.impl.sql.SqlWriteTransaction.insert(SqlWriteTransaction.java:86) ~[torod-0.50.3.jar:?]
    at com.torodb.torod.WriteDocTransactionDecorator.insert(WriteDocTransactionDecorator.java:46) ~[torod-0.50.3.jar:?]
    at com.torodb.torod.WriteDocTransaction.insert(WriteDocTransaction.java:71) ~[torod-0.50.3.jar:?]
    at com.torodb.mongodb.commands.impl.general.InsertImplementation.apply(InsertImplementation.java:62) ~[mongodb-core-0.50.3.jar:?]
    at com.torodb.mongodb.commands.impl.general.InsertImplementation.apply(InsertImplementation.java:44) ~[mongodb-core-0.50.3.jar:?]
    at com.torodb.mongowp.commands.impl.MapBasedCommandExecutor.execute(MapBasedCommandExecutor.java:71) ~[mongowp-core-0.50.2.jar:?]
    at com.torodb.mongodb.core.WriteMongodTransactionImpl.executeProtected(WriteMongodTransactionImpl.java:74) ~[mongodb-core-0.50.3.jar:?]
    at com.torodb.mongodb.core.MongodTransactionImpl.execute(MongodTransactionImpl.java:45) ~[mongodb-core-0.50.3.jar:?]
    at com.torodb.mongodb.repl.OplogManager.lambda$storeState$0(OplogManager.java:151) ~[repl-0.50.3.jar:?]
    at com.torodb.common.util.RetryHelper.retryOrThrow(RetryHelper.java:50) [common-0.50.3.jar:?]
    at com.torodb.core.retrier.AbstractHintableRetrier.retry(AbstractHintableRetrier.java:62) [core-0.50.3.jar:?]
    at com.torodb.core.retrier.Retrier.retry(Retrier.java:122) [core-0.50.3.jar:?]
    at com.torodb.mongodb.repl.OplogManager.storeState(OplogManager.java:136) [repl-0.50.3.jar:?]
    at com.torodb.mongodb.repl.OplogManager.access$300(OplogManager.java:74) [repl-0.50.3.jar:?]
    at com.torodb.mongodb.repl.OplogManager$WriteOplogTransaction.truncate(OplogManager.java:349) [repl-0.50.3.jar:?]
    at com.torodb.mongodb.repl.RecoveryService.initialSync(RecoveryService.java:213) [repl-0.50.3.jar:?]
    at com.torodb.mongodb.repl.RecoveryService.runProtected(RecoveryService.java:147) [repl-0.50.3.jar:?]
    at com.torodb.core.services.RunnableTorodbService.run(RunnableTorodbService.java:67) [core-0.50.3.jar:?]
    at com.google.common.util.concurrent.AbstractExecutionThreadService$1$2.run(AbstractExecutionThreadService.java:66) [guava-21.0.jar:?]
    at com.google.common.util.concurrent.Callables$4.run(Callables.java:122) [guava-21.0.jar:?]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
Caused by: com.torodb.torod.impl.sql.IncompatibleSchemaException
    at com.torodb.torod.impl.sql.InsertD2RTranslator.analyze(InsertD2RTranslator.java:80) ~[torod-0.50.3.jar:?]
    at com.torodb.torod.impl.sql.SqlWriteTransaction.insert(SqlWriteTransaction.java:69) ~[torod-0.50.3.jar:?]
    ... 20 more
06:42:22.220 DEBUG D2R 'schema-manager-0' t.m.WrapperMutableMetaSnapshot - Looking for a meta collection by the unindexed attribute 'id'. Performance lost is expected
06:42:22.295 INFO BACKEND 'schema-manager-0' c.t.b.d.DefaultStructureDdlOps - Created internal index did_pkey for table oplog_replication
06:42:22.320 INFO BACKEND 'schema-manager-0' c.t.b.d.DefaultStructureDdlOps - Created internal index rid_pkey for table oplog_replication_lastappliedoplogentry
06:42:22.324 INFO BACKEND 'schema-manager-0' c.t.b.d.DefaultStructureDdlOps - Created internal index did_seq_idx for table oplog_replication_lastappliedoplogentry
06:42:22.355 INFO REPL 'RecoveryService' c.t.m.r.RecoveryService - Local databases dropping started
06:42:22.377 INFO REPL 'RecoveryService' c.t.m.r.RecoveryService - Local databases dropping finished
06:42:22.377 INFO REPL 'RecoveryService' c.t.m.r.RecoveryService - Remote database cloning started

This error means that, while concurrently updating the PostgreSQL schema, ToroDB Stampede blocked an update because it conflicted with another one. In this case the expected behaviour is that the operation is retried. Since parallel changes that do not involve schema updates do not generate this kind of conflict, the retried operation should complete correctly as soon as there are no other parallel operations with schema updates.
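(For illustration only, the retry-on-conflict behaviour described above can be sketched roughly as follows in Java. All names here are hypothetical; ToroDB's actual retry logic lives in classes such as com.torodb.core.retrier.Retrier and SmartRetrier, visible in the stacktrace above.)

// Hypothetical sketch of retry-on-conflict behaviour; not ToroDB's actual code.
public final class ConflictRetryExample {

  /** Stand-in for RollbackException: a write conflicted with a concurrent schema change. */
  static final class ConflictException extends RuntimeException {}

  interface Write { void apply() throws ConflictException; }

  /** Retries a write until it succeeds or the attempt budget is exhausted. */
  static void retryOnConflict(Write write, int maxAttempts) {
    for (int attempt = 1; ; attempt++) {
      try {
        write.apply();   // succeeds once no concurrent schema update conflicts with it
        return;
      } catch (ConflictException e) {
        if (attempt >= maxAttempts) {
          throw e;       // give up: surfaces as a fatal error to the caller
        }
        // otherwise fall through and retry
      }
    }
  }
}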

It is strange, then, that ToroDB Stampede stops with a FATAL error... can you check that the stacktrace you provided is complete? (I do not see the FATAL entry here.)