MongoDB Ops Manager java.lang.OutOfMemoryError: unable to create new native thread

When starting MongoDB Ops Manager version 1.8.2.312, it fails with this error:

Migrate MMS data
Running migrations... [  OK  ]
Start MMS server
Instance 0 starting... [  OK  ]
Start Backup HTTP Server
Instance 0 starting... [FAILED]
instance: 0 - msg: unable to create new native thread

Solution: Increase the maximum number of processes for the mongodb-mms user (the limits.d change takes effect on the user's next login session):

$ sudo vi /etc/security/limits.d/90-nproc.conf

mongodb-mms     soft    nproc     unlimited
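You can confirm the change from a fresh session: `ulimit -u` reports the current "max user processes" (nproc) soft limit, and a new login as mongodb-mms should show `unlimited`:

```shell
# Print this shell's "max user processes" (nproc) soft limit.
# After editing limits.d, a fresh login session for mongodb-mms
# should report "unlimited"; already-running sessions keep the old cap.
ulimit -u
```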


MongoDB replSet error loading set config (BADCONFIG)

Error when initiating a replica set in MongoDB:

"errmsg" : "replSet error loading set config (BADCONFIG)", "ok" : 0 }

Solution:

1. Stop mongod, delete all local.* files in the dbPath, and start mongod again, or drop the local database directly in the mongo shell:

use local
db.dropDatabase()

2. Reconfigure replica set members:

config = {_id: "repl1", members: [
    {_id: 0, host: "192.168.2.1:27017"},
    {_id: 1, host: "192.168.2.2:27017"}
]}

rs.reconfig(config, {force: true})


MongoDB failed to start Assertion failure isOk() src/mongo/db/storage/extent.h 80

MongoDB failed to start, with this error in the log file:

2015-08-28T01:24:17.062+0000 [initandlisten] build info: Linux build5.nj1.10gen. cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
2015-08-28T01:24:17.062+0000 [initandlisten] allocator: tcmalloc
2015-08-28T01:24:17.062+0000 [initandlisten] options: { config: "/etc/mongod.conf", net: { port: 27017 }, processManagement: { pidFilePath: "/var/run/mongodb/mongod.pid" }, security: { authorization: "disabled" }, storage: { dbPath: "/var/lib/mongo", journal: { enabled: false } }, systemLog: { destination: "file", logAppend: true, path: "/var/lib/mongo/mongod.log", quiet: true } }
2015-08-28T01:24:17.164+0000 [initandlisten] test Assertion failure isOk() src/mongo/db/storage/extent.h 80
2015-08-28T01:24:17.170+0000 [initandlisten] test 0x1205431 0x11a7229 0x118b53e 0xf04360 0xf436a3 0x8b779f 0xaa2ca0 0xd8237f 0x8cd367 0x76c32d 0x76e247 0x76ee7b 0x76f4b5 0x76f759 0x7f96a8516d5d 0x766329
mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x1205431]
mongod(_ZN5mongo10logContextEPKc+0x159) [0x11a7229]
mongod(_ZN5mongo12verifyFailedEPKcS1_j+0x17e) [0x118b53e]
mongod(_ZNK5mongo13ExtentManager9getExtentERKNS_7DiskLocEb+0x60) [0xf04360]
mongod(_ZN5mongo12FlatIteratorC1EPKNS_10CollectionERKNS_7DiskLocERKNS_20CollectionScanParams9DirectionE+0xd3) [0xf436a3]
mongod(_ZNK5mongo10Collection11getIteratorERKNS_7DiskLocEbRKNS_20CollectionScanParams9DirectionE+0x9f) [0x8b779f]
mongod(_ZN5mongo14CollectionScan4workEPm+0x350) [0xaa2ca0]
mongod(_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_7DiskLocE+0xef) [0xd8237f]
mongod(_ZN5mongo8Database19clearTmpCollectionsEv+0x147) [0x8cd367]
mongod() [0x76c32d]
mongod(_ZN5mongo14_initAndListenEi+0x637) [0x76e247]
mongod(_ZN5mongo13initAndListenEi+0x1b) [0x76ee7b]
mongod() [0x76f4b5]
mongod(main+0x9) [0x76f759]
/lib64/libc.so.6(__libc_start_main+0xfd) [0x7f96a8516d5d]
mongod() [0x766329]
2015-08-28T01:24:17.172+0000 [initandlisten] exception in initAndListen: 0 assertion src/mongo/db/storage/extent.h:80, terminating

Cause: The database files are corrupted. Run a repair against the data directory to fix them:

mongod --repair
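A fuller sketch of the repair cycle, assuming the dbPath `/var/lib/mongo` from the log above and a `mongod` service user (names from a typical RPM install; adjust to your deployment, and take a backup first, since --repair rewrites the data files and needs free disk space):

```shell
# Stop the service, repair the data files in place as the mongod
# user (so file ownership stays correct), then start it again.
sudo service mongod stop
sudo -u mongod mongod --repair --dbpath /var/lib/mongo
sudo service mongod start
```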


MongoDB MMS com.xgen.svc.brs.svc.DaemonAssignmentSvc: ERROR Assignment failed: No Daemon found with enough free space

When adding a new backup job to MongoDB MMS, you may see this error:

com.xgen.svc.brs.svc.DaemonAssignmentSvc: ERROR Assignment failed: No Daemon found with enough free space

even though storage usage on the backup daemon host is just under 20%.

Solution: MMS requires you to manually bind the backup daemon when you add a new backup job alongside an existing one. Go to MMS Admin -> Backup -> Jobs -> Set binding, and explicitly choose the daemon from the select list.


MongoDB Ops Manager Insufficient oplog size: The oplog window must be at least 3 hours over the last 24 hours for all members of replica set

When configuring MongoDB Ops Manager (MMS), the backup setup checks the oplog over the last 24 hours. If any member's oplog window does not cover at least 3 hours, Ops Manager will not allow you to continue the backup configuration:

Insufficient oplog size: The oplog window must be at least 3 hours over the last 24 hours for all members of replica set . Please increase the oplog

Solution: If the current oplog size really is insufficient for 3 hours of normal operations for your application, increase it following this guide: http://docs.mongodb.org/v3.0/tutorial/change-oplog-size/
If you have just set up the database and run an intensive initial insert, you don't need to increase the oplog: wait another 24 hours and run the backup configuration again (https://jira.mongodb.org/browse/MMS-2374).
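To see how much time each member's oplog currently covers, you can run the standard helper in the mongo shell on each member:

```javascript
// Prints the configured oplog size and the time range it spans
// ("log length start to end"); that range is the oplog window
// Ops Manager checks against the 3-hour minimum.
rs.printReplicationInfo()
```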


MongoDB restore DB from MMS backup on shard clusters with different IPs than original shard

When restoring shards from MongoDB MMS backup files, if the new shard IP addresses differ from the IPs in use when the backup was taken, MongoDB will raise the error "11002 exception: socket exception [CONNECT_ERROR] for", since the old IPs are stored in the config server database.

Solution: Open a shell to the MongoDB config server(s) and update the shard cluster IPs in the config DB:

use config
db.shards.find()
db.shards.update({_id: "shard_id"}, {$set: {host: "shard_id/new_ip:port"}})


MongoDB error can’t add shard because a local database ‘test’ exists in another shard

Error: When adding a shard with sh.addShard(), if the new shard contains a database with the same name as one on a previously added shard, MongoDB can't automatically merge them:

"errmsg" : "can't add shard server2:27017 because a local database 'test' exists in another shard000:server1:27017"

Solution: mongodump the data on the new shard, drop that database, then run addShard. Afterwards, import the dumped data with mongorestore.
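The same steps as shell commands, using the hosts from the error message (`server2:27017` as the new shard) plus an assumed mongos at `mongos1:27017` (both hypothetical; substitute your own):

```shell
# 1. Dump the conflicting database from the new shard.
mongodump --host server2:27017 --db test --out /tmp/test-dump
# 2. Drop it on the new shard so addShard can succeed.
mongo --host server2:27017 test --eval 'db.dropDatabase()'
# 3. Add the shard via mongos, then restore through mongos so the
#    data is written back into the sharded cluster.
mongo --host mongos1:27017 --eval 'sh.addShard("server2:27017")'
mongorestore --host mongos1:27017 /tmp/test-dump
```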


MongoDB Replica Set STARTUP2 error initial sync need a member to be primary or secondary to do our initial sync

Error: A MongoDB replica set is stuck in STARTUP2 status with no member becoming PRIMARY:

error initial sync need a member to be primary or secondary to do our initial sync

Cause: The local.* files were accidentally deleted from the MongoDB dbPath on one or more, but not all, replica set members.

Solution: Stop all members, delete local.* on every member, start them again, then re-run rs.initiate() and rs.add() from any one member.
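Sketched out per member, assuming the dbPath `/var/lib/mongo` and a service named `mongod` (adjust both to your setup):

```shell
# Run on EVERY member: stop mongod and remove the replica set
# metadata files, then start it again.
sudo service mongod stop
sudo rm /var/lib/mongo/local.*
sudo service mongod start

# Then, from the mongo shell on any ONE member:
#   rs.initiate()
#   rs.add("192.168.2.2:27017")   # repeat for each remaining member
```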


AuthenticationFailed MONGODB-CR credentials missing in the user document

Error Description: On MongoDB 3.0, if you use MONGODB-CR as the password authentication mechanism instead of the default SCRAM-SHA-1 (perhaps because a legacy PHP/Java/Ruby driver only supports MONGODB-CR), a newly created user does not get MONGODB-CR credentials as expected:

use admin
db.system.users.findOne({user: "admin"})
{
    "_id" : "admin.admin",
    "user" : "admin",
    "db" : "admin",
    "credentials" : {
        "SCRAM-SHA-1" : {
            "iterationCount" : 10000,
            "salt" : "FPnmqmCI04KHJVZunfaI2Q==",
            "storedKey" : "i+jvORcFsnx6CXt0Bd924e2f804=",
            "serverKey" : "PQHG8nYYcJTjFEClqjFRZ8PTLTA="
        },
        "MONGODB-CR" : "8aab8902fd862afad8064b73bd149d00"
    },
    "roles" : [
        {
            "role" : "userAdminAnyDatabase",
            "db" : "admin"
        }
    ]
}

Cause: authSchemaVersion is set to "5", and MongoDB 3.0 generates only SCRAM credentials under that schema:

use admin
db.system.version.find()
{ "_id" : "authSchema", "currentVersion" : 5 }

Solution: Restart mongod/mongos with --auth disabled, then change authSchemaVersion to "3" to support MONGODB-CR. See https://jira.mongodb.org/browse/SERVER-17459

use admin
db.system.version.update({"_id" : "authSchema"}, {$set: {"currentVersion" : 3}})
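Note that users created while the schema was at version 5 have only SCRAM credentials; after downgrading to 3 you need to drop and re-create them so the MONGODB-CR hash is generated. A sketch with a hypothetical admin user (the password and role are placeholders):

```javascript
// Re-create the user under authSchema 3 so it gets a MONGODB-CR
// credential in addition to any SCRAM entry.
use admin
db.dropUser("admin")
db.createUser({
    user: "admin",
    pwd: "secret",
    roles: ["userAdminAnyDatabase"]
})
```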
