Before restoring the database dumps to the new servers, we need to install MongoDB on each system.
Step 1: Installation:
vi /etc/yum.repos.d/mongodb-org-3.6.repo
Inside it, add:
[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
sudo yum install -y mongodb-org
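As a quick sanity check (not part of the original steps), you can confirm the packages installed correctly by printing the versions:

mongod --version
mongo --version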
Configure SELinux
If you are using SELinux, you must configure SELinux to allow MongoDB to start on Red Hat Linux-based systems (Red Hat Enterprise Linux or CentOS Linux).
To configure SELinux, administrators have three options:
-If SELinux is in enforcing mode, enable access to the relevant ports that the MongoDB deployment will use (e.g. 27017). See Default MongoDB Port for more information on MongoDB’s default ports.
For default settings, this can be accomplished by running
semanage port -a -t mongod_port_t -p tcp 27017
-Disable SELinux by setting the SELINUX setting to disabled in /etc/selinux/config.
SELINUX=disabled
You must reboot the system for the changes to take effect.
-Set SELinux to permissive mode in /etc/selinux/config by setting the SELINUX setting to permissive.
SELINUX=permissive
You must reboot the system for the changes to take effect.
You can instead use setenforce to change to permissive mode. setenforce does not require a reboot but is not persistent.
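For example, assuming root privileges, the following switches to permissive mode immediately and confirms the change:

sudo setenforce 0   # 0 = permissive; takes effect immediately but does not survive a reboot
getenforce          # prints the current SELinux mode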
Alternatively, you can choose not to install the SELinux packages when you are installing your Linux operating system, or choose to remove the relevant packages. This option is the most invasive and is not recommended.
Start MongoDB
sudo service mongod start
Stop MongoDB
sudo service mongod stop
Restart MongoDB
sudo service mongod restart
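If you also want mongod to start automatically at boot, you can enable the service. On init-based systems (RHEL/CentOS 6) this is done with chkconfig; on systemd-based systems (RHEL/CentOS 7+) use systemctl instead:

sudo chkconfig mongod on
# or, on systemd-based systems:
sudo systemctl enable mongod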
Begin using MongoDB
mongo --host 127.0.0.1:27017
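Once connected, a quick way to verify the server is responsive is to ping it from the shell:

db.runCommand({ ping: 1 })   // should return { "ok" : 1 }
db.version()                 // prints the server version, e.g. 3.6.x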
After installation is complete, the second step is deploying the replica sets:
Step 2: Deploy a new replica set for each shard.
Replica set deployment
1- Create the /mongodb directory on each shard server
sudo mkdir /mongodb
2- Change the owner of the directory to the mongo user
sudo chown mongo:mongo /mongodb
3- Create the shard and config directories under the /mongodb directory.
Note: In this test environment I use 3 servers for 3 shards, and on each server I also place replicas of the other shards.
In busy production environments, all the replicas can be located on different servers.
[mongo@mongotest01 mongodb]$ mkdir shard01a
[mongo@mongotest01 mongodb]$ mkdir shard02a
[mongo@mongotest01 mongodb]$ mkdir shard03a
[mongo@mongotest01 mongodb]$ mkdir cfg01
[mongo@mongotest01 mongodb]$ mkdir logs
[mongo@mongotest02 mongodb]$ mkdir shard01b
[mongo@mongotest02 mongodb]$ mkdir shard02b
[mongo@mongotest02 mongodb]$ mkdir shard03b
[mongo@mongotest02 mongodb]$ mkdir cfg02
[mongo@mongotest02 mongodb]$ mkdir logs
[mongo@mongotest03 mongodb]$ mkdir shard01c
[mongo@mongotest03 mongodb]$ mkdir shard02c
[mongo@mongotest03 mongodb]$ mkdir shard03c
[mongo@mongotest03 mongodb]$ mkdir cfg03
[mongo@mongotest03 mongodb]$ mkdir logs
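As a shortcut, the directories on each server can also be created with a single command using shell brace expansion (shown here for mongotest01; adjust the directory names for the other two servers):

mkdir -p /mongodb/{shard01a,shard02a,shard03a,cfg01,logs}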
4- Under the /mongodb directory, start the config servers on each server:
mongod --configsvr --dbpath cfg01 --port 26050 --fork --logpath logs/log.cfg01 --logappend
mongod --configsvr --dbpath cfg02 --port 26050 --fork --logpath logs/log.cfg02 --logappend
mongod --configsvr --dbpath cfg03 --port 26050 --fork --logpath logs/log.cfg03 --logappend
5- Under the /mongodb directory, start the shard servers on each server:
Note: If you run two or more shards, a config server, or a mongos on the same server, you need to give each process its own port.
In my test case each server hosts 3 different shards, one config server, and one mongos, so I gave each of them a different port.
mongod --shardsvr --replSet shard01 --dbpath shard01a --logpath logs/log.shard01a --port 27000 --fork --logappend
mongod --shardsvr --replSet shard02 --dbpath shard02a --logpath logs/log.shard02a --port 27100 --fork --logappend
mongod --shardsvr --replSet shard03 --dbpath shard03a --logpath logs/log.shard03a --port 27200 --fork --logappend
mongod --shardsvr --replSet shard01 --dbpath shard01b --logpath logs/log.shard01b --port 27000 --fork --logappend
mongod --shardsvr --replSet shard02 --dbpath shard02b --logpath logs/log.shard02b --port 27100 --fork --logappend
mongod --shardsvr --replSet shard03 --dbpath shard03b --logpath logs/log.shard03b --port 27200 --fork --logappend
mongod --shardsvr --replSet shard01 --dbpath shard01c --logpath logs/log.shard01c --port 27000 --fork --logappend
mongod --shardsvr --replSet shard02 --dbpath shard02c --logpath logs/log.shard02c --port 27100 --fork --logappend
mongod --shardsvr --replSet shard03 --dbpath shard03c --logpath logs/log.shard03c --port 27200 --fork --logappend
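At this point it is worth confirming that all the mongod processes came up as expected. An optional check on each server:

ps -ef | grep '[m]ongod'               # lists the running mongod processes
tail -n 20 /mongodb/logs/log.shard01a  # inspect a log if a process is missing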
6- Initiate the replica sets
Log on to each shard using the appropriate port and run the rs.initiate command:
mongo --port 27000
rs.initiate({_id:"shard01", members: [{"_id":0, "host":"172.18.239.201:27000"},{"_id":1, "host":"172.18.239.202:27000"},{"_id":2, "host":"172.18.239.203:27000"}]})
mongo --port 27100
rs.initiate({_id:"shard02", members: [{"_id":0, "host":"172.18.239.201:27100"},{"_id":1, "host":"172.18.239.202:27100"},{"_id":2, "host":"172.18.239.203:27100"}]})
mongo --port 27200
rs.initiate({_id:"shard03", members: [{"_id":0, "host":"172.18.239.201:27200"},{"_id":1, "host":"172.18.239.202:27200"},{"_id":2, "host":"172.18.239.203:27200"}]})
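After each rs.initiate(), you can confirm from the same shell that the election has completed (it may take a few seconds) and that all three members are healthy:

rs.status().members.forEach(function(m) { print(m.name + " : " + m.stateStr) })
// expected: one PRIMARY and two SECONDARY members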
7- Start the config servers as a replica set (one member on each server) and initiate it:
mongod --configsvr --replSet cfg --dbpath /mongodb/cfg01/ --bind_ip localhost,172.18.239.201 --port 26050 --fork --logpath /mongodb/logs/log.cfg01 --logappend
mongod --configsvr --replSet cfg --dbpath /mongodb/cfg02/ --bind_ip localhost,172.18.239.202 --port 26050 --fork --logpath /mongodb/logs/log.cfg02 --logappend
mongod --configsvr --replSet cfg --dbpath /mongodb/cfg03/ --bind_ip localhost,172.18.239.203 --port 26050 --fork --logpath /mongodb/logs/log.cfg03 --logappend
mongo --port 26050
rs.initiate({_id:"cfg", configsvr: true, members: [{"_id":0, "host":"172.18.239.201:26050"},{"_id":1, "host":"172.18.239.202:26050"},{"_id":2, "host":"172.18.239.203:26050"}]})
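As with the shard replica sets, the config server replica set can be verified from any of the three servers, for example non-interactively with --eval:

mongo --port 26050 --eval 'rs.status().members.forEach(function(m) { print(m.name + " : " + m.stateStr) })'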
Step 3: Start the mongos instances on each server:
mongos --configdb cfg/172.18.239.201:26050,172.18.239.202:26050,172.18.239.203:26050 --bind_ip localhost,172.18.239.201 --fork --logappend --logpath /mongodb/logs/log.mongos
mongos --configdb cfg/172.18.239.201:26050,172.18.239.202:26050,172.18.239.203:26050 --bind_ip localhost,172.18.239.202 --fork --logappend --logpath /mongodb/logs/log.mongos
mongos --configdb cfg/172.18.239.201:26050,172.18.239.202:26050,172.18.239.203:26050 --bind_ip localhost,172.18.239.203 --fork --logappend --logpath /mongodb/logs/log.mongos
Step 4: Add the shards.
Connect to a mongos and run:
sh.addShard("shard01/172.18.239.201:27000")
sh.addShard("shard02/172.18.239.201:27100")
sh.addShard("shard03/172.18.239.201:27200")
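Listing a single member in sh.addShard() is sufficient, because mongos discovers the remaining replica set members automatically. You can then confirm that all three shards were registered:

sh.status()
// or, equivalently:
db.adminCommand({ listShards: 1 })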
Step 5: Shut down the mongos instances.
Connect to each mongos with the mongo shell and shut it down:
mongo
use admin
db.shutdownServer()
Step 6: Restore the shard data:
Restore each shard's dump to the primary of the corresponding shard replica set:
mongorestore --drop --oplogReplay /backup/FULL/05-10-18/shard03 --port 27200
mongorestore --drop --oplogReplay /backup/FULL/05-10-18/shard02 --port 27100
mongorestore --drop --oplogReplay /backup/FULL/05-10-18/shard01 --port 27000
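Once each restore finishes, you can spot-check that the data arrived by listing the databases on each shard primary (the database names will depend on your dump); for example, for shard01:

mongo --port 27000 --eval 'db.adminCommand("listDatabases").databases.forEach(function(d) { print(d.name + " : " + d.sizeOnDisk) })'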
Step 7: Restore the config server data.
mongorestore --drop --oplogReplay /backup/FULL/05-10-18/configReplset --port 26050
Step 8: Start the mongos instances on each server.
mongos --configdb cfg/172.18.239.201:26050,172.18.239.202:26050,172.18.239.203:26050 --bind_ip localhost,172.18.239.201 --fork --logappend --logpath /mongodb/logs/log.mongos
mongos --configdb cfg/172.18.239.201:26050,172.18.239.202:26050,172.18.239.203:26050 --bind_ip localhost,172.18.239.202 --fork --logappend --logpath /mongodb/logs/log.mongos
mongos --configdb cfg/172.18.239.201:26050,172.18.239.202:26050,172.18.239.203:26050 --bind_ip localhost,172.18.239.203 --fork --logappend --logpath /mongodb/logs/log.mongos
Step 9: If shard hostnames have changed, update the config database.
Connect to the primary of the config server replica set and switch to the config database:
cfg:PRIMARY> use config
cfg:PRIMARY> db.shards.find()
{ "_id" : "shard01", "host" : "shard01/mshard01-ra.saglik.lokal:27017,mshard01-rb.saglik.lokal:27017,mshard01-rc.saglik.lokal:27017" }
{ "_id" : "shard02", "host" : "shard02/mshard02-ra.saglik.lokal:27017,mshard02-rb.saglik.lokal:27017,mshard02-rc.saglik.lokal:27017" }
{ "_id" : "shard03", "host" : "shard03/mshard03-ra.saglik.lokal:27017,mshard03-rb.saglik.lokal:27017,mshard03-rc.saglik.lokal:27017" }
db.shards.update( { _id : "shard01" } , { $set : {"host" : "shard01/172.18.239.201:27000,172.18.239.202:27000,172.18.239.203:27000"} } )
db.shards.update( { _id : "shard02" } , { $set : {"host" : "shard02/172.18.239.201:27100,172.18.239.202:27100,172.18.239.203:27100"} } )
db.shards.update( { _id : "shard03" } , { $set : {"host" : "shard03/172.18.239.201:27200,172.18.239.202:27200,172.18.239.203:27200"} } )
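To confirm the updates took effect, query the collection again from the same shell; each document's host string should now list the new IP:port members:

db.shards.find()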
Step 10: Restart all the shard mongod instances.
Step 11: Restart the other mongos instances.
Step 12: Verify that the cluster is operational.
db.printShardingStatus()
show collections