In my previous articles, I talked about MongoDB Community Edition installation and MongoDB Sharded Cluster components in general. You can access these posts below:
“How To Install MongoDB Community Edition”,
“MongoDB Sharded Cluster Components”
If you want to enable MongoDB Sharded Cluster Authorization, you should read the article below.
“How To Enable MongoDB Sharded Cluster Authorization”
You may also want to install a MongoDB Sharded Cluster with authorization. In this case, you should read the article below; it is very detailed and tested step by step.
“Deploy Sharded Cluster with Keyfile Access Control on Red Hat Enterprise Linux or CentOS Linux”
In this section, we will install a Sharded Cluster on three separate machines (mongodb1, mongodb2, mongodb3) running the CentOS Linux operating system.
Install MongoDB Sharded Cluster
Step 1: Install Community Edition
First, create users on all three machines and install MongoDB Community Edition as described in my previous article, “How To Install MongoDB Community Edition”.
Step 2: Specify Path Structure
You need to create a proper folder structure to store the database files. MongoDB is very flexible in this regard; you can design the folder structure however you want. I decided to create a /mongodb directory on each server and create the shard, configuration, and log folders under it. The path structure will be as follows:
mongodb1

/mongodb
    shA0
    shB2
    shC1
    cfg0
    logs

mongodb2

/mongodb
    shA1
    shB0
    shC2
    cfg1
    logs

mongodb3

/mongodb
    shA2
    shB1
    shC0
    cfg2
    logs
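To create this structure, you can run something like the following on each server; a minimal sketch for mongodb1 (it assumes MongoDB runs under a user and group named mongodb, so adjust the chown command and the folder names for mongodb2 and mongodb3 accordingly):

[root@mongodb1 ~]# mkdir -p /mongodb/{shA0,shB2,shC1,cfg0,logs}
[root@mongodb1 ~]# chown -R mongodb:mongodb /mongodb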
We can summarize the structure as follows:
- The shA, shB, and shC folders belong to three separate shards. The suffixes 0, 1, and 2 denote the replica set members of each shard. As you can see, each shard’s replica set members reside on different servers, which minimizes the risk of data loss: even if one server becomes unavailable, the other two servers still hold the data.
- The cfg directories will contain the config server data and exist on all three servers.
- Log files related to processes and servers will be stored in the logs directories.
- All three servers host shards, one config server, and one router. Since we have more than one router, applications can switch to another router without service interruption if one router has a problem.
Step 3: Create Config Servers
Create the config servers by running the following command on all three machines, specifying the file path and the cfg name.
Script for mongodb1
Run the same script on all three servers, changing the folder and log file names accordingly.
[mongodb@mongodb1 ~]$ cd /mongodb
[mongodb@mongodb1 ~]$ mongod --configsvr --dbpath cfg0 --port 26001 --fork --logpath logs/log.cfg0 --logappend
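For example, the same step on mongodb2 differs only in the folder and log file names (mongodb3 uses cfg2 in the same way):

[mongodb@mongodb2 ~]$ cd /mongodb
[mongodb@mongodb2 ~]$ mongod --configsvr --dbpath cfg1 --port 26001 --fork --logpath logs/log.cfg1 --logappend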
Step 4: Create Shard Servers
Create the shard servers as replica set members, using the paths defined above.
Script for mongodb1
Run the same script on all three servers, changing the folder and log file names accordingly.
[mongodb@mongodb1 ~]$ mongod --shardsvr --replSet shA --dbpath shA0 --logpath logs/log.shA0 --port 27500 --fork --logappend
[mongodb@mongodb1 ~]$ mongod --shardsvr --replSet shB --dbpath shB2 --logpath logs/log.shB2 --port 27600 --fork --logappend
[mongodb@mongodb1 ~]$ mongod --shardsvr --replSet shC --dbpath shC1 --logpath logs/log.shC1 --port 27700 --fork --logappend
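For example, following the folder layout from Step 2, the corresponding commands on mongodb2 would be:

[mongodb@mongodb2 ~]$ cd /mongodb
[mongodb@mongodb2 ~]$ mongod --shardsvr --replSet shA --dbpath shA1 --logpath logs/log.shA1 --port 27500 --fork --logappend
[mongodb@mongodb2 ~]$ mongod --shardsvr --replSet shB --dbpath shB0 --logpath logs/log.shB0 --port 27600 --fork --logappend
[mongodb@mongodb2 ~]$ mongod --shardsvr --replSet shC --dbpath shC2 --logpath logs/log.shC2 --port 27700 --fork --logappend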
Step 5: Start Routers
Routers (mongos) are started on the default port (27017) on all three machines.
Script for mongodb1
Run the same script on all three servers.
[mongodb@mongodb1 ~]$ mongos --configdb mongodb1:26001,mongodb2:26001,mongodb3:26001 --fork --logappend --logpath logs/log.mongos
Step 6: Check Routers Status
Use the command below to verify that the processes started correctly.
ps -ef | grep mongo
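You can also confirm that each process accepts connections by sending a ping from the mongo shell; for example, for the config server and the router on mongodb1:

[mongodb@mongodb1 ~]$ mongo --port 26001 --eval "db.runCommand({ ping: 1 })"
[mongodb@mongodb1 ~]$ mongo --port 27017 --eval "db.runCommand({ ping: 1 })"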
Step 7: Create Replica Set
At this stage, we will initiate the replica sets and add their members.
Script for mongodb1 shA set
[mongodb@mongodb1 ~]$ mongo --port 27500
> rs.status()
{
    "info" : "run rs.initiate(...) if not yet done for the set",
    "ok" : 0,
    "errmsg" : "no replset config has been received",
    "code" : 94
}
> rs.initiate({_id:"shA", members: [{"_id":0, "host":"mongodb1:27500"},{"_id":1, "host":"mongodb2:27500"},{"_id":2, "host":"mongodb3:27500"}]})
{ "ok" : 1 }
> rs.status()
{
    "set" : "shA",
    "date" : ISODate("2016-09-25T19:16:59.040Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "mongodb1:27500",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 3013,
            "optime" : {
                "ts" : Timestamp(1474831011, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-09-25T19:16:51Z"),
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1474831010, 1),
            "electionDate" : ISODate("2016-09-25T19:16:50Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "mongodb2:27500",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 19,
            "optime" : {
                "ts" : Timestamp(1474831011, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-09-25T19:16:51Z"),
            "lastHeartbeat" : ISODate("2016-09-25T19:16:58.758Z"),
            "lastHeartbeatRecv" : ISODate("2016-09-25T19:16:57.587Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "mongodb1:27500",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "mongodb3:27500",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 19,
            "optime" : {
                "ts" : Timestamp(1474831011, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-09-25T19:16:51Z"),
            "lastHeartbeat" : ISODate("2016-09-25T19:16:58.757Z"),
            "lastHeartbeatRecv" : ISODate("2016-09-25T19:16:57.586Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "mongodb1:27500",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
As can be seen, one of the three members is PRIMARY and the others are SECONDARY. By default, read and write requests are served by the PRIMARY member, and the SECONDARY members keep themselves synchronized by replicating its data.
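The shB and shC replica sets are initiated the same way from their own ports. A sketch for both (the member order below is my own choice; rs.initiate() can be run from any server hosting a member of the set, and any member can become PRIMARY):

[mongodb@mongodb1 ~]$ mongo --port 27600
> rs.initiate({_id:"shB", members: [{"_id":0, "host":"mongodb2:27600"},{"_id":1, "host":"mongodb3:27600"},{"_id":2, "host":"mongodb1:27600"}]})
{ "ok" : 1 }

[mongodb@mongodb1 ~]$ mongo --port 27700
> rs.initiate({_id:"shC", members: [{"_id":0, "host":"mongodb3:27700"},{"_id":1, "host":"mongodb1:27700"},{"_id":2, "host":"mongodb2:27700"}]})
{ "ok" : 1 }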
Create Shards
Connect to one of the routers (mongos) and add each shard by specifying its replica set name and one of its members, as in the following commands.
[mongodb@mongodb1 ~]$ mongo
mongos> sh.addShard("shA/mongodb1:27500")
{ "shardAdded" : "shA", "ok" : 1 }
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("27e81987jns3e8fp81fdb1")
}
  shards:
    {  "_id" : "shA",  "host" : "shA/mongodb1:27500,mongodb2:27500,mongodb3:27500" }
  active mongoses:
    "3.2.9" : 4
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours:
        No recent migrations
  databases:
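shB and shC are added the same way; naming a single member of each replica set is enough, because mongos discovers the remaining members automatically. The responses should look like the one shown for shA:

mongos> sh.addShard("shB/mongodb2:27600")
{ "shardAdded" : "shB", "ok" : 1 }
mongos> sh.addShard("shC/mongodb3:27700")
{ "shardAdded" : "shC", "ok" : 1 }

After this, sh.status() should list all three shards with their full member lists.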