MongoDB Essentials

MongoDB is a leading non-relational database management system and a prominent member of the NoSQL movement. Rather than using the tables and fixed schemas of a relational database management system (RDBMS), MongoDB stores data as JSON-like documents with flexible schemas, grouped into collections. Written in C++, it also supports a number of options for horizontal scaling in large production environments.

In this guide, we’ll explain how to set up a sharded cluster for highly available distributed datasets.

There are two broad categories of scaling strategies for data. Vertical scaling involves adding more resources to a server so that it can handle larger datasets. The upside is that the process is usually as simple as migrating the database, but it often involves downtime and is difficult to automate. Horizontal scaling involves adding more servers to increase the resources, and is generally preferred in configurations that use fast-growing, dynamic datasets. Because it is based on the concept of adding more servers, not more resources on one server, datasets often need to be broken into parts and distributed across the servers. Sharding refers to the breaking up of data into subsets so that it can be stored on separate database servers (a sharded cluster).

MongoDB Sharding Topology

Sharding is implemented through three separate components, each performing a specific function:

  • Config Servers: Each production sharding deployment should run three configuration servers to ensure redundancy and high availability.

Config servers store the metadata that links requested data with the shard that contains it. They organize the data so that information can be retrieved reliably and consistently.

  •  Query Routers: The query routers are the machines that your application actually connects to. They communicate with the config servers to figure out where the requested data is stored, then access and return the data from the appropriate shard(s).

Each query router runs the mongos process.

  •  Shard Servers: Shards are responsible for the actual data storage operations. In production environments, a single shard is usually composed of a replica set instead of a single machine, so that data remains accessible if the shard's primary goes offline.

Let’s review the components of the setup we’ll be creating :

Use replica sets for each shard to ensure high availability.

Initial Setup :

  • 1 Config Server (3 required in production)
  • 1 Query Router
  • 3 Replica set servers (1 shard)

Configure Hosts File :

We recommend adding a private IP address for each one and using those here to avoid transmitting data over the public internet.

Add the IP addresses to the /etc/hosts file on every server :

$ vi /etc/hosts
192.168.202.111 mongod1
192.168.202.112 mongod2
192.168.202.113 mongod3
192.168.202.114 mongoconf
192.168.202.115 mongorouter
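
A quick sanity check that name resolution works after saving the file (run from any of the servers, using the hostnames defined above):

ping -c 1 mongod1
ping -c 1 mongod2
ping -c 1 mongod3
ping -c 1 mongoconf
ping -c 1 mongorouter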

Open Ports :

firewall-cmd --permanent --zone=public --add-port=27017/tcp
firewall-cmd --permanent --zone=public --add-port=27017/udp
firewall-cmd --permanent --zone=public --add-port=27018/tcp
firewall-cmd --permanent --zone=public --add-port=27018/udp
firewall-cmd --permanent --zone=public --add-port=27019/tcp
firewall-cmd --permanent --zone=public --add-port=27019/udp


firewall-cmd --reload

For CentOS 6 use :

iptables -I INPUT -p tcp -m tcp --dport 27017 -j ACCEPT
iptables -I INPUT -p udp -m udp --dport 27017 -j ACCEPT

iptables -I INPUT -p tcp -m tcp --dport 27018 -j ACCEPT
iptables -I INPUT -p udp -m udp --dport 27018 -j ACCEPT

iptables -I INPUT -p tcp -m tcp --dport 27019 -j ACCEPT
iptables -I INPUT -p udp -m udp --dport 27019 -j ACCEPT
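
Note that plain iptables rules are not persistent across reboots on CentOS 6; assuming the standard iptables service is in use, save them with:

service iptables save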

Set Open Files Limits :

We will raise the mongod user's limits to 65536 for both the number of processes (nproc) and the number of open files (nofile).

vi  /etc/security/limits.conf

mongod soft nproc 65536
mongod hard nproc 65536
mongod soft nofile 65536
mongod hard nofile 65536
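
Once the mongod user exists (it is created under Installing Binaries below), the new limits can be verified with a quick check like this:

su - mongod -c 'ulimit -u; ulimit -n'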

Directory Structure Layout :

/data/mongocluster/
├── mongod1/
│   ├── logs/
│   ├── data/
│   ├── tmp/
│   └── var/
├── mongod2/
│   ├── logs/
│   ├── data/
│   ├── tmp/
│   └── var/
├── mongod3/
│   ├── logs/
│   ├── data/
│   ├── tmp/
│   └── var/
├── mongocnf1/
│   ├── logs/
│   ├── data/
│   ├── tmp/
│   └── var/
└── mongos/
    ├── logs/
    ├── data/
    ├── tmp/
    └── var/


Create the directory structure on respective servers :

mkdir -p /data/mongocluster/{mongod1,mongod2,mongod3,mongocnf1,mongos}
mkdir -p /data/mongocluster/mongod1/{data,logs,tmp,var}
mkdir -p /data/mongocluster/mongod2/{data,logs,tmp,var}
mkdir -p /data/mongocluster/mongod3/{data,logs,tmp,var}
mkdir -p /data/mongocluster/mongocnf1/{data,logs,tmp,var}
mkdir -p /data/mongocluster/mongos/{data,logs,tmp,var}

Installing Binaries :


Add the mongod user, set a password, and grant sudo permission :

$ adduser mongod
$ passwd mongod

$ chown -R mongod:mongod /data/mongocluster/

$  visudo

mongod   ALL=(ALL)       NOPASSWD: ALL

$ usermod -aG wheel mongod

Install the MongoDB binaries (version 4.0 ) :

vi /etc/yum.repos.d/mongodb-org-4.0.repo
[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc

yum -y install mongodb-org

To install a specific release of MongoDB, specify each package individually :

sudo yum install -y mongodb-org-4.0.4 mongodb-org-server-4.0.4 mongodb-org-shell-4.0.4 mongodb-org-mongos-4.0.4 mongodb-org-tools-4.0.4
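
To keep yum from upgrading the pinned packages later, MongoDB's install documentation suggests excluding them in /etc/yum.conf (optional):

echo "exclude=mongodb-org*" >> /etc/yum.conf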

For Amazon Linux :

vi /etc/yum.repos.d/mongodb-org-4.0.repo
[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc

yum -y install mongodb-org

To install a specific release of MongoDB, specify each package individually :

sudo yum install -y mongodb-org-4.0.4 mongodb-org-server-4.0.4 mongodb-org-shell-4.0.4 mongodb-org-mongos-4.0.4 mongodb-org-tools-4.0.4

 

Generate a Key file :

Issue this command to generate your key file:

openssl rand -base64 756 > mongo-keyfile

Once you've generated the key, copy it to each member of your replica set. The rest of the steps in this section should be performed on each server, so that they all have the key file in the same directory with the same permissions.

Copy the key file into the /data/mongocluster/<hostname>/var directory created earlier :

cp mongo-keyfile  /data/mongocluster/mongod1/var/ 
cp mongo-keyfile  /data/mongocluster/mongod2/var/ 
cp mongo-keyfile  /data/mongocluster/mongod3/var/ 
cp mongo-keyfile  /data/mongocluster/mongocnf1/var/ 
cp mongo-keyfile  /data/mongocluster/mongos/var/
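
If the key file was generated on only one server, copy it out to the remaining hosts first, for example with scp (hostnames from the /etc/hosts entries above; the /tmp staging path is just an example):

scp mongo-keyfile mongod@mongod2:/tmp/
scp mongo-keyfile mongod@mongod3:/tmp/
scp mongo-keyfile mongod@mongoconf:/tmp/
scp mongo-keyfile mongod@mongorouter:/tmp/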

Change the ownership and permissions :

chmod 400 /data/mongocluster/mongod1/var/mongo-keyfile
chmod 400 /data/mongocluster/mongod2/var/mongo-keyfile
chmod 400 /data/mongocluster/mongod3/var/mongo-keyfile
chmod 400 /data/mongocluster/mongocnf1/var/mongo-keyfile
chmod 400 /data/mongocluster/mongos/var/mongo-keyfile
chown mongod:mongod /data/mongocluster/mongod1/var/mongo-keyfile
chown mongod:mongod /data/mongocluster/mongod2/var/mongo-keyfile
chown mongod:mongod /data/mongocluster/mongod3/var/mongo-keyfile
chown mongod:mongod /data/mongocluster/mongocnf1/var/mongo-keyfile
chown mongod:mongod /data/mongocluster/mongos/var/mongo-keyfile 

Set up and Initialize the Replica Set :

A replica set is a group of mongod instances that host the same data set. One node is the primary, which receives all write operations. All other instances (secondaries) apply operations from the primary so that they hold the same data set. A replica set can have only one primary node.

Node 1 (mongod1) :

Set up the config file :

Rename the stock config file installed by the package, then create a dedicated config for this instance :

$ mv /etc/mongod.conf /etc/mongod.sample.conf
$ vi /etc/mongod1.conf

systemLog:
   destination: file
   logAppend: true
   path: /data/mongocluster/mongod1/logs/mongod.log

storage:
   dbPath: /data/mongocluster/mongod1/data
   journal:
      enabled: true

processManagement:
   fork: true # fork and run in background
   pidFilePath: /data/mongocluster/mongod1/var/mongo.pid

net:
   port: 27017
   bindIp: 127.0.0.1,192.168.202.111   # loopback and this host's private IP

security:
   authorization: enabled
   keyFile: /data/mongocluster/mongod1/var/mongo-keyfile

#replication:
#   replSetName: r0

#sharding:
#   clusterRole: shardsvr

Create a new systemd unit file for mongod called /lib/systemd/system/mongod1.service so the instance starts automatically after a server reboot :

$ vi /lib/systemd/system/mongod1.service 


[Unit]
Description=Mongo Instance 1
After=syslog.target network.target

[Service]

RemainAfterExit=yes
User=mongod
Group=mongod
ExecStart=/bin/mongod --port 27017  --logpath /data/mongocluster/mongod1/logs/mongod.log --config /etc/mongod1.conf --dbpath /data/mongocluster/mongod1/data

[Install]
WantedBy=multi-user.target

Start the service 

$ systemctl daemon-reload

$ systemctl start mongod1.service

Check the instance status :

# ps -ef | grep mongo
mongod    2338     1  1 08:47 ?        00:04:41 /usr/bin/mongod --port 27017 --logpath /data/mongocluster/mongod1/logs/mongod.log --config /etc/mongod1.conf --dbpath /data/mongocluster/mongod1/data 


Check the Listener Port of mongo :

# netstat -tulpn | grep mongo
tcp        0      0 192.168.202.111:27017   0.0.0.0:*               LISTEN      2338/mongod
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      2338/mongod

Connect to the instance :

mongo --port 27017

Create the user administrator.

In the admin database, add a user with the root privilege.

use admin ;
db.createUser({user: "admin", pwd: "XXXXX", roles:[{role: "root", db: "admin"}]})

Disconnect the mongo shell.

Connect and authenticate as the user administrator.

To authenticate during connection :

mongo --port=27017 -u admin -pXXXXX --authenticationDatabase admin

To authenticate after connecting:

$ mongo --port 27017
use admin
db.auth("admin", "XXXXX")

 

Node 2 (mongod2) :

Set up the config file for replica set member 2 :

$ vi /etc/mongod2.conf
systemLog:
  destination: file
  logAppend: true
  path: /data/mongocluster/mongod2/logs/mongod.log

storage:
  dbPath: /data/mongocluster/mongod2/data
  journal:
    enabled: true

processManagement:
  fork: true # fork and run in background
  pidFilePath: /data/mongocluster/mongod2/var/mongo.pid

net:
  port: 27017
  bindIp: 127.0.0.1,192.168.202.112   # loopback and this host's private IP

security:
  authorization: enabled
  keyFile: /data/mongocluster/mongod2/var/mongo-keyfile

#replication:
#  replSetName: r0

#sharding:
#  clusterRole: shardsvr

Create a new systemd unit file for mongod called /lib/systemd/system/mongod2.service

$ vi /lib/systemd/system/mongod2.service 


[Unit]
Description=Mongo Instance 2
After=syslog.target network.target

[Service]

RemainAfterExit=yes
User=mongod
Group=mongod
ExecStart=/bin/mongod --port 27017  --logpath /data/mongocluster/mongod2/logs/mongod.log --config /etc/mongod2.conf --dbpath /data/mongocluster/mongod2/data

[Install]
WantedBy=multi-user.target


Start the service 

$ systemctl daemon-reload

$ systemctl start mongod2.service


Connect to the instance :

mongo --port 27017

Create the user administrator.

use admin ;
db.createUser({user: "admin", pwd: "XXXXX", roles:[{role: "root", db: "admin"}]})


To authenticate during connection :

mongo --port=27017  -u admin -pXXXXX --authenticationDatabase admin


Node 3 (mongod3) :

Set up the config file for replica set member 3 :

$ vi /etc/mongod3.conf
systemLog:
  destination: file
  logAppend: true
  path: /data/mongocluster/mongod3/logs/mongod.log

storage:
  dbPath: /data/mongocluster/mongod3/data
  journal:
    enabled: true

processManagement:
  fork: true # fork and run in background
  pidFilePath: /data/mongocluster/mongod3/var/mongo.pid

net:
  port: 27017
  bindIp: 127.0.0.1,192.168.202.113   # loopback and this host's private IP

security:
  authorization: enabled
  keyFile: /data/mongocluster/mongod3/var/mongo-keyfile

#replication:
#  replSetName: r0

#sharding:
#  clusterRole: shardsvr

Create a new systemd unit file for mongod called /lib/systemd/system/mongod3.service

$ vi /lib/systemd/system/mongod3.service 


[Unit]
Description=Mongo Instance 3
After=syslog.target network.target

[Service]

RemainAfterExit=yes
User=mongod
Group=mongod
ExecStart=/bin/mongod --port 27017  --logpath /data/mongocluster/mongod3/logs/mongod.log --config /etc/mongod3.conf --dbpath /data/mongocluster/mongod3/data

[Install]
WantedBy=multi-user.target


Start the service 

$ systemctl daemon-reload

$ systemctl start mongod3.service


Connect to the instance :

mongo --port 27017

Create the user administrator.

use admin ;
db.createUser({user: "admin", pwd: "XXXXX", roles:[{role: "root", db: "admin"}]})


To authenticate during connection :

mongo --port=27017  -u admin -pXXXXX --authenticationDatabase admin

Deploy a Replica Set :

Configuration Setup :

Stop all the running instances :

$ systemctl stop mongod1.service
$ systemctl stop mongod2.service
$ systemctl stop mongod3.service


Uncomment the following parameters in each /etc/mongod*.conf file :

replication:
  replSetName: r0
sharding:
  clusterRole: shardsvr
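
A quick way to confirm the options are now active in each config file (paths as used in this guide):

grep -A1 -E '^(replication|sharding):' /etc/mongod*.conf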

Start each member of the replica set with the appropriate options :

For each member, start mongod and specify the replica set name through the --replSet option, along with any other parameters specific to your deployment.

Make the following changes to each systemd unit file :

$ vi /lib/systemd/system/mongod1.service 

[Service]
ExecStart=/bin/mongod --port 27017  --logpath /data/mongocluster/mongod1/logs/mongod.log --config /etc/mongod1.conf --dbpath /data/mongocluster/mongod1/data --replSet "r0"

      

$ vi /lib/systemd/system/mongod2.service 

[Service]
ExecStart=/bin/mongod --port 27017  --logpath /data/mongocluster/mongod2/logs/mongod.log --config /etc/mongod2.conf --dbpath /data/mongocluster/mongod2/data --replSet "r0"

$ vi /lib/systemd/system/mongod3.service

[Service]
ExecStart=/bin/mongod --port 27017  --logpath /data/mongocluster/mongod3/logs/mongod.log --config /etc/mongod3.conf --dbpath /data/mongocluster/mongod3/data --replSet "r0"

Reload the service file :

$ systemctl daemon-reload

Start all the instances one by one :

$ systemctl start mongod1.service
$ systemctl start mongod2.service
$ systemctl start mongod3.service


Enable the services to run at startup :

systemctl enable mongod1.service
systemctl enable mongod2.service
systemctl enable mongod3.service


Connect a mongo shell to a replica set member.


Node 1 (mongod1) :


Connect to the MongoDB shell using the administrative user you created previously:

mongo --port 27017 -u admin -pXXXX --authenticationDatabase admin


Initiate the replica set.

Use rs.initiate() on one and only one member of the replica set:

> rs.initiate() 


Output:

{
 "info2" : "no configuration explicitly specified -- making one",
 "me" : "192.168.202.111:27017",
 "ok" : 1
}



Verify the initial replica set configuration.

> rs.conf()
r0:OTHER> rs.conf()
{
 "_id" : "r0",
 "version" : 1,
 "members" : [
  {
   "_id" : 0,
   "host" : "192.168.202.111:27017",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : 0,
   "votes" : 1
  }
 ],
 "settings" : {
  "chainingAllowed" : true,
  "heartbeatTimeoutSecs" : 10,
  "getLastErrorModes" : {

  },
  "getLastErrorDefaults" : {
   "w" : 1,
   "wtimeout" : 0
  }
 }
}


Add the remaining members to the replica set.


Add the remaining members with the rs.add() method. You must be connected to the primary to add members to a replica set.

rs.add() can, in some cases, trigger an election. If the mongod you are connected to becomes a secondary, you need to connect the mongo shell to the new primary to continue adding new replica set members. Use rs.status() to identify the primary in the replica set.

rs.add("192.168.202.112:27017")
rs.add("192.168.202.113:27017")

When complete, you have a fully functional replica set. The new replica set will elect a primary.

We can also initiate the replica set and add all members with a single command :

rs.initiate(
  {
    _id: "r0",
    version: 1,
    members: [
      { _id: 0, host: "192.168.202.111:27017" },
      { _id: 1, host: "192.168.202.112:27017" },
      { _id: 2, host: "192.168.202.113:27017" }
    ]
  }
)

Check the status of the replica set.

Use the rs.status() operation:

> rs.status()

     

If the replica set has been configured properly, you’ll see output similar to the following:

r0:PRIMARY> rs.status()
{
        "set" : "r0",
        "date" : ISODate("2017-09-06T05:59:13.642Z"),
        "myState" : 1,
        "term" : NumberLong(2),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.202.111:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 6788,
                        "optime" : {
                                "ts" : Timestamp(1504676007, 150),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2017-09-06T05:33:27Z"),
                        "lastHeartbeat" : ISODate("2017-09-06T05:59:12.315Z"),
                        "lastHeartbeatRecv" : ISODate("2017-09-06T05:59:12.544Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "192.168.202.131:27019",
                        "configVersion" : 3
                },
                {
                        "_id" : 1,
                        "name" : "192.168.202.112:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 6809,
                        "optime" : {
                                "ts" : Timestamp(1504676007, 150),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2017-09-06T05:33:27Z"),
                        "electionTime" : Timestamp(1504670776, 1),
                        "electionDate" : ISODate("2017-09-06T04:06:16Z"),
                        "configVersion" : 3,
                        "self" : true
                },
                {
                        "_id" : 2,
                        "name" : "192.168.202.113:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 6788,
                        "optime" : {
                                "ts" : Timestamp(1504676007, 150),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2017-09-06T05:33:27Z"),
                        "lastHeartbeat" : ISODate("2017-09-06T05:59:12.276Z"),
                        "lastHeartbeatRecv" : ISODate("2017-09-06T05:59:13.595Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "192.168.202.131:27018",
                        "configVersion" : 3
                }
        ],
        "ok" : 1
}
r0:PRIMARY> rs.conf()
{
        "_id" : "r0",
        "version" : 3,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.202.111:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "192.168.202.112:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "192.168.202.113:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("59ae41f4d3b760c7476fdefc")
        }
}

db.printSlaveReplicationInfo() 

Returns a formatted report of the status of a replica set from the perspective of the secondary members of the set.

r0:PRIMARY> db.printSlaveReplicationInfo();
source: 192.168.202.112:27017
        syncedTo: Sat Sep 09 2017 16:48:00 GMT+0530 (IST)
        0 secs (0 hrs) behind the primary
source: 192.168.202.113:27019
        syncedTo: Sat Sep 09 2017 16:48:00 GMT+0530 (IST)
        0 secs (0 hrs) behind the primary

db.serverStatus()

The serverStatus command returns a document that provides an overview of the database's state.

r0:PRIMARY> db.serverStatus() ;
{
        "host" : "prodserver11:27018",
        "advisoryHostFQDNs" : [ ],
        "version" : "3.2.16",
        "process" : "mongod",
        "pid" : NumberLong(2340),
        "uptime" : 36351,
        "uptimeMillis" : NumberLong(36356036),
        "uptimeEstimate" : 31425,
        "localTime" : ISODate("2017-09-09T13:23:10.607Z"),
        "asserts" : {
                "regular" : 0,
                "warning" : 0,
                "msg" : 0,
                "user" : 1,
                "rollovers" : 0
        },
        "connections" : {
                "current" : 9,
                "available" : 810,
                "totalCreated" : NumberLong(24)
        }

Connections :

r0:PRIMARY> db.serverStatus().connections
{ "current" : 11, "available" : 808, "totalCreated" : NumberLong(27) }
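
Other sections of serverStatus can be queried the same way; for example (field availability varies by MongoDB version and storage engine):

db.serverStatus().mem
db.serverStatus().opcounters
db.serverStatus().repl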

Initialize Config Servers :

Config servers store the metadata for a sharded cluster. The metadata reflects the state and organization of all data and components within the sharded cluster, and includes the list of chunks on every shard and the ranges that define the chunks. Config servers store this metadata in the config database.

Rather than using a single config server, it is recommended to use a replica set of config servers to ensure the integrity of the metadata.

$ vi /etc/mongodc1.conf

systemLog:
   destination: file
   logAppend: true
   path: /data/mongocluster/mongocnf1/logs/mongod.log

storage:
   dbPath: /data/mongocluster/mongocnf1/data
   journal:
      enabled: true

processManagement:
   fork: true # fork and run in background
   pidFilePath: /data/mongocluster/mongocnf1/var/mongo.pid

net:
   port: 27019
   bindIp: 127.0.0.1,192.168.202.114   # loopback and this host's private IP

security:
   authorization: enabled
   keyFile: /data/mongocluster/mongocnf1/var/mongo-keyfile

sharding:
  clusterRole: configsvr

# Note: in MongoDB 3.4 and later, config servers must themselves be deployed as a
# replica set (add replication.replSetName here and initiate it).

Create a new systemd unit file for the mongo config server called /lib/systemd/system/mongodc1.service :

$ vi /lib/systemd/system/mongodc1.service 

[Unit]
Description=Mongo Config Server 1
After=syslog.target network.target

[Service]

RemainAfterExit=yes
User=mongod
Group=mongod
ExecStart=/bin/mongod --configsvr --port 27019  --logpath /data/mongocluster/mongocnf1/logs/mongod.log --config /etc/mongodc1.conf --dbpath /data/mongocluster/mongocnf1/data

[Install]
WantedBy=multi-user.target

Start the config server :

$ systemctl daemon-reload
$ systemctl start mongodc1.service

Create the administrator user.

mongo --port=27019

use admin ;

db.createUser({user: "admin", pwd: "XXXXX", roles:[{role: "root", db: "admin"}]})

Enable services at startup :

$ systemctl enable mongodc1.service

For multiple config servers, start each config server with the --replSet option. Then connect a mongo shell to the primary of the config server replica set and use rs.add() to add the new members :

rs.add("<hostnameNew>:<portNew>")

OR

rs.initiate(
  {
    _id: "configReplSet",
    configsvr: true,
    members: [
      { _id: 0, host: "mongoconf1:27019" },
      { _id: 1, host: "mongoconf2:27019" },
      { _id: 2, host: "mongoconf3:27019" }
    ]
  }
)

To remove the member being replaced from the config server replica set :

rs.remove("<hostnameOld>:<portOld>")

Configure Query Router

MongoDB mongos instances route queries and write operations to shards in a sharded cluster. Applications never connect or communicate directly with the shards. The mongos tracks what data is on which shard by caching the metadata from the config servers.

$ vi /etc/mongos.conf

systemLog:
   destination: file
   logAppend: true
   path: /data/mongocluster/mongos/logs/mongos.log

# mongos does not store data itself, so it needs no storage section

processManagement:
   fork: true # fork and run in background
   pidFilePath: /data/mongocluster/mongos/var/mongos.pid

net:
   port: 27019
   bindIp: 127.0.0.1,192.168.202.115   # loopback and this host's private IP

security:
   keyFile: /data/mongocluster/mongos/var/mongo-keyfile

sharding:
   configDB: "192.168.202.114:27019"
   # For MongoDB 3.4+ the config servers form a replica set and configDB takes the
   # form "configReplSet/host1:27019,host2:27019,host3:27019"

Create a new systemd unit file for mongos called /lib/systemd/system/mongos.service :

$ vi /lib/systemd/system/mongos.service


[Unit]
Description=Mongo Router Service
After=network.target

[Service]
RemainAfterExit=yes
User=mongod
Group=mongod
ExecStart=/bin/mongos --port 27019  --config /etc/mongos.conf

[Install]
WantedBy=multi-user.target

Start the mongos service :

$ systemctl daemon-reload
$ systemctl start mongos.service
$ systemctl enable mongos.service

Connect to the query router from one of your shard servers

mongo --port=27019
use admin ;
db.auth('admin', 'XXXXX')

 From the mongos interface, add each shard individually :

mongos>  sh.addShard( "r0/192.168.202.111:27017")


Optionally, if you configured replica sets for each shard instead of single servers, you can add them at this stage with a similar command :

sh.addShard( "r0/192.168.202.111:27017,192.168.202.112:27017,192.168.202.113:27017" )


In this format, r0 is the name of the replica set for the first shard and 192.168.202.111 is the first host in the shard (using port 27017).

Enable Sharding at Database Level


From the mongos shell, create a new database, e.g. exampleDB :

use exampleDB
sh.enableSharding("exampleDB")


To verify that sharding was enabled successfully, first switch to the config database, then run a find() on the databases collection :

mongos> use config
switched to db config

mongos> db.databases.find()
{ "_id" : "exampleDB", "primary" : "r0", "partitioned" : true }

Enable Sharding at Collection Level


Switch to the exampleDB database we created previously, then create a new collection called exampleCollection and hash its _id key. The _id index is already created by default for new documents :

use exampleDB
db.exampleCollection.ensureIndex( { _id : "hashed" } )
sh.shardCollection( "exampleDB.exampleCollection", { "_id" : "hashed" } )
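
To confirm that the hashed index now exists on the collection:

db.exampleCollection.getIndexes()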


Test your Cluster :


Run the following code in the mongo shell to generate 500 simple documents and insert them into exampleCollection :

use exampleDB

for (var i = 1; i <= 500; i++) db.exampleCollection.insert( { x : i } )

db.exampleCollection.getShardDistribution()
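
As an aside, the same test data can also be loaded with a single bulk insert; a hedged equivalent using insertMany (available in MongoDB 3.2+ shells):

use exampleDB
var docs = [];
for (var i = 1; i <= 500; i++) { docs.push({ x : i }); }
db.exampleCollection.insertMany(docs);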


The getShardMap command returns the map of shard and config server connection strings known to this mongos :



use admin ;
db.runCommand("getShardMap")
mongos> db.runCommand("getShardMap")
{
        "map" : {
                "192.168.202.114:27019" : "192.168.202.114:27019",
                "192.168.202.111:27017" : "r0/192.168.202.111:27017,192.168.202.112:27017,192.168.202.113:27017",
                "192.168.202.112:27017" : "r0/192.168.202.111:27017,192.168.202.112:27017,192.168.202.113:27017",
                "192.168.202.113:27017" : "r0/192.168.202.111:27017,192.168.202.112:27017,192.168.202.113:27017",
                "config" : "192.168.202.114:27019",
                "r0" : "r0/192.168.202.111:27017,192.168.202.112:27017,192.168.202.113:27017",
        },
        "ok" : 1
}


sh.getBalancerState() returns true when the balancer is enabled and false if the balancer is disabled.

mongos> sh.getBalancerState()
true

Use the sh.status() method in the mongo shell to see an overview of the cluster.

sh.status()
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("59af82aedf5f2dcf39831c7e")
}
  shards:
        {  "_id" : "r0",  "host" : "r0/192.168.202.111:27017,192.168.202.112:27017,192.168.202.113:27017" }
  active mongoses:
        "3.2.16" : 1
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  1
        Last reported error:  could not find host matching read preference { mode: "primary" } for set r0
        Time of Reported error:  Sat Sep 09 2017 16:47:54 GMT+0530 (IST)
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "exampleDB",  "primary" : "r0",  "partitioned" : true }


Run db.mongos.find() (in the config database) to check the status of each mongos :

mongos> db.mongos.find()
{ "_id" : "mongorouter:27019", "ping" : ISODate("2017-09-09T13:19:32.234Z"), "up" : NumberLong(31353), "waiting" : true, "mongoVersion" : "3.2.16" }


db.chunks.find()

mongos> db.chunks.find()
{ "_id" : "exampleDB.exampleCollection-_id_MinKey", "lastmod" : Timestamp(1, 1), "lastmodEpoch" : ObjectId("59af86b2df5f2dcf39831ceb"), "ns" : "exampleDB.exampleCollection", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : NumberLong(0) }, "shard" : "r0" }
{ "_id" : "exampleDB.exampleCollection-_id_0", "lastmod" : Timestamp(1, 2), "lastmodEpoch" : ObjectId("59af86b2df5f2dcf39831ceb"), "ns" : "exampleDB.exampleCollection", "min" : { "_id" : NumberLong(0) }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "r0" }


Print sharding configuration using db.printShardingStatus() 

mongos> db.printShardingStatus() ;
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("59af82aedf5f2dcf39831c7e")
}
  shards:
        {  "_id" : "r0",  "host" : "r0/192.168.202.111:27017,192.168.202.112:27017,192.168.202.113:27017" }
  active mongoses:
        "3.2.16" : 1
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  1
        Last reported error:  could not find host matching read preference { mode: "primary" } for set r0
        Time of Reported error:  Sat Sep 09 2017 16:47:54 GMT+0530 (IST)
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "exampleDB",  "primary" : "r0",  "partitioned" : true }



The getCmdLineOpts command (db.serverCmdLineOpts() in the shell) returns a document containing the command-line and parsed configuration options :

mongos> db.serverCmdLineOpts()
{
        "argv" : [
                "/bin/mongos",
                "--port",
                "27019",
                "--config",
                "/etc/mongos.conf"
        ],
        "parsed" : {
                "config" : "/etc/mongos.conf",
                "net" : {
                        "bindIp" : "127.0.0.1,192.168.202.115",
                        "port" : 27019
                },
                "processManagement" : {
                        "pidFilePath" : "/data/mongocluster/mongos/var/mongos.pid"
                },
                "security" : {
                        "keyFile" : "/data/mongocluster/mongos/var/mongo-keyfile"
                },
                "sharding" : {
                        "configDB" : "192.168.202.114:27019"
                },
                "systemLog" : {
                        "destination" : "file",
                        "logAppend" : true,
                        "path" : "/data/mongocluster/mongos/logs/mongos.log"
                }
        },
        "ok" : 1
}



Force a Member in a Replica Set to Become Primary


Force a Member to be Primary by Setting its Priority High


Find the current configuration :

r0:PRIMARY> rs.conf()
{
        "_id" : "r0",
        "version" : 3,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.202.111:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "192.168.202.112:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "192.168.202.113:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
 


Use the following sequence of operations :

cfg = rs.conf()
cfg.members[0].priority = 1
cfg.members[1].priority = 0.5
cfg.members[2].priority = 0.5
rs.reconfig(cfg)
r0:PRIMARY> rs.conf()
{
        "_id" : "r0",
        "version" : 4,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.202.111:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "192.168.202.112:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 0.5,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "192.168.202.113:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 0.5,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
 

Force a Member to be Primary Using Database Commands


To force a member to become primary, use the following procedure :


Here 192.168.202.112:27017 is the primary 


1. In a mongo shell, run rs.status() to ensure your replica set is running as expected

2. In a mongo shell connected to the mongod instance running on 192.168.202.113:27017, freeze the instance so that it does not attempt to become primary for 60 seconds :

rs.freeze(60)

3. In a mongo shell connected to the mongod running on 192.168.202.112:27017, step down this instance :

rs.stepDown(60)

192.168.202.111:27017 becomes primary 
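
To confirm which member is now primary without reading the full rs.status() output, a quick check from any member:

db.isMaster().primary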


Free Monitoring 

Free monitoring provides information about your deployment, including:

  • Operation Execution Times
  • Memory Usage
  • CPU Usage
  • Operation Counts

Enable/Disable Free Monitoring

You can enable or disable free monitoring at runtime using db.enableFreeMonitoring() and db.disableFreeMonitoring().

 

rep01:PRIMARY> db.enableFreeMonitoring()
{
        "state" : "enabled",
        "message" : "To see your monitoring data, navigate to the unique URL below. Anyone you share the URL with will also be able to view this page. You can disable monitoring at any time by running db.disableFreeMonitoring().",
        "url" : "https://cloud.mongodb.com/freemonitoring/cluster/C5O6XAX3OTJWLOJXJSOPUAPEZTZF2RIN",
        "userReminder" : "",
        "ok" : 1,
        "operationTime" : Timestamp(1532924493, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1532924493, 1),
                "signature" : {
                        "hash" : BinData(0,"YDvof6Q1FOpTGT4ZCAIF0NqJ7Cw="),
                        "keyId" : NumberLong("6583859825938006017")
                }
        }
}
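
To check the current monitoring state (and retrieve the URL again later), MongoDB 4.0 also provides:

db.getFreeMonitoringStatus()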

When running with access control, the user must have the following privileges to enable free monitoring and get status:

{ resource: { cluster : true }, actions: [ "setFreeMonitoring", "checkFreeMonitoringStatus" ] }
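
A minimal sketch of granting these privileges to a non-root user via a custom role (the role name freeMonitoringManager and the user someUser are hypothetical):

use admin
db.createRole({
  role: "freeMonitoringManager",   // hypothetical role name
  privileges: [
    { resource: { cluster: true }, actions: [ "setFreeMonitoring", "checkFreeMonitoringStatus" ] }
  ],
  roles: []
})
db.grantRolesToUser("someUser", [ { role: "freeMonitoringManager", db: "admin" } ])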

When enabled, the monitored data is uploaded periodically. The monitored data expires after 24 hours. That is, you can only access monitored data that has been uploaded within the past 24 hours.
