MongoDB Database Administration Commands

This article lists the commands used for MongoDB administration. Please feel free to suggest and contribute more commands.


Log in to a MongoDB instance

# mongo -u <username> -p <password> --authenticationDatabase <dbname>

Authenticate and log out from a database

use admin; 
db.auth('user','password');

// Logout
db.logout()

Show all databases

show dbs

Switch to a database

use <databasename>

This command switches from the default database to the specified database (even a non-existent database name will work). However, the database is not actually created until you save a collection inside it.
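
A minimal sketch illustrating this behaviour (the database and collection names are assumptions):

use mydb                                // switches context; nothing is created yet
show dbs                                // mydb does not appear in the list
db.mycollection.insertOne({x: 1})       // the first write creates the database
show dbs                                // mydb now appears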

Display the current database name

db.getName()

Create a collection

db.createCollection("collectionName");

List collections

show collections ;
db.getCollectionNames();

Insert documents into a collection

// Insert single document
//
db.<collectionName>.insert({field1: "value", field2: "value"})
//
// Insert multiple documents
//
db.<collectionName>.insert([{field1: "value1"}, {field1: "value2"}])
db.<collectionName>.insertMany([{field1: "value1"}, {field1: "value2"}])

Drop a collection

db.<collectionName>.drop()

Remove documents from a collection

db.<collectionName>.remove({})

//

db.<collectionName>.deleteMany({})

Drop a database

use <database>

db.dropDatabase()

Display collection records

Retrieve all records

db.<collectionName>.find();

Retrieve a limited number of records; the following command returns 10 results:

db.<collectionName>.find().limit(10);

Retrieve records by id

db.<collectionName>.find({"_id": ObjectId("someid")});

Retrieve only specific attributes by passing a projection object in which each attribute name is set to 1 to include it in the output or 0 to exclude it.

db.<collectionName>.find({"_id": ObjectId("someid")}, {field1: 1, field2: 1});
db.<collectionName>.find({"_id": ObjectId("someid")}, {field1: 0}); // Exclude field1

Collection count

db.<collectionName>.count();
db.orders.countDocuments({})                        // from version 4.0
db = db.getSiblingDB('users')
db.active.count()

To display the results in a formatted way, you can use the pretty() method.

db.<collectionName>.find().pretty()

Use the $text query operator to perform text searches on a collection with a text index.

db.<collectionName>.find( { $text: { $search: "java coffee shop" } } )
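
Note that $text requires a text index on the queried fields; a minimal sketch of creating one (the field names are assumptions):

db.<collectionName>.createIndex( { name: "text", description: "text" } )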

Search for exact phrases by wrapping them in double-quotes

db.<collectionName>.find( { $text: { $search: "\"coffee shop\"" } } )

Search and find the number of results

db.<collectionName>.runCommand("text", {search: "robot"}).results.length   // e.g. 2

By adding a leading minus sign to a search word, you can exclude documents containing that word. Let's say we want all documents on "robot" but not "humans".

db.<collectionName>.runCommand("text", {search: "robot -humans"})

Using Comparison Query Operators

db.<collectionName>.find({Employeeid : {$gt:2}})
db.<collectionName>.find( { qty: { $eq: 20 } } )
db.<collectionName>.find({Employeeid : {$gte:2}})
db.<collectionName>.find( { qty: { $in: [ 5, 15 ] } } )
db.<collectionName>.find( { qty: { $ne: 20 } } )

Using Logical Query Operators

db.<collectionName>.find( { $and: [ { price: { $ne: 3} }, { price: { $exists: true } } ] } )
db.<collectionName>.find( { $or: [ { quantity: { $lt: 20 } }, { price: 10 } ] } )
db.<collectionName>.find( { item: { $not: /^p.*/ } } )

Sorting ascending

db.<collectionName>.find().sort({Employeeid: 1})

Sorting descending

db.<collectionName>.find().sort({Employeeid: -1})

Return Distinct Values for a Field

db.<collectionName>.distinct( "Field" )

Save or Update Document

db.<collectionName>.update({"Employeeid" : 1},{$set: { "EmployeeName" : "NewMartin"}});
db.<collectionName>.update
(
	{
		Employeeid : 1
	},
	{
		$set :
		{
			"EmployeeName" : "NewMartin"
			"Employeeid" : 22
		}
	}
)
// save() updates the document matching the given _id; if no matching document is found, a new document is inserted

db.<collectionName>.save({"_id": new ObjectId("jhgsdjhgdsf"), field1: "value", field2: "value"});

To update multiple documents with the update() method:

db.<collectionName>.update({"name":"Jon Snow"},{$set:{"name":"Kit Harington"}},{multi:true})

db.<collectionName>.update({location_city:"New York"},{ $set : { location_country: "FUDGE!"}}, {multi: true});

Adding a field to a document

db.<collectionName>.save({_id:1, x:10})
db.<collectionName>.update({_id:1},{$set:{y:9}})

replaceOne() replaces the first matching document in the collection that matches the filter, using the replacement document.

try {
   db.<collectionName>.replaceOne(
      { "name" : "Pizza Rat's Pizzaria" },
      { "_id": 4, "name" : "Pizza Rat's Pizzaria", "Borough" : "Manhattan", "violations" : 8 }
   );
} catch (e){
   print(e);
}

Delete documents from a collection

Delete All Documents that Match a Condition

db.<collectionName>.deleteMany({ status : "A" })
db.<collectionName>.remove( { status : "P" } )

Remove Only One Document that Matches a Condition

 db.<collectionName>.deleteOne( { status: "D" } )
 db.<collectionName>.remove( { status: "D" }, 1)
 db.<collectionName>.remove( { qty: { $gt: 20 } }, true )

Rename a collection

db.<collectionName>.renameCollection("<NewcollectionName>")

Copy all documents from one collection to another

db.<collectionName>.copyTo("<destinationCollectionName>")

Create an Index

The following example creates an ascending index on the field orderDate.

db.<collectionName>.createIndex( { orderDate: 1 } )

The following example creates a compound index on the orderDate field (in ascending order) and the zipcode field (in descending order).

db.<collectionName>.createIndex( { orderDate: 1, zipcode: -1 } )

Create indexes with collation specified

By specifying a collation strength of 1 or 2, you can create a case-insensitive index. An index with a collation strength of 1 is both diacritic- and case-insensitive.

db.<collectionName>.createIndex(
   { category: 1 },
   { name: "category_fr", collation: { locale: "fr", strength: 2 } }
)
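
Queries must specify the same collation to use this index; a minimal sketch (the query value is an assumption):

db.<collectionName>.find( { category: "cafe" } ).collation( { locale: "fr", strength: 2 } )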

Create a 2dsphere Index

db.<collectionName>.createIndex( { loc : "2dsphere" } )

We can build a 2d geospatial index with a location range other than the default. Use the min and max options when creating the index.

db.<collectionName>.createIndex( { <location field> : "2d" } ,{ min : <lower bound> , max : <upper bound> } )

Create a Haystack Index

If you have a collection with documents that contain fields similar to the following:

{ _id : 100, pos: { lng : 126.9, lat : 35.2 } , type : "restaurant"}
{ _id : 200, pos: { lng : 127.5, lat : 36.1 } , type : "restaurant"}
{ _id : 300, pos: { lng : 128.0, lat : 36.7 } , type : "national park"}

The following operation creates a haystack index with buckets that store keys within 1 unit of longitude or latitude. A bucketSize of 1 groups location values that are within 1 unit of the specified longitude and latitude.

db.<collectionName>.createIndex( { pos : "geoHaystack", type : 1 } ,  { bucketSize : 1 } )

Create multiple indexes with createIndexes()

db.<collectionName>.createIndexes([{"borough": 1}, {"location": "2dsphere"}])

List all indexes in collection

db.<collectionName>.getIndexes()

List all indexes of a database

db.getCollectionNames().forEach(function(collection) {
   indexes = db[collection].getIndexes();
   print("Indexes for " + collection + ":");
   printjson(indexes);
});

Remove specific Index in a collection

The following operation removes an ascending index on the location field

db.<collectionName>.dropIndex( { "location": 1 } )

Remove all Indexes in a collection

The following operation removes all indexes in a collection except the _id index

db.<collectionName>.dropIndexes()

Modify an Index 

To modify an existing index, you need to drop and recreate the index.
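
A minimal sketch of the drop-and-recreate approach, reusing the orderDate index from above (the new index definition is an assumption):

db.<collectionName>.dropIndex( { orderDate: 1 } )
db.<collectionName>.createIndex( { orderDate: 1, zipcode: -1 } )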

Create View

Views act as read-only collections and are computed on demand during read operations.

Create a View from a Single Collection

{ _id: 1, empNumber: "abc123", feedback: { management: 3, environment: 3 }, department: "A" }
{ _id: 2, empNumber: "xyz987", feedback: { management: 2, environment: 3 }, department: "B" }
{ _id: 3, empNumber: "ijk555", feedback: { management: 3, environment: 4 }, department: "A" }

The following operation creates a managementFeedback view with the _id, feedback.management, and department fields:

db.createView(
   "managementFeedback",
   "<collectionName>",
   [ { $project: { "management": "$feedback.management", department: 1 } } ]
)

Query a View

db.<viewName>.find()
> show collections ;
employee
managementFeedback
system.views

> db.system.views.find()

Create a View from Multiple Collections

The orders Collection

{ "_id" : 1, "item" : "abc", "price" : NumberDecimal("12.00"), "quantity" : 2 }
{ "_id" : 2, "item" : "jkl", "price" : NumberDecimal("20.00"), "quantity" : 1 }
{ "_id" : 3, "item" : "abc", "price" : NumberDecimal("10.95"), "quantity" : 5 }
{ "_id" : 4, "item" : "xyz", "price" : NumberDecimal("5.95"), "quantity" : 5 }
{ "_id" : 5, "item" : "xyz", "price" : NumberDecimal("5.95"), "quantity" : 10 }

The Inventory Collection

{ "_id" : 1, "sku" : "abc", description: "product 1", "instock" : 120 }
{ "_id" : 2, "sku" : "def", description: "product 2", "instock" : 80 }
{ "_id" : 3, "sku" : "ijk", description: "product 3", "instock" : 60 }
{ "_id" : 4, "sku" : "jkl", description: "product 4", "instock" : 70 }
{ "_id" : 5, "sku" : "xyz", description: "product 5", "instock" : 200 }

View by joining two collections

db.createView (
   "orderDetails",
   "orders",
   [
     { $lookup: { from: "inventory", localField: "item", foreignField: "sku", as: "inventory_docs" } },
     { $project: { "inventory_docs._id": 0, "inventory_docs.sku": 0 } }
   ]
)

Create a Capped Collection

The size of a capped collection is specified in bytes

db.createCollection( "<collectionName>", { capped: true, size: 100000 } )

Query a Capped Collection

By default, a find query on a capped collection returns documents in insertion order. If you want the documents retrieved in reverse insertion order, use sort with the $natural operator.

db.<collectionName>.find().sort({$natural:-1})

Check if a Collection is Capped

db.<collectionName>.isCapped()

Convert a collection to Capped

db.runCommand({"convertToCapped": "<collectionName>", size: 100000});

Size of a collection in bytes

db.<collectionName>.dataSize()

Total amount of storage allocated to the collection for document storage

db.<collectionName>.storageSize()

The total size in bytes of the data in the collection plus the size of every index on the collection

db.<collectionName>.totalSize()

The total size of all indexes for the collection

db.<collectionName>.totalIndexSize()

Explain Plan for Collection

queryPlanner Mode

db.<collectionName>.explain().count( { quantity: { $gt: 50 } } )

executionStats Mode

db.<collectionName>.explain("executionStats").find({ quantity: { $gt: 50 }, category: "apparel" })

allPlansExecution Mode

db.<collectionName>.explain("allPlansExecution").update(
   { quantity: { $lt: 1000}, category: "apparel" },
   { $set: { reorder: true } }
)

Find the data distribution statistics for a sharded collection

db.<ShardedcollectionName>.getShardDistribution()

Information regarding the state of data in a sharded cluster

db.<ShardedcollectionName>.getShardVersion()

Information regarding latency statistics of a collection

db.<collectionName>.latencyStats()

Open a Change Stream

watchCursor = db.getSiblingDB("<Database>").<collectionName>.watch()
while (!watchCursor.isExhausted()){
   if (watchCursor.hasNext()){
      printjson(watchCursor.next());
   }
}

Validate a collection

db.<collectionName>.validate()

Print collection Statistics

db.printCollectionStats()

db.adminCommand()

db.adminCommand runs commands against the admin database regardless of the database context in which it runs.

Kill MongoDB Operations

db.adminCommand( { "killOp": 1, "op": 724 } )

Create a User 

use admin ;
 
 db.adminCommand(
  {
    createUser: "Username",
    pwd: "<password>",
    roles: [
      { role: "dbOwner", db: "admin" }
    ]
  }
)

Identify a user's roles and privileges

use <database>
db.getUser("reportsUser")
use <database>
db.getRole( "readWrite", { showPrivileges: true } )

Revoke a Role

use <database>
db.revokeRolesFromUser(
    "reportsUser",
    [
      { role: "readWrite", db: "accounts" }
    ]
)

Grant a Role

use <database>
db.grantRolesToUser(
    "reportsUser",
    [
      { role: "read", db: "accounts" }
    ]
)

Modify password for existing user

use <database>
db.changeUserPassword("reporting", "SOh3xxxxxxxxxxx")
use <database>
db.updateUser(
"user123",
{
pwd: "KNlZmiaNUp0B",
customData: { title: "Senior Manager" }
}
)

Drop an existing User

use <database>
db.dropUser("reportUser1", {w: "majority", wtimeout: 5000})

Aggregate Pipeline with $currentOp

The first stage runs the $currentOp operation and the second stage filters the results of that operation.

use admin                                           // from version 3.6 
db.aggregate( [ {
   $currentOp : { allUsers: true, idleConnections: true } }, {
   $match : { shard: "shard01" }
   }
] )

Clone Collection

This operation copies the profiles collection from the users database on the server at mongodb.example.net into the users database on the local server. The operation only copies documents that satisfy the query { ‘active’ : true }.

db.cloneCollection('mongodb.example.net:27017', 'profiles', { 'active' : true } )

Find the in-progress operations for the database instance

db.currentOp()

Write Operations Waiting for a Lock

db.currentOp(
   {
     "waitingForLock" : true,
     $or: [
        { "op" : { "$in" : [ "insert", "update", "remove" ] } },
        { "query.findandmodify": { $exists: true } }
    ]
   }
)

Active Operations on a Specific Database

The following example returns information on all active operations for database db1 that have been running longer than 3 seconds:

db.currentOp(
   {
     "active" : true,
     "secs_running" : { "$gt" : 3 },
     "ns" : /^db1\./
   }
)

Active Indexing Operations

db.currentOp(
    {
      $or: [
        { op: "command", "query.createIndexes": { $exists: true } },
        { op: "none", ns: /\.system\.indexes\b/ }
      ]
    }
)

fsyncLock

Forces the mongod to flush all pending write operations to disk and locks the entire mongod instance to prevent additional writes until the user releases the lock.

db.fsyncLock()

To unlock the instance for writes, we must run db.fsyncUnlock().
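
A minimal sketch of the typical lock, back up, unlock sequence (the backup step in the comment depends on your tooling and is an assumption):

db.fsyncLock()        // flush pending writes and block new writes
// ... take the file-system snapshot or copy of the dbPath here ...
db.fsyncUnlock()      // release the lock and resume writes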

Get Collection Information in a database

Returns an array of documents with collection or view information, such as name and options, for the current database.

use <database>
db.getCollectionInfos()

Find the current log message verbosity

db.getLogComponents()

Test MongoDB connectivity from the shell

db.getMongo()

Database Profiling

The Database Profiler collects detailed information about operations run against a mongod instance. The profiler’s output can help to identify inefficient queries and operations.

We can enable and configure profiling for individual databases or for all databases on a mongod instance. Profiler settings affect only a single mongod instance and will not propagate across a replica set or sharded cluster.

Display the current profiling level

db.getProfilingLevel()

Display the current profiling level, the slowOpThresholdMs setting, and the slowOpSampleRate setting.

db.getProfilingStatus()

Profiling Levels

Level 0: The profiler is off and does not collect any data. This is the default profiler level.
Level 1: The profiler collects data for operations that take longer than the value of slowms.
Level 2: The profiler collects data for all operations.

Enable Profiling and set profiling level

db.setProfilingLevel(2)
db.setProfilingLevel(1, { slowms: 20 })

Configuration file settings

operationProfiling:
   mode: slowOp                                   # accepted values: off, slowOp, all
   slowOpThresholdMs: 200                         # default: 100
   slowOpSampleRate: 0.5                          # between 0.0 and 1.0; default: 1.0

Query profiler data

Operations slower than 5 milliseconds

db.system.profile.find( { millis : { $gt : 5 } } ).pretty()

Change Size of system.profile Collection on the Primary

db.setProfilingLevel(0)
db.system.profile.drop()
db.createCollection( "system.profile", { capped: true, size:4000000 } )
db.setProfilingLevel(1)

Display database host information

db.hostInfo()

Display the role of database instance

db.isMaster()

End the current authentication session

db.logout()

Repair Database

Rebuilds the database and indexes by discarding invalid or corrupt data that may be present due to an unexpected system restart or shutdown.

Repair for WiredTiger

db.runCommand( { repairDatabase: 1 } )
use <database>
db.repairDatabase()

For MMAPv1

mongod --repair --repairpath /opt/vol2/data

List the Parameters used to compile the mongodb instance

db.serverBuildInfo()

Server Status

The serverStatus command returns a document that provides an overview of the database’s state.

db.runCommand( { serverStatus: 1 } )
db.serverStatus()

Database instance Uptime

//seconds
db.serverStatus().uptime
// minutes
db.serverStatus().uptime / 60

// hours
db.serverStatus().uptime / 3600

// days
db.serverStatus().uptime / 86400

Database Connection Status

db.serverStatus( { connections: 1 } )

connections.current : The number of incoming connections from clients to the database server.
connections.available : The number of unused incoming connections available.
connections.totalCreated : Count of all incoming connections created to the server.
connections.active : The number of active client connections to the server.

Set the verbosity level of log messages

db.setLogLevel(1)

Database Statistics

db.stats()

Enable Access Control and Enforce Authentication

//Start MongoDB without authentication

// Connect to the instance
$ mongo --port 27017

// Create the user administrator.
> use admin
> db.createUser(
{
user: "superAdmin",
pwd: "admin123",
roles: [ { role: "root", db: "admin" } ]
})

// Re-start the MongoDB instance with access control

// Add the security.authorization setting to the config file

$ sudo vi /etc/mongod.conf
systemLog:
destination: file
path: /usr/local/var/log/mongodb/mongo.log
logAppend: true
storage:
dbPath: /usr/local/var/mongodb
net:
bindIp: 127.0.0.1
security:
authorization: enabled

// Restart mongodb
$ sudo service mongod restart

// Connect to database instance with superAdmin access
$ mongo --port 27017 -u "superAdmin" -p "admin123" --authenticationDatabase "admin"

// Create user access (readWrite) for a specific database
> use myAppDb
> db.createUser(
{
user: "myAppDbUser",
pwd: "myApp123",
roles: [ "readWrite"]
})

x.509 Certificate Based Authentication (TLS/SSL Transport Encryption)

Please refer to the link below.

https://dinfratechsource.com/2018/12/16/securing-mongodb-with-x-509-authentication/

Encryption at Rest using local key file

// Create a base64-encoded keyfile containing a 16 or 32 character string.

# mkdir /data/key
# openssl rand -base64 32 > /data/key/mongodb.key
# chmod 600 /data/key/mongodb.key

// Add encryption Variables in mongod.conf

security:
  enableEncryption: true
  encryptionKeyFile: /data/key/mongodb.key

// Start mongod process

# systemctl start mongod

// Verify that the encryption key manager successfully initialized with the keyfile.

> db.serverCmdLineOpts().parsed.security
{ "enableEncryption" : true, "encryptionKeyFile" : "/data/key/mongodb.key" }

// If the operation was successful, the process will log the following message:

[initandlisten] Encryption key manager initialized with key file: /data/key/mongodb.key

Deploy a new replicaset with Keyfile Access Control

// Create a keyfile

openssl rand -base64 756 > <path-to-keyfile>
chmod 400 <path-to-keyfile>

// Copy the keyfile to each replica set member.
Copy the keyfile to each server hosting the replica set members. Ensure that the user running the mongod instances is the owner of the file and can access the keyfile.
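
A minimal sketch of distributing the keyfile (the hostname, path, and mongod user are assumptions for illustration):

scp /etc/mongodb/keyfile mongo2.example.net:/etc/mongodb/keyfile
ssh mongo2.example.net 'chown mongod:mongod /etc/mongodb/keyfile && chmod 400 /etc/mongodb/keyfile'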

// Start each member of the replica set with access control enabled.

// Configuration file :
security:
  keyFile: <path-to-keyfile>
replication:
  replSetName: <replicaSetName>
net:
  bindIp: localhost,<hostname(s)|ip address(es)>


// Initiate the replica set.
// Run rs.initiate() on one and only one mongod instance for the replica set:

rs.initiate()

OR

rs.initiate(
{
_id : <replicaSetName>,
members: [
{ _id : 0, host : "mongo1.example.net:27017" },
{ _id : 1, host : "mongo2.example.net:27017" },
{ _id : 2, host : "mongo3.example.net:27017" }
]
}
)

// Create the user administrator.

admin = db.getSiblingDB("admin")
admin.createUser(
{
user: "fred",
pwd: "changeme1",
roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
}
)

// Authenticate as the user administrator.

db.getSiblingDB("admin").auth("fred", "changeme1" )
OR 
mongo -u "fred" -p "changeme1" --authenticationDatabase "admin"

// Create the cluster administrator.

db.getSiblingDB("admin").createUser(
{
"user" : "ravi",
"pwd" : "changeme2",
roles: [ { "role" : "clusterAdmin", "db" : "admin" } ]
}
)

Note : To enforce the keyfile without downtime in a replica set, step down the primary and restart the servers with the option below

transitionToAuth: true
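
A minimal configuration sketch for the transition phase (the keyfile path is an assumption):

security:
  keyFile: /etc/mongodb/keyfile
  transitionToAuth: true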

Enable Auditing

This feature is available in the MongoDB Enterprise version. Specify the following options in the mongod.conf file:

storage:
   dbPath: data/db
auditLog:
   destination: file
   format: JSON
   path: data/db/auditLog.json

Check Replicaset health status

rs.status()
rs.printReplicationInfo()
db.getReplicationInfo()

Add a member to replicaset

rs.add("<hostname:port>")

Add an arbiter to replicaset

rs.addArb("<hostname:port>")

Adjust priority of replicaset member

// Copy the replica set configuration to a variable

cfg = rs.conf()

// Change each member’s priority value

cfg.members[0].priority = 0.5
cfg.members[1].priority = 2
cfg.members[2].priority = 2

// Assign the replica set the new configuration

rs.reconfig(cfg)

To prevent a secondary from becoming primary, set its priority to 0

cfg.members[2].priority = 0

Configure a hidden replicaset member

cfg = rs.conf()
cfg.members[0].priority = 0
cfg.members[0].hidden = true
rs.reconfig(cfg)

Configure a delayed replicaset member

cfg = rs.conf()
cfg.members[0].priority = 0
cfg.members[0].hidden = true
cfg.members[0].slaveDelay = 3600
rs.reconfig(cfg)

Configure a non-voting  replicaset member

cfg = rs.conf()
cfg.members[3].votes = 0;
cfg.members[3].priority = 0;
rs.reconfig(cfg);

Change the size of Oplog

// Verify the current size of the oplog

use local
db.oplog.rs.stats().maxSize

// Change the oplog size of the replica set member

db.adminCommand({replSetResizeOplog: 1, size: 16000})

// Compact oplog.rs to reclaim disk space ( optional)
// Do not run compact against the primary replica set member.

use local
db.runCommand({ "compact" : "oplog.rs" } )

Force a Member to be Primary Using Database Commands

Consider a replica set with the following members:

mdb0.example.net – the current primary.
mdb1.example.net – a secondary.
mdb2.example.net – a secondary.

To force a member to become primary use the following procedure:

In a mongo shell, run rs.status() to ensure your replica set is running as expected.

// In a mongo shell connected to the mongod instance running on mdb2.example.net, freeze mdb2.example.net so that it does not attempt to become primary for 120 seconds.

> rs.freeze(120)

// In a mongo shell connected to the mongod running on mdb0.example.net, step down this instance so that the mongod is not eligible to become primary for 120 seconds:

> rs.stepDown(120)

mdb1.example.net becomes primary.

Configure a Secondary’s Sync Target

If an initial sync operation is in progress when you run replSetSyncFrom/rs.syncFrom(), replSetSyncFrom/rs.syncFrom() stops the in-progress initial sync and restarts the sync process with the new target.

replSetSyncFrom/rs.syncFrom() provide a temporary override of default behavior. mongod will revert to the default sync behavior in the following situations:

  • The mongod instance restarts.
  • The connection between the mongod and the sync target closes.
  • The sync target falls more than 30 seconds behind another member of the replica set.

db.adminCommand( { replSetSyncFrom: "hostname<:port>" } );

Check the Replication Lag

rs.printSlaveReplicationInfo()

Confirm whether the current instance is master

db.isMaster()

Allow read operations on secondary  replicaset nodes

rs.slaveOk()

Add shards to cluster

sh.addShard( "<replSetName>/s1-mongo1.example.net:27018")

Enable sharding for database

sh.enableSharding("<database>")

Shard a Collection using Hashed Sharding

sh.shardCollection("<database>.<collection>", { <shard key> : "hashed" } )

Shard a Collection using Ranged Sharding

If the collection already contains data, we must create an index on the shard key using the db.collection.createIndex() method before using shardCollection().

sh.shardCollection("<database>.<collection>", { <shard key> : <direction> } )
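
For a collection that already contains data, a minimal sketch of the two steps (the database, collection, and shard key names are assumptions):

db.orders.createIndex( { zipcode: 1 } )
sh.shardCollection( "test.orders", { zipcode: 1 } )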

Managing shard zones

// Add Shards to a Zone

sh.addShardTag("shard0000", "NYC")
sh.addShardTag("shard0001", "NYC")

// You may remove a zone from a particular shard 

sh.removeShardTag("shard0002", "NRT")

// Create a Zone Range

sh.addTagRange("records.users", { zipcode: "10001" }, { zipcode: "10281" }, "NYC")
sh.addTagRange("records.users", { zipcode: "11201" }, { zipcode: "11240" }, "NYC")

// Remove a zone range 

sh.removeRangeFromZone("records.users", {zipcode: "10001"}, {zipcode: "10281"})

// view existing zones 
// return all shards with the NYC zone.

use config
db.shards.find({ tags: "NYC" })

//  return any range associated to the NYC zone

use config
db.tags.find({ tag: "NYC" })

Modify chunk size in sharded cluster

use config
db.settings.save( { _id:"chunksize", value: <sizeInMB> } )

Check the Balancer State

sh.getBalancerState()

Enable Balancer

sh.setBalancerState(true)

Check if balancer is running

sh.isBalancerRunning()

Schedule balancing window

> use config
> sh.setBalancerState( true )
> db.settings.update(
   { _id: "balancer" },
   { $set: { activeWindow : { start : "<start-time>", stop : "<stop-time>" } } },
   { upsert: true }
)

Remove balancing window

use config
db.settings.update({ _id : "balancer" }, { $unset : { activeWindow : true } })

Disable balancer

sh.stopBalancer()

sh.getBalancerState()

// To verify no migrations are in progress after disabling, issue the following operation in the mongo shell:

use config
while( sh.isBalancerRunning() ) {
          print("waiting...");
          sleep(1000);
}

Disable balancing on a collection

sh.disableBalancing("students.grades")

Enable balancing on a collection

sh.enableBalancing("students.grades")

Confirm balancing is enabled or disabled

db.getSiblingDB("config").collections.findOne({_id : "students.grades"}).noBalance;

Migrate Chunks in sharded cluster

username is the shard key for the users collection in the myapp database (the second example below assumes email as the shard key)

Migrate single chunk

db.adminCommand( { moveChunk : "myapp.users",
                   find : {username : "smith"},
                   to : "mongodb-shard3.example.net" } )

Evenly migrate chunks

var shServer = [ "sh0.example.net", "sh1.example.net", "sh2.example.net", "sh3.example.net", "sh4.example.net" ];
for ( var x=97; x<97+26; x++ ){
  for( var y=97; y<97+26; y+=6 ) {
    var prefix = String.fromCharCode(x) + String.fromCharCode(y);
    db.adminCommand({moveChunk : "myapp.users", find : {email : prefix}, to : shServer[(y-97)/6]})
  }
}

List databases with sharding enabled

use config
db.databases.find( { "partitioned": true } )

List shards

db.adminCommand( { listShards : 1 } )

View cluster details

db.printShardingStatus()
sh.status()

Remove a shard from the cluster

db.adminCommand( { removeShard: "mongodb0" } )

End Sessions in mongodb

db.runCommand( { endSessions: [ { id : <UUID> }, ... ] } )

Kill all sessions for specific users

db.runCommand( { killAllSessions: [ { user: "appReader", db: "db1" }, { user: "appReader", db: "db2" } ] } )

Build Info

db.runCommand( { buildInfo: 1 } )

Connection pool status

The command connPoolStats returns information regarding the open outgoing connections from the current database instance to other members of the sharded cluster or replica set.

db.runCommand( { "connPoolStats" : 1 } )

Connection status

db.runCommand( { connectionStatus: 1, showPrivileges: true } )

getlog

Retrieve Available Log Filters

db.adminCommand( { getLog: "*" } )

Retrieve Recent Events from Log

db.adminCommand( { getLog : "global" } )

Hash Values for All Collections in a Database

use test
db.runCommand( { dbHash: 1 } )

Find the port mongod is listening on

sudo lsof -iTCP -sTCP:LISTEN | grep mongo

top

db.adminCommand("top")

Free monitoring status

db.adminCommand( { getFreeMonitoringStatus: 1 } )                // from version 4.0

Set free monitoring

db.adminCommand( { setFreeMonitoring: 1, action: "<enable|disable>" } )          // from version 4.0

Check for disk I/O bottlenecks

iostat -xmt 1

Update the SELinux policy to allow the mongod service to use the new directory

 // Update the SELinux policy to allow the mongod service to use the new directory:

$ semanage fcontext -a -t <type> </some/MongoDB/directory.*>

// Specify one of the following types as appropriate:
mongod_var_lib_t for data directory
mongod_log_t for log file directory
mongod_var_run_t for pid file directory

// Update the SELinux user policy for the new directory:

$ chcon -Rv -u system_u -t <type> </some/MongoDB/directory>

// Apply the updated SELinux policies to the directory:

$ restorecon -R -v </some/MongoDB/directory>

// Example 
semanage fcontext -a -t mongod_var_lib_t '/mongodb/data.*'
chcon -Rv -u system_u -t mongod_var_lib_t '/mongodb/data'
restorecon -R -v '/mongodb/data'

// Non Default mongodb ports 

semanage port -a -t mongod_port_t -p tcp <portnumber>

Production Notes :

Concurrency : WiredTiger supports concurrent access by readers and writers to the documents in a collection.
Data Consistency : Journaling. MongoDB uses write-ahead logging to an on-disk journal.
Manage Connection Pool Sizes : The connPoolStats command returns information regarding the open outgoing connections from the current database instance to other members of the sharded cluster or replica set.
Allocate Sufficient RAM and CPU :

WiredTiger :

  • Throughput increases as the number of concurrent active operations increases up to the number of CPUs.
  • Throughput decreases as the number of concurrent active operations exceeds the number of CPUs by some threshold amount.
  • The threshold depends on your application. You can determine the optimum number of concurrent active operations for your application by experimenting and measuring throughput. The output from mongostat provides statistics on the number of active reads/writes in the (ar|aw) column.
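
A quick way to watch these counters is mongostat, assuming the tool is installed and the instance is listening on the default port; the ar|aw column shows the number of active readers and writers:

$ mongostat 1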

WiredTiger internal cache : the larger of 50% of (RAM – 1 GB) or 256 MB

For example, on a system with a total of 4GB of RAM the WiredTiger cache will use 1.5GB of RAM (0.5 * (4 GB – 1 GB) = 1.5 GB)

Data in the filesystem cache is the same as the on-disk format, including benefits of any compression for data files. The filesystem cache is used by the operating system to reduce disk I/O.

storage.wiredTiger Options

storage:
   wiredTiger:
      engineConfig:
         cacheSizeGB: <number>
         journalCompressor: <string>
         directoryForIndexes: <boolean>
      collectionConfig:
         blockCompressor: <string>
      indexConfig:
         prefixCompression: <boolean>

Use Solid State Disks (SSDs)

Configuring NUMA (Non-Uniform Memory Access) on Linux

$ echo 0 | sudo tee /proc/sys/vm/zone_reclaim_mode
$ sudo sysctl -w vm.zone_reclaim_mode=0

Swap : Assign swap space for your systems. Allocating swap space can avoid issues with memory contention and can prevent the OOM Killer on Linux systems from killing mongod.
RAID : For optimal performance at the storage layer, use disks backed by RAID-10.

Compression : WiredTiger can compress collection data using either the snappy or zlib compression library. snappy provides a lower compression rate but has little performance cost, whereas zlib provides a better compression rate at a higher performance cost.
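
A minimal configuration sketch switching collection data compression to zlib (shown here as an illustrative choice; snappy is the default):

storage:
   wiredTiger:
      collectionConfig:
         blockCompressor: zlib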

Swappiness : "Swappiness" is a Linux kernel setting that influences the behavior of the Virtual Memory manager when it needs to allocate swap, ranging from 0 to 100.

$ cat /proc/sys/vm/swappiness
$ sysctl vm.swappiness=1

ulimit : Set the file descriptor limit (-n) and the user process limit (-u) above 20,000, according to the suggestions in the ulimit reference.

$ ulimit -a
$ ulimit -n <value>

TCP idle timeout : You should set tcp_keepalive_time to 120 seconds

$ sysctl net.ipv4.tcp_keepalive_time

$ cat /proc/sys/net/ipv4/tcp_keepalive_time

$ sudo sysctl -w net.ipv4.tcp_keepalive_time=<value>
$ echo <value> | sudo tee /proc/sys/net/ipv4/tcp_keepalive_time

// These operations do not persist across system reboots. To persist the setting, add the following line to /etc/sysctl.conf:
net.ipv4.tcp_keepalive_time = <value>

 
