MongoDB Database Administration Commands

This article lists the commands used for MongoDB administration. Please feel free to suggest and contribute more commands.


Log in to a MongoDB instance

# mongo -u <username> -p <password> --authenticationDatabase <dbname>

Authenticate and log out from a database

use admin
db.auth("<username>", "<password>")

// Logout
db.logout()

Show all databases

show dbs

Define a database name

use <databasename>

This command switches from the default database to the specified database (even a non-existent database name will work). However, the database is not actually created until you save a collection inside it.

Display the current database name

db

Create a collection

db.createCollection("<collectionName>")

List down collections

show collections

Insert document in a collection

// Insert single document
db.<collectionName>.insert({field1: "value", field2: "value"})
// Insert multiple documents
db.<collectionName>.insert([{field1: "value1"}, {field1: "value2"}])
db.<collectionName>.insertMany([{field1: "value1"}, {field1: "value2"}])

Drop a collection

db.<collectionName>.drop()

Remove documents from a collection

db.<collectionName>.remove({<query>})


Drop a database

use <database>
db.dropDatabase()

Display collection records

Retrieve all records

db.<collectionName>.find()

Retrieve a limited number of records; the following command prints 10 results:

db.<collectionName>.find().limit(10)

Retrieve records by id

db.<collectionName>.find({"_id": ObjectId("someid")});

Retrieve specific attributes by passing a projection object whose attribute names are set to 1 (include that attribute in the output) or 0 (exclude it).

db.<collectionName>.find({"_id": ObjectId("someid")}, {field1: 1, field2: 1});
db.<collectionName>.find({"_id": ObjectId("someid")}, {field1: 0}); // Exclude field1
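As an illustration of these projection semantics, the sketch below mimics inclusion/exclusion for top-level fields (illustration only, not MongoDB's implementation; the `project` helper is hypothetical):

```javascript
// Hypothetical helper mimicking MongoDB's top-level projection semantics.
function project(doc, spec) {
  // Inclusion mode if any non-_id field is set to 1
  const include = Object.keys(spec).some(k => k !== "_id" && spec[k] === 1);
  const out = {};
  for (const key of Object.keys(doc)) {
    if (key === "_id") {
      if (spec._id !== 0) out._id = doc._id; // _id is kept unless explicitly excluded
      continue;
    }
    if (include ? spec[key] === 1 : spec[key] !== 0) out[key] = doc[key];
  }
  return out;
}
```

For example, `project({_id: 1, field1: "a", field2: "b"}, {field1: 1})` keeps only `_id` and `field1`, while the spec `{field1: 0}` keeps everything except `field1`.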

Collection count

db.orders.countDocuments({})                        // from version 4.0

To display the count of documents on all the collections in a database:

use <database>
db.getCollectionNames().forEach(function(collection) { resultCount = db[collection].count(); print("Results count for " + collection + ": "+ resultCount); });

To display the results in a formatted way, use the pretty() method.

db.<collectionName>.find().pretty()

Use the $text query operator to perform text searches on a collection with a text index.

db.<collectionName>.find( { $text: { $search: "java coffee shop" } } )

Search for exact phrases by wrapping them in double-quotes

db.<collectionName>.find( { $text: { $search: "\"coffee shop\"" } } )

Search and find the length of result

db.<collectionName>.runCommand("text", {search: "robot"}).results.length

By prefixing a search word with a minus sign, you can exclude documents containing that word. Let's say we want all documents about "robot" but not "humans".

db.<collectionName>.runCommand("text", {search: "robot -humans"})

Using Comparison Query Operators

db.<collectionName>.find({Employeeid : {$gt:2}})
db.<collectionName>.find( { qty: { $eq: 20 } } )
db.<collectionName>.find({Employeeid : {$gte:2}})
db.<collectionName>.find( { qty: { $in: [ 5, 15 ] } } )
db.<collectionName>.find( { qty: { $ne: 20 } } )

Using Logical Query Operators

db.<collectionName>.find( { $and: [ { price: { $ne: 3} }, { price: { $exists: true } } ] } )
db.<collectionName>.find( { $or: [ { quantity: { $lt: 20 } }, { price: 10 } ] } )
db.<collectionName>.find( { item: { $not: /^p.*/ } } )

Sorting ascending

db.<collectionName>.find().sort({Employeeid: 1})

Sorting descending

db.<collectionName>.find().sort({Employeeid: -1})

Find the last document in a collection (reverse insertion order)

db.<collectionName>.find().sort({$natural: -1}).limit(1)

Return Distinct Values for a Field

db.<collectionName>.distinct( "Field" )

Save or Update Document

db.<collectionName>.update({"Employeeid" : 1},{$set: { "EmployeeName" : "NewMartin"}});

// With upsert: true, the matching document is updated; if no document matches, a new document is created
db.<collectionName>.update(
    { "Employeeid" : 1 },
    { $set : { "EmployeeName" : "NewMartin", "Employeeid" : 22 } },
    { upsert: true }
);

db.<collectionName>.save({"_id": new ObjectId("jhgsdjhgdsf"), field1: "value", field2: "value"});

To update multiple documents with the update() method:

db.<collectionName>.update({"name":"Jon Snow"},{$set:{"name":"Kit Harington"}},{multi:true})

db.<collectionName>.update({location_city:"New York"},{ $set : { location_country: "FUDGE!"}}, {multi: true});

Adding a field to a document

db.<collectionName>.save({_id:1, x:10})
db.<collectionName>.save({_id:1, x:10, y:3})   // re-saving with the new field adds it

replaceOne() replaces the first document in the collection that matches the filter, using the replacement document.

try {
   db.<collectionName>.replaceOne(
      { "name" : "Pizza Rat's Pizzaria" },
      { "_id": 4, "name" : "Pizza Rat's Pizzaria", "Borough" : "Manhattan", "violations" : 8 }
   );
} catch (e) {
   print(e);
}

Delete documents from a collection

Delete All Documents that Match a Condition

db.<collectionName>.deleteMany({ status : "A" })
db.<collectionName>.remove( { status : "P" } )

Remove Only One Document that Matches a Condition

 db.<collectionName>.deleteOne( { status: "D" } )
 db.<collectionName>.remove( { status: "D" }, 1)
 db.<collectionName>.remove( { qty: { $gt: 20 } }, true )

Rename a collection

db.<collectionName>.renameCollection("<newCollectionName>")

Copy all documents from one collection to another

db.<sourceCollection>.find().forEach(function(doc){ db.<targetCollection>.insert(doc); });

Display fields of a collection

var allKeys = {}; db.<collection>.find().forEach(function(doc){Object.keys(doc).forEach(function(key){allKeys[key]=1})}); allKeys;

Return the number of collections in a database

use <database> ;
var collectionCount = 0; db.getCollectionNames().forEach(function(collection) { collectionCount++; }); print("Available collections count: " + collectionCount);

Create an Index

The following example creates an ascending index on the field orderDate.

db.<collectionName>.createIndex( { orderDate: 1 } )

The following example creates a compound index on the orderDate field (in ascending order) and the zipcode field (in descending order).

db.<collectionName>.createIndex( { orderDate: 1, zipcode: -1 } )

Create indexes with collation specified

By specifying a collation strength of 1 or 2, you can create a case-insensitive index. Index with a collation strength of 1 is both diacritic- and case-insensitive.

db.<collectionName>.createIndex(
   { category: 1 },
   { name: "category_fr", collation: { locale: "fr", strength: 2 } }
)

Create a 2d sphere Index 

db.<collectionName>.createIndex( { loc : "2dsphere" } )

We can build a 2d geospatial index with a location range other than the default. Use the min and max options when creating the index.

db.<collectionName>.createIndex( { <location field> : "2d" } ,{ min : <lower bound> , max : <upper bound> } )

Create a Haystack Index

If you have a collection with documents that contain fields similar to the following:
{ _id : 100, pos: { lng : 126.9, lat : 35.2 }, type : "restaurant" }
{ _id : 200, pos: { lng : 127.5, lat : 36.1 }, type : "restaurant" }
{ _id : 300, pos: { lng : 128.0, lat : 36.7 }, type : "national park" }

The following operation creates a haystack index with buckets that store keys within 1 unit of longitude or latitude; a bucketSize of 1 groups location values that are within 1 unit of the specified longitude and latitude.

db.<collectionName>.createIndex( { pos : "geoHaystack", type : 1 } ,  { bucketSize : 1 } )
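To see how a bucketSize of 1 groups coordinates, here is a small sketch (an illustration of the bucketing idea only, not the actual index format; `bucketKey` is a hypothetical helper):

```javascript
// Hypothetical sketch: coordinates that fall in the same 1x1 cell share a bucket.
function bucketKey(lng, lat, bucketSize) {
  return Math.floor(lng / bucketSize) + "," + Math.floor(lat / bucketSize);
}
```

With the sample documents above, the first restaurant (lng 126.9) and the second (lng 127.5) land in different buckets, since their longitudes fall in different 1-unit cells.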

Create multiple indexes with createIndexes()

db.<collectionName>.createIndexes([{"borough": 1}, {"location": "2dsphere"}])

List all indexes in a collection

db.<collectionName>.getIndexes()

List all indexes of a database

db.getCollectionNames().forEach(function(collection) {
   indexes = db[collection].getIndexes();
   print("Indexes for " + collection + ":");
   printjson(indexes);
});

Display count of indexes on each collection in a database

db.getCollectionNames().forEach(function(collection) {print(collection + ": " + db[collection].getIndexes().length);});

Remove specific Index in a collection

The following operation removes an ascending index on the location field

db.<collectionName>.dropIndex( { "location": 1 } )

Remove all indexes in a collection

The following operation removes all indexes in a collection except the _id index

db.<collectionName>.dropIndexes()

Modify an Index 

To modify an existing index, you need to drop and recreate the index.

Create View

Views act as read-only collections, and are computed on demand during read operations.

Create a View from a Single Collection

{ _id: 1, empNumber: "abc123", feedback: { management: 3, environment: 3 }, department: "A" }
{ _id: 2, empNumber: "xyz987", feedback: { management: 2, environment: 3 }, department: "B" }
{ _id: 3, empNumber: "ijk555", feedback: { management: 3, environment: 4 }, department: "A" }

The following operation creates a managementFeedback view with the _id, management, and department fields:

db.createView(
   "managementFeedback",
   "<collectionName>",
   [ { $project: { "management": "$feedback.management", department: 1 } } ]
)

Query a View

> show collections ;

> db.system.views.find()

Create a View from Multiple Collections

The orders Collection

{ "_id" : 1, "item" : "abc", "price" : NumberDecimal("12.00"), "quantity" : 2 }
{ "_id" : 2, "item" : "jkl", "price" : NumberDecimal("20.00"), "quantity" : 1 }
{ "_id" : 3, "item" : "abc", "price" : NumberDecimal("10.95"), "quantity" : 5 }
{ "_id" : 4, "item" : "xyz", "price" : NumberDecimal("5.95"), "quantity" : 5 }
{ "_id" : 5, "item" : "xyz", "price" : NumberDecimal("5.95"), "quantity" : 10 }

The Inventory Collection

{ "_id" : 1, "sku" : "abc", description: "product 1", "instock" : 120 }
{ "_id" : 2, "sku" : "def", description: "product 2", "instock" : 80 }
{ "_id" : 3, "sku" : "ijk", description: "product 3", "instock" : 60 }
{ "_id" : 4, "sku" : "jkl", description: "product 4", "instock" : 70 }
{ "_id" : 5, "sku" : "xyz", description: "product 5", "instock" : 200 }

View by joining two collections

db.createView(
   "<viewName>",
   "orders",
   [
     { $lookup: { from: "inventory", localField: "item", foreignField: "sku", as: "inventory_docs" } },
     { $project: { "inventory_docs._id": 0, "inventory_docs.sku": 0 } }
   ]
)

Create a Capped Collection

Size of capped collection is in bytes

db.createCollection( "<collectionName>", { capped: true, size: 100000 } )
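The fixed-size, overwrite-oldest behavior of a capped collection can be sketched as a ring buffer (illustration only, using a document-count cap rather than bytes; `cappedInsert` is a hypothetical helper):

```javascript
// Hypothetical sketch of capped-collection behavior: once the cap is
// reached, the oldest document is evicted to make room for the newest.
function cappedInsert(buf, doc, maxDocs) {
  buf.push(doc);
  if (buf.length > maxDocs) buf.shift(); // evict the oldest document
  return buf;
}
```

Inserting 1, 2, 3 with a cap of two documents leaves the buffer holding only the two most recent entries, in insertion order.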

Query a Capped Collection

By default, a find query on a capped collection returns results in insertion order. To retrieve the documents in reverse insertion order, use sort with the $natural operator.

db.<collectionName>.find().sort({ $natural: -1 })

Check if a Collection is Capped

db.<collectionName>.isCapped()

Convert a collection to Capped

db.runCommand({"convertToCapped": "<collectionName>", size: 100000});

Size of collection data in bytes

db.<collectionName>.dataSize()

Total amount of storage allocated to the collection for document storage

db.<collectionName>.storageSize()

The total size in bytes of the data in the collection plus the size of every index on the collection

db.<collectionName>.totalSize()

The total size of all indexes for the collection

db.<collectionName>.totalIndexSize()

Explain Plan for Collection

queryPlanner Mode

db.<collectionName>.explain().count( { quantity: { $gt: 50 } } )

executionStats Mode

db.<collectionName>.explain("executionStats").find({ quantity: { $gt: 50 }, category: "apparel" })

allPlansExecution Mode

db.<collectionName>.explain("allPlansExecution").update(
   { quantity: { $lt: 1000 }, category: "apparel" },
   { $set: { reorder: true } }
)

Find the data distribution statistics for a sharded collection

db.<collectionName>.getShardDistribution()

Information regarding the state of data in a sharded cluster

db.printShardingStatus()

Information regarding latency statistics of a collection

db.<collectionName>.latencyStats()

Open a Change Stream

watchCursor = db.getSiblingDB("<database>").<collectionName>.watch()
while (!watchCursor.isExhausted()){
   if (watchCursor.hasNext()){
      printjson(watchCursor.next());
   }
}

Validate a collection

db.<collectionName>.validate()

Print collection statistics

db.<collectionName>.stats()


db.adminCommand runs commands against the admin database regardless of the database context in which it runs.

Kill MongoDB Operations

db.adminCommand( { "killOp": 1, "op": 724 } )

Create a User 

use admin
db.createUser(
  {
    user: "Username",
    pwd: "<password>",
    roles: [
      { role: "dbOwner", db: "admin" }
    ]
  }
)

Identify user’s role and privilege

use <database>
db.getUser("<username>", { showPrivileges: true })

use <database>
db.getRole( "readWrite", { showPrivileges: true } )

Revoke a Role

use <database>
db.revokeRolesFromUser(
    "<username>",
    [
      { role: "readWrite", db: "accounts" }
    ]
)

Grant a Role

use <database>
db.grantRolesToUser(
    "<username>",
    [
      { role: "read", db: "accounts" }
    ]
)

Modify password for existing user

use <database>
db.changeUserPassword("reporting", "SOh3xxxxxxxxxxx")
use <database>
db.updateUser("<username>", {
    pwd: "KNlZmiaNUp0B",
    customData: { title: "Senior Manager" }
})

Drop an existing User

use <database>
db.dropUser("reportUser1", {w: "majority", wtimeout: 5000})

Aggregate Pipeline with $currentOp

The first stage runs the $currentOp operation and the second stage filters the results of that operation.

use admin                                           // from version 3.6 
db.aggregate( [
   { $currentOp : { allUsers: true, idleConnections: true } },
   { $match : { shard: "shard01" } }
] )

Clone Collection

This operation copies the profiles collection from the users database on the source server into the users database on the local server, copying only documents that satisfy the query { 'active' : true }.

db.cloneCollection('', 'profiles', { 'active' : true } )

Find the in-progress operations for the database instance

db.currentOp()

Write Operations Waiting for a Lock

db.currentOp(
   {
     "waitingForLock" : true,
     $or: [
        { "op" : { "$in" : [ "insert", "update", "remove" ] } },
        { "query.findandmodify": { $exists: true } }
     ]
   }
)

Active Operations on a Specific Database

The following example returns information on all active operations for database db1 that have been running longer than 3 seconds:

db.currentOp(
   {
     "active" : true,
     "secs_running" : { "$gt" : 3 },
     "ns" : /^db1\./
   }
)

Active Indexing Operations

db.currentOp(
    {
      $or: [
        { op: "command", "query.createIndexes": { $exists: true } },
        { op: "none", ns: /\.system\.indexes\b/ }
      ]
    }
)


Forces the mongod to flush all pending write operations to disk and locks the entire mongod instance to prevent additional writes until the user releases the lock

db.fsyncLock()

To unlock the instance for writes, run db.fsyncUnlock()

db.fsyncUnlock()

Get Collection Information in a database

Returns an array of documents with collection or view information, such as name and options, for the current database.

use <database>
db.getCollectionInfos()

Find the current log message verbosity

db.getLogComponents()

Test the MongoDB connectivity from the shell

db.runCommand( { ping: 1 } )

Database Profiling

The Database Profiler collects detailed information about operations run against a mongod instance. The profiler’s output can help to identify inefficient queries and operations.

We can enable and configure profiling for individual databases or for all databases on a mongod instance. Profiler settings affect only a single mongod instance and will not propagate across a replica set or sharded cluster.

Display the current profiling level

db.getProfilingLevel()

Display the current profiling level, slowOpThresholdMs setting, and slowOpSampleRate setting

db.getProfilingStatus()

Profiling Levels

Level Description
0 The profiler is off and does not collect any data. This is the default profiler level.
1 The profiler collects data for operations that take longer than the value of slowms.
2 The profiler collects data for all operations.
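The table above can be written out as a small decision function (illustration only; `shouldProfile` is a hypothetical helper, not a MongoDB API):

```javascript
// Hypothetical sketch of the profiler-level decision described in the table.
function shouldProfile(level, opMillis, slowms) {
  if (level === 2) return true;              // level 2: profile all operations
  if (level === 1) return opMillis > slowms; // level 1: only slow operations
  return false;                              // level 0: profiler off
}
```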

Enable Profiling and set profiling level

db.setProfilingLevel(1, { slowms: 20 })

Configuration file settings

operationProfiling:
   mode: slowOp                                   // accepts: off, slowOp, all
   slowOpThresholdMs: 200                         // default: 100
   slowOpSampleRate: 0.5                          // default: 1.0 (range 0 to 1)

Query profiler data

Operations slower than 5 milliseconds

db.system.profile.find( { millis : { $gt : 5 } } ).pretty()

Change Size of system.profile Collection on the Primary

db.createCollection( "system.profile", { capped: true, size:4000000 } )

Display database host information

db.hostInfo()

Display the role of the database instance

db.isMaster()

End the current authentication session

db.logout()

Repair Database

Rebuilds the database and indexes by discarding invalid or corrupt data that may be present due to an unexpected system restart or shutdown.

Repair for WiredTiger

use <database>
db.runCommand( { repairDatabase: 1 } )

// Offline repair using the mongod binary
mongod --repair --repairpath /opt/vol2/data

List the parameters used to compile the MongoDB instance

db.serverBuildInfo()

Server Status

The serverStatus command returns a document that provides an overview of the database’s state.

db.runCommand( { serverStatus: 1 } )

Database instance Uptime

// minutes
db.serverStatus().uptime / 60

// hours
db.serverStatus().uptime / 3600

// days
db.serverStatus().uptime / 86400

Database Connection Status

db.serverStatus( { connections: 1 } )

connections.current : The number of incoming connections from clients to the database server.
connections.available : The number of unused incoming connections available.
connections.totalCreated : Count of all incoming connections created to the server.
connections.active : The number of active client connections to the server.

Verbosity level of log messages

db.getLogComponents()

Database Statistics

db.stats()

Enable Access Control and Enforce Authentication

// Start MongoDB without authentication
$ mongod --port 27017 --dbpath <data-directory>

// Connect to the instance
$ mongo --port 27017

// Create the user administrator.
> use admin
> db.createUser(
  {
    user: "superAdmin",
    pwd: "admin123",
    roles: [ { role: "root", db: "admin" } ]
  }
)

// Re-start the MongoDB instance with access control

// Add the security.authorization setting to the config file

$ sudo vi /etc/mongod.conf
systemLog:
  destination: file
  path: /usr/local/var/log/mongodb/mongo.log
  logAppend: true
storage:
  dbPath: /usr/local/var/mongodb
security:
  authorization: enabled

// Restart mongodb
$ sudo service mongod restart

// Connect to database instance with superAdmin access
$ mongo --port 27017 -u "superAdmin" -p "admin123" --authenticationDatabase "admin"

// Create user access (readWrite) for specific database
> use myAppDb
> db.createUser(
  {
    user: "myAppDbUser",
    pwd: "myApp123",
    roles: [ "readWrite" ]
  }
)

x.509 Certificate Based Authentication ( TLS/SSL Transport Encryption)

Please refer to the link below.

Encryption at Rest using local key file

// Create the base64-encoded keyfile with a 16 or 32 character string.

# mkdir /data/key
# openssl rand -base64 32 > /data/key/mongodb.key
# chmod 600 /data/key/mongodb.key

// Add encryption Variables in mongod.conf

security:
  enableEncryption: true
  encryptionKeyFile: /data/key/mongodb.key

// Start mongod process

# systemctl start mongod

// Verify if the encryption key manager successfully initialized with the keyfile.

> db.serverCmdLineOpts()
{ "enableEncryption" : true, "encryptionKeyFile" : "/data/key/mongodb.key" }

// If the operation was successful, the process will log the following message:

[initandlisten] Encryption key manager initialized with key file: /data/key/mongodb.key

Deploy a new replicaset With Keyfile Access Control

// Create a keyfile

openssl rand -base64 756 > <path-to-keyfile>
chmod 400 <path-to-keyfile>

// Copy the keyfile to each replica set member.
Copy the keyfile to each server hosting the replica set members. Ensure that the user running the mongod instances is the owner of the file and can access the keyfile.

// Start each member of the replica set with access control enabled.

// Configuration file :
security:
  keyFile: <path-to-keyfile>
replication:
  replSetName: <replicaSetName>
net:
  bindIp: localhost,<hostname(s)|ip address(es)>

// Initiate the replica set.
Run rs.initiate() on one and only one mongod instance of the replica set:

rs.initiate(
  {
    _id : <replicaSetName>,
    members: [
      { _id : 0, host : "" },
      { _id : 1, host : "" },
      { _id : 2, host : "" }
    ]
  }
)

// Create the user administrator.

admin = db.getSiblingDB("admin")
admin.createUser(
  {
    user: "fred",
    pwd: "changeme1",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)

// Authenticate as the user administrator.

db.getSiblingDB("admin").auth("fred", "changeme1" )
mongo -u "fred" -p "changeme1" --authenticationDatabase "admin"

// Create the cluster administrator.

db.getSiblingDB("admin").createUser(
  {
    "user" : "ravi",
    "pwd" : "changeme2",
    roles: [ { "role" : "clusterAdmin", "db" : "admin" } ]
  }
)

Note : To enforce keyfile authentication in a replica set without downtime, step down the primary and restart each server with the option below

transitionToAuth: true

Enable Auditing

This feature is available in MongoDB Enterprise. Specify the following options in the mongod.conf file:

storage:
   dbPath: data/db
auditLog:
   destination: file
   format: JSON
   path: data/db/auditLog.json

Check Replicaset health status

rs.status()

Add a member to replicaset

rs.add("<hostname>:<port>")

Add an arbiter to replicaset

rs.addArb("<hostname>:<port>")

Adjust priority of replicaset member

// Copy the replica set configuration to a variable

cfg = rs.conf()

// Change each member’s priority value

cfg.members[0].priority = 0.5
cfg.members[1].priority = 2
cfg.members[2].priority = 2

// Assign the replica set the new configuration

rs.reconfig(cfg)

To prevent a secondary from ever becoming primary, set its priority to 0

cfg.members[2].priority = 0

Configure a hidden replicaset member

cfg = rs.conf()
cfg.members[0].priority = 0
cfg.members[0].hidden = true
rs.reconfig(cfg)

Configure a delayed replicaset member

cfg = rs.conf()
cfg.members[0].priority = 0
cfg.members[0].hidden = true
cfg.members[0].slaveDelay = 3600
rs.reconfig(cfg)

Configure a non-voting replicaset member

cfg = rs.conf()
cfg.members[3].votes = 0;
cfg.members[3].priority = 0;
rs.reconfig(cfg)

Change the size of Oplog

// Verify the current size of the oplog

use local
db.oplog.rs.stats().maxSize

// Change the oplog size of the replica set member

db.adminCommand({replSetResizeOplog: 1, size: 16000})

// Compact to reclaim disk space ( optional)
// Do not run compact against the primary replica set member.

use local
db.runCommand({ "compact" : "oplog.rs" })

Force a Member to be Primary Using Database Commands

Consider a replica set where one member is the current primary and the other two members are secondaries.

To force a member to become primary, use the following procedure:

In a mongo shell, run rs.status() to ensure your replica set is running as expected.

// In a mongo shell connected to the secondary that should NOT become primary, freeze it so that it does not attempt to become primary for 120 seconds.

> rs.freeze(120)

// In a mongo shell connected to the current primary, step down this instance so that it is not eligible to become primary for 120 seconds:

> rs.stepDown(120)

The remaining secondary becomes primary.

Configure a Secondary’s Sync Target

If an initial sync operation is in progress when you run replSetSyncFrom/rs.syncFrom(), replSetSyncFrom/rs.syncFrom() stops the in-progress initial sync and restarts the sync process with the new target.

replSetSyncFrom/rs.syncFrom() provide a temporary override of default behavior. mongod will revert to the default sync behavior in the following situations:

  • The mongod instance restarts.
  • The connection between the mongod and the sync target closes.
  • If the sync target falls more than 30 seconds behind another member of the replica set.
db.adminCommand( { replSetSyncFrom: "hostname<:port>" } );

Check the Replication Lag

rs.printSlaveReplicationInfo()

Confirm whether the current instance is master

db.isMaster()

Allow read operations on secondary replicaset nodes

rs.slaveOk()

Add shards to cluster

sh.addShard( "<replSetName>/")

Enable sharding for database

sh.enableSharding("<database>")

Shard a Collection using Hashed Sharding

sh.shardCollection("<database>.<collection>", { <shard key> : "hashed" } )

Shard a Collection using Ranged Sharding

If the collection already contains data, we must create an index on the shard key using the db.collection.createIndex() method before using shardCollection().

sh.shardCollection("<database>.<collection>", { <shard key> : <direction> } )

Managing shard zones

// Add Shards to a Zone

sh.addShardTag("shard0000", "NYC")
sh.addShardTag("shard0001", "NYC")

// You may remove zone from a particular shard 

sh.removeShardTag("shard0002", "NRT")

// Create a Zone Range

sh.addTagRange("records.users", { zipcode: "10001" }, { zipcode: "10281" }, "NYC")
sh.addTagRange("records.users", { zipcode: "11201" }, { zipcode: "11240" }, "NYC")

// Remove a zone range 

sh.removeRangeFromZone("records.user", {zipcode: "10001"}, {zipcode: "10281"})

// view existing zones 
// return all shards with the NYC zone.

use config
db.shards.find({ tags: "NYC" })

//  return any range associated to the NYC zone

use config
db.tags.find({ tags: "NYC" })

Modify chunk size in sharded cluster

use config
db.settings.save( { _id:"chunksize", value: <sizeInMB> } )

Check the Balancer State

sh.getBalancerState()

Enable Balancer

sh.startBalancer()

Check if balancer is running

sh.isBalancerRunning()

Schedule balancing window

> use config
> sh.setBalancerState( true )
> db.settings.update(
   { _id: "balancer" },
   { $set: { activeWindow : { start : "<start-time>", stop : "<stop-time>" } } },
   { upsert: true }
)

Remove balancing window

use config
db.settings.update({ _id : "balancer" }, { $unset : { activeWindow : true } })

Disable balancer

sh.stopBalancer()

// To verify no migrations are in progress after disabling, issue the following operation in the mongo shell:

use config
while( sh.isBalancerRunning() ) {
   print("waiting for migrations to finish...");
   sleep(1000);
}
Disable balancing on a collection

sh.disableBalancing("students.grades")

Enable balancing on a collection

sh.enableBalancing("students.grades")

Confirm balancing is enabled or disabled

db.getSiblingDB("config").collections.findOne({_id : "students.grades"}).noBalance;

Migrate Chunks in sharded cluster

username is the shard key for collection users in myapp database

Migrate single chunk

db.adminCommand( { moveChunk : "myapp.users",
                   find : {username : "smith"},
                   to : "" } )

Evenly migrate chunks

var shServer = [ "", "", "", "", "" ];
for ( var x=97; x<97+26; x++ ){
  for( var y=97; y<97+26; y+=6 ) {
    var prefix = String.fromCharCode(x) + String.fromCharCode(y);
    db.adminCommand({moveChunk : "myapp.users", find : {email : prefix}, to : shServer[(y-97)/6]});
  }
}
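The prefix and target-shard arithmetic in the loop above can be checked in isolation (a sketch; `chunkAssignments` is a hypothetical helper that only computes the values, without calling moveChunk):

```javascript
// Hypothetical helper reproducing the loop's prefix/target computation.
function chunkAssignments() {
  const out = [];
  for (let x = 97; x < 97 + 26; x++) {       // 'a'..'z'
    for (let y = 97; y < 97 + 26; y += 6) {  // 'a','g','m','s','y'
      out.push({
        prefix: String.fromCharCode(x) + String.fromCharCode(y),
        shardIndex: (y - 97) / 6,            // 0..4, one per entry in shServer
      });
    }
  }
  return out;
}
```

This yields 130 chunks (26 first letters x 5 second letters), spread evenly so each of the five shards receives one chunk per first letter.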

List databases with sharding enabled

use config
db.databases.find( { "partitioned": true } )

List shards

db.adminCommand( { listShards : 1 } )

View cluster details

db.printShardingStatus()  or sh.status()

Remove a shard from the cluster

db.adminCommand( { removeShard: "mongodb0" } )

End Sessions in mongodb

db.runCommand( { endSessions: [ { id : <UUID> }, ... ] } )

Kill all sessions for a specific user

db.runCommand( { killAllSessions: [ { user: "appReader", db: "db1" }, { user: "appReader", db: "db2" } ] } )

Build Info

db.runCommand( { buildInfo: 1 } )

Connection pool status

The command connPoolStats returns information regarding the open outgoing connections from the current database instance to other members of the sharded cluster or replica set.

db.runCommand( { "connPoolStats" : 1 } )

Connection status

db.runCommand( { connectionStatus: 1, showPrivileges: true } )


Retrieve Available Log Filters

db.adminCommand( { getLog: "*" } )

Retrieve Recent Events from Log

db.adminCommand( { getLog : "global" } )

Hash Values for All Collections in a Database

use test
db.runCommand( { dbHash: 1 } )

Find the port mongod is listening on

sudo lsof -iTCP -sTCP:LISTEN | grep mongo



Free monitoring status

db.adminCommand( { getFreeMonitoringStatus: 1 } )                // from version 4.0

Set free monitoring

db.adminCommand( { setFreeMonitoring: 1, action: "<enable|disable>" } )          // from version 4.0

Check Disk I/O bottleneck

iostat -xmt 1

Update the SELinux policy to allow the mongod service to use a new directory

$ semanage fcontext -a -t <type> </some/MongoDB/directory.*>

// where <type> is one of the following, as appropriate:
mongod_var_lib_t for data directory
mongod_log_t for log file directory
mongod_var_run_t for pid file directory

// Update the SELinux user policy for the new directory:

$ chcon -Rv -u system_u -t <type> </some/MongoDB/directory>

// Apply the updated SELinux policies to the directory:

$ restorecon -R -v </some/MongoDB/directory>

// Example 
semanage fcontext -a -t mongod_var_lib_t '/mongodb/data.*'
chcon -Rv -u system_u -t mongod_var_lib_t '/mongodb/data'
restorecon -R -v '/mongodb/data'

// Non Default mongodb ports 

semanage port -a -t mongod_port_t -p tcp <portnumber>

Production Notes :

Concurrency : WiredTiger supports concurrent access by readers and writers to the documents in a collection.
Data Consistency : Journaling: MongoDB uses write-ahead logging to an on-disk journal.
Manage Connection Pool Sizes : The connPoolStats command returns information regarding the open outgoing connections from the current database instance to other members of the sharded cluster or replica set.
Allocate Sufficient RAM and CPU :

WiredTiger :

  • Throughput increases as the number of concurrent active operations increases up to the number of CPUs.
  • Throughput decreases as the number of concurrent active operations exceeds the number of CPUs by some threshold amount.
  • The threshold depends on your application. You can determine the optimum number of concurrent active operations for your application by experimenting and measuring throughput. The output from mongostat provides statistics on the number of active reads/writes in the (ar|aw) column.

WiredTiger internal cache : the larger of 50% of (RAM - 1 GB) or 256 MB

For example, on a system with a total of 4GB of RAM the WiredTiger cache will use 1.5GB of RAM (0.5 * (4 GB – 1 GB) = 1.5 GB)
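This default sizing rule can be written out as a small helper (a sketch only; `wiredTigerCacheGB` is hypothetical):

```javascript
// Default WiredTiger internal cache: the larger of 50% of (RAM - 1 GB) and 256 MB.
function wiredTigerCacheGB(ramGB) {
  return Math.max(0.5 * (ramGB - 1), 0.25); // 0.25 GB == 256 MB
}
```

For the 4 GB example above this returns 1.5 GB; on a 1 GB box the 256 MB floor applies.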

Data in the filesystem cache is the same as the on-disk format, including benefits of any compression for data files. The filesystem cache is used by the operating system to reduce disk I/O.

storage.wiredTiger Options

storage:
   wiredTiger:
      engineConfig:
         cacheSizeGB: <number>
         journalCompressor: <string>
         directoryForIndexes: <boolean>
      collectionConfig:
         blockCompressor: <string>
      indexConfig:
         prefixCompression: <boolean>

Use Solid State Disks (SSDs)

Configuring NUMA (Non-Uniform Memory Access) on Linux

$ echo 0 | sudo tee /proc/sys/vm/zone_reclaim_mode
$ sudo sysctl -w vm.zone_reclaim_mode=0

Swap : Assign swap space for your systems. Allocating swap space can avoid issues with memory contention and can prevent the OOM Killer on Linux systems from killing mongod.
RAID : For optimal performance in terms of the storage layer, use disks backed by RAID-10

Compression : WiredTiger can compress collection data using either the snappy or zlib compression library. snappy provides a lower compression rate but has little performance cost, whereas zlib provides a better compression rate but has a higher performance cost.

Swappiness : “Swappiness” is a Linux kernel setting that influences the behavior of the Virtual Memory manager when it needs to allocate a swap, ranging from 0 to 100

$ cat /proc/sys/vm/swappiness
$ sudo sysctl -w vm.swappiness=1

ulimit : Set the file descriptor limit, -n, and the user process limit (ulimit), -u, above 20,000, according to the suggestions in the ulimit reference.

$ ulimit -a
$ ulimit -n <value>

TCP idle timeout : You should set tcp_keepalive_time to 120 seconds

$ sysctl net.ipv4.tcp_keepalive_time

$ cat /proc/sys/net/ipv4/tcp_keepalive_time

$ sudo sysctl -w net.ipv4.tcp_keepalive_time=<value>
$ echo <value> | sudo tee /proc/sys/net/ipv4/tcp_keepalive_time

// These operations do not persist across system reboots. To persist the setting, add the following line to /etc/sysctl.conf:
net.ipv4.tcp_keepalive_time = <value>

Back Up and Restoration

mongodump for database and collection

$ mongodump --host XXXXXXXXXXXXXX -d identity -c testcoll --port 27017 --username dbadXXXX --authenticationDatabase admin  --out /opt/application/testbackup/

mongodump using --gzip option

$ mongodump --host XXXXXXXXXXXXXX --port 27017 --username dbadXXXX --authenticationDatabase admin --gzip -d identity --out /opt/application/testbackup/

mongodump of oplog

$ mongodump -u admin -p XXXXXX --authenticationDatabase admin -d local -c oplog.rs -o oplogdump

mongorestore of database

$ mongorestore --host XXXXXXXXXXXXXX  --port 27017 --username dbadXXXX --authenticationDatabase admin --drop -d identity --verbose /opt/application/testbackup/

Restore Collections Using Wild Cards

$ mongorestore --nsInclude='transactions.*' --nsExclude='transactions.*_dev' dump/

Exclude Index restore

$ mongorestore --db=test --collection=purchaseorders --noIndexRestore dump/test/purchaseorders.bson

mongorestore using --gzip option

$ mongorestore --host XXXXXXXXXXXXXX --port 27017 --username dbadXXXX --authenticationDatabase admin --gzip -d identity /opt/application/testbackup/

Export a Collection to a JSON File

$ mongoexport --db music --collection artists --out /data/dump/music/artists.json

Export a Collection to a CSV File

$ mongoexport --db music --collection artists --type=csv --fields _id,artistname --out /data/dump/music/artists.csv

Export the results of a Query

$ mongoexport --db music --collection artists --query '{"artistname": "Miles Davis"}' --out /data/dump/music/miles_davis.json

Limit the number of documents in the export

$ mongoexport --db music --collection artists --limit 3 --out /data/dump/music/3_artists.json

Import JSON File

$ mongoimport --db music --file /data/dump/music/artists.json

Import CSV File

$ mongoimport --db music --collection catalog --type csv --headerline --file /data/dump/music/catalog.csv

Import CSV File Without Header Row

$ mongoimport --db music --collection producers --type csv --fields name,born --file /data/dump/music/producers.csv

