How to upgrade your MongoDB deployment to version 3.0
It is time to upgrade our MongoDB installation, but do we know how to do it? In this post we will explain how to do it for every kind of deployment: a standalone node, a Replica Set or a Sharded Cluster. We will focus on upgrading to version 3.0.
Release and Upgrade Notes
Before anything else we must always read the Release Notes and the Upgrade Notes, especially between major releases. These are the official documentation URLs:
- mongodb release notes 3.0 upgrade
- mongodb release notes 3.0 upgrade replica set
- mongodb release notes 3.0 upgrade cluster
Regardless of what kind of installation we have, we will always need the binaries of the new version. We can download them from this url:
We will obtain a compressed file that we will need to decompress to get those binaries.
As always, we will choose the most appropriate version, picking between:
| OS       | 64 bit | 32 bit |
|----------|--------|--------|
| Mac OS X | √      |        |
It is necessary to have version 2.6 installed in order to upgrade to 3.0.
Once 3.0 is installed we will not be able to downgrade to a version earlier than 2.6.5.
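Before starting, we can confirm that we are indeed on the 2.6 series. A minimal check, assuming the binaries are on the PATH:

```shell
# Version of the installed server binary
mongod --version

# Or, from a mongo shell connected to the running instance:
#   > db.version()
```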
If we installed MongoDB from the apt, yum or zypper repositories, we will upgrade using the package manager. We can read the instructions at this url: installation instructions.
Upgrading a standalone installation
These are the steps we must follow:
- Shut down the MongoDB instance.
- Replace the existing binary with the 3.0 mongod binary.
- Restart mongod.
For this purpose we can use one of the following commands:

```
sudo service mongod stop
mongod --dbpath <dbpath> --shutdown
```

We will restart the mongod instance with one of these:

```
sudo service mongod restart
mongod --dbpath <dbpath> <remaining necessary options>
```
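Putting the whole standalone procedure together, a possible sketch (the download URL, file names and paths are examples; adjust them to your platform and installation):

```shell
# 1. Stop the running instance
sudo service mongod stop

# 2. Download and unpack the 3.0 binaries (example URL and paths)
curl -O https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.0.0.tgz
tar -xzf mongodb-linux-x86_64-3.0.0.tgz

# 3. Replace the old binaries with the new ones
sudo cp mongodb-linux-x86_64-3.0.0/bin/* /usr/local/bin/

# 4. Restart mongod with its original options
sudo service mongod restart
```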
Upgrading a Replica Set
When carrying out maintenance work (such as a version upgrade) on the nodes of a Replica Set, we have two goals:
- Do not run any risk.
- Do not interrupt the service at any time.
- The Replica Set must have a minimum of three nodes, with two full secondaries, so that we run no risk and keep two copies of the data at all times. This would not be possible with, for example, one primary, one secondary and an arbiter, because we would have only one copy of the data (the primary's) while upgrading.
- We do not want to lose any operation while we are upgrading, so the oplog window must be big enough. The oplog is a capped collection in which MongoDB stores all the write activity on our data. We will use this log to catch up the node that has just been upgraded once it rejoins the Replica Set. We will not lose any operation as long as the time needed to upgrade is shorter than the time span the oplog can hold. We can check the size of our oplog window using this command:
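One way to check the oplog window is `rs.printReplicationInfo()`, run from a mongo shell connected to a replica set member; it reports the configured oplog size and the time span it currently covers (port is an example):

```shell
# Prints the configured oplog size and the "log length start to end",
# which is the replication window available for the upgrade
mongo --port 27017 --eval "rs.printReplicationInfo()"
```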
Steps to follow with secondary nodes
Logically, in order to keep two copies of our data at all times, the Replica Set's secondaries must be upgraded one by one.
- Shut down the mongod instance. The Replica Set keeps working with the primary and one secondary (both of them will keep pinging the node that is down to check its state).
- Replace the 2.6 binary with 3.0 binary.
- Start the instance with the same options it had.
- We must wait for the upgraded node to catch up before moving on to another secondary. By looking at the optimeDate value returned by the rs.status() command we can tell whether replication has caught up (it must be equal for all the Replica Set members).
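The per-secondary cycle above can be sketched like this (ports, paths and file names are illustrative):

```shell
# Repeat for each secondary, one at a time:

# 1. Clean shutdown of the secondary
mongo --port 27018 --eval "db.getSiblingDB('admin').shutdownServer()"

# 2. Replace the 2.6 binary with the 3.0 one (example path)
sudo cp mongodb-linux-x86_64-3.0.0/bin/mongod /usr/local/bin/mongod

# 3. Start it again with the same options it had (example config file)
mongod --config /etc/mongod-secondary.conf

# 4. Check optimeDate for every member; move on only when they match
mongo --port 27017 --eval \
  "rs.status().members.forEach(function(m){ print(m.name, m.stateStr, m.optimeDate); })"
```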
Steps to follow with the primary node
- We close the existing driver connections, convert our primary into a secondary and force an election. We achieve all of this with the rs.stepDown() command (it steps down the primary and forces the set to fail over). The drivers will automatically establish connections to the new primary node with only a small time penalty. Remember that a Replica Set failover is not instantaneous and, until it completes, writes are not accepted.
```
a:PRIMARY> rs.stepDown()
2014-12-10T00:39:07.094+0100 I NETWORK  DBClientCursor::init call() failed
2014-12-10T00:39:07.099+0100 I QUERY    Error: error doing query: failed
    at DBQuery._exec (src/mongo/shell/query.js:83:36)
    at DBQuery.hasNext (src/mongo/shell/query.js:240:10)
    at DBCollection.findOne (src/mongo/shell/collection.js:185:19)
    at DB.runCommand (src/mongo/shell/db.js:58:41)
    at DB.adminCommand (src/mongo/shell/db.js:66:41)
    at Function.rs.stepDown (src/mongo/shell/utils.js:998:43)
    at (shell):1:4 at src/mongo/shell/query.js:83
2014-12-10T00:39:07.101+0100 I NETWORK  trying reconnect to 127.0.0.1:27000 (127.0.0.1) failed
2014-12-10T00:39:07.101+0100 I NETWORK  reconnect 127.0.0.1:27000 (127.0.0.1) ok
a:SECONDARY>
```
- Now, all we need to do is apply the same four steps we followed for the secondaries.
Upgrading a Sharded Cluster
All members of a cluster must be running version 2.6 before the cluster can be upgraded to version 3.0.
A Sharded Cluster is made up of Replica Sets but it also has:
- Config Servers (they keep the database that tells us in which shard our data is stored)
- mongos processes (they route client requests to the shard returned by the config servers)
- Please make sure you have a backup of the ‘config’ database before upgrading the cluster.
- While we are upgrading the cluster, we must make sure that no client is updating the metadata (the config database).
- First we will upgrade the cluster’s metadata, then the mongos processes and, finally, the mongod processes.
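To take the recommended backup of the ‘config’ database, one option is to dump it with `mongodump` from one of the config servers (host name, port and output path are examples):

```shell
# Dump only the 'config' database from one config server
mongodump --host cfg1.example.net --port 27019 --db config --out /backups/config-pre-3.0
```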
Steps to follow
- Disable the balancer (if there are operations in progress MongoDB will wait until they are finished).
12345mongos> sh.stopBalancer()Waiting for active hosts...Waiting for the balancer lock...Waiting again for active hosts after balancer is off...mongos>
- Upgrade the cluster’s metadata.
- Upgrade one mongos to version 3.0.
- Start this mongos with the same options it had plus a new one, --upgrade (the process will exit when the upgrade is over). MongoDB will not perform splits or chunk moves while this is in progress. This message will be printed when the metadata upgrade has finished successfully:
```
upgrade of config server to v6 successful
Config database is at version v6
```
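The metadata upgrade step therefore looks something like this (host names are examples; the `--configdb` string must be the same one the mongos already used):

```shell
# Start one 3.0 mongos with its original --configdb string plus --upgrade;
# the process exits once the metadata upgrade finishes
mongos --configdb cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019 --upgrade
```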
- Upgrade the remaining mongos instances to version 3.0.
- Start the three mongos without the --upgrade option.
- Upgrade the three config servers, one by one, as if they were standalone mongod nodes (shut down, upgrade and start). Remember that in a production environment it is recommended to have three config servers. They keep, in a proprietary database, key information about the data contained in each shard; therefore, it is highly recommended to make a backup before upgrading them. All of them store the same information, so you only have to stop one of them and take the backup from it. This is possible because, if one of them is down, the metadata automatically becomes read-only.
- Upgrade all shard secondary nodes. The secondaries of a shard must be upgraded one by one; however, secondaries belonging to distinct shards can be upgraded at the same time.
- Upgrade, one by one, all shard primary nodes. It is not recommended to upgrade them all at once, because the mongos processes will route all writes to the new primaries and the system will be busy establishing new connections. Remember that a stepDown on a primary destroys all existing connections, and new ones must be established.
- Re-enable the balancer.
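Disabling and re-enabling the balancer is done from a shell connected to a mongos; a minimal sketch (host name is an example):

```shell
# Before the upgrade: stop the balancer and verify it is off
mongo --host mongos1.example.net --eval "sh.stopBalancer(); print(sh.getBalancerState())"

# After the upgrade: re-enable it
mongo --host mongos1.example.net --eval "sh.setBalancerState(true)"
```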
If we try to upgrade the ‘config’ database before stopping the balancer, MongoDB will return an error.
Both mongodbspain.com and I disclaim all responsibility for any problems that may arise when updating your MongoDB deployments. This article is written for informational and educational purposes only. Naturally, you must always read the official MongoDB sources.