Linux version: RHEL 6.3
MongoDB version: 2.6
Components of a sharded cluster:-
Every sharded cluster has three main components:
Shards: These are the actual places where the data is stored. Each of the shards can be a
single mongod instance or a replica set.
Config Servers: The config server holds the metadata about the cluster. It is in charge of
keeping track of which shard holds each piece of data.
Query Routers: The query routers are the point of interaction between the clients and the
shards. The query routers use information from the config servers to retrieve the data from
the shards.
For development purposes I am going to use three mongod instances as shards, exactly one
mongod instance as config server and one mongos instance as the query router.
It is important to remember that, due to MongoDB restrictions, the number of config servers
needs to be either one or three. In a production environment you need three to guarantee
redundancy, but for a development environment one is enough.
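For reference, these are the instances and ports that will be used throughout this guide:
*mongod1 (shard) - localhost:47018
*mongod2 (shard) - localhost:48018
*mongod3 (shard) - localhost:49018
*mongoc (config server) - localhost:47019
*mongos (query router) - localhost:47017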
*Install MongoDB
Step 1: Log in to the system as the root user and check the OS type and architecture.
# uname -a
# cat /etc/issue
Step 2: Create a yum repo file, e.g. /etc/yum.repos.d/mongodb.repo
# vi /etc/yum.repos.d/mongodb.repo
[mongodb]
name=mongodb Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0
enabled=1
:wq
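To confirm that yum picks up the new repository, you can optionally run:
# yum repolist enabled | grep -i mongodb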
Step 3: Install the MongoDB client and server using yum.
# yum install mongo-*
If you face any package errors, do the following:
# yum erase mongo*
# yum shell
> install mongodb-org
> remove mongo-10gen
> remove mongo-10gen-server
> run
Step 4: Configure basic settings for the MongoDB database server.
# vi /etc/mongod.conf
logappend=true
logpath=/var/log/mongodb/mongod.log
dbpath=/var/lib/mongo
smallfiles=true
:wq
Step 5: Start the MongoDB server.
# /etc/init.d/mongod start
# chkconfig mongod on
Open another terminal and type:
# mongo
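If the service started correctly, the mongo shell connects to the default mongod instance on port 27017. A quick sanity check from inside the shell:
> db.version()
> exit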
Step 6. Create the directory structure shown in the tree below.
/u01/mongocluster/
    mongod1/
        logs/
        data/
    mongod2/
        logs/
        data/
    mongod3/
        logs/
        data/
    mongoc/
        logs/
        data/
    mongos/
        logs/
        data/
Here, the mongod1, mongod2 and mongod3 folders will be used for the shards, mongoc for the
config server and mongos for the query router.
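A quick way to create this whole layout in a single command (assuming a bash shell with brace expansion):
[root@server1 ]# mkdir -p /u01/mongocluster/{mongod1,mongod2,mongod3,mongoc,mongos}/{logs,data}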
Once the above directory structure has been created, give it the proper permissions:
[root@server1 ]# chown -Rf mongod:mongod /u01/mongocluster/
[root@server1 ]# chmod -Rf 775 /u01/mongocluster/
Step 7.
Shards configuration:-
We are going to create a mongodN.conf file inside each of the mongodN folders, replacing N with
the corresponding shard number. It is also important to assign a different port to each of the
shards; of course, these ports have to be available on the host.
[root@server1 ]# cd /u01/mongocluster/mongod1
[root@server1 mongod1]# vi mongod1.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongod1/logs/mongod1.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongod1/mongod1.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 47018
storage:
  dbPath: "/u01/mongocluster/mongod1/data"
  directoryPerDB: true
sharding:
  clusterRole: shardsvr
operationProfiling:
  mode: all
:wq
[root@server1 ]# cd /u01/mongocluster/mongod2
[root@server1 mongod2]#vi mongod2.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongod2/logs/mongod2.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongod2/mongod2.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 48018
storage:
  dbPath: "/u01/mongocluster/mongod2/data"
  directoryPerDB: true
sharding:
  clusterRole: shardsvr
operationProfiling:
  mode: all
:wq
[root@server1 ]# cd /u01/mongocluster/mongod3/
[root@server1 mongod3]#vi mongod3.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongod3/logs/mongod3.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongod3/mongod3.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 49018
storage:
  dbPath: "/u01/mongocluster/mongod3/data"
  directoryPerDB: true
sharding:
  clusterRole: shardsvr
operationProfiling:
  mode: all
:wq
The important things to notice here are:
*dbPath under the storage section must point to a different directory for each shard; if two
of the shards point to the same data directory, you will have issues with the files mongod
creates for normal operation.
*sharding.clusterRole is the essential part of this configuration: it indicates that the
mongod instance is part of a sharded cluster and that its role is to be a data shard.
Step 8.
Config server configuration
[root@server1 ]#vi /u01/mongocluster/mongoc/mongoc.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongoc/logs/mongoc.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongoc/mongoc.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 47019
storage:
  dbPath: "/u01/mongocluster/mongoc/data"
  directoryPerDB: true
sharding:
  clusterRole: configsvr
operationProfiling:
  mode: "all"
:wq
Step 9.
Query router (Mongos)
The configuration of the query router is pretty simple. The important part is the
sharding.configDB value. The value needs to be a string containing the config server's
location in the form <host>:<port>.
If you have a three-config-server cluster you need to put the locations of the three config
servers, separated by commas, in the string.
Important: if you have more than one query router, make sure you use exactly the same string
for sharding.configDB in every query router.
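For example, with three config servers (hypothetical hostnames) the value would look like this:
configDB: "cfg1.example.com:47019,cfg2.example.com:47019,cfg3.example.com:47019"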
[root@server1 ]#vi /u01/mongocluster/mongos/mongos.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongos/logs/mongos.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongos/mongos.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 47017
sharding:
  configDB: "localhost:47019"
:wq
Step 10. Running the sharded cluster
Starting the components
The order in which the components should be started is the following:
*shards
*config servers
*query routers
#Start the mongod shard instances
[root@server1 ]# mongod --config /u01/mongocluster/mongod1/mongod1.conf
[root@server1 ]# mongod --config /u01/mongocluster/mongod2/mongod2.conf
[root@server1 ]# mongod --config /u01/mongocluster/mongod3/mongod3.conf
#Start the mongod config server instance
[root@server1 ]# mongod --config /u01/mongocluster/mongoc/mongoc.conf
#Start the mongos
[root@server1 ]# mongos -f /u01/mongocluster/mongos/mongos.conf
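To verify that all five processes came up and are listening on their ports, something like the following can be used (ps and netstat are assumed to be available, as they normally are on RHEL 6):
[root@server1 ]# ps -ef | grep mongo
[root@server1 ]# netstat -plnt | grep -E '4701[789]|48018|49018'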
Stopping the components:-
To stop the components we just need to stop the started instances.
For that we are going to use the kill command, which needs the PID of each of the processes.
That is why we added processManagement.pidFilePath to the configuration files of the
components: each instance stores its PID in that file, making it easy to find the process to
kill when shutting down the cluster.
The following script shuts down each of the processes if its PID file exists:
[root@server1 ]# vi processkill.sh
#!/bin/bash
#Stop mongos
PID_MONGOS_FILE=/u01/mongocluster/mongos/mongos.pid
if [ -e $PID_MONGOS_FILE ]; then
PID_MONGOS=$(cat $PID_MONGOS_FILE)
kill $PID_MONGOS
rm $PID_MONGOS_FILE
fi
#Stop mongo config
PID_MONGOC_FILE=/u01/mongocluster/mongoc/mongoc.pid
if [ -e $PID_MONGOC_FILE ]; then
PID_MONGOC=$(cat $PID_MONGOC_FILE)
kill $PID_MONGOC
rm $PID_MONGOC_FILE
fi
#Stop mongod shard instances
PID_MONGOD1_FILE=/u01/mongocluster/mongod1/mongod1.pid
if [ -e $PID_MONGOD1_FILE ]; then
PID_MONGOD1=$(cat $PID_MONGOD1_FILE)
kill $PID_MONGOD1
rm $PID_MONGOD1_FILE
fi
PID_MONGOD2_FILE=/u01/mongocluster/mongod2/mongod2.pid
if [ -e $PID_MONGOD2_FILE ]; then
PID_MONGOD2=$(cat $PID_MONGOD2_FILE)
kill $PID_MONGOD2
rm $PID_MONGOD2_FILE
fi
PID_MONGOD3_FILE=/u01/mongocluster/mongod3/mongod3.pid
if [ -e $PID_MONGOD3_FILE ]; then
PID_MONGOD3=$(cat $PID_MONGOD3_FILE)
kill $PID_MONGOD3
rm $PID_MONGOD3_FILE
fi
:wq
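The script can be run with sh processkill.sh (or made executable first with chmod +x processkill.sh). As an alternative to kill, each mongod instance (but not the mongos) can also be stopped cleanly with the --shutdown option, pointing it at that instance's data directory, for example:
[root@server1 ]# mongod --shutdown --dbpath /u01/mongocluster/mongod1/data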
Step 11. Before using the sharded cluster
What we need to do now is register the shards we created with the config server.
In order to do that we need to connect to the cluster using the mongo client against
the query router, like this:
[root@server1 ]# mongo localhost:47017
Once we are connected we need to issue the following commands to add the shards to the cluster:
mongos> sh.addShard("localhost:47018")
mongos> sh.addShard("localhost:48018")
mongos> sh.addShard("localhost:49018")
To check the sharding information:-
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 4,
"minCompatibleVersion" : 4,
"currentVersion" : 5,
"clusterId" : ObjectId("54d8dde8ea5c30beb58658eb")
}
shards:
{ "_id" : "shard0000", "host" : "localhost:47018" }
{ "_id" : "shard0001", "host" : "localhost:48018" }
{ "_id" : "shard0002", "host" : "localhost:49018" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : false, "primary" : "shard0000" }
To List Databases with Sharding Enabled:-
mongos> use config
switched to db config
mongos> db.databases.find( { "partitioned": true } )
{ "_id" : "students", "partitioned" : true, "primary" : "shard0002" }
To enable sharding on a particular database:-
mongos> sh.enableSharding("students")
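Enabling sharding on a database does not distribute any data by itself; each collection that should be distributed still has to be sharded explicitly with a shard key. A hypothetical example, sharding a grades collection in the students database on a student_id field:
mongos> sh.shardCollection("students.grades", { "student_id": 1 })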
To create a database named soumya:-
>use soumya
Then, to check the current db name:-
>db
The new db will only appear in the database list once it contains a collection. For instance, create a document in a customers collection like this:
db.customers.save({"firstName":"Alvin", "lastName":"Alexander"})
Next, verify that your document was created with this command:
db.customers.find()
Now list the databases and confirm that soumya appears:-
>show dbs
Now, to add a new user to a db:-
>use soumya
>db.addUser( { user: "soumya",
pwd: "redhat2",
roles: [ "readWrite", "dbAdmin" ]
} )
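Note that db.addUser() is deprecated in MongoDB 2.6 in favour of db.createUser(); the equivalent command would be:
>db.createUser( { user: "soumya",
pwd: "redhat2",
roles: [ "readWrite", "dbAdmin" ]
} )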
To check all the users in your current db:-
>show users
or
>db.getUsers()
(In MongoDB 2.6 user documents are stored centrally in admin.system.users, so querying system.users in the current database will not show them.)
To drop the database pizzas:-
>use pizzas
>db.dropDatabase()
To check the current version:-
>db.version()
Done..