How to set up a MongoDB sharded cluster on a standalone server

Linux version: RHEL 6.3
MongoDB version: 2.6

Components of a sharded cluster:-
Every sharded cluster has three main components:
Shards: These are where the data is actually stored. Each shard can be a standalone
mongod instance or a replica set.

Config Servers: The config servers hold the metadata about the cluster. They are in charge of
keeping track of which shard holds each piece of data.

Query Routers: The query routers are the point of interaction between the clients and the
shards. The query routers use information from the config servers to retrieve the data from
the shards.

For development purposes I am going to use three mongod instances as shards, exactly one
mongod instance as a config server and one mongos instance as the query router.

It is important to remember that, due to MongoDB restrictions, the number of config servers
needs to be either one or three. In a production environment you need three to guarantee
redundancy, but for a development environment one is enough.
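
For reference, these are the processes, ports and directories that will be used throughout
this guide (they are configured in the steps below):

mongod1 (shard)          port 47018   /u01/mongocluster/mongod1
mongod2 (shard)          port 48018   /u01/mongocluster/mongod2
mongod3 (shard)          port 49018   /u01/mongocluster/mongod3
mongoc  (config server)  port 47019   /u01/mongocluster/mongoc
mongos  (query router)   port 47017   /u01/mongocluster/mongos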

*Install MongoDB
Step 1: Log in to the system as the root user. First we check the OS type and whether the system is 32-bit or 64-bit.

# uname -a

# cat /etc/issue

Step 2: Create a yum repo file, e.g. /etc/yum.repos.d/mongodb.repo

# vi /etc/yum.repos.d/mongodb.repo
[mongodb]
name=mongodb Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0
enabled=1
:wq

Step 3: Install the MongoDB client and server using yum

# yum install mongo-*

If you face any package conflict errors, do the following:
# yum erase mongo*
# yum shell
> install mongodb-org
> remove mongo-10gen
> remove mongo-10gen-server
> run

Step 4: Configure the basic settings of the MongoDB database server

# vi /etc/mongod.conf
logappend=true
logpath=/var/log/mongodb/mongod.log
dbpath=/var/lib/mongo
smallfiles = true
:wq

Step 5: Start the MongoDB server

# /etc/init.d/mongod start
# chkconfig mongod on

Open another terminal and connect with the mongo shell:
# mongo
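
Optionally, you can confirm the server is responding with a quick ping against the default
instance:

# mongo --eval "db.runCommand({ ping: 1 })"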

Step 6: Create a directory structure like the tree below.

/u01/mongocluster/
    mongod1/
        logs/
        data/
    mongod2/
        logs/
        data/
    mongod3/
        logs/
        data/
    mongoc/
        logs/
        data/
    mongos/
        logs/
        data/

Here, the mongod1, mongod2 and mongod3 folders will be used for the shards, mongoc for the
config server and mongos for the query router.
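
The whole tree can be created in one command using brace expansion (adjust the base path if
yours is different):

[root@server1 ]# mkdir -p /u01/mongocluster/{mongod1,mongod2,mongod3,mongoc,mongos}/{logs,data}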
               
Once the above directory structure has been created, give it the proper ownership and permissions:
[root@server1 ]# chown -Rf mongod:mongod /u01/mongocluster/
[root@server1 ]# chmod -Rf 775 /u01/mongocluster/
               
               
Step 7.
Shards configuration:-
We are going to create a mongodN.conf file inside each of the mongodN folders, replacing N with
the corresponding shard number. It is also important to assign a different port to each of the
shards; of course, these ports have to be available on the host.

[root@server1 ]# cd /u01/mongocluster/mongod1
[root@server1 mongod1]# vi mongod1.conf

systemLog:
  destination: file
  path: "/u01/mongocluster/mongod1/logs/mongod1.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongod1/mongod1.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 47018
storage:
  dbPath: "/u01/mongocluster/mongod1/data"
  directoryPerDB: true
sharding:
  clusterRole: shardsvr
operationProfiling:
  mode: all

:wq
 
[root@server1 ]# cd /u01/mongocluster/mongod2 
[root@server1 mongod2]#vi mongod2.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongod2/logs/mongod2.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongod2/mongod2.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 48018
storage:
  dbPath: "/u01/mongocluster/mongod2/data"
  directoryPerDB: true
sharding:
  clusterRole: shardsvr
operationProfiling:
  mode: all
 
:wq

[root@server1 ]# cd /u01/mongocluster/mongod3/
[root@server1 mongod3]#vi mongod3.conf 
systemLog:
  destination: file
  path: "/u01/mongocluster/mongod3/logs/mongod3.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongod3/mongod3.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 49018
storage:
  dbPath: "/u01/mongocluster/mongod3/data"
  directoryPerDB: true
sharding:
  clusterRole: shardsvr
operationProfiling:
  mode: all
 
:wq

The important things to notice here are:
Make sure dbPath under the storage section points to the correct place; if two of the shards
point to the same data directory you will have issues with the files mongod creates for
normal operation.
The sharding.clusterRole setting is the essential part of this configuration: it indicates
that the mongod instance is part of a sharded cluster and that its role is to be a data
shard.

Step 8. 
Config server configuration
[root@server1 ]#vi /u01/mongocluster/mongoc/mongoc.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongoc/logs/mongoc.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongoc/mongoc.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 47019
storage:
  dbPath: "/u01/mongocluster/mongoc/data"
  directoryPerDB: true
sharding:
  clusterRole: configsvr
operationProfiling:
  mode: "all"
 
:wq

Step 9.
Query router (Mongos)
The configuration of the query router is pretty simple. The important part is the
sharding.configDB value. The value needs to be a string containing the config server's
location in the form <host>:<port>.

If you have a 3-config server cluster you need to put the location of the three configuration
servers separated by commas in the string.
Important: if you have more than one query router, make sure you use exactly the same string
for the sharding.configDB in every query router.
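
For example, with three config servers the value might look like the following (the hostnames
here are only illustrative):

sharding:
  configDB: "cfg1.example.com:47019,cfg2.example.com:47019,cfg3.example.com:47019"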

[root@server1 ]#vi /u01/mongocluster/mongos/mongos.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongos/logs/mongos.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongos/mongos.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 47017
sharding:
  configDB: "localhost:47019"
 
:wq 

Step 10: Running the sharded cluster
Starting the components
The order in which the components should be started is the following:
*shards
*config servers
*query routers

#Start the mongod shard instances 
[root@server1 ]# mongod --config /u01/mongocluster/mongod1/mongod1.conf
[root@server1 ]# mongod --config /u01/mongocluster/mongod2/mongod2.conf
[root@server1 ]# mongod --config /u01/mongocluster/mongod3/mongod3.conf

#Start the mongod config server instance
[root@server1 ]# mongod --config /u01/mongocluster/mongoc/mongoc.conf

#Start the mongos
[root@server1 ]# mongos -f /u01/mongocluster/mongos/mongos.conf
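
If you prefer, these five commands can be collected into a small start-up script that mirrors
the shutdown script shown further below (the name startcluster.sh is just an example):

[root@server1 ]# vi /u01/mongocluster/startcluster.sh

#!/bin/bash
#Start the shards first, then the config server, then the query router
mongod --config /u01/mongocluster/mongod1/mongod1.conf
mongod --config /u01/mongocluster/mongod2/mongod2.conf
mongod --config /u01/mongocluster/mongod3/mongod3.conf
mongod --config /u01/mongocluster/mongoc/mongoc.conf
mongos -f /u01/mongocluster/mongos/mongos.conf

:wq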


Stopping the components:-
To stop the components we just need to stop the started instances.

For that we are going to use the kill command. In order to use it, we need the PID of each
of the processes. For that reason, we added processManagement.pidFilePath to the configuration
files of the components: the instances store their PIDs in those files, making it easy to get
the PID of each process to kill when we want to shut down the cluster.

The following script shuts down each of the processes in case the PID file exists:
[root@server1 ] vi processkill.sh

#!/bin/bash
#Stop mongos
PID_MONGOS_FILE=/u01/mongocluster/mongos/mongos.pid
if [ -e $PID_MONGOS_FILE ]; then
    PID_MONGOS=$(cat $PID_MONGOS_FILE)
    kill $PID_MONGOS
    rm $PID_MONGOS_FILE
fi

#Stop mongo config
PID_MONGOC_FILE=/u01/mongocluster/mongoc/mongoc.pid
if [ -e $PID_MONGOC_FILE ]; then
    PID_MONGOC=$(cat $PID_MONGOC_FILE)
    kill $PID_MONGOC
    rm $PID_MONGOC_FILE
fi

#Stop mongod shard instances
PID_MONGOD1_FILE=/u01/mongocluster/mongod1/mongod1.pid
if [ -e $PID_MONGOD1_FILE ]; then
    PID_MONGOD1=$(cat $PID_MONGOD1_FILE)
    kill $PID_MONGOD1
    rm $PID_MONGOD1_FILE
fi

PID_MONGOD2_FILE=/u01/mongocluster/mongod2/mongod2.pid
if [ -e $PID_MONGOD2_FILE ]; then
    PID_MONGOD2=$(cat $PID_MONGOD2_FILE)
    kill $PID_MONGOD2
    rm $PID_MONGOD2_FILE
fi

PID_MONGOD3_FILE=/u01/mongocluster/mongod3/mongod3.pid
if [ -e $PID_MONGOD3_FILE ]; then
    PID_MONGOD3=$(cat $PID_MONGOD3_FILE)
    kill $PID_MONGOD3
    rm $PID_MONGOD3_FILE
fi

:wq
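
Make the script executable and run it whenever you want to bring the cluster down:

[root@server1 ]# chmod +x processkill.sh
[root@server1 ]# ./processkill.sh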

Step 11: Before using the sharded cluster
We need to register the shards we created with the config server. In order to do that we
connect to the cluster using the mongo client against the query router, like this:

[root@server1 ] mongo localhost:47017

Once we are connected we need to issue the following commands to add the shards to the cluster:

mongos> sh.addShard("localhost:47018")
mongos> sh.addShard("localhost:48018")
mongos> sh.addShard("localhost:49018")

To check the sharding information:-
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "version" : 4,
        "minCompatibleVersion" : 4,
        "currentVersion" : 5,
        "clusterId" : ObjectId("54d8dde8ea5c30beb58658eb")
}
  shards:
        {  "_id" : "shard0000",  "host" : "localhost:47018" }
        {  "_id" : "shard0001",  "host" : "localhost:48018" }
        {  "_id" : "shard0002",  "host" : "localhost:49018" }
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0000" }


To List Databases with Sharding Enabled:-
mongos> use config
switched to db config
mongos> db.databases.find( { "partitioned": true } )
{ "_id" : "students", "partitioned" : true, "primary" : "shard0002" }

To enable sharding on a particular database:-
mongos> sh.enableSharding("students")
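
Enabling sharding on a database does not distribute any data by itself; each collection you
want to shard still needs to be sharded on a shard key. For example (the collection name and
shard key here are only illustrative):

mongos> sh.shardCollection("students.grades", { student_id: 1 })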

Switch to (and thereby define) a database named soumya:-
>use soumya

Then check your current db name:-
>db

To make the new db appear in the database list we need to add a collection to it. For instance, create a document in a customers collection like this:
db.customers.save({"firstName":"Alvin", "lastName":"Alexander"})

Next, verify that your document was created with this command:
db.customers.find()

Now check that your database appears in the list:-
>show dbs

Now to add a new user in a db:-
>use soumya
>db.addUser( { user: "soumya",
              pwd: "redhat2",
              roles: [ "readWrite", "dbAdmin" ]
            } )
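
Note that in MongoDB 2.6 db.addUser is deprecated in favour of db.createUser; the equivalent
call would be:

>db.createUser( { user: "soumya",
              pwd: "redhat2",
              roles: [ "readWrite", "dbAdmin" ]
            } )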
           
To check all the users in your current db:-
>show users
or
db.getUsers()

To drop the database pizzas:-
>use pizzas
>db.dropDatabase()

To check current version:-
db.version()           

Done..


Alert mail script for checking Oracle database up/down status

Step 1. Create the shell script that checks the DB status

vi /home/oracle/dbcheck.sh

#!/bin/bash
hostname=server1.soumya.com
oracle_sid=prim
export instance_name=ora_pmon_prim
# Look for this instance's pmon process; exclude the grep command itself
status=`ps -ef | grep "$instance_name" | grep -v grep | head -1 | awk '{print $8}'`
if [ "$status" == "$instance_name" ]; then
echo "DB IS UP"
else
echo "DB down"
# mail needs a message body on stdin, otherwise it waits for input
echo "$oracle_sid database is down at $hostname" | mail -s "Attention!! $oracle_sid Database is Down at $hostname!!" yourmailid@gmail.com
fi
exit 0
:wq
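
Make the script executable and run it once by hand to confirm it behaves as expected before
scheduling it:

chmod +x /home/oracle/dbcheck.sh
/home/oracle/dbcheck.sh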

Step 2. Schedule it via cron to run every 5 minutes
crontab -e
*/5 * * * * /home/oracle/dbcheck.sh > /dev/null

P.S. Make sure the mailx rpm is installed and sendmail or another mail service is running on the server.
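
A quick way to verify that mail delivery works from this server (the address is the same
placeholder used above):

echo "test mail" | mail -s "mail test" yourmailid@gmail.com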

Shell script for auto start of Oracle database and listener at boot time

Step 1. Edit the /etc/oratab file and set the restart flag of the instance to Y.
# vi /etc/oratab
prim:/u01/app/oracle/product/11.2.0/db_1:Y

:wq

Step 2. Create a file called "/etc/init.d/dbora" as the root user

vi /etc/init.d/dbora

#!/bin/bash
# chkconfig: 345 99 10
# description: Oracle auto start-stop script.
ORA_HOME=/u01/app/oracle/product/11.2.0/db_1
ORA_OWNER=oracle
if [ ! -f $ORA_HOME/bin/dbstart ]
then
    echo "Oracle startup: cannot start"
    exit
fi
case "$1" in
    'start')
        su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart" &
        su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl start" &
        touch /var/lock/subsys/dbora
        ;;
    'stop')
        su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl stop"
        su - $ORA_OWNER -c "$ORA_HOME/bin/dbshut"
        rm -f /var/lock/subsys/dbora
        ;;
esac

:wq

Step 3. Give the dbora file proper permissions and register it with chkconfig.
chmod 750 /etc/init.d/dbora
chkconfig --add dbora
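
The init script can be tested right away, without rebooting:

# service dbora start
# service dbora stop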

Monitoring Flash Recovery Area space using shell script


This script will check the flash recovery area and send a mail if the space is more than 80%
full.

First, create the SQL script (it will be executed as the oracle user by the monitoring script):
[root@server1 ~]# vi /home/oracle/flashback.sql

col ROUND(SPACE_LIMIT/1048576) heading "Space Allocated (MB)" format 999999
col round(space_used/1048576) heading "Space Used (MB)" format 99999
col round((space_used/space_limit)*100) heading " % used "  format 99999
col name format a30
set head off
select name, round(space_limit/1048576),round(space_used/1048576),round ((space_used/space_limit)*100) as "% used"
from  v$RECOVERY_FILE_DEST;
exit;

:wq
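
You can run the script once by hand to confirm the output format (the shell script below
parses the fourth column, the % used value):

[root@server1 ~]# su - oracle -c "sqlplus -S / as sysdba @/home/oracle/flashback.sql"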

Now let's create the shell script which will monitor the space usage of the flash recovery area:-
From root user
[root@server1 ~]# vi /root/spacecheck.sh

#!/bin/bash
su - oracle -c "sqlplus -S / as sysdba @/home/oracle/flashback.sql" > /home/oracle/test.txt
# Pick the "% used" value (4th field) from the first non-blank line of the report
space=`awk 'NF {print $4; exit}' /home/oracle/test.txt`
if [ "$space" -gt 80 ]; then
echo "Flash recovery area is ${space}% used" | mail -s "Attention!! Low space in Flash recovery area!" yourmailid@gmail.com
fi
exit 0

:wq

We schedule the script to run every 5 minutes.
[root@server1 ~]# crontab -e
*/5 * * * * /root/spacecheck.sh > /dev/null

Recover database from a backup taken before a RESETLOGS

Prior to Oracle 10g, the redo log files generated after opening the database with RESETLOGS
could not be used with the backups taken before the RESETLOGS was performed.

Therefore, whenever a resetlogs was done, it was important to take an immediate full database
backup, since all previous backups became invalid.

A RESETLOGS needs to be performed when we do the following activities:
1)    Do a point in time recovery
2)    Recover a database using a backup of the control file

What does a RESETLOGS do?

•    Archives the current online redo logs (if they are accessible) and then erases the contents
    of the online redo logs and resets the log sequence number to 1.
•    Creates the online redo log files if they do not currently exist.
•    Updates all current datafiles and online redo logs and all subsequent archived redo logs
    with a new RESETLOGS SCN and time stamp.
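
After a RESETLOGS you can confirm the new incarnation's RESETLOGS SCN and time from SQL*Plus,
for example:

SQL> select resetlogs_change#, resetlogs_time from v$database;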
   
Scenario: Here I'm trying to recover my database using a backup which was taken before a
RESETLOGS was performed.


[oracle@server1 prim]$ sqlplus /  as sysdba
SQL> select group#, member from v$logfile;

    GROUP#            MEMBER
--------------------------------------------------------------------------------
        3            /u01/app/oracle/oradata/prim/redo03.log

        2            /u01/app/oracle/oradata/prim/redo02.log

        1            /u01/app/oracle/oradata/prim/redo01.log
       
SQL> !
[oracle@server1 prim]$ cd /u01/app/oracle/oradata/prim
[oracle@server1 prim]$ ls -ltrh
total 1.5G
-rw-r-----. 1 oracle oinstall  21M Jan  8 22:22 temp01.dbf
-rw-r-----. 1 oracle oinstall  51M Jan  8 23:09 redo02.log
-rw-r-----. 1 oracle oinstall  51M Jan  8 23:09 redo03.log
-rw-r-----. 1 oracle oinstall 101M Jan  8 23:09 example01.dbf
-rw-r-----. 1 oracle oinstall 5.1M Jan  8 23:14 users01.dbf
-rw-r-----. 1 oracle oinstall  56M Jan  8 23:25 undotbs01.dbf
-rw-r-----. 1 oracle oinstall 671M Jan  8 23:25 system01.dbf
-rw-r-----. 1 oracle oinstall 501M Jan  8 23:25 sysaux01.dbf
-rw-r-----. 1 oracle oinstall  51M Jan  8 23:26 redo01.log
-rw-r-----. 1 oracle oinstall 9.3M Jan  8 23:27 control01.ctl

Before dropping the redo logs, let's take a fresh backup of the database.
RMAN> backup database plus archivelog;
RMAN> exit

Now we will manually remove the redo logs.
[oracle@server1 prim]$ cd /u01/app/oracle/oradata/prim
[oracle@server1 prim]$ rm -rf *.log

Here's the output in the alert log file:-
[oracle@server1 u01]$ tail -100f /u01/app/oracle/diag/rdbms/prim/prim/trace/alert_prim.log
Thu Jan 08 23:44:12 2015
Thread 1 advanced to log sequence 6 (LGWR switch)
  Current log# 3 seq# 6 mem# 0: /u01/app/oracle/oradata/prim/redo03.log
Thu Jan 08 23:44:12 2015
Errors in file /u01/app/oracle/diag/rdbms/prim/prim/trace/prim_arc1_8374.trc:
ORA-00313: open failed for members of log group 2 of thread 1
ORA-00312: online log 2 thread 1: '/u01/app/oracle/oradata/prim/redo02.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Thu Jan 08 23:44:12 2015       

SQL> shut abort;
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area  413372416 bytes
Fixed Size                  2213896 bytes
Variable Size             322963448 bytes
Database Buffers           83886080 bytes
Redo Buffers                4308992 bytes
Database mounted.

Now we will recover the database up to the last available log sequence.

[oracle@server1 prim]$ rman target /
RMAN> run {
 set until logseq=6 thread=1; 
 restore database;
 recover database;
 }
executing command: SET until clause

Starting restore at 08-JAN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/prim/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/prim/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/prim/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/prim/users01.dbf
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/prim/example01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/flash_recovery_area/PRIM/backupset/2015_01_08/o1_mf_nnndf_TAG20150108T233907_bbxky3k1_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/flash_recovery_area/PRIM/backupset/2015_01_08/o1_mf_nnndf_TAG20150108T233907_bbxky3k1_.bkp tag=TAG20150108T233907
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:36
Finished restore at 08-JAN-15

Starting recover at 08-JAN-15
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 4 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_08/o1_mf_1_4_bbxkztt9_.arc
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_08/o1_mf_1_4_bbxkztt9_.arc thread=1 sequence=4
unable to find archived log
archived log thread=1 sequence=5
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 01/08/2015 23:51:10
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 5 and starting SCN of 1034128

We can ignore the above error. It is generated because Oracle does not stop recovery at the
last archive log available in the backup sets as per the restored control file; it keeps
requesting the next archive log sequence number (5), even though no log sequence 5 is
recorded in the restored control file.

Now open the database using the RESETLOGS option.
RMAN> alter database open resetlogs;

Now let's insert some data into a schema.
[oracle@server1 prim]$ sqlplus soumya/soumya

SQL> create table t3 (id number);
Table created.
SQL> insert into t3 values(1);
1 row created.
SQL> /
1 row created.
SQL> /
1 row created.
SQL> commit;
Commit complete.
SQL> exit

Now let's do a few log switches.

[oracle@server1 prim]$ sqlplus /  as sysdba
SQL> alter system switch logfile;
System altered.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
SQL> /   
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     4
Next log sequence to archive   6
Current log sequence           6
SQL>

Now let's remove the redo log files once again.
NOTE: No backup has been taken after opening the database with the RESETLOGS option.
[oracle@server1 prim]$ cd /u01/app/oracle/oradata/prim
[oracle@server1 prim]$ rm -rf *.log

SQL> shut abort;
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  413372416 bytes
Fixed Size                  2213896 bytes
Variable Size             327157752 bytes
Database Buffers           79691776 bytes
Redo Buffers                4308992 bytes
Database mounted.
ORA-03113: end-of-file on communication channel
Process ID: 9221
Session ID: 1 Serial number: 5
SQL> exit

[oracle@server1 prim]$ rman target /
RMAN> startup mount

Oracle instance started
database mounted

Total System Global Area     413372416 bytes

Fixed Size                     2213896 bytes
Variable Size                327157752 bytes
Database Buffers              79691776 bytes
Redo Buffers                   4308992 bytes

RMAN> list incarnation of database;

using target database control file instead of recovery catalog

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       PRIM     4130009889       PARENT  1          15-AUG-09
2       2       PRIM     4130009889       PARENT  945184     24-JAN-14
3       3       PRIM     4130009889       CURRENT 1034129    08-JAN-15

Now we have to recover the database once again up to log sequence 6.

RMAN> run {
2> set until sequence=6 thread =1;
3> restore database;
4> recover database;
5>  }

executing command: SET until clause

Starting restore at 09-JAN-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/prim/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/prim/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/prim/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/prim/users01.dbf
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/prim/example01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/flash_recovery_area/PRIM/backupset/2015_01_08/o1_mf_nnndf_TAG20150108T233907_bbxky3k1_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/flash_recovery_area/PRIM/backupset/2015_01_08/o1_mf_nnndf_TAG20150108T233907_bbxky3k1_.bkp tag=TAG20150108T233907
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 09-JAN-15

Starting recover at 09-JAN-15
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 4 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_08/o1_mf_1_4_bbxkztt9_.arc
archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_1_bbxmmqq9_.arc
archived log for thread 1 with sequence 2 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_2_bbxn97nj_.arc
archived log for thread 1 with sequence 3 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_3_bbxn98kg_.arc
archived log for thread 1 with sequence 4 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_4_bbxn99oh_.arc
archived log for thread 1 with sequence 5 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_5_bbxn9dmp_.arc
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_08/o1_mf_1_4_bbxkztt9_.arc thread=1 sequence=4
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_1_bbxmmqq9_.arc thread=1 sequence=1
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_2_bbxn97nj_.arc thread=1 sequence=2
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_3_bbxn98kg_.arc thread=1 sequence=3
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_4_bbxn99oh_.arc thread=1 sequence=4
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_5_bbxn9dmp_.arc thread=1 sequence=5
media recovery complete, elapsed time: 00:00:02
Finished recover at 09-JAN-15

RMAN> alter database open resetlogs;
database opened

RMAN> quit
[oracle@server1 prim]$ sqlplus soumya/soumya
SQL> select * from t3;

        ID
----------
         1
         1
         1
       
Now we can see that all the data has been recovered up to the point of failure.







How to increase expdp/impdp performance using Parallel parameter

Before using the parameter, a few things to know about it:-

1. By default the value of the parallel option is 1.
2. The export job creates as many worker processes as are specified in the parallel option.
3. The value of the parallel option can be modified in interactive mode (see the example
after the sample commands below).
4. This option is used with the %U clause in the dumpfile parameter of expdp/impdp.
5. For optimum performance, it is recommended that the value of the parameter should not be
more than twice the number of CPUs in the database server.
6. In a transportable tablespace export, the degree of parallelism cannot be greater than 1.
7. During import (impdp) the PARALLEL parameter value should not be larger than the number of
files in the dump set.

sample example:-
expdp username/password directory=dump_dir filesize=1G dumpfile=full%U.dmp logfile=fulllog.log parallel=2 exclude=statistics

impdp username/password directory=dump_dir dumpfile=full%U.dmp logfile=full.log parallel=2
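
The parallelism of a running job can also be changed from Data Pump's interactive mode:
attach to the job (or press Ctrl+C in the running client) and issue the PARALLEL command.
The job name below is only illustrative:

expdp username/password attach=SYS_EXPORT_FULL_01
Export> parallel=4
Export> continue_client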

To get your server's number of CPU cores:-

[root@server1 dpdump]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                1
On-line CPU(s) list:   0
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 42
Stepping:              7
CPU MHz:               3292.549
BogoMIPS:              6585.09
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              6144K
NUMA node0 CPU(s):     0

How to find out the SCN for a specific time


To get the current SCN:-

SQL> set time on
10:12:32 SQL> col scn for 9999999999999999
10:12:36 SQL> select DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
                 1208594

10:12:38 SQL> /

GET_SYSTEM_CHANGE_NUMBER
------------------------
                 1208595


Now, to find out the SCN for a specific time:-

SQL> select timestamp_to_scn(to_date('02/02/2015 10:12:38','mm/dd/yyyy hh24:mi:ss')) from dual;
TIMESTAMP_TO_SCN(TO_DATE('02/02/201510:12:38','MM/DD/YYYYHH24:MI:SS'))
----------------------------------------------------------------------
                                                               1208592
The usual precision of the result value is +/- 3 seconds.
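
The reverse conversion is also available. For example, to see the approximate timestamp of
the SCN obtained above:

SQL> select scn_to_timestamp(1208592) from dual;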
