How to set username and instance name at SQL prompt

[oracle@server1 ~]$ sqlplus /  as sysdba
SQL> SET SQLPROMPT "_USER @ _CONNECT_IDENTIFIER> "
SYS @ prim> select instance_name from v$instance;
INSTANCE_NAME
----------------
prim
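
To make the prompt change permanent for every SQL*Plus session, the same command can be added to
the site profile script. A minimal sketch, assuming the default glogin.sql location under
$ORACLE_HOME:

[oracle@server1 ~]$ cat >> $ORACLE_HOME/sqlplus/admin/glogin.sql <<'EOF'
-- show user and connect identifier in the prompt for every session
SET SQLPROMPT "_USER @ _CONNECT_IDENTIFIER> "
EOF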

MongoDB backup using shell script for all databases

vi /backups/mongodb_backup.sh

#!/bin/bash
# Daily mongodump of all databases, compressed and kept for 30 days
date1=$(date +%d%m%Y_%H%M%S)
path1=/backups/mongodb_backup
OUTPUTDIR="$path1/$date1"

# Remove compressed backups older than 30 days
/usr/bin/find "$path1" -type f -name "backup_*.tar.gz" -mtime +30 -exec rm -f {} \; 2> /dev/null

# Dump all databases, compress the dump, then remove the uncompressed copy
mongodump -o "$OUTPUTDIR"
/bin/tar -zcvf "$path1/backup_$date1.tar.gz" "$OUTPUTDIR"
rm -rf "$OUTPUTDIR"
exit 0

Schedule in crontab:-
# crontab -e

0 0 * * * /backups/mongodb_backup.sh > /dev/null
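
To restore from one of these backups, unpack the archive and point mongorestore at the extracted
dump directory. A minimal sketch, assuming a hypothetical backup file named
backup_01012015_000000.tar.gz (tar strips the leading "/" on extraction):

# tar -zxvf /backups/mongodb_backup/backup_01012015_000000.tar.gz -C /tmp
# mongorestore /tmp/backups/mongodb_backup/01012015_000000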

How to start/stop oracle database from linux command prompt

First, edit the /etc/oratab file:
[oracle@configsrv1 ~]$vi /etc/oratab
#

# This file is used by ORACLE utilities.  It is created by root.sh
# and updated by the Database Configuration Assistant when creating
# a database.

# A colon, ':', is used as the field terminator.  A new line terminates
# the entry.  Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
#   $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third field indicates
# to the dbstart utility that the database should, "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
prim:/u01/app/oracle/product/11.2.0/db_1:N

Change the entry to the following so the database is brought up at boot:

prim:/u01/app/oracle/product/11.2.0/db_1:Y

and save the file.

To start the database:
[oracle@configsrv1 ~]$ dbstart
ORACLE_HOME_LISTNER is not SET, unable to auto-start Oracle Net Listener
Usage: /u01/app/oracle/product/11.2.0/db_1/bin/dbstart ORACLE_HOME
Processing Database instance "prim": log file /u01/app/oracle/product/11.2.0/db_1/startup.log

To fix the above error, set ORACLE_HOME_LISTNER and pass ORACLE_HOME to dbstart:

[oracle@configsrv1 ~]$ export ORACLE_HOME_LISTNER=$ORACLE_HOME

Then start the database again:
[oracle@configsrv1 ~]$ dbstart $ORACLE_HOME
Processing Database instance "prim": log file /u01/app/oracle/product/11.2.0/db_1/startup.log

To stop the database:
[oracle@configsrv1 ~]$ dbshut $ORACLE_HOME
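
To avoid exporting ORACLE_HOME_LISTNER in every new session, the variable can be added to the
oracle user's profile. A minimal sketch, assuming ORACLE_HOME is already set in ~/.bash_profile:

[oracle@configsrv1 ~]$ cat >> ~/.bash_profile <<'EOF'
# let dbstart/dbshut locate the listener home
export ORACLE_HOME_LISTNER=$ORACLE_HOME
EOF
[oracle@configsrv1 ~]$ . ~/.bash_profile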


How to enable/disable case-sensitive passwords in Oracle 11g


From Oracle 11g, passwords can be made case sensitive by setting
SEC_CASE_SENSITIVE_LOGON = TRUE in the initialization parameter file.

SQL> show parameter SEC_CASE_SENSITIVE_LOGON

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sec_case_sensitive_logon             boolean     TRUE

Let's test the feature:
[oracle@configsrv1 ~]$ sqlplus /  as sysdba
SQL> create user soumya identified by das;

User created.

SQL> grant connect to soumya;

Grant succeeded.

[oracle@configsrv1 ~]$ sqlplus /  as sysdba
SQL> conn soumya/das
Connected.

Now we will check by providing the password in uppercase.
SQL> conn soumya/DAS
ERROR:
ORA-01017: invalid username/password; logon denied


Warning: You are no longer connected to ORACLE.

We can disable this feature by setting the parameter value to false.
SQL> alter system set SEC_CASE_SENSITIVE_LOGON= false scope=both;

System altered.

SQL> conn soumya/DAS
Connected.
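
To confirm which password verifiers are stored for an account (and therefore whether a
case-sensitive 11g verifier exists), the PASSWORD_VERSIONS column of DBA_USERS can be queried.
A minimal sketch:

SQL> select username, password_versions from dba_users where username = 'SOUMYA';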

PostgreSQL backup shell script for all databases in Linux

vi /backups/postgre_backup.sh

#!/bin/bash
# Daily pg_dumpall of all databases, compressed and kept for 32 days
date1=$(date +%d%m%Y_%H%M%S)
path1=/backups/postgre_backup
OUTPUTDIR="$path1/$date1"

# Remove compressed backups older than 32 days
/usr/bin/find "$path1" -type f -name "backup_*.tar.gz" -mtime +32 -exec rm -f {} \; 2> /dev/null

# Dump all databases as the postgres OS user, compress, then remove the plain dump
su postgres -c 'pg_dumpall' > "$OUTPUTDIR.sql"
/bin/tar -zcvf "$path1/backup_$date1.tar.gz" "$OUTPUTDIR.sql"
rm "$OUTPUTDIR.sql"
exit 0

:wq

Schedule in crontab:-
# crontab -e

0 0 * * * /backups/postgre_backup.sh > /dev/null
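
To restore from one of these dumps, unpack the archive and replay the SQL file with psql as the
postgres user. A minimal sketch, assuming a hypothetical backup file named
backup_01012015_000000.tar.gz (tar strips the leading "/" on extraction):

# tar -zxvf /backups/postgre_backup/backup_01012015_000000.tar.gz -C /tmp
# su postgres -c "psql -f /tmp/backups/postgre_backup/01012015_000000.sql postgres"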

How to set up a MongoDB sharded cluster on a standalone server

Linux version: RHEL 6.3
MongoDB version: 2.6

Components of a sharded cluster:-
Every sharded cluster has three main components:
Shards: These are the actual places where the data is stored. Each of the shards can be a
mongod instance or a replica set.

Config Servers: The config server has the metadata about the cluster. It is in charge of
keeping track of which shard has each piece of data.

Query Routers: The query routers are the point of interaction between the clients and the
shards. They use information from the config servers to retrieve the data from
the shards.
For development purposes I am going to use three mongod instances as shards, exactly one
mongod instance as config server and one mongos instance to be a query router.

It is important to remember that, due to MongoDB restrictions, the number of config servers
must be either one or three. In a production environment you need three to guarantee
redundancy, but for a development environment one is enough.

*Install MongoDB
Step 1. Log in as root and check the OS type and architecture.

# uname -a

# cat /etc/issue

Step 2. Create a yum repo file, /etc/yum.repos.d/mongodb.repo:

# vi /etc/yum.repos.d/mongodb.repo
[mongodb]
name=mongodb Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0
enabled=1
:wq

Step 3. Install the MongoDB client and server packages using yum:

# yum install mongo-*

If you face any package error, do the following:
# yum erase mongo*
yum shell
> install mongodb-org
> remove mongo-10gen
> remove mongo-10gen-server
> run

Step 4. Configure basic settings for the MongoDB database server:

# vi /etc/mongod.conf
logappend=true
logpath=/var/log/mongodb/mongod.log
dbpath=/var/lib/mongo
smallfiles = true
:wq

Step 5. Start the MongoDB server:

# /etc/init.d/mongod start
# chkconfig mongod on

Open another terminal and type:
# mongo

Step 6. Create the directory structure shown in the tree below.

/u01/mongocluster/
    mongod1/
        logs/
        data/
    mongod2/
        logs/
        data/
    mongod3/
        logs/
        data/
    mongoc/
        logs/
        data/
    mongos/
        logs/
        data/

Here, the mongod1, mongod2 and mongod3 folders will be used for the shards, mongoc for the
config server and mongos for the query router.
               
Once the above directory structure has been created, give it the proper ownership and permissions:
[root@server1 ]# chown -Rf mongod:mongod /u01/mongocluster/
[root@server1 ]# chmod -Rf 775 /u01/mongocluster/
               
               
Step 7.
Shard configuration:-
We are going to create a mongodN.conf file inside each of the mongodN folders, replacing N with
the corresponding shard number. It is also important to assign a different port to each of the
shards; of course, these ports have to be available on the host.

[root@server1 ]# cd /u01/mongocluster/mongod1
[root@server1 mongod1]# vi mongod1.conf

systemLog:
  destination: file
  path: "/u01/mongocluster/mongod1/logs/mongod1.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongod1/mongod1.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 47018
storage:
  dbPath: "/u01/mongocluster/mongod1/data"
  directoryPerDB: true
sharding:
  clusterRole: shardsvr
operationProfiling:
  mode: all

:wq
 
[root@server1 ]# cd /u01/mongocluster/mongod2 
[root@server1 mongod2]#vi mongod2.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongod2/logs/mongod2.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongod2/mongod2.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 48018
storage:
  dbPath: "/u01/mongocluster/mongod2/data"
  directoryPerDB: true
sharding:
  clusterRole: shardsvr
operationProfiling:
  mode: all
 
:wq

[root@server1 ]# cd /u01/mongocluster/mongod3/
[root@server1 mongod3]#vi mongod3.conf 
systemLog:
  destination: file
  path: "/u01/mongocluster/mongod3/logs/mongod3.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongod3/mongod3.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 49018
storage:
  dbPath: "/u01/mongocluster/mongod3/data"
  directoryPerDB: true
sharding:
  clusterRole: shardsvr
operationProfiling:
  mode: all
 
:wq

The important things to notice here are:
*dbPath under the storage section must point to the correct place; you might have issues with
the files mongod creates for normal operation if two of the shards point to the same data
directory.
*sharding.clusterRole is the essential part of this configuration: it indicates that the mongod
instance is part of a sharded cluster and that its role is to be a data shard.

Step 8. 
Config server configuration
[root@server1 ]#vi /u01/mongocluster/mongoc/mongoc.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongoc/logs/mongoc.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongoc/mongoc.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 47019
storage:
  dbPath: "/u01/mongocluster/mongoc/data"
  directoryPerDB: true
sharding:
  clusterRole: configsvr
operationProfiling:
  mode: "all"
 
:wq

Step 9.
Query router (Mongos)
The configuration of the query router is pretty simple. The important part is the
sharding.configDB value. The value needs to be a string containing the configuration server's
location in the form <host>:<port>.

If you have a three-config-server cluster, you need to put the locations of the three
configuration servers in the string, separated by commas (see the sketch after the config
file below).
Important: if you have more than one query router, make sure you use exactly the same string
for sharding.configDB in every query router.

[root@server1 ]#vi /u01/mongocluster/mongos/mongos.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongos/logs/mongos.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongos/mongos.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 47017
sharding:
  configDB: "localhost:47019"
 
:wq 
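
For reference, with a three-config-server cluster the sharding section of mongos.conf would
list all three hosts in one comma-separated string. A minimal sketch, assuming hypothetical
hosts cfg1, cfg2 and cfg3, each running a config server on port 27019:

sharding:
  configDB: "cfg1:27019,cfg2:27019,cfg3:27019"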

Step 10. Running the sharded cluster
Starting the components
The order in which the components should be started is the following:
*shards
*config servers
*query routers

#Start the mongod shard instances
[root@server1 ]# mongod --config /u01/mongocluster/mongod1/mongod1.conf
[root@server1 ]# mongod --config /u01/mongocluster/mongod2/mongod2.conf
[root@server1 ]# mongod --config /u01/mongocluster/mongod3/mongod3.conf

#Start the mongod config server instance
[root@server1 ]# mongod --config /u01/mongocluster/mongoc/mongoc.conf

#Start the mongos
[root@server1 ]# mongos -f /u01/mongocluster/mongos/mongos.conf
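
To verify that all five processes came up, it helps to list the running mongo processes and the
ports they are listening on. A minimal sketch:

[root@server1 ]# ps -ef | grep mongo | grep -v grep
[root@server1 ]# netstat -tlnp | grep mongo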


Stopping the components:-
To stop the components we just need to stop the started instances.

For that we are going to use the kill command. In order to use it, we need the PID of each of
the processes. For that reason, we added processManagement.pidFilePath to the configuration
files of the components: the instances store their PIDs in those files, making it easy to get
the PID of the process to kill when we want to shut down the cluster.

The following script shuts down each of the processes in case the PID file exists:
[root@server1 ]# vi processkill.sh

#!/bin/bash
#Stop mongos
PID_MONGOS_FILE=/u01/mongocluster/mongos/mongos.pid
if [ -e $PID_MONGOS_FILE ]; then
    PID_MONGOS=$(cat $PID_MONGOS_FILE)
    kill $PID_MONGOS
    rm $PID_MONGOS_FILE
fi

#Stop mongo config
PID_MONGOC_FILE=/u01/mongocluster/mongoc/mongoc.pid
if [ -e $PID_MONGOC_FILE ]; then
    PID_MONGOC=$(cat $PID_MONGOC_FILE)
    kill $PID_MONGOC
    rm $PID_MONGOC_FILE
fi

#Stop mongod shard instances
PID_MONGOD1_FILE=/u01/mongocluster/mongod1/mongod1.pid
if [ -e $PID_MONGOD1_FILE ]; then
    PID_MONGOD1=$(cat $PID_MONGOD1_FILE)
    kill $PID_MONGOD1
    rm $PID_MONGOD1_FILE
fi

PID_MONGOD2_FILE=/u01/mongocluster/mongod2/mongod2.pid
if [ -e $PID_MONGOD2_FILE ]; then
    PID_MONGOD2=$(cat $PID_MONGOD2_FILE)
    kill $PID_MONGOD2
    rm $PID_MONGOD2_FILE
fi

PID_MONGOD3_FILE=/u01/mongocluster/mongod3/mongod3.pid
if [ -e $PID_MONGOD3_FILE ]; then
    PID_MONGOD3=$(cat $PID_MONGOD3_FILE)
    kill $PID_MONGOD3
    rm $PID_MONGOD3_FILE
fi

:wq
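
Make the script executable and run it whenever the cluster needs to be stopped:

[root@server1 ]# chmod +x processkill.sh
[root@server1 ]# ./processkill.sh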

Step 11. Before using the sharded cluster
We need to register the shards we created with the config server.
In order to do that, we connect to the cluster using the mongo client against
the query router, like this:

[root@server1 ]# mongo localhost:47017

Once we are connected we need to issue the following commands to add the shards to the cluster:

mongos> sh.addShard("localhost:47018")
mongos> sh.addShard("localhost:48018")
mongos> sh.addShard("localhost:49018")

To check the sharding information:-
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "version" : 4,
        "minCompatibleVersion" : 4,
        "currentVersion" : 5,
        "clusterId" : ObjectId("54d8dde8ea5c30beb58658eb")
}
  shards:
        {  "_id" : "shard0000",  "host" : "localhost:47018" }
        {  "_id" : "shard0001",  "host" : "localhost:48018" }
        {  "_id" : "shard0002",  "host" : "localhost:49018" }
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0000" }


To List Databases with Sharding Enabled:-
mongos> use config
switched to db config
mongos> db.databases.find( { "partitioned": true } )
{ "_id" : "students", "partitioned" : true, "primary" : "shard0002" }

To enable sharding on a particular database:-
mongos> sh.enableSharding("students")
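
After sharding is enabled on a database, each collection still has to be sharded on a chosen
shard key before its data is distributed across the shards. A minimal sketch, assuming a
hypothetical grades collection keyed on student_id:

mongos> sh.shardCollection("students.grades", { "student_id": 1 })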

To switch to (and implicitly create) a database named soumya:-
>use soumya

Then check the current db name:-
>db

To make the created db appear in the database list, we need to add a collection to it. For instance, create a document in a customers collection like this:
db.customers.save({"firstName":"Alvin", "lastName":"Alexander"})

Next, verify that your document was created with this command:
db.customers.find()

Now check that the database appears in the list:-
>show dbs

Now to add a new user in a db:-
>use soumya
>db.addUser( { user: "soumya",
              pwd: "redhat2",
              roles: [ "readWrite", "dbAdmin" ]
            } )
           
To check all the users in your current db:-
>show users
or
db.system.users.find()

To drop the database pizzas:-
>use pizzas
>db.dropDatabase()

To check current version:-
db.version()           

Done.


Alert mail script for checking oracle database up/down status

Step 1. Create the shell script for checking the database status:

vi /home/oracle/dbcheck.sh

#!/bin/bash
# Check for the pmon background process of the instance and mail an alert if it is missing
hostname=server1.soumya.com
oracle_sid=prim
instance_name=ora_pmon_prim
if ps -ef | grep -v grep | grep -q "$instance_name"; then
    echo "DB IS UP"
else
    echo "DB down"
    echo "$oracle_sid database is down at $hostname" | mail -s "Attention!! $oracle_sid Database is Down at $hostname!!" yourmailid@gmail.com
fi
exit 0
:wq

Step 2. Schedule it in cron to run every 5 minutes:
crontab -e
*/5 * * * * /home/oracle/dbcheck.sh > /dev/null

P.S. Make sure the mailx rpm is installed and sendmail or another mail service is running on the server.
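
A quick way to confirm the mail prerequisites and send a test message; a minimal sketch:

# rpm -q mailx || yum install -y mailx
# service sendmail status
# echo "test mail body" | mail -s "test alert" yourmailid@gmail.com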

Shell script for auto start of oracle and listener at boot time

Step 1. Edit /etc/oratab file.
# vi /etc/oratab
prim:/u01/app/oracle/product/11.2.0/db_1:Y

:wq

Step 2. Create a file called "/etc/init.d/dbora" as the root user

vi /etc/init.d/dbora

#!/bin/bash
# chkconfig: 345 99 10
# description: Oracle auto start-stop script.
ORA_HOME=/u01/app/oracle/product/11.2.0/db_1
ORA_OWNER=oracle
if [ ! -f $ORA_HOME/bin/dbstart ]
then
    echo "Oracle startup: cannot start"
    exit
fi
case "$1" in
    'start')
        su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart $ORA_HOME" &
        su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl start" &
        touch /var/lock/subsys/dbora
        ;;
    'stop')
        su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl stop"
        su - $ORA_OWNER -c "$ORA_HOME/bin/dbshut $ORA_HOME"
        rm -f /var/lock/subsys/dbora
        ;;
esac

:wq

Step 3. Give the dbora file the proper permissions and register it with chkconfig:
chmod 750 /etc/init.d/dbora
chkconfig --add dbora
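
Once registered, the service can be driven manually with the standard service command before
relying on a reboot. A minimal sketch:

# service dbora start
# service dbora stop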