SQL Databases and the Firebase Database - The Firebase Database For SQL Developers #1


Hello, everyone. Welcome to the first episode of the Firebase Database for SQL Developers series. In this lesson, I'm going to teach you the difference between how data is stored in a SQL database and in a NoSQL database such as the Firebase Database.


To start, let's go to the laptop. Relational databases store data in tables, and a table is made up of columns and rows. A column is a field that holds something like "name," "birthday," or "place." A row is everything that represents one entity; in this case, it could be a customer. And, honestly, nothing sounds as SQL as a multi-customer table. #SQLJoke How do we know that each of these columns will have a valid entry? What happens if one is left blank? To avoid this, we use schemas. A schema is like a blueprint: it defines how the data is organized. It specifies each column's name, its data type, and whether it is required.


These specifications are called constraints, because they restrict certain properties of the data. One of the most notable constraints is the primary key, and every row has one. A primary key must be unique, because it is used to identify each data entry. If you want to enter data into this table, you write an INSERT statement that indicates which table receives the data, which fields are filled, and the values for those fields. Let's say I want to add a field that I'll call lastName. What happens if I add it to the INSERT statement and click execute? The database won't let me do that.


That's because lastName is not yet in our schema. So if we want to insert lastName, we need to go back and run an ALTER statement to add it to the schema. The ALTER statement creates the lastName column, but it also adds a constraint: the column cannot be NULL. And that creates a paradox: how do I add the NOT NULL constraint when no row in the table has a value for lastName yet? To resolve it, you first add the new column without the NOT NULL constraint and then manually enter a value in each row.


Once you have done that, you can alter the column to add the NOT NULL constraint. After that, the table is ready. SQL databases are rigid in order to guarantee data integrity. But that can be tricky when you start developing, because you don't yet know how your schema should look. And there may be sudden changes in the data structure that force you to change your schema and migrate the data.
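The whole sequence above can be sketched with Python's built-in sqlite3 module. This is a minimal sketch with made-up table and column names; note that SQLite, unlike most server databases, cannot add a constraint to an existing column, so the final "alter to NOT NULL" step is left as a comment:

```python
import sqlite3

# In-memory database; the customers table mirrors the example in the text.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The schema names each column, gives it a type, and states its constraints.
cur.execute("""
    CREATE TABLE customers (
        id       INTEGER PRIMARY KEY,  -- unique identifier for each row
        name     TEXT NOT NULL,
        birthday TEXT,
        place    TEXT
    )
""")
cur.execute("INSERT INTO customers (name, birthday, place) VALUES (?, ?, ?)",
            ("Ada", "1815-12-10", "London"))

# 1. An INSERT into a column the schema does not know about is rejected.
try:
    cur.execute("INSERT INTO customers (name, lastname) VALUES ('Grace', 'Hopper')")
except sqlite3.OperationalError as exc:
    insert_error = str(exc)

# 2. Adding the column as NOT NULL up front hits the paradox: the existing
#    row would have no value for it.
try:
    cur.execute("ALTER TABLE customers ADD COLUMN lastname TEXT NOT NULL")
except sqlite3.OperationalError as exc:
    alter_error = str(exc)

# 3. The workaround: add the column without NOT NULL, then backfill each row.
#    (On a server database you would then ALTER the column to NOT NULL;
#    SQLite itself cannot add a constraint to an existing column.)
cur.execute("ALTER TABLE customers ADD COLUMN lastname TEXT")
cur.execute("UPDATE customers SET lastname = 'Lovelace' WHERE name = 'Ada'")
rows = cur.execute("SELECT name, lastname FROM customers").fetchall()
```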

Not everything always fits the relational model, and this is where NoSQL databases, like the Firebase Database, come into the picture. The Firebase Database is a NoSQL JSON database, which is a fancy way of saying it's just one big JSON object. JSON is very simple: it has keys and values. A key is an identifier, and a value is... a value. Just as with tables, how do you know what each key will accept as a value? Is there a schema we can use? Well, the Firebase Database, like many NoSQL databases, is schemaless, which means you don't have to define the structure of your data before you insert it. This provides flexibility, because you don't need to change a schema just to update the data. However, just because you don't need to specify a schema doesn't mean you can't validate the types of data saved to your database.


The Firebase Database has a rules language called Security Rules. Security Rules let you specify the shape and size of the data before it is saved to the database. These rules can be written after you have decided on the best structure for your data, giving you more flexibility. You can think of Security Rules as the NoSQL counterpart of SQL constraints. This rule example checks all direct child keys under the clients key. Every time a new child key is saved, the rule confirms that "firstName" is a String, "birthday" is a Number, and "location" is also a String. If the types don't match, the data is not saved. So you can still validate your data structure, much like a schema with constraints. To create the data, you use the Firebase SDK. You pick a location in the database and create a reference where the data will be saved.
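As a sketch, a validation rule of this kind can be written in the Realtime Database rules JSON roughly as follows (the key names mirror the example in the text; treat the exact layout as illustrative rather than the video's own rules file):

```json
{
  "rules": {
    "clients": {
      "$customer": {
        "firstName": { ".validate": "newData.isString()" },
        "birthday":  { ".validate": "newData.isNumber()" },
        "location":  { ".validate": "newData.isString()" }
      }
    }
  }
}
```

Here `$customer` is a wildcard that matches every direct child key under clients, and each `.validate` expression must be true for a write to succeed.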


In this case, we've created a reference to the clients location. Then you specify a child key under that parent key. This child key acts like your primary key, because it is unique within that collection. After that, you can save your data. The key is unique because of the nature of JSON: if you try to add another "customer_one" key to this data structure, it replaces the previous one. There is simply no way to have two "customer_one" keys under the "clients" key. In this example, you can picture clients as a table and "customer_one" as the primary key. But it won't always be that straightforward. NoSQL databases can mimic a structure of columns and rows, but you have flexibility since you don't need to adhere to a schema, and sometimes there are better ways to structure your data. The main difference between SQL and NoSQL databases lies in data integrity versus flexibility: SQL databases are quite inflexible, but give us much more data integrity.
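The last-write-wins behaviour of duplicate JSON keys is easy to demonstrate. A quick illustration in Python (the customer_one key mirrors the example above):

```python
import json

# The same key appears twice; when the JSON is parsed, the later value
# replaces the earlier one, so there is only ever one "customer_one".
doc = """
{"clients": {
    "customer_one": {"firstName": "Ada"},
    "customer_one": {"firstName": "Grace"}
}}
"""
clients = json.loads(doc)["clients"]

num_keys = len(clients)                        # only one key survives
winner = clients["customer_one"]["firstName"]  # the last write wins
```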


NoSQL databases are very flexible, but part of the data integrity is up to the developer, as you saw with the Security Rules. In the next lesson, we'll explore a more advanced relational model and convert it to a NoSQL database. If you liked this episode, don't forget to like and subscribe to stay up to date on our content, like our Firecasts series, which features several tutorials. And if you have any questions for us, you can send them through the social network of your choice; just use the #AskFirebase hashtag, and we'll answer in the weekly Q&A show. And that's all. See you in the next episode. Thanks for watching. You might also like this one, or this one. And subscribe. Now, feel free to watch me try to catch popcorn with my mouth.


Two in a row!

Error in invoking target 'agent nmhs' of makefile while installing Oracle 11g

When installing Oracle 11g, you may get the following error:

 Error in invoking target 'agent nmhs' of makefile '/u01/app/oracle/product/11.1.0/db_1/sysman/lib/ins_emagent.mk'

Solution:-

Open the file reported in the error message:

vi $ORACLE_HOME/sysman/lib/ins_emagent.mk


Search for the line:
$(MK_EMAGENT_NMECTL)
Change it to:
$(MK_EMAGENT_NMECTL) -lnnz11




Note: edit this file while you still have the installer open; once you have changed it, just click Retry in the installer window.
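If you'd rather script the change than edit it in vi, the same substitution can be done with sed. This sketch runs against a scratch copy in /tmp so it can be tried safely; point it at $ORACLE_HOME/sysman/lib/ins_emagent.mk for the real fix:

```shell
#!/bin/sh
# Work on a scratch copy that contains the line in question.
mk=/tmp/ins_emagent.mk.demo
printf '%s\n' '$(MK_EMAGENT_NMECTL)' > "$mk"

# Append -lnnz11 to the $(MK_EMAGENT_NMECTL) line, keeping a .orig backup.
sed -i.orig 's/^\$(MK_EMAGENT_NMECTL)$/$(MK_EMAGENT_NMECTL) -lnnz11/' "$mk"

cat "$mk"
```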

Migrating databases from non-ASM to ASM in Oracle 12c


Step 1. Log in to the database and note down the paths of all the datafiles, tempfiles, and redo log files.


SQL> select status,version,instance_name from v$instance;

STATUS       VERSION           INSTANCE_NAME
------------ ----------------- ----------------
OPEN         12.1.0.2.0        orclnew


SQL> select file_name,tablespace_name from dba_data_files;
FILE_NAME                                                                        TABLESPACE_NAME
-------------------------------------------------------------------------------- -------------------------
/u01/app/oracle/oradata/ORCLNEW/datafile/o1_mf_users_dcldb9g3_.dbf               USERS
/u01/app/oracle/oradata/ORCLNEW/datafile/o1_mf_undotbs1_dcldbbm9_.dbf            UNDOTBS1
/u01/app/oracle/oradata/ORCLNEW/datafile/o1_mf_system_dcld7ty2_.dbf              SYSTEM
/u01/app/oracle/oradata/ORCLNEW/datafile/o1_mf_sysaux_dcld611r_.dbf              SYSAUX


SQL> show parameter control_files;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /u01/app/oracle/oradata/ORCLNE
                                                 W/controlfile/o1_mf_dcldcd09_.
                                                 ctl


Step 2. Let's find out which disks are available in the ASM disk group, along with their free space.

SQL> select NAME,STATE,TOTAL_MB,PATH from v$asm_disk;
NAME                           STATE      TOTAL_MB PATH
------------------------------ -------- ---------- -------------------------
VOL1                           NORMAL         9209 ORCL:VOL1
VOL2                           NORMAL         9209 ORCL:VOL2
VOL3                           NORMAL         9209 ORCL:VOL3

SQL>  select NAME,GROUP_NUMBER,FREE_MB,TOTAL_MB from v$asm_diskgroup;

NAME                           GROUP_NUMBER    FREE_MB   TOTAL_MB
------------------------------ ------------ ---------- ----------
DATA                                      1      23719      27627


Now set the control_files parameter to point to the ASM disk group DATA.

SQL> alter system set control_files='+DATA' scope=spfile;

System altered.


Step 3. Since our database is running in archivelog mode, let's change the db_recovery_file_dest location to the ASM disk group DATA.

SQL> alter system set db_recovery_file_dest='+DATA' scope=spfile;

System altered.

SQL> alter system set db_recovery_file_dest_SIZE=200M scope=spfile;

System altered.


Step 4. Shut down and start the database in nomount stage.
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup nomount
ORACLE instance started.

Total System Global Area  440401920 bytes
Fixed Size                  2925360 bytes
Variable Size             360713424 bytes
Database Buffers           71303168 bytes
Redo Buffers                5459968 bytes
SQL>



Step 5. Connect to RMAN and restore the controlfile from the file system to the disk group DATA.

[oracle@server3 ~]$ rman target sys/redhat

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Mar 3 15:12:56 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCLNEW (not mounted)

RMAN> restore controlfile from '/u01/app/oracle/oradata/ORCLNEW/controlfile/o1_mf_dcldcd09_.ctl';

Starting restore at 03-MAR-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=26 device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=+DATA/ORCLNEW/CONTROLFILE/current.271.937667637
Finished restore at 03-MAR-17


Step 6. After restoring the controlfile, mount the database from RMAN.

RMAN> alter database mount;

Statement processed
released channel: ORA_DISK_1


Step 7. Now back up the database into the ASM disk group DATA.

RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';

Starting backup at 03-MAR-17
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=28 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/u01/app/oracle/oradata/ORCLNEW/datafile/o1_mf_system_dcld7ty2_.dbf
output file name=+DATA/ORCLNEW/DATAFILE/system.272.937667983 tag=TAG20170303T151941 RECID=1 STAMP=937668020
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/u01/app/oracle/oradata/ORCLNEW/datafile/o1_mf_sysaux_dcld611r_.dbf
output file name=+DATA/ORCLNEW/DATAFILE/sysaux.273.937668029 tag=TAG20170303T151941 RECID=2 STAMP=937668059
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:36
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/ORCLNEW/datafile/o1_mf_undotbs1_dcldbbm9_.dbf
output file name=+DATA/ORCLNEW/DATAFILE/undotbs1.274.937668065 tag=TAG20170303T151941 RECID=3 STAMP=937668068
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=+DATA/ORCLNEW/CONTROLFILE/backup.275.937668073 tag=TAG20170303T151941 RECID=4 STAMP=937668074
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:04
channel ORA_DISK_1: starting datafile copy
input datafile file number=00006 name=/u01/app/oracle/oradata/ORCLNEW/datafile/o1_mf_users_dcldb9g3_.dbf
output file name=+DATA/ORCLNEW/DATAFILE/users.276.937668075 tag=TAG20170303T151941 RECID=5 STAMP=937668075
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 03-MAR-17
channel ORA_DISK_1: finished piece 1 at 03-MAR-17
piece handle=+DATA/ORCLNEW/BACKUPSET/2017_03_03/nnsnf0_tag20170303t151941_0.277.937668077 tag=TAG20170303T151941 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 03-MAR-17


Step 8. Now switch the non-ASM database to ASM by running the command below.
RMAN> switch database to copy;

datafile 1 switched to datafile copy "+DATA/ORCLNEW/DATAFILE/system.272.937667983"
datafile 3 switched to datafile copy "+DATA/ORCLNEW/DATAFILE/sysaux.273.937668029"
datafile 4 switched to datafile copy "+DATA/ORCLNEW/DATAFILE/undotbs1.274.937668065"
datafile 6 switched to datafile copy "+DATA/ORCLNEW/DATAFILE/users.276.937668075"
RMAN>


Step 9. Once the datafiles have been moved into the ASM disk group, we can open the database.
RMAN> alter database open;

Statement processed


Step 10. Once the database has been opened, verify the datafile locations.

SQL> select file_name,tablespace_name from dba_data_files;

FILE_NAME                                                                        TABLESPACE_NAME
-------------------------------------------------------------------------------- ------------------------------
+DATA/ORCLNEW/DATAFILE/users.276.937668075                                       USERS
+DATA/ORCLNEW/DATAFILE/undotbs1.274.937668065                                    UNDOTBS1
+DATA/ORCLNEW/DATAFILE/system.272.937667983                                      SYSTEM
+DATA/ORCLNEW/DATAFILE/sysaux.273.937668029                                      SYSAUX

SQL>


After moving the datafiles into the ASM disk group, we still need to move the temp tablespace and redo log files into the ASM disk group.
SQL> select file_name,tablespace_name from dba_Temp_Files;

FILE_NAME                                                                        TABLESPACE_NAME
-------------------------------------------------------------------------------- ------------------------------
/u01/app/oracle/oradata/ORCLNEW/datafile/o1_mf_temp_dcldd5h8_.tmp                TEMP


SQL> select group#,member from v$logfile;
    GROUP# MEMBER
---------- ----------------------------------------------------------------------
         3 /u01/app/oracle/oradata/ORCLNEW/onlinelog/o1_mf_3_dcldchys_.log
         2 /u01/app/oracle/oradata/ORCLNEW/onlinelog/o1_mf_2_dcldcgk9_.log
         1 /u01/app/oracle/oradata/ORCLNEW/onlinelog/o1_mf_1_dcldcfg7_.log



Step 11. To move the temp tablespace, we need to drop the existing temp file first and then recreate it in the ASM disk group.

SQL> alter database tempfile '/u01/app/oracle/oradata/ORCLNEW/datafile/o1_mf_temp_dcldd5h8_.tmp' drop including datafiles;

Database altered.


SQL> alter tablespace TEMP add tempfile '+DATA' size 150M autoextend on;

Tablespace altered.

SQL> select file_name,tablespace_name from dba_Temp_Files;

FILE_NAME                                                                        TABLESPACE_NAME
-------------------------------------------------------------------------------- ------------------------------
+DATA/ORCLNEW/TEMPFILE/temp.278.937669453                                        TEMP

SQL>



Step 12. Now move the redo log groups into the ASM disk group.

For this, we will first drop the inactive redo log groups and then recreate them.
SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 CURRENT
         2 INACTIVE
         3 INACTIVE

Here, log groups 2 and 3 are INACTIVE, so we can drop and recreate them.

Change the default location of online redo log files to the +DATA disk group.


SQL> alter system set db_create_online_log_dest_1='+DATA' SCOPE=BOTH;

System altered.


SQL> alter database drop logfile group 2;

Database altered.

SQL> alter database  add logfile group 2;

SQL> alter database drop logfile group 3;

Database altered.

SQL> alter database  add logfile group 3;


SQL>alter system switch logfile;

SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         2 CURRENT
         3 INACTIVE

SQL> alter database drop logfile group 1;

Database altered.

SQL> alter database  add logfile group 1;

SQL> select * from v$logfile;

    GROUP# STATUS  TYPE    MEMBER                                                                 IS_     CON_ID
---------- ------- ------- ---------------------------------------------------------------------- --- ----------
         3         ONLINE  +DATA/ORCLNEW/ONLINELOG/group_3.280.937672657                          NO           0
         2         ONLINE  +DATA/ORCLNEW/ONLINELOG/group_2.279.937672551                          NO           0
         1         ONLINE  +DATA/ORCLNEW/ONLINELOG/group_1.285.937672783                          NO           0


Finally, all physical datafiles and logfiles have now been moved from non-ASM to ASM.

How to clone a database using cold backup and restore it with a different SID on a new host server



Objective: We are cloning a database running on the source server and restoring it onto a new server with a different SID.

Note: Make sure the Oracle binaries are already installed on the destination server. In this case, the directory structure of the destination server is different from that of the source server.

Database Version: Oracle Enterprise Edition 11.2.0.1 on RHEL 6.4 on both servers

Source Server: 192.168.0.102
Database SID: prim
Hostname: server1.soumya.com


Destination Server: 192.168.0.104
Database SID: newprim
Hostname: server2.soumya.com


1. Take a backup of the controlfile as a trace: [SOURCE DB]

SQL> alter database backup controlfile to trace as '/u01/app/oracle/control01.sql';
Database altered.

2. Check the location of the datafiles: [SOURCE DB]

SQL>  select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/prim/users01.dbf
/u01/app/oracle/oradata/prim/undotbs01.dbf
/u01/app/oracle/oradata/prim/sysaux01.dbf
/u01/app/oracle/oradata/prim/system01.dbf
/u01/app/oracle/oradata/prim/example01.dbf


SQL> select file_name from dba_temp_files;

FILE_NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/prim/temp01.dbf


3. Shut down the database: [SOURCE DB]
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.


4. Copy the datafiles to the destination server:
[oracle@server1 ]$cd /u01/app/oracle/oradata/prim

[oracle@server1 backup]$ scp *.dbf oracle@server2:/u01/
The authenticity of host 'server2 (192.168.0.104)' can't be established.
RSA key fingerprint is 0b:59:e4:8b:b1:e6:12:3a:38:4f:ba:74:ef:8a:ad:46.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'server2,192.168.0.104' (RSA) to the list of known hosts.
oracle@server2's password:
example01.dbf                           100%  100MB  25.0MB/s   00:04   
sysaux01.dbf                            100%  500MB  16.7MB/s   00:30   
system01.dbf                            100%  670MB  19.7MB/s   00:34   
temp01.dbf                              100%   20MB   6.7MB/s   00:03   
undotbs01.dbf                           100%   55MB  27.5MB/s   00:02   
users01.dbf                             100% 5128KB   5.0MB/s   00:00   

Also copy the controlfile trace backup taken in step 1 to the destination server.

5. Make changes in the init file for the target DB: [TARGET DB]
We can copy the pfile from the source DB and then change the required parameters, such as DB_NAME, the control_files location, the audit_file_dest location, the diagnostic_dest location, and so on.


[oracle@server2 dbs]$ cat initnewprim.ora

newprim.__db_cache_size=138412032
newprim.__java_pool_size=4194304
newprim.__large_pool_size=4194304
newprim.__oracle_base='/u01/newapp/oracle'#ORACLE_BASE set from environment
newprim.__pga_aggregate_target=167772160
newprim.__sga_target=247463936
newprim.__shared_io_pool_size=0
newprim.__shared_pool_size=92274688
newprim.__streams_pool_size=0
*.audit_file_dest='/u01/newapp/oracle/admin/newprim/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='/u01/newapp/oracle/oradata/newprim/control01.ctl','/u01/newapp/oracle/flash_recovery_area/newprim/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='newprim'
*.db_recovery_file_dest='/u01/newapp/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=4070572032
*.diagnostic_dest='/u01/newapp/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=newprimXDB)'
*.memory_target=414187520
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'

Create the necessary directories for the new SID:

[oracle@server2 dbs]$ mkdir -p /u01/newapp/oracle/admin/newprim/adump
[oracle@server2 dbs]$ mkdir -p /u01/newapp/oracle/oradata/newprim/
[oracle@server2 dbs]$ mkdir -p /u01/newapp/oracle/flash_recovery_area/newprim/



6. Now start the database in nomount stage: [TARGET DB]
[oracle@server2 dbs]$ export ORACLE_SID=newprim

[oracle@server2 dbs]$sqlplus / as sysdba
SQL> startup nomount pfile='/u01/newapp/oracle/product/11.2.0/db_1/dbs/initnewprim.ora';
ORACLE instance started.

Total System Global Area  413372416 bytes
Fixed Size                  2213896 bytes
Variable Size             268437496 bytes
Database Buffers          138412032 bytes
Redo Buffers                4308992 bytes



7. Recreate the controlfile [TARGET DB]

Open the controlfile trace script and edit it:
remove the word REUSE and replace it with SET
remove the word NORESETLOGS and replace it with RESETLOGS
remove or keep the word ARCHIVELOG depending on your requirement
change the database name from "prim" to "newprim"
change the directory name everywhere from 'prim' to 'newprim'
then save and quit (:wq)
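These edits can also be scripted with sed. A hedged sketch, run here against a two-line scratch copy (for the real file, point it at the control01.sql you transferred; a blanket prim-to-newprim substitution assumes 'prim' only occurs in the database and directory names):

```shell
#!/bin/sh
# Scratch copy with a representative header line and datafile path.
f=/tmp/control01.sql.demo
printf '%s\n' 'CREATE CONTROLFILE REUSE DATABASE "PRIM" NORESETLOGS  NOARCHIVELOG' > "$f"
printf '%s\n' "  '/u01/app/oracle/oradata/prim/system01.dbf'," >> "$f"

# REUSE -> SET, NORESETLOGS -> RESETLOGS, and rename prim -> newprim.
sed -i.orig \
    -e 's/REUSE/SET/' \
    -e 's/NORESETLOGS/RESETLOGS/' \
    -e 's/"PRIM"/"NEWPRIM"/' \
    -e 's|/prim/|/newprim/|g' "$f"

cat "$f"
```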

The content should look like this

[oracle@server2 u01]$ vi control01.sql

CREATE CONTROLFILE SET DATABASE "NEWPRIM" RESETLOGS  NOARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/u01/app/oracle/oradata/newprim/redo01.log'  SIZE 50M BLOCKSIZE 512,
  GROUP 2 '/u01/app/oracle/oradata/newprim/redo02.log'  SIZE 50M BLOCKSIZE 512,
  GROUP 3 '/u01/app/oracle/oradata/newprim/redo03.log'  SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '/u01/app/oracle/oradata/newprim/system01.dbf',
  '/u01/app/oracle/oradata/newprim/sysaux01.dbf',
  '/u01/app/oracle/oradata/newprim/undotbs01.dbf',
  '/u01/app/oracle/oradata/newprim/users01.dbf',
  '/u01/app/oracle/oradata/newprim/example01.dbf'
CHARACTER SET WE8MSWIN1252
;


8. Create the controlfile at the target DB.

SQL>@/u01/control01.sql
Control file created.

9. Open the database in resetlogs mode:

SQL> ALTER DATABASE OPEN RESETLOGS;

Database altered.

10. Create the temp files.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/newapp/oracle/oradata/newprim/temp_01.dbf' SIZE 50m autoextend on next 10m maxsize unlimited;
Tablespace altered.

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
newprim

Oracle Exadata Database Machine and Cloud Service 2017 Implementation Essentials Dump

Title: Oracle Exadata Database Machine and Cloud Service 2017 Implementation Essentials Dump

Exam Code: 1Z0-338

Link: https://1drv.ms/b/s!AojZQaghYsq5ggJuRwSIWZ9M2RPO


P.S. Please keep in mind that this dump is meant only for practice; one should not rely on it alone to clear the certification.

How to clone a database using user managed hot backup in Oracle Database 11g


Objective: We are cloning a database running on the target server and restoring it onto the clone server.


Database Version: Oracle Enterprise Edition 11.2.0.1 on RHEL 6.4

Target Server: 192.168.0.101
Database sid:- prim

Clone Server: 192.168.0.104
database sid: newprim


****At target Database

Take an online backup of the database.

Before taking the online backup, we need to put the database into archivelog mode.


$ export ORACLE_SID=prim

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area  413372416 bytes
Fixed Size                  2213896 bytes
Variable Size             310380536 bytes
Database Buffers           96468992 bytes
Redo Buffers                4308992 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL>alter database begin backup;
Database altered.

SQL> create table test (id number);

Table created.

SQL> insert into test values(1);

1 row created.

SQL> /

1 row created.

SQL> /
SQL> commit;
SQL> select * from v$backup;

     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- ---------
         1 ACTIVE                1067890 01-MAY-18
         2 ACTIVE                1067890 01-MAY-18
         3 ACTIVE                1067890 01-MAY-18
         4 ACTIVE                1067890 01-MAY-18
         5 ACTIVE                1067890 01-MAY-18

SQL> exit


Now change into the datafile directory and copy all the .dbf and .log files into the backup directory:

[oracle@server1]$ cd /u01/app/oracle/oradata/prim
[oracle@server1 prim]$ cp -rpf *.log /home/oracle/bkup/
[oracle@server1 prim]$ cp -rpf *.dbf /home/oracle/bkup/


SQL> alter database end backup;

Database altered.

SQL> select * from v$backup;

     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- ---------
         1 NOT ACTIVE            1067890 01-MAY-18
         2 NOT ACTIVE            1067890 01-MAY-18
         3 NOT ACTIVE            1067890 01-MAY-18
         4 NOT ACTIVE            1067890 01-MAY-18
         5 NOT ACTIVE            1067890 01-MAY-18

Now take a backup of the controlfile:
SQL> alter database backup controlfile to trace as '/home/oracle/control.sql';

Database altered.

Now edit the controlfile script as described below.

[oracle@server1 ~]$ vi /home/oracle/control.sql
remove the word REUSE and replace it with SET
remove the word NORESETLOGS and replace it with RESETLOGS
remove or keep the word ARCHIVELOG depending on your requirement
change the database name from "prim" to "newprim"
change the folder name everywhere from 'prim' to 'newprim'
then save and quit (:wq)

The content should look like this


CREATE CONTROLFILE SET DATABASE "NEWPRIM" RESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/u01/app/oracle/oradata/newprim/redo01.log'  SIZE 50M BLOCKSIZE 512,
  GROUP 2 '/u01/app/oracle/oradata/newprim/redo02.log'  SIZE 50M BLOCKSIZE 512,
  GROUP 3 '/u01/app/oracle/oradata/newprim/redo03.log'  SIZE 50M BLOCKSIZE 512
DATAFILE
  '/u01/app/oracle/oradata/newprim/system01.dbf',
  '/u01/app/oracle/oradata/newprim/sysaux01.dbf',
  '/u01/app/oracle/oradata/newprim/undotbs01.dbf',
  '/u01/app/oracle/oradata/newprim/users01.dbf',
  '/u01/app/oracle/oradata/newprim/example01.dbf'
CHARACTER SET WE8MSWIN1252
;



Now transfer all the datafiles, the controlfile script, and the redo log files to the clone server using scp.

[oracle@server1 ~]$ scp control.sql oracle@192.168.0.104:/u01/app/oracle/oradata/newprim/
The authenticity of host '192.168.0.104 (192.168.0.104)' can't be established.
RSA key fingerprint is 0b:59:e4:8b:b1:e6:12:3a:38:4f:ba:74:ef:8a:ad:46.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.104' (RSA) to the list of known hosts.
oracle@192.168.0.104's password:
control.sql                                 100%  698     0.7KB/s   00:00   
[oracle@server1 ~]$ cd bkup/
[oracle@server1 bkup]$ ll
total 1534024
-rw-r-----. 1 oracle oinstall 104865792 May  1 20:23 example01.dbf
-rw-r-----. 1 oracle oinstall  52429312 May  1 20:33 redo01.log
-rw-r-----. 1 oracle oinstall  52429312 May  1 20:23 redo02.log
-rw-r-----. 1 oracle oinstall  52429312 May  1 20:23 redo03.log
-rw-r-----. 1 oracle oinstall 524296192 May  1 20:29 sysaux01.dbf
-rw-r-----. 1 oracle oinstall 702554112 May  1 20:33 system01.dbf
-rw-r-----. 1 oracle oinstall  20979712 May  1 19:58 temp01.dbf
-rw-r-----. 1 oracle oinstall  57679872 May  1 20:31 undotbs01.dbf
-rw-r-----. 1 oracle oinstall   5251072 May  1 20:33 users01.dbf
[oracle@server1 bkup]$ scp * oracle@192.168.0.104:/u01/app/oracle/oradata/newprim/
oracle@192.168.0.104's password:
example01.dbf                                100%  100MB  50.0MB/s   00:02   
redo01.log                                   100%   50MB  50.0MB/s   00:01   
redo02.log                                   100%   50MB  25.0MB/s   00:02   
redo03.log                                   100%   50MB  25.0MB/s   00:02   
sysaux01.dbf                                 100%  500MB  19.2MB/s   00:26   
system01.dbf                                 100%  670MB  12.6MB/s   00:53   
temp01.dbf                                   100%   20MB  20.0MB/s   00:01   
undotbs01.dbf                                100%   55MB  55.0MB/s   00:00   
users01.dbf                                  100% 5128KB   5.0MB/s   00:00   
[oracle@server1 bkup]$ cd
[oracle@server1 ~]$ ll



****At clone server

Create the necessary directory structure for the clone database:
mkdir -p /u01/app/oracle/oradata/newprim/
mkdir -p /u01/app/oracle/admin/newprim/adump


export ORACLE_SID=newprim
SQL> startup nomount pfile='/u01/app/oracle/product/11.2.0/db_1/dbs/initnewprim.ora';
ORACLE instance started.

Total System Global Area  413372416 bytes
Fixed Size                  2213896 bytes
Variable Size             268437496 bytes
Database Buffers          138412032 bytes
Redo Buffers                4308992 bytes

Now recreate the controlfile.
SQL> @/u01/app/oracle/oradata/newprim/control.sql;

Control file created.


Now recover the database.

SQL> recover database using backup controlfile until cancel;

Here, apply all the archivelog files one by one, then apply the redo log files (with their full paths) until you get the message that media recovery is complete.

It is better to note down the current redo log file and the pending archivelogs on the target DB at the time of backup; we need to apply only those
archives and only the one current redo log file.


SQL> recover database using backup controlfile until cancel;
ORA-00279: change 1067890 generated at 05/01/2018 20:23:49 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/NEWPRIM/archivelog/2014_12_28/o1_mf_1_7_%u_.
arc
ORA-00280: change 1067890 for thread 1 is in sequence #7


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/app/oracle/flash_recovery_area/NEWPRIM/archivelog/2014_12_28/o1_mf_1_7_%u_.arc
ORA-00308: cannot open archived log
'/u01/app/oracle/flash_recovery_area/NEWPRIM/archivelog/2014_12_28/o1_mf_1_7_%u_.arc'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/app/oracle/oradata/newprim/redo01.log
Log applied.
Media recovery complete.


Open the database with resetlogs

SQL> alter database open resetlogs;

Database altered.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
newprim

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

The process of cloning a database to a new server using a user-managed backup is now complete.
Thanks
