Expdp and Impdp

How to Take schema backup using Expdp

Export and Import are the most commonly used backup and restoration tools for the Oracle Database Administrator, and they have been improved in every version of Oracle Database. Before Oracle 10g the classic exp and imp utilities were used; from 10g onward they were superseded by the Data Pump utilities expdp and impdp.

Here, I am using expdp to take the schema backup, because exp is almost obsolete nowadays. In both the expdp and impdp examples I will use the SYSTEM database user for taking and restoring backups. An export taken as SYSTEM includes the user-creation and grant statements, so on the destination server the schema user is automatically created with the same grants.

After successful execution, the backup command creates a .dmp file in the defined directory on the database server. To restore it, the DBA has to move this file from the source to the destination server and execute the impdp command.

Prerequisites for Export and Import Backups:
     The database must be in open mode.
     A database directory object must have been created.
Steps to take Schema Backup using Expdp:
Below are the four steps to take schema backup using expdp.
1. Set the database environment variable
Export the SID of the database using the export command:
[oracle@test ~]$ export ORACLE_SID=test01
[oracle@test ~]$ echo $ORACLE_SID
test01
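The step above can be sketched as a small setup script. The ORACLE_HOME path below is a hypothetical installation path, not taken from the article; substitute your own.

```shell
# Set the instance to work against; the SID is the article's example.
export ORACLE_SID=test01
# Hypothetical installation path -- adjust for your server.
export ORACLE_HOME=/opt/oracle/product/10.2.0/db_1
export PATH="$ORACLE_HOME/bin:$PATH"

# Fail early if the SID was somehow not set; expdp would otherwise
# connect to the wrong instance or error out later.
: "${ORACLE_SID:?ORACLE_SID must be set before running expdp}"
echo "$ORACLE_SID"
```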
2. Backup Command Syntax:
Suppose you want to take a backup of the schema remote_dba. Then use this command:
Syntax:
[oracle@test ~]$ expdp {user_name}/{Password} dumpfile={Dump file name} directory={Name of directory } logfile={Log file Name} schemas={Backup schema Name}
Example:
[oracle@test ~]$ expdp system/sys dumpfile=remote_dba.dmp directory=backup logfile=remote_dba_imp.log  schemas=remote_dba
Details of the parameters used in the expdp command:
There are various parameters used with the expdp command; we must know them to take a successful backup.

User Name/Password: For a full schema backup use the SYSTEM user. The export will then include all the commands needed to create the user and assign grants to it at the time of schema restoration on the destination database.

Dumpfile: Name of the backup file to be created. Give any appropriate name with the extension .dmp.
Directory: This is the database directory object pointing to the path where backups will be placed. We can use the default directory created by Oracle, DATA_PUMP_DIR, which is physically located at $ORACLE_HOME/rdbms/log.
If you want to place the expdp backup somewhere else, the DBA has to create a new database directory. Use the following to create one:
a. Before creating the directory, check:

The physical location of the directory must exist, e.g. /opt/oracle.
The directory must be owned by the oracle user. Use this command:
[oracle@test opt]$ ls -ltrh
total 12K
drwxrwxrwx  15 oracle oinstall  12K Aug  4  2010 oracle
Here, oracle is the owner of the /opt/oracle folder.
b. Create Database Directory
SQL> create directory backup as '/opt/oracle';
Directory created.
SQL> grant read, write on directory backup  to remote_dba;
Grant succeeded.
Here, I have created a new database directory named backup. Now I can use this name in the directory parameter of the expdp command, and my backups will be stored at /opt/oracle.
Schemas: Name of the schema to back up. More than one schema is allowed, separated by commas; a single dump file set is generated for all of them.
Logfile: The name of the log file recording the backup. The log file is written to the same location given in the directory parameter. It is recommended to give a logfile name for future reference.
Above are the minimum parameters needed to take a schema backup.
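These minimum parameters can also be kept in a parameter file, using expdp's standard parfile option, which keeps the options out of the shell command line. A minimal sketch, assuming the directory object backup from the example above; the /tmp path is an arbitrary choice:

```shell
# Write the Data Pump options to a parameter file (parfile).
cat > /tmp/remote_dba_exp.par <<'EOF'
dumpfile=remote_dba.dmp
directory=backup
logfile=remote_dba_exp.log
schemas=remote_dba
EOF

# The backup is then taken with:
#   expdp system/sys parfile=/tmp/remote_dba_exp.par
cat /tmp/remote_dba_exp.par
```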

3. Execute the backup command: Once you execute the backup command, you will see output like this on your screen.

[oracle@test ~]$ expdp system/sys dumpfile=remote_dba.dmp directory=backup logfile=remote_dba.log  schemas=remote_dba

Export: Release 10.2.0.4.0 - Production on Thursday, 19 May, 2011 11:45:48
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/******** dumpfile=remote_dba.dmp directory=backup logfile=remote_dba.log schemas=remote_dba
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 38 MB
------------------------------------------------
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "REMOTE_DBA"."TEST1"                   11.16 MB  123342 rows
. . exported "REMOTE_DBA"."TEST2"                   4.671 MB   25545 rows
. . exported "REMOTE_DBA"."TEST3"                   2.380 MB   20221 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:  /opt/oracle/remote_dba.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 11:46:16

The database backup is now done; the path of the backup file is /opt/oracle/remote_dba.dmp.

How to Restore schema using Impdp

Export and Import are the most commonly used backup and restoration techniques available to the DBA in Oracle. Since you have already taken a backup of your database schema using expdp, it is now time to restore it.

Prerequisites for Import:
The destination database must be in open mode.
A database directory object must have been created.
Four Steps to Restore Schema Backup using Impdp:
The Database Administrator has to move the .dmp backup of the source schema from the source database server to the destination database directory. The DBA can find existing database directories using:
SQL> select * from dba_directories;
1. Set the database environment variable
Export the SID of the database using the export command:
[oracle@test ~]$ export ORACLE_SID=test01
[oracle@test ~]$ echo $ORACLE_SID
test01
2. Dump restoration scenarios:
In dump restoration there could be any number of scenarios for the DBA to consider. Here I am covering the two most commonly used ones.
a. Restoring dump on same database Instance.
b. Restoring dump on different database Instance.
Let’s discuss both one by one.

Restoring a Schema Dump on the Same Database Instance: Usually this type of restoration is done when the Database Administrator wants to make a replica of an existing schema on the same instance. Since you are restoring the schema on the same database, you have to create a new database user for the restoration. The impdp command executed by the SYSTEM user will automatically create the new database user under the different name you give, but with the same privileges. The DBA just has to add the remap_schema parameter to the impdp command.

Syntax:
[oracle@test ~]$ impdp {user_name}/{Password} dumpfile={Dump file name} directory={Name of directory} logfile={Log file Name} schemas={Backup schema Name} remap_schema={existing_schema_name:new_schema_name}
Example:
[oracle@test ~]$ impdp system/sys dumpfile=remote_dba.dmp directory=backup logfile=test_schema_imp.log  schemas=remote_dba remap_schema=remote_dba:remote_dba1

Restoring a Schema Dump on a Different Database Instance: A DBA uses this approach when moving a schema from one database to another. Suppose the DBA is moving the user remote_dba from server1 to server2. There are two possibilities for the database user's existence on the destination database. Likewise there are two possibilities for tablespaces: either all the tablespaces that exist on the source also exist on the destination, or they do not. Let's see them all in detail.

A. Database user does not exist: If the database user (e.g. remote_dba) does not exist on the destination instance, proceed with the dump restoration; the user will be created automatically.
B. Database user exists: If the database user (e.g. remote_dba) already exists on the destination instance, we need to drop the user and then restore the dump. The steps to drop a user follow:
    Connect to the database as the SYS user and execute the following:
    SQL> drop user remote_dba cascade;
    User dropped.
    If you get this message then proceed with the restoration; otherwise you may encounter an error like:
    SQL> drop user remote_dba cascade;
    drop user remote_dba cascade
    *

    ERROR at line 1:

    ORA-01940: cannot drop a user that is currently connected.

    To resolve this, connect as the SYS user and execute the following (Oracle stores usernames in uppercase):
    SQL> select sid, serial# from v$session where username like 'REMOTE_DBA%';
           SID    SERIAL#
    ---------- ----------
           110        654
    Now kill all the connected sessions using:
    SQL> ALTER SYSTEM KILL SESSION '110,654' IMMEDIATE;
    System altered.
    Now execute the drop user command again:
    SQL> drop user remote_dba cascade;
    User dropped.
    Now you can proceed with dump restoration.
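When many sessions are connected, the kill statements can be generated instead of typed one by one. This is a sketch, not from the original article, assuming SYSDBA access; run the statements it emits, then retry the drop:

```sql
-- Generate one KILL SESSION statement per connected session of the
-- user. v$session stores usernames in uppercase.
SELECT 'ALTER SYSTEM KILL SESSION ''' || sid || ',' || serial#
       || ''' IMMEDIATE;' AS kill_cmd
  FROM v$session
 WHERE username = 'REMOTE_DBA';
```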

C. Tablespaces exist: If tablespaces with the same names as on the source exist on the destination, there is no issue with restoration, because by default the restore process looks for tablespaces of the same names on the destination.

D. Tablespaces do not exist: If tablespaces with the same names as on the source do not exist on the destination, restoration will return the error "ORA-00959: tablespace 'test_tablespace' does not exist".
To avoid this error use the remap_tablespace parameter with the impdp command. More than one tablespace can be remapped, comma separated, like:
remap_tablespace={source_tablespace_name:dest_tablespace_name},{source_tablespace_name1:dest_tablespace_name1}
Syntax:

[oracle@test ~]$ impdp {user_name}/{Password} dumpfile={Dump file name} directory={Name of directory} logfile={Log file Name} schemas={Backup schema Name} remap_tablespace={source_tablespace_name:dest_tablespace_name}

Example:

[oracle@test ~]$ impdp system/sys dumpfile=remote_dba.dmp directory=backup logfile=test_schema_imp.log schemas=remote_dba remap_tablespace=tablespace_name1:tablespace_name2

3. Execute the impdp command: This output is for a simple case: the DBA is restoring the schema on a different database instance where all the tablespaces available on the source exist, and the destination database has no user remote_dba, so this impdp will create a new remote_dba user.

[oracle@test opt]$ impdp system/sys dumpfile=remote_dba.dmp directory=backup logfile=remote_dba_imp.log schemas=remote_dba

Import: Release 10.2.0.4.0 - Production on Friday, 20 May, 2011 11:25:06
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** dumpfile=remote_dba.dmp directory=backup logfile= remote_dba_imp.log schemas=remote_dba
------------------------------------------------------------------------------------
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 11:31:56

Check the restoration log, remote_dba_imp.log, in the directory folder.
As explained, this impdp creates a new database user remote_dba in the destination database, and this user is locked by default. So the DBA has to unlock it and set a password for it:
SQL> alter user remote_dba identified by {password};
SQL> alter user remote_dba account unlock;
4. Some known issues:
A. DB links are not created properly: reconfigure them.
B. Jobs are not restored: get the job creation script from the source schema and run it in the destination schema.
C. Invalid objects: compile all invalid objects.
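For the invalid objects, Oracle ships the UTL_RECOMP package for recompiling a schema. A sketch run as a privileged user; the schema name is the article's example:

```sql
-- Recompile all invalid objects in the imported schema.
EXEC UTL_RECOMP.RECOMP_SERIAL('REMOTE_DBA');

-- List anything still invalid afterwards.
SELECT object_name, object_type
  FROM dba_objects
 WHERE owner = 'REMOTE_DBA'
   AND status = 'INVALID';
```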
The DBA can also take and restore a whole database backup instead of a single schema backup; see "What Everybody Should Know about Export Database and Import Database in Oracle".
Were you able to restore your dump with the help of this article? Please share!!

http://www.dbas-oracle.com/2011/05/how-to-take-schema-backup-using-expdp.html

http://www.dbas-oracle.com/2012/11/how-to-restore-schema-using-impdp.html

Thanks to Umesh Sharma (founder of DBA-Oracles.com)

Naming Conventions

Dear Friends,

Please study the below and provide your feedback by email (csenthilmurugan@hotmail.com).

General Rules

1. Only letters, numbers, and the underscore are allowed in names. Although Oracle allows $ and #, they are not necessary and may cause unexpected problems.
2. All names are in UPPERCASE, or at least the case is of no importance. Ignoring this rule usually makes referencing tables and columns very clumsy, because all names must then be enclosed in double quotes.
3. The first character in the name must be a letter.
4. Keep the names meaningful, but at the same time don't use long_names_describing_every_single_detail_of_particular_object.

Tables

1. Table names are in plural form, for example, persons, materials, addresses. If table name contains more than one word, they are separated with underscore in form {name1}_{name2}. Only the last one is in plural, for example person_addresses.
2. All tables have 3 character long aliases that are unique in a schema. Aliases are not directly used in name of table, but they are used to create column names. For example, persons – prs, materials – mat, addresses – adr.
3. Sometimes it is useful to distinguish some logical parts of an application. Therefore prefixes are used in table names. For example, sec_users, sec_roles, sec_rights all are related to security subsystem.

Columns (for tables)

1. All columns are in form {alias}_{colname}. For example prs_id, prs_name, prs_adr_id, adr_street_name. This guarantees that column names are unique in a schema, except denormalized columns from another table, which are populated using triggers or application logic.
2. All column names are singular. If you think you need a column name in plural, think twice about whether the design is right: usually it means you are storing multiple values in the same column, which should be avoided. Provide a remarks column in all tables, which can be used to capture unexpected values.
3. All tables have a surrogate primary key column in the form {alias}_id, which is the first column in the table. For example, prs_id, mat_id, adr_id.
To my mind, surrogate primary keys have at least the following benefits:
  1) each table will have its own _id
  2) they will always be of the same type
  3) so the front end, as well as a SQL*Plus session, can use a consistent framework to access all tables in the SAME way
  4) they don't have any natural meaning, and even with the most immutable natural keys in the world there is a possibility that they'll change, and I really don't want to change all the connected FK columns
  5) I really don't want to write joins including more than one column for each pair of tables, because even with one join column developers sometimes create Cartesian joins (once I had to build reports on parent->child->grandchild->great-grandchild tables, joining 3 columns for each pair of tables)
  6) if you always need to access parent columns from a great-great-..-grandchild, you can easily add derived FKs (or even other columns, i.e. denormalize), but I'd say these are specific cases, not ordinary needs. Of course these (at least mostly) have to be identified even BEFORE you generate any CREATE TABLE scripts.

  To my mind, UKs are just for natural keys, and the potential waste of space from a properly cached, sequence-generated surrogate key in the table and its index is far outweighed by the potential waste of a 3-column FK, the potential change of it, and the inconsistent access to tables, both in developers' minds and on the Oracle side.
4. All foreign key columns are in form {alias1}_{alias2}_id. For example, prs_adr_id. Try to put all foreign key columns just after the primary key column, mostly because of human factor. The first glance gives nice overview how many and which tables are referenced. Of course as time goes by, schema evolves and columns are added situation may change.
5. If there is more than one foreign key column to another table then foreign key columns are named {alias1}_{alias2}_{meaningful_name}_id. For example, prs_adr_curr_id, prs_adr_prev_id.
6. If several tables contain columns with the same content, use the same consistent column names. For example, if you need some auditing info, use {alias}_last_chg_time and {alias}_last_chg_user for all tables. Of course you can choose your own column names, but use them consistently. The same applies to flag columns, for example {alias}_is_active, and to columns containing descriptive info: use notes, description, comments or whatever, but only one of them for the same purpose (remarks is the best one). All consistencies help in understanding the data model; all inconsistencies prevent it.

Keys, Constraints

1. All primary keys are named {alias}_pk. For example prs_pk.
2. All foreign keys are named {alias1}_{alias2}_fk. For example prs_adr_fk.
3. If there are more than one foreign key to another table then foreign keys are named {alias1}_{alias2}_{meaningful_name}_fk. For example, prs_adr_curr_fk, prs_adr_prev_fk.
4. All unique keys are named {alias}_{meaningful_name}_uk. For example, mat_typename_uk which is on columns mat_name and mat_type.
5. All check constraints are named {alias}_{meaningful_name|number}_ck. For example, mat_type_ck which is on column mat_type.

Indexes

1. Every index on foreign key is in form {alias1}_{alias2}_fk_i. For example, prs_adr_fk_i.
2. Every index on one column is in form {full_column_name}_i. For example, mat_name_i.
3. Every index on two or more columns is in form {alias}_{meaningful_name|number}_i. For example, mat_1_i, mat_2_i.

Sequences

1. Every table has its own sequence in form {alias}_seq. It and ONLY it is used to populate corresponding primary key column. If an application needs some precreated data, remember to create sequences with start value greater than max value of corresponding column. For example, prs_seq, mat_seq.
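The table, column, key, index, and sequence rules above can be illustrated in one DDL sketch. Every name here is a hypothetical example, and it assumes an addresses table with column adr_id already exists:

```sql
-- Table persons, alias prs, following the conventions above.
CREATE TABLE persons (
  prs_id        NUMBER        NOT NULL,        -- surrogate PK, first column
  prs_adr_id    NUMBER,                        -- FK to addresses (alias adr)
  prs_name      VARCHAR2(100) NOT NULL,
  prs_is_active CHAR(1) DEFAULT 'Y' NOT NULL,  -- consistent flag column
  CONSTRAINT prs_pk        PRIMARY KEY (prs_id),
  CONSTRAINT prs_adr_fk    FOREIGN KEY (prs_adr_id) REFERENCES addresses (adr_id),
  CONSTRAINT prs_name_uk   UNIQUE (prs_name),
  CONSTRAINT prs_active_ck CHECK (prs_is_active IN ('Y', 'N'))
);

CREATE INDEX prs_adr_fk_i ON persons (prs_adr_id);  -- index on the FK
CREATE SEQUENCE prs_seq;  -- used ONLY to populate prs_id
```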

Views

1. View names are prefixed with vw_.
2. View names are in plural form, for example, vw_students. If view name contains more than one word, they are separated with underscore in form vw_{name1}_{name2}. Only the last one is in plural, for example vw_student_addresses.
3. All views have 3 or 4 character long aliases that are unique in a schema. It is necessary only in case the view contains some complex derived columns from underlying tables.

Columns (for views)

1. All column names derived directly from tables stay the same. For example, column prs_name in view vw_students is derived from table persons. Of course, if the view is a self join of two tables, you have to make the view column names unique, for example prs_parent_name and prs_child_name in view vw_parents, derived from a self join of persons.
2. All column names that aren't directly derived from underlying tables, e.g. summary columns, calculated columns etc., are in the form {view_alias}_{colname}. For example, column sta_complete_address (concatenation of adr_country, adr_city and adr_street) for view vw_student_addresses.

Conclusion

The advantages are that one can reliably identify an object's type, and its relation to a table, from queries against the data dictionary, and that aliases are unnecessary in almost all selects (aliases are needed only in self joins).
One disadvantage, at least: longer column names.

There must be at least some standard for developing an application and schema, though not necessarily this one. I think every approach is good enough as long as it isn't chaotic and you are satisfied with it.

Thanks to GINTS PLIVNA