
12 April 2026

Oracle Data Pump Utilities: EXPDP and IMPDP

The Oracle Data Pump utilities — EXPDP (Export Data Pump) and IMPDP (Import Data Pump) — are powerful tools used for logical backup and restore in Oracle Database. 

They are faster and more flexible than the old exp/imp utilities.


Logical backups using Datapump:

Table level 

Schema level

Entire database

Tablespace level

Transportable tablespace level



Create Data Pump Directory at OS level:

[oracle@ora101 ~]$ mkdir -p /u01/app/datapump/

Create directory inside the database:

SQL> create directory DATA_PUMP as '/u01/app/datapump/';

Directory created.

Grant permissions on directory to system user:

SQL> GRANT READ,WRITE on directory DATA_PUMP to system;

Grant succeeded.

View directory information:

SQL> select * from dba_directories where DIRECTORY_NAME='DATA_PUMP';

OWNER   DIRECTORY_NAME   DIRECTORY_PATH       ORIGIN_CON_ID
------- ---------------- -------------------- -------------
SYS     DATA_PUMP        /u01/app/datapump/               0

Table Level Export:

[oracle@ora101 ~]$ expdp system/Manager1 directory=DATA_PUMP tables=HR.EMPLOYEES dumpfile=HR_EMPLOYEES.dump logfile=HR_EMPLOYEES.log


[oracle@ora101 datapump]$ expdp system/Manager1 directory=DATA_PUMP tables=HR.EMPLOYEES,HR.JOBS dumpfile=HR_JOBS_EMPLOYEES.dump logfile=HR_JOBS_EMPLOYEES.log

[oracle@ora101 datapump]$ expdp system/Manager1 directory=DATA_PUMP tables=HR.EMPLOYEES,INFODBA.DEPT1 dumpfile=HR_EMPLOYEES_INFODBA.DEPT1.dump logfile=HR_EMPLOYEESINFODBA.DEPT1.log

[oracle@ora101 datapump]$ expdp system/Manager1 directory=DATA_PUMP tables=HR.EMPLOYEES,INFODBA.DEPT1 dumpfile=HR_EMPLOYEES_INFODBA.DEPT1.dump logfile=HR_EMPLOYEESINFODBA.DEPT1.log PARALLEL=4

[oracle@ora101 ~]$ expdp system/Manager1 directory=DATA_PUMP tables=HR.EMPLOYEES dumpfile=HR_EMPLOYEESDATA_ONLY.dump logfile=HR_EMPLOYEESDATA_ONLY.log content=DATA_ONLY

[oracle@ora101 ~]$ expdp system/Manager1 directory=DATA_PUMP tables=HR.EMPLOYEES dumpfile=HR_EMPLOYEESMETADATA_ONL.dump logfile=HR_EMPLOYEESMETADATA_ONL.log content=METADATA_ONLY

Table Level Import:

[oracle@ora101 datapump]$ impdp system/Manager1 directory=DATA_PUMP tables=HR.EMPLOYEES dumpfile=HR_EMPLOYEES.dump logfile=HR_EMPLOYEES.log 

[oracle@ora101 datapump]$ impdp system/Manager1 directory=DATA_PUMP tables=HR.EMPLOYEES dumpfile=HR_EMPLOYEES.dump logfile=HR_EMPLOYEES.log table_exists_action=skip

[oracle@ora101 datapump]$ impdp system/Manager1 directory=DATA_PUMP tables=HR.EMPLOYEES REMAP_TABLE=HR.EMPLOYEES:HR dumpfile=HR_EMPLOYEES.dump logfile=HR_EMPLOYEES.log

[oracle@ora101 datapump]$ impdp system/Manager1 directory=DATA_PUMP tables=HR.EMPLOYEES REMAP_SCHEMA=HR:INFODBA dumpfile=HR_EMPLOYEES.dump logfile=HR_EMPLOYEES.log

[oracle@ora101 datapump]$ impdp system/Manager1 directory=DATA_PUMP tables=HR.EMPLOYEES REMAP_SCHEMA=HR:INFODBA REMAP_TABLE=EMPLOYEES:EMPLOYEES1  dumpfile=HR_EMPLOYEES.dump logfile=HR_EMPLOYEES.log



Schema Level Export:

[oracle@ora101 ~]$ expdp system/Manager1 directory=DATA_PUMP schemas=HR dumpfile=HR.dump logfile=HR.log PARALLEL=4

[oracle@ora101 ~]$ expdp system/Manager1 directory=DATA_PUMP schemas=HR,INFODBA dumpfile=HR_INFODBA.dump logfile=HR_INFODBA.log PARALLEL=4 

[oracle@ora101 ~]$ expdp system/Manager1 directory=DATA_PUMP schemas=HR dumpfile=HR_DATA_ONLY.dump logfile=HR_DATA_ONLY.log content=DATA_ONLY  PARALLEL=4

[oracle@ora101 ~]$ expdp system/Manager1 directory=DATA_PUMP schemas=HR dumpfile=HR_METADATA_ONLY.dump logfile=HR_METADATA_ONLY.log content=METADATA_ONLY PARALLEL=4


Schema Level Import:

[oracle@ora101 datapump]$ impdp system/Manager1 directory=DATA_PUMP dumpfile=HR.dump logfile=HR.log schemas=HR

[oracle@ora101 datapump]$ impdp system/Manager1 directory=DATA_PUMP dumpfile=HR.dump logfile=HR.log remap_schema=HR:HR

[oracle@ora101 datapump]$ impdp system/Manager1 directory=DATA_PUMP dumpfile=HR.dump logfile=HR.log remap_schema=HR:APP


Database Level Export:

[oracle@ora101 ~]$ expdp system/Manager1 directory=DATA_PUMP FULL=Y dumpfile=PROD_FULL.dump logfile=PROD_FULL.log


Database Level Import:

Import full database

On source
SQL> select name from v$tablespace;

On target
SQL> select name from v$tablespace;

Create missing tablespaces on target
Make sure target tablespace has enough free space
Drop all non-oracle schemas (done during refresh)
DROP USER HR CASCADE;

[oracle@ora101 datapump]$ impdp system/Manager1 directory=DATA_PUMP FULL=Y dumpfile=PROD_FULL.dump logfile=PROD_FULL.log

[oracle@ora101 datapump]$ impdp system/Manager1 directory=DATA_PUMP FULL=Y dumpfile=PROD_FULL.dump logfile=PROD_FULL.log table_exists_action=skip


Tablespace Level Export:

[oracle@ora101 ~]$ expdp system/Manager1 directory=DATA_PUMP TABLESPACES=HR dumpfile=TBS_HR.dump logfile=TBS_HR.log

[oracle@ora101 ~]$ expdp system/Manager1 directory=DATA_PUMP TABLESPACES=HR,MANIT dumpfile=TBS_HR_MANIT.dump logfile=TBS_HR_MANIT.log

[oracle@ora101 ~]$ expdp system/Manager1 directory=DATA_PUMP TABLESPACES=HR dumpfile=TBS_HR.dump logfile=TBS_HR.log PARALLEL=4 COMPRESSION=ALL


Useful EXPDP options:

Option                        Purpose
PARALLEL=4                    Faster export
COMPRESSION=ALL               Reduce size
EXCLUDE=STATISTICS            Skip stats
FLASHBACK_TIME=SYSTIMESTAMP   Consistent backup
LOGTIME=ALL                   Detailed logs

TABLE_EXISTS_ACTION values for IMPDP:

Option     What Happens      Use Case
SKIP       Ignore table      Safe runs
APPEND     Add rows          Data merge
TRUNCATE   Delete + reload   Refresh
REPLACE    Drop & recreate   Migration
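The options above can be collected into a Data Pump parameter file instead of a long command line — a sketch, assuming the DATA_PUMP directory created earlier (the expdp invocation is shown commented out because it needs a running database; the %U in the dumpfile name lets PARALLEL write multiple pieces):

```shell
# Write a Data Pump parameter file combining the tuning options above.
cat > /tmp/hr_export.par <<'EOF'
directory=DATA_PUMP
schemas=HR
dumpfile=HR_%U.dump
logfile=HR.log
parallel=4
compression=ALL
exclude=STATISTICS
flashback_time=SYSTIMESTAMP
logtime=ALL
EOF

# Run the export against a live database (requires Oracle, so commented out):
# expdp system/Manager1 parfile=/tmp/hr_export.par

cat /tmp/hr_export.par
```

A parfile also sidesteps shell-quoting problems with values like FLASHBACK_TIME=SYSTIMESTAMP.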


Data Pump Export Progress %:

Use the query below to get the percentage (%) completion of the export process:

SELECT SID, SERIAL#, USERNAME, CONTEXT, SOFAR, TOTALWORK,
       ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE"
FROM V$SESSION_LONGOPS
WHERE TOTALWORK != 0 AND SOFAR <> TOTALWORK;

SELECT * FROM dba_datapump_sessions;


SET LINES 200
COL job_name FORMAT A25
COL state FORMAT A12

SELECT owner_name, job_name, operation, job_mode, state
FROM dba_datapump_jobs;

SELECT sid, serial#, opname,
ROUND(sofar/totalwork*100,2) "%DONE"
FROM v$session_longops
WHERE opname LIKE 'SYS_EXPORT%' OR opname LIKE 'SYS_IMPORT%';






10 April 2026

Oracle Database Tablespace Management

A tablespace is a logical storage container in Oracle Database used to store database objects such as tables and indexes. Each tablespace consists of one or more physical datafiles where the actual data is stored.

There are two types of tablespaces in Oracle:

1) Normal (smallfile) Tablespace

2) Bigfile Tablespace


Note: The maximum datafile size depends on the database block size.

Normal Tablespace

Block Size   Max Datafile Size
2 KB         8 GB
4 KB         16 GB
8 KB         32 GB
16 KB        64 GB
32 KB        128 GB


Bigfile Tablespace:

Block Size   Max Datafile Size
8 KB         32 TB
16 KB        64 TB
32 KB        128 TB
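The figures in both tables come from Oracle's per-datafile block ceilings: roughly 2^22 blocks for a normal (smallfile) datafile and 2^32 blocks for a bigfile datafile. A quick shell check of the 8 KB rows:

```shell
# Max datafile size = block size * max blocks per datafile.
# Smallfile: ~2^22 blocks; bigfile: ~2^32 blocks (approximate ceilings).
block_kb=8
echo "smallfile: $(( block_kb * (1 << 22) / 1024 / 1024 )) GB"
echo "bigfile:   $(( block_kb * (1 << 32) / 1024 / 1024 / 1024 )) TB"
# → smallfile: 32 GB
# → bigfile:   32 TB
```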


Default Tablespaces:

SYSTEM:

Stores data dictionary objects

Contains core metadata of the database

Must always be ONLINE

SYSAUX:

Auxiliary tablespace for SYSTEM

Stores components like:

AWR (Automatic Workload Repository)

OEM data

UNDO:

Stores undo data:

Rollback operations, read consistency and flashback

Users:

Default tablespace for normal users

Temp:

Temporary operations:

Sorting, joins and hash operations


You’ll deal with three main types:

Permanent tablespaces — where actual table/index data lives.
Temporary tablespaces — used for sorts, joins, and other temporary work.
Undo tablespaces — store undo data for rollbacks and read consistency.

Tablespace States:

State        Description
ONLINE       Accessible
OFFLINE      Not accessible
READ ONLY    No changes allowed
READ WRITE   Default

To View Tablespace Information:

v$tablespace

v$datafile

v$tempfile

dba_tablespaces

dba_data_files

dba_temp_files


Create tablespace:

Create tablespace PROD_DATA datafile '/2a/data/oracle12c/SDADS/datafile/PROD_DATA.dbf' size 1G;

Create tablespace with autoextend on:

CREATE TABLESPACE PROD DATAFILE '/2a/data/oracle12c/SDADS/datafile/PROD.dbf' SIZE 500M AUTOEXTEND ON;

CREATE TABLESPACE PROD_IDATA DATAFILE '/2a/data/oracle12c/SDADS/datafile/PROD_IDATA.dbf' SIZE 500M AUTOEXTEND ON NEXT 50M MAXSIZE UNLIMITED;


Resize Datafile:

alter database datafile '/2a/data/oracle12c/SDADS/datafile/prod1.dbf' resize 300m;

Add a new datafile to existing tablespace:

alter tablespace PROD add datafile '/2a/data/oracle12c/SDADS/datafile/prod1.dbf' size 20m;


Bigfile Tablespace:

CREATE BIGFILE TABLESPACE Big_DATA DATAFILE '/2a/data/oracle12c/SDADS/datafile/Big_DATA.dbf' SIZE 5G AUTOEXTEND ON;

Resize Big Tablespace Datafile:

ALTER DATABASE DATAFILE '/2a/data/oracle12c/SDADS/datafile/Big_DATA.dbf'  RESIZE 20G;

Enable Autoextend:

ALTER DATABASE DATAFILE '/2a/data/oracle12c/SDADS/datafile/Big_DATA.dbf' AUTOEXTEND ON;

ALTER DATABASE DATAFILE '/2a/data/oracle12c/SDADS/datafile/Big_DATA.dbf' AUTOEXTEND ON NEXT 1G MAXSIZE 100G;


Drop Tablespace:

Drop a tablespace without removing the physical database files:

drop tablespace PROD_DATA;

Drop tablespace including the physical datafiles:

Drop tablespace PROD_DATA including contents and datafiles;


Make tablespace Read only:

alter tablespace PROD_DATA READ ONLY;

alter tablespace PROD_DATA read write;


Take tablespace offline:

ALTER TABLESPACE PROD_DATA OFFLINE;

ALTER TABLESPACE PROD_DATA ONLINE;


Create a temporary tablespace:

CREATE TEMPORARY TABLESPACE TEMP1 TEMPFILE '/1a/app/oracle/oradata/ORCL/datafile/temp01.dbf' size 1G autoextend on next 200M;


Adding a tempfile:

alter tablespace TEMP1 add tempfile '/1a/app/oracle/oradata/ORCL/datafile/temp02.dbf' size 1G autoextend on next 200M;


Resize tempfile:

alter database tempfile '/1a/app/oracle/oradata/ORCL/datafile/temp02.dbf' resize 2G;


Drop Temp Tablespace:

select FILE_NAME,FILE_ID,TABLESPACE_NAME,

AUTOEXTENSIBLE,STATUS,maxbytes/1024/1024/1024,BYTES/1024/1024/1024 

from dba_temp_files ;


Verify if any active sessions are using the temp files:

SELECT b.tablespace,b.segfile#,b.segblk#,b.blocks,a.sid,a.serial#,

a.username,a.osuser, a.status

FROM v$session a,v$sort_usage b

WHERE a.saddr = b.session_addr;


If any active sessions are using the temp tablespace, kill them:

alter system kill session 'SID_NUMBER, SERIAL#' immediate;

Drop the tempfiles:

alter tablespace TEMP1 drop tempfile '/1a/app/oracle/oradata/ORCL/datafile/temp01.dbf';

ALTER DATABASE TEMPFILE '/1a/app/oracle/oradata/ORCL/datafile/temp02.dbf' DROP INCLUDING DATAFILES;


Tablespaces and Datafiles Info:

SELECT file_name, tablespace_name, bytes/1024/1024 MB

FROM dba_data_files;


SELECT tablespace_name, bigfile FROM dba_tablespaces WHERE tablespace_name = 'PROD_BIGTS';

select FILE_NAME,TABLESPACE_NAME,BYTES/1024/1024,STATUS,AUTOEXTENSIBLE from dba_data_files;

SELECT tablespace_name, ROUND((used_space / tablespace_size) * 100, 2) pct_used

FROM dba_tablespace_usage_metrics;


Tablespace Usage Query:

SELECT

    df.tablespace_name,

    ROUND(df.total_mb,2) total_mb,

    ROUND(fs.free_mb,2) free_mb,

    ROUND(df.total_mb - fs.free_mb,2) used_mb,

    ROUND((df.total_mb - fs.free_mb)/df.total_mb * 100,2) used_pct

FROM

(SELECT tablespace_name, SUM(bytes)/1024/1024 total_mb

 FROM dba_data_files

 GROUP BY tablespace_name) df,

(SELECT tablespace_name, SUM(bytes)/1024/1024 free_mb

 FROM dba_free_space

 GROUP BY tablespace_name) fs

WHERE df.tablespace_name = fs.tablespace_name

ORDER BY used_pct DESC;
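The used_pct arithmetic in the query can be sanity-checked outside the database; a shell sketch with made-up sizes (2048 MB total, 512 MB free):

```shell
# used_pct = (total_mb - free_mb) / total_mb * 100, matching the query above
total_mb=2048
free_mb=512
awk -v t="$total_mb" -v f="$free_mb" \
    'BEGIN { printf "used_pct=%.2f\n", (t - f) / t * 100 }'
# → used_pct=75.00
```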


Tablespace Usage Report:

SET LINES 200

SET PAGES 100

COLUMN tablespace_name FORMAT A20

COLUMN total_mb FORMAT 999999

COLUMN used_mb FORMAT 999999

COLUMN free_mb FORMAT 999999

COLUMN pct_used FORMAT 999.99


SELECT 

    df.tablespace_name,

    ROUND(df.total_mb) total_mb,

    ROUND(df.total_mb - fs.free_mb) used_mb,

    ROUND(fs.free_mb) free_mb,

    ROUND((df.total_mb - fs.free_mb)/df.total_mb * 100, 2) pct_used

FROM

    (SELECT tablespace_name, SUM(bytes)/1024/1024 total_mb

     FROM dba_data_files

     GROUP BY tablespace_name) df,

    (SELECT tablespace_name, SUM(bytes)/1024/1024 free_mb

     FROM dba_free_space

     GROUP BY tablespace_name) fs

WHERE df.tablespace_name = fs.tablespace_name

ORDER BY pct_used DESC;


TEMP Tablespace Monitoring:

SELECT

    tablespace_name,

    SUM(bytes_used)/1024/1024 used_mb,

    SUM(bytes_free)/1024/1024 free_mb

FROM v$temp_space_header

GROUP BY tablespace_name;


Check which tablespaces are consuming above 85%:

SELECT tablespace_name || ' ' || used_percent 

FROM dba_tablespace_usage_metrics 

WHERE used_percent > 85;

Monitor longest-running transactions with:

SELECT s.sid, s.serial#, t.used_ublk, t.used_urec

FROM v$transaction t, v$session s

WHERE t.ses_addr = s.saddr; 


Check usage:

SELECT tablespace_name, SUM(blocks)*8/1024 MB_USED

FROM v$sort_usage

GROUP BY tablespace_name; 
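The `SUM(blocks)*8/1024` term in the query assumes an 8 KB block size; with a different block size the multiplier changes. The conversion, parameterized (the block count here is a made-up sample):

```shell
# Sort-segment blocks to MB: blocks * block_size_kb / 1024
blocks=25600       # hypothetical v$sort_usage block count
block_kb=8         # db_block_size in KB
echo "$(( blocks * block_kb / 1024 )) MB"
# → 200 MB
```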


Check Overall Usage:

SELECT tablespace_name, ROUND((used_space/tablespace_size)*100, 2) pct_used

FROM dba_tablespace_usage_metrics

ORDER BY pct_used DESC; 


Identify Top Space Consumers:

SELECT owner, segment_name, segment_type, bytes/1024/1024 MB

FROM dba_segments

ORDER BY bytes DESC FETCH FIRST 10 ROWS ONLY; 


TEMP Usage by Session:

SELECT s.username, s.sid, u.tablespace, u.blocks*8/1024 AS mb_used

FROM v$sort_usage u, v$session s

WHERE u.session_addr = s.saddr

ORDER BY mb_used DESC; 



28 March 2026

Enabling and Disabling Archive Log Mode in Oracle

 

Checking Archivelog Mode:

SQL> archive log list;

You can also use the command below:

SQL> SELECT LOG_MODE FROM V$DATABASE;

Set Archivelog Destination:

You must set a destination for archivelog files

SQL> alter system set log_archive_dest_1='location=/u01/app/oracle/archivelogs';

Enable Archivelog Mode:

SQL> Shut immediate;

SQL> Startup mount;

SQL> alter database archivelog;

SQL> alter database open;

SQL> archive log list;


Disable Archivelog Mode:

SQL> shut immediate;

SQL> startup mount;

SQL> alter database noarchivelog;

SQL> alter database open;

SQL> archive log list;


How to Estimate Archive Destination Space:

SELECT A.*, 

Round(A.Count#*B.AVG#/1024/1024/1024) Daily_Avg_gb 

FROM 

(SELECT 

To_Char(First_Time,'YYYY-MM-DD') DAY, 

Count(1) Count#, 

Min(RECID) Min#, 

Max(RECID) Max# 

FROM v$log_history 

GROUP 

BY To_Char(First_Time,'YYYY-MM-DD') 

ORDER 

BY 1 

) A, 

(SELECT 

Avg(BYTES) AVG#, 

Count(1) Count#, 

Max(BYTES) Max_Bytes, 

Min(BYTES) Min_Bytes 

FROM 

v$log ) B;
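The query multiplies each day's log-switch count by the average redo log size. The same estimate by hand, with hypothetical numbers (48 switches per day, 512 MB redo logs):

```shell
# daily archive space ≈ log switches per day * average redo log size
switches_per_day=48
avg_log_mb=512
echo "estimated daily archive space: $(( switches_per_day * avg_log_mb / 1024 )) GB"
# → estimated daily archive space: 24 GB
```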


27 March 2026

RPO and RTO

 RPO ==> Recovery Point Objective

    How much data you can afford to lose

    Measured in time

    Defines backup frequency

    RPO = 15 minutes

    You can lose max 15 minutes of data


RTO ==> Recovery Time Objective

    How quickly you must restore the system

    Measured in time

    Defines recovery speed

    RTO = 30 minutes

    System must be back within 30 minutes




11 March 2026

PostgreSQL Source Code Installation

 Build From Source Code:

PostgreSQL Source Code Installation means compiling and installing PostgreSQL directly from its source files instead of using prebuilt packages.

Prerequisite:


Before we install PostgreSQL software/cluster, we need to ensure that we have all the necessary tools and libraries on our system.

We need to install the software as the root user only.

○ C Compiler: PostgreSQL is written in C, so a C compiler is a must. Popular choices include GCC (GNU Compiler Collection) and Clang.

○ Build Tools: Ensure you have essential build tools like make and autoconf installed. These tools help in compiling the source code efficiently.

○ Libraries: PostgreSQL requires certain libraries like readline and zlib. We need to install these development libraries on our system.

○ readline-devel: Provides the necessary libraries for command-line editing and history.

○ zlib-devel: Required for compression support.

○ gcc: The GNU Compiler Collection needed to compile the PostgreSQL source code.

○ make: A build tool used to compile and install the software.

Step1) Install Required Packages

[root@post101 ~]# sudo dnf groupinstall "Development Tools" -y

[root@post101 ~]# yum install readline-devel zlib-devel openssl-devel libxml2-devel libxslt-devel systemd-devel wget tar perl perl-core perl-FindBin -y

OR

[root@post101 ~]# sudo dnf install readline-devel zlib-devel openssl-devel libxml2-devel libxslt-devel systemd-devel wget tar perl perl-core perl-FindBin -y

[root@post101 ~]# sudo dnf groupinstall "Development Tools" -y

Step2) Create postgres user and add user to sudoers file

[root@post101 ~]# adduser postgres
[root@post101 ~]# passwd postgres

Retype new password:
passwd: all authentication tokens updated successfully.
[root@post101 ~]# vi /etc/sudoers
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
postgres ALL=(ALL)      ALL

Step3) 

Download Source Code From https://www.postgresql.org/download/ 

Download the software and copy it to the server using WinSCP.

OR

Use the command below if the server has an internet connection:


[root@post101 ~]# wget https://ftp.postgresql.org/pub/source/v17.1/postgresql-17.1.tar.gz

Step4) untar zip file and go to postgresql-17.1

[root@post101 ~]# tar -xvzf postgresql-17.1.tar.gz

[root@post101 ~]# cd postgresql-17.1
[root@post101 postgresql-17.1]# ls -lrt
total 820
drwxrwxr-x. 16 root root   4096 Nov 12  2024 src
-rw-rw-r--.  1 root root    983 Nov 12  2024 README.md
-rw-rw-r--.  1 root root   6484 Nov 12  2024 meson_options.txt
-rw-rw-r--.  1 root root 115220 Nov 12  2024 meson.build
-rw-rw-r--.  1 root root   1822 Nov 12  2024 Makefile
-rw-rw-r--.  1 root root    277 Nov 12  2024 HISTORY
-rw-rw-r--.  1 root root   4176 Nov 12  2024 GNUmakefile.in
drwxrwxr-x.  3 root root     87 Nov 12  2024 doc
-rw-rw-r--.  1 root root   1192 Nov 12  2024 COPYRIGHT
drwxrwxr-x. 59 root root   4096 Nov 12  2024 contrib
-rw-rw-r--.  1 root root  88218 Nov 12  2024 configure.ac
-rwxrwxr-x.  1 root root 578504 Nov 12  2024 configure
drwxrwxr-x.  2 root root   4096 Nov 12  2024 config
-rw-rw-r--.  1 root root    365 Nov 12  2024 aclocal.m4
[root@post101 postgresql-17.1]#

Step5) Create directory for software binaries installation

[root@post101 ~]# mkdir -p /u01/pgsql-17/
[root@post101 ~]# cd /u01/pgsql-17/
[root@post101 pgsql-17]# ls -lrt
total 0
[root@post101 pgsql-17]#
 


Step6) ./configure

Configure the Build

This prepares the build environment.

prefix ==> Installation directory
configure ==> Checks OS libraries & prepares compilation

[root@post101 postgresql-17.1]# ./configure --prefix=/u01/pgsql-17

The configure run failed with the error below; it can be skipped by re-running with the flags shown afterward:

checking for icu-uc icu-i18n... no
configure: error: ICU library not found
If you have ICU already installed, see config.log for details on the
failure.  It is possible the compiler isn't looking in the proper directory.
Use --without-icu to disable ICU support.

[root@post101 postgresql-17.1]# ./configure --prefix=/u01/pgsql-17 --without-icu --without-readline

This time it succeeded without any errors:

config.status: creating src/include/pg_config_ext.h
config.status: creating src/interfaces/ecpg/include/ecpg_config.h
config.status: linking src/backend/port/tas/dummy.s to src/backend/port/tas.s
config.status: linking src/backend/port/posix_sema.c to src/backend/port/pg_sema.c
config.status: linking src/backend/port/sysv_shmem.c to src/backend/port/pg_shmem.c
config.status: linking src/include/port/linux.h to src/include/pg_config_os.h
config.status: linking src/makefiles/Makefile.linux to src/Makefile.port
[root@post101 postgresql-17.1]#

Step7) make (Compile the Source Code)

This step builds PostgreSQL binaries.

[root@post101 postgresql-17.1]# make

Step8) make install (Install PostgreSQL)

[root@post101 postgresql-17.1]# make install

[root@post101 postgresql-17.1]# cd /u01/pgsql-17/
[root@post101 pgsql-17]# ls -lrt
total 16
drwxr-xr-x. 6 root root 4096 Mar 11 20:42 include
drwxr-xr-x. 2 root root 4096 Mar 11 20:43 bin
drwxr-xr-x. 6 root root 4096 Mar 11 20:43 share
drwxr-xr-x. 4 root root 4096 Mar 11 20:43 lib
[root@post101 pgsql-17]#


After installation, PostgreSQL will be located at:

 /u01/pgsql-17/

All the executable files are in the bin directory:

[root@post101 postgresql-17.1]# cd /u01/pgsql-17/
[root@post101 pgsql-17]# ls -lrt
total 16
drwxr-xr-x. 6 root root 4096 Mar 11 20:42 include
drwxr-xr-x. 2 root root 4096 Mar 11 20:43 bin
drwxr-xr-x. 6 root root 4096 Mar 11 20:43 share
drwxr-xr-x. 4 root root 4096 Mar 11 20:43 lib
[root@post101 pgsql-17]# cd bin/
[root@post101 bin]# ls -lrt
total 15360
-rwxr-xr-x. 1 root root 10154008 Mar 11 20:42 postgres
-rwxr-xr-x. 1 root root  1083408 Mar 11 20:42 ecpg
-rwxr-xr-x. 1 root root   170704 Mar 11 20:42 initdb
-rwxr-xr-x. 1 root root   115224 Mar 11 20:42 pg_amcheck
-rwxr-xr-x. 1 root root    49664 Mar 11 20:42 pg_archivecleanup
-rwxr-xr-x. 1 root root   173376 Mar 11 20:42 pg_basebackup
-rwxr-xr-x. 1 root root   114304 Mar 11 20:42 pg_createsubscriber
-rwxr-xr-x. 1 root root   116408 Mar 11 20:42 pg_receivewal
-rwxr-xr-x. 1 root root   116160 Mar 11 20:42 pg_recvlogical
-rwxr-xr-x. 1 root root    85440 Mar 11 20:42 pg_checksums
-rwxr-xr-x. 1 root root   187312 Mar 11 20:42 pg_combinebackup
-rwxr-xr-x. 1 root root    47808 Mar 11 20:42 pg_config
-rwxr-xr-x. 1 root root    66592 Mar 11 20:42 pg_controldata
-rwxr-xr-x. 1 root root    82176 Mar 11 20:42 pg_ctl
-rwxr-xr-x. 1 root root   440376 Mar 11 20:42 pg_dump
-rwxr-xr-x. 1 root root   220784 Mar 11 20:42 pg_restore
-rwxr-xr-x. 1 root root   136528 Mar 11 20:42 pg_dumpall
-rwxr-xr-x. 1 root root    77088 Mar 11 20:42 pg_resetwal
-rwxr-xr-x. 1 root root   169448 Mar 11 20:42 pg_rewind
-rwxr-xr-x. 1 root root    55160 Mar 11 20:42 pg_test_fsync
-rwxr-xr-x. 1 root root    44120 Mar 11 20:42 pg_test_timing
-rwxr-xr-x. 1 root root   189328 Mar 11 20:42 pg_upgrade
-rwxr-xr-x. 1 root root   139072 Mar 11 20:42 pg_verifybackup
-rwxr-xr-x. 1 root root   123536 Mar 11 20:42 pg_waldump
-rwxr-xr-x. 1 root root    76488 Mar 11 20:42 pg_walsummary
-rwxr-xr-x. 1 root root   211344 Mar 11 20:43 pgbench
-rwxr-xr-x. 1 root root   528168 Mar 11 20:43 psql
-rwxr-xr-x. 1 root root    83648 Mar 11 20:43 createdb
-rwxr-xr-x. 1 root root    79032 Mar 11 20:43 dropdb
-rwxr-xr-x. 1 root root    88576 Mar 11 20:43 createuser
-rwxr-xr-x. 1 root root    78968 Mar 11 20:43 dropuser
-rwxr-xr-x. 1 root root    83888 Mar 11 20:43 clusterdb
-rwxr-xr-x. 1 root root    97744 Mar 11 20:43 vacuumdb
-rwxr-xr-x. 1 root root    93024 Mar 11 20:43 reindexdb
-rwxr-xr-x. 1 root root    78472 Mar 11 20:43 pg_isready
[root@post101 bin]#


Next, create the data directory and grant permissions to the postgres user:

[root@post101 bin]# mkdir -p /u01/pgdata
[root@post101 bin]# cd /u01/pgdata
[root@post101 pgdata]# ls -lrt
total 0
[root@post101 pgdata]#
[root@post101 pgdata]# chown -R postgres:postgres /u01/pgdata

Step9) switch to postgres user 

[root@post101 pgdata]# su - postgres
[postgres@post101 ~]$ cd /u01/pg
pgdata/   pgsql-17/
[postgres@post101 ~]$ cd /u01/pgdata/
[postgres@post101 pgdata]$ pwd
/u01/pgdata
[postgres@post101 pgdata]$ ls -lrt
total 0
[postgres@post101 pgdata]$

Step10)  Initialize Database Cluster

[postgres@post101 bin]$ /u01/pgsql-17/bin/initdb -D /u01/pgdata

The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /u01/pgdata ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default "max_connections" ... 100
selecting default "shared_buffers" ... 128MB
selecting default time zone ... Asia/Kolkata
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /u01/pgsql-17/bin/pg_ctl -D /u01/pgdata -l logfile start

[postgres@post101 bin]$

Step11) Start PostgreSQL cluster 

[postgres@post101 bin]$ /u01/pgsql-17/bin/pg_ctl  -D /u01/pgdata start
waiting for server to start....2026-03-11 20:56:42.024 IST [9732] LOG:  starting PostgreSQL 17.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44.0.3), 64-bit
2026-03-11 20:56:42.035 IST [9732] LOG:  listening on IPv6 address "::1%1", port 5432
2026-03-11 20:56:42.035 IST [9732] LOG:  listening on IPv4 address "127.0.0.1", port 5432
2026-03-11 20:56:42.417 IST [9732] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2026-03-11 20:56:42.432 IST [9735] LOG:  database system was shut down at 2026-03-11 20:54:02 IST
2026-03-11 20:56:42.446 IST [9732] LOG:  database system is ready to accept connections
 done
server started
[postgres@post101 bin]$
[postgres@post101 bin]$ ps -ef|grep postgres
root      7357  3458  0 20:51 pts/0    00:00:00 su - postgres
postgres  7358  7357  0 20:51 pts/0    00:00:00 -bash
postgres  9732     1  0 20:56 ?        00:00:00 /u01/pgsql-17/bin/postgres -D /u01/pgdata
postgres  9733  9732  0 20:56 ?        00:00:00 postgres: checkpointer
postgres  9734  9732  0 20:56 ?        00:00:00 postgres: background writer
postgres  9736  9732  0 20:56 ?        00:00:00 postgres: walwriter
postgres  9737  9732  0 20:56 ?        00:00:00 postgres: autovacuum launcher
postgres  9738  9732  0 20:56 ?        00:00:00 postgres: logical replication launcher
postgres 10878  7358 99 20:59 pts/0    00:00:00 ps -ef
postgres 10879  7358  0 20:59 pts/0    00:00:00 grep --color=auto postgres
[postgres@post101 bin]$



Step12) Set Environment Variables

[postgres@post101 ~]$ vi .bash_profile

export PATH=$PATH:/u01/pgsql-17/bin
export PGDATA=/u01/pgdata


[postgres@post101 ~]$ . .bash_profile
[postgres@post101 ~]$ source .bash_profile
[postgres@post101 ~]$ psql
psql (17.1)
Type "help" for help.

postgres=# \l
                                                     List of databases
   Name    |  Owner   | Encoding | Locale Provider |   Collate   |    Ctype    | Locale | ICU
Rules |   Access privileges
-----------+----------+----------+-----------------+-------------+-------------+--------+-----
------+-----------------------
 postgres  | postgres | UTF8     | libc            | en_US.UTF-8 | en_US.UTF-8 |        |
      |
 template0 | postgres | UTF8     | libc            | en_US.UTF-8 | en_US.UTF-8 |        |
      | =c/postgres          +
           |          |          |                 |             |             |        |
      | postgres=CTc/postgres
 template1 | postgres | UTF8     | libc            | en_US.UTF-8 | en_US.UTF-8 |        |
      | =c/postgres          +
           |          |          |                 |             |             |        |
      | postgres=CTc/postgres
(3 rows)



Step13) Configure PostgreSQL to start automatically whenever the server is rebooted or shut down

Switch to root user

[postgres@post101 ~]$ su -
Password:
Last login: Wed Mar 11 19:59:13 IST 2026 from 2406:7400:9a:e591:fc9d:9082:d6a7:20c1 on pts/0
[root@post101 ~]# 
[root@post101 ~]# cd /etc/rc.d
[root@post101 rc.d]# ls -lrt
total 8
drwxr-xr-x. 2 root root   61 Jun 12  2024 rc6.d
drwxr-xr-x. 2 root root   61 Jun 12  2024 rc5.d
drwxr-xr-x. 2 root root   61 Jun 12  2024 rc4.d
drwxr-xr-x. 2 root root   61 Jun 12  2024 rc3.d
drwxr-xr-x. 2 root root   61 Jun 12  2024 rc2.d
drwxr-xr-x. 2 root root   61 Jun 12  2024 rc1.d
drwxr-xr-x. 2 root root   61 Jun 12  2024 rc0.d
-rw-r--r--. 1 root root  473 Oct 10  2024 rc.local
drwxr-xr-x. 2 root root 4096 Feb  6 17:55 init.d
[root@post101 rc.d]# vi rc.local
[root@post101 rc.d]# cat rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local

su - postgres -c 'pg_ctl -D $PGDATA start'

[root@post101 rc.d]#
[root@post101 rc.d]# chmod +x /etc/rc.d/rc.local
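As the rc.local comments themselves advise, a dedicated systemd unit is the cleaner way to autostart on modern systems. A minimal sketch using the paths from this installation — the unit name and settings are illustrative, not from the original:

```shell
# Write a minimal systemd unit (illustrative). To activate it, copy the file
# to /etc/systemd/system/, then run:
#   systemctl daemon-reload && systemctl enable --now postgresql-17
cat > /tmp/postgresql-17.service <<'EOF'
[Unit]
Description=PostgreSQL 17 database server
After=network.target

[Service]
Type=forking
User=postgres
ExecStart=/u01/pgsql-17/bin/pg_ctl -D /u01/pgdata start -l /u01/pgdata/logfile
ExecStop=/u01/pgsql-17/bin/pg_ctl -D /u01/pgdata stop -m fast

[Install]
WantedBy=multi-user.target
EOF

cat /tmp/postgresql-17.service
```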







09 March 2026

RMAN Backup scripts ( Full, Level 0 and Level 1 Incremental )

In Oracle Database, Recovery Manager (RMAN) backups are mainly of two types:

Full Backup and Incremental Backup (Level 0, Level 1).

Level 1 can be Differential or Cumulative.


RMAN Full Backup

A Full Backup copies all blocks of the database datafiles, regardless of whether they changed or not.

Backs up entire database

Includes used + unused blocks

Does not act as a base for incremental backups

Larger backup size

Syntax 

    Backup database; ## FRA Location

    Backup database plus archivelog;

    Backup database format ='/backup/rman/Bak_%d_%T_%U.bkp';

    Backup database format ='/backup/rman/Bak_%d_%T_%U.bkp' plus archivelog format ='/backup/rman/Arch_%d_%T_%U.bkp';


RMAN Incremental Backup

Incremental backups copy only changed blocks since a previous backup.

There are two levels of incremental backups:

Level 0

Level 1


Level 0 Backup

A Level 0 backup is the base backup for incremental strategy.

Copies all used blocks

Acts as base for Level 1 backups

Similar to full backup but supports incremental chain

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE format ='/backup/rman/Bak_L0_%d_%T_%U.bkp';


Level 0 = foundation backup

Level 1 Backup

Level 1 backs up only blocks changed since the Level 0 backup or previous Level 1 backup.


Two types exist:

Differential

Cumulative

Level 1 Differential Backup

Backs up blocks changed since the last incremental backup (Level 0 or Level 1).

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE format ='/backup/rman/Bak_L1_%d_%T_%U.bkp';


Level 0 (Sunday)

     |---- Level 1 Differential (Mon)

     |---- Level 1 Differential (Tue)

     |---- Level 1 Differential (Wed)

     |---- Level 1 Differential (Thu)

     |---- Level 1 Differential (Fri)

     |---- Level 1 Differential (Sat)


Level 1 Cumulative Backup

Backs up all blocks changed since the last Level 0 backup.

RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;

RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE format ='/backup/rman/Bak_L1C_%d_%T_%U.bkp';


    Level 0 (Sunday)

     |---- Level 1 Cum (Mon - changes since Sun)

     |---- Level 1 Cum (Tue - changes since Sun)

     |---- Level 1 Cum (Wed - changes since Sun)

     |---- Level 1 Cum (Thu - changes since Sun)

     |---- Level 1 Cum (Fri - changes since Sun)

     |---- Level 1 Cum (Sat - changes since Sun)

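The difference between the two strategies shows up at restore time: a differential chain needs the Level 0 plus every Level 1 taken since, while a cumulative chain needs the Level 0 plus only the most recent Level 1. A shell sketch for a Thursday-evening restore (the backup names are made up):

```shell
# Backups taken Sun (Level 0) then Mon..Thu (Level 1)
chain=(Sun_L0 Mon_L1 Tue_L1 Wed_L1 Thu_L1)

# Differential strategy: restore reads the L0 plus every L1 after it
echo "differential reads: ${chain[*]}"

# Cumulative strategy: restore reads the L0 plus only the latest L1
last_idx=$(( ${#chain[@]} - 1 ))
echo "cumulative reads: ${chain[0]} ${chain[$last_idx]}"
# → cumulative reads: Sun_L0 Thu_L1
```

Cumulative backups are larger each day but make recovery shorter and less dependent on every intermediate piece surviving.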

Backup Archive Log

RMAN> Backup archivelog all;

RMAN> Backup archivelog all format='/backup/rman/Arch_%d_%T_%U.bkp';


Crontab Schedule:

crontab -e

# Sunday Level 0 Backup at 2 AM

0 2 * * 0 /u01/scripts/rman_level0.sh >> /u01/scripts/logs/l0_backup.log 2>&1

# Monday–Saturday Level 1 Cumulative Backup at 2 AM

0 2 * * 1-6 /u01/scripts/rman_level1_cum.sh >> /u01/scripts/logs/l1_cum_backup.log 2>&1


#Sunday – Level 0 Backup Script

#!/bin/bash


export ORACLE_SID=PROD

export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH


rman target / <<EOF


RUN {

ALLOCATE CHANNEL c1 DEVICE TYPE DISK;

BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'WEEKLY_SUN_L0';

BACKUP ARCHIVELOG ALL DELETE INPUT;

RELEASE CHANNEL c1;

}


EXIT;

EOF


# Weekdays – Level 1 Differential Script

#!/bin/bash


export ORACLE_SID=PROD

export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH


rman target / <<EOF


RUN {

ALLOCATE CHANNEL c1 DEVICE TYPE DISK;

BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'DAILY_L1';

BACKUP ARCHIVELOG ALL DELETE INPUT;

RELEASE CHANNEL c1;

}


EXIT;

EOF





Sunday Level 0 RMAN Script


#!/bin/bash


export ORACLE_SID=PRODDB

export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH


rman target / <<EOF


RUN {

ALLOCATE CHANNEL c1 DEVICE TYPE DISK;


BACKUP INCREMENTAL LEVEL 0 DATABASE

TAG 'WEEKLY_L0'

FORMAT '/backup/rman/L0_%d_%T_%U.bkp';


BACKUP ARCHIVELOG ALL DELETE INPUT;


RELEASE CHANNEL c1;

}


EXIT;

EOF



#Weekdays Level 1 Cumulative Script

# Format variables: %d = database name, %T = date (YYYYMMDD), %U = unique 8-character identifier

#!/bin/bash


export ORACLE_SID=PROD

export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH


rman target / <<EOF


RUN {

ALLOCATE CHANNEL c1 DEVICE TYPE DISK;

BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE

TAG 'DAILY_L1_CUM'

FORMAT '/backup/rman/L1C_%d_%T_%U.bkp';

BACKUP ARCHIVELOG ALL DELETE INPUT;

RELEASE CHANNEL c1;

}

EXIT;

EOF



### Combined maintenance and full backup script

export ORACLE_SID=PROD

export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH

rman target / <<EOF

run
{
allocate channel ch1 device type disk;
allocate channel ch2 device type disk;
# housekeeping: sync the RMAN repository with what is actually on disk
crosscheck backup;
delete noprompt expired backup;
crosscheck archivelog all;
delete noprompt archivelog all completed before 'SYSDATE-7';
# full backup of the database plus archived logs, then the control file
backup as backupset database format='/u01/app/data/PROD/Backup_%T_%d_%U'
plus archivelog format='/u01/app/data/PROD/Archive_%T_%d_%U';
backup current controlfile format='/u01/app/data/PROD/controlfile_%F';
release channel ch1;
release channel ch2;
}
quit;
EOF

RMAN Settings/Configuration Parameters

[oracle@ORA-TEST301 ~]$ rman target /

RMAN> show all;

using target database control file instead of recovery catalog

RMAN configuration parameters for database with db_unique_name TEST are:

CONFIGURE RETENTION POLICY TO REDUNDANCY 1;

       CONFIGURE RETENTION POLICY TO REDUNDANCY 7; ## keep the 7 most recent backups

       CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS; ## keep backups needed to recover to any point within the last 7 days

CONFIGURE BACKUP OPTIMIZATION OFF; # default

        CONFIGURE BACKUP OPTIMIZATION ON;

            With optimization ON, files that are already backed up and unchanged (for example, read-only tablespaces and datafiles) are skipped.

CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default

    Defines where backups are stored: disk or tape (SBT).

CONFIGURE CONTROLFILE AUTOBACKUP ON;

    Automatically backs up the control file and SPFILE after backup operations.

    Applies to full, level 0, level 1, archived log, and datafile backups.

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/2c/data/oracle12c/rman_bkup/SDADS/%F';

    %F expands to c-<DBID>-<YYYYMMDD>-<sequence>, a unique control file autobackup name.

CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default

    How many backup channels (streams) run simultaneously.

    CONFIGURE DEVICE TYPE DISK PARALLELISM 4;

    With PARALLELISM 4, RMAN allocates 4 channels, and the backup runs faster because multiple files are backed up in parallel.

    Backup Types

        Backup as copy ==> An image copy is an exact byte-for-byte copy of a datafile.

        Backup as copy database;

        Backup sets ==> RMAN stores backups in its own binary backup-piece format (optionally compressed).

        Backup as backupset database;

CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default

CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default

CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/2c/data/oracle12c/rman_bkup/TEST/bkup_%U';

    Backup file naming convention.

    %d database name

    %U unique backup identifier

    %T date

CONFIGURE MAXSETSIZE TO UNLIMITED; # default

    Only for backup sets

    For example, with MAXSETSIZE set to 100GB, a 300GB database produces at least 3 backup sets.

CONFIGURE ENCRYPTION FOR DATABASE OFF; # default

    When set to ON, backups are encrypted.

CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default

   Default algorithm is AES128; backup encryption requires an open keystore (TDE wallet).

CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default

    Backup size will be reduced 

    Default is BASIC; LOW, MEDIUM, and HIGH are also available (these require the Advanced Compression Option).

CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default

    How many days of RMAN job output to keep in the control file (visible via V$RMAN_OUTPUT).

    List backup;

CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default

    NONE means RMAN enforces no deletion policy; archived logs can be deleted at the OS level at any time.

    With a standby server, keeping NONE risks deleting logs before they are applied, so primary and DR can go out of sync; instead use:

    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/2c/data/oracle12c/rman_bkup/TEST/snapcf_TEST.f';

    During backup:

            Datafiles are being backed up

            Control file records are constantly updated

            Archived logs are generated

    RMAN therefore writes a point-in-time snapshot of the control file to this location, giving it a read-consistent copy to work from while the backup runs.

 

06 March 2026

SQL Server Always On Readable Secondary Options



NO

Secondary replica cannot accept any read connections. Only used for failover.

Yes

Secondary replica accepts all read connections (even if the connection string does not specify read-intent).

READ_INTENT_ONLY

Secondary replica accepts connections only if the client specifies ApplicationIntent=ReadOnly in the connection string.

Server=AOAGListener;
Database=DB;
ApplicationIntent=ReadOnly;

How to validate backups in SQL Server

 Checks whether a SQL Server backup file is structurally valid and readable.


1) RESTORE VERIFYONLY

2) RESTORE HEADERONLY

3) RESTORE FILELISTONLY 


RESTORE VERIFYONLY

FROM DISK = 'D:\SQLData\Backup\DBA_FULL.bak';


RESTORE HEADERONLY

FROM DISK = 'D:\SQLData\Backup\DBA_FULL.bak';


RESTORE FILELISTONLY

FROM DISK = 'D:\SQLData\Backup\DBA_FULL.bak';




04 March 2026

PostgreSQL Architecture

A PostgreSQL cluster (instance) is a combination of memory structures and background processes; a cluster is simply a collection of databases managed by a single PostgreSQL instance.




Memory Components

Shared Memory 

Shared Buffer ==> The primary cache of PostgreSQL, where data blocks are cached after being read from disk and before being written back to disk.

WAL Buffer ==> Buffer for Write ahead logging (WAL) data before it is written to disk.

CLOG Buffer ==> Stores commit log information to manage transaction states

Temp Buffers ==> Used for storing temporary tables. Default 8MB

Work Memory ==> Used for sorting operations , Bitmap operation, hash join and merge join operations. Default 4MB 

Maintenance Work Memory ==> Used for vacuum and index operations. Default 64MB

Catalog Cache and Execution Memory ==> Memory used for caching system catalogs and executing SQL queries
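The sizes mentioned above can be inspected from psql (a sketch; it needs a running cluster, and your settings may differ from the defaults):

```sql
SHOW shared_buffers;          -- primary data cache
SHOW wal_buffers;             -- WAL buffer
SHOW temp_buffers;            -- per-session temporary table buffers (default 8MB)
SHOW work_mem;                -- per-operation sort/hash memory (default 4MB)
SHOW maintenance_work_mem;    -- vacuum/index build memory (default 64MB)
```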


Background Processes

Postmaster Process

    When we start a PostgreSQL cluster, the postmaster process is the first to initialize 

    Responsibilities of Postmaster

            Accepting incoming connections

            Forking new backend processes

            Starting background processes

            Handling server restart and crash recovery

BG Writer ==> Periodically writes dirty buffers/pages from the shared buffer to disk

WAL Writer ==> Writes contents of the WAL buffers to WAL files

Checkpoint ==> Writes dirty buffers to disk when a checkpoint occurs; this reduces the recovery time after a crash.

Stats Collector ==> Collects DBMS usage statistics, such as session activity information (pg_stat_activity) and table usage statistics (pg_stat_all_tables)

Gathers and maintains statistical data about the database, which the query planner uses to optimize query execution plans

Archiver ==> When in archive mode, copies the WAL files to the archive log location 

Sys Logger  ==> Logs error messages and a variety of information to log files

Auto vacuum Launcher ==> Forks auto vacuum workers when vacuuming is required. Autovacuum automatically reclaims storage occupied by dead tuples (rows that have been updated or deleted), preventing table bloat and maintaining query performance.

Logical Replication Launcher ==> Manages the logical replication workers that handle replication of changes to other databases.

WAL Sender ==> Sends WAL records to standby servers; runs on the primary server

WAL Receiver ==> Receives WAL records and applies them on the standby server; runs on the standby server

27 February 2026

RMAN Expired and Obsolete Backups


Expired Backups:

A backup becomes EXPIRED when its backup set or backup pieces have been deleted at the OS level: the database control file still holds the backup details, but the files are no longer available on disk.

Action:

RMAN> crosscheck backup;

RMAN> delete expired backup;


Obsolete Backups:

The general meaning of OBSOLETE is no longer used or required for recovery.

RMAN considers backups OBSOLETE when they are no longer required for database recovery. This is determined by the RMAN retention policy configuration.

Action:

RMAN> report obsolete;

RMAN> delete obsolete;


26 February 2026

RMAN Backups Methodology Full and Incremental

RMAN provides multiple backup types, but the most commonly used in production environments are Full backups and Incremental backups. Understanding their differences helps DBAs design efficient backup strategies that optimize storage, speed, and recovery time.


1. Full Backup

A Full Backup (or Level 0 backup) is a complete backup of the entire database.

Key Characteristics

Backs up all data blocks, regardless of whether they changed.

Forms the baseline for incremental backup strategies.

Larger in size than incremental backups.

Takes more time and requires more storage.

Recovery is straightforward: restore the full backup and apply archived logs.
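A minimal level 0 run from the RMAN prompt (a sketch; assumes a connected target database running in archivelog mode):

```
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
```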

2. Incremental Backups

Incremental backups capture only the blocks that have changed since a previous backup.

RMAN supports two types:

Differential Incremental Backup (Level 1 Differential)

Cumulative Incremental Backup (Level 1 Cumulative)

Both rely on a Level 0 full backup as a baseline.

2.1. Differential Incremental Backup

A Differential Level 1 backup captures all blocks changed since the last backup (Level 0 or Level 1).

Key Characteristics

Smaller and faster than full backups.

Captures daily changes.

During recovery, RMAN may use multiple Level 1 backups.

For recovery we need the Level 0 backup plus all subsequent Level 1 backups, and then the archived logs must be applied.
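From the RMAN prompt, a differential level 1 is the default when CUMULATIVE is not specified:

```
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
```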


2.2. Cumulative Incremental Backup

A Cumulative Level 1 backup captures all blocks changed since the last Level 0.

Key Characteristics

Slightly larger than differential backups.

Faster recovery (fewer incremental files to apply).

Preferred for large, mission-critical systems.

For recovery we need the Level 0 backup plus only the most recent Level 1 cumulative backup, and then the archived logs must be applied.
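From the RMAN prompt, the CUMULATIVE keyword selects this behaviour:

```
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```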






Understanding RMAN ( Recovery Manager) and Backups

 RMAN Key Points:

    - RMAN (Recovery Manager)

    - Oracle Database backup and recovery automation

    - Introduced in Oracle 8i

    - It performs block level backup

    - Parallelism

    - Detecting corruption in datafiles

    - Validating the backup

    - Incremental Backup

    - Recovery Catalog etc

    - Multi Destination Backups

    - Archive log mode is a must to use RMAN

RMAN is faster because it takes block-level backups.
RMAN is faster because we can initiate parallel processes.
RMAN can detect data block corruption and repair it for you.
RMAN stores backup metadata in the database control file; you can also store this information in a different database on a different server, which is called the recovery catalog.
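The corruption detection and block repair mentioned above can be driven like this (a sketch; requires a mounted or open target database):

```
RMAN> BACKUP VALIDATE DATABASE;
-- corrupted blocks found during validation can then be inspected from SQL*Plus:
SQL> SELECT * FROM V$DATABASE_BLOCK_CORRUPTION;
-- and repaired from RMAN:
RMAN> RECOVER CORRUPTION LIST;
```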
RMAN Incremental Recovery
              - Level 0 - Full database backup
              - Level 1 - Backup of only the changed blocks, found by checking each data block header for an updated SCN
              - For recovery, we restore the database from level 0 and recover it using level 1. When you take a level 1 backup, RMAN checks whether a level 0 backup already exists; if not, it takes a level 0 backup automatically.

Recovery is of two parts:
              - Restore and Recover - We cannot perform recovery without a restore. Level 0 is used for restore, and level 1 is used for recovery.
RMAN configuration Items
              - show all;

RMAN Utility:

The RMAN utility comes with the Oracle binaries
No special installation or license is required to use RMAN
At the command prompt, just type rman
By default it connects to the database defined by the environment variables
The RMAN utility can be used only when your DB is in at least MOUNT stage
RMAN can be used while the DB is up and running, with very little performance impact while the backup runs

RMAN Backup Methodology:

- Full Backup
        - Entire Database backup plus archive logs 
        - Cannot apply Incremental backup on full backup
                            
- Incremental Backup
        - Level 0 - FULL Database backup
        - Level 1 - Backs up only the changes since the last incremental backup
                - Differential Backup
                - Cumulative Backup

Components of RMAN:

  - RMAN Prompt
  - Target Database
  - Recovery Catalog
  - Auxiliary Database --- When cloning, the clone DB is called the auxiliary database
  - Media Management Layer --- Layer between RMAN and third-party backup tools (NetBackup, Veritas) and tape backups
  - RMAN Channels

RMAN Configuration Parameter:

- RETENTION POLICY - defines how long backups are kept available for recovery
               - Redundancy -- how many backups to retain
               - Recovery Window -- how many days of backups to retain
        - Channels -- you can define channels to back up to disk or tape
        - Control file autobackup -- includes automatic control file and spfile backup
        - Parallelism -- creates multiple processes to speed up the backup
        - Encryption -- secures the backup

25 February 2026

Backups and RMAN Backups ( Recovery Manager)

 

Why we need backup

The primary purpose of taking backups is to protect the database from data loss and ensure the ability to reconstruct the database in case of corruption, errors, or failures.

Categories of Database Failure:

Data loss can occur at any time

       Human errors, accidental deletions, logical corruption, hardware failures, or storage issues can lead to partial or complete data loss.

Backups preserve data integrity

    When data corruption or inconsistencies arise, backups enable you to restore the database to a consistent and trustworthy state.

Support for disaster recovery

    In the event of serious failures—like server crashes, media corruption, or site outages—backups are the foundation for bringing the database back online.

Compliance and business continuity

    Many industries require proper backup policies to meet audit standards and ensure uninterrupted business operations.


Backup Types:

Oracle backups are broadly classified into two categories:

1. Logical Backups

Logical backups extract database objects in a logical format.

Examples: Table-level, Schema-level and Tablespace-level export

Tools Used: Data Pump (expdp / impdp) and  Export/Import (exp / imp)

Logical backups are mainly used for:

    Application-level migrations

    Object-level recoveries

    Copying data between databases


2. Physical Backups

Physical backups involve copying actual database files at the OS level.

Files Included Datafiles, Control files, SPFILE and Archived redo logs

Types of Physical Backups

    RMAN Backups (most recommended)

    Hot Backup (User‑Managed Online Backup)

    Cold Backup (User‑Managed Offline Backup)

Hot and Cold Backups

Hot and Cold backups are known as manual backups, as they are performed manually without using RMAN or any backup tool.

Cold Backup (Consistent Backup)

    Database is shut down.

    All database files are copied at OS level.

    Backup is consistent.

    No recovery required during restore.

    Also referred to as a consistent backup.

Hot Backup (Inconsistent Backup)

    Database is open and available to users.

    Tablespaces are placed in backup mode.

    Datafiles are copied manually at OS level.

    Recovery is required because the backup is inconsistent.

    Archived logs are needed for full recovery.
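The hot backup steps above, sketched with user-managed commands (the datafile path is illustrative):

```
SQL> ALTER DATABASE BEGIN BACKUP;            -- or per tablespace: ALTER TABLESPACE users BEGIN BACKUP;
$ cp /u01/oradata/PROD/*.dbf /backup/hot/    -- copy datafiles at the OS level
SQL> ALTER DATABASE END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;       -- archive the logs needed for recovery
```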


Backup States / Backup Types in Oracle

Oracle backups fall under two main categories based on the state of the database:

1) Consistent Backup

    A backup is consistent when: The database is shut down cleanly using SHUTDOWN IMMEDIATE / NORMAL / TRANSACTIONAL.

All files (datafiles, control files, redo logs) are synchronized.

2) Inconsistent Backup

A backup is inconsistent when the database is in OPEN or MOUNT mode during the backup.

Files are not synchronized; hence recovery is mandatory.

Examples

    Hot Backup (User‑managed online backup)

    RMAN Backups (inconsistent when taken with the database open; consistent only when taken in MOUNT state after a clean shutdown)


Oracle RMAN (Recovery Manager)

Oracle Recovery Manager (RMAN) is a powerful, built‑in Oracle database utility used for performing backup, restore, and recovery operations. It is a no‑cost, command‑line tool provided by Oracle to help DBAs protect critical database files—including datafiles, control files, SPFILEs, and archived redo logs.
RMAN integrates tightly with the Oracle database engine, enabling advanced features such as:

Block‑level corruption detection
RMAN scans blocks during backup and restore operations, identifying corrupted blocks early and ensuring data integrity.

Automated backup retention policies
You can define retention periods or redundancy levels, and RMAN will automatically manage obsolete backups.

Backup compression
Built‑in compression reduces storage costs and speeds up backup operations.
Parallelization
File identification
Block recovery
Media recovery

RMAN can run backup and restore operations across multiple channels, improving performance and meeting high‑availability needs.


What to Backup 

Parameter file/SPFILE, control file, archived log files, and datafiles

What file needed in different stages of database 

1) Shutdown - Nothing needed
2) No Mount - Pfile or SPfile
3) Mount - Control file
4) Open - Data and redo log files

Recovery:

       1. Complete Recovery
          --- Recovering the database exactly till the point of failure
                                          
       2. Incomplete Recovery
          --- Cannot recover the database to the exact point of failure
          --- Recovers the database to a chosen point in time in the past (for example, to a specific time or SCN before the failure)
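An incomplete (point-in-time) recovery sketch from the RMAN prompt (the timestamp is illustrative; the database must be mounted, and older releases need sql 'alter database open resetlogs'; instead of the last command):

```
RMAN> RUN {
  SET UNTIL TIME "TO_DATE('2026-02-25 10:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;
```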