// Compare leader and replica timeline IDs as integers; a mismatch means the two nodes are on different timelines.
if (parseInt(leaderTL, 10) !== parseInt(replicaTL, 10)) {
  return `Mismatch: Leader TL=${leaderTL}, Replica TL=${replicaTL}`;
}
return "TL match.";
As requested in our previous emails, this is a gentle reminder to provide the necessary details regarding the upcoming BCM activity scheduled in the coming weeks. Specifically, we are awaiting:
Additional information related to the change request
Your preferred date and time for BCP activation
Your input is important to help us plan and execute the activity smoothly. Kindly share the details at your earliest convenience to avoid any last-minute coordination issues.
Let us know if you need any clarification.
Certainly! Here's an updated version of the Automated BCM Workflow Knowledge Article, clarifying that the workflow is task-driven and includes an approval/control step for the application team before the switchover and RPO validation proceed.
Title: Automated BCM Workflow for Oracle Databases
Category: Oracle DBA / Business Continuity
Created On: 23-Jun-2025
Author: [Your Name or Team]
Audience: DB Support, Application Teams, Infra
This article outlines the automated BCM (Business Continuity Management) process for Oracle databases. It provides a task-driven, controlled workflow for switching roles between Primary and Standby databases. Users initiate BCM through a ticket, and the system handles validations, evidence gathering, and controlled execution.
The BCM automation follows a multi-stage workflow that gives explicit control to the application team. After the ticket is raised, the workflow validates the inputs, creates a readiness task for the application team, waits for that task to be closed, gathers RPO and replication evidence, and only then executes the switchover (see the flowchart below).
This ensures collaboration, traceability, and accountability between DB and application teams.
| Requirement | Details |
|---|---|
| 🎫 Service Ticket | Must be raised in ITSM system with all required fields |
| 📌 Correct Database Info | Ensure accurate DB name, primary host, standby host |
| ✅ Healthy Replication | Archive logs applied without significant lag |
| 📡 Connectivity | Ensure listener & SQLNet connectivity between primary and standby |
| 🧾 Application Team Readiness | Confirmed via automated task in workflow |
| 🔐 Business Approval | Required for production; attach approval thread/email |
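As a concrete illustration of the "Healthy Replication" prerequisite, here is a minimal sketch of the check on a Data Guard physical standby (for Patroni-managed clusters, patronictl list plays the same role); the exact thresholds used by the workflow may differ:

-- Run on the standby: apply and transport lag as reported by Data Guard
SELECT name, value, time_computed
FROM v$dataguard_stats
WHERE name IN ('apply lag', 'transport lag');

-- Rough count of archived logs not yet applied on the standby
SELECT COUNT(*) AS unapplied_logs
FROM v$archived_log
WHERE applied = 'NO';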
flowchart TD
A[User Raises Ticket] --> B[Workflow Validates Inputs]
B --> C[Create Task for App Team]
C --> D[Wait for Task Closure]
D --> E{Task Closed?}
E -- No --> F[Wait / Notify SLA Breach]
E -- Yes --> G[Gather RPO & Replication Evidence]
G --> H[Execute Switchover/Failover]
H --> I[Post-Check and Update Ticket]
Once the application team marks the task as completed/closed:
🧪 RPO Evidence Collection:
🔁 Switchover Execution:
dgmgrl switchover or patronictl switchover
📋 Post-Checks:
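To make the compressed list above concrete, here is a minimal sketch of the kind of queries the RPO-evidence and post-check steps might capture, assuming a Data Guard physical standby (a Patroni-managed cluster would rely on patronictl list instead):

-- Evidence before switchover: current role and lag figures
SELECT db_unique_name, database_role, open_mode FROM v$database;

SELECT name, value, time_computed
FROM v$dataguard_stats
WHERE name IN ('apply lag', 'transport lag');

-- Post-check after switchover: confirm the roles have flipped on both sites
SELECT db_unique_name, database_role, open_mode, switchover_status FROM v$database;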
| Field | Sample |
|---|---|
| Request Type | BCM – Oracle Switchover |
| DB Name | ORCLPRD |
| Primary Host | db1.prod.local |
| Standby Host | db2.dr.local |
| Business Approval | Attached |
| Preferred Window | 10 PM – 12 AM SGT |
✅ Ensures application team involvement and readiness
✅ Automates technical checks, RPO validation, and switch execution
✅ Reduces manual errors and ensures auditability
✅ Integrates into your ITSM system with ticket/task traceability
Let me know if you'd like this exported as Confluence markdown, Word doc, or HTML for your portal — or if you'd like a task checklist for application teams included too.
Here’s a professional and structured Knowledge Article you can publish internally for your team or organization regarding the Automated BCM (Business Continuity Management) Process for Oracle Databases.
Title: Automated BCM Workflow for Oracle Databases
Category: Oracle DBA / Business Continuity
Created On: 23-Jun-2025
Author: [Your Name or Team]
Audience: DB Support, Application Teams, Infra
This article outlines the automated process for BCM (Business Continuity Management) in Oracle databases. It enables controlled and validated switchover or failover operations between Primary and Standby environments using an internal automation framework. Users are expected to follow the procedure to initiate BCM safely.
| Requirement | Details |
|---|---|
| 🎫 Ticket Raised | User must raise a service ticket via [ITSM/Helpdesk Portal] |
| 🗂 Correct DB Info | Ensure Database Name, Primary Host, and Standby Host are clearly mentioned in the ticket |
| ✅ Replication Healthy | No significant lag or archive backlog on Standby — visible via patronictl list / dgmgrl |
| 📡 Listener Reachability | TNS connectivity between primary and standby must be verified |
| 🔄 App Readiness | Application owners should confirm if DB switch is transparent or needs downtime |
| 📝 Approval Attached | Business approval (if required for production) must be part of the ticket or comments |
flowchart TD
A[User Raises Ticket] --> B[Automation Picks Up Ticket via Queue]
B --> C[Validates DB Name, Hostnames]
C --> D[Checks Replication Lag, Listener Reachability]
D --> E{Validation Passed?}
E -- No --> F[Ticket Routed to DBA for Manual Action]
E -- Yes --> G[Auto Trigger Switchover/Failover]
G --> H[Update Ticket with Outcome + Logs]
DB Existence in Catalog
Primary–Standby Pairing
Replication Health
Listener + SQLNet Reachability
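A minimal sketch of how the pairing and replication-health validations above could be implemented in SQL, run on the primary (DGMGRL's show configuration or patronictl list would be equivalent front-ends):

-- Confirm this site's role and protection mode
SELECT db_unique_name, database_role, protection_mode FROM v$database;

-- Confirm the standby redo destination is valid and shipping without gaps
SELECT dest_id, dest_name, status, type, database_mode, recovery_mode, gap_status
FROM v$archive_dest_status
WHERE status <> 'INACTIVE';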
SELECT
i.index_name,
i.table_name,
u.used,
u.start_monitoring,
u.end_monitoring,
ROUND(s.bytes / 1024 / 1024, 2) AS size_mb,
LISTAGG(c.column_name, ', ') WITHIN GROUP (ORDER BY c.column_position) AS columns
FROM
v$object_usage u
JOIN
dba_indexes i
ON u.index_name = i.index_name AND u.table_name = i.table_name AND i.owner = SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA')
JOIN
dba_ind_columns c
ON c.index_name = i.index_name AND c.index_owner = i.owner
LEFT JOIN
dba_segments s
ON s.segment_name = i.index_name AND s.owner = i.owner AND s.segment_type = 'INDEX'
WHERE
u.used = 'NO'
GROUP BY
i.index_name, i.table_name, u.used, u.start_monitoring, u.end_monitoring, s.bytes
ORDER BY
size_mb DESC NULLS LAST;
SET LINESIZE 200
SET PAGESIZE 100
COLUMN index_name FORMAT A30
COLUMN table_name FORMAT A30
COLUMN used FORMAT A4
COLUMN segment_size_mb FORMAT 999999.99
COLUMN columns FORMAT A40
SELECT
iu.index_name,
iu.table_name,
iu.used,
ROUND(s.bytes/1024/1024,2) AS segment_size_mb,
LISTAGG(ic.column_name, ', ')
WITHIN GROUP (ORDER BY ic.column_position) AS columns
FROM
v$object_usage iu
JOIN dba_indexes i
ON iu.index_name = i.index_name
AND iu.table_name = i.table_name
AND i.owner = SYS_CONTEXT('USERENV','CURRENT_SCHEMA')
JOIN dba_ind_columns ic
ON ic.index_name = iu.index_name
AND ic.table_name = iu.table_name
AND ic.index_owner = i.owner
LEFT JOIN dba_segments s
ON s.segment_name = iu.index_name
AND s.owner = i.owner
AND s.segment_type = 'INDEX'
WHERE
iu.used = 'NO'
GROUP BY
iu.index_name, iu.table_name, iu.used, s.bytes
ORDER BY
segment_size_mb DESC;
SELECT
ROUND(AVG((bs.completion_time - bs.start_time) * 1440), 2) AS avg_full_backup_duration_mins,
COUNT(*) AS total_full_backups,
SYSDATE AS report_generated_at
FROM
v$backup_set bs
WHERE
bs.backup_type = 'D' -- Full backups only
AND EXISTS (
SELECT 1
FROM v$backup_piece bp
WHERE bp.set_stamp = bs.set_stamp
AND bp.set_count = bs.set_count
AND bp.status = 'A' -- Only include backup sets with successful pieces
)
AND bs.start_time >= SYSDATE - 30
AND bs.start_time IS NOT NULL
AND bs.completion_time IS NOT NULL;
SELECT
ROUND(AVG((completion_time - start_time) * 1440), 2) AS avg_full_backup_duration_mins,
COUNT(*) AS total_full_backups,
SYSDATE AS report_generated_at
FROM
v$backup_set
WHERE
backup_type = 'D' -- Only full backups
-- Note: v$backup_set has no STATUS column; success is filtered via v$backup_piece in the other variants
AND start_time IS NOT NULL
AND completion_time IS NOT NULL
AND start_time >= SYSDATE - 30; -- Adjust window as needed
SELECT
ROUND(AVG((MAX(bp.completion_time) - MIN(bp.start_time)) * 1440), 2) AS avg_full_backup_duration_mins,
COUNT(COUNT(*)) AS total_full_backup_jobs, -- nested aggregate: counts the grouped backup sets
SYSDATE AS report_generated_at
FROM
v$backup_set bs
JOIN
v$backup_piece bp
ON bs.set_stamp = bp.set_stamp AND bs.set_count = bp.set_count
WHERE
bs.backup_type = 'D' -- Full backup
AND bp.status = 'A' -- Successful pieces
AND bp.start_time IS NOT NULL
AND bp.completion_time IS NOT NULL
AND bp.start_time >= SYSDATE - 30
GROUP BY
bs.set_stamp, bs.set_count;
WITH full_backup_sets AS (
SELECT
bs.set_stamp,
bs.set_count,
MIN(bp.start_time) AS set_start_time,
MAX(bp.completion_time) AS set_end_time
FROM
v$backup_set bs
JOIN
v$backup_piece bp
ON bs.set_stamp = bp.set_stamp AND bs.set_count = bp.set_count
WHERE
bs.backup_type = 'D' -- Full backups only
AND bp.status = 'A' -- Successful pieces
AND bp.start_time >= SYSDATE - 30 -- Last 30 days (adjust as needed)
GROUP BY
bs.set_stamp, bs.set_count
)
SELECT
ROUND(AVG((set_end_time - set_start_time) * 1440), 2) AS avg_full_backup_duration_mins,
COUNT(*) AS total_full_backup_jobs,
SYSDATE AS report_generated_at
FROM
full_backup_sets;
SELECT
ROUND(AVG((bp.completion_time - bp.start_time) * 1440), 2) AS avg_full_backup_duration_mins,
COUNT(*) AS total_full_backups,
SYSDATE AS report_time
FROM
v$backup_piece bp
JOIN
v$backup_set bs
ON bp.set_stamp = bs.set_stamp AND bp.set_count = bs.set_count
WHERE
bs.backup_type = 'D' -- 'D' = Full backup (datafile level)
AND bp.status = 'A' -- Successful backups only
AND bp.start_time IS NOT NULL
AND bp.completion_time IS NOT NULL
AND bp.start_time >= SYSDATE - 30; -- Optional: Last 30 days
SELECT
ROUND(SUM(CASE WHEN bp.status = 'A' THEN 1 ELSE 0 END) * 100.0 / COUNT(*), 2) AS archlog_backup_success_pct,
COUNT(*) AS total_archlog_backups,
SUM(CASE WHEN bp.status = 'A' THEN 1 ELSE 0 END) AS successful_archlog_backups,
SYSDATE AS report_time
FROM
v$backup_piece bp
JOIN
v$backup_set bs
ON bp.set_stamp = bs.set_stamp AND bp.set_count = bs.set_count
WHERE
bs.backup_type = 'L' -- 'L' = Archive log backups
AND bp.start_time >= SYSDATE - 30; -- Optional: last 30 days
SELECT
d.name AS db_name,
ROUND(AVG((bp.completion_time - bp.start_time) * 1440), 2) AS avg_full_backup_duration_mins,
COUNT(*) AS full_backup_piece_count
FROM
rc_backup_set bs
JOIN
rc_backup_piece bp ON bs.set_stamp = bp.set_stamp AND bs.set_count = bp.set_count
JOIN
rc_database d ON bs.db_key = d.db_key
WHERE
bs.backup_type = 'D' -- Full backup
AND bp.start_time IS NOT NULL
AND bp.completion_time IS NOT NULL
AND bp.status = 'A'
AND bp.start_time >= SYSDATE - 30
GROUP BY
d.name
ORDER BY
avg_full_backup_duration_mins DESC;
SELECT
d.name AS db_name,
ROUND(SUM(CASE WHEN bs.status = 'A' THEN 1 ELSE 0 END) * 100.0 / COUNT(*), 2) AS inc_backup_success_pct
FROM
rc_backup_set bs
JOIN
rc_database d ON bs.db_key = d.db_key
WHERE
bs.backup_type = 'I' -- Incremental
AND bs.start_time >= SYSDATE - 30
GROUP BY
d.name;
SELECT
d.name AS db_name,
ROUND(SUM(CASE WHEN bs.status = 'A' THEN 1 ELSE 0 END) * 100.0 / COUNT(*), 2) AS arch_backup_success_pct
FROM
rc_backup_set bs
JOIN
rc_database d ON bs.db_key = d.db_key
WHERE
bs.backup_type = 'L' -- Archive Log
AND bs.start_time >= SYSDATE - 30
GROUP BY
d.name;
SELECT
d.name AS db_name,
ROUND(AVG((bs.completion_time - bs.start_time) * 1440), 2) AS avg_full_backup_duration_mins
FROM
rc_backup_set bs
JOIN
rc_database d ON bs.db_key = d.db_key
WHERE
bs.backup_type = 'D' -- Full backup
AND bs.start_time IS NOT NULL
AND bs.completion_time IS NOT NULL
AND bs.start_time >= SYSDATE - 30
AND bs.status = 'A' -- Available
GROUP BY
d.name;
SELECT
db_name,
ROUND(AVG((completion_time - start_time) * 1440), 2) AS avg_full_backup_duration_mins
FROM
rc_backup_set
WHERE
backup_type = 'D' -- 'D' = Datafile full backup
AND status = 'A' -- 'A' = Available (successful)
AND start_time IS NOT NULL
AND completion_time IS NOT NULL
AND start_time >= SYSDATE - 30
GROUP BY
db_name;
SELECT
db_key,
db_name,
ROUND(AVG((completion_time - start_time) * 1440), 2) AS avg_duration_mins,
COUNT(*) AS backup_count
FROM (
SELECT
db.db_key,
db.db_name,
bs.start_time,
bs.completion_time
FROM
db db
JOIN
dbinc dbi ON db.db_key = dbi.db_key
JOIN
bset bs ON bs.dbinc_key = dbi.dbinc_key
WHERE
bs.start_time IS NOT NULL
AND bs.completion_time IS NOT NULL
AND bs.start_time > SYSDATE - 30
)
GROUP BY
db_key, db_name
ORDER BY
avg_duration_mins DESC;
#!/bin/bash
echo "🔍 Oracle Listener + DB Diagnostics Script"
echo "------------------------------------------"
# ORACLE_HOME cannot be derived reliably from the pmon process name, so prefer the
# environment and fall back to the first entry in /etc/oratab.
if [ -z "$ORACLE_HOME" ]; then
  ORACLE_HOME=$(grep -Ev '^#|^$' /etc/oratab 2>/dev/null | awk -F: 'NF >= 2 {print $2; exit}')
fi
# Pre-ADR location; on 11g+ the listener log usually lives under $ORACLE_BASE/diag/tnslsnr/<host>/<listener>/trace
LISTENER_LOG="${ORACLE_HOME}/network/log/listener.log"
echo -e "\n🧪 Checking Listener Status..."
lsnrctl status | tee /tmp/listener_status.txt | grep -E "Service|READY|FAILURE|Connecting"
echo -e "\n🧪 Checking DB Services Registration..."
ps -ef | grep pmon | grep -v grep
echo "Try running in SQL: ALTER SYSTEM REGISTER;"
echo -e "\n🧪 Checking Resource Limits from DB..."
sqlplus -s / as sysdba <<EOF
SET LINESIZE 200
COLUMN resource_name FORMAT A25
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM v\$resource_limit
WHERE resource_name IN ('processes','sessions');
EXIT;
EOF
echo -e "\n🧪 Checking OS CPU and Memory Status..."
echo "CPU Load:"
uptime
echo "Memory:"
free -g
echo "Swap Usage:"
swapon --show
echo -e "\n🧪 Checking Listener Log Size..."
du -sh "$LISTENER_LOG" 2>/dev/null
tail -n 10 "$LISTENER_LOG"
echo -e "\n🧪 Checking Disk Space..."
df -h | grep -v tmpfs
echo -e "\n🧪 Checking for Wallet or TCPS SSL Config..."
grep -Ei 'wallet|ssl' $ORACLE_HOME/network/admin/sqlnet.ora 2>/dev/null
grep -Ei 'wallet|ssl' $ORACLE_HOME/network/admin/listener.ora 2>/dev/null
echo -e "\n✅ Diagnostics Complete."
#!/bin/bash
SGA_GB=128
SGA_MB=$((SGA_GB * 1024))
SGA_KB=$((SGA_MB * 1024))
echo "🔍 Oracle SGA Feasibility + Recommendation (Target: ${SGA_GB} GB)"
echo "-------------------------------------------------------------"
# Get system memory
TOTAL_MEM_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')
TOTAL_MEM_GB=$((TOTAL_MEM_KB / 1024 / 1024))
echo "🧠 Total Physical Memory: ${TOTAL_MEM_GB} GB"
# Get HugePages info
HP_SIZE_KB=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')
HP_TOTAL=$(grep HugePages_Total /proc/meminfo | awk '{print $2}')
HP_FREE=$(grep HugePages_Free /proc/meminfo | awk '{print $2}')
HP_RSVD=$(grep HugePages_Rsvd /proc/meminfo | awk '{print $2}')
HP_USED=$((HP_TOTAL - HP_FREE))
# Required HugePages
REQUIRED_HUGEPAGES=$((SGA_KB / HP_SIZE_KB))
HP_TOTAL_KB=$((HP_TOTAL * HP_SIZE_KB))
HP_FREE_KB=$((HP_FREE * HP_SIZE_KB))
# Print HugePages summary
echo "📦 HugePage Size: $((HP_SIZE_KB / 1024)) MB"
echo "📦 HugePages Configured: ${HP_TOTAL} (${HP_TOTAL_KB} KB)"
echo "📦 HugePages Free: ${HP_FREE} (${HP_FREE_KB} KB)"
echo "📐 Required HugePages for ${SGA_GB} GB SGA: ${REQUIRED_HUGEPAGES}"
echo "-------------------------------------------------------------"
# Evaluation
if [ "$TOTAL_MEM_KB" -lt "$((SGA_KB + 10485760))" ]; then
echo "❌ ERROR: Not enough physical memory for ${SGA_GB} GB SGA + OS overhead."
exit 1
fi
if [ "$HP_TOTAL" -lt "$REQUIRED_HUGEPAGES" ]; then
echo "❌ ERROR: HugePages configured are less than required."
echo "➡️ You need at least $REQUIRED_HUGEPAGES HugePages."
echo -e "\n🛠️ Suggested /etc/sysctl.conf settings:"
echo "vm.nr_hugepages = ${REQUIRED_HUGEPAGES}"
echo "kernel.shmmax = $((SGA_GB * 1024 * 1024 * 1024))"
echo "kernel.shmall = $((SGA_KB / 4)) # (SGA / 4KB)"
echo -e "\n🛠️ Suggested /etc/security/limits.conf for oracle:"
echo "oracle soft memlock ${SGA_KB}"
echo "oracle hard memlock ${SGA_KB}"
echo "oracle soft nofile 65536"
echo "oracle hard nofile 65536"
echo "oracle soft nproc 16384"
echo "oracle hard nproc 16384"
echo -e "\n⚠️ Don't forget to disable MEMORY_TARGET and MEMORY_MAX_TARGET in Oracle:"
echo "ALTER SYSTEM SET MEMORY_TARGET=0 SCOPE=SPFILE;"
echo "ALTER SYSTEM SET MEMORY_MAX_TARGET=0 SCOPE=SPFILE;"
echo "ALTER SYSTEM SET SGA_MAX_SIZE=${SGA_GB}G SCOPE=SPFILE;"
echo "ALTER SYSTEM SET SGA_TARGET=${SGA_GB}G SCOPE=SPFILE;"
echo "➡️ Then restart the DB."
exit 1
fi
if [ "$HP_FREE" -lt "$REQUIRED_HUGEPAGES" ]; then
echo "⚠️ WARNING: Not enough HugePages FREE at the moment."
echo "➡️ You may need to stop other Oracle instances or reboot the system."
exit 1
fi
echo "✅ YES: Server is READY for ${SGA_GB} GB SGA allocation using HugePages!"
exit 0
#!/bin/bash
SGA_TARGET_GB=128
SGA_TARGET_KB=$((SGA_TARGET_GB * 1024 * 1024))
echo "🔍 Oracle SGA Feasibility Check (Target: ${SGA_TARGET_GB} GB)"
echo "-------------------------------------------------------------"
# Get total system memory
TOTAL_MEM_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')
TOTAL_MEM_GB=$((TOTAL_MEM_KB / 1024 / 1024))
echo "🧠 Total Physical Memory: ${TOTAL_MEM_GB} GB"
# Get HugePages info
HP_SIZE_KB=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')
HP_TOTAL=$(grep HugePages_Total /proc/meminfo | awk '{print $2}')
HP_FREE=$(grep HugePages_Free /proc/meminfo | awk '{print $2}')
HP_RSVD=$(grep HugePages_Rsvd /proc/meminfo | awk '{print $2}')
# Derived values
HP_TOTAL_KB=$((HP_TOTAL * HP_SIZE_KB))
HP_FREE_KB=$((HP_FREE * HP_SIZE_KB))
REQUIRED_HUGEPAGES=$((SGA_TARGET_KB / HP_SIZE_KB))
echo "📦 HugePage Size: $((HP_SIZE_KB / 1024)) MB"
echo "📦 HugePages Configured: ${HP_TOTAL} (${HP_TOTAL_KB} KB)"
echo "📦 HugePages Free: ${HP_FREE} (${HP_FREE_KB} KB)"
echo "📐 Required HugePages for ${SGA_TARGET_GB} GB SGA: ${REQUIRED_HUGEPAGES}"
echo "-------------------------------------------------------------"
# Evaluation
if [ "$TOTAL_MEM_KB" -lt "$((SGA_TARGET_KB + 10485760))" ]; then
echo "❌ ERROR: Not enough total system memory for ${SGA_TARGET_GB} GB SGA (plus OS needs)."
exit 1
fi
if [ "$HP_TOTAL" -lt "$REQUIRED_HUGEPAGES" ]; then
echo "❌ ERROR: Not enough HugePages configured."
echo "➡️ You need at least $REQUIRED_HUGEPAGES HugePages."
exit 1
fi
if [ "$HP_FREE" -lt "$REQUIRED_HUGEPAGES" ]; then
echo "⚠️ WARNING: HugePages configured but not enough free pages available now."
echo "➡️ Consider rebooting or shutting down DBs using HugePages."
exit 1
fi
echo "✅ YES: Your server is READY for ${SGA_TARGET_GB} GB SGA allocation using HugePages!"
exit 0
#!/bin/bash
# Oracle HugePages Configuration Helper
# Make sure to run as root
# Estimated SGA in GB (Change if needed)
SGA_GB=128
SGA_MB=$((SGA_GB * 1024))
SGA_KB=$((SGA_MB * 1024))
echo "🔍 Checking system memory..."
TOTAL_MEM_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')
TOTAL_MEM_GB=$((TOTAL_MEM_KB / 1024 / 1024))
echo "✔️ Total Memory: ${TOTAL_MEM_GB} GB"
echo -e "\n📦 Estimating HugePages needed for ${SGA_GB} GB SGA..."
HP_SIZE_KB=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')
HP_SIZE_MB=$((HP_SIZE_KB / 1024))
REQUIRED_HUGEPAGES=$((SGA_MB / HP_SIZE_MB))
echo "✔️ HugePage Size: ${HP_SIZE_MB} MB"
echo "🧮 Required HugePages: ${REQUIRED_HUGEPAGES}"
echo -e "\n🧪 Checking current HugePages settings..."
grep Huge /proc/meminfo
echo -e "\n🛠️ Recommended /etc/sysctl.conf entries:"
echo "vm.nr_hugepages = ${REQUIRED_HUGEPAGES}"
echo "kernel.shmmax = $((SGA_GB * 1024 * 1024 * 1024))"
echo "kernel.shmall = $((SGA_KB / 4)) # (SGA / 4KB pages)"
echo -e "\n🛠️ Recommended limits.conf entries for oracle user:"
echo "oracle soft memlock $SGA_KB"
echo "oracle hard memlock $SGA_KB"
echo "oracle soft nofile 65536"
echo "oracle hard nofile 65536"
echo "oracle soft nproc 16384"
echo "oracle hard nproc 16384"
echo -e "\n🔁 Disable AMM if enabled and use ASMM in Oracle:"
echo "ALTER SYSTEM SET MEMORY_TARGET=0 SCOPE=SPFILE;"
echo "ALTER SYSTEM SET MEMORY_MAX_TARGET=0 SCOPE=SPFILE;"
echo "ALTER SYSTEM SET SGA_MAX_SIZE=${SGA_GB}G SCOPE=SPFILE;"
echo "ALTER SYSTEM SET SGA_TARGET=${SGA_GB}G SCOPE=SPFILE;"
echo -e "\n🧹 After setting up, reboot the server and verify HugePages are in use."
Backup Performance Optimization – Data Insights Collected
Hi Team,
As part of our initiative to enhance backup performance for very large databases, I've gathered key data points that will help us fine-tune our strategy:
Least Used Tables (based on last access)
➤ Helps identify cold data that can be backed up less frequently or moved to dedicated subclients — reducing backup volume and time.
LOBs (Large Objects)
➤ Uncompressed and non-secure LOBs are heavy on backup performance.
➤ This data will help us prioritize converting to SecureFiles and enabling compression.
Recently Modified Partitions
➤ Allows us to focus backups only on partitions that have changed — minimizing unnecessary data scan.
Fragmentation Metrics
➤ Identifies bloated tables and indexes contributing to higher I/O during backup.
➤ Supports reorg planning to streamline performance.
Partition Compression Status
➤ Highlights where compression is disabled. Enabling it can reduce read size and speed up backups.
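For the "Recently Modified Partitions" point above, a minimal sketch of how that list could be derived (DBA_TAB_MODIFICATIONS is populated from table-monitoring info, so results are indicative; running DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO beforehand makes them fresher):

-- Partitions with DML recorded in roughly the last 30 days
SELECT table_owner, table_name, partition_name,
       inserts, updates, deletes, timestamp AS last_dml_time
FROM dba_tab_modifications
WHERE partition_name IS NOT NULL
AND timestamp >= SYSDATE - 30
ORDER BY timestamp DESC;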
Summary of Key Findings
1. Least Used Tables (by last access timestamp)
Total analyzed tables: 6,780
Identified cold tables (no access in >180 days): 2,150
Recommendation: Move to infrequent/custom backup subclient.
2. Tables with LOBs
Tables with LOB columns: 1,320
LOBs using BasicFile (non-secure): 810
LOBs uncompressed: 760
Recommendation: Target for SecureFile migration and enable compression.
3. Recently Modified Partitions
Partitioned tables checked: 420
Partitions modified in last 30 days: 97
Recommendation: Use for incremental/targeted backup set design.
4. Fragmented Tables (High Wasted Space %)
Tables with >30% fragmentation: 175
Largest fragmentation observed: 3.6 GB wasted in a single table
Recommendation: Include in reorg planning to reduce I/O.
5. Partition Compression Status
Total partitions: 2,340
Partitions with compression disabled: 1,540
Estimated potential space saving if enabled: ~480 GB
Recommendation: Evaluate enabling compression for backup and storage gain.
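As an illustration of the reorg recommendation in finding 4, here is a hedged sketch for one fragmented table (the owner, table, and index names are placeholders; assumes ASSM tablespaces and a maintenance window):

-- Reclaim space in place (no index rebuild required, but it is a long-running operation)
ALTER TABLE app_owner.big_fragmented_table ENABLE ROW MOVEMENT;
ALTER TABLE app_owner.big_fragmented_table SHRINK SPACE CASCADE;

-- Alternative: MOVE rebuilds the segment (and can add compression), but invalidates indexes
ALTER TABLE app_owner.big_fragmented_table MOVE;
ALTER INDEX app_owner.big_fragmented_table_pk REBUILD;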
---
📎 Attachment: backup_analysis_summary.csv
Includes columns for table name, segment size, LOB type, compression, last access date, and partition info.
---
🔍 How Unsecured & Uncompressed LOBs Degrade Backup Performance
1. Storage as BasicFiles (vs. SecureFiles)
BasicFiles LOBs are the legacy storage method in Oracle.
They lack features like compression, deduplication, and efficient I/O.
Reading BasicFiles during backups involves more disk I/O, as data is stored in a less optimized manner.
2. No Compression = Larger Backup Volume
Uncompressed LOBs consume more space on disk.
During backup, especially RMAN or CommVault, this means:
More data needs to be read, transferred, and written to backup media.
Increases network load and backup window duration.
3. Streaming Performance Bottlenecks
LOBs often bypass RMAN’s optimized data streaming methods.
For very large LOBs, this creates slow, row-by-row or block-by-block reads.
4. Inefficient Incremental Backups
Changes to a small part of a LOB can result in the entire LOB being flagged as changed.
This causes RMAN to back up the entire LOB again, even with block change tracking.
---
✅ Why SecureFiles with Compression Help
SecureFiles LOBs support:
Compression (medium/high)
Deduplication
Faster read/write access
This results in:
Smaller backup sizes
Less disk I/O
Faster overall backup performance
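A minimal sketch of one such conversion (table, column, index, and tablespace names are placeholders; SecureFile compression and deduplication assume the Advanced Compression option is licensed):

-- Rebuild a BasicFile LOB column as a compressed SecureFile
ALTER TABLE app_owner.documents MOVE
  LOB (doc_body) STORE AS SECUREFILE (
    TABLESPACE lob_data
    COMPRESS MEDIUM
    CACHE
  );

-- Rebuild any indexes invalidated by the MOVE
ALTER INDEX app_owner.documents_pk REBUILD;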
---
📌 Summary
| LOB Type | Compression | Impact on Backup | Notes |
|---|---|---|---|
| BasicFile | No | ❌ Slowest | Legacy, inefficient |
| BasicFile | Yes | ⚠️ Somewhat better | Compression still limited |
| SecureFile | No | ✅ Faster | Modern I/O improvements |
| SecureFile | Yes | ✅✅ Fastest | Optimal for backup |
SET COLSEP ','
SET PAGESIZE 50000
SET LINESIZE 2000
SET TRIMSPOOL ON
SET FEEDBACK OFF
SET HEADING ON
SPOOL lob_storage_summary.csv
SELECT
l.OWNER,
l.TABLE_NAME,
l.COLUMN_NAME,
l.SEGMENT_NAME,
l.TABLESPACE_NAME,
DECODE(l.SECUREFILE, 'NO', 'BASICFILE', 'YES', 'SECUREFILE', 'UNKNOWN') AS LOB_TYPE,
l.IN_ROW,
l.CHUNK,
l.PCTVERSION,
l.RETENTION,
CASE WHEN l.COMPRESSION IN ('MEDIUM', 'HIGH', 'LOW') THEN 'COMPRESSED'
     WHEN l.COMPRESSION IN ('NO', 'NONE') THEN 'UNCOMPRESSED'
     ELSE 'UNKNOWN' END AS COMPRESSION_STATUS,
NVL(s.BYTES/1024/1024, 0) AS SEGMENT_SIZE_MB
FROM
DBA_LOBS l
LEFT JOIN
DBA_SEGMENTS s
ON l.SEGMENT_NAME = s.SEGMENT_NAME
AND l.OWNER = s.OWNER
AND s.SEGMENT_TYPE = 'LOBSEGMENT'
ORDER BY
l.OWNER, l.TABLE_NAME, l.COLUMN_NAME;
SPOOL OFF
🔍 Factors & Their Impact on Backup Performance:
Least Used Tables (via last accessed)
➤ Helps identify cold data that can be excluded from frequent backups or moved to separate subclients for infrequent backup schedules.
➤ Reduces backup size and time.
Tables with LOBs (Large Objects)
➤ LOBs, especially non-secure and uncompressed, significantly increase backup time.
➤ Enables prioritization of SecureFiles or compression strategies for LOBs to improve performance.
Tables with Partitions Recently Modified
➤ Allows targeted incremental or subclient backups focusing only on recently changed partitions.
➤ Reduces unnecessary scanning and data movement.
Fragmentation Details
➤ Identifies bloated tables or indexes causing excessive I/O during backups.
➤ Supports reorg planning to optimize space usage and backup efficiency.
Partitions with Compression Enabled/Not
➤ Compressed partitions result in less data to read and transfer, enhancing backup speed.
➤ Highlights areas where enabling compression can yield performance gains.
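A minimal sketch for the compression-status factor above, listing partitions where table compression is currently disabled (DBA_TAB_PARTITIONS exposes COMPRESSION and COMPRESS_FOR):

-- Candidate partitions for enabling table compression
SELECT table_owner, table_name, partition_name, compression, compress_for
FROM dba_tab_partitions
WHERE compression = 'DISABLED'
ORDER BY table_owner, table_name, partition_name;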
---
✅ Use Case: Backup Strategy Tuning
Use the above insights to:
Define custom backup policies (e.g., frequent for hot data, rare for cold).
Improve subclient segmentation.
Apply compression, LOB optimization, or reorganization where it matters most.
Below is a standalone SQL*Plus script that runs all fragmentation queries together, spooling each result into a properly formatted CSV file. The script can be executed directly in SQL*Plus or in any client that accepts SQL*Plus commands.
---
📄 Filename: fragmentation_analysis.sql
-- Fragmentation Analysis SQL Script
-- Spools multiple CSV reports from Oracle DBA views
SET LINESIZE 1000
SET PAGESIZE 0
SET FEEDBACK OFF
SET VERIFY OFF
SET ECHO OFF
SET TRIMSPOOL ON
SET TERMOUT OFF
SET COLSEP ','
-- Change to desired directory (optional)
-- DEFINE OUT_DIR='/tmp'
-- 1. Table-Level Fragmentation
SPOOL table_fragmentation.csv
SELECT
OWNER,
TABLE_NAME,
NUM_ROWS,
BLOCKS,
EMPTY_BLOCKS,
AVG_SPACE,
CHAIN_CNT,
ROUND((BLOCKS - EMPTY_BLOCKS) * 100 / BLOCKS, 2) AS USED_BLOCK_PCT
FROM
DBA_TABLES
WHERE
OWNER NOT IN ('SYS', 'SYSTEM')
AND BLOCKS > 0
ORDER BY
USED_BLOCK_PCT ASC;
SPOOL OFF
-- 2. Tablespace Fragmentation Summary
SPOOL tablespace_fragmentation.csv
SELECT
df.TABLESPACE_NAME,
df.TOTAL_MB,
fs.FREE_MB,
ROUND((fs.FREE_MB/df.TOTAL_MB)*100, 2) AS FREE_PCT,
ROUND(((df.TOTAL_MB - fs.FREE_MB)/df.TOTAL_MB)*100, 2) AS USED_PCT
FROM
(SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024 AS TOTAL_MB FROM DBA_DATA_FILES GROUP BY TABLESPACE_NAME) df,
(SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024 AS FREE_MB FROM DBA_FREE_SPACE GROUP BY TABLESPACE_NAME) fs
WHERE
df.TABLESPACE_NAME = fs.TABLESPACE_NAME
ORDER BY
USED_PCT DESC;
SPOOL OFF
-- 3. LOB Fragmentation Details
SPOOL lob_fragmentation.csv
SELECT
OWNER,
TABLE_NAME,
COLUMN_NAME,
SEGMENT_NAME,
SECUREFILE,
IN_ROW,
CHUNK,
RETENTION,
DEDUPLICATION,
COMPRESSION
FROM
DBA_LOBS
WHERE
SECUREFILE = 'NO'
OR COMPRESSION = 'NONE'
ORDER BY
OWNER, TABLE_NAME;
SPOOL OFF
-- 4. Free Space Fragmentation (by Extents)
SPOOL free_space_frag_detail.csv
SELECT
TABLESPACE_NAME,
COUNT(*) AS EXTENT_COUNT,
MAX(BYTES)/1024/1024 AS MAX_EXTENT_MB,
MIN(BYTES)/1024/1024 AS MIN_EXTENT_MB,
ROUND(AVG(BYTES)/1024/1024, 2) AS AVG_EXTENT_MB
FROM
DBA_FREE_SPACE
GROUP BY
TABLESPACE_NAME
ORDER BY
AVG_EXTENT_MB;
SPOOL OFF
-- Done
PROMPT All fragmentation CSVs are generated successfully.
EXIT;
---
✅ How to Run
sqlplus -s sys/password@ORCL as sysdba @fragmentation_analysis.sql
All output files will be created in the current working directory unless a directory is defined using SPOOL /path/to/dir/file.csv.
---
📂 Output Files
table_fragmentation.csv
tablespace_fragmentation.csv
lob_fragmentation.csv
free_space_frag_detail.csv
---
Let me know if you'd like to:
Combine all outputs into a single CSV file
Add a timestamp to each file
Schedule this via DBMS_SCHEDULER inside the database
To analyze fragmentation in an Oracle database and output results in CSV format, you can use the following SQL queries. These target different types of fragmentation: table-level, tablespace-level, and LOBs. After each query, I’ve included a method to spool the output in proper CSV formatting.
---
🔸 1. Table-Level Fragmentation (Row Chaining / Space Wastage)
SET LINESIZE 1000
SET PAGESIZE 0
SET FEEDBACK OFF
SET VERIFY OFF
SET TERMOUT OFF
SET ECHO OFF
SET COLSEP ','
SPOOL table_fragmentation.csv
SELECT
OWNER,
TABLE_NAME,
NUM_ROWS,
BLOCKS,
EMPTY_BLOCKS,
AVG_SPACE,
CHAIN_CNT,
ROUND((BLOCKS - EMPTY_BLOCKS) * 100 / BLOCKS, 2) AS USED_BLOCK_PCT
FROM
DBA_TABLES
WHERE
OWNER NOT IN ('SYS', 'SYSTEM')
AND BLOCKS > 0
ORDER BY
USED_BLOCK_PCT ASC;
SPOOL OFF
---
🔸 2. Tablespace Fragmentation Summary (Free Space vs Used)
SET LINESIZE 1000
SET PAGESIZE 0
SET FEEDBACK OFF
SET VERIFY OFF
SET TERMOUT OFF
SET ECHO OFF
SET COLSEP ','
SPOOL tablespace_fragmentation.csv
SELECT
df.TABLESPACE_NAME,
df.TOTAL_MB,
fs.FREE_MB,
ROUND((fs.FREE_MB/df.TOTAL_MB)*100, 2) AS FREE_PCT,
ROUND(((df.TOTAL_MB - fs.FREE_MB)/df.TOTAL_MB)*100, 2) AS USED_PCT
FROM
(SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024 AS TOTAL_MB FROM DBA_DATA_FILES GROUP BY TABLESPACE_NAME) df,
(SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024 AS FREE_MB FROM DBA_FREE_SPACE GROUP BY TABLESPACE_NAME) fs
WHERE
df.TABLESPACE_NAME = fs.TABLESPACE_NAME
ORDER BY
USED_PCT DESC;
SPOOL OFF
---
🔸 3. LOB Fragmentation (Identify non-secure/uncompressed LOBs)
SET LINESIZE 1000
SET PAGESIZE 0
SET FEEDBACK OFF
SET VERIFY OFF
SET TERMOUT OFF
SET ECHO OFF
SET COLSEP ','
SPOOL lob_fragmentation.csv
SELECT
OWNER,
TABLE_NAME,
COLUMN_NAME,
SEGMENT_NAME,
SECUREFILE,
IN_ROW,
CHUNK,
RETENTION,
DEDUPLICATION,
COMPRESSION
FROM
DBA_LOBS
WHERE
SECUREFILE = 'NO'
OR COMPRESSION = 'NONE'
ORDER BY
OWNER, TABLE_NAME;
SPOOL OFF
---
🔸 4. Free Space Fragmentation Detail (Fragmented Extents)
SET LINESIZE 1000
SET PAGESIZE 0
SET FEEDBACK OFF
SET VERIFY OFF
SET TERMOUT OFF
SET ECHO OFF
SET COLSEP ','
SPOOL free_space_frag_detail.csv
SELECT
TABLESPACE_NAME,
COUNT(*) AS EXTENT_COUNT,
MAX(BYTES)/1024/1024 AS MAX_EXTENT_MB,
MIN(BYTES)/1024/1024 AS MIN_EXTENT_MB,
ROUND(AVG(BYTES)/1024/1024, 2) AS AVG_EXTENT_MB
FROM
DBA_FREE_SPACE
GROUP BY
TABLESPACE_NAME
ORDER BY
AVG_EXTENT_MB;
SPOOL OFF
---
✅ Notes for Usage:
Execute these queries using SQL*Plus or any CLI that supports SPOOL.
You can change SPOOL <filename.csv> to match your target directory.
Run as a DBA user or one with access to DBA_* views.
---
Would you like me to convert this into a shell script or SQL script that runs all together and bundles the output?
SET LINESIZE 200
SET PAGESIZE 100
-- Script: Average Time for Incremental Backups
SELECT
ROUND(AVG((END_TIME - START_TIME) * 24 * 60), 2) AS AVG_INCREMENTAL_BACKUP_TIME_MINUTES,
COUNT(*) AS TOTAL_INCREMENTAL_BACKUPS
FROM
V$RMAN_BACKUP_JOB_DETAILS
WHERE
INPUT_TYPE IN ('DB INCR', 'DATAFILE INCR') -- V$RMAN_BACKUP_JOB_DETAILS reports the backup kind via INPUT_TYPE
AND STATUS = 'COMPLETED'
AND START_TIME IS NOT NULL
AND END_TIME IS NOT NULL;
Yes, I need a SQL script that runs for one given database and generates a CSV summary of these categories for easy dashboarding.
Perfect, Daidipya! Here's a SQL*Plus-based script that you can run from SQL*Plus or SQLcl, which:
Collects insights for a given Oracle database
Summarizes:
LOB-heavy tables
Low-access or stale tables
Tiny status/lookup tables
Generates CSV-like output to easily export or integrate with Confluence/MS Teams
📄 Filename: generate_db_data_profile_report.sql
SET PAGESIZE 50000
SET LINESIZE 300
SET FEEDBACK OFF
SET VERIFY OFF
SET ECHO OFF
SET TERMOUT OFF
SET TRIMSPOOL ON
SET COLSEP ','
SPOOL db_data_profile_report.csv
PROMPT CATEGORY,OWNER,TABLE_NAME,COLUMN_NAME,LOB_TYPE,BYTES_MB,SECUREFILE
-- A. BLOB/CLOB Tables
SELECT 'LOB_USAGE' AS CATEGORY,
l.owner,
l.table_name,
l.column_name,
tc.data_type AS lob_type,
ROUND(s.bytes/1024/1024, 2) AS size_mb,
l.securefile
FROM dba_lobs l
-- DBA_LOBS has no DATA_TYPE column, so the LOB type comes from DBA_TAB_COLUMNS
JOIN dba_tab_columns tc ON tc.owner = l.owner AND tc.table_name = l.table_name AND tc.column_name = l.column_name
LEFT JOIN dba_segments s ON l.segment_name = s.segment_name AND l.owner = s.owner
WHERE tc.data_type IN ('CLOB', 'BLOB', 'NCLOB');
PROMPT CATEGORY,OWNER,TABLE_NAME,ROW_COUNT,INSERTS,UPDATES,DELETES
-- B. Low-Access Tables
SELECT 'LOW_ACCESS' AS CATEGORY,
t.table_owner AS owner,
t.table_name,
COALESCE(tb.num_rows, 0) AS row_count,
t.inserts,
t.updates,
t.deletes
FROM dba_tab_modifications t
LEFT JOIN dba_tables tb ON tb.owner = t.table_owner AND tb.table_name = t.table_name
WHERE (t.inserts + t.updates + t.deletes) < 100;
PROMPT CATEGORY,OWNER,TABLE_NAME,ROW_COUNT,BYTES_MB
-- C. Status / Lookup Tables (small, static)
SELECT 'STATIC_LOOKUP' AS CATEGORY,
t.owner,
t.table_name,
t.num_rows,
ROUND(s.bytes/1024/1024, 2) AS size_mb
FROM dba_tables t
JOIN dba_segments s ON t.table_name = s.segment_name AND t.owner = s.owner
WHERE t.num_rows < 1000
AND s.bytes < 10*1024*1024
AND (t.table_name LIKE '%STATUS%' OR t.table_name LIKE '%LOOKUP%' OR t.table_name LIKE '%CODE%' OR t.table_name LIKE '%TYPE%');
SPOOL OFF
PROMPT CSV Report Generated: db_data_profile_report.csv
sqlplus -S / as sysdba @generate_db_data_profile_report.sql
✅ Output will be saved in:
db_data_profile_report.csv
You can then:
Attach the CSV to your Teams channel
Embed in Confluence tables
Use it for capacity planning or optimization reviews
Would you also like:
A version that saves each category in a separate CSV file?
A shell wrapper to automate this across multiple Oracle homes or DBs?
Let me know how you'd like to expand it!
Great, Daidipya. Based on that, here's a robust and production-ready version of the SQL script. It includes:
Accurate joins with DBA_SEGMENTS, DBA_LOBS, and DBA_TAB_MODIFICATIONS
NULL-safe logic
Inline formatting for CSV
Adaptive WHERE clauses to prevent over-reporting
📄 Filename: db_data_profile_report.sql
SET PAGESIZE 50000
SET LINESIZE 400
SET FEEDBACK OFF
SET VERIFY OFF
SET ECHO OFF
SET TERMOUT OFF
SET TRIMSPOOL ON
SET COLSEP ','
-- Output File
SPOOL db_data_profile_report.csv
PROMPT CATEGORY,OWNER,TABLE_NAME,COLUMN_NAME,LOB_TYPE,SIZE_MB,SECUREFILE
-- A. LOB-heavy tables
SELECT 'LOB_USAGE',
l.owner,
l.table_name,
l.column_name,
tc.data_type,
NVL(ROUND(s.bytes/1024/1024, 2), 0) AS size_mb,
l.securefile
FROM dba_lobs l
-- DBA_LOBS has no DATA_TYPE column, so the LOB type comes from DBA_TAB_COLUMNS
JOIN dba_tab_columns tc
ON tc.owner = l.owner AND tc.table_name = l.table_name AND tc.column_name = l.column_name
LEFT JOIN dba_segments s
ON l.owner = s.owner AND l.segment_name = s.segment_name AND s.segment_type = 'LOBSEGMENT'
WHERE tc.data_type IN ('CLOB', 'BLOB', 'NCLOB');
PROMPT CATEGORY,OWNER,TABLE_NAME,ROW_COUNT,INSERTS,UPDATES,DELETES
-- B. Low-access tables
SELECT 'LOW_ACCESS',
t.table_owner,
t.table_name,
NVL(tb.num_rows, 0),
NVL(t.inserts, 0),
NVL(t.updates, 0),
NVL(t.deletes, 0)
FROM dba_tab_modifications t
LEFT JOIN dba_tables tb ON tb.owner = t.table_owner AND tb.table_name = t.table_name
WHERE (NVL(t.inserts, 0) + NVL(t.updates, 0) + NVL(t.deletes, 0)) < 100;
PROMPT CATEGORY,OWNER,TABLE_NAME,ROW_COUNT,SIZE_MB
-- C. Status / Lookup / Code tables (typically static)
SELECT 'STATIC_LOOKUP',
t.owner,
t.table_name,
t.num_rows,
ROUND(s.bytes/1024/1024, 2)
FROM dba_tables t
JOIN dba_segments s ON t.owner = s.owner AND t.table_name = s.segment_name AND s.segment_type = 'TABLE'
WHERE t.num_rows < 1000
AND s.bytes < 10*1024*1024
AND (t.table_name LIKE '%STATUS%' OR t.table_name LIKE '%TYPE%' OR t.table_name LIKE '%LOOKUP%' OR t.table_name LIKE '%CODE%');
SPOOL OFF
PROMPT >> CSV Summary Saved: db_data_profile_report.csv
Uses NVL() for safer arithmetic and joins
Assumes default segment types (LOBSEGMENT, TABLE)
Avoids NULL data skewing logic
Handles absence of stats (some DBs may not have stats gathered)
After running this:
Use UTL_FILE or SCP to ship CSV to a reporting location if needed
Load CSV into Excel or Power BI for visual profiling
Use this report in tuning discussions with application and infra teams
Would you like me to give you a version that:
Only shows top 10 space consumers in each category?
Embeds file size summary at the end?
Let me know how deep you'd like to take this — we can enhance further if needed.