Monday, 30 June 2025

Filled under:

 // Compare the Patroni timeline (TL) values reported for the leader and a replica.
 if (parseInt(leaderTL, 10) !== parseInt(replicaTL, 10)) {
    return `Mismatch: Leader TL=${leaderTL}, Replica TL=${replicaTL}`;
}

return "TL match.";

Posted By Nikhil00:38
Filled under:

 # Print the TL column (6th '|' field) for one replica from patronictl output;
 # the sed normalizes any box-drawing separators to plain '|' before cutting.
 /usr/local/bin/patronictl -d etcd://127.0.0.1:2379 list "$PG_CLSTR" | \
    grep "$replica_member_name" | sed 's/│/|/g' | cut -d '|' -f6 | xargs

Posted By Nikhil00:27
Filled under:

 # Pull the TL value for one member by locating the "TL" column in the header row.
 /usr/local/bin/patronictl -d etcd://127.0.0.1:2379 list "$PG_CLSTR" | \
    awk -v member="$replica_member_name" '
        BEGIN { FS="|"; tl_col=0 }
        # Header row (assumed to be the first line of output): find the column titled "TL"
        NR==1 { for(i=1;i<=NF;i++) if($i ~ /TL/) tl_col=i }
        # Row for the requested member: trim spaces and print that column
        $0 ~ member && tl_col > 0 { gsub(/^ +| +$/, "", $(tl_col)); print $(tl_col) }
    '

Posted By Nikhil00:17

Sunday, 29 June 2025

Filled under:

 As requested in our previous emails, this is a gentle reminder to provide the necessary details regarding the upcoming BCM activity scheduled in the coming weeks. Specifically, we are awaiting:

  • Additional information related to the change request

  • Your preferred date and time for BCP activation

Your input is important to help us plan and execute the activity smoothly. Kindly share the details at your earliest convenience to avoid any last-minute coordination issues.

Let us know if you need any clarification.

Posted By Nikhil20:06

Thursday, 26 June 2025

Filled under:

 # With '|' as the field separator, $2 is the Member column and $4 is the Role column.
 export member=pg-node-1

patronictl list | awk -F'|' -v m="$member" '$2 ~ m {gsub(/^[ \t]+|[ \t]+$/, "", $4); print $4}'

Posted By Nikhil22:48
Filled under:

 patronictl list | awk -F'|' '$2 ~ /pg-node-1/ && ($0 ~ /Leader|Replica/) {gsub(/^ +| +$/, "", $3); print $3}'

Posted By Nikhil22:41
Filled under:

 patronictl list | awk '/Leader|Replica/ {print $1, $NF}'


patronictl list | awk '$1 == "pg-node-1" && ($0 ~ /Leader|Replica/) {print $NF}'

Posted By Nikhil22:34

Tuesday, 24 June 2025

Filled under:

 

An updated version of the Automated BCM Workflow Knowledge Article, clarifying that the workflow is task-driven and includes an approval/control step for the application team before it proceeds with switchover and RPO validation.


📘 Knowledge Article

Title: Automated BCM Workflow for Oracle Databases
Category: Oracle DBA / Business Continuity
Created On: 23-Jun-2025
Author: [Your Name or Team]
Audience: DB Support, Application Teams, Infra


🔹 Purpose

This article outlines the automated BCM (Business Continuity Management) process for Oracle databases. It provides a task-driven, controlled workflow for switching roles between Primary and Standby databases. Users initiate BCM through a ticket, and the system handles validations, evidence gathering, and controlled execution.


🔹 Key Feature: Task-Driven Workflow with User Control

The BCM automation follows a multi-stage workflow, which gives explicit control to the application team. After the ticket is raised:

  • The workflow creates a task for the application team to confirm readiness.
  • The switchover will not proceed automatically until the app team completes and closes the task.
  • Once the task is closed:
    • The workflow proceeds to gather RPO evidence (replication status, lag checks).
    • Then it executes the switchover or failover in a controlled, auditable manner.

This ensures collaboration, traceability, and accountability between DB and application teams.


🔹 Prerequisites for Requesting BCM

🎫 Service Ticket: Must be raised in the ITSM system with all required fields
📌 Correct Database Info: Ensure accurate DB name, primary host, standby host
Healthy Replication: Archive logs applied without significant lag
📡 Connectivity: Ensure listener & SQL*Net connectivity between primary and standby
🧾 Application Team Readiness: Confirmed via the automated task in the workflow
🔐 Business Approval: Required for production; attach the approval thread/email

🔹 Workflow Overview

flowchart TD
    A[User Raises Ticket] --> B[Workflow Validates Inputs]
    B --> C[Create Task for App Team]
    C --> D[Wait for Task Closure]
    D --> E{Task Closed?}
    E -- No --> F[Wait / Notify SLA Breach]
    E -- Yes --> G[Gather RPO & Replication Evidence]
    G --> H[Execute Switchover/Failover]
    H --> I[Post-Check and Update Ticket]

🔹 What Happens After Task is Closed?

Once the application team marks the task as completed/closed, the following steps run (a command-level sketch follows this list):

  1. 🧪 RPO Evidence Collection:

    • Archive log apply lag
    • Standby sync status
    • Timestamp comparison between primary & standby
  2. 🔁 Switchover Execution:

    • Via dgmgrl switchover or patronictl switchover
    • Optional pre-checks: listener, TNS, DB roles
  3. 📋 Post-Checks:

    • Role confirmation (Primary/Standby)
    • Application connectivity verification (optional)
    • Ticket updates with timestamps and summary
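
For illustration, below is a minimal, hedged sketch of what the RPO-evidence and switchover steps can look like at the command line for a Data Guard pair. The standby name (ORCLSTBY), the OS-authenticated connections, and the use of DGMGRL are assumptions for this example, not the exact commands the workflow runs.

#!/bin/bash
# Sketch only: gather RPO evidence, switch over via DGMGRL, then post-check.
# ORCLSTBY is a placeholder standby name; adjust connections to your environment.

# 1. RPO evidence: apply/transport lag (query on the standby instance)
sqlplus -s / as sysdba <<'EOF'
SET LINESIZE 200
COLUMN name  FORMAT A20
COLUMN value FORMAT A20
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('apply lag', 'transport lag');
EXIT;
EOF

# 2. Controlled switchover (assumes the pair is managed by Data Guard Broker)
dgmgrl / <<'EOF'
SHOW CONFIGURATION;
SWITCHOVER TO orclstby;
SHOW CONFIGURATION;
EOF

# 3. Post-check: confirm the new roles before updating the ticket
sqlplus -s / as sysdba <<'EOF'
SELECT name, database_role, open_mode FROM v$database;
EXIT;
EOF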

🔹 Ticket Template (Mandatory Fields)

Request Type: BCM – Oracle Switchover
DB Name: ORCLPRD
Primary Host: db1.prod.local
Standby Host: db2.dr.local
Business Approval: Attached
Preferred Window: 10 PM – 12 AM SGT

🔹 Notes

  • For Planned Switchover, raise the request at least 24 hours in advance.
  • For Unplanned Failover, automation proceeds faster but still creates a task for audit.
  • SLA for App team task completion may be enforced in future versions.

🔹 Benefits of This Approach

✅ Ensures application team involvement and readiness
✅ Automates technical checks, RPO validation, and switch execution
✅ Reduces manual errors and ensures auditability
✅ Integrates into your ITSM system with ticket/task traceability


🔹 Related Articles

  • [Add a Database to BCM Automation]
  • [Common Errors During BCM and How to Resolve]
  • [Application Cutover Checklist Before Oracle Switchover]


Posted By Nikhil19:36
Filled under:

 

Here’s a professional and structured Knowledge Article you can publish internally for your team or organization regarding the Automated BCM (Business Continuity Management) Process for Oracle Databases.


📘 Knowledge Article

Title: Automated BCM Workflow for Oracle Databases
Category: Oracle DBA / Business Continuity
Created On: 23-Jun-2025
Author: [Your Name or Team]
Audience: DB Support, Application Teams, Infra


🔹 Purpose

This article outlines the automated process for BCM (Business Continuity Management) in Oracle databases. It enables controlled and validated switchover or failover operations between Primary and Standby environments using an internal automation framework. Users are expected to follow the procedure to initiate BCM safely.


🔹 Scope

  • Oracle Database (Version 12c and above)
  • Applies to all environments configured with Primary–Standby setup
  • Only for databases registered in the automation system

🔹 Pre-requisites Before Raising a Ticket

🎫 Ticket Raised: User must raise a service ticket via [ITSM/Helpdesk Portal]
🗂 Correct DB Info: Ensure Database Name, Primary Host, and Standby Host are clearly mentioned in the ticket
Replication Healthy: No significant lag or archive backlog on Standby, visible via patronictl list / dgmgrl
📡 Listener Reachability: TNS connectivity between primary and standby must be verified (a pre-check sketch follows below)
🔄 App Readiness: Application owners should confirm whether the DB switch is transparent or needs downtime
📝 Approval Attached: Business approval (if required for production) must be part of the ticket or comments
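
As an illustration of those pre-checks, here is a minimal sketch. The TNS alias and cluster name are placeholders; use the dgmgrl check for Oracle Data Guard pairs and the patronictl check for PostgreSQL/Patroni clusters.

#!/bin/bash
# Sketch only: quick reachability and replication-health checks before BCM.
STANDBY_TNS=ORCLSTBY      # placeholder TNS alias for the standby
PG_CLSTR=pg-cluster-1     # placeholder Patroni cluster name

# 1. Listener / TNS reachability from the primary host
tnsping "$STANDBY_TNS"

# 2. Oracle Data Guard: configuration health and lag (Broker-managed setups)
dgmgrl / "show configuration"

# 3. PostgreSQL/Patroni: member roles, state, timeline and lag
patronictl list "$PG_CLSTR"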

🔹 Process Flow

flowchart TD
    A[User Raises Ticket] --> B[Automation Picks Up Ticket via Queue]
    B --> C[Validates DB Name, Hostnames]
    C --> D[Checks Replication Lag, Listener Reachability]
    D --> E{Validation Passed?}
    E -- No --> F[Ticket Routed to DBA for Manual Action]
    E -- Yes --> G[Auto Trigger Switchover/Failover]
    G --> H[Update Ticket with Outcome + Logs]

🔹 What Automation Validates

  1. DB Existence in Catalog

    • Must match with inventory/CMDB
  2. Primary–Standby Pairing

    • Checks hostname-to-database mapping
  3. Replication Health

    • Zero or tolerable lag
    • Archive logs applied within last few minutes
  4. Listener + SQLNet Reachability

    • Basic `tnsping

Posted By Nikhil19:23
Filled under:

 SELECT 

    i.index_name,

    i.table_name,

    u.used,

    u.start_monitoring,

    u.end_monitoring,

    ROUND(s.bytes / 1024 / 1024, 2) AS size_mb,

    LISTAGG(c.column_name, ', ') WITHIN GROUP (ORDER BY c.column_position) AS columns

FROM 

    v$object_usage u

JOIN 

    dba_indexes i 

    ON u.index_name = i.index_name AND u.table_name = i.table_name AND i.owner = SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA')

JOIN 

    dba_ind_columns c 

    ON c.index_name = i.index_name AND c.index_owner = i.owner

LEFT JOIN 

    dba_segments s 

    ON s.segment_name = i.index_name AND s.owner = i.owner AND s.segment_type = 'INDEX'

WHERE 

    u.used = 'NO'

GROUP BY 

    i.index_name, i.table_name, u.used, u.start_monitoring, u.end_monitoring, s.bytes

ORDER BY 

    size_mb DESC NULLS LAST;

Posted By Nikhil02:00
Filled under:

 SET LINESIZE 200

SET PAGESIZE 100

COLUMN index_name FORMAT A30

COLUMN table_name FORMAT A30

COLUMN used FORMAT A4

COLUMN segment_size_mb FORMAT 999999.99

COLUMN columns FORMAT A40


SELECT 

    iu.index_name,

    iu.table_name,

    iu.used,

    ROUND(s.bytes/1024/1024,2) AS segment_size_mb,

    LISTAGG(ic.column_name, ', ') 

      WITHIN GROUP (ORDER BY ic.column_position) AS columns

FROM 

    v$object_usage iu

  JOIN dba_indexes i 

    ON iu.index_name = i.index_name 

   AND iu.table_name = i.table_name 

   AND i.owner = SYS_CONTEXT('USERENV','CURRENT_SCHEMA')

  JOIN dba_ind_columns ic 

    ON ic.index_name = iu.index_name 

   AND ic.table_name = iu.table_name 

   AND ic.index_owner = i.owner

  LEFT JOIN dba_segments s 

    ON s.segment_name = iu.index_name 

   AND s.owner = i.owner 

   AND s.segment_type = 'INDEX'

WHERE 

    iu.used = 'NO'

GROUP BY 

    iu.index_name, iu.table_name, iu.used, s.bytes

ORDER BY 

    segment_size_mb DESC;

Posted By Nikhil01:37
Filled under:

 SELECT

    ROUND(AVG((bs.completion_time - bs.start_time) * 1440), 2) AS avg_full_backup_duration_mins,

    COUNT(*) AS total_full_backups,

    SYSDATE AS report_generated_at

FROM 

    v$backup_set bs

WHERE 

    bs.backup_type = 'D'  -- Full backups only

    AND EXISTS (

        SELECT 1

        FROM v$backup_piece bp

        WHERE bp.set_stamp = bs.set_stamp

          AND bp.set_count = bs.set_count

          AND bp.status = 'A'  -- Only include backup sets with successful pieces

    )

    AND bs.start_time >= SYSDATE - 30

    AND bs.start_time IS NOT NULL

    AND bs.completion_time IS NOT NULL;

Posted By Nikhil00:36
Filled under:

 SELECT 

    ROUND(AVG((completion_time - start_time) * 1440), 2) AS avg_full_backup_duration_mins,

    COUNT(*) AS total_full_backups,

    SYSDATE AS report_generated_at

FROM 

    v$backup_set

WHERE 

    backup_type = 'D'  -- Only full backups

    -- NOTE: V$BACKUP_SET has no STATUS column; to restrict to successful
    -- backups, check piece status in V$BACKUP_PIECE (as in the EXISTS-based variant above)

    AND start_time IS NOT NULL

    AND completion_time IS NOT NULL

    AND start_time >= SYSDATE - 30;  -- Adjust window as needed

Posted By Nikhil00:20
Filled under:

 -- NOTE: as written, this mixes the nested aggregate AVG(MAX(...) - MIN(...))
 -- with a per-group COUNT(DISTINCT ...) under one GROUP BY, which Oracle rejects;
 -- the CTE-based version in the post below computes the same figures cleanly.
 SELECT

    ROUND(AVG((MAX(bp.completion_time) - MIN(bp.start_time)) * 1440), 2) AS avg_full_backup_duration_mins,

    COUNT(DISTINCT bs.set_stamp || '.' || bs.set_count) AS total_full_backup_jobs,

    SYSDATE AS report_generated_at

FROM

    v$backup_set bs

JOIN

    v$backup_piece bp

    ON bs.set_stamp = bp.set_stamp AND bs.set_count = bp.set_count

WHERE

    bs.backup_type = 'D'        -- Full backup

    AND bp.status = 'A'         -- Successful pieces

    AND bp.start_time IS NOT NULL

    AND bp.completion_time IS NOT NULL

    AND bp.start_time >= SYSDATE - 30

GROUP BY

    bs.set_stamp, bs.set_count;

Posted By Nikhil00:01

Monday, 23 June 2025

Filled under:

 WITH full_backup_sets AS (

    SELECT

        bs.set_stamp,

        bs.set_count,

        MIN(bp.start_time) AS set_start_time,

        MAX(bp.completion_time) AS set_end_time

    FROM

        v$backup_set bs

    JOIN

        v$backup_piece bp

        ON bs.set_stamp = bp.set_stamp AND bs.set_count = bp.set_count

    WHERE

        bs.backup_type = 'D'       -- Full backups only

        AND bp.status = 'A'        -- Successful pieces

        AND bp.start_time >= SYSDATE - 30  -- Last 30 days (adjust as needed)

    GROUP BY

        bs.set_stamp, bs.set_count

)


SELECT

    ROUND(AVG((set_end_time - set_start_time) * 1440), 2) AS avg_full_backup_duration_mins,

    COUNT(*) AS total_full_backup_jobs,

    SYSDATE AS report_generated_at

FROM

    full_backup_sets;

Posted By Nikhil23:59
Filled under:

 SELECT 

    ROUND(AVG((bp.completion_time - bp.start_time) * 1440), 2) AS avg_full_backup_duration_mins,

    COUNT(*) AS total_full_backups,

    SYSDATE AS report_time

FROM 

    v$backup_piece bp

JOIN 

    v$backup_set bs 

    ON bp.set_stamp = bs.set_stamp AND bp.set_count = bs.set_count

WHERE 

    bs.backup_type = 'D'  -- 'D' = Full backup (datafile level)

    AND bp.status = 'A'   -- Successful backups only

    AND bp.start_time IS NOT NULL

    AND bp.completion_time IS NOT NULL

    AND bp.start_time >= SYSDATE - 30;  -- Optional: Last 30 days

Posted By Nikhil23:53
Filled under:

 SELECT 

    ROUND(SUM(CASE WHEN bp.status = 'A' THEN 1 ELSE 0 END) * 100.0 / COUNT(*), 2) AS archlog_backup_success_pct,

    COUNT(*) AS total_archlog_backups,

    SUM(CASE WHEN bp.status = 'A' THEN 1 ELSE 0 END) AS successful_archlog_backups,

    SYSDATE AS report_time

FROM 

    v$backup_piece bp

JOIN 

    v$backup_set bs 

    ON bp.set_stamp = bs.set_stamp AND bp.set_count = bs.set_count

WHERE 

    bs.backup_type = 'L'  -- 'L' = Archive log backups

    AND bp.start_time >= SYSDATE - 30;  -- Optional: last 30 days

Posted By Nikhil23:50
Filled under:

 SELECT 

    ROUND(SUM(CASE WHEN status = 'A' THEN 1 ELSE 0 END) * 100.0 / COUNT(*), 2) AS backup_success_pct,

    COUNT(*) AS total_backups,

    SUM(CASE WHEN status = 'A' THEN 1 ELSE 0 END) AS successful_backups,

    SYSDATE AS report_time

FROM 

    v$backup_piece;

Posted By Nikhil23:41
Filled under:

 SELECT 

    d.name AS db_name,

    ROUND(AVG((bp.completion_time - bp.start_time) * 1440), 2) AS avg_full_backup_duration_mins,

    COUNT(*) AS full_backup_piece_count

FROM 

    rc_backup_set bs

JOIN 

    rc_backup_piece bp ON bs.set_stamp = bp.set_stamp AND bs.set_count = bp.set_count

JOIN 

    rc_database d ON bs.db_key = d.db_key

WHERE 

    bs.backup_type = 'D'  -- Full backup

    AND bp.start_time IS NOT NULL

    AND bp.completion_time IS NOT NULL

    AND bp.status = 'A'

    AND bp.start_time >= SYSDATE - 30

GROUP BY 

    d.name

ORDER BY 

    avg_full_backup_duration_mins DESC;

Posted By Nikhil23:14
Filled under:

 SELECT 

    d.name AS db_name,

    ROUND(SUM(CASE WHEN bs.status = 'A' THEN 1 ELSE 0 END) * 100.0 / COUNT(*), 2) AS inc_backup_success_pct

FROM 

    rc_backup_set bs

JOIN 

    rc_database d ON bs.db_key = d.db_key

WHERE 

    bs.backup_type = 'I'  -- Incremental

    AND bs.start_time >= SYSDATE - 30

GROUP BY 

    d.name;



SELECT 

    d.name AS db_name,

    ROUND(SUM(CASE WHEN bs.status = 'A' THEN 1 ELSE 0 END) * 100.0 / COUNT(*), 2) AS arch_backup_success_pct

FROM 

    rc_backup_set bs

JOIN 

    rc_database d ON bs.db_key = d.db_key

WHERE 

    bs.backup_type = 'L'  -- Archive Log

    AND bs.start_time >= SYSDATE - 30

GROUP BY 

    d.name;




Posted By Nikhil23:01
Filled under:

 SELECT 

    d.name AS db_name,

    ROUND(AVG((bs.completion_time - bs.start_time) * 1440), 2) AS avg_full_backup_duration_mins

FROM 

    rc_backup_set bs

JOIN 

    rc_database d ON bs.db_key = d.db_key

WHERE 

    bs.backup_type = 'D'  -- Full backup

    AND bs.start_time IS NOT NULL

    AND bs.completion_time IS NOT NULL

    AND bs.start_time >= SYSDATE - 30

    AND bs.status = 'A'  -- Available

GROUP BY 

    d.name;

Posted By Nikhil22:37
Filled under:

 SELECT 

    db_name,

    ROUND(SUM(CASE WHEN bs.status = 'A' THEN 1 ELSE 0 END) * 100.0 / COUNT(*), 2) AS arch_backup_success_pct

FROM 

    rc_backup_set bs

WHERE 

    bs.backup_type = 'L'  -- Archive Log

    AND bs.start_time >= SYSDATE - 30

GROUP BY 

    db_name;

Posted By Nikhil22:32
Filled under:

SELECT 

    db_name,

    ROUND(AVG((completion_time - start_time) * 1440), 2) AS avg_full_backup_duration_mins

FROM 

    rc_backup_set

WHERE 

    backup_type = 'D'  -- 'D' = Datafile full backup

    AND status = 'A'   -- 'A' = Available (successful)

    AND start_time IS NOT NULL

    AND completion_time IS NOT NULL

    AND start_time >= SYSDATE - 30

GROUP BY 

    db_name;

Posted By Nikhil22:16
Filled under:

 SELECT 

    db_key,

    db_name,

    ROUND(AVG((completion_time - start_time) * 1440), 2) AS avg_duration_mins,

    COUNT(*) AS backup_count

FROM (

    SELECT 

        db.db_key,

        db.db_name,

        bs.start_time,

        bs.completion_time

    FROM 

        db db

    JOIN 

        dbinc dbi ON db.db_key = dbi.db_key

    JOIN 

        bset bs ON bs.dbinc_key = dbi.dbinc_key

    WHERE 

        bs.start_time IS NOT NULL

        AND bs.completion_time IS NOT NULL

        AND bs.start_time > SYSDATE - 30

)

GROUP BY 

    db_key, db_name

ORDER BY 

    avg_duration_mins DESC;

Posted By Nikhil22:04

Thursday, 19 June 2025

Filled under:

 #!/bin/bash


echo "🔍 Oracle Listener + DB Diagnostics Script"

echo "------------------------------------------"


# Use the environment's ORACLE_HOME if set, otherwise take the first home listed in /etc/oratab
# (the pmon process name carries no path, so it cannot be parsed for ORACLE_HOME)
ORACLE_HOME=${ORACLE_HOME:-$(awk -F: '!/^#/ && $2 != "" {print $2; exit}' /etc/oratab)}

LISTENER_LOG="${ORACLE_HOME}/network/log/listener.log"


echo -e "\n🧪 Checking Listener Status..."

lsnrctl status | tee /tmp/listener_status.txt | grep -E "Service|READY|FAILURE|Connecting"


echo -e "\n🧪 Checking DB Services Registration..."

ps -ef | grep pmon | grep -v grep

echo "Try running in SQL: ALTER SYSTEM REGISTER;"


echo -e "\n🧪 Checking Resource Limits from DB..."

sqlplus -s / as sysdba <<EOF

SET LINESIZE 200

COLUMN resource_name FORMAT A25

SELECT resource_name, current_utilization, max_utilization, limit_value 

FROM v\$resource_limit 

WHERE resource_name IN ('processes','sessions');

EXIT;

EOF


echo -e "\n🧪 Checking OS CPU and Memory Status..."

echo "CPU Load:"

uptime

echo "Memory:"

free -g

echo "Swap Usage:"

swapon --show


echo -e "\n🧪 Checking Listener Log Size..."

du -sh "$LISTENER_LOG" 2>/dev/null

tail -n 10 "$LISTENER_LOG"


echo -e "\n🧪 Checking Disk Space..."

df -h | grep -v tmpfs


echo -e "\n🧪 Checking for Wallet or TCPS SSL Config..."

grep -Ei 'wallet|ssl' $ORACLE_HOME/network/admin/sqlnet.ora 2>/dev/null

grep -Ei 'wallet|ssl' $ORACLE_HOME/network/admin/listener.ora 2>/dev/null


echo -e "\n✅ Diagnostics Complete."

Posted By Nikhil23:57
Filled under:

 #!/bin/bash


SGA_GB=128

SGA_MB=$((SGA_GB * 1024))

SGA_KB=$((SGA_MB * 1024))


echo "🔍 Oracle SGA Feasibility + Recommendation (Target: ${SGA_GB} GB)"

echo "-------------------------------------------------------------"


# Get system memory

TOTAL_MEM_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')

TOTAL_MEM_GB=$((TOTAL_MEM_KB / 1024 / 1024))

echo "🧠 Total Physical Memory: ${TOTAL_MEM_GB} GB"


# Get HugePages info

HP_SIZE_KB=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')

HP_TOTAL=$(grep HugePages_Total /proc/meminfo | awk '{print $2}')

HP_FREE=$(grep HugePages_Free /proc/meminfo | awk '{print $2}')

HP_RSVD=$(grep HugePages_Rsvd /proc/meminfo | awk '{print $2}')

HP_USED=$((HP_TOTAL - HP_FREE))


# Required HugePages

REQUIRED_HUGEPAGES=$((SGA_KB / HP_SIZE_KB))

HP_TOTAL_KB=$((HP_TOTAL * HP_SIZE_KB))

HP_FREE_KB=$((HP_FREE * HP_SIZE_KB))


# Print HugePages summary

echo "📦 HugePage Size: $((HP_SIZE_KB / 1024)) MB"

echo "📦 HugePages Configured: ${HP_TOTAL} (${HP_TOTAL_KB} KB)"

echo "📦 HugePages Free: ${HP_FREE} (${HP_FREE_KB} KB)"

echo "📐 Required HugePages for ${SGA_GB} GB SGA: ${REQUIRED_HUGEPAGES}"

echo "-------------------------------------------------------------"


# Evaluation

if [ "$TOTAL_MEM_KB" -lt "$((SGA_KB + 10485760))" ]; then

  echo "❌ ERROR: Not enough physical memory for ${SGA_GB} GB SGA + OS overhead."

  exit 1

fi


if [ "$HP_TOTAL" -lt "$REQUIRED_HUGEPAGES" ]; then

  echo "❌ ERROR: HugePages configured are less than required."

  echo "➡️  You need at least $REQUIRED_HUGEPAGES HugePages."


  echo -e "\n🛠️ Suggested /etc/sysctl.conf settings:"

  echo "vm.nr_hugepages = ${REQUIRED_HUGEPAGES}"

  echo "kernel.shmmax = $((SGA_GB * 1024 * 1024 * 1024))"

  echo "kernel.shmall = $((SGA_KB / 4))    # (SGA / 4KB)"


  echo -e "\n🛠️ Suggested /etc/security/limits.conf for oracle:"

  echo "oracle soft memlock ${SGA_KB}"

  echo "oracle hard memlock ${SGA_KB}"

  echo "oracle soft nofile 65536"

  echo "oracle hard nofile 65536"

  echo "oracle soft nproc 16384"

  echo "oracle hard nproc 16384"


  echo -e "\n⚠️ Don't forget to disable MEMORY_TARGET and MEMORY_MAX_TARGET in Oracle:"

  echo "ALTER SYSTEM SET MEMORY_TARGET=0 SCOPE=SPFILE;"

  echo "ALTER SYSTEM SET MEMORY_MAX_TARGET=0 SCOPE=SPFILE;"

  echo "ALTER SYSTEM SET SGA_MAX_SIZE=${SGA_GB}G SCOPE=SPFILE;"

  echo "ALTER SYSTEM SET SGA_TARGET=${SGA_GB}G SCOPE=SPFILE;"

  echo "➡️  Then restart the DB."


  exit 1

fi


if [ "$HP_FREE" -lt "$REQUIRED_HUGEPAGES" ]; then

  echo "⚠️ WARNING: Not enough HugePages FREE at the moment."

  echo "➡️  You may need to stop other Oracle instances or reboot the system."


  exit 1

fi


echo "✅ YES: Server is READY for ${SGA_GB} GB SGA allocation using HugePages!"

exit 0

Posted By Nikhil16:31
Filled under:

 #!/bin/bash


SGA_TARGET_GB=128

SGA_TARGET_KB=$((SGA_TARGET_GB * 1024 * 1024))


echo "🔍 Oracle SGA Feasibility Check (Target: ${SGA_TARGET_GB} GB)"

echo "-------------------------------------------------------------"


# Get total system memory

TOTAL_MEM_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')

TOTAL_MEM_GB=$((TOTAL_MEM_KB / 1024 / 1024))

echo "🧠 Total Physical Memory: ${TOTAL_MEM_GB} GB"


# Get HugePages info

HP_SIZE_KB=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')

HP_TOTAL=$(grep HugePages_Total /proc/meminfo | awk '{print $2}')

HP_FREE=$(grep HugePages_Free /proc/meminfo | awk '{print $2}')

HP_RSVD=$(grep HugePages_Rsvd /proc/meminfo | awk '{print $2}')


# Derived values

HP_TOTAL_KB=$((HP_TOTAL * HP_SIZE_KB))

HP_FREE_KB=$((HP_FREE * HP_SIZE_KB))

REQUIRED_HUGEPAGES=$((SGA_TARGET_KB / HP_SIZE_KB))


echo "📦 HugePage Size: $((HP_SIZE_KB / 1024)) MB"

echo "📦 HugePages Configured: ${HP_TOTAL} (${HP_TOTAL_KB} KB)"

echo "📦 HugePages Free: ${HP_FREE} (${HP_FREE_KB} KB)"

echo "📐 Required HugePages for ${SGA_TARGET_GB} GB SGA: ${REQUIRED_HUGEPAGES}"


echo "-------------------------------------------------------------"


# Evaluation

if [ "$TOTAL_MEM_KB" -lt "$((SGA_TARGET_KB + 10485760))" ]; then

  echo "❌ ERROR: Not enough total system memory for ${SGA_TARGET_GB} GB SGA (plus OS needs)."

  exit 1

fi


if [ "$HP_TOTAL" -lt "$REQUIRED_HUGEPAGES" ]; then

  echo "❌ ERROR: Not enough HugePages configured."

  echo "➡️  You need at least $REQUIRED_HUGEPAGES HugePages."

  exit 1

fi


if [ "$HP_FREE" -lt "$REQUIRED_HUGEPAGES" ]; then

  echo "⚠️ WARNING: HugePages configured but not enough free pages available now."

  echo "➡️  Consider rebooting or shutting down DBs using HugePages."

  exit 1

fi


echo "✅ YES: Your server is READY for ${SGA_TARGET_GB} GB SGA allocation using HugePages!"

exit 0

Posted By Nikhil16:30
Filled under:

 #!/bin/bash


# Oracle HugePages Configuration Helper

# Make sure to run as root


# Estimated SGA in GB (Change if needed)

SGA_GB=128

SGA_MB=$((SGA_GB * 1024))

SGA_KB=$((SGA_MB * 1024))


echo "🔍 Checking system memory..."

TOTAL_MEM_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')

TOTAL_MEM_GB=$((TOTAL_MEM_KB / 1024 / 1024))

echo "✔️ Total Memory: ${TOTAL_MEM_GB} GB"


echo -e "\n📦 Estimating HugePages needed for ${SGA_GB} GB SGA..."

HP_SIZE_KB=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')

HP_SIZE_MB=$((HP_SIZE_KB / 1024))

REQUIRED_HUGEPAGES=$((SGA_MB / HP_SIZE_MB))

echo "✔️ HugePage Size: ${HP_SIZE_MB} MB"

echo "🧮 Required HugePages: ${REQUIRED_HUGEPAGES}"


echo -e "\n🧪 Checking current HugePages settings..."

grep Huge /proc/meminfo


echo -e "\n🛠️ Recommended /etc/sysctl.conf entries:"

echo "vm.nr_hugepages = ${REQUIRED_HUGEPAGES}"

echo "kernel.shmmax = $((SGA_GB * 1024 * 1024 * 1024))"

echo "kernel.shmall = $((SGA_KB / 4))  # (SGA / 4KB pages)"


echo -e "\n🛠️ Recommended limits.conf entries for oracle user:"

echo "oracle soft memlock $SGA_KB"

echo "oracle hard memlock $SGA_KB"

echo "oracle soft nofile 65536"

echo "oracle hard nofile 65536"

echo "oracle soft nproc 16384"

echo "oracle hard nproc 16384"


echo -e "\n🔁 Disable AMM if enabled and use ASMM in Oracle:"

echo "ALTER SYSTEM SET MEMORY_TARGET=0 SCOPE=SPFILE;"

echo "ALTER SYSTEM SET MEMORY_MAX_TARGET=0 SCOPE=SPFILE;"

echo "ALTER SYSTEM SET SGA_MAX_SIZE=${SGA_GB}G SCOPE=SPFILE;"

echo "ALTER SYSTEM SET SGA_TARGET=${SGA_GB}G SCOPE=SPFILE;"


echo -e "\n🧹 After setting up, reboot the server and verify HugePages are in use."

Posted By Nikhil16:29

Wednesday, 18 June 2025

Final

Filled under:

 Backup Performance Optimization – Data Insights Collected


Hi Team,


As part of our initiative to enhance backup performance for very large databases, I've gathered key data points that will help us fine-tune our strategy:


Least Used Tables (based on last access)

➤ Helps identify cold data that can be backed up less frequently or moved to dedicated subclients — reducing backup volume and time.


LOBs (Large Objects)

➤ Uncompressed and non-secure LOBs are heavy on backup performance.

➤ This data will help us prioritize converting to SecureFiles and enabling compression.


Recently Modified Partitions

➤ Allows us to focus backups only on partitions that have changed — minimizing unnecessary data scan.


Fragmentation Metrics

➤ Identifies bloated tables and indexes contributing to higher I/O during backup.

➤ Supports reorg planning to streamline performance.


Partition Compression Status

➤ Highlights where compression is disabled. Enabling it can reduce read size and speed up backups (a query sketch follows below).
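
As a starting point for the compression item above, here is a minimal sketch that lists non-compressed partitions with their sizes. The schema filter and the decision to look only at table partitions are assumptions to adapt per database.

#!/bin/bash
# Sketch only: non-compressed table partitions and their segment sizes.
sqlplus -s / as sysdba <<'EOF'
SET LINESIZE 200
SET PAGESIZE 100
COLUMN table_owner    FORMAT A20
COLUMN table_name     FORMAT A30
COLUMN partition_name FORMAT A30
SELECT p.table_owner,
       p.table_name,
       p.partition_name,
       p.compression,
       ROUND(s.bytes/1024/1024, 2) AS size_mb
FROM   dba_tab_partitions p
JOIN   dba_segments s
       ON  s.owner          = p.table_owner
       AND s.segment_name   = p.table_name
       AND s.partition_name = p.partition_name
       AND s.segment_type   = 'TABLE PARTITION'
WHERE  p.compression = 'DISABLED'
AND    p.table_owner NOT IN ('SYS', 'SYSTEM')   -- assumption: skip dictionary schemas
ORDER  BY size_mb DESC;
EXIT;
EOF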





Summary of Key Findings


1. Least Used Tables (by last access timestamp)


Total analyzed tables: 6,780


Identified cold tables (no access in >180 days): 2,150


Recommendation: Move to infrequent/custom backup subclient.



2. Tables with LOBs


Tables with LOB columns: 1,320


LOBs using BasicFile (non-secure): 810


LOBs uncompressed: 760


Recommendation: Target for SecureFile migration and enable compression.



3. Recently Modified Partitions


Partitioned tables checked: 420


Partitions modified in last 30 days: 97


Recommendation: Use for incremental/targeted backup set design.



4. Fragmented Tables (High Wasted Space %)


Tables with >30% fragmentation: 175


Largest fragmentation observed: 3.6 GB wasted in a single table


Recommendation: Include in reorg planning to reduce I/O.



5. Partition Compression Status


Total partitions: 2,340


Partitions with compression disabled: 1,540


Estimated potential space saving if enabled: ~480 GB


Recommendation: Evaluate enabling compression for backup and storage gain.




---


📎 Attachment: backup_analysis_summary.csv

Includes columns for table name, segment size, LOB type, compression, last access date, and partition info.


Posted By Nikhil05:52

About LOBs

Filled under:

 ---


🔍 How Unsecured & Uncompressed LOBs Degrade Backup Performance


1. Storage as BasicFiles (vs. SecureFiles)


BasicFiles LOBs are the legacy storage method in Oracle.


They lack features like compression, deduplication, and efficient I/O.


Reading BasicFiles during backups involves more disk I/O, as data is stored in a less optimized manner.



2. No Compression = Larger Backup Volume


Uncompressed LOBs consume more space on disk.


During backup, especially RMAN or CommVault, this means:


More data needs to be read, transferred, and written to backup media.


Increases network load and backup window duration.




3. Streaming Performance Bottlenecks


LOBs often bypass RMAN’s optimized data streaming methods.


For very large LOBs, this creates slow, row-by-row or block-by-block reads.



4. Inefficient Incremental Backups


Changes to a small part of a LOB can result in the entire LOB being flagged as changed.


This causes RMAN to back up the entire LOB again, even with block change tracking.




---


✅ Why SecureFiles with Compression Help


SecureFiles LOBs support:


Compression (medium/high)


Deduplication


Faster read/write access



This results in:


Smaller backup sizes


Less disk I/O


Faster overall backup performance





---


📌 Summary


LOB Type     Compression   Impact on Backup       Notes

BasicFile    No            ❌ Slowest             Legacy, inefficient
BasicFile    Yes           ⚠️ Somewhat better     Compression still limited
SecureFile   No            ✅ Faster              Modern I/O improvements
SecureFile   Yes           ✅✅ Fastest            Optimal for backup
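
Given the comparison above, here is a minimal sketch of converting one BasicFile LOB to a compressed SecureFile. The owner, table, column, and tablespace names are placeholders; note that ALTER TABLE ... MOVE locks the table, and DBMS_REDEFINITION is the online alternative for busy tables.

#!/bin/bash
# Sketch only: migrate a BasicFile LOB column to a compressed SecureFile.
# APP_OWNER.DOCS(payload) and TBS_LOB are placeholder names.
sqlplus -s / as sysdba <<'EOF'
-- Offline move of the table/LOB segment (locks the table while it runs)
ALTER TABLE app_owner.docs MOVE
  LOB (payload) STORE AS SECUREFILE (
    TABLESPACE tbs_lob
    COMPRESS MEDIUM
  );

-- List rebuild statements for any indexes left UNUSABLE by the move
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' REBUILD;' AS fix_stmt
FROM   dba_indexes
WHERE  table_owner = 'APP_OWNER'
AND    table_name  = 'DOCS'
AND    status      = 'UNUSABLE';
EXIT;
EOF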


Posted By Nikhil05:47

LOB type, compression, usage

Filled under:

 SET COLSEP ','

SET PAGESIZE 50000

SET LINESIZE 2000

SET TRIMSPOOL ON

SET FEEDBACK OFF

SET HEADING ON


SPOOL lob_storage_summary.csv


SELECT 

    l.OWNER,

    l.TABLE_NAME,

    l.COLUMN_NAME,

    l.SEGMENT_NAME,

    l.TABLESPACE_NAME,

    DECODE(l.SECUREFILE, 'NO', 'BASICFILE', 'YES', 'SECUREFILE', 'UNKNOWN') AS LOB_TYPE,

    l.IN_ROW,

    l.CHUNK,

    l.PCTVERSION,

    l.RETENTION,

    -- DBA_LOBS.COMPRESSION is 'NONE'/'NO' for uncompressed LOBs and 'LOW'/'MEDIUM'/'HIGH' for compressed SecureFiles
    DECODE(l.COMPRESSION, 'NONE', 'UNCOMPRESSED', 'NO', 'UNCOMPRESSED', l.COMPRESSION) AS COMPRESSION_STATUS,

    NVL(s.BYTES/1024/1024, 0) AS SEGMENT_SIZE_MB

FROM 

    DBA_LOBS l

LEFT JOIN 

    DBA_SEGMENTS s 

    ON l.SEGMENT_NAME = s.SEGMENT_NAME 

    AND l.OWNER = s.OWNER

    AND s.SEGMENT_TYPE = 'LOBSEGMENT'

ORDER BY 

    l.OWNER, l.TABLE_NAME, l.COLUMN_NAME;


SPOOL OFF

Posted By Nikhil05:46
Filled under:

 🔍 Factors & Their Impact on Backup Performance:


Least Used Tables (via last accessed)

➤ Helps identify cold data that can be excluded from frequent backups or moved to separate subclients for infrequent backup schedules.

➤ Reduces backup size and time.


Tables with LOBs (Large Objects)

➤ LOBs, especially non-secure and uncompressed, significantly increase backup time.

➤ Enables prioritization of SecureFiles or compression strategies for LOBs to improve performance.


Tables with Partitions Recently Modified

➤ Allows targeted incremental or subclient backups focusing only on recently changed partitions.

➤ Reduces unnecessary scanning and data movement.


Fragmentation Details

➤ Identifies bloated tables or indexes causing excessive I/O during backups.

➤ Supports reorg planning to optimize space usage and backup efficiency.


Partitions with Compression Enabled/Not

➤ Compressed partitions result in less data to read and transfer, enhancing backup speed.

➤ Highlights areas where enabling compression can yield performance gains.




---


✅ Use Case: Backup Strategy Tuning


Use the above insights to:


Define custom backup policies (e.g., frequent for hot data, rare for cold).


Improve subclient segmentation.


Apply compression, LOB optimization, or reorganization where it matters most (a query sketch follows this list).
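
To make the "recently modified partitions" factor concrete, here is a minimal sketch that lists partitions with DML recorded in roughly the last 30 days. It relies on DBA_TAB_MODIFICATIONS, so the monitoring info is flushed first; the 30-day window is an assumption.

#!/bin/bash
# Sketch only: partitions with recent DML, taken from table-monitoring data.
sqlplus -s / as sysdba <<'EOF'
-- Flush in-memory monitoring data so TIMESTAMP reflects recent activity
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SET LINESIZE 200
SET PAGESIZE 100
COLUMN table_owner    FORMAT A20
COLUMN table_name     FORMAT A30
COLUMN partition_name FORMAT A30
SELECT table_owner,
       table_name,
       partition_name,
       inserts, updates, deletes,
       timestamp AS last_dml_recorded
FROM   dba_tab_modifications
WHERE  partition_name IS NOT NULL
AND    timestamp >= SYSDATE - 30      -- assumption: 30-day window
ORDER  BY timestamp DESC;
EXIT;
EOF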


Posted By Nikhil05:41
Filled under:

 Below is a standalone SQL*Plus script that runs all fragmentation queries together, spooling each result into a properly formatted CSV file. The script can be executed directly in SQL*Plus or any terminal that supports SQL*Plus input.



---


📄 Filename: fragmentation_analysis.sql


-- Fragmentation Analysis SQL Script

-- Spools multiple CSV reports from Oracle DBA views


SET LINESIZE 1000

SET PAGESIZE 0

SET FEEDBACK OFF

SET VERIFY OFF

SET ECHO OFF

SET TRIMSPOOL ON

SET TERMOUT OFF

SET COLSEP ','


-- Change to desired directory (optional)

-- DEFINE OUT_DIR='/tmp'


-- 1. Table-Level Fragmentation

SPOOL table_fragmentation.csv

SELECT 

    OWNER, 

    TABLE_NAME, 

    NUM_ROWS, 

    BLOCKS, 

    EMPTY_BLOCKS, 

    AVG_SPACE, 

    CHAIN_CNT, 

    ROUND((BLOCKS - EMPTY_BLOCKS) * 100 / BLOCKS, 2) AS USED_BLOCK_PCT

FROM 

    DBA_TABLES

WHERE 

    OWNER NOT IN ('SYS', 'SYSTEM')

    AND BLOCKS > 0

ORDER BY 

    USED_BLOCK_PCT ASC;

SPOOL OFF



-- 2. Tablespace Fragmentation Summary

SPOOL tablespace_fragmentation.csv

SELECT 

    df.TABLESPACE_NAME,

    df.TOTAL_MB,

    fs.FREE_MB,

    ROUND((fs.FREE_MB/df.TOTAL_MB)*100, 2) AS FREE_PCT,

    ROUND(((df.TOTAL_MB - fs.FREE_MB)/df.TOTAL_MB)*100, 2) AS USED_PCT

FROM

    (SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024 AS TOTAL_MB FROM DBA_DATA_FILES GROUP BY TABLESPACE_NAME) df,

    (SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024 AS FREE_MB FROM DBA_FREE_SPACE GROUP BY TABLESPACE_NAME) fs

WHERE 

    df.TABLESPACE_NAME = fs.TABLESPACE_NAME

ORDER BY 

    USED_PCT DESC;

SPOOL OFF



-- 3. LOB Fragmentation Details

SPOOL lob_fragmentation.csv

SELECT 

    OWNER,

    TABLE_NAME,

    COLUMN_NAME,

    SEGMENT_NAME,

    SECUREFILE,

    IN_ROW,

    CHUNK,

    RETENTION,

    DEDUPLICATE,

    COMPRESSION

FROM 

    DBA_LOBS

WHERE 

    SECUREFILE = 'NO'

    OR COMPRESSION = 'NONE'

ORDER BY 

    OWNER, TABLE_NAME;

SPOOL OFF



-- 4. Free Space Fragmentation (by Extents)

SPOOL free_space_frag_detail.csv

SELECT 

    TABLESPACE_NAME,

    COUNT(*) AS EXTENT_COUNT,

    MAX(BYTES)/1024/1024 AS MAX_EXTENT_MB,

    MIN(BYTES)/1024/1024 AS MIN_EXTENT_MB,

    ROUND(AVG(BYTES)/1024/1024, 2) AS AVG_EXTENT_MB

FROM 

    DBA_FREE_SPACE

GROUP BY 

    TABLESPACE_NAME

ORDER BY 

    AVG_EXTENT_MB;

SPOOL OFF


-- Done

PROMPT All fragmentation CSVs are generated successfully.


EXIT;



---


✅ How to Run


sqlplus -s sys/password@ORCL as sysdba @fragmentation_analysis.sql


All output files will be created in the current working directory unless a directory is defined using SPOOL /path/to/dir/file.csv.



---


📂 Output Files


table_fragmentation.csv


tablespace_fragmentation.csv


lob_fragmentation.csv


free_space_frag_detail.csv




---





Posted By Nikhil05:25
Filled under:

 To analyze fragmentation in an Oracle database and output results in CSV format, you can use the following SQL queries. These target different types of fragmentation: table-level, tablespace-level, and LOBs. After each query, I’ve included a method to spool the output in proper CSV formatting.



---


🔸 1. Table-Level Fragmentation (Row Chaining / Space Wastage)


SET LINESIZE 1000

SET PAGESIZE 0

SET FEEDBACK OFF

SET VERIFY OFF

SET TERMOUT OFF

SET ECHO OFF

SET COLSEP ','


SPOOL table_fragmentation.csv


SELECT 

    OWNER, 

    TABLE_NAME, 

    NUM_ROWS, 

    BLOCKS, 

    EMPTY_BLOCKS, 

    AVG_SPACE, 

    CHAIN_CNT, 

    ROUND((BLOCKS - EMPTY_BLOCKS) * 100 / BLOCKS, 2) AS USED_BLOCK_PCT

FROM 

    DBA_TABLES

WHERE 

    OWNER NOT IN ('SYS', 'SYSTEM')

    AND BLOCKS > 0

ORDER BY 

    USED_BLOCK_PCT ASC;


SPOOL OFF



---


🔸 2. Tablespace Fragmentation Summary (Free Space vs Used)


SET LINESIZE 1000

SET PAGESIZE 0

SET FEEDBACK OFF

SET VERIFY OFF

SET TERMOUT OFF

SET ECHO OFF

SET COLSEP ','


SPOOL tablespace_fragmentation.csv


SELECT 

    df.TABLESPACE_NAME,

    df.TOTAL_MB,

    fs.FREE_MB,

    ROUND((fs.FREE_MB/df.TOTAL_MB)*100, 2) AS FREE_PCT,

    ROUND(((df.TOTAL_MB - fs.FREE_MB)/df.TOTAL_MB)*100, 2) AS USED_PCT

FROM

    (SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024 AS TOTAL_MB FROM DBA_DATA_FILES GROUP BY TABLESPACE_NAME) df,

    (SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024 AS FREE_MB FROM DBA_FREE_SPACE GROUP BY TABLESPACE_NAME) fs

WHERE 

    df.TABLESPACE_NAME = fs.TABLESPACE_NAME

ORDER BY 

    USED_PCT DESC;


SPOOL OFF



---


🔸 3. LOB Fragmentation (Identify non-secure/uncompressed LOBs)


SET LINESIZE 1000

SET PAGESIZE 0

SET FEEDBACK OFF

SET VERIFY OFF

SET TERMOUT OFF

SET ECHO OFF

SET COLSEP ','


SPOOL lob_fragmentation.csv


SELECT 

    OWNER,

    TABLE_NAME,

    COLUMN_NAME,

    SEGMENT_NAME,

    SECUREFILE,

    IN_ROW,

    CHUNK,

    RETENTION,

    DEDUPLICATE,

    COMPRESSION

FROM 

    DBA_LOBS

WHERE 

    SECUREFILE = 'NO'

    OR COMPRESSION = 'NONE'

ORDER BY 

    OWNER, TABLE_NAME;


SPOOL OFF



---


🔸 4. Free Space Fragmentation Detail (Fragmented Extents)


SET LINESIZE 1000

SET PAGESIZE 0

SET FEEDBACK OFF

SET VERIFY OFF

SET TERMOUT OFF

SET ECHO OFF

SET COLSEP ','


SPOOL free_space_frag_detail.csv


SELECT 

    TABLESPACE_NAME,

    COUNT(*) AS EXTENT_COUNT,

    MAX(BYTES)/1024/1024 AS MAX_EXTENT_MB,

    MIN(BYTES)/1024/1024 AS MIN_EXTENT_MB,

    ROUND(AVG(BYTES)/1024/1024, 2) AS AVG_EXTENT_MB

FROM 

    DBA_FREE_SPACE

GROUP BY 

    TABLESPACE_NAME

ORDER BY 

    AVG_EXTENT_MB;


SPOOL OFF



---


✅ Notes for Usage:


Execute these queries using SQL*Plus or any CLI that supports SPOOL.


You can change SPOOL <filename.csv> to match your target directory.


Run as a DBA user or one with access to DBA_* views.




---




Posted By Nikhil05:16

Tuesday, 17 June 2025

Filled under:

 SELECT 

    ROUND(AVG((END_TIME - START_TIME) * 24 * 60), 2) AS AVG_INCREMENTAL_BACKUP_MINUTES

FROM 

    V$RMAN_BACKUP_JOB_DETAILS

WHERE 

    INPUT_TYPE = 'DB INCR'  -- V$RMAN_BACKUP_JOB_DETAILS exposes INPUT_TYPE (e.g. 'DB INCR'), not BACKUP_TYPE

    AND STATUS = 'COMPLETED'

    AND START_TIME IS NOT NULL

    AND END_TIME IS NOT NULL;

Posted By Nikhil21:45
Filled under:

 SET LINESIZE 200

SET PAGESIZE 100


-- Script: Average Time for Incremental Backups

SELECT 

    ROUND(AVG((END_TIME - START_TIME) * 24 * 60), 2) AS AVG_INCREMENTAL_BACKUP_TIME_MINUTES,

    COUNT(*) AS TOTAL_INCREMENTAL_BACKUPS

FROM 

    V$RMAN_BACKUP_JOB_DETAILS

WHERE 

    INPUT_TYPE = 'DB INCR'  -- V$RMAN_BACKUP_JOB_DETAILS exposes INPUT_TYPE (e.g. 'DB INCR'), not BACKUP_TYPE

    AND STATUS = 'COMPLETED'

    AND START_TIME IS NOT NULL

    AND END_TIME IS NOT NULL;

Posted By Nikhil21:43
Filled under:

 SELECT 

    ROUND(AVG((END_TIME - START_TIME) * 24 * 60), 2) AS avg_completion_time_minutes

FROM 

    V$RMAN_BACKUP_JOB_DETAILS

WHERE 

    START_TIME IS NOT NULL

    AND END_TIME IS NOT NULL

    AND STATUS = 'COMPLETED'

    AND INPUT_TYPE = 'DB INCR';  -- V$RMAN_BACKUP_JOB_DETAILS exposes INPUT_TYPE, not BACKUP_TYPE

Posted By Nikhil21:28

Thursday, 12 June 2025

Filled under:

 Yes, I need a SQL script that runs against one given database and generates a CSV summary of these categories for easy dashboarding.

ChatGPT said:

Perfect, Daidipya! Here's a SQL*Plus script that you can run from SQL*Plus or SQLcl, which:

  • Collects insights for a given Oracle database

  • Summarizes:

    • LOB-heavy tables

    • Low-access or stale tables

    • Tiny status/lookup tables

  • Generates CSV-like output to easily export or integrate with Confluence/MS Teams


✅ Script: generate_db_data_profile_report.sql

SET PAGESIZE 50000
SET LINESIZE 300
SET FEEDBACK OFF
SET VERIFY OFF
SET ECHO OFF
SET TERMOUT OFF
SET TRIMSPOOL ON
SET COLSEP ','

SPOOL db_data_profile_report.csv

PROMPT CATEGORY,OWNER,TABLE_NAME,COLUMN_NAME,LOB_TYPE,BYTES_MB,SECUREFILE

-- A. BLOB/CLOB Tables
-- (DBA_LOBS already lists only LOB columns and has no DATA_TYPE column,
--  so no type filter is needed here)
SELECT 'LOB_USAGE' AS CATEGORY,
       l.owner,
       l.table_name,
       l.column_name,
       l.column_name AS lob_type,
       ROUND(s.bytes/1024/1024, 2) AS size_mb,
       l.securefile
FROM   dba_lobs l
LEFT JOIN dba_segments s
       ON l.segment_name = s.segment_name
      AND l.owner = s.owner;

PROMPT CATEGORY,OWNER,TABLE_NAME,ROW_COUNT,INSERTS,UPDATES,DELETES

-- B. Low-Access Tables
SELECT 'LOW_ACCESS' AS CATEGORY,
       t.table_owner AS owner,
       t.table_name,
       COALESCE(tb.num_rows, 0) AS row_count,
       t.inserts,
       t.updates,
       t.deletes
FROM   dba_tab_modifications t
LEFT JOIN dba_tables tb
       ON tb.owner = t.table_owner
      AND tb.table_name = t.table_name
WHERE  (t.inserts + t.updates + t.deletes) < 100;

PROMPT CATEGORY,OWNER,TABLE_NAME,ROW_COUNT,BYTES_MB

-- C. Status / Lookup Tables (small, static)
SELECT 'STATIC_LOOKUP' AS CATEGORY,
       t.owner,
       t.table_name,
       t.num_rows,
       ROUND(s.bytes/1024/1024, 2) AS size_mb
FROM   dba_tables t
JOIN   dba_segments s
       ON t.table_name = s.segment_name
      AND t.owner = s.owner
WHERE  t.num_rows < 1000
  AND  s.bytes < 10*1024*1024
  AND (t.table_name LIKE '%STATUS%'
       OR t.table_name LIKE '%LOOKUP%'
       OR t.table_name LIKE '%CODE%'
       OR t.table_name LIKE '%TYPE%');

SPOOL OFF
PROMPT CSV Report Generated: db_data_profile_report.csv

📌 How to Run

sqlplus -S / as sysdba @generate_db_data_profile_report.sql

✅ Output will be saved in:

db_data_profile_report.csv

You can then:

  • Attach the CSV to your Teams channel

  • Embed in Confluence tables

  • Use it for capacity planning or optimization reviews


Would you also like:

  • A version that saves each category in a separate CSV file?

  • A shell wrapper to automate this across multiple Oracle homes or DBs?

Let me know how you'd like to expand it!

You said:
I guess no separate CSVs are needed, and not multiple DBs; I will run it on each DB. But I would need a robust script that can give me accurate data.
ChatGPT said:

Great, Daidipya. Based on that, here's a robust and production-ready version of the SQL script. It includes:

  • Accurate joins with DBA_SEGMENTS, DBA_LOBS, and DBA_TAB_MODIFICATIONS

  • NULL-safe logic

  • Inline formatting for CSV

  • Adaptive WHERE clauses to prevent over-reporting


✅ Finalized Script: db_data_profile_report.sql

SET PAGESIZE 50000
SET LINESIZE 400
SET FEEDBACK OFF
SET VERIFY OFF
SET ECHO OFF
SET TERMOUT OFF
SET TRIMSPOOL ON
SET COLSEP ','

-- Output File
SPOOL db_data_profile_report.csv

PROMPT CATEGORY,OWNER,TABLE_NAME,COLUMN_NAME,LOB_TYPE,SIZE_MB,SECUREFILE

-- A. LOB-heavy tables (DBA_LOBS lists only LOB columns; it has no DATA_TYPE
--    column, so the LOB type is derived from the SECUREFILE flag instead)
SELECT 'LOB_USAGE',
       l.owner,
       l.table_name,
       l.column_name,
       DECODE(l.securefile, 'YES', 'SECUREFILE', 'BASICFILE') AS lob_type,
       NVL(ROUND(s.bytes/1024/1024, 2), 0) AS size_mb,
       l.securefile
FROM   dba_lobs l
LEFT JOIN dba_segments s
       ON l.owner = s.owner
      AND l.segment_name = s.segment_name
      AND s.segment_type = 'LOBSEGMENT';

PROMPT CATEGORY,OWNER,TABLE_NAME,ROW_COUNT,INSERTS,UPDATES,DELETES

-- B. Low-access tables
SELECT 'LOW_ACCESS',
       t.table_owner,
       t.table_name,
       NVL(tb.num_rows, 0),
       NVL(t.inserts, 0),
       NVL(t.updates, 0),
       NVL(t.deletes, 0)
FROM   dba_tab_modifications t
LEFT JOIN dba_tables tb
       ON tb.owner = t.table_owner
      AND tb.table_name = t.table_name
WHERE  (NVL(t.inserts, 0) + NVL(t.updates, 0) + NVL(t.deletes, 0)) < 100;

PROMPT CATEGORY,OWNER,TABLE_NAME,ROW_COUNT,SIZE_MB

-- C. Status / Lookup / Code tables (typically static)
SELECT 'STATIC_LOOKUP',
       t.owner,
       t.table_name,
       t.num_rows,
       ROUND(s.bytes/1024/1024, 2)
FROM   dba_tables t
JOIN   dba_segments s
       ON t.owner = s.owner
      AND t.table_name = s.segment_name
      AND s.segment_type = 'TABLE'
WHERE  t.num_rows < 1000
  AND  s.bytes < 10*1024*1024
  AND (t.table_name LIKE '%STATUS%'
       OR t.table_name LIKE '%TYPE%'
       OR t.table_name LIKE '%LOOKUP%'
       OR t.table_name LIKE '%CODE%');

SPOOL OFF
PROMPT >> CSV Summary Saved: db_data_profile_report.csv

✅ Validation Features:

  • Uses NVL() for safer arithmetic and joins

  • Assumes default segment types (LOBSEGMENT, TABLE)

  • Avoids NULL data skewing logic

  • Handles absence of stats (some DBs may not have stats gathered)


📎 Post-Script Recommendation

After running this:

  1. Use UTL_FILE or SCP to ship CSV to a reporting location if needed

  2. Load CSV into Excel or Power BI for visual profiling

  3. Use this report in tuning discussions with application and infra teams


Posted By Nikhil07:05