Friday, 30 May 2025

Filed under:

 

Setting up a new physical standby database for an existing Oracle primary database (a Data Guard configuration) requires careful planning and a series of precise steps. The process below assumes you are creating a physical standby (not logical) database and that both servers (primary and standby) run the same OS and Oracle version.


✅ Assumptions

  • Primary DB Name: PROD
  • Standby DB Name: PROD (DB_NAME must match the primary for a physical standby; its DB_UNIQUE_NAME will be PROD_STBY)
  • Oracle version: 19c (similar steps apply to 12c/21c with minor changes)
  • Oracle is running in ARCHIVELOG mode
  • Primary DB is using Force Logging
  • Oracle Net (TNS) is properly configured between the two servers
  • You have password file and listener setup
  • You have sufficient disk space and network access between primary and standby

🛠️ High-Level Steps

  1. [ ] Enable Force Logging & ArchiveLog on Primary
  2. [ ] Configure Initialization Parameters (Primary & Standby)
  3. [ ] Create Password File on Standby
  4. [ ] Configure Network (tnsnames.ora, listener.ora)
  5. [ ] Create Standby Control File
  6. [ ] Copy Files to Standby Server
  7. [ ] Prepare the Standby Parameter File (pfile/spfile)
  8. [ ] Start Standby Instance in NOMOUNT
  9. [ ] Restore or Duplicate the Database
  10. [ ] Start Redo Apply (MRP)
  11. [ ] Configure Data Guard Broker (Optional but Recommended)
  12. [ ] Test the Setup

🔍 Step-by-Step Details


🔹 1. Enable ARCHIVELOG and FORCE LOGGING on Primary

-- Log in to SQL*Plus on primary as SYSDBA
SQL> ARCHIVE LOG LIST;
-- If not enabled:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

-- Enable Force Logging
SQL> ALTER DATABASE FORCE LOGGING;
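To confirm both settings took effect, query V$DATABASE:

```sql
-- Verify ARCHIVELOG mode and force logging on the primary
-- (expect LOG_MODE = 'ARCHIVELOG' and FORCE_LOGGING = 'YES')
SELECT log_mode, force_logging FROM v$database;
```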

🔹 2. Configure Primary Database Initialization Parameters

Modify or add these to PROD primary (typically in spfile):

ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(PROD,PROD_STBY)' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/archivelogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PROD' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=PROD_STBY ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PROD_STBY' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;

ALTER SYSTEM SET FAL_SERVER=PROD_STBY SCOPE=BOTH;
ALTER SYSTEM SET FAL_CLIENT=PROD SCOPE=BOTH;
ALTER SYSTEM SET DB_FILE_NAME_CONVERT='/u02/oradata/PROD_STBY','/u02/oradata/PROD' SCOPE=SPFILE;
ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='/u02/oradata/PROD_STBY','/u02/oradata/PROD' SCOPE=SPFILE;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO SCOPE=BOTH;

Replace /u02/oradata/... paths with your actual datafile/log locations.
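Once set, the values can be sanity-checked from SQL*Plus before moving on:

```sql
-- Confirm the Data Guard related parameters on the primary
SHOW PARAMETER log_archive_config
SHOW PARAMETER log_archive_dest_2
SHOW PARAMETER fal_server
SHOW PARAMETER standby_file_management
```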


🔹 3. Create Password File on Primary and Copy to Standby

On primary:

cd $ORACLE_HOME/dbs
orapwd file=orapwPROD password=oracle entries=10 format=12

Copy it to the standby server:

scp $ORACLE_HOME/dbs/orapwPROD oracle@standby_host:$ORACLE_HOME/dbs/

🔹 4. Configure Network Files (tnsnames.ora, listener.ora)

On both servers, add TNS entries for both PROD and PROD_STBY:

Example tnsnames.ora:

PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = PROD)
    )
  )

PROD_STBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = PROD)
    )
  )

Start listener on both:

lsnrctl start

Test connectivity using:

tnsping PROD
tnsping PROD_STBY

🔹 5. Create Standby Control File from Primary

SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';

🔹 6. Copy Files to Standby Server

Copy from primary to standby (the datafiles can be skipped if you will use RMAN active duplication in step 9):

  • Datafiles
  • Standby control file
  • Online Redo Logs (optional if using Data Guard to create them)
  • Password file
  • Init file if using pfile
  • Archivelogs (at least a few for recovery)
  • tnsnames.ora and listener.ora

Example commands:

scp /u02/oradata/PROD/* oracle@standby_host:/u02/oradata/PROD_STBY/
scp /tmp/standby.ctl oracle@standby_host:/u02/oradata/PROD_STBY/control01.ctl

🔹 7. Prepare Init or Spfile on Standby

You can either:

  • Create initPROD.ora manually
  • Or copy and modify a pfile from primary and create a spfile

Sample initPROD.ora on standby:

DB_NAME=PROD
DB_UNIQUE_NAME=PROD_STBY
CONTROL_FILES='/u02/oradata/PROD_STBY/control01.ctl'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PROD,PROD_STBY)'
LOG_ARCHIVE_DEST_1='LOCATION=/archivelogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PROD_STBY'
LOG_ARCHIVE_DEST_2='SERVICE=PROD ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PROD'
FAL_SERVER=PROD
FAL_CLIENT=PROD_STBY
DB_FILE_NAME_CONVERT='/u02/oradata/PROD','/u02/oradata/PROD_STBY'
LOG_FILE_NAME_CONVERT='/u02/oradata/PROD','/u02/oradata/PROD_STBY'
STANDBY_FILE_MANAGEMENT=AUTO

🔹 8. Start Standby in NOMOUNT

export ORACLE_SID=PROD
sqlplus / as sysdba
SQL> STARTUP NOMOUNT PFILE='/full/path/to/dbs/initPROD.ora';
-- Use the absolute path: SQL*Plus does not expand $ORACLE_HOME inside quotes

(Use spfile if already created.)


🔹 9. Restore or Duplicate the Database

From the primary host, connect RMAN to both the target (primary) and auxiliary (standby) instances, then duplicate the database:

rman TARGET sys/oracle@PROD AUXILIARY sys/oracle@PROD_STBY

RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE
      DORECOVER
      NOFILENAMECHECK;

This will copy the datafiles and perform media recovery automatically.
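If active duplication over the network is not feasible (e.g. bandwidth constraints), a backup-based duplicate is the usual alternative. A sketch, assuming the primary's backups and archivelogs are visible to the standby host under the same paths:

```sql
-- Backup-based duplicate: omit FROM ACTIVE DATABASE so RMAN
-- restores from existing backup pieces instead of the live files
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY
      DORECOVER
      NOFILENAMECHECK;
```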


🔹 10. Start Redo Apply on Standby

-- On standby
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

For real-time apply:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
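Real-time apply reads redo from standby redo logs, so create them on the standby first (and ideally on the primary too, for future role switches). A sketch, assuming 200 MB online redo logs and the standby paths used earlier; stop redo apply before adding them:

```sql
-- Rule of thumb: one more standby redo log group than online groups,
-- each the same size as the online redo logs (200M assumed here)
ALTER DATABASE ADD STANDBY LOGFILE ('/u02/oradata/PROD_STBY/srl01.log') SIZE 200M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u02/oradata/PROD_STBY/srl02.log') SIZE 200M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u02/oradata/PROD_STBY/srl03.log') SIZE 200M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u02/oradata/PROD_STBY/srl04.log') SIZE 200M;
```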

🔹 11. (Optional) Use Data Guard Broker

Create a broker configuration:

dgmgrl sys/oracle@PROD

DGMGRL> CREATE CONFIGURATION 'DGConfig' AS PRIMARY DATABASE IS 'PROD' CONNECT IDENTIFIER IS 'PROD';
DGMGRL> ADD DATABASE 'PROD_STBY' AS CONNECT IDENTIFIER IS 'PROD_STBY' MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;

🔹 12. Verify & Monitor

On standby:

SQL> SELECT DATABASE_ROLE, OPEN_MODE FROM V$DATABASE;
-- Should show: PHYSICAL STANDBY and MOUNTED

SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY;
-- (V$MANAGED_STANDBY is deprecated in 19c; V$DATAGUARD_PROCESS is its replacement)

On primary, force log switch:

SQL> ALTER SYSTEM SWITCH LOGFILE;

Check that logs are shipped and applied.
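Two quick checks for this (a sketch; DEST_ID 2 matches the LOG_ARCHIVE_DEST_2 configured above):

```sql
-- On the primary: any transport errors on the standby destination?
SELECT dest_id, status, error FROM v$archive_dest WHERE dest_id = 2;

-- On the standby: did recent sequences arrive and get applied?
SELECT sequence#, applied FROM v$archived_log ORDER BY sequence#;
```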


✅ Final Notes

  • Ensure time is in sync between primary and standby
  • If using DNFS, configure accordingly
  • Set up monitoring for lag and MRP process
  • Consider using Fast-Start Failover (FSFO) for high availability
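For the lag monitoring mentioned above, V$DATAGUARD_STATS on the standby is a convenient starting point:

```sql
-- Transport lag = redo not yet received; apply lag = received but not applied
SELECT name, value, time_computed
FROM v$dataguard_stats
WHERE name IN ('transport lag', 'apply lag');
```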


Posted By Nikhil, 20:06

 

Here’s a concise and professional Teams message you can send:


Hey [Name], just a heads-up — I’ve sent an email to the backup team asking if there’s any alternative to the Power BI report for checking backup status, since the current setup has some UI and data visibility issues. Thought you should be in the loop.



Posted By Nikhil, 00:58

 

Here's a professionally worded email you can send to the backup team:


Subject: Alternative to Power BI Report for Backup Status Monitoring

Hi [Backup Team / Team Name],

I hope you're doing well.

We’ve noticed that ever since the backup reports were migrated from Tableau to Power BI, there have been some inconsistencies — at times, the data isn't visible, and the UI feels less intuitive and somewhat unresponsive.

Given these challenges, I wanted to check if you're aware of any alternative method or report (apart from launching the CommVault console) that we can use to monitor backup status more effectively.

Appreciate your input on this.

Best regards,
[Your Name]



Posted By Nikhil, 00:49

Thursday, 29 May 2025


 

Here's a professional and concise email you can send to your team member:


Subject: Database Rebuild Completed – Request to Take Over and Assist with Onboarding Issue

Hi [Team Member's Name],

I wanted to update you on the progress regarding the database issue. I managed to bring up the database by building it manually from scratch.

After the initial creation, I exported the schema from another database on the same host and performed the import into the newly created instance. However, I’ve noticed a difference in the database size after the import. If the application team raises any concerns regarding this, please assist in reimporting the data.

Additionally, the database onboarding process has been failing post-creation. I’d appreciate your help in looking into and remediating the onboarding issue.

Kindly take over this from here and coordinate further with the user as needed.

Let me know if you need any additional details from my end.

Thanks,
[Your Name]



Posted By Nikhil, 00:15

Wednesday, 28 May 2025


 

Here’s a professional email draft you can send to your application users regarding the large database size and long-running backups:


Subject: Action Required: Database Size Growth and Long-Running Backups

Dear Application Team,

We would like to bring to your attention that the size of your application database(s) has grown significantly over time. As a result, both incremental and full backup operations are taking considerably longer to complete. This impacts not only the backup window but also the overall performance and maintenance activities.

While the DBA team is actively reviewing and exploring possible optimizations to improve backup throughput, we strongly recommend that you assess the data growth in your application and consider the following:

  • Periodic purging of obsolete or unnecessary data
  • Archiving of historical data to alternate storage solutions
  • Review of data retention policies to ensure alignment with operational and compliance needs

Reducing the database size will directly contribute to faster and more efficient backups, better resource utilization, and improved performance.

Please reach out if you need assistance in identifying large tables, inactive partitions, or historical data candidates for purging or archiving. We’re happy to support any analysis or cleanup efforts.

Your cooperation is greatly appreciated.

Best regards,
[Your Name]
Database Administrator
[Your Team/Department Name]
[Your Contact Info]



Posted By Nikhil, 00:48

Sunday, 25 May 2025


SET LINES 300
COLUMN owner FORMAT A20
COLUMN table_name FORMAT A30
COLUMN tablespace_name FORMAT A20
-- Wider numeric format to fit big totals
COLUMN total_size_gb FORMAT 999,999,999.99

-- Total segment size per segment in TABLESPACE_A (in GB).
-- Note: LOB segments appear under their SYS_LOB% segment names;
-- join DBA_LOBS if you need them attributed to their owning tables.
SELECT s.owner,
       s.segment_name AS table_name,
       s.tablespace_name,
       ROUND(SUM(s.bytes) / (1024 * 1024 * 1024), 2) AS total_size_gb
FROM dba_segments s
WHERE s.segment_type IN ('TABLE', 'TABLE PARTITION', 'LOBSEGMENT', 'LOB PARTITION')
  AND s.tablespace_name = 'TABLESPACE_A'
GROUP BY s.owner, s.segment_name, s.tablespace_name
HAVING SUM(s.bytes) > 0
ORDER BY total_size_gb DESC;


Posted By Nikhil, 05:13

SET LINES 200
COLUMN owner FORMAT A20
COLUMN table_name FORMAT A30
COLUMN tablespace_name FORMAT A20
COLUMN total_size_gb FORMAT 999,999.99

-- Get total table size (including base + LOB + overflow segments)
SELECT s.owner,
       s.segment_name AS table_name,
       s.tablespace_name,
       ROUND(SUM(s.bytes) / (1024 * 1024 * 1024), 2) AS total_size_gb
FROM dba_segments s
WHERE s.segment_type IN ('TABLE', 'TABLE PARTITION', 'LOBSEGMENT', 'LOB PARTITION')
  AND s.tablespace_name = 'TABLESPACE_A'
GROUP BY s.owner, s.segment_name, s.tablespace_name
HAVING SUM(s.bytes) > 0
ORDER BY total_size_gb DESC;


Posted By Nikhil, 05:02

SET LINES 200
COLUMN owner FORMAT A20
COLUMN table_name FORMAT A30
COLUMN tablespace_name FORMAT A20
COLUMN size_gb FORMAT 999,999.99

-- Report largest tables in TABLESPACE_A (size in GB).
-- DBA_SEGMENTS has SEGMENT_NAME, not TABLE_NAME, so alias it here.
SELECT owner,
       segment_name AS table_name,
       tablespace_name,
       ROUND(SUM(bytes) / (1024 * 1024 * 1024), 2) AS size_gb
FROM dba_segments
WHERE segment_type = 'TABLE'
  AND tablespace_name = 'TABLESPACE_A'
GROUP BY owner, segment_name, tablespace_name
ORDER BY size_gb DESC;




-------------------------------------------


SET PAGES 0
SET HEADING OFF
SET FEEDBACK OFF
SET LINES 200

-- Generate ALTER TABLE MOVE commands for large tables in TABLESPACE_A.
-- DBA_SEGMENTS exposes SEGMENT_NAME (there is no TABLE_NAME column).
-- Note: moving a table marks its indexes UNUSABLE; rebuild them afterwards.
SELECT 'ALTER TABLE ' || owner || '.' || segment_name ||
       ' MOVE TABLESPACE TABLESPACE_B;' AS move_cmd
FROM dba_segments
WHERE segment_type = 'TABLE'
  AND tablespace_name = 'TABLESPACE_A'
ORDER BY bytes DESC;
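Since a MOVE invalidates the table's indexes, a follow-up generator for the rebuilds is handy (a sketch; review the output before running it):

```sql
-- Generate rebuild commands for indexes left UNUSABLE by the moves
SELECT 'ALTER INDEX ' || owner || '.' || index_name ||
       ' REBUILD TABLESPACE TABLESPACE_B;' AS rebuild_cmd
FROM dba_indexes
WHERE status = 'UNUSABLE';
```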

Posted By Nikhil, 04:57

Analyze tables and indexes to reduce the HWM and dynamically generate ALTER DATABASE DATAFILE RESIZE commands



Here’s a robust Oracle script that:
✅ Analyzes tables and indexes to reduce HWM
✅ Shrinks them if eligible
✅ Calculates the new possible minimum datafile size
✅ Dynamically generates the ALTER DATABASE DATAFILE ... RESIZE commands




-- Enable serveroutput
SET SERVEROUTPUT ON SIZE 1000000;

-- Cursor over all user tables
BEGIN
  FOR t IN (
    SELECT table_name
    FROM user_tables
    WHERE temporary = 'N'
  ) LOOP
    BEGIN
      -- Enable row movement (required before SHRINK SPACE)
      EXECUTE IMMEDIATE 'ALTER TABLE ' || t.table_name || ' ENABLE ROW MOVEMENT';
      -- Shrink table space and reduce HWM
      EXECUTE IMMEDIATE 'ALTER TABLE ' || t.table_name || ' SHRINK SPACE';
      DBMS_OUTPUT.PUT_LINE('Shrunk table: ' || t.table_name);
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Skipped table: ' || t.table_name || ' - ' || SQLERRM);
    END;
  END LOOP;
END;
/

-- Cursor over all user indexes
BEGIN
  FOR i IN (
    SELECT index_name
    FROM user_indexes
    WHERE temporary = 'N'
  ) LOOP
    BEGIN
      -- Shrink index space
      EXECUTE IMMEDIATE 'ALTER INDEX ' || i.index_name || ' SHRINK SPACE';
      DBMS_OUTPUT.PUT_LINE('Shrunk index: ' || i.index_name);
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Skipped index: ' || i.index_name || ' - ' || SQLERRM);
    END;
  END LOOP;
END;
/

-- Analyze datafiles for potential resize
SET LINES 200
COL datafile_name FOR A60

DECLARE
  v_sql VARCHAR2(1000);
BEGIN
  FOR r IN (
    -- A file can only be shrunk down to its high water mark: the end of the
    -- highest allocated extent (from DBA_EXTENTS), not the free-space map.
    SELECT d.file_id,
           d.file_name AS datafile_name,
           d.bytes / 1024 / 1024 AS current_size_mb,
           e.hwm_bytes / 1024 / 1024 AS hwm_mb
    FROM dba_data_files d
    JOIN (
      SELECT file_id,
             MAX(block_id + blocks) *
               (SELECT value FROM v$parameter WHERE name = 'db_block_size') AS hwm_bytes
      FROM dba_extents
      GROUP BY file_id
    ) e ON d.file_id = e.file_id
    WHERE d.autoextensible = 'NO' -- exclude autoextensible files for now
  ) LOOP
    v_sql := 'ALTER DATABASE DATAFILE ''' || r.datafile_name || ''' RESIZE ' ||
             CEIL(r.hwm_mb) || 'M;';
    DBMS_OUTPUT.PUT_LINE('-- Current: ' || ROUND(r.current_size_mb) ||
                         ' MB, HWM: ' || ROUND(r.hwm_mb) || ' MB');
    DBMS_OUTPUT.PUT_LINE(v_sql);
  END LOOP;
END;
/


Posted By Nikhil, 04:49

 Could you please let me know the purpose or reason you initially assigned it to me? If you were expecting any specific help or input from my side to check or resolve this incident, please let me know. I’ll be happy to assist as needed.

Posted By Nikhil, 03:42

 Following up on the recent task you assigned — I’ve completed a thorough analysis to check the possibility of resizing the tablespace.

After deep-diving into the details, I can confirm that resizing is not currently feasible, as the High Water Mark (HWM) is already at its peak across all associated datafiles. I’ve attached the detailed breakdown of current file sizes and HWM positions for your reference.

I also synced up with Vikram and Vikas during the review. Vikas was already analyzing this, and we both reached the same conclusion. At this point, the only viable options are:

  • Extend the underlying disks/ACFS filesystem, or

  • Migrate some large tables to a new tablespace and then attempt a resize on the current one.

@Vikas – Please let me know your preference on how you'd like to proceed. I can coordinate with the UNIX/DI team to arrange for additional disk space, or if you’d prefer, I can start planning the migration of large tables to a new tablespace and then perform the resize.

Happy to support either approach — just let me know your call.

Posted By Nikhil, 03:01

Saturday, 24 May 2025


 As per the plan, I logged into both the source and target hosts today to initiate the requested database restore. I performed all the preliminary checks, and everything was in order.

However, just before starting the restore, I observed that the target filesystem size is exactly 2.6 TB, which is the same as the current size of the source database. This presents a risk of failure, as there is a high possibility that additional space will be required during the restore process. Proceeding with the restore under these conditions could result in an incomplete or abandoned database on the target side.

I also explored whether any tablespaces on the source database could be resized to reduce its footprint, but unfortunately, no space could be reclaimed at this time.

At this point, the only viable option is to request the DI team to allocate additional space on the ACFS filesystem at the target side to safely accommodate the full restore and ensure operational integrity.

Please let me know how you'd like to proceed, and I can coordinate with the DI team accordingly.

Best regards,
Daidipya Upalekar

Posted By Nikhil, 01:24

Optimized


SET SERVEROUTPUT ON
SET LINESIZE 300
SET PAGESIZE 1000

DECLARE
    v_ts_name            VARCHAR2(30) := 'YOUR_TABLESPACE_NAME'; -- << CHANGE
    v_block_size         NUMBER;
    v_buffer_mb          CONSTANT NUMBER := 10;
    v_show_shrink_cmds   BOOLEAN := FALSE; -- Toggle to TRUE if needed
BEGIN
    -- Get block size once
    SELECT block_size INTO v_block_size
    FROM dba_tablespaces
    WHERE tablespace_name = UPPER(v_ts_name);

    DBMS_OUTPUT.PUT_LINE('===== ANALYSIS FOR TABLESPACE: ' || v_ts_name || ' =====');

    -- Combined HWM + datafile info
    FOR df IN (
        SELECT df.file_id,
               df.file_name,
               df.bytes AS file_bytes,
               NVL(MAX(e.block_id + e.blocks), 0) AS hwm_block
        FROM dba_data_files df
             LEFT JOIN dba_extents e ON df.file_id = e.file_id
        WHERE df.tablespace_name = UPPER(v_ts_name)
        GROUP BY df.file_id, df.file_name, df.bytes
    ) LOOP
        DECLARE
            v_hwm_bytes        NUMBER := df.hwm_block * v_block_size;
            v_resize_target_mb NUMBER := CEIL((df.hwm_block * v_block_size + v_buffer_mb * 1024 * 1024) / 1024 / 1024);
        BEGIN
            DBMS_OUTPUT.PUT_LINE(CHR(10) || '>> File: ' || df.file_name);
            DBMS_OUTPUT.PUT_LINE('    Current Size: ' || ROUND(df.file_bytes / 1024 / 1024) || ' MB');
            DBMS_OUTPUT.PUT_LINE('    HWM Estimate: ' || ROUND(v_hwm_bytes / 1024 / 1024) || ' MB');
            DBMS_OUTPUT.PUT_LINE('    Safe Resize To: ' || v_resize_target_mb || ' MB');

            IF df.file_bytes > (v_hwm_bytes + v_buffer_mb * 1024 * 1024) THEN
                DBMS_OUTPUT.PUT_LINE('    --> SUGGEST: ALTER DATABASE DATAFILE ''' || df.file_name || ''' RESIZE ' || v_resize_target_mb || 'M;');
            ELSE
                DBMS_OUTPUT.PUT_LINE('    --> No reclaimable space found.');
            END IF;

            -- Show top 3 segments near the end of file
            DBMS_OUTPUT.PUT_LINE('    Segments near file end (may block resize):');
            FOR seg IN (
                SELECT *
                FROM (
                    SELECT owner, segment_name, segment_type,
                           (block_id + blocks) * v_block_size / 1024 / 1024 AS segment_end_mb
                    FROM dba_extents
                    WHERE file_id = df.file_id
                    ORDER BY segment_end_mb DESC
                )
                WHERE ROWNUM <= 3
            ) LOOP
                DBMS_OUTPUT.PUT_LINE('       - ' || seg.segment_type || ' ' || seg.owner || '.' || seg.segment_name || ' ends at ~' || ROUND(seg.segment_end_mb) || ' MB');
            END LOOP;
        END;
    END LOOP;

    -- Shrink recommendations if toggled
    IF v_show_shrink_cmds THEN
        DBMS_OUTPUT.PUT_LINE(CHR(10) || '===== SHRINK CANDIDATES =====');
        FOR tbl IN (
            SELECT owner, segment_name
            FROM dba_segments
            WHERE segment_type = 'TABLE'
              AND tablespace_name = UPPER(v_ts_name)
        ) LOOP
            DBMS_OUTPUT.PUT_LINE('ALTER TABLE ' || tbl.owner || '.' || tbl.segment_name || ' ENABLE ROW MOVEMENT;');
            DBMS_OUTPUT.PUT_LINE('ALTER TABLE ' || tbl.owner || '.' || tbl.segment_name || ' SHRINK SPACE;');
        END LOOP;

        FOR idx IN (
            SELECT owner, segment_name
            FROM dba_segments
            WHERE segment_type = 'INDEX'
              AND tablespace_name = UPPER(v_ts_name)
        ) LOOP
            DBMS_OUTPUT.PUT_LINE('ALTER INDEX ' || idx.owner || '.' || idx.segment_name || ' SHRINK SPACE;');
        END LOOP;
    END IF;
END;
/


Posted By Nikhil, 00:03

Friday, 23 May 2025


-- Recycle bin size, assuming an 8 KB block size (8192);
-- substitute your db_block_size if it differs
SELECT SUM(space) * 8192 / 1024 / 1024 AS recyclebin_size_mb
FROM dba_recyclebin;

SELECT object_name,
       original_name,
       type,
       ts_name AS tablespace,
       droptime,
       space * 8192 / 1024 / 1024 AS size_mb
FROM dba_recyclebin
ORDER BY droptime DESC;
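If the recycle bin is holding significant space, it can be purged; note that PURGE is irreversible:

```sql
-- Purge one dropped object by its original name, or everything at once
PURGE TABLE my_dropped_table;   -- hypothetical table name
PURGE DBA_RECYCLEBIN;           -- requires SYSDBA privilege
```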


Posted By Nikhil, 23:50

Full PL/SQL Script to Reclaim Space and Suggest Safe Resize


✅ What This Script Does

  • Analyzes each datafile in a given tablespace.

  • Calculates a safe resize size based on HWM + buffer.

  • Lists any segments blocking resize (i.e., those near the file end).

  • Generates shrink commands for tables and indexes.

  • Ensures you're prepared to safely resize without hitting ORA-03297.




SET SERVEROUTPUT ON
SET LINESIZE 300
SET PAGESIZE 1000

DECLARE
    v_ts_name            VARCHAR2(30) := 'YOUR_TABLESPACE_NAME'; -- << Change here
    v_block_size         NUMBER;
    v_hwm_blocks         NUMBER;
    v_hwm_bytes          NUMBER;
    v_current_bytes      NUMBER;
    v_file_id            NUMBER;
    v_file_name          VARCHAR2(512);
    v_buffer_mb          CONSTANT NUMBER := 10; -- Safety buffer
    v_resize_target_mb   NUMBER;
BEGIN
    -- Get block size
    SELECT block_size INTO v_block_size
    FROM dba_tablespaces
    WHERE tablespace_name = UPPER(v_ts_name);

    DBMS_OUTPUT.PUT_LINE('===== ANALYSIS FOR TABLESPACE: ' || v_ts_name || ' =====');

    -- Loop through each datafile
    FOR df IN (
        SELECT file_id, file_name, bytes
        FROM dba_data_files
        WHERE tablespace_name = UPPER(v_ts_name)
    ) LOOP
        v_file_id := df.file_id;
        v_file_name := df.file_name;
        v_current_bytes := df.bytes;

        -- Get High Water Mark
        SELECT NVL(MAX(block_id + blocks), 0)
        INTO v_hwm_blocks
        FROM dba_extents
        WHERE file_id = v_file_id;

        v_hwm_bytes := v_hwm_blocks * v_block_size;
        v_resize_target_mb := CEIL((v_hwm_bytes + v_buffer_mb * 1024 * 1024) / 1024 / 1024);

        DBMS_OUTPUT.PUT_LINE(CHR(10) || '>> File: ' || v_file_name);
        DBMS_OUTPUT.PUT_LINE('    Current Size: ' || ROUND(v_current_bytes / 1024 / 1024) || ' MB');
        DBMS_OUTPUT.PUT_LINE('    HWM Estimate: ' || ROUND(v_hwm_bytes / 1024 / 1024) || ' MB');
        DBMS_OUTPUT.PUT_LINE('    Safe Resize To: ' || v_resize_target_mb || ' MB');

        IF v_current_bytes > (v_hwm_bytes + v_buffer_mb * 1024 * 1024) THEN
            DBMS_OUTPUT.PUT_LINE('    --> SUGGEST: ALTER DATABASE DATAFILE ''' || v_file_name || ''' RESIZE ' || v_resize_target_mb || 'M;');
        ELSE
            DBMS_OUTPUT.PUT_LINE('    --> No reclaimable space found.');
        END IF;

        -- Show segments at tail end of file
        DBMS_OUTPUT.PUT_LINE('    Segments near end of file (may block resize):');
        FOR seg IN (
            SELECT owner, segment_name, segment_type,
                   (block_id + blocks) * v_block_size / 1024 / 1024 AS segment_end_mb
            FROM dba_extents
            WHERE file_id = v_file_id
              -- compare in bytes on both sides: segments ending within the last 20 MB
              AND (block_id + blocks) * v_block_size > (v_current_bytes - 20 * 1024 * 1024)
            ORDER BY segment_end_mb DESC
        ) LOOP
            DBMS_OUTPUT.PUT_LINE('       - ' || seg.segment_type || ' ' || seg.owner || '.' || seg.segment_name ||
                                 ' ends at ~' || ROUND(seg.segment_end_mb) || ' MB');
        END LOOP;
    END LOOP;

    DBMS_OUTPUT.PUT_LINE(CHR(10) || '===== SHRINK CANDIDATES =====');

    -- Generate shrink space statements
    FOR tbl IN (
        SELECT owner, segment_name
        FROM dba_segments
        WHERE segment_type = 'TABLE'
          AND tablespace_name = UPPER(v_ts_name)
    ) LOOP
        DBMS_OUTPUT.PUT_LINE('ALTER TABLE ' || tbl.owner || '.' || tbl.segment_name || ' ENABLE ROW MOVEMENT;');
        DBMS_OUTPUT.PUT_LINE('ALTER TABLE ' || tbl.owner || '.' || tbl.segment_name || ' SHRINK SPACE;');
    END LOOP;

    FOR idx IN (
        SELECT owner, segment_name
        FROM dba_segments
        WHERE segment_type = 'INDEX'
          AND tablespace_name = UPPER(v_ts_name)
    ) LOOP
        DBMS_OUTPUT.PUT_LINE('ALTER INDEX ' || idx.owner || '.' || idx.segment_name || ' SHRINK SPACE;');
    END LOOP;
END;
/


Posted By Nikhil, 23:45

HWM


SELECT df.file_name,
       df.file_id,
       df.bytes / 1024 / 1024 AS current_size_mb,
       CEIL((NVL(MAX(e.block_id + e.blocks), 1) * ts.block_size) / 1024 / 1024) AS hwm_mb
FROM dba_data_files df
     JOIN dba_tablespaces ts ON df.tablespace_name = ts.tablespace_name
     LEFT JOIN dba_extents e ON df.file_id = e.file_id
WHERE df.tablespace_name = UPPER('YOUR_TS_NAME')
GROUP BY df.file_name, df.file_id, df.bytes, ts.block_size;


Posted By Nikhil, 23:38

SET SERVEROUTPUT ON
SET LINESIZE 200
SET PAGESIZE 1000

DECLARE
    v_tablespace       VARCHAR2(30) := 'USERS'; -- << CHANGE YOUR TABLESPACE NAME HERE
    v_block_size       NUMBER;
    v_file_id          NUMBER;
    v_file_name        VARCHAR2(512);
    v_current_bytes    NUMBER;
    v_hwm_blocks       NUMBER;
    v_hwm_bytes        NUMBER;
    v_new_size_bytes   NUMBER;
    v_autoextensible   VARCHAR2(3);
    v_min_resize_bytes NUMBER;
    v_buffer_mb        NUMBER := 10; -- Safety margin in MB
BEGIN
    -- Get block size of the tablespace
    SELECT block_size INTO v_block_size
    FROM dba_tablespaces
    WHERE tablespace_name = UPPER(v_tablespace);

    -- Loop through datafiles of the tablespace
    FOR r IN (
        SELECT file_id, file_name, bytes, autoextensible
        FROM dba_data_files
        WHERE tablespace_name = UPPER(v_tablespace)
    ) LOOP
        v_file_id := r.file_id;
        v_file_name := r.file_name;
        v_current_bytes := r.bytes;
        v_autoextensible := r.autoextensible;

        -- Get highest block used in this file
        SELECT NVL(MAX(block_id + blocks), 0)
        INTO v_hwm_blocks
        FROM dba_extents
        WHERE file_id = v_file_id;

        v_hwm_bytes := v_hwm_blocks * v_block_size;
        v_min_resize_bytes := v_hwm_bytes + (v_buffer_mb * 1024 * 1024);

        -- Only suggest resize if current size is more than the HWM + buffer
        IF v_current_bytes > v_min_resize_bytes THEN
            v_new_size_bytes := FLOOR(v_min_resize_bytes / (1024*1024)) * 1024*1024; -- aligned to MB
            DBMS_OUTPUT.PUT_LINE(
                'ALTER DATABASE DATAFILE ''' || v_file_name || ''' RESIZE ' || ROUND(v_new_size_bytes / (1024*1024)) || 'M;'
            );
        ELSE
            DBMS_OUTPUT.PUT_LINE('-- No reclaimable space in file: ' || v_file_name);
        END IF;
    END LOOP;
END;
/


Posted By Nikhil, 22:53

SET SERVEROUTPUT ON

DECLARE
    v_tablespace   VARCHAR2(50) := 'USERS'; -- Change this to your tablespace
    v_cmd          VARCHAR2(1000);
BEGIN
    FOR rec IN (
        SELECT df.file_name,
               df.file_id,
               df.tablespace_name,
               df.bytes / 1024 / 1024 AS current_size_mb,
               (hwm.highest_block_id * ts.block_size) / 1024 / 1024 AS hwm_mb
        FROM dba_data_files df
             JOIN dba_tablespaces ts ON df.tablespace_name = ts.tablespace_name
             LEFT JOIN (
                 SELECT file_id,
                        MAX(block_id + blocks) AS highest_block_id
                 FROM dba_extents
                 GROUP BY file_id
             ) hwm ON df.file_id = hwm.file_id
        WHERE df.tablespace_name = UPPER(v_tablespace)
    ) LOOP
        IF rec.hwm_mb IS NOT NULL AND rec.current_size_mb - rec.hwm_mb > 10 THEN
            -- Allow 10MB safety margin
            v_cmd := 'ALTER DATABASE DATAFILE ''' || rec.file_name || ''' RESIZE ' || FLOOR(rec.hwm_mb + 10) || 'M;';
            DBMS_OUTPUT.PUT_LINE(v_cmd);
        ELSE
            DBMS_OUTPUT.PUT_LINE('-- No reclaimable space in file: ' || rec.file_name);
        END IF;
    END LOOP;
END;
/

Posted By Nikhil, 22:49

Monday, 19 May 2025


 

Here's a professional draft of your email:


Subject: Passport Renewal Attempt and Rescheduled Appointment

Dear [Recipient's Name],

As advised, I visited the BSL Office to initiate the passport renewal process immediately after returning from my leave. However, the application could not be processed as my current passport still has one year and one day remaining before expiry. Due to this, the office did not accept my renewal request.

I have now rebooked the appointment for 23rd May.

Will keep you updated on further progress.

Best regards,
[Your Full Name]



Posted By Nikhil, 17:40

Thursday, 15 May 2025


 There’s a peer review cycle going on that’s tied to some "upward movement opportunities" 😉.

If my name pops up, a little positive push would be truly appreciated.

Appreciate your support – and of course, I’m happy to return the favor anytime!

Posted By Nikhil, 02:14

Conducting a real-time test will help us evaluate the automation flow and fine-tune it as needed. Once streamlined, this will significantly reduce manual intervention in future BCM events and improve overall efficiency.

Rest assured, our team will remain available during the activity window to proceed with manual execution if necessary, and the BCM will go ahead as per the planned timeline.

Looking forward to your response.

Posted By Nikhil, 01:23

Wednesday, 14 May 2025


SET LINESIZE 200
COL input_type FOR A15
-- Numeric COLUMN formats use 9s, e.g. 99999; "FOR 15" is not valid SQL*Plus
COL success_count FOR 99999
COL failed_count FOR 99999
COL total FOR 99999
COL success_rate_pct FOR 999.99

SELECT DECODE(input_type,
              'DB FULL', 'FULL',
              'DB INCR', 'INCREMENTAL',
              input_type) AS input_type,
       COUNT(CASE WHEN status = 'COMPLETED' THEN 1 END) AS success_count,
       COUNT(CASE WHEN status <> 'COMPLETED' THEN 1 END) AS failed_count,
       COUNT(*) AS total,
       ROUND((COUNT(CASE WHEN status = 'COMPLETED' THEN 1 END) * 100.0) / COUNT(*), 2) AS success_rate_pct
FROM v$rman_backup_job_details
WHERE start_time >= SYSDATE - 7
  AND input_type IN ('DB FULL', 'DB INCR')
GROUP BY DECODE(input_type,
                'DB FULL', 'FULL',
                'DB INCR', 'INCREMENTAL',
                input_type)
ORDER BY input_type;

Posted By Nikhil23:54
Filled under:

SET LINESIZE 200
COL backup_type FOR A15
COL success_count FOR 999999
COL failed_count FOR 999999
COL total FOR 999999
COL success_rate_pct FOR 999.99

-- Note: v$rman_backup_job_details exposes INPUT_TYPE (there is no BACKUP_TYPE column)
SELECT
    DECODE(input_type,
           'DB FULL', 'FULL',
           'DB INCR', 'INCREMENTAL',
           'OTHER') AS backup_type,
    COUNT(CASE WHEN status = 'COMPLETED' THEN 1 END) AS success_count,
    COUNT(CASE WHEN status <> 'COMPLETED' THEN 1 END) AS failed_count,
    COUNT(*) AS total,
    ROUND(
        (COUNT(CASE WHEN status = 'COMPLETED' THEN 1 END) * 100.0) /
         COUNT(*), 2) AS success_rate_pct
FROM
    v$rman_backup_job_details
WHERE
    start_time >= SYSDATE - 7
    AND input_type IN ('DB FULL', 'DB INCR')
GROUP BY
    DECODE(input_type,
           'DB FULL', 'FULL',
           'DB INCR', 'INCREMENTAL',
           'OTHER')
ORDER BY
    backup_type;


Posted By Nikhil23:52
Filled under:

SET LINESIZE 200
COL backup_type FOR A15
COL success_count FOR 999999
COL failed_count FOR 999999
COL total FOR 999999
COL success_rate_pct FOR 999.99

WITH backup_status AS (
    SELECT
        DECODE(b.incremental_level,
               NULL, 'FULL',
               0, 'LEVEL 0 INCR',
               1, 'LEVEL 1 INCR',
               'OTHER') AS backup_type,
        CASE
            WHEN bs.status = 'A' THEN 1  -- A = Available
            ELSE 0
        END AS is_success,
        CASE
            WHEN bs.status <> 'A' THEN 1  -- e.g. D = Deleted, X = Expired
            ELSE 0
        END AS is_failed
    FROM
        v$backup_set b
        JOIN v$backup_set_details bs
          ON b.set_stamp = bs.set_stamp AND b.set_count = bs.set_count
    WHERE
        b.start_time >= SYSDATE - 7  -- adjust as needed
)
SELECT
    backup_type,
    SUM(is_success) AS success_count,
    SUM(is_failed) AS failed_count,
    COUNT(*) AS total,
    ROUND((SUM(is_success) * 100.0) / COUNT(*), 2) AS success_rate_pct
FROM
    backup_status
GROUP BY
    backup_type
ORDER BY
    backup_type;

Note: a non-'A' status here means the backup set is no longer available (deleted/expired), not necessarily that the backup job failed; use v$rman_backup_job_details for job-level success rates.


Posted By Nikhil23:42
Filled under:

SET LINESIZE 200
COL backup_type FOR A15
COL avg_duration_mins FOR 999999.99
COL total_backups FOR 999999

SELECT
    DECODE(b.incremental_level,
           NULL, 'FULL',
           0, 'LEVEL 0 INCR',
           1, 'LEVEL 1 INCR',
           'OTHER') AS backup_type,
    ROUND(AVG((b.completion_time - b.start_time) * 24 * 60), 2) AS avg_duration_mins,
    COUNT(*) AS total_backups
FROM
    v$backup_set b
WHERE
    b.start_time >= SYSDATE - 7  -- Change as needed
GROUP BY
    DECODE(b.incremental_level,
           NULL, 'FULL',
           0, 'LEVEL 0 INCR',
           1, 'LEVEL 1 INCR',
           'OTHER')
ORDER BY
    backup_type;


Posted By Nikhil23:38
Filled under:

SELECT
    status,
    filename
FROM
    v$block_change_tracking;
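If the query above shows DISABLED, block change tracking can be enabled so that level-1 incrementals read only changed blocks instead of scanning every datafile; the tracking-file path below is illustrative:

```sql
-- Enable block change tracking (speeds up incremental backups)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct/change_tracking.f';

-- To disable it again:
-- ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
```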

Posted By Nikhil22:27

avg - Piece size

Filled under:

SET LINESIZE 200
COL avg_size_gb FOR 999999.99

SELECT
    ROUND(AVG(size_gb), 2) AS avg_size_gb
FROM (
    SELECT
        ROUND(bp.bytes / 1024 / 1024 / 1024, 2) AS size_gb
    FROM
        v$backup_piece bp,
        v$backup_set bset
    WHERE
        bp.set_stamp = bset.set_stamp
        AND bp.set_count = bset.set_count
        AND bset.incremental_level IS NOT NULL
        AND bset.start_time > SYSDATE - 7
);


Posted By Nikhil22:19

Factors

Filled under:

 

  • If backup pieces are very small, RMAN spends extra time opening/closing files, resulting in overhead and slower backups.

  • If backup pieces are too large and exceed filesystem or media manager limits, backups can fail or hang during writes or flushes.

  • Improper piece size might cause excessive I/O wait or network overhead if streaming backups or using tape devices.

  • Media manager throttling or channel bottlenecks due to file size mismatches.

I’ve gathered the necessary information related to our slow Oracle RMAN backup performance and have updated the Excel sheet accordingly.

The following key factors were taken into consideration during the analysis:

  • Allocated channels
  • Semaphores configured
  • Average piece size for full/incremental backups
  • nofile and nproc limits
  • System page size
  • shmmax and shmall values
  • OS-level limits

Please review the updated file and let me know if there are any questions or if additional inputs are required.
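The OS-level inputs listed above can be collected in one pass with the standard commands the document already references (Linux-specific paths under /proc; output feeds the Excel sheet):

```shell
# Gather the OS-level limits relevant to RMAN backup performance
ulimit -n                      # nofile: max open file descriptors
ulimit -u                      # nproc: max user processes
getconf PAGE_SIZE              # system page size in bytes
cat /proc/sys/kernel/shmmax    # max shared memory segment size (bytes)
cat /proc/sys/kernel/shmall    # total shared memory (pages)
cat /proc/sys/kernel/sem       # semmsl semmns semopm semmni
```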

Oracle RMAN Backup Performance Factors

  • Allocated Channels: more channels = better parallelism; too few = slow backup. Check: `ALLOCATE CHANNEL`, `PARALLELISM` in RMAN scripts.
  • Semaphores Configured: insufficient semaphores can block or delay processes. Check: `/proc/sys/kernel/sem`, `ipcs -s`, `sysctl` settings.
  • Avg. Piece Size (Full/Inc): too small = overhead; too large = write delays or failures. Check: RMAN logs, `MAXPIECESIZE`, backup piece sizes by level.
  • nofile and nproc Limits: low values limit open files or processes, may halt backups. Check: `ulimit -n` (nofile), `ulimit -u` (nproc), `/etc/security/limits.conf`.
  • System Page Size: mismatch with DB block size may affect memory and I/O efficiency. Check: `getconf PAGE_SIZE`, database block size (`db_block_size`).
  • shmmax / shmall: low values can cause shared memory errors during large backups. Check: `/proc/sys/kernel/shmmax`, `shmall`, `ipcs -m`.
  • OS Limits: global limits on files, memory, processes can restrict backups. Check: `ulimit -a`, system-level limits in `/etc/security/limits.d/`.

  • Backup Piece Too Small: high overhead, slower backup. Check: `MAXPIECESIZE` setting, backup piece sizes.
  • Backup Piece Too Large: possible failures or slow flushes. Check: filesystem limits, media manager limits.
  • OS Limits (ulimit/fs): backup stalls or fails if limits exceeded. Check: `ulimit -f`, filesystem max file size.
  • Number of Channels: too few = slow, too many = overload. Check: RMAN configured channels, storage capacity.
  • Media Manager Settings: file size limits or throttling impacting speed. Check: coordinate with the backup software team.
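As a sketch of how piece size and parallelism are controlled on the RMAN side (the 32G / 4-channel values are illustrative; tune them to your storage and media manager limits):

```sql
-- Run at the RMAN> prompt: persistent settings for disk backups
CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 32G;

-- Or per-run, inside a RUN block:
-- RUN {
--   ALLOCATE CHANNEL c1 DEVICE TYPE DISK MAXPIECESIZE 32G;
--   BACKUP INCREMENTAL LEVEL 1 DATABASE;
-- }
```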

Posted By Nikhil22:12

new

Filled under:

SET LINESIZE 200
COL piece_handle FOR A70
COL incr_level FOR 99
COL size_gb FOR 999999.99
COL start_time FOR A20
COL end_time FOR A20

SELECT
    bp.handle AS piece_handle,
    bset.incremental_level AS incr_level,
    ROUND(bp.bytes / 1024 / 1024 / 1024, 2) AS size_gb,
    TO_CHAR(bset.start_time, 'YYYY-MM-DD HH24:MI:SS') AS start_time,
    TO_CHAR(bset.completion_time, 'YYYY-MM-DD HH24:MI:SS') AS end_time
FROM
    v$backup_piece bp,
    v$backup_set bset
WHERE
    bp.set_stamp = bset.set_stamp
    AND bp.set_count = bset.set_count
    AND bset.incremental_level IS NOT NULL
    AND bset.start_time > SYSDATE - 7
ORDER BY
    bset.start_time DESC;


    Posted By Nikhil22:04
    Filled under:

SET LINESIZE 200
COL piece_handle FOR A70
COL incr_level FOR 99
COL size_gb FOR 999999.99
COL start_time FOR A20
COL end_time FOR A20

-- LEVEL is a reserved word in Oracle, so the column is aliased incr_level
SELECT
    bp.handle AS piece_handle,
    bs.incremental_level AS incr_level,
    ROUND(bp.bytes / 1024 / 1024 / 1024, 2) AS size_gb,
    TO_CHAR(bs.start_time, 'YYYY-MM-DD HH24:MI:SS') AS start_time,
    TO_CHAR(bs.completion_time, 'YYYY-MM-DD HH24:MI:SS') AS end_time
FROM
    v$backup_piece bp
JOIN
    v$backup_set bs ON bp.set_stamp = bs.set_stamp AND bp.set_count = bs.set_count
WHERE
    bs.incremental_level IS NOT NULL
    AND bs.start_time > SYSDATE - 7  -- Change this to adjust date range
ORDER BY
    bs.start_time DESC;


    Posted By Nikhil21:58
    Filled under:

SELECT handle,
       ROUND(bytes/1024/1024/1024, 2) AS size_gb,
       completion_time
FROM v$backup_piece;

    Posted By Nikhil21:55
    Filled under:

SELECT ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS size_gb
FROM dba_data_files;


SELECT ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS total_size_gb
FROM (
    SELECT bytes FROM dba_data_files
    UNION ALL
    SELECT bytes FROM dba_temp_files
);


    SELECT con_id,
           (SELECT name FROM v$containers WHERE con_id = f.con_id) AS pdb_name,
           ROUND(SUM(f.bytes) / 1024 / 1024 / 1024, 2) AS size_gb
    FROM cdb_data_files f
    GROUP BY con_id;



    -- Per-PDB size; run from the CDB root. Note: DBA_* views have no CON_ID
    -- column, so CDB_DATA_FILES is used here instead of DBA_DATA_FILES.
    SELECT ROUND(SUM(bytes)/1024/1024/1024, 2) AS size_gb
    FROM cdb_data_files
    WHERE con_id = (SELECT con_id FROM v$containers WHERE name = 'PDB_NAME');

    Posted By Nikhil21:32
    Filled under:

     While we are currently waiting for our access request to be approved, I wanted to highlight another potential issue that might be affecting the server itself.

    When attempting to log in, we receive an error message immediately after entering our username. This suggests that the server may not be accessible, possibly due to IACP sync issues.

    Ideally, this should have been verified with the PamOps team at the time of raising the access request. To move this forward, I have raised an incident with the team to investigate and assist in resolving the accessibility issue.

    Posted By Nikhil20:18
    Filled under:

SELECT
    t1.typname AS source_type,
    t2.typname AS target_type,
    c.castcontext,
    c.castmethod
FROM
    pg_cast c
    JOIN pg_type t1 ON c.castsource = t1.oid
    JOIN pg_type t2 ON c.casttarget = t2.oid
WHERE
    t1.typname = 'varchar' AND t2.typname = 'numeric';
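For reference, a varchar-to-numeric cast of the kind this query inspects would be created as below (superuser only; an IMPLICIT cast is the risky option discussed elsewhere on this page):

```sql
-- Create an implicit cast using the types' text I/O functions
CREATE CAST (varchar AS numeric) WITH INOUT AS IMPLICIT;

-- Remove it again:
-- DROP CAST (varchar AS numeric);
```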

    Posted By Nikhil02:32
    Filled under:

     As this is a development environment, I’ve gone ahead and created the required cast in the database as requested.

    Just a quick note — since creating casts requires superuser privileges and is typically restricted in production for safety reasons, I’d recommend considering alternatives in future implementations, such as:

    • Using explicit type conversion functions (e.g., ::type or CAST(expr AS type))

    • Creating wrapper functions for conversions, if applicable

    These approaches are generally safer and more portable across environments, especially when moving code toward production.
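A minimal sketch of both alternatives (the function name to_numeric_safe is illustrative, not an existing helper):

```sql
-- 1. Explicit conversion in the query itself
SELECT CAST('123.45' AS numeric);   -- or: SELECT '123.45'::numeric;

-- 2. Wrapper function that centralizes the conversion
CREATE OR REPLACE FUNCTION to_numeric_safe(v varchar)
RETURNS numeric
LANGUAGE sql
IMMUTABLE
AS $$ SELECT v::numeric $$;

SELECT to_numeric_safe('123.45');
```

Unlike a global cast, the wrapper function requires no superuser privileges and its behavior is visible at every call site.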

    Posted By Nikhil02:23
    Filled under:

Risk of Unsafe Type Conversions:

  • A cast defines how one data type is converted to another.
  • If misused, it can cause data corruption, unexpected query behavior, or bypass of security constraints.

System-wide Impact:

  • Once a cast is created, it is globally available in the database.
  • This can interfere with existing logic, cause compatibility issues with future upgrades, or affect other users' queries.

PostgreSQL's Security Philosophy:

  • Only superusers are allowed to define or override behavior that affects core database functionality, such as type casting, system catalog changes, and low-level security settings.

Safer alternatives:

    1. Use explicit conversion functions in SQL queries instead of relying on implicit or automatic casts.

    2. Consider writing a wrapper function that does the conversion safely.

    Posted By Nikhil01:26