Thursday, 31 July 2025


 Thanks for sharing the BCM activity details.


To proceed further, please raise the self-service task along with the associated change record. This task will be processed by automation and will enable your application team to invoke the workflow independently as needed.


You can find more details and steps at the following link: (link)


If you’d like, feel free to schedule a quick call—I’ll be happy to walk you through the workflow and answer any questions.

Posted By Nikhil 17:45

Wednesday, 30 July 2025


 Thank you for confirming that the PostgreSQL database is hosted in Switzerland.

I wanted to clarify the context of the Business Continuity Management (BCM) exercise:

  • The scheduled BCM activation applies to Singapore operations only.

  • Could you please help clarify why the Switzerland‑hosted PostgreSQL database has been included in scope? Specifically:

    • Is there a cross‑border dependency where Swiss components are identified as part of critical business services supporting Singapore?

    • Does the application’s end‑to‑end dependency map include this foreign database as essential for Singapore continuity?
      The Monetary Authority of Singapore’s BCM Guidelines emphasise that critical business services must include identification of all dependencies, including those in other jurisdictions.

Posted By Nikhil 02:45

 I’m writing regarding the BCM (Business Continuity Management) activity scheduled in Singapore. The application user has included a PostgreSQL database that is currently hosted in Switzerland as part of the continuity plan.

To properly coordinate this, could you please clarify:

  • Is this an application‑level failover only, where traffic shifts to a standby application instance automatically, without any DBA intervention on the database?

  • Or is it a full PostgreSQL switchover (planned role reversal between primary and standby), which would require tasks to be performed by a DBA—such as promotion, demotion, WAL sync checks, or repmgr/dbvisit orchestration?

  • If it is indeed a switchover, could you confirm whether any DBA involvement is required during the Singapore BCM window?

Understanding this will help us confirm the scope, assign the right resources, and ensure our runbook covers the necessary steps.

Thanks in advance, and please let me know if you need any additional details from our end.

Best regards,
[Your Name]
[Your Role/Team]
[Contact Information]
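
Side note for our own runbook: a quick way to tell the two cases apart on the database side is to check each node's recovery state before and after the exercise. A minimal sketch, assuming direct SQL access to both PostgreSQL nodes:

-- Run on each node: returns false on the primary, true on a standby
SELECT pg_is_in_recovery();

If the roles are unchanged afterwards, it was an application-level failover only; reversed roles indicate an actual switchover.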


📝 Additional Context

Posted By Nikhil 00:00

Thursday, 24 July 2025


As checked and discussed, the mapping is not configured correctly, which is likely causing the authentication issue.

Upon review, it appears that the authorization, subsystem, and operational environment values are not aligned or configured correctly.

Please review and correct the mapping details accordingly. Once updated, feel free to loop me back in for verification or further assistance.

Posted By Nikhil 02:27

Wednesday, 23 July 2025


Received great feedback from one of the SREs: quite a rare scenario, but definitely worth considering for our automation logic.

🧩 Scenario:
During the switchover, the system initially shows an error message indicating failure (see red error in the image below).
However, upon inspection, the site actually appears to have successfully switched over (see highlighted portion in the image).

📌 Next Steps / Action Items:

  1. 🔍 I’ll be reviewing this case thoroughly to confirm if the site is indeed healthy and safe for handover despite the error.

  2. 🤖 Once confirmed, we’ll look to enhance our BCM automation logic to handle this edge case intelligently.

📷 [Attach the screenshot with error and highlight]

Will keep you all posted once the analysis is complete.

Posted By Nikhil 21:22

In a production environment configured with Patroni, a clean switchover (from the primary to a healthy replica) typically takes:

🔹 5 to 15 seconds under normal conditions
🔹 May vary slightly depending on:

  • Replication lag (ideally 0 or minimal; see the query below)

  • Size of WALs to be applied

  • Hardware and network latency

  • Application connectivity retries
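
A quick way to check the first factor before triggering the switchover (a minimal sketch, assuming PostgreSQL 10+ and run on the current primary; patronictl list shows the same per-member lag):

-- Per-standby replication lag in MB, as seen from the primary
SELECT application_name,
       state,
       ROUND(pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) / 1024 / 1024, 2) AS lag_mb
FROM pg_stat_replication;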

Posted By Nikhil 20:56

 If you're finding the default Atlas view a bit difficult to navigate, you can use this alternate List View which closely resembles GSNOW:

👉 Click here for List-View

🔹 This view allows for:

  • Easier navigation

  • Opening tickets in a new browser tab, rather than in the same tab as the default Atlas view.

  • A layout more familiar to GSNOW users.

🛠️ Tip: You can also create custom lists and add them to your Favourites for quicker access.
📸 See image below for how to do it:

[Attach image here showing how to create a custom list and mark it as a favourite. Maybe highlight the steps or use arrows for clarity]

Let me know if you need help customizing your view!

Posted By Nikhil 20:42

Thursday, 10 July 2025


 



Subject: BCM Tomorrow – Task Flow & Next Steps

Hi all,

Thanks for raising the automated RITM requests ahead of tomorrow’s BCM.

✅ It’s perfectly fine to get the two approvals done today — that will generate the precheck task for today, which runs transparently in the background.

🔔 Important:
Let’s ensure that only the highlighted task (see image below) is closed tomorrow, once we’re ready to proceed with the failover (BCP activation).
👉 Closure of that task is the actual trigger point.

Appreciate everyone’s support and coordination 🙂

Thanks,
[Your Name]



Posted By Nikhil 21:13

Saturday, 5 July 2025


function processLines(input, e_status) {
    // Note: e_status is decremented here but not used further in this version
    e_status = e_status - 1;

    // Split the multi-line lag output and drop empty lines
    var lines = input.split('\n').filter(line => line.trim() !== "");

    for (var i = 0; i < lines.length; i++) {
        var line = lines[i].trim();
        var lagValue = parseFloat(line);

        // Abort if the line is not numeric
        if (isNaN(lagValue)) {
            return `Invalid lag value detected: "${line}"`;
        }

        // Any non-zero lag blocks the switchover
        if (lagValue > 0) {
            return `Cluster has lag (${lagValue}MB). Exiting...`;
        }
    }

    return "Cluster is in sync, proceeding";
}

Posted By Nikhil 07:26

function processLines(input, e_status) {
    // Note: e_status is decremented here but not used further in this version
    e_status = e_status - 1;

    // Split the multi-line lag output and drop empty lines
    var lines = input.split('\n').filter(line => line.trim() !== "");

    var threshold = 1.0; // Max acceptable lag in MB

    for (var i = 0; i < lines.length; i++) {
        var line = lines[i].trim();
        var lagValue = parseFloat(line);

        // Abort if the line is not numeric
        if (isNaN(lagValue)) {
            return `Invalid lag value detected: "${line}"`;
        }

        // Lag above the threshold blocks the switchover
        if (lagValue > threshold) {
            return `Cluster has lag (${lagValue}MB). Exiting...`;
        }
    }

    return "Cluster is in sync, proceeding";
}

Posted By Nikhil 05:34

Friday, 4 July 2025

structurally duplicate




Same data types (in the same order), even if the column names differ.


WITH table_defs AS (
  -- Type-only signature: ordered list of column data types per table
  SELECT
    c.owner,
    c.table_name,
    LISTAGG(UPPER(c.data_type), ',')
      WITHIN GROUP (ORDER BY c.column_id) AS type_signature
  FROM all_tab_columns c
  WHERE c.owner NOT IN (
    'SYS', 'SYSTEM', 'XDB', 'OUTLN', 'MDSYS', 'ORDDATA', 'ORDSYS',
    'CTXSYS', 'DBSNMP', 'APPQOSSYS', 'AUDSYS', 'DVSYS',
    'GSMADMIN_INTERNAL', 'ANONYMOUS', 'WMSYS', 'OLAPSYS',
    'LBACSYS', 'SI_INFORMTN_SCHEMA'
  )
  GROUP BY c.owner, c.table_name
),
table_sizes AS (
  -- Physical size per table segment, in MB
  SELECT
    owner,
    segment_name AS table_name,
    ROUND(SUM(bytes)/1024/1024, 2) AS size_mb
  FROM dba_segments
  WHERE segment_type = 'TABLE'
  GROUP BY owner, segment_name
),
duplicates AS (
  -- Signatures shared by more than one table
  SELECT type_signature
  FROM table_defs
  GROUP BY type_signature
  HAVING COUNT(*) > 1
)
SELECT
  d.type_signature,
  d.owner,
  d.table_name,
  NVL(s.size_mb, 0) AS size_mb
FROM table_defs d
JOIN duplicates dup
  ON d.type_signature = dup.type_signature
LEFT JOIN table_sizes s
  ON d.owner = s.owner AND d.table_name = s.table_name
ORDER BY d.type_signature, d.owner, d.table_name;


Posted By Nikhil 23:50

WITH table_defs AS (
  -- Full column signature: name, type, and length of each column, in order
  SELECT
    c.owner,
    c.table_name,
    LISTAGG(c.column_name || ':' || c.data_type || ':' || c.data_length, ',')
      WITHIN GROUP (ORDER BY c.column_id) AS col_signature
  FROM all_tab_columns c
  WHERE c.owner NOT IN (
    'SYS', 'SYSTEM', 'XDB', 'OUTLN', 'MDSYS', 'ORDDATA', 'ORDSYS',
    'CTXSYS', 'DBSNMP', 'APPQOSSYS', 'AUDSYS', 'DVSYS',
    'GSMADMIN_INTERNAL', 'ANONYMOUS', 'WMSYS', 'OLAPSYS',
    'LBACSYS', 'SI_INFORMTN_SCHEMA'
  )
  GROUP BY c.owner, c.table_name
),
table_sizes AS (
  -- Physical size per table segment, in MB
  SELECT
    owner,
    segment_name AS table_name,
    ROUND(SUM(bytes)/1024/1024, 2) AS size_mb
  FROM dba_segments
  WHERE segment_type = 'TABLE'
  GROUP BY owner, segment_name
),
duplicates AS (
  -- Signatures shared by more than one table
  SELECT col_signature
  FROM table_defs
  GROUP BY col_signature
  HAVING COUNT(*) > 1
)
SELECT
  d.col_signature,
  d.owner,
  d.table_name,
  NVL(s.size_mb, 0) AS size_mb
FROM table_defs d
JOIN duplicates dup
  ON d.col_signature = dup.col_signature
LEFT JOIN table_sizes s
  ON d.owner = s.owner AND d.table_name = s.table_name
ORDER BY d.col_signature, d.owner, d.table_name;


Posted By Nikhil 23:45

WITH table_defs AS (
  -- Full column signature: name, type, and length of each column, in order
  SELECT
    c.owner,
    c.table_name,
    LISTAGG(c.column_name || ':' || c.data_type || ':' || c.data_length, ',')
      WITHIN GROUP (ORDER BY c.column_id) AS col_signature
  FROM all_tab_columns c
  WHERE c.owner NOT IN (
    'SYS', 'SYSTEM', 'XDB', 'OUTLN', 'MDSYS', 'ORDDATA', 'ORDSYS',
    'CTXSYS', 'DBSNMP', 'APPQOSSYS', 'AUDSYS', 'DVSYS',
    'GSMADMIN_INTERNAL', 'ANONYMOUS', 'WMSYS', 'OLAPSYS',
    'LBACSYS', 'SI_INFORMTN_SCHEMA'
  )
  GROUP BY c.owner, c.table_name
),
table_sizes AS (
  -- Physical size per table segment, in MB
  SELECT
    owner,
    segment_name AS table_name,
    ROUND(SUM(bytes)/1024/1024, 2) AS size_mb
  FROM dba_segments
  WHERE segment_type = 'TABLE'
  GROUP BY owner, segment_name
)
-- One row per duplicated signature, listing the matching tables and their sizes
SELECT
  d.col_signature,
  COUNT(*) AS table_count,
  LISTAGG(d.owner || '.' || d.table_name || ' (' || NVL(s.size_mb, 0) || ' MB)', ', ')
    WITHIN GROUP (ORDER BY d.owner, d.table_name) AS tables_with_size
FROM table_defs d
LEFT JOIN table_sizes s
  ON d.owner = s.owner AND d.table_name = s.table_name
GROUP BY d.col_signature
HAVING COUNT(*) > 1
ORDER BY table_count DESC;


Posted By Nikhil 22:28

correct


SELECT
    t.owner,
    t.table_name,
    s.tablespace_name,
    ROUND(s.bytes / 1024 / 1024, 2) AS size_mb,
    t.num_rows,
    t.last_analyzed,
    o.last_ddl_time AS last_modified
FROM dba_tables t
JOIN dba_segments s
    ON t.owner = s.owner
   AND t.table_name = s.segment_name
   AND s.segment_type = 'TABLE'
JOIN dba_objects o
    ON t.owner = o.owner
   AND t.table_name = o.object_name
   AND o.object_type = 'TABLE'
WHERE t.owner NOT IN (
    'SYS', 'SYSTEM', 'XDB', 'OUTLN', 'MDSYS', 'ORDDATA', 'ORDSYS',
    'CTXSYS', 'DBSNMP', 'APPQOSSYS', 'AUDSYS', 'DVSYS',
    'GSMADMIN_INTERNAL', 'ANONYMOUS', 'WMSYS', 'OLAPSYS',
    'LBACSYS', 'SI_INFORMTN_SCHEMA'
)
-- Names ending in digits, containing a month name, or ending in YYYYMM / YYYY_MM / MMYYYY
AND REGEXP_LIKE(t.table_name,
    '(\d+$|' ||
    '(JAN|FEB|MAR|APR|MAY|JUN|JUL|AUG|SEP|OCT|NOV|DEC)|' ||
    '(JANUARY|FEBRUARY|MARCH|APRIL|JUNE|JULY|AUGUST|SEPTEMBER|OCTOBER|NOVEMBER|DECEMBER)|' ||
    '(20[0-9]{2}[_]?(0[1-9]|1[0-2]))$|' ||
    '((0[1-9]|1[0-2])(20[0-9]{2}))$)',
    'i'
)
ORDER BY size_mb DESC;


Posted By Nikhil 22:25

SELECT
    t.owner,
    t.table_name,
    s.tablespace_name,
    ROUND(s.bytes / 1024 / 1024, 2) AS size_mb,
    t.num_rows,
    t.last_analyzed,
    o.last_ddl_time AS last_modified
FROM dba_tables t
JOIN dba_segments s
    ON t.owner = s.owner
   AND t.table_name = s.segment_name
   AND s.segment_type = 'TABLE'
JOIN dba_objects o
    ON t.owner = o.owner
   AND t.table_name = o.object_name
   AND o.object_type = 'TABLE'
WHERE t.owner NOT IN (
    'SYS', 'SYSTEM', 'XDB', 'OUTLN', 'MDSYS', 'ORDDATA', 'ORDSYS',
    'CTXSYS', 'DBSNMP', 'APPQOSSYS', 'AUDSYS', 'DVSYS',
    'GSMADMIN_INTERNAL', 'ANONYMOUS', 'WMSYS', 'OLAPSYS',
    'LBACSYS', 'SI_INFORMTN_SCHEMA'
)
AND REGEXP_LIKE(t.table_name,
    '(\d+$|' ||
    '(JAN|FEB|MAR|APR|MAY|JUN|JUL|AUG|SEP|OCT|NOV|DEC)|' ||
    '(JANUARY|FEBRUARY|MARCH|APRIL|JUNE|JULY|AUGUST|SEPTEMBER|OCTOBER|NOVEMBER|DECEMBER)|' ||
    '(20[0-9]{2}[_]?(0[1-9]|1[0-2])$|' ||
    '(0[1-9]|1[0-2])(20[0-9]{2})$)',
    'i'
)
ORDER BY size_mb DESC;


Posted By Nikhil 22:19

SELECT
    t.owner,
    t.table_name,
    s.tablespace_name,
    ROUND(s.bytes / 1024 / 1024, 2) AS size_mb,
    t.num_rows,
    t.last_analyzed,
    o.last_ddl_time AS last_modified
FROM dba_tables t
JOIN dba_segments s
    ON t.owner = s.owner
   AND t.table_name = s.segment_name
   AND s.segment_type = 'TABLE'
JOIN dba_objects o
    ON t.owner = o.owner
   AND t.table_name = o.object_name
   AND o.object_type = 'TABLE'
WHERE t.owner = 'YOUR_SCHEMA'
  AND REGEXP_LIKE(t.table_name,
       '(\d+$|' ||                               -- ends with number
       '(JAN|FEB|MAR|APR|MAY|JUN|JUL|AUG|SEP|OCT|NOV|DEC)|' ||  -- short months
       '(JANUARY|FEBRUARY|MARCH|APRIL|JUNE|JULY|AUGUST|SEPTEMBER|OCTOBER|NOVEMBER|DECEMBER)|' ||
       '(20[0-9]{2}[_]?(0[1-9]|1[0-2])$|' ||     -- ends in YYYYMM or YYYY_MM
       '(0[1-9]|1[0-2])(20[0-9]{2})$)', 'i')
ORDER BY size_mb DESC;


Posted By Nikhil 22:14

 Hi [Colleague's Name],

As discussed, I’ve gathered the data — please find the attached Excel. Tab #7 includes the heuristic analysis.

It contains an expanded and categorized list of common table name patterns (prefixes/suffixes like %_BAK, TMP_%, etc.) that often indicate duplicate, backup, temporary, or staging tables.
We’ve also included rough size estimates and the last analyzed date (note that size is only an approximation).

Let me know if you need anything refined or explored further.

Posted By Nikhil 21:41

SELECT
    t.owner,
    t.table_name,
    s.tablespace_name,
    ROUND(s.bytes / 1024 / 1024, 2) AS size_mb,
    t.num_rows,
    t.last_analyzed,
    o.last_ddl_time AS last_modified
FROM dba_tables t
JOIN dba_segments s
    ON t.owner = s.owner
   AND t.table_name = s.segment_name
   AND s.segment_type = 'TABLE'
JOIN dba_objects o
    ON t.owner = o.owner
   AND t.table_name = o.object_name
   AND o.object_type = 'TABLE'
WHERE t.owner = 'YOUR_SCHEMA'
  AND REGEXP_LIKE(t.table_name,
       '(_BAK$|_TMP$|_COPY$|_OLD$|_TEST$|_ARCH$|_STAGE$|_SNAP$|_DUP$|^TMP_|^STG_|^BK_|^OLD_|^NEW_|^TEST_)',
       'i')
ORDER BY size_mb DESC;


Posted By Nikhil 21:41

SELECT
    t.owner,
    t.table_name,
    s.tablespace_name,
    ROUND(s.bytes / 1024 / 1024, 2) AS size_mb,
    t.num_rows,
    t.last_analyzed,
    MAX(hs.end_time) AS last_access_time
FROM dba_tables t
JOIN dba_segments s
    ON t.owner = s.owner AND t.table_name = s.segment_name AND s.segment_type = 'TABLE'
LEFT JOIN dba_hist_seg_stat_obj hso
    ON t.owner = hso.owner AND t.table_name = hso.object_name
LEFT JOIN dba_hist_seg_stat hs
    ON hso.obj# = hs.obj#
WHERE t.owner = 'YOUR_SCHEMA'
  AND REGEXP_LIKE(t.table_name,
       '(_BAK$|_TMP$|_COPY$|_OLD$|_TEST$|_ARCH$|_STAGE$|_SNAP$|_DUP$|^TMP_|^STG_|^BK_|^OLD_|^NEW_|^TEST_)', 'i')
GROUP BY t.owner, t.table_name, s.tablespace_name, s.bytes, t.num_rows, t.last_analyzed
ORDER BY size_mb DESC;


Posted By Nikhil 21:36

SELECT
    t.owner,
    t.table_name,
    ROUND(t.blocks * 8 / 1024, 2) AS size_mb,
    MAX(s.end_time) AS last_access_time
FROM dba_tables t
JOIN dba_hist_seg_stat_obj o
    ON t.owner = o.owner AND t.table_name = o.object_name
JOIN dba_hist_seg_stat s
    ON o.obj# = s.obj#
WHERE t.owner = 'YOUR_SCHEMA'
  AND REGEXP_LIKE(t.table_name,
       '(_BAK$|_TMP$|_COPY$|_OLD$|_TEST$|_ARCH$|_STAGE$|_SNAP$|_DUP$|^TMP_|^STG_|^BK_|^OLD_|^NEW_|^TEST_)',
       'i')
GROUP BY t.owner, t.table_name, t.blocks
ORDER BY last_access_time DESC NULLS LAST;


Posted By Nikhil 19:59

 heuristic analysis


Common Suffix Patterns (LIKE '%XYZ')

Pattern        Meaning / Use Case
_BAK           Backup copy of the table
_BACKUP        Same as above
_TMP           Temporary version of the table
_TEMP          Temporary or interim data
_COPY          Copy of an original table
_OLD           Old version, before a structural/data change
_NEW           New version during migration/ETL
_TEST          Used for testing
_ARCHIVE       Archived data
_STAGE         Staging table before transformation
_HIST          Historical data
_HISTORY       Extended historical storage
_SNAP          Snapshot of a live table
_MIG           Migration purposes
_EXPORT        Table used for export/dump
_IMPORT        Table used during import
_VALIDATION    For data validation during ETL
_DUP           Duplicate copy
_ROLLBACK      Used to rollback or undo a change

🛠️ Common Prefix Patterns (LIKE 'XYZ%')

Pattern        Meaning / Use Case
TMP_           Temporary table
STG_           Staging table
BK_            Backup copy
OLD_           Old version
NEW_           New version
ARCH_          Archived data
COPY_          Duplicate
TEST_          Testing tables
SNAP_          Snapshots
VAL_           Validation tables
DUP_           Duplicate
ROLLBACK_      Used for undo / rollback testing
T_             Often used as temp or test table




WITH table_defs AS (
  SELECT table_name,
         LISTAGG(column_name || ':' || data_type || ':' || data_length, ',') 
           WITHIN GROUP (ORDER BY column_id) AS col_signature
  FROM all_tab_columns
  WHERE owner = 'YOUR_SCHEMA'
  GROUP BY table_name
)
SELECT col_signature, COUNT(*), LISTAGG(table_name, ', ') WITHIN GROUP (ORDER BY table_name) AS tables
FROM table_defs
GROUP BY col_signature
HAVING COUNT(*) > 1;



SELECT 
    owner,
    table_name,
    num_rows,
    blocks,
    avg_row_len,
    ROUND(blocks * 8 / 1024, 2) AS size_mb, -- 1 block = 8KB by default
    last_analyzed
FROM dba_tables
WHERE owner = 'YOUR_SCHEMA'
  AND REGEXP_LIKE(table_name, 
       '(_BAK$|_BACKUP$|_TMP$|_TEMP$|_COPY$|_OLD$|_NEW$|_TEST$|_ARCH$|_ARCHIVE$|_STAGE$|_SNAP$|_DUP$|_EXPORT$|_IMPORT$|_HIST$|_HISTORY$|^TMP_|^STG_|^BK_|^OLD_|^NEW_|^TEST_|^ARCH_|^SNAP_|^COPY_|^VAL_|^DUP_)',
       'i') -- case-insensitive
  AND num_rows IS NOT NULL -- ensure stats exist
ORDER BY size_mb DESC;



SELECT 
    t.owner,
    t.table_name,
    t.tablespace_name,
    t.num_rows,
    t.blocks,
    ROUND(t.blocks * 8 / 1024, 2) AS size_mb,
    t.last_analyzed,
    MAX(s.end_time) AS last_access_time
FROM dba_tables t
JOIN dba_hist_seg_stat_obj o
    ON t.owner = o.owner AND t.table_name = o.object_name
JOIN dba_hist_seg_stat s
    ON o.obj# = s.obj#
WHERE t.owner = 'YOUR_SCHEMA'
  AND REGEXP_LIKE(t.table_name, 
       '(_BAK$|_BACKUP$|_TMP$|_TEMP$|_COPY$|_OLD$|_NEW$|_TEST$|_ARCH$|_ARCHIVE$|_STAGE$|_SNAP$|_DUP$|_EXPORT$|_IMPORT$|_HIST$|_HISTORY$|^TMP_|^STG_|^BK_|^OLD_|^NEW_|^TEST_|^ARCH_|^SNAP_|^COPY_|^VAL_|^DUP_)',
       'i')
GROUP BY 
    t.owner, t.table_name, t.tablespace_name, t.num_rows, t.blocks, t.last_analyzed
ORDER BY size_mb DESC;

Posted By Nikhil 18:40

Thursday, 3 July 2025


 



Subject: Walkthrough: BCM Automation Task for Upcoming BCM Activity

Body:

Dear Application Owners,

As part of the upcoming BCM activity, we have developed an automation workflow to streamline the BCP activation and related tasks.

We would like to walk you through this BCM automation process, highlight key checkpoints, and ensure alignment across teams.

Meeting Details:
📅 Date: [Insert Date]
🕒 Time: [Insert Time]
📍 Location/Link: [Insert MS Teams/Zoom link]

Agenda:

  • Overview of BCM automation flow
  • Role of application owners during BCP activation
  • Key dependencies and validation steps
  • Q&A and next steps

Your participation is important to ensure smooth execution during the BCM event. Looking forward to your attendance.

Best regards,
[Your Name]
[Your Team/Title]



Posted By Nikhil 22:01

Tuesday, 1 July 2025


 



Hi Team,
I did a simulation and noticed that Patroni allows a switchover even if the replica is on a different timeline.

After checking the documentation, I learned that a timeline mismatch can indicate WAL history divergence, and promoting such a replica may lead to data loss if it is not fully in sync.

📋 Quick pre-switchover checklist:

  • ✅ Confirm replica is streaming and caught up (pg_stat_replication)
  • ✅ Verify same timeline ID (pg_controldata)
  • ✅ Compare WAL receive vs replay LSN
  • ✅ Check for missing WALs or gaps in history file

Let me know if anyone needs a quick script to validate these before doing a switchover.
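
For reference, a minimal sketch of those checks, assuming PostgreSQL 10+ (query 1 on the primary, query 2 on both nodes, query 3 on the replica):

-- 1. Primary: confirm the replica is streaming and caught up
SELECT application_name, state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
FROM pg_stat_replication;

-- 2. Both nodes: current timeline ID (values should match)
SELECT timeline_id FROM pg_control_checkpoint();

-- 3. Replica: received vs replayed WAL positions
SELECT pg_last_wal_receive_lsn() AS received_lsn,
       pg_last_wal_replay_lsn()  AS replayed_lsn,
       pg_wal_lsn_diff(pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn()) AS replay_gap_bytes;

The timeline query is the in-database equivalent of pg_controldata's latest-checkpoint TimeLineID; gaps in the WAL history file still need a look at the WAL archive itself.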



Posted By Nikhil 04:26