How to read or analyze redo log files using LogMiner

Posted by Zahid on June 4, 2010.

Oracle records all changes to user data and to the data dictionary in the online redo log files for recovery operations. Archiving these online redo log files produces archived redo log files. Both are binary files and are not human-readable. LogMiner is an Oracle utility that lets you peek into the online and archived redo log files through a SQL interface.

The rest of this article shows how to use LogMiner in an Oracle database.

The first thing to do is to prepare a directory location in which to create a dictionary file for LogMiner. This dictionary file contains object information from the data dictionary, which lets LogMiner translate the object numbers recorded in the redo log files into object names. For example, take the following SQL statement.

INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY) VALUES('IT_WT','Technical Writer', 4000, 11000);

If we don't provide the dictionary information to LogMiner, it will show the above SQL statement as follows:

insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values (HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'), HEXTORAW('c229'),HEXTORAW('c3020b'));

There are three ways to supply dictionary information to LogMiner in Oracle 10g.

1). Using the Online Catalog with LogMiner

This is the most efficient and easiest option when LogMiner has access to the database to which the log files belong and no column definition changes are expected in the tables of interest. It is selected when the LogMiner session is started, as follows:

begin
   dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog );
end;
/

2). Extracting Dictionary Information to the Redo Log Files

Use this option when running LogMiner in a database to which the log files do not belong and column definition changes are expected in the tables of interest. For this to work you have to embed the dictionary information into the redo logs at the source database, i.e. the database from which the redo logs being analyzed originate.

begin
   dbms_logmnr_d.build( options => dbms_logmnr_d.store_in_redo_logs );
end;
/

This may spread the dictionary information across multiple log files. Use the following SQL statements to see where the embedded dictionary begins and where it ends.

SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';
SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';

You will have to add these files, and any files in between, to the LogMiner analysis, as sketched below.
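
As a rough sketch, and assuming hypothetical archived log file names (use whatever the two queries above return, plus any logs in between), the dictionary-bearing logs would be added and the session started with the DICT_FROM_REDO_LOGS option:

begin
   -- hypothetical file names; replace with the logs returned by the
   -- DICTIONARY_BEGIN/DICTIONARY_END queries and any logs in between
   dbms_logmnr.add_logfile
   (
    logfilename => '/u01/arch/1_100_718794539.arc',
    options     => dbms_logmnr.new
   );
   dbms_logmnr.add_logfile
   (
    logfilename => '/u01/arch/1_101_718794539.arc',
    options     => dbms_logmnr.addfile
   );
   -- read the dictionary from the added redo logs instead of a flat file
   dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_redo_logs );
end;
/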

3). Extracting the LogMiner Dictionary to a Flat File

This option is maintained for backward compatibility with previous releases and does not guarantee transactional consistency. Oracle recommends using either the online catalog or a dictionary extracted to the redo log files instead.

This article uses "Extracting the LogMiner Dictionary to a Flat File", as it requires a little setup and I wanted to demonstrate that.

$ mkdir -p /u01/apps/logminer_dir 
$ sqlplus / as sysdba

/* 
  The location where the dictionary will be created must be listed in the utl_file_dir
  initialization parameter. 
*/

SQL> alter system set utl_file_dir='/u01/apps/logminer_dir' scope=spfile;

System altered.

shutdown immediate
startup
show parameter utl_file_dir

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
utl_file_dir                         string      /u01/apps/logminer_dir

Normally Oracle records only the change vectors in the redo log files, i.e. just the information required to reconstruct each operation at recovery time. If you want additional information in the redo logs, you need to enable supplemental logging before generating the log files that will be analyzed by LogMiner. Therefore, at the very least, we will enable minimal supplemental logging, as the following SQL statements show:

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;    

SUPPLEME
--------
NO

/* Minimum supplemental logging is not enabled. */

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Database altered.

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

SUPPLEME
--------
YES

/* Minimum supplemental logging is now enabled. */

SQL> alter system switch logfile;

System altered.

SQL> select g.group# , g.status , m.member
     from v$log g, v$logfile m
     where g.group# = m.group#
     and g.status = 'CURRENT';

    GROUP# STATUS   MEMBER
---------- -------- -------------------------------------------
         2 CURRENT  +DG1/ora10g/onlinelog/group_2.264.718794539

/* We start fresh with a new log file, which is group 2. */

SQL> create table scott.test_logmnr           
  2  (id  number,
  3   name varchar2(10) 
  4  );

Table created.

SQL> begin 
       dbms_logmnr_d.build
       (
         dictionary_filename => 'dictionary.dic',
         dictionary_location => '/u01/apps/logminer_dir',
         options             => dbms_logmnr_d.store_in_flat_file
       );
     end;
     /

PL/SQL procedure successfully completed.

/* 
  This has recorded the dictionary information into the file 
  "/u01/apps/logminer_dir/dictionary.dic". 
*/

Now let's make a few user data changes by logging in as user SCOTT.

SQL> conn scott/tiger
connected.
SQL> insert into test_logmnr values (1,'TEST1');

1 row created.

SQL> insert into test_logmnr values (2,'TEST2');

1 row created.

SQL> commit;

Commit complete.

SQL> update test_logmnr set name = 'TEST';

2 rows updated.

SQL> commit;

Commit complete.

SQL> delete from test_logmnr;

2 rows deleted.

SQL> commit;

Commit complete.

After these changes, let's log back in as SYSDBA and start a LogMiner session.

SQL> conn / as sysdba
connected.
SQL> select g.group# , g.status , m.member
     from v$log g, v$logfile m
     where g.group# = m.group#
     and g.status = 'CURRENT';

    GROUP# STATUS   MEMBER
---------- -------- -------------------------------------------
         2 CURRENT  +DG1/ora10g/onlinelog/group_2.264.718794539

/* 
  Log group 2 is still current, that means all of the changes we made are in this redo group. 
*/

SQL> begin
      dbms_logmnr.add_logfile 
      (
       logfilename => '+DG1/ora10g/onlinelog/group_2.264.718794539',
       options     => dbms_logmnr.new
      );
     end;
     /

PL/SQL procedure successfully completed.

/* 
  DBMS_LOGMNR.ADD_LOGFILE builds up the list of redo log files for LogMiner analysis.
  The first file is added with options => dbms_logmnr.new and the rest are added
  with options => dbms_logmnr.addfile, as sketched below.
*/
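
Had the changes spanned more than one log file, the additional files would be appended to the list like this (the second file name is hypothetical; in our case everything fits in group 2, so nothing further is added):

begin
   -- hypothetical additional log; only needed when the changes of interest
   -- span more than one log file
   dbms_logmnr.add_logfile
   (
    logfilename => '+DG1/ora10g/onlinelog/group_3.265.718794539',
    options     => dbms_logmnr.addfile
   );
end;
/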

SQL> select filename from v$logmnr_logs;

FILENAME
--------------------------------------------
+DG1/ora10g/onlinelog/group_2.264.718794539

/* 
  The view v$logmnr_logs lists the log files that have been added 
  via DBMS_LOGMNR.ADD_LOGFILE.
*/

SQL> begin
       dbms_logmnr.start_logmnr 
       (    
        dictfilename => '/u01/apps/logminer_dir/dictionary.dic',
        options      => dbms_logmnr.print_pretty_sql + 
                        dbms_logmnr.no_sql_delimiter + 
                        dbms_logmnr.ddl_dict_tracking
       );
     end;
     /

PL/SQL procedure successfully completed.

DBMS_LOGMNR.START_LOGMNR

starts a LogMiner session. It populates the view v$logmnr_contents with the contents of the log files in the list we built with DBMS_LOGMNR.ADD_LOGFILE.

v$logmnr_contents is accessible only to the session that started LogMiner, and only until DBMS_LOGMNR.END_LOGMNR is called. Several options can be passed to START_LOGMNR that affect how the data is presented in v$logmnr_contents, e.g.

dbms_logmnr.print_pretty_sql

will format the SQL statements to enhance readability.

dbms_logmnr.no_sql_delimiter

will omit the ";" from the end of the SQL statements, which is useful when the statements are meant to be re-executed from PL/SQL routines.

dbms_logmnr.ddl_dict_tracking

tracks the DDL statements in the log files.
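
Other parameters can be combined in the same call. As a rough sketch, assuming a hypothetical time window, the following would restrict the session to committed transactions made within that window:

begin
   dbms_logmnr.start_logmnr
   (
    dictfilename => '/u01/apps/logminer_dir/dictionary.dic',
    starttime    => to_date('04-06-2010 09:00:00','DD-MM-YYYY HH24:MI:SS'),
    endtime      => to_date('04-06-2010 10:00:00','DD-MM-YYYY HH24:MI:SS'),
    options      => dbms_logmnr.print_pretty_sql +
                    dbms_logmnr.committed_data_only
   );
end;
/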

/* Drop any existing copy of the analysis table before recreating it. */

SQL> DROP TABLE myLogAnalysis;

Table dropped.

SQL> create table myLogAnalysis
     as
     select * from v$logmnr_contents;

Table created.

/* 
  It's always better to copy the contents of v$logmnr_contents into a user table and then
  perform the analysis there, as querying v$logmnr_contents is quite expensive. Moreover,
  the user table can be indexed for better query performance, as shown below.
*/
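
For instance, an index matching the analysis queries could be created on the copy (the index name is just illustrative):

SQL> create index mylog_analysis_idx
     on myLogAnalysis (username, seg_owner, seg_name);

Index created.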

SQL> begin
       DBMS_LOGMNR.END_LOGMNR();
     end;
     /

PL/SQL procedure successfully completed.

DBMS_LOGMNR.END_LOGMNR()

ends the LogMiner session. After that, v$logmnr_contents is no longer accessible, but our user table myLogAnalysis, a copy of v$logmnr_contents, is still available.
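
Because the session was started with no_sql_delimiter, the captured statements can be re-executed from PL/SQL. A minimal sketch (the filter values are illustrative, and it assumes the rows have not been changed again since) that replays the undo SQL for SCOTT's table in reverse SCN order might look like this:

begin
   -- replay the undo SQL newest-first; works because no_sql_delimiter
   -- stripped the trailing ";" from each statement
   for r in ( select sql_undo
              from   myLogAnalysis
              where  seg_owner = 'SCOTT'
              and    seg_name  = 'TEST_LOGMNR'
              and    operation in ('INSERT','UPDATE','DELETE')
              order by scn desc )
   loop
      execute immediate r.sql_undo;
   end loop;
   commit;
end;
/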

set lines 1000
set pages 500
column scn format 999999
column username format a8
column seg_name format a11
column sql_redo format a33
column sql_undo format a33

/* 
   The query below shows the changes made by the user SCOTT, restricted to rows that either
   belong to SCOTT's segments (here TEST_LOGMNR) or have no segment at all, i.e. transaction
   start and commit records.
   The output shows the system change number (SCN) of each change, the segment on which the
   change was made, the SQL statement to redo the change and the SQL statement to undo it.
*/

select scn , seg_name , sql_redo , sql_undo
from   myLogAnalysis
where  username = 'SCOTT'
and    (seg_owner is null OR seg_owner = 'SCOTT');

   SCN SEG_NAME    SQL_REDO                          SQL_UNDO
------ ----------- --------------------------------- ---------------------------------
639968 TEST_LOGMNR create table scott.test_logmnr
                   (id  number,
                    name varchar2(10)
                    )

640039             set transaction read write
640039 TEST_LOGMNR insert into "SCOTT"."TEST_LOGMNR" delete from "SCOTT"."TEST_LOGMNR"
                    values                            where
                       "ID" = 1,                         "ID" = 1 and
                       "NAME" = 'TEST1'                  "NAME" = 'TEST1' and
                                                         ROWID = 'AAAM7vAAEAAAALcAAA'

640041 TEST_LOGMNR insert into "SCOTT"."TEST_LOGMNR" delete from "SCOTT"."TEST_LOGMNR"
                    values                            where
                       "ID" = 2,                         "ID" = 2 and
                       "NAME" = 'TEST2'                  "NAME" = 'TEST2' and
                                                         ROWID = 'AAAM7vAAEAAAALcAAB'

640044             commit
640047             set transaction read write
640047 TEST_LOGMNR update "SCOTT"."TEST_LOGMNR"      update "SCOTT"."TEST_LOGMNR"
                     set                               set
                       "NAME" = 'TEST'                   "NAME" = 'TEST1'
                     where                             where
                       "NAME" = 'TEST1' and              "NAME" = 'TEST' and
                       ROWID = 'AAAM7vAAEAAAALcAAA'      ROWID = 'AAAM7vAAEAAAALcAAA'

640047 TEST_LOGMNR update "SCOTT"."TEST_LOGMNR"      update "SCOTT"."TEST_LOGMNR"
                     set                               set
                       "NAME" = 'TEST'                   "NAME" = 'TEST2'
                     where                             where
                       "NAME" = 'TEST2' and              "NAME" = 'TEST' and
                       ROWID = 'AAAM7vAAEAAAALcAAB'      ROWID = 'AAAM7vAAEAAAALcAAB'

640050             commit
640052             set transaction read write
640058 TEST_LOGMNR delete from "SCOTT"."TEST_LOGMNR" insert into "SCOTT"."TEST_LOGMNR"
                    where                             values
                       "ID" = 1 and                      "ID" = 1,
                       "NAME" = 'TEST' and               "NAME" = 'TEST'
                       ROWID = 'AAAM7vAAEAAAALcAAA'

640058 TEST_LOGMNR delete from "SCOTT"."TEST_LOGMNR" insert into "SCOTT"."TEST_LOGMNR"
                    where                             values
                       "ID" = 2 and                      "ID" = 2,
                       "NAME" = 'TEST' and               "NAME" = 'TEST'
                       ROWID = 'AAAM7vAAEAAAALcAAB'

640066             commit

See also:
Using LogMiner to Analyze Redo Log Files
Using Oracle LogMiner to undo incorrectly committed changes
Oracle 11g Using LogMiner to analyze redo log files