Auditing Past Transactions With Oracle LogMiner
by Mike Hordila

Have you ever had to answer questions like, "This data does not look right. Could we find out who changed it and when? How did they change it? What was there before the change? How do we fix it back?" If so, this article will give you a quick start in the right direction. It really is possible to find out the answers to these questions, and it is not that difficult.

I have used these principles on all types of systems in production today, on Unix (AIX, HP-UX, Sun Solaris, Linux) and Windows (NT, 2000) servers, and on Oracle 8.x, 8i, and 9i. The techniques I'm going to cover require some knowledge of Oracle and some experimentation. However, samples are provided that should help you get started.

Oracle Log Miner

Most DBAs would not want to enable Oracle auditing, as there is a visible impact on space consumption and especially on performance (some authors report a 10 to 20 percent performance loss for significant auditing). However, transaction information is recorded in the redo logs (online and archived) and, starting with version 8.1.5, Oracle supports log mining. LogMiner can be run on the database that produced the redo logs (the source database) or on a separate analyzing (miner) database.

Some restrictions of LogMiner:

      • It is only available in Oracle version 8.1 or later
      • It can only analyze redo log files (online or archived) from Oracle 8.0 or later databases
      • Oracle7 uses a different redo log file format, so that version cannot be log mined
      • Both databases must run on the same hardware platform
      • Both databases must use the same database character set
      • Both databases must use the same database block size
      • The dictionary file can only be created in a directory listed in the UTL_FILE_DIR parameter in the INIT.ORA file (see the example after this list)
      • LogMiner does not support file access across database links, so dictionary files and redo logs must be moved to the machine hosting the analyzing instance
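
For example, if the dictionary file is to be written to C:\TEMP (the directory used later in this article), INIT.ORA on the source database would need a line such as the following, and the instance would have to be restarted for it to take effect:

UTL_FILE_DIR = C:\TEMP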

LogMiner 8i does not support operations on:

      • data types LONG and LOB
      • non-scalar data types
      • simple and nested abstract data types (ADTs)
      • collections (nested tables and VARRAYS)
      • Object Refs
      • Index Organized Tables (IOTs)
      • clustered tables/indexes
      • chained rows
      • direct path inserts, even though such operations are logged

LogMiner 9.2.x can be used with LONG and LOB, but cannot be used with:

      • simple and nested abstract data types (ADTs)
      • collections (nested tables and VARRAYS)
      • Object Refs
      • Index Organized Tables (IOTs)

Oracle9i Log Miner New Features

Enhancements to LogMiner for Oracle9i generated log files include:

      • A new LogMiner Viewer GUI in addition to the command line interface
      • Translating DML associated with Index Clusters
      • Grouping DML statements into completed transactions, returned in the commit SCN order
      • Mining for changes by value
      • Support for chained and migrated rows on redos produced by 9i
      • Support for direct path inserts
      • Using an online dictionary
      • Extracting the data dictionary into the redo log files to seamlessly integrate DDL changes
      • DDL statement tracking on redos produced by 9i
      • Can skip log corruptions
      • Can specify that only committed transactions be displayed
      • Can generate SQL_REDO and SQL_UNDO with primary key information to help the DBA undo changes

Preparing The Log Miner

LogMiner consists of the LogMiner (dbms_logmnr) package, with three procedures, and the Dictionary (dbms_logmnr_d) package. These are normally built by catproc.sql, which executes the following scripts:

      • $ORACLE_HOME/rdbms/admin/dbmslogmnrd.sql
      • $ORACLE_HOME/rdbms/admin/dbmslogmnr.sql
      • $ORACLE_HOME/rdbms/admin/prvtlogmnr.plb

and since 8.1.6:

      • $ORACLE_HOME/rdbms/admin/dbmslmd.sql
      • $ORACLE_HOME/rdbms/admin/dbmslm.sql
      • $ORACLE_HOME/rdbms/admin/prvtlm.plb

A few views are also created:

V$LOGMNR_CONTENTS — the contents of the redo log files being analyzed, used by the DBA for auditing

V$LOGMNR_DICTIONARY — the dictionary file in use

V$LOGMNR_LOGS — which redo log files are being analyzed

V$LOGMNR_PARAMETERS — current parameter settings for LogMiner

Also, depending on the specific version, a few more objects related to the LogMiner system are created. For example, the view v$logmnr_interesting_cols is created by $ORACLE_HOME/rdbms/admin/dbmslmd.sql and is for internal use by LogMiner.

For Oracle 8.0.x, this system is not created, so the DBA has to run manually one of the dictionary scripts (dbmslogmnrd.sql or dbmslmd.sql) or all of the scripts. Simply ignore the errors that refer to creating objects other than the dictionary package. Even later, some errors may be generated while running the package for objects like SUBPARTCOL$, TABSUBPART$, INDSUBPART$, TABCOMPART$ and INDCOMPART$. Ignore these errors as well.
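
As a rough sketch, the dictionary script can be run by hand from SQL*Plus on the source database; connect as a DBA user (INTERNAL or SYS on these older releases) and run the script that matches your version (the ? below is the SQL*Plus shorthand for $ORACLE_HOME):

connect internal
-- 8.1.6 and later script name:
@?/rdbms/admin/dbmslmd.sql
-- earlier releases use the older script name instead:
-- @?/rdbms/admin/dbmslogmnrd.sql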

The Dictionary File

The dictionary file is produced in order to convert object ID numbers to object names. It is not required, but is recommended. Without it the equivalent SQL statements will use Oracle internal object IDs for the object name and present column values as hex data. For example, instead of the SQL statement:

INSERT INTO emp(name, salary) VALUES ('John Doe', 50000);

 LogMiner will display:

insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'), hextoraw('c306'));

The contents of a dictionary file look like this:

CREATE_TABLE DICTIONARY_TABLE (
  DB_NAME VARCHAR2(9),
  DB_ID NUMBER(20),
  DB_CREATED VARCHAR2(20),
  DB_DICT_CREATED VARCHAR2(20),
  DB_DICT_SCN NUMBER(22),
  DB_THREAD_MAP NUMBER(22),
  DB_RESETLOGS_CHANGE# NUMBER(22),
  DB_RESETLOGS_TIME VARCHAR2(20),
  DB_VERSION_TIME VARCHAR2(20),
  DB_REDO_TYPE_ID VARCHAR2(8),
  DB_REDO_RELEASE VARCHAR2(60),
  DB_CHARACTER_SET VARCHAR2(30),
  DB_VERSION VARCHAR2(64),
  DB_STATUS VARCHAR2(64),
  DB_DICT_MAXOBJECTS NUMBER(22),
  DB_DICT_OBJECTCOUNT NUMBER(22));
INSERT_INTO DICTIONARY_TABLE VALUES
  (MHD1, 41190674, 12/24/2002 23:36:15, 03/16/2003 12:30:24, , , 1,
   12/24/2002 16:05:38, 12/24/2002 23:36:15, REDODATA, 8.0.5.0.0,
   WE8ISO8859P1, 8.0.5.1.0, Production, 2788, 2697);

The dictionary file can be converted into a SQL script by globally replacing the underscores with spaces.

CREATE_TABLE —> CREATE TABLE; CREATE_INDEX —> CREATE INDEX; INSERT_INTO —> INSERT INTO; and so on (there are more details in the header comments in the dictionary file itself).

Also, see Oracle Note 77638.1 on how to build a package and a LogMiner Place Holder Columns file.

Running The Log Miner

We have used four scripts to demonstrate the concepts in this paper. They are in the file and the logs in .

First, on the source database, we create some transactions like:

INSERT INTO table1 ( rec_id, emp_last_name, emp_first_name, salary )
VALUES ( '03', 'LASTTHREE', 'FIRSTTHREE', 10000.10 );

Then we update one row:

UPDATE table1 SET salary = 20000.10 WHERE rec_id = '03';

Then we build the dictionary file:

execute dbms_logmnr_d.build(dictionary_filename => 'dictionary.920.ora', -
                            dictionary_location => 'C:\TEMP');

Now, we copy the dictionary file and the online and archived redo log files from the period of time that interests us to the analyzing database machine. The analyzing database does not have to be merely mounted; it can be open, in which case I normally just copy V$LOGMNR_CONTENTS into a regular table.

Then, on the analyzing database, we load the redo logs:

execute dbms_logmnr.add_logfile(logfilename => 'C:\TEMP\redo01.log', -
                                options => dbms_logmnr.new);

(Here, if you have the wrong redo logs, you can get some errors, like "archived log does not contain any redo". Most of them can be ignored.)
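
If more than one redo log file covers the period of interest, additional files can be appended to the same LogMiner list with the addfile option; the archived log name below is only a placeholder. The registered files can then be checked in V$LOGMNR_LOGS:

execute dbms_logmnr.add_logfile(logfilename => 'C:\TEMP\arch_1234.arc', -
                                options => dbms_logmnr.addfile);

select filename from v$logmnr_logs;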

And then we start the logminer:

execute dbms_logmnr.start_logmnr(dictfilename => 'C:\TEMP\dictionary.920.ora', -
                  starttime => to_date('18-MAR-2003 00:00:00', 'DD-MON-YYYY HH24:MI:SS'), -
                  endtime => to_date('18-MAR-2003 23:59:59', 'DD-MON-YYYY HH24:MI:SS'));

(Here, if you have the wrong redo logs, you can get some errors, like "archived log out of range".)
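
On Oracle9i, the same call also accepts options; for example, a sketch using dbms_logmnr.committed_data_only (the "only committed transactions" feature listed earlier), with the same hypothetical dictionary file and time window:

execute dbms_logmnr.start_logmnr(dictfilename => 'C:\TEMP\dictionary.920.ora', -
                  starttime => to_date('18-MAR-2003 00:00:00', 'DD-MON-YYYY HH24:MI:SS'), -
                  endtime => to_date('18-MAR-2003 23:59:59', 'DD-MON-YYYY HH24:MI:SS'), -
                  options => dbms_logmnr.committed_data_only);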

Now, since the database is open, I can do my table copy. This is useful for thorough analysis, as V$LOGMNR_CONTENTS is very slow and can contain millions of rows on a busy production system.

create table SYSTEM.LOGMINER_CONTENTS_920 tablespace TOOLS 
as select * from v$logmnr_contents;

Then we can finish:

execute dbms_logmnr.add_logfile(logfilename => 'C:\TEMP\redo01.log', -
                                options => dbms_logmnr.removefile);
execute dbms_logmnr.end_logmnr;

And now, we can use our own table. We can create a few indexes if we really need to work with it a lot.
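
For instance, an index like the following could speed up the kind of queries shown below; the index name and column list are only suggestions, so adapt them to the predicates you actually use:

create index SYSTEM.LOGMINER_CONTENTS_920_IX1
  on SYSTEM.LOGMINER_CONTENTS_920 (username, seg_owner, seg_name)
  tablespace TOOLS;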

select count(*) from SYSTEM.LOGMINER_CONTENTS_920;

  COUNT(*)
----------
     11037

select to_char(timestamp, 'DD-MON-YYYY HH24:MI:SS') timestamp,
       scn, log_id, username, seg_owner, seg_name, seg_type, operation,
       sql_redo
  from SYSTEM.LOGMINER_CONTENTS_920
 where username = 'TESTX'
   and seg_owner = 'TESTX'
   and seg_name = 'TABLE1';

TIMESTAMP                   SCN     LOG_ID USERNAME
-------------------- ---------- ---------- -----------------
SQL_REDO
------------------------------------------------------------
18-MAR-2003 20:47:10     181209         10 TESTX
CREATE TABLE table1 (
  rec_id VARCHAR2(12) NOT NULL,
  emp_last_name VARCHAR2(30),
  emp_first_name VARCHAR2(30),
  salary NUMBER(8,2) )
  TABLESPACE tools;

18-MAR-2003 20:47:16     181293         10 TESTX
insert into "TESTX"."TABLE1"("REC_ID","EMP_LAST_NAME","EMP_FIRST_NAME", ...

18-MAR-2003 20:47:30     181409         11 TESTX
update "TESTX"."TABLE1" set "SALARY" = 20000,1 where "SALARY" = 1000 ...

Some Frequent Questions Before Oracle9i

How do you know in versions earlier than 9i that multiple statements belong to the same transaction? You can check USERNAME (or SESSION_INFO) together with the transaction ID columns XIDUSN (rollback segment number), XIDSLT (slot number) and XIDSQN (sequence number): rows that share the same XIDUSN, XIDSLT and XIDSQN belong to the same transaction. The first and last rows of a transaction are also easy to spot: operation START with sql_redo "set transaction read write", and operation COMMIT with sql_redo "commit".
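
A possible query to pull the pieces of each transaction back together, using the copied table from the previous section (XIDUSN, XIDSLT and XIDSQN are regular V$LOGMNR_CONTENTS columns, so they are present in the copy as well):

select xidusn, xidslt, xidsqn,
       to_char(timestamp, 'DD-MON-YYYY HH24:MI:SS') timestamp,
       operation, sql_redo
  from SYSTEM.LOGMINER_CONTENTS_920
 where username = 'TESTX'
 order by xidusn, xidslt, xidsqn, scn;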

How do you know in versions earlier than 9i that a table was dropped? DROP TABLE will generate DELETE operations on COL$, OBJ$ and TAB$.

select seg_name, operation, scn, count(*)
  from v$logmnr_contents
 where operation != 'INTERNAL'
 group by seg_name, operation, scn
 order by scn;

SEG_NAME        OPERATION                               SCN   COUNT(*)
--------------- -------------------------------- ---------- ----------
COL$            DELETE                              5012065          3
OBJ$            DELETE                              5012065          1
TAB$            DELETE                              5012065          1
                START                               5012065          1
SEG$            UPDATE                              5012065          1

How do you repopulate, in versions earlier than 9i, a table that had chained rows? DML on chained rows does appear in V$LOGMNR_CONTENTS, but the sql_redo and sql_undo columns are NULL for INSERT and UPDATE and contain "Unsupported" for DELETE. Other columns (including data_blk#, data_obj#, row_id) can be used to identify the chained rows, but we cannot determine the SQL redo/undo statement. So, note that v$logmnr_contents.sql_redo CANNOT be used to completely repopulate a table that ever contained chained rows.
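
A rough way to locate such entries in the copied table, keeping in mind that the exact SQL_REDO contents for unsupported operations can vary between versions, so this is only a starting point:

select scn, operation, data_obj#, data_blk#, row_id
  from SYSTEM.LOGMINER_CONTENTS_920
 where sql_redo is null
    or sql_redo = 'Unsupported';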

Log Miner Procedures Summary

      1. Many people make the dictionary file creation part of the daily backup procedures (a possible way to schedule this is sketched after this list)
      2. Do not run the log analysis on the production database, as it takes a lot of I/O and PGA
      3. Accessing V$LOGMNR_CONTENTS is very slow: a full scan can take 10 to 20 hours, physically reading the redo log files and using a lot of PGA (not SGA). The analyzing database can be merely mounted. V$LOGMNR_CONTENTS can contain millions of rows on a busy production system
      4. Some people recommend using a standby database in mounted or read-only state to analyze the redos from the primary. This will read the redo files directly, so it will be very slow, as described above.
      5. I would rather recommend using an open database, even on a workstation, and copying the view V$LOGMNR_CONTENTS to a regular table. If you need to, you can build indexes on it. This will also avoid a number of reported problems and crashes caused by running directly against V$LOGMNR_CONTENTS.
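
As a minimal sketch of item 1, the dictionary build could be scheduled with DBMS_JOB; the file name, directory and 22:00 schedule are just assumptions to be adapted to your own backup procedures:

variable jobno number
begin
  dbms_job.submit(:jobno,
    'dbms_logmnr_d.build(dictionary_filename => ''dictionary.ora'', dictionary_location => ''C:\TEMP'');',
    sysdate,
    'trunc(sysdate) + 1 + 22/24');  -- next run: tomorrow at 22:00, then nightly
  commit;
end;
/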

There is not a lot of literature on Oracle LogMiner, and some of it can be confusing, but things are not that complicated. You can avoid a lot of aggravation by just being well organized. For a list of Frequently Asked Questions and tips on running my packages, visit .

--

Mike Hordila is a DBA OCP v.7, 8, 8i, 9i, and has his own Oracle consulting company, DBActions Inc., in Toronto, Ontario. He specializes in tuning, automation, security, and very large databases. Mike has articles in Oracle Magazine Online, Oracle Internals and DBAzine.com. Updated versions of his work are available on . He is also a technical editor with Hungry Minds (formerly IDG Books).