Basic knowledge of DB2 archive logging
For more details, see the official IBM DB2 logging documentation.
DB2 supports two types of logging: circular logging and archive logging.
- Circular logging
Circular logging is the default logging strategy used for a database. In this strategy, once the last primary log file is filled in the log directory, new transactions will be written to the first log file thereby overwriting existing log data. These new transactions will continue to overwrite each old log file in sequence. This method of logging ensures data consistency for all committed transactions so that crash recovery is possible.
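As a rough illustration (a toy model, not DB2 internals), circular logging behaves like a fixed ring of primary log files that is overwritten in sequence once the ring wraps:

```python
# Toy model of circular logging: a fixed number of primary log files
# reused in a ring; the oldest file's data is overwritten once the ring
# wraps. Class and variable names here are illustrative only.

class CircularLog:
    def __init__(self, num_primary):
        self.files = [None] * num_primary  # each slot holds one txn batch
        self.next_slot = 0

    def write(self, txn_batch):
        """Write a batch into the next log file, overwriting old data."""
        overwritten = self.files[self.next_slot]
        self.files[self.next_slot] = txn_batch
        self.next_slot = (self.next_slot + 1) % len(self.files)
        return overwritten  # old data is lost, so no roll-forward past the ring

log = CircularLog(num_primary=3)
for batch in ["t1", "t2", "t3", "t4"]:
    log.write(batch)
print(log.files)  # ['t4', 't2', 't3'] (t1 has been overwritten)
```

This is why circular logging supports crash recovery of committed transactions but not point-in-time roll-forward: once a slot is reused, the old log data is gone.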
- Archival logging
In contrast to circular logging, the archival logging process creates a new log file when the last log file is filled so that future transactions will not overwrite existing log files. When the database is initialized, the system allocates a certain number of primary log files of a specified size in the active log directory. This number is controlled by a database configuration parameter (discussed in the next section). When the primary log files are all full, secondary log files are created on an "as-needed" basis until the maximum number of secondary log files has been created. Once this number is reached, if additional log space is needed, an error is issued indicating no more log files are available and all database activity is stopped.
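The allocation behavior described above can be sketched as follows: primary log files are pre-allocated, secondary files are created on demand, and an error is raised once both sets are exhausted. This is a simplified model under assumed naming, not DB2 code:

```python
# Simplified model of archive-logging file allocation:
# LOGPRIMARY files exist up front, LOGSECOND files are created on
# demand, and when both are exhausted DB2 reports a log-full error.
# The S-numbered file names mimic DB2's log naming for illustration.

def allocate_log_files(files_needed, logprimary, logsecond):
    names = []
    for i in range(files_needed):
        if i < logprimary + logsecond:
            names.append(f"S{i:07d}.LOG")  # a new file; nothing is overwritten
        else:
            raise RuntimeError("log full: no more log files available")
    return names

print(allocate_log_files(4, logprimary=3, logsecond=2))
# allocate_log_files(6, logprimary=3, logsecond=2) would raise the log-full error
```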
With archival logging, it is possible to take an online database backup during which database activity continues to be logged. In the event of a database crash or failure, the database may be restored using a full backup image followed by a roll-forward operation using the archived logs to bring the database to a point-in-time state or to the most recent consistent state by rolling forward to the end of the logs.
Archived logs are of two types:
Online archived logs: These are the log files that reside in the database log directory ('online') and are no longer needed for normal database activity.
Off-line archived logs: These are the log files that have been moved from the database log directory to an off-line storage location (such as a backup server) and are not needed for normal database activity.
How physical log files are stored on disk with archive logging
Five configuration parameters define the folders where logs from different stages are stored. We can use the db2 command below to find all five log folder paths:
db2 get db cfg for db_name
The output shows the five paths, which fall into two groups: online archived logs and off-line archived logs.
**Online archived logs**
- LOGPATH
LOGPATH stores the LOGPRIMARY + LOGSECOND online log files. We can change the number of log files with:
UPDATE DB CFG FOR db_name USING LOGPRIMARY 3
UPDATE DB CFG FOR db_name USING LOGSECOND 2
- MIRRORLOGPATH
If mirrorlogpath is configured, DB2 will create active log files in both the log path and the mirror log path. All log data will be written to both paths. The mirror log path has a duplicated set of active log files, such that if there is a disk error or human error that destroys active log files on one of the paths, the database can still function.
**Off-line archived logs**
- LOGARCHMETH1
You can use this parameter to specify the media type of the primary destination for archived logs.
- LOGARCHMETH2
This parameter specifies the media type of the secondary destination for archived logs.
- FAILARCHPATH
This parameter specifies a path to which DB2 will try to archive log files if they cannot be archived to either the primary or the secondary (if set) archive destination because of a media problem affecting those destinations. The specified path must reference a disk.
How DB2 backs up log files with archive logging
The key points are:
- DB2 stores online logs in LOGPATH and in MIRRORLOGPATH (if set), but the number of online log files is limited.
- When the number of online log files exceeds the configured limit, DB2 archives the older online log files to LOGARCHMETH1 and LOGARCHMETH2.
- If archiving fails, DB2 archives the online log files to FAILARCHPATH instead.
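The fallback among the archive destinations can be sketched like this. This is an assumed, simplified model of the behavior described above; the path names and the `copy` callback are illustrative:

```python
# Sketch of the archiving fallback: DB2 tries LOGARCHMETH1 and
# LOGARCHMETH2; if copying to a destination fails (e.g. a media
# problem), the log file is parked in FAILARCHPATH instead.

def archive_log_file(name, destinations, failarchpath, copy):
    """copy(name, dest) returns True on success, False on media failure."""
    archived_to = []
    for dest in destinations:            # LOGARCHMETH1, LOGARCHMETH2
        if dest and copy(name, dest):
            archived_to.append(dest)
        elif dest:
            copy(name, failarchpath)     # fall back to FAILARCHPATH
            archived_to.append(failarchpath)
    return archived_to

# Usage: simulate LOGARCHMETH2 failing with a media error.
ok = lambda name, dest: dest != "/arch2"
print(archive_log_file("S0000167.LOG", ["/arch1", "/arch2"], "/failarch", ok))
# -> ['/arch1', '/failarch']
```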
How DB2 restores archived logs with archive logging
The key points are:
- DB2 reads log records from the log files in LOGPATH or MIRRORLOGPATH.
- If a required log file is not in LOGPATH or MIRRORLOGPATH, DB2 searches LOGARCHMETH1, LOGARCHMETH2, and FAILARCHPATH. If the file is found, DB2 copies it to LOGPATH and then reads the log records from the copy in LOGPATH.
- In short, no matter which path contains the required archived log files, DB2 can restore or read the log records correctly.
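The retrieval order above can be sketched as a simple search, assuming the behavior described in this section (this is an illustrative model, not DB2 code; paths are represented as sets of file names):

```python
# Sketch of the retrieval order: look in LOGPATH/MIRRORLOGPATH first;
# otherwise search the archive destinations and copy the file back
# into LOGPATH before reading it.

def locate_log_file(name, logpath, mirrorlogpath, archive_paths, copy_to_logpath):
    for online in (logpath, mirrorlogpath):
        if online and name in online:
            return "online"
    for arch in archive_paths:  # LOGARCHMETH1, LOGARCHMETH2, FAILARCHPATH
        if name in arch:
            copy_to_logpath(name, arch)  # retrieved into LOGPATH first
            return "retrieved"
    raise FileNotFoundError(f"{name} not found in any log path")

# Usage: the file lives only in the first archive path, so it is retrieved.
paths = [{"S0000167.LOG"}, set(), set()]
print(locate_log_file("S0000167.LOG", set(), set(), paths, lambda n, a: None))
# prints "retrieved"
```

The FileNotFoundError branch corresponds to the failure scenario discussed in the next section: the required file exists in none of the five paths.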
What situation prevents RAU from reading archived logs
The only scenario is when the required archived log files exist in none of the five log paths. To ensure RAU runs smoothly, I suggest backing up archive logs to at least one off-line log path; LOGARCHMETH1 is recommended.
Test steps for multiple archive paths
- First, install or locate a DB2 instance and make sure you have administrator privileges.
- Set the off-line log paths
>db2
db2 =>connect to db_name
db2 =>UPDATE DATABASE CONFIGURATION USING logarchmeth1 DISK:path1
db2 =>UPDATE DATABASE CONFIGURATION USING logarchmeth2 DISK:path2
- Limit the number of online log files
db2 =>UPDATE DB CFG FOR db_name USING SOFTMAX 200
db2 =>UPDATE DB CFG FOR db_name USING LOGPRIMARY 3
db2 =>UPDATE DB CFG FOR db_name USING LOGSECOND 2
db2 =>BACKUP DATABASE db_name TO path1
db2 =>quit
>db2stop force
>db2start
- Start a RAU instance and configure it to connect to DB2
ra_config pds_host_name,localhost
go
ra_config pds_database_name,db_name
go
ra_config pds_port_number,50000
go
ra_config pds_username,pds_user
go
ra_config pds_password,Sybase123
go
ra_config pds_datasource_name,db_name
go
test_connection PDS
go
ra_admin init
go
trace LTITRACELTL,true
go
ra_config connect_to_rs,false
go
ra_config use_rssd,false
go
- Create a test table in DB2
db2 =>create table chartest(pkey int, cval varchar(20))
- Mark the test table and move the truncation point in RA
pdb_setreptable chartest,mark
go
ra_locator move_truncpt
go
- Write down all log file names in LOGPATH
For example, S0000167 ~ S0000171, with the DB2 current log position in file S0000167.
- Insert mass data into the test table
Inserting mass data will update the online log files in LOGPATH and archive S0000167 ~ S0000171 to LOGARCHMETH1. Keep inserting data until the online log file names are greater than S0000171, then stop. Please use the SQL below, with the "KKKKKK" keyword, as the test SQL:
insert into chartest values(1,'KKKKKK')
It is recommended to use a DB2 procedure or a script to insert the mass data.
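For example, a small script can generate an SQL file full of insert statements, which can then be run with the db2 CLP. This is a minimal sketch; the file name, row count, and `cval` column are assumptions for this test setup:

```python
# Generate a SQL file with mass INSERT statements for the chartest
# table. Adjust row_count until enough log files fill up and archive.

def write_mass_inserts(path, keyword, row_count):
    """Write row_count INSERT statements using the given marker keyword."""
    with open(path, "w") as f:
        for i in range(row_count):
            f.write(f"insert into chartest values({i},'{keyword}');\n")
    return row_count

if __name__ == "__main__":
    n = write_mass_inserts("mass_insert.sql", "KKKKKK", 100000)
    print(f"wrote {n} statements")
```

The generated file can then be executed with something like `db2 -tvf mass_insert.sql` from the OS shell while connected to the test database.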
- Insert mass test data again
Suppose the current log file names are S0000172 ~ S0000176; insert mass test data until all file names in LOGPATH are greater than S0000176. This step just makes sure that many archive files are created. The test SQL is:
insert into chartest values(1,'KKKKKK')
- Insert another batch of mass data into the test table
This step ensures that all archive log files containing the "KKKKKK" keyword are archived to the off-line log paths LOGARCHMETH1 and LOGARCHMETH2. Suppose the current log file names in LOGPATH are S0000177 ~ S0000181; insert mass data until all log file names are greater than S0000181. The test SQL is:
insert into chartest values(1,'HHHHHH')
Now let's look at all the archive log paths: LOGPATH contains S0000182 ~ S0000186, MIRRORLOGPATH also contains S0000182 ~ S0000186 (these are online logs), and LOGARCHMETH1 and LOGARCHMETH2 contain S0000167 ~ S0000181 (these are archived logs).
- Try to restore an off-line archived log (for RAU, this means reading the archived log records)
The online logs S0000182 ~ S0000186 do not contain the keyword 'KKKKKK'. The ra_locator points to S0000167, and some of the archived log files S0000167 ~ S0000181 contain the keyword 'KKKKKK'.
Let's delete all archived log files in LOGARCHMETH1 and check whether S0000167 ~ S0000181 can be restored from LOGARCHMETH2. After deleting all archived log files in LOGARCHMETH1, run the RA command:
resume
go
After resuming RAU, you can see the off-line archived log files being copied to LOGPATH. Then open the LTITRACELL.log file in the RAU instance folder; you will find the keyword 'KKKKKK'.
Run RA command:
suspend
go
- Test with no archived files
Back up LOGARCHMETH2 to a new folder, then delete all archived files in LOGARCHMETH1 and LOGARCHMETH2. Then run the RA command:
resume
go
You will find that RA stops replication due to an error, with no LTL log output.
The error log looks like: LogReader encountered exception with unknown error: <An attempt to read transaction log data failed because the Log containing the requested Log Sequence Number can not be found...
The error occurs because the log locator cannot be matched to any existing online or off-line archived log file.
Summary
We should advise customers to enable LOGARCHMETH1, or both LOGARCHMETH1 and LOGARCHMETH2, to ensure all archive files are archived off-line.