My Blog

January 5, 2009

Automatic Workload Repository (AWR) & ADDM(Automatic Database Diagnostic Monitor)

Filed under: Oracle — by enotebook @ 4:30 pm

Oracle has provided many performance gathering and reporting tools over the years. Originally, the UTLBSTAT/UTLESTAT scripts were used to monitor performance metrics. Oracle8i introduced the Statspack functionality, which Oracle9i extended. In Oracle 10g, Statspack has evolved into the Automatic Workload Repository (AWR).

bash-3.00$ pwd
/u01/app/oracle/product/10.2.0/db_1/rdbms/admin
bash-3.00$ ls |grep awr
awrddinp.sql
awrddrpi.sql
awrddrpt.sql
awrextr.sql
awrinfo.sql
awrinpnm.sql
awrinput.sql
awrload.sql
awrrpt.sql
awrrpt_1_142_143.html
awrrpti.sql
awrsqrpi.sql
awrsqrpt.sql
catawrtb.sql
catawrvw.sql
dbmsawr.sql
prvtawr.plb
bash-3.00$ sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.2.0 - Production on Mon Jan 5 10:58:51 2009

Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> @?/rdbms/admin/awrrpt.sql

Current Instance
~~~~~~~~~~~~~~~~

DB Id    DB Name      Inst Num Instance
----------- ------------ -------- ------------
1202538587 ORCL                1 orcl1

Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type:

Type Specified:  html

Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

DB Id     Inst Num DB Name      Instance     Host
------------ -------- ------------ ------------ ------------
1202538587        2 ORCL         orcl2        solaris02
* 1202538587        1 ORCL         orcl1        solaris01

Using 1202538587 for database Id
Using          1 for instance number

Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed.  Pressing <return> without
specifying a number lists all completed snapshots.

Enter value for num_days: 1

Listing the last day's Completed Snapshots

Snap
Instance     DB Name        Snap Id    Snap Started    Level
------------ ------------ --------- ------------------ -----
orcl1        ORCL               134 05 Jan 2009 00:00      1
                               135 05 Jan 2009 01:00      1
                               136 05 Jan 2009 02:00      1
                               137 05 Jan 2009 03:00      1
                               138 05 Jan 2009 04:00      1
                               139 05 Jan 2009 05:00      1
                               140 05 Jan 2009 06:00      1
                               141 05 Jan 2009 07:00      1
                               142 05 Jan 2009 08:00      1
                               143 05 Jan 2009 09:00      1
                               144 05 Jan 2009 10:00      1
                               145 05 Jan 2009 11:00      1

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 144
Begin Snapshot Id specified: 144

Enter value for end_snap: 145
Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_1_144_145.html.  To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name:

Report written to awrrpt_1_144_145.html
SQL>

Now you can open awrrpt_1_144_145.html to see detailed information similar to what you would previously have generated with Statspack.
As an optional part of the Oracle database performance tuning process, we can use ADDM (Automatic Database Diagnostic Monitor) to check the database by running the script @?/rdbms/admin/addmrpt.sql.
The Automatic Database Diagnostic Monitor (ADDM) analyzes data in the Automatic Workload Repository (AWR) to identify potential performance bottlenecks. For each identified issue it locates the root cause and provides recommendations for correcting the problem. An ADDM analysis task is performed, and its findings and recommendations stored in the database, every time an AWR snapshot is taken, provided the STATISTICS_LEVEL parameter is set to TYPICAL or ALL. The ADDM analysis includes:

* CPU load
* Memory usage
* I/O usage
* Resource intensive SQL
* Resource intensive PL/SQL and Java
* RAC issues
* Application issues
* Database configuration issues
* Concurrency issues
* Object contention
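
In 10g, AWR snapshots are taken automatically every hour by default and retained for seven days. Both settings can be adjusted through the DBMS_WORKLOAD_REPOSITORY package, which can also take an ad-hoc snapshot, e.g. before and after a test workload. A minimal sketch (the 30-minute interval and 7-day retention are illustrative values, not recommendations):

SQL> EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
SQL> EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 30, retention => 10080);

Both arguments to MODIFY_SNAPSHOT_SETTINGS are expressed in minutes (10080 minutes = 7 days).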

For more details, refer to following links:
http://www.oracle.com/technology/pub/articles/10gdba/week6_10gdba.html
http://www.oracle-base.com/articles/10g/AutomaticWorkloadRepository10g.php
http://www.oracle.com/technology/oramag/oracle/04-may/o34tech_talking.html
http://www.oracle-base.com/articles/10g/AutomaticDatabaseDiagnosticMonitor10g.php

Using RMAN to delete the backups of database and archivelog

Filed under: Oracle — by enotebook @ 4:30 pm

This is a simple test of using RMAN to delete backups of the database and archived logs.

RMAN> backup database tag 'dbfull0812';

Starting backup at 04-JAN-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf
input datafile fno=00003 name=/u01/app/oracle/oradata/orcl/sysaux01.dbf
.
.
.
input datafile fno=00004 name=/u01/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting piece 1 at 04-JAN-09
channel ORA_DISK_1: finished piece 1 at 04-JAN-09
piece handle=/u01/app/oracle/flash_recovery_area/ORCL/backupset/2009_01_04/o1_mf_nnndf_DBFULL0812_4p0cb35n_.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:03:36
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 04-JAN-09
channel ORA_DISK_1: finished piece 1 at 04-JAN-09
piece handle=/u01/app/oracle/flash_recovery_area/ORCL/backupset/2009_01_04/o1_mf_ncsnf_DBFULL0812_4p0cjwr6_.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:04
Finished backup at 04-JAN-09

RMAN> list backup of database;

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
318     Full    1G         DISK        00:04:01     01-DEC-08
BP Key: 318   Status: AVAILABLE  Compressed: NO  Tag: DBFULL
Piece Name: /u01/app/orabackup/data/dbfull_a7k15m6q_327_1
List of Datafiles in backup set 318
File LV Type Ckp SCN    Ckp Time  Name
---- -- ---- ---------- --------- ----
1       Full 51602730   01-DEC-08 /u01/app/oracle/oradata/orcl/system01.dbf
2       Full 51602730   01-DEC-08 /u01/app/oracle/oradata/orcl/undotbs01.dbf
.
.
.

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
418     Full    1G         DISK        00:03:30     04-JAN-09
BP Key: 418   Status: AVAILABLE  Compressed: NO  Tag: DBFULL0812
Piece Name: /u01/app/oracle/flash_recovery_area/ORCL/backupset/2009_01_04/o1_mf_nnndf_DBFULL0812_4p0cb35n_.bkp
List of Datafiles in backup set 418
File LV Type Ckp SCN    Ckp Time  Name
---- -- ---- ---------- --------- ----
1       Full 53779751   29-DEC-08 /u01/app/oracle/oradata/orcl/system01.dbf
2       Full 53779751   29-DEC-08 /u01/app/oracle/oradata/orcl/undotbs01.dbf
.
.
.

RMAN> delete backup of database tag 'dbfull0812';

using channel ORA_DISK_1

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
418     418     1   1   AVAILABLE   DISK        /u01/app/oracle/flash_recovery_area/ORCL/backupset/2009_01_04/o1_mf_nnndf_DBFULL0812_4p0cb35n_.bkp

Do you really want to delete the above objects (enter YES or NO)? yes
deleted backup piece
backup piece handle=/u01/app/oracle/flash_recovery_area/ORCL/backupset/2009_01_04/o1_mf_nnndf_DBFULL0812_4p0cb35n_.bkp recid=418 stamp=675257411
Deleted 1 objects

RMAN> list backup of database;

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
318     Full    1G         DISK        00:04:01     01-DEC-08
BP Key: 318   Status: AVAILABLE  Compressed: NO  Tag: DBFULL
Piece Name: /u01/app/orabackup/data/dbfull_a7k15m6q_327_1
List of Datafiles in backup set 318
File LV Type Ckp SCN    Ckp Time  Name
---- -- ---- ---------- --------- ----
1       Full 51602730   01-DEC-08 /u01/app/oracle/oradata/orcl/system01.dbf
2       Full 51602730   01-DEC-08 /u01/app/oracle/oradata/orcl/undotbs01.dbf
.
.
.

RMAN> list backup;

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
318     Full    1G         DISK        00:04:01     01-DEC-08
BP Key: 318   Status: AVAILABLE  Compressed: NO  Tag: DBFULL
Piece Name: /u01/app/orabackup/data/dbfull_a7k15m6q_327_1
List of Datafiles in backup set 318
File LV Type Ckp SCN    Ckp Time  Name
---- -- ---- ---------- --------- ----
1       Full 51602730   01-DEC-08 /u01/app/oracle/oradata/orcl/system01.dbf
2       Full 51602730   01-DEC-08 /u01/app/oracle/oradata/orcl/undotbs01.dbf
.
.
.

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
319     Full    4M         DISK        00:00:02     01-DEC-08
BP Key: 319   Status: AVAILABLE  Compressed: NO  Tag: DBFULL
Piece Name: /u01/app/orabackup/data/dbfull_a8k15meh_328_1
Controlfile Included: Ckp SCN: 51602878     Ckp time: 01-DEC-08
SPFILE Included: Modification time: 03-NOV-08

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
320     Full    4M         DISK        00:00:00     01-DEC-08
BP Key: 320   Status: AVAILABLE  Compressed: NO  Tag: CTL
Piece Name: /u01/app/orabackup/data/ctl_329_1_672324058
Controlfile Included: Ckp SCN: 51602889     Ckp time: 01-DEC-08

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
419     Full    4M         DISK        00:00:02     04-JAN-09
BP Key: 419   Status: AVAILABLE  Compressed: NO  Tag: DBFULL0812
Piece Name: /u01/app/oracle/flash_recovery_area/ORCL/backupset/2009_01_04/o1_mf_ncsnf_DBFULL0812_4p0cjwr6_.bkp
Controlfile Included: Ckp SCN: 53780052     Ckp time: 29-DEC-08
SPFILE Included: Modification time: 04-JAN-09

RMAN> delete noprompt backup of archivelog;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01005: syntax error: found ";": expecting one of: "all, from, high, like, logseq, low, scn, sequence, time, until"
RMAN-01007: at line 1 column 37 file: standard input

RMAN>

RMAN> delete noprompt backup of archivelog from logseq 3000;

using channel ORA_DISK_1

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
321     321     1   1   AVAILABLE   DISK        /u01/app/orabackup/data/arch_aak15mf4_330_1
323     323     1   1   AVAILABLE   DISK        /u01/app/orabackup/data/arch_ack15mfc_332_1
.
.
.
backup piece handle=/u01/app/orabackup/data/arch_aak15mf4_330_1 recid=321 stamp=672324069
deleted backup piece
backup piece handle=/u01/app/orabackup/data/arch_ack15mfc_332_1 recid=323 stamp=672324077
deleted backup piece
.
.
.
Deleted 59 objects
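
For housekeeping beyond this test, two commonly used variants are deleting archivelog backups older than a given window, and letting the configured retention policy decide what is removable; a sketch, with the 7-day window purely illustrative:

RMAN> delete noprompt backup of archivelog all completed before 'sysdate-7';
RMAN> delete noprompt obsolete;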

iSCSI storage with zvols – setup iSCSI on Solaris 10

Filed under: Unix — by enotebook @ 4:25 pm

Solaris 10 supports iSCSI storage via ZFS; the following shows the process of setting up iSCSI storage with zvols.

1. Create a pool on the server and share it over iSCSI.
bash-3.00# zpool create iscsistore c0t1d0
bash-3.00# zfs create -s -V 30gb iscsistore/zvol
bash-3.00# zfs set shareiscsi=on iscsistore/zvol
bash-3.00# iscsitadm list target -v
Target: iscsistore/zol
iSCSI Name: iqn.1986-03.com.sun:02:04342d14-f21d-c534-e00c-a417327804e7
Alias: iscsistore/zol
Connections: 1
Initiator:
iSCSI Name: iqn.1986-03.com.sun:01:4afacef3ffff.494a1fa4
Alias: unknown
ACL list:
TPGT list:
LUN information:
LUN: 0
GUID: 01000013728528c000002a00494b45bd
VID: SUN
PID: SOLARIS
Type: disk
Size:   30G
Backing store: /dev/zvol/rdsk/iscsistore/zol
Status: online
bash-3.00#

2. On the client, we need to discover the iSCSI share (172.16.113.146 is the IP of the server):
bash-3.00# iscsiadm list discovery
Discovery:
Static: disabled
Send Targets: enabled
iSNS: disabled

bash-3.00# iscsiadm modify discovery --sendtargets enable
bash-3.00# iscsiadm add discovery-address 172.16.113.146
bash-3.00# svcadm enable network/iscsi_initiator
bash-3.00# iscsiadm list target
bash-3.00# iscsiadm list target
Target: iqn.1986-03.com.sun:02:04342d14-f21d-c534-e00c-a417327804e7
Alias: iscsistore/zol
TPGT: 1
ISID: 4000002a0000
Connections: 1
bash-3.00#

3. Now we can format the shared iSCSI storage as needed on the client:
bash-3.00# format
Searching for disks…done

AVAILABLE DISK SELECTIONS:
0. c1d0 <DEFAULT cyl 10526 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
1. c2t01000013728528C000002A00494B45BDd0 <DEFAULT cyl 3913 alt 2 hd 255 sec 63>
/scsi_vhci/disk@g01000013728528c000002a00494b45bd
Specify disk (enter its number): 1
selecting c2t01000013728528C000002A00494B45BDd0
[disk formatted]

FORMAT MENU:
disk       – select a disk
type       – select (define) a disk type
partition  – select (define) a partition table
current    – describe the current disk
format     – format and analyze the disk
fdisk      – run the fdisk program
repair     – repair a defective sector
label      – write label to the disk
analyze    – surface analysis
defect     – defect list management
backup     – search for backup labels
verify     – read and display labels
save       – save new disk/partition definitions
inquiry    – show vendor, product and revision
volname    – set 8-character volume name
!<cmd>     – execute <cmd>, then return
quit
format> q
bash-3.00#
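
As a side note (not part of the original test), the discovered LUN can also be handed straight to ZFS on the client instead of being sliced with format; a minimal sketch using the device name from the listing above:

bash-3.00# zpool create clientpool c2t01000013728528C000002A00494B45BDd0
bash-3.00# zfs list clientpool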
Done.

January 4, 2009

Hello world!

Filed under: Uncategorized — by enotebook @ 7:16 am

Welcome to WordPress.com. This is your first post. Edit or delete it and start blogging!

December 11, 2008

Set up Data Guard on Linux without shutting down the primary database

Filed under: Oracle — by enotebook @ 9:26 am

This scenario assumes that you already have a database running in archivelog mode and need to upgrade this single database to a Data Guard solution. The following are the steps to achieve this.

1. Prepare for the standby database: install Linux and the Oracle database software, using the same edition as the primary database.

2. Enable force logging on primary database

SQL> ALTER DATABASE FORCE LOGGING;

3. Configure the standby redo logs (this only needs to be done on the primary database, since the steps below use RMAN to duplicate the standby database).

SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/oradata/orcl/standbyredo01.log') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/oradata/orcl/standbyredo02.log') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/oradata/orcl/standbyredo03.log') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/oradata/orcl/standbyredo04.log') SIZE 50M;

Be aware of three points:
* Ensure log file sizes are identical on the primary and standby databases.
* Determine the appropriate number of standby redo log file groups using the formula: (maximum number of logfiles for each thread + 1) * maximum number of threads.
* Verify related database parameters and settings, such as the values of MAXLOGFILES and MAXLOGMEMBERS.
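
To make the formula concrete: a single-instance primary (one thread) with three online redo log groups would need (3 + 1) * 1 = 4 standby redo log groups, which matches the four standby log groups added in step 3 above.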

4. Set Primary Database Initialization Parameters (refer to the parameters shown in step 6).
Assuming the primary database uses an SPFILE and we cannot shut it down, we use ALTER SYSTEM SET parameter=value SCOPE=BOTH to change the initialization parameters. For example:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(orcl,orclstb)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=/u01/arch/dg1 VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=orclstb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orclstb' SCOPE=BOTH;
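
To confirm the destinations are usable after these changes, a quick sanity check against v$archive_dest can help (a suggested check, not part of the original steps):

SQL> SELECT dest_id, status, destination, error FROM v$archive_dest WHERE dest_id IN (1, 2);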

5. Create a pfile and a standby control file on the primary database, copy them to the standby server, and rename them accordingly.

SQL> CREATE PFILE='/u01/pfile.ora' FROM SPFILE;
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/u01/stb.ctl';

6. Modify the pfile on the standby database like the following (copied from the Oracle database manual, which uses the example primary/standby names chicago/boston):

DB_NAME=chicago
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/boston/control1.ctl', '/arch2/boston/control2.ctl'
DB_FILE_NAME_CONVERT='chicago','boston'
LOG_FILE_NAME_CONVERT=
'/arch1/chicago/','/arch1/boston/','/arch2/chicago/','/arch2/boston/'

LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
LOG_ARCHIVE_DEST_1=
'LOCATION=/arch1/boston/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2=
'SERVICE=chicago LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=chicago'

LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=chicago
FAL_CLIENT=boston

7. Back up the primary database and archived logs using RMAN, and copy the backups to the standby server with the same directory structure.

RMAN> BACKUP DATABASE FORMAT '/u01/backup/data.%U.rman' PLUS ARCHIVELOG FORMAT '/oracle/oradata/rmanbackup/ARCH.%U.rman';

8. Start up the standby database instance without mounting it.
SQL> STARTUP NOMOUNT;

9. Use RMAN to duplicate the standby database (run this operation from the primary server).

oracle@dg1~$ rman target / auxiliary sys/change_on_install@STANDBY

RMAN> duplicate target database for standby dorecover nofilenamecheck;

Please be aware that you may encounter an error while doing this: ORA-12528: TNS:listener: all appropriate instances are blocking new connections. It occurs because only the auxiliary instance is running: an instance registers with the listener through its PMON process, and PMON cannot register the database service until the instance is at least mounted, so until then nothing is registered with the listener and the instance shows as BLOCKED.
To avoid this issue, use the Oracle 10g (UR=A) feature:
tnsnames.ora
————
SAIRAM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = apadhi-idc)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = SAIRAM)(UR=A)
    )
  )

10. Bring up standby database to managed recovery

SQL> alter database recover managed standby database disconnect from session;
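
Once managed recovery is running, one way to verify that redo is being shipped and applied is to query v$archived_log on the standby (a suggested sanity check, not part of the original steps):

SQL> SELECT sequence#, first_time, applied FROM v$archived_log ORDER BY sequence#;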

Done.

Connection to Auxiliary using connect string failed with ORA-12528

Filed under: Oracle — by enotebook @ 9:25 am

Refer to metalink Note:419440.1
During duplicate database creation, connection to the auxiliary database from the target database server fails with ORA-12528: TNS:listener: all appropriate instances are blocking new connections.

E:\oracle\product\10.2.0\db_1\BIN>rman target / auxiliary sys/sys@duplicate

Recovery Manager: Release 10.2.0.1.0 - Production on Fri Mar 16 00:11:05 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: ORIGINAL (DBID=1451288337)
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04006: error from auxiliary database: ORA-12528: TNS:listener: all appropri
ate instances are blocking new connections

When it fails, add a manual entry for the SID in listener.ora as per Note 301099.1.

However, in some cases you may still receive the same error on connect.

Cause
This is because only the auxiliary instance is running. An instance registers with the listener through its PMON process, and PMON cannot register the database service until the instance is at least mounted; until then there is nothing registered with the listener, so connections to the instance are BLOCKED.
Solution

There are two options to solve this problem.

Option 1

Connect to the auxiliary instance using OS authentication (as /) and use password file authentication for the target database, connecting via the listener. For example:

E:\>rman target sys/sys@original auxiliary /

Recovery Manager: Release 10.2.0.1.0 - Production on Thu Mar 15 23:57:08 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: ORIGINAL (DBID=1451288337)
connected to auxiliary database: DUPLICATE (not mounted)

Option 2

Create a static entry for the auxiliary database in the listener.ora file and restart the listener. For example:

Listener.ora
————-

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = SAIRAM)
      (ORACLE_HOME = G:\oracle\product\10.2.0\db_1)
      (SID_NAME = SAIRAM)
    )
  )

Or

Use the Oracle 10g feature:

tnsnames.ora
————

SAIRAM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = apadhi-idc)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = SAIRAM)(UR=A)
    )
  )

This fix is required for administrative connections where we specify (UR=A) in connect data.
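
One small follow-up to Option 2 (my note, not from the Metalink text): after adding the static entry, the listener must re-read listener.ora, and a statically registered service then shows status UNKNOWN in lsnrctl status, which is normal:

E:\>lsnrctl reload
E:\>lsnrctl status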

December 9, 2008

Standby database cannot archive redo log with ORA-16014: no available destinations

Filed under: Oracle — by enotebook @ 4:20 pm

I had newly configured a Data Guard server. It initially ran fine after the configuration was done, but the standby database froze within 10 minutes, with the following errors in the alert log:

ORA-16401: archivelog rejected by RFS
ARCk: Failed to archive thread 1 sequence 27 (4)
ARCH: Archival stopped, error occurred. Will continue retrying
Tue Dec 2 10:20:18 2008
ORACLE Instance orcl – Archival Error
Tue Dec 2 10:20:18 2008
ORA-16014: log 4 sequence# 27 not archived, no available destinations
ORA-00312: online log 4 thread 1: ‘/opt/oracle/product/10gr2/oradata/orcl/standbyredo01.log’
Tue Dec 2 10:20:18 2008
RFS[5]: Archivelog thread 1 sequence 28 cannot be reused
Tue Dec 2 10:20:18 2008
Errors in file /opt/oracle/product/10gr2/admin/orcl/bdump/orcl_arck_1746.trc:
ORA-16014: log 4 sequence# 27 not archived, no available destinations
ORA-00312: online log 4 thread 1: ‘/opt/oracle/product/10gr2/oradata/orcl/standbyredo01.log’
Tue Dec 2 10:20:18 2008
Errors in file /opt/oracle/product/10gr2/admin/orcl/udump/orcl_rfs_1786.trc:
ORA-16401: archivelog rejected by RFS

Apparently, the standby database froze because the standby redo log could not be archived. I suspected that the log_archive_dest_1 parameter was not specified correctly; the following lists some of the system parameters with non-default values on the standby database:

db_unique_name=orcl144
log_archive_config='DG_CONFIG=(orcl145,orcl144)'
log_archive_dest_1='LOCATION=/opt/oracle/arch/archlog144/ VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl144'
log_archive_dest_2='SERVICE=dg145 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl145'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
log_archive_format='%t_%s_%r.arc'
log_archive_max_processes=30
fal_server=dg145
fal_client=dg144
standby_file_management=auto
standby_archive_dest='/opt/oracle/arch/archlog144/'

Yes, that was the problem: the log_archive_dest_1 parameter was specified incorrectly. The value of the VALID_FOR clause should be ALL_LOGFILES instead of ONLINE_LOGFILES, which means that all log files, including the standby redo logs, are archived to the specified location.
The problem was solved after changing the value of log_archive_dest_1 to 'LOCATION=/opt/oracle/arch/archlog144/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl144'.
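
For the record, the fix can be applied online; a sketch assuming the standby uses an spfile:

SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=/opt/oracle/arch/archlog144/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl144' SCOPE=BOTH;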

Regarding the VALID_FOR clause, I have extracted some content from the Oracle Database 10g manual below.

The VALID_FOR attribute is optional. However, Oracle recommends that you define a VALID_FOR attribute for each destination so that your Data Guard configuration operates properly after a role transition.
Note:
Although the (ALL_LOGFILES,ALL_ROLES) keyword pair is the default, it is not appropriate for every destination. For example, if the destination is a logical standby database, which is an open database that is creating its own redo data, the redo data being transmitted by redo transport services could potentially overwrite the logical standby database’s local online redo log files.
To configure these factors for each LOG_ARCHIVE_DEST_n destination, you specify this attribute with a pair of keywords: VALID_FOR=(redo_log_type,database_role):
The redo_log_type keyword identifies the destination as valid for archiving one of the following:
ONLINE_LOGFILE—This destination is valid only when archiving online redo log files.
STANDBY_LOGFILE—This destination is valid only when archiving standby redo log files.
ALL_LOGFILES— This destination is valid when archiving either online redo log files or standby redo log files.
The database_role keyword identifies the role in which this destination is valid for archiving:
PRIMARY_ROLE—This destination is valid only when the database is running in the primary role.
STANDBY_ROLE—This destination is valid only when the database is running in the standby role.
ALL_ROLES—This destination is valid when the database is running in either the primary or the standby role.
If you do not specify the VALID_FOR attribute for a destination, by default, archiving online redo log files and standby redo log files is enabled at the destination, regardless of whether the database is running in the primary or the standby role. This default behavior is equivalent to setting the (ALL_LOGFILES,ALL_ROLES) keyword pair on the VALID_FOR attribute. For example:
LOG_ARCHIVE_DEST_1='LOCATION=/disk1/oracle/oradata/payroll/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
The VALID_FOR attribute enables you to use the same initialization parameter file for both the primary and standby roles.

November 24, 2008

MapReduce

Filed under: Miscellaneous — by enotebook @ 11:15 am

Copied from: http://en.wikipedia.org/wiki/MapReduce
MapReduce is a software framework introduced by Google to support parallel computations over large (multiple petabyte[1]) data sets on clusters of computers. This framework is largely taken from map and reduce functions commonly used in functional programming,[2] although the actual semantics of the framework are not the same.[3]
MapReduce implementations have been written in C++, Java, Python and other languages.

Logical view
The Map and Reduce functions of MapReduce are both defined with respect to data structured in (key, value) pairs. Map takes one pair of data with a type on a data domain, and returns a list of pairs in a different domain:
Map(k1,v1) -> list(k2,v2)
The map function is applied in parallel to every item in the input dataset. This produces a list of (k2,v2) pairs for each call. After that, the MapReduce framework collects all pairs with the same key from all lists and groups them together, thus creating one group for each one of the different generated keys.
The Reduce function is then applied in parallel to each group, which in turn produces a collection of values in the same domain:
Reduce(k2, list (v2)) -> list(v2)
Each Reduce call typically produces either one value v2 or an empty return, though one call is allowed to return more than one value. The returns of all calls are collected as the desired result list.
Thus the MapReduce framework transforms a list of (key, value) pairs into a list of values. This behavior is different from the functional programming map and reduce combination, which accepts a list of arbitrary values and returns one single value that combines all the values returned by map.
It is necessary but not sufficient to have implementations of the map and reduce abstractions in order to implement MapReduce. Furthermore effective implementations of MapReduce require a distributed file system to connect the processes performing the Map and Reduce phases.

Example
The canonical example application of MapReduce is a process to count the appearances of each different word in a set of documents:
map(String name, String document):
  // key: document name
  // value: document contents
  for each word w in document:
    EmitIntermediate(w, 1);

reduce(String word, Iterator partialCounts):
  // key: a word
  // values: a list of aggregated partial counts
  int result = 0;
  for each v in partialCounts:
    result += ParseInt(v);
  Emit(result);

Here, each document is split into words, and each word is counted initially with a “1” value by the Map function, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call to Reduce, thus this function just needs to sum all of its input values to find the total appearances of that word.

Dataflow
The frozen part of the MapReduce framework is a large distributed sort. The hot spots, which the application defines, are:
* an input reader
* a Map function
* a partition function
* a compare function
* a Reduce function
* an output writer
Input reader
The input reader divides the input into 16MB to 128MB splits and the framework assigns one split to each Map function. The input reader reads data from stable storage (typically a distributed file system like Google File System) and generates key/value pairs.
A common example will read a directory full of text files and return each line as a record.
Map function
Each Map function takes a series of key/value pairs, processes each, and generates zero or more output key/value pairs. The input and output types of the map can be (and often are) different from each other.
If the application is doing a word count, the map function would break the line into words and output the word as the key and “1” as the value.
Partition function
The output of all of the maps is allocated to particular reduces by the application’s partition function. The partition function is given the key and the number of reduces and returns the index of the desired reduce.
A typical default is to hash the key and modulo the number of reduces.
Comparison function
The input for each reduce is pulled from the machine where the map ran and sorted using the application’s comparison function.
Reduce function
The framework calls the application’s reduce function once for each unique key in the sorted order. The reduce can iterate through the values that are associated with that key and output 0 or more key/value pairs.
In the word count example, the reduce function takes the input values, sums them and generates a single output of the word and the final sum.
Output writer
The Output Writer writes the output of the reduce to stable storage, usually a distributed file system, such as Google File System.

Distribution and reliability
MapReduce achieves reliability by parceling out a number of operations on the set of data to each node in the network; each node is expected to report back periodically with completed work and status updates. If a node falls silent for longer than that interval, the master node (similar to the master server in the Google File System) records the node as dead, and sends out the node’s assigned work to other nodes. Individual operations use atomic operations for naming file outputs as a double check to ensure that there are not parallel conflicting threads running; when files are renamed, it is possible to also copy them to another name in addition to the name of the task (allowing for side-effects).
The reduce operations operate much the same way, but because of their inferior properties with regard to parallel operations, the master node attempts to schedule reduce operations on the same node, or in the same rack as the node holding the data being operated on; this property is desirable as it conserves bandwidth across the backbone network of the datacenter.
Implementations may not be highly available; in Hadoop, for example, the NameNode is a single point of failure for the distributed filesystem; if the JobTracker fails, all outstanding work is lost.

Uses
MapReduce is useful in a wide range of applications, including: “distributed grep, distributed sort, web link-graph reversal, term-vector per host, web access log stats, inverted index construction, document clustering, machine learning, statistical machine translation…” Most significantly, when MapReduce was finished, it was used to completely regenerate Google’s index of the World Wide Web, and replaced the old ad hoc programs that updated the index and ran the various analyses. [4]
MapReduce’s stable inputs and outputs are usually stored in a distributed file system. The transient data is usually stored on local disk and fetched remotely by the reduces.

Criticism
David DeWitt and Michael Stonebraker, pioneering experts in parallel databases and shared-nothing architectures, have made some controversial assertions about the breadth of problems that MapReduce can be used for. They called its interface too low-level, and questioned whether it really represents the paradigm shift its proponents have claimed it is.[5] They challenge the MapReduce proponents’ claims of novelty, citing Teradata as an example of prior art that has existed for over two decades; they compared MapReduce programmers to Codasyl programmers, noting both are “writing in a low-level language performing low-level record manipulation”.[5] MapReduce’s use of input files and lack of schema support prevents the performance improvements enabled by common database system features such as B-trees and hash partitioning, though projects such as PigLatin and Sawzall are starting to address these problems.[6]

Implementations
* The Google MapReduce framework is implemented in C++ with interfaces in Python and Java.
* The Hadoop project is a free open source Java MapReduce implementation.
* Greenplum is a commercial MapReduce implementation, with support for Python, Perl, SQL and other languages.[7]
* Phoenix [1] is a shared-memory implementation of MapReduce implemented in C.
* MapReduce has also been implemented for the Cell Broadband Engine, also in C. [2]
* MapReduce has been implemented on NVIDIA GPUs (Graphics Processors) using CUDA [3].
* Qt Concurrent is a simplified version of the framework, implemented in C++, used for distributing a task between multiple processor cores.
* CouchDB uses a MapReduce framework for defining views over distributed documents
* Skynet is an open source Ruby implementation of Google’s MapReduce framework
* Disco is an open source MapReduce implementation by Nokia. Its core is written in Erlang and jobs are normally written in Python.
* Aster Data Systems nCluster In-Database MapReduce implements MapReduce inside the database.
* The open-source Hive framework from Facebook (which provides a SQL-like language over files, layered on the open-source Hadoop MapReduce engine.)

References
Specific references:
1. ^ Google spotlights data center inner workings | Tech news blog – CNET News.com
2. ^ “Our abstraction is inspired by the map and reduce primitives present in Lisp and many other functional languages.” -“MapReduce: Simplified Data Processing on Large Clusters”, by Jeffrey Dean and Sanjay Ghemawat; from Google Labs
3. ^ “Google’s MapReduce Programming Model — Revisited” — paper by Ralf Lammel; from Microsoft
4. ^ “How Google Works”. baselinemag.com. “As of October, Google was running about 3,000 computing jobs per day through MapReduce, representing thousands of machine-days, according to a presentation by Dean. Among other things, these batch routines analyze the latest Web pages and update Google’s indexes.”
5. ^ a b David DeWitt; Michael Stonebraker. “MapReduce: A major step backwards”. databasecolumn.com. Retrieved on 2008-08-27.
6. ^ David DeWitt; Michael Stonebraker. “MapReduce II”. databasecolumn.com. Retrieved on 2008-08-27.
7. ^ Parallel Programming in the Age of Big Data
General references:
* Dean, Jeffrey & Ghemawat, Sanjay (2004). “MapReduce: Simplified Data Processing on Large Clusters”. Retrieved Apr. 6, 2005.
* “High-performance analytics”.

November 17, 2008

Oracle 10gR2 installation on Solaris: monitor must be configured to display at least 256 colors

Filed under: Oracle — by enotebook @ 11:27 am

When installing Oracle 10g Release 2 on an Intel x86 box from a remote connection, you may encounter an error after issuing the runInstaller command – something like "monitor must be configured to display at least 256 colors".

-bash-3.00$ /opt/stage/database/runInstaller
Starting Oracle Universal Installer…
Checking installer requirements…
Checking operating system version: must be 5.10. Actual 5.10
Passed
Checking Temp space: must be greater than 250 MB. Actual 1019 MB Passed
Checking swap space: must be greater than 500 MB. Actual 1597 MB Passed
Checking monitor: must be configured to display at least 256 colors^C
>>> Could not execute auto check for display colors using command /usr/openwin/bin/xdpyinfo. Check if the DISPLAY variable is set. Failed <<<<
Some requirement checks failed. You must fulfill these requirements before
continuing with the installation,at which time they will be rechecked.
Continue? (y/n) n
User Selected: No
Exiting Oracle Universal Installer, log for this session can be found at /export/home/oracle/oraInventory/logs/installActions2008-11-17_10-01-33AM.log
-bash-3.00$

The reason is that the client from which you are installing Oracle cannot display the output of the /usr/openwin/bin/xdpyinfo command; it may work when running as the root user, but it fails when the current OS user is oracle.
We can set the DISPLAY variable to solve the problem.
bash-3.00# xhost +                        # execute this as the root user

-bash-3.00$ export DISPLAY=Your_Client_IP_Address:0.0
-bash-3.00$ /usr/openwin/bin/xclock       # check if it works
-bash-3.00$ /opt/stage/database/runInstaller
Starting Oracle Universal Installer…
Checking installer requirements…
Checking operating system version: must be 5.10. Actual 5.10
Passed
Checking Temp space: must be greater than 250 MB. Actual 1018 MB Passed
Checking swap space: must be greater than 500 MB. Actual 1597 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
All installer requirements met.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-11-17_10-14-51AM. Please wait …-bash-3.00$ Oracle Universal Installer, Version 10.2.0.2.0 Production
Copyright (C) 1999, 2006, Oracle. All rights reserved.
No swap devices configured
Oracle Universal Installer, Version 10.2.0.2.0 Production
Copyright (C) 1999, 2006, Oracle. All rights reserved.

Now installation is moving ahead.
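
As an alternative worth mentioning (not from the original session), SSH X11 forwarding avoids xhost and manual DISPLAY settings entirely, assuming sshd on the server allows X11Forwarding; the hostname dbserver here is just a placeholder:

$ ssh -X oracle@dbserver
-bash-3.00$ /opt/stage/database/runInstaller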
Done.

November 12, 2008

DBA Hacker’s guide to Solaris

Filed under: Oracle — by enotebook @ 4:45 pm

(Copied from: http://www.euroboxtec.co.uk/dba_hack.htm)
Not just for DBAs: this is to remind anyone of syntax and options.

1. Handy Solaris commands - HW config, swap, patches etc.
2. Real-time performance monitoring - vmstat, iostat, mpstat etc.
3. Unix commands - find, awk, sed, grep etc.
4. vi editor commands - the 22 most useful vi commands
5. Unix file permissions explained - sounds basic, but you may learn something new here!
6. The Solaris Name Service explained - how hostname and username lookups work

1. Handy Solaris commands
show hardware config (memory, CPUs etc) :
/usr/platform/sun4u/sbin/prtdiag -v |more
/usr/sbin/psrinfo
show kernel settings:
/usr/sbin/sysdef |more
more /etc/system

show list of Solaris patches:
showrev -p |more
show software installed:
pkginfo |more
check swap space:
/usr/sbin/swap -s
check system load average: an uptime more than double the number of CPUs is getting a bit busy
uptime
top CPU users:
/usr/ucb/ps uax |head
top memory users:
/usr/ucb/ps vax |head
check disk space:
df -k (shows all mounted filesystems, including NFS etc.)
df -lk (shows only local filesystems)
df -k -Fufs (shows only local Sun UFS filesystems, i.e. normal hard disks)
df -k . (shows just the filesystem you are currently in)
2. Real-time performance monitoring:

memory usage :
vmstat 5 -look at memory/free field and page/sr field. Ignore the first line output as it’s historical
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr s0 s1 s6 s3 in sy cs us sy id
0 0 83 4456 456 1 431 266 70 167 0 35 6 6 0 2 523 567 31 14 9 76
0 0 62 3588464 46824 0 196 64 0 0 0 0 5 4 0 0 606 9743 882 86 7 7
0 0 62 3587960 42672 1 552 41 1 1 0 0 2 2 0 0 789 5488 1040 84 7 9
0 1 62 3584704 38848 0 471 3 38 38 0 0 5 5 0 1 1426 5270 968 64 9 27
0 0 62 3586464 38456 0 451 0 0 0 0 0 2 2 0 0 929 6039 1265 70 6 24
Also make sure that cpu/us is at least double cpu/sy

disk busy-ness
mpstat 5
(or iostat -c 5 )
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 221 3 544 227 75 582 61 31 28 7 267 18 12 2 68
2 209 2 446 395 178 328 37 31 32 6 299 11 7 2 80
-look at the wt field, this is wait-for-I/O which can be network or disk I/O, and should not be more than 30-40 (percent)

To see individual disk performance and find slow (‘hot’) disks :
iostat -d 5
sd0 sd1 sd6 sd37
kps tps serv kps tps serv kps tps serv kps tps serv
123 6 44 123 6 42 0 0 42 66 2 8
33 1 3 37 1 1 0 0 0 3 0 5
-check the serv column for each disk: this is the disk service time in milliseconds. However iostat by default shows only the first four disks: if you have more use -x and/or -l options :
iostat -xdnl 7 5 (e.g. if you have seven hard disks)
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.4 1.6 3.2 70.4 0.0 0.0 0.0 2.7 0 1 c0t0d0
0.2 1.6 1.6 70.4 0.0 0.0 0.0 3.0 0 1 c0t1d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t6d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t8d0
88.8 19.8 1276.8 158.4 0.0 0.6 0.0 5.7 0 51 c1t9d0
0.0 0.4 0.0 100.8 0.0 0.0 0.0 17.4 0 1 c1t10d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t11d0

3. Unix Commands
at – run a command or script once in the future
bash$ at 0815 tomorrow
at> rm $HOME/zxc
at> (type Control-D)
commands will be executed using /usr/bin/bash
job 984032100.a at Thu Mar 8 06:15:00 2001

awk
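e.g. print selected columns of command output (an illustrative example):
ps -ef | awk '{print $2}'     # print just the PIDs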

cron
crontab -l to see your crontab
crontab -e to edit it (NB but first do EDITOR=vi (or EDITOR=emacs) ; export EDITOR )
Remember fields are: minute hour day month day_of_week (0=Sun)
0-59 0-23 1-31 1-12 0-6
e.g. run a script only on Tuesdays to Fridays, at 10:12 am, 3:12pm, 4:12pm and 5:12pm
12 10,15-17 * * 2-5 /home/oracle/bin/script

du  -show disk space usage
du -k  -show disk usage in kB in all subdirectories below the current one
du -sk  -show just the total of everything below the current dir
du -sk *  -show individual totals for each file or directory in the current dir

find
find things you can’t remember the exact name of :
find /usr/lib -name “*socket*”
find big files (more than 20000 blocks, i.e. about 10MB)
find . -size +20000
find files or directories modified in last day
find . -mtime -1
find files not accessed (read) in 45 days
find /tmp -atime +45
Add -ls to the end of any of the above commands to see a full ls -l listing of the files found.
You can combine multiple options , e.g look for which big files have filled up a filesystem recently
find . -size +20000 -mtime -1 -ls
Or combine options using an “OR” syntax, e.g. find files which were modified either less than 1 week ago or more than 2 weeks ago
find . \( -mtime +14 -o -mtime -7 \) -ls
You can send the output of find to another command: the xargs command is ideal for this, e.g. interactively delete all core files under your $HOME
find ~ -type f -name core |xargs rm -i
or look for a particular word in all your .c files:
find . -name “*.c” |xargs grep -l libsocket

grep and its more flexible and faster cousins egrep and fgrep
Search for either of two strings in all log files:
egrep “error|PROBLEM” *log
case-insensitive search
fgrep -i error *log (will find Error, ERROR, etc.)
just see the filenames where the search text was found
grep -l error *log

sort
-n sorts by number instead of alphabetically
e.g. cd /export/home; du -sk * | sort -nr lists who is using most space in their home directory

tar
e.g. to copy a directory tree, preserving symlinks:
cd fromdir; tar cf - . | (cd todir; tar xpf -)

tr translate characters
convert lower-case to upper:
cat myfile | tr "[a-z]" "[A-Z]"
change colons to newlines:
tr : "\n" < myfile > newfile

who show who is currently logged on and where from

w show what people are doing

other useful Unix commands for you to check out:
cut, paste, split
diff, sdiff, cmp, dircmp
head, tail, tail -f
id, groups
4. vi commands

The basic vi commands:
i      insert before current character
a      append after current char
I      insert at beginning of line
A      append after end of line
x      delete current character
X      backspace-delete (deletes char to left)
dd     delete current line
22dd   delete 22 lines
u      undo the last thing you did
p      paste those 22 lines below the current line
12yy   copy 12 lines into the 'clipboard' for later pasting
Esc    get out of editing (insert/append) mode
:wq    save ('write') and quit (:q! to quit without writing)
vi moving-around Timesavers:

Using h,j,k,l to move left, down, up and right is quicker than using the arrow keys (once you get used to it!).
w         move one word forward
b         move back a word
e         go to end of word
Ctl-F     move Forward a whole page
Ctl-B     move Back a page
0 (zero)  go to beginning of line
$         go to end of line
:242      go to line 242
Ctl-G     see what line you are on

vi modifying things:
cw     change word
3cw    change 3 words
C      change to end-of-line
dw     delete word
D      delete to end-of-line
cc     replace current line with what you type next
r      replace one character
R      endless replace (like over-typing)
o      open/add a new line below the current one
O      open new line above current one
xp     swap two characters (e.g. when you make a typo)
/bob   search forward for 'bob'
n      repeat previous search
?      search backwards
Probably the handiest vi command:
.
(dot) -repeat your last command

useful extra vi commands:
~ swap case of current character (capitalize or lower-case)
Ctl-L re-draw the screen (e.g when something from a background process writes to your screen and messes up your vi window)
:set nu show line numbers (:set nonu to remove line numbers)
:set list show hidden characters, line endings etc. (: set nolist)

Global replace:
:%s/old/new/g (% means all lines)
or :g/old/s//new/g
or :%s/old/new/gc (c is to confirm each replacement, type ‘y’ to accept)

5. Unix file permissions

You probably already know the basics:
Each user in Unix belongs to at least one group; each file or directory on the system belongs to one user and one group. When you do an ls -l on a file you see what permissions the file owner, group owner and everyone else (‘world’ or ‘others’) have on it:
-rwxr-xr– 1 robins devteam 180 Mar 8 13:50 instbb
so on the file instbb there is full access (rwx- read,write,execute) for the owner (robins), read and execute for the group devteam and read-only for everyone else.

What you may not know is that whether someone can delete that file is not determined by the file permissions, but by the permissions of the directory the file is in:
bash$ ls -al
total 598
drwxrwxr-x 3 robins devteam 512 Mar 8 12:02 .
drwxr-xr-x 24 root wheel 1024 Mar 8 12:02 ..
-rwxr-xr– 1 robins devteam 180 Mar 8 13:50 instbb
In this case the directory (.) is group writeable, which means anyone in the group devteam can delete the file instbb. Although they can’t modify it, they could copy it to a new file in that directory, modify it, then delete the original and rename the new file as instbb. So the file isn’t as secure as it may appear.

6. Solaris Name services

Solaris provides a mechanism for getting hostnames and usernames etc. from several sources (e.g. DNS, NIS, or the traditional /etc/hosts file): the file /etc/nsswitch.conf, which may contain
hosts: files dns
this means that when you try to access a remote host by name (e.g. ping neptune) it will look for neptune first in /etc/hosts, then do a lookup in DNS (nslookup using the server specified in /etc/resolv.conf)

Similarly for usernames, you may have
passwd: files nis
So when you run a command which refers to a username (e.g. cd ~oracle ) , it first looks in /etc/passwd then does a lookup in NIS (ypmatch oracle passwd).

However you can look up hosts and users without worrying about where they are stored, using the getent command :
bash$ getent passwd oracle
oracle:##oracle:3008:5001:Oracle DBA:/export/home/oracle:/usr/local/bin/bash
bash$ getent hosts www.inkq.com
192.168.245.12 www.inkq.com
bash$ getent hosts 10.0.0.91
10.0.0.91 plop.inkq.com plop mailhost
