How to fix: Oracle 11g RAC "ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443"



1. Problem Description

While installing 11.2.0.1 RAC on Oracle Linux 6.1, the root.sh script failed during the Grid installation with the following error:

[root@rac1 bin]# /u01/app/11.2.0/grid/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

   ORACLE_OWNER= oracle

   ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

  Copying dbhome to /usr/local/bin ...

  Copying oraenv to /usr/local/bin ...

  Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2012-06-27 10:31:18: Parsing the host name

2012-06-27 10:31:18: Checking for super user privileges

2012-06-27 10:31:18: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

 root wallet

 root wallet cert

 root cert export

 peer wallet

  profile reader wallet

  pa wallet

 peer wallet keys

  pa wallet keys

 peer cert request

  pa cert request

 peer cert

  pa cert

 peer root cert TP

 profile reader root cert TP

  pa root cert TP

 peer pa cert TP

  pa peer cert TP

 profile reader pa cert TP

 profile reader peer cert TP

 peer user cert

  pa user cert

Adding daemon to inittab

CRS-4124: Oracle High Availability Services startup failed.

CRS-4000: Command Start failed, or completed with errors.

ohasd failed to start: Inappropriate ioctl for device

ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.

Reportedly this error occurs only on Linux 6.1 with Oracle 11.2.0.1; it does not happen with 11.2.0.3. The workaround is: as soon as the file /var/tmp/.oracle/npohasd is created, immediately run the following command as root:

/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
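For reference, the one-shot command above can be wrapped in a small watcher so it fires the moment the pipe appears. This is a hedged sketch, not from the original article: the function name `drain_npohasd` is mine, and it assumes a POSIX shell run as root.

```shell
# drain_npohasd [PATH]: wait until PATH exists, then read one 1 KB block from
# it, unblocking ohasd startup. PATH defaults to the pipe named in the article.
drain_npohasd() {
    pipe="${1:-/var/tmp/.oracle/npohasd}"
    while [ ! -e "$pipe" ]; do
        sleep 1   # root.sh has not created the pipe yet
    done
    /bin/dd if="$pipe" of=/dev/null bs=1024 count=1
}

# In a second root shell, start this before root.sh reaches "Adding daemon to inittab":
# drain_npohasd
```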

2. Cleaning Up the Installation History

There are two approaches here: (1) remove the Grid installation entirely, or (2) deconfigure what root.sh did.

2.1 Removing Grid

Before going any further, clean up Grid. For the detailed steps, see:

RAC Uninstall Notes

http://blog.csdn.net/tianlesoftware/article/details/5892225

Run on all nodes:

rm -rf /etc/oracle/*

rm -rf /etc/init.d/init.cssd

rm -rf /etc/init.d/init.crs

rm -rf /etc/init.d/init.crsd

rm -rf /etc/init.d/init.evmd

rm -rf /etc/rc2.d/K96init.crs

rm -rf /etc/rc2.d/S96init.crs

rm -rf /etc/rc3.d/K96init.crs

rm -rf /etc/rc3.d/S96init.crs

rm -rf /etc/rc5.d/K96init.crs

rm -rf /etc/rc5.d/S96init.crs

rm -rf /etc/oracle/scls_scr

rm -rf /etc/inittab.crs

rm -rf /var/tmp/.oracle/*

or

rm -rf /tmp/.oracle/*
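For convenience, the removals above can be collected into one function and invoked on each node. This is a sketch of my own, not from the article; the optional path-prefix argument is an assumption added so the function can be exercised safely outside a real cluster:

```shell
# crs_cleanup [PREFIX]: remove the Oracle Clusterware leftovers listed above.
# PREFIX defaults to / (the real root); pass a scratch directory to dry-run.
crs_cleanup() {
    root="${1:-/}"
    rm -rf "$root"etc/oracle/* \
           "$root"etc/init.d/init.cssd "$root"etc/init.d/init.crs \
           "$root"etc/init.d/init.crsd "$root"etc/init.d/init.evmd \
           "$root"etc/rc2.d/K96init.crs "$root"etc/rc2.d/S96init.crs \
           "$root"etc/rc3.d/K96init.crs "$root"etc/rc3.d/S96init.crs \
           "$root"etc/rc5.d/K96init.crs "$root"etc/rc5.d/S96init.crs \
           "$root"etc/inittab.crs \
           "$root"var/tmp/.oracle/* "$root"tmp/.oracle/*
}

# On each node, as root:
# crs_cleanup
```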

Remove the ocr.loc file, usually under /etc/oracle:

[root@rac1 ~]# cd /etc/oracle

You have new mail in /var/spool/mail/root

[root@rac1 oracle]# ls

lastgasp  ocr.loc  ocr.loc.orig  olr.loc  olr.loc.orig  oprocd

[root@rac1 oracle]# rm -rf ocr.*

Format (zero out) the ASM raw devices:

[root@rac1 utl]# ll /dev/asm*

brw-rw---- 1 oracle dba 8, 17 Jun 27 09:38 /dev/asm-disk1

brw-rw---- 1 oracle dba 8, 33 Jun 27 09:38 /dev/asm-disk2

brw-rw---- 1 oracle dba 8, 49 Jun 27 09:38 /dev/asm-disk3

brw-rw---- 1 oracle dba 8, 65 Jun 27 09:38 /dev/asm-disk4

dd if=/dev/zero of=/dev/asm-disk1 bs=1M count=256

dd if=/dev/zero of=/dev/asm-disk2 bs=1M count=256

dd if=/dev/zero of=/dev/asm-disk3 bs=1M count=256

dd if=/dev/zero of=/dev/asm-disk4 bs=1M count=256
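The four dd invocations can equally be written as a loop. A sketch follows; the helper name `wipe_headers` is mine, and the command is destructive, so verify the device paths before running it:

```shell
# wipe_headers DISK...: overwrite the first 256 MB of each named device with
# zeros, destroying the old ASM disk headers so the disks can be reused.
wipe_headers() {
    for d in "$@"; do
        dd if=/dev/zero of="$d" bs=1M count=256
    done
}

# Real invocation matching the transcript above (run as root; destructive):
# wipe_headers /dev/asm-disk1 /dev/asm-disk2 /dev/asm-disk3 /dev/asm-disk4
```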

Remove the /tmp/CVU* directories:

[root@rac1 ~]# rm -rf /tmp/CVU*

Delete the Oracle information under /var/opt and the ORACLE_BASE directory:

# rm -rf /data/oracle

# rm -rf /var/opt/oracle

Delete the scripts under /usr/local/bin:

# rm -rf /usr/local/bin/dbhome

# rm -rf /usr/local/bin/oraenv

# rm -rf /usr/local/bin/coraenv

Remove the Grid installation directory and recreate it:

[root@rac1 oracle]# rm -rf /u01/app

[root@rac2 u01]# mkdir -p  /u01/app/11.2.0/grid

[root@rac2 u01]# mkdir -p /u01/app/oracle/product/11.2.0/db_1

[root@rac2 u01]# chown -R oracle:oinstall /u01

[root@rac2 u01]# chmod -R 775 /u01/

2.2 Undoing root.sh

Use the rootcrs.pl command to deconfigure what root.sh did:

[root@rac1 oracle]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -verbose -force

2012-06-27 14:30:17: Parsing the host name

2012-06-27 14:30:17: Checking for super user privileges

2012-06-27 14:30:17: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Failure to execute: Inappropriate ioctl for device for command /u01/app/11.2.0/grid/bin/crsctl check cluster -n rac1

Failure to execute: Inappropriate ioctl for device for command /u01/app/11.2.0/grid/bin/crsctl check cluster -n rac1

Usage: srvctl <command> <object> [<options>]

   commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config

   objects: database|service|asm|diskgroup|listener|home|ons|eons

For detailed help on each command and object and its options use:

 srvctl <command> -h or

 srvctl <command> <object> -h

PRKO-2012 : nodeapps object is not supported in Oracle Restart

sh: /u01/app/11.2.0/grid/bin/clsecho: No such file or directory

Can't exec "/u01/app/11.2.0/grid/bin/clsecho": No such file or directory at /u01/app/11.2.0/grid/lib/acfslib.pm line 937.

Failure to execute: Inappropriate ioctl for device for command /u01/app/11.2.0/grid/bin/crsctl check cluster -n rac1

You must kill crs processes or reboot the system to properly

cleanup the processes started by Oracle clusterware

2560+0 records in

2560+0 records out

10485760 bytes (10 MB) copied, 0.0373402 s, 281 MB/s

error: package cvuqdisk is not installed

Successfully deconfigured Oracle clusterware stack on this node

You have new mail in /var/spool/mail/root

[root@rac1 oracle]#

3. Reinstalling and Applying the Fix

When executing /u01/app/11.2.0/grid/root.sh, open two root shell windows: one to run the script, the other to watch for the file /var/tmp/.oracle/npohasd. The moment it appears, immediately run the following as root:

/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
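If you would rather not watch by hand, the retries can be scripted. This is a sketch of my own (the function name `retry_npohasd_dd` is hypothetical), equivalent to the repeated manual dd attempts shown later in the transcript:

```shell
# retry_npohasd_dd [PATH]: keep re-running the dd command until it succeeds.
# dd fails with "No such file or directory" until root.sh creates the pipe.
retry_npohasd_dd() {
    pipe="${1:-/var/tmp/.oracle/npohasd}"
    until /bin/dd if="$pipe" of=/dev/null bs=1024 count=1 2>/dev/null; do
        sleep 1
    done
}

# In the second root shell:
# retry_npohasd_dd
```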

[root@rac1 oracle]# /u01/app/11.2.0/grid/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

   ORACLE_OWNER= oracle

   ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2012-06-27 14:32:21: Parsing the host name

2012-06-27 14:32:21: Checking for superuser privileges

2012-06-27 14:32:21: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

 root wallet

 root wallet cert

 root cert export

 peer wallet

 profile reader wallet

  pa wallet

 peer wallet keys

  pa wallet keys

 peer cert request

  pa cert request

 peer cert

  pa cert

 peer root cert TP

 profile reader root cert TP

  pa root cert TP

 peer pa cert TP

  pa peer cert TP

 profile reader pa cert TP

 profile reader peer cert TP

 peer user cert

  pa user cert

-------- Note -------------

When root.sh reaches this point, start re-running the dd command over and over in the other window. There may be better ways, but this is how I did it:

[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory

[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory

[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory

[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory

[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory

You have new mail in /var/spool/mail/root

[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

-- Once the dd command succeeds, root.sh can complete normally.

--------End --------------

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

ADVM/ACFS is not supported on oraclelinux-release-6Server-1.0.2.x86_64

CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'

CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'

CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded

CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'

CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rac1'

CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'

CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'

CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded

ASM created and started successfully.

DiskGroup DATA created successfully.

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

CRS-2672: Attempting to start 'ora.crsd' on 'rac1'

CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded

CRS-4256: Updating the profile

Successful addition of voting disk 372c42f3b2bc4f66bf8b52d2526104e3.

Successfully replaced voting disk group with +DATA.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

## STATE    File Universal Id                File Name Disk group

-- -----    -----------------                --------- ---------

 1. ONLINE   372c42f3b2bc4f66bf8b52d2526104e3 (/dev/asm-disk1) [DATA]

Located 1 voting disk(s).

CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'

CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'

CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'

CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'

CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'

CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'

CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'

CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'

CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'

CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'

CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rac1'

CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'

CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'

CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'rac1'

CRS-2676: Start of 'ora.asm' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'rac1'

CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.evmd' on 'rac1'

CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'rac1'

CRS-2676: Start of 'ora.asm' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'

CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded

rac1    2012/06/27 14:39:25    /u01/app/11.2.0/grid/cdata/rac1/backup_20120627_143925.olr

Preparing packages for installation...

cvuqdisk-1.0.7-1

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Updating inventory properties for clusterware

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 969 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

[root@rac1 oracle]#

This time root.sh completed successfully, so the workaround works.

Note:

The dd command is needed when running root.sh on every node.

-------------------------------------------------------------------------------------------------------

All rights reserved. This article may be reposted, but the source must be credited with a link; otherwise legal action may be taken.

Skype: tianlesoftware

QQ:              tianlesoftware@gmail.com

Email:   tianlesoftware@gmail.com

Blog:     http://www.tianlesoftware.com

Weibo: http://weibo.com/tianlesoftware

Twitter: http://twitter.com/tianlesoftware

Facebook: http://www.facebook.com/tianlesoftware

Linkedin: http://cn.linkedin.com/in/tianlesoftware
