ORACLE 10g RAC + HP-UX + MC ServiceGuard installation and configuration, step by step

Contents
1. System environment check
2. Edit the related configuration files
   2.1 Update the hosts file
   2.2 Add .rhosts trust for the root user
   2.3 AIO configuration
   2.4 Check that the required library links exist
   2.5 Tune kernel parameters
   2.6 Create the Oracle user
3. Logical volume configuration
   3.1 Create the PVs
   3.2 Create the VGs
   3.3 Create the arch1 file system
   3.4 Create the LVs
   3.5 Set permissions on the VGs and LVs
   3.6 vgexport / vgimport the volume groups
   3.7 Create the arch2 file system
4. Cluster configuration
   4.1 Create the cluster configuration file
   4.2 Edit the cluster configuration file
   4.3 Verify the cluster configuration file
   4.4 Apply the cluster configuration file
   4.5 Start/stop the cluster and set volume group information
   4.6 Create the cluster packages
       4.6.1 Create the first package, pkg1
       4.6.2 Add the second package
5. Install the CRS software
6. Install the database software
7. Patch to 10.2.0.4
8. Create the database
9. Configure an NTP time server

1. System environment check

OS:        HP-UX 11.31
HA:        MC/ServiceGuard 11.18 + Serviceguard Extension for RAC
Oracle:    Oracle 10g 10.2.0.1 + 10.2.0.4 patch set
Hostnames: pmosdb1, pmosdb2

# machinfo                                    # check the machine model/serial information
# /usr/contrib/bin/machinfo | grep -i Memory  # check physical memory
# /usr/sbin/swapinfo -a                       # check swap space
# bdf /tmp                                    # check free space in /tmp: at least 400 MB, 1 GB recommended

If /tmp has less than 400 MB free, point the oracle user's temporary space elsewhere; as the oracle user, run vi ~/.profile and add:

export TEMP=/directory
export TMPDIR=/directory

# bdf                       # at least 10 GB of disk space for the Oracle software
# uname -a                  # check the operating system version
# /bin/getconf KERNEL_BITS  # check whether the OS kernel is 32-bit or 64-bit
# date                      # check the date and time on each node and keep them in sync
Tue Nov 25 10:34:33 EAT 2008    # the time zone should be EAT
# date 11310959             # set the time
# set_parms timezone        # set the time zone

2. Edit the related configuration files

2.1 Update the hosts file

# vi /etc/hosts

Add the following entries:

222.3.25.121 pmosdb1
222.3.25.122 pmosdb2
222.3.25.123 pmosdb1-vip
222.3.25.124 pmosdb2-vip
10.1.0.121   pmosdb1_priv
10.1.0.122   pmosdb2_priv

(The private interconnect names must use an underscore.)

2.2 Add .rhosts trust for the root user

# vi .rhosts
pmosdb1 root
pmosdb2 root

2.3 AIO configuration

# ll /dev/async
crw-rw-rw- 1 bin bin 101 0x000000 Jun 9 09:38 /dev/async

If asynchronous I/O is not enabled, enable it as follows. First create the /dev/async character device:

/sbin/mknod /dev/async c 101 0x0
chown oracle:dba /dev/async
chmod 660 /dev/async
/usr/sbin/setprivgrp -f /etc/privgroup    # takes effect after a reboot

Then configure the async driver in the kernel with SAM:
  => Kernel Configuration
  => Kernel
  => the driver is called 'asyncdsk'
Generate a new kernel.

2.4 Check that the required library links exist

# cd /usr/lib
# ls libX*.sl    # if any are missing, create them:
ln -s libX11.3 libX11.sl
ln -s libXIE.2 libXIE.sl
ln -s libXext.3 libXext.sl
ln -s libXhp11.3 libXhp11.sl
ln -s libXi.3 libXi.sl
ln -s libXm.4 libXm.sl
ln -s libXp.2 libXp.sl
ln -s libXt.3 libXt.sl
ln -s libXtst.2 libXtst.sl

2.5 Tune kernel parameters

# kctune    # list the current kernel parameters
kctune -h -B nproc="4200"
kctune -h -B ksi_alloc_max="33600"
kctune -h -B max_thread_proc="1100"
kctune -h -B maxdsiz="1073741824"
kctune -h -B maxdsiz_64bit="4294967296"
kctune -h -B maxssiz="134217728"
kctune -h -B maxssiz_64bit="1073741824"
kctune -h -B maxuprc="3688"
kctune -h -B msgmni="2048"
kctune -h -B msgtql="2048"
kctune -h -B ncsize="35840"
kctune -h -B nflocks="2048"
kctune -h -B ninode="34816"
kctune -h -B nkthread="8416"
kctune -h -B semmni="2048"
kctune -h -B semmns="16384"
kctune -h -B semmnu="4092"
kctune -h -B semvmx="32767"
kctune -h -B shmmax="34359738368"
kctune -h -B shmmni="512"
kctune -h -B shmseg="300"
kctune -h -B vps_ceiling="64"

2.6 Create the Oracle user

groupadd -g 502 dba
groupadd -g 501 oinstall
/usr/sbin/useradd -u 501 -g oinstall -G dba oracle
mkdir /home/oracle
chown oracle:oinstall /home/oracle
usermod -d /home/oracle oracle
mkdir -p /oracle/product/10.2.0/crs_1
mkdir -p /oracle/product/10.2.0/db_1
chown -R oracle:oinstall /oracle
chmod -R 775 /oracle

Add .rhosts trust for the oracle user:

# vi /home/oracle/.rhosts
pmosdb1 oracle
pmosdb2 oracle
pmosdb1_priv oracle
pmosdb2_priv oracle
pmosdb1-vip oracle
pmosdb2-vip oracle

As the oracle user, edit the profile:

$ vi ~/.profile
export ORACLE_BASE=/oracle/product
export ORACLE_HOME=$ORACLE_BASE/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/10.2.0/crs_1
export ORACLE_SID=pmosdb1
export ORACLE_TERM=xterm
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:$ORACLE_HOME/rdbms/lib
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib

On the second node set ORACLE_SID=pmosdb2.
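The rcp of the volume group map files in section 3.6 and the installer's remote operations in section 5 both rely on the .rhosts trust created in 2.2 and 2.6. As a quick check that the trust is in place, a small sketch like the following can be run as root on each node; remsh and su are standard HP-UX commands, and the loop itself is only an illustration, not part of the original procedure.

#!/usr/bin/sh
# Sketch: verify password-less remsh for root and oracle to both nodes.
# Any password prompt or "Permission denied." message means the .rhosts
# entries above still need fixing.
for node in pmosdb1 pmosdb2
do
    echo "== root -> $node =="
    remsh "$node" hostname
    echo "== oracle -> $node =="
    su - oracle -c "remsh $node hostname"
done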
3. Logical volume configuration

3.1 Create the PVs

insf -eC disk      # create the disk device files
ioscan -funC disk  # scan the disk devices
ioscan -m dsf      # list the disk device special files
pvcreate -f /dev/rdisk/disk28
diskinfo /dev/rdisk/disk28

3.2 Create the VGs

mkdir /dev/lockvg
mkdir /dev/datavg
mkdir /dev/archvg1
ll /dev/*/group
mknod /dev/lockvg/group c 64 0x010000
mknod /dev/datavg/group c 64 0x020000
mknod /dev/archvg1/group c 64 0x030000
vgcreate /dev/lockvg /dev/disk/disk28
vgcreate -s 64 /dev/datavg /dev/disk/disk47 /dev/disk/disk48 /dev/disk/disk49 /dev/disk/disk50
vgcreate /dev/archvg1 /dev/disk/disk51

3.3 Create the arch1 file system

Create the arch1 file system on archvg1 (the -L value is in MB):

lvcreate -L 50000 -n lvarch1 archvg1
mkfs -F vxfs -o largefiles /dev/archvg1/lvarch1
mkdir /arch1
mount /dev/archvg1/lvarch1 /arch1

3.4 Create the LVs

(The loop-based sketch at the end of section 3 produces the same set of volumes.)

lvcreate -n ora_vote01 -L 128 /dev/datavg
lvcreate -n ora_vote02 -L 128 /dev/datavg
lvcreate -n ora_vote03 -L 128 /dev/datavg
lvcreate -n ora_crs01 -L 128 /dev/datavg
lvcreate -n ora_crs02 -L 128 /dev/datavg
lvcreate -n ora_spfile -L 128 /dev/datavg
lvcreate -n ora_pwd -L 128 /dev/datavg
lvcreate -n db_control01 -L 128 /dev/datavg
lvcreate -n db_control02 -L 128 /dev/datavg
lvcreate -n db_control03 -L 128 /dev/datavg
lvcreate -n db_users01 -L 128 /dev/datavg
lvcreate -n db_sysaux01 -L 2048 /dev/datavg
lvcreate -n db_system01 -L 2048 /dev/datavg
lvcreate -n db_temp01 -L 2048 /dev/datavg
lvcreate -n db_temp02 -L 2048 /dev/datavg
lvcreate -n db_temp03 -L 2048 /dev/datavg
lvcreate -n db_temp04 -L 2048 /dev/datavg
lvcreate -n db_undo1_01 -L 2048 /dev/datavg
lvcreate -n db_undo1_02 -L 2048 /dev/datavg
lvcreate -n db_undo2_01 -L 2048 /dev/datavg
lvcreate -n db_undo2_02 -L 2048 /dev/datavg
lvcreate -n db_redo1_01 -L 128 /dev/datavg
lvcreate -n db_redo1_02 -L 128 /dev/datavg
lvcreate -n db_redo1_03 -L 128 /dev/datavg
lvcreate -n db_redo1_04 -L 128 /dev/datavg
lvcreate -n db_redo1_05 -L 128 /dev/datavg
lvcreate -n db_redo2_01 -L 128 /dev/datavg
lvcreate -n db_redo2_02 -L 128 /dev/datavg
lvcreate -n db_redo2_03 -L 128 /dev/datavg
lvcreate -n db_redo2_04 -L 128 /dev/datavg
lvcreate -n db_redo2_05 -L 128 /dev/datavg

3.5 Set permissions on the VGs and LVs

chown root:oinstall /dev/datavg/rora_crs0*
chown oracle:oinstall /dev/datavg/rora_vote0*
chown oracle:dba /dev/datavg/rora_spfile
chown oracle:dba /dev/datavg/rora_pwd
chown oracle:dba /dev/datavg/rdb_*
chmod 664 /dev/datavg/rora_crs0*
chmod 664 /dev/datavg/rora_vote0*
chmod 660 /dev/datavg/rora_spfile
chmod 660 /dev/datavg/rora_pwd
chmod 660 /dev/datavg/rdb_*

3.6 vgexport / vgimport the volume groups

On pmosdb1:

vgchange -a n /dev/lockvg
vgchange -a n /dev/datavg
vgexport -v -p -s -m /tmp/lockvg.map /dev/lockvg
vgexport -v -p -s -m /tmp/datavg.map /dev/datavg
rcp /tmp/lockvg.map pmosdb2:/tmp
rcp /tmp/datavg.map pmosdb2:/tmp

On pmosdb2:

mkdir /dev/lockvg
mkdir /dev/datavg
mkdir /dev/archvg2
mknod /dev/lockvg/group c 64 0x010000
mknod /dev/datavg/group c 64 0x020000
mknod /dev/archvg2/group c 64 0x030000
vgcreate /dev/archvg2 /dev/disk/disk51
vgimport -v -s -m /tmp/lockvg.map /dev/lockvg
vgimport -v -s -m /tmp/datavg.map /dev/datavg
# strings /etc/lvmtab
vgchange -a y /dev/lockvg
vgchange -a y /dev/datavg
vgcfgbackup /dev/lockvg
vgcfgbackup /dev/datavg
vgchange -a n /dev/lockvg
vgchange -a n /dev/datavg
vgchange -a n /dev/archvg2

Set the same ownership and permissions on pmosdb2:

chown root:oinstall /dev/datavg/rora_crs0*
chown oracle:oinstall /dev/datavg/rora_vote0*
chown oracle:dba /dev/datavg/rora_spfile
chown oracle:dba /dev/datavg/rora_pwd
chown oracle:dba /dev/datavg/rdb_*
chmod 664 /dev/datavg/rora_crs0*
chmod 664 /dev/datavg/rora_vote0*
chmod 660 /dev/datavg/rora_spfile
chmod 660 /dev/datavg/rora_pwd
chmod 660 /dev/datavg/rdb_*

3.7 Create the arch2 file system

Create the arch2 file system on archvg2:

lvcreate -L 150000 -n lvarch2 archvg2
newfs -F vxfs -o largefiles /dev/archvg2/lvarch2

Check the volume groups that have been created:

# strings /etc/lvmtab
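The lvcreate list in section 3.4 can also be generated with a few loops rather than typed one line at a time. The sketch below is only a convenience rewrite of the commands above; it creates exactly the same names and sizes (sizes in MB) on /dev/datavg.

#!/usr/bin/sh
# Sketch: create the section 3.4 logical volumes on /dev/datavg in loops.
for lv in ora_vote01 ora_vote02 ora_vote03 ora_crs01 ora_crs02 \
          ora_spfile ora_pwd db_control01 db_control02 db_control03 db_users01
do
    lvcreate -n "$lv" -L 128 /dev/datavg
done
for lv in db_sysaux01 db_system01 db_temp01 db_temp02 db_temp03 db_temp04 \
          db_undo1_01 db_undo1_02 db_undo2_01 db_undo2_02
do
    lvcreate -n "$lv" -L 2048 /dev/datavg
done
for i in 1 2 3 4 5
do
    lvcreate -n "db_redo1_0${i}" -L 128 /dev/datavg
    lvcreate -n "db_redo2_0${i}" -L 128 /dev/datavg
done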
4. Cluster configuration

4.1 Create the cluster configuration file

Use cmquerycl to generate a cluster configuration template. The cmquerycl command must be run on the primary node:

# cmquerycl -v -C /etc/cmcluster/cmcluster.ascii -n pmosdb1 -n pmosdb2

4.2 Edit the cluster configuration file

# vi /etc/cmcluster/cmcluster.ascii

CLUSTER_NAME cluster1

NODE_NAME pmosdb1
  NETWORK_INTERFACE lan2
    HEARTBEAT_IP 10.1.0.121
  NETWORK_INTERFACE lan1
  NETWORK_INTERFACE lan0
    HEARTBEAT_IP 222.3.25.121
  # CLUSTER_LOCK_LUN

NODE_NAME pmosdb2
  NETWORK_INTERFACE lan2
    HEARTBEAT_IP 10.1.0.122
  NETWORK_INTERFACE lan1
  NETWORK_INTERFACE lan0
    HEARTBEAT_IP 222.3.25.122
  # CLUSTER_LOCK_LUN

FIRST_CLUSTER_LOCK_PV /dev/disk/disk28

VOLUME_GROUP /dev/datavg
OPS_VOLUME_GROUP /dev/datavg    # in a RAC environment make sure both of these parameters are set

NODE_TIMEOUT 6000000            # set NODE_TIMEOUT to 6 s

Leave the remaining parameters at their default values.

4.3 Verify the cluster configuration file

# cmcheckconf -v -C /etc/cmcluster/cmcluster.ascii

Activate the volume groups before applying the configuration:

vgchange -a y /dev/lockvg
vgchange -a y /dev/datavg

4.4 Apply the cluster configuration file

# cmapplyconf -v -C /etc/cmcluster/cmcluster.ascii

Then deactivate the datavg volume group:

# vgchange -a n datavg

4.5 Start/stop the cluster and set volume group information

Deactivate the datavg volume group first:

# vgchange -a n datavg
# cmruncl -v     (start the cluster)
# cmviewcl -v    (check that it is up)

Check with vgdisplay -v datavg on each node; datavg should show as activated on every node.

4.6 Create the cluster packages

4.6.1 Create the first package, pkg1

mkdir /etc/cmcluster/pkg1
cmmakepkg -p /etc/cmcluster/pkg1/pkg1.conf

# vi /etc/cmcluster/pkg1/pkg1.conf
PACKAGE_NAME pkg1
NODE_NAME pmosdb1               # add node 1 only
RUN_SCRIPT /etc/cmcluster/pkg1/tl
HALT_SCRIPT /etc/cmcluster/pkg1/tl

# cmmakepkg -s /etc/cmcluster/pkg1/tl    (generate the control script)
# vi /etc/cmcluster/pkg1/tl
VGCHANGE="vgchange -a s"        # RAC mode: comment out the default VGCHANGE="vgchange -a e"
VG[0]="datavg"                  # the volume group to activate; lockvg is not added here.
                                # If there is a second shared VG, add it as VG[1].

# chmod +x /etc/cmcluster/pkg1/tl
# chmod 700 /etc/cmcluster/pkg1

On the standby node:

# mkdir /etc/cmcluster/pkg1
# chmod 700 /etc/cmcluster/pkg1

Back on the primary node:

# rcp /etc/cmcluster/pkg1/tl pmosdb2:/etc/cmcluster/pkg1/
# cmcheckconf -C /etc/cmcluster/cmcluster.ascii
# cmcheckconf -P /etc/cmcluster/pkg1/pkg1.conf
# cmhaltcl -v -f
# cmapplyconf -v -C /etc/cmcluster/cmcluster.ascii -P /etc/cmcluster/pkg1/pkg1.conf
# cmruncl -v -f

4.6.2 Add the second package

# mkdir /etc/cmcluster/pkg2
# cmmakepkg -p /etc/cmcluster/pkg2/pkg2.conf

# vi /etc/cmcluster/pkg2/pkg2.conf
PACKAGE_NAME pkg2
NODE_NAME pmosdb2               # add node 2 only
RUN_SCRIPT /etc/cmcluster/pkg2/tl
HALT_SCRIPT /etc/cmcluster/pkg2/tl

# cmmakepkg -s /etc/cmcluster/pkg2/tl
# vi /etc/cmcluster/pkg2/tl
VGCHANGE="vgchange -a s"        # RAC mode: comment out the default VGCHANGE="vgchange -a e"
VG[0]="datavg"                  # the volume group to activate; lockvg is not added here.

# chmod +x /etc/cmcluster/pkg2/tl
# chmod 700 /etc/cmcluster/pkg2

On the standby node:

# mkdir /etc/cmcluster/pkg2
# chmod 700 /etc/cmcluster/pkg2

Back on the primary node:

# rcp /etc/cmcluster/pkg2/tl pmosdb2:/etc/cmcluster/pkg2
# cmcheckconf -P /etc/cmcluster/pkg2/pkg2.conf
# cmapplyconf -v -P /etc/cmcluster/pkg2/pkg2.conf
# cmhaltcl -v -f
# cmruncl -v -f

In /etc/rc.config.d/cmcluster, AUTOSTART_CMCLD=1 makes ServiceGuard start automatically at boot; it is normally left at AUTOSTART_CMCLD=0.
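Before moving on to the CRS installation, it can help to confirm on each node that the package has actually activated datavg in shared mode and that the OCR and voting disk raw devices still carry the ownership from section 3.5. The lines below are only a convenience sketch built from commands already used in this document; the exact vgdisplay status text may vary by LVM version.

#!/usr/bin/sh
# Sketch: pre-CRS checks, to be run on pmosdb1 and then on pmosdb2.
vgdisplay datavg | grep -i "VG Status"                  # the VG should be available (shared activation via pkg1/pkg2)
ls -l /dev/datavg/rora_crs0* /dev/datavg/rora_vote0*    # owners and modes as set in section 3.5
cmviewcl -v                                             # cluster up, pkg1/pkg2 running on their nodes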
5. Install the CRS software

Unpack the software:

# jar -vxf *.zip
# chown -R oracle:oinstall database
# chown -R oracle:oinstall clusterware

As the oracle user on node 1, start the installer and skip the operating system prerequisite checks:

$ ./runInstaller -ignoreSysPrereqs

In the installer:
  Name: CRS_HOME1
  Path: /oracle/product/10.2.0/crs_1
Click Next.
Select both nodes, then Next.
Tick the options for the checks that did not pass.
Click Edit and change the IP type of the public network interface to "public".
Enter the raw device locations for the OCR.
Enter the raw device locations for the voting disks.
Run the scripts on the two nodes in the order shown. When root.sh is run on the second node it prints the following messages:

WARNING: directory '/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/product' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Checking to see if any 9i GSD is up
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/product' is not owned by root
WARNING: directory '/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: pmosdb1 pmosdb1_priv pmosdb1
node 2: pmosdb2 pmosdb2_priv pmosdb2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        pmosdb1
        pmosdb2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "lan2" is not public. Public interfaces should be used to configure virtual IPs.

Use vipca to change the network interface used for the VIPs.

6. Install the database software

As the oracle user on node 1:

$ ./runInstaller -ignoreSysPrereqs

7. Patch to 10.2.0.4

Upgrade CRS first and then the database software; the patching steps are omitted here.

8. Create the database

As the oracle user on node 1, run dbca.

A raw device mapping file can also be prepared beforehand, for example /home/oracle/mappingfile:

system=/dev/datavg/rdb_system01
sysaux=/dev/datavg/rdb_sysaux01
spfile=/dev/datavg/rora_spfile
users=/dev/datavg/rdb_users01
temp=/dev/datavg/rdb_temp01
undotbs1=/dev/datavg/rdb_undo1_01
undotbs2=/dev/datavg/rdb_undo2_01
control1=/dev/datavg/rdb_control01
control2=/dev/datavg/rdb_control02
redo1_1=/dev/datavg/rdb_redo1_01
redo1_2=/dev/datavg/rdb_redo1_02
redo2_1=/dev/datavg/rdb_redo2_01
redo2_2=/dev/datavg/rdb_redo2_02

Check in dbca that the file paths defined earlier are all correct, click OK, and wait for database creation to finish. Once the database has been created, use crs_stat -t to check that all cluster resources have started normally.

9. Configure an NTP time server

Assumptions: there are two local servers, server1 (172.22.22.1) and server2 (172.22.22.3), and a public NTP server on the Internet at 192.12.19.20. (Public NTP servers are listed at http://www.eecis.udel.edu/mills/ntp/clock1b.html; any one of them can be chosen. The local time that is displayed is determined by the local time zone setting.)

To configure the local server server1 as the NTP server:

1. On server1:

a) # vi /etc/rc.config.d/netdaemons
   Change export XNTPD=0 to export XNTPD=1:
   export XNTPD=1
   export XNTPD_ARGS=

b) # vi /etc/ntp.conf
   Add the following three lines at the end:
   server 172.22.22.1
   fudge 127.127.1.1 stratum 10
   driftfile /etc/ntp.drift

c) # /etc/init.d/xntpd stop
d) # /etc/init.d/xntpd start
e) # ntpq -p    # confirm that NTP is working

   remote     refid      st t when poll reach  delay  offset     disp
   ====================================================================
   LOCAL(1)   LOCAL(1)    3 l   35   64     1   0.00   0.000  15885.0

   The "reach" column should be greater than 0.

2. On the client machine server2:

a) # vi /etc/rc.config.d/netdaemons
   Change export NTPDATE_SERVER= to export NTPDATE_SERVER="1
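The source text breaks off in the middle of the client-side step above, so the rest of step 2 is not reproduced here. Purely as an illustration of where that step is heading, and not a restoration of the missing text, a minimal client-side configuration on server2 might look like the sketch below; it assumes that server1 (172.22.22.1), from the assumptions stated at the start of this section, is the intended time source.

# Sketch only (assumed values): NTP client settings on server2.
# /etc/rc.config.d/netdaemons
export NTPDATE_SERVER="172.22.22.1"
export XNTPD=1
export XNTPD_ARGS=
# /etc/ntp.conf
server 172.22.22.1
driftfile /etc/ntp.drift
# Restart xntpd as in steps 1.c and 1.d above, then verify with: ntpq -p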
