Posted By Ritesh Chhajer on Wednesday, November 17, 2010


This is the second part of the getting started with 11.2.0.2 series. In the previous post we covered the upgrade process. Here, we'll look at new features specific to 11.2.0.2. For a general overview of 11gR2 new features, refer to Oracle 11gR2 Features and Bugs.

Bug fixes:

With 11.2.0.2, Oracle has fixed quite a number of bugs from previous releases. Refer to ML Note: 1179583.1 for the list of bug fixes. The one that interests me most is hugepages.

Hugepages:
Earlier, in 11.2.0.1, one had to manually edit the ohasd script to add "ulimit -l unlimited" so that hugepages would be used by a database instance started by GI through crsctl/srvctl. Now there is no need to edit that file, as Oracle ships those lines in the ohasd script itself. Also, the alert log now reports hugepages usage at instance startup.
Sample extract from the alert log:
Starting ORACLE instance (normal)
****************** Huge Pages Information *****************
Huge Pages memory pool detected (total: 13000 free: 13000)
DFLT Huge Pages allocation successful (allocated: 12609)
***********************************************************
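
For reference, hugepages themselves are still configured at the operating system level. A minimal sketch on Linux (the page count matches the 13000 shown above and is illustrative; with a 2 MB page size that pool needs about 26 GB of lockable memory, expressed in KB in limits.conf):

# /etc/sysctl.conf -- reserve the hugepages pool (count is illustrative)
vm.nr_hugepages = 13000

# /etc/security/limits.conf -- let oracle lock the pool (values in KB)
oracle soft memlock 26624000
oracle hard memlock 26624000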

AMM:
This is not specific to 11.2.0.2 but to 11g in general. Since we talked about hugepages, let me mention something about AMM here as well. Remember that 11g introduced AMM (Automatic Memory Management) to manage both the SGA and the PGA; it is enabled by setting memory_max_target and memory_target. If you want to use AMM, though, hugepages won't be used, as AMM is not compatible with hugepages.
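
For example, AMM is enabled like this (a minimal sketch; the 8G figure is illustrative, and the instance must be restarted for memory_max_target to take effect):

SQL> alter system set memory_max_target=8G scope=spfile;
SQL> alter system set memory_target=8G scope=spfile;

With AMM active, the hugepages pool stays untouched, as the meminfo output below shows: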

[oracle@test-server1]~% cat /proc/meminfo|grep HugePages
HugePages_Total:  2000
HugePages_Free:   2000

Refer to ML Note: 749851.1 for more details.
If AMM is in use, you'll see in-memory files under /dev/shm.
Example:
[oracle@test-server1]~% ls -ltr /dev/shm|head -5
total 1133124
-rw-r-----  1 oracle dba 67108864 Nov 15 10:48 ora_testdb1_1507329_29
-rw-r-----  1 oracle dba 67108864 Nov 15 10:48 ora_testdb1_1507329_0
-rw-r-----  1 oracle dba 67108864 Nov 15 10:48 ora_testdb1_1507329_24
-rw-r-----  1 oracle dba 67108864 Nov 15 10:48 ora_testdb1_1507329_25

"testdb1" is the instance name while "1507329" is the shmid.
[oracle@test-server1]~% ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0xaaf57360 1507329    oracle    660        4096       0

If you want to use hugepages, disable AMM by setting memory_max_target and memory_target to 0.
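
A minimal sketch of switching back (the SGA/PGA sizes are illustrative; restart the instance afterwards):

SQL> alter system set memory_target=0 scope=spfile;
SQL> alter system set memory_max_target=0 scope=spfile;
SQL> alter system set sga_target=8G scope=spfile;
SQL> alter system set pga_aggregate_target=2G scope=spfile;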

Also note that even if you are not using AMM, you may find files like the following under /dev/shm:
-rwxrwx--- 1 oracle dba    4096 Nov 15 04:42 JOXSHM_EXT_452_testdb1_29097985
-rwxrwx--- 1 oracle dba    8192 Nov 15 04:42 JOXSHM_EXT_551_testdb1_29097985
-rwxrwx--- 1 oracle dba    4096 Nov 16 11:53 JOXSHM_EXT_553_testdb1_29097985
-rwxrwx--- 1 oracle dba    4096 Nov 17 04:02 JOXSHM_EXT_552_testdb1_29097985

This is a bug and is addressed in ML Note: 752899.1.

Redundant Interconnect:
With 11.2.0.2, Oracle has introduced the redundant interconnect. Let's look at the ifconfig output to make this clearer.
Sample extract from ifconfig:

eth1:1    Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:XX
          inet addr:169.254.251.212  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

Earlier, VIPs were associated only with public IPs. Now the grid assigns a VIP to the private interface as well. These virtual addresses are assigned from the link-local 169.254.*.* subnet.

If you check the resources using `crsctl status resource -t -init`, you'll see a new resource introduced in 11.2.0.2 called "ora.cluster_interconnect.haip". This HAIP can also be seen via:
- SQL> select * from v$cluster_interconnects;

- oifcfg iflist

- Alert log
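
For example, the first option looks like this (illustrative output; the HAIP address shown is the one from the ifconfig extract above):

SQL> select name, ip_address, is_public from v$cluster_interconnects;

NAME   IP_ADDRESS       IS_PUBLIC
------ ---------------- ---------
eth1:1 169.254.251.212  NO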

I really like this feature for two reasons:
1. If you want an additional NIC for redundancy on your private interconnect, traditionally you would configure bonding at the operating system level. Now that Oracle can do that for you, that step is eliminated.

2. The main advantage of Oracle's redundant interconnect over traditional NIC bonding is that Oracle also load-balances across the interfaces, unlike the active/passive mode typical of NIC bonding.

Refer to ML Note: 1210883.1 for more details.

Multicasting:
Multicasting must be enabled on the private network for the redundant interconnect to work. ML Note: 1212703.1 has more details along with a sample program to validate it. I prefer to run this program as a pre-check so that I don't run into installation issues later on. A sample run is shown below; it was tested on RHEL4U8 and RHEL5U3, where eth1 is the private interface.

[oracle@test-server1]~% gunzip mcasttest
gunzip: mcasttest: unknown suffix -- ignored

[oracle@test-server1]~% file mcasttest
mcasttest: gzip compressed data, from Unix

[oracle@test-server1]~% mv mcasttest mcasttest.gz

[oracle@test-server1]~% gunzip mcasttest.gz

[oracle@test-server1]~% file mcasttest
mcasttest: POSIX tar archive

[oracle@test-server1]~% tar xvf mcasttest
mcasttest/
mcasttest/README.txt
mcasttest/mcast2.aix.ppc64
mcasttest/mcast2.linux.x32
mcasttest/mcast2.hpux.ia64
mcasttest/mcasttest.pl
mcasttest/mcast2.solaris.sparc64
mcasttest/mcast2.solaris.x64
mcasttest/mcast2.linux.x64
mcasttest/mcast2.hpux.parisc64

[oracle@test-server1]~% cd mcasttest

[oracle@test-server1]~/mcasttest% perl mcasttest.pl -n test-server1,test-server2 -i eth1
###########  Setup for node test-server1  ##########
Checking node access 'test-server1'
Checking node login 'test-server1'
Checking/Creating Directory /tmp/mcasttest for binary on node 'test-server1'
Distributing mcast2 binary to node 'test-server1'
###########  Setup for node test-server2  ##########
Checking node access 'test-server2'
Checking node login 'test-server2'
Checking/Creating Directory /tmp/mcasttest for binary on node 'test-server2'
Distributing mcast2 binary to node 'test-server2'
###########  testing Multicast on all nodes  ##########

Test for Multicast address 230.0.1.0

Nov 11 11:53:16 | Multicast Succeeded for eth1 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

Nov 11 11:53:17 | Multicast Succeeded for eth1 using address 224.0.0.251:42001

CVU:
If you check `crsctl status res -t`, you'll notice a new resource called "ora.cvu":
[oracle@test-server1]~% srvctl config cvu
CVU is configured to run once every 360 minutes

[oracle@test-server1]~% srvctl status cvu
CVU is enabled and running on node test-server1

CVU is nothing but the Cluster Verification Utility, which now runs once every 6 hours (this is configurable and can be changed using srvctl modify). You can find the logs under $GRID_HOME/log/`hostname -s`/cvu.
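
For example, to change the interval to 12 hours (a sketch; -t takes the interval in minutes):

[oracle@test-server1]~% srvctl modify cvu -t 720

[oracle@test-server1]~% srvctl config cvu
CVU is configured to run once every 720 minutes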

Service:
The whole purpose of edition-based redefinition is to maintain multiple versions of database objects so that they can be modified online. Starting with 11.2.0.2, a database service also has an edition attribute that can be managed using srvctl.
[oracle@test-server1]~% srvctl add service -d testdb -s test -r testdb1 -a testdb2 -t e1

[oracle@test-server1]~% srvctl config service -d testdb -s test
Service name: test
Service is enabled
Server pool: testdb_test
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition: e1
Preferred instances: testdb1
Available instances: testdb2
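
Once a new edition is in place, the service can be repointed to it with srvctl modify (a sketch, assuming an edition named e2 already exists and that -t carries the edition here as it does for srvctl add service above):

[oracle@test-server1]~% srvctl modify service -d testdb -s test -t e2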

Disclaimer:
This posting is provided "AS IS" with no warranties, and confers no rights. You assume all risk for your use.

This posting has nothing to do with my present or past employer.