11gR2 install fails with “Hard Limit: maximum user processes” error

Just finished dealing with the “Hard Limit: maximum user processes” error on Solaris 10 while installing 11gR2.

Oracle Metalink was useless — total waste of time — I hate that site now, it’s gone completely into the crapper.

SOLUTION (thanks to David D’Acquisto’s advice):

1) edit /etc/system as follows:

set shmsys:shminfo_shmmax=12025908428
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set shmsys:shminfo_shmmin=1
set max_nprocs=30000
set maxuprc=16384
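
You can peek at the values the kernel is currently using before touching /etc/system; a quick sketch with mdb (run as root; /D prints the value in decimal):

## current values of the tunables we're about to raise
$ echo "maxuprc/D" | mdb -k
$ echo "max_nprocs/D" | mdb -k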

2) set up a project for the oracle user (if it’s already set up, just ignore the duplicate error from the projadd command):

projadd -U oracle user.oracle
projmod -s -K "project.max-sem-ids=(priv,100,deny)" user.oracle
projmod -s -K "process.max-sem-nsems=(priv,256,deny)" user.oracle
projmod -s -K "project.max-shm-memory=(priv,12025908428,deny)" user.oracle
projmod -s -K "project.max-shm-ids=(priv,100,deny)" user.oracle
projmod -s -K "process.max-file-descriptor=(priv,65536,deny)" user.oracle

3) bounce the box:

init 6

Here’s how to check the settings:

## before above changes were applied
##
$ kstat|grep v_proc
        v_proc                          16362
$

$ kstat |grep v_maxup
        v_maxup                         16357
        v_maxupttl                      16357
$

## after changes/reboot
##

$ kstat|grep v_proc
        v_proc                          30000
$

$ kstat |grep v_maxup
        v_maxup                         16384
        v_maxupttl                      29995
$

NOTE: the settings above are based on 16 GB of RAM; if yours is less or more, adjust as per David’s formula.


February 15, 2010

Posted In: Installs

11gR2 clients connect to the database using SCANs

If you’ve ever extended your RAC cluster with a set of new nodes, you already know how painful it can be to go through the list of your clients and make sure their SQL*Net configuration is up to date. 11gR2 solves this problem with the Single Client Access Name (SCAN).

The single client access name (SCAN) is a hostname used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than to a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.

Reference: 1.3.2.2 IP Address Requirements

How is SCAN implemented?

For high availability purposes, the SCAN should be associated with at least three IP addresses using DNS round-robin resolution. If you opt for Grid Naming Service (GNS), GNS can also be used to manage the SCAN name.
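
For example, a SCAN registered in DNS with three addresses would come back roughly like this (rac-scan.example.com and the addresses below are made up for illustration):

$ nslookup rac-scan.example.com
Name:   rac-scan.example.com
Address: 192.168.10.101
Name:   rac-scan.example.com
Address: 192.168.10.102
Name:   rac-scan.example.com
Address: 192.168.10.103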

SCAN is configured at the cluster level, not at the node level; that’s what makes it so flexible: no matter how many nodes your cluster consists of, your clients can continue to use SCAN to access the services of your cluster across all nodes, even as you add or delete them:

The SCAN is a virtual IP name, similar to the names used for virtual IP addresses, such as node1-vip. However, unlike a virtual IP, the SCAN is associated with the entire cluster, rather than an individual node, and associated with multiple IP addresses, not just one address.

SCAN works as an independent handler for the entire cluster: it acts on the client’s behalf during a connection request, since it knows all of the cluster’s services and their available, least-loaded nodes:

The SCAN works by being able to resolve to multiple IP addresses reflecting multiple listeners in the cluster handling public client connections. When a client submits a request, the SCAN listener listening on a SCAN IP address and the SCAN port is contacted on a client’s behalf. Because all services on the cluster are registered with the SCAN listener, the SCAN listener replies with the address of the local listener on the least-loaded node where the service is currently being offered. Finally, the client establishes a connection to the service through the listener on the node where the service is offered. All of these actions take place transparently to the client without any explicit configuration required in the client.
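
On the client side this means the entire cluster hides behind a single hostname. A minimal tnsnames.ora sketch (the SCAN name, port and service name below are placeholders, substitute your own):

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl))
  )

The same works with EZConnect, e.g. sqlplus scott/tiger@rac-scan.example.com:1521/orcl.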

Bottom line: use SCAN, it simplifies cluster management:

Because the SCAN addresses resolve to the cluster, rather than to a node address in the cluster, nodes can be added to or removed from the cluster without affecting the SCAN address configuration.

Reference: D.1.3.5 About the SCAN


September 10, 2009

Posted In: RAC

11gR2 – raw and block devices – no longer supported

I was just reading up on the 11gR2 documentation for Grid Infrastructure Installation, and finally we have closure on the topic of RAW and BLOCK devices for the OCR and VOTING disks:

With this release, OUI no longer supports installation of Oracle Clusterware files on block or raw devices. Install Oracle Clusterware files either on Automatic Storage Management diskgroups, or in a supported shared file system.

For new installations, OCR and voting disk files can be placed either on ASM, or on a cluster file system or NFS system. Installing Oracle Clusterware files on raw or block devices is no longer supported, unless an existing system is being upgraded.

REFERENCE: What’s New in Oracle Grid Infrastructure Installation and Configuration?

Perfect timing! I was just mulling over what to do with OCR/VOTING on my upcoming SAN-based RAC install — now it’s clear — use 11gR2 and store them on ASM.
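
And once Grid Infrastructure is up, it’s easy to confirm where the files actually landed; assuming a standard 11gR2 install, something like:

## OCR location and integrity
$ ocrcheck

## voting disk location
$ crsctl query css votedisk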

September 10, 2009

Posted In: RAC

11gR2 is here!

In case you didn’t get this via the oracle.com news updates: 11gR2 is out and available for download from here.

And as I predicted, the first releases are for Linux x86 and Linux x86-64. So if you are a die-hard Solaris fan, it should now be obvious (if it wasn’t already) that Linux is officially the preferred platform (as Solaris once was but no longer is).

I actually wasn’t expecting this in September; I thought it would be announced closer to Oracle OpenWorld, but I’m not complaining :)

September 10, 2009

Posted In: Linux

Data Guard — it’s real, it’s Oracle, you know what you’ve got!

“Data Guard — it’s real, it’s Oracle, you know what you’ve got!” said Joe Meeks (Director of High Availability Product Management at Oracle) in his closing statement during the live webcast titled “Maximize Availability with Oracle Database 11g”, held today.

The focus of this presentation was the “Active Data Guard Option” of Oracle 11g. In one sentence: the Active Data Guard Option lets you open your physical standby database read-only while redo apply from the primary is still taking place. In contrast, in 10g, redo apply halted whenever the physical standby was opened read-only; redo transport continued, but the redo was not applied. 11g bridges this gap, and does so very efficiently; as Joe Meeks said, “With 11gR2 Active Data Guard we are confident that latency is very low”.
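
For the curious, turning this on for an existing physical standby boils down to a few commands; a rough sketch, assuming standby redo logs are in place and you are licensed for the option:

$ sqlplus / as sysdba
SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> alter database recover managed standby database using current logfile disconnect from session;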

Data Guard != ZERO downtime, because a rolling upgrade still requires some downtime. For example, according to Joe’s recollection of a major upgrade at UPS, their downtime going from 10g to 11g was roughly 4 minutes. To get ZERO downtime you must design your application around Oracle Streams.

In contrast to Data Guard, which uses media recovery (block by block), Oracle Streams builds logical change records using SQL. Streams puts a full set of replication features at your disposal; for example, you can:

  • replicate a subset of a database
  • perform transformations
  • replicate across platforms
  • set up multimaster replication
  • replicate to a database and have it replicate different subsets of the data back out

Data Guard is simpler: one-way replication focused on DR, using the time-tested media recovery process. A physical standby is very generic, transparent to both the storage and the application.

PS: Active Data Guard is only available with a physical standby (block-by-block changes), and it requires an additional license.

August 19, 2009

Posted In: Data Guard
