HP DL145 HP InfiniBand Solution for Oracle RAC Environments white paper - Page 2


Overview
Oracle® Real Application Clusters (RAC) has a scale out, shared-disk clustering architecture. In a
shared-disk architecture, all nodes in the cluster have access to, and can manipulate, the same data
set. Concurrent access to the data from multiple nodes is synchronized through a private cluster
interconnect network. Depending on the amount of contention inherent in the application, the latency
and efficiency of the cluster interconnect can directly affect the performance and scalability of the
entire system. Most Oracle RAC deployments use standard Ethernet networks for the cluster
interconnect; however, applications that have high levels of contention will not scale beyond a certain
point. For these applications, InfiniBand technologies can be deployed for the cluster interconnect
network, enabling the system to achieve higher levels of performance than with standard Ethernet
networks. This white paper explains how to implement an Oracle Real Application Clusters (RAC)
solution that leverages HP InfiniBand products for the cluster interconnect. It is intended for system
administrators, system architects, and systems integrators who are considering the advantages of HP
InfiniBand-based solutions in an Oracle RAC environment.
Oracle RAC overview
An Oracle RAC system is a shared-disk clustering architecture in which several servers share access to a
common set of network storage devices. A database instance runs on each node in the cluster. The
database data and log files are stored on the shared storage and accessed simultaneously by all of
the nodes in the cluster. The database instances communicate through a cluster interconnect to
maintain concurrency of the data in the database.
A typical Oracle RAC system has the following components:
• Two or more server nodes
• Public local area network (LAN)
• Cluster interconnect
• Fibre Channel storage area network (SAN), usually redundant
• Fibre Channel shared storage array
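As an illustrative sketch of how the cluster interconnect surfaces to an administrator: on a running RAC database, each instance reports which network interface it is using for the private interconnect through the standard gv$cluster_interconnects view (available in Oracle 10g and later). The query below is an example only; column availability can vary by Oracle release.

```sql
-- Illustrative sketch: list the network interface each RAC instance
-- uses for the private cluster interconnect. Requires a running
-- Oracle RAC database.
SELECT inst_id,        -- cluster node (instance) number
       name,           -- OS network interface name (e.g., an InfiniBand interface)
       ip_address,     -- address bound to the interconnect
       is_public       -- 'NO' indicates a private interconnect
  FROM gv$cluster_interconnects
 ORDER BY inst_id;
```

An is_public value of NO for every instance indicates that interconnect traffic is flowing over a dedicated private network rather than the public LAN.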