Novell NetWare 6 performance tuning guidelines for ProLiant servers
understanding the server deployment environment
A server can be deployed in virtually any environment the user chooses; however, server performance varies with configuration, operating environment, and workload.
On a given platform, the server subsystems most likely to be exercised are the processor, memory, network, and disk. The extent to which each is exercised depends largely on the workload characteristics and the operating system environment in which the server is deployed. For example:
• database workloads place intense pressure on the processor, memory, and disk
• file and print services heavily exercise the network, memory, and disk
• web applications stress the network, memory, and processor
• Exchange/messaging workloads stress the memory, processor, and disk
• application server workloads stress the memory, processor, and disk
To achieve optimum server performance, it is important to understand the particular environment the server will operate in and the impact the selected subsystem components may have on the server's overall performance. This knowledge enables you to fine-tune the server appropriately, isolating and eliminating performance bottlenecks.
This document provides a general overview of typical server subsystem components and tuning guidelines for deployment in file I/O and web application environments. The Ziff-Davis NetBench and WebBench benchmarks, running under the NetWare 6 operating system, are used as the basis for the analysis.
why use an industry standard benchmark for performance analysis
Ideally, the best benchmark would be the exact application the user or customer will run on the platform. However, this is rarely possible. Therefore, most IT professionals use an industry standard benchmark application that best simulates their environment in order to predict how well a server will perform when deployed.
In general, benchmarks can be categorized into two main groups:
• trace driven
• execution driven
Trace driven benchmarks focus primarily on real-world performance, using capture/playback “traces” of real-world applications. The Ziff-Davis benchmark test suites fall into this category; they are widespread in the industry and are believed to mimic real-world applications.
Execution driven benchmarks are synthetic and tend to focus on specific aspects of the server subsystem (e.g., processor or memory) in ways that do not usually correlate to real-world usage. SPEC CPU2000, for instance, is a good example of a synthetic benchmark.
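To make the distinction concrete, the following toy sketch (a hypothetical illustration, not taken from this paper or from either benchmark suite) contrasts the two approaches. The first function replays a captured sequence of file operations, in the spirit of a trace driven suite like NetBench replaying client file I/O; the second runs a pure compute loop, in the spirit of an execution driven CPU benchmark. The trace contents, file names, and loop size are all invented for the example.

```python
# Toy contrast of the two benchmark categories; not a real benchmark.
import time

# Hypothetical captured trace: (operation, file, bytes) tuples.
TRACE = [("write", "orders.db", 512),
         ("read",  "orders.db", 512),
         ("write", "index.db", 4096),
         ("read",  "index.db", 4096)]

def replay_trace(trace):
    """Trace driven: time the playback of recorded file operations."""
    start = time.perf_counter()
    for op, name, size in trace:
        mode = "ab" if op == "write" else "rb"
        with open(name, mode) as f:
            if op == "write":
                f.write(b"\0" * size)   # reproduce the recorded write
            else:
                f.read(size)            # reproduce the recorded read
    return time.perf_counter() - start

def synthetic_cpu(n=1_000_000):
    """Execution driven: time a synthetic kernel aimed at one subsystem."""
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i * i                  # pure compute, no real-world I/O pattern
    return time.perf_counter() - start

print(f"trace replay:  {replay_trace(TRACE):.4f} s")
print(f"synthetic cpu: {synthetic_cpu():.4f} s")
```

The replayed trace exercises the disk and file system the way the recorded application did, while the synthetic kernel measures only raw compute, which is why the two categories answer different questions.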
The object of this discussion is not to dismiss or endorse one category over the other, but to highlight the differences so the user can make an informed decision as to which method best accomplishes the goals at hand.