HP ML570 Novell NetWare 6 performance tuning guidelines for ProLiant servers - Page 4
understanding the server deployment environment
why use industry standard benchmark for performance analysis

Generally speaking, a server can be deployed in any environment the user chooses. However, server performance varies depending on configuration, operating environment, and workload. In a given platform, the server subsystems most likely to be exercised are the processor, memory, network, and disk. The extent to which any of these is exercised depends largely on the workload characteristics and the operating system environment in which the server is deployed. For example:

• under a database workload, the processor, memory, and disk subsystems come under intense pressure
• in file and print service environments, the network, memory, and disk are intensely exercised
• under web applications, the network, memory, and processor come under intense pressure
• under an Exchange/messaging workload, the memory, processor, and disk are greatly stressed
• under an application server workload, the memory, processor, and disk are under intense pressure

To achieve optimum server performance, it is important to understand the particular environment the server will operate in and the impact the selected server subsystem components might have on the server's overall performance. This knowledge enables you to fine-tune the server appropriately, thereby isolating and eliminating performance bottlenecks.

This document provides a general overview of typical server subsystem components and tuning guidelines for deployment in file I/O and web application environments. The Ziff-Davis NetBench and WebBench benchmarks running under the NetWare 6 operating system are used as the basis for the analysis. Ideally, the best benchmark would be the exact application the user or customer will be running on his or her platform.
However, this is usually not possible in most situations. Therefore, most IT professionals use an industry standard benchmark application that best simulates their unique environment in order to predict how well a server will perform when deployed. In general, benchmarks can be categorized into two main groups:

• trace driven
• execution driven

A trace driven benchmark focuses primarily on real-world performance, using capture/playback "traces" of real-world applications. The Ziff-Davis benchmark test suites fall in this category; they are widespread in the industry and are believed to mimic real-world applications. An execution driven benchmark is synthetic and tends to focus on certain aspects of the server subsystem (e.g., processor, memory) that do not usually correlate to real-world usage experience. SPEC CPU2000, for instance, is a good example of a synthetic benchmark. The objective of this discussion is not to endorse or dismiss either category, but to highlight the differences so the user can make an informed decision as to which method best accomplishes the set goals.
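The workload-to-subsystem mapping described earlier can be summarized in a short sketch. This is purely illustrative: the workload categories and subsystem names are taken from the discussion above, and the function name is a hypothetical helper, not part of any HP or Novell tool.

```python
# Which server subsystems are most intensely exercised by each workload
# type, per the discussion in this section.
WORKLOAD_STRESS = {
    "database": {"processor", "memory", "disk"},
    "file and print": {"network", "memory", "disk"},
    "web": {"network", "memory", "processor"},
    "exchange/messaging": {"memory", "processor", "disk"},
    "application server": {"memory", "processor", "disk"},
}


def stressed_subsystems(workload: str) -> set:
    """Return the subsystems most exercised by the given workload type."""
    return WORKLOAD_STRESS[workload.lower()]


# A file-and-print server, for instance, is tuned around network, memory,
# and disk rather than raw processor throughput.
print(sorted(stressed_subsystems("file and print")))  # → ['disk', 'memory', 'network']
```

A summary like this makes the tuning priority explicit: before adjusting any operating system parameter, identify which of the four subsystems your workload actually stresses.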