HP 2128-F ClusterPack V2.4 Tutorial - Page 47

Set up TCP-CONTROL

The default use model of a ClusterPack cluster is that end users submit jobs remotely through the
ClusterWare GUI or by using the ClusterWare CLI from the Management Node. Cluster administrators
generally discourage users from logging into the Compute Nodes directly. Users are encouraged to use the
Management Server for accessing files and performing routine tasks. When it is desirable to add additional
nodes for this purpose, or for more intensive computational tasks such as job pre- or post-processing and
compilation, additional "head nodes" can be used. In this document, the term "head node" refers to such
user-accessible nodes that allow for interactive use. Head nodes can be included in a ClusterPack cluster
using the following approach:
• The head nodes should include an additional network card to allow the node to be
  accessible to the wider area network.
• Head nodes should be added to the cluster using the same approach as Compute Nodes.
  They can be included in the initial cluster definition or added at a later time using the '-a'
  option to manager_config and compute_config (a command sketch follows this list).
• Administrators may choose to exclude these nodes from running ClusterWare jobs, or to
  make them accessible only to particular queues. (See the ClusterWare documentation for
  more information.)
• It may be convenient to use the clgroup command to create groups to represent the head
  node(s) and the remaining Compute Nodes.
• Use compute_config to configure the additional network cards to allow the head node(s)
  to be accessible outside of the cluster. Assign the available network cards publicly
  accessible IP addresses as appropriate to your local networking configuration.
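As a rough sketch of the steps above, the commands below show how a head node might be added after the
initial cluster definition. Only the '-a' option comes from this tutorial; the host name is a placeholder, and
the exact argument syntax for manager_config and compute_config should be confirmed against the
ClusterPack man pages for your release.

    # Sketch only: "headnode1" is a placeholder host name, and the argument
    # form may differ in your ClusterPack release.
    #
    # Register the head node with the Management Server, then configure it
    # the same way a Compute Node is configured:
    manager_config -a headnode1
    compute_config -a headnode1

The clgroup command can then be used to create groups representing the head node(s) and the remaining
Compute Nodes; its group-creation syntax is not shown here and should be taken from the clgroup man page.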
1.3.4 Set up TCP-CONTROL
ClusterPack delivers a package to allow some control of TCP services coming into the Compute Nodes. This
package, called TCP-CONTROL, can be used to limit users from accessing the Compute Nodes directly, but it
should be used with great care due to several restrictions. TCP-CONTROL can be used to force users to run
jobs through ClusterWare Pro™ only. It accomplishes this by disabling telnet and remsh access to the
Compute Nodes from the manager. However, this will also cause several important telnet- and remsh-based
applications to fail for non-root users. The tools affected are the multi-system aware tools (clsh, clps, etc.)
and the AppRS utilities (apprs_ls, apprs_clean, etc.).
Note:
Enabling TCP-CONTROL by setting the /etc/hosts.deny file will prevent users' access to
multi-system aware tools and AppRS utilities.
By default, the TCP-CONTROL package is installed on the Compute Nodes, but is not configured to restrict
access in any way. Access is restricted by the settings in the /etc/hosts.allow and /etc/hosts.deny files on
each Compute Node. The /etc/hosts.deny file is initially configured with no entries, but has two comment
lines that can be uncommented to prevent users from accessing the Compute Nodes:
ALL:ALL@<Management Server name>
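For context, /etc/hosts.allow and /etc/hosts.deny follow the standard TCP wrappers format of
"daemon_list : client_list", with hosts.allow consulted first and access permitted when neither file matches.
The sketch below shows the general shape of a deny entry of the kind described above; the host name is a
placeholder, and the exact comment lines shipped in the ClusterPack hosts.deny file may differ.

    # /etc/hosts.deny on a Compute Node (sketch; placeholder host name)
    #
    # An entry of this form denies all TCP-wrapped services (telnet, remsh,
    # and so on) to any user connecting from the named host, unless a
    # matching entry in /etc/hosts.allow permits the connection first:
    ALL: ALL@mgmt.example.com

Because the multi-system aware tools and AppRS utilities reach the Compute Nodes through these same
services from the Management Server, uncommenting such an entry also causes those tools to fail for
non-root users, as noted above.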