Configure Cluster Nodes
This guide goes through the steps of expanding a cluster horizontally. It
uses the terms Master Node and Slave Node, where the master node
is the node on which the installer was executed and a slave node is a node that takes
part in the cluster but does not host an Admin Server.
Each node in the cluster,
including the Master Node, hosts one Node Manager, which is used as the communication
channel for the cluster. It is therefore important that its listen address and port
can be reached from all nodes in the cluster.
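Reachability can be spot-checked from another node, for example as below. The
hostname master-node is a placeholder, and 5556 is only a commonly used Node
Manager default; use the address and port configured for your instance.

    # Linux: test that the Node Manager port on another node is reachable
    # (requires netcat to be available)
    nc -zv master-node 5556

    # Windows (PowerShell): equivalent reachability test
    Test-NetConnection -ComputerName master-node -Port 5556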
A basic understanding of how to control the servers is recommended.
Note: On Microsoft Windows, when setting up a horizontal middleware
server cluster, the node machines should be connected to a domain; otherwise the
Windows firewall will not allow Windows services to communicate the way a cluster
requires. Depending on how network principals are defined in the network domain,
a separate firewall rule that accepts RPC Dynamic ports between the cluster nodes
might also be required.
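A sketch of such rules, run in an elevated command prompt on each node. The rule
names are illustrative, and your domain policy may require narrower scoping (for
example with remoteip= limited to the cluster nodes).

    :: Allow the RPC endpoint mapper between cluster nodes
    netsh advfirewall firewall add rule name="Cluster RPC Endpoint Mapper" dir=in action=allow protocol=TCP localport=RPC-EPMap

    :: Allow the RPC dynamic port range
    netsh advfirewall firewall add rule name="Cluster RPC Dynamic Ports" dir=in action=allow protocol=TCP localport=RPC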
Note: The server clocks must be synchronized across the cluster.
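Synchronization can be spot-checked on each node, for instance:

    # Linux (systemd): look for "System clock synchronized: yes"
    timedatectl

    # Windows: show the time service status
    w32tm /query /status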
Additional points to consider:
- Creating the cluster.zip requires 10-15 GB of free disk space as a workspace.
- Linux machines require zip 3.0 to be installed; refer to the OS user manual
for details on how to install or update it. If the zip tool is not found, the
cluster files are created as a folder in the instance folder (folder name: 'cluster-node');
copy all the contents of this folder to the slave node. If the cluster-node
folder is archived manually, make sure the archive tool preserves file permissions.
Note that Linux file permissions are not preserved if the folder is copied to a
Windows machine.
- A slave node can be updated by creating a new cluster.zip file on the master
node. If the master has had a recent CPU patch applied, use the 'create' option
to include updated MWS runtime files. If the master was only reconfigured
or patched (no CPU), use the 'update' option. Stop the Node Manager and all managed
servers on the slave node to avoid file locks, then copy and extract the cluster.zip
to the IFS home on the slave node. Use the 'update' option in cluster.sh/cmd
on the slave node to update it; see the sketch after this list. A new script to
control the Node Manager is generated on the slave node (since BP 10.16.20.0).
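A minimal sketch of this update flow on a Linux slave node. The <ifs_home> and
<instance> placeholders are the same ones used throughout this guide, and how the
Node Manager and managed servers are stopped depends on your installation.

    # On the slave node: stop the Node Manager and all managed servers
    # first to avoid file locks (use your installation's control scripts)

    # Copy cluster.zip from the master node, then extract it into the IFS home
    unzip -o cluster.zip -d <ifs_home>

    # Run the cluster script and select 'update' when prompted
    cd <ifs_home>/instance/<instance>/bin
    ./cluster.sh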
Extending the Cluster
It is not possible to extend the cluster using the IFS Installer; instead, the
IFS Admin Console is used to create new servers in, or delete existing servers from, a
cluster. A machine that is to be part of a cluster must first be set up using a specific
cluster script.
- On the master node, browse to <ifs_home>/instance/<instance>/bin
and run cluster.cmd/sh depending on the underlying operating system (the complete
flow is sketched after these steps).
- The script can either create a cluster node or update a cluster node. The
difference between the two operations is that 'create' packs everything needed
to create a new cluster node on a machine that is not already part of the cluster
(i.e. there is no IFS Home created), whereas 'update' packs files in the current
IFS Home that might have been modified or added by the installer during a delivery
or reconfigure. Update assumes there is already an IFS Home installed on the
cluster node.
Once 'create' or 'update' has been selected in the command
window, the script prompts for confirmation. Press 'y' to proceed or 'n' to
abort the operation. If 'create' was selected, the operation will take some time.
- The script produces a compressed archive called cluster.zip in
<ifs_home>/instance/<instance>/. Move this archive to one of the slave nodes.
- Create a new IFS Home on the slave node in the same manner as described
in the Intro Process Document and extract
the archive into this new, empty IFS Home. The directory paths must be the same
on all machines in the cluster.
On Linux, extract the archive
as user IFS and keep using this user for the remainder of this guide.
- Still on the slave node, go to <ifs_home>/instance/<instance>/bin
and once again run the script cluster.cmd/sh depending on the underlying operating
system.
The script offers three options: create, update and delete. If
the current machine is not yet part of the cluster, select 'create'. If the
current node is already part of the cluster but needs to be updated due to a
delivery or reconfigure done on the master node, select 'update'. If this node
should no longer be part of the current cluster, or if it needs to be recreated,
select 'delete'.
Create: Unpacks all binaries and creates a domain
that is part of the cluster. It also creates a new machine automatically.
Update: Overwrites all the files in the current IFS Home with the
files included in the archive.
Delete: Unregisters the IFS Middleware Server
and removes the machine from the cluster so that the IFS Home can be safely deleted.
- When the script has finished, the binaries are installed, an IFS Middleware
domain and a machine have been created, and the Node Manager should be running.
- Repeat the previous steps for each machine in the cluster. The same archive
created during the first steps can be reused, so those steps can be omitted.
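Put together, a minimal sketch of the create flow on Linux. The hostname
slave-node is a placeholder; on Windows, use cluster.cmd instead of cluster.sh.

    # 1. On the master node: pack everything needed for a new cluster node
    cd <ifs_home>/instance/<instance>/bin
    ./cluster.sh                # select 'create' and confirm with 'y'

    # 2. Copy the resulting archive to the slave node
    scp <ifs_home>/instance/<instance>/cluster.zip ifs@slave-node:<ifs_home>/

    # 3. On the slave node, as user IFS: extract into the new, empty IFS Home
    cd <ifs_home>
    unzip cluster.zip

    # 4. Still on the slave node: create the domain and register the machine
    cd <ifs_home>/instance/<instance>/bin
    ./cluster.sh                # select 'create'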
Note: Updating cluster nodes is necessary after applying
a delivery on the master node except for pure database, client or documentation
deliveries.
Configure the Cluster For an External Load Balancer
When an External Load Balancer is used in combination with a horizontal cluster,
an HTTP Server is needed on each node. HTTP Servers are created and managed from
the IFS Admin Console.
IFS Enterprise Explorer deployment
Follow the
IFS Enterprise Explorer deployment guide for the client to work on all nodes.
Remove a Cluster Node
Go to <ifs_home>/instance/<instance>/bin and
run the script cluster.cmd/sh depending on the underlying operating system.
Select 'delete' when prompted. If there are servers attached to the cluster node,
the operation terminates with the following message: 'The node has active Servers
running. Shutdown and Delete the servers using IFS Admin Console to continue with
node delete'. In that case, stop and delete the servers attached to the cluster
node using the IFS Admin Console, then go back to the cluster node and execute
<ifs_home>/instance/<instance>/bin/cluster.cmd/sh again. Now the IFS Middleware
Home should be unregistered and the machine deleted from the configuration.
Delete the IFS Home from disk.
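As an illustrative sequence on a Linux node, with the same placeholders as
elsewhere in this guide:

    # On the node being removed: unregister it from the cluster
    cd <ifs_home>/instance/<instance>/bin
    ./cluster.sh                # select 'delete'

    # If the script reports active servers, stop and delete them in the
    # IFS Admin Console, then run ./cluster.sh again and select 'delete'

    # Finally, delete the IFS Home from disk
    rm -rf <ifs_home>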
Verify
- Verify that the machine(s) have been created and that the Node Manager
is running.
- Check the logs located in <ifs_home>/instance/<instance>/logs
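For example, on a Linux node these checks could look like this; the process name
and log file names are assumptions and vary by installation:

    # Check that a Node Manager process is running
    ps -ef | grep -i [n]odemanager

    # Inspect recent log output for errors
    tail -n 100 <ifs_home>/instance/<instance>/logs/*.log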