Commit f018f834 authored by Mike Smith

changed user names

parent e340ef98
@@ -4,6 +4,6 @@ This folder contains instructions and files for setting up the example cluster u
# Cluster infrastructure

The cluster we're using is running on the Heidelberg installation of the [de.NBI cloud](https://www.denbi.de/cloud-overview/cloud-hd). The current design is to create a 4-node cluster (1 controller, 3 compute nodes), with varying hardware specifications for each node so we can demonstrate resource management.

Job scheduling is done using [SLURM](https://slurm.schedmd.com/) since it is (a) free and (b) mirrors the infrastructure we're currently using at EMBL.

## Generate _ubuntu_ user SSH keys

We only need to do this on the master node, since the home directory will be shared with the compute nodes.
```
ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
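## optional sanity check: confirm passwordless login now works locally
## (assumes sshd is running; not required for the rest of the setup)
ssh -o StrictHostKeyChecking=no localhost hostname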
@@ -21,7 +23,12 @@ sudo service nfs-kernel-server start
```
sudo apt-get update
sudo apt-get install nfs-common
## add a line to automatically mount the shared home directory
echo '10.0.0.8:/home /home nfs auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800 0 0' | sudo tee -a /etc/fstab
## restart the machine
sudo shutdown -r now
```
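After the reboot it's worth confirming that the shared home directory really is mounted from the server. A minimal check, assuming the 10.0.0.8 export used in the fstab entry above:

```
mount | grep ' /home '
df -h /home
```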
@@ -31,17 +38,34 @@ sudo cat '10.0.0.8:/home /home nfs auto,noatime,nolock,bg,nfsvers=4,intr,tc
```
sudo apt-get install slurm-wlm
## enable use of cgroups for process tracking and resource management
sudo bash -c 'echo CgroupAutomount=yes >> /etc/slurm-llnl/cgroup.conf'
sudo chown slurm:slurm /etc/slurm-llnl/cgroup.conf
sudo sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"/g' /etc/default/grub
sudo update-grub
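## NB: the new kernel options (here and on the compute nodes) only take
## effect after the next reboot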
## put munge key in home directory so we can share it with the nodes
sudo cp /etc/munge/munge.key $HOME/
## download slurm.conf file (may require some editing of IP addresses etc)
sudo wget https://raw.githubusercontent.com/grimbough/embl_swc_hpc/oct2017/cluster_setup/slurm.conf -O /etc/slurm-llnl/slurm.conf -o /dev/null
sudo chown slurm:slurm /etc/slurm-llnl/slurm.conf
```
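With `slurm.conf` in place, restarting the services and querying the cluster state is a quick way to check the controller is healthy. A sketch, assuming the default Ubuntu service names for these packages (`munge`, `slurmctld`) and that the machine has been rebooted so the cgroup options are active:

```
sudo service munge restart
sudo service slurmctld restart
## should list the partitions and nodes defined in slurm.conf
sinfo
```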
### Node
```
## install slurm worker daemon
sudo apt-get install slurmd
## enable use of cgroups for process tracking and resource management
sudo bash -c 'echo CgroupAutomount=yes >> /etc/slurm-llnl/cgroup.conf'
sudo chown slurm:slurm /etc/slurm-llnl/cgroup.conf
sudo sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"/g' /etc/default/grub
sudo update-grub
## copy the shared munge key and restart the service to start using it
sudo cp /home/ubuntu/munge.key /etc/munge/munge.key
sudo service munge restart
```
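As on the controller, a short check that the node has registered; a sketch assuming the controller is already running and the node has been rebooted so the cgroup options are active:

```
sudo service slurmd restart
## run on the controller: the node should now show as idle
sinfo
## run a trivial job to confirm scheduling works
srun -N1 hostname
```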
@@ -5,5 +5,5 @@ n=40
for i in `seq -w 1 ${n}`
do
  echo $i;
  userdel -rf user${i}
done;
#!/bin/bash
## script to create 40 users called userXX with a default password
## and set up ssh logins without asking for passwords & host checking
n=40
@@ -7,20 +7,20 @@ for i in `seq -w 1 ${n}`
do
  echo $i;
  ## create a new user called userXX with a default password
  adduser --gecos "" --disabled-password user${i}
  echo user${i}:SoftwareC | chpasswd
  ## create somewhere to store ssh configuration
  mkdir -p /home/user${i}/.ssh
  printf "Host *\n StrictHostKeyChecking no\n ForwardX11 yes\n" > /home/user${i}/.ssh/config
  ## generate a ssh key & copy it to the list of authorized keys
  ssh-keygen -f /home/user${i}/.ssh/id_rsa -t rsa -N ''
  cp /home/user${i}/.ssh/id_rsa.pub /home/user${i}/.ssh/authorized_keys
  ## set the new user as owner
  chown -R user${i}:user${i} /home/user${i}/.ssh
  chmod 600 /home/user${i}/.ssh/config
done
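For reference, a hypothetical way to run the script and confirm one of the new accounts works (the filename `create_users.sh` is an assumption, not part of the repository):

```
sudo bash create_users.sh
## try a passwordless login as one of the new users
sudo -u user01 ssh -o StrictHostKeyChecking=no localhost hostname
```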