Configure Spark cluster using Ansible
Install and configure Ansible
On your local host, install Ansible, for example:
sudo apt install ansible
To prepare:
sudo cp /etc/ansible/hosts /etc/ansible/hosts.original
sudo chmod 666 /etc/ansible/hosts
Ansible needs to know the names of your cluster machines. Change info319-cluster.tf so it also writes a file like this to /etc/ansible/hosts:
terraform-driver
terraform-worker-0
...
terraform-worker-5
Finally, Ansible must be installed on all the hosts too. Add the line
- ansible
to the packages: section of user-data.cfg, and re-run terraform apply.
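The packages: section of user-data.cfg will then look something like this (with your other packages kept as before):

packages:
  ...
  - ansible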
Test run Ansible
Make sure you can log into your cluster machines without a password. Test your Ansible setup from your local machine:
ansible all -m ping
On your local machine, create the file info319-cluster.yaml with a simple task:

- name: Prepare .bashrc
  hosts: all
  tasks:
    - name: Save original .bashrc
      ansible.builtin.copy:
        src: /home/ubuntu/.bashrc
        dest: /home/ubuntu/.bashrc.original
        remote_src: yes
Run Ansible:
ansible-playbook info319-cluster.yaml
Create Ansible playbook
Extend the playbook file info319-cluster.yaml to re-create the setup from Exercise 4 on the new Terraform cluster. You have some freedom with respect to the order of things.
Preparing .bashrc
In Exercise 4 we made a lot of modifications to ~/.bashrc. In some cases it is more practical to have the cluster configuration in a separate file, for example ~/.info319.
- Add a task that uses the ansible.builtin.file module to create ("touch") /home/ubuntu/.info319 on all the hosts.
- Add a task that uses the ansible.builtin.lineinfile module to add this line to /home/ubuntu/.bashrc on all the hosts, so that login shells pick up the new file (a sketch of both tasks follows below):

source .info319
See the documentation here: https://docs.ansible.com/ansible/latest/collections/index.html, for example https://docs.ansible.com/ansible/latest/collections/ansible/builtin/lineinfile_module.html.
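A minimal sketch of these two tasks:

- name: Create .info319
  ansible.builtin.file:
    path: /home/ubuntu/.info319
    state: touch

- name: Source .info319 from .bashrc
  ansible.builtin.lineinfile:
    path: /home/ubuntu/.bashrc
    line: 'source .info319'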
Configure SSH
You can use the blockinfile module to add the contents of your local ipv4-hosts file to /etc/hosts on each node:

- name: Copy IPv4 addresses to /etc/hosts
  ansible.builtin.blockinfile:
    path: /etc/hosts
    block: "{{ lookup('file', 'ipv4-hosts') }}"
  become: yes
On your local machine, create the file config in your exercise-5 folder:
Host terraform-* localhost
    User ubuntu
    IdentityFile ~/.ssh/info319-spark-cluster
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

Include ~/.ssh/config.terraform-hosts
(This is the config.stub file from Exercise 4, with the Include line added. Also, localhost has been added to the first line to allow nodes to ssh into themselves.)
Use the copy module to upload this file, along with ~/.ssh/config.terraform-hosts and ~/.ssh/info319-spark-cluster to all hosts.
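For example, a sketch using a loop; the destination paths are assumptions, and the restrictive mode on the private key matters:

- name: Upload SSH config and keys
  ansible.builtin.copy:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    mode: "{{ item.mode }}"
  loop:
    - { src: config, dest: /home/ubuntu/.ssh/config, mode: '0644' }
    - { src: ~/.ssh/config.terraform-hosts, dest: /home/ubuntu/.ssh/config.terraform-hosts, mode: '0644' }
    - { src: ~/.ssh/info319-spark-cluster, dest: /home/ubuntu/.ssh/info319-spark-cluster, mode: '0600' }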
In addition to uploading the config and private key files, you must also authorise the public info319-spark-cluster.pub key, like this:
- name: Authorise public cluster key
  ansible.posix.authorized_key:
    key: "{{ lookup('file', '/home/YOUR_USERNAME/.ssh/info319-spark-cluster.pub') }}"
    user: ubuntu
Tip: ansible-playbook has a --start-at-task "Task name" option to avoid repeating all earlier tasks. You can also use --step to have Ansible ask before each task whether to execute it, skip it, or continue without further prompting.
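For example:

ansible-playbook info319-cluster.yaml --start-at-task "Authorise public cluster key"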
Install Java
Use Ansible's ansible.builtin.apt module to install an old and stable Java version, for example openjdk-8-jdk-headless.
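For example, a minimal task (become: yes is needed to install packages):

- name: Install Java 8
  become: yes
  ansible.builtin.apt:
    name: openjdk-8-jdk-headless
    state: present
    update_cache: yes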
Mount volumes
The community.general.parted, community.general.filesystem, and ansible.posix.mount modules may first require installing their collections on your local machine:
ansible-galaxy collection install community.general
ansible-galaxy collection install ansible.posix
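A sketch of the three tasks, assuming the extra volume appears as /dev/vdb (check with lsblk) and should be mounted at /home/ubuntu/volume:

- name: Partition the volume
  become: yes
  community.general.parted:
    device: /dev/vdb
    number: 1
    state: present

- name: Create a filesystem on the partition
  become: yes
  community.general.filesystem:
    fstype: ext4
    dev: /dev/vdb1

- name: Mount the volume
  become: yes
  ansible.posix.mount:
    path: /home/ubuntu/volume
    src: /dev/vdb1
    fstype: ext4
    state: mounted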
Install HDFS and YARN
To install HDFS and YARN you need the master_node and num_workers available as Ansible variables (facts). You can use the ansible.builtin.shell and .set_fact modules to do this, for example at the start of a new Ansible play:
- name: Install HDFS and YARN
  hosts: all
  tasks:
    - name: Register master_node expression
      shell: grep tf-driver /etc/hosts | cut -d' ' -f1
      register: master_node_expr

    - name: Set master_node fact
      set_fact:
        master_node: "{{ master_node_expr.stdout }}"
Write two corresponding tasks for num_workers.
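A sketch, assuming each worker has a tf-worker line in /etc/hosts:

- name: Register num_workers expression
  shell: grep -c tf-worker /etc/hosts
  register: num_workers_expr

- name: Set num_workers fact
  set_fact:
    num_workers: "{{ num_workers_expr.stdout }}"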
Use the ansible.builtin.get_url module to download the Hadoop and later the Spark archives directly to each cluster host. But if you re-run your playbook many times, this takes time and can trigger rate limiting on the download servers. If so, it is better to let Ansible download each archive once to your local machine and then copy it onto the hosts as needed.
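A sketch of the download-once pattern; the Hadoop version and mirror URL are only examples:

- name: Download Hadoop archive once, to the local machine
  ansible.builtin.get_url:
    url: https://dlcdn.apache.org/hadoop/common/hadoop-3.3.4/hadoop-3.3.4.tar.gz
    dest: /tmp/hadoop-3.3.4.tar.gz
  delegate_to: localhost
  run_once: true

- name: Copy the archive to each host
  ansible.builtin.copy:
    src: /tmp/hadoop-3.3.4.tar.gz
    dest: /home/ubuntu/volume/hadoop-3.3.4.tar.gz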
Use the ansible.builtin.unarchive module to unpack the archives. Use the apt module to install gzip if you need it. Use ansible.builtin.file to create symbolic links as in Exercise 4.
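For example, continuing with the example paths above:

- name: Unpack the Hadoop archive
  ansible.builtin.unarchive:
    src: /home/ubuntu/volume/hadoop-3.3.4.tar.gz
    dest: /home/ubuntu/volume
    remote_src: yes

- name: Create a hadoop symbolic link
  ansible.builtin.file:
    src: /home/ubuntu/volume/hadoop-3.3.4
    dest: /home/ubuntu/volume/hadoop
    state: link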
Use ansible.builtin.lineinfile to define environment variables by adding them to ~/.info319 (instead of ~/.bashrc).
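For example, with one task per variable from Exercise 4 (HADOOP_HOME and PATH are just two of them, and the paths assume the hadoop symbolic link above):

- name: Define HADOOP_HOME
  ansible.builtin.lineinfile:
    path: /home/ubuntu/.info319
    line: 'export HADOOP_HOME=/home/ubuntu/volume/hadoop'

- name: Add Hadoop binaries to PATH
  ansible.builtin.lineinfile:
    path: /home/ubuntu/.info319
    line: 'export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin'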
In the files {core,hdfs,mapred,yarn}-site.xml from Exercise 4, change the variable syntax from Linux to Ansible (Jinja2). For example
- from core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://${HADOOP_NAMENODE}:9000</value>
    </property>
</configuration>
- to core-site.xml.j2:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://{{ hadoop_namenode }}:9000</value>
    </property>
</configuration>
You can now use the ansible.builtin.template module instead of Linux's envsubst command to configure the files. Note that the files use the {{ hadoop_namenode }} variable. It is not defined yet, but it should have the same value as {{ master_node }}.
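For example (the destination path assumes the hadoop symbolic link created earlier):

- name: Set hadoop_namenode fact
  set_fact:
    hadoop_namenode: "{{ master_node }}"

- name: Configure core-site.xml
  ansible.builtin.template:
    src: core-site.xml.j2
    dest: /home/ubuntu/volume/hadoop/etc/hadoop/core-site.xml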
Use ansible.builtin.shell to create Hadoop's worker and master files as in Exercise 4, and to format the HDFS namenode and create the HDFS root folder on the terraform-driver host.
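A sketch for the worker file, assuming worker IP addresses come first on the tf-worker lines in /etc/hosts and that the file lives where Hadoop 3 expects its workers file:

- name: Create Hadoop worker file
  ansible.builtin.shell: grep tf-worker /etc/hosts | cut -d' ' -f1 > /home/ubuntu/volume/hadoop/etc/hadoop/workers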
Note that hdfs namenode -format has a -nonInteractive option that does not re-format an already formatted namenode. Use failed_when to make Ansible ignore exit code 1 from hdfs in such cases:
- name: Format HDFS namenode
  ansible.builtin.shell:
    argv: ["/home/ubuntu/volume/hadoop/bin/hdfs", "namenode", "-format", "-nonInteractive"]
  register: result
  failed_when: result.rc not in [0, 1]
Now you can log into terraform-driver and test run HDFS on the cluster as in Exercise 4.
Finally, you can start HDFS and YARN from terraform-driver:
- name: Start HDFS and YARN
  ansible.builtin.shell:
    argv: ["/usr/bin/bash", "--rcfile", "/home/ubuntu/.info319", "-c", "start-all.sh"]