EX294 Free Practice Questions: Red Hat Certified Engineer (RHCE) exam for Red Hat Enterprise Linux 8
Create a file called requirements.yml in /home/sandy/ansible/roles and a file called role.yml in /home/sandy/ansible/. The haproxy-role should be used on the proxy host, and the php-role should be used on the prod host. When you curl http://node3.example.com it should display "Welcome to node4.example.com", and when you curl again it should display "Welcome to node5.example.com".
Correct answer:
Solution:

Check the proxy host by running curl http://node3.example.com twice; the responses should alternate
between "Welcome to node4.example.com" and "Welcome to node5.example.com".
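A minimal sketch of what /home/sandy/ansible/role.yml could look like, assuming the haproxy-role and php-role named in the task have already been installed under /home/sandy/ansible/roles (requirements.yml would list each role's src, which is not given here, and the roles would be fetched with ansible-galaxy install -r requirements.yml -p /home/sandy/ansible/roles):
---
- name: apply the haproxy role to the proxy group
  hosts: proxy
  roles:
    - haproxy-role
- name: apply the php role to the prod group
  hosts: prod
  roles:
    - php-role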
In /home/sandy/ansible/ create a playbook called logvol.yml. In the play, create a logical volume called lv0 of size 1500MiB on volume group vg0. If there is not enough space in the volume group, print the message "Not enough space for logical volume" and then create an 800MiB lv0 instead. If the volume group doesn't exist at all, print the message "Volume group doesn't exist". Create an xfs filesystem on all lv0 logical volumes. Don't mount the logical volume.
Correct answer:
Solution:
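A sketch of /home/sandy/ansible/logvol.yml, modeled on the lvm.yml solution shown later in this document; the free_g space check and the ignore_errors setting are assumptions about how the test is performed:
---
- name: create lv0 on vg0
  hosts: all
  ignore_errors: yes
  tasks:
    - name: report a missing volume group
      debug:
        msg: "Volume group doesn't exist"
      when: ansible_lvm.vgs.vg0 is not defined
    - name: create a 1500MiB lv0 when there is enough free space
      lvol:
        vg: vg0
        lv: lv0
        size: 1500m
      when:
        - ansible_lvm.vgs.vg0 is defined
        - ansible_lvm.vgs.vg0.free_g | float >= 1.5
    - name: warn when the volume group is too small
      debug:
        msg: "Not enough space for logical volume"
      when:
        - ansible_lvm.vgs.vg0 is defined
        - ansible_lvm.vgs.vg0.free_g | float < 1.5
    - name: fall back to an 800MiB lv0
      lvol:
        vg: vg0
        lv: lv0
        size: 800m
      when:
        - ansible_lvm.vgs.vg0 is defined
        - ansible_lvm.vgs.vg0.free_g | float < 1.5
    - name: put an xfs filesystem on lv0 without mounting it
      filesystem:
        fstype: xfs
        dev: /dev/vg0/lv0
      when: ansible_lvm.vgs.vg0 is defined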

Topic 1, LAB SETUP
You will need to set up your lab by creating 5 managed nodes and one control node.
So 6 machines total. Download the free RHEL8 iso from Red Hat Developers website.
***Control node setup***
Assign static IPs to your managed nodes, then, on the control node, add them to the
/etc/hosts file as follows:
vim /etc/hosts
10.0.2.21 node1.example.com
10.0.2.22 node2.example.com
10.0.2.23 node3.example.com
10.0.2.24 node4.example.com
10.0.2.25 node5.example.com
yum -y install ansible
useradd ansible
echo password | passwd --stdin ansible
echo "ansible ALL=(ALL) NOPASSWD:ALL
su - ansible; ssh-keygen
ssh-copy-id node1.example.com
ssh-copy-id node2.example.com
ssh-copy-id node3.example.com
ssh-copy-id node4.example.com
ssh-copy-id node5.example.com
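A quick sanity check (just a sketch, run as the ansible user on the control node) that key-based login and passwordless sudo work on every managed node:
for n in node1 node2 node3 node4 node5; do
  ssh ansible@${n}.example.com 'sudo -n true && hostname'
done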
***Each managed node setup***
First, add an extra 2GB virtual hard disk to managed nodes 1, 2, and 3. Then add an extra hard disk to managed
node 4. Do not add an extra hard disk to node 5. When you start these machines, the extra disk should appear
automatically at /dev/sdb (or /dev/vdb, depending on your hypervisor).
useradd ansible
echo password | passwd --stdin ansible
echo "ansible ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/ansible
Note: python3 should be installed by default. If it is not, install it on both the control node and the managed
nodes, and set python3 as the default if python2 is being picked up instead:
yum -y install python3
alternatives --set python /usr/bin/python3
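Alternatively, rather than changing the system default, Ansible can be pointed at python3 with an inventory variable; a sketch (the [all:vars] group applies it to every host):
[all:vars]
ansible_python_interpreter=/usr/bin/python3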
All machines need the repos available. You did this in RHCSA. To set up local repos, do the same
on each machine. Attach the RHEL8 ISO as a disk in VirtualBox, KVM, or whatever hypervisor you are using (it
will show up as /dev/sr0). Then, inside the machine:
mount /dev/sr0 /mnt
Then you will have all the files from the iso in /mnt.
mkdir /repo
cp -r /mnt/* /repo
vim /etc/yum.repos.d/base.repo
Inside this file:
[baseos]
name=baseos
baseurl=file:///repo/BaseOS
gpgcheck=0
Also create the AppStream repo:
vim /etc/yum.repos.d/appstream.repo
Inside this file:
[appstream]
name=appstream
baseurl=file:///repo/AppStream
gpgcheck=0
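To confirm both repos are usable (a quick check, not part of any exam task):
yum clean all
yum repolist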

Create logical volumes with lvm.yml on all nodes according to the following
requirements.
----------------------------------------------------------------------------------------
* Create a new logical volume named 'data'.
* The LV should be a member of the 'research' volume group.
* The LV size should be 1500M.
* It should be formatted with the ext4 file system.
--> If the volume group does not exist, print the message "VG Not found".
--> If the VG cannot accommodate a 1500M LV, print "LV Can not be created with
following size", then create the LV with a size of 800M.
--> Do not perform any mounting for this LV.
Correct answer:
Solution:
# pwd
/home/admin/ansible
# vim lvm.yml
---
- name: create the data LV in the research VG
  hosts: all
  ignore_errors: yes
  tasks:
    - name: try to create a 1500M LV
      lvol:
        lv: data
        vg: research
        size: "1500"
    - name: report a missing volume group
      debug:
        msg: "VG Not found"
      when: ansible_lvm.vgs.research is not defined
    - name: report that the VG is too small
      debug:
        msg: "LV Can not be created with following size"
      when:
        - ansible_lvm.vgs.research is defined
        - ansible_lvm.vgs.research.size_g | float < 1.5
    - name: fall back to an 800M LV
      lvol:
        lv: data
        vg: research
        size: "800"
      when:
        - ansible_lvm.vgs.research is defined
        - ansible_lvm.vgs.research.size_g | float < 1.5
    - name: format the LV with ext4 (do not mount it)
      filesystem:
        fstype: ext4
        dev: /dev/research/data
      when: ansible_lvm.vgs.research is defined
:wq!
# ansible-playbook lvm.yml --syntax-check
# ansible-playbook lvm.yml
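To verify the result on the managed nodes, an ad-hoc command works (a sketch; it assumes the inventory configured for this lab):
# ansible all -m command -a "lvs research"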
Install and configure ansible
User sandy has been created on your control node with the appropriate permissions already; do not change or modify ssh keys. Install the necessary packages to run ansible on the control node. Place ansible.cfg at /home/sandy/ansible/ansible.cfg and configure it to access remote machines via the sandy user. All roles should be in the path /home/sandy/ansible/roles. The inventory path should be /home/sandy/ansible/inventory.
You will have access to 5 nodes.
node1.example.com
node2.example.com
node3.example.com
node4.example.com
node5.example.com
Configure these nodes in an inventory file where node1 is a member of group dev, node2 is a member of group test, node3 is a member of group proxy, and node4 and node5 are members of group prod. Also, prod is a member of group webservers.
Correct answer:
In /home/sandy/ansible/ansible.cfg:
[defaults]
inventory=/home/sandy/ansible/inventory
roles_path=/home/sandy/ansible/roles
remote_user=sandy
host_key_checking=false
[privilege_escalation]
become=true
become_user=root
become_method=sudo
become_ask_pass=false
In /home/sandy/ansible/inventory:
[dev]
node1.example.com
[test]
node2.example.com
[proxy]
node3.example.com
[prod]
node4.example.com
node5.example.com
[webservers:children]
prod
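To confirm the configuration, run these from /home/sandy/ansible as the sandy user (a verification sketch, not part of the required answer):
ansible-inventory --graph
ansible all -m ping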