devops-tools-notes

nealalan.github.io/devops-tools-notes

Note: The Jekyll markdown renderer used by GitHub Pages will not display this page correctly. I recommend viewing the repo version of this page instead so it displays correctly.

TOC

DEFINITIONS

Machine Deployment

Vagrant - vagrantup.com

Vagrant: Installing on a Mac

  1. Download from the HashiCorp site: https://www.vagrantup.com/downloads.html
  2. Use brew (a sketch follows below). I tried this method, but Vagrant 2.0.0 had been installed previously and no longer worked, and the system would not pick up the newer version installed with brew. I tried uninstalling and deleting everything I could find, but still could not use the brew-installed version.
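    A rough sketch of the brew route, assuming a recent Homebrew (the cask name and leftover paths below are typical defaults, not from these notes; verify before deleting):
    $ brew uninstall --cask vagrant 2>/dev/null || true
    # remove leftovers from a previous manual install (typical locations)
    $ sudo rm -rf /opt/vagrant ~/.vagrant.d
    $ brew install --cask vagrant
    $ vagrant --version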

Vagrant: Use on a Mac

  1. Install the latest version of VirtualBox. I had an out-of-date version that would no longer run on macOS.

  2. Create a Vagrantfile
    # set current directory to be a Vagrant environment and create a Vagrantfile
    #   within ~/Projects/vagrant/
    $ vagrant init
    
  3. Edit the Vagrantfile
    $ atom ~/Projects/vagrant/Vagrantfile
    
  4. Bring up vagrant
    # Set up the environment: create and configure guest machines according to the Vagrantfile
    $ vagrant up
    # ALSO:
    $ vagrant up --provider=virtualbox --debug
    
  5. See machine running
    # status of machines in env
    $ vagrant status
    
  6. connect to the machine instance
    # connect
    $ vagrant ssh default
    
  7. stop and destroy the machine (all resources)
    $ vagrant destroy
    

Vagrant: Additional Commands

# make sure your vagrant file is valid
$ vagrant validate
$ vagrant provision
# runs a halt and an up
$ vagrant reload

Vagrant: Use with Docker

  1. Configure your Vagrantfile. This will pull down a Ghost blog container to run in Docker and map host port 80 to port 2368 on the container
    Vagrant.configure("2") do |config|
      config.vm.provider "docker" do |d|
     d.image = "ghost"
     d.ports = ["80:2368"]
      end
    end
    
  2. Run vagrant
    $ vagrant up
    
  3. Find the docker container ID and connect to docker instance
    $ docker ps
    $ docker exec -i -t [container-id] /bin/bash
    

Vagrant: Mapping files

Vagrant: Connecting with SSH

Vagrant: Provisioning in the shell and w/ puppet

  1. Clone Sample Vagrantfile: https://github.com/linuxacademy/content-LPIC-OT-vagrant-puppet
    $ git clone https://github.com/linuxacademy/content-LPIC-OT-vagrant-puppet.git vagrant
    
  2. Set up the Vagrantfile
    Vagrant.configure("2") do |config|
      config.vm.define "web" do |web|
        web.vm.box = "ubuntu/trusty64"
        web.vm.hostname = "web.vagrant.vm"
      end

      config.vm.define "db" do |db|
        db.vm.box = "ubuntu/trusty64"
        db.vm.hostname = "db.vagrant.vm"
      end
    end

  3. Launch the web server only
    $ vagrant up web

  4. Add shell provisioning to the Vagrantfile
    Vagrant.configure("2") do |config|
      config.vm.define "web" do |web|
        web.vm.box = "ubuntu/trusty64"
        web.vm.hostname = "web.vagrant.vm"
        web.vm.provision "shell" do |shell|
          shell.inline = "apt update -y; apt install apache2 -y"
        end
      end

      config.vm.define "db" do |db|
        db.vm.box = "ubuntu/trusty64"
        db.vm.hostname = "db.vagrant.vm"
      end
    end

  5. Relaunch what is already up (only the web server so far)
    $ vagrant reload --provision
  6. Add the Puppet provisioner to the Vagrantfile
    Vagrant.configure("2") do |config|
      config.vm.define "web" do |web|
        web.vm.box = "ubuntu/trusty64"
        web.vm.hostname = "web.vagrant.vm"
        web.vm.provision "shell" do |shell|
          shell.inline = "apt update -y; apt install apache2 -y"
        end
      end
      config.vm.define "db" do |db|
        db.vm.box = "ubuntu/trusty64"
        db.vm.hostname = "db.vagrant.vm"
        db.vm.provision "puppet" do |puppet|
          puppet.manifests_path = "puppet/manifests"
          puppet.manifest_file = "default.pp"
          puppet.module_path = "puppet/modules"
          puppet.hiera_config_path = "puppet/hiera.yaml"
        end
      end
    end
    
  7. Validate for errors in the Vagrantfile and launch the db server
    $ vagrant validate
    $ vagrant up db
    
  8. Connect to the db machine and verify MySQL is installed
    # once server launches...
    $ vagrant ssh db
    $ sudo su
    $ mysql
    

Vagrant: CentOS Lab Notes

# check for docker
$ docker -v

# install vagrant - link from the downloads section
$ sudo yum install -y https://releases.hashicorp.com/vagrant/2.2.3/vagrant_2.2.3_x86_64.rpm
$ vagrant -v 

# create a Vagrantfile and map host port 80 to container port 2368
$ sudo yum install nano
$ nano Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "ghost"
    d.ports = ["80:2368"]
  end
end

# launch and verify
$ sudo vagrant up
$ docker ps
$ docker images
$ curl http://localhost

# pull up web browser

Vagrant: Use w/ Docker to Build a DEV Env

  1. Log into instance with Vagrant and Docker installed
  2. Setup Dockerfile
    $ sudo su
    $ yum install nano
    $ cd /root/docker
    $ nano Dockerfile
    
    FROM node:alpine
    COPY code /code
    WORKDIR /code
    RUN npm install
    EXPOSE 3000
    CMD ["node", "app.js"]
    
  3. Set up the Vagrantfile
    $ nano Vagrantfile

    ENV['VAGRANT_DEFAULT_PROVIDER'] = "docker"

    Vagrant.configure("2") do |config|
      config.vm.provider "docker" do |d|
        d.build_dir = "."
        d.ports = ["80:3000"]
      end
    end

  4. Use Vagrant to build and launch the Docker image from the Dockerfile (based on node:alpine)
    $ mkdir code
    $ vagrant validate
    $ vagrant up

    $ docker images
    $ docker ps
  5. See if port 3000 on the node:alpine-based container is mapped to localhost:80
    $ curl localhost
    
  6. To edit the JS code, just cd into the code/ folder, make changes, and reload (a sample app is sketched below)
    $ vagrant reload
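    A minimal sketch of what code/ might contain so that the Dockerfile's npm install and CMD have something to run (the hello-world app below is a hypothetical placeholder):
    $ echo '{
      "name": "vagrant-docker-demo",
      "version": "0.1.0",
      "main": "app.js"
    }' > code/package.json
    $ echo 'const http = require("http");
    http.createServer((req, res) => res.end("Hello from Vagrant + Docker\n")).listen(3000);' > code/app.js
    $ vagrant reload
    $ curl localhost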
    

Vagrant Box

$ vagrant box add <ADDRESS>
$ vagrant box list
# tell you if box is outdated
$ vagrant box outdated
$ vagrant box outdated --global
# prune out old versions of boxes
$ vagrant box prune
$ vagrant box prune -n
# remove a specific box
$ vagrant box remove <NAME>
# repackage - reconstruct the box file
$ vagrant box repackage <NAME> <PROVIDER> <VERSION>
$ vagrant box repackage ubuntu64 virtualbox 0
# downloads and installs the new box version; you must still update each running machine individually
$ vagrant box update
$ vagrant box update --box centos/7 
# remove and readd box
$ vagrant box remove ubuntu64
$ vagrant box add ubuntu64
# automatically create the Vagrantfile for precise64
$ vagrant init hashicorp/precise64

Vagrant Box: Creating a Vagrant Box file

  1. Go into the project folder "vagrant_box"
  2. Download Ubuntu 18.04 using curl
  3. Open VirtualBox and create a new machine "ubuntu64-base": 512MB RAM, 40GB dynamically allocated disk
  4. Disable audio and USB; set network port forwarding for SSH (host 2222 to guest 22)
  5. Set up storage: attach the downloaded Ubuntu ISO in vagrant_box to the virtual CD-ROM drive
  6. Networking: set as NAT
  7. Start the machine in VirtualBox, run the installer, and add user vagrant with password vagrant
  8. Guided install for disk; automatically add security updates
  9. Software selection: OpenSSH Server, Basic Ubuntu Server; Yes for GRUB
  10. Before restarting, eject the disk in VirtualBox
  11. Log into Ubuntu instance
  12. Setup security
    $ passwd root    # set the root password to vagrant
    $ echo "vagrant ALL=(ALL) NOPASSWD:ALL" | tee -a /etc/sudoers.d/vagrant
    
  13. Get the vagrant public key: https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant.pub
  14. Get the key, setup SSH, install packages
    $ mkdir /home/vagrant/.ssh
    $ chmod 0700 /home/vagrant/.ssh
    $ cd /home/vagrant/.ssh
    $ wget https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant.pub
    $ mv vagrant.pub authorized_keys
    $ chmod 600 authorized_keys
    $ chown -R vagrant:vagrant /home/vagrant/.ssh/
    $ echo "AuthorizedKeysFile %h/.ssh/authorized_keys" | tee -a /etc/ssh/sshd_config
    $ service ssh restart
    $ apt install -y gcc build-essential git linux-headers-$(uname -r) dkms
    
  15. VirtualBox Menu Bar: Devices: Insert Guest Additions CD image
    $ mount /dev/cdrom /mnt
    $ /mnt/VBoxLinuxAdditions.run
    
  16. Compress all empty space out of filesystem
    $ dd if=/dev/zero of=/EMPTY bs=1M
    $ rm -f /EMPTY
    
  17. From the host OS, turn the Ubuntu VM into a Vagrant box (this creates package.box)
    $ vagrant package --base ubuntu64-base
    $ vagrant box add ubuntu64 package.box
    $ vagrant box list

  18. Run the new box and connect!
    $ vagrant init ubuntu64 -m
    $ cat Vagrantfile
    $ vagrant up
    $ vagrant ssh

Vagrant Box: file format
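
A box file is a tar archive containing the VM image plus a small metadata.json naming the provider; it can be published through a JSON box catalog metadata file that lists versions and download URLs. A minimal sketch of such a catalog file (the name, version, and URL are placeholders):

$ cat > ubuntu64.json <<'EOF'
{
  "name": "local/ubuntu64",
  "versions": [
    {
      "version": "0.1.0",
      "providers": [
        {
          "name": "virtualbox",
          "url": "file:///path/to/package.box"
        }
      ]
    }
  ]
}
EOF
# add the box via its catalog file so Vagrant can track versions
$ vagrant box add ubuntu64.json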

Vagrant: Review Questions

  1. When executing vagrant init, what flag would you use to overwrite a Vagrantfile if one has already been created?
    • vagrant -f
  2. Which command returns all installed boxes?
    • vagrant box list
  3. When executing vagrant destroy, what flag would you use so that you are not prompted to confirm that you want to destroy the resources?
    • vagrant -f
  4. What are the three ways that Vagrant will try and detect a provider?
    • Execute vagrant up with the --provider flag.
    • Use the VAGRANT_DEFAULT_PROVIDER environment variable in your Vagrantfile. Example: ENV['VAGRANT_DEFAULT_PROVIDER'] = ''
    • Vagrant will go through all of the config.vm.provider calls in the Vagrantfile and try each in order.
  5. Fill in the blank. The primary function of the _ _ _ _ _ _ _ _ _ is to describe the type of machine required for a project, and how to configure and provision these machines.
    • Vagrantfile
  6. What file format should the info file be in?
    • JSON
  7. When creating a base box, what are the default settings that should be configured?
    • Set the root password to vagrant.
    • Create a vagrant user for SSH access to the machine.
    • Configure the SSH user to have passwordless sudo configured.
  8. Which of these is true about Docker Base Boxes?
    • The Docker provider does not require a Vagrant box.
  9. Which of the following commands will create a new Box file from a Box imported into Vagrant?
    • vagrant box repackage
  10. What are the three different components of a Box?
    • A Box Information File
    • A Box Catalog Metadata File
    • A Box File

Packer

Packer Templates

$ packer build
# bring a template up to date
$ packer fix
# learn what the template is doing (vars, definitions, etc)
$ packer inspect
# check syntax and config
$ packer validate

Packer: Install

$ cd /usr/local/bin
$ wget <packer zip file>
$ yum install unzip
$ unzip <packer zip file>
$ rm <packer zip file>
$ cd
$ packer --version

Packer: Create a Packer Template

$ mkdir packer
$ nano packer.json

{
  "variables": {
    "repository": "la/express",
    "tag": "0.1.0"
  },
  "builders": [
    { "type": "docker",
      "author": "<your name>",
      "image": "node",
      "commit": "true",
      "changes": [
        "EXPOSE 3000"
      ]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "apt update && apt install curl -y",
        "mkdir -p /var/code",
        "cd /root",
        "curl -L https://github.com/linuxacademy/content-nodejs-hello-world/archive/v1.0.tar.gz -o code.tar.gz",
        "tar zxvf code.tar.gz -C /var/code --strip-components=1",
        "cd /var/code",
        "npm install"
        ]
    }
  ],
  "post-processors": [
    {
    "type": "docker-tag",
    "repository": "",
    "tag": ""
    }
  ]
}

$ packer validate packer.json
# fix errors
$ packer build -var 'tag=0.0.1' packer.json
$ docker images
# you will see any docker images from the past and this one
$ docker run -dt -p 80:3000 la/express:0.0.1 node /var/code/bin/www
$ docker ps
# you should see the docker image running

Cloud Init

$ cloud-init init = run by the OS at boot, but can also be run from the CLI
$ cloud-init modules = activate modules using a given config key
$ cloud-init single = run a single module
$ cloud-init dhclient-hook = run the dhclient hook to record network info
$ cloud-init features = list defined features (not available in older versions)
$ cloud-init analyze = analyze cloud-init logs and data
$ cloud-init devel = run the development tools
$ cloud-init collect-logs = collect and tar debug info
$ cloud-init clean = remove logs and artifacts so cloud-init can re-run
$ cloud-init status = report cloud-init status or wait on completion
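
These subcommands operate on user-data supplied at first boot; a minimal #cloud-config sketch (the package and command are just examples):

$ cat > user-data <<'EOF'
#cloud-config
package_update: true
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
EOF
# pass this file as user-data when launching the instance (cloud provider console/CLI or a NoCloud seed)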

Packer: Using Packer to Create an AMI

$ sudo su
$ cd /usr/local/bin
$ wget <packer.io link>
$ unzip pack*.zip
$ rm packer*.zip
$ exit
$ packer --version
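
To go from an installed Packer to an actual AMI, a minimal amazon-ebs template might look like this sketch (region, source AMI filter, instance type, and names are assumptions; AWS credentials must be available in the environment):

$ cat > ami.json <<'EOF'
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "instance_type": "t2.micro",
      "source_ami_filter": {
        "filters": { "name": "amzn2-ami-hvm-*-x86_64-gp2" },
        "owners": ["amazon"],
        "most_recent": true
      },
      "ssh_username": "ec2-user",
      "ami_name": "packer-demo-{{timestamp}}"
    }
  ],
  "provisioners": [
    { "type": "shell", "inline": ["sudo yum update -y"] }
  ]
}
EOF
$ packer validate ami.json
$ packer build ami.json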

Packer: Using Packer to Create a Docker Image

  1. In the root directory (of an instance), create a packerfile.json with the following contents:
  2. Create a directory called code in /var:
    • Use curl to download the application tar file to root: curl -L https://github.com/linuxacademy/content-nodejs-hello-world/archive/v1.0.tar.gz -o code.tar.gz
    • Untar the file to /var/code: tar zxvf code.tar.gz -C /var/code --strip-components=1
    • Go to /var/code and execute an npm install.
  3. Create a docker-tag post-processor:
    • Set repository to use the repository variable.
    • Set tag to use the tag variable.
$ echo '{
  "variables": {
    "repository": "la/express",
    "tag": "0.1.0"
  },
  "builders": [
    { "type": "docker",
      "author": "<your name>",
      "image": "node",
      "commit": "true",
      "changes": [
        "EXPOSE 3000"
      ]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "apt update && apt install curl -y",
        "mkdir -p /var/code",
        "cd /root",
        "curl -L https://github.com/linuxacademy/content-nodejs-hello-world/archive/v1.0.tar.gz -o code.tar.gz",
        "tar zxvf code.tar.gz -C /var/code --strip-components=1",
        "cd /var/code",
        "npm install"
        ]
    }
  ],
  "post-processors": [
    {
    "type": "docker-tag",
    "repository": "",
    "tag": ""
    }
  ]
}' > packerfile.json

  4. Validate the packerfile.json.
    $ packer validate packerfile.json
    
  5. Build the Docker image by executing packer build.
    $ packer build --var 'repository=la/express' --var 'tag=0.0.1' packerfile.json
    # show the images that exist
    $ docker images
    
  6. Start a Docker container by executing:
    $ docker run -dt -p 80:3000 la/express:0.0.1 node /var/code/bin/www
    # validate running
    $ docker ps
    $ curl localhost
    

Configuration Management

Puppet

$ puppet apply    // apply manifests locally to the system
$ puppet agent    // pull the catalog from the puppet master and apply it
$ puppet cert     // list and manage the built-in certificate authority
$ puppet module   // pull down modules or create our own
$ puppet resource // inspect or manipulate system resources
$ puppet parser   // validate puppet files (puppet parser validate <file>)
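
A minimal sketch of a manifest applied locally with puppet apply (the nginx package and service are placeholders):

$ cat > default.pp <<'EOF'
# make sure nginx is installed and running
package { 'nginx':
  ensure => installed,
}
service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],
}
EOF
$ puppet parser validate default.pp
$ puppet apply default.pp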

Chef

$ chef-server-ctl
$ chef-server-ctl restore BACKUP_PATH
$ chef-server-ctl backup-recover
$ chef-server-ctl cleanse   // reset the server to its state before the first reconfigure
$ chef-server-ctl gather-logs
$ chef-server-ctl ha-status
$ chef-server-ctl show-config
$ chef-server-ctl restart SERVICE_NAME
$ chef-server-ctl service-list
$ chef-server-ctl start SERVICE_NAME
$ chef-server-ctl status
$ chef-server-ctl stop SERVICE_NAME
$ chef-solo     // exec locally
$ knife         // interact with chef server
$ knife cookbook
$ knife cookbook create COOKBOOK_NAME
$ knife cookbook delete COOKBOOK_NAME[version]
$ knife cookbook download COOKBOOK_NAME[version]
$ knife cookbook list
$ knife cookbook metadata
$ knife cookbook show COOKBOOK_NAME
$ knife cookbook upload COOKBOOK_NAME
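
A minimal sketch of the kind of cookbook recipe these knife commands manage (the cookbook name and apache2 resources are placeholders; the upload assumes knife is configured with a cookbook_path and a Chef server):

$ mkdir -p cookbooks/my_cookbook/recipes
$ cat > cookbooks/my_cookbook/recipes/default.rb <<'EOF'
# install and start apache
package 'apache2'

service 'apache2' do
  action [:enable, :start]
end
EOF
$ knife cookbook upload my_cookbook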

Ansible

$ ansible
$ ansible-config
$ ansible-console   // REPL console for executing Ansible tasks
$ ansible-doc       // show documentation on Ansible modules
$ ansible-galaxy    // download and share roles
$ ansible-inventory
$ ansible-playbook
$ ansible-pull
$ ansible-vault     // encrypt/decrypt secrets
$ ansible -i
$ ansible-playbook --ask-vault-pass
$ ansible-playbook --vault-password-file <file>
$ ansible-vault create file.yml
$ ansible-vault edit file.yml
$ ansible-vault rekey file.yml
$ ansible-vault encrypt file.yml
$ ansible-vault decrypt file.yml
$ ansible-vault view file.yml
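
A minimal sketch of an inventory and playbook that the commands above operate on (the host name and the nginx task are placeholders):

$ cat > inventory <<'EOF'
[web]
web1.example.com
EOF
$ cat > site.yml <<'EOF'
---
- hosts: web
  become: true
  tasks:
    - name: install nginx
      apt:
        name: nginx
        state: present
EOF
$ ansible-playbook -i inventory site.yml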

Ansible: Configure Master to work on Client

Deploying to AWS with Ansible and Terraform

Deploying to AWS with Ansible and Terraform: Requirements

Deploying to AWS with Ansible and Terraform: Budgeting

Deploying to AWS with Ansible and Terraform: Process Flow

Deploying to AWS with Ansible and Terraform: Setup Process Overview

Deploying to AWS with Ansible and Terraform: Setup Server

Deploying to AWS with Ansible and Terraform: Setup AWS IAM and DNS

$ sudo su - 
$ aws configure --profile terransible_lab
$ aws ec2 describe-instances --profile terransible_lab
# NOTE: may not have any instances listed

Deploying to AWS with Ansible and Terraform: Setup Credentials and Variables

$ cd /home/user/terransible
$ touch main.tf terraform.tfvars variables.tf
$ touch userdata aws_hosts wordpress.yml s3update.yml
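
For reference, a value flows between these files like this: a variable declared in variables.tf gets its value from terraform.tfvars and is referenced in main.tf. The aws_region example below is only an illustration, not the lab's actual configuration:

$ cat >> variables.tf <<'EOF'
variable "aws_region" {}
EOF
$ cat >> terraform.tfvars <<'EOF'
aws_region = "us-east-1"
EOF
$ cat >> main.tf <<'EOF'
provider "aws" {
  profile = "terransible_lab"
  region  = "${var.aws_region}"
}
EOF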

Deploying to AWS with Ansible and Terraform: Terraform Files & Ansible Playbook Files

$ terraform init
$ terraform plan

main.tf

variables.tf

terraform.tfvars

s3update.yml

wordpress.yml

Deploying to AWS with Ansible and Terraform: DEPLOY

$ terraform fmt --diff
$ terraform plan
$ terraform apply

Deploying to AWS with Ansible and Terraform: Troubleshooting

  1. What will happen if Terraform makes a change in a later version that isn’t supported by your script?
    • The script may break; Terraform does not automatically fix errors.
  2. What is one command that you can use to access your ssh agent?
    • ssh-agent bash
  3. What two Route 53 Zone types are available?
    • Public, Private
  4. What must be set to “False” in order for the Ansible-playbook to avoid issues connecting to an AWS instance for the first time?
    • host_key_checking
  5. What is a prerequisite for installation of the AWS CLI?
    • python-pip
  6. What command allows you to setup your AWS CLI with your credentials and region?
    • aws configure
  7. What three things could cause your Ansible Playbook called by Terraform not to run successfully on the AWS instance?
    • key not added to ssh-agent, Incorrect Security Group rules, host_key_checking set to true
  8. What happens if a variable defined in your main.tf (or other infrastructure) file is not listed in your variables.tf file?
    • The apply will fail
  9. What punctuation is used to encapsulate a list of multiple items in a Terraform *.tf file?
    • brackets
  10. What Terraform command will deploy the scripted infrastructure?
    • terraform apply
  11. What Terraform command will “clean up” the code in the tf files?
    • terraform fmt
  12. What switch for the command that runs an Ansible Playbook is used to specify a custom inventory file?
    • -i
  13. What command runs an Ansible Playbook?
    • ansible-playbook
  14. What AWS feature allows us to access AWS repositories privately as well as the S3 bucket all without using an Internet Gateway for our instances?
    • S3 VPC Endpoint
  15. What must be modified for the Terraform command to run by just using the command “Terraform”?
    • The PATH
  16. What command will initialize the Terraform directory and download required plugins?
    • terraform init
  17. What happens if you don’t specify the value of a variable in the variables.tf file in the terraform.tfvars file?
    • A terraform apply will ask you for the value before applying
  18. What AWS product allows us to serve traffic to multiple private instances without exposing them to the public?
    • Elastic Load Balancer
  19. When creating your static nameservers used in Route 53, where must you set those nameservers in order for the deployment to work correctly and the zones to propagate to the internet?
    • nameservers field in your registrar
  20. What command will allow you to list keys associated with your ssh-agent?
    • ssh-add -l
  21. What Route 53 feature allows you to reuse nameservers for multiple deployments?
    • reusable-delegation-set

Container Management

Docker

$ docker attach    // attach your terminal to a running container's main process (exiting it can stop the container)
$ docker build     // build image from a Dockerfile
$ docker exec      // run a command in a running container
$ docker exec -it nginx-test /bin/bash   // connect to this container with a bash shell
$ docker images 
$ docker inspect   // low level info about a Docker object
$ docker logs
$ docker network
$ docker network inspect <network-name>
$ docker node       // manage swarm nodes
$ docker ps         // list running containers (-a, --all)
$ docker pull       // retrieve from registry
$ docker push       // push to a registry
$ docker restart    
$ docker rm         // remove container
$ docker rmi        // remove images
$ docker run        // run a command in a new container
$ docker run -d --name=static-site -p 80:80 la/static:latest
$ docker start / stop
$ docker swarm
$ docker volume     // manage volumes
$ docker volume ls
$ docker volume rm <volume>

Docker: Dockerfile

  1. pull down repo
$ git clone https://github.com/linuxacademy/content-express-demo-app.git
  2. create dockerfile
FROM node

RUN mkdir -p /var/node
ADD content-express-demo-app/ /var/node
WORKDIR /var/node
RUN npm install

CMD bin/www
  3. build image
$ docker build -t la/app-node -f Dockerfile .
$ docker images

Docker: Docker Volumes

  1. Create Dockerfile
FROM nginx
VOLUME /usr/share/nginx/html
VOLUME /var/log/nginx
WORKDIR /usr/share/nginx/html
  2. Build image
$ docker build -t la/static-site:latest -f Dockerfile .
$ docker images
  3. Create volumes for NGINX code and logs
$ sudo su
$ docker volume create nginx-code
$ docker volume create nginx-logs
$ docker volume ls
$ docker run -d --name=static-site -p 80:80 --mount source=nginx-code,target=/usr/share/nginx/html --mount source=nginx-logs,target=/var/log/nginx la/static-site:latest
$ docker ps
$ ls /var/lib/docker/volumes
$ ls /var/lib/docker/volumes/nginx-code/_data/
$ vi /var/lib/docker/volumes/nginx-code/_data/index.html
  4. Look at the webpage served from the server's IP address

Docker: Docker Networks

$ docker network create app-bridge
$ docker network ls
$ docker run -dt --name my-app --network app-bridge nginx:latest
$ docker ps
$ docker inspect <name>

Docker: Docker Compose

$ docker-compose build -h
$ docker-compose up         // runs in the foreground
$ docker-compose up -d      // runs in the background (detached mode)
$ docker-compose ps
$ docker-compose start / stop   // start or stop existing containers (start runs them in the background)
$ docker-compose rm         // remove stopped service containers (only)
$ docker-compose logs
$ docker-compose pause / unpause
$ docker-compose down       // stop and remove all containers, networks, etc.
$ docker-compose restart

Docker: Docker Compose file
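
A minimal docker-compose.yml sketch for the commands above to operate on (the nginx service, port mapping, and volume are placeholders):

$ cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html
EOF
$ docker-compose up -d
$ docker-compose ps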

Docker: Swarm

Docker: Machine

Kubernetes

$ kubectl get pod nginx-pod-demo
$ kubectl create -f pod.yml
$ kubectl delete pod nginx-pod-demo
$ kubectl create -f replicaset.yml
$ kubectl get replicasets
$ kubectl get pods
$ kubectl delete pod <pod-name>   // a new one will automatically be created in its place
$ kubectl get pods
$ kubectl scale --replicas=4 replicaset/replicaset-demo   // bump up to 4
$ kubectl delete replicaset <replicaset-name>             // to get rid of the replicaset
# deployments
$ kubectl get deployments
$ kubectl scale --replicas=4 deployment/nginx-deployment
$ kubectl delete deployment nginx-deployment
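
A minimal sketch of the pod.yml referenced above (the image and labels are assumptions; the name matches the nginx-pod-demo used in these commands):

$ cat > pod.yml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-demo
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
EOF
$ kubectl create -f pod.yml
$ kubectl get pod nginx-pod-demo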

Kubernetes: Configuring 2 servers

  1. Create 2 "Cloud Native Kubernetes" servers
  2. Setup master
$ kubeadm init --pod-network-cidr=10.244.0.0/16
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  3. Install flannel on the master
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
  4. Grab the kubeadm join command printed by the master and paste it into the 2nd server
  5. On the master, run kubectl get nodes and you should see both servers

Software Engineering

RESTful APIs

SOA

Microservices

Version Control Tools

Agile

Test Driven Development

CI/CD

Jenkins

Jenkins Installation on CentOS

#### ON SLAVE SERVER
$ yum install java -y
$ useradd -d /var/lib/jenkins jenkins
$ su jenkins -s /bin/bash
$ cd /var/lib/jenkins
$ mkdir .ssh
$ chmod 700 .ssh/
$ cd .ssh
$ touch authorized_keys
# drop back to root
$ exit
$ passwd jenkins
#### ON MASTER SERVER
$ su jenkins -s /bin/bash
$ ssh-keygen
$ ssh-copy-id jenkins@slaveserver
#### ON SLAVE SERVER
$ cat /var/lib/jenkins/.ssh/authorized_keys
# you will now see the MASTER SERVER key has been copied to the SLAVE SERVER
$ su jenkins -s /bin/bash
$ cd /var/lib/jenkins
$ chmod 700 .ssh/
$ chmod 600 .ssh/authorized_keys
#### ON MASTER SERVER
$ ssh jenkins@slaveserver
$ cat /var/lib/jenkins/.ssh/id_rsa.pub
$ docker images
$ ls /var/lib/jenkins/workspace

Jenkins: Test Driven Dev Project

$ npm install
$ npm test

Building a Docker Image using Packer and Jenkins

$ firewall-cmd --zone=public --permanent --add-port=8080/tcp
$ firewall-cmd --reload
$ firewall-cmd --zone=public --permanent --list-ports
$ cat /var/lib/jenkins/secrets/initialAdminPassword
$ docker images

GIT

Prod Concepts

Pen Testing

Metasploit
