DigitalOcean Fellowship Program

Check out my blog post on DigitalOcean's blog about their engineering fellowship program. The program is all about mentorship and building a high-performing team through the mentor/mentee relationship:

https://blog.digitalocean.com/mentoring-engineers-through-an-engineering-fellowship-program/


0 to Cloud, Using DigitalOcean and Terraform to Start Working in the Cloud

Introduction

I get asked a lot about how to get started with cloud computing. As most of you already know, this is a very broad question (you have probably been asked the same thing yourself).

I decided to write a short guide to make it easy for someone to get started working with a cloud instance. This shell script, using Terraform and DigitalOcean, will hopefully fulfill that goal.

Getting Started

Requirements:

  1. A DigitalOcean account and API Key (See this post under “How To Generate a Personal Access Token”)
  2. A 64-bit Ubuntu system (virtual or physical) – the next few steps are all run on this system
  3. Update your repos: sudo apt-get update
  4. Install git:  sudo apt-get -y install git
  5. Pull the repo we will be working with: git clone https://github.com/tspiegs/0toCloud
  6. Move into the new directory: cd 0toCloud

Execution

NOTE: NEVER, EVER run a script that someone gives you without knowing what it does. If you cannot figure out what the script is doing, ask someone. This is critically important with a script like this one, which will be asking for your sudo credentials. Please make sure you know what it is doing.

Here is the beauty of the 0toCloud script: simply type ./0tocloud.sh and hit enter. This will start the process of creating your cloud instance (called a droplet by DigitalOcean). Depending on how your system is configured, the script may ask for a few pieces of info. After that, as long as everything looks good, your new droplet will start to build.

Building and setting up the droplet can take up to 5 minutes, so just let it run. After it finishes, you should see a set of information including an IP address. Try to SSH to that IP address by typing: ssh root@theipaddress. You can also visit the IP address in a web browser to see an Nginx setup page.

You should now have a startup webpage and a working cloud instance that you can play with!

What The Hell Is This Doing!?

OK, let's go through the important parts of the code, block by block. Follow along by typing: less ./0tocloud.sh

while [ "$1" != "" ]; do
    case $1 in
        -d | --distro )         shift
                                distro=$1
                                ;;
        -h | --hostname )       shift
                                hostname=$1
                                ;;
        -p | --plan )           plan=1
                                ;;
        -r | --refresh )        refresh=1
                                ;;
        --help )                usage
                                exit
                                ;;
        -D | --destroy )        destroy=1
                                ;;
        * )                     usage
                                exit 1
    esac
    shift
done

These are our possible arguments. Simply running: ./0tocloud.sh will start a droplet for you, but if you want to edit specific attributes of this droplet, you can do so by passing arguments to the script. For example: ./0tocloud.sh -h nginxtestserver will start a droplet with the hostname “nginxtestserver”. We also see that you can use “./0tocloud.sh -D” to destroy the instances you have already created.
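For example, here are a few illustrative invocations based on the case block above (the distro slug is just an example value I'm assuming here; check the script's --help output for what it actually expects):

# create a droplet with the script's defaults
./0tocloud.sh

# pick a hostname and a distro (the slug is an illustrative value)
./0tocloud.sh -h nginxtestserver -d ubuntu-14-04-x64

# preview what Terraform would do without building anything
./0tocloud.sh -p

# tear down everything the script has created
./0tocloud.sh -D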

type terraform >/dev/null 2>&1 || { echo "Downloading and setting up Terraform"
wget https://dl.bintray.com/mitchellh/terraform/terraform_0.5.1_linux_amd64.zip
echo "unzipping terraform!"
sudo unzip terraform_0.5.1_linux_amd64.zip -d /usr/bin/
}

This bit checks whether Terraform is installed on your computer (you can find more information at terraform.io). If Terraform is not installed, it grabs the .zip file from their website and unzips it into /usr/bin/ so terraform can be used like any other Linux command. Note: this is one of a few parts that will require sudo access.
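If you want to double-check the result yourself afterwards, these standard commands will confirm the binary landed on your PATH:

# confirm terraform is on the PATH and report its version
command -v terraform
terraform version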

if [ -a ~/.ssh/id_rsa.pub ]; then
  echo "public ssh key file exists, continuing"
else
  echo "~/.ssh/id_rsa.pub does not exists.  Would you like to create it? yes or no"
  read creatersa
  if [[ $creatersa == "yes" || $creatersa == "y" ]]; then
    echo "creating rsa key pair"
    ssh-keygen -t rsa
  else
    echo "rsa key pair needs to be created.  Place rsa key in ~/.ssh/id_rsa.pub or rerun this program and type yes to make a new key pair"
    exit
  fi
fi

This block checks whether you have an RSA key pair, which is needed for SSH access. If you do not have a key pair in the ~/.ssh/ directory, it will start the process of making one for you. NEVER share your id_rsa; that is your private key. The only key that should ever leave your local machine is id_rsa.pub, the public side of the pair.
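If you are not sure what is sitting in ~/.ssh/, these two commands show the public half of the pair and its fingerprint (the same fingerprint the script compares against your DO account a little further down):

# print the public key, the only part that is safe to hand out
cat ~/.ssh/id_rsa.pub

# show the key's fingerprint; the script grabs this with the same command later
ssh-keygen -lf ~/.ssh/id_rsa.pub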

if [ -z "$DO_PAT" ]; then
  echo "no PAT variable exists" #manually enter the key
  echo "enter your DigitalOcean API Key to start setting up your new droplet"
  read DOKey
else
  echo "pat exist"
  DOKey=$(echo $DO_PAT)
fi

This will check if you have a variable called $DO_PAT which has your API Key. If this variable is not set, you will need to manually enter your API Key every time you run 0tocloud.sh. To set the variable, use something like this (but obviously with your specific API Key):

echo "declare -x DO_PAT='yourapikeyhere'" >> ~/.bashrc; . ~/.bashrc

That is an optional step. You can easily just paste your API Key in each time you run the script.

sshkeyFP=$(ssh-keygen -lf ~/.ssh/id_rsa.pub | awk '{print $2}')
curlkeystatus=$(curl -X GET -H 'Content-Type: application/json' -H "Authorization: Bearer $DOKey"  "https://api.digitalocean.com/v2/account/keys" | grep -ci $sshkeyFP)
if [ $curlkeystatus -eq 0 ]
then
  echo "Putting your SSH key in DO"
  sshkeypub=$(cat ~/.ssh/id_rsa.pub)
  apijson=$(echo "{\"name\":\"My SSH Public Key\",\"public_key\":\"$sshkeypub\"}")
  curl -X POST -H 'Content-Type: application/json' -H "Authorization: Bearer $DOKey" -d "$apijson" "https://api.digitalocean.com/v2/account/keys"
elif [ $curlkeystatus -eq 1 ]; then
  echo "SSH Key properly in DO account, Continuing…."
else
  echo "can't detect ssh key status, something is VERY wrong here……"
fi

Now that we are sure that we have an RSA key pair and the DO API Key, we can see if the public key is in our DO account. We use the DigitalOcean API to verify. If the public RSA key is not in our DigitalOcean account, this block will automatically place it there for easy SSH access to our droplets.

if [ $destroy -eq 1 ] >/dev/null 2>&1; then
  terraform destroy -var "do_token=${DOKey}" -var "ssh_fingerprint=${sshkeyFP}" -var "do_distro=${distro}" -var "do_hostname=${hostname}"
  exit 1
fi

if [ $plan -eq 1 ] >/dev/null 2>&1; then
  terraform plan -var "do_token=${DOKey}" -var "ssh_fingerprint=${sshkeyFP}" -var "do_distro=${distro}" -var "do_hostname=${hostname}"
  exit 1
fi

if [ $refresh -eq 1 ] >/dev/null 2>&1; then
  terraform refresh -var "do_token=${DOKey}" -var "ssh_fingerprint=${sshkeyFP}" -var "do_distro=${distro}" -var "do_hostname=${hostname}"
  exit 1
fi
#now starting terraform magic and creating the instance

terraform apply -var "do_token=${DOKey}" -var "ssh_fingerprint=${sshkeyFP}" -var "do_distro=${distro}" -var "do_hostname=${hostname}"

terraform show

Finally, these are the Terraform commands that will actually run to create, show, or destroy our droplets.

PLAY PLAY PLAY!

Now it's time to experiment with the files. Play with the different arguments for the shell script 0tocloud.sh. Type ./0tocloud.sh --help to see the available usage. Take a look at the other files in the repo, such as form.tf (this is the Terraform file telling the droplet what to do). Don't forget to destroy your cloud instances at the end if you do not want your cloud bill piling up! The great part is that droplets are very easy to create and destroy.
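To give you an idea of what a file like form.tf contains, here is a minimal sketch of a Terraform 0.5-era config for a DigitalOcean droplet. This is illustrative only, not the repo's actual form.tf; the region, size, and default image slug are assumptions:

# variables passed in by 0tocloud.sh via -var
variable "do_token" {}
variable "ssh_fingerprint" {}
variable "do_distro" {
    default = "ubuntu-14-04-x64"    # illustrative default image slug
}
variable "do_hostname" {
    default = "0tocloud"
}

provider "digitalocean" {
    token = "${var.do_token}"
}

resource "digitalocean_droplet" "web" {
    image    = "${var.do_distro}"
    name     = "${var.do_hostname}"
    region   = "nyc3"               # illustrative region
    size     = "512mb"              # illustrative size slug
    ssh_keys = ["${var.ssh_fingerprint}"]
}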

Thanks a lot for reading. I will be adding much more in the future when I have time. Next up, I will be playing with a bastion host and security of our droplets.

Monitoring Enterprise Wifi Using a Raspberry Pi 2

by Thomas Spiegelman, Operations Engineer for Amplify

Wireless sucks. There, I said it. As a network engineer who has dealt primarily with switching and routing, wireless is just a pain. I wanted a simpler, cheaper way to diagnose inconsistent wireless problems. Oh hey, there is a Raspberry Pi 2 now?!
 
With the Raspberry Pi 2, I created a simple, inexpensive setup to collect data on a wireless network. I tacked on a TP-Link WDN3200 wireless adaptor (this adaptor supports 5GHz) and a 32GB MicroSD card. I left 8 of these Pis in different locations within the wireless network. The whole setup cost under $650. You'd have a hard time finding a single enterprise WAP for that price anymore!
 
The plan

What do I want to test on the network?  Let’s set some goals for testing:

  1. Test network for packet loss
  2. Record RSSI and Noise level
  3. Check Authentication.  WPA2 / WPA2 Enterprise handshake
  4. Track Association issues
  5. Record Wireless Drops / Reauths
  6. Log all data to centralized server

My setup

A Raspberry Pi 2 running Raspbian, a TP-Link WDN3200, a 32GB SD card, and a standalone rsyslog server. I am already connected to my wireless network, with all of the network info defined in /etc/wpa_supplicant/wpa_supplicant.conf. I've also set the hostname on each Pi to its serial number so that I didn't have 8 Raspberry Pis all using the same default hostname of "raspberrypi":


# grab this Pi's serial number from /proc/cpuinfo ("erial" matches Serial)
sethostname=$(cat /proc/cpuinfo | grep erial | awk '{print $3}')

sudo sed -i "s/raspberrypi/$sethostname/g" /etc/hosts
sudo sed -i "s/raspberrypi/$sethostname/g" /etc/hostname
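For reference, a network block in /etc/wpa_supplicant/wpa_supplicant.conf looks roughly like the sketch below. The SSIDs and credentials are placeholders, and the second block is the WPA2 Enterprise (PEAP + MSCHAPv2) variant mentioned in the goals:

# plain WPA2-PSK network (values are placeholders)
network={
    ssid="HomeOrLabSSID"
    psk="wifipassphrase"
    key_mgmt=WPA-PSK
}

# WPA2 Enterprise / 802.1x (PEAP + MSCHAPv2) variant
network={
    ssid="CorpSSID"
    key_mgmt=WPA-EAP
    eap=PEAP
    identity="username"
    password="password"
    phase2="auth=MSCHAPV2"
}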

Now the tests!

The first test is the ping test for goal number 1. I didn't want a constant stream of ICMP messages, so I decided to do a 3-ping burst to the gateway every 15 seconds and log the data. Since I am logging these pings anyway, I can kill two birds with one stone and add the RSSI and noise data from iwconfig to the same log messages for goal number 2. Here is my code (named pingthings):

#!/bin/bash
#script for pinging the dg every 15 seconds and logging that data and the signal quality data

#grabbing the wifi signal from iwconfig and formatting it to be logged
wisignal=$(iwconfig wlan0 | grep -i quality | sed -e 's/^[ \t]*//;s/[ \t]*$//;s/\/70/\/70,/')

#Grabbing the default gateway IP address to ping
pdefaultgw=$(ip route show | grep default | awk '{print $3}')

#sending to the log file that the script is starting and logging the initial signal strength
echo "pingthings, Interface Started, $wisignal" 2>&1 | logger &
while true
do
  #checking wlan0 to see if it has a valid ip address assigned, if it does then start pinging
  ifconfig wlan0 | awk '/inet addr/{print substr($2,6)}' | grep -q -Eo '([0-9]*\.){3}[0-9]*'
  if [ $? -ne 0 ] 
  then
    echo "pingthings, wlan0 not fully running yet" 2>&1 | logger &
    sleep 10
  else
    srcaddy=$(ifconfig wlan0 | awk '/inet addr/{print substr($2,6)}')  # wlan0 IP without the "addr:" prefix
    pingrta=()
    pcount=0
    #do 3 pings and log the round trip time to a table
    for i in {1..3}
    do 
      pingrt=$(ping -c 1 $pdefaultgw | grep -o "time\=.*")
      if [ $? -ne 0 ] 
      then
        pcount=$(( $pcount + 1 ))
        pingrta+=('NULL')
      else 
        pingrt=$(echo $pingrt | sed 's/time=//' | sed 's/ms//')
        pingrta+=($pingrt)
      fi
    done


    #now we are going to check how many pings were successful then send data to the log according to the number of successful pings
    wisignal=$(iwconfig wlan0 | grep -i quality | sed -e 's/^[ \t]*//;s/[ \t]*$//;s/\/70/\/70,/')
    if [ $pcount -eq 0 ] 
    then
      echo "$(date +%s), $srcaddy, $wisignal, pingthings pass, RTT: ${pingrta[*]}" 2>&1 | logger &
    elif [[ $pcount -gt 0 && $pcount -lt 3 ]]
    then
      echo "$(date +%s), $srcaddy, $wisignal, pingthings $pcount out of 3 FAIL, RTT: ${pingrta[*]}" 2>&1 | logger &
    elif [ $pcount -eq 3 ] 
    then
      echo "$(date +%s), $wisignal, pingthings 3 out of 3 FAIL" 2>&1 | logger &
      pdefaultgw=$(ip route show | grep default | awk '{print $3}')
    else 
      echo "$(date +%s), $srcaddy, pingthings error" 2>&1 | logger &
    fi
    sleep 20
  fi
done

I made this file executable and placed it in /usr/local/bin/pingthings.
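Assuming the script is saved as pingthings in your current directory, that step is just:

chmod +x pingthings
sudo cp pingthings /usr/local/bin/pingthings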

Next, I wanted to take care of goals 3 and 4 by bouncing the interface every 15 minutes. I figured that if there are reauth problems or association errors (as long as the WAP doesn't cache the client), wpa_supplicant would see the problems and log the data. I also wanted to stop the pings from the script above while the interface was restarting. To accomplish this, I put all of the following into an init.d script (named wlanmonitor):

#! /bin/sh
#

### BEGIN INIT INFO
# Provides:          pingthings
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start daemon at boot time
# Description:       Enable service provided by daemon.
### END INIT INFO
# Some things that run always
touch /var/lock/wifizzles

# Carry out specific functions when asked to by the system
case "$1" in
  start)
    echo "Starting script pingthings "
    pingthings &
    ;;
  stop)
    killall pingthings
    echo "pingthings, Interface Being Stopped" 2>&1 | logger &
    ifdown wlan0
    sleep 15
    echo "bringing it back up"
    ifup wlan0
    ;;
  reload) 
    /etc/init.d/wlanmonitor stop
    /etc/init.d/wlanmonitor start
    ;;
  *)
    echo "Usage: /etc/init.d/pingthings {start|stop}"
    exit 1
    ;;
esac

exit 0

I then set that up as a cron job to run every 15 minutes:


SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

*/15 * * * * root bash -c "/etc/init.d/wlanmonitor reload"

Goal 5 is covered by logging all the info. By default, almost everything wifi related on the Raspberry Pi is managed by wpa_supplicant. Logs from wpa_supplicant will not only show me the activity from the interface bouncing (the wlanmonitor script), but will also show any other wifi activity that occurs. I also want to send all of this to a centralized rsyslog server that I am going to set up (below). The logging is easily done by editing /etc/rsyslog.conf and adding the following lines directly under ### RULES ###:


:msg, contains, "wlan0"           @@10.60.1.5:514
if $msg contains 'wlan0' then /var/log/wpa_sup.log
& ~

:programname, contains, "wpa_supplicant"           @@10.60.1.5:514
if $programname contains 'wpa_supplicant' then /var/log/wpa_sup.log
& ~

:programname, contains, "wpa_action"           @@10.60.1.5:514
if $programname contains 'wpa_action' then /var/log/wpa_sup.log
& ~

:msg, contains, "pingthings"           @@10.60.1.5:514
if $msg contains 'pingthings' then /var/log/latency.log
& ~


@@ means the messages are forwarded over TCP instead of UDP, which is essential since we are constantly bouncing the interface. 10.60.1.5 is the IP I am reserving for my rsyslog server.
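If you also want the Pis to buffer messages while wlan0 is down instead of dropping them, rsyslog's disk-assisted queueing can do that. Here is a sketch in the same legacy syntax, placed above a forwarding rule (the queue name and sizes are illustrative, and these directives apply to the next action defined, so you may need to repeat them per forwarding rule):

$ActionQueueType LinkedList        # in-memory queue for the forwarding action
$ActionQueueFileName wlanfwd       # spill to disk under this name when the target is unreachable
$ActionQueueMaxDiskSpace 100m      # illustrative cap on the on-disk spool
$ActionQueueSaveOnShutdown on      # keep queued messages across reboots
$ActionResumeRetryCount -1         # retry forever instead of discarding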

Finally, I need a centralized rsyslog server to receive all the data from the Raspberry Pis. I would recommend doing some Googling on how to set up your server and configure client settings for rsyslog. You might need to do some tweaking specific to your network (for example, queue settings like the ones sketched above) to ensure that your clients cache the logs correctly while the interface is bouncing. You will also need to make sure your server is set up to receive TCP. Here is the template I've used on the rsyslog server to receive the files:

$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$template PiLogging, "/var/log/%HOSTNAME%.log"
if $fromhost-ip startswith '10.' then ?PiLogging
& stop

This simply states: any log message coming from a 10.x.x.x IP address gets written to /var/log/<hostname of the client>.log.
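One more thing on the server side: to accept the @@ TCP streams at all, rsyslog needs its TCP input module loaded. In the same legacy syntax that is roughly:

$ModLoad imtcp           # load the TCP syslog input module
$InputTCPServerRun 514   # listen for TCP syslog on port 514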

This gives me Raspberry Pi 2s that I can simply plug into a USB port (they work great taped to an LCD TV and powered from the TV's USB port), which auto-connect to a wifi network and send very useful data to a centralized log server. It is amazing how easily you can diagnose problems with this information. Packet loss, auth problems, coverage problems: it's all logged!

Some future steps for me in this project:

  1. The old Raspberry Pi models were not capable of running Chef; it was just too taxing on the Pi. Now, with the quad-core Raspberry Pi 2, I am hoping that I can run Chef. That means I should be able to add the wifi information of a network I want to test to a Chef server (or to git, then pull and run Chef Solo) and have the Raspberry Pi pull the SSID / password / 802.1x creds through a Chef run, placing that information automatically in its wpa_supplicant.conf file.
  2. Set up OpenVPN to have the Raspberry Pi connect back over a VPN tunnel to the rsyslog server. This means I can have Pis on multiple LANs and still have them "phone home" and log all information to the same rsyslog server.

I hope this is helpful and happy network monitoring!!!   Thanks to my colleagues Ryan Walsh and Robert Lahnemann for the help on this project.