Getting VLC to work with Celestron Digital Microscope on Manjaro

I’ve run into problems getting VLC to work with various devices on Manjaro, and I believe I’ve finally figured out that some of the plugins’ dependencies weren’t getting installed. Here is how I figured this out.

Running vlc v4l2:///dev/video0 gives the error “VLC is unable to open the MRL…”.

Make sure your user is a member of the “video” group. If so, then run VLC in debug mode: vlc v4l2:///dev/video0 --extraintf=http:logger --verbose=2 --file-logging --logfile=vlc-log.txt
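If you are unsure about group membership, a quick check and fix might look like this (a sketch; the group name can vary by distro):

# Check whether the current user is in the "video" group and add them
# if not (the change takes effect at the next login)
if ! id -nG "$USER" | grep -qw video; then
  sudo usermod -aG video "$USER"
fi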

In my vlc-log.txt log file there was a line indicating that the libv4l2 plugin was not loading due to a dependency issue (you will probably find lots of other dependencies that could also be fixed). The specific line was:

main warning: cannot load module `/usr/lib/vlc/plugins/access/libv4l2_plugin.so' (libzvbi.so.0: cannot open shared object file: No such file or directory)

I then searched for that specific file (if pacman’s file database hasn’t been synced yet, run pacman -Fy first):

pacman -F libzvbi.so.0
extra/zvbi 0.2.35-3
usr/lib/libzvbi.so.0

Then I simply installed it with sudo pacman -S zvbi.

Now opening vlc v4l2:///dev/video0 works and I can use my Celestron USB Microscope.

Automate Bacula Restore for Testing Backups

As per my standard operating procedure, I would rather spend a week scripting something than spend one extra minute doing anything twice. Testing restores from backup jobs is something everyone hates to do, but everyone should recognize how critical it is; otherwise, performing backups is nearly pointless. Bacula Enterprise is a great backup/restore product (and their support is AMAZING!). Since it is Linux-based, many common tasks can be scripted with standard Linux tools.

bconsole (the administrative interface for Bacula) commands can be automated with simple expect scripts. Below is the expect script I created to automate the restore of a VM to a dev ESXi host. Another shell script, executed from cron, randomly picks a VM from the list of VMs that have been backed up and then runs the following script with that VM as an argument (a sketch of that wrapper follows the expect script).

#!/usr/bin/expect -f
# Drive bconsole through a scripted restore; the fileset to restore is
# passed as the first argument by the cron wrapper.

set ServerToRestore [lindex $argv 0]
set timeout -1
spawn /usr/bin/bconsole

expect "\\*"
send  "restore client=10-srv-fd01 fileset=$ServerToRestore where= current select all done\r"
expect "OK to run? (yes/mod/no):"
send  "mod\r"
expect "Select parameter to modify (1-13): "
send  "13\r"
expect "Use above plugin configuration? (yes/mod/no):"
send  "mod\r"
expect "Select parameter to modify (1-7): " 
send "1\r"
expect "Please enter a value for datastore: "
send "datastore2\r"
expect "Use above plugin configuration? (yes/mod/no):"
send  "mod\r"
expect "Select parameter to modify (1-7): "
send "2\r"
expect "Please enter a value for restore_host: "
send "10-dev-vm03.domain.com\r"
expect "Use above plugin configuration? (yes/mod/no):"
send  "mod\r"
expect "Select parameter to modify (1-7): "
send "4\r"
expect "Please enter a value for vsphere_server: "
send "10-dev-vm03\r"
expect "Use above plugin configuration? (yes/mod/no): "
send "yes\r"
expect "OK to run? (yes/mod/no):"
send  "mod\r"
expect "Select parameter to modify (1-13): "
send  "7\r"
expect "Enter new Priority: "
send  "15\r"
expect "OK to run? (yes/mod/no): "
send "yes\r"
expect "\\*"
send "quit\r"
exit
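The cron wrapper itself is trivial; here is a minimal sketch (the list file and script paths are hypothetical):

#!/bin/bash
# Hypothetical cron wrapper: pick a random fileset from the list of
# backed-up VMs and hand it to the expect script above
FILESETLIST=/opt/backed-up-vms.txt      # one fileset name per line (assumed)
RESTORESCRIPT=/opt/restore-vm.exp       # the expect script above (assumed path)

"$RESTORESCRIPT" "$(shuf -n 1 "$FILESETLIST")"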

Each morning someone on my team validates the restore and documents the successes and failures and, if necessary, creates a ticket to determine the cause of any such failure.

Automate WPA2 passphrase change on Cisco AP

I recently needed to provide wireless Internet access to Spektrum transmitters so updates could be applied while the transmitters are in for repair. These devices were not working reliably with the existing UniFi Guest Wireless/Portal and the transmitters do not seem able to connect using WPA2-Enterprise authentication. Due to security concerns, I did not want to provide a permanent WPA2 passphrase for ongoing use so I decided to use bash, clogin and cron to create a function that would change the passphrase daily.

I grabbed the first bit of code from Mike Willis’ blog, which randomly picks a predefined number of words from a specified wordlist file. I found an acceptable wordlist without special characters on EFF’s site (although using all character classes would have made a stronger password, I didn’t want users to have to enter special characters on the transmitter touch screen). I made a few changes to Mike’s script so that it only selects short words (no more than five characters) and appends four random digits to the chosen word. The script then writes the requisite Cisco commands for changing the passphrase to an external file. clogin finishes the process by running those IOS commands against the Cisco AP after automatically logging in via SSH (the credentials are stored in the .cloginrc file). If all goes well an email is sent notifying the users of the new password (if a failure occurs the script emails me instead). Adding the script to crontab makes the process happen every night without any IT intervention.

#!/bin/bash

WORDFILE="/opt/ResetServiceCenterAPPassword.wordfile"
NUMWORDS=1
EMAILLIST="user1_to_notify@email.com user2_to_notify@email.com"

tL=$(awk 'NF!=0 {++c} END {print c}' "$WORDFILE")
poorchoice=1

# Keep picking random lines until we get a word of five characters or fewer
while [[ $poorchoice -eq 1 ]]; do
  rnum=$((RANDOM % tL + 1))
  pword=$(sed -n "${rnum}p" "$WORDFILE")
  if [[ ${#pword} -gt 5 ]]; then
    poorchoice=1
  else
    poorchoice=0
  fi
done

# Append four random digits to the chosen word
pword=$pword$(shuf -i 2000-9999 -n 1)

echo "conf t
dot11 ssid Spektrum
wpa-psk ascii 0 $pword
exit
exit" > /opt/ResetServiceCenterAPPassword.cmds

/opt/clogin -x /opt/ResetServiceCenterAPPassword.cmds <Access Point IP>
status=$?

if [ $status -eq 0 ]; then
  echo "Spektrum transmitter WiFi password changed to $pword" | mail --append="FROM:IT <tickets@email.com>" --return-address=tickets@email.com -s "Spektrum transmitter WiFi password changed to $pword EOM" $EMAILLIST
else
  echo "Error with /opt/ResetServiceCenterAPPassword.sh script" | mail --append="FROM:IT <tickets@email.com>" --return-address=tickets@email.com -s "Error with /opt/ResetServiceCenterAPPassword.cmds" me@email.com
fi
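For completeness, the crontab entry is a one-liner (the time is arbitrary):

# Rotate the Spektrum passphrase every night at 2:00 AM
0 2 * * * /opt/ResetServiceCenterAPPassword.sh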

Create time-lapse video

I donate time to the Georgetown Fair each year, and for the last couple of years we have used a GoPro to take a number of stills of the arena area. I have around 20,000 still images and decided it was time to make a movie.

I kept running into problems with too many files to process in a single command with convert or ffmpeg. Even working in batches I had to limit directories to about 1,000 images each, but I was able to hack together a shell script that iterates through all 19 directories and creates an avi for each set of images. A hypothetical reconstruction (directory names, frame rate, and encoder settings are all assumptions):
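#!/bin/bash
# Hypothetical reconstruction: build one avi per directory of stills;
# globbing inside ffmpeg avoids passing thousands of file arguments
for dir in dir{01..19}; do
  ffmpeg -framerate 30 -pattern_type glob -i "$dir/*.JPG" \
         -c:v libx264 -pix_fmt yuv420p "$dir.avi"
done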

I then used OpenShot Video Editor to slice and combine different parts of the videos, and below is the result.

Monitoring Running Time of EC2 Instances

My typical use of EC2 is to start an instance for a specific task and then turn it back off. However, as a result of my multitasking throughout the day, I found I was frequently forgetting to turn instances off, thus incurring AWS charges unnecessarily. With my recent desire to learn more Python, I decided to write something simple to notify me if my AWS instances were running for extended periods of time.

I have an Asterisk server connected to some Cisco SIP phones at my house and one of the functions that my original script performed was to place a call to some phones and notify me if an instance had been running too long. I have stripped that part of the code since I assume most of you don’t have anything like that setup.

The script uses the Mailgun API to send an email to me if an instance has been running more than 25 minutes but less than 90 minutes. If the instance has been running for more than 90 minutes I will get an email as well as an SMS message. I simply added this script to crontab to run every 5 minutes.
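The logic is simple enough to sketch in shell, even though the real script is Python; the following uses the aws CLI and curl against the Mailgun API, with placeholder key, domain, and addresses:

#!/bin/bash
# Sketch of the check: warn by email after 25 minutes of runtime and
# escalate (email plus SMS) after 90 minutes
MG_KEY="key-XXXX"            # Mailgun API key (placeholder)
MG_DOMAIN="mg.example.com"   # Mailgun domain (placeholder)
NOW=$(date +%s)

aws ec2 describe-instances \
    --filters Name=instance-state-name,Values=running \
    --query 'Reservations[].Instances[].[InstanceId,LaunchTime]' \
    --output text |
while read -r id launched; do
  mins=$(( (NOW - $(date -d "$launched" +%s)) / 60 ))
  [ "$mins" -lt 25 ] && continue
  curl -s --user "api:$MG_KEY" "https://api.mailgun.net/v3/$MG_DOMAIN/messages" \
       -F from="EC2 Monitor <me@example.com>" -F to=me@example.com \
       -F subject="$id has been running for $mins minutes" -F text="$id"
  if [ "$mins" -gt 90 ]; then
    # SMS via a carrier email-to-SMS gateway (placeholder address)
    curl -s --user "api:$MG_KEY" "https://api.mailgun.net/v3/$MG_DOMAIN/messages" \
         -F from=me@example.com -F to=5551234567@txt.example.com \
         -F subject="$id still running" -F text="$id up ${mins}m"
  fi
done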

The script keeps its settings, including the Mailgun API key and domain and the notification addresses, in a separate config.py file.

ELK for Mikrotik Netflow

Occasionally I get a call from a wireless customer indicating that their wireless Internet speeds have been slow for a few days. I like to be able to look back at the bandwidth usage of that particular customer to see if perhaps their experience was below average because they were saturating their own link due to a virus, kids downloading hundred-gigabyte games, BitTorrent, and a number of other good/bad uses of bandwidth.

NetFlow is an old and very reliable protocol that works perfectly for tracking historical bandwidth usage, and Mikrotik supports NetFlow with minimal config changes. A router sending NetFlow data is useless unless you also have an aggregator for the data. Historically, two of my favorite aggregators were ntop and ManageEngine’s NetFlow Analyzer. However, both programs have gone through major UI enhancements which I very much dislike (I am sure there are also enhancements to the programs themselves, but I don’t really need/want anything special). All I really care about is visualizing Top Talkers and how much data a particular IP consumed over a given time period. Thus began my search for a new method to visualize the NetFlow data from my Mikrotik router.

ELK kept coming up in my searches and I had never heard of it. It is a stack consisting of Elasticsearch, Logstash and Kibana, which still meant absolutely nothing to me because I had never heard of any of those individual projects either, and I wasn’t sure whether I wanted to try it. But as soon as I saw a typical image for logstash, I knew it was going to be worth my time!

I decided to try to get ELK working in a Docker container. Here are the steps I used to successfully get the stack running and ingesting NetFlow data.

I built my own Docker image because I needed the logstash-codec-netflow plugin installed and sebp/elk didn’t have it. I obtained start.sh from sebp/elk on GitHub.

Contents of the Dockerfile
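A minimal version consistent with the description above (the plugin path depends on the sebp/elk version, so treat this as a sketch):

# Extend sebp/elk with the NetFlow codec and my own config
FROM sebp/elk

# Install the logstash-codec-netflow plugin (path is version-dependent)
RUN /opt/logstash/bin/plugin install logstash-codec-netflow

# Drop in the NetFlow pipeline config and the startup script
ADD logstash-netflow.conf /etc/logstash/conf.d/logstash-netflow.conf
ADD start.sh /usr/local/bin/start.sh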

Build the Docker image
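The build is the standard one-liner (the tag name is my choice):

docker build -t elk-netflow .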

Contents of logstash-netflow.conf
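A minimal pipeline that listens for NetFlow and ships it to Elasticsearch (the UDP port is an assumption; it just has to match what the Mikrotik is configured to send to):

input {
  udp {
    port  => 9995
    codec => netflow
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}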

Launch the Docker container
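Publish Kibana, Elasticsearch, and the NetFlow port when starting the container (matching the port chosen above):

docker run -d --name elk-netflow \
  -p 5601:5601 -p 9200:9200 -p 9995:9995/udp elk-netflow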

Configure Mikrotik to send Netflow data to logstash
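On the RouterOS side, enable traffic-flow and point it at the Docker host (parameter names can vary slightly between RouterOS versions):

/ip traffic-flow set enabled=yes interfaces=all
/ip traffic-flow target add dst-address=DOCKER_IP port=9995 version=5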

If everything worked you should be able to log into Kibana to search and create graphs of the NetFlow data by going to http://DOCKER_IP:5601.

Initially Kibana was overwhelming, and it seemed impossible to ever get any useful data out of it. Slowly, however, I am starting to understand how it works and have created a few useful graphs. Hopefully I can get to the point where I have a nice dashboard of graphs and can write another post specifically about Kibana.

About Rising Wireless

Rising Wireless is a Wireless Internet Service Provider that I started with my brother in 2012; it provides high-speed Internet service to residents of central Vermilion County, Illinois. The following post is a recent addition to its website.

About Us

The individuals behind Rising Wireless, Inc. have a long history of service to the community and a combined 40 years of experience providing Internet service. Carl Davis, Brad Davis and Chris Cook have full-time jobs in addition to supporting Rising Wireless, Inc. Not only do they provide the engineering, maintenance and support but they are also customers of the wireless Internet service at their residences.

Carl Davis: Over 30 years of computer/network experience and a 20-year history of providing residents in and around the Sidell area with Internet service. Currently employed as the Manager of Network Operations at Horizon Hobby, LLC in Champaign, IL.

Brad Davis: Over 10 years of experience in the Wireless ISP industry providing service to commercial and residential customers in East Central Illinois. Brad is currently the third generation working in the family farming business.

Chris Cook: Over 20 years in the Commercial, Industrial, and Residential Electrical Construction and Maintenance industry. Currently employed as a Utilities and Maintenance Electrician at the University of Illinois.

The History of Rising Wireless

Carl’s experience with the Internet Service Provider industry began in 1997 while providing end-user support and network administration services for Sidell Online, a dial-up ISP. After earning a Bachelor’s Degree in Economics from the U of I in 1999, Carl began working for AdvanceNet shortly after AdvanceNet’s acquisition of Sidell Online. While working at AdvanceNet, Carl provided support and management for AdvanceNet’s acquisitions of Net66, PDNT and a number of other smaller ISPs, and continued to provide support to eGIX, which purchased AdvanceNet in 2001. During his time with AdvanceNet, Carl began working with wireless technologies to provide Internet services to otherwise underserved customers. In 2000 Carl left AdvanceNet and returned to the U of I to provide computer/network support for the Department of Animal Sciences and earn a Master’s Degree in Economics/Finance.

It was during this time that Carl, with the assistance of Josh Jones and Brian Curtis, started TS Wireless and began offering Wireless Internet Service in the Sidell area with a single 768K T1 line. TS Wireless continued to grow, and Carl’s brother Brad became a partner responsible for the growth of the network west of Champaign, IL. TS Wireless grew into a large regional Wireless ISP providing service from 45 POPs in an area spanning from Newman, IL to Farmer City, IL. In 2010, TS Wireless was sold to a holding company out of Nevada. Carl went on to work as a Senior Network Administrator for Freestar Bank and in 2012 started working for Horizon Hobby as the Manager of Network Operations.

By 2012 Carl and Brad had returned to the community where they had grown up and found it again in need of reliable Internet service. They decided to start another Wireless ISP and called it Rising Wireless, Inc. Both Carl and Brad remain fully employed outside of Rising Wireless and dedicate a large portion of their free time to its operation. Chris Cook provides invaluable assistance, also in his free time, with troubleshooting, customer installs and network infrastructure upgrades.

Connect to Your AWS VPC via Custom VPN Instance

I recently set up some AWS EC2 services, within a private/public VPC, in order to host a Windows application (QuickBooks) via remote desktop. Rather than expose the Windows instance to the Internet via an Elastic or Public IP, I chose to leave the EC2 instance inside the private VPC and utilize Amazon’s Virtual Private Gateway to establish an IPsec tunnel to my Cisco ASA 5505. This was relatively easy to set up and seemed to work very well. Unfortunately, my understanding of how I would be billed for the VPN gateway service was wrong. I believed that the billing rate of $.05 per VPN Connection-hour applied to actual tunnel uptime, which made sense given the general “you pay for what you use” mentality of other AWS services.

Unfortunately, this was not the case at all. The $.05 per hour charge is incurred as long as the tunnel can be established, whether or not it actually is. So even though I was using the Windows instance, with the tunnel actually up, for an average of less than 10 minutes a day, I was paying for VPN availability 24 hours a day, which equates to about $36/month for VPN services alone. Fortunately, there is a cheaper way to establish a VPN to your VPC.

AWS gives us the ability to set up an instance within our VPC to provide virtually any service we want, including VPN services. Below are the technical details of how I accomplished an IPsec tunnel from my ASA 5505 to a VyOS AMI instance.

First, deploy an instance from a VyOS AMI. I chose to deploy from ami-9c9a12f4 within us-east-1 (I also updated VyOS to 1.1.7 shortly after deployment), but you could also choose to deploy the official Marketplace version, which at the time of this writing is also free. For the purposes of establishing a tunnel, the details of the instance don’t matter much. Here are a few things you need to make note of:

  • Deploy VyOS into your Public VPC
  • Associate an Elastic IP with the VyOS instance
  • When logging in via ssh don’t forget the username is “vyos” and not “root”
  • Set a route in your private subnet’s route table for the subnet behind your ASA, pointing to the VyOS instance
  • My VyOS config also allows the instance to act as a NAT gateway, so you can stop paying for the AWS NAT gateway and route 0.0.0.0/0 to the VyOS instance as well (remember to disable the EC2 source/destination check on the instance for this to work)

Here is my VyOS instance config, trimmed to the relevant IPsec and NAT pieces (the peer address, prefixes, and pre-shared secret below are placeholders, not my real values):
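set vpn ipsec ipsec-interfaces interface 'eth0'
set vpn ipsec nat-traversal 'enable'
set vpn ipsec ike-group IKE-ASA proposal 1 encryption 'aes256'
set vpn ipsec ike-group IKE-ASA proposal 1 hash 'sha1'
set vpn ipsec esp-group ESP-ASA proposal 1 encryption 'aes256'
set vpn ipsec esp-group ESP-ASA proposal 1 hash 'sha1'
set vpn ipsec site-to-site peer ASA_PUBLIC_IP authentication mode 'pre-shared-secret'
set vpn ipsec site-to-site peer ASA_PUBLIC_IP authentication pre-shared-secret 'PRESHAREDKEY'
set vpn ipsec site-to-site peer ASA_PUBLIC_IP ike-group 'IKE-ASA'
set vpn ipsec site-to-site peer ASA_PUBLIC_IP local-address 'VYOS_PRIVATE_IP'
set vpn ipsec site-to-site peer ASA_PUBLIC_IP tunnel 1 esp-group 'ESP-ASA'
set vpn ipsec site-to-site peer ASA_PUBLIC_IP tunnel 1 local prefix 'VPC_CIDR'
set vpn ipsec site-to-site peer ASA_PUBLIC_IP tunnel 1 remote prefix 'HOME_CIDR'
set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 source address 'VPC_CIDR'
set nat source rule 100 translation address 'masquerade'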

Here are the relevant parts of my ASA config, again with placeholder names, addresses, and keys:
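crypto ikev1 policy 10
 authentication pre-share
 encryption aes-256
 hash sha
 group 2
 lifetime 86400
crypto ikev1 enable outside
crypto ipsec ikev1 transform-set AWS-TS esp-aes-256 esp-sha-hmac
access-list VPC-VPN extended permit ip HOME_CIDR HOME_MASK VPC_CIDR VPC_MASK
crypto map OUTSIDE-MAP 10 match address VPC-VPN
crypto map OUTSIDE-MAP 10 set peer VYOS_ELASTIC_IP
crypto map OUTSIDE-MAP 10 set ikev1 transform-set AWS-TS
crypto map OUTSIDE-MAP interface outside
tunnel-group VYOS_ELASTIC_IP type ipsec-l2l
tunnel-group VYOS_ELASTIC_IP ipsec-attributes
 ikev1 pre-shared-key PRESHAREDKEY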

Monitor A/C for Errors (24-volt to relay) with Raspberry Pi

At my place of employment we usually fight the A/C units for the server room every spring and fall. We have two large units, and a single unit should be able to properly handle the heat created by the servers and network equipment. This seems to work great; however, we have problems because we do not employ any software to monitor the individual units. Typically what happens is that one unit fails for any number of trivial reasons (loose belt, dirty filter, etc.). Since a single unit will effectively handle the heat, we have no idea we are running on only one unit until the second unit has a problem, at which point we start getting Nagios alerts that the room is overheating.

We tried to get the proper BACnet software in place to monitor each unit but were disappointed when we received quotes of anywhere between $10,000 and $15,000. This seemed absurd, so our trusty maintenance person got his hands on a detailed manual for the system and discovered that one of the outputs on the main control board carries a 24-volt signal whenever the unit reports an error.

This was a very promising discovery. The next step was to determine a way to monitor that 24-volt output, and a Raspberry Pi makes this easy. We ran 4 wires alongside the thermostat wires from each unit into the server room and attached them to two relays. Very simply, a relay closes when the A/C detects an error and activates the 24-volt output; otherwise the relay remains open. A very simple bash script, executed via the Nagios Remote Plugin Executor, calls a Python script to determine whether the relay is open or closed and alerts us when a failure is detected.

eastunit.py:
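A hedged reconstruction (the GPIO pin number and wiring polarity are assumptions):

#!/usr/bin/env python
# Read the GPIO pin wired to the east unit's error relay and print its
# state; the pin number and pull-up wiring are assumptions
import RPi.GPIO as GPIO

PIN = 23  # BCM pin wired through the relay (assumed)

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# A closed relay (the unit asserting its 24-volt error output) pulls the pin low
if GPIO.input(PIN):
    print("OPEN")
else:
    print("CLOSED")

GPIO.cleanup()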

check_ac-east.sh:
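And a matching sketch of the NRPE wrapper (the script path is an assumption; the exit codes follow standard Nagios plugin conventions):

#!/bin/bash
# Wrap the python script above for NRPE; exit 2 raises a Nagios CRITICAL
state=$(python /opt/eastunit.py)

if [ "$state" = "CLOSED" ]; then
  echo "CRITICAL: east A/C unit is reporting an error"
  exit 2
else
  echo "OK: east A/C unit reports no errors"
  exit 0
fi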


Force DHCP Client to obtain new IP with Mikrotik DHCP Server

Recently I discovered that many of my wireless clients were never getting the static IPs I had assigned them via DHCP. After some research I realized that this is exactly how DHCP is supposed to work. While the client is in the BOUND state, DHCP essentially lies dormant until 50% of the lease time expires. At that point the client sends a DHCPREQUEST for the same IP that is currently bound, and most of the time the server agrees by sending a DHCPACK allowing the client to keep that IP. As a result, the static IPs I have bound to specific MAC addresses are ignored, since I only add static entries after the install is complete and I know the client’s router’s MAC address.

I needed a way for the DHCP server to NOT send that DHCPACK, which is not something built into Mikrotik’s DHCP server options. However, I discovered that if I create multiple IP pools and rotate which pool the DHCP server is using, the server withholds the DHCPACK on a client’s renewal request, which leads to it offering the static IP I had bound.


The following is the shape of my IP pools (names and ranges here are placeholders):
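/ip pool add name=pool1 ranges=10.0.0.10-10.0.0.120 next-pool=pool2
/ip pool add name=pool2 ranges=10.0.0.130-10.0.0.240 next-pool=pool1
/ip dhcp-server set dhcp1 address-pool=pool1

# Switching the active pool later prevents the simple DHCPACK on renewal
/ip dhcp-server set dhcp1 address-pool=pool2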

I set the next-pool option so the DHCP server still has addresses to hand out if the active pool becomes fully allocated. I plan on adding some logic to my update scripts that will check which pool the router is currently using and switch pools once the update is complete.