Blocking Traffic by MAC Address and Time of Day on Ubiquiti EdgeRouters

I have a Ubiquiti EdgeRouter PoE at the house as my main router. In order to manage “resources” at the house, I wanted a way to block a couple of MAC addresses at a certain time each day. I created a filter that blocks by MAC address that looks something like:

set firewall name SWITCH0_IN default-action accept
set firewall name SWITCH0_IN description 'Used for blocking local users'
set firewall name SWITCH0_IN rule 1 action drop
set firewall name SWITCH0_IN rule 1 description APhone
set firewall name SWITCH0_IN rule 1 disable
set firewall name SWITCH0_IN rule 1 log disable
set firewall name SWITCH0_IN rule 1 protocol all
set firewall name SWITCH0_IN rule 1 source mac-address '66:55:44:33:22:11'
set firewall name SWITCH0_IN rule 2 action drop
set firewall name SWITCH0_IN rule 2 description iPhone
set firewall name SWITCH0_IN rule 2 disable
set firewall name SWITCH0_IN rule 2 log disable
set firewall name SWITCH0_IN rule 2 protocol all
set firewall name SWITCH0_IN rule 2 source mac-address '11:22:33:44:55:66'
set firewall name SWITCH0_IN rule 3 action drop
set firewall name SWITCH0_IN rule 3 description Desktop
set firewall name SWITCH0_IN rule 3 disable
set firewall name SWITCH0_IN rule 3 log disable
set firewall name SWITCH0_IN rule 3 protocol all
set firewall name SWITCH0_IN rule 3 source mac-address '12:34:56:78:90:ab'

I applied this ruleset to the “switch0” interface that talks to my LAN interfaces at eth2, eth3 and eth4.
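For reference, attaching the ruleset as an “in” filter on the switch interface uses the same set-command style (standard EdgeOS syntax; adjust the interface name to match your own setup):

```
set interfaces switch switch0 firewall in name SWITCH0_IN
```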

For the ruleset above, I want to enable rules 2 and 3 to block traffic from the devices “iPhone” and “Desktop”. Two hours later, I want to disable these rules to pass traffic again. This script does just that…

#!/bin/bash
# A script to block two MAC addresses at 0330 UTC and
# unblock them at 0530 UTC (1930 to 2130 Pacific) every day.
WR=/opt/vyatta/sbin/vyatta-cfg-cmd-wrapper
unblock=0
if [ $# == 1 ]; then
  unblock=1
fi
$WR begin
if [ $unblock == 1 ]; then
  # "disable" turns the drop rules off, so traffic passes again
  $WR set firewall name SWITCH0_IN rule 2 disable
  $WR set firewall name SWITCH0_IN rule 3 disable
else
  # removing "disable" re-arms the drop rules, blocking traffic
  $WR delete firewall name SWITCH0_IN rule 2 disable
  $WR delete firewall name SWITCH0_IN rule 3 disable
fi
$WR commit
$WR end

There are a couple of ways to configure the router with scripts.  Ubiquiti suggests using the /opt/vyatta/etc/functions/script-template script like:


#!/bin/vbash
if [ $# == 0 ]; then
  echo "usage: $0 <new-ip>"
  exit 1
fi
source /opt/vyatta/etc/functions/script-template
configure
set interfaces tunnel tun0 local-ip $1
commit
exit

This actually breaks due to a bug. I have had to use /opt/vyatta/sbin/vyatta-cfg-cmd-wrapper in my script instead. Works just fine.

The cron entries that run the script at 0330 and 0530 UTC…

30 3 * * * /home/joeuser/
30 5 * * * /home/joeuser/ unblock


Radio Air Checks…

Radio air checks are normally used by radio talent and program directors to go back and listen to the talent’s performance. Many times these air checks are recordings of a DJ’s shift that were “telescoped” so only the time that the talent’s microphone was turned on (aka ‘open’) was recorded. Air checks were also collected by radio fans who wanted to record a DJ they liked or perhaps even grab a hit song they could listen to later.

Mike Schweizer was a bit of a radio fanboy in his early years. Later on, he became a radio engineer specializing in remote broadcasts and working for stations like KUSF, KYA and KSFO. As a kid and through his adult life, he made recordings of stations and collected hundreds of reels of tape. Unfortunately, his life was cut short; he passed away in 2011. Before he passed, he transferred a good number of these air checks to digital. I have many of these up at my site that you can listen to. Since I published these some years ago, the Internet Archive has copied these recordings and put them up there as well.

Many of these are classics, as they are recordings of early free-form radio such as KMPX, or air checks of Wolfman Jack on XEPRS.

I should say that not all of these were recorded by Mike; it seems a handful recorded by others have snuck in here. If you find that one of those is yours, please drop me a note and I can remove it if you deem it necessary.

LibreNMS’ API

LibreNMS is a very flexible network and server monitoring and alerting system. I have deployed it at a number of companies because of its ease of installation, the fact that it auto-discovers devices, its frequent updates (typically multiple times a week) and its support for pretty much every network device you can think of.

On top of that, the alerting can be tuned to match very specific cases. Since the back end is MySQL, your alerting conditions can match almost anything you can write a SQL query for. A good example would be to alert only on interfaces that have a specific description such as “TRANSIT”, where the device has a host name containing “edge” and the connection is 10Gb/s (the interface name starts with ‘xe’). Because you can group things by description or part of a host name, you can say that anything with the string “edge” in the hostname should be considered an “edge router”, so a group “ER” can be created for those devices. With autodiscovery, as soon as you add a device, it will automatically be put into any group whose rule/regular expression matches it.

One of the more interesting features is LibreNMS’ API. You can get pretty much any detail you want out of what LibreNMS has collected and stored in the DB. It will also create graphs for you on the fly. One case I have had in the past was creating daily and weekly total bandwidth graphs for a set of specific ports on a group of switches. The switch ports have a particular unique string I can match on, so I was able to create a “group” called “peering” that included these ports across all of the switches.

I wrote a simple script that asks for a graph for the daily and weekly time frames. I also added various options to the request, such as not showing the legend of interfaces and making the in and out directions all one color. The other option is to use a different color for each interface. We wanted a clean look, so we went for the solid color. The API doesn’t do everything you may want, such as titling the graph. This is where I use the “convert” program from ImageMagick to overlay some text at the top of the graph. You can see the final result at the SFMIX site.
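As a rough sketch of what building such a request can look like (the server, device, port and time range below are made-up placeholders; the endpoint shape follows LibreNMS’ port-graph API as I understand it):

```shell
# Build a LibreNMS port-graph URL from its parts (all values hypothetical).
API="https://libre.example.com/api/v0"
DEVICE="switch1.example.com"
PORT="eth0"
URL="$API/devices/$DEVICE/ports/$PORT/port_bits?from=-1d"
echo "$URL"
# The graph PNG could then be fetched with the API token and titled
# with ImageMagick, e.g.:
#   curl -s -H "X-Auth-Token: $TOKEN" -o daily.png "$URL"
#   convert daily.png -gravity north -annotate +0+5 'Peering (daily)' out.png
```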

Getting Mediainfo Data into a CSV

Mediainfo is a pretty handy tool for examining media files like MP4 containers and the streams in them, such as the video and audio streams. It will also spit out a couple of formats to make the data easier to parse, such as XML. But I just want a certain set of data, and I want to move it into a CSV file so I can bring it into something like Google Sheets or Excel. It seems the “Inform” argument can get me some of the way there. You can tell it to give you multiple data points about a particular stream or the “General” aspects of the file. You can’t mix, say, “Audio” and “Video”. That’s OK. I just want a handful of things about the video stream plus the filename. Whoops. The file name is in the “General” bucket. So I am going to cheat a little with the “echo” command and tell it to print the file name and a comma without a trailing newline, so it creates the first column of the CSV row.

So this little script will find my MP4 movie files in the “/foo/*” directories and subdirectories, assign each name to the “movie” variable, print out the name and a comma without a newline, and then spit out a bunch of stuff about the video stream to give me nice CSV output…

echo "Filename,PixelWidth,PixelHeight,DisplayRatio,Bitrate,FrameRate"
find /foo/ -type f | grep -i '\.mp4$' | while read movie; do
  echo -n "$movie,"
  mediainfo --Inform="Video;%Width%,%Height%,%DisplayAspectRatio/String%,%BitRate%,%FrameRate%" "$movie"
done

And I get something like:


You can get a list of Mediainfo parameters to use with the "--Info-Parameters" argument.

[Update: I created a Mediainfo template and script that will create a nice CSV with lots of info from an MP4 container, assuming one video and one audio track in the container. You can see it on github at:]

Let’s Encrypt!

Encryption is in the news again. Various three-letter government organizations want to have backdoors in devices like cell phones for surveillance. Of course, with a back door or an exploit in an operating system or application, anyone can track traffic from these devices. Trying to limit it to “lawful interception” would be impossible.

Encryption has two significant roles: security from third parties viewing the traffic, and authentication so you have some confidence that you are talking to the right party. Having traffic in the clear, without encryption, means that your communications can be easily captured and your session could be spoofed. You certainly don’t want your web sessions with your bank in the clear, where a nefarious party can watch your traffic and even spoof your session to transfer your funds to them. Internet commerce would not work without encryption.

The encryption method that web sites use is called HTTPS. It uses a protocol called TLS to set up an encrypted session between you and the web site. The nice thing about HTTPS and TLS is that they can use a number of different strong ciphers to make it pretty difficult for third parties to sniff your traffic. They also use a “chain of trust” system to provide some authentication that the web site you are using is really the site you think it is.

Up until recently, setting up HTTPS and acquiring and installing the certificate for a web site has not been for the faint of heart. It has also been pretty expensive. Purchasing a certificate can run between $250 and $500 a year. Your personal web site, or even a small company, may not have the coin to purchase a certificate. As such, many sites have opted not to run HTTPS and instead run the more common and insecure HTTP protocol. This is where Let’s Encrypt comes into the story.

To quote from Let’s Encrypt’s web site:

Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG). The ISRG is a non-profit with the mission to “reduce financial, technological, and education barriers to secure communication over the Internet.”

Let’s Encrypt is doing just that. It addresses the speed bumps to creating secure communications; it is free and it is simple. For most operating systems and web servers, it just means downloading the Let’s Encrypt software, running it and restarting the web server. Your site will be up and running with a valid HTTPS session. Although this is true for most Linux distributions, it isn’t quite there for UNIX-like systems such as FreeBSD, which this site uses. It did take me a bit more hacking around to get it to work. Googling around, you can find out how to get this software working on FreeBSD, as well as how to configure your web server (e.g. Apache on this box) to use Let’s Encrypt’s certificate and to update it when the cert expires.

One of the nice things about Let’s Encrypt is the process of proving who you are. Normally, with any other certificate authority, it would mean emails, phone calls, etc., back and forth a number of times. This can take hours or days. The Let’s Encrypt process just requires you to take down your web site for the short period you run the Let’s Encrypt client. The client puts up a little web site that the Let’s Encrypt servers validate against. If you have control over your domain, this process will work and the Let’s Encrypt servers will hand back a certificate for your web site that is good for 90 days. From then on, you just run the client software, say, once a month to update the certificate. Any operating system has a way of running scheduled applications, such as “cron” for Linux/FreeBSD. Oh… and all of this is free.
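For example, a renewal cron entry might look something like this (the client is now called certbot; the path and the Apache reload command are assumptions that will vary by system):

```
# Renew on the first of each month at 03:00, then reload Apache
0 3 1 * * /usr/local/bin/certbot renew --quiet && apachectl graceful
```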

So there shouldn’t be an excuse for running a non-encrypted web site now. Protect yourself and your users by using HTTPS with Let’s Encrypt.

[A very nice page detailing how to install and renew Letsencrypt certs for Ubuntu can be found at: – Tim]

Ubiquiti Rockets and a 50Km Path Over Water…

This last fall, we put in a 50Km 5.8GHz link from the center of San Francisco (Twin Peaks) to the Southeast Farallon Island lighthouse using Ubiquiti Rockets. At first, the link was unusable, mainly because the long distance and the shot over water cause the received signal to vary wildly. That caused the radios to frequently and rapidly try to change the MCS (modulation scheme), which made the link very lossy. Here are some settings I had to settle on to get the link to work.

  • Do not enable auto-negotiate for the signal rate on long links. The radios will auto-negotiate data rates when the receive signal level changes, which momentarily drops the link while the ends sync up. If the signal is bouncing frequently, this will make the link pretty lossy or not usable at all.
  • Long links, or links that are being interfered with, will likely have problems with modulation schemes that have an amplitude component, such as QAM. If so, use a modulation scheme without an amplitude component, like BPSK, where you can leverage the “Capture Effect”. This would be MCS0 (1 chain) and MCS8 (2 chains).
  • Fix the distance of the link to about 30% over the calculated distance. The automatic calculation that AirOS does is typically wrong on long links.
  • Turn off AirMax on Point to Point links. AirMax is used to manage multiple clients on one AP more fairly. Not needed for P2P.
  • Use as narrow a channel as you can support for the bandwidth you need. Per the AirOS manual…
Reducing spectral width provides 2 benefits and 1 drawback.
  • Benefit 1: It will increase the amount of non-overlapping channels. This can allow networks to scale better
  • Benefit 2: It will increase the PSD (power spectral Density) of the channel and enable the link distance to be increased
  • Drawback: It will reduce throughput proportional to the channel size reduction. So just as turbo mode (40MHz) increases possible speeds by 2x, half spectrum channel (10MHz), will decrease possible speeds by 2x.
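The fixed-distance padding above is just simple arithmetic; for this 50Km path it works out like so:

```shell
# Pad the fixed link-distance setting ~30% over the calculated path length.
DIST_KM=50
PADDED_KM=$(( DIST_KM * 130 / 100 ))
echo "set the distance to about ${PADDED_KM} km"
```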

Airfields of Yesteryear

An older post of mine talked about looking back at history with the little geodetic survey benchmarks you see in the sidewalk and at the base of older buildings. Modern archaeology has always interested me, and if you are interested too, there is a wonderful site documenting abandoned airports around the US named “Abandoned & Little-known Airfields”. It covers the history and evidence left behind from when general aviation was more popular, plus strange little military operations that were out in the middle of nowhere.

Having spent the last 25 or so years living in San Francisco, it was a surprise to learn about strips such as the Bay Meadows Airport in San Mateo and the Marina Airfield next to Crissy Field in San Francisco. Marina Airfield was the first terminus of the United States Post Office Department’s Trans-Continental Air Mail Service.

Growing up in Fresno, I remember the remnants of Furlong Field just out Shaw Avenue. Good to see it documented here so it isn’t forgotten as development has pretty much obliterated any trace of it.

Sad to see so many fields disappear with the wane of general aviation in this country. It is just too expensive for most people to own or lease a plane and keep it up. Land is being sold to developers, as cities see better tax revenue from a shopping center than an air strip.

Small form-factor broadcast console…

Mackie set the standard for inexpensive, small form-factor recording and sound consoles. I own a 1402-VLZ console that fits in a small briefcase and sounds great. The problem is that it is the wrong console for most of the work I do that needs a console. Coming from the broadcast side of the world and not the recording side, I want things like a cue buss that sits at the end of the fader travel, or control room monitors that mute when I turn on the mike so I don’t get feedback. I want logic that can switch a CD player into play when I bring up the fader or hit a start button. None of these “features” are typically required on recording and sound consoles, and that is where the biggest market for companies like Mackie is.

Allen & Heath, a respected name in recording consoles, has just come out with its first stab at a broadcast console in the same sort of form factor as the Mackie 1402. It is called the XB-14 and has most of the bells and whistles I have been looking for. I have been told by Mark Haynes at Leo’s Pro Audio that they should have one in next week to test drive, and I am looking forward to seeing if they got it right.

One “down” side of the console is the price. It sells for just under $1,400, though I have seen it advertised at $1,200. The Mackie 1402 runs around $500. I can see that the XB-14 has some extra features that make it more of a broadcast desk, but $700 more? I hope some of these boxes sell, to encourage folks like Mackie to compete for this market.

Great tool for checking Line of Sight…

Google Maps has opened up access to resources that would otherwise take considerable work and expense. Ten years ago, just purchasing software that could do ray tracing over a geographic area would have cost tens of thousands of dollars. Now “HeyWhatsThat” has leveraged Google Maps to do just this, and it is free.

Now, why would I be so interested in this site? Being a bit of a wireless geek, I find it a great starter tool for understanding how much coverage area a mountain top has. In the example shown on the right, you can see the coverage from the Twin Peaks communications site in San Francisco. The orange/red overlay indicates the area that the site can see. You can see the shadowing of some of San Francisco’s hills affecting the coverage area.

The top of the frame shows a panorama of the skyline as seen from the site. The list on the right shows which mountain tops can be seen and the distance to them.

HeyWhatsThat is a great starting point for checking out a coverage area. I wouldn’t throw away that $50,000 coverage software just yet, though, as it will be a bit more accurate, using better algorithms to calculate coverage such as Longley-Rice and TIREM, as well as its own tweaks.

Major Rant – Title 24

Title 24 is intended to reduce power consumption in new and remodeled buildings. In its current structure, it can make things worse.

I really have been trying to be conscientious in the design and purchase of lighting for our new kitchen remodel. I have been looking at every different lighting option, in particular LED, on the assumption that anything that doesn’t have a “heater” in it creating waste heat is good (BTW, LED lighting is a whole other blog post). One of the first things I ran into in my design was California’s Title 24 requirements. At least up to August of this year, California has had standards with a rather strange way of promoting and calculating effective power usage for kitchens.

  1. There is no limit on the power you can put into the lighting of a kitchen. (a bad thing)
  2. The wattage allocated for high efficacy lighting must be 50 percent or more of total lighting wattage. (a bad thing)
  3. Any fixture that can take non-high efficacy lighting devices like incandescent will be counted at the maximum wattage of the fixture. (a bad thing)
  4. High efficacy lighting is based on the number of lumen per watt (a good thing)

What this means is that if you have 200 watts of low-efficacy lighting, you must also have 200 watts of high-efficacy lighting, which makes no sense at all. In fact, it could force you into putting in more high-efficacy lighting than you need. In addition to this, the old screw-type Edison base for light bulbs is a standard. There are many more compact fluorescent bulbs designed for that base than for the proprietary pin-type bases used in recessed lighting. Guess which one is cheaper? In order to “comply” with Title 24, you need to use non-standard fixtures.
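A quick sketch of that arithmetic (the 50 percent rule forces high-efficacy wattage to at least match the low-efficacy wattage installed):

```shell
# High-efficacy wattage must be >= 50% of the total lighting wattage,
# so it must at least equal the low-efficacy wattage installed.
LOW_W=200
HIGH_W=200   # the minimum allowed given LOW_W
TOTAL=$(( LOW_W + HIGH_W ))
SHARE=$(( HIGH_W * 100 / TOTAL ))
echo "high-efficacy share: ${SHARE}%"
```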

If they really wanted to “fix” this, they could just limit the number of watts per square foot and strike out this silliness about standard lighting fixtures.