Accessing Sonos devices across VLANs on a Ubiquiti UDM network

My network has been (over)powered by a Ubiquiti UniFi Dream Machine (UDM) for a few years now.

Using the excellent UniFi web interface, I created a few VLANs so I could easily segregate my IoT devices from the rest of the network.

That was all working fine until last Christmas, when I got myself a Sonos Arc soundbar and realized that I couldn’t have it on VLAN 20 and access it from my Android phone on VLAN 40…

After a lot of googling on this problem, I came to realize that Sonos uses multicast to “announce” its devices to the network; so as long as I could relay those multicast packets across the VLANs, I should be able to access it on my phone without issues!

I found multicast-relay by Al Smith on GitHub, which says it can be used to “relay broadcast and multicast packets between interfaces”… just what I needed!

My solution was to SSH into the UDM (this is disabled by default, but you can enable it on the web interface) and execute the following commands to download and run the multicast-relay.py python script:

cd /tmp
wget https://github.com/alsmith/multicast-relay/raw/master/multicast-relay.py
python3 multicast-relay.py --foreground --verbose --noMDNS --interfaces br20 br40

In the final command above you will notice “br20 br40”: those are the bridge interfaces for my VLANs 20 and 40, so you will have to change them to match your own VLANs.
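
If you are not sure which interface names correspond to your VLANs, you can list them from the same SSH session; on my UDM each VLAN network shows up as a brXX bridge interface, but the exact names depend on your setup:

ip -br addr show | grep '^br'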

Once the script started, I could see it relaying the Sonos multicast data across VLANs just as I wanted, and when I opened up the Sonos Android app, it quickly found the device and allowed me to add it to the app!

The interesting bit is that after adding the Sonos device to the app, I could simply stop the script (hit Ctrl+C) and delete it, as the app keeps working just fine even without ever seeing those multicast packets again.

Programming and Tinkering!

Though this might come as no surprise given the lack of content here lately, I’ve been mostly away from any type of Windows development work - not only on a personal level but also on a professional one.

The raw reality is that the work opportunities in Windows Development in the UK have mostly disappeared, and right now, there are no signs of this changing anytime soon!

For the past 2.5 years I’ve been working mostly with .NET Core 3.1, .NET 5, and .NET 6, building backend solutions that run inside Docker containers or as Lambda functions, hosted in Microsoft Azure and/or Amazon Web Services - and I have learned a lot from people with far more experience in this type of work than I had!

On a personal level, I’ve been experimenting a lot more in Home Automation, using Home Assistant, ESPHome, and Zigbee2MQTT.

A couple of months ago I bought an Ender-3 V2 3D printer which I have now heavily modified, and that led me to learn more about CAD modeling, 3D printing, and electronics, and to start contributing to some related open-source projects - but I’m going to leave that for another post!

As I continue this new phase, I wanted to drop the “Windows Development” moniker on this website and replace it with a more appropriate “Programming and Tinkering”!

Monitoring changes in webpages with Home Assistant

I’m a huge fan of Home Assistant and use it to automate most of the devices in my home, and so I follow a few people on Twitter and YouTube who share valuable information about it!

One of those is BeardedTinker on YouTube, as he provides particularly good step-by-step explanations in his videos - if you are interested in the topic, I do recommend following his channel!

A few weeks ago he published a video on Smart Home Service monitoring with Home Assistant, where he showed how we can check that a website is working correctly by using curl on the webpage address and then checking for the 200 OK status return code.

I wanted to improve on that by checking a webpage for changes and then creating automations on that!

There are a few ways to achieve this; I will be discussing two of them!

Using the ETag header

First, here is a quick explanation on what the ETag header is (more here):

The ETag HTTP response header is an identifier for a specific version of a resource. It lets caches be more efficient and save bandwidth, as a web server does not need to resend a full response if the content has not changed.
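
You can quickly check whether a page exposes an ETag by asking for just its headers with curl:

curl -sI https://www.zigbee2mqtt.io/information/supported_devices.html | grep -i etag

If the server supports it, you should get back a single ETag line whose value changes whenever the page content changes.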

As all we need are the webpage headers, we can use a HEAD request instead of a GET, so here is how the whole thing will work:

sensor:
  - platform: command_line
    name: webpage_monitor
    command: >-
      python3 -c "import requests; response = requests.head('https://www.zigbee2mqtt.io/information/supported_devices.html'); print(response.headers.get('ETag'))"
    scan_interval: 10800

In the above example, we have a sensor called webpage_monitor that executes a Python command to perform a HEAD request on the Zigbee2MQTT supported devices webpage and return the ETag header value.

Now all we need is an automation that will trigger when the state (value) of this sensor changes:

automation:
  - alias: Show notification when webpage_monitor changes
    trigger:
      - platform: state
        entity_id: sensor.webpage_monitor
    action:
      - service: persistent_notification.create
        data:
          message: 'Page changed!'

Using the page content

Not all webpages have an ETag header, and for those cases (or when the ETag somehow changes without the content actually changing), we can instead create a hash of the page content and use that in our automation!

Here is an example with the Home Assistant homepage:

sensor:
  - platform: command_line
    name: webpage_monitor
    command: >-
      python3 -c "import hashlib, requests; response = requests.get('https://www.home-assistant.io/'); print(hashlib.sha256(response.content).hexdigest())"
    scan_interval: 10800

The above will calculate a unique hash of the content of the page and store that in the sensor.

As before, all we need then is to create an automation that will run when the sensor state changes!

How to connect to a WireGuard VPN server from a Docker container

I like to use Docker containers for pretty much everything I do, so I wanted to see if I could have a Docker container connect to a WireGuard VPN Server, and then have other containers share that same connection.

Surprisingly, this is not only possible, but it is also amazingly easy to achieve!

Preparation

We will be using the linuxserver/wireguard Docker image. This image works in either WireGuard server or client mode, but we will be using it just as a client.

For this post, I will focus on having the VPN connection isolated from the host system by using a custom bridge network.

We will also be using docker-compose to maintain the full Docker stack.

We will create a folder called “wireguard” that will store all the data from the container. Inside this folder we will place a file called “wg0.conf” that will hold the WireGuard connection settings.

Our final folder structure looks like this:

.\docker-compose.yaml
.\wireguard\wg0.conf

Getting a WireGuard VPN server

There are quite a few VPN server providers out there that already provide WireGuard servers for you to connect to, so if you already have a VPN service subscription, you should probably check there first for WireGuard support!

I’ve been a happy customer of TorGuard for a few years now, and I was quite pleased to see them adding WireGuard support recently.

If you are considering registering for a TorGuard subscription plan, you can use this link and the promo code PL50P to get a lifetime discount of 50% off!

Disclaimer: neither TorGuard nor anyone else sponsored this post, but as I said I’ve been paying and using their products for quite a few years to the point I do recommend them. The link above is an affiliate link and does pay a small commission to me for anyone who does use it with the discount code.

Here is how you can generate the WireGuard connection settings in TorGuard:

  1. Login and open the Config Generator
  2. Change the “VPN Tunnel type” to “WireGuard”
  3. Select one of the available servers on the “VPN Server Hostname/IP”
  4. Enter your “VPN Username” and “VPN Password”
  5. Click on “Generate Config”

The last step is to copy the “Config Output” contents to the “wg0.conf” file.
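
For reference, a WireGuard client configuration generally looks something like this (all keys and addresses below are just placeholders - the values generated by your provider will differ):

[Interface]
PrivateKey = <client-private-key>
Address = 10.13.128.2/24
DNS = 10.13.128.1

[Peer]
PublicKey = <server-public-key>
AllowedIPs = 0.0.0.0/0
Endpoint = <server-hostname-or-ip>:51820
PersistentKeepalive = 25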

Running WireGuard from Docker

Here is the basic “docker-compose.yaml” file to get the container running:

version: '3.7'

services:
  wireguard:
    image: linuxserver/wireguard
    container_name: wireguard
    restart: unless-stopped
    networks:
      - backbone
    volumes:
      - './wireguard:/config'
      - '/lib/modules:/lib/modules:ro'
    environment:
      - PUID=1000
      - PGID=1000
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1

networks:
  backbone:
    driver: bridge

If you read the Docker image documentation, you will see it requires some special capabilities to be enabled for it to work. You can see those in the cap_add and sysctls YAML nodes above.

We are now ready to start, so just enter docker-compose up -d to create the “backbone” bridge network, and create and start the “wireguard” container.
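
Once it is up, you can peek at the container logs to confirm the tunnel came up correctly (the exact log lines depend on the image version):

docker-compose logs -f wireguard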

Testing and Validating

Run the following commands now:

curl -w "\n" ifconfig.me

docker exec -it wireguard curl -w "\n" ifconfig.me

The first command will retrieve your real Public IP, matching the one your ISP has provided you with.

The second command will do the same but from inside the WireGuard Docker container, and the result should match the connected WireGuard VPN server IP.
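
As an extra check, you can also inspect the state of the WireGuard tunnel itself from inside the container (the wg command-line tool should be available in the linuxserver/wireguard image, as it is what brings the tunnel up):

docker exec -it wireguard wg show

If the tunnel is up, you should see the wg0 interface listed with its peer, a recent handshake, and some transfer counters.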

Sharing the connection with other containers

Under the services node of our “docker-compose.yaml” file, add the following service:

# under the existing "services:" node, add the following content
  ubuntu:
    depends_on:
      - wireguard
    image: ubuntu
    container_name: ubuntu
    network_mode: service:wireguard
    # stock ubuntu does not include curl, so we install it before calling it
    command: >-
      bash -c "sleep 10 &&
      apt-get update -qq && apt-get install -y -qq curl &&
      curl -w '\n' ifconfig.me"

After saving the changes to the file, run docker-compose up -d to start this new container.

The above service will start a new Ubuntu Docker container after the WireGuard one, pause for 10 seconds, install curl, and then retrieve the Public IP address; if all goes well, this should match the WireGuard VPN Server IP.

The trick here is to use the network_mode: service:<service-name> to make the new container reuse another container’s network stack!
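
The same trick also works outside of docker-compose: any ad-hoc container can be attached to the WireGuard container’s network stack with the --network container:<name> option (here using the small curlimages/curl image purely as an example):

docker run --rm --network container:wireguard curlimages/curl -w "\n" ifconfig.me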

Exposing the client IPs to Docker containers in Synology NAS

I have a DS412+ Synology NAS that’s been running continuously for a few years now!

It’s a great NAS, but it’s the extra features, like being able to easily run Docker containers, that make me like it even more!

I recently tried running an AdGuard Home Docker container and, unsurprisingly, it worked perfectly; after setting the NAS IP as the DNS server in my router, all my local DNS requests were sent to AdGuard Home and I could see it doing its job of blocking any known advertisement or tracking hosts.

There was however a small issue: the IP of the client machine was not showing up!

AdGuard Home Docker container not showing the client IPs

I found that a similar issue had previously been opened on the Pi-hole GitHub repository, but no solution was provided.

As I couldn’t find any fix for this problem, I posted the issue to the Synology Community forums, but that didn’t provide a solution either…

At this stage, I tried comparing the Docker iptables on my Synology NAS with the ones in a Raspberry Pi, and that’s when I noticed that the Docker pre-routing rules were missing.

I then SSH‘ed to the Synology NAS and manually added the missing rules:

sudo iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
sudo iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER

Success!!

I could now see the client IPs and hostnames on AdGuard Home:

AdGuard Home Docker container now showing the client IPs or hostnames

I was aware that this change to the Synology iptables was not a permanent one and would have to be done on every reboot, so the next step was to get a more permanent solution!

The simplest solution I found was to use the Synology Task Scheduler to run a user-defined script on every reboot.
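
As a rough idea of what such a boot-time script can do (this is only a sketch, not the exact script I ended up with), it can use iptables -C to test whether a rule already exists and only add it when missing:

#!/bin/bash
# add the Docker PREROUTING rule only if it is not already present
# (repeat the same pattern for the second rule)
iptables -t nat -C PREROUTING -m addrtype --dst-type LOCAL -j DOCKER 2>/dev/null \
  || iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER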

You can check the script and instructions on how to use it here:

Please be aware that these changes to the iptables will probably mess with the Synology Firewall, so just be careful with that if you do use the Firewall!

Also be aware that this goes with WOMM certification, so it works on my machine but there are no guarantees it will work on yours!

My first steps with Azure Static Web Apps

Let me just start by saying that Build 2020 was awesome!!

There was a lot of great content going around for everyone, but as I was watching the “From code to scale! Build a static web app in minutes” session showing how easy it is to use Azure Static Web Apps, I couldn’t help but try it for myself with this website!

My first attempt was to run through the “Tutorial: Publish a Gatsby site to Azure Static Web Apps Preview”, and in the end I was happy to see that it created the Azure resources and that a new GitHub workflow showed up in my repository.

On closer analysis of the workflow, I could see there’s a new Azure/static-web-apps-deploy action in use doing all the heavy lifting! Internally, this action uses the Oryx build system to identify the type of source code in the repo and compile it.

Once the workflow started and got to the Azure/static-web-apps-deploy action, I quickly came to realize that it wasn’t happy with the fact that my “package.json” file had "yarn": ">=1.22" on the "engines" node… the build image only had yarn 1.17 and so the build failed:

error pedrolamas.com@0.1.0: The engine "yarn" is incompatible with this module. Expected version ">=1.22". Got "1.17.3"
error Found incompatible module.

At this point I edited my “package.json” file, lowered the yarn version requirement to “1.17”, and moved forward.

As expected, pushing this change caused a new workflow to start, but again, this failed:

error Processing /bin/staticsites/ss-oryx/app-int/content/assets/logo.jpg failed

Original error:
/bin/staticsites/ss-oryx/app-int/node_modules/cwebp-bin/vendor/cwebp: error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory

This time, the problem was a lot more complicated to fix: the libGL.so.1 library required by Gatsby to process the images was missing from the Oryx docker image!

At this stage I actually thought about giving up, and went to Twitter to vent my frustration at what I had just experienced:

John Papa quickly replied to that tweet and asked me to open an issue on the Azure Static Web Apps repo.

After some messages were exchanged, I followed up on a tip from Annaji Sharma Ganti to compile the source code before the Azure/static-web-apps-deploy action and make the action point to the compiled static folder (the “public” folder in my case) - this way Oryx would just skip the compilation bit and go directly to publishing the artifacts to Azure.
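
As a rough sketch of what that change looks like in the workflow (the action’s input names have changed over time and the secret name below is illustrative, so treat this as an approximation rather than my exact file):

# build the Gatsby site first, then point the deploy action at the pre-built output
- name: Build site
  run: |
    yarn install --frozen-lockfile
    yarn build
- name: Deploy to Azure Static Web Apps
  uses: Azure/static-web-apps-deploy@v1
  with:
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    action: 'upload'
    app_location: 'public'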

You can see here my changes to the workflow.

Finally, the workflow worked as expected and I could see my static GatsbyJs site hosted as an Azure Static Web App!

I then made a few more changes (like ensuring that GitHub would check out the submodule with the website images, and fixing some missing environment variables), added a custom domain, waited a few minutes for it to generate a valid SSL certificate, and… we are done!!

http://azure-test.pedrolamas.com/

I ran a website speed test with Fast or Slow and got 98 out of 100 points, a 2-point improvement over a speed test of this website (which is built and hosted on Netlify with Cloudflare CDN).

Speed test results from Fast or Slow for this Azure Static Web App

It took a bit more effort than I initially expected, but I’m very happy and very impressed with the result!

Azure Static Web Apps is currently in “preview”, so be aware that changes and improvements will happen before it is ready for production - in the meantime, go and try it out for free!

Microsoft Build 2020 registration is open, fully remote, and free!

Microsoft Build 2020, the biggest annual event from Microsoft for Developers by developers, is just around the corner!

In regular times, //build/ is an in-person paid event, but given the current situation with COVID-19, Microsoft announced that this year it will host the event virtually, and completely free!

The event kicks off with Satya Nadella, followed by 48 hours of live sessions!

You can register here right now!

My Windows Phone apps are now open-source!

As Windows Phone has been dead and buried for a while now, last weekend I uploaded all assets and source-code for my old Windows Phone 7.x and 8.x apps to GitHub, some of them never before published!

Here’s the list of the new repos:

All content is available under the MIT license, but please bear in mind that I built these a while back, so I give no guarantees that they are in working order!