My network has been (over)powered by a Ubiquiti UniFi Dream Machine (UDM) for a few years now.
Using the excellent UniFi web interface, I created a few VLANs so I could easily segregate my IoT devices from the rest of the network.
That was all working fine until last Christmas, when I got myself a Sonos Arc soundbar and realized that I couldn’t have it on VLAN 20 and still access it from my Android phone on VLAN 40…
After a lot of googling on this problem, I came to realize that Sonos uses multicasting to “announce” its devices to the network; so as long as I could relay those multicast packets across the VLANs, I should be able to access the soundbar from my phone without issues!
I found multicast-relay by Al Smith on GitHub, which says it can be used to “relay broadcast and multicast packets between interfaces”… just what I needed!
My solution was to SSH into the UDM (this is disabled by default, but you can enable it in the web interface) and execute the following commands to download and run the multicast-relay.py Python script:
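A sketch of what those commands look like, assuming the script location and flags from the multicast-relay README (the exact download URL and the --foreground flag are my assumptions):

```shell
# Download the relay script from the GitHub repository (URL is an assumption)
curl -L -O https://raw.githubusercontent.com/alsmith/multicast-relay/master/multicast-relay.py

# Relay multicast/broadcast packets between the VLAN 20 and VLAN 40 bridges;
# --foreground keeps it attached to the terminal so it can be stopped with Ctrl+C
python3 multicast-relay.py --interfaces br20 br40 --foreground
```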
In the final part of the above commands you will notice “br20 br40”: those are my VLANs 20 and 40, so you will have to change them to match your own VLANs.
Once the script started, I could see it relaying the Sonos multicast data across VLANs just as I wanted, and when I opened up the Sonos Android app, it quickly found the device and allowed me to add it to the app!
Now the interesting bit: after adding the Sonos device to the app, I could simply stop the script (hit Ctrl+C) and then delete it, as the app keeps working just fine even without ever seeing those multicast packets again.
Though this might come as no surprise given the lack of content here lately, I’ve been mostly away from any type of Windows development work - not only on a personal level but also on a professional one.
The raw reality is that the work opportunities in Windows Development in the UK have mostly disappeared, and right now, there are no signs of this changing anytime soon!
For the past 2.5 years I’ve been working mostly with .NET Core 3.1, .NET 5, and .NET 6, building backend solutions running inside Docker containers or as Lambda functions, hosted in Microsoft Azure and/or Amazon Web Services - and I have learned a lot from people with more experience in this type of work than I had!
A couple of months ago I bought an Ender-3 V2 3D printer, which I have since heavily modified; that led me to learn more about CAD modeling, 3D printing, and electronics, and to start contributing to some related open-source projects - but I’m going to leave that for another post!
As I continue this new phase, I wanted to drop the “Windows Development” moniker on this website and replace it with a more appropriate “Programming and Tinkering”!
I’m a huge fan of Home Assistant and use it to automate most of the devices in my home, and for that matter, I follow a few people on Twitter and YouTube who share valuable information on it!
One of those is the BeardedTinker channel on YouTube, as he provides particularly good step-by-step explanations in his videos - if you are interested in the topic, I do recommend following his YouTube channel!
A few weeks ago he published a video on Smart Home Service monitoring with Home Assistant, where he showed how we can check that a website is working correctly by using curl on the webpage address and then checking for the 200 OK status code.
I wanted to improve on that by checking a webpage for changes and then creating automations on that!
There are a few ways to achieve this; I will be discussing two of them!
Using the ETag header
First, here is a quick explanation on what the ETag header is (more here):
The ETag HTTP response header is an identifier for a specific version of a resource. It lets caches be more efficient and save bandwidth, as a web server does not need to resend a full response if the content has not changed.
As all we want are the webpage headers, we can use a HEAD request instead of a GET, so here is how the whole thing will work:
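A sketch of such a sensor using Home Assistant's command_line integration (the Zigbee2MQTT URL and the exact python one-liner are my assumptions):

```yaml
# configuration.yaml — a command_line sensor whose state is the page's ETag header
sensor:
  - platform: command_line
    name: webpage_monitor
    # URL is an assumption; point it at the page you want to watch
    command: >-
      python3 -c "import urllib.request as u;
      print(u.urlopen(u.Request('http://192.168.1.2:8080/', method='HEAD')).headers.get('ETag'))"
    scan_interval: 300
```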
In the above example, we have a sensor called webpage_monitor that executes a python command to perform a HEAD request on the Zigbee2MQTT Devices webpage and returns the ETag header value.
Now all we need is an automation that will trigger when the state (value) of this sensor changes:
- alias: Show notification when webpage_monitor changes
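Fleshed out, such an automation can look like this (the persistent_notification action is my choice for illustration; any action works):

```yaml
automation:
  - alias: Show notification when webpage_monitor changes
    trigger:
      - platform: state
        entity_id: sensor.webpage_monitor
    action:
      - service: persistent_notification.create
        data:
          title: Webpage changed!
          message: "New ETag: {{ trigger.to_state.state }}"
```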
Using the page content
Not all webpages have an ETag header, and for those cases (or when the ETag somehow changes without the content actually changing), we can instead create a hash of the page content and use that in our automation!
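As a sketch of the idea, the page body can be reduced to a SHA-256 hash: identical content always yields the same hash, so the sensor state only changes when the content does (the URL in the comment is a placeholder):

```shell
# Hash a page body; in Home Assistant this would wrap a real fetch, e.g.:
#   curl -s 'http://192.168.1.2:8080/' | sha256sum | cut -d ' ' -f 1
page_hash() {
  printf '%s' "$1" | sha256sum | cut -d ' ' -f 1
}

hash_a=$(page_hash 'some page content')
hash_b=$(page_hash 'some page content')
hash_c=$(page_hash 'some page content, now changed')

echo "$hash_a"
echo "$hash_c"
```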
I like to use Docker containers for pretty much everything I do, so I wanted to see if I could have a Docker container connect to a WireGuard VPN Server, and then have other containers share that same connection.
Surprisingly, this is not only possible, but it is also amazingly easy to achieve!
We will be using the linuxserver/wireguard Docker image. This image works in either WireGuard server or client mode, but we will be using it just as a client.
For this post, I will focus on having the VPN connection isolated from the host system by using a custom bridge network.
We will also be using docker-compose to maintain the full Docker stack.
We will create a folder called “wireguard” that will store all the data from the container. Inside this folder we will place a file called “wg0.conf” that will hold the WireGuard connection settings.
Our final folder structure looks like this:
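Assuming the compose file sits next to the wireguard folder, the layout boils down to:

```
.
├── docker-compose.yaml
└── wireguard
    └── wg0.conf
```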
Getting a WireGuard VPN server
There are quite a few VPN providers out there that already offer WireGuard servers for you to connect to, so if you already have a VPN service subscription, you should probably check there first for WireGuard support!
I’ve been a happy customer of TorGuard for a few years now, and I was quite pleased to see them adding WireGuard support recently.
If you are considering registering for a TorGuard subscription plan, you can use this link and the promo code PL50P to get a lifetime discount of 50% off!
Disclaimer: neither TorGuard nor anyone else sponsored this post, but as I said I’ve been paying and using their products for quite a few years to the point I do recommend them. The link above is an affiliate link and does pay a small commission to me for anyone who does use it with the discount code.
Here is how you can generate the WireGuard connection settings in TorGuard:
Select one of the available servers on the “VPN Server Hostname/IP”
Enter your “VPN Username” and “VPN Password”
Click on “Generate Config”
The last step is to copy the “Config Output” contents to the “wg0.conf” file.
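For reference, a WireGuard client config generated this way follows the standard wg0.conf shape (all values below are placeholders, not real keys or addresses):

```ini
[Interface]
# Private key and address are issued by the provider
PrivateKey = <your-private-key>
Address = 10.13.128.2/24
DNS = 1.1.1.1

[Peer]
PublicKey = <server-public-key>
AllowedIPs = 0.0.0.0/0
# The server you picked in the "VPN Server Hostname/IP" dropdown
Endpoint = <server-hostname>:51820
PersistentKeepalive = 25
```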
Running WireGuard from Docker
Here is the basic “docker-compose.yaml” file to get the container running:
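A sketch of such a compose file, based on the linuxserver/wireguard documentation (the environment values and volume path are placeholders; the “backbone” network name comes from the next step):

```yaml
version: "3.8"

networks:
  backbone:
    driver: bridge

services:
  wireguard:
    image: linuxserver/wireguard
    container_name: wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - ./wireguard:/config
    networks:
      - backbone
    restart: unless-stopped
```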
If you read the Docker image documentation, you will see it requires some special capabilities to be enabled for it to work. You can see those in the cap_add and sysctls YAML nodes above.
We are now ready to start, so just enter docker-compose up -d to create the “backbone” bridge network, and create and start the “wireguard” container.
Testing and Validating
Run the following commands now:
curl -w "\n" ifconfig.me
docker exec -it wireguard curl -w "\n" ifconfig.me
The first command will retrieve your real Public IP, matching the one your ISP has provided you with.
The second command will do the same but from inside the WireGuard Docker container, and it should match the connected WireGuard VPN Server IP.
Sharing the connection with other containers
Under the services node of our “docker-compose.yaml” file, add the following service:
# under the existing "services:" node, add the following content
sleep 10 && curl -w "\n" ifconfig.me
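Expanded from the fragment above, the full service definition might look like this (the service name and the apt-get step are my guesses; the stock ubuntu image does not ship curl, so it has to be installed first):

```yaml
# under the existing "services:" node, add the following content
  curl:
    image: ubuntu
    depends_on:
      - wireguard
    # reuse the network stack of the "wireguard" service
    network_mode: service:wireguard
    command: >-
      bash -c "apt-get update -qq && apt-get install -y -qq curl &&
      sleep 10 && curl -w '\n' ifconfig.me"
```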
After saving the changes to the file, run docker-compose up -d to start this new container.
The above service will start a new Ubuntu Docker container after the WireGuard one, pause for 10 seconds, and then retrieve the Public IP address; if all goes well, this should match the WireGuard VPN Server IP.
The trick here is to use the network_mode: service:<service-name> to make the new container reuse another container’s network stack!
I have a DS412+ Synology NAS that’s been running continuously for a few years now!
It’s a great NAS, but it’s the extra features, like being able to easily run Docker containers, that make me like it even more!
I recently tried running an AdGuard Home Docker container and, unsurprisingly, it worked perfectly; after setting the NAS IP as the DNS server in my router, all my local DNS requests were sent to AdGuard Home and I could see it doing its job of blocking any known advertisement or tracking hosts.
There was however a small issue: the IP of the client machine was not showing up!
I found a similar issue had previously been opened on the Pi-hole GitHub repository, but no solution was provided.
As I couldn’t find any fix for this problem, I posted the issue to the Synology Community forums, but that didn’t provide a solution either…
At this stage, I tried comparing the Docker iptables on my Synology NAS with the ones in a Raspberry Pi, and that’s when I noticed that the Docker pre-routing rules were missing.
I then SSHed into the Synology NAS and manually added the missing rules:
sudo iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
sudo iptables -t nat -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
I could now see the client IPs and hostnames on AdGuard Home:
I was aware that this change to the Synology iptables was not a permanent one and would have to be done on every reboot, so the next step was to get a more permanent solution!
The simplest solution I found was to use the Synology Task Scheduler to run a user-defined script on every reboot.
You can check the script and instructions on how to use it here:
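In essence, the script just re-applies the two rules from earlier; a sketch of what goes into the Task Scheduler user-defined script (triggered on boot-up, running as root):

```shell
#!/bin/bash
# Re-add the Docker NAT rules that Synology DSM drops on reboot
iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
```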
Please be aware that these changes to the iptables will probably mess with the Synology Firewall, so just be careful with that if you do use the Firewall!
Also be aware that this comes with WOMM (“works on my machine”) certification: it works on my machine, but there are no guarantees it will work on yours!
On closer analysis of the workflow, I can see there’s a new Azure/static-web-apps-deploy action in use doing all the heavy lifting! Internally, this action uses the Oryx build system to identify the type of source code in the repo and compile it.
Once the workflow started and got to the Azure/static-web-apps-deploy action, I quickly came to realize that it wasn’t happy with the fact that my “package.json” file had "yarn": ">=1.22" on the "engines" node… the build image only had yarn 1.17 and so the build failed:
error email@example.com: The engine "yarn" is incompatible with this module. Expected version ">=1.22". Got "1.17.3"
error Found incompatible module.
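For context, the engines requirement in question sits in package.json like this:

```json
{
  "engines": {
    "yarn": ">=1.22"
  }
}
```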
At this point I edited my “package.json” file, lowered the yarn version requirement to “1.17”, and moved forward.
As expected, pushing this change caused a new workflow to start, but again, this failed:
/bin/staticsites/ss-oryx/app-int/node_modules/cwebp-bin/vendor/cwebp: error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory
This time, the problem was a lot more complicated to fix: the libGL.so.1 library, required by Gatsby to process the images, was missing from the Oryx Docker image!
At this stage I actually thought about giving up, and went to Twitter to vent my frustration about what I had just experienced:
I tried this yesterday with my own GatsbyJs website and unfortunately it failed... First it was Yarn version (had to lower my requirement), but once I fixed that it failed on some native component when processing the images. https://t.co/rnYuwtmYyj
After exchanging some messages, I followed up on a tip from Annaji Sharma Ganti: compile the source code before the Azure/static-web-apps-deploy action runs, and make the action point to the compiled static folder (the “public” folder in my case) - this way Oryx skips the compilation bit and goes directly to publishing the artifacts to Azure.
Finally, the workflow worked as expected and I could see my static GatsbyJs site hosted as an Azure Static Web App!
I then made a few more changes (like ensuring that GitHub would check out the submodule with the website images, and fixing some missing environment variables), added a custom domain, waited a few minutes for it to generate a valid SSL certificate, and… we are done!!