Podcasting 2.0 Phase 3 Tags https://techdistortion.com/articles/podcasting-2-0-phase-3-tags
I’ve been keeping a close eye on Podcasting 2.0 and a few weeks ago they finalised their Phase 3 tags. As I last wrote about this in December 2020, I thought I’d quickly update my thoughts on each of the Phase 3 tags:

  • <podcast:trailer> is a compact and more flexible version of the existing iTunes <itunes:episodeType>trailer</itunes:episodeType> tag. The Apple spec isn’t supported outside of Apple and, more importantly, only allows one trailer per podcast, whereas the PC2.0 tag allows multiple trailers, and trailers per season if desired. It’s also more economical than the Apple equivalent, since it acts like an enclosure tag rather than requiring an entire RSS item as the Apple spec does.
  • <podcast:license> specifies the licence terms of the podcast content, either by show or by episode, using the SPDX licence identifiers.
  • <podcast:alternateEnclosure> makes it possible to specify more than one audio/video enclosure for each episode, which you could use for different audio encoding bitrates, or for video if you want to.
  • <podcast:guid> Rather than using the Apple GUID guideline, PC2.0 suggests a UUIDv5 generated using the RSS feed URL as the seed value (a rough sketch of this follows below).
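Out of curiosity, here’s roughly what that UUIDv5 generation looks like in Python. This is only a sketch of my reading of the guideline: the namespace UUID below is the one I believe the Podcast Index namespace documentation publishes, and the exact URL trimming rules should be verified against the current spec before relying on any of it.

# Sketch only: derive a podcast:guid-style UUIDv5 from a feed URL.
# The namespace UUID is assumed from the Podcast Index docs -- verify it.
import re
import uuid

PODCAST_NAMESPACE = uuid.UUID("ead4c236-bf58-58c6-a2c6-a6b28d128cb6")

def podcast_guid(feed_url):
    # As I read the spec, the seed is the feed URL with the protocol
    # scheme and any trailing slash removed.
    trimmed = re.sub(r"^https?://", "", feed_url).rstrip("/")
    return str(uuid.uuid5(PODCAST_NAMESPACE, trimmed))

print(podcast_guid("https://engineered.network/index.xml"))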

In terms of TEN, I’m intending to add Trailer in future and I’m considering Licence as well, but beyond that probably not much else for the moment. I don’t see that GUID adds much for my use case over my existing setup (using the CDATA URL at time of publishing), and since my publicly available MP3s are already 64kbps mono, Alternate Enclosure for a low-bitrate option isn’t going to add any value to anyone in the world. I did consider linking to the YouTube videos of episodes where they exist, however I don’t see this as beneficial in my use case either. In future I could explore an IPFS-stored MP3 audio option for resiliency, however that would only make sense if IPFS became more widely supported by client applications.

It’s good to see things moving forward, and whilst I’m aware that the Value tag is being enhanced iteratively, I’m hopeful it can incorporate client value and extend the current Lightning keysend protocol options so that supporters can flag “who” the streamed sats came from (if they choose to). It’s true that customKey/customValue already exist, however they’re intentionally generic for the moment.

Of course, it’s a work in progress and it’s amazing that it works so well already, but I’m also aware that KeySend as it exists today might be deprecated by AMP (Atomic Multi-Path Payments), so there may be some tweaks yet to come.

It’s great to see the namespace incorporating more tags over time and I’m hopeful that more client applications can start supporting them as well in future.

Podcasting 2021-06-13T16:30:00+10:00 #TechDistortion
Pushover and PodPing from RSS https://techdistortion.com/articles/pushover-and-podping-from-rss
In my efforts to support the Podcasting 2.0 initiative, I thought I should see how easy it was to incorporate their new PodPing concept, which is effectively a distributed RSS notification system specifically tailored for podcasts. The idea is that when a new episode goes live you notify the PodPing server, it adds that notification to the distributed Hive blockchain, and any podcast app watching the blockchain can then trigger the download of the new episode.

This has come about predominantly from their attempts to leverage existing technology in WebSub; however, when I tried the WebSub angle a few months ago the results were very disappointing, with minutes or even hours passing before a notification was seen, and in some cases it wasn’t seen at all.

I leveraged parts of an existing Python script I’ve been using for years for my RSS social media poster, but stripped it down to the bare minimum. It consists of two files: checkfeeds.py (which just creates an instance of the RssChecker class) and rss.py, which contains the actual code.

The beauty of this approach is that it will work on ANY site’s RSS feed. Ideally, if you have a dynamic system you could trigger the GET request on an episode-posting event; however, since my sites are statically generated and the posts are created ahead of time (and hence don’t appear until a site rebuild happens after the post is set to go live), it’s problematic to create a trigger from the static site generator.

Whilst I’m an Electrical Engineer, I consider myself a software developer across many different languages and platforms, but in Python I see myself as more of a hacker and a slasher. Yes, there are better ways of doing this. Yes, I know already. Thanks in advance for keeping that to yourself.

Both are below for your interest/re-use or otherwise:

from rss import RssChecker

rssobject = RssChecker()

checkfeeds.py

CACHE_FILE = '<Cache File Here>'
CACHE_FILE_LENGTH = 10000
POPULATE_CACHE = 0
RSS_URLS = ["https://RSS FEED URL 1/index.xml", "https://RSS FEED URL 2/index.xml"]
TEST_MODE = 0
PUSHOVER_ENABLE = 0
PUSHOVER_USER_TOKEN = "<TOKEN HERE>"
PUSHOVER_API_TOKEN = "<TOKEN HERE>"
PODPING_ENABLE = 0
PODPING_AUTH_TOKEN = "<TOKEN HERE>"
PODPING_USER_AGENT = "<USER AGENT HERE>"

from collections import deque
import feedparser
import os
import os.path
import pycurl
import json
from io import BytesIO

class RssChecker():
    feedurl = ""

    def __init__(self):
        '''Initialise'''
        self.feedurl = RSS_URLS
        self.main()
        self.parse()
        self.close()

    def getdeque(self):
        '''return the deque'''
        return self.dbfeed

    def main(self):
        '''Main of the FeedCache class'''
        if os.path.exists(CACHE_FILE):
            with open(CACHE_FILE) as dbdsc:
                dbfromfile = dbdsc.readlines()
            dblist = [i.strip() for i in dbfromfile]
            self.dbfeed = deque(dblist, CACHE_FILE_LENGTH)
        else:
            self.dbfeed = deque([], CACHE_FILE_LENGTH)

    def append(self, rssid):
        '''Append a rss id to the cache'''
        self.dbfeed.append(rssid)

    def clear(self):
        '''Clear the cache'''
        self.dbfeed.clear()

    def close(self):
        '''Close the cache'''
        with open(CACHE_FILE, 'w') as dbdsc:
            dbdsc.writelines((''.join([i, os.linesep]) for i in self.dbfeed))

    def parse(self):
        '''Parse the Feed(s)'''
        if POPULATE_CACHE:
            self.clear()
        for currentfeedurl in self.feedurl:
            currentfeed = feedparser.parse(currentfeedurl)

            if POPULATE_CACHE:
                for thefeedentry in currentfeed.entries:
                    self.append(thefeedentry.get("guid", ""))
            else:
                for thefeedentry in currentfeed.entries:
                    if thefeedentry.get("guid", "") not in self.getdeque():
#                        print("Not Found in Cache: " + thefeedentry.get("title", ""))
                        if PUSHOVER_ENABLE:
                            crl = pycurl.Curl()
                            crl.setopt(crl.URL, 'https://api.pushover.net/1/messages.json')
                            crl.setopt(pycurl.HTTPHEADER, ['Content-Type: application/json' , 'Accept: application/json'])
                            data = json.dumps({"token": PUSHOVER_API_TOKEN, "user": PUSHOVER_USER_TOKEN, "title": "RSS Notifier", "message": thefeedentry.get("title", "") + " Now Live"})
                            crl.setopt(pycurl.POST, 1)
                            crl.setopt(pycurl.POSTFIELDS, data)
                            crl.perform()
                            crl.close()

                        if PODPING_ENABLE:
                            crl2 = pycurl.Curl()
                            crl2.setopt(crl2.URL, 'https://podping.cloud/?url=' + currentfeedurl)
                            crl2.setopt(pycurl.HTTPHEADER, ['Authorization: ' + PODPING_AUTH_TOKEN, 'User-Agent: ' + PODPING_USER_AGENT])
                            crl2.perform()
                            crl2.close()

                        if not TEST_MODE:
                            self.append(thefeedentry.get("guid", ""))

rss.py

The basic idea is:

  1. Create a cache file that keeps a list of all of the RSS entries you already have and are already live
  2. Connect up PushOver (if you want push notifications, or you could add your own if you like)
  3. Connect up PodPing (ask @dave@podcastindex.social or @brianoflondon@podcastindex.social for a posting API TOKEN)
  4. Set it up as a repeating task on your device of choice (preferably a server, but should work on a Synology, a Raspberry Pi or a VPS)

VPS

I built this initially on my Macbook Pro using the Homebrew-installed Python 3 development environment, then installed the same on a CentOS 7 VPS I have running as my origin web server. Assuming you already have Python 3 installed, I added the following so I could use pycurl:

yum install -y openssl-devel
yum install python3-devel
yum group install "Development Tools"
yum install libcurl-devel
python3 -m pip install wheel
python3 -m pip install --compile --install-option="--with-openssl" pycurl

Whether you like pycurl or not, there are obviously other options, but I stick with what works; rather than refactor for a different library I just jumped through some extra hoops to get pycurl running.
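For completeness, here’s a hedged sketch of what the same Pushover POST might look like using the requests library instead of pycurl. It assumes requests is installed (python3 -m pip install requests) and it isn’t part of the working script above.

# Alternative sketch using requests instead of pycurl for the Pushover call.
# Assumes: python3 -m pip install requests
import requests

def notify_pushover(api_token, user_token, title, message):
    response = requests.post(
        "https://api.pushover.net/1/messages.json",
        json={"token": api_token, "user": user_token,
              "title": title, "message": message},
        timeout=10,
    )
    response.raise_for_status()  # raise if Pushover rejected the request
    return response.json()

# The PodPing GET could be swapped over the same way with requests.get()
# and a headers={"Authorization": ..., "User-Agent": ...} argument.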

Finally I bridge the checkfeeds.py with a simple bash script wrapper and call it from a cron job every 10 minutes.

Job done.

Enjoy.

Technology 2021-05-25T08:00:00+10:00 #TechDistortion
Fun With Apple Podcasts Connect https://techdistortion.com/articles/fun-with-apple-podcasts-connect
Apple Podcasts subscriptions will shortly open to the public, but podcasters like me have been having fun with Apple’s first major update to their podcasting backend in several years, and it hasn’t really been that much fun. Before talking about why I’m putting so much time and effort into this at all, I’ll go through the highlights of my experiences to date.

Fun Times at the Podcasts Connect Mk2

Previously I’d used the Patreon/Breaker integration, but that fell apart when Breaker was acquired by Twitter, and the truth was that very, very few Patrons utilised the feature and the Breaker app was never big enough to attract any new subscribers. The Breaker audio integration and content has since been removed, even though the service was taken over (to an extent), as it was one less thing for me to upload content to. In a way… this has been a bit of déjà-vu and “here we go again…” 1

The back-catalogue of ad-free episodes as well as bonus content between Sleep, Pragmatic, Analytical and Causality adds up to 144 individual episodes.

For practically every one I had the original project files, which I restored and re-exported in WAV format, then uploaded via Apple Podcasts’ updated interface. (The format must be WAV or FLAC and stereo, which is funny for a mono podcast like mine, and added up to about 50GB of audio.) It’s straightforward enough, although there were a few annoying glitches that were still unresolved after 10 days of use. The key issues I encountered were as follows (there were others, but some were resolved at the time of writing so I’ve excluded those):

  1. Ratings and Reviews made a brief appearance then disappeared and still haven’t come back (I’m sure they will at some point)
  2. Not all show analytics time spans work (Past 60 days still doesn’t work, everything is blank)
  3. Archived shows appear in the Podcast drop-down list but not in the main overview, even when displaying ‘All’
  4. The order in which you save and upload audio files changes the episode date: if you create the episode metadata, set the date, then upload the audio, the episode date defaults to today’s date. It does this AFTER you leave the page, so it’s not obvious; but if you upload the audio THEN set the date, it’s fine.
  5. The audio upload hit/miss ratio for me was about 8 out of 10, meaning for every 10 episodes I uploaded, 2 got stuck. What do I mean? The episode WAV file uploads, completes and then the page shows the following:

Initial WAV Upload Attempt

…and the “Processing Audio” step never actually finishes. Hoping this was just a backlog issue from high user demand, I uploaded everything and came back minutes, hours, then days later; finally, after waiting five days, I set about trying to unstick it.

Can’t Publish! Five Days of Waiting and seeing this I gave up waiting for it to resolve itself…

The obvious thing to try: select “Edit”, delete, then re-upload the audio. Simple enough, and it keeps the metadata intact (except the date, which I had to re-save after every audio re-upload). Then I waited another few days. Same result. Okay, so that didn’t work at all.

Next thing to try: re-create the entire episode from scratch! So I did that for the 30 episodes that were stuck. Finally I saw this (in some cases up to an hour later):

Blitz

And sure enough…

Blitz

Of course, that only worked for 25 episodes out of the 30 I uploaded a second time. I then had to wash-rinse-repeat for the 5 that had failed for a second time and repeated until they all worked. I’d hate to think about doing this on a low-bandwidth connection like I had a decade ago. Even at 40Mbps up it took a long time for the 2GB+ episodes of Pragmatic. The entire exercise has probably taken me 4 work-days of effort end to end, or about 32 hours of my life. There’s no way to delete the stuck episodes either so I now have a small collection of “Archived” non-episodes. Oh well…

Why John…Why?

I’ve read a lot of differing opinions from podcasters about Apple’s latest move, and frankly I think the people most dismissive are those with significant existing revenue streams for their shows, or those that have already made their money and don’t need/want income from their show(s). Saying you can reduce fees by using Stripe with your own website integration, by using Memberful or Patreon, or more recently by streaming Satoshis (very cool BTW) glosses over the fact that all of these have barriers to entry for the podcast creator that can not be ignored.

For me, I’m a geek and I love that stuff so sure, I’ll have a crack at that (looks over at the Raspberry Pi Lightning Node on desk with a nod) but not everyone is like me (probably a good thing on balance).

So far as I can tell, Apple Podcasts is currently the most fee-expensive way for podcasters to get support from listeners. It’s also a walled garden2, but then so are Patreon, Spotify/Anchor (if you’re eligible, and I’m not… for now) and Breaker, while building your own system with Memberful or Stripe website integration requires developer chops most don’t have, so for many it isn’t an option. By far the easiest (once you figure out BitCoin/Lightning and set up your own Node) is actually streaming Sats, but that knowledge ramp is tough and lots of people HATE BitCoin. (That’s another, more controversial story.)

Apple Podcasts has one thing going for it: it’s going to be the quickest, easiest way for someone to support your show, coupled with the biggest audience in a single podcasting ecosystem. You can’t and shouldn’t ignore that, and that’s why I’m giving this a chance. The same risks apply to Apple as to all the other walled gardens (Patreon, Breaker, Spotify/Anchor etc.): you could be kicked off the platform; they could slowly stop supporting it, sell it off or shut it down entirely; and if any of that happens, your supporters will mostly disappear with it. That’s why no-one should rely on it as the sole pathway for support.

It’s about being present and reassessing after 6-12 months. If you’re not in it, you might miss out on supporters that love your work and want to support it, where this is the only way they’re comfortable doing that. So I’m giving this a shot; when it launches for beta testing I’ll be looking for any fans that want to give it a try so I can tweak anything that needs tweaking, and I’ll post publicly when it goes live for all. Hopefully all of my efforts (and Apple’s) are worth it for all concerned.

Time will tell. (It always does)


  1. Realistically, if every podcasting walled garden offers something like this (as Breaker did and Spotify is about to), then at some point podcasters have to draw a line of effort vs reward. Right now I’m uploading files to two places, and with Apple that will be a third. If I add Spotify, Facebook and Breaker then I’m up to triple my current effort to support 5 walled gardens. Eventually, if a platform isn’t popular then it’s not going to be worth that effort. Apple is worth considering because its platform is significant. The same won’t always be true for the “next walled garden”, whatever that may be. ↩︎

  2. To be crystal clear, I love walled gardens as in actual GARDENS, but I don’t mean those ones, I mean closed ecosystems aka ‘walled gardens’, before you say that. Actually no geek thought that, that’s just my sense of humour. Alas. ↩︎

Technology 2021-04-30T20:00:00+10:00 #TechDistortion
Causality Transcriptions https://techdistortion.com/articles/causality-transcriptions
Spurred on by Podcasting 2.0 and reflecting on my previous attempt at transcriptions, I thought it was time to have another crack at this. The initial attempts were basic TXT files that weren’t time-synced nor proofed, and used a very old version of Dragon Dictate I had laying around.

This time around my focus is on making Causality as good as it possibly can be. From the PC2.0 guidelines:

SRT: The SRT format was designed for video captions but provides a suitable solution for podcast transcripts. The SRT format contains medium-fidelity timestamps and are a popular export option from transcription services. SRT transcripts used for podcasts should adhere to the following specifications.

Properties:

  • Max number of lines: 2
  • Max characters per line: 32
  • Speaker names (optional): Start a new card when the speaker changes. Include the speaker’s name, followed by a colon.

This closely matches the defaults I found using Otter.ai, but that’s not free if you want time-synced SRT files. So my workflow uses YouTube (for something useful)…

STEPS:

  1. Upload the episode to YouTube as a video converted directly from the original public audio file (I use Ferrite to create a video export). Previously I was using LibSyn’s YouTube destination, which also works.
  2. Wait a while; it can take anywhere from a few minutes to a few hours. Then go to your YouTube Studio, pick an episode, open Video Details, and under the “Language, subtitles, and closed captions” section select “English by YouTube (automatic)”, the three vertical dots, then “Download” (see NOTE below). Alternatively select Subtitles, and next to DUPLICATE AND EDIT select the three dots, then Download, then .srt.
  3. If you can only get the SBV file: open it (untitled.sbv) in a raw text editor, select all, copy and paste it into DCMP’s website, click Convert, select all again, then create a new blank file (untitled.srt) and paste in the converted output. (Or convert it locally with a small script like the sketch below the note.)
  4. If you now have the SRT but don’t have the source video (e.g. if it was created by LibSyn automatically, I didn’t have a copy locally), download the converted YouTube video using the embed link for the episode via SaveFrom, or use a YouTube downloader if you prefer.
  5. Download the video in low-res and put everything into a single directory.
  6. I’m using Subtitle Studio; it’s not free, but it was the easiest for me to get my head around and it works for me. Open the SRT file just created/downloaded, then drag the video for the episode in question onto the new window.
  7. Visually skim and fix obvious errors before you press play (title case, ends of sentences, words for numbers, MY NAME!)
  8. Export the SRT file and add it to the website and RSS feed!

NOTE: In 1 case out of 46 uploads it thought I was speaking in Russian for some reason. The Russian auto-transcription was funny but not useful; for all the others it correctly transcribed the audio into English automatically, and the quality of the conversion is quite good.
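If you’d rather not round-trip through DCMP’s website for step 3, the SBV to SRT conversion is simple enough to do locally. This is just a hedged sketch based on the SBV files YouTube gave me (a timestamp pair like 0:00:00.599,0:00:04.160 on its own line, the caption text underneath, and a blank line between cards); check it against your own files before trusting it.

# Hedged sketch: convert a YouTube .sbv caption file to .srt locally.
# Usage: python3 sbv2srt.py untitled.sbv > untitled.srt
import re
import sys

TIME_RE = re.compile(r"^(\d+):(\d{2}):(\d{2})\.(\d{3}),(\d+):(\d{2}):(\d{2})\.(\d{3})$")

def to_srt_time(h, m, s, ms):
    # SBV uses H:MM:SS.mmm, SRT wants HH:MM:SS,mmm
    return "{:02d}:{}:{},{}".format(int(h), m, s, ms)

def sbv_to_srt(sbv_text):
    cards = []
    # Caption cards are separated by blank lines.
    for index, card in enumerate(re.split(r"\n\s*\n", sbv_text.strip()), start=1):
        lines = card.splitlines()
        match = TIME_RE.match(lines[0].strip())
        if not match:
            continue  # skip anything that isn't a recognisable caption card
        h1, m1, s1, ms1, h2, m2, s2, ms2 = match.groups()
        start, end = to_srt_time(h1, m1, s1, ms1), to_srt_time(h2, m2, s2, ms2)
        cards.append("{}\n{} --> {}\n{}".format(index, start, end, "\n".join(lines[1:])))
    return "\n\n".join(cards) + "\n"

if __name__ == "__main__":
    with open(sys.argv[1]) as sbv_file:
        sys.stdout.write(sbv_to_srt(sbv_file.read()))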

I’ve also flattened the SRT into a plain text file, which is useful for full text search. The process for that takes a few steps (a local alternative is sketched after the list):

  1. Upload the file to Happy Scribe and select “Text File” as the output format.
  2. Open the downloaded file in a text editor, select all the text, then go to ToolSlick’s line merge tool: paste the text into the Input Text box, click “Join Lines”, then select everything in the Output Joined Lines box and paste it over what you had in your local text file.
  3. Rename the file and add to the website and RSS Feed!
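Alternatively, if you want to skip the Happy Scribe and ToolSlick steps, the same flattening can be done locally with a few lines of Python. Again this is only a sketch: it strips the card numbers and timestamps out of a standard SRT and joins what’s left into a single line.

# Hedged sketch: flatten an .srt transcript into a single block of text.
# Usage: python3 srt2txt.py causality-01.srt > causality-01.txt
import sys

def srt_to_text(srt_text):
    kept = []
    for line in srt_text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue                  # blank separators between cards
        if stripped.isdigit():
            continue                  # card index numbers
        if "-->" in stripped:
            continue                  # timestamp lines
        kept.append(stripped)
    return " ".join(kept)             # join all caption text into one line

if __name__ == "__main__":
    with open(sys.argv[1]) as srt_file:
        print(srt_to_text(srt_file.read()))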

As of publishing I’ve only done the SRT and TXT versions for two episodes, but I will continue to churn my way through them as time permits until they’re all done.

Of course you could save yourself a bit of effort and use Otter, and save yourself even more effort by not proof-reading the automatically converted text. If I wasn’t so much of a stickler for detail I’d probably do that myself, but it’s that refusal to just accept that, that makes me the Engineer I am, I suppose.

Enjoy!

Podcasting 2021-03-30T06:00:00+10:00 #TechDistortion
Building A Synology Hugo Builder https://techdistortion.com/articles/building-a-synology-hugo-builder
I’ve been using GoHugo (Hugo) as a static site generator on all of my sites for about three years now and I love its speed and its flexibility. That said, a recent policy change at a VPS host had me reassessing my options, and now that I have my own Synology with Docker capability I was looking for a way to go ultra-slim and run my own builder, using a lightweight (read: VERY low spec) OpenVZ VPS as the Nginx front-end web server behind a CDN like CloudFlare. Previously I’d used Netlify, but their rebuild limitations on the free tier were getting a touch much.

I regularly create content that I want to release automatically in the future at a set time and date. To accomplish this, Hugo needs to rebuild the site periodically in the background so that when new pages are ready to go live they are automatically built and available to the world to see. When I’m debugging or writing articles I’ll run the local environment on my Macbook Pro, and only when I’m happy with the final result will I push to the Git repo. Hence I need a set-and-forget automatic build environment. I’ve done this on spare machines (of which I currently have none), on a beefier VPS using cron jobs and scripts, and on my Synology as a virtual machine using the same (which wasn’t reliable), before settling on this design.

Requirements

The VPS needed to be capable of serving Nginx from folders that are RSync’d from the Synology builder. I searched through LowEnd Stock looking for deals with 256MB of RAM and SSD storage at a cheap annual rate, and at the time got the “Special Mini Sailor OpenVZ SSD” for $6 USD/yr, which had that amount of RAM and 10GB of SSD space, running CentOS7. (Note: these have sold out, but there are plenty of others around that price range at time of writing.)

Setting up RSync, Nginx, SSH etc. is beyond the scope of this article; however, it is relatively straightforward. Some guides here might be helpful if you’re interested.

My sites are controlled via a Git workflow, which is quite common for managing static sites. In my case I’ve used GitHub, GitLab and most recently settled on the lightweight and solid Gitea, which I now also self-host on my Synology. Any of the above would work fine, but having the repos on the same device makes the Git clone very fast; you can adjust that step if you’re using an external hosting platform.

I also had three sites I wanted to build from the same platform. The requirements roughly were:

  • Must stay within Synology DSM Docker environment (no hacking, no portainer which means DroneCI is out)
  • Must use all self-hosted, owned docker/system environment
  • A single docker image to build multiple websites
  • Support error logging and notifications on build errors
  • Must be lightweight
  • Must be an updated/recent/current docker image of Hugo

The Docker Image And Folders

I struggled for a while with different images because I needed one that included RSync, Git and Hugo, and allowed me to modify the startup script. Some of the Hugo build Docker images out there are actually quite restricted to a set workflow, like running up the local server to serve from memory, or assume you have a single website. The XdevBase / HugoBuilder image was perfect for what I needed. Preinstalled it has:

  • rsync
  • git
  • Hugo (Obviously)

Search for “xdevbase” in the Docker Registry and you should find it. Select it and Download the latest - at time of writing it’s very lightweight only taking up 84MB.

XDevBase

After this, open “File Station” and start building the supporting folder structure you’ll need. I had three websites: TechDistortion, The Engineered Network and SlipApps, hence I created three folders. Firstly, under the Docker folder (which you should already have if you’ve played with Synology Docker before) create a sub-folder for Hugo - I imaginatively called mine “gohugo” - then under that I created a sub-folder for each site plus one for my logs.

Folders

Under each website folder I also created two more folders: “src” for the website source I’ll be checking out of Gitea, and “output” for the final publicly generated Hugo website output from the generator.

Scripts

I spent a fair amount of time perfecting the scripts below. The idea was to have an over-arching script that builds each site one after the other in a never-ending loop, with a mandatory wait time between loops. If you attempt to run independent Docker containers each on their own timer and any other task runs on the Synology, the two or three independently running containers will overlap, leading to an overload condition the Synology will not recover from. The only viable option is to serialise the builds, and synchronising those builds is easiest using a single container like I have.

Using the “Text Editor” on the Synology (or your text editor of choice, copying the files across to the correct folder afterwards), create a main build.sh file and as many build-xyz.sh files as you have sites you want to build.

#!/bin/sh
# Main build.sh

# Stash the current time and date in the log file and note the start of the docker
current_time=$(date)
echo "$current_time :: GoHugo Docker Startup" >> /root/logs/main-build-log.txt

while :
do
	current_time=$(date)
	echo "$current_time :: TEN Build Called" >> /root/logs/main-build-log.txt
	/root/build-ten.sh
	current_time=$(date)
	echo "$current_time :: TEN Build Complete, Sleeping" >> /root/logs/main-build-log.txt
	sleep 5m

	current_time=$(date)
	echo "$current_time :: TD Build Called" >> /root/logs/main-build-log.txt
	/root/build-td.sh
	current_time=$(date)
	echo "$current_time :: TD Build Complete, Sleeping" >> /root/logs/main-build-log.txt
	sleep 5m

	current_time=$(date)
	echo "$current_time :: SLIP Build Called" >> /root/logs/main-build-log.txt
	/root/build-slip.sh
	current_time=$(date)
	echo "$current_time :: SLIP Build Complete, Sleeping" >> /root/logs/main-build-log.txt
	sleep 5m
done

current_time=$(date)
echo "$current_time :: GoHugo Docker Build Loop Ungraceful Exit" >> /root/logs/main-build-log.txt
curl -s -F "token=xxxthisisatokenxxx" -F "user=xxxthisisauserxxx1" -F "title=Hugo Site Builds" -F "message=\"Ungraceful Exit from Build Loop\"" https://api.pushover.net/1/messages.json

# When debugging is handy to jump out into the Shell, but once it's working okay, comment this out:
#sh

This will create a main build log file and calls each sub-script in sequence. If it ever jumps out of the loop, I’ve set up a Pushover API notification to let me know.

Since all three sub-scripts are effectively identical except for the directories and repositories for each, The Engineered Network script follows:

#!/bin/sh

# BUILD The Engineered Network website: build-ten.sh
# Set Time Stamp of this build
current_time=$(date)
echo "$current_time :: TEN Build Started" >> /root/logs/ten-build-log.txt

rm -rf /ten/src/* /ten/src/.* 2> /dev/null
current_time=$(date)
if [[ -z "$(ls -A /ten/src)" ]];
then
	echo "$current_time :: Repository (TEN) successfully cleared." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Repository (TEN) not cleared." >> /root/logs/ten-build-log.txt
fi

# The following is easy since my Gitea repos are on the same device. You could also set this up to Clone from an external repo.
git --git-dir /ten/src/ clone /repos/engineered.git /ten/src/ --quiet
success=$?
current_time=$(date)
if [[ $success -eq 0 ]];
then
	echo "$current_time :: Repository (TEN) successfully cloned." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Repository (TEN) not cloned." >> /root/logs/ten-build-log.txt
fi

rm -rf /ten/output/* /ten/output/.* 2> /dev/null
current_time=$(date)
if [[ -z "$(ls -A /ten/output)" ]];
then
	echo "$current_time :: Site (TEN) successfully cleared." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Site (TEN) not cleared." >> /root/logs/ten-build-log.txt
fi

hugo -s /ten/src/ -d /ten/output/ -b "https://engineered.network" --quiet
success=$?
current_time=$(date)
if [[ $success -eq 0 ]];
then
	echo "$current_time :: Site (TEN) successfully generated." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Site (TEN) not generated." >> /root/logs/ten-build-log.txt
fi

rsync -arvz --quiet -e 'ssh -p 22' --delete /ten/output/ bobtheuser@myhostsailorvps:/var/www/html/engineered
success=$?
current_time=$(date)
if [[ $success -eq 0 ]];
then
	echo "$current_time :: Site (TEN) successfully synchronised." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Site (TEN) not synchronised." >> /root/logs/ten-build-log.txt
fi

current_time=$(date)
echo "$current_time :: TEN Build Ended" >> /root/logs/ten-build-log.txt

The above script can be broken down into several steps as follows:

  1. Clear the Hugo Source directory
  2. Pull the current released Source code from the Git repo
  3. Clear the Hugo Output directory
  4. Hugo generate the Output of the website
  5. RSync the output to the remote VPS

Each step has a pass/fail check and logs the result either way.

Your SSH Key

For this to work you need to confirm that RSync works and that you can push to the remote VPS securely. For that, extract the id_rsa key (preferably generate a fresh key-pair) and place it in the /docker/gohugo/ folder on the Synology, ready for the next step. As they say it should “just work”, but you can test whether it does once your docker is running. Open the GoHugo docker, go to the Terminal tab and Create–>Launch with command “sh”, then select the “sh” terminal window. In there enter:

ssh bobtheuser@myhostsailorvps -p22

That should log you in without a password, securely via ssh. Once it’s working you can exit that terminal and smile. If not, you’ll need to dig into the SSH keys which is beyond the scope of this article.

Gitea Repo

This is now specific to my use case. You could clone your repo from any other location, but for me it was quicker and simpler to map my repo from the Gitea Docker folder location. If you’re like me and running your own Gitea on the Synology, you’ll find that repo directory under the /docker/gitea sub-directories at …data/git/repositories/ and that’s it. Of course many won’t be doing that, but setting up external Git cloning isn’t too difficult; it’s just beyond the scope of this article.

Configuring The Docker Container

Under the Docker –> Image section, select the downloaded image then “Launch” it, set the Container Name to “gohugo” (or whatever name you want…doesn’t matter) then configure the Advanced Settings as follows:

  • Enable auto-restart: Checked
  • Volume: (See below)
  • Network: Leave it as bridge is fine
  • Port Settings: Since I’m using this as a builder I don’t care about web-server functionality so I left this at Auto and never use that feature
  • Links: Leave this empty
  • Environment –> Command: /root/build.sh (Really important to set this start-up command here and now, since thanks to Synology’s DSM Docker implementation, you can’t change this after the Docker container has been created without destroying and recreating the entire docker container!)

There are a lot of little things to add here to make this work for all the sites. In future, if you want to add more sites, stopping the Docker container, adding folders and modifying the scripts is straightforward.

Add the following Files: (Where xxx, yyy, zzz are the script names representing your sites we created above, aaa is your local repo folder name)

  • docker/gohugo/build-xxx.sh map to /root/build-xxx.sh (Read-Only)
  • docker/gohugo/build-yyy.sh map to /root/build-yyy.sh (Read-Only)
  • docker/gohugo/build-zzz.sh map to /root/build-zzz.sh (Read-Only)
  • docker/gohugo/build.sh map to /root/build.sh
  • docker/gohugo/id_rsa map to /root/.ssh/id_rsa (Read-Only)
  • docker/gitea/data/git/repositories/aaa map to /repos (Read-Only) Only for a locally hosted Gitea repo

Add the following Folders:

  • docker/gohugo/xxx/output map to /xxx/output
  • docker/gohugo/xxx/src map to /xxx/src
  • docker/gohugo/yyy/output map to /yyy/output
  • docker/gohugo/yyy/src map to /yyy/src
  • docker/gohugo/zzz/output map to /zzz/output
  • docker/gohugo/zzz/src map to /zzz/src
  • docker/gohugo/logs map to /root/logs

When finished and fully built the Volumes will look something like this:

Volumes

Apply the Advanced Settings then Next and select “Run this container after the wizard is finished” then Apply and away we go.

Of course, you can put whatever folder structure and naming you like, but I like keeping my abbreviations consistent and brief for easier coding and fault-finding. Feel free to use artistic licence as you please…

Away We Go!

At this point the Docker should now be periodically regenerating your Hugo websites like clockwork. I’ve had this setup running now for many weeks without a single hiccup and on rebooting it comes back to life and just picks up and runs without any issues.

As a final bonus you can also configure the Synology Web Server to point at each Output directory and double-check what’s being posted live if you want to.

Enjoy your automated Hugo build environment that you completely control :)

Hugo 2021-02-22T06:00:00+10:00 #TechDistortion
Building Your Own Bitcoin Lightning Node Part Two https://techdistortion.com/articles/build-your-own-bitcoin-lightning-node-part-two
Previously I’ve written about my Synology BitCoin Node failure and more recently about my RaspiBlitz build that was actually successful. Now I’d like to share how I set it up, with a few things I learned along the way that will hopefully help others avoid the mistakes I made.

Previously I suggested the following:

  • Set up the node to download a fresh copy of the BlockChain
  • Use an External IP, as it’s more compatible than TOR (unless you’re a privacy nut)

Beyond that here’s some more suggestions:

  • If you’re on a home network behind a standard Internet Modem/Router: change the Raspberry Pi to a fixed IP address and set up port forwarding for the services you need (TCP 9735 at a minimum for Lightning)
  • Don’t change the IP from DHCP to Fixed IP until you’ve first enabled and set up your Wireless connection as a backup
  • Sign up for DuckDNS before you add ANY Services (I tried FreeDNS but DuckDNS was the only one I found that supports Let’s Encrypt)

Let’s get started then…

WiFi First

Of course this is optional, but I think it’s worth having even if you’re not intending to pull the physical cable and shove the Pi in a drawer somewhere (please don’t though it will probably overheat if you did that). Go to the Terminal on the Pi and enter the following:

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

Then add the following to the bottom of the file:

network={
    ssid="My WiFi SSID Here"
    psk="My WiFi Password Here"
}

This is the short-summary version of the Pi instructions.

Once this is done you can reboot or enter this to restart the WiFi connection:

sudo wpa_cli -i wlan0 reconfigure

You can confirm it’s connected with:

iwgetid

You should now see:

wlan0     ESSID:"My WiFi SSID Here"

Fixed IP

The Raspberry Pi docs walk through what to change, but I’ll summarise it here. Firstly, if you have a router connecting you to the internet, it’s likely on one of the standard subnets with something like 192.168.1.1 as your gateway, but to be sure, from the Raspberry Pi terminal (after you’ve SSH’d in) type:

route -ne

It should come back with a routing table showing Destination 0.0.0.0 pointing to a Gateway, most likely something like 192.168.1.1, with Iface (Interface) eth0 for hardwired Ethernet and wlan0 for WiFi. Next type:

cat /etc/resolv.conf

This should list the nameservers you’re using - make a note of these in a text-editor if you like. Then edit your dhcpcd.conf. I use nano but you can use vi or any other linux editor of your choice:

sudo nano /etc/dhcpcd.conf

Add the following (or your equivalent) to the end of the conf: (Where xxx is your Fixed IP)

interface eth0
static ip_address=192.168.1.xxx
static routers=192.168.1.1
static domain_name_servers=192.168.1.1  fe80::9fe9:ecdf:fc7e:ad1f%eth0

Of course, when picking your fixed IP on the local network, make sure it sits outside your DHCP allocation range so it’s safe from conflicts. On my network I only allow DHCP between .20 and .254 of my subnet, but you can reserve addresses any way you prefer.

Once this is done reboot your Raspberry Pi and confirm you can connect via SSH at the Fixed IP. If you can’t, try the WiFi IP address and check your settings. If you still can’t, oh dear you’ll need to reflash your SD card and start over. (If that happens don’t worry, your Blockchain on the SSD will not be lost)

Dynamic DNS

If you’re like me, you’re running this on your home network on a “normal” internet plan behind an ISP that charges more for a fixed public IP, and hence you’ve got to deal with a dynamic public-facing IP address. #Alas

There are many Dynamic DNS services out there, but finding one that works reliably and automatically with Let’s Encrypt isn’t easy. Of course, if you’re not intending to use public-facing utilities that need a TLS certificate like I am (Sphinx), then you probably don’t need to worry about this step, or at least any Dynamic DNS provider would be fine. For me, I had to do this to get Sphinx to work properly.

DuckDNS allows you to sign in with credentials ranging from Persona, to Twitter, GitHub, Reddit and Google: pick whichever you have or whichever you prefer. Once logged in you can create a subdomain and add up to 5 in total. Take note of your Token and your subdomain.

In the RaspiBlitz menu go to SUBSCRIBE and select NEW2 (LetsEncrypt HTTPS Domain [free], not under Settings!), then enter the above information as requested. When it comes to the Update URL, leave it blank. The Blitz will reboot and hopefully everything should just work. When you’re done, the domain will appear at the top of the LCD on your Blitz.

You won’t know if your certificates were correctly issued until later, or if you want you can dive into the terminal again and manually check, but that’s your call.
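For reference, if you ever need to refresh the DuckDNS record manually (outside the RaspiBlitz subscription), the update API is just an HTTP GET. This is a hedged sketch based on DuckDNS’s documented URL format; the subdomain and token are placeholders for your own values.

# Hedged sketch: manually refresh a DuckDNS record via its HTTP update API.
# DOMAIN and TOKEN are placeholders; leaving 'ip' empty asks DuckDNS to use
# the public IP the request came from.
import urllib.request

DOMAIN = "your-subdomain"   # the bit before .duckdns.org
TOKEN = "<TOKEN HERE>"

def update_duckdns(domain, token):
    url = ("https://www.duckdns.org/update?domains={}&token={}&ip="
           .format(domain, token))
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode()   # DuckDNS replies "OK" or "KO"

if __name__ == "__main__":
    print(update_duckdns(DOMAIN, TOKEN))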

Port Forwarding Warning

Personally I only Port Forward the following that I believe is the minimum required to get the Node and Sphinx Relay working properly:

  • TCP 9735 (Lightning)
  • TCP 3300 & 3301 (Sphinx Relay)
  • TCP 8080 (Let’s Encrypt)

I think there’s an incremental risk in forwarding a lot of other services, particularly those that allow administration of your Node and Wallet. I also use an OpenVPN connection to my household network with a different endpoint, and I use the Web UIs and the Zap application on my iPhone for interacting with my Node. Even with a TLS certificate and password per application, I don’t think opening things wide open is a good idea. You may weigh that convenience differently, so make your own decisions in this regard.

Okay…now what?

As a podcaster and casual user of a Lightning Node, not everything in Settings and Services is of interest. I’ve enabled the following, which are important for use and monitoring:

  • (SETTINGS) LND Auto-Unlock
  • (SERVICES) Accept KeySend
  • (SERVICES) RTL Web interface
  • (SERVICES) ThunderHub
  • (SERVICES) BTC-RPC-Explorer
  • (SERVICES) Lightning Loop
  • (SERVICES) Sphinx-Relay

Each in turn…

LND Auto-Unlock

In Lightning’s LND implementation, the wallet with your coinage in it is automatically locked when you restart your system. If you’re comfortable auto-unlocking your wallet on reboot without explicitly entering your wallet password, this feature means recovery from a reboot/power failure etc. will be that little bit quicker and easier. That said, for privacy nuts, storing your wallet password on the device is probably not the best idea. I’ll let you balance convenience against security for yourself.

Accept KeySend

One of the more recent additions to the Lightning standard, in mid-2020, was KeySend. This feature allows anyone to send a spontaneous payment to any Node that supports it, from any Node that supports it, without the recipient having to issue an invoice first. With the Podcasting 2.0 model, the key is using KeySend to stream Sats to your nominated Node, either per minute listened or as one-off Boost payments showing appreciation on behalf of the listener. For me this was the whole point, but maybe some aren’t comfortable accepting payments from random people at random times of the day. Who can say?

RTL Web interface

The Ride The Lightning web interface is a basic but handy web UI for looking at your wallet and your channels, and for creating and receiving invoices. I enabled it because it was more lightweight than ThunderHub, but as I’ve learned more about BitCoin and Lightning I must confess I rarely use it now and prefer ThunderHub. It’s a great place to start though, and handy to have.

ThunderHub

By far the most detailed and extensive UI I’ve found yet for the LND implementation, ThunderHub allows everything that RTL’s UI does plus channel rebalancing, Statistics, Swaps and Reporting. It’s become my go to UI for interacting with my Node.

BTC-RPC-Explorer

I only recently added this because I was sick of going to internet-based web pages to look at information about BitCoin - things like the current leading block, pending transactions, fee targets, block times and lots and lots more. Having said all of that, it took about 9 hours to crunch through the blockchain and derive this information on my Pi, and it took up about 8% of my remaining storage for the privilege. You could probably live without it though, but if you’re really wanting to learn about the state of the BitCoin blockchain then this is very useful.

Lightning Loop

Looping payments in and out is handy to have and a welcome addition to the LND implementation. At a high level Looping allows you to send funds to/from users/services that aren’t Lightning enabled and reduces transaction fees by reusing Lightning channels. That said, maybe that’s another topic for another post.

Sphinx-Relay

The one I really wanted. The truth is that at the time of writing, the best implementation of streaming podcasts with Lightning integration is Sphinx.

Sphinx started out as a Chat application, but one that uses the distributed Lightning network to pass messages. The idea seems bizarre to start with but if you have a channel between two people you can send them a message attached to a Sat payment. The recipient can then send that same Sat back to you with their own message in response.

Of course you can add peer-to-peer fees if you want to, but that’s optional. If you want to chat with someone else on Sphinx, so long as they have a wallet on a Node running a Sphinx Relay, you can participate. Things get more interesting if you create a group chat, which Sphinx calls a “Tribe”, at which point posting on the channel requires “Staking” an amount for a “Time to Stake”, both set by the Tribe owner. If the poster posts something good, the time to stake elapses and the staked amount returns to the original poster. If the poster posts something inflammatory, the Tribe owner can delete that post and claim those funds.

This effectively puts a price on poor behaviour; conversely, poorly behaved owners that delete all posts will find themselves with an empty Tribe very quickly. It’s an interesting system for sure, and it has led to some well-moderated conversations in my experience thus far, even in controversial Tribes.

In mid/late 2020 Sphinx integrated podcasts into the Tribe functionality. Hence I can create a Tribe, link a single podcast RSS feed to that Tribe, and anyone listening to an episode in the Sphinx app and that Tribe will automatically stream Sats to the RSS feed’s nominated Lightning Node. The “Value Slider” defaults to the streaming Sats suggested in the RSS feed, however this can be adjusted by the listener on a sliding bar all the way down to 0 if they wish - it’s opt-in. The player itself is basic but works well enough, with skip forwards and backwards as well as speed adjustment.

Additionally, Sphinx has apps available for iOS (TestFlight beta), Android (sideload, Android 7.0 and higher) and desktop OSs including Windows, Linux and MacOS as well. Most functions exist in all the apps, however I find myself sometimes going back to the iOS app to send/receive Sats to my Wallet/Node, which isn’t currently implemented in the MacOS version. (Not since I started my own Node however.) You can of course host a Node on Sphinx for a monthly fee if you prefer, but this article is about owning your own Node.

One Last Thing: Inbound Liquidity

The only part of this equation that’s a bit odd (or was for me at the beginning) is understanding liquidity. I mentioned it briefly here, but in short, when you open a channel with someone the funds are on your own side, meaning you have outbound liquidity: I can spend Lightning/BitCoin on things in the network. That’s fine, no issue. The problem is that when you’re a podcaster you want to receive payments as streaming Sats, and without inbound liquidity you can’t do that.

The simplest way to build it is to ask, really, really nicely for an existing Lightning user to open a channel with you. Fortunately my Podcasting 2.0 acquaintance Dave Jones was kind enough to open a channel for 100k Sats to my node, thus allowing inbound liquidity for testing and setting up.

In current terms, 100k Sats isn’t a huge channel, but it’s more than enough to get anyone started. There are other ways I’ve seen, including pushing funds to the channel partner when the channel is created (at a cost), but that’s something I need to learn more about before venturing more thoughts on it.

That’s it

That’s pretty much it. If you’re a podcaster and you’ve made it this far you now have your own Node, you’ve added your Value tag to your RSS feed with your new Node ID, you’ve set up Sphinx Relay and your own Tribe and with Inbound Liquidity you’re now having Sats streamed to you by your fans and loyal listeners!

Many thanks to Podcasting 2.0, Sphinx, RaspiBlitz, DuckDNS and both Adam Curry and Dave Jones for inspiration and guidance.

Please consider supporting each of these projects and groups as they are working in the open to provide a better podcasting future for everyone.

Podcasting 2021-02-16T06:00:00+10:00 #TechDistortion
Building Your Own Bitcoin Lightning Node https://techdistortion.com/articles/build-your-own-bitcoin-lightning-node
After my previous attempts to build my own node to take control of my slowly growing podcast streaming income didn’t go so well, I decided to bite the bullet and build my own Lightning Node with new hardware. The criteria were:

  1. Minimise expenditure and transaction fees (host my own node)
  2. Must be always connected (via home internet is fine)
  3. Use low-cost hardware and open-source software with minimal command-line work

Because of the above, I couldn’t use my Macbook Pro since that comes with me regularly when I leave the house. I tried to use my Synology, but that didn’t work out. The next best option was a Raspberry Pi, and two of the most popular options out there are the RaspiBolt and RaspiBlitz. Note: Umbrel is coming along but not quite as far as the other two.

The Blitz was my choice as it seems to be more popular and I could build it easily enough myself. The GitHub Repo is very detailed and extremely helpful. This article is not intended to just repeat those instructions, but rather describe my own experiences in building my own Blitz.

Parts

The GitHub instructions suggest Amazon links, but in Australia Amazon isn’t what it is in the States or even Europe. So instead I sourced the parts from a local importer of Raspberry Pi parts. I picked from the “Standard” list:

Core Electronics

  • $92.50 / Raspberry Pi 4 Model B 4GB
  • $16.45 / Raspberry Pi 4 Power Supply (Official) - USB-C 5.1V 15.3W (White)
  • $23.50 / Aluminium Heatsink Case for Raspberry Pi 4 Black (Passive Cooling, Silent)
  • $34.65 / Waveshare 3.5inch LCD 480x320 (The LCD referred to was a 3.5" RPi Display, GPIO connection, XPT2046 Touch Controller but they had either no stock on Amazon or wouldn’t ship to Australia)

Blitz All the parts from Core Electronics

UMart

  • $14 / Samsung 32GB Micro SDHC Evo Plus W90MB Class 10 with SD Adapter

On Hand

Admittedly a 1TB SSD and case would’ve cost an additional $160 AUD. In future I will probably extend to a fully future-proof 2TB SSD, since at this point the Bitcoin blockchain uses about 82% of the drive I had on hand, so a bigger SSD is on the cards for me in the next 6-9 months for sure.

Total cost: $181.10 AUD (about $139 USD or 300k Sats at time of writing)

Blitz The WaveShare LCD Front View

Blitz The WaveShare LCD Rear View

Assembly

The power supply is simple: unwrap, plug into the USB-C power port and done. The heatsink comes with some different-sized thermal pads to sandwich between the heatsink and the key components on the Pi motherboard, and four screws to clamp the two pieces together around the motherboard. Finally, line up the screen with the outer-most pins on the I/O header and gently press them together. The screen won’t sit flat against the heatsink/case, but it doesn’t have to in order to connect well.

Blitz The Power Supply

Blitz The HeatSink

Blitz The Raspberry Pi 4B Motherboard

Burning the Image

I downloaded the boot image from the GitHub repo and used Balena Etcher on my Macbook Pro to write it to the SD card. Afterwards, insert the card into the Raspberry Pi, connect the SSD to the motherboard-side USB 3.0 port, connect an Ethernet cable and then power it up!

Installing the System

If everything is hooked up correctly (and there’s a router/DHCP server on the hardwired Ethernet you just connected it to), the screen should light up with the DHCP-allocated IP address you can reach it on, along with instructions on how to SSH in via the terminal, like “ssh admin@192.168.1.121” or similar. Open up Terminal, enter that, and you’ll get a nice neat blue screen with the same information on it. From here everything is done via the menu installer.

If you get kicked out of that interface just enter ‘raspiblitz’ and it will restart the menu.

Getting the Order Right

  1. Pick Your Poison: I chose BitCoin and Lightning, which is the default; there are other crypto-currencies if that’s your preference. Then set your passwords, and please use a password manager and passwords of at least 32 characters - make it as secure as you can from day one!
  2. TOR vs Public IP: Some privacy nuts run behind TOR to obscure their identity and location. I’ve done both and can tell you that TOR takes a lot longer to sync and access, kills off a lot of apps, and makes opening channels to some other nodes and services difficult or impossible. For me, I just wanted a working node that was as interoperable as possible, so I chose Public IP.
  3. Let the Blockchain Sync: Once your SSD is formatted, if you have the patience then I recommend syncing the blockchain from scratch. I already had a copy of it that I SCP’d across from my Synology and it saved me about 36 hours, but it also caused my installer to exit ungracefully and it took me another day of messing with the command line to get it to start again and complete the installation. In retrospect, not a time saver end to end, but your mileage may vary.
  4. Set up a New Node: Or in my case, recover an old node at this point by copying the channel.backup over; for most others it’s a new Node and a new Wallet, and for goodness sake when you make a new wallet, KEEP A COPY OF YOUR SEED WORDS!!!
  5. Let Lightning “Sync”: Technically it’s actually validating blocks, but this also takes a while. For me it took nearly 6 hours for both the Lightning and Bitcoin blocks to sync.

Blitz The Final Assembled Node up and Running

My Money from Attempt 2 on the Synology Recovered!

I was able to copy the channel.backup and wallet.dat files from the Synology and was able to successfully recover my $60 AUD investment from my previous attempts, so that’s good! (And it worked pretty easily actually)

To prevent any loss of the wallet, I’ve also added a USB 3.0 thumb drive to the other USB 3.0 port and set up “Static Channel Backup on USB Drive”, which required a quick format to EXT4 but worked without any real drama.

Conclusion

Building the node using a salvaged SSD cost under $200 AUD and took about 2 days to sync and set up. Installing the software and setting up all the services is another story for another post, but it’s working great!

Podcasting 2021-02-12T06:00:00+10:00 #TechDistortion
BitCoin, Lightning and Patience https://techdistortion.com/articles/bitcoin-lightning-and-patience
I’ve been vaguely aware of BitCoin for a decade but never really dug into it until recently, as a direct result of my interest in the Podcasting 2.0 team.

My goals were:

  1. Minimise expenditure and transaction fees
  2. Use existing hardware and open-source software
  3. Setup a functional lightning node to both make and accept payments

I’m the proud owner of a Synology, it can run Docker, and you can run BitCoin and Lightning in Docker containers… so this should be easy enough, right?

BitCoin Node Take 1

I set up the kylemanna/bitcoind docker container on my Synology and started it syncing the Mainnet blockchain. About a week later I was sitting at 18% complete, averaging 1.5% per day and dropping. Reading up on this, the problem was two-fold: validating the blockchain is both a CPU-intensive and an HDD/SSD-intensive task, and my Synology was lacking on both counts. I threw more RAM at it (3GB out of the 4GB it had) with no difference in performance, set the CPU restrictions to give the container the most performance possible with no difference, and basically ran out of levers to pull.

I then learned it’s possible to copy a blockchain from one device to another, and that the Raspberry Pis sold as ready-made private nodes come with the blockchain pre-synced (up to the point they’re shipped) so they don’t take too long to catch up to the front of the chain. So I downloaded BitCoin Core for MacOS and set it running. After two days it had finished (much better) and I copied the directories to the Synology, only to find that BitCoin Core’s settings were set to “prune” the blockchain after validation, meaning the entire blockchain was no longer stored on my Mac and the docker container would need to start over.

Ugh.

So I disabled pruning on the Mac and started again. The blockchain was about 300GB (so I was told) and with the 512GB SSD in my MBP I thought that would be enough, but alas no: as the amount of free space diminished at a rapid rate of knots, I madly off-loaded and deleted what I could, finishing with about 2GB to spare; the entire blockchain and associated files weighed in at 367GB.

Transferring them to the Synology and firing up the Docker…it worked! Although it had to revalidate the 6 most recent blocks (taking about 26 minutes EVERY time you restarted the BitCoin docker) it sprang to life nicely. I had a BitCoin node up and running!

Lightning Node Take 1

There are several docker containers to choose from, the two most popular seemed to be LND and c-Lightning. Without understanding the differences I went with the container that was said to be more lightweight and work better on a Synology: c-Lightning.

Later I discovered that many plugins, applications, GUIs and relays (Sphinx for example) only work with LND and require LND Macaroons, which c-Lightning doesn’t support. Not only that, the c-Lightning developers’ design decision to only permit single connections between nodes makes building liquidity problematic when you’re starting out. (More on that in another post someday…)

After messing around with RPC for the cLightning docker to communicate with the KyleManna Bitcoind docker, I realised that I needed ZMQ support, since RPC Username and Password authentication was being phased out in favour of token authentication through a shared folder.

UGH

I was so frustrated at losing 26 minutes every time I had to change a single setting in the Bitcoin docker, and in an incident overnight both dockers crashed, didn’t restart and then took over a day to catch up to the blockchain again. I had decided more or less at this point to give up on it.

SSD or don’t bother

Interestingly my oldest son pointed out that all of the kits for sale used SSDs for the Bitcoin data storage - even the cheapest versions. A bit more research and it turns out that crunching through the blockchain is less of a CPU intensive exercise and more of a data store read/write intensive exercise. I had a 512GB Samsung USB 3.0 SSD laying around and in a fit of insanity decided to try connecting it to the Synology’s rear port, shift the entire contents of the docker shared folders (that contained all of the blocks and indexes) to that SSD and try it again.

Oh My God it was like night and day.

Both docker containers started, synced and were running in minutes. Suddenly I was interested again!

Bitcoin Node Take 2

With renewed interest I returned to my previous headache - linking the docker containers properly. The LNCM/Bitcoind docker had precompiled support for ZMQ and it was surprisingly easy to set up the docker shared file to expose the token I needed for authentication with the cLightning docker image. It started up referencing the same docker folder (now mounted on the SSD) and honestly, seemed to “just work” straight up. So far so good.

Lightning Node Take 2

This time I went for the more widely supported LND, picked a container by Guggero that was quite popular, and also spun it up rather quickly. My funds on my old cLightning node would simply have to remain trapped until I could figure out how to recover them in future.

Host-Network

The instructions I had read all related to TestNet, and advised not to use money you weren’t prepared to lose. I set myself a starting budget of $40 AUD and tried to make this work. Using the Breez app on iOS and their integration with MoonPay I managed to convert that into about 110k Sats. The next problem was getting them from Breez to my own Node, and my attempts via Lightning failed with “no route.” (I learned later I needed channels…d’uh) Sending via BitCoin, “on-chain” as they call it, was the only option. This cost me a lot of Sats, but I finally had some Sats on my Node.

Satoshis

BitCoin has a few quirky little problems. One interesting one is that a single BitCoin is worth a LOT of money - currently 1 BTC = $62,000.00 AUD. So it’s not a practical unit, and hence BitCoin is more commonly referred to in Satoshis, which are 1/100,000,000th of a BitCoin. BitCoin is a crypto-currency transacted on the BitCoin blockchain via the BitCoin network. Lightning is a Layer 2 network that also deals in BitCoin but in smaller amounts, peer to peer, connected via channels, and because the values are much smaller it is regularly transacted in Satoshis.
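To make the unit concrete, here’s a quick conversion sketch in Python; it’s just arithmetic on the figures quoted above, and the exchange rate moves constantly, so treat the output as illustrative only:

SATS_PER_BTC = 100_000_000          # 1 BitCoin = 100 million Satoshis
AUD_PER_BTC = 62_000.00             # the rate quoted above; it changes constantly

def sats_to_aud(sats: int) -> float:
    # Convert Satoshis to Australian dollars at the quoted rate
    return sats / SATS_PER_BTC * AUD_PER_BTC

def aud_to_sats(aud: float) -> int:
    # Convert Australian dollars to whole Satoshis at the quoted rate
    return round(aud / AUD_PER_BTC * SATS_PER_BTC)

print(f"20,000 sats is about ${sats_to_aud(20_000):.2f} AUD")   # ~ $12.40
print(f"$40 AUD is about {aud_to_sats(40):,} sats")             # ~ 64,516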

Everything you do requires Satoshis (SATS). It costs SATS to fund a channel. It costs SATS to close a channel. I couldn’t find a way to determine the minimum amount of Sats needed to open a channel without first attempting to open one via the command line. I only had a limited number of SATs to play with, so I had to choose carefully. Most channels wanted 10,000 or 20,000 but I managed to find a few that only required 1,000. The initial thought was to open as many channels as you could, then make some transactions, and your inbound liquidity would improve as others in the network transact.
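As a rough sketch of that budgeting exercise: the channel minimums below are the figures just mentioned, while the on-chain fee per open is a placeholder assumption, not a measured value.

budget_sats = 110_000                       # roughly what made it onto the node
channel_minimums = [20_000, 10_000, 10_000, 1_000, 1_000, 1_000]
assumed_fee_per_open = 500                  # placeholder assumption; varies with the mempool

opened, remaining = [], budget_sats
for minimum in channel_minimums:
    cost = minimum + assumed_fee_per_open
    if cost <= remaining:                   # only open what the budget allows
        opened.append(minimum)
        remaining -= cost

print(f"Opened {len(opened)} channels, {remaining:,} sats left over")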

Services exist to help build that inbound liquidity, without which, you can’t accept payments from anyone else. Another story for a future post.

Anything On-Chain Is Slow and Expensive

For a technology that’s supposed to be reducing fees overall, Lightning seems to cost you a bit up-front to get into it, and anytime you want to shuffle things around, it costs SATS. I initially bought into it wishing to fund my own node and try for that oft-touted “self-sovereignty” of BitCoin, but to achieve that you have to invest some money to get started. In the end, however, I hadn’t invested enough, because the channels I opened didn’t allow inbound payments.

I asked some people to open some channels to me and give me some inbound liquidity however not a single one of them successfully opened. My BitCoin and Lightning experiment had ground to a halt, once again.

At first I experimented with TOR, then with publishing on an external IP address, port-forwarding to expose the Lightning external access port 9735 to allow incoming connections. Research into why the opens failed suggested that I needed to recreate my dockers, connect them to a custom Docker network and then resync the containers, otherwise the channel open attempts would continue to fail.

I did that and it still didn’t work.

Then I stumbled across the next idea: you needed to modify the Synology Docker DSM implementation to allow direct mounting of the Docker images without them being forced through a Double-NAT. Doing so was likely to impact my other, otherwise perfectly happily running Dockers.

UGH

That’s it.

I’m out.

Playing with BitCoin today feels like programming COBOL for a bank in the 80s

Did you know that, as of 2017, COBOL was behind nearly half of all financial transactions? Yes, and the world is gradually ripping it out (thankfully).

IDENTIFICATION DIVISION.
   PROGRAM-ID. CONDITIONALS.
   DATA DIVISION.
     WORKING-STORAGE SECTION.
     *> I'm not joking, Lightning-cli and Bitcoin-cli make me think I'm programming for a bank
     01 SATS-JOHN-HAS PIC 9(8) VALUE ZERO.
   PROCEDURE DIVISION.
     MOVE 20000 TO SATS-JOHN-HAS.
     IF SATS-JOHN-HAS > 0 THEN
       DISPLAY 'YAY I HAZ 20000 SATS!'
     END-IF
     *> I'd like to make all of my transactions using the command line, just like when I do normal banking...oh wait...
     EVALUATE TRUE
       WHEN SATS-JOHN-HAS = 0
         DISPLAY 'NO MORE SATS NOW :('
     END-EVALUATE.
   STOP RUN.

There is no doubt there’s a bit of geek-elitism amongst many of the people involved with BitCoin. Comments like “Don’t use a GUI, to understand it you MUST use the command line…” remind me of those that whined about the Macintosh in 1984 having a GUI. A “real” computer used DOS. OMFG seriously?

A real financial system is as painless for the user as possible. Unbeknownst to me, I’d chosen a method that was perhaps the least advisable: the wrong hardware, running the wrong software, running a less-compatible set of dockers. My conclusion was that setting up your own Node that you control is not easy.

It’s not intuitive either, and it will make you think about things like inbound liquidity that you never thought you’d need to know about, since you’re a geek - not an investment banker. I suppose the point is that owning your own bank means you have to learn a bit about how a bank needs to work, and that takes time and effort.

If you’re happy to just pay someone else to build and operate a node for you then that’s fine, and that’s just what you’re doing today with any bank account. I spent weeks learning just how much I don’t want to be my own bank - thank you very much - or at least not using the equipment I had lying about and living in the Terminal.

Synology as a Node Verdict

Docker was not reliable enough either. In some instances I would modify a single docker’s configuration file and restart the container, only to get “Docker API failed”. Sometimes I could recover by picking the container I thought had caused the failure (most likely the one I’d modified, but not always), clearing it and restarting it.

Other times I had to completely reboot the Synology to recover it, and sometimes I had to do both for Docker to restart. Every restart of the Bitcoin container cost another half an hour, and then the container would “go backwards” to 250 blocks behind, taking a further 24-48 hours of resynchronising with the blockchain before the Lightning container could then resynchronise with it. All the while the node was offline.

If your Synology is running SSDs, has at least 8GB of RAM, is relatively new and you don’t mind hacking your DSM Docker installation, you could probably make it work, but it’s horses for courses in the end. If you have an old PC laying about then use that. If you have RAM and SSD on your NAS then build a VM rather than use Docker, maybe. Or better yet, get a Raspberry Pi and have a dedicated little PC that can do the work.

Don’t Do What I Did

Money is Gone

The truth is, in an attempt to get incoming channel opens working, I flicked between Bridge and Host networking and back again, opened different ports amid Socks failed errors, and finally gave up when, after many hours, the LND docker just wouldn’t connect via ZMQ any more.

And with that my $100 AUD investment is now stuck between two virtual wallets.

I will keep trying and report back but at this point my intention is to invest in a Raspberry Pi to run my own Node. I’ll let you know how that goes in due course.

]]>
Podcasting 2021-02-01T12:30:00+10:00 #TechDistortion
Podcasting 2.0 Addendum https://techdistortion.com/articles/podcasting-2-0-addendum https://techdistortion.com/articles/podcasting-2-0-addendum Podcasting 2.0 Addendum I recently wrote about Podcasting 2.0 and thought I should add a further amendment regarding their goals. I previously wrote:

To solve the problems above there are a few key angles being tackled: Search, Discoverability and Monetisation.

I’d like to add a fourth key angle to that, one which I didn’t think at the time should be listed as its own. However, having listened more to Episodes 16 and 17 and their intention to add XML tags for IRC/Chat Room integration, I think I should add the fourth key angle: Interactivity.

Interactivity

The problem with broadcast historically is that audience participation is difficult given the tools and effort required. Pick up the phone, make a call, you need a big incentive (think cash prizes, competitions, discounts, something!) or audiences just don’t participate. It’s less personal and with less of a personal connection the desire for listeners to connect is much less.

Podcasting, as an internet-first application and being far more personal, sets the bar differently, and we can think of real-time feedback methods as either verbal, via a dial-in/patch-through to the live show, or written, via messaging like a chat room. There are also non-real-time methods, predominantly via webforms and EMail. With contact EMails already in the RSS XML specification, adding a webform submission entry might be of some use (perhaps < podcast:contactform > with a url="https://contact.me/form"), but real-time is far more interesting.

Real Time Interactivity

Initially in podcasting (like so many internet-first technology applications), geeks that understood how it worked led the way. That is to say, with podcasts originally there was a way for a percentage of the listeners to use IRC as a Chat Room (Pragmatic did this for the better part of a year in 2014, as did other far more popular shows like ATP, Back To Work etc.) to get real-time listener interaction during a podcast recording, albeit with a slight delay between the audio going out and listener responses in the chat room.

YouTube introduced live streaming and live chat with playback that integrated the chat room with the video content to lower the barrier of entry for their platform. For equivalent podcast functionality to go beyond the geek-% of the podcast listeners, podcast clients will need to do the same. In order for podcast clients to be pressured to support it, standardisation of the XML tags and backend infrastructure is a must.

The problem with interactivity is that whilst it starts with the tag, it must end with the client applications otherwise only the geek-% of listeners will use it as they do now.

From my own experiences with live chat rooms during my own and other podcasts, the proportion of people able to tune in to a live show and be present (lots of people just “sit” in a channel and aren’t actually present) is about 1-2% of your overall downloads, and that’s for a technical podcast with a high geek-%. I’ve also found there are timezone effects, such that the time of day or night you podcast live directly impacts those percentages even further (it’s 1am somewhere in the world right now, so if your listeners live in that timezone chances are they won’t be listening live).

The final concern is that chat rooms only work for a certain kind of podcast. For me, it could only potentially work with Pragmatic and in my experience I wanted Pragmatic to be focussed and chat rooms turned out to be a huge distraction. Over and again my listeners reiterated that one of the main attractions of podcasts was their ability to time-shift and listen to them when they wanted to listen to them. Being live to them was a minus not a plus.

For these reasons I don’t see that this kind of interactivity will uplift the podcasting ecosystem for the vast majority of podcasters, though it’s certainly nice to have and attempt to standardise.

Crowd-sourced Chapters

Previously I wrote:

The interesting opportunity that Adam puts forward with chapters is he wants the audience to be able to participate with crowd-sourced chapters as a new vector of audience participation and interaction with podcast creators.

Whilst I looked at this last time from a practical standpoint of “how would I as a podcaster use this?”, concluding that I wouldn’t use it since I’m a self-confessed control-freak, I didn’t fully appreciate the angle of audience interaction. I think for podcasts that have a truly significant audience, with listeners that really want to help out (but can’t help financially), this feature provides a potential avenue to assist in a non-financial aspect, which is a great idea.

Crowd-source Everything?

(Except recording the show!)

From pre-production to post-production, any task in the podcast creation chain could be outsourced to an extent. The pre-production dilemma could be addressed with a feed-level XML tag like < podcast:proposedtopics > pointing to a planned topic list (popular podcasts currently use Twitter #Tags like #AskTheBobbyMcBobShow), to cut centralised platforms like Twitter out of the creation chain in the long term. Again, this is only useful for certain kinds of shows, but it could also include a URL link to a shared document (probably a JSON file) and an episode index reference (i.e. the currently released episode is 85, these are proposed topics for Episode 86; it could also be an array covering multiple episodes).
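Purely to make that idea concrete, here’s one possible shape for such a shared topics document, sketched in Python; the tag above and every field name below are hypothetical, as nothing like this exists in the namespace today:

import json

# Entirely hypothetical field names: this only illustrates the shared JSON
# document idea described above; it is not part of any real namespace.
proposed_topics = {
    "currentEpisode": 85,
    "proposals": [
        {
            "episode": 86,
            "topics": [
                {"title": "Listener question on topic X", "suggestedBy": "listener@example.com"},
                {"title": "Follow-up from Episode 84"},
            ],
        },
    ],
}

print(json.dumps(proposed_topics, indent=2))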

The post-production dilemma generally consists of show notes, chapters (solution in progress) and audio editing. Perhaps a similar system to crowd-sourced chapters could be used for show notes that could include useful/relevant links for the current episode that aren’t/can’t be easily embedded as Chapter markers.

In either case there’s no reason why it couldn’t work the same way as crowd-sourced chapter markers. The podcaster could also have (with sufficient privileges) the administrative access to add, modify or remove content from either of these, with guests also having read/write access. With an appropriate client tool this would then eliminate the plethora of different methods in use today: shared Google documents, quite popular with many podcasters today, will not be around indefinitely.

All In One App?

Of course, the more features we pile into the podcasting client app, the more difficult it becomes to write and maintain. Previously an excellent programmer, podcaster and audiophile like Marco Arment could create Overcast. With Lightning network integration, plus crowd-sourced chapters, shared document support (notes etc.) and a text chat client (IRC), the application quickly becomes much heavier and more complex, with fewer developers having the knowledge in every dimension to create an all-in-one client app.

The need for better frameworks to make feature integration easier for developers is obvious. There may well be the need for two classes of app, or at least two views: the listener view and the podcaster view, or simply multiple apps for different purposes. Either way it’s interesting to see where the Tag + Use Case + Tool-chain can lead us.

]]>
Podcasting 2021-01-01T12:15:00+10:00 #TechDistortion
Podcasting 2.0 https://techdistortion.com/articles/podcasting-2-0 https://techdistortion.com/articles/podcasting-2-0 Podcasting 2.0 I’ve been podcasting for close to a decade and whilst I’m not what some might refer to as the “Old Guard”, I’ve come across someone that definitely qualifies as such: Adam Curry.

Interestingly, when I visited Houston in late 2019 pre-COVID19, my long-time podfriend Vic Hudson suggested I catch up with Adam, as he lived nearby, and referred to him as the “Podfather.” I had no idea who Adam was at that point and thought nothing of it at the time, and although I caught up with Manton Reece at the IndieWeb Meetup in Austin I ran out of time for much else. Since then a lot has happened and I’ve come across Podcasting 2.0, and thus began my somewhat belated self-education in the podcasting history that predates my own involvement, of which I had clearly been ignorant until recently.

In the first episode of Podcasting 2.0, “Episode 1: We are upgrading podcasting” on the 29th of August, 2020, at about 17 minutes in, Adam recounts the story of when Apple and Steve Jobs wooed him with regards to podcasting, as he handed over his own Podcast Index as it stood at the time to Apple as the new custodians. He refers to Steve Jobs’ appearance at D3, and at 17:45 Steve defined podcasting as being iPod + Broadcasting = Podcasting, further describing it as “Wayne’s World for Podcasting”, and even played a clip of Adam Curry complaining about the unreliability of his Mac.

The approximate turn of events thereafter: Adam hands over the podcast index to Apple; Apple builds podcasting into iTunes and their iPod line-up and becomes the largest podcast index; many other services launch, but indies and small networks dominate podcasting for the most part, and for the longest time Apple didn’t do much at all to extend podcasting. Apple added a few RSS feed namespace tags here and there but did not attempt to monetise podcasting, even as many others came into the podcasting space, bringing big names from conventional media and with them many companies starting, or attempting, to convert podcast content into something that wasn’t as open as it had been, with “exclusive” pay-for content.

What Do I Mean About Open?

To be a podcast by its original definition it must contain an RSS Feed, that can be hosted on any machine serving pages to the internet, readable by any other machine on the internet with an audio tag referring to an audio file that can be streamed or downloaded by anyone. A subscription podcast requires login credentials of some kind, usually associated with a payment scheme, in order to listen to the audio of those episodes. Some people draw the line at free = open (and nothing else), others are happy with the occasional authenticated feed that’s still available on any platform/player as that still presents an ‘open’ choice, but much further beyond that (won’t play in any player, not everyone can find/get the audio) and things start becoming a bit more closed.

Due to their open nature, tracking of podcast listeners, demographics and such is difficult. Whilst advertisers see this as a minus, most privacy conscious listeners see this as a plus.

Back To The History Lesson

With big money and big names a new kind of podcast emerged, one behind a paywall with features and functionality that other podcast platforms didn’t or couldn’t have with a traditional open podcast using current namespace tags. With platforms scaling and big money flowing into podcasting, it effectively brought down the average ad-revenue across the board in podcasting and introduced more self-censorship and forced-censorship of content that previously was freely open.

With Spotify and Amazon gaining traction, more multi-million dollar deals and a lack of action from Apple, it’s become quite clear to me that podcasting as I’ve known it in the past decade is in a battle with more traditional, radio-type production companies, with money from their traditional radio, movie and music businesses behind them. The larger and more closed the podcast eco-systems become, the harder it then becomes for those that aren’t anointed by those companies as being worthy to be heard amongst them.

Instead of spending time and energy on highly targeted advertising, carefully selecting shows (and podcasters) individually to attract their target demographic, advertisers start dealing only with the bigger companies in the space, since they want demographics from user tracking. With the bigger companies claiming a large slice of the audience, they then over-sell their ad-inventory, leading to lower-value DAI (Dynamic Ad Insertion) and less-personal advertising, further driving down ad-revenues.

(Is this starting to sound like radio yet? I thought podcasting was supposed to get us away from that…)

Finally another issue emerged: that of controversial content. What one person finds controversial another person finds acceptable. With many countries around the world, each with different laws regarding freedom of speech, and with people of many different belief systems, having a way to censor content within a fundamentally open ecosystem (albeit with partly centralised search) was a lever that would inevitably be pulled at some point.

As such many podcasts have been removed from different indexes/directories for different reasons, some more valid than others perhaps, however that is a subjective measure and one I don’t wish to debate here. If podcasts are no longer open then their corporate controller can even more easily remove them in part or in whole as they control both the search and the feed.

To solve the problems above there are a few key angles being tackled: Search, Discoverability and Monetisation.

Search

Quick and easy: the Podcast Index is a complete list of any podcast currently available that’s been submitted. It isn’t censored and is operated and maintained by the support of its users. As it’s independent, there is no hierarchy to pressure it into removing content.

Monetisation

The concept here is ingenious but requires a leap of faith (of a sort): Bitcoin, or rather Lightning, which is a micro-transaction layer that sits alongside Bitcoin. If you are already au fait with having a Bitcoin Node, Lightning Node and Wallet then there’s nothing for me to add, but the interesting concept is this: by publishing your Node address in your Podcast RSS feed (using the podcast:value tag), a compliant Podcast player can then optionally use the KeySend Lightning command to send a periodic payment “as you listen.” It’s voluntary but it’s seamless.

The podcaster sets a suggested rate in Sats (Satoshis) per minute of podcast played (recorded minute - not played minute if you’re listening at 2x, and the rate is adjustable by the listener) to directly compensate the podcast creator for their work. You can also “Boost” and provide one-off payments via a similar mechanism to support your podcast creator.

The transactions are so small and carry such minimal transaction fees that effectively the entire amount is transferred from listener to podcaster without any significant middle-person skimming off the top, in a manner that reflects the value of time listened versus time created, and without relying on a single piece of centralised infrastructure.

Beyond this the podcaster can choose additional splits, so the Sats a listener streams also go to their co-hosts, to the podcast player app-developer and more. Imagine being able to directly compensate audio editors, artwork contributors and hosting providers, all directly and fairly, based on listeners actually consuming the content in real time.
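As a back-of-the-envelope illustration of how those per-minute streams and splits add up, here’s a small Python sketch; the rate, episode length and split percentages are invented for the example, not taken from any real feed:

# Illustrative figures only; real feeds define their own rate and splits
# in the podcast:value tag.
SATS_PER_MINUTE = 50            # suggested rate per *recorded* minute
EPISODE_MINUTES = 60            # published episode length
splits = {                      # percentage shares chosen by the podcaster
    "host": 80,
    "co-host": 15,
    "app developer": 5,
}

# Playback speed doesn't matter: streaming is based on recorded minutes,
# so a listener at 2x still streams the same total for the episode.
total_sats = SATS_PER_MINUTE * EPISODE_MINUTES

for recipient, share in splits.items():
    print(f"{recipient}: {total_sats * share // 100} sats")
print(f"total streamed for the episode: {total_sats} sats")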

This allows a more balanced value distribution and avoids the fragility of the current non-advertising podcast-funding model via a support platform like Patreon, and Patreon (oh I mean Memberful, but that’s actually Patreon). When Patreon goes out of business, all of those supportive audiences will be partly crippled as their creators scramble to migrate their users to an alternative. The question is: will it be another centralised platform or service, or a decentralised system like this?

That’s what’s so appealing about the Podcasting 2.0 proposition: it’s future proof, balanced and sensible, and it avoids the centralisation problems that have stifled creativity in the music and radio industries in the past. There’s only one problem and it’s a rather big one: the lack of adoption of Lightning and Bitcoin. At the time of publishing only Sphinx supports podcast KeySend, and adding more client applications to that list of one is an easier problem to solve than mass listener adoption of BitCoin/Lightning.

Adam is betting that Podcasting might be the gateway to mass adoption of BitCoin and Lightning and if he’s going to have a chance of self-realising that bet, he will need the word spread far and wide to drive that outcome.

As of time of writing I have created a Causality Sphinx Tribe for those that wish to contribute by listening or via Boosting. It’s already had a good response and I’m grateful to those that are supporting Causality via that means or any other for that matter.

Discoverability

This is by far the biggest problem to solve, and if we don’t improve it dramatically the only people and content that will be ‘findable’ will be those of the big names with big budgets/networks behind them, leaving the better creators without such backing out in the cold. It should be just as easy to find an independent podcast with amazing content made by one person as it is to find a multi-million dollar podcast made by an entire production company. (And if the independent show has better content, then the Sats should flow to them…)

Current efforts are focussed on the addition of better tags in the Podcasting NameSpace to allow automated and manual searches for relevant content, and to add levers to improve promotability of podcasts.

They are sensibly splitting the namespace into Phases, each Phase containing a small group of tags, progressively agreeing several tags at a time, with the primary focus of closing out one Phase of tags before embarking on too much detail for the next. The first phase (now released) included the following:

  • < podcast:locked > (Technically not discoverability) If set to ‘yes’ the podcast is NOT permitted to be imported into another platform. This needs to be implemented by all platforms (or as many as possible) to be effective in preventing podcast theft, which is rampant on platforms like Anchor aka Spotify
  • < podcast:transcript > A link to an episode transcript file
  • < podcast:funding > (Technically not discoverability) Link to the approved funding page/method (in my case Patreon)
  • < podcast:chapters > A server-side JSON format for chapters that can be static or collaborative (more below)
  • < podcast:soundbite > Link to one or more excerpts from the episode for a prospective listener to check out the episode before downloading or streaming the whole episode from the beginning

I’ve implemented those that I see as having a benefit for me, which is all of them (soundbite is a WIP for Causality), with the exception of Chapters. The interesting opportunity that Adam puts forward with chapters is that he wants the audience to be able to participate with crowd-sourced chapters as a new vector of audience participation and interaction with podcast creators. They’re working with HyperCatcher’s developer to get this working smoothly, but for now at least I’ll watch from a safe distance. I think I’m just too much of a control freak to hand that over to others on Causality to make chapter suggestions. That said it could be a small time saver for me for Pragmatic…maybe.
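For reference, the chapters tag points at a server-side JSON file rather than chapters embedded in the audio; below is a minimal Python sketch of building one, assuming the file follows the Podcast Index JSON chapters shape (a version string plus an array of startTime/title entries) - check the namespace documentation for the authoritative field names:

import json

# Assumed shape of the chapters file (verify against the namespace docs):
# a version string and a list of chapters with startTime in seconds.
chapters = {
    "version": "1.2.0",
    "chapters": [
        {"startTime": 0, "title": "Introduction"},
        {"startTime": 185, "title": "Main topic", "url": "https://example.com/reference"},
        {"startTime": 2400, "title": "Wrap-up"},
    ],
}

with open("episode-chapters.json", "w") as chapter_file:
    json.dump(chapters, chapter_file, indent=2)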

The second phase (currently a work in progress) is tackling six more:

  • < podcast:person > List of people that are on an episode or the show as a whole, along with a canonical reference URL to identify them
  • < podcast:location > The location of the focus of the podcast or episode’s specific content (for TEN, this only makes sense for Causality)
  • < podcast:season > Extension of the iTunes season tag that allows a text string name in addition to the season number integer
  • < podcast:episode > Modification of the iTunes episode tag that allows non-integer values including decimal and alpha-numeric
  • < podcast:id > Platforms, directories, hosts, apps and services this podcast is listed on
  • < podcast:social > List of social media platform/accounts for the podcast/episode

Whilst there are many more in Phase 3 which is still open, the most interesting is the aforementioned < podcast:value > where the podcaster can provide a Lightning Node ID for payment using the KeySend protocol.

TEN Makes It Easy

This is my “that’s fine for John” moment, where I point out that incorporating these into the fabric of The Engineered Network website hasn’t taken too much effort. TEN runs on GoHugo as a static site generator and whilst it was based on a very old fork of Castanet, I’ve re-written and extended so much of that now that it’s not recognisable.

I already had people name tagging, people name files, funding, subscribe-to links on other platforms, social media tags and transcripts (for some episodes) in the MarkDown YAML front-matter and templates, so adding them into the RSS XML template was extremely quick and easy and required very little additional work.

The most intensive tags are those that require additional Meta-Data to make them work. Namely, Location only makes sense to implement on Causality, but it took me about four hours of Open Street Map searching to compile about 40 episode-locations worth of information. The other one is soundbite (WIP) where searching for one or more choice quotes retrospectively is time-consuming.

Not everyone out there is a developer (part or full-time) and hence many rely on services to support these tags. There’s a relatively well maintained list at Podcast Index and at time of writing: Castopod, BuzzSprout, Fireside, Podserve and Transistor support one or more tags, with Fireside (thank you Dan!) supporting an impressive six of them: Transcript, Locked, Funding, Chapters, Soundbite and Person.

Moving Forward

I’ve occasionally chatted with the lovely Dave Jones on the Fediverse (Adam’s co-host and the developer working on many aspects of 2.0) and listen to 2.0 via Sphinx when I can (unfortunately I can’t on my mobile/iPad as the app has been banned by my company’s remote device management policy) and I’ve implemented the majority of their proposed tags thus far on my shows. I’m also in the process of setting up my own private BitCoin/Lightning Node.

For the entire time I’ve been involved in the podcasting space, I’ve never seen a concerted effort like this take place. It’s both heartening and exciting and feels a bit like the early days of Twitter (before Jack Dorsey went public, bought some of the apps and effectively killed the rest and pushed the algorithmic timeline thus ruining Twitter to an extent). It’s a coalition of concerned creators, collaborating to create a better outcome for future podcast creators.

They’ve seen where podcasting has come from, where it’s going and if we get involved we can help deliver our own destiny and not leave it in the hands of corporations with questionable agendas to dictate.

]]>
Podcasting 2020-12-29T15:25:00+10:00 #TechDistortion
Oh My NAS https://techdistortion.com/articles/oh-my-nas https://techdistortion.com/articles/oh-my-nas Oh My NAS I’ve been on the receiving end of failing hard drives in the past and lost many of my original podcast source audio files and, more importantly, a year’s worth of home videos, gone forever.

Not wishing for a repeat of this I purchased an 8TB external USB HardDrive and installed BackBlaze. The problem for me though was that BackBlaze was an ongoing expense, could only be used for a single machine and couldn’t really do anything other than be an offsite backup. I’d been considering a Network Attached Storage for years now and the thinking was, if I had a NAS then I could have backup redundancy1 plus a bunch of other really useful features and functionality.

The trigger was actually a series of crashes and disconnects of the 8TB USB HDD, and with the OS’s limited ability to troubleshoot hardware-specific HDD issues via USB, plus some experience from my previous set of HDD failures many years ago, I knew that this is how it all starts. So I gathered together a bunch of smaller HDDs and copied across all the data to them, while I still could, and resolved to get a better solution: hence the NAS.

Looking at both QNAP and Synology and my desire to have as broad a compatibility as possible, I settled on an Intel-based Synology, which in Synology-speak, means a “Plus” model. Specifically the DS918+ presented the best value for money with 4 Bays and the ability to extend with a 5 Bay external enclosure if I really felt the need in future. I waited until the DS920+ was released and noted that the benchmarks on the 920 weren’t particularly impressive and hence I stuck with the DS918+ and got a great deal as it had just become a clearance product to make way for the new model.

My series of external drives I had been using to hold an interim copy of my data were: a 4TB 3.5", a 4TB 2.5" (at that time I thought it was a drive in an enclosure you could extract), and a 2TB 3.5" drive as well as, of course, my 8TB drive which I wasn’t sure was toast yet. The goal was to reuse as many of my existing drives as possible and not spend even more money on more, new HDDs. I’d also given a disused but otherwise healthy 3.5" 4TB drive to my son for his PC earlier in the year and he hadn’t actually used it, so I reclaimed it temporarily for this exercise.

Here’s how it went down:

STEP 1: Insert 8TB Drive and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…hundreds of bad sectors. To be honest, that wasn’t too surprising since the 8TB drive was periodically disconnecting and reconnecting and rebuilding its file tables - but now I had the proof. The Synology refused to let me create a Storage Pool or a Volume or anything so I resigned myself to buying 1 new drive: I saw that SeaGate Barracudas were on sale so I grabbed one from UMart and tried it.

STEP 2: Insert new 4TB Barracuda and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…it worked perfectly! (As you’d expect) Though the test took a VERY long time, I was happy so I created a Storage Pool, Synology Hybrid RAID. Created a Volume, BTRFS because it came highly recommended, and then began copying over the first 4TB’s worth of data to the new Volume. So far, so good.

STEP 3: Insert my son’s 4TB drive and extend the SHR Storage Pool to include it. The Synology allowed me to do this and I did so for some reason without running a SMART Extended test on it first, and it let me so that should be fine right? Turns out, this was a terrible idea.

STEP 4: Once all data was copied off the 4TB data drive and to the Synology Volume, wipe that drive, extract the 3.5" HDD and insert the reclaimed 4TB 3.5" into the Synology and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…hundreds of bad sectors. Um, okay. That’s annoying. So I might be up for yet another HDD since I have 9TB to store.

OH DEAR MOMENT: As I was re-running the drive check the Synology began reporting that the Volume was Bad, and the Storage Pool was unhealthy. I looked into the HDD manager and saw that my son’s reclaimed 3.5" drive was also full of bad sectors, as the Synology had run a periodic test while data was still copying. I also attempted to extract the 2.5" drive from the external enclosure, only to discover that it was a fully integrated controller/drive/enclosure and couldn’t be extracted without breaking it. (So much for that) Whilst I still had a copy of my 4TB of data in BackBlaze, so at this point I wasn’t worried about losing data, the penny dropped: stop trying to save money and just buy the right drives. So I went to Computer Alliance and bought three shiny new 4TB SeaGate IronWolf drives.

STEP 5: Insert all three new 4TB IronWolfs and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…the first drive perfect! The second and third drives however…had bad sectors. Bad sectors. On new drives? And not only that NAS-specific, high reliability drives? John = not impressed. I extended the Storage Pool (Barracuda + 1 IronWolf) and after running a Data Scrub it still threw up errors despite the fact both drives appeared to be fine and were brand new.

IronWolf Fail This is not what you want to see on a brand new drive…

TROUBLESHOOTING:

So I did what all good geeks do: got out of the DSM GUI and hit SSH and the Terminal. I ran “btrfs check --repair” as well as recover, super-recover and chunk-recover, and ultimately the chunk tree recovery failed. I read that I had to stop everything running and accessing the Pool, so I painstakingly killed every process and re-ran the recovery, but ultimately it still failed after a 24 hour long attempt. There was nothing for it - it was time to start copying the data that was on there (what I could read) back on to a 4TB external drive, blow it all away and start over.

Chunk Fail

STEP 6: In the midst of a delusion that I could still recover the data without having to recopy the lot of it off the NAS (a two day exercise), I submitted a return request for the first failed IronWolf, while I re-ran the SMART on the other potentially broken drive. The return policy stated that they needed to test the HDD and that could take a day or two, and Computer Alliance is a two hour round trip from my house. Fortunately I met a wonderfully helpful and accommodating support person at CA on that day: he simply took the Synology screenshot of the bad sector count and serial number as confirmation I wasn’t pulling a switch on him, and handed me a replacement IronWolf on the spot! (Such a great guy - give him a raise) I returned home, this time treating the HDD like a delicate egg the whole trip, inserted it and in Storage Manager, Drive Info, ran an Extended SMART test…and hours later…perfect!

STEP 7: By this time I’d given up all hope of recovering the data and with three shiny new drives in the NAS, my 4TB of original data restored to my external drive (I had to pluck 5 files that failed to copy back from my BackBlaze backup) I wiped all the NAS drives…and started over. Not taking ANY chances I re-ran the SMART tests on all three and when they were clean (again) recreated the Pool, new Volume, and started copying my precious data back on to the NAS all over again.

STEP 8: I went back to Computer Alliance to return the second drive and this time I met a different support person, someone who was far more “by the book” and accepted the drive and asked me to come back another day once they’d tested it. I’d returned home and hours later they called and said “yeah it’s got bad sectors…” (you don’t say?) and unfortunately due to personal commitments I couldn’t return until the following day. I grabbed the replacement drive, drove on eggshells, added it to the last free bay and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…perfect! (FINALLY)

STEP 9: I copied all of the data across from all of my external drives on to the Synology. The Volume was an SHR with 10.9TB of usable space spread across x4 4TB drives (x3 IronWolf and x1 Barracuda). The Data Scrub passed, the SMART Tests passed, and the IronWolf-specific Health Management tests all passed with flying colours (all green, oh yes!) It was time to repurpose the 4TB 2.5" external drive as my offline backup for the fireproof safe. I reformatted it to ExFAT and set up HyperBackup for my critical files (Home Videos, Videos of the Family, my entire photo library), backed them up and put that in the safe.
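For what it’s worth, that 10.9TB figure is roughly what you’d expect; a quick Python check, assuming SHR with single-drive redundancy behaves like RAID 5 for equal-sized drives and ignoring filesystem/DSM overhead:

# Rough capacity check for 4 x 4TB drives in SHR (single-drive redundancy).
# Assumes SHR behaves like RAID 5 for equal-sized drives; real-world usable
# space lands slightly lower once filesystem overhead is taken out.
DRIVES = 4
DRIVE_TB = 4                                      # marketing terabytes (10**12 bytes)

usable_bytes = (DRIVES - 1) * DRIVE_TB * 10**12   # one drive's worth goes to parity
usable_tib = usable_bytes / 2**40                 # DSM reports capacity in binary units

print(f"~{usable_tib:.1f} TB usable")             # ~10.9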

CONCLUSION:

Looking back, the mistake was that I never should have extended the storage pool before the Synology had run a SMART test and flagged the bad sectors. In so doing it wrote data to those bad sectors, and there were just too many for BTRFS to recover in the end. In addition I never should have tried to do this on the cheap: I should have just bought new drives from the get-go, and NAS-specific drives at that. Despite the bad sectors and the bad luck of getting two out of three bad IronWolf drives, in the end they have performed very well and completed their SMARTs faster, with online forums suggesting a desktop-class HDD (the Barracuda) is a bad choice for a NAS. I now have my own test example to see if the Barracuda is actually suitable as a long-term NAS drive, since I ended up with both types in the same NAS - same age, same everything else - so I’ll report back in a few years to see which starts failing first.

Ultimately I also stopped using BackBlaze. It was slowing down my MacBook Pro, I found the video compression applied on data recovery frustrating, and even with my 512GB SSD on the MBP with everything on it, I would often get errors about a lack of space for backups to BackBlaze. Whilst financially the total lifecycle cost of the NAS and the drives is far more than BackBlaze (or an equivalent backup service) would cost me, the NAS can also do so many more things than just back up my data via TimeMachine.

But that’s another story for another article. In the end the NAS plus drives cost me $1.5k AUD, 6 trips to two different computer stores and 6 weeks from start to finish, but it’s been running now since August 2020 and hasn’t skipped a beat. Oh…my…NAS.


  1. Redundancy against the failure of an individual HDD ↩︎

]]>
Technology 2020-11-29T09:00:00+10:00 #TechDistortion
200-500mm Zoom Lens Test https://techdistortion.com/articles/200-500-zoom-lens-test https://techdistortion.com/articles/200-500-zoom-lens-test 200-500mm Zoom Lens Test I’ve been exploring my new 200-500mm Nikon f/5.6 Zoom Lens on my D500 and pushing the limits of what it can do. I’ve used it for several weeks taking photos of Soccer and Cricket and I thought I should run a few of my own lens sharpness tests to see how it’s performing in a controlled environment.

As in my previous Lens Shootout I tested sharpness indoors with controlled lighting conditions, setting the D500 on a tripod with a timer, adjusting the aperture between shots while leaving a constant shutter speed of 1/200th of a second with Auto ISO, and tweaking the Exposure during post to try and equalise the light level between exposures.

I set the back of some packaging with a mixture of text and symbols as the target, with the tripod at the same physical distance for each test photo.

Nikon 200-500mm Zoom Lens

I took photos across the aperture range at f/5.6, f/8 and f/11, cropped to 1,000 x 1,000 pixels in both the dead-center of the frame and the bottom-right edge of the frame.


200mm

200mm Center Crop f/5.6

200mm Center Crop f/8

200mm Center Crop f/11

200mm Edge Crop f/5.6

200mm Edge Crop f/8

200mm Edge Crop f/11


300mm

300mm Center Crop f/5.6

300mm Center Crop f/8

300mm Center Crop f/11

300mm Edge Crop f/5.6

300mm Edge Crop f/8

300mm Edge Crop f/11


400mm

400mm Center Crop f/5.6

400mm Center Crop f/8

400mm Center Crop f/11

400mm Edge Crop f/5.6

400mm Edge Crop f/8

400mm Edge Crop f/11


500mm

500mm Center Crop f/5.6

500mm Center Crop f/8

500mm Center Crop f/11

500mm Edge Crop f/5.6

500mm Edge Crop f/8

500mm Edge Crop f/11


What I wanted to test the most was the difference between Edge and Centre sharpness, as well as the effect of different Apertures. For me I think the sensor is starting to battle ISO grain at f/11 and this is impacting the apparent sharpness. In the field I’ve tried stopping down the Aperture to try and get a wider area in focus across the frame, but it’s tough the further out you zoom, and the images above support this observation.

My conclusions, in terms of the questions I was seeking answers to, are firstly that there’s no noticeable change in sharpness from the centre to the edge at the shortest zoom, irrespective of aperture. The edge starts to soften only slightly as you zoom in towards 500mm, and that is independent of aperture.

The thing I didn’t expect was the sharpness at f/5.6 being so consistent, throughout the zoom range. If you’re isolating a subject at the extremes of zoom then it’s probably not worth stopping down the aperture and in future when I’m shooting I’ll just keep that aperture as wide open as I can unless I’m at the 200mm end of the zoom spectrum.

It’s a truly amazing lens for the money and whilst I realise there are many other factors to consider, I at least answered my own questions.

]]>
Photography 2020-10-25T06:00:00+10:00 #TechDistortion
Astronomy With Zoom Lenses https://techdistortion.com/articles/astronomy-with-zoom-lenses https://techdistortion.com/articles/astronomy-with-zoom-lenses Astronomy With Zoom Lenses About a month ago I started renting a used Nikon 200-500mm Zoom Lens that was in excellent condition. Initially my intention was to use it for photographing the kids playing outdoor sports, namely Soccer, Netball and Cricket. Having said that the thought occurred to me that it would be excellent for some Wildlife photography, here, here and here, and also…Astrophotography.

Nikon 200-500mm Zoom Lens

I was curious just how much I could see with my D500 (1.5x as it’s a DX Crop-sensor) using the lens at 500mm maximum (750mm effective). The first step was to mount my kit on my trusty 20 year old, ultra-cheap, aluminium tripod. Guess what happened?

The bracket that holds the camera to the tripod base snapped under the weight of the lens and DSLR and surprising even myself, in the pitch dark, I miraculously caught them before they hit the tiles, by mere inches. Lucky me, in one sense, not so lucky in another - my tripod was now broken.

Not to be defeated, I applied my many years of engineering experience to strap it together with electrical tape…because…why not?

D500 and 200-500 Zoom on Tripod

Using this combination I attempted several shots of the heavens and discovered a few interesting things. My PixelPro wireless shutter release did not engage the Image Stabilisation in the zoom lens. I suppose they figured that if you’re using the remote, you’ve probably got a tripod anyhow so who needs IS? Well John does, because his Tripod was a piece of broken junk that was swaying in the breeze - no matter how gentle that breeze was…

Hence I ended up ditching the Tripod and opted instead for handheld, using the IS in the Zoom Lens. The results were (to me at least) pretty amazing!

Earth’s Moon

I photographed the Moon through all of its phases, culminating in the above Full Moon image. It’s by far the easiest thing to take a photo of, and in 1.3x crop mode on the D500 it practically filled the frame. Excellent detail and an amazing photograph.

Of course, I didn’t stop there. It was time to turn my attention to the planets and luckily for me several planets are at or near opposition at the moment. (Opposition is one of those astronomy terms I learned recently, where the planet appears at its largest and brightest, and is above the horizon for most of the night)

Planet Jupiter

Jupiter and its moons, the cloud band stripes are just visible in this photo. Stacked two images, one exposure of the Moons and one of Jupiter itself. No colour correction applied.

Planet Saturn

Saturn’s rings are just visible in this image.

Planet Mars

Mars is reddish and not as interesting unfortunately.

International Space Station

The ISS image above clearly shows the two large solar arrays on the space station.

What’s the problem?

Simple: it’s not a telescope…is the problem. Zoom lenses are simply designed for a different purpose than maximum reach for taking photos of planets. I’ve learned through research that the better option is to use a T-Ring adaptor and connect your DSLR to a telescope. If you’re REALLY serious you shouldn’t use a DSLR either, since most have a red-light filter which changes the appearance of nebulae; you need to use a digital camera that’s specifically designed for Astrophotography (or hack your DSLR to remove the filter, on some models, if you’re crazy enough).

If you’re REALLY, REALLY interested in the best photos you can take, you need an AltAz or Altitude-Azimuth mount that automatically moves the camera in opposition to Earth’s rotation to keep the camera pointing at the same spot in the night sky for longer exposures. And if you’re REALLY, REALLY, REALLY serious you want to connect that to a guide scope that further ensures the auto-guided mount is tracking as precisely as possible. And if you’re REALLY, REALLY, REALLY, REALLY serious you’ll take many, many exposures including Bias Frames, Light Frames, Dark Frames and Flat Frames, and image-stack them to reduce noise in the photo.

How Much Further Can You Go With a DSLR and Lenses?

Not much further, that’s for sure. I looked at adding Teleconverters, particularly the TC-14E (1.4x) and then the TC-20E (2x), which would give me an effective focal length of 1,050mm and 1,500mm respectively. The problem is that you lose a lot of light in the process, and whilst you could get a passable photo at 1,050mm, at 1,500mm on this lens you’re down to an aperture of f/11 which is, frankly, terrible. Not only that, but reports seem to indicate that coma and chromatic aberration are pretty bad with the 2x Teleconverter coupled with this lens. The truth is that Teleconverters are meant for fast primes (f/4 or better), not an f/5.6 Zoom.

Going to an FX camera body wouldn’t help since you’d lose the 1.5x effective reach of the DX sensor, and although you might pick up a few extra pixels, the sensor on my D500 is pretty good in low light, so you’re not going to get a much better low-light sensor for this sort of imaging. (Interestingly, comparing the pixel density of the sensors in the D500 DX and D850 FX leaves my camera with 6.7% more pixels per square cm, so it’s still the better choice.)

How Many Pixels Can You See?

Because I’m me, I thought: let’s count some pixels. Picking Jupiter, because it’s big, bright and easy to photograph (as planets go), with my current combination it’s 45 pixels across. Adding a 1.4x Teleconverter gets me to an estimated 63 pixels, and a 2.0x to 90 pixels in diameter. Certainly that would be nicer, but probably still wouldn’t be enough detail to make out the red spot with any real clarity.
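Those estimates fall straight out of the fact that image scale is proportional to focal length, while the working f-number grows by the same factor; a quick Python sketch of the scaling:

# Image scale on the sensor is proportional to focal length, so a
# teleconverter multiplies a subject's size in pixels by the same factor,
# while also multiplying the f-number (costing light) by that factor.
BASE_FOCAL_MM = 500          # the lens at maximum zoom
CROP_FACTOR = 1.5            # Nikon DX sensor
BASE_F_NUMBER = 5.6          # wide open at 500mm
JUPITER_PIXELS = 45          # measured diameter in the photo above

for tc in (1.0, 1.4, 2.0):
    effective_mm = BASE_FOCAL_MM * tc * CROP_FACTOR
    f_number = BASE_F_NUMBER * tc
    diameter = round(JUPITER_PIXELS * tc)
    print(f"{tc}x TC: ~{effective_mm:.0f}mm effective, f/{f_number:.1f}, Jupiter ~{diameter}px across")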

Just a Bit of Fun

Ultimately I wanted to see if it was possible to use my existing camera equipment for Astronomy. The answer was: kinda, but don’t expect more than the Moon to look any good. If you want pictures somewhere between those above and what the Hubble can do, expect to spend somewhere between $10k and $30k AUD on a large aperture, long focal length telescope, heavy duty AltAz mount, tracking system and specialised camera, and add in a massive dose of patience waiting for the clearest possible night too.

If nothing else for me at least, it’s reawakened a fascination that I haven’t felt since I was a teenager about where we sit in the Universe. With inter-planetary probes and the Hubble Space Telescope capturing amazing images, and CGI making it harder to pick real from not-real planets, suns and solar systems, it’s easy to become disconnected from reality. Looking at images of the planets in ultra-high resolution almost doesn’t feel as real as when you use your own equipment and see them with your own eyes.

So I’ve enjoyed playing around with this but not because I was trying to get amazing photographs. It’s been a chance to push the limits of the gear I have with me to see a bit more of our Solar System, completely and entirely on my own from my own backyard. And that made astronomy feel more real to me than it had for decades.

The stars, the moon, the planets and a huge space station that we humans built, are circling above our heads. All you need to do is look up…I’m really glad I took the time to do just that.

]]>
Technology 2020-10-17T08:00:00+10:00 #TechDistortion
Solo Band https://techdistortion.com/articles/solo-band https://techdistortion.com/articles/solo-band Solo Band Apple’s new Apple Watch Series 6 was released with several new bands, of which the two most controversial are the Solo Braid and Solo Sport Loop bands. Whilst the braided band might look nice, my instant reaction was “that’s going to catch on everything” and I’ve heard a few anecdotal reports floating around the internet recently of threads being pulled on these bands as some evidence to validate my ultimate choice not to get that one.

Whilst I applaud Apple’s “Create Your Style” watch and band selector, the fact is you STILL can’t select a Nike band or a Hermes band with your new watch. (I know right? No Hermes? I guess there’s always a Hermes store for that…the bands are next to the riding helmets I hear…)

Per Apple’s directions when ordering, I dutifully printed out the measuring tape/paper cutout measurement implement to find my wrist size was between 6 and 7 - exactly half way. I opted for a 7 when I ordered (plain white), then attended the Chermside Apple Store to pick it up at a scheduled time through their door / COVID19 “window” for pickups.

Once in hand I hastily opened it and put it on the watch and my wrist, only to find it was too loose. Reasoning that it was probably going to stretch over time, I went back to the “window” to swap it for a Size 6, one size down. After attempting to return just the band and failing, then trying multiple times to return the entire watch just to swap the band, after nearly 45 minutes I had the right-fitting band and was on my way.

I’m not sure I’m complaining exactly as everything is relative. There are other parts of the world where Apple Stores are still closed due to local COVID19 lockdown restrictions, so I had it good…for sure.

Solo Loop Edge Gap

The gap at the edge is quite small and tight, which is how I like to wear my watches. (I hate loose watches)

Solo Loop on Wrist

The band to the untrained eye looks just like a traditional White Sport Band.

Solo Loop Underneath

The giveaway is underneath, where there is no pin, and that’s ultimately the reason that I like this band so much more than any of the existing sport bands. On standard two-piece sport bands the pin isn’t so much the issue, it’s the slide-under segment through the hole that pulls out arm hairs on the way and places pressure on my carpal tunnels after many hours of wearing. (Sure I could wear it more loosely, but refer above - I hate doing that)

Feel and Comfort

The solo loop band is softer than my White Sport Band and is elastic but firm. The rubber-like texture is balanced with a smooth finish so it doesn’t grab your arm hairs too much like a rubber-band would when you take it off or put it on.

Beyond this I’ve found that like the other sport bands it’s the best option when you get it wet as it’s quick and easy to dry.

I Really Wanted A Nike Sport Loop Though

I’ve been a huge fan of my nearly two-year-old Blue Sport Loop band, so much so that I’ve worn it more than any other band during that time; it’s now frayed at the loop-back buckle and generally a bit worse for wear.

I had secretly hoped that when Apple released the Series 6 they would open up the selector to include Nike bands as options, alas they did not. So after wearing the Solo Loop for a week, I went back to the Apple Store and grabbed the band I actually wanted: the Spruce Aura Sport Loop.

Solo Loop and Sport Loop Outside

Side by side the Pure White of the Solo Loop contrasts with the subtle Green weave of the Nike Sport Loop.

Solo Loop and Sport Loop Inside

The Nike Loop is made from the same material and is just as comfortable as my previous favourite band, with the bonus of being a pleasant light colour that’s reflective in the dark.

Concerns with the Solo Loops

Much has been written about the Solo Loop being a bad customer experience, and certainly with so many Apple Stores not functioning as they used to due to COVID19 restrictions, finding the best fit is more difficult than it otherwise would be. That said, even if they were open, the best way to get a feel for the band’s comfort isn’t wearing it in the store for two minutes: you really need serious time with it in general use, for a few days or weeks, to know for sure whether it will work for you in that size.

Notwithstanding this, the other issue is resale. Previously you could sell your Apple Watch or hand it down to other family members, but now the variable of “will it fit their wrist” needs to be considered. If it doesn’t, you’re up for another solo band that fits the recipient, or one with flexible sizing that fits anyone.

If you can look past these issues, then the Solo Loop is comfortable, simple and I think better than the other Sport Bands on offer. That said… I’ll be sticking with my recommendation of the Sport Loops as the best all-round band for the Apple Watch.

]]>
Technology 2020-10-16T21:00:00+10:00 #TechDistortion
Kit vs Tamron vs Prime Shoot-out https://techdistortion.com/articles/kit-vs-tamron-vs-prime-shootout https://techdistortion.com/articles/kit-vs-tamron-vs-prime-shootout Kit vs Tamron vs Prime Shoot-out I’ve been reading and learning, trying and fine-tuning my photography setup (heck, isn’t that what all photography enthusiasts do?) and I’ve been looking at the gaps in my lens arsenal, as well as for duplication and overlap.

I started out loving zoom lenses, with my 55-200mm Nikon providing most of the work for outdoor sports, but with two of the key kinds of sports photography I was being called on for happening at night or indoors in poor lighting (netball and basketball), I had to invest in a better zoom, and the Tamron 24-70mm f/2.8 was my choice.

It does a fine job and did double-duty for large group shots where I didn’t have space to move back and needed to work in close, but using a DX camera (a Nikon D5500 and then a D500) the 24mm short end wasn’t quite short enough. I invested in a second-hand kit lens on a lark, thinking it could do fine at the short end (18mm) for those tight situations. Unfortunately I kept having trouble with the sharpness of both the 24-70mm Tamron and the 18-55mm Nikon kit lenses.

It occurred to me that I’d become spoilt by my growing prime collection (35mm f/1.8, 50mm f/1.8, 85mm f/1.8), all of which are sharp as a tack at pretty much every aperture. Then I read many, many semi-professional and professional lens reviews to try to decide whether I was imagining things.

Then I thought, “Hey, I could just do my own test…”

…and here it is…

I decided to test their sharpness indoors under controlled lighting conditions, setting the D500 on a tripod with a timer, adjusting the shutter speed and keeping a constant ISO160. The target was the back of some packaging with a mixture of text and symbols, with the tripod at the same distance for each lens. The only variable I could have controlled better was the distance from the lens element to the target: it was slightly different for each lens owing to the different lens designs and the imprecision of the 50mm mark on each zoom when ensuring the exact same image scale in the frame, but it’s close enough to make the point.

Tamron 24-70mm f/2.8 (Left) | Nikon 18-55mm f/3.5-f/5.6 (Middle) | Nikon 50mm f/1.8 (Right)

Finally, to match the apertures I took photos across the range at two equivalence points that were possible on all three lenses: f/5.6, which is the widest the 18-55mm lens can open at this focal length, and f/8 because… “f/8 and be there”, or something like that. Additionally I tried f/2.8 to provide another point of comparison between the 50mm and the Tamron.
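
To keep the comparison fair at a constant ISO160, the shutter time has to compensate for each aperture change: exposure reciprocity means the required time scales with the square of the f-number. Here’s a minimal sketch of that arithmetic in Python; the 1/200s base value is purely illustrative, not my actual metered exposure.

  # Exposure reciprocity at constant ISO: each one-stop smaller aperture needs
  # double the shutter time, so time scales with the square of the f-number.
  def equivalent_shutter(base_shutter_s, base_fstop, new_fstop):
      """Shutter time at new_fstop that matches the exposure at base_fstop."""
      return base_shutter_s * (new_fstop / base_fstop) ** 2

  base = 1 / 200  # illustrative metered exposure at f/2.8
  for f in (2.8, 5.6, 8.0):
      print(f"f/{f}: {equivalent_shutter(base, 2.8, f):.4f} s")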


Firstly the f/5.6 Shoot-out…

18-55mm Nikon at f/5.6

24-70mm Tamron at f/5.6

50mm Nikon at f/5.6


Secondly the f/8 Shoot-out…

18-55mm Nikon at f/8

24-70mm Tamron at f/8

50mm Nikon at f/8


There’s no question that the 18-55mm Kit Lens is the worst of the three by an obvious margin. That shouldn’t be a revelation to anyone; it’s the cheapest lens I tried and honestly… it shows.

What’s more interesting is the colour reproduction and the sharpness between the Tamron and the Prime. At f/8 I think the Tamron has better colour and is marginally sharper, but at f/5.6 it’s almost a wash. It’s easy to take the darker lines on the Tamron as the better representation but the Prime picked up the dust and imperfections in the printed lines and text slightly better, leading to a slightly lighter colour.


Finally the f/2.8 Shoot-out…

24-70mm Tamron at f/2.8

50mm Nikon at f/2.8


In the end the Tamron on balance seems slightly sharper than the 50mm Prime at 50mm, but the amount of light and the colour on the Prime are better. So what’s the conclusion? Clearly the Tamron is a fantastic lens, but the 50mm is probably good enough at 50mm, so the question is: why do I need both?

For me, personally, what is each lens really for? If I have a 50mm and an 85mm prime, then I don’t really need the Tamron beyond 24mm. What’s clear to me is that I’m well covered between 35mm and 85mm with some great lenses, but where I’m lacking is a decent ultra-wide. The poor quality of the 18-55mm Kit Lens disqualifies it as a contender.

Hence I definitely don’t need or want the Kit Lens anymore; it’s just not up to the standard I’m looking for in terms of sharpness. Also, as hard as it is for me to part with it, the Tamron doesn’t fit a need I have any more. The gap I need to fill is in the ultra-wide category, which is difficult to achieve with a crop sensor, and 24mm isn’t enough. My intention therefore is to replace them both with a sharper ultra-wide lens.

Which lens that is, I’m still uncertain, though the 10-20mm Nikon looks nice.
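
For anyone wondering why 24mm (or even 18mm) doesn’t count as ultra-wide here: on a Nikon DX body the 1.5x crop factor narrows every lens’s field of view to that of a 1.5x longer lens on full-frame. A quick sketch of that arithmetic in Python, using the 10-20mm above purely as an example:

  # Nikon DX sensors have a ~1.5x crop factor: a lens on DX frames like a
  # 1.5x longer lens would on full-frame (FX).
  DX_CROP = 1.5

  for focal_mm in (10, 18, 20, 24, 35):
      print(f"{focal_mm}mm on DX ~ {focal_mm * DX_CROP:.0f}mm full-frame equivalent")

  # 24mm frames like 36mm (not wide at all) and 18mm like 27mm, whereas a
  # 10-20mm zoom covers roughly a 15-30mm equivalent: genuinely ultra-wide.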

]]>
Photography 2020-09-05T06:00:00+10:00 #TechDistortion
WhitePapers And Photography https://techdistortion.com/articles/whitepapers-and-photography https://techdistortion.com/articles/whitepapers-and-photography WhitePapers And Photography I’ve had two side projects in the past few years that I think it’s time to consolidate into TechDistortion. As of today all of my Control System Space whitepapers will be moved here, and my PixelFed instance, Chidgey.xyz, will be redirected here.

Having looked at the type and volume of traffic it makes sense to consolidate them now rather than let them continue on for another year at their current homes.

The intention is to keep Podcasting over on The Engineered Network and everything else here at TechDistortion.

]]>
Technology 2020-07-04T06:00:00+10:00 #TechDistortion
Podcaster To AudioBook Narrator https://techdistortion.com/articles/podcaster-to-audiobook-narrator https://techdistortion.com/articles/podcaster-to-audiobook-narrator Podcaster To AudioBook Narrator I’ve been told for many years that I have a lovely voice, even before I started podcasting; lots more since then. Whilst I’ve also been known for my accents and impersonations, some of which have actually got me in serious trouble in years past, it seemed a logical extension to consider audiobooks and vocal acting.

Upon putting my name down at an agency I wasn’t sure what to expect, but then I landed an audition, and then I landed the narration job for an audiobook! I was ecstatic. Once that wore off I signed the contract and realised I was now on the hook to record, edit and supply a complete audiobook that someone else had poured their time, energy and effort into writing. It was my job to narrate that book and make it sing!

Easy huh?

Oh boy.

I think it’s fair to say that I underestimated how much work it would be and looking back, just how much I learned in making it.

Some of the key lessons I learned from this experience that weren’t obvious to me when I signed up:

  • It is NOT possible to record even a short book in a single recording session, especially in the midst of a COVID19 lockdown. My house is my recording studio, and with the lockdown restrictions my recording periods were very brief, disjointed and highly problematic. Whilst I accept that in future this won’t always be the case, it made this book particularly challenging. Children, TVs and music blaring, neighbours with too much gardening time on their hands mowing their lawns constantly, a Harley Davidson motorbiking enthusiast up the street; it was incredibly frustrating!
  • Keeping a consistent pacing of speech, the same tone and pitch between recordings is extremely difficult. I learned to record in blocks wherever I could to avoid differences in my voice, and keep my positioning in front of the microphone identical every time.
  • Test your gear twice before you start a recording session! I unfortunately had a bad cable and I didn’t realise until I had an hour recorded! I had to re-record all of it.
  • If you put down some audio and you start editing and the levels aren’t the way you like, admit defeat early and re-record it! I made the mistake of persevering with sub-par audio for several hours of editing but after a few listen-backs to the finished product, I just couldn’t give it to the client. It wasn’t good enough. I should have cut my losses hours earlier and admitted I’d had a hardware failure and just re-recorded before I spent any time trying to salvage it.
  • Make sure you pre-record at your set levels, keep the same recording booth layout, and check everything all the way through your workflow to the final audio output, so that every link in the chain is set correctly before recording for any significant duration.
  • Scan a few words ahead, read those words after a delay in your brain, listen back to what you said whilst re-reading the same text to confirm you read exactly what was written. This is as hard as it sounds, but after about the 3 hour mark I started to get the hang of it. Like learning Morse Code I was amazed my brain was able to bend itself around that way of read/speak/reviewing but it actually is possible.
  • You might record multiple chapters spread over different recording and editing sessions, but the end listener will hear them in succession, so take the extra step in post-production of matching the volume levels between the chapters; the listener will notice the differences. (There’s a rough sketch of one way to do this after this list.)
  • This is someone else’s hard work. When you’re being paid to turn it into an audio form, you need to do your absolute best job to make their work SING! Give them your very best, don’t phone it in. If you need to re-record a sentence, a paragraph, a chapter, the WHOLE THING because it doesn’t make the grade, then just do it and do it right!
  • I edited in Ferrite on my iPad (as I do all my podcasts) and there was a strange volume glitch (I submitted a bug report to the developer). I learned to work around it by force-restarting after a second track import, which fixed it, but unfortunately I’d already sent out a badly volume-matched final audio chapter before I realised the problem was with Ferrite. Not a good look.
  • Expect feedback from the client. I didn’t submit a single full chapter without at least one suggestion for improvement. Sometimes the written word just doesn’t translate into a spoken sentence that sounds correct. Some abbreviations should be spoken in full and others not. The pacing of some sentences and the emphasis might need to shift. I had all that sort of feedback but by incorporating it, I know the client will get the result that they want. It’s their book!
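
On the chapter volume-matching point above: one way to take the guesswork out of it is to normalise every chapter to the same integrated loudness target before the final review pass. This is only a rough sketch of the idea using ffmpeg’s loudnorm filter driven from Python; the folder name, file naming and the -16 LUFS target are assumptions for illustration, not my actual Ferrite-based workflow.

  import subprocess
  from pathlib import Path

  # Normalise each chapter to the same integrated loudness so levels match
  # across recording sessions. Assumes ffmpeg is installed and on the PATH.
  TARGET_LUFS = -16   # a common spoken-word target; check your distributor's spec
  TRUE_PEAK = -1.5    # dBTP ceiling
  LRA = 11            # loudness range

  for chapter in sorted(Path("chapters").glob("chapter*.wav")):
      out = chapter.with_name(chapter.stem + "_normalised.wav")
      subprocess.run([
          "ffmpeg", "-y", "-i", str(chapter),
          "-af", f"loudnorm=I={TARGET_LUFS}:TP={TRUE_PEAK}:LRA={LRA}",
          str(out),
      ], check=True)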

Of course, this is the first audiobook I’ve ever recorded for a client, and realistically it wasn’t what my friends and family expected: it wasn’t fiction, I didn’t do any voices, and I spoke in my normal accent. In some audiobooks I’m aware of, narrators tweak sentences and ad-lib to an extent, lending their own personality to the reading. That isn’t always the case and it wasn’t for this book.

Am I Planning Another AudioBook?

Absolutely yes, I am. I’ve done another audition and I’m working on my own series of AudioBooks as an Author-Narrator. The next time I’ll have a much better idea what to expect and am intent on doing an even better job each subsequent book I narrate.

So How Long Was This Book?

The book runs for just a touch under 3 hours, which is quite short for an audiobook but I speak pretty fast. A “normal” narrator should take about 3.5hrs for the same word count. That said my client loved the pacing and that’s what matters to me.

In terms of raw, unedited audio, including all re-records, there was 5.3 hours in total. The entire book took approximately 28 hours to record, edit, re-record, normalise, remove noises, review and organise ready for release.

That’s a lot. I suspect I’ll get better next book but it’s no walk in the park. I lost about 10 hours where I had to re-record effectively a third of the book so that didn’t help…

Conclusion

The book is “The Knack Of Selling” by Mat Harrington. In reading the book I have to admit, I learned a lot of little things I had long suspected were salesperson “tricks” and a few things I hadn’t picked up on too. So to be completely fair, not only did I record this book for Mat, I learned a lot about sales while I was at it!

It’s currently available on iTunes and the Google Play audiobook stores.

I’m planning my own audiobooks in future and I’m going to record some of my accents as well on my profile page at Findaway Voices.

If you’d like me to record your audiobook, reach out and let me know. I’d love to help bring your work to life too!

]]>
AudioBooks 2020-06-18T21:30:00+10:00 #TechDistortion
Until Overcast For Mac Comes Out https://techdistortion.com/articles/until-overcast-for-mac-comes-out https://techdistortion.com/articles/until-overcast-for-mac-comes-out Until Overcast For Mac Comes Out I listen to podcasts a lot, though less since I’ve been working from home full time. I want everything to channel through my desktop when I’m in front of it, so the best option for me is an integrated podcast player that works across Apple’s platforms, including the iPad, iPhone, Apple Watch and macOS. The Apple Podcasts app meets this requirement, but it’s missing Smart Speed, and I don’t like the way it handles playlists, podcast-specific settings and so on, all of which Overcast handles just the way I like. (I’m a creature of habit too, I suppose)

Of course Marco has toyed with spending time developing a macOS port of Overcast, but until that happens I needed a workaround. The requirements for my use case:

  • Use the Macbook Pro Audio System (External Speakers via the Audio Output on my Thunderbolt Dock)
  • Control Playback/Pause from the Macbook Pro keyboard
  • Keep sync position with Overcast

I tried Undercast and a few other web-wrappers but to be honest, they were all terrible. The web player is a bare-minimum, passable option that gets you by in a pinch, but that’s all. Then I remembered you can turn your Mac into an AirPlay receiver using an app from Rogue Amoeba: Airfoil Satellite can be trialled for free, but a licence costs $29 USD (plus applicable taxes). I had a copy lying around from years ago and I always just install it (just in case) on every new machine.

Open Airfoil Satellite and set a Play/Pause shortcut that makes sense for you (I chose Command-Shift-P), then write an AppleScript that activates the app and sends that keyboard shortcut, and give the script a global shortcut via FastScripts. I chose F17 (I love my extended keyboard).

  -- Triggered globally via FastScripts: bring Airfoil Satellite to the front
  -- and send it the Command-Shift-P Play/Pause shortcut configured in the app.
  on run
    if application "Airfoil Satellite" is running then
      tell application "Airfoil Satellite" to activate
      delay 0.2 -- brief pause so the keystroke isn't sent before the app is frontmost
      tell application "System Events" to tell process "Airfoil Satellite" to keystroke "P" using {command down, shift down}
    end if
  end run

It’s not perfect but it meets my criteria. There are other applications out there that do similar things, but since the Catalina update I’ve had trouble with Automator restricting what can be executed as a global shortcut from anywhere, which is why I’ve switched to FastScripts.

Hopefully that’s useful to someone until a native macOS app is released in the future. You can just load up your playlist, pipe it through your desktop speakers, sync position is kept, Smart Speed is your best friend, and away you go :)

]]>
Technology 2020-04-10T08:15:00+10:00 #TechDistortion
Docks And Interference https://techdistortion.com/articles/docks-and-interference https://techdistortion.com/articles/docks-and-interference Docks And Interference For the most part I’ve enjoyed my 13" Macbook Pro TouchBar 2018 model with its questionable keys, but shifting to a fully work-from-home environment due to our unfriendly cold virus in recent times, I’ve begun to rely more heavily on a full-time desk setup. At work in an office I’d be up and down, in and out of meetings, and could write off the occasional glitches as a downside of working in a large downtown office building in the middle of RF pea-soup.

Not so much at home.

As an electrical engineer with a background in radio I’m well aware of the issues with wireless connectivity. Low power wireless in particular, even broadband or spread-spectrum technologies, can be thwarted by enough radio interference. So when I purchased a brand new Apple Magic Mouse 2 a few weeks ago, I could no longer avoid what had been nagging at me for over a year: there seemed to be something wrong with my Macbook Pro’s wireless connectivity. (Spoiler: or so I thought)

Symptoms

I’ve had a Bluetooth Apple Magic Keyboard and Magic Trackpad 2 for over a year and they would occasionally disconnect from the Macbook Pro, and on the keyboard my keystrokes would occasionally lag behind what was shown on the screen. For the longest time I shrugged it off; it was passing and temporary.

Starting to use the Magic Mouse 2, I was irritated within the first minute by a stuttering cursor across the screen. As part of working from home I’ve been on Skype for Business, Microsoft Teams and even (shudder) Zoom audio and video conferences, on some days for 9 hours straight. The obvious thing to reach for is my AirPods. They’re only six months old and the audio in my ears sounded perfectly clear, however I was getting consistent complaints from others on the conference call that my audio was breaking up, yet I was connected to my router by hardwired Ethernet and my upload/download connection speeds were first rate.

Diagnosis

Being a semi-professional podcaster (some say), I had plenty of audio gear to test with, so I quickly connected my MixPre3 and Heil PR-40 to the Macbook Pro. Using the MixPre3 as the microphone and my AirPods as the receiver, there were no issues with audio any more. I also noted that when connected to my iPad or iPhone the AirPods had no microphone drop-outs. At this point it was clear the problem was either proximity to the Macbook Pro, or that the Macbook Pro had some issue with wireless connectivity, specifically with these Bluetooth devices. To further confirm the mouse stutter wasn’t the mouse itself, I borrowed my son’s wired USB mouse and noted that it did not stutter when connected via the USB hub or via the Thunderbolt dock.

Next I cabled my Magic Keyboard 2 to my USB hub, thereby disconnecting its Bluetooth connection. The mouse stuttering continued, though it appeared to be marginally better. With the trackpad and AirPods turned off entirely, the stuttering seemed ever so marginally less pronounced, though it was still visible and jarring.

Then, to isolate further, I disconnected the Macbook Pro from power: no change. I then disconnected the USB hub, and there was the most marked improvement in the stutter yet. That left me with the only other item connected: the StarTech.com Thunderbolt dock. With that disconnected too, the stuttering was gone.

Image of StarTech.com Adaptor The StarTech.com with my attempts to shield and repair the cable

Not Very Useful

I tried wrapping the StarTech.com cable with an RF choke and shielding, but whatever noise it was producing would not be silenced. I needed to connect the Macbook Pro to multiple screens, I needed hardwired Ethernet, and I only had 4 USB-C ports (mind you, that’s better than some of Apple’s laptop machines).

I’d been eyeing one of these off for what seems like years (more like 18 months) so I finally ordered the CalDigit TS3+ Thunderbolt dock. I ordered it via Apple and it arrived only two business days later.

CalDigit TS3+

Devices I currently have plugged into the TS3+:

  • Audio Output to my desktop speakers
  • Hardwired Ethernet to the router
  • Thunderbolt cable to my Macbook Pro
  • DisplayPort to 4K 28" Monitor #1
  • Thunderbolt Downstream to Cable Creation DisplayPort adaptor to 4K 28" Monitor #2
  • USB-A to 8TB HDD
  • USB-A to a Qi Charging Pad
  • USB-C to MixPre3
  • AC Power Adaptor (from the wall socket)

I’ve tested the SD card reader (I can pack away my old multi-card USB 2.0 reader now) and all of the other USB-A ports plus the front USB-C port, though they’re currently vacant. With this dock I packed away my USB-C 61W charger and Apple’s Macbook Pro USB-C cable as well. My Magic Keyboard 2 is back in Bluetooth mode, as are the Magic Trackpad, the Magic Mouse and the AirPods, and guess what?

No Mouse Stutter

No Audio Dropouts of the Microphone from the AirPods

Okay so was this a case of throwing money at a problem to make it go away? Kinda sorta, but truth be told it was more an expensive process of elimination.

Magic Keyboard, Magic Mouse, AirPods All BlueTooth Devices now Happily Working Simultaneously

Interference

The problem lies in one of three places, as it always does with anything wireless. For communication between two points you need A) a transmitter, B) a receiver and C) the transmission medium joining the two. In this case the transmitter probably wasn’t a factor: everything was within tens of centimetres of everything else, so signal strength wasn’t a problem, though interference could still be a factor for a receiver. A broad-spectrum interferer elsewhere would have impacted the devices no matter where I was in the house and no matter what I disconnected, which eliminated a common external interferer.

So it comes back to the transmitter or the receiver, and the perspective of each. The mouse or AirPods (acting as a transmitter, sending data to the Macbook Pro) have only a relatively small battery with which to transmit Bluetooth back to the Macbook Pro. The mouse isn’t a receiver (well, it is, but it’s one we can’t test independently), while the AirPods as a receiver for audio playback (from the Macbook Pro to the AirPods) have the more powerful transmitter in the Macbook Pro to listen to.

If you have a localised interferer it will tend to drown out the nearest radio receiver. In this case, whatever is trying to communicate with the Macbook Pro via Bluetooth is going to struggle to pick out the desired signal over the top of the noisy interferer. How this manifests is lost data from the weaker transmitter (the battery-powered device) to the receiver in the Macbook Pro. In the case of the:

  • AirPods: broken up microphone audio
  • Magic Keyboard: occasionally delayed or lost keystrokes
  • Magic Trackpad: delayed selection/tapbacks, stuttering cursor movements
  • Magic Mouse: stuttering cursor movements

Hopefully that all makes sense but what was causing the interference?

First About Bluetooth

Bluetooth operates between 2.400 and 2.485 GHz, which is a narrow(-ish) 85 MHz of spectrum. Allowing for guard bands at the top and bottom of that spectrum, it uses 79 channels, each 1 MHz wide, with Frequency-Hopping Spread-Spectrum (FHSS) technology. FHSS allows narrowband interference to be avoided by constantly hopping between channels across the band. Of course, that’s fine if you only have narrowband interference. Broadband interferers that spew noise across vast segments of the band will cause enough data loss to drop packets.
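
To put rough numbers on that, here’s a back-of-the-envelope simulation in Python (my own illustrative figures; it also ignores Bluetooth’s adaptive hopping, which makes the narrowband case even better in practice). Hopping randomly across 79 channels sidesteps an interferer that only occupies a few channels, but once the noise covers most of the band, a large share of packets land on an interfered channel anyway.

  import random

  CHANNELS = 79        # Bluetooth Classic: 79 x 1 MHz channels in the 2.4 GHz band
  PACKETS = 100_000    # simulated packets, each sent on a randomly hopped channel

  def loss_rate(noisy_channels):
      """Fraction of packets that land on a channel occupied by the interferer."""
      hits = sum(1 for _ in range(PACKETS)
                 if random.randrange(CHANNELS) in noisy_channels)
      return hits / PACKETS

  narrowband = set(range(3))    # e.g. an interferer about 3 MHz wide
  broadband = set(range(60))    # noise smeared across most of the band

  print(f"Narrowband interferer: ~{loss_rate(narrowband):.1%} of packets affected")
  print(f"Broadband interferer:  ~{loss_rate(broadband):.1%} of packets affected")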

USB 3.0

‘SuperSpeed’ USB (aka USB 3.0) has delivered significantly faster data rates for several years, but as clock speeds increase, so does the frequency of the interference, to the point where the EMI (Electro-Magnetic Interference) emitted is centred around the base clock frequency and multiples thereof, making it difficult to obtain compliance with EMI standards in some frequency bands. To avoid multiple narrow-band EMI peaks across the band, and in an attempt to reduce EMI, the concept of spread-spectrum was applied to data clocking (in a manner of speaking). There’s an excellent article by Microsemi that explains: “Spread spectrum clocking is a technique used in electronics design to intentionally modulate the ideal position of the clock edge such that the resulting signal’s spectrum is ‘spread’, around the ideal frequency of the clock…”. This has the effect of spreading the noise across a very wide frequency range, significantly reducing narrow-band noise, but at the cost of increasing spread-spectrum noise.
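
As a simplified illustration of what spread-spectrum clocking does (the numbers here are arbitrary and nothing like real USB 3.0 clock rates): sweeping a tone’s frequency by a fraction of a percent keeps the total energy the same, but lowers and smears its spectral peak.

  import numpy as np

  fs = 1000.0                    # sample rate (arbitrary illustrative units)
  f0 = 100.0                     # the "clock" fundamental
  t = np.arange(0, 10, 1 / fs)

  # Fixed clock: all energy concentrated in one narrow spectral peak
  fixed = np.sin(2 * np.pi * f0 * t)

  # "Spread" clock: instantaneous frequency slowly swept by +/-0.5%
  swept_freq = f0 + 0.005 * f0 * np.sin(2 * np.pi * 0.5 * t)
  spread = np.sin(2 * np.pi * np.cumsum(swept_freq) / fs)

  window = np.hanning(len(t))
  for name, sig in (("fixed", fixed), ("spread", spread)):
      peak = np.abs(np.fft.rfft(sig * window)).max()
      print(f"{name:>6} clock peak: {20 * np.log10(peak):.1f} dB (relative)")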

Intel released a white paper in 2012 that looked at the practical implementation of USB 3.0 and the impact the technology has specifically on low-powered wireless devices operating in the 2.4 GHz band, namely WiFi and Bluetooth. The following figure is extracted from that white paper and shows the noise increase due to an externally connected USB 3.0 Hard Disk Drive.

USB 3.0 Interference: Credit Intel 2012 Figure 3-3

Intel’s commentary: “…With the (external USB 3.0) HDD connected, the noise floor in the 2.4 GHz band is raised by nearly 20 dB. This could impact wireless device sensitivity significantly…”
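
For a sense of scale (my own arithmetic, not figures from the white paper): a 20 dB rise in the noise floor is a 100-fold increase in noise power, which comes straight off whatever link margin a low-powered transmitter like a mouse or an earbud had to begin with.

  # dB to power ratio: a 20 dB noise-floor rise = 10^(20/10) = 100x the noise power,
  # so the received signal-to-noise ratio drops by the same 20 dB.
  def db_to_power_ratio(db):
      return 10 ** (db / 10)

  noise_rise_db = 20
  assumed_margin_db = 15   # purely an assumed link margin for illustration

  print(f"{noise_rise_db} dB rise = {db_to_power_ratio(noise_rise_db):.0f}x the noise power")
  print(f"A link that had {assumed_margin_db} dB of margin is now "
        f"{noise_rise_db - assumed_margin_db} dB short.")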

The Root Cause

In years past, when I had access to an RF spectrum analyser, I could have connected some probes to stray cables and known for certain, but based on a process of elimination it’s clear that there were two likely interferers, both related to USB 3.0 components:

The StarTech.com dock started to cut out intermittently over 9 months ago. The cut-outs caused a HDD to disconnect multiple times, leading to a lot of frustration with directory rebuilding, reindexing and backup re-uploading, such that I couldn’t leave it connected to my Macbook Pro via the dock anymore. That drove me to seek out an independent USB hub, so I switched to a combination of CableCreation USB-C to DisplayPort adaptors and a cheap Unitek USB 3 hub via a cheap Orico USB-C to USB-A adaptor. This solution worked for a while, but it ultimately consumed too many ports, and once I had shifted to working at home full time, it wouldn’t work.

Through use and abuse, in the case of the StarTech.com dock I’ve come to appreciate that the shielding and cabling were damaged, and in the case of the cheaper USB 3 hub from Unitek, I doubt it was ever particularly well shielded to begin with; I essentially got what I paid for, as it was rather cheap.

USB Hub and Adaptors Miscellaneous Adaptors I Used Along The Way

Well Shielded Cables Please

Poorly shielded cabling on high speed external data buses is far more often the culprit than you might think when you’re experiencing Bluetooth or WiFi issues. Whilst it’s true there are many layers to the comms stack, and it’s also possible the problem is purely software or even a faulty Bluetooth device, swapping out cables and docks may well solve your problems definitively.

I like to think about shielding as the bottle and RF Noise as the genie. Once that shielding is damaged or if it’s poorly designed or constructed, it lets the genie out of the bottle and once it’s out, it’s incredibly difficult to stop it interfering with other devices.

My advice: choose your USB hubs, devices and cables with care and treat them well, lest that EMI genie be let out of its bottle.

Hopefully this helps someone trying to understand why their BlueTooth devices are misbehaving, when said devices are in otherwise perfect condition.

]]>
Technology 2020-04-08T21:25:00+10:00 #TechDistortion
Kia Optima 2018 Auto-Steer https://techdistortion.com/articles/kia-optima-2018-auto-steer https://techdistortion.com/articles/kia-optima-2018-auto-steer Kia Optima 2018 Auto-Steer My rental vehicle in the US was a Kia Optima FE and it had a lot of extra little features I’d never been exposed to before. The one of most interest was auto-steer, or “lane keep assist” as it’s sometimes referred to.

The way I discovered it had this feature was driving to Austin on a slow left-hand bend, when I felt the steering wheel start to pull me off the road. Ever so slightly disconcerting at 70mph! What the heck was tugging on the steering wheel? I initially thought the car needed a wheel alignment or the tyre pressures were badly off.

Thinking back I’d been having warning alerts go off in the hour previously but didn’t know what they were for. I realised that it was complaining about my lane position. One of the challenges when you’re driving on the other side of the road is that the sight-line you’re used to using from the driving position to the center or outside lines of the road to get your correct road position is thrown out by sitting on the other side of the vehicle.

After a few days of driving on the right-hand side of the road I’d retrained my brain, so that’s fine, but the car had been pointing this out to me for several hours before I realised what it was doing. (Please note: I wasn’t drifting OUT of my lane, but I was too far across to the right-hand edge of my lane; not enough to cause an incident, but enough to upset lane-keep assist.)

Back to Auto-steer. I realised through observation that the green steering wheel icon would appear at speeds above 40mph when the car could “see” solid or regularly dashed lines on either side of the roadway ahead of it. If it did see them I could let go of the wheel for a period of time and the car would then keep itself in the lane. It worked well enough but there were a few little problems.

  • Sharper bends were a fail: I pushed the car’s limits a bit on this one, with my hands at the ready as I let it steer through ever-sharper turns, but ultimately, having pushed it too hard, I learned not to trust it to steer itself on anything other than the most gentle of curves in the road
  • Missing lines caused jerking: This is what happened in the first incident I mentioned - there was a gap in the outside line of the road due to a series of driveway entrances on a more rural section of the highway which confused the auto-steer system
  • The no-hands-on-wheel alarm: After about 20 seconds of not touching the wheel, the system would alert you to the fact you hadn’t been holding it and cut out auto-steer if you didn’t grab the wheel. In practice, when I was lightly holding the wheel it wasn’t detected at all, especially on a straight stretch of roadway, and I had to forcibly inject a small correction into the wheel, even if it wasn’t warranted, to convince it I was actually holding the wheel.
  • On freeways with lots of merges it’s rough: Particularly in heavy traffic I just turned it off and stopped using it. It wasn’t safe and I didn’t trust it. To be fair I have the same policy with cruise control; it has no place in heavily congested traffic at those speeds.

It’s not all bad news and limitations however:

  • You don’t drift if you look away anymore: You can say “always keep your eyes on the road” as much as you like, and if you need something from the passenger seat, the glove box, sometimes even the radio, the advice is “pull over until it’s safe to do so”. The counter-argument with freeways is that this isn’t usually practical: most freeways don’t have wide enough shoulders to safely stop, there’s too much traffic to safely stop, and they don’t have enough exits set aside for breaks; once you’re on it, you’re stuck on it. Hence if you do look away from the road, no matter how good a driver you are, you’ll start to drift the car in the direction that you look or lean. With this feature, that’s no longer an issue.
  • Less tiring: I wouldn’t have thought it would have such an impact but driving back late at night when you’re tired the Auto-Steer made a huge difference. I found I could focus more on the cars around me (the few that were) and the map guidance and let the car take that cognitive load off of me. It worked really well.

I’m strongly considering a Tesla Model 3 or Model Y in a few years’ time when it’s time for my next car, and I’m now more excited than ever that this kind of technology is becoming cheaper and hence more accessible. Whilst the Kia implementation (according to other reviews I’ve read) isn’t as good as Tesla’s, it’s still good enough to be useful and I’m glad I had it.

]]>
Technology 2019-11-12T06:00:00+10:00 #TechDistortion