For some reason, I have a printer at home. I think I bought it for printing wet signature-requiring legal documents a few years back, and buying a printer was cheaper/easier than getting things remotely printed and posted back to me. It’s a cheap-ish Brother greyscale laser printer.
Whatever the reason, since having the printer I have become immensely popular with my family, as it turns out that no one else near us has one. Despite no one else owning one, it also turns out that people do need to use them from time to time – e.g. for printing out shipping labels, gift voucher codes for birthday cards, ID document scans for certification, etc.
TL;DR (can I see your setup?): see this note.
–
I’ve now been exclusively using aerc for my day-to-day email workflows for a few months. This has been my first proper foray into using terminal-based mail clients as I never fully got around to trying other ones, such as Mutt (and NeoMutt), but had recently read good things about aerc in various threads and wanted to give it a go. From what I read, it seemed to be modern and actively developed, with a good ecosystem, and with a focus on being user-friendly and extensible.
This note documents my current setup for email.
It includes:
To securely store credentials for syncing, I use `pass`. I recommend installing this for your system and setting it up with a GPG key.
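As a minimal sketch of that setup (the GPG key ID and entry name below are just examples):

```bash
# Initialise the password store against an existing GPG key (key ID is illustrative)
pass init "A1B2C3D4E5F6A7B8"

# Add a credential; pass prompts for the secret and encrypts it with the key above
pass insert email/fastmail

# Print it back out (e.g. for a mail sync tool to read)
pass show email/fastmail
```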
Obsidian is a note/knowledge management tool I’ve used a fair amount in the past (before moving to nb). Below I log some considerations I dealt with whilst using Obsidian.
When I used Obsidian, I synced it to my devices using iCloud (though I don’t know if this is relevant to this particular issue). At various points, I noticed that some notes would completely disappear. Sometimes this would happen even whilst I was editing them, or I would see the note contents just go blank whilst the note was open. This was stressful if I was in a meeting or needed the note at that time (though luckily cmd-Z would usually re-surface the note contents back into the buffer).
...A collection of “things”, software and systems I use on a frequent or daily basis.
I enjoy using open source, self-hosted, and independent software where possible. I like the reliability and quality of Apple hardware, and often the combination of the two works best for me.
Whilst browsing my GitHub home feed a little while back (not something I’m in a habit of doing, generally), I stumbled upon the command line journaling tool `jrnl`. I thought it looked interesting, and so subsequently posted about it and had a good discussion on this and alternatives over on Mastodon. `jrnl` also gave Kev Quirk the idea to create his own journaling tool.
During the conversation, Tucker McKnight mentioned that he uses a tool called `nb` for journaling and note taking. It wasn’t something I’d heard of before, and so I was intrigued.
As of September 2024 I use `nb` as my primary note-taking and knowledge management program.
On a Mac, it can be installed from Homebrew: `brew install nb`.
The `nb` directory
I sync my `nb` documents via Syncthing, and therefore need the tool to use my Syncthing directory as its “home”: `nb set nb_dir ~/Syncthing/nb`.
To migrate my notes from Obsidian, I created the top-level directories (which `nb` calls “notebooks”) in `~/Syncthing/nb` and copied the `.md` files from Obsidian into the appropriate directories.
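As a rough sketch of that migration (the notebook names and Obsidian path are examples rather than my actual layout):

```bash
# Point nb at the synced directory
nb set nb_dir ~/Syncthing/nb

# Create notebooks matching the old Obsidian top-level folders
nb notebooks add personal
nb notebooks add work

# Copy the markdown files across and check nb can see them
cp -r ~/Obsidian/personal/*.md ~/Syncthing/nb/personal/
nb personal:list
```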
Many moons ago I would write web applications using technologies like PHP or Python to directly serve web content – often using templating engines to more easily handle data display. These apps would thus ship server-side rendered plain HTML, along with a mix of browser-native forms and AJAX (sometimes with jQuery) for some interactivity and more seamless data submission. I’d use additional lightly-sprinkled JavaScript for other UI niceties.
Over the past ten years or so, this all changed and my web apps – including those developed as part of my work and nearly all of my side projects – switched to using front-end frameworks, like React or Vue, to create single page apps (SPAs) that instead solely consume JSON APIs. The ease of adding front-end dependencies using `npm` or `yarn`, advancements in modular and “importable” JavaScript, and browser-native capabilities like the Fetch API’s `json()` function encouraged this further, and the ecosystem continued to grow ever larger. Front-end code would run entirely independently from the back-end, the two communicating (often) only via JSON, and with both “ends” developing their own disparate sets of logic to the extent where progressive web application front-ends could leverage service workers and run entirely offline.
A few weeks back (as of 2024-09-01) I posted about a tool called `jrnl` on Mastodon. The post got a fair amount of engagement and discussion, and I’ve also given it a go over the past couple of weeks myself.
`jrnl` is a command-line tool for keeping a journal. It can be used for quick single-sentence entries or for long-form journaling. Similar to Taskwarrior, it’s a tool that does what it needs to do and then gets out of the way (i.e. you don’t need to open an app and click around to get to your journal).
If you’ve followed my blog or other notes, you’ll be well aware of my extensive use of Tailscale for nearly everything – securing access to remote servers and for connecting to services across my local network.
Tailscale DNS is great for assigning DNS names to individual hosts, but my approach (usually) involves using Docker containers, with which I run several services on a single host. It is then fiddly (or impossible? I’m not sure) to use a single Tailscale “machine” to expose multiple services on the same host nicely via the DNS system.
...Restic has been my go-to backup tool for many years, and I’ve used it for a wide range of workloads – including for managing filesystem, Postgres, Photoprism and Vaultwarden backups. When I was researching suitable backup options for these cases, and considering my personal requirements, I often stumbled upon discussions comparing Restic to a similar tool – Borg Backup – but I eventually settled on using Restic myself, due to its flexibility and the amount of data (measured in terabytes) I intended to back up.
I self-host a Vaultwarden instance to manage my usernames, passwords, two-factor codes, and other details for my accounts everywhere.
Standard Bitwarden clients (including browser extensions and mobile apps) can use Vaultwarden instances as their backend server.
In this note I describe my particular setup.
To begin, set up a new `docker-compose.yml` file that includes the Vaultwarden and reverse-proxy containers.
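I won’t reproduce my exact file here, but a minimal sketch looks something like the following (the image tags, the choice of Caddy as the reverse-proxy, and the domain are assumptions to illustrate the shape of it):

```yaml
version: '3'

services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    environment:
      - DOMAIN=https://vault.example.com  # illustrative domain
    volumes:
      - ./vaultwarden-data:/data
    expose:
      - 80

  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro  # Caddyfile proxies the domain to vaultwarden:80
      - ./caddy-data:/data
```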
I’ve switched from managed photo providers (such as Apple or Google Photos) to a self-hosted Photoprism instance. This note documents my setup.
I use Linode for this, and you can get a free $100 credit using this referral link.
Update 2022: More recently I’ve migrated Photoprism over to a home-based Raspberry Pi, which works great.
Provision a new server or instance (at least 4GB memory and 2 CPU cores), and create and attach a big (I use 1TB) volume.
...I subscribe to a small number of podcasts. I usually listen to them when at the gym or out walking, and so if I subscribe to too many I just can’t get through them all each week!
That said, I’m always open to good recommendations if you have any.
Most of these podcasts should be available by searching your podcast app. (I use Overcast on my phone).
My current subscriptions are:
...The tempting approach is just to use a publicly accessible bucket in “website mode” for a website (since the website itself will also be public), and then use CloudFront with the bucket’s website URL as the origin, but this feels lazy. Instead, use a private bucket and CloudFront with OAC to securely access and serve content from the private bucket.
First, set up a bucket policy for the bucket, referencing the AWS account ID and preferred CloudFront distribution ID (the exact policy comes directly from the link above):
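Its general shape is something like the following – the bucket name, account ID, and distribution ID are placeholders to substitute with your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipalReadOnly",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::YOUR_ACCOUNT_ID:distribution/YOUR_DISTRIBUTION_ID"
        }
      }
    }
  ]
}
```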
...This is a reference quick-start note for deploying PostgreSQL via Docker, and with working self-signed TLS.
E.g. with one year expiry:
```bash
mkdir postgres-certs && cd postgres-certs
openssl req -new -x509 -days 365 -nodes -text -out server.crt -keyout server.key
sudo chown 999 server.key
sudo chmod 0600 server.key
```
Postgres key permissions are fussy. In this case, we set the key to be owned by the `postgres` user in the container (`999`), which you may not want to do if you’re on a shared environment. See this thread for more info.
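To give an idea of how the certificates get wired in, here is a sketch of a compose file – the image tag, paths, and port mapping are illustrative, so adjust to your own layout:

```yaml
version: '3'

services:
  postgres:
    image: postgres:15
    # Enable TLS and point Postgres at the mounted cert and key
    command: >
      postgres
      -c ssl=on
      -c ssl_cert_file=/var/lib/postgresql/server.crt
      -c ssl_key_file=/var/lib/postgresql/server.key
    environment:
      - POSTGRES_PASSWORD=CHANGEME
    ports:
      - "5432:5432"
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      - ./postgres-certs/server.crt:/var/lib/postgresql/server.crt:ro
      - ./postgres-certs/server.key:/var/lib/postgresql/server.key:ro
```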
This is a reference quick-start note for deploying MongoDB via Docker, and with working self-signed TLS.
Note: This setup does not yet consider replica sets. Coming soon…
E.g. with one year expiry:
```bash
openssl req -nodes -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365
cp cert.pem certificateKey.pem
cat key.pem >> certificateKey.pem
```
Make sure to reference the correct locations via volume mounts.
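A sketch of what that looks like in practice – the `mongod` TLS flags are standard, but the image tag, container paths, and port mapping here are just illustrative:

```yaml
version: '3'

services:
  mongo:
    image: mongo:6
    # Require TLS and point mongod at the combined certificate/key file
    command: >
      mongod
      --tlsMode requireTLS
      --tlsCertificateKeyFile /etc/ssl/mongo/certificateKey.pem
    ports:
      - "27017:27017"
    volumes:
      - ./mongo-data:/data/db
      - ./certificateKey.pem:/etc/ssl/mongo/certificateKey.pem:ro
```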
...A scratchpad of links.
…`<div>` element.
My personal website is generated using Hugo, which allows me to write nearly all of the actual content itself in plain markdown files.
I also maintain a Gemini capsule (hosted at gemini://wilw.capsule.town). For a while I’ve wanted to be able to add more content to this capsule, and to try and keep it updated more consistently over time. However, I don’t really have the capacity to duplicate the time taken to maintain the site (and its blog posts and notes) in order to do so.
Some of my personal projects are beginning to get larger and more complicated, often involving different front-ends and services that all need to be separately built and deployed. Managing all of these is taking away more of my personal time, and I’ve been on the look-out for a good CI/CD automation system for these projects. I primarily use Gitea as a git server, and have been struggling to find a system that suited my needs and works well with Gitea.
As well as serving content from a storage zone, Bunny CDN can also serve from other origins. For example, S3-compatible storage solutions.
For example, this website is stored on Linode Object Storage, and served via Bunny CDN.
In this note I describe how to achieve this. The note uses Linode as an object storage provider, but any S3-compatible storage provider should work the same.
Step 1: Create a new bucket.
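For example, using `s3cmd` (covered in a separate note) against Linode’s Object Storage – the bucket name and cluster endpoint below are illustrative:

```bash
# Create the bucket on the chosen Linode Object Storage cluster
s3cmd mb s3://my-website-bucket \
  --host=us-east-1.linodeobjects.com \
  --host-bucket="%(bucket)s.us-east-1.linodeobjects.com"

# Upload the built site and make the objects readable so Bunny can pull them
s3cmd sync --acl-public ./public/ s3://my-website-bucket/
```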
...I use Woodpecker CI to automate my personal CI/CD processes. This note documents my setup.
Create a `docker-compose.yml` and bring the service up (notes on the environment below):
```yaml
version: '3'

services:
  woodpecker-server:
    image: woodpeckerci/woodpecker-server:latest
    ports:
      - 8000:8000
    expose:
      - 8000
    volumes:
      - ./woodpecker-server-data:/var/lib/woodpecker/
    environment:
      - WOODPECKER_OPEN=false
      - WOODPECKER_HOST=https://your.host
      - WOODPECKER_AGENT_SECRET=SECRET
      - WOODPECKER_GITEA=true
      - WOODPECKER_GITEA_URL=https://git.wilw.dev
      - WOODPECKER_GITEA_CLIENT=GITEA_CLIENT
      - WOODPECKER_GITEA_SECRET=GITEA_SECRET

  woodpecker-agent:
    image: woodpeckerci/woodpecker-agent:latest
    command: agent
    depends_on:
      - woodpecker-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WOODPECKER_SERVER=woodpecker-server:9000
      - WOODPECKER_AGENT_SECRET=SECRET
```
Replace the `WOODPECKER_HOST` variable with the URL/host required, and also the URL for your Gitea instance.
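Once the server and agent are up and a repository is enabled in Woodpecker, builds are defined per-repo. A hypothetical `.woodpecker.yml` for a Node project might look like the following – the step name, image, and commands are illustrative, and newer Woodpecker releases use `steps:` rather than `pipeline:` as the top-level key:

```yaml
pipeline:
  build:
    image: node:18
    commands:
      - yarn install
      - yarn build
```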
Bunny CDN provides a cost-effective, easy-to-use, and highly performant service for delivering web content. I currently use it to serve this website, as well as a few other systems.
In Bunny, web content is categorised into “pull zones”, where each can have an origin and a number of configurable settings. Typically you’d use one pull zone per website/app. If you’ve used AWS CloudFront before, these are the same as “distributions”.
...I self-host a number of services - Nextcloud, FreshRSS, PhotoPrism etc. - at home on a Raspberry Pi. Attached to this Pi is a large SSD to hold the service data and configuration, and all of this is periodically backed-up via Restic to a remote site.
The SSD is simply formatted with `ext4`, and the directory containing all of these services and data is currently encrypted using fscrypt. I (mainly) want to encrypt the data in order to protect against break-in and theft at my house, however likely or unlikely that is to occur.
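The rough shape of that setup is below – a sketch only, assuming the SSD’s filesystem is mounted at `/mnt/ssd` (the device name and paths are illustrative):

```bash
# Enable the ext4 encryption feature on the SSD's filesystem
sudo tune2fs -O encrypt /dev/sda1

# One-time fscrypt setup, globally and for the mounted filesystem
sudo fscrypt setup
sudo fscrypt setup /mnt/ssd

# Encrypt the directory that holds the service data, protected by a passphrase
mkdir -p /mnt/ssd/services
fscrypt encrypt /mnt/ssd/services
```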
I run a number of services on my own machines and VPSs, and for each system I ensure there are appropriate backups in place.
I usually use Backblaze B2 as a backup solution due to its low costs, simplicity, and ease of use.
Linode’s Object Storage service is also great, but I use Linode to handle nearly 100% of my workloads and it feels safer (and more compliant with the 3-2-1 principle of backing-up) to store backups with a different datacentre and provider from the primary data store.
...I’m interested in being able to look after my own data, as much as possible. I try to ensure that this is at least (almost) as convenient as commercial offerings where I can (and so there is probably much more I can do).
Most of the services I self-host run on Linode servers. I can definitely recommend them for their security, ease of management, and performance. For personal services, I use a home-based Raspberry Pi.
...My notes are kept using Joplin. I use Joplin server to keep all of my devices in-sync.
Below is a `docker-compose.yml` I use to run the server.
```yaml
version: '3'

services:
  db:
    image: postgres:13
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    restart: unless-stopped
    environment:
      - POSTGRES_PASSWORD=CHANGEME
      - POSTGRES_USER=joplin
      - POSTGRES_DB=joplin

  app:
    image: joplin/server:latest
    depends_on:
      - db
    ports:
      - "8084:22300"
    restart: unless-stopped
    environment:
      - APP_PORT=22300
      - APP_BASE_URL=http://CHANGEME
      - DB_CLIENT=pg
      - POSTGRES_PASSWORD=CHANGEME
      - POSTGRES_DATABASE=joplin
      - POSTGRES_USER=joplin
      - POSTGRES_PORT=5432
      - POSTGRES_HOST=db
```
Note: I run my Joplin server in my Tailscale network, so I don’t bother with TLS certs, but you may wish to put a reverse proxy in front if you run it publicly.
...Every Mac user seems to have a different way of managing their open applications and windows.
Some people prefer to view each window in “full” mode, in which they take up the entire display and the user can cycle apps or use the dock to change the active window. Other people use full-screen mode and/or swipe between desktop spaces to find their apps, or a mixture of several approaches.
I have a small fleet of Raspberry Pis (mostly the Pi 4 Model B), which I use at home for various tasks.
They are all firewalled off and are reachable via Tailscale. A small number of the services they run are also exposed to the local network.
As of the time of writing, services I run on the Pis include the following:
I use large SSDs (e.g. this one) for the Pis that require extra storage, such as for my photos. These are connected via USB3-SATA cables (such as these).
...This note documents my setup for building a sane-ish (for me) software environment on macOS for work, development, and life.
(`open /Users` and then drag homedir to the sidebar)
From the App Store, install and configure:
From publisher websites, install and configure:
...To free, open-source, hobby services and products.
| Product/service | Link | Donation |
|---|---|---|
| FreshRSS | Liberapay | £3 / month |
| Fosstodon | Patreon | £4 / month |
| Photoprism | Patreon | £2 / month |
| Joplin | Patreon | £1.50 / month |
If you would like to support me or my work, then you can do so on the donate page.
...Back in 2021 I blogged about how and why I wanted to switch from Google Photos as a storage solution (and source of truth) for my life’s photo and video library. The post compared several solutions, such as Piwigo, Mega, and Nextcloud.
Since then I’ve tried several further options, starting with pCloud (as described in that post), Nextcloud backed by S3, and plain-old Apple Photos.
2FAuth is a self-hostable web service for managing two-factor authentication tokens. It has a nice, clean interface, is responsive (so it can be simply added to the homescreen on iOS), and has various useful features:
Running the service itself is as straightforward as bringing up the supplied `docker-compose.yml` file. However - as noted in this blog post - if, like me, you plan to run it inside of a Tailscale tailnet, you’ll need to ensure HTTPS is enabled in order for the camera to work. To do this, attach a reverse-proxy armed with the required certificates.
Update 2024-09-01:
I have since written a note on Tailscale Sidecars which provides a more elegant solution to this problem. I have left this post here for posterity.
Tailscale’s HTTPS feature is an excellent tool for adding TLS connections to web services exposed over the tailnet.
Although traffic over the tailnet is encrypted anyway due to the nature of Tailscale itself, some web-based services work better when served over HTTPS. Since the browser does not know that you are accessing the service over a secure connection, it may enforce limits on connected web services when accessing them in - what feels like - an insecure context.
The Bear notes app has been my go-to notes app for Mac, iPhone, and iPad for some time now. It’s got a great UX, a customisable UI, and is one of those apps that feels like a (clichéd) “delight” to use.
Bear is written exclusively for Apple devices, and uses CloudKit to sync notes between devices via iCloud. In theory, this isn’t too much of a problem. However, I’ve recently found CloudKit-reliant apps to become a little unreliable.
`s3cmd` is an excellent tool for interacting with S3-compatible storage solutions (Amazon S3, Backblaze B2, Linode Object Storage, etc.) via the command line.
I personally find it easier to use and more intuitive than the `aws` equivalent.
Installing `s3cmd`
The tool can be installed (on a Mac) via Homebrew:
```bash
brew install s3cmd
```
Configuring `s3cmd`
The tool comes with a `--configure` command in order to create a valid configuration file.
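Once configured, day-to-day usage is pleasantly simple (the bucket and file names below are just examples):

```bash
# List buckets and objects
s3cmd ls
s3cmd ls s3://my-bucket/

# Upload, download, and sync
s3cmd put backup.tar.gz s3://my-bucket/backups/
s3cmd get s3://my-bucket/backups/backup.tar.gz
s3cmd sync ./site/ s3://my-bucket/site/
```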
Gitea is a fantastic GitHub-like service for git remotes and for viewing code and git projects via a web-browser. One can join existing instances (such as Codeberg), or self-host it.
I self-host a Gitea instance. I use a `docker-compose.yml` file like the one below.
version: "3"
services:
gitea:
image: gitea/gitea:latest
restart: unless-stopped
environment:
- USER_UID=1000
- USER_GID=1000
networks:
- traefik_net
volumes:
- ./gitea_data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "22:22"
expose:
- 3000
labels:
- traefik.http.routers.gitea.rule=Host(`domain.com`)
- traefik.http.routers.gitea.tls=true
- traefik.http.routers.gitea.tls.certresolver=myresolver
- traefik.http.services.gitea.loadbalancer.server.port=3000
networks:
traefik_net:
external: true
Change the domain at which you host the instance and set up the DNS/labels as described in the Traefik note.
...For sensitive data storage in the cloud, I will usually provision a separate volume, encrypt it, and then mount it as the data volume for containerised services.
I use Linode to host the vast majority of my services. In Linode, new volumes can be easily created and attached to an instance.
After a short while the instance will then recognise the new device and make it available via the OS.
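As a sketch of the encryption step (assuming the volume appears as `/dev/sdc`; the device name, mapper name, and mount point are illustrative):

```bash
# Encrypt the raw volume with LUKS, then open, format, and mount it
sudo cryptsetup luksFormat /dev/sdc
sudo cryptsetup luksOpen /dev/sdc secure-data
sudo mkfs.ext4 /dev/mapper/secure-data
sudo mkdir -p /mnt/secure-data
sudo mount /dev/mapper/secure-data /mnt/secure-data
```

The mounted path (`/mnt/secure-data` here) can then be used for the containers’ volume mounts.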
...I use Nextcloud as an alternative to Google Drive or iCloud. I self-host it on a VPS or Raspberry Pi and make it available only within my Tailscale network so that it is only accessible to my own authorised devices.
Nextcloud is relatively straightforward to run using a `docker-compose.yml` file:
```yaml
version: '3'

services:
  db:
    image: mariadb:10.5
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: unless-stopped
    volumes:
      - ./nextcloud-data/db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=CHANGEME
      - MYSQL_PASSWORD=CHANGEME
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud:22
    restart: unless-stopped
    ports:
      - 8083:80
    volumes:
      - ./nextcloud-data/storage:/var/www/html
```
Bring the service up, navigate to the IP address or domain, and configure the instance to get started.
...When browsing Twitter, I use a self-hosted Nitter instance. Nitter allows you to browse Twitter using a more privacy-respecting front-end.
To run Nitter, create a `docker-compose.yml`:
version: "3"
services:
nitter:
image: zedeus/nitter:latest
ports:
- "8085:8080"
volumes:
- ./nitter.conf:/src/nitter.conf:ro
depends_on:
- nitter-redis
restart: unless-stopped
nitter-redis:
image: redis:6-alpine
command: redis-server --save 60 1 --loglevel warning
volumes:
- ./nitter-redis:/data
restart: unless-stopped
Simply bring up the service, navigate to the IP address or domain, and get started.
...When browsing Reddit, I use a self-hosted Teddit instance. Teddit allows you to view Reddit using a nicer and more privacy-respecting front-end.
To run Teddit, create a `docker-compose.yml`:
version: "3"
services:
teddit:
image: teddit/teddit:latest
environment:
- REDIS_HOST=teddit-redis
ports:
- "8086:8080"
depends_on:
- teddit-redis
teddit-redis:
image: redis:6.2.5-alpine
command: redis-server
environment:
- REDIS_REPLICATION_MODE=master
Simply bring the service up, navigate to the IP address or domain for the service, and you’re ready to go.
...For managing and consuming RSS feeds, I use FreshRSS.
To run a FreshRSS instance, create a `docker-compose.yml` file:
version: "3"
services:
freshrss:
image: freshrss/freshrss
environment:
- "CRON_MIN=3,33" # Change this if you like
- TZ=Europe/London
volumes:
- ./freshrss_data:/var/www/FreshRSS/data
- ./freshrss_extensions:/var/www/FreshRSS/extensions
restart: unless-stopped
ports:
- 8081:80
I run FreshRSS within my Tailscale network and so I do not bother with TLS certificates or a reverse proxy. If you do want to use one, look at the Traefik note.
...I run most of my personal services within my Tailscale network, and so I do not tend to bother with HTTPS (since provisioning TLS certificates for multiple services running on a single host via Tailscale is a pain).
However, for external services that need to be accessed publicly, I use Traefik as a reverse proxy and for managing and provisioning TLS certificates.
Run Traefik using Docker. Create a `docker-compose.yml` file in a directory called `traefik`:
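A minimal sketch of such a file – consistent with the `myresolver` certificate resolver and the external `traefik_net` network referenced by the labels in my other notes, with the Traefik version, email address, and ACME storage path as placeholders – looks something like this:

```yaml
version: '3'

services:
  traefik:
    image: traefik:v2.9
    restart: unless-stopped
    command:
      # Watch Docker labels on other containers and route to them
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      # Let's Encrypt resolver referenced by the "myresolver" labels
      - --certificatesresolvers.myresolver.acme.tlschallenge=true
      - --certificatesresolvers.myresolver.acme.email=you@example.com
      - --certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt
    networks:
      - traefik_net

networks:
  traefik_net:
    external: true
```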
I use Umami for analytics on this website and a few other services that I run. I self-host Umami using Docker.
To start, create a `docker-compose.yml`:
```yaml
version: '3'

services:
  umami:
    image: ghcr.io/mikecao/umami:postgresql-latest
    environment:
      DATABASE_URL: postgresql://umami:CHANGEME@umamidb:5432/umami
      DATABASE_TYPE: postgresql
      HASH_SALT: CHANGEME
    depends_on:
      - umamidb
    restart: always
    expose:
      - 3000
    networks:
      - traefik_net
    labels:
      - traefik.http.routers.umami.rule=Host(`CHANGEME`)
      - traefik.http.routers.umami.tls=true
      - traefik.http.routers.umami.tls.certresolver=myresolver

  umamidb:
    image: postgres:12-alpine
    environment:
      POSTGRES_DB: umami
      POSTGRES_USER: umami
      POSTGRES_PASSWORD: CHANGEME
    volumes:
      - ./sql/schema.postgresql.sql:/docker-entrypoint-initdb.d/schema.postgresql.sql:ro
      - ./umami-data:/var/lib/postgresql/data
    restart: always
    networks:
      - traefik_net

networks:
  traefik_net:
    external: true
```
Since Umami must be publicly-accessible (so it can measure analytics on public websites), it must also be served over HTTPS.
...Alpine Linux ships with several BusyBox utilities, including many day-to-day shell commands.
One such example is the `unzip` tool for extracting ZIP files.
Recently, when unzipping a large archive on Alpine, I kept getting this `short read` error message:
```
$ unzip archive.zip
Archive:  archive.zip
unzip: short read
```
I tried re-creating the archive a few times to see if it was an issue with the zip file itself, but to no avail. I still don’t know what the `short read` error is referring to, but the only thing I can think of is the size of the file (around 100GB).
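One workaround worth trying (an assumption on my part rather than a confirmed fix) is to install the full Info-ZIP `unzip` package, which takes precedence over the BusyBox applet and tends to cope better with very large archives:

```bash
# Install the full Info-ZIP unzip and retry the extraction
apk add --no-cache unzip
unzip archive.zip
```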
I use a self-hosted Monica instance to keep on top of birthdays and other useful/interesting notes about friends and family.
This note documents my setup.
Create a `docker-compose.yml` file:
version: "3.4"
services:
monica:
image: monica:latest
depends_on:
- monicadb
environment:
- APP_KEY=CHANGEME
- DB_HOST=monicadb
- APP_ENV=nonprod # See note below
- APP_URL=http://CHANGEME
- APP_TRUSTED_PROXIES=*
- MAIL_DRIVER=smtp
- MAIL_HOST=CHANGEME
- MAIL_PORT=587
- MAIL_USERNAME=CHANGEME
- MAIL_PASSWORD=CHANGEME
- MAIL_ENCRYPTION=tls
- MAIL_FROM_ADDRESS=CHANGEME
- MAIL_FROM_NAME=Monica
volumes:
- ./monica_data:/var/www/html/storage
restart: always
ports:
- 8082:80
monicadb:
image: mysql:5.7
environment:
- MYSQL_RANDOM_ROOT_PASSWORD=true
- MYSQL_DATABASE=monica
- MYSQL_USER=homestead
- MYSQL_PASSWORD=CHANGEME
volumes:
- ./mysql:/var/lib/mysql
restart: always
`APP_ENV`
When running with `APP_ENV=production`, Monica enforces HTTPS connections. I run Monica in my Tailscale network without TLS certificates, and so I run Monica in `nonprod` (which I made up) mode to allow this to work.
For several years I’ve been using GatsbyJS to generate the static site content for this website. Gatsby is a great tool and produces blazing-fast websites through the use of an interesting combination of technologies.
In Gatsby, pages are simply React components, and developers can make use of the entire JavaScript and React ecosystems to craft their sites. Config files can be used to create pages that don’t “exist” in your filesystem (e.g. an index page for each tag used in a blog) and GraphQL queries are used to surface content and query data from across the website. Gatsby templates and standard React composition patterns allow for excellent re-use of components.
Making Tax Digital (MTD) is part of the UK Government’s plan for modernising the tax system for both businesses and individuals.
For years, HMRC (the Government’s tax department) has had an online tax system that is infamously complicated and slow to use and update such that even accomplishing simple tasks can be long and painful processes. Part of this is due to the laughably complicated UK tax system itself (rather than the fault of the technology), but some of it can certainly be attributed to the antiquated tooling.
Since getting a Magic Keyboard for my iPad Pro, I’ve been using the iPad for many areas of work for which before I would have needed a laptop.
In fact, last week I was able to use the iPad full-time when at work in our new office space, and I didn’t need to reach for my MacBook once. When I can, I prefer working on the iPad due to its flexibility, brilliant display, speed, battery life, and more.
Over the past couple of years I have been intrigued (and sort of excited) about the ideas behind plain text accounting. To be excited about accounting is probably unusual but I love its simplicity and logic - and the power it gives you.
Plain text accounting is essentially the practice of keeping track of your finances, but in simple plain-text files. This is in contrast to other accounting/bookkeeping systems - such as Xero, Sage, Quickbooks, GnuCash, etc. - which can have proprietary/closed-source/software-specific formats. Plain text accounting provides a number of additional benefits:
...Earlier this week I needed to make some changes and re-deploy an old Vue app. I hadn’t touched the codebase in over a year, and my experience with the rate of change in the front-end web space made me dread what would happen if I tried to re-awaken this thing.
Sure enough, after running a `yarn install` and launching the app using the scripts in `package.json`, a number of errors were displayed about Node/Webpack/Vue incompatibilities, and I didn’t really know where to start. I don’t use Vue on a daily basis these days, and so I don’t usually need to make an effort to keep fully up-to-date on its developments, but I knew I was several versions behind on `vue` and `vue-loader`, as well as all the `sass` and `babel` tooling. This wasn’t going to be a quick fix.
If you’re a current follower of this blog then you may already know that I’m a bit of a fan of using plain text accounting for managing finances.
I mainly use the Ledger text file format and CLI tool for bookkeeping and reporting on finances. This works great, and I can quickly and easily generate different kinds of reports using a range of simple Ledger commands.
For example, to generate a quick income/expenses balance sheet for a particular date range I can run `ledger balance income expense -b 2022/03/01 -e 2022/03/31`, which produces something along the lines of the following:
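The figures and account names below are made up, but the output has this general shape:

```
            £1,250.00  Expenses
              £800.00    Rent
              £450.00    Groceries
           £-2,000.00  Income:Salary
--------------------
             £-750.00
```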
I’m not a designer, so I find it useful to take inspiration from others when building software.
I often talk about self-hosting on this blog, and I’m certainly a big fan of being able to control my own data and systems wherever possible (and feasible). I’ve recently switched from using Nginx to Traefik as a reverse proxy for my server and for terminating TLS connections.
In this post I’ll talk a little about why and how I made this change.
I self-host a number of services; including Nextcloud for file storage and sync, Gitea for git syncing, FreshRSS for RSS feed aggregation and management, Monica for relationship organisation, and a few other things too.
If you’ve ever run your own Nextcloud before, you may have noticed screens like the following in your instance’s settings pages.
The messages advise a number of maintenance procedures to help ensure the smooth running of your instance. These could be to run database migrations or to update schemas in response to installing new apps.
Often these steps might involve running `occ` commands. `occ` is Nextcloud’s command-line interface, and is so-called because of its origins in ownCloud.
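For example, run inside the Nextcloud container as the web-server user (the `app` service name assumes a compose file like the one in my Nextcloud note, and the specific command is just illustrative):

```bash
# Add any database indices that Nextcloud's settings page is warning about
docker-compose exec -u www-data app php occ db:add-missing-indices
```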
I recently signed the web0 manifesto, which embodies many of the values I consider to be important when it comes to technology - and the web in particular.
web0 is the decentralised web… web0 is web3 without all the corporate right-libertarian Silicon Valley bullshit.
Essentially web0 is about empowering a decentralised web that:
In practice this could mean owning your own domain name and taking part by hosting a website or through getting involved in other communities, such as in the tildeverse. The key thing is that participants own and can control their own data and that things are accomplished without needing to rely on big-tech.
🎉 This is post 100 in my attempt at the #100DaysToOffload challenge!
For a couple of years I have been writing mobile apps using the Flutter framework, having previously been a React Native advocate. Flutter is a great tool for writing applications that target multiple platforms and architectures from one code base - and not needing to write any JavaScript is definitely a bonus!
I use and recommend Firebase Cloud Messaging to handle push notifications in these applications. There’s also a great library for Flutter - Flutterfire - to handle the setup and receipt of these messages, along with the requesting of push permissions on iOS. The set-up takes away the pain of managing cross-platform notifications in Android and iOS applications.
For as long as I’ve been using Matrix I’ve hosted my own homeserver on my own VPS and at my own domain.
I previously wrote about how I self-host my homeserver with the help of the Synapse project. Although this set-up is quite straightforward, it’s an extra system to maintain with all of the associated overheads.
One of the reasons I don’t host my own mail server is that I fear missed messages and silent bounces. I trust dedicated mail providers (particularly Fastmail) more than myself in providing a robust enough service to ensure things get through. Equally, if I am telling other people my Matrix handle, then I want to make sure that messages they send (and those that I send) actually get delivered without any problems.
Some people may remember my quest a few months back to find a good alternative to Google Photos for image storage and backup.
At the time, I talked about Piwigo, Mega and pCloud as potential candidates. I also briefly touched upon Nextcloud in that post - a service I use (and self-host) anyway for all of my other storage needs, but I did not consider it further due to the high cost of the associated block storage required to house a large number of images.
Most applications include some sort of outbound transactional email as part of their normal function. These email messages could be to support account-level features (such as password-resets) or to notify the user about activity relevant to them.
In the latter case, such emails might be read, and then archived or deleted by the user, without further direct action. They aren’t typically designed to be something one actions or replies to - they’re mostly there to bring you back into engaging with the platform.
I maintain a small number of projects in my spare time. The amount of time I get to work on and maintain these varies depending on my other workloads.
The projects were never designed to be a means of making additional income, and were usually created simply to solve a need that I (or somebody else I know) had!
By open-sourcing them I hope that others will read through the code, and even check for (and report) problems or potentially contribute. The projects are licensed quite liberally under the BSD license.
A while ago I posted about how I back-up my personal servers to Backblaze B2. That approach involved adding all of the files and directories into a single compressed archive and sending this up to an encrypted bucket on B2.
Whilst this does achieve my back-up goals (and I use Telegram to notify me each time it completes), it felt inelegant. Every time the back-up executed, the entirety of the back-up file - several gigabytes - would be built and transferred.
From the hills of Dusk’s End to the small alleys of Main Street, you feel drawn to the lights of this vibrant metropolis in an uncharted internet territory. The sign reads “Nightfall”.
– Nightfall City
The Nightfall City Gemini capsule (also available via the web) is an internet community in which people can engage with each other and write blog posts and other long-form content.
The community is divided into different districts of Nightfall City: Main Street, Dusk’s End, and Writer’s Lane. These districts allow members (or “citizens”) to post links to their blog posts, zines, or other online articles. From what I can see, people tend to participate in the district or community that best fits them as an individual.
I’ve really enjoyed my recent discovery of a couple of traditional-style webzines. Webzines (sometimes referred to as online magazines, or - in this instance - simply “zines”) are a way of distributing periodic content through the web.
I’m not referring to modern-day online media outlets, but those publications which are typically written by a small number of individuals (often “netizens”) and where the focus is not on advertisements, clickbait, or the mass production of content.
I’ve recently been reminiscing about the “old” days of the web. They felt much more like expressions of personality and creativity.
These days, most people have social media accounts on mainstream services that act as their sole representation of themselves online. Whilst the content can be different, everyone’s own pages end up looking the same, with the layout and look-and-feel of avatar images, feeds, and other components controlled by the service - the creativity is lost and things become bland.
If you run a service that accepts file uploads from users, and then subsequent re-download by other users (such as images), then your service is potentially at risk of becoming a system for distributing malware. Without safeguards in place, bad actors could potentially use your service to upload harmful files with the intention of them being downloaded by other users.
Services like Google Drive and some email providers will automatically scan files for malicious payloads, but if you - like many people - rely on more basic object storage for storing files for your apps, then there may be less default protection available.
I’ve recently noticed (and read) more and more posts discussing *BSD systems. Creations like the new (and excellent) OpenBSD Webzine and blogs (such as Rubenerd’s and Solene’s) do a great job in raising awareness of the family of operating systems.
I’m pretty familiar and comfortable with Linux, having spent many years using it as a daily driver (I am back on macOS full-time right now). Whilst UNIX systems share a lot of similarities, I’ve never properly used a BSD system before.
Another podcast I frequently listen to likely needs no introduction of its own. The This Week in Tech (or just “TWiT”) network’s flagship podcast - also called TWiT - must be one of the longest-running tech podcasts.
The podcast series started back in 2005. It runs weekly, with episodes recorded live each Sunday evening and made available via podcast clients shortly afterwards.
It is hosted by Leo Laporte, who is joined by interesting and varied panelists from across the tech sector. Episodes feature light-hearted discussion of recent news and insights from the technology world.
Last week I gave a talk at the Bitcoin Association BSV Meet-up for Wales, hosted by Tramshed Tech in Cardiff.
Before learning about this meetup, I had not heard of BSV - either from a technology or currency perspective. However, as well as promoting an interesting project, the event welcomes showcases from technologists working across the blockchain space.
At Simply Do we have recently completed a project that aimed to leverage blockchain distributed ledger technology to help protect and manage IP assets in complex international and cross-domain supply chains. The project was a success and this was what I - along with my colleague, John - presented about.
A few years ago I was in the position of needing a solution to backup and sync dotfiles (configuration files for various pieces of software) across my machines.
Specifically, I had Mac computers and Linux servers, and needed a way to nicely keep these files up-to-date between them. For example, I may have spent some time crafting and tweaking files - such as my `.vimrc` and `.tmux.conf` - and needed a way of ensuring all of my devices could access the latest version of these files.
It’s been a few weeks since my last post about the Pinephone. Since then I have been playing further with a different graphical shell and have been trying out new applications.
In that previous post, I noted a few points that made the phone tricky to use as a daily-driver. However, it should be noted that this was (intentionally) based purely on the phone’s out-of-the-box configuration. I fully meant to continue exploring and to discover ways in which the device could become more of a useful daily use phone for me. This post forms part of that journey.
For many developers, the notion of adding accessibility features - such as image `alt` text attributes to web page images and integrations with host usability enhancements, such as screen-zoom and text-to-speech - might feel like a chore. Especially for those still in the startup or “do things that don’t scale” phase.
It should no longer be about “adding” accessibility features any more than one these days “adds” a mobile-friendly version of their site (long-surpassed by responsive design and mobile-first principles) or even “adds” a button to perform a specific task. Accessibility features are a core part of any product, and should factor into the software development process right from requirements engineering through to planning, design, implementation, and testing.
I was performing a standard system upgrade on an Arch server this morning and received the following messages (maintainer details redacted):
```
$ sudo pacman -Syyu
... # Download of packages
(159/159) checking keys in keyring                 [######################] 100%
(159/159) checking package integrity               [######################] 100%
error: fail2ban: signature from "... <...>" is unknown trust
:: File /var/cache/pacman/pkg/fail2ban-0.11.2-2-any.pkg.tar.zst is corrupted (invalid or corrupted package (PGP signature)).
Do you want to delete it? [Y/n] Y
error: failed to commit transaction (invalid or corrupted package)
Errors occurred, no packages were upgraded.
```
I followed advice in the forums and tried refreshing and repopulating the keys, clearing the Pacman cache, and a combination of these things. I still kept getting the same problem each time I tried to upgrade.
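For reference, that advice roughly boils down to the following commands:

```bash
# Refresh the distribution keyring and re-initialise/repopulate the local keys
sudo pacman -Sy archlinux-keyring
sudo pacman-key --init
sudo pacman-key --populate archlinux
sudo pacman-key --refresh-keys

# Clear the package cache and try the upgrade again
sudo pacman -Scc
sudo pacman -Syyu
```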
I self-host several services on various servers - for both some professional and personal uses.
I use automated backup scripts to periodically sync data to Backblaze (which I recently posted about). However, once they were set up I would often worry about whether they were working properly. To verify, I’d have to log into Backblaze and check when the latest backups came through.
Although I trusted the process, this became a bit of a pain and more and more of a constant worry. The script might crash, run out of storage space, or anything else, and I wouldn’t know about it unless I actually checked.
Another project I try to maintain (when I can!) is SSO Tools.
This is a simple web service that aims to help developers test their own services’ single sign-on (SSO) functionality. The motivation behind the project was that many commercial offerings were too expensive for solo developers, or just far too complex for simple testing.
SSO Tools aims to provide a simple interface, with functionality that allows for registering identity providers (IdPs), test IdP users, and service providers (SPs). It is targeted at developers looking to quickly, yet robustly, test and iterate on their SSO setup in their applications.
Many services - including web and mobile apps - allow for their users to upload imagery. This could be to enable users to upload an avatar image or perhaps create a gallery of image files.
Either way, many photos contain some degree of sensitive metadata information as part of their EXIF data. For example, if you take photos using your phone, it is likely that the camera application will embed metadata into the image file it creates. This could include the geocoordinates of the position from where the photo was taken, the make and model of the camera device, as well as lots of other data (exposure time, focus, balances, etc).
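As an illustration of what can be embedded in a file (and how to remove it), `exiftool` makes this easy to see - whether you strip metadata client-side or server-side is a separate design decision:

```bash
# Inspect the embedded metadata (including any GPS tags)
exiftool -a -G1 photo.jpg

# Strip all metadata from the file in place before storing or serving it
exiftool -all= -overwrite_original photo.jpg
```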
Adding theming and the choice between “light” and “dark” modes to your website can enhance your site’s accessibility and make it feel more consistent with the host operating system’s own theme.
With JavaScript (and React in particular) this is easy to do, as I’ll explain in this post. We’ll use a combination of our own JavaScript code, the Zustand state management library, and the styled-components package to achieve the following:
For several years I’ve been a user of Goodreads. It’s a very popular platform, and I primarily use it for keeping track of the books I’ve read, for receiving suggestions about new books, and for keeping up with what some of my friends are reading.
It’s a good service (though sometimes a little slow) - the website and mobile app are nice to use. However, as with any closed system, it’s always a worry of mine to think about what might happen if the service were to disappear or if I were to get locked out for some reason.
I don’t tend to talk much about the projects I’m working on, but thought this would be a good opportunity to write a post about one such project - Treadl.
Treadl is a web app (and more recently and less popularly a mobile app too). It enables weavers to create and store their weaving patterns and projects online. This could be simply for personal use, or for sharing projects with others as a portfolio.
Back in April, I bought a Pinephone. I used the phone quite consistently for the first few weeks and I meant to write an update here a couple of months back, but work (and other things) got in the way a bit.
So, here is my delayed “first few weeks with a Pinephone” update.
As mentioned, I initially aimed to simply use the phone in its out-of-the-box state (i.e. Manjaro Linux with KDE Plasma Mobile) - not as a daily driver, but more as a means of measuring the phone’s base case usability. However, hopefully with an aim to eventually being able to use such a device more full-time.
Some people have complex development processes and flows - making use of tools such as heavy editors and IDEs, Docker for running and building locally in development, or even developing entirely remotely over SSH connections. Other people use simpler combinations of tools.
I thought I’d write briefly about what I use on a daily basis. I have a relatively simple development tech stack:
(the `Terminal.app` application that ships with my Mac, since this works best for me)
I also use a small number of Vim plugins - installed via Vundle - to add nice quality-of-life features to my editor:
Providing code snippets on your website or blog can be a great way to convey meaning for technical concepts.
Using the HTML `pre` tag can help provide structure to your listings, in terms of spacing and indentation, but highlighting keywords - as most people do in their code text editors - also vastly helps improve readability.
For example, consider the below JavaScript snippet.
```javascript
class Greeter {
  greet(name) {
    console.log(`Hello, ${name}!`);
  }
}

const greeter = new Greeter();
greeter.greet('Will');
```
The exact same listing is displayed below with some simple syntax highlighting. The structure and meaning of the code becomes much easier to understand.
The Gemini protocol has gathered even more momentum in the few months since I last posted about it.
Its popularity is largely driven by its privacy-focused and content-oriented design. It doesn’t allow for bloated sites or resource-hungry client-side scripting. It’s a means for simply and easily accessing content that is useful to you - either by hosting a capsule yourself or by joining an existing community.
In this post I am introducing Capsule.Town - a way in which I can try and give back to the FOSS community.
I listen to a number of podcasts each week. One of these is ATP (Accidental Tech Podcast).
This is one of my favourite weekly podcasts. It’s humorous and full of cutting-edge discussion from the tech world, and I always look forward to new episodes.
The episodes are primarily Apple focused, which is fine for me since I’m a big user of Apple products. Some episodes are more technical than others - discussing programming and development approaches - whilst others are focused more on user-facing items.
Many web apps have support for uploading video files. Whether it’s a media-focused platform (such as a video sharing service) or just offering users a chance to add vlogs to their profile - videos are a powerful mechanism for distributing ideas.
For services providing image upload functionality, it is relatively simple to build in processes that extract smaller versions of the files (e.g. thumbnails) to be used as image previews. This allows other users to see roughly what an image is about before opening a larger version. It also enables more interesting, responsive, and attractive interfaces - since the smaller images can be loaded more quickly.
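For video, the same idea usually means extracting a representative frame. A sketch with `ffmpeg` (the timestamp and output size are arbitrary choices):

```bash
# Grab a single frame two seconds in and scale it to a 320px-wide preview image
ffmpeg -ss 00:00:02 -i input.mp4 -frames:v 1 -vf "scale=320:-1" thumbnail.jpg
```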
I enjoy reading my RSS feeds across my devices - whether it’s on my phone when out and about, my Mac in between bouts of work, or my iPad when in downtime.
Being able to sync feeds across these devices is important to me, both so I can maintain a single collection of feeds and to ensure that I can keep track of read/unread articles as I switch devices.
There are lots of web-based clients available, but using Reeder - a native app - gives a far nicer reading experience. There are lots of other clients for other types of devices too.
Image processing and resizing is a common task in many types of applications. This is made even more important by modern phones that take photos several megabytes in size.
For example, if you offer an application that allows people to choose an avatar image, you won’t want to render the full multi-MB size image each time it’s shown. This is extra data for your users to download each time (which costs them both time and money, and can give a poor sluggish experience) and also means you need to fork out more cash for the additional bandwidth usage. If you have lots of users, then this time/money saving can be amplified significantly.
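As a rough example of the kind of resizing I mean, using ImageMagick (the dimensions and quality settings are illustrative):

```bash
# Square-crop a 256x256 avatar thumbnail
convert avatar-original.jpg -resize 256x256^ -gravity center -extent 256x256 avatar-256.jpg

# Produce a web-sized copy no larger than 1200px on its longest edge
convert photo-original.jpg -resize "1200x1200>" -quality 85 photo-web.jpg
```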
In user-facing software, loading indicators are extremely important to let your users know that something is happening. This is the same no matter whether your software is a CLI program, a GUI application, a web app - or anything else.
Without such indicators, users of your software may become frustrated, or assume the program has crashed, and try to close it or leave.
Generally speaking, developers should try to keep long-running tasks to a minimum (or even offload them to a cron-job or an asynchronous server-side worker). However, in some cases this is not possible. For example, in a cloud-based file storage solution, in which uploads and downloads are a core and direct user-facing feature, the user must wait until a bunch of files finish uploading - though any post-processing can of course still occur in the background afterwards.
It’s common knowledge that part of Google’s business model is to use the data it knows about you, your searches, and browsing patterns in order to more effectively serve ads.
Many people feel uncomfortable with this and so there is a strong movement to adopt more privacy-focused options, such as DuckDuckGo. This was my position, too. For a few years I’ve been a solid DuckDuckGo user, and it was my default on Mac and mobile devices.
Wales Tech Week is an annual event held by Technology Connected. The 2021 event is running this week, aiming to bring technologists together from a wide range of businesses and organisations across Wales.
Today, I was a member of a panel discussing blockchain - “Welsh Businesses Bringing Blockchain to Life”. I was speaking alongside experts from other companies working in the blockchain and crypto space, and an academic focused on applying the technology to government functions.
IDEs and richly-featured text editors - such as VS Code and Sublime Text - support many great features. One of these is the notion of projects or workspaces.
Such workspaces let you save your project’s development configuration to disk - things like the project directory, open files, editor layout, integrated terminal commands, and more. Often, each project can have its own workspace, too.
If you use workspaces then you don’t need to go through the tedious process of setting everything back up again each time you switch project, re-open your editor, or reboot your computer.
Recently I’ve noticed that some of the RSS feeds I subscribe to have become more and more restrictive. A post might contain just a title, or perhaps a short snippet or introductory paragraph, with the expectation that I then proceed to follow the link to visit the website itself in order to view the post in full.
I suppose in many ways that this is similar to distributing podcasts via RSS: the feed contains the podcast title, description, and other metadata, and then a link to download the podcast episode file itself. But this is because podcasts are in audio or video format and cannot be reasonably embedded directly into an XML file.
Someone non-technical recently asked me the question, “what actually is a server?”. They knew it was just a type of computer that runs somewhere that can be accessible over the internet, but they were interested in how they differ from “normal” computers.
The conversation moved on to how these computers can make several different functions available at the same time over the network, which brought us on to the topic of services and network ports.
For a couple of years now I have been using a self-hosted Nextcloud as a replacement for iCloud and Google Drive. I won’t go into the details as to why (especially given the additional upkeep and other overheads required), as this has been covered before - but mainly it’s about maintaining control over my data.
I use a cloud VPS to host my Nextcloud instance - rented from Linode, whom I can certainly recommend if you’re looking for a good VPS provider - and since starting my Nextcloud journey I have begun hosting a number of additional services on the same server. For example, FreshRSS (which I consume using Reeder), Monica, Gitea, a Matrix server, and more.
In this post I will talk a little about how I handle my digital notes and to-do lists. In the spirit of my last post on data sovereignty, the focus will be on self-hosted approaches.
It feels odd that the first task many new technical frameworks guide users through, by way of a tutorial, is a simple to-do list; yet finding great production-ready examples of such software can be challenging.
The term ‘data sovereignty’ is something we hear much more about these days. Increasingly I’ve also heard it being mentioned in different contexts.
We’ve seen it more in the world of enterprise SaaS; particularly in the case of UK-based public sector organisations amid post-Brexit data flow policies. More and more organisations are getting stricter in the geographic location of their users’ data. Whereas before most organisations would be happy as long as the data is stored somewhere within the EU, they would now require it to be stored onshore within the UK.
I listen to a number of podcasts each week. One of these is Go Time.
The Go Time podcast releases episodes every Thursday. Its format is mostly comprised of panel discussions and interviews with founders and specialists in the community about the Go programming language. Episodes are usually between 60 and 90 minutes long.
I don’t program in Go a lot myself these days, though do have one or two older projects written in the language. However, I feel that the content is often broadly relevant for non-full-time gophers - like myself - also.
As you may know, I recently purchased the beta edition of the Pinephone. It arrived last week in the Pinephone Beta Edition box shown below.
As mentioned in my previous post on the subject, I bought the phone for purely experimental purposes, to get involved in the community, and to be a part of the freedom and Linux-on-phone movement.
I fully understand that the device is not yet really considered ready for every-day reliable production use (especially when compared to my current iPhone 11 Pro Max). However, the Pinephone is less than 20% the price of my iPhone, and comes with the freedom to do so much more - without the restrictions of Apple’s “walled garden”.
As is the case with many countries, all businesses in the UK must report the state of their financial accounts to the relevant inland revenue service at their year-end (in the UK, this is HMRC).
This is also the case if you are a freelancer or sole-trader (or if you’ve made other untaxed income - e.g. from investments). In these cases, this is called your Self Assessment. Self Assessments are pretty straightforward, and can usually be completed online by the individual themself - as long as they have kept good accounts and know their numbers.
I don’t use Facebook often. In fact, I only have an account currently because our company uses the “Login with Facebook” functionality in order to offer an additional single sign-on option for some customers.
I logged-in today as we needed to update some of the app’s configuration on the Facebook Developer portal, and I went via the Facebook homepage feed to get there. A couple of “Suggested for you” posts that showed near the top of my feed were unusual and caught my eye.
Like many people, I own and manage multiple email accounts - for example, some are for work, for home, or for specific projects. I used to be a strong user of solely web-based email clients (such as Gmail or Fastmail’s web apps) for each of my accounts. However the number of tabs I needed to keep open for all of this grew to the point where things became unmanageable - both in terms of needing to check multiple tabs several times per day and also frustrations when the browser would restart, or if I’d lose my tab setup for some other reason.
The HTTP standard is an expressive system for network-based computer-computer interaction. It’s a relatively old standard - it started life as HTTP/1.0 in 1996 and the HTTP/1.1 standard was formally specified in 1999. HTTP/2 (2015) introduced efficiencies around how the data is transmitted between computers, and the still in-draft HTTP/3 builds further on these concepts.
I won’t go into the nuts and bolts of it, but - essentially - for most applications and APIs, the developer-facing concepts haven’t really changed since HTTP/1.1. By this version, we had all the useful methods required to build powerful and flexible APIs.
Earlier this week I ordered a PinePhone, which recently became available as a Beta Edition.
I’ve been excitedly following the progress of the PinePhone for some time now. I’ve joined various Matrix rooms, subscribed to blogs, and started listening to the PineTalk podcast. The phone is a hackable device that runs plain old Linux - not an Android variant - and thus helps users escape from the grasp of the Google and Apple ecosystems.
Centralised communication services, such as Telegram, Signal, and WhatsApp, offer convenient means to chat with friends and family using your personal devices. However, these services also come with a number of pitfalls that are worth considering.
There are, of course, other factors on both sides that you may want to consider. It can be hard to move away from these services - after all, there’s no point using a system that no-one else you need to talk to uses.
Although I was only somewhere between single-digit age and my early teens back in the ’90s and early ’00s, I still fondly remember discovering and becoming a small part of the flourishing community of personal, themed, and hobby websites that connected the web.
We were even given basic server space in school, the wider internet was thriving with GeoCities, and communities grew around services like Neopets. Every day, after school, we’d go home and continue our playground conversations over MSN Messenger (after waiting for the dial-up modem to complete its connection, of course). The internet felt small, personal (even if you didn’t use your real name or identity), and exciting.
Like many people, these days I try to live a minimal life when it comes to possessions. Having more stuff means a greater level of responsibility is required to look after it. I love the principles involved in “owning less”.
Although I am in a very different situation to Pieter Levels, I find the ideas behind his 100 Thing Challenge (and other related pieces) to be inspiring.
RSS has had a bit of a resurgence for personal websites and blogs in recent years, especially with the growing adoption of Small Web and IndieWeb ideologies.
Many static site generators - including Hugo, Jekyll, and Eleventy - can easily support the automatic generation of RSS feeds at build time (either directly, or through plugins).
The same is true for Gatsby - the framework currently used to build this static website - and the good news is that setting up one feed, or multiple ones for different categories, only takes a few minutes.
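As a rough sketch of what that looks like (the query and serialize options, omitted here, map your site’s posts into feed items and depend on your own schema), the feeds are wired up in gatsby-config.js via the gatsby-plugin-feed plugin:

// gatsby-config.js - a minimal sketch; each feed needs its own query/serialize options
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-plugin-feed',
      options: {
        feeds: [
          { title: 'All posts', output: '/rss.xml' /* plus query + serialize */ },
          { title: 'Notes only', output: '/notes/rss.xml' /* a second feed for one category */ },
        ],
      },
    },
  ],
};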
Python’s Flask framework is an easy and excellent tool for writing web applications. Its in-built features and ecosystem of supporting packages let you create extensible web APIs, handle data and form submissions, render HTML, handle websockets, set up secure account management, and much more.
It’s no wonder the framework is used by individuals, small teams and all the way through to large enterprise applications. A very simple, yet still viable, Flask app with a couple of endpoints looks as follows.
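Something along these lines (a minimal sketch - the routes and data are purely illustrative):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/')
def index():
    # Return a simple response for the root path
    return 'Hello, world!'

@app.route('/items', methods=['GET', 'POST'])
def items():
    # Echo back submitted JSON on POST, or return a fixed list on GET
    if request.method == 'POST':
        return jsonify(request.get_json()), 201
    return jsonify(['apples', 'oranges'])

if __name__ == '__main__':
    app.run(debug=True)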
By now I’m sure everyone has heard the horror stories about people (seemingly) randomly losing access to their Google accounts. Often the account closures are reported to have been accompanied by vague automated notifications from Google complaining that the account-holder violated their terms in some way, but without any specific details, offer of appeal, or process to resolve the “issues” and reinstate the accounts.
As such, these events usually mark the end of the road for the victims’ presence and data on Google platforms - including Gmail, Drive, Photos, and YouTube - without any option to extract the data first. This could be years’ worth of documents, family photos, emails, Google Play purchases, and much more (ever used “Sign in with Google” on another service, for example?).
For many small or personal services running on a VPS in the cloud, administration is often done by connecting directly to the server via SSH. Such servers should be hardened with firewalls, employ an sshd config that denies root and password-based logins, run fail2ban, and follow other good practices.
Linode has some great getting-started guides on the essentials of securing your server.
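For instance, the sshd side of that hardening usually comes down to a few lines in /etc/ssh/sshd_config (a sketch - adapt to your own setup and remember to reload sshd afterwards):

# Deny root and password-based logins; allow key-based authentication only
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes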
In more complex production scenarios heightened security can be achieved by isolating application (webapp, API, database, etc.) servers from external internet traffic. This is usually done by placing these “sensitive/protected” servers in a private subnet, without direct internet-facing network interfaces. This means that the server is not reachable from the outside world.
Many people no longer feel comfortable using Facebook. Whether you were never a member to begin with, you’ve had an account but chosen to remove yourself from the service, or you’ve simply tried to start using it less - whatever the case, it’s no surprise given the way that they operate, across their family of products (including Instagram and WhatsApp), when it comes to your own data and time.
This is a huge subject on its own, and it’s really up to everyone to make their own mind up when it comes to their own stance. It’s been widely discussed pretty much everywhere, and there are loads of resources available on this handy website if you’re interested in understanding more about what goes on behind the scenes on these platforms.
Shapes and patterns can be leveraged in user interfaces to guide your users, draw attention to content, lend weight or emphasis, or just for aesthetics and decoration.
Layout and styling on the web is typically handled using CSS; however, mastering CSS to the level where you can confidently take advantage of its more advanced features is definitely not easy. I’ve been developing for the web almost full-time for a decade and I’m still pretty crap when it comes to doing complex stuff with CSS.
React state management is what gives the library its reactiveness. It’s what makes it so easy to build performant data-driven applications that dynamically update based on the underlying data. In this example the app would automatically update the calculation result as the user types in the input boxes:
import React, { useState } from 'react';

function MultiplicationCalculator() {
  // Each operand lives in state; updating either one triggers a re-render
  const [number1, setNumber1] = useState(0);
  const [number2, setNumber2] = useState(0);
  return ( <>
    {/* Fall back to 0 so an empty input doesn't produce NaN */}
    <input value={number1} onChange={e => setNumber1(parseInt(e.target.value, 10) || 0)} />
    <input value={number2} onChange={e => setNumber2(parseInt(e.target.value, 10) || 0)} />
    <p>The result is {number1 * number2}.</p>
  </> );
}

export default MultiplicationCalculator;
Many people would consider RSS - Really Simple Syndication - to be a relic of the past. However I think it has been making a comeback.
RSS is a mechanism by which people can automatically receive updates from individual websites, similar to how you might follow another user on a social networking service. Software known as RSS readers can be used to subscribe to RSS feeds in order to receive these updates. As new content (e.g. a blog post) is published to an RSS-enabled website, its feed is updated and your RSS reader will show the new post the next time it refreshes. Many RSS readers have an interface similar to an email client, with read/unread states, folders, favourites, and more.
If you need a database for your next project, why not first consider if SQLite might be a good option? And I don’t mean just for getting an MVP off the ground or for small personal systems; I mean for “real” production workloads.
Many people will be quick to jump on this with chimes of “it’s not designed for production”, but I think it depends on what is actually meant by “production”. Sure, it’s not the right choice for every scenario - it wouldn’t work well for distributed workloads or for services expected to receive a very high volume of traffic - but it has been used successfully in many real-world cases.
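As a rough illustration (a sketch using the better-sqlite3 package for Node - the table and data are made up), a couple of pragmas go a long way towards making SQLite behave well under real web traffic:

// Using better-sqlite3 as one example of an embedded SQLite client
const Database = require('better-sqlite3');
const db = new Database('app.db');

// WAL mode lets readers keep working while a write is in progress,
// which suits the many-reads/few-writes pattern of most web apps
db.pragma('journal_mode = WAL');

db.exec('CREATE TABLE IF NOT EXISTS visits (path TEXT, visited_at TEXT)');
db.prepare('INSERT INTO visits VALUES (?, ?)').run('/home', new Date().toISOString());
const { total } = db.prepare('SELECT COUNT(*) AS total FROM visits').get();
console.log(`Total visits: ${total}`);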
If you’ve visited my geminispace (gemini://wilw.capsule.town) you’ll have noticed that I’ve recently been on a mission to decentralise the everyday tools and services I use, and will understand the reasons why. This post will likely become part of a series of posts in which I talk about taking control and responsibility for my own data.
One of the changes I’ve made more recently is to move many of my own personal projects (including the source for this site) over to a self-hosted Gitea service. I chose Gitea personally, but there are many other self-hosted solutions available (see this post for examples and comparisons).
Gemini is a newer internet protocol designed to provide a more lightweight, privacy- and content-focused experience. For more information I recommend reading this blog post about it.
I try not to replicate too much content across this website and my Gemini space (but that might change over time).
If you’re interested in seeing what I write there, get a Gemini client (I use Amfora) and head over to gemini://wilw.capsule.town.
...Over the past few months I have been trying to use centralised “big tech” social media platforms less and instead immerse myself in the more community-driven “fediverse” of decentralised services that are connected (“federated”) using common protocols (e.g. ActivityPub). If you like, you can follow me on Mastodon (@wilw@fosstodon.org, recently migrated over from my old mastodon.social account) and Pixelfed (@wilw@pixelfed.social).
I’ve loved spending my time on these platforms - mainly due to the lack of noise and fuss, and more of a focus on sharing relevant content and interesting interactions with like-minded people (though of course this does depend on the instance you join).
Building apps on serverless architecture has been a game-changer for me and for developers everywhere, enabling small dev teams to cheaply build and scale services from MVP through to enterprise deployment.
Taking advantage of serverless solutions - such as AWS Lambda, Google Cloud Functions, and Cloudflare Workers - means fewer resources are spent on traditional dev-ops and deployment and, especially when combined with tools like the Serverless Framework and its rich ecosystem of plugins, you can use that time instead to better develop your products. Let the provider worry about deploying your code, keeping your services highly available, and scaling them to meet the needs of huge audiences.
If you write React web apps that interface with a backend web API then definitely consider trying React Query.
The library makes use of modern React patterns, such as hooks, to keep code concise and readable. It probably means you can keep API calls directly inside your normal component code rather than setting up your own client-side API interface modules.
React Query will also cache resolved data through unique “query keys”, so you can keep UI transitions fast with cached data without needing to rely on Redux.
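A sketch of what that looks like in practice (the /api/users endpoint and the query key are just examples, and this assumes a QueryClientProvider is set up at the root of the app):

import { useQuery } from 'react-query';

function UserProfile({ userId }) {
  // The query key uniquely identifies (and caches) this request; other
  // components using the same key share the cached result
  const { data, isLoading, error } = useQuery(['user', userId], () =>
    fetch(`/api/users/${userId}`).then(res => res.json())
  );

  if (isLoading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong.</p>;
  return <p>Hello, {data.name}!</p>;
}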
This short post introduces a useful JavaScript operator - the nullish coalescing operator (??) - to help make your one-liners even more concise.
The operator was formally added to the specification in the 11th edition of ECMAScript (ES2020). It is implemented as a logical operator that selectively returns one of its two expressions (or operands), depending on whether the first resolves to a “nullish” value. A nullish value in JavaScript is one that is null or undefined.
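For example (the variable names here are just for illustration):

const config = { retries: 0 };

// ?? falls back only when the left-hand side is null or undefined,
// so a genuine 0 or '' survives (unlike ||, which would discard them)
const retries = config.retries ?? 3;    // 0
const timeout = config.timeout ?? 5000; // 5000, as config.timeout is undefined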
JavaScript has lots of handy tools for creating concise code and one-liners. One such tool is the optional chaining operator.
The optional chaining operator is useful for addressing an attribute of a deeply-nested object in which you cannot be fully certain that the successive levels of the object are valid at run-time.
For example, consider the following object.
const person = {
name: 'Harry',
occupation: 'student',
enrolmentInformation: {
contactDetails: {
email: 'harry@hogwarts.ac.uk',
address: {
firstLine: '4 Privet Drive',
postCode: 'GU3 4GH'
}
}
}
};
In order to safely (i.e. if you cannot guarantee each object level at run-time) read the nested postCode attribute, you could do so like this, using the logical AND operator:
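// Each && short-circuits if a level is missing, so we never try to
// read a property of undefined
const postCode =
  person &&
  person.enrolmentInformation &&
  person.enrolmentInformation.contactDetails &&
  person.enrolmentInformation.contactDetails.address &&
  person.enrolmentInformation.contactDetails.address.postCode;

// The optional chaining operator expresses the same check far more
// concisely, evaluating to undefined as soon as any level is missing
const postCodeViaChaining = person?.enrolmentInformation?.contactDetails?.address?.postCode;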
I recently stumbled across an article on Hacker News discussing the pros of basic personal accounting using GnuCash - a free and open-source desktop accounting program. The article was interesting as the data geek in me resonated with the notion of being able to query the information in useful ways, particularly after having used the system for long enough to accumulate a good amount of financial data.
The comments on the article’s post also mentioned another tool, Ledger. Whilst GnuCash allows users to input transactional and account information as well as generate reports, Ledger’s focus is only on the reporting - a key feature of this CLI tool is that the actual bookkeeping is done directly (or through other tools) in a text file, which Ledger only reads from and never otherwise touches. Both programs work on the principle of double-entry bookkeeping, but some of the key positives of Ledger are its speed (even when working with several decades’ worth of financial data) and its innate ability to be combined with other useful UNIX tools - both for data input and, if necessary (Ledger’s own reporting outputs are very powerful), output.
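To give a flavour of the format (a made-up entry - the account names are entirely your own choosing), a Ledger journal is just plain text containing double-entry postings like this:

2021/04/03 Grocery shop
    Expenses:Food:Groceries      £23.50
    Assets:Bank:Current

Running something like ledger -f journal.txt balance then reports the balances across all of your accounts.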
This note documents the set-up of a k8s cluster from scratch, including ingress and load-balanced TLS support for web applications. It’s mainly for myself to revisit and reference later on. The result of this note is not (quite) production-grade, and additional features (e.g. firewalls/logging/backups) should be enabled to improve its robustness.
Several cloud providers offer managed k8s services (including Amazon EKS, GKE, Digital Ocean, etc.). Whilst these would be recommended for sensitive or production workloads, I wanted to create my own provider-independent cluster in order to understand the ins and outs.
ZEIT’s Now service is great for deploying apps and APIs that are able to make use of serverless execution models, and I use it for many of my projects (including this website, at the time of writing).
I recently needed to deploy a backend written in Go and kept running into problems when trying to read data from the HTTP request body. The client-side app I was developing to communicate with the backend is also written in Go, and everything seemed to work fine when running the backend locally (using now dev), but the exact same requests failed when running it in production. The client’s request body was available in development, but came through as empty strings when running in production.
A previous note about Philips Hue bulbs got me thinking that the API exposed by the bridge might be used to warn if the house lights are left on too late at night, or even if they get turned on at unexpected times - potentially for security.
I put together a simple program that periodically checks the status of known Hue bulbs late at night. If any bulbs are discovered to be powered on during such times then an email notification is sent. It runs as a systemd service on a Raspberry Pi.
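The core of the check is only a few lines (a sketch for Node 18+ with its built-in fetch; the bridge IP and API username are placeholders for your own bridge’s details):

// Query the bridge for all known lights and warn if any are on late at night
const BRIDGE_IP = '192.168.1.2';        // placeholder - your bridge's address
const API_USER = 'your-api-username';   // placeholder - a registered bridge user

async function checkLights() {
  const res = await fetch(`http://${BRIDGE_IP}/api/${API_USER}/lights`);
  const lights = await res.json();
  const hour = new Date().getHours();
  const lateNight = hour >= 23 || hour < 6; // the "too late" window - adjust to taste
  const litBulbs = Object.values(lights).filter(light => light.state.on);
  if (lateNight && litBulbs.length > 0) {
    // send the email notification here (e.g. via nodemailer or a mail API)
    console.log(`${litBulbs.length} bulb(s) still on late at night`);
  }
}

checkLights();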
I recently blogged about Nintendo Hotspot data and mentioned it could be more usefully consumable in a native mobile app.
As such, I wrote a small Android app for retrieving this data and displaying it on a Google Map. The app shows nearby hotspots, allows users to also search for other non-local places, and shows information on the venue hosting the zone.
The app is available on the Play Store and its source is published on GitHub.
Since getting a 3DS, StreetPass has become quite addictive. It’s actually pretty fun checking the device after walking through town or using public transport to see a list of Miis representing the people you’ve been near recently, and the minigames (such as StreetPass Quest) that require you to ‘meet’ people in order to advance also make it more involved. Essentially, the more you’re out and about, the further you can progress - this is further accentuated through Play Coins, which can be used to help ‘buy’ your way forward and are earned for every 100 steps taken whilst holding the device.
A couple of years ago I wrote a blog post about wrapping some of Weka’s classification functionality to allow it to be used programmatically in Python programs. A small project I’m currently working on at home is around taking some of the later research from my PhD work to see if it can be expressed and used as a simple web-app.
I began development in Go as I hadn’t yet spent much time working with the language. The research work involves using a Bayesian network classifier to help infer a tweet’s interestingness, and while Go machine-learning toolkits do exist, I wanted to use my existing models that were serialized in Java by Weka.
As is the case with many people, all music I listen to on my PC these days plays from the web through a browser. I’m a heavy user of Google Play Music and SoundCloud, and using Chrome to handle everything means playlists and libraries (and the way I use them through extensions) sync up properly everywhere I need them.
On OS X I use BeardedSpice to map the keyboard media controls to browser-based music players, and the volume keys adjust the system volume as they should. Using i3 (and other lightweight window managers) can make you realise what you take for granted when using more fully-fledged arrangements, but it doesn’t take long to achieve the same functionality on such systems.
A while ago I wrote an article for Heroku’s Dev Center on carrying out direct uploads to S3 using a Python app for signing the PUT request. Specifically, the article focussed on Flask but the concept is also applicable to most other Python web frameworks.
I’ve recently had to implement something similar, but this time as part of a Node.js application. Since the only difference between the two approaches is literally just the endpoint used to return a signed request URL, I thought I’d post an update on how the endpoint could be constructed in Node.
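For reference, the signing endpoint ends up looking something like this sketch (using Express and the aws-sdk v2 client; the bucket name, query parameters, and port are placeholders):

const express = require('express');
const AWS = require('aws-sdk');

const app = express();
const s3 = new AWS.S3();

app.get('/sign-s3', (req, res) => {
  const params = {
    Bucket: 'my-uploads-bucket',   // placeholder bucket name
    Key: req.query.file_name,      // object key requested by the client
    Expires: 60,                   // seconds the signed URL remains valid
    ContentType: req.query.file_type,
  };
  // getSignedUrl produces a time-limited URL the browser can PUT the file to
  s3.getSignedUrl('putObject', params, (err, signedUrl) => {
    if (err) return res.status(500).json({ error: 'Could not sign request' });
    res.json({ signedUrl });
  });
});

app.listen(3000);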
In my last post I discussed methods for streaming music to different zones in the house. More specifically I wanted to be able to play music from one location and then listen to it in other rooms at the same time and in sync.
After researching various methods, I decided to go with using a compressed MP3 stream over RTP. Other techniques introduced too much latency, did not provide the flexibility I required, or simply did not fulfill the requirements (e.g. not multiroom, only working with certain applications and non-simultaneous playback).
For a while now, I have been looking for a reliable way to manage zoned music-playing around the house. The general idea is that I’d like to be able to play music from a central point and have it streamed over the network to a selection of receivers, which could be remotely turned on and off when required, but still allow for multiple receivers to play simultaneously.
Apple’s AirPlay has supported this for a while now, but requires purchasing AirPlay-compatible hardware, which is expensive. It’s also very iTunes-based - and iTunes is something that I do not use.
Last week I released a new version of the tides Android app I’m currently developing.
The idea of the application was initially to simply display the tidal times and patterns for the Gower Peninsula, and that this should be possible without a data connection. Though, as time has gone by, I keep finding more and more things that can be added!
The latest update saw the introduction of 5-day surf forecasts for four Gower locations - Llangennith, Langland, Caswell Bay, and Hunts Bay. All the surf data comes from Magic Seaweed’s API (which I talked about last time).
Back in March, I emailed Magic Seaweed to ask them if they had a public API for their surf forecast data. They responded that they didn’t at the time, but that it was certainly on their to-do list. I am interested in the marine data for my Gower Tides application.
Yesterday, I visited their website to have a look at the surf reports and some photos, when I noticed the presence of a Developer link in the footer of the site. It linked to pages about their new API, with an overview describing exactly what I wanted.
Today I issued a full upgrade of the server at flyingsparx.net, which is hosted by Digital Ocean. By default, on Arch, this will upgrade every currently-installed package (where there is a counterpart in the official repositories), including the Linux kernel and the kernel headers.
Digital Ocean maintain their own kernel versions and do not currently allow kernel switching, which is something I completely forgot. I rebooted the machine and tried re-connecting, but SSH couldn’t find the host. Digital Ocean’s website provides a console for connecting to the instance (or ‘droplet’) through VNC, which I used, and through it I discovered that none of the network interfaces (except the loopback) were being brought up. I tried everything I could think of to fix this, but without being able to connect the droplet to the Internet, I was unable to download any other packages.
Over the last few months, I’ve started to use Weka more and more. Weka is a toolkit, written in Java, that I use to create models with which to make classifications on data sets.
It features a wide variety of different machine learning algorithms (although I’ve used the logistic regressions and Bayesian networks most) which can be trained on data in order to make classifications (or ‘predictions’) for sets of instances.
This is just a quick post to mention that I have made the source for the Gower Tides app on Google Play public.
The source repository is available on GitHub, though I have excluded a few things from it.
My hosting for my website has nearly expired, so I have been looking for renewal options.
These days I tend to need to use servers for more than simple web-hosting, and most do not provide the flexibility that a VPS would. Having (mostly) full control over a properly-maintained virtual cloud server is so much more convenient, and allows you to do tonnes of stuff beyond simple web hosting.
I have some applications deployed on Heroku, which is definitely useful and easy for this purpose, but I decided to complement this for my needs by buying a ‘droplet’ from Digital Ocean.
I’ve been having trouble connecting to Eduroam, at least reliably and persistently, without heavy desktop environments or complicated network managers. Eduroam is the wireless networking service used by many Universities in Europe, and whilst it would probably work fine using the tools provided by heavier DEs, I wanted something that could just run quickly and independently.
Many approaches require the editing of loads of config files (especially true for netcfg), which would need altering again after things like password changes. The approach I used (for Arch Linux) is actually really simple and involves the use of the user-contributed wicd-eduroam package available in the Arch User Repository.
I wanted a way in which users can seamlessly upload images for use in the Heroku application discussed in previous posts.
Ideally, the image would be uploaded through AJAX as part of a data-entry form, but without having to refresh the page or anything else that would disrupt the user’s experience. As far as I know, barebones jQuery does not support AJAX uploads, but this handy plugin does.
I styled the file input nicely (in a similar way to this guy) and added the JS so that the upload is sent properly (and to the appropriate URL) when a change is detected on the input (i.e. the user does not need to click the ‘upload’ button to start the upload).
A few posts back, I talked about the development of an Android app for tide predictions for South Wales. This app is now on Google Play.
If you live in South Wales and are vaguely interested in tides/weather, then you should probably download it :)
The main advantage is that the app does not need any data connection to display the tidal data, which is useful in areas with low signal. In future, I hope to add further features, such as a more accurate tide graph (using a proper ‘wave’), surf reports, and just general UI updates.
I’ve taken to writing most of my recent presentations in plain HTML (rather than using third-party software or services). I used JavaScript to handle the appearance and ordering of slides.
I bundled the JS into a single script, js/scriptslide.js, which can be configured using the js/config.js script.
There is a GitHub repo for the code, along with example usage and instructions.
Most configuration can be done using the js/config.js script, which supports many features.
I’ve always been interested in the development of smartphone apps, but have never really had the opportunity to actually have a go. Whilst I’m generally OK with development on platforms I feel comfortable with, I’ve always considered there to be no point in developing applications for wider use unless you first have a good idea of the direction you want them to take.
My Dad is a keen surfer and has a watch which tells the tide changes as well as the time. It shows the next event (i.e. low- or high-tide) and the time until that event, but he always complains about how inaccurate it is and how it never correctly predicts the tide schedule for the places he likes to surf.