This note documents my current setup for email.
To securely store credentials for syncing, I use `pass`. I recommend installing this for your system and setting it up with a GPG key.
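As a rough sketch, the initial setup looks something like this (the email address and entry name here are placeholders, not my real ones):

```shell
# Generate a GPG key if you don't already have one (interactive)
gpg --full-generate-key

# Initialise the password store against that key (the ID/email is a placeholder)
pass init "you@example.com"

# Store a credential, e.g. for mail syncing
pass insert email/fastmail
```

Tools like mbsync/isync can then retrieve the password with `pass show email/fastmail` in their configuration.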
Obsidian is a note/knowledge management tool I’ve used a fair amount in the past (before moving to nb). Below I log some considerations I dealt with whilst using Obsidian.
When I used Obsidian, I synced it to my devices using iCloud (though I don’t know if this is relevant to this particular issue). At various points, I noticed that some notes would completely disappear. Sometimes this would happen even whilst I was editing them, or I would see the note contents just go blank whilst the note was open. This was stressful if I was in a meeting or needed the note at that time (though luckily cmd-Z would usually re-surface the note contents back into the buffer).
...A collection of “things”, software and systems I use on a frequent or daily basis.
I enjoy using open source, self-hosted, and independent software where possible. I like the reliability and quality of Apple hardware, and often the combination of the two works best for me.
Whilst browsing my GitHub home feed a little while back (not something I’m in a habit of doing, generally), I stumbled upon the command-line journaling tool `jrnl`. I thought it looked interesting, and so subsequently posted about it and had a good discussion on this and alternatives over on Mastodon. `jrnl` also gave Kev Quirk the idea to create his own journaling tool.
During the conversation, Tucker McKnight mentioned that he uses a tool called `nb` for journaling and note-taking. It wasn’t something I’d heard of before, and so I was intrigued.
As of September 2024 I use `nb` as my primary note-taking and knowledge management program. On a Mac, it can be installed from Homebrew: `brew install nb`.
I sync my `nb` documents via Syncthing, and therefore need the tool to use my Syncthing directory as its “home”: `nb set nb_dir ~/Syncthing/nb`.
To migrate my notes from Obsidian, I created the top-level directories (which `nb` calls “notebooks”) in `~/Syncthing/nb` and copied the `.md` files from Obsidian into the appropriate directories.
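The migration amounts to a simple copy. A sketch of the idea (the vault path and notebook names below are illustrative examples, not my actual ones):

```shell
# Hypothetical Obsidian vault location and nb home directory
src="$HOME/Obsidian/MyVault"
dest="$HOME/Syncthing/nb"

# For each top-level notebook, create the directory and copy any markdown
# files over; notebooks missing from the vault are silently skipped
for notebook in Work Personal Journal; do
  mkdir -p "$dest/$notebook"
  cp "$src/$notebook"/*.md "$dest/$notebook/" 2>/dev/null || true
done
```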
A few weeks back (as of 2024-09-01) I posted about a tool called `jrnl` on Mastodon. The post got a fair amount of engagement and discussion, and I’ve also given it a go over the past couple of weeks myself.
`jrnl` is a command-line tool for keeping a journal. It can be used for quick single-sentence entries or for long-form journaling. Similar to Taskwarrior, it’s a tool that does what it needs to do and then gets out of the way (i.e. you don’t need to open an app and click around to get to your journal).
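For a flavour of how it works, entries can be added and read straight from the shell (assuming `jrnl` is installed and configured; the entry text is just an example):

```shell
# Append a quick one-line entry to the default journal
jrnl Had a good chat about journaling tools today.

# Show the five most recent entries
jrnl -n 5
```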
If you’ve followed my blog or other notes, you’ll be well aware of my extensive use of Tailscale for nearly everything – securing access to remote servers and for connecting to services across my local network.
Tailscale DNS is great for assigning DNS names to individual hosts, but my approach (usually) involves using Docker containers, with which I run several services on a single host. It is then fiddly (or impossible? I’m not sure) to use a single Tailscale “machine” to expose multiple services on the same host nicely via the DNS system.
...Restic has been my go-to backup tool for many years, and I’ve used it for a wide range of workloads – including for managing filesystem, Postgres, Photoprism and Vaultwarden backups. When I was researching suitable backup options for these cases, and considering my personal requirements, I often stumbled upon discussions comparing Restic to a similar tool – Borg backup – but I eventually settled on Restic, due to its flexibility and the amount of data (measured in terabytes) I intended to back up.
I self-host a Vaultwarden instance to manage my usernames, passwords, two-factor codes, and other details for my accounts everywhere.
Standard Bitwarden clients (including browser extensions and mobile apps) can use Vaultwarden instances as their backend server.
In this note I describe my particular setup.
To begin, set up a new `docker-compose.yml` file that includes the Vaultwarden and reverse-proxy containers.
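The original file isn’t reproduced here; as a rough sketch, the Vaultwarden half of such a file might look like the following (the image tag, settings, and volume path are illustrative, and your reverse-proxy service would sit alongside it):

```yaml
version: "3"
services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    environment:
      # Disable open registration once your own account exists
      - SIGNUPS_ALLOWED=false
    volumes:
      - ./vw-data:/data
    expose:
      - 80
```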
I’ve switched from managed photo providers (such as Apple or Google photos) to a self-hosted Photoprism instance. This note documents my setup.
I use Linode for this, and you can get a free $100 credit using this referral link.
Update 2022: More recently I’ve migrated Photoprism over to a home-based Raspberry Pi, which works great.
Provision a new server or instance (at least 4GB memory and 2 CPU cores), and create and attach a big (I use 1TB) volume.
...This is a reference quick-start note for deploying PostgreSQL via Docker, and with working self-signed TLS.
E.g. with one year expiry:
```shell
mkdir postgres-certs && cd postgres-certs
openssl req -new -x509 -days 365 -nodes -text -out server.crt -keyout server.key
sudo chown 999 server.key
sudo chmod 0600 server.key
```
Postgres key permissions are fussy. In this case, we set the key to be owned by the `postgres` user in the container (`999`), which you may not want to do if you’re on a shared environment. See this thread for more info.
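The certs can then be mounted into the container and TLS enabled via server flags. A minimal sketch (the image tag, container name, and password are placeholders, not from the original note):

```shell
# Run Postgres with SSL enabled, mounting the generated cert and key read-only
docker run -d --name pg-tls \
  -e POSTGRES_PASSWORD=changeme \
  -v "$PWD/postgres-certs/server.crt:/var/lib/postgresql/server.crt:ro" \
  -v "$PWD/postgres-certs/server.key:/var/lib/postgresql/server.key:ro" \
  -p 5432:5432 \
  postgres:15 \
  -c ssl=on \
  -c ssl_cert_file=/var/lib/postgresql/server.crt \
  -c ssl_key_file=/var/lib/postgresql/server.key
```

Arguments after the image name are passed through to the `postgres` server process, which is why the `-c` flags work here.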
This is a reference quick-start note for deploying MongoDB via Docker, and with working self-signed TLS.
Note: This setup does not yet consider replica sets. Coming soon…
E.g. with one year expiry:
```shell
openssl req -nodes -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365
cp cert.pem certificateKey.pem
cat key.pem >> certificateKey.pem
```
Ensure you reference the correct locations via volume mounts.
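For instance, a hypothetical `docker run` along these lines (the image tag, container name, and paths are examples):

```shell
# Run MongoDB requiring TLS, using the combined cert+key file created above
docker run -d --name mongo-tls \
  -v "$PWD/certificateKey.pem:/etc/ssl/certificateKey.pem:ro" \
  -p 27017:27017 \
  mongo:7 \
  --tlsMode requireTLS \
  --tlsCertificateKeyFile /etc/ssl/certificateKey.pem
```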
...Some of my personal projects are beginning to get larger and more complicated, often involving different front-ends and services that all need to be separately built and deployed. Managing all of these takes up more of my personal time, and I’ve been on the lookout for a good CI/CD automation system for these projects. I primarily use Gitea as a git server, and have struggled to find a system that suits my needs and works well with Gitea.
I use Woodpecker CI to automate my personal CI/CD processes. This note documents my setup.
Create a `docker-compose.yml` and bring the service up (notes on environment below):
```yaml
version: '3'
services:
  woodpecker-server:
    image: woodpeckerci/woodpecker-server:latest
    ports:
      - 8000:8000
    expose:
      - 8000
    volumes:
      - ./woodpecker-server-data:/var/lib/woodpecker/
    environment:
      - WOODPECKER_OPEN=false
      - WOODPECKER_HOST=https://your.host
      - WOODPECKER_AGENT_SECRET=SECRET
      - WOODPECKER_GITEA=true
      - WOODPECKER_GITEA_URL=https://git.wilw.dev
      - WOODPECKER_GITEA_CLIENT=GITEA_CLIENT
      - WOODPECKER_GITEA_SECRET=GITEA_SECRET
  woodpecker-agent:
    image: woodpeckerci/woodpecker-agent:latest
    command: agent
    depends_on:
      - woodpecker-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WOODPECKER_SERVER=woodpecker-server:9000
      - WOODPECKER_AGENT_SECRET=SECRET
```
Replace the `WOODPECKER_HOST` variable with the URL/host required, and also the URL for your Gitea instance.
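Repositories then describe their builds in a `.woodpecker.yml` file at the repo root. A minimal hypothetical pipeline (the image and commands are examples; the exact schema depends on your Woodpecker version, with newer releases using `steps:` rather than `pipeline:`):

```yaml
pipeline:
  build:
    image: node:18
    commands:
      - npm ci
      - npm run build
```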
I self-host a number of services - Nextcloud, FreshRSS, PhotoPrism etc. - at home on a Raspberry Pi. Attached to this Pi is a large SSD to hold the service data and configuration, and all of this is periodically backed-up via Restic to a remote site.
The SSD is simply formatted with `ext4`, and the directory containing all of these services and data is currently encrypted using fscrypt. I (mainly) want to encrypt the data to protect against break-in and theft at my house, however likely or unlikely that is.
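For reference, the fscrypt workflow looks roughly like this (the mount point and directory are examples, not my actual paths):

```shell
# One-time: initialise fscrypt metadata globally and on the filesystem
sudo fscrypt setup
sudo fscrypt setup /mnt/ssd

# Encrypt the directory holding the services, protected by a passphrase
fscrypt encrypt /mnt/ssd/services

# After a reboot, unlock the directory before starting the services
fscrypt unlock /mnt/ssd/services
```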
I run a number of services on my own machines and VPSs, and for each system I ensure there are appropriate backups in place.
I usually use Backblaze B2 as a backup solution due to its low costs, simplicity, and ease of use.
Linode’s Object Storage service is also great, but I use Linode to handle nearly 100% of my workloads and it feels safer (and more compliant with the 3-2-1 principle of backing-up) to store backups with a different datacentre and provider from the primary data store.
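As a sketch of the kind of workflow involved when pairing Restic with B2 (the bucket name, path, and credentials are placeholders):

```shell
# B2 credentials (placeholders)
export B2_ACCOUNT_ID=xxxxxxxx
export B2_ACCOUNT_KEY=xxxxxxxx

# One-time: initialise an encrypted repository in the bucket
restic -r b2:my-backups-bucket:server1 init

# Back up a directory; restic de-duplicates, so repeat runs
# only upload changed data
restic -r b2:my-backups-bucket:server1 backup /srv/data
```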
...I’m interested in being able to look after my own data as much as possible. I try to ensure that this is at least (almost) as convenient as commercial offerings where I can (and so there is probably much more I can do).
Most of the services I self-host run on Linode servers. I can definitely recommend them for their security, ease of management, and performance. For personal services, I use a home-based Raspberry Pi.
...My notes are kept using Joplin. I use Joplin server to keep all of my devices in-sync.
Below is a `docker-compose.yml` I use to run the server.
```yaml
version: '3'
services:
  db:
    image: postgres:13
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    restart: unless-stopped
    environment:
      - POSTGRES_PASSWORD=CHANGEME
      - POSTGRES_USER=joplin
      - POSTGRES_DB=joplin
  app:
    image: joplin/server:latest
    depends_on:
      - db
    ports:
      - "8084:22300"
    restart: unless-stopped
    environment:
      - APP_PORT=22300
      - APP_BASE_URL=http://CHANGEME
      - DB_CLIENT=pg
      - POSTGRES_PASSWORD=CHANGEME
      - POSTGRES_DATABASE=joplin
      - POSTGRES_USER=joplin
      - POSTGRES_PORT=5432
      - POSTGRES_HOST=db
```
Note: I run my Joplin server in my Tailscale network, so I don’t bother with TLS certs, but you may wish to put a reverse proxy in front if you run it publicly.
...I have a small fleet of Raspberry Pis (mostly the Pi 4 Model B), which I use at home for various tasks.
They are all firewalled off and are reachable via Tailscale. A small number of the services they run are also exposed to the local network.
As of the time of writing, services I run on the Pis include the following:
I use large SSDs (e.g. this one) for the Pis that require extra storage, such as for my photos. These are connected via USB3-SATA cables (such as these).
...Back in 2021 I blogged about how and why I wanted to switch from Google Photos as a storage solution (and source of truth) for my life’s photo and video library. The post compared several solutions, such as Piwigo, Mega, and Nextcloud.
Since then I’ve tried several further options, starting with pCloud (as described in that post), then Nextcloud backed by S3, and plain old Apple Photos.
2FAuth is a self-hostable web service for managing two-factor authentication tokens. It has a nice, clean interface, is responsive (so it can be simply added to the homescreen on iOS), and has various useful features:
Running the service itself is as straightforward as bringing up the supplied `docker-compose.yml` file. However - as noted in this blog post - if, like me, you plan to run it inside a Tailscale tailnet, you’ll need to ensure HTTPS is enabled in order for the camera to work. To do this, attach a reverse proxy armed with the required certificates.
Update 2024-09-01:
I have since written a note on Tailscale Sidecars which provides a more elegant solution to this problem. I have left this post here for posterity.
Tailscale’s HTTPS feature is an excellent tool for adding TLS connections to web services exposed over the tailnet.
Although traffic over the tailnet is encrypted anyway due to the nature of Tailscale itself, some web-based services work better when served over HTTPS. Since the browser does not know that you are accessing the service over a secure connection, it may enforce limits on connected web services when accessing them in - what feels like - an insecure context.
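With MagicDNS and HTTPS certificates enabled in the Tailscale admin console, provisioning a certificate is a single command on the machine itself (the hostname and tailnet name are examples, and `tailscale serve` syntax varies between versions):

```shell
# Mint a certificate for this machine's tailnet name
tailscale cert myhost.tail1234.ts.net

# Or let Tailscale terminate TLS itself and proxy to a local service
tailscale serve https / http://127.0.0.1:8081
```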
The Bear notes app has been my go-to notes app for Mac, iPhone, and iPad for some time now. It’s got a great UX, a customisable UI, and is one of those apps that feels like a (clichéd) “delight” to use.
Bear is written exclusively for Apple devices, and uses CloudKit to sync notes between devices via iCloud. In theory, this isn’t too much of a problem. However, I’ve recently found CloudKit-reliant apps to become a little unreliable.
Gitea is a fantastic GitHub-like service for git remotes and for viewing code and git projects via a web-browser. One can join existing instances (such as Codeberg), or self-host it.
I self-host a Gitea instance, using a `docker-compose.yml` file like the one below.
```yaml
version: "3"
services:
  gitea:
    image: gitea/gitea:latest
    restart: unless-stopped
    environment:
      - USER_UID=1000
      - USER_GID=1000
    networks:
      - traefik_net
    volumes:
      - ./gitea_data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "22:22"
    expose:
      - 3000
    labels:
      - traefik.http.routers.gitea.rule=Host(`domain.com`)
      - traefik.http.routers.gitea.tls=true
      - traefik.http.routers.gitea.tls.certresolver=myresolver
      - traefik.http.services.gitea.loadbalancer.server.port=3000
networks:
  traefik_net:
    external: true
```
Change the domain at which you host the instance and set up the DNS/labels as described in the Traefik note.
...For sensitive data storage in the cloud, I will usually provision a separate volume, encrypt it, and then use this as the volume mapper for containerised services.
I use Linode to host the vast majority of my services. In Linode, new volumes can be easily created and attached to an instance.
After a short while the instance will then recognise the new device and make it available via the OS.
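The encryption step itself isn’t detailed above; one common approach is LUKS via cryptsetup. A hypothetical sketch (the device path is an example - check `lsblk` for the volume’s actual name, and note the first command destroys any existing data on the device):

```shell
# Encrypt the raw volume and open it as a mapped device
sudo cryptsetup luksFormat /dev/sdc
sudo cryptsetup open /dev/sdc cryptvol

# Create a filesystem on the mapped device and mount it
sudo mkfs.ext4 /dev/mapper/cryptvol
sudo mkdir -p /mnt/data
sudo mount /dev/mapper/cryptvol /mnt/data
```

Container services can then use paths under `/mnt/data` for their volume mounts.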
...I use Nextcloud as an alternative to Google Drive or iCloud. I self-host it on a VPS or Raspberry Pi and make it available only within my Tailscale network so that it is only accessible to my own authorised devices.
Nextcloud is relatively straightforward to run using a `docker-compose.yml` file:
```yaml
version: '3'
services:
  db:
    image: mariadb:10.5
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: unless-stopped
    volumes:
      - ./nextcloud-data/db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=CHANGEME
      - MYSQL_PASSWORD=CHANGEME
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud:22
    restart: unless-stopped
    ports:
      - 8083:80
    volumes:
      - ./nextcloud-data/storage:/var/www/html
```
Bring the service up, navigate to the IP address or domain, and configure the instance to get started.
...When browsing Twitter, I use a self-hosted Nitter instance. Nitter allows you to browse Twitter using a more privacy-respecting front-end.
To run Nitter, create a `docker-compose.yml`:
```yaml
version: "3"
services:
  nitter:
    image: zedeus/nitter:latest
    ports:
      - "8085:8080"
    volumes:
      - ./nitter.conf:/src/nitter.conf:ro
    depends_on:
      - nitter-redis
    restart: unless-stopped
  nitter-redis:
    image: redis:6-alpine
    command: redis-server --save 60 1 --loglevel warning
    volumes:
      - ./nitter-redis:/data
    restart: unless-stopped
```
Simply bring up the service, navigate to the IP address or domain, and get started.
...When browsing Reddit, I use a self-hosted Teddit instance. Teddit allows you to view Reddit using a nicer and more privacy-respecting front-end.
To run Teddit, create a `docker-compose.yml`:
```yaml
version: "3"
services:
  teddit:
    image: teddit/teddit:latest
    environment:
      - REDIS_HOST=teddit-redis
    ports:
      - "8086:8080"
    depends_on:
      - teddit-redis
  teddit-redis:
    image: redis:6.2.5-alpine
    command: redis-server
    environment:
      - REDIS_REPLICATION_MODE=master
```
Simply bring the service up, navigate to the IP address or domain for the service, and you’re ready to go.
...For managing and consuming RSS feeds, I use FreshRSS.
To run a FreshRSS instance, create a `docker-compose.yml` file:
```yaml
version: "3"
services:
  freshrss:
    image: freshrss/freshrss
    environment:
      - "CRON_MIN=3,33" # Change this if you like
      - TZ=Europe/London
    volumes:
      - ./freshrss_data:/var/www/FreshRSS/data
      - ./freshrss_extensions:/var/www/FreshRSS/extensions
    restart: unless-stopped
    ports:
      - 8081:80
```
I run FreshRSS within my Tailscale network and so I do not bother with TLS certificates or a reverse proxy. If you do want to use one, look at the Traefik note.
...Most of my personal services I run within my Tailscale network, and so I do not tend to bother with HTTPS (since provisioning TLS certificates for multiple services running on a single host via Tailscale is a pain).
However, for external services that need to be accessed publicly, I use Traefik as a reverse proxy and for managing and provisioning TLS certificates.
Run Traefik using Docker. Create a `docker-compose.yml` file in a directory called `traefik`:
I use Umami for analytics on this website and a few other services that I run. I self-host Umami using Docker.
To start, create a `docker-compose.yml`:
```yaml
version: '3'
services:
  umami:
    image: ghcr.io/mikecao/umami:postgresql-latest
    environment:
      DATABASE_URL: postgresql://umami:CHANGEME@umamidb:5432/umami
      DATABASE_TYPE: postgresql
      HASH_SALT: CHANGEME
    depends_on:
      - umamidb
    restart: always
    expose:
      - 3000
    networks:
      - traefik_net
    labels:
      - traefik.http.routers.umami.rule=Host(`CHANGEME`)
      - traefik.http.routers.umami.tls=true
      - traefik.http.routers.umami.tls.certresolver=myresolver
  umamidb:
    image: postgres:12-alpine
    environment:
      POSTGRES_DB: umami
      POSTGRES_USER: umami
      POSTGRES_PASSWORD: CHANGEME
    volumes:
      - ./sql/schema.postgresql.sql:/docker-entrypoint-initdb.d/schema.postgresql.sql:ro
      - ./umami-data:/var/lib/postgresql/data
    restart: always
    networks:
      - traefik_net
networks:
  traefik_net:
    external: true
```
Since Umami must be publicly accessible (so it can measure analytics on public websites), it must also be served over HTTPS.
...I use a self-hosted Monica instance to keep on top of birthdays and other useful/interesting notes about friends and family.
This note documents my setup.
Create a `docker-compose.yml` file:
```yaml
version: "3.4"
services:
  monica:
    image: monica:latest
    depends_on:
      - monicadb
    environment:
      - APP_KEY=CHANGEME
      - DB_HOST=monicadb
      - APP_ENV=nonprod # See note below
      - APP_URL=http://CHANGEME
      - APP_TRUSTED_PROXIES=*
      - MAIL_DRIVER=smtp
      - MAIL_HOST=CHANGEME
      - MAIL_PORT=587
      - MAIL_USERNAME=CHANGEME
      - MAIL_PASSWORD=CHANGEME
      - MAIL_ENCRYPTION=tls
      - MAIL_FROM_ADDRESS=CHANGEME
      - MAIL_FROM_NAME=Monica
    volumes:
      - ./monica_data:/var/www/html/storage
    restart: always
    ports:
      - 8082:80
  monicadb:
    image: mysql:5.7
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=true
      - MYSQL_DATABASE=monica
      - MYSQL_USER=homestead
      - MYSQL_PASSWORD=CHANGEME
    volumes:
      - ./mysql:/var/lib/mysql
    restart: always
```
When running with `APP_ENV=production`, Monica enforces HTTPS connections. I run Monica in my Tailscale network without TLS certificates, and so I run it with `APP_ENV=nonprod` (a value I made up) to allow this to work.
Since getting a Magic Keyboard for my iPad Pro, I’ve been using the iPad for many areas of work for which before I would have needed a laptop.
In fact, last week I was able to use the iPad full-time when at work in our new office space, and I didn’t need to reach for my MacBook once. When I can, I prefer working on the iPad due to its flexibility, brilliant display, speed, battery life, and more.
I often talk about self-hosting on this blog, and I’m certainly a big fan of being able to control my own data and systems wherever possible (and feasible). I’ve recently switched from using Nginx to Traefik as a reverse proxy for my server and for terminating TLS connections.
In this post I’ll talk a little about why and how I made this change.
I self-host a number of services, including Nextcloud for file storage and sync, Gitea for git syncing, FreshRSS for RSS feed aggregation and management, Monica for relationship organisation, and a few other things too.
If you’ve ever run your own Nextcloud before, you may have noticed screens like the following in your instance’s settings pages.
The messages advise a number of maintenance procedures to help ensure the smooth running of your instance. These could be to run database migrations or to update schemas in response to installing new apps.
Often these steps might involve running `occ` commands. `occ` is Nextcloud’s command-line interface, and is so-called because of its origins in ownCloud.
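For a Docker-based instance, these commands run inside the container as the web server user. For example (the container name here is an assumption - substitute your own):

```shell
# occ must run as www-data in the official image
docker exec -u www-data nextcloud php occ db:add-missing-indices
docker exec -u www-data nextcloud php occ maintenance:repair
```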
For as long as I’ve been using Matrix I’ve hosted my own homeserver on my own VPS and at my own domain.
I previously wrote about how I self-host my homeserver with the help of the Synapse project. Although this set-up is quite straightforward, it’s an extra system to maintain, with all of the associated overheads.
One of the reasons I don’t host my own mail server is that I fear missed messages and silent bounces. I trust dedicated mail providers (particularly Fastmail) more than myself in providing a robust enough service to ensure things get through. Equally, if I am telling other people my Matrix handle, then I want to make sure that messages they send (and those that I send) actually get delivered without any problems.
Some people may remember my quest a few months back to find a good alternative to Google Photos for image storage and backup.
At the time, I talked about Piwigo, Mega and pCloud as potential candidates. I also briefly touched upon Nextcloud in that post - a service I use (and self-host) anyway for all of my other storage needs, but I did not consider it further due to the high cost of the associated block storage required to house a large number of images.
A while ago I posted about how I back up my personal servers to Backblaze B2. That approach involved adding all of the files and directories into a single compressed archive and sending this up to an encrypted bucket on B2.
Whilst this does achieve my back-up goals (and I use Telegram to notify me each time it completes), it felt inelegant. Every time the back-up executed, the entirety of the back-up file - several gigabytes - would be built and transferred.
I enjoy reading my RSS feeds across my devices - whether it’s on my phone when out and about, my Mac in between bouts of work, or my iPad when in downtime.
Being able to sync feeds across these devices is important to me, both so I can maintain a single collection of feeds and to ensure that I can keep track of read/unread articles as I switch devices.
There are lots of web-based clients available, but using Reeder - a native app - gives a far nicer reading experience. There are lots of other clients for other types of devices too.
It’s common knowledge that part of Google’s business model is to use the data it knows about you, your searches, and browsing patterns in order to more effectively serve ads.
Many people feel uncomfortable with this and so there is a strong movement to adopt more privacy-focused options, such as DuckDuckGo. This was my position, too. For a few years I’ve been a solid DuckDuckGo user, and it was my default on Mac and mobile devices.
For a couple of years now I have been using a self-hosted Nextcloud as a replacement for iCloud and Google Drive. I won’t go into the details as to why (especially given the additional upkeep and other overheads required), as this has been covered before - but mainly it’s about maintaining control over my data.
I use a cloud VPS to host my Nextcloud instance - rented from Linode, whom I can certainly recommend if you’re looking for a good VPS provider - and since starting my Nextcloud journey I have begun hosting a number of additional services on the same server. For example, FreshRSS (which I consume using Reeder), Monica, Gitea, a Matrix server, and more.
In this post I will talk a little about how I handle my digital notes and to-do lists. In the spirit of my last post on data sovereignty, the focus will be on self-hosted approaches.
It feels odd that the first task many new technical frameworks guide users through, by way of a tutorial, is a simple to-do list; yet finding great production-ready examples of such software can be challenging.
Centralised communication services, such as Telegram, Signal, and WhatsApp, offer convenient means to chat to friends and family using your personal devices. However, these services also come with a number of pitfalls that are worth considering. For example:
There are, of course, other factors on both sides that you may want to consider. It can be hard to move away from these services - after all, there’s no point using a system that no-one else you need to talk to uses.
Many people no longer feel comfortable using Facebook. Whether you were never a member to begin with, have had an account but chosen to remove yourself from the service, or have simply tried to use it less - either way, it’s no surprise given the way that they operate across their family of products (including Instagram and WhatsApp) in terms of your own data and time.
This is a huge subject on its own, and it’s really up to everyone to make their own minds up when it comes to their own stance. It’s been widely discussed pretty much everywhere, and there are loads of resources available on this handy website if you’re interested in understanding more about what goes on behind the scenes on these platforms.
If you’ve visited my geminispace (gemini://wilw.capsule.town) you’ll have noticed that I’ve recently been on a mission to decentralise the every-day tools and services I use, and will understand the reasons why. This post will likely become part of a series of posts in which I talk about taking control and responsibility for my own data.
One of the changes I’ve made more recently is to move many of my own personal projects (including the source for this site) over to a self-hosted Gitea service. I chose Gitea personally, but there are many other self-hosted solutions available (see this post for examples and comparisons).