netbox 3.0.7 on OpenBSD 7


Ever since dywis0r made me aware of netbox I had been planning to get my hands dirty with it. But only after watching loads of videos on the topic and after being "forced" to use it at work have I finally gained enough momentum to start the journey for myself.

At the beginning of it lay another topic I had been successfully procrastinating on for a very long time: a sufficiently detailed network diagram which was both useful and pleasing to the eye. Being especially interested in isometric network diagrams, I started working on that very foundation for better documentation of my home network. A journey which led me down a rabbit hole, at the bottom of which I found inkscape to be the best tool available for my different needs. It's not the most efficient tool for drawing a network diagram, but after a steep learning curve I had a nice product. But that is another story.

Back to the topic at hand. After drawing the diagram and cleaning up my network, which countless redesigns had littered with artifacts of my learning and labbing at home, I started working on netbox. After some further research I found what I think is a good starting point over at Jasper's blog. I used it as a skeleton but wanted to

  1. use relayd(8) instead of nginx for redirecting static content elsewhere
  2. use httpd(8) instead of nginx to serve static content
  3. use rc(8) instead of supervisord

in large parts because the software was already lying around and the target system already had httpd running. The architecture looks more or less like this:

[architecture overview: netbox running with httpd(8) and relayd(8)]
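A minimal sketch of the two daemons' configuration, assuming gunicorn listens on 127.0.0.1:8001 and the collected static files live under /netbox/static; the table names, ports and paths are my own placeholders, not taken from the actual setup:

```
# /etc/relayd.conf (sketch)
table <netbox> { 127.0.0.1 }
table <httpd>  { 127.0.0.1 }

http protocol "netbox" {
        match request header set "X-Forwarded-For" value "$REMOTE_ADDR"
        # hand requests for static content off to httpd(8),
        # everything else goes to gunicorn
        match request path "/static/*" forward to <httpd>
}

relay "netbox" {
        listen on egress port 443 tls
        protocol "netbox"
        forward to <netbox> port 8001
        forward to <httpd>  port 8080
}

# /etc/httpd.conf (sketch): serve the static files
server "netbox" {
        listen on 127.0.0.1 port 8080
        root "/netbox/static"
}
```

relayd(8) does the TLS termination and path-based routing here, so httpd(8) only ever sees plain HTTP on the loopback interface.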

Other than that you get the same as with Jasper's setup:

The following documents the steps needed to setup NetBox on OpenBSD. I am running NetBox on a PC Engines APU which holds up fairly well and I have since migrated my own setup from RackTables to NetBox, primarily because of the API functionality NetBox offers which allows for integration with SaltStack. But more on that some other …

Moving. Again


So I am back on Vultr. Kind of. Not that I am disappointed with Hetzner, but I just want to run my stuff at home, like I did when I first started getting more involved in being part of the Internet, before running your mail server from home became more or less impossible. I still like thinking of the Internet as a decentralized space. A space where anyone can found their own settlement.

I had also been chewing on the bone of self-hosting again for quite a while, and what held me back the most were my own domain and my wish to run my own name servers. Until recently I was running my setup on either two Vultr instances or two virtual machines on my Hetzner setup in order to satisfy the requirement of having two nameservers available. Luckily I was finally able to collaborate with a friend of mine in that regard. He also runs his own nameservers and agreed to act as my secondary, so I am even better off than before.

I thought about how to move my setup back into my home without having to get a business line and therefore a static IP. And I thought others might be interested in doing the same, so I started thinking about setting it up in a way I could provide as a service. Which was a good reason to get into docker. So I installed Alpine on Vultr and started to build my own docker images, as I don't want to use 3rd-party images without knowing how they have been set up, and instead of spending my time auditing images I wanted to spend it learning docker.
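To give an idea of the direction, a minimal Alpine-based image could look like this; the package and the entrypoint are purely illustrative and not taken from my actual images:

```dockerfile
# Hypothetical minimal image for one service of the setup.
FROM alpine:3.12

RUN apk add --no-cache postfix

# Run in the foreground so docker can supervise the process itself.
CMD ["postfix", "start-fg"]
```

Building your own images this way keeps the base small and, more importantly, leaves no doubt about what ended up inside them.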

Using a dynamic DNS provider was not an option for two reasons:

  1. I wanted to send e-mails, which has been dead for decades if you are running your mailserver from a consumer dialup range
  2. I wanted my mail setup to stay online, and if possible at least partially my blog as well

The idea was to run a per-tenant mail relay and a caching reverse proxy connecting back to your basement via wireguard. I got most of the parts running on my own docker images but honestly, progress was slow and …
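The tunnel underneath that idea could be sketched like this on the VPS side; keys, addresses and the port are placeholders, assuming one peer per tenant:

```
# Hypothetical wg0.conf on the VPS (sketch, not the actual setup)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

# one [Peer] section per tenant, each reaching back into their basement
[Peer]
PublicKey = <tenant-public-key>
AllowedIPs = 10.0.0.2/32
```

The relay and the caching proxy would then simply forward to the tenant's 10.0.0.x address, with the VPS holding the public, static IP.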

Unifi Network Controller on Debian 10 (as OpenBSD guest)


Lately problems emerged with my self-hosted Unifi network controller which I had been running on a Raspberry Pi. Mainly I suffered from a missing admin collection in the underlying MongoDB, which rendered my controller unmaintainable, as I was unable to log in to the system. Further investigation also showed multiple warnings about ext4 problems, so I decided to move away from the Raspberry and host the controller on a Linux guest running on OpenBSD's vmd(8).

My first attempt was Alpine Linux. I really enjoyed the brief moments with it and the installer seemed to be OpenBSD-inspired, which I liked. Sadly, current Alpine Linux does not have any MongoDB package available due to a change in licensing on MongoDB's side. So I decided to go with Debian, as both MongoDB and Ubiquiti provide packages for it. Being a security-conscious being, I opted for Debian 10, their current stable distribution.

This is where the trouble began.

The nice thing about running the latest stable is having current (i.e. in the Debian sense, for that matter) software packages at your disposal. Little did I know that Debian had also ditched MongoDB for the same reasons as Alpine (or the other way around?), but luckily I could get away with using MongoDB's repository for the 3.6 release of the database (unifi's package does not support versions >= 4.0.0). Unifi also has trouble with Java 11, and last but not least it uses a poor choice of TLS parameters, which culminated in an instance of the controller I was unable to reach from my browser, as there was no way to negotiate a secure connection. To make matters worse, some commands taken from Ubiquiti's documentation harmed the overall process (apt-mark). But to be fair, the instructions are for Debian 8 and 9.
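One way I know to keep apt on the 3.6 branch is an apt preferences pin; this is my own addition, not part of Ubiquiti's instructions, and the file name is arbitrary:

```
# /etc/apt/preferences.d/mongodb (sketch)
# Pin the MongoDB packages to the 3.6 branch so apt never
# pulls in a version >= 4.0.0 that unifi cannot work with.
Package: mongodb-org*
Pin: version 3.6*
Pin-Priority: 1001
```

A priority above 1000 makes apt prefer the pinned version even over an already installed newer one, which is exactly what you want when a repository starts shipping 4.x packages.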

So, without further ado, here are the steps to get the Unifi network controller v6.2.26 running:

apt install -y gnupg2
echo "deb http://repo.mongodb.org/apt/debian stretch/mongodb-org/3.6 main" | \
    tee …

Bruteforcing my own Bitwarden vault


Lately I lost the masterkey to my Bitwarden vault. As Bitwarden does not provide a way out of that rabbit hole, losing my masterkey would mean losing all of my data within the vault. Something around 300 entries.

Luckily I still had access to the vault on one of my computers, where it had not been locked, and on my mobile, which uses Face ID to unlock the vault. But without my masterkey I was unable to export the entries, so my only choices were

  1. lose everything
  2. transcribe all entries manually
  3. patch the Bitwarden Firefox extension so I could bypass the masterkey and export the vault

It goes without saying that option #1 was totally unacceptable. Option #2 was nice to have as a backup plan, and option #3 was nice to know and probably worthwhile to follow through on someday.

But, as the title suggests, there is another option: bruteforcing. My masterkey is actually a passphrase comprised of a number of words. I was pretty sure about which words had most likely been lost, and I also had a list of candidates. Basically I assumed that no more than two words could have been wrong. So I searched my system for the wordlist used by Bitwarden to generate passphrases and started to assemble a wordlist of likely candidates for bruteforcing. My final wordlist has 187735 candidates. I was also able to extract my keyHash from data.json, and with a little help from a friend I also found out how Bitwarden generates the keyHash saved on disk, which basically is

pepper  = pbkdf2(sha256, pass = masterkey, salt = email,     rounds = 100000)
keyHash = pbkdf2(sha256, pass = pepper,    salt = masterkey, rounds = 1)
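This derivation can be reproduced with nothing but the Python standard library. Two details are my reading of the clients, so treat them as assumptions: the e-mail is normalized (trimmed and lowercased) before being used as salt, and the keyHash in data.json is base64-encoded:

```python
import base64
import hashlib

def bitwarden_key_hash(masterkey: str, email: str) -> str:
    # pepper  = pbkdf2(sha256, pass = masterkey, salt = email, rounds = 100000)
    pepper = hashlib.pbkdf2_hmac("sha256", masterkey.encode(),
                                 email.strip().lower().encode(), 100_000)
    # keyHash = pbkdf2(sha256, pass = pepper, salt = masterkey, rounds = 1)
    key_hash = hashlib.pbkdf2_hmac("sha256", pepper, masterkey.encode(), 1)
    # data.json stores the hash base64-encoded (assumption, see above)
    return base64.b64encode(key_hash).decode()
```

Comparing the output of this function against the extracted keyHash for each candidate passphrase is all the "cracking" there is to it.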

Googling for a Bitwarden-specific bruteforcer was unsuccessful, so the plan grew on me to write my own. As I still had access to the vault, I was under no pressure, at least not on the time front. Some hours later bw_brute.py had been conjured, and tests with a list of one thousand entries finished in 9s on my 8-core, 16-thread Ryzen CPU, which left me …
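The search itself can be sketched with the standard library alone. This is a toy version under assumed words and a reduced round count, not the actual bw_brute.py; the real thing would use the full 100000 rounds, the 187735-entry wordlist, and multiprocessing to spread the work over all cores:

```python
import hashlib
import itertools

def key_hash(passphrase: str, email: str, rounds: int = 100000) -> bytes:
    # pepper  = pbkdf2(sha256, pass = masterkey, salt = email, rounds = 100000)
    pepper = hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                                 email.encode(), rounds)
    # keyHash = pbkdf2(sha256, pass = pepper, salt = masterkey, rounds = 1)
    return hashlib.pbkdf2_hmac("sha256", pepper, passphrase.encode(), 1)

def candidates(base_words, wordlist):
    """Yield passphrases with up to two positions replaced by wordlist entries
    (the wordlist should include the original words to cover fewer swaps)."""
    for i, j in itertools.combinations(range(len(base_words)), 2):
        for a, b in itertools.product(wordlist, repeat=2):
            words = list(base_words)
            words[i], words[j] = a, b
            yield "-".join(words)

# Toy demonstration: made-up words, reduced rounds, and a target hash
# derived from a "forgotten" passphrase we pretend not to know.
email = "user@example.com"
target = key_hash("correct-horse-donkey-staple", email, rounds=1000)
base = ("correct", "horse", "battery", "staple")
wordlist = ["horse", "battery", "staple", "donkey", "mule"]
hit = next(p for p in candidates(base, wordlist)
           if key_hash(p, email, rounds=1000) == target)
```

Since PBKDF2 is deliberately slow, the cost of the search is dominated by the hash computations, which is why the number of wrong-word positions has to be kept small.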