Bluefin Linux
My experience with a new kind of Linux
It’s an unwritten rule that when writing about Linux, we oldies have to stake a claim to our earliness by recounting the first time we ever saw or used Linux. Well, for me the first time I actually installed it was in 1999 with Red Hat 6.0. At the time I was working as a web developer in London and my machine of choice was a Mac running OS 9. Because I needed Perl to write CGI scripts, I had installed an amazing tool called MachTen that allowed me to run a virtual UNIX machine right on top of the original MacOS. That first taste of a UNIX shell left me wanting more, and when I heard the rumours that Apple were about to adopt a UNIX-based operating system, I decided it was time to get to grips with a real OS (I still have a rare official Apple MkLinux CDROM from those days, but nothing ever came of that, unfortunately). One of the sysadmins at my office would turn up wearing Linux t-shirts, and he told me all about it. I had an unused Pentium gaming machine lying around, so I decided to give it a try.
In those days of dial-up modems, downloading install media was out of the question. So one Saturday I made the trip to the long-disappeared PC Bookshop that used to be located on Sicilian Avenue in Holborn, London. That bookshop was one of my favourites (along with Compendium in Camden). I would spend hours browsing the O’Reilly books, and when I finally decided which one to buy I would decamp to the Spaghetti House restaurant across the road to devour it with a plate of pasta and a glass of red wine.
The PC Bookshop had a special shelf given over to Linux, and at one end of the shelf, next to the books, there was a line of intriguing-looking cardboard boxes. Each box contained a handbook and anywhere from 5 to 15 CDROMs. I can’t remember all the names now except for SUSE and Red Hat. I liked the Red Hat graphics so I chose that. Version 6.0 it was. I remember that I was too excited to go to the Spaghetti House that day - I needed to get home so I could crack open the box and get installing. And that was that. My Linux journey had begun. I was completely enthralled. In those days the difference between commercial operating systems and Linux was stark. The most polished GNOME app was the system monitor. An early version of Gimp was pretty sad compared to the Photoshop of the time. But I didn’t care. I had root.
Over the years I tried it all: Slackware, Debian, Gentoo. When Ubuntu arrived it seemed like a miracle, and I’ve been using it ever since.
So for twenty years or so I became kind of settled. It’s been gratifying to see Linux gain more acceptance, and it’s been great watching more people discover it.
But to my eyes, Linux has only ever seen slow, steady improvement. The fabled “year of the Linux desktop” just never seemed to arrive. Gaming is better now, and the desktop gets slicker every year. But the wider picture has pretty much stayed the same. In Ubuntu land I began to dread the update messages: the interruptions to my workflow when the inevitable upgrade from the furtive ferret to the gorgeous gorilla forced me to stop what I was doing and sit and watch the update, praying that my carefully crafted setup would survive. This seemed to be the way: slow improvement but no major innovation. I watched as a new generation of developers began to shun Linux for MacOS. I never understood how clever people could choose closed systems over open. But in hindsight it’s obvious: MacOS has a degree of polish that Linux traditionally doesn’t. MacOS just works.
Well, ladies and gentlemen, those days are over. For the first time in a long time I have discovered real innovation in a Linux system and that system is called Bluefin. On the Bluefin Documentation site they describe it thus:
“Bluefin is a next-generation Linux desktop that trends toward progressive improvement. We rigorously and aggressively move away from legacy technologies as soon as possible to provide the best possible experience.”
There’s some great insight on that website and I will quote it further:
“Bluefin features a GNOME desktop configured by our community. It is designed to be hands-off and stay out of your way so you can focus on your applications.”
“System updates are image-based and automatic. Applications are logically separated from the system by using Flatpaks for graphical applications and brew for command-line applications.”
“Bluefin is ‘An interpretation of the Ubuntu spirit built on Fedora technology’—a callback to an era of Ubuntu’s history that many open source enthusiasts grew up with, much like the Classic X-Men. We aim to bring that same vibe here; think of us as the reboot. Chill vibes.”
There is so much to unpack in this.
What Bluefin Does Differently
Bluefin is built on Universal Blue, which is built on Fedora. But it uses an image-based model that flips the traditional approach entirely.
Rather than managing individual packages, you receive the base OS as a complete, pre-built image. That image is read-only (immutable in the parlance). You don’t install system packages into it. You don’t tweak it. You just use it.
When an update comes, you’re not applying a patch on top of your existing system state, you’re atomically switching to a new image. Reboot, and you’re running the new version. If something’s wrong, you boot back into the previous image. The whole thing is transactional in the same way that switching between git commits is transactional.
This sounds limiting. In practice, it’s liberating.
So Where Does Your Software Actually Live?
The natural question: if the base system is read-only, where do you install things?
Bluefin has a clear answer to this, and it’s organised by layer:
GUI applications live as Flatpaks, delivered through Flathub. This is the default for anything you’d launch from your app drawer: browsers, image editors, communication tools. Flatpaks are sandboxed, self-contained, and don’t touch the base OS at all.
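To give a flavour of the workflow, here is a sketch of the commands involved (the app ID shown is GIMP’s real Flathub ID, used purely as an example):

```shell
# Install a graphical app from Flathub; it never touches the base OS
flatpak install flathub org.gimp.GIMP

# Run it (it also appears in your app drawer as normal)
flatpak run org.gimp.GIMP

# Update every Flatpak on the system in one go
flatpak update
```

Because each Flatpak is sandboxed and self-contained, updating or removing one can never break another app or the base image.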
CLI tools and dev utilities live in Homebrew, which runs entirely in user space. No root required. No system pollution. If you’ve used Homebrew on a Mac, the mental model transfers directly. I think this is a real stealth win for Bluefin. In the old days Linux geeks argued over the relative merits of APT and RPM. Meanwhile the cool kids are running MacOS and using Homebrew. It makes total sense to unify on a single package manager across platforms. Fewer headaches for maintainers and more packages for users. Win-win.
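In practice it looks just like it does on a Mac (the package names below are only examples):

```shell
# On Linux, Homebrew lives under /home/linuxbrew/.linuxbrew; no sudo needed
brew install ripgrep fzf   # CLI tools, kept out of the base OS

# Keep everything brew manages up to date
brew upgrade

# See what you have installed
brew list
```

The base image never learns these tools exist, which is precisely the point.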
Anything that needs a traditional Linux environment, say a project with specific apt or dnf dependencies, lives inside a Distrobox container. Distrobox lets you run any Linux distro as a container that integrates with your home directory and your PATH. You can have an Ubuntu container for one project and an Arch container for another, and neither touches the base system. This is just perfect for someone like me, who has been living with APT for so long that the thought of giving it up makes me anxious. The real magic of Distrobox is that it allows you to export applications from the container and use them in the host environment in a completely transparent way. You can add the app icon to the system tray and forget about the container completely.
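A typical session, sketched with illustrative names (the image tag and the exported app are just examples):

```shell
# Create an Ubuntu container that shares your home directory and PATH
distrobox create --name ubuntu-dev --image ubuntu:24.04

# Enter it; inside, apt works exactly as on a normal Ubuntu machine
distrobox enter ubuntu-dev
sudo apt update && sudo apt install -y build-essential gedit

# From inside the container, export an app to the host's launcher
distrobox-export --app gedit
```

After the export, the app shows up in the host’s app drawer like any other, and you can forget the container is even there.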
One of the first things I did after the Bluefin install was install DaVinci Resolve. Video-heavy apps like this often cause problems, particularly with the recent shift away from traditional X11 towards Wayland. This was a fantastic experience and gave me my first taste of what life on Bluefin is like. I quickly discovered Davincibox and got myself up and running in no time.
Upgrades That Actually Feel Like Upgrades
One of the quiet joys of this model is how OS upgrades work.
On a traditional distro like Ubuntu, upgrading to a new major version is an event. There are guides. There are things that might break (and they often do). You back up your dotfiles and hope for the best.
On Bluefin, upgrading the base is a rebase. You run the update, confirm, reboot. That’s it. The immutable base is just swapped out cleanly, and your Flatpaks, Homebrew tools, and home directory are entirely untouched. It’s conceptually closer to flashing a new firmware version than to upgrading a traditional OS. It’s the way Chromebooks work. My wife has one, and it never gives her any trouble. It’s a really powerful approach, particularly because when Bluefin boots a new image, the old one is retained and you can easily jump back to it should you hit a problem.
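Bluefin applies updates automatically in the background, but the whole dance can be reduced to a couple of commands, sketched here using bootc directly:

```shell
# Stage the latest image; it becomes the active system on the next reboot
sudo bootc upgrade

# If the new image misbehaves, fall back to the previous deployment
sudo bootc rollback
```

That rollback path is the safety net the traditional upgrade model never really had.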
What I’ve just described, the clean upgrades, the layered approach, that all stems from something deeper. Bluefin isn’t really a distro in the way we usually think about distros.
Why Bluefin is “Not a Distro”
Think about what happens when you run apt upgrade or dnf update on a traditional system. Your machine becomes a little assembly plant: it downloads hundreds of individual packages and tries to weave them together on your specific machine, with your specific history of installed stuff. If a script fails or a dependency conflicts, you’re the one who gets to fix it. We’ve all been there.
Bluefin skips all of that. It doesn’t host its own repos or ship its own packages. Instead, the developers take Fedora Silverblue as a base, apply a curated set of configurations, and seal the whole thing into a single image. You don’t build or update the OS locally. You just download the finished product. Think of it less like a distro and more like firmware for your desktop.
Why They Call it “Cloud-Native”
Here’s where it gets properly interesting for anyone who’s worked with containers. The entire Bluefin OS is defined by a Containerfile, the same format you’d use for a Docker image. It’s built using OCI (Open Container Initiative) standards, and the finished images live on the GitHub Container Registry. Your laptop pulls its OS updates the same way a Kubernetes cluster pulls a new deployment.
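To make that concrete, a minimal Containerfile in this style might look like the following sketch (the base image matches Universal Blue’s public registry; the added package is just an example):

```dockerfile
# Start from the published Bluefin image on GitHub Container Registry
FROM ghcr.io/ublue-os/bluefin:latest

# Layer extra packages into the image at build time,
# once, centrally - not on each individual machine
RUN dnf install -y htop && \
    ostree container commit
```

The build runs in CI, the result is pushed to a registry, and every machine simply pulls the finished artifact.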
If you’ve ever worked with immutable infrastructure, where you replace servers rather than patching them, this is exactly that, but for your desktop. Updates download in the background as a single atomic chunk. They either succeed completely or fail cleanly. And if something goes wrong, you just reboot into the previous image. No prayer required.
The Engine: How bootc Fits In
So if the OS is a container image, what actually boots it on your machine? That’s where bootc comes in. It stands for Bootable Containers.
Earlier immutable desktops used rpm-ostree, which was clever. It worked a bit like Git for your OS. But it was never designed to pull OCI containers natively; that was bolted on afterwards. bootc was built from scratch for this world. It takes a container image, unpacks it onto your drive, configures the bootloader, and tells the kernel to boot directly from that image. Normally an OS runs containers. Here, the OS is the container. It’s a neat inversion.
And because bootc treats the OS as just a container tag, swapping your entire system is as trivial as changing a URL.
The Power of Re-basing: Say you’re running standard Bluefin but fancy trying the KDE Plasma version, or the gaming-focused one. You don’t need an installation USB. You just tell bootc to switch to a different image URL; it pulls the new image, stages it alongside the old one, and when you reboot, there you are, running a completely different system. Your home directory stays exactly where it was, completely untouched.
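Assuming the target tag exists on the registry (Universal Blue publishes several variants, such as the developer-focused bluefin-dx), the switch is a single command:

```shell
# Rebase the running system to a different published image
sudo bootc switch ghcr.io/ublue-os/bluefin-dx:stable

# Inspect the staged, booted, and rollback deployments
sudo bootc status
```

On the next reboot you are running the new variant, with the old image still on disk should you want to return to it.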
The upshot of all this is that Universal Blue has effectively turned the Linux desktop into just another tag on a container registry. It sounds dry when you put it like that, but in practice it’s transformative.
But what if I really need to tweak the system files?
Fair question. The core system directory is read-only, but that doesn’t mean your entire hard drive is frozen. The /etc and /var directories are fully writable, so system-wide settings work just as you’d expect. Changes there persist across updates and reboots.
For anything that would traditionally need sudo apt install, things like development tools and custom binaries, you’re back in Distrobox or Toolbx territory. Spin up a container, use whatever package manager you like with full root access, and export the apps straight to your Bluefin launcher. Your base system stays clean, and you still get to tinker to your heart’s content.
And if you genuinely need to change the base image itself, say a different kernel or a baked-in system service, the cloud-native model has you covered. Bluefin is defined by a Containerfile, so you can fork the repo on GitHub, add your modifications, and let the CI pipeline build your own personalised OS image. Point your machine at your custom registry and you’ve got your own bespoke, automatically maintained system. It’s the kind of thing that would have sounded like science fiction back in my Red Hat 6.0 days. There are even projects that are designed to help you build your own custom version. Take a look at finpilot and BlueBuild.
So what are the benefits of all this?
The really exciting thing about all of this is that we finally have a solid, sleek, slick desktop Linux system that just works. I bought a new laptop to install Bluefin on. It has an Nvidia graphics card that I’ll use for local AI (and a little gaming). The drivers are baked into the system and everything just worked first time. I have the latest GNOME, a really modern kernel, and all the AI development tools I could want. Even the Bluefin-cli is a thing of beauty.
Best of all, I never have to worry about upgrades ever again. That Saturday in 1999, lugging a Red Hat box home from Holborn, I couldn’t have imagined this. Welcome to the future.