MinIO Is Dead. Here's What Your Infrastructure Team Should Do Next.

60,000 GitHub stars. One billion Docker pulls. Officially archived. MinIO's five-year wind-down from Apache 2.0 to AGPL to dead is the most dramatic open-source infrastructure collapse in years. Here's the migration playbook.

Infrastructure · Open Source · Object Storage · Architecture
March 1, 2026
7 min read

A project with 60,000 GitHub stars and over one billion Docker pulls just became a read-only archive on GitHub. No PRs. No issues. No contributions accepted. MinIO, the open-source S3-compatible object storage server that half the Kubernetes ecosystem depends on, is officially dead.

Not "unmaintained." Not "in maintenance mode." Archived. Done.

If your infrastructure runs MinIO, you need a plan. Not eventually. Now.

The timeline nobody wants to read

This wasn't sudden. MinIO spent five years slowly killing its open-source edition while hoping nobody would notice. Here's how it played out:

| Date | What happened |
| --- | --- |
| May 2021 | License changed from Apache 2.0 to AGPL v3 |
| July 2022 | Legal action against Nutanix for license violations |
| March 2023 | Legal action against Weka |
| May 2025 | Admin console GUI removed from Community Edition |
| October 2025 | Pre-built binaries and Docker images stopped for CE |
| December 2025 | Repository marked "maintenance mode, not accepting changes" |
| February 2026 | Repository officially archived. Read-only forever. |

Each step was small enough to rationalize. "It's just a license change." "They're just protecting their IP." "You can still build from source." By the time the community noticed the pattern, the frog was already boiled.

MinIO's commercial play is AIStor, available in a "Free" standalone edition and an "Enterprise" distributed edition. The message is clear: if you want maintained S3-compatible storage from MinIO Inc., you're paying for it.

The fork: pgsty/minio

Within days of the archive, Ruohang Feng (the Pigsty founder) forked the repo to pgsty/minio. The fork restores the admin console GUI, rebuilds the binary distribution pipeline (RPM/DEB packages), and patches CVE-2025-62506, a privilege escalation vulnerability where low-privilege users could mint new accounts.

The swap is straightforward: replace minio/minio with pgsty/minio in your deployment. Everything else stays the same. Same config, same API, same S3 compatibility layer.
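For container deployments, that typically means one line in your compose file or Helm values. A sketch of the change, as a Docker Compose fragment; the registry path and tag are placeholders, not published artifact names, so confirm the real ones in the pgsty/minio release notes:

```yaml
services:
  minio:
    # was: image: minio/minio:<pinned-release-tag>
    # Placeholder below -- check the fork's releases for actual
    # registry, image name, and tags before deploying.
    image: <fork-registry>/pgsty/minio:<release-tag>
    command: server /data --console-address ":9001"
    volumes:
      - minio-data:/data
volumes:
  minio-data:
```

Data layout, environment variables, and client configuration carry over unchanged, which is the whole point of a drop-in fork.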

But here's the question nobody wants to ask out loud: should you trust a community fork of critical storage infrastructure?

The honest answer is complicated. The fork is maintained by one person with a strong track record (Pigsty is serious infrastructure software). But "maintained by one person" is exactly how single points of failure work. For development environments and internal tooling, the fork buys you time. For production data at scale, you need a longer-term strategy.

The alternatives (and why none of them are MinIO)

I spent time evaluating every serious contender. Vonng's comparison and the Proxmox forum threads saved me some legwork, but I tested the ones that matter. Here's where things actually stand:

Ceph is the enterprise answer. It handles block, file, and object storage in one system, and its S3 gateway (via RADOS Gateway) is battle-tested. The problem: Ceph is brutally complex to operate. MinIO spoiled us with a single Go binary. Ceph requires dedicated nodes, careful CRUSH map tuning, and operational expertise that most teams don't have. If you already run Ceph, adding S3 via RGW is a no-brainer. If you don't, spinning up Ceph just for object storage is like buying a semi truck to deliver groceries.

SeaweedFS excels at one thing: billions of small files with O(1) disk seeks. If that's your workload, it's genuinely excellent. The catch is it needs an external metadata store (usually Filer backed by LevelDB, MySQL, or PostgreSQL). For general-purpose S3 workloads, that dependency adds operational overhead MinIO never required.

Garage is the lightweight option. A 10 MB binary built by Deuxfleurs with NGI funding, designed for self-hosters and edge deployments. It's delightful for personal projects and small-scale use. But S3 compatibility is thin: no versioning, no cross-region replication, no IAM. Enterprise teams will hit walls fast.

RustFS is the most interesting long-term bet. It explicitly targets the "drop-in MinIO replacement" niche and is written in Rust. But it's still alpha. No mc admin support, different health check endpoints, and certificate handling quirks. Vonng tested it in Pigsty and shelved it. More concerning: RustFS uses the same Apache 2.0 + CLA + single-company model that MinIO started with. The pattern that led us here could repeat.

Here's the real comparison for production teams:

|  | MinIO (fork) | Ceph RGW | SeaweedFS | Garage | RustFS |
| --- | --- | --- | --- | --- | --- |
| S3 compatibility | Full | Full | Good | Basic | Partial |
| Operational complexity | Low | High | Medium | Low | Low |
| Production readiness | Yes (legacy) | Yes | Yes | Limited | Alpha |
| Single binary | Yes | No | No | Yes | Yes |
| Active development | Fork only | Yes | Yes | Yes | Early |
| License risk | AGPL v3 | LGPL | Apache 2.0 | AGPL v3 | Apache 2.0 + CLA |

What I'd actually do (depending on your situation)

If you're running MinIO in production today:

Pin your current version. Don't update, don't change anything yet. Audit whether you're on a build that includes the CVE-2025-62506 fix (anything after October 2025). If not, either build from source on the archived repo or switch to the pgsty/minio fork for the patched binary. Lock MinIO behind your network perimeter.
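MinIO server builds are tagged with their build timestamp (`RELEASE.YYYY-MM-DDTHH-MM-SSZ`), so the audit can be scripted. A minimal sketch, using the post-October-2025 date as a rough proxy for "includes the CVE-2025-62506 fix"; verify the exact patched release against the security advisory before relying on this cutoff:

```python
from datetime import datetime

# Assumed cutoff: builds from October 2025 onward carry the
# CVE-2025-62506 fix. Confirm against the official advisory.
CVE_FIX_CUTOFF = datetime(2025, 10, 1)

def includes_cve_fix(release_tag: str) -> bool:
    """Parse a MinIO release tag and compare its build date to the cutoff."""
    built = datetime.strptime(release_tag, "RELEASE.%Y-%m-%dT%H-%M-%SZ")
    return built >= CVE_FIX_CUTOFF

# Feed this the tag reported by `minio --version`.
print(includes_cve_fix("RELEASE.2025-04-22T22-12-26Z"))  # False: pre-fix build
```

Run it against every MinIO instance in your inventory; anything returning False goes on the patch-or-fence-off list.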

You have months, not days. MinIO doesn't stop working because GitHub archived the repo. Your binaries still run. Your data is still there. Make decisions from stability, not panic.

If you're choosing object storage for a new project:

Don't pick MinIO. The fork's long-term maintenance is uncertain, and you'd be taking a new dependency on archived software.

For simple S3 needs (backups, artifact storage, dev environments): evaluate SeaweedFS or Garage based on your scale. For enterprise workloads: Ceph if you have the ops team, or just use your cloud provider's object storage. The cost of running your own S3-compatible layer is only justified if you have specific latency, sovereignty, or air-gap requirements.

If you're a platform team maintaining MinIO for others:

Start the migration conversation now, even if the migration itself is months away. Document which services depend on MinIO, what S3 features they actually use (most teams use 10% of the API surface), and what their data volumes look like. This inventory is valuable regardless of which alternative you pick.
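The "which S3 operations do we actually use" question can be answered from logs rather than guesswork. A sketch that tallies operation names from MinIO audit-log lines; the `api.name` field layout is an assumption about your log shipper's output, so adjust the key path to match what you actually collect:

```python
import json
from collections import Counter

def s3_operations_used(log_lines):
    """Count S3 API operations seen in JSON audit-log lines.

    Assumes each line is a JSON object with an `api.name` field;
    malformed lines are skipped rather than failing the whole scan.
    """
    ops = Counter()
    for line in log_lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        name = entry.get("api", {}).get("name")
        if name:
            ops[name] += 1
    return ops

sample = [
    '{"api": {"name": "PutObject"}}',
    '{"api": {"name": "GetObject"}}',
    '{"api": {"name": "GetObject"}}',
    'not json',
]
print(s3_operations_used(sample))  # GetObject twice, PutObject once
```

A week of logs run through this usually confirms the 10%-of-the-API claim, and it tells you immediately whether Garage's thin compatibility surface would actually hurt you.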

The bigger lesson: infrastructure dependency auditing

MinIO's death follows a pattern that's becoming disturbingly common. VC-funded company builds open-source infrastructure. Gains massive adoption. Changes license. Strips features from community edition. Eventually kills or archives the open-source version.

We've seen this with Elasticsearch (OpenSearch fork), Redis (Valkey fork), Terraform (OpenTofu fork), and now MinIO. The playbook is identical every time:

  1. Build in the open, gain trust and adoption
  2. Change license to something more restrictive
  3. Gradually gate features behind commercial tiers
  4. Archive or abandon the community edition
  5. Community forks, some survive, most don't

If your infrastructure depends on VC-funded open source, you need a dependency audit that goes beyond "is it maintained?" Ask:

  • What's the license, and has it changed in the last 3 years?
  • Is there a CLA that assigns copyright to a single entity?
  • What percentage of commits come from one company?
  • Is there a credible foundation or multi-vendor governance?
  • What's the commercial product, and how does it relate to the open-source version?

None of these are disqualifying on their own. All of them together are a red flag.
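The commit-concentration question is easy to quantify. A sketch of that one signal: the share of commits whose author email belongs to a single domain, fed from `git log --format=%ae`; the threshold you alarm on is a judgment call, not a rule:

```python
from collections import Counter

def single_vendor_share(author_emails):
    """Return (top_domain, fraction_of_commits) for a list of author emails.

    Emails come from `git log --format=%ae`; corporate domains are a
    rough proxy for employer, so treat the number as a signal, not proof.
    """
    domains = Counter(email.rsplit("@", 1)[-1] for email in author_emails)
    if not domains:
        return None, 0.0
    domain, count = domains.most_common(1)[0]
    return domain, count / len(author_emails)

emails = ["a@min.io", "b@min.io", "c@min.io", "d@example.org"]
domain, share = single_vendor_share(emails)
print(domain, round(share, 2))  # min.io 0.75
```

A project where one domain owns 90%+ of commits and holds a copyright-assigning CLA can relicense or archive at will; that combination is the pattern this post describes.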

The uncomfortable truth

MinIO worked. For years, it was the answer to "I need S3 but self-hosted." Single binary, fast, well-documented, battle-tested. The fact that it's gone doesn't erase that.

But it does mean that every team running MinIO now carries technical debt they didn't sign up for. The fork buys time. The alternatives each solve part of the problem. None of them are MinIO.

The teams that will handle this best are the ones that start planning now while their MinIO instances are still running fine. The ones that will handle it worst are the ones who wait until something breaks.

Don't be the second group.
