v0.39.0
[!NOTE] This release was brought to you by the Shipyard team.
This release is an important step toward solving the DHT bottleneck for self-hosting IPFS on consumer hardware and home networks. The DHT sweep provider (now default) announces your content to the network without traffic spikes that overwhelm residential connections. Automatic UPnP recovery means your node stays reachable after router restarts without manual intervention.
New content becomes findable immediately after ipfs add. The provider system persists state across restarts, alerts you when falling behind, and exposes detailed stats for monitoring. This release also finalizes the deprecation of the legacy go-ipfs name.
The Amino DHT Sweep provider system, introduced as experimental in v0.38, is now enabled by default (Provide.DHT.SweepEnabled=true).
What this means: All nodes now benefit from efficient keyspace-sweeping content announcements that reduce memory overhead and create predictable network patterns, especially for nodes providing large content collections.
Migration: The transition is automatic on upgrade. Your existing configuration is preserved:
- If you explicitly set Provide.DHT.SweepEnabled=false in v0.38, you'll continue using the legacy provider
- To opt out of sweep mode: ipfs config --json Provide.DHT.SweepEnabled false
- If Routing.AcceleratedDHTClient is enabled, full sweep efficiency may not be available yet; consider disabling the accelerated client as sweep is sufficient for most workloads. See caveat 4.

New features available with sweep mode are described in the sections below.
For background on the sweep provider design and motivations, see Provide.DHT.SweepEnabled and Shipyard's blogpost Provide Sweep: Solving the DHT Provide Bottleneck.
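To check whether your node carries an explicit override, one option is to inspect the Provide section of your config; if the section or the key is absent, the new default (sweep enabled) applies:
ipfs config show | grep -A 5 '"Provide"' # prints the Provide section if present in your config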
When you add content to IPFS, the sweep provider queues it for efficient DHT provides over time. While this is resource-efficient, other peers won't find your content immediately after ipfs add or ipfs dag import completes.
To make sharing faster, ipfs add and ipfs dag import now do an immediate provide of root CIDs to the DHT in addition to the regular queue (controlled by the new --fast-provide-root flag, enabled by default). This complements the sweep provider system: fast-provide handles the urgent case (root CIDs that users share and reference), while the sweep provider efficiently provides all blocks according to Provide.Strategy over time.
This closes the gap between command completion and content shareability: root CIDs typically become discoverable on the network in under a second (compared to 30+ seconds previously). The feature uses optimistic DHT operations, which are significantly faster with the sweep provider (now enabled by default).
By default, this immediate provide runs in the background without blocking the command. For use cases requiring guaranteed discoverability before the command returns (e.g., sharing a link immediately), use --fast-provide-wait to block until the provide completes.
Simple examples:
ipfs add file.txt # Root provided immediately, blocks queued for sweep provider
ipfs add file.txt --fast-provide-wait # Wait for root provide to complete
ipfs dag import file.car # Same for CAR imports
Configuration: Set defaults via Import.FastProvideRoot (default: true) and Import.FastProvideWait (default: false). See ipfs add --help and ipfs dag import --help for more details and examples.
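If you want these defaults written explicitly into your config, you can set them with ipfs config (the values below simply restate the built-in defaults):
ipfs config --json Import.FastProvideRoot true # provide root CIDs immediately on add/import (default)
ipfs config --json Import.FastProvideWait false # don't block the command on the root provide (default)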
Fast root CID provide is automatically skipped when DHT routing is unavailable (e.g., Routing.Type=none or delegated-only configurations).
The Sweep provider now persists the reprovide cycle state and automatically resumes where it left off after a restart, improving reliability for nodes that experience intermittent connectivity or frequent restarts.
The behavior is controlled by Provide.DHT.ResumeEnabled (default: true). Set it to false if you don't want to keep the persisted provider state from a previous run.
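To opt out of resumption (for example on throwaway test nodes), disable it and restart the daemon:
ipfs config --json Provide.DHT.ResumeEnabled false # start each run with a fresh provide queue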
The Sweep provider system now exposes detailed statistics through ipfs provide stat, helping you monitor provider health and troubleshoot issues.
Run ipfs provide stat for a quick summary, or use --all to see complete metrics including connectivity status, queue sizes, reprovide schedules, network statistics, operation rates, and worker utilization. For real-time monitoring, use watch ipfs provide stat --all --compact to observe changes in a 2-column layout. Individual sections can be displayed with flags like --network, --operations, or --workers.
For Dual DHT configurations, use --lan to view LAN DHT statistics instead of the default WAN DHT stats.
For more information, run ipfs provide stat --help or see the Provide Stats documentation, including Capacity Planning.
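The invocations mentioned above, collected for quick reference:
ipfs provide stat # quick summary
ipfs provide stat --all # complete metrics (connectivity, queues, schedules, workers)
ipfs provide stat --lan # LAN DHT stats on Dual DHT configurations
watch ipfs provide stat --all --compact # real-time monitoring in a 2-column layout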
[!NOTE] Legacy provider (when Provide.DHT.SweepEnabled=false) shows basic statistics without flag support.
Kubo now monitors DHT reprovide operations when Provide.DHT.SweepEnabled=true and alerts you if your node is falling behind on reprovides.
When the reprovide queue consistently grows and all periodic workers are busy, a warning displays with:
- suggestions to increase Provide.DHT.MaxWorkers or Provide.DHT.DedicatedPeriodicWorkers
- a pointer to watch ipfs provide stat --all --compact for real-time monitoring

The alert polls every 15 minutes (to avoid alert fatigue while catching persistent issues) and only triggers after sustained growth across multiple intervals. The legacy provider is unaffected by this change.
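If the warning persists, one option is to raise the worker limits named above; the numbers below are illustrative only, and changes take effect after a daemon restart:
ipfs config --json Provide.DHT.MaxWorkers 32 # example value, tune for your hardware
ipfs config --json Provide.DHT.DedicatedPeriodicWorkers 8 # example value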
The Amino DHT Sweep provider metric has been renamed from total_provide_count_total to provider_provides_total to follow OpenTelemetry naming conventions and maintain consistency with other kad-dht metrics (which use dot notation like rpc.inbound.messages, rpc.outbound.requests, etc.).
Migration: If you have Prometheus queries, dashboards, or alerts monitoring the old total_provide_count_total metric, update them to use provider_provides_total instead. This affects all nodes using sweep mode, which is now the default in v0.39 (previously opt-in experimental in v0.38).
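One way to confirm the new metric name on a running node, assuming the default API address and Kubo's Prometheus metrics endpoint:
curl -s http://127.0.0.1:5001/debug/metrics/prometheus | grep provider_provides_total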
Kubo now automatically recovers UPnP port mappings when routers restart or become temporarily unavailable, fixing a critical connectivity issue that affected self-hosted nodes behind NAT.
Previous behavior: When a UPnP-enabled router restarted, Kubo would lose its port mapping and fail to re-establish it automatically. Nodes would become unreachable to the network until the daemon was manually restarted, forcing reliance on relay connections which degraded performance.
New behavior: The upgraded go-libp2p (v0.44.0) includes Shipyard's fix for self-healing NAT mappings that automatically rediscover and re-establish port forwarding after router events. Nodes now maintain public connectivity without manual intervention.
[!NOTE] If your node runs behind a router and you haven't manually configured port forwarding, make sure Swarm.DisableNatPortMap=false so UPnP can automatically handle port mapping (this is the default).
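You can confirm the setting with ipfs config (on a default configuration it prints false):
ipfs config Swarm.DisableNatPortMap # false means UPnP port mapping stays enabled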
This significantly improves reliability for desktop and self-hosted IPFS nodes using UPnP for NAT traversal.
The go-ipfs name was deprecated in 2022 and renamed to kubo. Starting with this release, the legacy Docker image name has been replaced with a stub that displays an error message directing users to switch to ipfs/kubo.
Docker images: The ipfs/go-ipfs image tags now contain only a stub script that exits with an error, instructing users to update their Docker configurations to use ipfs/kubo instead. This ensures users are aware of the deprecation while allowing existing automation to fail explicitly rather than silently using outdated images.
Distribution binaries: Download Kubo from https://dist.ipfs.tech/kubo/ or https://github.com/ipfs/kubo/releases. The legacy go-ipfs distribution path should no longer be used.
All users should migrate to the kubo name in their scripts and configurations.
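For most setups the migration is a one-line change; the tag below is illustrative:
docker pull ipfs/kubo:latest # instead of docker pull ipfs/go-ipfs
# update image: ipfs/go-ipfs references in compose files and scripts to image: ipfs/kubo the same way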
The new Gateway.MaxRangeRequestFileSize configuration protects against CDN range request limitations that cause bandwidth overcharges on deserialized responses. Some CDNs convert range requests over large files into full file downloads, causing clients requesting small byte ranges to unknowingly download entire multi-gigabyte files.
This only impacts deserialized responses. Clients using verifiable block requests (application/vnd.ipld.raw) are not affected. See the configuration documentation for details.
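A sketch of setting a limit, assuming the option takes a human-readable size value; the exact value format is described in the configuration documentation, so verify it before applying:
ipfs config Gateway.MaxRangeRequestFileSize 4GiB # illustrative value; format is an assumption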
Kubo now provides official linux-riscv64 prebuilt binaries, bringing IPFS to RISC-V open hardware.
As RISC-V single-board computers and embedded systems become more accessible, the distributed web is now supported on open hardware architectures - a natural pairing of open technologies.
Download from https://dist.ipfs.tech/kubo/ or https://github.com/ipfs/kubo/releases and look for the linux-riscv64 archive.
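An example download for this release; the filename assumes the naming pattern used for other architectures on dist.ipfs.tech, so verify the exact name on the download page:
wget https://dist.ipfs.tech/kubo/v0.39.0/kubo_v0.39.0_linux-riscv64.tar.gz
tar -xzf kubo_v0.39.0_linux-riscv64.tar.gz
cd kubo && sudo bash install.sh # installer script included in the archive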
Important dependency updates:
- go-libp2p to v0.45.0 (incl. v0.44.0) with self-healing UPnP port mappings and go-log/slog interop fixes
- quic-go to v0.55.0
- go-log to v2.9.0 with slog integration for go-libp2p
- go-ds-pebble to v0.5.7 (includes pebble v2.1.2)
- boxo to v0.35.2 (includes boxo v0.35.1)
- ipfs-webui to v4.10.0
- go-libp2p-kad-dht to v0.36.0

Other changes:
- dag import (#11058) (ipfs/kubo#11058)
- ipfs provide stat (#11019) (ipfs/kubo#11019)
- --flush=false (#10985) (ipfs/kubo#10985)
- SweepingProvider.wg (#1200) (libp2p/go-libp2p-kad-dht#1200)
- RegionsFromPeers may return multiple regions (#1185) (libp2p/go-libp2p-kad-dht#1185)

Contributors:

| Contributor | Commits | Lines ± | Files Changed |
|-------------|---------|---------|---------------|
| @guillaumemichel | 41 | +9906/-1383 | 170 |
| @lidel | 30 | +6652/-694 | 97 |
| @sukunrt | 9 | +1618/-1524 | 39 |
| @MarcoPolo | 17 | +1665/-1452 | 160 |
| @gammazero | 23 | +514/-53 | 29 |
| @Prabhat1308 | 1 | +197/-67 | 4 |
| @peterargue | 3 | +82/-25 | 5 |
| @cargoedit | 1 | +35/-72 | 14 |
| @hsanjuan | 2 | +66/-29 | 5 |
| @shoriwe | 1 | +68/-21 | 3 |
| @dennis-tra | 2 | +27/-2 | 2 |
| @Lil-Duckling-22 | 1 | +4/-1 | 1 |
| @crStiv | 1 | +1/-3 | 1 |
| @cpeliciari | 1 | +3/-0 | 1 |
| @rvagg | 1 | +1/-1 | 1 |
| @p-shahi | 1 | +1/-1 | 1 |
| @lbarrettanderson | 1 | +1/-1 | 1 |
| @filipremb | 1 | +1/-1 | 1 |
| @marten-seemann | 1 | +0/-1 | 1 |