If your Orange Pi 5 (or any RK3588 board running a Rockchip BSP kernel) takes roughly two minutes to boot, with no console output and no obvious failures, this post is for you. The fix is a one-line cmdline change. Skip to the bottom if you just want it.
The symptom
Boot would hang for exactly 120 seconds after U-Boot’s Starting kernel ... message, with no kernel framebuffer, no serial output, no network, and no indication of life. After the 120 seconds elapsed, the system would come up normally and run perfectly. Once running, everything was fine — Kodi with hardware decode, panthor for the GPU, smooth media playback. Just slow to boot.
The usual forum advice for symptoms like this — “bad SD card,” “underpowered USB-C supply,” “try a different power adapter” — is wrong here. I went through multiple SD cards (Samsung Pro Endurance, SanDisk Extreme), a 100W GaN USB-C supply, and an official Orange Pi power brick. Same 120 seconds every time, deterministic to the second. That kind of clean, repeatable timing is never a hardware fault. Hardware faults are messy.
What I was running
- Orange Pi 5 (RK3588S, 8GB)
- Armbian Debian Trixie minimal
- Vendor BSP kernel (the standard Armbian “current” branch for this board)
- libmali Valhall blob for GPU acceleration
- ffmpeg-rockchip patched build for VPU video decode
- Kodi as the media player
The configuration is otherwise stable and performant. Once booted, it is genuinely the right software stack for an RK3588 media player in 2026: blob for fast 2D, hardware video decode through MPP, no panthor instability. The boot delay was the only fly in the ointment.
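A quick sanity check that the patched ffmpeg actually exposes the hardware decoders is to list them. This assumes the usual ffmpeg-rockchip naming, where MPP-backed decoders carry an rkmpp suffix:

ffmpeg -hide_banner -decoders | grep rkmpp   # expect entries like h264_rkmpp and hevc_rkmpp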
Diagnosis
With no console output, the only way to see what was happening was to enable the kernel’s initcall_debug and ignore_loglevel options on the cmdline. This makes the kernel print a line for every initcall it runs, with timing information, so you can see exactly where the time is going.
In /boot/armbianEnv.txt:
extraargs=cma=256M earlycon initcall_debug ignore_loglevel
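After editing the file, reboot and confirm the options actually reached the kernel before trusting anything the log tells you:

cat /proc/cmdline   # should now include initcall_debug and ignore_loglevel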
I wired up a serial console at 1.5 Mbaud (the RK3588 debug UART runs at 1500000 baud, not the usual 115200, which is easy to get wrong) and captured the boot log. Three initcalls were eating almost all of the time:
[19.687] calling init_kprobe_trace+0x0/0x1a4 @ 1
[68.591] initcall init_kprobe_trace+0x0/0x1a4 returned 0 after 48902620 usecs
[70.493] calling jent_mod_init+0x0/0x4c @ 1
[72.708] initcall jent_mod_init+0x0/0x4c returned 0 after 2215012 usecs
[72.713] calling crypto_kdf108_init+0x0/0x158 @ 1
[113.376] initcall crypto_kdf108_init+0x0/0x158 returned 0 after 40664383 usecs
That’s 49 + 2 + 41 = ~92 seconds of stalls during initcall processing, plus ~30 seconds of normal kernel startup overhead, which adds up to roughly the 120 seconds I was seeing.
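For reference, the capture setup and a quick way to rank initcalls by duration, assuming a USB-UART adapter at /dev/ttyUSB0 and a reasonably recent picocom (which can log the session itself):

picocom -b 1500000 --logfile boot.log /dev/ttyUSB0
# afterwards, sort initcalls by how long they took
grep 'returned 0 after' boot.log | sed 's/.*initcall \([^ ]*\) returned 0 after \([0-9]*\) usecs.*/\2 \1/' | sort -rn | head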
False leads
My first guesses were wrong. Worth noting them so you don’t waste time on the same paths:
Entropy starvation. The presence of jent_mod_init (Jitter Entropy RNG seeding) and the crypto-related stall names made this look like the kernel was waiting for entropy. Adding random.trust_cpu=1 random.trust_bootloader=1 to the cmdline did nothing. The stalls were unchanged.
Crypto self-tests. The crypto_kdf108_init call prints “alg: self-tests for CTR-KDF (hmac(sha256)) passed” right before completing, which made it look like algorithm self-tests were the slow part. Adding cryptomgr.notests=1 (verified honored via /sys/module/cryptomgr/parameters/notests showing Y) also did nothing. The stalls were unchanged.
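The verification step is worth copying whenever you rule out a cmdline knob:

cat /sys/module/cryptomgr/parameters/notests   # Y confirms the self-tests really are disabled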
Bad hardware. Different SD cards, different power supplies, different USB-C cables. No change.
What it actually is
The breakthrough was using initcall_blacklist to skip the slow calls entirely:
extraargs=cma=256M initcall_blacklist=init_kprobe_trace,crypto_kdf108_init nokprobes
Boot got faster, but the stall didn’t disappear; it moved to a different initcall, init_blk_tracer, now taking ~48 seconds. Embedded in that stall in the boot log was this:
[21.1] calling init_blk_tracer
[62.7] Freeing initrd memory: 15424K <-- happens during the stall
[69.5] init_blk_tracer returned after 48439808 usecs
That’s the smoking gun. The kernel is freeing initrd memory while init_blk_tracer is “running.” That means init_blk_tracer isn’t doing 48 seconds of work — it’s blocked waiting on something, and other kernel work proceeds in parallel during the wait.
The pattern across all the stalls is now obvious: every slow initcall is in the kernel tracing subsystem. init_kprobe_trace, init_blk_tracer, and the crypto self-test path that registers trace events. They all hit the same synchronization wall — almost certainly synchronize_rcu_tasks() waiting for some kthread or workqueue in the BSP kernel that takes ~50 seconds to quiesce after boot.
This is a Rockchip BSP kernel issue. Something in their downstream additions holds up RCU-tasks grace periods early in boot, and any initcall that needs to synchronize with the trace subsystem stalls until it clears. The first such initcall pays the full cost; subsequent ones are fast because the grace period has already elapsed.
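If you’re not sure whether your image runs a BSP kernel, the version string usually gives it away; Rockchip BSP branches for RK3588 are typically 5.10.x or 6.1.x vendor kernels, while mainline images report a current stock version:

uname -r   # a string like 5.10.160-legacy-rk35xx points at a BSP/vendor kernel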
The fix
Add this to your extraargs in /boot/armbianEnv.txt:
extraargs=cma=256M cryptomgr.notests=1 nokprobes initcall_blacklist=init_kprobe_trace,crypto_kdf108_init,init_blk_tracer trace_buf_size=1
Reboot. The 120-second delay is gone. Boot is now in the normal range — under 30 seconds total to login prompt.
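To confirm the stall is gone rather than merely moved, time the boot. Armbian runs systemd, so systemd-analyze is available:

systemd-analyze                # kernel vs userspace split for the last boot
systemd-analyze blame | head   # slowest userspace units, in case something else is now the bottleneck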
The pieces:
- initcall_blacklist=init_kprobe_trace,crypto_kdf108_init,init_blk_tracer skips the three slow initcalls entirely
- nokprobes disables the kprobes infrastructure, since we’re skipping its init
- cryptomgr.notests=1 disables crypto algorithm self-tests; harmless belt-and-suspenders
- trace_buf_size=1 minimizes the trace ring buffer
You lose:
- kprobe tracing (kernel debugging tool, not used in production)
- blktrace block I/O tracing (debugging tool)
- KDF self-tests (FIPS validation, irrelevant outside FIPS environments)
For a media player, server, router, or any production use case that isn’t kernel development on this exact hardware, none of these matter.
Why nobody talks about this
I burned a lot of time searching for similar symptoms and finding nothing useful. There are reasons for that. RK3588 is popular among SBC enthusiasts but a tiny slice of the Linux user base in absolute terms. Within that group, the intersection of “uses BSP kernel” + “actually reboots regularly” + “noticed the delay exists” + “kept digging past the lazy bad-hardware replies” is approximately nobody.
Also, the hang is invisible without instrumentation. Without initcall_debug on the kernel cmdline and a serial console wired up at 1.5Mbaud, you just see “boot is slow” and assume it’s normal SBC sluggishness. Most users either don’t reboot enough to care, or shrug and move on. So the fix never gets written down.
If you found this post by searching for your own boot stall, you’re welcome. If you have a different RK3588 board (Rock 5B, NanoPi R6S, Radxa CM5, etc.) running a Rockchip BSP kernel, the fix should apply equally — the bug is in the BSP kernel itself, not anything Orange Pi-specific.
Postscript
Snapshot your working SD card after applying the fix. dd if=/dev/sdX of=opi5-working.img bs=4M status=progress then gzip it. On a fiddly system like this, a known-good backup image is worth more than the board itself. When something inevitably regresses six months from now, you can roll back in two minutes instead of rebuilding the entire stack from scratch.
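For reference, the compressed round trip; /dev/sdX is the card as seen from another machine, so triple-check the device name before pointing dd at it:

# back up, compressing on the fly
sudo dd if=/dev/sdX bs=4M status=progress | gzip > opi5-working.img.gz
# restore
gunzip -c opi5-working.img.gz | sudo dd of=/dev/sdX bs=4M status=progress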