Category Archives: raspberry pi

Boot Btrfs root partition with RAID1 on Raspberry Pi

This article stems from two different sources:

https://feldspaten.org/2019/07/05/raspbian-and-btrfs/

and

https://www.raspberrypi.org/forums/viewtopic.php?f=29&t=319427

and I decided to summarize the process here, after extensively testing it on a Raspberry Pi 4 with 8GB of RAM running a 64-bit OS. I might extend this guide to include setting up a RAID1 with a second disk (spoiler: I already did, down at the end of this page).

WARNING
I suggest testing this procedure on a pendrive flashed with a fresh Raspberry Pi OS image, WITHOUT running apt update && apt full-upgrade first, for two reasons:

  1. nothing bad will happen if things go wrong, since it’s not a production environment
  2. you’ll be able to test the kernel update automation AFTER you complete all the steps, by running apt update && apt full-upgrade

Install the requirements and edit the initramfs modules list:

sudo apt install initramfs-tools btrfs-tools
# use btrfs-progs if btrfs-tools has "no candidate"
sudo nano /etc/initramfs-tools/modules

Add the following lines to the file and save:

btrfs
xor
zlib_deflate
raid6_pq

Create the initramfs in the /boot partition and edit config.txt:

sudo mkinitramfs -o /boot/initramfs-btrfs.gz
sudo nano /boot/config.txt

adding the initramfs line right after the stock header comments near the top (the comment lines below are already in the file), then save:

# For more options and informations see
# http://rpf.io/configtxt
# Some settings may impact device functionality. See link above for details

initramfs initramfs-btrfs.gz
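
Before rebooting, you can verify that the image actually contains the btrfs module (lsinitramfs ships with initramfs-tools):

lsinitramfs /boot/initramfs-btrfs.gz | grep btrfs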

For good measure, check that the system still reboots successfully. Then sudo poweroff and attach the disk (SD, SSD, pendrive) to a Linux PC (or reboot the same Raspberry Pi from different media you prepared, like an SD card or a pendrive, leaving the disk attached). Let’s say the device is recognized as /dev/sdb:

sudo fsck.ext4 /dev/sdb2
sudo btrfs-convert /dev/sdb2                 
sudo mkdir /mnt/rootfs
sudo mount /dev/sdb2 /mnt/rootfs

We’ve just checked that the existing ext4 rootfs is clean, converted it to btrfs (this takes quite some time, since btrfs-convert also creates an ext2 image that makes rollback possible), then created a mountpoint and mounted the freshly converted btrfs root partition on it.
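
As an aside, should anything look wrong at this stage, btrfs-convert can undo the conversion as long as that saved ext2 image hasn’t been deleted; a sketch, assuming the same device as above:

sudo btrfs-convert -r /dev/sdb2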

We now need to update the fstab in this partition so the root filesystem will mount correctly at boot:

sudo nano /mnt/rootfs/etc/fstab

Correct the root line (/) by replacing ext4 with btrfs, and make sure it ends with two 0s to disable fsck on it (btrfs has its own built-in consistency checks, and fsck might return unwarranted errors).

Also, correct /boot/cmdline.txt in the boot partition by replacing, again, rootfstype=ext4 with rootfstype=btrfs, and fsck.repair=yes with fsck.repair=no.
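
As a sketch of the end result, assuming the stock Raspberry Pi OS entries that mount by PARTUUID (yours will differ), the fstab root line becomes something like:

PARTUUID=xxxxxxxx-02  /  btrfs  defaults,noatime  0  0

and cmdline.txt (a single line; only the relevant parameters are shown here) something like:

root=PARTUUID=xxxxxxxx-02 rootfstype=btrfs fsck.repair=no rootwait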

At this point, placing the drive back on the Pi (or removing the other booting media and leaving just the disk) and booting will land you in a btrfs rootfs.

BEWARE, navigators: this was my major gripe. Without anything else added to the procedure, a kernel update will leave you with an unbootable system at the next reboot, because the initramfs won’t be recreated.

User dickon on the Raspberry Pi forums was a great help with the following procedure.

You need a script that automatically recreates the initramfs in /boot after a kernel update, so here it is.

sudo nano /etc/kernel/postinst.d/btrfs-update

Insert this code in the script (this works with a 64-bit OS; make sure kernel8.img is the correct filename by checking against the existing file in /boot, and change it accordingly if not):

#!/bin/bash
# Kernel postinst hooks receive the kernel version in $1 and the
# kernel image path in $2; only act on the 64-bit kernel image.
if [ "x$2" != "x/boot/kernel8.img" ]; then
	exit 0
fi

echo ============ UPDATE INITRAMFS ==============
# Recreate the initramfs so the btrfs modules are available at next boot
mkinitramfs -o /boot/initramfs-btrfs.gz
echo ============ UPDATE COMPLETED ==============

then make sure it is executable and has the same permissions as the other files in the same folder:

sudo chmod 755 /etc/kernel/postinst.d/btrfs-update

At this point, the system will (should) update the relevant initramfs right after each kernel update, freeing you from the hassle of remembering to do it by hand, or risking an unbootable system after the next reboot.
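
You can also exercise the hook by hand without waiting for a real kernel update; kernel postinst scripts receive the version and the image path as arguments, and this one only looks at the second, so a placeholder version is fine:

sudo /etc/kernel/postinst.d/btrfs-update "$(uname -r)" /boot/kernel8.img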

It is a good idea to disable swap, since btrfs won’t be able to host a swap file anyway:

sudo systemctl disable dphys-swapfile.service

or alternatively, for a more aggressive approach that will remove the swapfile capabilities entirely:

sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo update-rc.d dphys-swapfile remove
sudo apt purge dphys-swapfile

In case you have a spare SSD lying around (who doesn’t?) and want to leverage both redundancy and concurrent read speeds, you can easily use btrfs’s innate RAID capabilities for this.

You should mirror the partitioning of the system disk on the second SSD (unless there is a large disparity in size); one way to do that is sketched right below. When that’s done, as detailed here, the remaining steps follow.
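
A minimal sketch of the partition mirroring with sfdisk, assuming the system disk is /dev/sda and the new SSD is /dev/sdb (double-check with lsblk, since this overwrites the target’s partition table):

sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb

Keep in mind this also copies the disk identifier; since Raspberry Pi OS mounts partitions by PARTUUID, you may want to assign the new disk a fresh identifier afterwards (fdisk’s expert menu can do that).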

sudo btrfs filesystem show

will confirm that the partition assigned to the rootfs mountpoint is the one you just converted a while ago. Let’s assume the partition you want to mirror the rootfs to is /dev/sdb1:

sudo btrfs device add -f /dev/sdb1 /

will assign it to the rootfs mountpoint, which you can confirm by re-issuing:

sudo btrfs filesystem show

that will show the second drive together with the first; you will then instruct btrfs to proceed with the actual mirroring of the data:

sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /

which might take some time depending on the amount of used space on the partition.
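
From another terminal you can keep an eye on the conversion, and once it’s done verify that both data and metadata report the RAID1 profile:

sudo btrfs balance status /
sudo btrfs filesystem df /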

From now on, you will be able to add even more drives, or replace failing ones (a quick example right below), and there are plenty of resources online that you can search for without me having to detail them all here.
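
As a quick taste of the drive replacement just mentioned, assuming the failing device has devid 2 (shown by btrfs filesystem show) and /dev/sdc1 is its replacement:

sudo btrfs replace start 2 /dev/sdc1 /
sudo btrfs replace status /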

Clone Raspberry Pi disk TO newer/larger disk/SD/SSD

I was switching from a 120GB SSD on my Raspberry Pi 4, to a 240GB one.

Found this and copied the command from the opening question:

sudo dd if=/dev/originaldisk of=/dev/destinationdisk bs=4096 conv=sync,noerror

where I used /dev/disk/by-id/... handles to make sure I was pointing at the correct SSDs (otherwise, had I swapped them, a huge mess would have ensued).

The resulting SSD was a perfect copy down to the partition ID, so the cmdline.txt file under /boot/ (mounted from a FAT partition on the SD) started the system off the new disk as if nothing had happened.

I just tested it for the inverse situation.

On a Raspberry Pi 3, the running disk was a 240GB SSD, but it was pretty much wasted space since it was hosting a root partition of less than 4GB, so I wanted to switch it to the 120GB SSD that I took out of the Raspi4.

I ran the above command and allowed myself the luxury of just Ctrl-C’ing out of it after the first 10GB had been copied over, because only about 4GB of the disk were actually in use.

Guess what: I turned off the system, put the second SSD in place of the first, and the system booted perfectly.
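
If you’d rather not gamble on the Ctrl-C timing, dd can be told exactly how much to copy; a sketch with the same placeholder device names as above, copying the first 10GiB (2560 blocks of 4MiB):

sudo dd if=/dev/originaldisk of=/dev/destinationdisk bs=4M count=2560 conv=sync,noerror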

So, how do you check the progress of a running dd command, you might ask?

Well, with the progress tool, naturally!

sudo apt install progress

first, and then, right after dd has started,

sudo progress -wm

This will clear the screen and show the current status of the copy, updating while the copy is still running; since dd occupies your terminal, use of byobu (go search for that) or a second terminal is highly recommended.

The sudo is there because dd was started as root, so progress won’t be able to access its status unless run with the same privileges.
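
As an alternative that needs no extra package, recent GNU dd (coreutils 8.24 and later) can report progress by itself:

sudo dd if=/dev/originaldisk of=/dev/destinationdisk bs=4M status=progress conv=sync,noerror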

Disclaimer: using dd to clone a running disk might create inconsistencies if other running processes change the disk contents while the copy is in progress, leaving the resulting copy with part “old” and part “new” content. Usually this doesn’t matter, or might not happen at all if all the other processes only touch tmpfs partitions or another disk, but in the end only you know what your system does, so tread with caution.

7zip compression test of Raspberry backup image

I regularly back up the Raspbian system of my several Raspberry Pis, for reasons that anyone owning and using a Raspberry Pi knows.

With time you always end up wanting more, and I want to upload backups to the cloud for an additional layer of safety; cloud space, though, is either free and very limited, or quite costly to maintain, hence the smaller the files you upload, the more practical sending them online becomes.

With this purpose in mind, I wanted to try several compression options, using the same source file (a 3.7GB image file produced by my latest version of RaspiBackup, the “bleeding edge” which right now is in its own branch), changing some parameters from the default “ultra settings” (the ones you can find in the 7z manpage).

All tests were done on a non-overclocked Raspberry Pi 4 with 4GB of RAM.

The first test uses the “ultra settings” command line found in the 7z manpage:

time 7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=32m -ms=on archive.7z source.img

7-Zip [32] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,32 bits,4 CPUs LE)

Scanning the drive:
1 file, 3981165056 bytes (3797 MiB)

Creating archive: archive.7z

Items to compress: 1


Files read from disk: 1
Archive size: 695921344 bytes (664 MiB)
Everything is Ok

real    50m33.638s
user    73m16.589s
sys     0m44.505s

The second test builds on this and increases the dictionary size to 128MB (which is, alas, the maximum allowed on 32-bit systems as per the 7-Zip documentation; any value above this will throw an error on the Raspberry Pi):

time 7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=128m -ms=on archive.7z source.img

7-Zip [32] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,32 bits,4 CPUs LE)

Scanning the drive:
1 file, 3981165056 bytes (3797 MiB)

Creating archive: archive.7z

Items to compress: 1


Files read from disk: 1
Archive size: 625572636 bytes (597 MiB)
Everything is Ok

real    59m54.703s
user    80m50.340s
sys     0m55.886s

The third test puts another variable in the equation by adding the -mmc=10000 parameter, which tells the algorithm to run ten thousand match-finder cycles in the dictionary, increasing the chances of better compression compared to the default number of cycles, which in this case should be less than 100.

time 7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=128m -mmc=10000 -ms=on archive.7z source.img

7-Zip [32] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,32 bits,4 CPUs LE)

Scanning the drive:
1 file, 3981165056 bytes (3797 MiB)

Creating archive: archive.7z

Items to compress: 1


Files read from disk: 1
Archive size: 625183257 bytes (597 MiB)
Everything is Ok

real    77m53.377s
user    99m48.431s
sys     0m39.215s

I then tried one last command line that I found on the Stack Exchange network:

time 7z a -t7z -mx=9 -mfb=32 -ms -md=31 -myx=9 -mtm=- -mmt -mmtf -md=128m -mmf=bt3 -mpb=0 -mlc=0 archive.7z source.img

I cannot find that answer anymore, but it boasted the best compression rate ever (yeah, I imagine: everything was set to its potential maximum). I had to tone this command line down, because it implied increasing the dictionary size to the maximum possible (which is 1536MB, not feasible on 32-bit systems, which are limited to 128MB) and also the fast bytes to their maximum of 273.

I always got an error though:

ERROR: Can't allocate required memory!

even after gradually decreasing -mfb (fast bytes) down to 32, and even after removing the fast bytes parameter entirely. At that point I simply desisted.

So, onto the

Conclusions:

You should definitely pump up the dictionary size to its limit of 128MB, because it yields a decent compression gain (down to 15.7% of the original size from 17.5%, so roughly 10% smaller). According to this post, the time increase must be measured as “user+sys”, so it’s 74 minutes of CPU time for the first example, 81.75 minutes for the second, and 100.5 minutes for the third. The difference in CPU time between the first and second is also in the ballpark of 10%, so that additional time gets practically converted into better compression; I’ll take it.
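
For the record, the ratios come straight from the archive sizes reported above:

695921344 / 3981165056 ≈ 0.175 → 17.5% of the original size
625572636 / 3981165056 ≈ 0.157 → 15.7% of the original size
0.157 / 0.175 ≈ 0.90 → roughly 10% smaller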

Interestingly, increasing the matching cycles brought practically NO increase in compression, at the expense of a whopping 25% increase in processing time (strictly speaking it did bring one when I compared the exact file sizes, but it was negligible: just a few hundred kilobytes less).

Overall, this is a great result, as the total free space in that image should be around 300MB, so the rest is all real data compression.

Decode Raspberry vcgencmd get_throttled response with a PHP script

If you search for a way to make sense of the output of the command

vcgencmd get_throttled

which might return something like:

throttled=0x50005

you will find many forum posts that basically tell you it is a bit field and that you have to decode it following this table:

Bit  Meaning
0    Under-voltage detected
1    Arm frequency capped
2    Currently throttled
3    Soft temperature limit active
16   Under-voltage has occurred
17   Arm frequency capped has occurred
18   Throttling has occurred
19   Soft temperature limit has occurred

(from this GitHub page)
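
To make it concrete: 0x50005 in binary is 0101 0000 0000 0000 0101, so (reading bits right to left, starting from bit 0) bits 0, 2, 16 and 18 are set, which per the table means under-voltage detected, currently throttled, under-voltage has occurred, and throttling has occurred.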

yyyyeaaahhhh right.

Finally, I found a comprehensible explanation here and decided to write a script around it; since I know PHP, that’s what I used.

So, from your home folder,

nano throttled.php

paste this inside:

<?php
// Bit positions and their meanings, from the table above
$codes=array(
0=>"Under-voltage detected",
1=>"Arm frequency capped",
2=>"Currently throttled",
3=>"Soft temperature limit active",
16=>"Under-voltage has occurred",
17=>"Arm frequency capped has occurred",
18=>"Throttling has occurred",
19=>"Soft temperature limit has occurred");

// Run the command and isolate the hex digits after "0x"
$output=exec("vcgencmd get_throttled");
$output=explode("0x",$output);

if ($output[1]=="0") {
    echo "all fine, lucky you\n";
    exit();
}

// Convert each hex digit into a zero-padded 4-bit binary string
$output=str_split($output[1]);
$bincode="";
foreach ($output as $hex) {
    $bincode.=str_pad(base_convert($hex,16,2),4,"0",STR_PAD_LEFT);
}
// Reverse the bit string so that the array index equals the bit position
$bincode=array_reverse(str_split($bincode));
foreach ($bincode as $k=>$v) {
    // "0" is falsy in PHP; only print bits that are set and known
    if ($v && isset($codes[$k])) {
        echo $codes[$k]."\n";
    }
}

And then run:
php throttled.php
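
For the throttled=0x50005 example above, the output should list the four conditions whose bits are set:

Under-voltage detected
Currently throttled
Under-voltage has occurred
Throttling has occurred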