Category Archives: howto’s

HOCl Hypochlorous Acid using salt water electrolysis

 

I have published several videos on YouTube about HOCl production: the older one, the newer one, and the Italian one, even older and uglier. Yet another might be in the works, since the vat model I'm currently using is different from the one in the latest video, and I plan to build another, improved version of that.

Anyway, I'm creating this post because a user on YouTube asked several very interesting questions that deserve to be answered here as well.

chrastitko asked:

I am really confused after many days of researching this topic. I really doubt it is possible to get HOCl by simple electrolysis of NaCl, but on the other hand, do the US companies lie about their commercial products? Or is it really so important to keep very specific conditions (size of electrodes, voltage, amperage, etc.)? I have made many attempts with different concentrations of salt and vinegar and durations of electrolysis. I don't have big enough electrodes (I've now ordered some from AliExpress similar to yours) and I don't have a ppm tester, so I can't evaluate the final product. When I used a small amount of salt and vinegar and let the electrolysis run for almost 2 hours (because of a weak power supply and small electrodes in 1 liter of solution), it didn't smell as intensely of chlorine as before, when I used much more salt. But when I immersed a cloth in this solution, it bleached it a little. So I am not sure whether it is HOCl or not. Should HOCl smell as strong as NaOCl or not? Is increasing the electrolysis time the right technique when you have small electrodes? What happens to the solution if you let it run too long?

To which I replied:

You pose good questions. I am not the guy for you, as I can only answer from experience, not knowledge. My solution has not shown any bleaching of fabrics; granted, I have never submerged a cloth in it, but I spray many dressed people all day long when they access a certain area, I've been doing it for a year, and no one has complained yet. The best way I can describe the smell of my solution, after it's been for a few hours inside the reservoir of the pressure sprayer I use, is "swimming pool changing room", if you know what I mean. The concentration of HOCl in the final solution is a result of: the pH of the solution (the optimum is 5 if I remember correctly, but 6 is pretty much the same), the amount of electrical current (which is itself proportional to voltage, electrode area, electrode distance, and the amount of electrolytes in the solution, that is, salt), and the amount of salt (the more salt you place in there, the faster the reaction, but also the larger the salt residue left behind when the solution dries up). By the way, regarding the evidence that this method really works, there are plenty of papers detailing how electrolysis is the simplest way to obtain HOCl, albeit not the most efficient. Running the electrolysis for too long will warm up the solution through the Joule effect (possibly degrading the HOCl, which is unstable in itself), and increase the amount of corrosion the anode goes through.
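For reference, the standard textbook chemistry behind brine electrolysis (my addition, not from the videos): chlorine is generated at the anode and then disproportionates in water to hypochlorous acid:

```latex
% anode: chloride is oxidized to chlorine gas
2\,\mathrm{Cl^-} \longrightarrow \mathrm{Cl_2} + 2\,e^-
% cathode: water is reduced to hydrogen and hydroxide
2\,\mathrm{H_2O} + 2\,e^- \longrightarrow \mathrm{H_2} + 2\,\mathrm{OH^-}
% in solution: chlorine disproportionates to hypochlorous acid
\mathrm{Cl_2} + \mathrm{H_2O} \rightleftharpoons \mathrm{HOCl} + \mathrm{H^+} + \mathrm{Cl^-}
```

The HOCl/OCl⁻ balance then depends on pH (the pKa of HOCl is about 7.5), which is why the pH figure of 5-6 mentioned above matters: below neutral, the equilibrium favors HOCl.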

 

Automatically converting FLAC to MP3

I'm writing down this guide for personal reference; I took it from a simple, effective post, and for general usefulness I'm translating it and keeping it on my blog as a "backup".

FLAC files are lossless, i.e. faithful to the quality of the original CD. But the human ear isn't perfect, so unless you are an audiophile endowed with extrasensory powers, converting to MP3 will give you the very same perceived quality at a fraction of the disk space.

To do this, from Linux, or from Ubuntu installed on Windows 10 via WSL (here's a guide), install the LAME and FLAC codecs with:

sudo apt install lame flac


Then, after moving into the folder containing the FLAC files you want to convert (on Ubuntu under WSL the path is /mnt/<drive letter>/path/to/the/folder), run the command:

for f in *.flac; do flac -cd "$f" | lame -b 320 - "${f%.*}".mp3; done


which, for every FLAC file present, creates an MP3 version at a constant bitrate of 320 kbps. You can change 320 to a lower value to reduce the bitrate, or replace the whole -b 320 block with -v for variable bitrate, but if you do, you're a boor and don't deserve to listen to music.
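The `${f%.*}` part is plain POSIX parameter expansion: it strips the shortest suffix starting at the last dot (here, the `.flac` extension) before `.mp3` is appended. A quick sanity check:

```shell
# parameter expansion strips the extension, no external tools needed
f="Some Song.flac"
echo "${f%.*}.mp3"
```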

Just kidding.

Tips for decoupaging comic pages onto a table

This article is as much for public benefit as for personal reference; after the struggles I faced during a little project still in progress, I figured I'd better put down in writing what I've learned.

In more or less random order:

  • the surface should be as clean as possible; if it's freshly sanded wood, make sure the dust has been removed as thoroughly as possible
  • mix vinyl glue (Vinavil) and water 50/50 and spread it generously on the surface: it must be possible to "squeeze" the glue out from under the paper once applied
  • related to the previous point: if the surface is porous or freshly sanded, very "dry" wood, you'll need to lay down a larger amount of base coat
  • use a brush to paint the glue on; if the base is porous, lightly brush the back of the paper sheet to be applied as well
  • lay the sheet in the desired position and use some kind of soft spatula to press it down firmly, squeezing out the glue residue underneath; I successfully used one of those rubber "tongues" meant for mixing cream in bowls and then pouring it into cake tins
  • after the previous step, the excess glue normally seeps through the paper sheet and wets the top surface, but if it doesn't, go over it gently with the brush and a little glue
  • decide on the layout from the start; in any case it's preferable to begin at one point and extend outward from there, rather than applying sheets in separate spots and proceeding irregularly; a grid layout is simpler and more efficient, since it requires the fewest sheets, but a random layout, with sheets at an angle and partially overlapping, while requiring more sheets, usually also looks nicer
  • if parts of a sheet need to be cut away, the result is both aesthetically and technically better if they are torn along the edges instead (the frayed paper fibers bond better)
  • if the edges of some sheets tend to lift, it's best to intervene right away, before drying: lift them until you feel a hint of resistance, apply a little extra glue with the brush underneath, on both the surface and the back of the sheet, then smooth down again thoroughly with the spatula
  • wait for complete drying (dryness and firmness clearly perceptible to the touch) before the subsequent coats of glue, which should contain a higher percentage of vinyl glue than the initial coat
  • [miscellaneous additions as the project goes on]

Titanium backup doesn’t restore on latest Android version

If you found that the latest version of Titanium Backup (now almost one year old, with no recent updates…) can no longer restore your apps' data on Android 10, this solution was found in the Play Store comments, posted by Stranger Stunts on 9/12/20.

Turn off “Verify apps over USB” in Developer Options, then turn off “Play protect” in the Play Store.

I didn't test it thoroughly, but I remember that the last time I had such a problem I found a similar solution in the same fortuitous way, and it worked; back then, though, it didn't occur to me to save it anywhere.

So I’m saving it on my blog for future reference.

Clone raspberry disk TO newer/larger disk/SD/SSD

I was switching from a 120GB SSD on my Raspberry Pi 4, to a 240GB one.

Found this, and copied the command from the opening question:

sudo dd if=/dev/originaldisk of=/dev/destinationdisk bs=4096 conv=sync,noerror

where I used /dev/disk/by-id/... handles to make sure I was pointing at the correct SSDs (otherwise, had I swapped them, a huge mess would have ensued).

The resulting SSD was a perfect copy down to the partition ID, so the cmdline.txt file under /boot/ (mounted from a FAT partition on the SD) started the system off the new disk as if nothing had happened.

I just tested it for the inverse situation.

On a Raspberry Pi 3, the running disk was a 240GB SSD, but it was pretty much wasted space since it was hosting a less than 4GB root partition, so I wanted to switch it to the 120GB SSD that I took out of the Raspi4.

I ran the above command, and allowed myself the luxury of just Ctrl-C'ing out of it after the first 10GB had been copied, because only about 4GB of the disk were actually in use.

Guess what: I turned off the system, put the second SSD in place of the first, and the system booted perfectly.

So, how do you check the progress of a running dd command, you might ask?

Well, with the progress tool, naturally!

sudo apt install progress

first, and then, right after dd has started,

sudo progress -wm

This will clear the screen and show the current status of the copy, updated while the copy is still running, so using byobu (go look it up) is highly recommended.

The sudo is there because dd was started as root, so progress won't be able to access its status unless run with the same privileges.
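As an alternative to the progress tool, recent GNU coreutils versions of dd can report transfer statistics themselves via status=progress. A minimal demo on a throwaway file rather than a real device (path is just an example):

```shell
# write 8 MiB of zeros; dd prints its own progress line as it goes
dd if=/dev/zero of=/tmp/dd_demo.img bs=1M count=8 status=progress
ls -l /tmp/dd_demo.img
```

On a real clone you would simply append status=progress to the command above, keeping the if=/of= device handles unchanged.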

Disclaimer: using dd to clone a running disk might create inconsistencies if other running processes change the disk contents while the copy is in progress, leaving the resulting copy with partly "old" and partly "new" content. Usually this doesn't matter, or may not happen at all if all the other processes only touch tmpfs partitions or another disk, but in the end only you know what your system does, so tread with caution.

Get list of cases in a PHP switch statement

Like the title says: you have a PHP script containing a (supposedly long) switch statement, and you want to programmatically get a list of the strings of all its cases.

My use case: I have a Telegram bot for my own private use, which performs several actions, all different from one another, when receiving user input from a specific account (mine).

The command strings are predetermined, so I have a -long- list of cases like so:

switch(strtolower($text)) {
    case "blah1":
        dosomething1();
        break;
    case "blah2":
        file_put_contents($somefile,2);
        break;
    case "blah3":
        echo file_get_contents($someurl);
        break;
    // [...]
    case "blah12":
        exec('php somescript.php > /tmp/somescript.log 2>&1 &');
        break;
}

I am adding new commands all the time, with the most diverse functions, and I might even forget some neat functions exist, so I wanted to implement a feature where, in reply to a certain command, the bot lists all the other possible commands (a --help function of sorts, if you will).

They cannot simply be put inside an array, to be neatly listed at my pleasure, without a pointless exercise of my patience. Besides, the listing functionality comes second; the primary benefit is semantic appropriateness.

Let's state upfront that there is no built-in function in PHP to get the list of cases in a switch statement, but you can still hack a function together yourself.

The following solution is very ugly, but it works on simple code, and I would use it only if both of the following conditions are met:

  1. You are the only user of the script (my Telegram bot only acts on commands coming from my account)
  2. the whole code in the script file basically revolves around the switch statement and not much else

This scenario perfectly fits my case, so here’s what I did:

preg_match_all('/case \"([a-z0-9\s]+)\"\:/', file_get_contents(__FILE__), $matches);

You then can use:

foreach ($matches[1] as $casestring) {
    //...
}

or rather, as I actually did in the end, I simply returned:

$reply=implode("\n",$matches[1]);
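The same extraction can be sanity-checked from the shell with grep; the file below is a throwaway example, not the actual bot script:

```shell
# build a tiny stand-in for the bot script
cat > /tmp/bot_demo.php <<'EOF'
<?php
switch(strtolower($text)) {
    case "blah1": dosomething1(); break;
    case "blah2": file_put_contents($somefile,2); break;
}
EOF
# list every quoted case label, same idea as the preg_match_all above
grep -oE 'case "[a-z0-9 ]+"' /tmp/bot_demo.php
```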

7zip compression test of Raspberry backup image

I regularly back up the Raspbian system of my several Raspberry Pis, for reasons that anyone owning and using a Raspberry Pi knows.

With time you always end up wanting more, and I want to upload backups to the cloud for that additional layer of extra safety; cloud space, though, is either free and very limited, or quite costly to maintain, hence the smaller the files you upload, the more practical it is to send them online.

With this purpose in mind, I wanted to try several compression options, using the same source file (a 3.7GB image file produced by the latest version of my RaspiBackup, the "bleeding edge", which right now lives in its own branch), while changing some parameters from the default "ultra settings" (the ones you can find in the 7z manpage).

All tests were done on a non overclocked Raspberry Pi 4 with 4GB of RAM.

The first test uses the "ultra settings" command line found in the 7z manpage:

time 7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=32m -ms=on archive.7z source.img

7-Zip [32] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,32 bits,4 CPUs LE)

Scanning the drive:
1 file, 3981165056 bytes (3797 MiB)

Creating archive: archive.7z

Items to compress: 1


Files read from disk: 1
Archive size: 695921344 bytes (664 MiB)
Everything is Ok

real    50m33.638s
user    73m16.589s
sys     0m44.505s

The second test builds on this and increases the dictionary size to 128MB (which is, alas, the maximum allowed on 32-bit systems as per the 7zip documentation; any value above this will throw an error on the Raspberry):

time 7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=128m -ms=on archive.7z source.img

7-Zip [32] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,32 bits,4 CPUs LE)

Scanning the drive:
1 file, 3981165056 bytes (3797 MiB)

Creating archive: archive.7z

Items to compress: 1


Files read from disk: 1
Archive size: 625572636 bytes (597 MiB)
Everything is Ok

real    59m54.703s
user    80m50.340s
sys     0m55.886s

The third test puts another variable in the equation by adding the -mmc=10000 parameter, which tells the algorithm to cycle ten thousand times looking for matches in the dictionary, thereby increasing the chance of better compression; the default number of cycles in this case should be below 100.

time 7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=128m -mmc=10000 -ms=on archive.7z source.img

7-Zip [32] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,32 bits,4 CPUs LE)

Scanning the drive:
1 file, 3981165056 bytes (3797 MiB)

Creating archive: archive.7z

Items to compress: 1


Files read from disk: 1
Archive size: 625183257 bytes (597 MiB)
Everything is Ok

real    77m53.377s
user    99m48.431s
sys     0m39.215s

I then tried one last command line that I found on Stack Exchange network:

time 7z a -t7z -mx=9 -mfb=32 -ms -md=31 -myx=9 -mtm=- -mmt -mmtf -md=128m -mmf=bt3 -mpb=0 -mlc=0 archive.7z source.img

I cannot find that answer anymore, but it boasted the best compression rate ever (yeah, I can imagine: everything was set to its potential maximum). I had to tone this command line down, because it implied raising the dictionary size to the largest possible value (1536MB, which is not feasible on 32-bit systems, limited to 128MB) and the fast bytes to their maximum of 273.

I always got an error though:

ERROR: Can't allocate required memory!

even after gradually decreasing -mfb (fast bytes) down to 32, and even after removing the fast bytes parameter entirely. At that point I simply gave up.

So, onto the

Conclusions:

You should definitely pump the dictionary size up to its 128MB limit, because it yields a decent compression gain (down to 15.7% from 17.5%, so about 10% smaller). According to this post, the time increase must be measured as "user+sys", so it's 74 minutes of CPU time for the first example, 81.75 minutes for the second, and 100.5 minutes for the third. The difference in CPU time between the first and second is still in the ballpark of 10%, so that additional time gets practically converted into better compression; I'll take it.
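Those ratios can be re-derived from the byte counts in the logs above:

```shell
# original image size and the two archive sizes, taken from the 7z output
awk 'BEGIN {
    orig = 3981165056
    printf "ultra preset:  %.1f%%\n", 695921344 / orig * 100
    printf "128MB dict:    %.1f%%\n", 625572636 / orig * 100
}'
```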

Interestingly, increasing the matching cycles brought almost no gain in compression (comparing exact file sizes, the difference was a negligible few hundred kilobytes), at the expense of a whopping 25% increase in processing time.

Overall, this is a great result, since the total free space in that image should be around 300MB, so the rest is all real data compression.

Remove qTranslate from WordPress and keep translations

This website goes BACK in time, hell it does. It started maybe in 2001, with static HTML. Then I played for a while with SHTML, before jumping head-on into PHP and writing my own ugly CMS.

I already had Italian and English articles translated in here, and when I switched to WordPress (oh, joy and pain) I found this nifty plugin called qTranslate that sort of automated translation management.

It looked like a good idea back then, so I installed it and moved all the bilingual articles into it.

Yeah, looked like a good idea back then.

After a while though, as WordPress updates progressed, I noticed I couldn't write new posts anymore because the custom editor changes broke: either that, or I had to add the language tags manually, or hold back the WordPress version so as not to lose editor functionality. NO BUENO!

Until qTranslate stopped working altogether, and sorry guys, it's not maintained anymore, f*ck you!

Luckily qTranslate-X was released, giving some more oxygen to my rare yet constant contribution to this blog.

Then, guess what, even qTranslate-X was discontinued.

Luckily qTranslate-XT came out, and it's on GitHub, so as far as I can see it's actively followed, developed, improved… still, it doesn't cut it for me.

I mean, the developers are doing a GREAT job… can you imagine what a huge hassle it is following the development of a tool this complex, and coordinating the efforts of several people, while trying to keep the code working after major WordPress updates are released?

There must be a lot of people who thank everything that is sacred for qTranslate-XT's existence.

I'm not one of them, since, especially lately, I release articles either in Italian or in English, so there isn't a lot of translating going on.

Every time I searched for methods to remove qTranslate, every strategy involved choosing one language to keep and just trashing the others! As if I hadn't invested a LOT of time translating! Why should I waste all that work?

I used to think doing it myself would take an immense amount of work, so I never tried; until today, when I achieved the objective, and I am now happily composing, in the Gutenberg editor, on a qTranslate-less version of my blog, where every article has been kept, and the URL redirection from "ephestione.it/it/title" has been fixed to redirect to "ephestione.it/it-title".

What’s the strategy? Well, I built the code for my own website (and I am NOT willing to customize it for yours, unless you offer to pay me), so these are the premises:

  1. Not every article is bilingual; on the contrary, most are either in English or in Italian
  2. I obviously want to keep both languages
  3. I don't care if the blogroll shows both Italian and English articles (some being translations of each other) in the same sequence
  4. I want to keep the existing database entry, and add another database row for the additional language (if present); in this second case English keeps its original database row, and Italian is inserted as a new row
  5. It is made to work with bilingual sites, but in reality it will most likely work fine with multilingual sites too, and you may not even need to edit anything; still, you are expected to be familiar with PHP to run it with confidence (BACKUP THE DATABASE FIRST!!!!11!!!1oneone)

Following is the code.

<?php

ini_set('display_errors', 1);
ini_set('display_startup_errors', 1);
error_reporting(E_ALL);

$dbhost="localhost";
$dbname="dbname";
$dbuser="dbuser";
$dbpass="dbpass";

function doquery($query) {
	$result=mysqli_query($GLOBALS["dbcon"],$query);
	if ($report=mysqli_error($GLOBALS["dbcon"])) {
		die($report);
	}
	return $result;
}

function excerpt($string) {
	return substr($string, 0, 30)."...";
}

$erroremysql=false;

//$dbcon is also used directly below, for mysqli_real_escape_string()
$dbcon=$GLOBALS["dbcon"]=@mysqli_connect($dbhost, $dbuser, $dbpass);
if (mysqli_error($GLOBALS["dbcon"])) $erroremysql=true;
@mysqli_select_db($GLOBALS["dbcon"],$dbname);
if (mysqli_error($GLOBALS["dbcon"])) $erroremysql=true;
@mysqli_set_charset($GLOBALS["dbcon"],'utf8');
if (mysqli_error($GLOBALS["dbcon"])) $erroremysql=true;

$posts=doquery("SELECT * FROM wp_posts WHERE post_type='post'");

$a=array("<!--:","-->");
$b=array("\[:","\]");
$lang=array("it","en");
$main="en";

echo '<font face="Courier New">';

while ($post=mysqli_fetch_assoc($posts)) {
	echo "<strong>post {$post["ID"]}</strong>:<br/>";
	$s=null;
	if (strpos($post["post_title"],"[:en]")!==false || strpos($post["post_title"],"[:it]")!==false) {
		$s=$b;
	}
	else if (strpos($post["post_title"],"<!--:en-->")!==false || strpos($post["post_title"],"<!--:it-->")!==false) {
		$s=$a;
	}
	//no qTranslate tags at all: leave the post untouched
	if ($s===null) {
		echo "no language tags, skipped<br/>";
		continue;
	}
	$data=array();
	foreach ($lang as $l) {
		if (preg_match('/'.$s[0].$l.$s[1].'([\s\S]+?)'.$s[0].$s[1].'/',$post["post_title"],$matches)) {
			$data[$l][0]=$matches[1];
			preg_match('/'.$s[0].$l.$s[1].'([\s\S]+?)'.$s[0].$s[1].'/',$post["post_content"],$matches);
			$data[$l][1]=$matches[1];
		}
	}
	if (count($data)>1) {
		foreach ($data as $k=>$v) {
			echo "$k: ".excerpt($v[0])." - ".excerpt(strip_tags($v[1])).(($k==$main)?" main":"")."<br/>";
			//it is the main language, just updates post stripping the other language
			if ($k==$main) {
				doquery("UPDATE wp_posts SET post_title='".mysqli_real_escape_string($dbcon,$v[0])."', post_content='".mysqli_real_escape_string($dbcon,$v[1])."' WHERE ID=".$post["ID"]);
				echo "\n";
			}
			//it is not, so creates a new post copying over the rest of the data
			else {
				doquery("
					INSERT INTO wp_posts (
						post_author,
						post_date,
						post_date_gmt,
						post_title,
						post_content,
						post_modified,
						post_modified_gmt,
						post_name)
					VALUES (
						{$post["post_author"]},
						'{$post["post_date"]}',
						'{$post["post_date_gmt"]}',
						'".mysqli_real_escape_string($dbcon,$v[0])."',
						'".mysqli_real_escape_string($dbcon,$v[1])."',
						'{$post["post_modified"]}',
						'{$post["post_modified_gmt"]}',
						'".$k."-{$post["post_name"]}')
				");
				echo "\n";
			}
		}
	}
	else if (count($data)==1) {
		echo "1: ".excerpt($data[key($data)][0])." - ".excerpt(strip_tags($data[key($data)][1]))." main"."<br/>";
		doquery("UPDATE wp_posts SET post_title='".mysqli_real_escape_string($dbcon,$data[key($data)][0])."', post_content='".mysqli_real_escape_string($dbcon,$data[key($data)][1])."' WHERE ID=".$post["ID"]);
		echo "\n";
	}
}
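The two tag syntaxes the script recognizes (the older comment style and the newer square-bracket style) can be checked quickly from the shell; the sample strings are made up for illustration:

```shell
old='<!--:en-->Hello<!--:--><!--:it-->Ciao<!--:-->'
new='[:en]Hello[:it]Ciao[:]'
# pull the English chunk out of each format
echo "$old" | sed -E 's/.*<!--:en-->([^<]*).*/\1/'
echo "$new" | sed -E 's/.*\[:en\]([^[]*).*/\1/'
```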

This is what you need to add to the .htaccess in the root of your public_html folder, obviously adapting it to your needs, and adding more similar rows if you have additional languages:

RewriteRule ^it/([a-z0-9\-\_]+)/$ /it-$1/ [R=301,L]

In my case it worked like a charm, albeit not without some cold sweats.

PHP script to batch download from Wallpaperscraft website

I found myself interested in refreshing my wallpaper gallery, and the Wallpaperscraft website is really full of themed collections; like, it's huge!

But then, who's going to download every picture by hand, right? Well, I know PHP, so one morning when I had the time I jotted down some lines of code.

This script works at the time of writing, but any change in the site's source code or in the URL structure may break it (not that it would be so hard to fix anyway).

It should be clear to anyone who can run a PHP script: just change the configurable values at the top, and the folder name "Wallpapers" in the code, and it's good to go.

<?php

$webbase="https://wallpaperscraft.com/catalog/city/page";
$imgbase="https://images.wallpaperscraft.com/image/";
$res="_1920x1080";

$i=1;
$c=1;
$goon=true;
while ($goon) {
	echo "\ndownloading from $webbase$i\n\n";
	$html=file_get_contents($webbase.$i);
	//the server sometimes returns gzipped content: try to inflate it
	if (strpos($html,"<html>")===false) {
		$html=gzdecode($html);
		if (strpos($html,"<html>")===false) {
			echo "page $i came back garbled, downloading it again...\n";
			sleep(2);
			continue;
		}
	}
	preg_match_all("/<a class=\"wallpapers__link\" href=\"([\/a-z0-9\_]+)\">/",$html,$matches);
	//var_dump($matches);
	if (!empty($matches[1])) {
		foreach ($matches[1] as $image) {
			$image=explode("/",$image);
			$image=end($image);
			if (!file_exists("Wallpapers/$image.jpg")) {
				$handle=@fopen($imgbase.$image.$res.".jpg", 'rb');
				if ($handle) {
					echo "$i:$c $image ...\n";
					//file_put_contents() accepts a stream resource as data
					file_put_contents("Wallpapers/$image.jpg",$handle);
					fclose($handle);
					$c++;
					//https://images.wallpaperscraft.com/image/pier_dock_sea_dusk_shore_118549_1920x1080.jpg
				}
			}
		}
	}
	else {
		$goon=false;
	}
	//sleep(1);
	$i++;
}
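The download URL the script builds can be verified against the sample left in the code comment above:

```shell
# same pieces the PHP script concatenates
imgbase="https://images.wallpaperscraft.com/image/"
res="_1920x1080"
image="pier_dock_sea_dusk_shore_118549"
echo "${imgbase}${image}${res}.jpg"
```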

Rooted, cloud-free firmware for the Xiaomi robot vacuum

To jump to the instructions without reading the extremely interesting and useful preamble, click here.

The Mi Robot, also known as the Xiaomi Vacuum Robot, or Mijia Robot, is a vacuum cleaner worth far more than its price (currently about €210 shipped, with offers that are periodically repeated), featuring house mapping, a smart cleaning path, and automatic return to the dock to recharge and then continue, if the first pass cannot be completed with the remaining battery charge.

Several later models have come out; the one directly above it is the Mijia 1S, which adds 2,000 Pa of maximum suction power, "even smarter" cleaning paths, a small camera on top to spy on recognize our home even better, and the ability to define rooms and set virtual walls directly from the app. This last feature is, in my opinion, the most interesting one, because it saves you from buying the very expensive, and frankly ugly, black plastic strips used to fence off areas of the house.

Like all Xiaomi products, though, it has one nasty flaw: it connects to the manufacturer's cloud to upload a plethora of unknown (because encrypted) data, which for this model in particular will include your WiFi credentials and your house floor plan… I don't know about you, but that bothers me quite a bit.

The simplest way to block avoid this behavior is not to pair it with the app at all, leaving it network-orphaned; this, however, means the robot will keep its passwordless WiFi AP hotspot open for anyone to connect to, including the neighbor on the other side of the wall. It also means no floor plan view, no remote control, and so on.

You could also pair it with the Mi Home app, while taking care to block, at the router, outbound connections originating from the robot's MAC address, so that it cannot send data while remaining reachable from the app; but at that point it might be Mi Home sending the data in the robot's place… AHHHH, what a dilemma.
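On a Linux-based router, the MAC-level block mentioned above can be sketched with a single iptables rule; this is a config fragment shown under assumptions (the MAC address is a placeholder, the rule needs root, and your router must actually route the robot's traffic through its FORWARD chain):

```shell
# drop any forwarded (i.e. internet-bound) traffic coming from the robot's MAC
iptables -I FORWARD -m mac --mac-source AA:BB:CC:DD:EE:FF -j DROP
```

Traffic between the robot and devices on the same LAN segment is switched, not forwarded, so the app keeps working locally while the cloud is cut off.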

By the way, if you have connection problems between Mi Robot and Mi Home, make sure you have enabled GPS and granted the app permission to access geolocation data, otherwise it will never connect… oh, these Chinese.

In short, we need a way to make it work without connecting it to the cloud, while keeping the floor plan and remote control, and maybe adding other interesting features.

What follows is a no-frills tutorial:

  1. Download the XVacuum app, available for Android and iPhone/iOS
  2. Download a firmware with root and Valetudo precompiled inside; you can use this service, which is configurable, or download ready-made versions from this archive (Gen1 corresponds to the first Mi Robot, Gen2 to the Roborock S50 and others, Gen3 to the Mijia 1S and others, which currently cannot be rooted)
  3. Reset the robot's WiFi (on the Gen1, just hold down the two main buttons until you get the voice notification)
  4. Connect your smartphone to the robot's WiFi network (disabling mobile data, which would otherwise be used by default due to the lack of an internet connection)
  5. Open the XVacuum app and verify that the connection to the robot has been established
  6. Press the button to flash the firmware, and select the .pkg file you obtained in step 2
  7. Wait for the firmware to be first downloaded by the robot, then installed
  8. Reconnect to the robot's WiFi network, with mobile data disabled, and open the address 192.168.8.1 in your phone's browser
  9. Go to the settings section, open the WiFi section, and enter the connection details of your home/office WiFi (you will then have to reconnect to your home WiFi and find the IP address the robot has been assigned; that part is up to you)
  10. From this archive you can download the .pkg file with the robot's voice packs, and from the Settings/Voice section of the Valetudo web interface, upload the pkg, which will be installed immediately

Have fun!