Category Archives: howto’s

Cyclic MP3 sound recording in Ubuntu

I work in a relatively safe environment, yet it may very well happen that I need to prove something that someone said in my office, so I can hold it against them when the time comes.

My laptop is always turned on, so I could use it to record the environmental sounds around it, with a couple of requirements:

  • the recording must be totally unattended, starting when I turn on the pc, and stopping when I turn it off, without any user intervention
  • the recorded files must be somehow purged, starting from the oldest ones, so that my disk doesn’t get filled with audio files

In the Ubuntu spirit, I first searched for something that did the job right away, but with no luck.

So, still in the Ubuntu spirit, I had to arrange it myself: the idea is to record the audio in chunks of 10 minutes, deleting the oldest files each time so that the recording folder never holds more than a chosen maximum number of files.

You will need the audio-recorder package for the job, install it as follows:

sudo apt-add-repository ppa:osmoma/audio-recorder
sudo apt-get update
sudo apt-get install audio-recorder

Once the program is installed, open it (Alt-F2, then launch audio-recorder), click the “additional settings” button, and set up your default recording folder there; in this example it is the folder “audiofiles” directly under your home folder.
I also suggest changing the file naming pattern to %Y-%m-%d-%H:%M:%S so that each recording can be easily associated with its start time.
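Those placeholders look like the standard strftime ones, so you can preview what the resulting file names will look like with GNU date (a quick hypothetical check):

```shell
# Preview the file-name pattern from the article with GNU date;
# the placeholders are the usual strftime ones
date +%Y-%m-%d-%H:%M:%S
```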

Then you need a bash script that starts a new recording while closing the previous one; this is what I came up with:

#!/bin/bash

# stop the previous recording (if one is running), then start a new one
/usr/bin/audio-recorder --display=:0.0 -c stop
/usr/bin/audio-recorder --display=:0.0 -c start

# keep only the 150 newest files in the recording folder
# (xargs -r avoids an error from rm when there is nothing to delete)
cd /home/username/audiofiles || exit 1
ls -t | awk 'NR>150' | xargs -r rm --

which does the following: it stops the previously running instance of the program (if any), starts a new one, and then deletes the oldest recorded chunks so that at most 150 files remain in the recording folder (if you want a different amount, just replace 150 with the number you prefer). Please note that the recording folder in this bash script must be the same one set in the additional settings, so if you want to use a different folder, make sure to set it up both in audio-recorder and in this script.
Also, the username part of the path must be replaced with your Ubuntu username.
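If you want to convince yourself that the rotation logic works before trusting it with real recordings, the same keep-the-newest-N idea can be tried in isolation (a throwaway sketch using a temporary folder and a limit of 5 instead of 150):

```shell
# Create 8 dummy files with distinct timestamps, then keep only the 5 newest,
# exactly as the script does with its limit of 150
dir=$(mktemp -d)
for i in $(seq 1 8); do touch "$dir/file$i"; sleep 0.1; done
cd "$dir"
rm $(ls -t | awk 'NR>5')   # delete everything past the 5 newest
ls | wc -l                 # 5 files remain
```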

You can create this bash script as a “recordaudio.sh” file in your home folder, and then be sure to chmod +x recordaudio.sh so you can execute it.

Then, you need something that actually starts the recording, and cron is our friend here.

Run the command

crontab -e

and if it’s the first time you run it, you will be presented with a choice screen asking which editor you prefer… absolutely choose nano!

Inside the editor screen, paste this:

*/10 * * * * /home/username/recordaudio.sh

where “username” must be replaced with your Ubuntu username, then press Ctrl-X to save the file (press Y if prompted to confirm).

What this cron line does is run the bash script we just created every ten minutes, so the recorded sound files will be 10 minutes long. If you want a different length, just change the 10 in the schedule to the number of minutes you prefer.
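For different chunk lengths, or for recording only at certain times, the schedule part of the line is the only thing to touch; a few hypothetical variations (username is a placeholder as above):

```
*/10 * * * * /home/username/recordaudio.sh       # every 10 minutes, around the clock
*/30 * * * * /home/username/recordaudio.sh       # 30-minute chunks instead
*/10 8-18 * * 1-5 /home/username/recordaudio.sh  # 10-minute chunks, 8 to 18 hours, Mon-Fri only
```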

Restart the PC and notice how files get created inside your folder. After a while you will exceed the file limit you set, and you will notice that the number of files stays constant, with the oldest ones being deleted.

Connecting photovoltaic panels of different power in parallel

This article is written in plain, down-to-earth language, and is not aimed at the professional electrician, who certainly won’t need this short guide to know what to do; so please don’t attack me for the extremely simplistic tone.

If you ended up here, it’s because you are wondering whether it is possible to connect two or more different solar panels in parallel. Well, the answer is: it depends!

Let’s start from two premises:

  1. If you want to connect several photovoltaic panels together, it means you need to power one or more energy-hungry devices
  2. If you are asking yourself “can it be done?”, it means you already assume that the panels, if not identical, must at least have similar characteristics

Given these premises, the answer is yes: you can connect several solar panels in parallel, provided they are similar, and each is fitted with a protection diode.

The most important characteristic is that they have the same voltage, or voltages very close to each other: for example two 5V panels, or an 18V panel and a 20V one; it is less important that they also have the same power, i.e. the same output current.

When the panels are connected to a load, the voltage necessarily drops at the terminals of both panels, so even though the 18V panel kicks in “after” the 20V one, both will still contribute to producing current (in reality the weaker panel doesn’t kick in “after” at all: it simply covers a smaller share of the load, in proportion to its capacity to produce power). The protection diodes, useful above all on the panel with the lower voltage or current, prevent a reverse flow of current from the stronger panel into the weaker one, an energy “theft” that the device being charged would suffer from.

How should the diode be mounted? First of all, a Schottky-type diode is preferable (basically the ones you can salvage from dead energy-saving light bulbs), and it can be connected to the positive pole of the panel, with the diode’s white stripe on the side farthest from the positive pole itself.

I got practical proof of the feasibility by connecting a USB LED lamp to a 5V/5W panel, and then adding another 5V/3.5W panel in parallel with it: the lamp’s light instantly gets brighter. This is because the lamp is a significant load for the first panel, to the point that when the second, less powerful panel is added, it too is able to supply power that the lamp makes use of.

Howto batch watermark resize convert crop images and pictures

  • Download IrfanView (if you don’t already have it installed; it is an awesome image viewer, and great for basic editing with its Paint plugin, installed by default)
  • After opening the main viewer window, go to File>Batch Conversion/Rename
  • What opens is a window crammed with options and tools, with which you can:
    1. Convert between different formats (Jpeg, Gif, PNG, BMP, TIFF, whatever)
    2. Rename with serial progressions
    3. Crop at given dimensions (this and the following are accessible pressing the Advanced button)
    4. Resize to given dimensions, or proportionally inside a maximum dimensions rectangle
    5. Change color depth
    6. Flip horizontally and/or vertically
    7. Rotate 90° to left or right
    8. Grayscale or negative color
    9. Add an overlay text with custom color, font, size, position, alignment, you name it (this is a fast and easy way to watermark your pictures)
    10. Add a proper watermark image overlay, pointing to an image file, choosing its position and transparency
    11. Change color scheme inverting the order of the RGB values
    12. Apply other filters: sharpen, brightness, contrast, gamma correction, saturation, color balance for R, G and B, blur, median, and fine rotation (choose the amount of degrees)
    13. You can choose whether to overwrite or rename destination files, move them to subfolders, and so on
    14. You can add multiple files to the batch job by picking them from different arbitrary folders or loading them from a list saved into a TXT file

How to drag and drop files between windows in Ubuntu Unity launcher bar

So I like Unity: it looks nifty and the Zeitgeist launcher is so productive.
One huge gripe about Unity, though, is that you apparently cannot drag&drop files between applications open in the Unity launcher bar, namely:

  • a file from nautilus into thunderbird as a mail attachment
  • an image from nautilus into a photo-editing program
  • the same file from nautilus into an archive manager
  • an image into the upload page of imgur.com opened in your browser
  • anything else

I use a Precise Pangolin installation, and this is what works for me:

  1. Start dragging the file until you have it under your mouse pointer, ready to be dropped somewhere
  2. At this point you will notice that the launcher bar buttons turn gray (almost all of them; Nautilus and Firefox stay bright for me)
  3. Trying to drop onto any of the buttons, be it grayed out or bright, will NOT bring up its window
  4. Keep the mouse button pressed, and on your keyboard use the WinKey+TAB combination; you will see the application buttons on the Unity launcher bar brighten one at a time, cycling through both bright ones and grayed ones
  5. When you have highlighted the button of the program you need (for example, Thunderbird to attach a file to a mail), release the WinKey+TAB combo and the corresponding application window will open
  6. Finally drop your file in the opened window
  7. After you’ve done your job, flood LaunchPad with bug reports until we get this dumb problem fixed

MyPhoneExplorer via Bluetooth: phone could not be identified and parameter incorrect

Chances are that you are trying to have your Android phone sync with Outlook via MyPhoneExplorer, but whatever you do won’t work, since as soon as you try to connect, the procedure stops at “identification” and MyPhoneExplorer pops up a “phone could not be identified” error.

Syncing via USB cable works, though, but you are not going to settle for something as annoying as remembering to plug in your cable every time.

Update: try this first.
Chris, in the comments below (thank you, Chris), suggests the following, which apparently works in Windows 7 (Windows 8 doesn’t offer this option):

Easier if you just go to control panel > hardware and sound > devices and printers > bluetooth devices

Then right click on the device you’ve already paired. Go to Services tab, and under Bluetooth Services there should be a checkbox for Serial port(SPP) ‘MyPhoneExplorer’.

Check it, apply, done…

If that doesn’t work, continue reading!

Compared to the first version of this article, when I was on Windows 7 + Ice Cream Sandwich, I am now on Windows 8 + Lollipop, and started having this problem shortly after upgrading from KitKat. The solution was to open the MyPhoneExplorer settings, select Bluetooth in the connection tab, choose from the dropdown menu the other Bluetooth port (the one not selected before), and try to connect again; in my case it then went on as normal.

If this doesn’t work as well, then proceed with the very first guide.

So let’s start by saying this out front: this is black magic.

You may have tried going into “change bluetooth settings” in your control panel, opening the “COM Ports” tab, and manually adding incoming ports to try them out one by one in MyPhoneExplorer… this should not work, no matter how many times you reboot your phone and/or unpair/pair again with your PC.

The procedure I am going to illustrate may work for you, or it may not. It may do nothing on a sunny April day, yet deliver impressive results on a foggy November evening.

I had tried to replicate this same method before without success, but a few minutes ago it worked for me like a blessing (which is why I rushed to write an article about it). These are the steps (keep in mind I have ICS on my phone and Windows 7 x64 on my laptop; BT sync worked before, but had stopped working after I upgraded my Galaxy Note from Gingerbread to Ice Cream Sandwich):

  1. Find your phone entry in bluetooth devices in Windows, click it and remove it
  2. Unpair with your PC from your Android phone
  3. If there are any remaining, remove every reserved port in “COM Ports” tab of bluetooth settings (unless there are other ports being used by other BT devices you own, leave those alone)
  4. Reboot both your PC and your phone, preferably at the same time (black magic, remember?)
  5. Pair the device from Windows (go into bluetooth panel, “add device”, then proceed with the pairing)
  6. Your aim here is to have Windows itself add the COM ports. You should end up with two COM ports, one Incoming and one Outgoing; both should carry the BT name you gave your phone, and the Outgoing port should also say “MyPhoneExplorer”
  7. You should set MyPhoneExplorer to use the Outgoing port among the two, but if it doesn’t work for you, also try the Incoming port (black magic)

Good luck!

Convert a micro-SIM into a normal SIM card with just a knife, no adapter

Sometimes you’re stuck with a brand new microSIM that you can’t use in your phone, because you need a normal form factor SIM; maybe you took the chance to activate a special promotion that only came in microSIM format.

Well, do not despair: you can get a normal(-ish) SIM from a microSIM, just DON’T remove the microSIM from its matrix yet!

  • This is what we are going to get at the end: the microSIM converted, still inside its credit-card holder
  • Width comparison: our micro-SIM in its credit-card-like holder, next to a standard SIM card
  • After aligning the corresponding contacts of the SIM and micro-SIM, I used insulating tape to mark the side borders of the new SIM card on the holder
  • I did the same with the upper and lower borders, aligning the microSIM and SIM through their chips
  • The rough result after cutting the outer borders with an X-Acto knife guided by a ruler: slightly bigger, but centered nonetheless
  • After cutting the angled corner, I used a Dremel to round the corners

The SIM card works perfectly inside my Galaxy Note; it is slightly harder to push in, but once inserted it works like a charm.

Installing the Deluge BitTorrent client on Ubuntu with remote control via web interface

Typical scenario: you have, or want to set up, a home server running Ubuntu Linux that also acts as an always-on server for downloading and uploading via BitTorrent.
The resource requirements are very limited, so you can resurrect a very old system, for example even an old laptop on which you don’t even want to install a graphical interface (as long as it has a disk large enough for the files you want to download).

The starting idea is therefore to have a server that does everything behind the scenes, and that is secure.

This guide is inspired by two other guides: this one and this other one.

If you want an explanation of the various steps, you’ll find it at the end of the article.

Once Ubuntu is installed (the latest version, at the time of writing, is 12.04 Precise Pangolin), run the following commands:

sudo adduser --disabled-password --system --home /home/deluge --gecos "BitTorrent Service" --group deluge
sudo mkdir /home/deluge/Incoming
sudo chown deluge:deluge /home/deluge/Incoming
sudo mkdir /home/deluge/Completed
sudo chown deluge:deluge /home/deluge/Completed
sudo add-apt-repository ppa:deluge-team/ppa
sudo apt-get update
sudo apt-get install deluged deluge-webui

Then create the file /etc/init/deluge.conf:

sudo nano /etc/init/deluge.conf

and paste the following text into it:

start on (filesystem and networking) or runlevel [2345]
stop on runlevel [016]
env uid=deluge
env gid=deluge
env umask=000
exec start-stop-daemon -S -c $uid:$gid -k $umask -x /usr/bin/deluged -- -d

to save, press Ctrl-X and confirm by pressing Y (or S, depending on whether your Ubuntu is in English or Italian).

Then create the file /etc/init/deluge-web.conf:

sudo nano /etc/init/deluge-web.conf

and paste the following text into it:

start on started deluge
stop on stopping deluge
env uid=deluge
env gid=deluge
env umask=027
exec start-stop-daemon -S -c $uid:$gid -k $umask -x /usr/bin/deluge-web

To start deluge, run:

sudo start deluge

and to stop it:

sudo stop deluge

while to restart it (for example after configuration changes, so that they take effect):

sudo restart deluge

The deluge web interface starts and stops together with the daemon, so you don’t need to manage it separately. Reaching it is rather trivial: if you have a graphical interface installed on the same server, open a browser and go to http://localhost:8112 (the default password is deluge); otherwise, if you connect from another PC on the network, use http://&lt;server address&gt;:8112.

Once you have access to the web interface, a few changes are advisable: first of all, if the daemon connection window appears, press the Connect button; then, from the top panel, press Preferences and go through the various sections.

You will probably want to reduce the total number of connections (400-500), enable encryption (set “Enabled” and “Full stream” in the various options and tick the checkbox), set the file-saving folders correctly (use /home/deluge/Incoming for incoming files and /home/deluge/Completed for fully downloaded files), and, in the Interface section, change the default port to a different one, enable the SSL checkbox, and change the password by pressing the Change button below the text boxes; then restart deluge to apply the changes.
If you enabled SSL and changed the port, say to 1234, you will have to connect to https://localhost:1234 (or https://&lt;server address&gt;:1234); mind the protocol, which becomes https, with the final “s” standing for secure. Your browser will point out that the site requires certificates: on Firefox, click “Add exception” and save the certificate, or follow the equivalent steps on other browsers.

What exactly does this guide do?

The commands at the beginning create a user reserved for deluge (invisible on the login screen; you could call it a “service” account), so that a remote compromise of the account through a security hole in the deluge daemon does not put the whole server at risk. An account called deluge is created, belonging to the deluge group, with its home folder set to, guess what, /home/deluge, inside which the Incoming and Completed folders are created, destined respectively for files being downloaded and for files moved there once completed. The chown command assigns ownership of those folders to the deluge user, since we create them as root.
In the deluge.conf startup script, the value 000 assigned to the umask parameter grants read and write access to deluge’s download folders to the other accounts on the server, so that, for example, you don’t need to set up elaborate multi-user configurations to reach the server’s folders via Samba from a Windows PC.
If you have installed Samba, in fact, you can much more easily reach the deluge server over the local network using the server’s name. For example, if the Ubuntu machine’s hostname is pincopallo, once you have set WORKGROUP in /etc/samba/smb.conf to the same workgroup name as your Windows PC, you can simply type https://pincopallo:1234 in your favorite browser (following the previous example).
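To see what those umask values mean in practice, here is a self-contained sketch you can run anywhere (the /tmp paths are just for the demo):

```shell
# umask 000 (used for deluged): new dirs come out 777, new files 666,
# so every account can read and write them
(umask 000; mkdir /tmp/umask-demo; touch /tmp/umask-demo/file)
stat -c '%a' /tmp/umask-demo        # 777
stat -c '%a' /tmp/umask-demo/file   # 666

# umask 027 (used for deluge-web): group gets read-only, others get nothing
(umask 027; touch /tmp/umask-demo/web-file)
stat -c '%a' /tmp/umask-demo/web-file  # 640
```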

Ubuntu won’t start Gnome GDM after upgrade to Oneiric Ocelot

So I was upgrading my home server, first from Maverick Meerkat to Natty Narwhal, and then from Natty to Oneiric Ocelot.
It is not a plain desktop installation: back in the day I installed Ubuntu Server and then built upon it, adding Gnome without the useless stuff that comes with the ubuntu-desktop package.

Anyway, after upgrading to Oneiric the X interface went away: all I saw was the kernel’s boot message text up to the Apache2 start, and nothing else. SSH was still accessible so I could work through it, but you can also use the recovery console to access the system if you don’t have a remote terminal available.

Checking with dmesg I saw these error messages:

[   24.974182] gdm-simple-slav[1009]: segfault at 0 ip 002945b7 sp bfe9b6c8 error 4 in libnss_compat-2.13.so[291000+6000]
[   38.598946] gdm-simple-slav[1218]: segfault at 0 ip 00a3b5b7 sp bf9c35c8 error 4 in libnss_compat-2.13.so[a38000+6000]
[   39.562834] gdm-simple-slav[1238]: segfault at 0 ip 005eb5b7 sp bff72138 error 4 in libnss_compat-2.13.so[5e8000+6000]

Upgrading again, via SSH, to Precise Pangolin didn’t solve the problem, so I googled around and found this bug on Launchpad.

Apparently, the autologin feature prevents GDM from going on and just hangs there.

Briefly, this is what I did, and it worked in my case (mileage may vary):

sudo add-apt-repository ppa:gnome3-team/gnome3
sudo add-apt-repository ppa:ubuntugnometeam/ppa-gen
sudo apt-get update
sudo apt-get dist-upgrade
sudo mv /etc/gdm/custom.conf /etc/gdm/custom.conf.off

The last line is the command that disables autologin (by renaming the conf file that activates it); after doing this and rebooting, I was shown the login screen.
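For reference, autologin is driven by /etc/gdm/custom.conf; a file that triggers this hang typically contains something like the following (the username is of course just an example):

```
[daemon]
AutomaticLoginEnable=true
AutomaticLogin=username
```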

Android ringtones notifications alarms resetting reverted or lost after reboot

You just added a few ringtones of your own to your Android device, but after each reboot you lose them and they are reset to something else, or muted altogether? As if something were resetting them, or they were not saved correctly?

Well, most probably it’s because you copied the MP3s to your SD card, under

  • /ringtones
  • /alarms
  • /notifications

or even under

  • /media/audio/ringtones
  • /media/audio/alarms
  • /media/audio/notifications

(they all work as they should).

After a reboot, it may happen that for some reason your SD card takes too long to be mounted/scanned, so Android cannot find anything in the specified folders, because they haven’t become available yet.

I don’t know of any way to make Android mount your SD card faster, but a workaround consists in copying those files directly into the internal memory. An ugly workaround, if you ask me, since it takes away precious space from applications, but still…

So, just take your desired MP3s and drop them into the internal memory (using a root file explorer, like ES File Explorer), respectively in

  • /system/media/audio/ringtones
  • /system/media/audio/alarms
  • /system/media/audio/notifications

this way they will be treated just like built-in ringtones (you will actually find the system ringtones in those folders; you can delete them to save some space, since they are useless anyway), and they will be available right after boot.

Cache PHP to gzipped static html pages using htaccess redirect

No matter the server performance, the fastest kind of website for a visitor is one with static HTML pages: the server just has to send existing data to the browser instead of starting the PHP interpreter, opening a connection to MySQL, fetching data, formatting the page and only then sending it. Less CPU, less memory, less processing time, more users served in less time.

Avoiding PHP and MySQL execution altogether is the key, and my project was to create something similar to WPSuperCache to be implemented in a generic PHP website, so my thanks go to said plugin’s developers for the source code that gave me precious tips.

DISCLAIMER: this guide presumes you are “fluent” in PHP and .htaccess, and is meant only to give directions on how to obtain a certain result. It will not give you a pre-cooked solution to copy/paste into your website; you need to change/add code to adapt it to your needs. I am not the person who can help you if you need a tailored solution, simply because I have no time to give away free personalized help!

Here’s our logical scheme:

  1. Visitor comes to website and asks for page X;
  2. Webserver checks via .htaccess (avoiding PHP execution) if there is a static cached version of page X already, and in this case either serves the gzipped file (if the client supports it), or falls back to the uncompressed HTML version… at which point our scheme ends; otherwise, if no cached version exists, continue to next step;
  3. PHP builds the page and serves it to the visitor as “fresh”, but at the same time saves the HTML output as both gzipped and uncompressed version so the next visitor will directly download that one.
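To make step 3 concrete, this is the kind of file pair the PHP routine will leave behind for a page reachable at domainname.com/pagename, reproduced here with plain shell in a throwaway folder (paths and page name are examples only):

```shell
# Simulate what the PHP routine writes for one cached page
mkdir -p /tmp/cache
echo '<html><body>page X</body></html>' > /tmp/cache/pagename-static.html
gzip -9 -k /tmp/cache/pagename-static.html   # -k keeps the uncompressed copy
ls /tmp/cache   # pagename-static.html and pagename-static.html.gz
```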

This is a very bare procedure, which needs to be perfected by the following conditions:

  • Not all pages need to be cached (those that by their very nature change very often, like real-time statistics, a poll, whatever)… in fact, some pages must NOT be cached (those that only moderators can view, both for security reasons and for consistency); PHP must avoid caching those pages, so that .htaccess will always serve the “fresh” PHP page.
  • Pages that do get cached still need to be refreshed, because sometimes they can change (articles or posts that are modified/updated with time, or where comments get posted).
  • Some pages, even if a corresponding cached version exists, must NOT be served from cache, for instance when POST data is submitted (sometimes also with GET; as the webmaster you should know if this is the case), which requires PHP to handle it.

Define “caching behaviour” in PHP

This section is where you set the variables that the PHP caching routine will check against to know when and how to do its job.

$cachepath=$_SERVER["DOCUMENT_ROOT"]."/cache/";
$cachesuffix="-static";
$nocachelist=array(
    "search"=>1,
    "statistics"=>1,
    "secretcodes"=>1,
    "..."=>1,
 );

And explained:

  • $cachepath is the folder where you want the static (HTML and gzip) files to be stored (the website I use this caching routine on, has all the pages accessible from root folder with rewrite URLs, in other words the only slash in the URL is the one after the domain name, and I have no need to reproduce any folder structure; if you do, you’re on your own);
  • $cachesuffix is a string I need to add to the URL string (for example, if the address is domainname.com/pagename then the cache file will be named pagename-static); this is useful if you want to cache the homepage, which is not named index.php but is just domainname.com/, because in that case the cache file name would be empty and .htaccess wouldn’t find it;
  • $nocachelist is an associative array where you have to add as many keys (pointing to a value of 1) as the pages you don’t want (for whatever reason) to cache; in the key name you have to put the string the user would write in the URL bar after the domain name slash to get to the page, for example if you don’t want to cache domainname.com/statistics you would be using “statistics”=>1 in there, as already is.

Have PHP actually save to disk the cached pages

     if (
        !isset($nocachelist[$_GET["page"]]) &&
        !$_SESSION["admin"] &&
        !count($_POST) &&
        !$_SERVER["QUERY_STRING"]
    ) {
        //build the uncompressed cache
        if (!file_exists($cachepath.$url.$cachesuffix.".html")) {
            $cached=str_replace("~creationtime","cached",$html);
            $fp=fopen($cachepath.$url.$cachesuffix.".html","w");
            fwrite($fp,$cached);
            fclose($fp);
        }
        //build the compressed cache
        if (!file_exists($cachepath.$url.$cachesuffix.".html.gz")) {
            $cachedz=str_replace("~creationtime","cached&gzipped",$html);
            $cachedz=gzencode($cachedz,9);
            $cachedzsize=strlen($cachedz);
            $fp=fopen($cachepath.$url.$cachesuffix.".html.gz","w");
            fwrite($fp,$cachedz);
            fclose($fp);
        }
    }

Pretty much self-explanatory, isn’t it?
No seriously, do you really want me to elaborate?

Ok, you got it.

  • The $_GET[“page”] is a GET value I set early in the code to know where we are in the website, you can use any variable here as long as you can check it against the $nocachelist array;
  • The other conditions in the first if should be clear, they avoid building the page’s cache if the CMS’s admin is logged (security) or if POST data is submitted or if there is a query string appended to the URL (consistency/stability);
  • $url is a variable that I define early in the code, and contains the string after the domain name slash and before the query string question mark, basically the kind of string you fill the $nocachelist array with (if you were paying attention, you may now think I have a redundant variable since $url and $_GET[“page”] should be the same, but this is not the case for other reasons);
  • $html is the string variable that, across the whole CMS, defines the raw HTML code to echo at the end of the PHP execution; you can either do like I do and define such string, or use an output buffer to obtain HTML if you instead print the HTML directly to screen during PHP execution;
  • ~creationtime is a “hotkey” I use in my template to plug in the number of seconds it took to create the page in PHP; since I am creating a cached version now, the creation time of the page is zero, because the page is already there to be downloaded by the browser instead of having to be built by the server. So in there I print either “cached&gzipped” for clients that support gzip, or just “cached” when the browser doesn’t; you can safely strip out this part, as it is more of an eyecandy/nerdy/debug thing.

Let .htaccess send the cached files before starting the PHP compiler

AddEncoding x-gzip .gz
<FilesMatch "\.html\.gz$">
    ForceType text/html
</FilesMatch>

#GZIP CMS
RewriteCond %{REQUEST_METHOD} !POST
RewriteCond %{REQUEST_URI} !/forum/
RewriteCond %{QUERY_STRING} !.*=.*
RewriteCond %{HTTP:Cookie} !^.*(isadmin|nocache).*$
RewriteCond %{HTTP:X-Wap-Profile} !^[a-z0-9\"]+ [NC]
RewriteCond %{HTTP:Profile} !^[a-z0-9\"]+ [NC]
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond /full/path/to/your/htdocs/cache/$1-static.html.gz -f
RewriteRule ^(.*) /cache/$1-static.html.gz [L,T=text/html]

#UNCOMPRESSED CMS
RewriteCond %{REQUEST_METHOD} !POST
RewriteCond %{REQUEST_URI} !/forum/
RewriteCond %{QUERY_STRING} !.*=.*
RewriteCond %{HTTP:Cookie} !^.*(isadmin|nocache).*$
RewriteCond %{HTTP:X-Wap-Profile} !^[a-z0-9\"]+ [NC]
RewriteCond %{HTTP:Profile} !^[a-z0-9\"]+ [NC]
RewriteCond /full/path/to/your/htdocs/cache/$1-static.html -f
RewriteRule ^(.*) /cache/$1-static.html [L]

This is the code you need to plug in .htaccess, preferably after everything else, but before defining the custom error pages; anyway, since you should be .htaccess-fluent, you shouldn’t need to be told where this fits best.

Some detailing for the curious:

  • First bit is needed (at least I needed it) to serve the gzipped files in a way that the browser knows to handle, otherwise I just got gibberish (the gzip was being sent to output without being uncompressed by the browser first);
  • if the cookies “isadmin” or “nocache” exist, the cached version of the page will not be served even if it exists. Easy explanation: if an admin is logged in, and there is special content on a page that only admins can see, you don’t want them to get the “vanilla” cached version of the page instead; so it’s your duty to set an “isadmin” cookie when an admin logs in, and remove it when the admin logs out;
  • choosing the correct full path on the webserver was a bit tricky in my case; I can’t quite remember what the issue was, but depending on the method I used I got different paths. I seem to remember having to choose between $_SERVER[“DOCUMENT_ROOT”] and dirname(__FILE__), because only one of them worked with .htaccess;
  • you don’t really have to change much in this code snippet unless you have particular needs; the /forum/ path exclusion could be irrelevant in your case;
  • thanks go to WPSuperCache developers for their .htaccess code that I stole to build this snippet!

Last but not least

The cache is there to help you, not to handicap your website. You choose what the cache hit rate should be, and how many times a cached page should be served instead of the dynamic one before considering it paid off; but you have to clear the cache from time to time to make sure you’re not serving outdated content to your visitors.

In my case I use an external cronjob provider (setcronjob.com) to trigger a PHP routine every night, which includes the following:

$handle=opendir("cache");
// delete every file in the cache folder (@ silences the warnings unlink
// raises on the "." and ".." entries, which are not regular files)
while (($file=readdir($handle))!==false) @unlink("cache/".$file);
closedir($handle);

so that every day the website starts off with a fresh cache. No less important: you should clear the cache of a single page as soon as you know that page changed, unless you’re OK with the changes becoming visible only the next day, after all the cache is cleared anyway. Example: you edit a page while logged in as admin, or a user posts a comment, or anything you control alters the page’s HTML in any way: simply use unlink() to delete both the gzipped and uncompressed caches, and the website will recreate them with the updated content.
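If you’d rather keep the nightly cleanup out of PHP, an equivalent cron-driven shell command would be a single find invocation; the sketch below runs against a disposable /tmp folder so it can be tried safely (on the real site you would point find at your $cachepath folder instead):

```shell
# Set up a fake cache folder with one cached page pair, then wipe it the
# way a nightly cron job could: delete both .html and .html.gz cache files
mkdir -p /tmp/cache-demo
touch /tmp/cache-demo/pagename-static.html /tmp/cache-demo/pagename-static.html.gz
find /tmp/cache-demo -type f \( -name '*-static.html' -o -name '*-static.html.gz' \) -delete
ls /tmp/cache-demo | wc -l   # 0
```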

 

Have fun pimping up this draft!