WordPress development tricks
Expanding on Moving a WordPress instance by hand, here’s my list of hacks and tricks necessary for managing local WordPress instances.
Send PHP development server (‘wp server’) logs to a separate file
- In php-cli’s php.ini (should be somewhere under /etc/php), set:
  log_errors = On
  error_log = /var/log/php_errors.log
- sudo chmod 777 /var/log/php_errors.log (yeah, whatever)
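To check that php-cli actually picked up the settings and to follow the log while developing, something like this should do (paths as above):
php -i | grep -E 'log_errors|error_log'   # confirm the settings are active
tail -f /var/log/php_errors.log           # watch errors come in while you click around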
Catch emails
- Install mailcatcher: gem install mailcatcher (requires rubygems package to be installed)
- In php-cli’s php.ini, set sendmail_path = /usr/bin/env catchmail
- At the very bottom of wp-config.php, below where it loads wp-settings.php, put:
if (defined('WP_CLI')) {
WP_CLI::add_wp_hook('wp_mail_from', function () {
return '[email protected]';
});
} else {
add_filter('wp_mail_from', function () {
return '[email protected]';
});
}
(otherwise WordPress will fail to send emails, saying “Invalid address: (From): wordpress@localhost”)
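Don’t forget to actually start mailcatcher before testing. A quick way to check the whole chain is to fire off a test mail with wp-cli and then look at mailcatcher’s web UI (by default it listens for SMTP on 127.0.0.1:1025 and serves the UI on http://127.0.0.1:1080); the address below is just a placeholder:
mailcatcher
wp eval 'wp_mail("[email protected]", "Test subject", "Test body");'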
Use more workers for the PHP development server
This may make things faster, but it may also slow things down to a crawl. Give it a shot.
Simply prepend PHP_CLI_SERVER_WORKERS=N to the wp server command, like so:
PHP_CLI_SERVER_WORKERS=50 wp server
Kill and restart a hanging PHP development server
If you see pages loading indefinitely, just CTRL-C the development server and kill all its workers before restarting:
pkill -f 8080 # <-- Replace '8080' with whatever port you used
Disable Dark Reader
In both Firefox and Chrome, something in the Dark Reader browser extension triggers multiple background loads of the index page, which really slows things down when using PHP’s development server.
Fix uploads not working even though file permissions are OK
This could be caused by the upload_path setting in the wp_options table. To fix, clear the field:
UPDATE wp_options SET option_value = '' WHERE option_name = 'upload_path';
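If you’d rather not open a SQL shell for this, wp-cli can do the same:
wp option update upload_path ''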
Recovering Dovecot mailboxes from disk (cPanel, Maildir format to Mbox)
I was given a backup of a hard drive containing cPanel-managed mailboxes and wanted to recover them into a readable format. When spinning up a new server from the drive backup, I could no longer access webmail or IMAP due to a cPanel license issue, which I briefly tried to work around and then gave up on. cPanel uses Dovecot to manage user mail.
What follows is a quick manual on how to recover and read Dovecot mailboxes given only disk access, a couple of scripts and a mail client that can read Mbox files.
We assume the mailboxes are stored in the Maildir++ format. These can be recognized by their directory structure: they always contain the subdirectories cur, new, and usually tmp.
1. Find and copy the mailboxes
On my particular cPanel-managed server, the maildirs were located in /home/<user>/mail, and there were also sub-maildirs in /home/<user>/mail/<domain>/<account>. Specific folders in the mailboxes used hidden directories like .Sent or .Trash.
You should be able to find maildirs by running:
find / -type d -name cur
Copy them all to a temporary workspace directory somewhere.
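For the copy itself, rsync with --relative keeps the original directory layout intact; something along these lines (paths are just an example):
mkdir -p /root/mail-recovery
rsync -aR /home/*/mail/ /root/mail-recovery/   # -R (--relative) preserves the full source paths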
2. Unzip e-mails where necessary
I found that some of the e-mails were garbled when later converting the maildir to an mbox file.
This turned out to be because some (newer) mails were stored gzipped.
I wrote a quick shell script to walk over all the e-mails and gunzip them: maildir-gunzip.sh
Don’t use it on the originals; make a copy first!
To use:
chmod +x maildir-gunzip.sh # Make it executable
./maildir-gunzip.sh /path/to/copy-of-maildir
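Purely for illustration, the idea behind the script is roughly this (a sketch, not the actual maildir-gunzip.sh):
#!/usr/bin/env bash
# Sketch only: decompress gzip-compressed messages in a maildir copy, in place.
set -euo pipefail
find "$1" -type f \( -path '*/cur/*' -o -path '*/new/*' \) -print0 |
while IFS= read -r -d '' msg; do
    if file -b "$msg" | grep -q 'gzip compressed'; then
        mv -- "$msg" "$msg.gz"
        gunzip -- "$msg.gz"   # writes the decompressed mail back under the original name
    fi
done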
3. Convert to an Mbox file
Mbox files are a convenient, and perhaps the most common, format for moving mailboxes around.
There is a nice script called maildir2mbox.py which has been adapted by various people over the years.
The most recent version I could find was one by GitHub user bluebird75. Download the script here. (mirror)
Make sure you have Python v3 installed on your machine.
Run the script like so:
maildir2mbox -r /path/to/maildir-copy outputfile.mbox
This Dovecot docs page contains an alternative script for this written in Perl, as well as some scripts for converting between other formats.
4. Open the Mbox file
Any email client that can read Mbox files will do.
I used Evolution. In Evolution, go to File > Import and follow the steps.
Make sure you import into a new folder so you can easily delete the emails later.
Extracting MySQL databases from a disk backup, the easy way
A lot of guides online tell you that, if you want to restore a MySQL instance using only a backup of the /var/lib/mysql directory, you should recover/rebuild the entire operating system that MySQL ran on.
It’s true that you should mimic the original setup, but only with regard to the MySQL version and, sometimes, whether MariaDB or MySQL was used. Once you’ve got that, you can very easily spin the database up again using Docker. Assuming you know the database user and password, you can just run mysqldump as you normally would.
Below are the steps that worked for me.
1. Determine MySQL version
Find the location of the mysql or mariadb binary:
find /thebackup -type f \( -name mysql -o -name mariadb \)
I had a rough idea of what the version would be so I ran strings and searched for 8.0:
$ strings /thebackup/usr/bin/mysql | grep '8\.0'
8.0.25
/build/mysql-8.0-eXpnQw/mysql-8.0-8.0.25/sql-common/client.cc
/build/mysql-8.0-eXpnQw/mysql-8.0-8.0.25/sql-common/client_plugin.cc
...
An alternative which may work generically:
strings /thebackup/usr/bin/mysql | grep '^/build'
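Depending on the installation, the data directory itself may also contain a hint; many setups keep a mysql_upgrade_info file with the version string in it:
cat /thebackup/var/lib/mysql/mysql_upgrade_info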
2. Start the database in Docker
Now that we have the exact version number (8.0.25), we can spin up a Docker container for it:
docker run --rm --name mysql-recovery -v /thebackup/var/lib/mysql:/var/lib/mysql mysql:8.0.25 --skip-grant-tables --user=mysql
Replace mysql:<version> with mariadb:<version> if it’s MariaDB.
Note that we’re mounting the mysql lib directory into the container using -v. If you want to play it safe, make an extra backup of /var/lib/mysql first.
3. Extract data
You can now browse or dump the data using command-line tools:
docker exec -it mysql-recovery mysql
docker exec mysql-recovery mysqldump mydatabase > mydatabase-backup.sql
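If you don’t know the database names up front, list them first, or simply dump everything in one go:
docker exec mysql-recovery mysql -e 'SHOW DATABASES;'
docker exec mysql-recovery mysqldump --all-databases > all-databases.sql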
15.02.22 Moving a WordPress instance by hand
Over the years I’ve migrated a lot of WordPress sites. Although there are plugins that can take care of it, I have yet to find a more efficient way than just doing it by hand. The following is my recipe for migrating WordPress instances.
In the example we move a site to our local development environment.
Prerequisites
- Shell (usually SSH) access to the server that runs the WordPress instance
- PHP CLI (sudo apt install php-cli)
- WP-CLI
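If WP-CLI isn’t installed yet, the usual route is to grab the phar and drop it on your PATH (double-check the URL against the official WP-CLI docs):
curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp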
Exporting the database
The quickest way is to use wp-cli:
cd public_html
wp db export example.com.sql
If that doesn’t work, here’s the manual way:
cd public_html
# Find database credentials
grep DB_ wp-config.php
# Create a database dump
mysqldump somedb -u someuser -p'somepassword' > example.com.sql
Archiving the website
Make a tarball of the site’s files:
tar czf example.com.tgz public_html
If it’s only for testing and you’re in a hurry, exclude the WP Uploads directory:
tar czf example.com.tgz --exclude='**/wp-content/uploads/*' public_html
Now copy the tarball over to your machine.
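For example with scp (user, host and path are placeholders):
scp [email protected]:~/example.com.tgz .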
Setting up local environment
- Unpack the WordPress instance:
  tar xzf example.com.tgz
- Update database credentials in wp-config.php
  For local development, I usually just take user ‘root’ with an empty password (the Gods of Security frown upon me)
- Create and import the database, e.g.:
  sudo mysqladmin create example_com
  cd public_html
  wp db cli < example.com.sql
- To not need HTTPS locally:
  a. Remove any line containing “FORCE_SSL_ADMIN” from wp-config.php
  b. Remove or rename (= deactivate) any plugin (from wp-content/plugins) that forces ssl, for instance ‘really-simple-ssl’
- Replace your WordPress base URL using wp-cli:
  wp search-replace https://example.com http://localhost:8080
- Run the local development server:
  wp server --port=8080
You should now be able to access the WordPress site by visiting http://localhost:8080 !
Optional, but handy
Create an admin user for yourself:
wp user create myadmin [email protected] --role=administrator --user_pass=admin
Troubleshooting
If connecting to the database fails, it could be that an empty password is not accepted. As using root with an empty password is a bit of a hack, here’s another hack to make it work for newer versions of MySQL/MariaDB. Obtain a root MySQL shell and execute:
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '';
FLUSH PRIVILEGES;
Further (necessary) tweaking
See WordPress development tricks
6.04.14 HOWTO: rtorrent + rutorrent on the Netgear ReadyNAS 102
In 14 short steps. Wonderful!
Prerequisites:
- A Netgear ReadyNAS 102 with firmware version 6.1.6 to 6.1.8, which I’ve tested this on
- Root access to the NAS. All the commands in this tutorial should be run as root
NOTE: This tutorial probably works for any system running Debian Wheezy.
6.04.14 HOWTO: Unattended rdiff-backup + multiple commands
Here’s a small addition to Dean Gaudet’s tutorial on how to set up rdiff-backup for secure, unattended remote backups.
The scenario: you want host1 to pull backups from:
- host2:/var/log
- host2:/var/www
You’ve set everything up:
- A non-root user on host1 called backupuser which initiates the backup
- SSH private/public keys for backupuser
- Added backupuser’s public key to host2:/root/.ssh/authorized_keys
- An entry for host2 preconfigured in host1:~/.ssh/config as explained in the original tutorial.
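That host1:~/.ssh/config entry might look something like this (hostname and key path are illustrative; the original tutorial has the authoritative version):
Host host2
    HostName host2.example.com
    User root
    IdentityFile ~/.ssh/backupuser_id_rsa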
Then you find out that in authorized_keys, you can only limit backupuser to run one command, not multiple:
root@host2:~# cat /root/.ssh/authorized_keys
command="commandname" ssh-rsa FBwfijwefwB(...etc...)
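The classic trick for letting a single key run more than one thing is to point command= at a small wrapper that dispatches on SSH_ORIGINAL_COMMAND and rejects everything else; a generic sketch (not necessarily the route this post takes):
#!/bin/sh
# Hypothetical /root/allowed-commands.sh, forced via authorized_keys:
#   command="/root/allowed-commands.sh" ssh-rsa FBwfijwefwB(...etc...)
# Only run the commands we explicitly expect from backupuser.
case "$SSH_ORIGINAL_COMMAND" in
    "rdiff-backup --server")
        exec rdiff-backup --server
        ;;
    "uptime")
        exec uptime
        ;;
    *)
        echo "Rejected command: $SSH_ORIGINAL_COMMAND" >&2
        exit 1
        ;;
esac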