(Sidenote: incorrect use of these permission types may result in severe security risks.)
The sticky bit is a Unix access rights flag that can be assigned to files and directories. In a directory with the sticky bit set, only the file's owner, the directory's owner, or root can rename or delete a file. Without the sticky bit, any user with write and execute permissions on the directory can rename or delete files in it.
The /tmp directory is an example of a directory that typically has the sticky bit set, to prevent users from moving or deleting other users' files.
The sticky bit can be set using the chmod command, either with the symbolic notation (+t):
chmod +t /tmp/file.txt
chmod +t /tmp/directory/
or using the octal mode (1000):
chmod 1644 /tmp/file.txt
chmod 1755 /tmp/directory/
Clearing the sticky bit is just as simple:
chmod -t /tmp/file.txt
chmod 0644 /tmp/file.txt
In a directory listing, the sticky bit is indicated by a lowercase t at the end of the permissions string:
-rwxrwxr-t 1 user group 147 Jan 23 17:39 /tmp/file.txt
If the sticky bit is set on a file or directory without the execute bit set for the others category, it is indicated by an uppercase T:
-rw-rw-r-T 1 user group 147 Jan 23 17:39 /tmp/file.txt
setuid (set user ID upon execution) is a Unix access rights flag that allows users to run an executable with the permissions of the executable's owner.
When an executable file has the setuid attribute, normal users gain the privileges of the user who owns the file (commonly root). When root privileges have been gained within the process, the application can perform tasks on the system that regular users would normally be restricted from doing.
Examples of executables that use the SUID bit are passwd, ping and crontab.
You can either set the SUID bit on a file using the symbolic notation:
chmod u+s /path/to/file.txt
or using octal notation:
chmod 4750 /path/to/file.txt
In the example above, 4 is the SUID bit, 7 grants full permissions to the user, 5 grants read and execute permissions to the group, and 0 grants no permissions to others.
When the SUID bit is set on an executable, there will be a lowercase s where you would normally see the x in a directory listing. For example, for /bin/ping:
$ ls -l /bin/ping
-rwsr-xr-x 1 root root 44168 May 7 2014 /bin/ping
If the file does not have execute permissions but the SUID bit is set nevertheless, there will be an uppercase S:
$ ls -l /tmp/file.txt
-rwSr--r-- 1 user group 147 Jan 23 17:19 /tmp/file.txt
To find files or directories with the SUID bit set you can use:
find / -perm -4000
setgid is similar to setuid, except that the executable runs with the permissions of the file's group. On directories, the setgid bit has a different effect: new files created inside the directory inherit the directory's group.
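Setting the setgid bit works the same way as the sticky bit and SUID examples above. On a directory it is commonly used for shared folders; a quick demonstration:

```shell
# Set the setgid bit on a directory: files created inside will inherit
# the directory's group instead of the creating user's primary group.
mkdir shared
chmod g+s shared    # symbolic notation; the octal equivalent is: chmod 2755 shared
ls -ld shared       # the group execute slot now shows an "s" (or "S" without group execute)
```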
If you followed my SpamAssassin setup, the SpamAssassin thresholds are defined in the file /etc/amavis/conf.d/20-debian_defaults. Below are the relevant settings that control the thresholds:
Flag | Description
---|---
$sa_tag_level_deflt | The threshold at which amavisd will add a header to the mail to show the spam score.
$sa_tag2_level_deflt | The threshold at which amavisd will add the value of $sa_spam_subject_tag (by default ***SPAM***) to the mail's subject.
$sa_kill_level_deflt | The threshold at which amavisd will execute the action described by $final_spam_destiny (possible options: REJECT, BOUNCE, DISCARD and PASS).
In the end, the only threshold I'm really interested in is $sa_kill_level_deflt. But how do you find the right value for this level? By default it is set to 6.31 (at least in my case), but is this really the value that blocks the most spam while still passing legitimate (i.e. non-spam) messages? It's time to find out, and in order to do so we're going to need $sa_tag2_level_deflt.
Any message with a score above this threshold is supposed to get a mail header with some information to use for analysis. During my experiments I was not able to get this to work (I probably had some other setting messing things up). However, the relevant information is also added to the log file (/var/log/mail.log). For every message (spam and legit), not only the spam score is shown, but also what amavis did with the message: Passed CLEAN, Passed SPAMMY or Blocked SPAM.
By choosing a relatively large value for $sa_kill_level_deflt (let's say 10) and a low value for $sa_tag2_level_deflt (let's pick 0), a large group of messages will be marked as Passed SPAMMY. Many of these messages will clearly be spam. Now you can use the information from the log file to work out a new upper threshold: you will notice that no legitimate message exceeds a certain spam score. You have found your new value for $sa_kill_level_deflt!
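To speed up that log analysis, a small shell helper can pull the scores out. This is only a sketch: it assumes your amavis log lines contain a literal "Passed SPAMMY" and a "Hits: <score>" field, which may differ per version:

```shell
# List the scores of all messages amavis passed as SPAMMY, lowest first.
# Assumes log lines contain "Passed SPAMMY" and a "Hits: <score>" field.
spammy_scores() {
    grep 'Passed SPAMMY' "${1:-/var/log/mail.log}" | grep -o 'Hits: [-0-9.]*' | sort -k2 -n
}
```

Running this over a few days of logs quickly shows where the legitimate messages stop and the obvious spam begins.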
The same can be done for the lower threshold. There is a certain minimum score for obvious spam messages. This value can be used for $sa_tag2_level_deflt. (Note that this has no effect on the number of messages being blocked; it just means that anything below this limit is considered non-spam.)
Currently I’m using these values:
$sa_tag_level_deflt = 0.0;
$sa_tag2_level_deflt = 0.5;
$sa_kill_level_deflt = 2.0;
This will block most of the obvious spam without rejecting (too many) legitimate messages. It can still happen that a normal message exceeds my limit of 2.0 and is therefore blocked. During my experiments this did not happen often, and it is probably a sign of some misconfiguration on the sender's side. My guess is that they often experience problems delivering their mail anyway.
Anything below 0.5 is considered clean. Most major players (Gmail, Hotmail, Mailchimp etc.) have excellent scores (far, far below zero) and will therefore always be delivered. Some spam messages have scores just above or even below zero and will also pass the filter. This is something I simply accept.
The problem area is the twilight zone between 0.5 and 2.0. For some reason too many legitimate messages look kind of spammy (no subject line, only images or videos, all caps, a misconfigured server, ...) and there is no foolproof way for a non-human to distinguish them from the spam messages that succeed in hiding their spammy intentions. By narrowing down this range you will receive less spam, but potentially miss certain legitimate messages. Trial and error will lead you to values that are acceptable for you.
Download this Bash script and save it on your server (I like to use /opt/scripts/ for things like this).
Some things to note about the script:

- The IP ranges are written to /etc/nginx/cloudflare. Your setup might be different; change accordingly.
- It uses curl or wget to download the files from the CloudFlare site. If neither is found, the script will exit.

I use the script in a cronjob to regularly check for updates. In order to do so, we first have to set the permissions correctly:
chmod 700 /opt/scripts/cloudflare-update-ip-ranges.sh
This makes the script executable for the user (root in my case, since it needs to reload the nginx config), but for no one else.
Then edit the user's crontab:
crontab -e
and add the following lines:
# Update CloudFlare IP Ranges (every Sunday at 04:00)
0 4 * * sun /opt/scripts/cloudflare-update-ip-ranges.sh > /dev/null 2>&1
The list of IP addresses probably won’t change that often, so checking just once a week should be okay.
I use grep a lot to search input files for a certain string. I use it many times each day, but somehow never bothered to improve my "grep fu".
Since I'd rather match too much than too little, I usually give grep a slightly too generic search string and have it look in more files than is really necessary. Then I patiently step through the results to get to the file I'm looking for. (You probably already see several parts where this can be improved.)
To drill down into the results of a grep for the text FIRST_STRING, I usually piped the output to another grep command that searched for the text SECOND_STRING.
grep FIRST_STRING /path/to/files/ -R | grep SECOND_STRING
And if you needed to eliminate even more matches from the result set, you could add more pipes.
But grep can search for patterns that contain a regular expression. (It's the re in grep.) Why not use that feature to get rid of the pipe structure? Then we have the following single command:
grep FIRST_STRING.*SECOND_STRING /path/to/files/ -R
This will only match patterns where SECOND_STRING follows FIRST_STRING, not the other way around. This is how I used the pipe method anyway, so for me that's no big deal.
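If you do need matches in either order, an extended regular expression (-E) with alternation covers both cases; here demonstrated on a throwaway file:

```shell
printf 'alpha then beta\nbeta then alpha\nonly alpha here\n' > sample.txt

# The plain pattern only matches one order:
grep 'alpha.*beta' sample.txt

# Alternation matches both orders:
grep -E 'alpha.*beta|beta.*alpha' sample.txt
```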
And of course, this also works with ack:
ack FIRST_STRING.*SECOND_STRING /path/to/files/
When you extract a tar archive, by default the files and directories are extracted to the current working directory. If you want to extract them to a different directory, you can use the -C (or --directory) option:
tar xvf archive.tar -C /path/to/target/directory
Note: the target directory must exist; tar will not create it for you.
If the tarball already contains a top-level directory (all files and subdirectories are stored in one general directory), you can add the --strip-components <count> option to extract the contents at the desired location without having to move them afterwards:
tar xvf archive.tar -C /path/to/target/directory --strip-components 1
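To see --strip-components in action, you can build a small archive with a top-level directory and watch the prefix disappear on extraction (names are illustrative):

```shell
# Build a tarball whose entries all live under a top-level "project/" directory
mkdir -p project target
echo hello > project/readme.txt
tar cf archive.tar project

# Extract into target/, dropping the leading "project/" component
tar xf archive.tar -C target --strip-components 1
cat target/readme.txt    # the file ends up directly inside target/
```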
You can either use the default nginx config (/etc/nginx/nginx.conf) or the config of a single site (/etc/nginx/sites-enabled/example.com). I prefer the latter.
Add the following lines to the server block:
# Custom error pages
error_page 401 /error/401/index.html;
error_page 403 /error/403/index.html;
error_page 404 /error/404/index.html;
error_page 405 /error/405/index.html;
error_page 500 501 502 503 504 /error/5xx/index.html;
location ^~ /error/ {
internal;
}
You only need to add the definitions for the HTTP status codes you want to support. If you don’t need a custom error page for 401 errors, just leave it out from the config file and nginx will serve the default response.
In the example above, all error pages are served from the directory error within the document root of the current site. If you want to serve the files from a common directory for all configured sites, you need something like this:
location ^~ /error/ {
internal;
root /path/to/common/error-files;
}
In both cases you'll notice the line that says internal. This prevents visitors from fetching the error pages directly.
Save the files and reload the nginx config. Now your own custom error pages will be used!
My name is Marek and I'm a recreational phpMyAdmin user.
Although I’m not a big fan of phpMyAdmin - its imaginative user interface keeps surprising me - there is one thing for which I use it constantly: adding new users to MySQL and assigning the correct privileges.
I always found that the whole process of adding a new MySQL user from the command line was way too cumbersome. And since phpMyAdmin makes it quite easy I never made a real effort to fix this hole in my MySQL know-how. Up till now.
And it turns out the process is not as fuzzy as I remembered it to be.
From the command line, start the MySQL shell:
mysql -uroot -p
To create a new user myuser:
CREATE USER 'myuser'@'localhost' IDENTIFIED BY 'mypassword';
Note that the new user myuser can only access the database from the local machine (i.e. localhost). If you want to access the database from a remote server, you can specify the remote hostname or use a wildcard (%) to allow access from all remote locations.
Great! So we've added a user, but by default it has absolutely no permissions to do anything (except log in).
My view on permissions is that you should grant as few as possible.
In most cases I need a user that has read, write, update and delete permissions to all tables in a single database. This command will assign the relevant permissions:
GRANT SELECT, INSERT, UPDATE, DELETE ON `database`.* TO 'myuser'@'localhost';
If the user also needs to change the database structure (for instance to complete an installation process), then you need this command:
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX,
ALTER, CREATE TEMPORARY TABLES ON `database`.* TO 'myuser'@'localhost';
When you have set up the desired permissions, you can finalize them using:
FLUSH PRIVILEGES;
This way you’re sure all your users have the correct permissions.
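To double-check what a user actually ended up with, you can list its privileges with MySQL's SHOW GRANTS statement:

```sql
SHOW GRANTS FOR 'myuser'@'localhost';
```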
The following command gives the user full access to all databases and tables. In general, I would not recommend using this.
GRANT ALL PRIVILEGES ON *.* TO 'myuser'@'localhost';
To have a bit more restricted type of access you can use:
GRANT ALL PRIVILEGES ON mydatabase.* TO 'myuser'@'localhost';
The general form of the GRANT command is as follows:
GRANT [type-of-permission] ON [database-name].[table-name] TO '[username]'@'[hostname]';
As you can see, it is possible to set up a really detailed permission structure if needed.
Check out the entire list of MySQL privileges for more detailed information.
Just as you can give a user access, you can also revoke those permissions. This is the general form to do so:
REVOKE [type-of-permission] ON [database-name].[table-name] FROM '[username]'@'[hostname]';
Eventually the moment will come when you want to get rid of a user altogether. This is easily done with the DROP USER command:
DROP USER 'myuser'@'localhost';
One of the simplest servers in Node.js is the following HTTP server. It creates a local server listening on port 8080. Every request triggers a single (and highly unoriginal) response.
Start the server and test it with curl
:
$ node basic-http-server.js
$ curl http://127.0.0.1:8080
This is a slight variation on the previous example. Instead of returning the response immediately and ending the connection, it will first tell you to have a bit more patience. After a short wait, it will send you the rest of the content. Notice that in the meantime the connection is kept open.
Start the server and check the time per request with the Apache benchmarking tool:
$ node delayed-response.js
$ ab -n 100 -c 100 http://127.0.0.1:8080/
If you use curl to request the URL, you will see the first response immediately, followed by the rest after 2 seconds. In the browser you won't see anything until the entire response has been received. This is just how the different clients deal with the server's response.
In this example we do not need the http module; instead we use the lower-level net module. This server will wait for incoming data and return it in uppercase.
Start the server and talk to it. Since this is not an HTTP server, we use the nc command.
$ node echo-server.js
$ nc 127.0.0.1 8080
Start the server and connect to it from at least two terminal windows. Now you can talk to each other.
$ node chat-server.js
$ nc localhost 8080
Start the server and test it by requesting several URLs. If the requested file doesn’t exist, it will return a 404 status code and corresponding response message.
$ node static-file-server.js
$ curl http://127.0.0.1:8080/static-file-server.js
$ curl http://127.0.0.1:8080/not-found
Note: these are just simple examples; you probably shouldn't use them in real-world code. However, they are really useful for understanding the basics of Node.js and how servers and clients interact with each other.
To see which messages are currently in the queue:
mailq
This will result in something like this:
-Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient-------
BF74A87146 333 Tue Mar 10 08:30:45 root@example.com
(temporary failure)
test@example.com
-- 0 Kbytes in 1 Request.
In this example, a mail from root@example.com to test@example.com got temporarily stuck in the queue.
To flush the mail queue under postfix, you simply run this command:
postfix flush
This will process the queue, trying to deliver the remaining messages. If the message is not delivered but requeued instead, it is time to check the logs for any error messages.
If you just need to remove a single message, this is the command you need:
postsuper -d MAILID
where MAILID is the ID of the mail in the queue.
To clean up the queue completely, you can remove the messages using this command:
postsuper -d ALL
This is a script floating around the internet for who knows how long. It will delete only the messages that match the specified regular expression.
The following commands will delete any message that contains either example.com or root in the e-mail address:
./delete-from-mailqueue.pl example.com
./delete-from-mailqueue.pl root
'Cause I'm a sucker for neat and ordered lists, today's example is a collection of CSS cursors. With screenshots and live previews.
CloudFlare passes the visitor's original IP address along in the X-Forwarded-For header. Therefore it is possible to add the visitor's real IP again to your logs.
For nginx it is necessary to have the real-IP module (ngx_http_realip_module) installed. On Ubuntu this module is activated by default, so we can get started immediately.
Add the following lines to /etc/nginx/nginx.conf:
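The original snippet is missing from this extract; based on the realip module's directives it would be something along these lines (the include path matches the file created in the next step):

```nginx
# Trust CloudFlare's X-Forwarded-For header for the real visitor IP
real_ip_header X-Forwarded-For;

# The list of trusted CloudFlare addresses lives in a separate file
include /etc/nginx/cloudflare;
```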
Create a new file /etc/nginx/cloudflare and add these lines:
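This snippet is also missing here; the file consists of one set_real_ip_from directive per CloudFlare range, something like the following (illustrative entries only; take the current list from CloudFlare's published IP ranges):

```nginx
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 141.101.64.0/18;
set_real_ip_from 2400:cb00::/32;
# ...one line per published CloudFlare range
```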
This is the list of IP addresses currently used by CloudFlare.
Now you can reload nginx and the real IPs will show up in the logs again.
If you get an error like this one:
$ service nginx reload
Reloading nginx configuration: nginx: [emerg] "set_real_ip_from" supports IPv4 only in /etc/nginx/nginx.conf:44
nginx: configuration file /etc/nginx/nginx.conf test failed
it just means your build of nginx does not support IPv6. Remove the lines with IPv6 addresses from the CloudFlare config file above and reload nginx again.
Check also my post about setting up a cronjob to automatically update the CloudFlare IP addresses.
Bash has a useful feature that lets you make an alias for a command you often use. It is essentially a shortcut or abbreviation for a longer command sequence. For instance, if you often find yourself using the command ls -alF, you can create an alias for it.
Just add it to the file ~/.bashrc. Open it with your favorite editor and add the following line:
alias ll='ls -alF'
Reload .bashrc (or start a new terminal session):
source ~/.bashrc
Now you can use the command ll to display directory listings.
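A gotcha worth knowing: bash only expands aliases in interactive shells. To experiment with an alias inside a script, expansion has to be switched on explicitly:

```shell
#!/bin/bash
shopt -s expand_aliases   # scripts do not expand aliases by default
alias ll='ls -alF'

ll                        # now the alias also works inside the script
```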
WP-CLI
An alias I often use is for WP-CLI. You're not allowed to run wp as root, so it is better to run it as the user that also runs the website (for instance www-data). Add this line to /root/.bashrc to fix this:
alias wp="sudo -u www-data -- wp"
If you need to know what specific command is executed when you run an alias, you can use the type command:
$ type ls
ls is aliased to `ls --color=auto'
$ type wp
wp is aliased to `sudo -u www-data -- wp'
When you do not know exactly which commands are aliased, you can use the compgen command to list all available aliases:
$ compgen -a
egrep
fgrep
grep
l
la
ll
ls
"What about zsh?" you might ask. The answer is short: it is exactly the same, except for one thing. You just add the aliases to .zshrc instead of .bashrc.
Although POP3 is becoming an antiquated protocol for connecting to a mail server, it is quite useful for testing whether a mail account is set up properly.
Normally I just try to log in and stop there, but there are some more commands in your POP3 toolbox.
Command | Description
---|---
USER <username> | Log in to the account <username> (required).
PASS <password> | Log in with the password <password> (required, plaintext).
QUIT | Close the connection.
STAT | Shows the number of messages and the total size of the mailbox.
LIST [message] | Displays a list of messages with their numbers and respective sizes, or a single message number with its size.
UIDL [message] | Displays a list of messages with unique identifiers for each message.
RETR <message> | Displays the header and body content of the specified message.
TOP <message> <lines> | Displays the header of the specified message. If <lines> is set, it will also show this number of lines of the body content.
DELE <message> | Marks the specified message for deletion.
RSET | Clears any delete flags set by the DELE command.
NOOP | Does nothing; simply used to avoid server timeouts.
There are just two replies possible in POP3: +OK and -ERR, meaning, you guessed it, that your command was successful or that it failed. These replies may be followed by some more information showing the result of the command or the reason for the error.
We're going to use telnet to connect to the mail server.
$ telnet example.com 110
And we’re greeted by a welcome message:
Connected to example.com.
Escape character is '^]'.
+OK POP3 server ready.
The server is waiting for our first command, so let's try to log in. First we need to provide our username:
USER <username>
+OK
Followed by our password (Note: it will be shown as plaintext!):
PASS <password>
+OK Logged in.
Success, we’re in!
To display a list of the messages in our mailbox:
LIST
+OK 5 messages:
1 3375
2 3622
3 3606
4 3630
5 3834
This will tell you how many messages there are, listing on each line a message number and the message size in bytes (you gotta love those olden days when people were concerned about disk space).
The STAT command will also tell you the number of messages, plus the total size of the mailbox:
STAT
+OK 5 18067
You have two similar commands at your disposal for reading messages: TOP and RETR. Both commands show you the content of a message and therefore need a message number. But TOP also accepts a second parameter so you can specify the number of lines of body content to show (it will always show the headers). When no lines are specified, it is assumed to be 0.
TOP 1
+OK
Return-Path: <sender@example.com>
Delivered-To: info@example.com
Received: from localhost (localhost [127.0.0.1])
by example.com (Postfix) with ESMTP id 10315142CF8
for <info@example.com>; Mon, 23 Feb 2015 13:19:37 +0100 (CET)
Date: Mon, 23 Feb 2015 04:19:35 -0700
From: <sender@example.com>
To: info@example.com
Message-ID: <78AF5C73C263AF24BA943DEEBB142316@example.com>
Subject: Just a test message
To delete a message we can use the DELE command:
DELE 1
+OK Marked to be deleted.
If the message does not exist or has been deleted, you will see the following error messages:
DELE 1
-ERR Message is deleted.
DELE 9999
-ERR There's no message 9999.
The message is not deleted instantly; this only happens when you close the connection. To recover a deleted message before you log out, you can use RSET:
RSET
+OK
This will clear any delete flags that have been set.
To close the connection and actually delete any messages marked for deletion:
QUIT
+OK Logging out.
For instance, if you make a backup of a directory structure, you want an exact copy of the thing. Not an almost exact copy with some files missing, some files that turned corrupt, and some extra files you put there by mistake or forgot to remove.
I often receive updates to (premium) WordPress plugins. Since these plugins have no version control whatsoever I like to keep them under version control myself. Most of the time the changes concern existing or new files, but sometimes an existing file is no longer needed and keeping it in the repository is not a good idea. But how do you know which files are no longer needed?
There are several tools to compare the contents of two directories. The one I like best is diff. Normally it is used to compare the contents of files, but it is also possible to make a recursive comparison (-r). And by including the quiet flag (-q) you get only a list of the differences:
diff -rq original/ copy/
This will output something like this:
Files original/in-both.txt and copy/in-both.txt differ
Only in copy/: only-in-copy.txt
Only in original/: only-in-original.txt
As you can see, all differences between the directories original and copy are shown. In this case, both directories have a file called in-both.txt, but the content differs. Both original and copy also contain a file that is not present in the other directory (only-in-original.txt and only-in-copy.txt respectively).
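You can reproduce the example yourself with a couple of throwaway directories (names chosen to match the listing above):

```shell
# Two directories: one file in both (with different content),
# and one file unique to each side
mkdir -p original copy
echo one > original/in-both.txt
echo two > copy/in-both.txt
touch original/only-in-original.txt
touch copy/only-in-copy.txt

diff -rq original copy || true   # diff exits non-zero when differences are found
```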
Nowadays, with tools like Vagrant and Docker, it is a given you can control your virtual machines from the command line, but I did not realise until recently that it is also possible to do the same with VirtualBox.
VirtualBox has a super useful utility called VBoxManage for this.
To get a list of all available virtual machines on your system:
VBoxManage list vms
This will display a compact list with each VM’s name and UUID.
If you need a list of all running virtual machines, simply use:
VBoxManage list runningvms
Start a virtual machine without the graphical user interface (i.e. headless):
VBoxManage startvm <name-of-the-vm> --type headless
Now you can access the virtual machine with SSH. You can connect just like with any (remote) SSH server. Instead of the default port 22, you will need to specify the port that you configured for the VM. For example:
ssh -p 3022 <user>@127.0.0.1
If you forgot which port number you used, there's a command that will show you the network rules:
VBoxManage showvminfo <name-of-the-vm> | grep Rule
NIC 1 Rule(0): name = ssh, protocol = tcp, host ip = , host port = 3022, guest ip = , guest port = 22
When nothing is returned, you will need to add port forwarding. This can be done from the GUI (Network Settings > Port Forwarding), but you guessed it, there’s also a command line version to do this:
VBoxManage modifyvm <name-of-the-vm> --natpf1 "ssh,tcp,,3022,,22"
This will map port 3022 on the host to port 22 on the guest.
Every time I used sudo, I was greeted by the following warning:

no talloc stackframe at ../source3/param/loadparm.c:4864, leaking memory
After a bit of digging around it was clear this was related to the Samba PAM module.
There are two options to get rid of the message:

1. Remove the module:

apt-get remove libpam-smbpass

2. Change the PAM authentication settings. Run:

pam-auth-update

and disable the option "SMB password synchronization".
Just the other day I needed to quickly set up a temporary test server, but apparently I forgot something. On the command line I started seeing error messages telling me that Perl's locale was not set correctly:
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_CTYPE = "nl_NL.UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
This is easy to fix by using locale-gen to compile a list of locale definition files:
locale-gen en_US en_US.UTF-8 nl_NL nl_NL.UTF-8
Note: in this case I also included the custom locales for nl_NL.
And then to finish it off:
dpkg-reconfigure locales
mysqldump -uroot -p database_name > db-export.sql
mysql -u root -p database_name < db-export.sql
Occasionally I need to make some drastic changes to a database to fix some messed up stuff (yeah, I'm looking at you, WordPress). When you like to live dangerously, you make those changes in the live database, and thus a recent backup is essential in case you botch things up.
Enter the accurately named mysqldump:
mysqldump -uroot -p database_name > db-export-one.sql
Only the database with the name database_name will be included in the export file.
To export multiple databases at once:
mysqldump -uroot -p --databases database1 database2 > db-export-multiple.sql
You can specify as many databases as needed, just separate them with a space.
To export all databases:
mysqldump -uroot -p --all-databases > db-export-all.sql
If you need to export only a specific table:
mysqldump -uroot -p database_name table_name > db-table-export.sql
Just kidding. You never make changes to a live database, do you? No, instead you make a backup, import it to a development server and make your changes. Then you reverse the process, dump the changed database and import it to the live server.
To import a SQL file you can use the regular mysql command:
mysql -u root -p database_name < db-export.sql
Note: if you used mysqldump to export the file, it will contain DROP TABLE commands. Tables that are present both in database_name and in the SQL file will be removed and recreated.
Recently I had some trouble restarting php5-fpm on an Ubuntu machine. This is not something I need to do often; when php5-fpm is running you can pretty much leave it alone. But once in a while I need to add a new pool, and in that case a restart is required.
Normally you would restart the service with:
service php5-fpm restart
But somehow this did not kill the pools, and therefore nothing really happened. At first I assumed a simple killall php-fpm would do the trick, but alas: php-fpm: no process found. I had to manually kill all the processes the pools were using and then start the service again! With just a couple of pools that's not terrible, but when you have several dozens of active pools (with multiple processes per pool) this is no longer a feasible job.
Luckily, in my case all pools were running as the web user www-data. So by using some ps and awk it is possible to get a list of the process IDs and use it to kill the processes:
kill `ps aux|grep php-fpm | grep www-data | awk '{print $2}'`
And of course, adding a -9 would help if you had to force the kill.
When these processes are killed, it is possible to start the service again and the pools would become active again.
A simple little script to kill all PHP5-FPM pools and start fresh.
This script does exactly what I need when a new pool must be activated. Running it will first check whether php5-fpm can be restarted and, if not, kill all PHP processes and start the service. Super fast and easy.
When you copy a file on Linux, information about certain attributes is not copied along. For instance, the timestamps, ownership and mode (or file permissions) of the original file will be lost.
Instead, the current timestamp will be used as the timestamp, the current user (and group) will take ownership of the file and the file permissions will be the default permissions. (The same applies to directories by the way.)
If you need to preserve the original mode, timestamps or ownership, you can pass the cp command the option --preserve=[ATTR_LIST].
From the man page:
--preserve[=ATTR_LIST]
preserve the specified attributes (default: mode,ownership,time‐
stamps), if possible additional attributes: context, links,
xattr, all
After we copy original.txt to copy.txt without any preserve options, the attributes will look something like this:
$ cp original.txt copy.txt
$ ls -l
-rw-r--r-- 1 marek staff 9 Jan 7 22:30 copy.txt
-rwxrw-r-x 1 _www staff 9 Jan 7 22:27 original.txt
As you can see, the timestamps, ownership and permissions are different. The owner of the original file was _www; now it is the current user, which is marek. The timestamp on the original was 22:27; the copy has 22:30. And the (quite remarkable) permissions -rwxrw-r-x are now the default -rw-r--r-- (or 644) permissions.
To retain the original timestamps we only need to specify it like this:
$ cp --preserve=timestamps original.txt copy.txt
Which results in the same timestamps:
-rw-r--r-- 1 marek staff 9 Jan 7 22:27 copy.txt
-rwxrw-r-x 1 _www staff 9 Jan 7 22:27 original.txt
To retain the ownership the command is similar:
$ cp --preserve=ownership original.txt copy.txt
Now, we have:
-rw-r--r-- 1 _www staff 9 Jan 7 22:34 copy.txt
-rwxrw-r-x 1 _www staff 9 Jan 7 22:27 original.txt
As you probably have guessed, to keep the file permissions:
$ cp --preserve=mode original.txt copy.txt
And now both files have the permissions -rwxrw-r-x:
-rwxrw-r-x 1 marek staff 9 Jan 7 22:37 copy.txt
-rwxrw-r-x 1 _www staff 9 Jan 7 22:27 original.txt
Note that we can combine all these options to preserve the timestamps, ownership and permissions at once:
$ cp --preserve=timestamps,ownership,mode original.txt copy.txt
-rwxrw-r-x 1 _www staff 9 Jan 7 22:27 copy.txt
-rwxrw-r-x 1 _www staff 9 Jan 7 22:27 original.txt
If you followed along closely, you might have noticed in the man page a shorthand option -p that preserves the mode, ownership and timestamps all at once:
-p same as --preserve=mode,ownership,timestamps
So, instead of the command above, you could also do this:
$ cp -p original.txt copy.txt
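A quick way to verify the behaviour of -p yourself (in any writable directory):

```shell
# Give the original a timestamp in the past, then copy it with -p
touch -t 201501072227 original.txt
cp -p original.txt copy.txt
ls -l original.txt copy.txt   # both lines show the same (old) timestamp
```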