👉 This post was initially written in 2006 and refers to specific software versions. When tuning your system, always consider which versions you are running.
This is just some feedback about tuning a full LAMP server under real user traffic and service load. The important thing to notice is that nothing in this post is THE SOLUTION. You will probably have to tune a little more to adapt all of this to your own server usage, load, development and architecture. So use these tips as a kind of inspiration rather than as a "how to". And whenever you do this kind of tuning, take care to keep a backup of your previous configuration files.
We will try to tune the following server:
- Current OS: Debian GNU/Linux, kernel 2.4.32 (IPv4) + GRSEC
- 1 GB DDR RAM
- Intel(R) Celeron(R) CPU 2.66GHz
- 512 MB swap
- 3 GB on / and 226 GB on /home
- Running services: Qmail, Bind9, mrtg, Apache 2.2.2, PHP 5.1.4, MySQL 5.0.21
The best way to tune a server is to dedicate it to a single service, and so to run multiple servers, especially separate ones for MySQL and Apache.
We were running a heavy website built with [DotClear][1], plus the heavy [PhpADS][2] with all its extras (GeoIP, all the counters, etc.).
The server went up to a load of 114 at some peaks, with the swap totally used! And so… a big freeze of all services. The numbers: 70k mails/day, 110k page views/day, 12k visits/day, 47 SQL queries/sec.
In fact, the services weren't that loaded, but the box was crashing a lot and swapping often without using much CPU.
The first thing I did was to upgrade the Linux kernel from 2.4.32 to 2.6.18. A lot of things were improved in 2.6; I invite you to take a look at these posts:
- http://www-128.ibm.com/developerworks/linux/library/l-web26/
- http://developer.osdl.org/craiger/hackbench/
- http://kerneltrap.org/node/903
After this upgrade, I took the time to update all the software versions, moving to MySQL 5.0.27, PHP 5.2, etc. Even without looking at the changelogs, the bug fixes will still help us :-)
Next, we will tune our software configurations, which still use default values (this is really bad! :), and then we will tune the kernel a little without recompiling a new one.
Apache 2.2.2 Prefork
Our httpd uses a few modules such as URL rewriting, server info, PHP 5, GeoIP and other basic modules. We could optimize much more by using an Apache 2.2.3 worker MPM with only the useful modules, or even by serving static pages directly and proxying the dynamic ones. All of this depends on your code and your server usage; here we will only focus on the Apache prefork MPM.
Nowadays, it's important to keep the KeepAlive functionality active. It speeds up page delivery for most modern browsers (it's supported by IE, Firefox, Safari, Opera, etc.). The only thing to do is to adjust the default values a little. If your KeepAlive timeout is too big, you will keep an entire Apache slot open for a user who is probably gone! A 4 second timeout is enough to deliver a full web page and absorb some network congestion. MaxKeepAliveRequests defines the maximum number of requests handled by an Apache slot during one KeepAlive session; unless you have a lot of images to load on your pages, you don't really need a big value here.
KeepAlive On
KeepAliveTimeout 4
MaxKeepAliveRequests 500
As I don't have a lot of memory available on this server, I am constrained to drastically decrease the number of running servers, from 150 to 60. Each Apache process uses approximately 13 MB of memory (minus 3 MB of shared memory), so I need approximately 600 MB of available memory when all the Apache child processes are running. For all our further tuning we have to consider this memory as used: in our case it's really important to dedicate memory to Apache, to avoid swapping too much and losing the box in a freeze. You can follow your memory usage with top, looking at your apache/httpd processes (do a quick "man top" to learn more). If you have a bit more free memory, take a look at the [apache documentation][3] for further tuning.
ServerLimit 60
MaxClients 60
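The sizing above is simple arithmetic: divide the memory you can afford to dedicate to Apache by the unshared memory of one child. A small sketch using the figures from this post (your per-child numbers will differ; measure them with top):

```shell
# MaxClients estimate: reserved RAM divided by per-child unshared memory.
# Figures are the ones measured on this box; measure your own with top.
apache_budget_mb=600   # RAM we dedicate to Apache
child_rss_mb=13        # resident size of one child
shared_mb=3            # shared part, counted only once
max_clients=$(( apache_budget_mb / (child_rss_mb - shared_mb) ))
echo "MaxClients <= $max_clients"   # → MaxClients <= 60
```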
Our server is often overloaded, with a lot of traffic. When I need to restart Apache, or after any crash, the server starts with only 5 child processes and adds 1 new child one second later, 2 more the next second, 4 more the third second, etc. That's really too long when you are in a peak! So I configured StartServers to let us start directly with 30 child processes. That helps us serve clients quickly and minimizes the impact of a server restart.
MinSpareServers and MaxSpareServers are used in the same spirit as StartServers. When your Apache server isn't loaded, idle children wait for connections. It's not useful to keep all your children alive but, in case of a new peak, the best way to minimize its impact is to deliver web pages as quickly as possible, so keeping some idle child processes waiting for clients isn't so stupid. Moreover, on our touchy server we already decided to dedicate 600 MB of RAM to Apache, so we may as well use it, even for idle child processes.

To avoid any module memory leak, and to always have healthy children, I set MaxRequestsPerChild to 1000: every 1000 requests, a child is killed and the Apache server spawns a fresh one. You will probably have to set this to a higher value; it depends on the structure of your web pages. Monitor your server a little after these changes to be sure it doesn't spend more time killing/spawning children than delivering pages.
StartServers 30
MinSpareServers 30
MaxSpareServers 30
MaxRequestsPerChild 1000
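To see why StartServers matters, here is a rough model of the default spawn behavior: while Apache is short of spare children, it roughly doubles its spawn rate every second (this is a simplification of the real maintenance loop):

```shell
# Rough model: seconds for a freshly restarted prefork parent to grow from
# the default 5 children to 60, with the spawn rate doubling each second.
children=5; rate=1; seconds=0
while [ "$children" -lt 60 ]; do
  children=$(( children + rate ))
  rate=$(( rate * 2 ))        # Apache doubles the idle spawn rate each pass
  seconds=$(( seconds + 1 ))
done
echo "${seconds}s to reach 60 children"
```

Several seconds of reduced capacity is exactly what hurts during a peak; starting directly with 30 children skips most of that ramp.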
For some security reasons, we don't display too much information about our server. As we don't need reverse lookups on client IPs, we keep HostnameLookups at its default value of Off, and this way we save some network traffic and server load.
ServerTokens Prod
ServerSignature Off
HostnameLookups Off
PHP 5.1.4
To speed up page generation and save some CPU, we use the PHP extension [eaccelerator][4]. Take a look at its documentation to install it.
We dedicate 32 MB of our RAM to eAccelerator (shm_size) and use it with both shared memory and file cache (the "shm_and_disk" value for keys, sessions and content). (Memory is really useful in our case, because all the mail, Apache logs and MySQL disk accesses generate too much I/O and slow down the whole server considerably.) As we don't change the PHP scripts on the server very often, we don't need the check_mtime functionality: when set to "1", it does a stat() on each PHP script to check its last modification date. We don't need this because we want to save disk accesses and we rarely update the running scripts; we just have to clean the cache directory after an update.
eaccelerator.shm_size="32"
eaccelerator.cache_dir="/www/tmp/eaccelerator"
eaccelerator.enable="1"
eaccelerator.optimizer="1"
eaccelerator.check_mtime="0"
eaccelerator.debug="0"
eaccelerator.filter=""
eaccelerator.shm_max="0"
eaccelerator.shm_ttl="3600"
eaccelerator.shm_prune_period="1"
eaccelerator.shm_only="0"
eaccelerator.compress="1"
eaccelerator.compress_level="9"
eaccelerator.keys = "shm_and_disk"
eaccelerator.sessions = "shm_and_disk"
eaccelerator.content = "shm_and_disk"
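Since we run with check_mtime=0, eAccelerator will not notice updated scripts by itself; after a deployment the disk cache has to be cleaned by hand. A minimal sketch (in production the path would be the cache_dir configured above; the demo uses a temporary directory instead so it can run anywhere):

```shell
# Clean the eAccelerator disk cache after deploying new PHP scripts.
# In production the directory would be /www/tmp/eaccelerator (cache_dir above);
# a temporary directory stands in for it here.
CACHE_DIR=$(mktemp -d)
touch "$CACHE_DIR/eaccelerator-demo.php"   # pretend this is a cached script
find "$CACHE_DIR" -type f -delete          # drop every cached file
echo "files left: $(ls "$CACHE_DIR" | wc -l)"
```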
MySQL 5.0.24
As I don't control how many of the running scripts were coded, I decreased all the MySQL connection timeouts to avoid congestion. Then I increased the number of simultaneous MySQL connections, as we were getting a lot of "Too many connections" error messages.
wait_timeout=6
connect_timeout=5
interactive_timeout=120
max_connections = 500
max_user_connections = 500
Now we change the touchiest part of the MySQL configuration: the RAM usage. It's touchy because a bad value can really decrease your server performance and end in a big server swap. After some tests I decreased the table cache and the key buffer to 256 MB. In fact we don't have that much RAM available, as 600 MB are already reserved for our httpd and we have a lot of other services running. I tried to set it a little higher, hoping that the swap wouldn't be too big, but due to our I/O load, swapping was really not a good thing for MySQL :-)
If you are using MyISAM tables, I suggest setting "concurrent_insert=2"; it will really increase your server performance in many cases. MyISAM uses table locks; with concurrent insert, the engine will sometimes bypass the lock and allow INSERTs and SELECTs to run concurrently. We also disable every engine we don't use (InnoDB, BDB). Take a look at the [MySQL documentation][5] for better tuning.
join_buffer_size=1M
sort_buffer_size=1M
read_buffer_size=1M
read_rnd_buffer_size=1M
table_cache=256
max_allowed_packet=4M
key_buffer_size=256M
thread_concurrency=2
thread_cache_size=40
thread_stack=128K
concurrent_insert=2
query_cache_limit=1M
query_cache_size=256M
query_cache_type=1
skip-bdb
skip-innodb
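A useful sanity check on values like these is the classic worst-case estimate: global buffers plus max_connections times the per-thread buffers. It wildly overestimates real usage (every connection would have to allocate everything at once), but it shows why the per-thread buffers above stay at 1M when max_connections is 500:

```shell
# Worst-case MySQL memory estimate, in MB, from the values above.
key_buffer=256
query_cache=256
per_thread=4           # sort + read + read_rnd + join buffers, 1 MB each
max_connections=500
worst=$(( key_buffer + query_cache + per_thread * max_connections ))
echo "worst case: ${worst} MB"   # far above our 1 GB: per-thread sizes must stay small
```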
Linux Kernel 2.6.18
Here is a touchy part of our tuning: we will try to adapt the Linux kernel behavior to our server load, to save some memory and avoid too much swapping. Furthermore, as we did a good job above, we now have to handle more TCP connections and support the peaks correctly. We will use the "sysctl" command to update the values.
# display the value of a variable or group of variables
sysctl [-n] [-e] variable ...
# set a new value for the specified variable
sysctl [-n] [-e] [-q] -w variable=value ...
# display all the variables
sysctl [-n] [-e] -a
# load a sysctl config file
sysctl [-n] [-e] [-q] -p (default /etc/sysctl.conf)
For our tests we will create a test config file, "/etc/sysctl.conf.testing", and load it with the following command line:
sysctl -p /etc/sysctl.conf.testing
When you are happy with your changes, you can rename the file to "/etc/sysctl.conf". All the sysctl variables are documented with the kernel sources. I suggest you download the [documentation][6] corresponding to your kernel version and read it carefully if you decide to change some values.
A really good article on [Security Focus][7] gives us some keys to minimize the impact of a SYN attack / SYN spoofing. To that end we activate syncookies and route validation:
net.ipv4.conf.default.rp_filter=1
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_synack_retries=3
net.ipv4.tcp_syn_retries=3
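Lowering tcp_synack_retries shortens how long a half-open connection can occupy a slot in the backlog. The kernel retransmits with an exponential backoff (roughly 1s, 2s, 4s, …), so the total wait is easy to approximate (a simplified model, ignoring the final timeout after the last retry):

```shell
# Approximate time a half-open connection lingers, for N SYN-ACK retries
# with exponential backoff starting at 1 second.
retries=3
total=0; delay=1
for _ in $(seq 1 "$retries"); do
  total=$(( total + delay ))
  delay=$(( delay * 2 ))
done
echo "~${total}s with ${retries} retries"
```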
As we had some swap troubles, an important thing to do is to change the value of vm.swappiness, whose default value is 60. This variable controls how much the kernel favors swapping out applications; its value can go from 0 to 100. I set it to 10 to minimize swapping.
vm.swappiness=10
We raise the max backlogs to support more TCP traffic, and we change the congestion control algorithm to [BIC][8]. The Linux kernel supports a lot of congestion algorithms, like Reno, HTCP, Vegas, Westwood, etc.
net.core.netdev_max_backlog=2500 # Interface buffering
net.ipv4.tcp_max_syn_backlog=4096
net.core.somaxconn=1024 # Limit of socket listen() backlog. Default is 128.
net.ipv4.tcp_congestion_control=bic
To avoid a big TCP queue, and thus memory used by not-really-active connections, I decrease some TCP timeouts and force the kernel to recycle TCP connections quickly. We also don't cache the ssthresh (slow start threshold) value, to avoid a given host keeping a reduced ssthresh for all of its next connections.
net.ipv4.tcp_keepalive_time=900
net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_max_orphans=16384
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_rfc1337=1
net.ipv4.tcp_no_metrics_save=1
It is critical to use the optimal SEND and RECEIVE socket buffer sizes for the link you are using. In our case we have a 100 Mbit link, so for better TCP connection and congestion control we had to increase the TCP buffers. You can read more about this [here][9].
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
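The right ceiling for these buffers comes from the bandwidth-delay product: the amount of data that can be "in flight" on the link. A sketch for our 100 Mbit link, assuming a 100 ms round-trip time (the RTT is an assumption; measure yours with ping):

```shell
# Bandwidth-delay product: bytes in flight = bandwidth (bytes/s) * RTT (s).
bw_bits_per_s=100000000   # 100 Mbit/s link
rtt_ms=100                # assumed round-trip time
bdp_bytes=$(( bw_bits_per_s / 8 * rtt_ms / 1000 ))
echo "BDP: ${bdp_bytes} bytes"   # ~1.25 MB; the 16 MB max leaves headroom for higher RTTs
```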
That’s all folks !
Now this server supports twice the traffic load; the technical side is no longer our growth bottleneck. A lot of other tuning could still be done for better performance (on I/O and disk access, other kernel options, compiling a custom kernel, using the Apache worker MPM, etc.). This post was just a few clues about how to tune your servers. One important thing not to forget: whatever you tune on your server, it will never be enough if you run badly written programs on it!