MySQL issues

I am trying to track down MySQL stopping on a couple of our GPL servers (this does not happen on the Pro server). Occasionally MySQL just stops...

Aug  6 13:23:11 server kernel: [472199.586431] init: mysql main process (29643) terminated with status 1
Aug  6 13:23:11 server kernel: [472199.586448] init: mysql respawning too fast, stopped
Aug  6 13:39:14 server kernel: [473163.005919] audit: type=1400 audit(1438864754.929:71): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/sbin/mysqld" pid=30903 comm="apparmor_parser"

Looking into this, we have both an /etc/init.d/mysql and an /etc/init/mysql.conf, and it looks like they are both doing the same thing... Which is the correct one, and why do we have two start scripts?

Status: 
Active

Comments

Howdy -- it sounds like MySQL is crashing and restarting.

You may want to review the MySQL error logs in /var/log - do those show any clues as to why it's not working properly?
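If you're not sure where to look, these are the usual spots on Ubuntu 14.04 (paths assume the stock packages - on some setups mysqld logs to syslog instead):

grep -i mysql /var/log/syslog          # OOM kills and init messages land here
tail -n 100 /var/log/mysql/error.log   # mysqld's own error log

As for the two start scripts: on Ubuntu 14.04, MySQL is managed by Upstart, so /etc/init/mysql.conf is the one actually in use; /etc/init.d/mysql is only a SysV-compatibility script, and "service mysql ..." will route to the Upstart job. You can confirm with:

status mysql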

Aug  7 08:53:52 big kernel: [543560.622319] Out of memory: Kill process 28642 (mysqld) score 67 or sacrifice child                                                              
Aug  7 08:53:52 big kernel: [543560.623984] Killed process 28642 (mysqld) total-vm:1060200kB, anon-rss:138748kB, file-rss:0kB                                                   
Aug  7 08:53:52 big kernel: [543560.918505] init: mysql main process (28642) killed by KILL signal                                                                              
Aug  7 08:53:52 big kernel: [543560.918530] init: mysql main process ended, respawning                                                                                          
Aug  7 08:53:53 big kernel: [543561.852121] audit: type=1400 audit(1438934033.442:45): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/sbin/mysqld" pid=4950 comm="apparmor_parser"                                                                                                                                              
Aug  7 08:53:55 big kernel: [543563.736758] init: mysql main process (4964) terminated with status 1                                                                            
Aug  7 08:53:55 big kernel: [543563.736776] init: mysql main process ended, respawning                                                                                       
Aug  7 08:53:55 big kernel: [543564.159611] init: mysql post-start process (4965) terminated with status 1                                                                      
Aug  7 08:53:55 big kernel: [543564.169729] audit: type=1400 audit(1438934035.758:46): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/sbin/mysqld" pid=4996 comm="apparmor_parser"                                                                                                                                                
Aug  7 08:53:56 big kernel: [543564.608560] init: mysql main process (5008) terminated with status 1
Aug  7 08:53:56 big kernel: [543564.608579] init: mysql respawning too fast, stopped

Looks like it ran out of memory. This was set on the medium setting for MySQL; I have now moved it up to large. This is NOT the same server as the postfix issue server, and it does not exhibit the same postfix/webmin crash - just MySQL periodically stopping.

Ubuntu 14.x LTS + all updates + GPL only.

The server with the postfix error has been happy since I moved it up to a larger MySQL setup.

Actually, we are still losing MySQL - it crashes with an out-of-memory error:

Aug 7 15:18:13 server kernel: [86963.803543] Out of memory: Kill process 24183 (mysqld) score 43 or sacrifice child

So, perhaps some web db is getting spammed and causing too much load?

Just used the setup wizard to move to the 2 GB version of MySQL... if that does not hold it, then we have a memory leak or someone is using up all the available resources...
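To tell those two apart, I will keep an eye on mysqld's resident size through the day (a quick check - the interval is arbitrary). A steady climb with no matching traffic would suggest a leak; sudden jumps point at load:

watch -n 60 'ps -C mysqld -o pid,rss,vsz,cmd'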

MySQL uses the least amount of RAM after it's restarted. It can grow quite a bit after that.

The total amount it grows to depends on the settings in my.cnf.

Note though that you may need to go the other direction.

That is, the "large" setting causes MySQL to grow to fill more RAM than the medium settings.

If you're finding that MySQL is using too much RAM and is getting killed off, you may want to move it to the "small" settings, which cause it to use less RAM.
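For a rough idea of that ceiling, you can pull the main memory variables and do the same arithmetic mysqltuner does - global buffers plus per-thread buffers multiplied by max_connections (the variable list here is approximate, not exhaustive):

mysql -e "SHOW VARIABLES WHERE Variable_name IN ('key_buffer_size','innodb_buffer_pool_size','query_cache_size','sort_buffer_size','read_buffer_size','join_buffer_size','max_connections')"

Roughly: max RAM ≈ the global buffers + (sort + read + join buffers) x max_connections.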

It was:    open_files_limit | 1185
It is now: open_files_limit | 16384

my.cnf
[mysqld_safe]
open_files_limit = 16384
[mysqld]
open_files_limit = 16384

/etc/security/limits.conf
*     soft    nofile          40000
*     hard    nofile          40000
root  soft    nofile          40000
root  hard    nofile          40000
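Worth confirming after a change like this: limits.conf is applied by PAM to login sessions, not necessarily to daemons started by Upstart, so the running mysqld may not pick the new limit up until its job is restarted. To check (assuming a single mysqld process):

cat /proc/$(pidof mysqld)/limits | grep 'open files'
mysql -e "SHOW VARIABLES LIKE 'open_files_limit'"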

Might be a bit drastic, but after 24 hours their MySQL went west with lots of error 24s ("too many open files"). I will watch and see... doing this helped, but they have so many WordPress sites...

It shouldn't be a problem to increase the open files limit.

So you could always use a configuration for a smaller system, but increase the open files limit. I set those to 16000+ on all of my servers, as I seem to run into that issue quite a bit.

OK, once it has settled down I will reduce the Virtualmin MySQL setup and also bring the open files limit to 16000.

Is that 16000 for the system and MySQL?

I can easily do that... I have just checked: they have 51 WordPress sites, some with more than one DB...

No, I just meant that if MySQL's limit was less than 16000, I'd raise it up to at least that much.

I have open files at 16384, but both the GPL servers still crash. They are now on the small and medium setups from within Virtualmin, and I have been watching closely. The PRO machine does not have any issues at all; it is on the same base server with the same setup and same OS, and probably has fewer DBs than either of the GPL versions. The client is now getting a bit grumpy... On the busier server I installed mysqltuner just to see what it outlines...

it said...

-------- Performance Metrics -------------------------------------------------
[--] Up for: 4h 17m 26s (933K q [60.463 qps], 23K conn, TX: 8B, RX: 110M)
[--] Reads / Writes: 99% / 1%
[--] Total buffers: 416.0M global + 1.6M per thread (151 max threads)
[OK] Maximum possible memory usage: 661.4M (16% of installed RAM)
[OK] Slow queries: 0% (0/933K)
[OK] Highest usage of available connections: 5% (8/151)
[OK] Key buffer size / total MyISAM indexes: 16.0M/25.7M
[OK] Key buffer hit rate: 99.7% (9M cached / 25K reads)
[!!] Query cache is disabled
[OK] Sorts requiring temporary tables: 0% (14 temp sorts / 165K sorts)
[!!] Joins performed without indexes: 47
[!!] Temporary tables created on disk: 47% (25K on disk / 54K total)
[!!] Thread cache is disabled
[!!] Table cache hit rate: 0% (22 open / 10K opened)
[OK] Open file limit used: 0% (43/32K)
[OK] Table locks acquired immediately: 99% (923K immediate / 923K locks)
[OK] InnoDB data size / buffer pool: 36.2M/128.0M

-------- Recommendations -----------------------------------------------------
General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    MySQL started within last 24 hours - recommendations may be inaccurate
    Adjust your join queries to always utilize indexes
    Temporary table size is already large - reduce result set size
    Reduce your SELECT DISTINCT queries without LIMIT clauses
    Set thread_cache_size to 4 as a starting value
    Increase table_cache gradually to avoid file descriptor limits
Variables to adjust:
    query_cache_size (>= 8M)
    join_buffer_size (> 128.0K, or always use indexes with joins)
    thread_cache_size (start at 4)
    table_cache (> 64)

Not that I believe everything it is saying, but it is good to get a second opinion. ALL the DBs on here are in WP, so there is not much that can be done to the actual DBs to tidy up. I COULD adjust the variables it suggests (sketched below), but I am not sure that is what is causing it to crash yet, so any ideas would be welcome.
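For reference, its suggestions would translate to something like this in my.cnf (these are the tuner's starting values, untested here; note that on MySQL 5.5 the "table_cache" it mentions is called table_open_cache):

[mysqld]
thread_cache_size = 4
table_open_cache  = 512
query_cache_type  = 1
query_cache_size  = 16M
join_buffer_size  = 256K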

Just to verify -- what are you seeing in the logs now when this happens?

Also, what are the current contents of your /etc/security/limits.conf file?

/etc/security/limits.conf

*      soft    nofile          40000
*      hard    nofile          40000
root   soft    nofile          40000
root   hard    nofile          40000
root@server:/var/log/mysql# ulimit -n
40000

Last night's two shutdowns... it stayed down till about 8 am when one of us noticed. I think she pressed the restart button in Webmin; I would have used the shell: "service mysql start".

150814  2:41:22 [Warning] Checking table:   './zara/wp_options'
150814  2:58:11 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
150814  2:58:11 [Note] Plugin 'FEDERATED' is disabled.
150814  2:58:11 InnoDB: The InnoDB memory heap is disabled
150814  2:58:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins
150814  2:58:11 InnoDB: Compressed tables use zlib 1.2.8
150814  2:58:11 InnoDB: Using Linux native AIO
150814  2:58:11 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137363456 bytes) failed; errno 12
150814  2:58:11 InnoDB: Completed initialization of buffer pool
150814  2:58:11 InnoDB: Fatal error: cannot allocate memory for the buffer pool
150814  2:58:11 [ERROR] Plugin 'InnoDB' init function returned error.
150814  2:58:11 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
150814  2:58:11 [ERROR] Unknown/unsupported storage engine: InnoDB
150814  2:58:11 [ERROR] Aborting

150814  2:58:11 [Note] /usr/sbin/mysqld: Shutdown complete

150814  2:58:12 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
150814  2:58:12 [Note] Plugin 'FEDERATED' is disabled.
150814  2:58:12 InnoDB: The InnoDB memory heap is disabled
150814  2:58:12 InnoDB: Mutexes and rw_locks use GCC atomic builtins
150814  2:58:12 InnoDB: Compressed tables use zlib 1.2.8
150814  2:58:12 InnoDB: Using Linux native AIO
150814  2:58:12 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137363456 bytes) failed; errno 12
150814  2:58:12 InnoDB: Completed initialization of buffer pool
150814  2:58:12 InnoDB: Fatal error: cannot allocate memory for the buffer pool
150814  2:58:12 [ERROR] Plugin 'InnoDB' init function returned error.
150814  2:58:12 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
150814  2:58:12 [ERROR] Unknown/unsupported storage engine: InnoDB
150814  2:58:12 [ERROR] Aborting

150814  2:58:12 [Note] /usr/sbin/mysqld: Shutdown complete

Okay, it looks like you're still seeing memory problems - that "mmap ... failed; errno 12" is ENOMEM, meaning mysqld couldn't even allocate its 128 MB InnoDB buffer pool when it tried to respawn.

Just to verify, what is the output of this command on that server:

free -m

Last night it first went down just after midnight; it recovered, then went down twice more at about 02:00, and then did not try again.

             total       used       free     shared    buffers     cached
Mem:          3952       3661        290       1998         38       2149
-/+ buffers/cache:       1473       2478

MySQL doesn't typically spike in memory usage, though other things on your server may.

My suspicion is that something is causing a spike in memory usage, and when your system is low on memory, the kernel is killing processes such as MySQL looking to free up memory.

It's hard to say what's causing the problem. You may want to check and see if there's a traffic spike at the time.

If there's an increase in traffic, the memory usage of Apache and PHP could cause the server to run out of RAM.

It also could be related to a cron job or other process running.
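One way to catch it in the act (a simple sketch - the log path and interval are arbitrary): log the top memory consumers every minute from a file in /etc/cron.d, then look at the snapshots from just before the next OOM kill:

* * * * * root (date; ps aux --sort=-rss | head -15) >> /var/log/mem-snapshots.log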

Still looking - here is the output of the ps_mem python script (http://www.pixelbeat.org/scripts/ps_mem.py):

Private  +   Shared  =  RAM used       Program

144.0 KiB +  10.0 KiB = 154.0 KiB       acpid
172.0 KiB +  25.0 KiB = 197.0 KiB       atd
300.0 KiB +  20.0 KiB = 320.0 KiB       upstart-socket-bridge
304.0 KiB +  58.5 KiB = 362.5 KiB       cron
292.0 KiB +  87.0 KiB = 379.0 KiB       pickup
332.0 KiB +  86.5 KiB = 418.5 KiB       irqbalance
352.0 KiB +  77.0 KiB = 429.0 KiB       master
408.0 KiB +  44.0 KiB = 452.0 KiB       dbus-daemon
324.0 KiB + 138.0 KiB = 462.0 KiB       log
428.0 KiB +  47.0 KiB = 475.0 KiB       systemd-logind
464.0 KiB +  33.0 KiB = 497.0 KiB       rpcbind
364.0 KiB + 133.5 KiB = 497.5 KiB       dovecot
500.0 KiB +  47.0 KiB = 547.0 KiB       tail (4)
520.0 KiB +  32.5 KiB = 552.5 KiB       mountall
608.0 KiB +  24.5 KiB = 632.5 KiB       upstart-udev-bridge
588.0 KiB +  51.0 KiB = 639.0 KiB       systemd-udevd
492.0 KiB + 157.0 KiB = 649.0 KiB       qmgr
656.0 KiB +  67.0 KiB = 723.0 KiB       monit
640.0 KiB + 101.0 KiB = 741.0 KiB       tlsmgr
676.0 KiB +  72.5 KiB = 748.5 KiB       sshd
576.0 KiB + 286.0 KiB = 862.0 KiB       flush
576.0 KiB + 311.0 KiB = 887.0 KiB       anvil (2)
980.0 KiB +  20.5 KiB =   1.0 MiB       upstart-file-bridge
928.0 KiB + 120.0 KiB =   1.0 MiB       getty (6)
  1.3 MiB + 138.5 KiB =   1.4 MiB       config
  1.4 MiB +  45.0 KiB =   1.5 MiB       freshclam
  1.4 MiB +  41.0 KiB =   1.5 MiB       init
  1.9 MiB + 148.5 KiB =   2.0 MiB       vmtoolsd
  2.2 MiB + 121.0 KiB =   2.3 MiB       proftpd
  1.5 MiB + 905.0 KiB =   2.3 MiB       saslauthd (5)
  1.3 MiB +   1.2 MiB =   2.5 MiB       sshd [updated] (2)
  2.5 MiB +  82.0 KiB =   2.6 MiB       imap-login (4)
  2.5 MiB + 348.0 KiB =   2.9 MiB       su (4)
  3.1 MiB + 352.0 KiB =   3.5 MiB       sudo (4)
  4.0 MiB + 102.5 KiB =   4.1 MiB       imap (4)
  7.2 MiB + 115.0 KiB =   7.4 MiB       tmux (2)
  7.7 MiB +  55.0 KiB =   7.8 MiB       mailmanctl
11.4 MiB +  65.0 KiB =  11.5 MiB       rsyslogd
11.4 MiB + 160.0 KiB =  11.6 MiB       multitail (2)
15.9 MiB + 932.5 KiB =  16.8 MiB       fail2ban-server
17.3 MiB + 118.5 KiB =  17.4 MiB       ncpa_posix_passive
17.3 MiB +   2.8 MiB =  20.1 MiB       bash (11)
20.3 MiB + 242.0 KiB =  20.6 MiB       ncpa_posix_listener
19.6 MiB +  14.6 MiB =  34.1 MiB       apache2 (12)
34.3 MiB +   1.1 MiB =  35.4 MiB       miniserv.pl (2)
61.8 MiB + 810.5 KiB =  62.6 MiB       python2.7 (8)
69.5 MiB + 295.5 KiB =  69.8 MiB       named
103.2 MiB + 167.5 KiB = 103.3 MiB       mysqld
  2.9 GiB +  15.7 MiB =   3.0 GiB       php5-cgi (70)
---------------------------------
                          3.4 GiB
=================================

Here I can see that, though Apache and MySQL are high memory users, it is php5-cgi that is really eating all the resources. I am going to set MaxConnectionsPerChild to 500; it is set to 0 in mpm_prefork.conf.

I will watch that for a while.

Does this seem sensible? Should I set it higher, or am I barking up another wrong tree?
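For reference, the change looks like this in /etc/apache2/mods-available/mpm_prefork.conf (500 is my new value; the other values are, I believe, the stock 14.04 defaults):

<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    MaxRequestWorkers      150
    MaxConnectionsPerChild 500
</IfModule>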

Would it make any difference if we move to event (the default MPM for Apache 2.4) rather than prefork (the default MPM for Apache 2.2, 2.0 and 1.3)?
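If we did, then since PHP here runs through fcgid rather than mod_php, the switch itself should just be the following - though I have not tried it on this box, and it only works as long as mod_php stays disabled:

sudo a2dismod mpm_prefork
sudo a2enmod mpm_event
sudo service apache2 restart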

I have also made some changes in fcgid.conf which have really made a difference - I will start reversing the other changes I made to limits.conf etc. if this keeps the memory footprint down.

<IfModule mod_fcgid.c>
  # give up if a FastCGI process doesn't accept a connection within 20s
  FcgidConnectTimeout 20
  # recycle each PHP process after 512 requests, so leaks can't grow forever
  MaxRequestsPerProcess 512
  # cap the total number of FastCGI processes
  MaxProcessCount 30
  # allow up to 240s for a script to respond before the IPC times out
  IPCCommTimeout        240
  # kill application processes that have sat idle for 240s
  IdleTimeout           240
  <IfModule mod_mime.c>
    AddHandler fcgid-script .fcgi
  </IfModule>
</IfModule>

If php-cgi is using a lot of memory, you may want to disable any PHP modules not being used. That will make a big difference as to how much RAM is being used.

Also, setting MaxConnectionsPerChild could help prevent the Apache processes from growing to use too much RAM - that's not a bad idea.

You may also want to disable Apache modules you aren't using.
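To see what's actually loaded, the stock Ubuntu helpers are (substitute a real module name for <module>):

php5 -m                    # PHP modules currently enabled
sudo php5dismod <module>   # disable an unused PHP module
apache2ctl -M              # Apache modules currently loaded
sudo a2dismod <module>     # disable an unused Apache module, then restart apache2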

Another thing that may help is to lower the number of connections Apache will accept at a time. To do that, you may want to set "MaxClients" in the Apache config to a lower value; it's possible the current value is set too high, allowing too many connections to occur at once.
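As a rough rule of thumb for picking that value (approximate, using your ps_mem numbers above): 2.9 GiB across 70 php5-cgi processes is about 42 MiB each, so the MaxProcessCount 30 in your fcgid.conf already caps PHP at roughly 30 x 42 MiB ≈ 1.3 GiB. MaxClients then mainly needs to cover Apache's own ~3 MiB prefork children plus headroom for MySQL, keeping the combined total comfortably under the 4 GB of installed RAM.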