
ClientException:HTTP 500 和 OperationalError: (OperationalError) (1040, 'Too many connections')

1 Problem

The following error appears when running a nova command:

[email protected]:~# nova service-list
ERROR (ClientException): The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-4e4fb330-2798-414a-b86e-e33835014ee7)
[email protected]:~# 
[email protected]:~# tailf /var/log/nova/nova-api.log
2018-03-21 13:44:31.105 5447 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 526, in get_connection
2018-03-21 13:44:31.105 5447 TRACE nova.api.openstack     self.connection = self.__connect()
2018-03-21 13:44:31.105 5447 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 538, in __connect
2018-03-21 13:44:31.105 5447 TRACE nova.api.openstack     connection = self.__pool._creator()
2018-03-21 13:44:31.105 5447 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/compat/handle_error.py", line 200, in connect
2018-03-21 13:44:31.105 5447 TRACE nova.api.openstack     raise original_exception
2018-03-21 13:44:31.105 5447 TRACE nova.api.openstack OperationalError: (OperationalError) (1040, 'Too many connections') None None
2018-03-21 13:44:31.105 5447 TRACE nova.api.openstack 
2018-03-21 13:44:31.171 5447 INFO nova.api.openstack [req-4e4fb330-2798-414a-b86e-e33835014ee7 54613e7ec86a4eea885f5efeed5de107 a18ac5cb662d404ca0611b9e3768f9b7 - - -] http://192.168.4.15:8774/v2/a18ac5cb662d404ca0611b9e3768f9b7/os-services returned with HTTP 500
2018-03-21 13:44:31.173 5447 INFO nova.osapi_compute.wsgi.server [req-4e4fb330-2798-414a-b86e-e33835014ee7 54613e7ec86a4eea885f5efeed5de107 a18ac5cb662d404ca0611b9e3768f9b7 - - -] 192.168.4.15 "GET /v2/a18ac5cb662d404ca0611b9e3768f9b7/os-services HTTP/1.1" status: 500 len: 359 time: 0.1559889

2 Solution

There are two possible causes for this error. One is that the load really is high and the MySQL server cannot keep up, in which case you should consider adding replica servers to spread the read load. The other is that the max_connections value in the MySQL configuration file is too small.
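To tell which of the two cases applies, it helps to look at how many connections are actually open. A minimal check, assuming a working MariaDB/MySQL client session (the statements below are standard and not specific to this environment):

SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW PROCESSLIST;

If Threads_connected stays close to max_connections and most entries in the process list belong to the OpenStack services, the configured limit is simply too low for the deployment rather than the load being abnormal.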

Database information in the OpenStack environment:

MariaDB [(none)]> show variables like 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 100   |
+-----------------+-------+
1 row in set (0.00 sec)

MariaDB [(none)]> show global status like 'max_used_connections';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| Max_used_connections | 101   |
+----------------------+-------+
1 row in set (0.00 sec)

Cause analysis

MySQL's default connection limit is only 100, which means that once the number of connections exceeds 100, the Too many connections error can appear.
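If a client session can still be opened, the limit can also be raised at runtime as a stopgap. This is only a sketch (the value 500 is an arbitrary example), and the change does not survive a restart, so the configuration file still has to be updated:

SET GLOBAL max_connections = 500;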
 
Add the following to the my.cnf configuration file (a restart is required afterwards):

[mysqld]

wait_timeout = 600
interactive_timeout = 600
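wait_timeout and interactive_timeout control how long the server keeps an idle non-interactive or interactive connection open before closing it; lowering them to 600 seconds frees slots held by idle clients sooner. To check the values actually in effect (standard statements, nothing environment-specific):

SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'interactive_timeout';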

Query MySQL's maximum allowed number of connections:


    mysql> show variables like 'max_connections';
    +-----------------+-------+
    | Variable_name   | Value |
    +-----------------+-------+
    | max_connections | 100   |
    +-----------------+-------+
    1 row in set (0.00 sec)

Query the maximum number of connections MySQL has actually served:


    mysql> show global status like 'max_used_connections';
    +----------------------+-------+
    | Variable_name        | Value |
    +----------------------+-------+
    | Max_used_connections | 5     |
    +----------------------+-------+
    1 row in set (0.00 sec)

Note: this local environment is not of much reference value, but judging from the figures above, the maximum number of connections MySQL has ever served is lower than the maximum it allows, so the 1040 error would not occur here.
A reasonable rule of thumb for sizing MySQL's maximum connection limit is:


    max_used_connections / max_connections * 100% ≈ 85%

That is, the peak connection count should be around 85% of the configured limit. If the ratio is below 10%, the MySQL connection limit is set too high.
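The ratio can be read straight from the server. The query below is only a sketch: it assumes a MariaDB (or pre-5.7 MySQL) server, where the status counters are exposed through information_schema.GLOBAL_STATUS; on MySQL 5.7 and later the equivalent table is performance_schema.global_status.

SELECT
  @@GLOBAL.max_connections AS max_connections,
  VARIABLE_VALUE AS max_used_connections,
  ROUND(VARIABLE_VALUE / @@GLOBAL.max_connections * 100, 1) AS pct_of_limit
FROM information_schema.GLOBAL_STATUS
WHERE VARIABLE_NAME = 'Max_used_connections';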


Steps used to fix the problem:

1. mysql -u root -p could not log in either; it reported the same error.

2. Edit /etc/mysql/my.cnf (this is an Ubuntu system; on other systems the file is /etc/my.cnf):


[mysqld]
port = 3306
#socket=MySQL
skip-external-locking
key_buffer_size = 16K
max_allowed_packet = 1M
thread_stack = 64K
table_open_cache = 4
sort_buffer_size = 64K
net_buffer_length = 2K
# raise the connection limit (the setting that matters for error 1040)
max_connections = 1000

3. Restart MySQL:


/etc/init.d/mysql restart

Done.

3 Other notes

The database in this OpenStack environment is a two-node Galera Cluster, so the MySQL configuration file must be modified on every node, and the cluster has to be restarted after the change.

To start the cluster, complete the following steps:

  1. Initialize the Primary Component on one cluster node. For servers that use init, run the following command:

    # service mysql start --wsrep-new-cluster
    

    For servers that use systemd, run the following command:

    # systemctl start mariadb --wsrep-new-cluster
    
  2. Once the database server starts, check the cluster status using the wsrep_cluster_size status variable. From the database client, run the following command:

    SHOW STATUS LIKE 'wsrep_cluster_size';
    
    +--------------------+-------+
    | Variable_name      | Value |
    +--------------------+-------+
    | wsrep_cluster_size | 1     |
    +--------------------+-------+
    
  3. Start the database server on all other cluster nodes. For servers that use init, run the following command:

    # service mysql start
    

    For servers that use systemd, run the following command:

    # systemctl start mariadb
    
  4. When you have all cluster nodes started, log into the database client of any cluster node and check the wsrep_cluster_size status variable again:

    SHOW STATUS LIKE 'wsrep_cluster_size';
    
    +--------------------+-------+
    | Variable_name      | Value |
    +--------------------+-------+
    | wsrep_cluster_size | 3     |
    +--------------------+-------+
    

When each cluster node starts, it checks the IP addresses given to the wsrep_cluster_address parameter. It then attempts to establish network connectivity with a database server running there. Once it establishes a connection, it attempts to join the Primary Component, requesting a state transfer as needed to bring itself into sync with the cluster.
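A quick way to confirm that a node has actually rejoined after a restart is to check its wsrep variables from the database client (these are standard Galera status variables, not specific to this deployment):

SHOW VARIABLES LIKE 'wsrep_cluster_address';
SHOW STATUS LIKE 'wsrep_cluster_status';
SHOW STATUS LIKE 'wsrep_ready';

On a healthy joined node, wsrep_cluster_status reports Primary and wsrep_ready reports ON.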

Note

In the event that you need to restart any cluster node, you can do so. When the database server comes back up, it establishes connectivity with the Primary Component and catches up on any changes it may have missed while down.

Restarting the cluster

Individual cluster nodes can stop and be restarted without issue. When a database loses its connection or restarts, the Galera Cluster brings it back into sync once it reestablishes connection with the Primary Component. In the event that you need to restart the entire cluster, identify the most advanced cluster node and initialize the Primary Component on that node.

To find the most advanced cluster node, you need to check the sequence numbers, or seqnos, of the last committed transaction on each node. You can find these by viewing the grastate.dat file in the database directory:

$ cat /path/to/datadir/grastate.dat

# Galera saved state
version: 3.8
uuid:    5ee99582-bb8d-11e2-b8e3-23de375c1d30
seqno:   8204503945773

Alternatively, if the database server is running, use the wsrep_last_committed status variable:

SHOW STATUS LIKE 'wsrep_last_committed';

+----------------------+--------+
| Variable_name        | Value  |
+----------------------+--------+
| wsrep_last_committed | 409745 |
+----------------------+--------+

This value increments with each transaction, so the most advanced node has the highest sequence number and therefore is the most up to date.
