What are the connection limits for Google Cloud SQL from App Engine, and how to best reuse DB connections?


I have a Google App Engine app that uses a Google Cloud SQL instance for storing data. I need my instance to be able to serve hundreds of clients at a time, via restful calls, which each result in one or a handful of DB queries. I've wrapped the methods that need DB access and store the handle to the DB connection in os.environ. See this SO question/answer for basically how I'm doing it.

However, as soon as a couple hundred clients connect to my app and trigger database calls, I start getting these errors in the Google App Engine error logs (and my app returns 500, of course):

could not connect: ApplicationError: 1033 Instance has too many concurrent requests: 100 Traceback (most recent call last): File "/base/python27_run

Any tips from experienced users of Google App Engine and Google Cloud SQL? Thanks in advance.

Here's the code for the decorator I use around methods that require DB connection:

def with_db_cursor(do_commit = False):
    """ Decorator for managing DB connection by wrapping around web calls.
    Stores connections and open connection count in the os.environ dictionary
    between calls.  Sets a cursor variable in the wrapped function. Optionally
    does a commit.  Closes the cursor when wrapped method returns, and closes
    the DB connection if there are no outstanding cursors.

    If the wrapped method has a keyword argument 'existing_cursor', whose value
    is non-False, this wrapper is bypassed, as it is assumed another cursor is
    already in force because of an alternate call stack.

    Based mostly on post by : Shay Erlichmen
    At: https://stackoverflow.com/a/10162674/379037
    """

    def method_wrap(method):
        def wrap(*args, **kwargs):
            if kwargs.get('existing_cursor', False):
                #Bypass everything if method called with existing open cursor
                vdbg('Shortcircuiting db wrapper due to existing_cursor')
                return method(None, *args, **kwargs)

            conn = os.environ.get("__data_conn")

            # Recycling connection for the current request
            # For some reason threading.local() didn't work
            # and yes os.environ is supposed to be thread safe 
            if not conn:                    
                conn = _db_connect()
                os.environ["__data_conn"] = conn
                os.environ["__data_conn_ref"] = 1
                dbg('Opening first DB connection via wrapper.')
            else:
                os.environ["__data_conn_ref"] = (os.environ["__data_conn_ref"] + 1)
                vdbg('Reusing existing DB connection. Count using is now: {0}',
                    os.environ["__data_conn_ref"])        
            try:
                cursor = conn.cursor()
                try:
                    result = method(cursor, *args, **kwargs)
                    if do_commit or os.environ.get("__data_conn_commit"):
                        os.environ["__data_conn_commit"] = False
                        dbg('Wrapper executing DB commit.')
                        conn.commit()
                    return result                        
                finally:
                    cursor.close()                    
            finally:
                os.environ["__data_conn_ref"] = (os.environ["__data_conn_ref"] -
                        1)  
                vdbg('One less user of DB connection. Count using is now: {0}',
                    os.environ["__data_conn_ref"])
                if os.environ["__data_conn_ref"] == 0:
                    dbg("No more users of this DB connection. Closing.")
                    os.environ["__data_conn"] = None
                    db_close(conn)
        return wrap
    return method_wrap

def db_close(db_conn):
    if db_conn:
        try:
            db_conn.close()
        except:
            err('Unable to close the DB connection.')
            raise
    else:
        err('Tried to close a non-connected DB handle.')

Solution

Short answer: Your queries are probably too slow and the mysql server doesn't have enough threads to process all of the requests you are trying to send it.

Long Answer:

As background, Cloud SQL has two limits that are relevant here:

  • Connections: These correspond to the 'conn' object in your code. There is a corresponding datastructure on the server. Once you have too many of these objects (currently configured to 1000), the least recently used will automatically be closed. When a connection gets closed underneath you, you'll get an unknown connection error (ApplicationError: 1007) the next time you try to use that connection.
  • Concurrent Requests: These are queries that are executing on the server. Each executing query ties up a thread in the server, so there is a limit of 100. When there are too many concurrent requests, subsequent requests will be rejected with the error you are getting (ApplicationError: 1033)
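One way to cope with the LRU eviction described in the first bullet is to retry a query once when the "unknown connection" error surfaces. This is a hedged sketch, not code from the answer: `get_conn`, `reconnect`, and `is_stale_connection_error` are injected stand-ins for whatever your driver and error-inspection helpers actually provide.

```python
def run_query(sql, params, get_conn, reconnect, is_stale_connection_error):
    """Run a query; retry once if the cached connection was closed underneath us."""
    conn = get_conn()  # may return a cached connection that has been evicted
    try:
        cursor = conn.cursor()
        try:
            cursor.execute(sql, params)
            return cursor.fetchall()
        finally:
            cursor.close()
    except Exception as exc:
        if not is_stale_connection_error(exc):
            raise
        # Server dropped the connection (LRU eviction past the 1000 limit,
        # i.e. ApplicationError: 1007); open a fresh one and retry once.
        conn = reconnect()
        cursor = conn.cursor()
        try:
            cursor.execute(sql, params)
            return cursor.fetchall()
        finally:
            cursor.close()
```

Retrying only once, and only on this specific error, avoids piling extra load onto an instance that is already at its concurrent-request limit.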

It doesn't sound like the connection limit is affecting you, but I wanted to mention it just in case.

When it comes to Concurrent Requests, increasing the limit might help, but it usually makes the problem worse. There are two cases we've seen in the past:

  • Deadlock: A long running query is locking a critical row of the database. All subsequent queries block on that lock. The app times out on those queries, but they keep running on the server, tying up those threads until the deadlock timeout triggers.
  • Slow Queries: Each query is really, really slow. This usually happens when the query requires a temporary file sort. The application times out and retries the query while the first try of the query is still running and counting against the concurrent request limit. If you can find your average query time, you can get an estimate of how many QPS your mysql instance can support (e.g. 5 ms per query means 200 QPS for each thread. Since there are 100 threads, you could do 20,000 QPS. 50 ms per query means 2000 QPS.)
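The capacity arithmetic in the second bullet can be written out directly: with 100 server threads, each handling `1000 / avg_query_ms` queries per second, the instance's rough ceiling is:

```python
def estimated_max_qps(avg_query_ms, threads=100):
    """Rough upper bound on QPS a Cloud SQL instance can absorb,
    given the average query time in milliseconds and the thread limit."""
    return threads * 1000.0 / avg_query_ms

print(estimated_max_qps(5))   # 5 ms/query  -> 20000.0 QPS
print(estimated_max_qps(50))  # 50 ms/query -> 2000.0 QPS
```

This matches the figures above and gives a quick way to check whether your observed traffic is simply beyond what your average query time allows.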

You should use EXPLAIN and SHOW ENGINE INNODB STATUS to see which of the two problems is going on.

Of course, it is also possible that you are just driving a ton of traffic at your instance and there just aren't enough threads. In that case, you'll probably be maxing out the cpu for the instance anyway, so adding more threads won't help.
