With sqlalchemy how to dynamically bind to database engine on a per-request basis


Problem description

I have a Pylons-based web application which connects via Sqlalchemy (v0.5) to a Postgres database. For security, rather than follow the typical pattern of simple web apps (as seen in just about all tutorials), I'm not using a generic Postgres user (e.g. "webapp") but am requiring that users enter their own Postgres userid and password, and am using that to establish the connection. That means we get the full benefit of Postgres security.

Complicating things still further, there are two separate databases to connect to. Although they're currently in the same Postgres cluster, they need to be able to move to separate hosts at a later date.

We're using sqlalchemy's declarative package, though I can't see that this has any bearing on the matter.

Most examples of sqlalchemy show trivial approaches such as setting up the Metadata once, at application startup, with a generic database userid and password, which is then used throughout the web application. This is usually done with Metadata.bind = create_engine(), sometimes even at module level in the database model files.

My question is, how can we defer establishing the connections until the user has logged in, and then (of course) re-use those connections, or re-establish them using the same credentials, for each subsequent request.

We have this working -- we think -- but I'm not only not certain of the safety of it, I also think it looks incredibly heavy-weight for the situation.

Inside the __call__ method of the BaseController we retrieve the userid and password from the web session, call sqlalchemy create_engine() once for each database, then call a routine which calls Session.bind_mapper() repeatedly, once for each table that may be referenced on each of those connections, even though any given request usually references only one or two tables. It looks something like this:

# in lib/base.py on the BaseController class
def __call__(self, environ, start_response):

    # note: web session contains {'username': XXX, 'password': YYY}
    url1 = 'postgres://%(username)s:%(password)s@server1/finance' % session
    url2 = 'postgres://%(username)s:%(password)s@server2/staff' % session

    finance = create_engine(url1)
    staff = create_engine(url2)
    db_configure(staff, finance)  # see below
    ... etc

# in another file

Session = scoped_session(sessionmaker())

def db_configure(staff, finance):
    s = Session()

    from db.finance import Employee, Customer, Invoice
    for c in [
        Employee,
        Customer,
        Invoice,
        ]:
        s.bind_mapper(c, finance)

    from db.staff import Project, Hour
    for c in [
        Project,
        Hour,
        ]:
        s.bind_mapper(c, staff)

    s.close()  # prevents leaking connections between sessions?

So the create_engine() calls occur on every request... I can see that being needed, and the Connection Pool probably caches them and does things sensibly.
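One point worth noting: create_engine() shares nothing between calls — each call builds a brand-new engine with its own connection pool, so per-request calls mean per-request pools. A way around that is to cache engines keyed by the credential-bearing URL. The sketch below is not from the original post: `get_engine` is a hypothetical helper, and the `factory` parameter stands in for sqlalchemy's create_engine so the sketch stays self-contained.

```python
import threading

# Hypothetical process-wide engine cache, keyed by the full database URL
# (which includes the user's credentials). Each distinct user/database pair
# gets exactly one engine, and therefore one connection pool, reused across
# requests instead of being rebuilt inside every __call__.
_engine_cache = {}
_cache_lock = threading.Lock()

def get_engine(url, factory):
    with _cache_lock:
        if url not in _engine_cache:  # first request for this user/database
            _engine_cache[url] = factory(url)
        return _engine_cache[url]
```

In the controller this would replace the direct calls, e.g. finance = get_engine(url1, create_engine). Two caveats: the cache key contains the password, and the cache grows with the number of distinct users, so a real deployment would want eviction and engine.dispose() on logout.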

But calling Session.bind_mapper() once for each table, on every request? Seems like there has to be a better way.

Obviously, since a desire for strong security underlies all this, we don't want any chance that a connection established for a high-security user will inadvertently be used in a later request by a low-security user.

Recommended answer

Binding global objects (mappers, metadata) to a user-specific connection is not a good way to go, and neither is using a scoped session. I suggest creating a new session for each request and configuring it to use user-specific connections. The following sample assumes that you use separate metadata objects for each database:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

binds = {}

finance_engine = create_engine(url1)
binds.update(dict.fromkeys(finance_metadata.sorted_tables, finance_engine))
# The following line is required when mappings to joined tables are used (e.g.
# in joined-table inheritance) due to a bug (or misfeature) in SQLAlchemy 0.5.4.
# This issue might be fixed in newer versions.
binds.update(dict.fromkeys([Employee, Customer, Invoice], finance_engine))

staff_engine = create_engine(url2)
binds.update(dict.fromkeys(staff_metadata.sorted_tables, staff_engine))
# See comment above.
binds.update(dict.fromkeys([Project, Hour], staff_engine))

session = sessionmaker(binds=binds)()
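The repeated dict.fromkeys merging above is plain dict bookkeeping, so it can be folded into a small helper. This is a sketch, not part of the original answer — `build_binds` is a hypothetical name, and the test uses plain objects as stand-ins for engines and tables (sessionmaker(binds=...) accepts Table objects and mapped classes alike as keys):

```python
def build_binds(tables_by_engine):
    """Merge (engine, tables) pairs into one binds dict for sessionmaker.

    tables_by_engine: iterable of (engine, iterable of tables/classes) pairs.
    Later pairs overwrite earlier ones on key collisions, matching the
    successive update() calls in the answer above.
    """
    binds = {}
    for engine, tables in tables_by_engine:
        binds.update(dict.fromkeys(tables, engine))
    return binds
```

Per request, session setup then reduces to sessionmaker(binds=build_binds([(finance_engine, list(finance_metadata.sorted_tables) + [Employee, Customer, Invoice]), (staff_engine, list(staff_metadata.sorted_tables) + [Project, Hour])]))().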
