Database Optimization


Problem Description


Hello group,

I have a rather general but interesting inquiry that is related to PHP,
and I hope this is the appropriate place to post it.

I'm looking for a way to dramatically improve the performance of my PHP
application. The application is getting slow as it takes more load.
It is performing a very high number of queries against a database, and I
believe that this is taking up most of the resources.

I'm trying to figure out how I could cache the content of the database
and prevent that many queries from being performed.

What I would like to do is cache all the content of the database in
memory, so that I could access it directly through my PHP application
without querying the database, saving precious resources.

The database is quite small, 15-20 MB, and its size is constant (it
does not get bigger over time). 92% of the queries are SELECT; only 8
percent are UPDATE, DELETE and INSERT.

So, my question is: is it possible and advisable to place 20 MB of
data in shared memory in order to prevent queries to the database? (All
updates, deletes and inserts would be performed both in the database and
in memory.)

Or would I be better off placing a copy of the database on a ramdrive?

Other info:
I have a server which runs both the PHP application and the MySQL
database. It has 1 GB of RAM. The database receives 250 queries/sec.

Thank you in advance for your kind help

Recommended Answer

no********@gmail.com wrote:





Hi,

I am unsure whether placing the database in memory will seriously increase its
performance: you'll have to test that.

If your database is spending its time scanning tables, joining,
doing conversions, etc., the time trimmed off could be disappointing.
If the database is non-stop reading files, it could help.
Hard to say.
Which database do you use?
(!) But before you go that far, did you try some more 'old-fashioned'
optimizations?

Some ideas:
- Try to figure out which tables are scanned a lot, and place indexes on the
relevant column(s).
(If you use Postgresql, try the EXPLAIN command for help.)

- Do your DB and your code use prepared statements?
They can help a lot, especially when the queries are complex.

- If 50 of the 250 queries/sec are the same selects that don't change, you
could try some smart caching.
E.g.: if a popular query is to get the latest 20 this-or-that, with all kinds
of joins on other tables, you could schedule that query every 15 minutes
and save the results in a file. Then include the file on the pages where
you need it.
Alternatively, you could just update the file whenever you know a relevant
table has changed.
(What makes the most sense is up to you to decide, of course.)

This kind of optimization can make a huge difference.

In general: try to figure out which queries are executed a lot, and start
there with prepared statements/indexing/caching-to-file.
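For the prepared-statements suggestion, a minimal sketch with PDO might look like this (the DSN, credentials and users table are hypothetical):

```php
<?php
// Sketch: prepare one statement and reuse it for many lookups, instead
// of building a fresh SQL string per request.

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Prepared once; only the bound parameter changes per execution.
$stmt = $pdo->prepare('SELECT name, email FROM users WHERE id = ?');

foreach ([17, 42, 99] as $userId) {
    $stmt->execute([$userId]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    // ... use $row ...
}
```

Note that PDO's MySQL driver emulates prepares by default; setting `PDO::ATTR_EMULATE_PREPARES` to `false` switches to true server-side prepared statements.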

Hope this helps.

Good luck!

Regards,
Erwin Moller


Thanks for your reply.

I use a MySQL database that is properly optimized. All the indexes are
set correctly and used.

Most of the requests are simple queries using a unique ID and returning
only a single result. There are almost no joins or complex joins.

> - If 50 of the 250 queries/sec are the same selects that don't change,
> you could try some smart caching.







Unfortunately, most of the queries are different.

I can give an example:

A user table with around 4000 users. It is possible to consult other
users' information, so a lot of queries are made on single records.

I tested placing a few records in memory with the shm functions, and it
was, of course, blazingly fast.

But I'm wondering how the system reacts with a higher volume of data, and
what would be the best way to do this.

Thanks
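A record-per-key layout on top of PHP's System V shared-memory functions, along the lines of what the poster describes, could look like the sketch below. The segment key, size and record shape are arbitrary choices, and it requires the sysvshm extension:

```php
<?php
// Sketch: cache user records in a System V shared-memory segment
// (sysvshm extension). Segment key and size are arbitrary.

$shm = shm_attach(0x5550, 20 * 1024 * 1024); // ~20 MB segment

// Write-through: update the database first (not shown), then the shared copy.
function cache_user($shm, int $userId, array $record): void
{
    shm_put_var($shm, $userId, $record); // integer variable key = user id
}

// Read path: try shared memory before falling back to the database.
function lookup_user($shm, int $userId): ?array
{
    return shm_has_var($shm, $userId) ? shm_get_var($shm, $userId) : null;
}

cache_user($shm, 42, ['name' => 'alice', 'email' => 'alice@example.com']);
$user = lookup_user($shm, 42); // served from memory, no database query

shm_detach($shm);
```

Because the 8% of writes must go to both places, anything that updates a user row would have to call `cache_user()` as well, or the shared copy goes stale; that write-through discipline is the main cost of this approach.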



Testing it could be easy. Have a link on the page that people currently
use, asking them to test the effectiveness of the new code. I have
found that almost 80% of my users will go in and test the new
stuff, maybe just out of curiosity. So my testing of new code is normally
completed within a few days, with no impact on operations.

