Scaling Drupal 7 on OpenShift


Question

I am setting up a Drupal site and would like to make it scalable on OpenShift (bronze plan, small.highcpu). Two questions in this respect:

a) Background tasks?

It would be great if someone could explain point 3 of this README in more detail:

https://github.com/openshift/drupal-quickstart/blob/master/README.md


Because none of your application code is checked into Git and lives entirely in your data directory, if this application is set to scalable the new gears will have empty data directories and won't serve requests properly. If you'd like to make the app scalable, you'll need to:


  1. Check the contents of php/* into your Git repository (in the php/* dir)
  2. Only install new modules via Drush from the head gear, and then commit those changes to the Git repo
  3. Use a background task to copy file contents from gear to gear

All of the scripts used to deploy and configure Drupal are located in the build and deploy hooks.

b) Additional filesystem:

Here the poster says that a more persistent filesystem (e.g. S3) is needed to scale: https://groups.drupal.org/node/297403. Is that really necessary for a site serving around 30-50 pages per second at peak times? What are the benefits of adding S3?

Answer

In a scalable OpenShift app, you want all gears to behave identically. In the case of Drupal, each gear needs to have the core Drupal files, modules, and any additional data to be served by the gear (images, etc.).

The guide recommends checking the core PHP files and extra modules (after using Drush) into Git so that each gear has them.
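
A minimal sketch of what that workflow could look like, against the layout of the drupal-quickstart repo. The gear address, the data-directory path and the module name are placeholders, not taken from the README:

    # From a local clone of the app's Git repo: pull the Drupal code
    # out of the gear's data directory and check it in under php/.
    # <uuid> and <app>-<ns> stand for your gear's SSH user and host,
    # <drupal-dir> for wherever the quickstart unpacked Drupal.
    scp -r <uuid>@<app>-<ns>.rhcloud.com:app-root/data/<drupal-dir>/ php/
    git add php/
    git commit -m "Check Drupal code into Git so new gears can serve it"
    git push

    # On the head gear: install a new module via Drush
    # ('views' is only an example module)...
    ssh <uuid>@<app>-<ns>.rhcloud.com
    drush -r "$OPENSHIFT_REPO_DIR/php" dl views
    drush -r "$OPENSHIFT_REPO_DIR/php" en -y views

    # ...then commit the downloaded module from the local clone as above.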

Here background tasks and S3 are two approaches to the same problem—to make sure each gear serves the same data.

One way to realize "a background task to copy file contents from gear to gear" is to use OpenShift cron on the head gear to copy the data files to the remaining gears at regular intervals.
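
For example, with the cron cartridge added to the app, a script checked into the repo under .openshift/cron/minutely/ runs on every gear once a minute. The sketch below is untested and makes two assumptions: the gear SSH URLs in GEARS are filled in by hand (e.g. from the output of rhc app show --gears), and the uploaded files live under sites/default/files inside the data directory:

    #!/bin/bash
    # .openshift/cron/minutely/sync_files.sh
    # Run only on the head gear, whose gear DNS equals the app DNS.
    [ "$OPENSHIFT_GEAR_DNS" = "$OPENSHIFT_APP_DNS" ] || exit 0

    # Hypothetical list of the other gears' SSH URLs.
    GEARS="uuid1@gear1.rhcloud.com uuid2@gear2.rhcloud.com"

    for gear in $GEARS; do
        # Push the Drupal files directory to each secondary gear.
        rsync -az "$OPENSHIFT_DATA_DIR/sites/default/files/" \
              "$gear:app-root/data/sites/default/files/"
    done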

The other way to have all gears serve the same content is to have them all point to external storage, i.e. S3. If you use S3, you don't need background jobs to copy data between gears. And if the bottleneck in serving 30-50 pages per second is the I/O of reading data, S3 may well help by offloading that to its servers.
