Gulp: custom pipe to transform a lot of files in a batch


Problem description

Edit: it seems I'm trying to create some concat gulp pipe that first transforms the contents of the various files.

I have been using through2 to write a custom pipe. Scope: using gulp-watch, run a task that loads all the files, transforms them in memory, and outputs a few files.

Through2 comes with a "highWaterMark" option that defaults to 16 files being read at a time. My pipe doesn't need to be memory-optimized (it reads dozens of <5 kB JSON files, runs some transforms, and outputs 2 or 3 JSON files). But I'd like to understand the preferred approach.

I'd like to find a good resource/tutorial explaining how such situations are handled; any leads are welcome.

Thanks,

Solution

OK, found my problem.

When using through2 to create a custom pipe, in order to "consume" the data (and not hit the highWaterMark limit), one simply has to add an .on('data', () => ...) handler, as in the following example:

const gulp = require('gulp');
const through = require('through2');

// searchPatternFolder, composeDictionary and writeDictionary are defined elsewhere
function processAll18nFiles(done) {
  const dictionary = {};
  let count = 0;
  console.log('[i18n] Rebuilding...');
  gulp
    .src(searchPatternFolder)
    .pipe(
      through.obj({ highWaterMark: 1, objectMode: true }, (file, enc, next) => {
        const { data, path } = JSON.parse(file.contents.toString('utf8'));
        next(null, { data, path });
      })
    )
    // this line fixes my issue, the highWaterMark doesn't cause a limitation now
    .on('data', ({ data, path }) => ++count && composeDictionary(dictionary, data, path.split('.')))
    .on('end', () =>
      Promise.all(Object.keys(dictionary).map(langKey => writeDictionary(langKey, dictionary[langKey])))
        .then(() => {
          console.log(`[i18n] Done, ${count} files processed, language count: ${Object.keys(dictionary).length}`);
          done();
        })
        .catch(err => console.log('ERROR ', err))
    );
}

Note: mind the "done" parameter, which forces the developer to make a call when it's done().


