Nodejs > Gulp > through2 > Limitation to 16 files?


Problem description


Update: OK, this seems to be linked to through2's "highWaterMark" property. Basically it means "don't buffer more than x files; wait for someone to consume them, and only then accept another batch of files". Since it works this way by design, the snippet in this question is being reviewed. There must be a better way to handle many files.


Quick fix, allowing 8000 files:

  through.obj({ highWaterMark: 8000 }, (file, enc, next) => { ... })

Original question


I'm using a gulp task to create translation files. It scans an src folder for *.i18n.json files and saves one .json per language it finds within the source files.


It works fine - until it finds more than 16 files. It's using through2 for the processing of each file. See source code below. The method processAll18nFiles() is a custom pipe that receives the matching input files, reads the content of each file, constructs the resulting dictionaries on the fly, then finally hands them over to the on('finish') handler to write the dictionaries.


Tested on Windows and Mac. There seems to be a limitation that my approach hits, because it works just fine with 16 files or fewer.


Still looking, clues welcome :-)

Example source file: signs.i18n.json

{
  "path": "profile.signs",
  "data": {
    "title": {
      "fr": "mes signes précurseurs",
      "en": "my warning signs"
    },
    "add": {
      "fr": "ajouter un nouveau signe",
      "en": "add a new warning sign"
    }
  }
}

Example output file: en.json

{"profile":{"signs":{"title":"my warning signs","add":"add a new warning sign"}}}

gulpfile.js

const fs = require('fs');
const path = require('path');
const gulp = require('gulp');
const watch = require('gulp-watch');
const through = require('through2');

const searchPatternFolder = 'src/app/**/*.i18n.json';
const outputFolder = path.join('src', 'assets', 'i18n');

gulp.task('default', () => {
  console.log('Ionosphere Gulp tasks');
  console.log(' > gulp i18n         builds the i18n file.');
  console.log(' > gulp i18n:watch   watches i18n file and trigger build.');
});

gulp.task('i18n:watch', () => watch(searchPatternFolder, { ignoreInitial: false }, () => gulp.start('i18n')));
gulp.task('i18n', done => processAll18nFiles(done));

function processAll18nFiles(done) {
  const dictionary = {};
  console.log('[i18n] Rebuilding...');
  gulp
    .src(searchPatternFolder)
    .pipe(
      through.obj((file, enc, next) => {
        console.log('doing ', file.path);
        const i18n = JSON.parse(file.contents.toString('utf8'));
        composeDictionary(dictionary, i18n.data, i18n.path.split('.'));
        next(null, file);
      })
    )
    .on('finish', () => {
      const writes = [];
      Object.keys(dictionary).forEach(langKey => {
        console.log('lang key ', langKey);
        writes.push(writeDictionary(langKey, dictionary[langKey]));
      });
      Promise.all(writes)
        .then(data => done())
        .catch(err => console.log('ERROR ', err));
    });
}

function composeDictionary(dictionary, data, path) {
  Object.keys(data)
    .map(key => ({ key, data: data[key] }))
    .forEach(({ key, data }) => {
      if (isString(data)) {
        setDictionaryEntry(dictionary, key, path, data);
      } else {
        composeDictionary(dictionary, data, [...path, key]);
      }
    });
}

function isString(x) {
  return Object.prototype.toString.call(x) === '[object String]';
}

function initDictionaryEntry(key, dictionary) {
  if (!dictionary[key]) {
    dictionary[key] = {};
  }
  return dictionary[key];
}

function setDictionaryEntry(dictionary, langKey, path, data) {
  initDictionaryEntry(langKey, dictionary);
  let subDict = dictionary[langKey];
  path.forEach(subKey => {
    const isLastToken = path[path.length - 1] === subKey;
    if (isLastToken) {
      subDict[subKey] = data;
    } else {
      subDict = initDictionaryEntry(subKey, subDict);
    }
  });
}

function writeDictionary(lang, data) {
  return new Promise((resolve, reject) => {
    fs.writeFile(
      path.join(outputFolder, lang + '.json'),
      JSON.stringify(data),
      'utf8',
      err => (err ? reject(err) : resolve())
    );
  });
}
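For reference, the dictionary composition itself can be sketched standalone. This is the same logic as composeDictionary/setDictionaryEntry above, condensed into one function and fed the sample signs.i18n.json from earlier (no gulp involved):

```javascript
// Standalone sketch of the dictionary composition, using the sample data above.
const i18n = {
  path: 'profile.signs',
  data: {
    title: { fr: 'mes signes précurseurs', en: 'my warning signs' },
    add: { fr: 'ajouter un nouveau signe', en: 'add a new warning sign' },
  },
};

const dictionary = {};

function compose(dict, data, path) {
  for (const [key, value] of Object.entries(data)) {
    if (typeof value === 'string') {
      // Leaf: value is a translation, key is the language code.
      let node = (dict[key] = dict[key] || {});
      path.slice(0, -1).forEach(p => (node = node[p] = node[p] || {}));
      node[path[path.length - 1]] = value;
    } else {
      // Non-leaf: descend one level, extending the path.
      compose(dict, value, [...path, key]);
    }
  }
}

compose(dictionary, i18n.data, i18n.path.split('.'));
console.log(JSON.stringify(dictionary.en));
// → {"profile":{"signs":{"title":"my warning signs","add":"add a new warning sign"}}}
```

The `en` entry matches the en.json output shown above; `dictionary.fr` would be written out as fr.json the same way.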

Recommended answer


OK, as explained here, one must consume the pipe. This is done by adding a handler for 'data' events, such as:

  // dictionary and done come from the enclosing processAll18nFiles();
  // count is a plain counter initialised to 0 before this pipe.
  gulp
    .src(searchPatternFolder)
    .pipe(
      through.obj({ highWaterMark: 4, objectMode: true }, (file, enc, next) => {
        const { data, path } = JSON.parse(file.contents.toString('utf8'));
        next(null, { data, path });
      })
    )
    // The next line handles the "consumption" of upstream pipings
    .on('data', ({ data, path }) => ++count && composeDictionary(dictionary, data, path.split('.')))
    .on('end', () =>
      Promise.all(Object.keys(dictionary).map(langKey => writeDictionary(langKey, dictionary[langKey])))
        .then(() => {
          console.log(`[i18n] Done, ${count} files processed, language count: ${Object.keys(dictionary).length}`);
          done();
        })
        .catch(err => console.log('ERROR ', err))
    );
