How do I return all folder names and IDs without reaching the execution limit?


Problem description



I am attempting to retrieve a list of all folders and their respective IDs using a Google Apps Script. Currently, I am writing the result to an array which is then posted to a spreadsheet every 5000 records. Unfortunately, the script reaches the execution limit (5 minutes) before completion. How can I work around this? And, would I have more success doing RESTful API calls over HTTP than using Apps Script?

I've noted the following:

  1. The code already follows Google's bulk-writes best practice.
  2. Slow execution is a result of Apps Script indexing Drive slowly.
  3. Results appear to follow a consistent indexing pattern.
    • Multiple runs produce results in the same order
    • Unknown how items are re-indexed upon addition, preventing meaningful caching between runs
    • Deltas are not reliable unless the indexing method is identified
  4. Looked into Drive caching.
    • Still required to loop through the FolderIterator object
    • Theoretical performance would be even worse, IMO (correct?)

Code is below:

function LogAllFolders() {
  var ss_index = 1;
  var idx = 0;
  var folder;
  // Buffer of 5000 [name, id] rows so the sheet is written in bulk.
  var data = new Array(5000);
  for (var i = 0; i < 5000; i++) {
    data[i] = new Array(2);
  }
  var ss = SpreadsheetApp.create("FolderInv2", 1, 2).getSheets()[0];
  var root = DriveApp.getFolders();
  while (root.hasNext()) {
    folder = root.next();
    data[idx][0] = folder.getName();
    data[idx][1] = folder.getId();
    idx++;
    // Every 5000 folders, append the buffered rows in a single write.
    if ((ss_index % 5000) == 0) {
      ss.insertRowsAfter(ss.getLastRow() + 1, 5000);
      ss.getRange(ss.getLastRow() + 1, 1, 5000, 2).setValues(data);
      SpreadsheetApp.flush();
      idx = 0;
    }
    ss_index++;
  }
}

Solution

I would first collect all the folder IDs you want to process, then save the folder ID (or maybe the array index) you have processed so far to your project properties. Run the job on a time-driven (CRON-style) trigger every five minutes and resume from the folder ID or index saved previously.
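
A minimal sketch of that resume pattern is below (an illustration, not the only way to do it). Assumptions not in the original answer: the output goes to an existing spreadsheet whose ID is in the SHEET_ID placeholder, and progress is saved as the folder iterator's continuation token, Apps Script's built-in resume mechanism, rather than a processed folder ID or array index.

var SHEET_ID = 'YOUR-SPREADSHEET-ID';   // placeholder, not from the post
var PROPS = PropertiesService.getScriptProperties();
var MAX_RUNTIME_MS = 4.5 * 60 * 1000;   // stop a little before the 5-minute limit

function startFolderInventory() {
  // Clear any saved progress and schedule the worker to run every 5 minutes.
  PROPS.deleteProperty('FOLDER_CONTINUATION_TOKEN');
  ScriptApp.newTrigger('logAllFoldersResumable')
      .timeBased()
      .everyMinutes(5)
      .create();
  logAllFoldersResumable();   // do the first chunk right away
}

function logAllFoldersResumable() {
  var start = Date.now();
  // Resume from the saved continuation token, or start from scratch.
  var token = PROPS.getProperty('FOLDER_CONTINUATION_TOKEN');
  var folders = token ? DriveApp.continueFolderIterator(token)
                      : DriveApp.getFolders();
  var sheet = SpreadsheetApp.openById(SHEET_ID).getSheets()[0];
  var rows = [];

  while (folders.hasNext()) {
    var folder = folders.next();
    rows.push([folder.getName(), folder.getId()]);

    if (rows.length === 1000 || Date.now() - start > MAX_RUNTIME_MS) {
      writeRows(sheet, rows);   // bulk write, as in the original code
      rows = [];
      if (Date.now() - start > MAX_RUNTIME_MS) {
        // Out of time: remember where we stopped and let the next run resume.
        PROPS.setProperty('FOLDER_CONTINUATION_TOKEN',
                          folders.getContinuationToken());
        return;
      }
    }
  }

  if (rows.length > 0) writeRows(sheet, rows);
  finishFolderInventory();   // all folders done; see the cleanup sketch below
}

function writeRows(sheet, rows) {
  // Make sure the sheet has room, then write the whole batch in one call.
  sheet.insertRowsAfter(Math.max(sheet.getLastRow(), 1), rows.length);
  sheet.getRange(sheet.getLastRow() + 1, 1, rows.length, 2).setValues(rows);
}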

I guess when it's done, remove the CRON trigger programmatically.
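
For example (continuing the sketch above), the cleanup might look like this: once the iterator is exhausted, forget the saved progress and delete the job's own trigger.

function finishFolderInventory() {
  // PROPS and the property key come from the sketch above.
  PROPS.deleteProperty('FOLDER_CONTINUATION_TOKEN');
  // Delete the every-5-minutes trigger so the job stops re-running.
  ScriptApp.getProjectTriggers().forEach(function (trigger) {
    if (trigger.getHandlerFunction() === 'logAllFoldersResumable') {
      ScriptApp.deleteTrigger(trigger);
    }
  });
}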
