How to add a new field to all current documents in my database?


Problem description

So I'm trying to implement a new cloud function for my application but it requires a little adjustment to my existing database data model. Each user in my database has a field that I am trying to update each week. I don't want this function to run for all users each week as that would be an unnecessarily expensive operation. Instead I've just decided to track the last time I updated that user by storing a 'last-updated' field in their user document.

The problem is none of my existing 400+ users have this field. So I'm looking for a way to add this field, initialized to some default time, for all existing users in the database.

I've thought about using a 'batch write' as described here: https://firebase.google.com/docs/firestore/manage-data/transactions#batched-writes

but it seems like you need to specify the ID of each document you want to update. All of my users have a UUID that was generated by Firestore, so it's not really practical for me to manually write to each user. Is there a way for me to create a new field in each document of an existing collection? Or, if not, perhaps a way for me to get a list of all document IDs so that I can iterate through it and do a really ugly batch write? I only have to do this mass update once and then never again, unless I discover a new piece of data I would like to track.

Answer

You could use a Cloud Function: for example, below is the code of a Cloud Function that is triggered by creating a doc in a collection named batchUpdateTrigger (note that this is just one way to trigger the Cloud Function; you could very well use an HTTPS Cloud Function instead).

In this Cloud Function we take all the docs of the collection named collection and we add to each of them a new field holding the current date/time (a server timestamp). We use Promise.all() to execute all the asynchronous update work in parallel. Don't forget to add write access to the batchUpdateTrigger collection, and to delete the Cloud Function once it has run.

 // Assumes the usual setup:
 //   const functions = require('firebase-functions');
 //   const admin = require('firebase-admin');
 //   admin.initializeApp();
 //   const db = admin.firestore();
 exports.batchUpdate = functions.firestore
  .document('batchUpdateTrigger/{triggerId}')
  .onCreate((snap, context) => {

    const collecRef = db.collection('collection');
    return collecRef.get()
        .then(snapshot => {

           // Firestore server-timestamp sentinel (not the Realtime
           // Database's admin.database.ServerValue.TIMESTAMP)
           const ts = admin.firestore.FieldValue.serverTimestamp();
           const promises = [];

           snapshot.forEach(doc => {
             promises.push(
               doc.ref.update({
                   lastUpdate: ts
               })
             );
           });

           return Promise.all(promises);

        });

  });

One problem you may encounter here is reaching the timeout of the Cloud Function. The default timeout is 60 seconds, but you can increase it in the Google Cloud console (https://console.cloud.google.com/functions/list?project=xxxxxxx)
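You can also raise the timeout in code rather than in the console. A minimal sketch, assuming the 1st-gen firebase-functions SDK, where runWith() accepts a timeoutSeconds option (540 seconds is the maximum):

```javascript
// Sketch only: declare the function with a longer timeout via runWith().
// Assumes 1st-gen firebase-functions; the trigger body is elided here.
const functions = require('firebase-functions');

exports.batchUpdate = functions
  .runWith({ timeoutSeconds: 540, memory: '1GB' }) // max timeout for 1st-gen
  .firestore.document('batchUpdateTrigger/{triggerId}')
  .onCreate((snap, context) => {
    // ... same update logic as above ...
    return null;
  });
```

This keeps the timeout setting versioned with the function's source instead of depending on a console change.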

Another approach would be, like you said, to use a batched write.

The Cloud Function would then look like this:

 exports.batchUpdate = functions.firestore
  .document('batchUpdateTrigger/{triggerId}')
  .onCreate((snap, context) => {

    const collecRef = db.collection('collection');
    return collecRef.get()
        .then(snapshot => {

           const ts = admin.firestore.FieldValue.serverTimestamp();
           const batch = db.batch();

           snapshot.forEach(doc => {
             batch.update(doc.ref, {
                 lastUpdate: ts
             });
           });

           return batch.commit();

        });

  });

However, you would need to manage, in your code, the limit of 500 operations per batch.

Below is a possible simple approach (i.e. not very sophisticated...). Since you are going to set the default values only once and you only have a few hundred docs to process, we can consider it acceptable! The following Cloud Function processes documents in batches of 500, so you may have to manually re-trigger it until all the docs are processed.

 exports.batchUpdate = functions.firestore
  .document('batchUpdateTrigger/{triggerId}')
  .onCreate((snap, context) => {

    const collecRef = db.collection('collection');
    return collecRef.get()
        .then(snapshot => {

           const docRefsArray = [];
           snapshot.forEach(doc => {
              if (doc.data().lastUpdate == null) {
                 // This doc has not been processed yet
                 docRefsArray.push(doc.ref);
              }
           });

           // When the output is 0 you are done, i.e. all the docs are processed
           console.log("Nbr of docs to process: " + docRefsArray.length);

           if (docRefsArray.length > 0) {

               const ts = admin.firestore.FieldValue.serverTimestamp();
               const batch = db.batch();

               // A batched write accepts at most 500 operations
               const nbrToProcess = Math.min(docRefsArray.length, 500);
               for (let i = 0; i < nbrToProcess; i++) {
                  batch.update(docRefsArray[i], {
                     lastUpdate: ts
                  });
               }

               return batch.commit();

           } else {
               return null;
           }

        });

  });

The advantage of this last technique is that if you run into Cloud Function timeout problems, you can reduce the size of the batch.
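The chunking itself is independent of Firestore, so it can be factored into a plain helper. A minimal sketch (chunkArray is a hypothetical name, not a Firestore API) that splits any array of document refs into groups no larger than the batch limit, each of which would then be mapped onto one batched write:

```javascript
// Hypothetical helper: split `items` into chunks of at most `size` elements.
// With size = 500 (Firestore's batched-write limit), each chunk can be
// committed in its own WriteBatch.
function chunkArray(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Example: 1201 refs split into chunks of 500, 500 and 201
const chunks = chunkArray(Array.from({ length: 1201 }, (_, i) => i), 500);
console.log(chunks.length);    // 3
console.log(chunks[2].length); // 201
```

With such a helper you could commit every chunk in one run (e.g. awaiting each batch.commit() in sequence) instead of re-triggering the function manually, as long as the total work still fits within the function's timeout.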

