Architecture for data layer that uses both localStorage and a REST remote server


Question



Does anybody have any ideas or references on how to implement a data persistence layer that uses both localStorage and a remote REST store:

The data of a certain client is stored with localStorage (using an ember-data IndexedDB adapter). The locally stored data is synced with the remote server (using the ember-data RESTAdapter).

The server gathers all data from clients. Using mathematical sets notation:

Server = Client1 ∪ Client2 ∪ ... ∪ ClientN 

where, in general, a record need not be unique to a particular client:

ClientX ∩ ClientY ≠ ∅,  ∀ X, Y ∈ [1, N], X ≠ Y

Here are some scenarios:

  • A client creates a record. The id of the record can't be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server -> receive the id -> create the record in localStorage.

  • A record is updated on the server, and as a consequence the data in localStorage and in the server go out of sync. Only the server knows that, so the architecture needs to implement a push architecture (?)

Would you use 2 stores (one for localStorage, one for REST) and sync between them, or use a hybrid indexedDB/REST adapter and write the sync code within the adapter?

Can you see any way to avoid implementing push (Web Sockets, ...)?

Solution

The problem you bring up cannot be answered in a few paragraphs, or answered simply. Nevertheless, here is my attempt...

First, there are a number of difficulties with the approach you have adopted:

  1. The clients must always be network connected to create data and receive the keys from the server.
  2. If you use two different stores (localStorage & REST), all application code that needs data must look in both stores. That significantly increases the complexity of every part of the application.
  3. After creating a row, if you want to create child rows, you must wait for the server to return the primary key before you can reference it as a foreign key in the child row. For any moderately complex data structures, this becomes a heavy burden.
  4. When the server goes down, all clients cannot create data.

Here is my approach. It uses SequelSphereDB, but most of the concepts can be reused across other client data management systems.

FIRST: Use UUIDs for Primary Keys.

Most client data management systems should provide a way to generate universally unique IDs (UUIDs). SequelSphere does it simply with an SQL function: UUID(). Having a UUID as the primary key for each row allows primary keys to be generated on any client at any time, without having to contact the server, while still guaranteeing that the IDs will be unique. This in turn allows the application to work in an "offline" mode, with no connection to the server required at run time. It also keeps a downed server from bringing all of the clients down with it.
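
As a minimal sketch of the idea in plain TypeScript (not SequelSphere-specific; the record shape and field names are hypothetical), the standard Web Crypto call crypto.randomUUID(), available in modern browsers and recent Node versions, is enough to mint keys on the client:

// Hypothetical record shape. The key is generated on the client, so the
// row can be saved locally (and pushed later) without a server round trip.
interface NoteRow {
  id: string;        // UUID primary key, client-generated
  parentId?: string; // child rows can reference a parent immediately
  text: string;
  updatedAt: string; // ISO timestamp, useful later for delta sync
}

function newNote(text: string, parentId?: string): NoteRow {
  return {
    id: crypto.randomUUID(),         // unique without asking the server
    parentId,
    text,
    updatedAt: new Date().toISOString(),
  };
}

// Works offline: both rows get stable keys before any network call.
const list = newNote("shopping list");
const item = newNote("milk", list.id);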

SECOND: Use a single set of tables that mirror the server's.

This is more of a requirement for simplicity than anything else. It is also a requirement for the next two fundamental principles.
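
For illustration only (the tables and fields are hypothetical), "mirroring" just means the client keeps the same table names and row shapes as the server, so sync code can copy rows verbatim in either direction:

// shared-schema.ts -- one definition, used by both client and server code.
export interface CustomerRow {
  id: string;         // UUID primary key
  name: string;
  updatedAt: string;  // server-side modification time
  deleted: boolean;   // logical-delete flag (see the delta-sync notes below)
}

export interface OrderRow {
  id: string;
  customerId: string; // foreign key, also a client-generated UUID
  total: number;
  updatedAt: string;
  deleted: boolean;
}

// Table names mirror the server's exactly.
export const TABLES = ["customers", "orders"] as const;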

THIRD: For downward-synchronization of small datasets, completely refreshing client data from the server is preferable.

Whenever possible, perform complete refreshes of the data on the client from the server. It is a simpler paradigm, and it results in fewer internal data-integrity issues. The primary drawback is the size of the data in transfer.
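
A sketch of a full refresh in TypeScript, assuming a hypothetical GET /api/<table> endpoint that returns every row, and one localStorage key per mirrored table:

// Full refresh: discard the local copy and take the server's version.
async function fullRefresh(table: string): Promise<void> {
  const res = await fetch(`/api/${table}`);
  if (!res.ok) throw new Error(`refresh of ${table} failed: ${res.status}`);
  const rows: unknown[] = await res.json();
  // Replace rather than merge -- that is what keeps this paradigm simple.
  localStorage.setItem(`table:${table}`, JSON.stringify(rows));
}

// Usage: await fullRefresh("customers");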

FOURTH: For downward-synchronization of large datasets, perform 'Transactional' updates

This is where my approach gets a little more complex. If the datasets are too large and only the changed rows can be sync'd, you must find a way to sync them according to "transactions": that is, the inserts/updates/deletes in the order in which they were performed on the server, which gives you a simple script for performing the same operations on the client.

It is preferable to have a table on the server recording the transactions to be sync'd down to the device. If that is not possible, then the order can often be recorded on the server using timestamps on the rows, and having the client ask for all changes since a particular timestamp. Big negative: you will need to keep track of deleted rows, either with "logical" deletes or by tracking them in a table of their own. Even so, isolating that complexity to the server is preferable to spreading it across all the clients.
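
A sketch of the timestamp variant, with a hypothetical GET /api/changes endpoint that returns every row modified since a given watermark and represents removals as logical deletes:

interface SyncedRow {
  id: string;
  updatedAt: string; // set by the server on every write
  deleted: boolean;  // logical delete, so removals show up in the delta
  [key: string]: unknown;
}

// Apply the server's changes in order and return the new watermark.
async function deltaDownSync(table: string, since: string): Promise<string> {
  const url = `/api/changes?table=${encodeURIComponent(table)}&since=${encodeURIComponent(since)}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`delta sync of ${table} failed: ${res.status}`);
  const changes: SyncedRow[] = await res.json();

  const key = `table:${table}`;
  const local: SyncedRow[] = JSON.parse(localStorage.getItem(key) ?? "[]");
  const byId = new Map<string, SyncedRow>();
  for (const row of local) byId.set(row.id, row);

  let newest = since;
  for (const row of changes) {            // server returns rows in change order
    if (row.deleted) byId.delete(row.id); // logically deleted on the server
    else byId.set(row.id, row);           // insert or update locally
    if (row.updatedAt > newest) newest = row.updatedAt; // ISO strings compare lexically
  }
  localStorage.setItem(key, JSON.stringify([...byId.values()]));
  return newest; // persist this as the "since" value for the next sync
}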

FIFTH: For upward-synchronization, use 'Transactional' updates

This is where SequelSphereDB really shines: it keeps track of all the inserts, updates, and deletes performed against your tables, and then provides them back to you at sync time. It even does this across browser restarts, since it persists the information in localStorage/IndexedDB, and it handles commits and rollbacks appropriately. The client app can interact with the data as it normally would, without having to think about recording the changes, and then use SequelSphereDB's "Change Trackers" to replay the changes at sync time.

If you are not using SequelSphere (you should be), then keep a separate table on the client that records every insert, update, and delete the client performs. Whenever the client application inserts/updates/deletes rows, make a copy of that change in the "transaction" table. At upward-sync time, send those entries. On the server, simply perform the same steps in the same order to replicate the data that was on the client.
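
A roll-your-own sketch of that "transaction" table, with hypothetical names: every local insert/update/delete also appends an entry to a change log kept in localStorage, and the up-sync posts the entries in order, clearing the log only on success:

type ChangeOp = "insert" | "update" | "delete";

interface ChangeEntry {
  seq: number;   // preserves the order in which the changes were made
  table: string;
  op: ChangeOp;
  row: { id: string; [key: string]: unknown };
}

const CHANGE_LOG_KEY = "changeLog";

function loadChangeLog(): ChangeEntry[] {
  return JSON.parse(localStorage.getItem(CHANGE_LOG_KEY) ?? "[]");
}

// Call this alongside every local write, whatever store actually holds the row.
function recordChange(table: string, op: ChangeOp, row: ChangeEntry["row"]): void {
  const log = loadChangeLog();
  log.push({ seq: log.length + 1, table, op, row });
  localStorage.setItem(CHANGE_LOG_KEY, JSON.stringify(log));
}

// Up-sync: ship the log in order; the server replays the same steps.
async function upSync(): Promise<void> {
  const log = loadChangeLog();
  if (log.length === 0) return;
  const res = await fetch("/api/sync/up", {  // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(log),
  });
  if (!res.ok) throw new Error(`up-sync failed: ${res.status}`);
  localStorage.removeItem(CHANGE_LOG_KEY);   // clear only after the server accepts it
}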

ALSO IMPORTANT: Always perform an upward sync before fully refreshing the client tables from the server. :)
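
Putting that ordering together, using the hypothetical helpers sketched above:

// upSync() and fullRefresh() are the helpers from the earlier sketches.
// Push local changes first, then take the server's view of the world.
async function syncAll(tables: readonly string[]): Promise<void> {
  await upSync();              // 1. upward sync (replay the change log)
  for (const table of tables) {
    await fullRefresh(table);  // 2. then a clean downward refresh
  }
}

// Usage: await syncAll(["customers", "orders"]);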

Conclusion

I suggest going for simplicity over complexity in as many places as possible. Using UUIDs for primary keys is extremely helpful for this. Using some sort of "change trackers" is also very useful. Using a tool such as SequelSphereDB to track the changes for you is most helpful, but not necessary for this approach.

FULL DISCLOSURE: I am closely related to the company SequelSphere, but that product really isn't necessary for implementing the above approach.
