Dedicated commitlog storage vs Read/Write ratio?


Problem description



We are using SSD disks to provide storage for our cluster, on servers with 30 GB of memory.

There is an argument about the commitlog directory: should we dedicate an individual disk to it, or keep it on the same disk as the data?

Since we are already using SSD disks, performance should be fine with both the commitlog and the data on the same disk, as there is no mechanical head that must move for writes.

However, there is another factor: the read/write ratio. How would such a ratio affect write or read performance when both the commitlog and the data are on the same disk?

With SSDs, when would it become important to dedicate a high-performance disk to the commitlog directory?

Solution

A dedicated commitlog device usually makes a lot of sense when you have HDDs, but is less obvious if you're using SSDs.

Even though you asked only whether this makes sense for SSD setups, I will try to give some general hints on the subject, based primarily on my own understanding and experience. I admit the focus is probably too much on HDDs, but HDDs give deep insight into how Cassandra works and why backing the commitlog/data directories with an SSD can be a life saver.


Background: IOPS and OPS are not the same thing.

I will start from a (very) far point: device performance. Here's a starting-point lecture about storage device performance in general. Even if the article's neutrality is under discussion, it can provide some insight into the general metrics and performance you can expect from some systems. Of course, your mileage may vary, depending on what device (type/brand/model, etc.) you use and how much stress (meaning the type of workload) you put on it, but I think it is a good starting point for our discussion here.

I prefer to start from IOPS because it is the very starting point for understanding storage performance. The C* literature speaks about OPS, Operations Per Second, because people usually don't think in terms of IOPS, especially when looking at stats. This really hides a lot of details, the operation size for starters.

A Cassandra operation usually consists of multiple IOPS. The Cassandra documentation usually refers to spinning disks (even if SSDs are referenced too) and clearly states what happens when performing reads/writes, yet people tend to ignore the fact that when their software stack (spanning from the application down to Cassandra and its data files on storage) hits the disks, performance drops by a huge amount simply because they have failed to recognize a random workload, even though "Cassandra is a high-performance..." etc., etc.

As an example, looking at the picture in the read path documentation, you can clearly see what data structures are in memory/on disk, and how the SSTable data is accessed. Further, the row cache paragraph says:

... If row cache is enabled, desired partition data is read from the row cache, potentially saving two seeks to disk for the data...

And here's where the catch starts: those two seeks are potentially saved from Cassandra's point of view. This simply means that Cassandra won't make two requests to the storage system: it will avoid requesting the partition index and the data, because everything is already in RAM. But it doesn't really translate to "the storage system will save two IO operations". Indeed, how (generic) data is retrieved from the storage device is a very different matter, and of course depends on how the files are laid out on the disk itself: are you using EXT4, XFS, or something else? Assuming no cache is available (e.g., for very big dataset sizes you can't really cache everything), looking for a file consumes IOPS, and this tends to amplify the benefit of the potentially saved seeks when your data is in RAM, and to amplify the penalty you perceive when it is not.


You can't escape physics: HDDs pay some taxes, SSDs don't.

As you already know, the main "problem" (performance-wise) of HDDs is the average seek time, that is, the time the HDD needs to wait on average in order to have a target sector under the heads. Once the sector is under the heads, if the system has to read a bunch of sequential bits, everything is smooth, and the throughput is proportional to the rotational speed of the HDD (to be precise, to the tangential speed of the platters under the head, which also depends on the track, etc.).

In other words, HDDs have an average fixed performance tax (the average seek time), and everything after it is almost "free". If an application requests a bunch of sectors that are not "contiguous" (from the disk's point of view; e.g., a fragmented file is split across multiple sectors, but an application can't really know this), the disk will have to pay the average seek time multiple times, and this fixed tax limits its maximum throughput.

The strongest argument about storage is: every device has its own maximum magic average IOPS number. This number expresses the number of random IOPS the device can perform. You can't force an HDD to have more IOPS on average; it's a physical limit. The OS is usually smart enough to "enqueue" sector requests in an attempt to reduce seek times, e.g., ordering by ascending requested sector number (trying to exploit some sequential operations), but nothing will save performance under a random IO workload. You have X allotted available IOPS and must face your problems with that. No matter what.


You need to take advantage of the allotted IOPS of your device, and you must be wise about how you use them.

Suppose you have an HDD that maxes out at 100 IOPS on average. If your application performs a bunch of small (say 4KB) file reads, you have an application that performs 100 * 4KB reads every second: the throughput will be around 400KB/s (unless some caching is involved, in which case the cache saved you precious IOPS). Astonishingly low. This is simply because you keep paying the seek time over and over. If you change your access pattern to something that reads 16MB (contiguous) files, you get a higher throughput, because you pay the seek time far less often: you are exploiting a sequential pattern. What changes under the hood is the request size of each operation.

Now an interesting question is: how are "IOPS" and "request size" related? Can one 16MB request be considered one IOP? And what about a 128MB request? This is indeed a good question. At the lower level, the request size spans from 512 bytes (the minimum sector size) to 128KB (32 * 4K sectors in one request). If the operation has a small size, its transfer time (the time the disk needs to fetch the data) is also small. Larger request sizes obviously have longer transfer times. However, if you are able to perform 100 IOPS at 4KB, you will probably be able to perform around 80 IOPS at 8KB. The relation can't be linear, because the transfer time depends only on the rotational speed of the disks (the transfer time is negligible compared to the seek time), and since you are actually reading from two adjacent sectors, you'll hit the seek time penalty only once per request. This translates to a throughput of around 400KB/s for 4K requests and around 640KB/s for 8K requests. And so on: the larger the request size, the longer it takes to transfer the data, the fewer IOPS you get, and the higher the throughput. (These are random numbers, pun intended, no measurements done! Just to let you understand. I however think they are in the ballpark.)
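
To make the seek/transfer trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. The 10 ms average seek (roughly 100 random IOPS) and the 100 MB/s sequential transfer rate are illustrative assumptions in the same ballpark spirit as the numbers above, not measurements:

    # Back-of-the-envelope HDD model: every random request pays one average
    # seek, plus the time needed to transfer the requested bytes.
    AVG_SEEK_S = 0.010      # assumed 10 ms average seek -> ~100 random IOPS
    SEQ_RATE_BPS = 100e6    # assumed ~100 MB/s sequential transfer rate

    def hdd_iops(request_size: int) -> float:
        """Random IOPS achievable at a given request size (bytes)."""
        return 1.0 / (AVG_SEEK_S + request_size / SEQ_RATE_BPS)

    def hdd_throughput_mbps(request_size: int) -> float:
        """Resulting throughput in MB/s at a given request size (bytes)."""
        return hdd_iops(request_size) * request_size / 1e6

    for size_kb in (4, 8, 64, 1024, 16 * 1024):
        size = size_kb * 1024
        print(f"{size_kb:>6} KB requests: {hdd_iops(size):6.1f} IOPS, "
              f"{hdd_throughput_mbps(size):7.2f} MB/s")

The shape of the output is the point: small requests are seek-dominated and waste the drive's potential throughput, while large requests amortize the seek and approach the sequential transfer rate.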

SSDs don't suffer mechanical penalties, and that's why they are capable of performing much better than HDDs. They have many more IOPS, and their limits come from the onboard electronics, the bus connection, etc. Having a higher-IOPS device is a big plus: those IOPS can be consumed by applications that are not IOPS-friendly, and the user won't notice that the applications suck. However, with SSDs, the request size linearly influences the number of IOPS you can perform. When you look at a device rated at 100k IOPS, that figure usually refers to 4K requests. With 64K requests you'll be able to perform only about 6.2k of them.
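
Under the linear assumption just described (the bandwidth ceiling, not seeks, is the limit), the rated-IOPS arithmetic is a one-liner; the 100k-at-4K rating is the figure from the paragraph above:

    RATED_IOPS_4K = 100_000   # vendor rating at 4K requests (figure from above)

    def ssd_iops(request_size_kb: float) -> float:
        # Fixed bandwidth ceiling: IOPS scale inversely with request size.
        return RATED_IOPS_4K * 4.0 / request_size_kb

    print(ssd_iops(4))    # 100000.0
    print(ssd_iops(64))   # 6250.0 -- the ~6.2k figure quoted above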


Why does Cassandra have such good read performance, even with HDDs, then?

Speaking from a single node's point of view (because, in terms of cluster performance, Cassandra scales linearly with the number of nodes in the cluster), the answer lies in the question itself: the good read performance only holds if you model your data in this particular way:

  1. You must fetch all your data with one query only.
  2. Your data must be ordered.
  3. If you can't fetch your data with one query, denormalize so that you can retrieve it with one query only.
  4. You fetch a relatively large amount of data on every read.

These are well-known Cassandra modeling rules, but the key point is that these rules really do have a reason to be applied, IOPS-wise. Indeed, these rules allow Cassandra to:

  1. Be a super fast database, because it will just require the partition index and the SSTable offset index of the data: two IOPS in the best case, many more IOPS in the worst case.
  2. Be a super fast database, because it will exploit the sequential capabilities of the HDDs and will not stress the IO subsystem by issuing other (random) IO seeks.
  3. Be a super fast database, because it will fetch more data per seek, as in point number 1.
  4. Be a super fast database, because it will exploit the sequential capabilities of the HDDs for longer.

In other words, following these basic data modeling rules allows Cassandra to be IOPS-friendly when reading data back, as the sketch below illustrates.
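
As a concrete illustration of these rules, here is a minimal sketch using the DataStax Python driver (cassandra-driver). The keyspace, table, and column names are hypothetical, chosen so that a single partition, ordered by its clustering column, answers the whole query:

    # Hypothetical schema, modeled so one query fetches ordered data from
    # a single partition:
    #   CREATE TABLE metrics.readings (
    #       sensor_id text, ts timestamp, value double,
    #       PRIMARY KEY (sensor_id, ts)
    #   ) WITH CLUSTERING ORDER BY (ts DESC);
    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('metrics')

    # One query, one partition, rows already in clustering order: Cassandra
    # can serve this with a couple of index lookups plus a sequential read.
    rows = session.execute(
        "SELECT ts, value FROM readings WHERE sensor_id = %s LIMIT 1000",
        ('sensor-42',),
    )
    for row in rows:
        print(row.ts, row.value)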

What happens if you screw up your data model? Cassandra won't be IOPS-friendly, and as a consequence the performance will be predictably horrible. Unless you use an SSD, which has far more IOPS, in which case you won't notice the slowness too much.

What happens if you read/write small amounts of data (e.g., due to misconfigured flush sizes, a small commit log, etc.)? Cassandra won't be IOPS-friendly, and as a consequence the performance will be predictably horrible. Unless you use an SSD, which has far more IOPS, in which case you won't notice the slowness too much.


How can a read/write ratio pattern influence performance in a Cassandra node?

Cassandra is a complex system, with different components that interact with each other. I will try to explain, from my point of view, the main points to consider when you put everything on one device only.

Writes/Deletes/Updates in Cassandra are fast because they are simply append-only writes to the CommitLog device. Reads, on the contrary, can be very IOPS consuming. When both CommitLog and Data are on the same physical disk (either HDD or SSD), the read/write paths interact, and they both consume IOPS.

Two important questions are:

  1. How many IOPS does a read (via the read path) consume?
  2. How many IOPS does a write consume?

These are important questions, because you have to remember that your device can perform at most X IOPS, and your system will have to split those X IOPS among these operations.

It is quite difficult to answer the "read" question because, when you request some data, Cassandra needs to locate all the SSTables needed to satisfy the request. Assuming a very big dataset, where caching is not effective, this implies that the Cassandra read path can be very IOPS-hungry. Indeed, if your data is spread across 3 different SSTables, Cassandra will have to locate all of them, and for each SSTable it will follow the read path: it will read the partition index, and then it will read the data in the SSTable. That is at least two IOPS per SSTable, because if your filesystem is not "collaborative" enough, locating a file and/or seeking to a file offset can require some more IOPS. In the end, in this example, Cassandra is consuming at least six IOPS per read.
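
The arithmetic of this example is easy to capture in a small helper; the per-step costs are the best-case assumptions from the paragraph above (one IOP for the partition index and one for the data, per SSTable), not measured values:

    def read_iops_estimate(sstables_hit: int, fs_overhead: int = 0) -> int:
        # Each SSTable touched costs one partition-index read plus one data
        # read, plus whatever the filesystem needs to locate files/offsets.
        return sstables_hit * (1 + 1 + fs_overhead)

    print(read_iops_estimate(1))  # 2 -- best case, data in a single SSTable
    print(read_iops_estimate(3))  # 6 -- the example above, 3 SSTables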

Answering the "write" question is also tricky, because compactions and flushes can be triggered, and they consume a lot of IOPS. Flushes are easy to understand: they write data from memtables to disk with a sequential pattern. Compactions, instead, read data back from different SSTables on disk and, while reading the tables, flush the result out to a new disk file. This is a mixed read/write pattern, and on HDDs it is very disruptive, because it forces the disk to perform multiple seeks.


Mixing percentages: TL;DR

If you have a R/W ratio of 95% reads and 5% writes, having a separate CommitLog device can be a waste of resources, because writes will hardly impact your read performance, and you write so rarely that write performance may be considered non-critical.

If you have a R/W ratio of 5% reads and 95% writes, having a separate CommitLog device can again be a waste of resources, because reads will hardly impact your write performance, and your read performance will hardly suffer from a bunch of sequential appends on the commitlog.

And finally, if you have a R/W ratio of 50% reads and 50% writes, having a separate CommitLog device is NOT a waste of resources, because every write performed on the dedicated CommitLog device is a write that no longer costs at least two IOPS on the data drive (one for writing, and one for seeking back to resume reading).
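
A toy budget calculation makes this TL;DR tangible. It assumes the illustrative costs derived above (about six IOPS per read, and two extra IOPS on a shared data drive per write) and ignores caching and compaction:

    def ops_per_second(device_iops: int, read_fraction: float,
                       iops_per_read: int = 6, iops_per_write: int = 2) -> float:
        # Average IOPS cost of one operation at this read/write mix.
        cost = (read_fraction * iops_per_read
                + (1.0 - read_fraction) * iops_per_write)
        return device_iops / cost

    for reads in (0.95, 0.50, 0.05):
        print(f"{reads:.0%} reads -> {ops_per_second(100, reads):5.1f} ops/s "
              f"on a 100-IOPS shared device")

Moving the commitlog appends to a dedicated device effectively lowers iops_per_write on the data drive, which matters most in the 50/50 case.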

Please note that I didn't mention compactions, because independently of your workload, when compaction kicks in, your workload will be disrupted by mixed read/write background operations on different files (consuming disk IOPS all the way), and you will suffer on both reads and writes.

All this should be clear enough for HDDs, because you run out of IOPS very fast, and when you do, you notice it immediately. On SSDs, however, you don't run out of IOPS that fast, though you could if your data consists of a lot of small rows.

The reality is that running out of IOPS on an SSD is very hard, because you'll run out of CPU resources well before (by a wide margin). But once you do, you will see your performance slowly decrease; the effect won't be as dramatic as in the case of HDDs. As an example, if you have a 100-IOPS HDD and you exceed its IOPS by trying to issue 500 random IO requests, you clearly get a penalty. Calling this penalty P, if you have an SSD with 100k IOPS, to get the same penalty P you would have to issue 500k IOPS, which is very difficult to do without exhausting CPU or RAM.

In general, when you run out of some type of resource in your system, you need to increase its quantity. The most important thing (to me) is not to run out of IOPS in the "data" part of your Cassandra cluster. In the case of SSD IOPS, it's rare enough that you'll hit the limit; I think you'll burn your CPU well before that. But you will hit it if you don't tune your system, or if your workload puts too much stress on the disk subsystem (e.g., Leveled Compaction). I'd suggest putting an ordinary HDD, rather than a high-performance SSD, behind the commitlog, and saving the money. But if you have a lot of very small commitlog flushes, an SSD is a complete life saver, because your writers won't suffer the latency of HDDs.

Finally, in my opinion, you should go into pre-production with some sort of real data and check your IOPS requirements. If you have enough headroom to put the commitlog on the SSD, don't worry: go and save money. If your system gets too much pressure due to compaction, then having a separate device is suggested. Analyze your commitlog pattern and, if it's not IOPS-demanding, put it on a separate disk. Moreover, if you have a virtual environment, you can provision a relatively small commitlog device regardless of other factors. It won't raise the cost of your solution too much.
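
For that pre-production check, one minimal way to sample real read/write IOPS is sketched below; it assumes the third-party psutil package is installed and that the data disk appears as 'sda' (a hypothetical device name; adjust for your system):

    import time
    import psutil  # third-party: pip install psutil

    DISK = 'sda'  # assumed name; see psutil.disk_io_counters(perdisk=True)

    before = psutil.disk_io_counters(perdisk=True)[DISK]
    time.sleep(10)  # sample window while your workload runs
    after = psutil.disk_io_counters(perdisk=True)[DISK]

    print(f"read IOPS:  {(after.read_count - before.read_count) / 10:.1f}")
    print(f"write IOPS: {(after.write_count - before.write_count) / 10:.1f}")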

