How does Swift manage Arrays internally?

Question

I would like to know how Swift manages arrays internally. Apple's language guide (https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/CollectionTypes.html#//apple_ref/doc/uid/TP40014097-CH8-ID105) only covers usage, but does not elaborate on the internal structure.

As a Java developer, I am used to looking at "bare" arrays as a very static and fixed data structure. I know that this is not true in Swift. In Swift, unlike in Java, you can mutate the length of an array and also perform insert and delete operations. In Java I am used to deciding which data structure I want to use (simple arrays, ArrayList, LinkedList etc.) based on the operations I want to perform with that structure, and thus optimising my code for better performance.

In conclusion, I would like to know how arrays are implemented in Swift. Are they internally managed as (double) linked lists? And is there anything available comparable to Java's Collection Framework in order to tune for better performance?

Answer

You can find a lot of information on Array in the comment above it in the Swift standard library. To see this, you can cmd-opt-click Array in a playground, or you could look at it in the unofficial SwiftDoc page.

To paraphrase some of the info from there to answer your questions:

Arrays created in Swift hold their values in a contiguous region of memory. For this reason, you can efficiently pass a Swift array into a C API that requires that kind of structure.
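
As a rough illustration of that contiguous layout, the sketch below hands an array to a C-style routine as a base pointer plus a count via the standard withUnsafeBufferPointer API. The function cStyleSum is a hypothetical stand-in for a real imported C API:

// Hypothetical stand-in for a C function imported via a bridging header.
func cStyleSum(_ base: UnsafePointer<Int32>?, _ count: Int) -> Int32 {
    guard let base = base else { return 0 }
    var total: Int32 = 0
    for i in 0..<count { total += base[i] }
    return total
}

let values: [Int32] = [1, 2, 3, 4, 5]

// The elements sit in one contiguous buffer, so the array can be viewed
// as a pointer + length pair without copying.
let sum = values.withUnsafeBufferPointer { buffer in
    cStyleSum(buffer.baseAddress, buffer.count)
}
print(sum) // 15

Swift will also pass an array directly where an UnsafePointer<Int32> parameter is expected, e.g. cStyleSum(values, values.count), which relies on the same contiguous storage.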

As you mention, an array can grow as you append values to it, and at certain points that means that a fresh, larger region of memory is allocated and the previous values are copied into it. It is for this reason that it's stated that operations like append may be O(n) – that is, the worst-case time to perform an append operation grows in proportion to the current size of the array (because of the time taken to copy the values over).

However, when the array has to grow its storage, the amount of new storage it allocates each time grows exponentially, which means that reallocations become rarer and rarer as you append, which means the "amortized" time to append over all calls approaches constant time.
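
One way to watch this growth strategy is to print the array's capacity property whenever it jumps while appending. This is a minimal sketch; the exact growth factor and the printed numbers are implementation details, so treat the output as illustrative only:

var numbers: [Int] = []
var lastCapacity = numbers.capacity

for i in 0..<1_000 {
    numbers.append(i)
    // A change in capacity means a reallocation (and copy) just happened.
    if numbers.capacity != lastCapacity {
        lastCapacity = numbers.capacity
        print("count: \(numbers.count), capacity: \(lastCapacity)")
    }
}
// Typical output shows the capacity roughly doubling: 1, 2, 4, 8, 16, ...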

Arrays also have a method, reserveCapacity, that allows you to preemptively avoid reallocations on calling append by requesting the array allocate itself some minimum amount of space up front. You can use this if you know ahead of time how many values you plan to hold in the array.
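
For instance, if you are about to append a known number of elements, a single reserveCapacity call up front avoids the intermediate reallocations (a minimal sketch):

let expectedCount = 10_000

var samples: [Double] = []
samples.reserveCapacity(expectedCount) // one allocation up front

for i in 0..<expectedCount {
    // No reallocation occurs while count stays within the reserved capacity.
    samples.append(Double(i) * 0.5)
}
print(samples.capacity >= expectedCount) // true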

Inserting a new value into the middle of an array is also O(n), because arrays are held in contiguous memory, so inserting a new value involves shuffling subsequent values along to the end. Unlike appending though, this does not improve over multiple calls. This is very different from, say, a linked list where you can insert in O(1) i.e. constant time. But bear in mind the big tradeoff is that arrays are also randomly accessible in constant time, unlike linked lists.
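
The relevant standard-library calls are insert(_:at:) and remove(at:); each one shifts every later element by one position, which is where the linear cost comes from, while subscripting stays constant-time:

var letters = ["a", "b", "d", "e"]

// Inserting at index 2 shifts "d" and "e" one slot to the right.
letters.insert("c", at: 2)
print(letters)          // ["a", "b", "c", "d", "e"]

// Removing the first element shifts every remaining element left.
let removed = letters.remove(at: 0)
print(removed, letters) // a ["b", "c", "d", "e"]

// Random access by index is constant time regardless of position.
print(letters[3])       // e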

Changes to single values in the array in-place (i.e. assigning via a subscript) should be O(1) (subscript doesn't actually have a documenting comment but this is a pretty safe bet). This means if you create an array, populate it, and then don't append or insert into it, it should behave similarly to a Java array in terms of performance.

There's one caveat to all this – arrays have "value" semantics. This means if you have an array variable a, and you assign it to another array variable b, this is essentially copying the array. Subsequent changes to the values in a will not affect b, and changing b will not affect a. This is unlike "reference" semantics where both a and b point to the same array and any changes made to it via a would be reflected to someone looking at it via b.
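
A short sketch of that value behaviour:

var a = [1, 2, 3]
var b = a          // conceptually a copy of the array

a.append(4)
b[0] = 99

print(a)           // [1, 2, 3, 4]  (unaffected by the change to b)
print(b)           // [99, 2, 3]    (unaffected by the append to a)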

However, Swift arrays are actually "Copy-on-Write". That is, when you assign a to b no copying actually takes place. It only happens when one of the two variables is changed ("mutated"). This brings a big performance benefit, but it does mean that if two arrays are referencing the same storage because neither has performed a write since the copy, a change like a subscript assign does have a one-off cost of duplicating the entire array at that point.
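
There is no official API for observing the sharing directly, but comparing buffer base addresses gives a rough picture. The comparison below is purely illustrative; the language does not guarantee this observation:

let original = Array(repeating: 0, count: 1_000)
var copy = original            // no copying yet; storage is shared

// Do two arrays currently appear to share the same buffer?
func appearToShareStorage(_ x: [Int], _ y: [Int]) -> Bool {
    x.withUnsafeBufferPointer { xBuf in
        y.withUnsafeBufferPointer { yBuf in
            xBuf.baseAddress == yBuf.baseAddress
        }
    }
}

print(appearToShareStorage(original, copy)) // typically true: still shared

copy[0] = 42                   // the first mutation pays the one-off copy cost

print(appearToShareStorage(original, copy)) // false: copy now owns its own buffer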

For the most part, you shouldn't need to worry about any of this except in rare circumstances (especially when dealing with small-to-modest-size arrays), but if performance is critical to you it's definitely worth familiarizing yourself with all of the documentation in that link.
