Use queue and semaphore for concurrency and property wrapper?


Question

I'm trying to create a thread-safe property wrapper. I could only think of GCD queues and semaphores as being the most Swifty and reliable way. Are semaphores just more performant (if that's true), or is there another reason to use one over the other for concurrency?

Below are two variants of atomic property wrappers:

@propertyWrapper
struct Atomic<Value> {
    private var value: Value
    private let queue = DispatchQueue(label: "Atomic serial queue")

    var wrappedValue: Value {
        get { queue.sync { value } }
        set { queue.sync { value = newValue } }
    }

    init(wrappedValue value: Value) {
        self.value = value
    }
}

@propertyWrapper
struct Atomic2<Value> {
    private var value: Value
    private var semaphore = DispatchSemaphore(value: 1)

    var wrappedValue: Value {
        get {
            semaphore.wait()
            let temp = value
            semaphore.signal()
            return temp
        }

        set {
            semaphore.wait()
            value = newValue
            semaphore.signal()
        }
    }

    init(wrappedValue value: Value) {
        self.value = value
    }
}

struct MyStruct {
    @Atomic var counter = 0
    @Atomic2 var counter2 = 0
}

func test() {
    var myStruct = MyStruct()

    DispatchQueue.concurrentPerform(iterations: 1000) {
        myStruct.counter += $0
        myStruct.counter2 += $0
    }
}

How can they be properly tested and measured to see the difference between the two implementations and if they even work?

Answer

FWIW, another option is the reader-writer pattern with a concurrent queue, where reads are performed synchronously and may run concurrently with respect to other reads, while writes are performed asynchronously with a barrier (i.e. not concurrently with respect to any other reads or writes):

@propertyWrapper
class Atomic<Value> {
    private var value: Value
    private let queue = DispatchQueue(label: "com.domain.app.atomic", attributes: .concurrent)

    var wrappedValue: Value {
        get { queue.sync { value } }
        set { queue.async(flags: .barrier) { self.value = newValue } }
    }

    init(wrappedValue value: Value) {
        self.value = value
    }
}

And another variation, with a lock:

@propertyWrapper
struct Atomic<Value> {
    private var value: Value
    private var lock = NSLock()

    var wrappedValue: Value {
        get { lock.synchronized { value } }
        set { lock.synchronized { value = newValue } }
    }

    init(wrappedValue value: Value) {
        self.value = value
    }
}

where:

extension NSLocking {
    func synchronized<T>(block: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try block()
    }
}

We should recognize that while these, and yours, offer atomicity, that alone is not going to provide thread-safe interaction.

Consider this simple experiment, where we increment an integer a million times:

@Atomic var foo = 0

func threadSafetyExperiment() {
    DispatchQueue.global().async {
        DispatchQueue.concurrentPerform(iterations: 1_000_000) { _ in
            self.foo += 1
        }
        print(self.foo)
    }
}

You’d expect foo to be equal to 1,000,000, but it won’t be. It’s because the whole interaction of "retrieve the value and increment it and save it" needs to be wrapped in a single synchronization mechanism.

So, you’re back to non-property wrapper sorts of solutions, e.g.

class Synchronized<Value> {
    private var _value: Value
    private let lock = NSLock()

    init(_ value: Value) {
        self._value = value
    }

    var value: Value {
        get { lock.synchronized { _value } }
        set { lock.synchronized { _value = newValue } }
    }

    func synchronized(block: (inout Value) -> Void) {
        lock.synchronized {
            block(&_value)
        }
    }
}

And that works fine:

var foo = Synchronized<Int>(0)

func threadSafetyExperiment() {
    DispatchQueue.global().async {
        DispatchQueue.concurrentPerform(iterations: 1_000_000) { _ in
            self.foo.synchronized { value in
                value += 1
            }
        }
        print(self.foo.value)
    }
}

How can they be properly tested and measured to see the difference between the two implementations and if they even work?

A few thoughts:


  • I’d suggest doing far more than 1000 iterations. You want to do enough iterations that the results are measured in seconds, not milliseconds. Personally I used a million iterations.

  • The unit testing framework is ideal both for testing correctness and for measuring performance with the measure method (which repeats each performance test 10 times, with the results captured in the unit test reports).

  • So, create a project with a unit test target (or add a unit test target to an existing project if you want), then create unit tests and execute them with command+U.

  • If you edit the scheme for your target, you can choose to randomize the order of your tests, to make sure the order in which they execute doesn’t affect the performance.

  • I’d also make the test target use a release build to make sure you’re testing an optimized build.

This is an example of a variety of different synchronization techniques, using a GCD serial queue, a GCD concurrent queue (reader-writer), a lock, an unfair lock, and a semaphore:

class SynchronizedSerial<Value> {
    private var _value: Value
    private let queue = DispatchQueue(label: "com.domain.app.atomic")

    required init(_ value: Value) {
        self._value = value
    }

    var value: Value {
        get { queue.sync { _value } }
        set { queue.async { self._value = newValue } }
    }

    func synchronized<T>(block: (inout Value) throws -> T) rethrows -> T {
        try queue.sync {
            try block(&_value)
        }
    }

    func writer(block: @escaping (inout Value) -> Void) -> Void {
        queue.async {
            block(&self._value)
        }
    }
}

class SynchronizedReaderWriter<Value> {
    private var _value: Value
    private let queue = DispatchQueue(label: "com.domain.app.atomic", attributes: .concurrent)

    required init(_ value: Value) {
        self._value = value
    }

    var value: Value {
        get { queue.sync { _value } }
        set { queue.async(flags: .barrier) { self._value = newValue } }
    }

    func synchronized<T>(block: (inout Value) throws -> T) rethrows -> T {
        try queue.sync(flags: .barrier) {
            try block(&_value)
        }
    }

    func reader<T>(block: (Value) throws -> T) rethrows -> T {
        try queue.sync {
            try block(_value)
        }
    }

    func writer(block: @escaping (inout Value) -> Void) -> Void {
        queue.async(flags: .barrier) {
            block(&self._value)
        }
    }
}

struct SynchronizedLock<Value> {
    private var _value: Value
    private let lock = NSLock()

    init(_ value: Value) {
        self._value = value
    }

    var value: Value {
        get { lock.synchronized { _value } }
        set { lock.synchronized { _value = newValue } }
    }

    mutating func synchronized<T>(block: (inout Value) throws -> T) rethrows -> T {
        try lock.synchronized {
            try block(&_value)
        }
    }
}

/// Unfair lock synchronization
///
/// - Warning: The documentation warns us: "In general, higher level synchronization primitives such as those provided by the pthread or dispatch subsystems should be preferred."

class SynchronizedUnfairLock<Value> {
    private var _value: Value
    private var lock = os_unfair_lock()

    required init(_ value: Value) {
        self._value = value
    }

    var value: Value {
        get { synchronized { $0 } }
        set { synchronized { $0 = newValue } }
    }

    func synchronized<T>(block: (inout Value) throws -> T) rethrows -> T {
        os_unfair_lock_lock(&lock)
        defer { os_unfair_lock_unlock(&lock) }
        return try block(&_value)
    }
}

struct SynchronizedSemaphore<Value> {
    private var _value: Value
    private let semaphore = DispatchSemaphore(value: 1)

    init(_ value: Value) {
        self._value = value
    }

    var value: Value {
        get { semaphore.waitAndSignal { _value } }
        set { semaphore.waitAndSignal { _value = newValue } }
    }

    mutating func synchronized<T>(block: (inout Value) throws -> T) rethrows -> T {
        try semaphore.waitAndSignal {
            try block(&_value)
        }
    }
}

extension NSLocking {
    func synchronized<T>(block: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try block()
    }
}

extension DispatchSemaphore {
    func waitAndSignal<T>(block: () throws -> T) rethrows -> T {
        wait()
        defer { signal() }
        return try block()
    }
}
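A quick sanity check might exercise the class-based wrappers like this (a sketch only; note that the struct-based variants would need var storage, and calling a mutating method on the same variable from multiple threads at once is itself a race under Swift's exclusivity checking):

```swift
import Foundation

// Exercise one of the class-based wrappers defined above.
let counter = SynchronizedUnfairLock<Int>(0)

DispatchQueue.concurrentPerform(iterations: 1_000_000) { _ in
    counter.synchronized { $0 += 1 }
}

// With correct synchronization this prints 1000000; a lost update
// (as in the naive Atomic experiment earlier) would print less.
print(counter.value)
```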
