I am trying to implement a resource handler class. It must be deadlock-free and starvation-free


Problem description

This is a homework assignment. I have two files which were already given: the Client class and the SharedValues interface. Here is the description:

I need to write a class (a resource handler) which contains static interfaces and manages the allocation and scheduling of "n" resources for "k" clients. The clients must have only two operations, reading and writing, with no deadlock and no starvation. If a resource is allocated for writing, then other clients can't have it for any other purpose. If a resource is allocated for reading, then only readers can get it, not writers. A key can free the resources it got at allocation. Resources are referenced by String names.

The Resource Handler must expose two interfaces to the clients: getLock() and releaseLock(). The required argument of getLock() is an object (Set<String>) holding the names of the resources, plus the desired operation (boolean: true - writing, false - reading); the return value is an identifier (long). As long as the resource handler can't give the requested resources to the client, it should block that client on the getLock() call; the client is unblocked again when the requested resources become available for the given operation. The return value of releaseLock() is void; its required argument is the identifier received from the getLock() call. The clients request the lock/release of subsets of the resources from the resource handler class (through getLock()) and release the resources by the received identifier (through releaseLock()).
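As a Java skeleton, the described surface might look like the stub below. Only the two method names, the Set<String> argument, the boolean direction flag, and the long identifier come from the assignment; the class name and bodies are placeholders:

```java
import java.util.Set;

// Skeleton of the required interface. Only the signatures are fixed by
// the assignment; the bodies here are placeholders.
class ResHandlerSkeleton {
    // Blocks the calling client until EVERY resource in the set is
    // available for the requested operation (true = write, false = read),
    // then returns an identifier for the granted set.
    public static long getLock(Set<String> neededResources, boolean write) {
        throw new UnsupportedOperationException("not implemented yet");
    }

    // Frees every resource that was granted under this identifier.
    public static void releaseLock(long lockId) {
        throw new UnsupportedOperationException("not implemented yet");
    }
}
```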

I am not a Java pro and I have only a little experience with multithreading; please take that into account.

The following class and interface were given:

The SharedValues interface

    public interface SharedValues
    {
        //resources
        public final static String[] RESOURCE_LIST = new String[]{ "a", "b", "c", "d", "e", "f", "g", "h" };

        //constant of the client's writing method type
        public final static boolean WRITE_METHOD = true;

        //constant of the client's reading method type
        public final static boolean READ_METHOD = false;

        //constant for the number of clients
        public final static int CLIENTNUM = 5;

        //minimum wait time of the client
        public final static int CLIENT_HOLD_MINIMUM = 1000;

        //maximum wait time difference of the client
        public final static int CLIENT_HOLD_DIFF = 1000;

        //time limit of the clients
        public final static int RUNTIME = 20000;
    }

The Client class

import java.util.Arrays;
import java.util.Collections;
import java.util.Date;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;
import java.util.ArrayList;

//start and implementation of the client
public class Client extends Thread implements SharedValues
{
    //used random for creating clients
    private static Random mRandom = new Random();

    //stopping flag
    private boolean mRunning = true;

    //used method in client
    private boolean mMethod = true;

    //the clients want to lock these resources
    private Set<String> mNeededRes = new HashSet<String>();

    //received identifier for releasing the client's resources
    private long mLockID = -1;

    //client's logging name
    private String mLogName = null;

    //client's constructor
    public Client( String[] xResList, boolean xMethod, int xClientID )
    {
        super( "Client_" + xClientID );
        mLogName = "Client_" + xClientID;
        mMethod = xMethod;

        for ( int i = 0; i < xResList.length; i++ )
        {
            mNeededRes.add( xResList[ i ] );
        }
    }

    //interface for logging
    private void log( String xMessage )
    {
        System.out.println( new Date() + " " + mLogName + ": " + xMessage );
    }

    //holding resources or sleeping
    private synchronized void holdResources()
    {
        if ( !mRunning )
        {
            return;
        }

        //sleep for a random time in the [CLIENT_HOLD_MINIMUM, CLIENT_HOLD_MINIMUM + CLIENT_HOLD_DIFF) interval
        try
        {
            wait( mRandom.nextInt( CLIENT_HOLD_DIFF ) + CLIENT_HOLD_MINIMUM );
        }
        catch ( InterruptedException e )
        {
            log( "Error: Resource allocating interrupted" );
        }
    }

    //interface for stopping the client
    public synchronized void stopRunning() throws Exception
    {
        //change the flag and wake the client if it is sleeping
        if ( mRunning )
        {
            mRunning = false;
            notify();
        }
        else
        {
            log( "Error: the client has already stopped!" );
        }
    }

    //Overriding the Thread method
    public void run()
    {
        log( "Started." );

        while ( mRunning )
        {
            log( ( ( mMethod == WRITE_METHOD ) ? "Writing" : "Reading" ) + " requested resources: "
                + toSortedSet( mNeededRes ) );

            final long startTime = System.currentTimeMillis();
            mLockID = ResHandler.getLock( mNeededRes, mMethod );
            final long elapsed = System.currentTimeMillis() - startTime;

            log( ( ( mMethod == WRITE_METHOD ) ? "Writing" : "Reading" ) + " received resources (" + elapsed
                + " ms): " + toSortedSet( mNeededRes ) + ". Lock: " + mLockID );

            holdResources();

            ResHandler.releaseLock( mLockID );

            holdResources();
        }

        log( "Stopped." );
    }

    //creating clients
    private static Client createClient( int xClientID )
    {
        final int resNum = mRandom.nextInt( RESOURCE_LIST.length ) + 1;

        //randomly take out one of all resources
        final ArrayList<String> selectedRes = new ArrayList<String>( Arrays.asList( RESOURCE_LIST ) );

        for ( int i = 0; i < ( RESOURCE_LIST.length - resNum ); i++ )
        {
            final int chosenRes = mRandom.nextInt( selectedRes.size() );

            selectedRes.remove( chosenRes );
        }

        final boolean method = mRandom.nextInt( 5 ) <= 2;

        return new Client( ( String[] ) selectedRes.toArray( new String[]{} ), method, xClientID );
    }

    //auxiliary method: sorts the subset's elements so logging is deterministic
    private String toSortedSet( Set<String> xSet )
    {
        final StringBuffer tmpSB = new StringBuffer( "{ " );

        final String[] sortedRes = ( String[] ) xSet.toArray( new String[]{} );
        Arrays.sort( sortedRes );

        for ( int i = 0; i < sortedRes.length; i++ )
        {
            tmpSB.append( sortedRes[ i ] ).append( ", " );
        }
        tmpSB.setLength( tmpSB.length() - 2 );
        tmpSB.append( " }" );

        return tmpSB.toString();
    }

    public static void main( String[] args ) throws Exception
    {
        //keep the clients so they can be stopped later
        final Client[] clientArr = new Client[ CLIENTNUM ];

        for ( int i = 0; i < clientArr.length; i++ )
        {
            clientArr[ i ] = createClient( i );
            clientArr[ i ].start();

            //the clients do not start at the same time
            try
            {
                Thread.sleep( mRandom.nextInt( CLIENT_HOLD_MINIMUM ) );
            }
            catch ( InterruptedException e )
            {
                e.printStackTrace();
            }
        }

        //sleeping the running time of clients
        try
        {
            Thread.sleep( RUNTIME );
        }
        catch ( InterruptedException e )
        {
            e.printStackTrace();
        }

        //stopping clients
        for ( int i = 0; i < clientArr.length; i++ )
        {
            clientArr[ i ].stopRunning();

            try
            {
                clientArr[ i ].join();
            }
            catch ( InterruptedException e )
            {
                e.printStackTrace();
            }
        }
    }
}

This is what I wrote so far. In the client log I see that the Lock is always 0 and the elapsed time is 0 ms, but I do not know why.

The resource handler class

import java.util.Set;


    class ResHandler {

        private static long identifier;

        public static long getLock(Set<String> mNeededRes, boolean mMethod) {

            return identifier;
        }

        public static void releaseLock(long mLockID) {

        }
    }

Here is the output:

 Wed Oct 09 04:42:25 CEST 2013 Client_0: Started.

Wed Oct 09 04:42:25 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }

Wed Oct 09 04:42:25 CEST 2013 Client_0: Writing received resources (4 ms): { b, c, d, g, h }. Lock: 0

Wed Oct 09 04:42:26 CEST 2013 Client_1: Started.

Wed Oct 09 04:42:26 CEST 2013 Client_1: Writing requested resources: { a, b, c, d, e, f, g, h }

Wed Oct 09 04:42:26 CEST 2013 Client_1: Writing received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0

Wed Oct 09 04:42:26 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }

Wed Oct 09 04:42:26 CEST 2013 Client_0: Writing received resources (0 ms): { b, c, d, g, h }. Lock: 0

Wed Oct 09 04:42:26 CEST 2013 Client_1: Writing requested resources: { a, b, c, d, e, f, g, h }

Wed Oct 09 04:42:26 CEST 2013 Client_1: Writing received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0

Wed Oct 09 04:42:26 CEST 2013 Client_2: Started.

Wed Oct 09 04:42:26 CEST 2013 Client_2: Writing requested resources: { a, b, d, e, f, g, h }

Wed Oct 09 04:42:26 CEST 2013 Client_2: Writing received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 0

Wed Oct 09 04:42:27 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }

Wed Oct 09 04:42:27 CEST 2013 Client_0: Writing received resources (0 ms): { b, c, d, g, h }. Lock: 0

Wed Oct 09 04:42:27 CEST 2013 Client_3: Started.

Wed Oct 09 04:42:27 CEST 2013 Client_3: Reading requested resources: { a, b, c, d, e, f, g, h }

Wed Oct 09 04:42:27 CEST 2013 Client_3: Reading received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0

Wed Oct 09 04:42:27 CEST 2013 Client_4: Started.

Wed Oct 09 04:42:27 CEST 2013 Client_4: Reading requested resources: { f, h }

Wed Oct 09 04:42:27 CEST 2013 Client_4: Reading received resources (0 ms): { f, h }. Lock: 0

Wed Oct 09 04:42:27 CEST 2013 Client_1: Writing requested resources: { a, b, c, d, e, f, g, h }

Wed Oct 09 04:42:27 CEST 2013 Client_1: Writing received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0

Wed Oct 09 04:42:28 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }

Wed Oct 09 04:42:28 CEST 2013 Client_0: Writing received resources (0 ms): { b, c, d, g, h }. Lock: 0

Wed Oct 09 04:42:28 CEST 2013 Client_4: Reading requested resources: { f, h }

Wed Oct 09 04:42:28 CEST 2013 Client_4: Reading received resources (0 ms): { f, h }. Lock: 0

Wed Oct 09 04:42:28 CEST 2013 Client_3: Reading requested resources: { a, b, c, d, e, f, g, h }

Wed Oct 09 04:42:28 CEST 2013 Client_3: Reading received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0

Wed Oct 09 04:42:28 CEST 2013 Client_2: Writing requested resources: { a, b, d, e, f, g, h }

Wed Oct 09 04:42:28 CEST 2013 Client_2: Writing received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 0

Wed Oct 09 04:42:28 CEST 2013 Client_3: Reading requested resources: { a, b, c, d, e, f, g, h }

Wed Oct 09 04:42:28 CEST 2013 Client_3: Reading received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0

Wed Oct 09 04:42:29 CEST 2013 Client_1: Writing requested resources: { a, b, c, d, e, f, g, h }

Wed Oct 09 04:42:29 CEST 2013 Client_1: Writing received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0

Wed Oct 09 04:42:29 CEST 2013 Client_2: Writing requested resources: { a, b, d, e, f, g, h }

Wed Oct 09 04:42:29 CEST 2013 Client_2: Writing received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 0

Wed Oct 09 04:42:29 CEST 2013 Client_4: Reading requested resources: { f, h }

Wed Oct 09 04:42:29 CEST 2013 Client_4: Reading received resources (0 ms): { f, h }. Lock: 0

Wed Oct 09 04:42:29 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }

Wed Oct 09 04:42:29 CEST 2013 Client_0: Writing received resources (0 ms): { b, c, d, g, h }. Lock: 0

...

I found a half solution on the internet: a resource manager with ReentrantLocks

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ResHandler {

    //ID-s of the granted resource lists
    private static long lockNum = 0;

    //Resources are identified by strings, each client has a list of demanded resources
    //we store these when granted, along with an ID
    private static ConcurrentHashMap<Long, Set<String>> usedResources
        = new ConcurrentHashMap<Long, Set<String>>();

    //We store a lock for each resource
    private static ConcurrentHashMap<String, ReentrantReadWriteLock> resources
        = new ConcurrentHashMap<String, ReentrantReadWriteLock>();

    //Filling our resources map with the resources and their locks
    static {
        for (int i = 0; i < SharedValues.RESOURCE_LIST.length; ++i) {
            String res = SharedValues.RESOURCE_LIST[i];
            //Fair reentrant lock
            ReentrantReadWriteLock lc = new ReentrantReadWriteLock(true);
            resources.put(res, lc);
        }
    }

    //We get a set of the required resources and the type of lock we have to use
    public static long getLock(Set<String> mNeededRes, boolean mMethod) {
        //!!!
        if (mMethod == SharedValues.READ_METHOD) {

            //We try to get the required resources one by one
            for (String mn : mNeededRes)
                resources.get(mn).readLock().lock();

            //After granted, we put them in the usedResources map
            ++lockNum;
            usedResources.put(lockNum, mNeededRes);
            return lockNum;
        }

        //Same thing, but with write locks
        else {

            for (String mn : mNeededRes)
                resources.get(mn).writeLock().lock();

            ++lockNum;
            usedResources.put(lockNum, mNeededRes);
            return lockNum;
        }
    }

    //Releasing a set of locks by the set's ID
    public static void releaseLock(long mLockID) {
        if (!usedResources.containsKey(mLockID)) {
            System.out.println("returned, no such key as: " + mLockID);
            return;
        }

        Set<String> toBeReleased = usedResources.get(mLockID);

        //Unlocking every lock from this set
        for (String s : toBeReleased) {
            if (resources.get(s).isWriteLockedByCurrentThread())
                resources.get(s).writeLock().unlock();
            else
                resources.get(s).readLock().unlock();
        }

        //Deleting from the map
        usedResources.remove(mLockID);
    }
}

I tried this, and the output changed to the following:

Fri Oct 11 10:14:40 CEST 2013 Client_0: Started.

Fri Oct 11 10:14:40 CEST 2013 Client_0: Reading requested resources: { b, c, h }

Fri Oct 11 10:14:40 CEST 2013 Client_0: Reading received resources (8 ms): { b, c, h }. Lock: 1

Fri Oct 11 10:14:40 CEST 2013 Client_1: Started.

Fri Oct 11 10:14:40 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }

Fri Oct 11 10:14:40 CEST 2013 Client_1: Reading received resources (1 ms): { a, b, c, d, f, g, h }. Lock: 2

Fri Oct 11 10:14:40 CEST 2013 Client_2: Started.

Fri Oct 11 10:14:40 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }

Fri Oct 11 10:14:40 CEST 2013 Client_2: Reading received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 3

Fri Oct 11 10:14:40 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }

Fri Oct 11 10:14:40 CEST 2013 Client_2: Reading received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 4

Fri Oct 11 10:14:41 CEST 2013 Client_3: Started.

Fri Oct 11 10:14:41 CEST 2013 Client_3: Writing requested resources: { h }

Fri Oct 11 10:14:41 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }

Fri Oct 11 10:14:41 CEST 2013 Client_0: Reading requested resources: { b, c, h }

Fri Oct 11 10:14:41 CEST 2013 Client_3: Writing received resources (303 ms): { h }. Lock: 5

Fri Oct 11 10:14:41 CEST 2013 Client_1: Reading received resources (293 ms): { a, b, c, d, f, g, h }. Lock: 6

Fri Oct 11 10:14:41 CEST 2013 Client_0: Reading received resources (171 ms): { b, c, h }. Lock: 7

Fri Oct 11 10:14:41 CEST 2013 Client_3: Writing requested resources: { h }

Fri Oct 11 10:14:41 CEST 2013 Client_4: Started.

Fri Oct 11 10:14:41 CEST 2013 Client_4: Reading requested resources: { a, b, c, d, e, f, g, h }

Fri Oct 11 10:14:42 CEST 2013 Client_3: Writing received resources (633 ms): { h }. Lock: 8

Fri Oct 11 10:14:42 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }

Fri Oct 11 10:14:42 CEST 2013 Client_4: Reading received resources (819 ms): { a, b, c, d, e, f, g, h }. Lock: 9

Fri Oct 11 10:14:42 CEST 2013 Client_2: Reading received resources (163 ms): { a, b, d, e, f, g, h }. Lock: 10

Fri Oct 11 10:14:42 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }

Fri Oct 11 10:14:42 CEST 2013 Client_1: Reading received resources (0 ms): { a, b, c, d, f, g, h }. Lock: 11

Fri Oct 11 10:14:42 CEST 2013 Client_0: Reading requested resources: { b, c, h }

Fri Oct 11 10:14:42 CEST 2013 Client_0: Reading received resources (0 ms): { b, c, h }. Lock: 12

Fri Oct 11 10:14:42 CEST 2013 Client_3: Writing requested resources: { h }

Fri Oct 11 10:14:42 CEST 2013 Client_0: Reading requested resources: { b, c, h }

Fri Oct 11 10:14:43 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }

Fri Oct 11 10:14:43 CEST 2013 Client_3: Writing received resources (447 ms): { h }. Lock: 13

Fri Oct 11 10:14:43 CEST 2013 Client_0: Reading received resources (504 ms): { b, c, h }. Lock: 14

Fri Oct 11 10:14:43 CEST 2013 Client_1: Reading received resources (210 ms): { a, b, c, d, f, g, h }. Lock: 15

Fri Oct 11 10:14:43 CEST 2013 Client_4: Reading requested resources: { a, b, c, d, e, f, g, h }

Fri Oct 11 10:14:43 CEST 2013 Client_4: Reading received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 16

Fri Oct 11 10:14:43 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }

Fri Oct 11 10:14:43 CEST 2013 Client_2: Reading received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 17

Fri Oct 11 10:14:43 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }

Fri Oct 11 10:14:43 CEST 2013 Client_1: Reading received resources (0 ms): { a, b, c, d, f, g, h }. Lock: 18

Fri Oct 11 10:14:44 CEST 2013 Client_3: Writing requested resources: { h }

Fri Oct 11 10:14:44 CEST 2013 Client_3: Writing received resources (152 ms): { h }. Lock: 19

Fri Oct 11 10:14:44 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }

Fri Oct 11 10:14:44 CEST 2013 Client_0: Reading requested resources: { b, c, h }

Fri Oct 11 10:14:44 CEST 2013 Client_4: Reading requested resources: { a, b, c, d, e, f, g, h }

Fri Oct 11 10:14:44 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }

Fri Oct 11 10:14:45 CEST 2013 Client_0: Reading received resources (504 ms): { b, c, h }. Lock: 21

Fri Oct 11 10:14:45 CEST 2013 Client_4: Reading received resources (399 ms): { a, b, c, d, e, f, g, h }. Lock: 22

Fri Oct 11 10:14:45 CEST 2013 Client_1: Reading received resources (230 ms): { a, b, c, d, f, g, h }. Lock: 23

Fri Oct 11 10:14:45 CEST 2013 Client_2: Reading received resources (544 ms): { a, b, d, e, f, g, h }. Lock: 20

Fri Oct 11 10:14:45 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }

Fri Oct 11 10:14:45 CEST 2013 Client_1: Reading received resources (0 ms): { a, b, c, d, f, g, h }. Lock: 24

Fri Oct 11 10:14:45 CEST 2013 Client_3: Writing requested resources: { h }

Fri Oct 11 10:14:45 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }

Fri Oct 11 10:14:45 CEST 2013 Client_0: Reading requested resources: { b, c, h }

Fri Oct 11 10:14:46 CEST 2013 Client_4: Reading requested resources: { a, b, c, d, e, f, g, h }

But here the program freezes. I guess it's because of a deadlock.

My question is: how can I fix this? I would really appreciate it if somebody could show me a working code example.

Recommended answer

Trying to get the requested locks by looping over the individual per-resource locks will inevitably result in deadlocks. No client should be granted any lock at all unless the entire set it requires is available.
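A hypothetical two-client run shows the circular wait concretely: each client takes its first resource and then needs one the other already holds. The demo below uses tryLock() instead of the blocking lock() the real code calls, only so it can report the cycle rather than hang:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

// Two clients acquiring per-resource locks in opposite orders: neither can
// ever get its second resource, which is exactly the frozen-program symptom.
public class DeadlockDemo {
    static boolean[] run() throws InterruptedException {
        final ReentrantLock a = new ReentrantLock();
        final ReentrantLock b = new ReentrantLock();
        final CountDownLatch bothHoldFirst = new CountDownLatch(2);
        final boolean[] gotSecond = new boolean[2];

        Thread client1 = new Thread(() -> {
            a.lock();                       // client 1 takes "a" first
            bothHoldFirst.countDown();
            awaitQuietly(bothHoldFirst);    // wait until client 2 holds "b"
            gotSecond[0] = b.tryLock();     // ...then needs "b": unavailable
        });
        Thread client2 = new Thread(() -> {
            b.lock();                       // client 2 takes "b" first
            bothHoldFirst.countDown();
            awaitQuietly(bothHoldFirst);    // wait until client 1 holds "a"
            gotSecond[1] = a.tryLock();     // ...then needs "a": unavailable
        });
        client1.start();
        client2.start();
        client1.join();
        client2.join();
        return gotSecond;                   // { false, false }: circular wait
    }

    private static void awaitQuietly(CountDownLatch latch) {
        try { latch.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] r = run();
        System.out.println(r[0] + " " + r[1]); // prints "false false"
    }
}
```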

One solution:

Use only ONE LOCK that allows one client at a time into a 'critical section' which controls access to the free/allocated sets, the algorithm for checking whether all the required 'locks' are available, and the code for releasing 'locks'. If a client enters this critical section with a requirement that cannot be satisfied immediately and in its entirety, create an event/semaphore for it to wait on, store its requirements and the event/semaphore in a container (generate the ID here, so that the data can be looked up again on release), leave the critical section and wait on the event/semaphore, so blocking the client without giving it any locks. When a client enters the critical section to release locks, use the ID to find its data in the container, mark its allocated resources as free, remove it from the container, and then iterate over the container looking for any blocked clients that can now obtain ALL their requested locks. If one is found, mark its locks as allocated, leave the critical section and signal that client's event/semaphore, allowing it to run on with ALL its locks allocated.
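The approach above can be sketched in Java. This is a minimal sketch under assumed names (SimpleResHandler, plus the assignment's getLock/releaseLock signatures); it uses the class monitor as the single lock and wait/notifyAll in place of per-client events/semaphores, and it does not yet include the anti-starvation rotation:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// All bookkeeping lives behind one lock (the class monitor). A request is
// granted only when the WHOLE set is available; otherwise the caller waits
// while holding nothing, so no circular wait can form.
class SimpleResHandler {
    private static long nextId = 0;
    private static final Map<String, Integer> readers = new HashMap<>(); // resource -> reader count
    private static final Set<String> writtenRes = new HashSet<>();       // resources held for writing
    private static final Map<Long, Set<String>> grantedRes = new HashMap<>();
    private static final Map<Long, Boolean> grantedWrite = new HashMap<>();

    // A set is available if none of its resources is write-locked and,
    // for a write request, none of them has readers either.
    private static boolean available(Set<String> res, boolean write) {
        for (String r : res) {
            if (writtenRes.contains(r)) return false;
            if (write && readers.getOrDefault(r, 0) > 0) return false;
        }
        return true;
    }

    public static synchronized long getLock(Set<String> res, boolean write)
            throws InterruptedException {
        while (!available(res, write)) {
            SimpleResHandler.class.wait(); // static synchronized locks the Class object
        }
        for (String r : res) {
            if (write) writtenRes.add(r);
            else readers.merge(r, 1, Integer::sum);
        }
        long id = ++nextId;
        grantedRes.put(id, res);
        grantedWrite.put(id, write);
        return id;
    }

    public static synchronized void releaseLock(long lockId) {
        Set<String> res = grantedRes.remove(lockId);
        if (res == null) return; // unknown id
        boolean write = grantedWrite.remove(lockId);
        for (String r : res) {
            if (write) writtenRes.remove(r);
            else readers.merge(r, -1, Integer::sum);
        }
        SimpleResHandler.class.notifyAll(); // let every blocked request re-check
    }
}
```

Because every release wakes all waiters and each one re-checks its whole set, no client ever holds a partial set. Fairness is not guaranteed by this sketch: a small request can keep overtaking a large one, which is what the rotation idea in the PS addresses.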

The trick with complex lock schemes is to not use complex lock schemes :)

You can write the code - it is homework, after all :)

PS - starvation. You can implement anti-starvation in any way you wish. One way would be to 'rotate' the container entries when resources are released, before iterating the container looking for runnable clients. That way, every client eventually gets a chance to have its required resources looked up first.
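The rotation can be a one-liner on a deque of waiters (names here are hypothetical): on each release, move the head entry to the tail before scanning, so over time every blocked client is scanned first.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

public class RotateDemo {
    // Move the head waiter to the tail so a different waiter is considered
    // first on the next release: a simple anti-starvation policy.
    static <T> void rotate(Deque<T> waiters) {
        if (waiters.size() > 1) {
            waiters.addLast(waiters.pollFirst()); // head goes to the back
        }
    }

    public static void main(String[] args) {
        Deque<String> waiters = new ArrayDeque<>(Arrays.asList("c1", "c2", "c3"));
        rotate(waiters);
        System.out.println(waiters); // prints "[c2, c3, c1]"
    }
}
```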
