Is it possible to create a Map that has ArrayList properties with log(n) complexity?


Problem description

I am trying to build a generic data structure that needs to hold keys and values and, at the same time, keep track of the indexes at which keys and values were put in, like an ArrayList does, in a complexity of O(log n) or less.

I tried to work out a solution to this problem and created various TreeMaps that map Integer indexes to values and vice versa, and the same for keys.

Just to make it clearer, the indexes represent the insertion order from the user. So if I had 3 elements then their indexes are 0, 1, 2, and if element 0 was deleted then I need to push 1 to 0 and 2 to 1, and a new element would be added at index 2.

My problem is that when I remove a key and its value, if I want to insert the next key and value at the right index I have to make sure all the old ones were shifted back by 1. I don't know how to do that without falling into O(n) complexity.

My goal is to use existing data structures and mix them to get this result; please take a look at the methods I implemented, as I need those.

I am adding my code for reference, any help would be much appreciated.

Thanks

Tom

import java.util.Collection;
import java.util.TreeMap;
import java.util.Map;

public class SuperStruct<K,V> 
{
    private Map<K,V> mInternalKeyToValueMap;//all the keys and their values
    private Map<Integer,V> mIndexToValueMap;//indexes mapped to values, according to insertion order
    private Map<V,Integer> mValueToIndexMap;//values and their indexes
    private Map<Integer,K> mIndexToKeyMap;//indexes and their keys, according to insertion order
    private Map<K,Integer> mKeyToIndexMap;//keys and their indexes
    private int mNextIndex;//next index to hand out, following the order data was received from the user

    public SuperStruct(){
        mInternalKeyToValueMap = new TreeMap<K,V>();
        mIndexToValueMap = new TreeMap<Integer,V>();
        mValueToIndexMap = new TreeMap <V,Integer>();
        mIndexToKeyMap = new TreeMap <Integer, K>();
        mKeyToIndexMap = new TreeMap <K,Integer>();     
    }
    public boolean containsKey(Object key) {
        boolean containable = mInternalKeyToValueMap.containsKey(key);
        return containable;
    }

    public boolean containsValue(Object value) {
        boolean containable = mValueToIndexMap.containsKey(value);
        return containable;
    }

    public V get(Object key) {
        if(mInternalKeyToValueMap.containsKey(key)){
            V value = mInternalKeyToValueMap.get(key);
            return value;
        }
        return null;
    }



    public Collection<K> keySet() {
        return mInternalKeyToValueMap.keySet();
    }
    /**
     * Puts the key and the value into the main TreeMap "mInternalKeyToValueMap", while at the same time updating the 4 other TreeMaps
     * with data about the index at which the entry was received from the user.
     * All in all this method runs in a complexity of 6*(O(log n)), which sums down to O(log n) because constant factors
     * do not count towards the overall complexity.
     * In case a key already had a mapping and we overwrite its value, we run in a complexity of 11*(O(log n)), which still
     * sums down to O(log n) for the same reason.
     */

    public V put(K key, V value) {
        if(mValueToIndexMap.containsKey(value))//preventing duplication of values
            return value;
        if(mInternalKeyToValueMap.containsKey(key)){//when a key already exists and we want to overwrite its value
            int indexToDelete = mKeyToIndexMap.get(key);//we get the index of the value we overwrite
            V value1 = mIndexToValueMap.get(indexToDelete);//using this index we get the value
            mValueToIndexMap.remove(value1);//we remove the value and its index
            mIndexToValueMap.remove(indexToDelete);//we remove the index and its value
        }
        mInternalKeyToValueMap.put(key, value);//putting the new value for the key in the main TreeMap
        mValueToIndexMap.put(value, mNextIndex);//populating the TreeMap of values and their indexes - the order we received them from the user
        mIndexToValueMap.put(mNextIndex, value);//this TreeMap holds the index for each value according to the order of insertion by the user
        mIndexToKeyMap.put(mNextIndex, key);//this TreeMap holds the index for each key according to the order of insertion by the user
        mKeyToIndexMap.put(key, mNextIndex);//populating the TreeMap of keys and their indexes - the order we received them from the user
        ++mNextIndex;//advancing the index which marks the insertion order of arguments to the structure
        return null;
    }


    public V remove(Object key) {
        if(mInternalKeyToValueMap.containsKey(key) && mInternalKeyToValueMap.get(key) != null)
        {
            V value = mInternalKeyToValueMap.get(key);
            mInternalKeyToValueMap.remove(key);//removing map for the value 
            int mIndexToRemoveValue = mValueToIndexMap.get(value);//getting the right index to remove the value
            mIndexToValueMap.remove(mIndexToRemoveValue);//vacating the value for this index
            mIndexToKeyMap.remove(mIndexToRemoveValue);//vacating the key for this index
            mKeyToIndexMap.remove(key);//removing a key and index in the keyToIndex Map
            mValueToIndexMap.remove(value);//removing a key and index in the ValueToIndex Map
            return value;

        }
        return null;
    }


    public Collection<V> values() {     
        return mInternalKeyToValueMap.values();
    }

    public Collection<V> getStructureSorted(){
        return mValueToIndexMap.keySet();
    }

    public V getValueByIndex(int index){//returns the value stored at the given insertion index, or null if not present
        return mIndexToValueMap.get(index);
    }

    public Integer getIndexByKey(K key){
        Integer returnable = mKeyToIndexMap.get(key);
        if(returnable == null)
            return -1;
        return returnable;
    }
    public K getKeyByIndex(int index){
        return mIndexToKeyMap.get(index);
    }
}

Recommended answer

This is an interesting conundrum. It feels like it should be possible, but the details are elusive. The problem is in the index update operation after a delete. If the indexes are stored in a tree node, for example, then on average n/2 indexes will have to be altered after a delete operation, which ruins the O(log n) property you are striving for.

If you store objects simultaneously in a tree and in an ArrayList, you still have the issue of locating an object in the ArrayList, which I can't get to work in a straightforward fashion in time less than O(n). Possibly there's some variation on an ArrayList, perhaps a custom construction, but that seems like a lot of work.

Instead, I suggest you consider a red-black tree or similar balanced tree with the modification noted at Red-black tree access by ordinal index: for each node of the tree, store also the count of nodes in the subtree with the given node as root. During insert and delete operations, you will have to update the count of all nodes affected by a rotation operation. That can still be done in O(log n) time, but it is complicated.

Then binary searches for the kth smallest (or largest) element will also run in O(log n) time, using the usual recursive algorithm.
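
The heart of that recursive algorithm is the subtree-size augmentation. Below is a minimal sketch of the select operation, using a deliberately plain (unbalanced) BST node with names of my own choosing, purely to illustrate the idea; in a real red-black tree the size field would additionally have to be fixed up along every insert/delete path and inside every rotation:

class OSTNode<T> {//illustrative order statistic tree node: every node stores its subtree size
    T item;
    OSTNode<T> left, right;
    int size = 1;//number of nodes in the subtree rooted here, including this node

    static int sizeOf(OSTNode<?> n) {
        return n == null ? 0 : n.size;
    }

    T select(int k) {//returns the k-th smallest item (0-based) in this subtree, in O(height)
        int leftSize = sizeOf(left);
        if (k < leftSize) return left.select(k);//the answer lies in the left subtree
        if (k == leftSize) return item;//this node is exactly the k-th item
        return right.select(k - leftSize - 1);//skip the left subtree and this node
    }
}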

Here are a few more references for the technique: http://www.cs.usfca.edu/~galles/cs673/lecture/lecture8.pdf, http://fdatamining.blogspot.ca/2011/09/functional-red-black-tree-with-dynamic.html, http://en.wikipedia.org/wiki/Order_statistic_tree. That should get you started.

Update: implementation details

What you will have to do is to create two trees. One could be an ordinary balanced tree (like a red-black tree) to hold references to your objects with key/value pairs. You could search on the key and get the corresponding value in O(log n); insertion and deletion would also be O(log n). In addition, nodes in this first tree would hold references to nodes in the second tree (below).

The second tree would also hold references to nodes in the first tree, but it would be an order statistic tree as in the discussion above. Insertions would always place new items at the right end of the tree.
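
A rough sketch of what the two node types might look like; the field names are hypothetical and the balancing machinery (colors, rotations) is left out:

class KeyNode<K, V> {//first tree: ordered by key, behaves like an ordinary map node
    K key;
    V value;
    KeyNode<K, V> left, right, parent;
    PosNode<K, V> position;//cross-reference into the order statistic tree
}

class PosNode<K, V> {//second tree: ordered by insertion position, augmented with subtree sizes
    int size = 1;//maintained during inserts, deletes and rotations
    PosNode<K, V> left, right, parent;
    KeyNode<K, V> entry;//cross-reference back to the key/value node
}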

An insert operation into this data structure would be an ordinary insert into the first tree by key, an insert into the right side of the order statistic tree, and the references in each inserted node would be updated to point to the appropriate location in the other tree.
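
Put together, an insert could look roughly like the sketch below; insertByKey and insertRightmost are hypothetical placeholders for the usual balanced-tree insertions (each O(log n), size fix-ups included), not real library calls:

abstract class TwoTreeMap<K, V> {//sketch only, assuming the KeyNode/PosNode types above
    abstract KeyNode<K, V> insertByKey(K key, V value);//O(log n) insert into the key tree
    abstract PosNode<K, V> insertRightmost();//O(log n) insert at the right end of the position tree

    void put(K key, V value) {
        KeyNode<K, V> keyNode = insertByKey(key, value);
        PosNode<K, V> posNode = insertRightmost();//always the newest insertion position
        keyNode.position = posNode;//wire the cross-references both ways
        posNode.entry = keyNode;
    }
}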

A search operation could be done on the first tree for a given key in O(log n), which would return the appropriate value; by following the reference into the other tree, you could then find the "order statistic" of the node in the second tree in O(log n) time by traversing up the tree to the root and performing a simple calculation.
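
The "simple calculation" is the standard rank query of an order statistic tree: start from the size of the node's left subtree and, walking up to the root, add the left-subtree size plus one for every ancestor that is reached from its right child. A sketch, assuming the PosNode fields above:

final class OrderStatistics {//helper sketch; rankOf runs in O(log n) because it follows one node-to-root path
    static <K, V> int rankOf(PosNode<K, V> node) {
        int rank = size(node.left);//positions that come before it inside its own subtree
        for (PosNode<K, V> n = node; n.parent != null; n = n.parent) {
            if (n == n.parent.right) {
                rank += size(n.parent.left) + 1;//the parent's left subtree plus the parent itself
            }
        }
        return rank;//0-based insertion-order position
    }

    static int size(PosNode<?, ?> n) {
        return n == null ? 0 : n.size;
    }
}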

A search operation could be done on the second tree in O(log n) time for the kth position in the queue, returning the node at that position, whose reference back into the first tree could be followed to obtain the associated key/value pair.

Deletion in either tree would be preceded by dereferencing the reference to the other tree; then the node in the first tree is deleted, followed by the corresponding node in the other tree, both O(log n) operations.
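
So a remove-by-key might look roughly like this, again with hypothetical findByKey / deleteKeyNode / deletePosNode placeholders standing in for the usual O(log n) balanced-tree operations (with size fix-ups in the position tree):

abstract class TwoTreeRemoval<K, V> {//sketch only, assuming the KeyNode/PosNode types above
    abstract KeyNode<K, V> findByKey(K key);//O(log n) lookup in the key tree
    abstract void deleteKeyNode(KeyNode<K, V> node);//O(log n) delete in the key tree
    abstract void deletePosNode(PosNode<K, V> node);//O(log n) delete in the position tree

    V remove(K key) {
        KeyNode<K, V> keyNode = findByKey(key);
        if (keyNode == null) return null;
        deletePosNode(keyNode.position);//follow the cross-reference before the key node disappears
        deleteKeyNode(keyNode);
        return keyNode.value;
    }
}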

I think that's it. Everything can be done in O(log n) time. There is a minor cost in space for the second tree, but it just contains references; space is still O(n).

Will that work?

