Markov Chain: SQL Database and Java Representation


Question


Now this question is a bit obscure. I have a text-based markov chain that I've generated by parsing user-typed text. It is used to generate an almost-coherent string of gibberish and works by storing the probability of a given word being the next word in a text sequence, based on the current word in the sequence. In javascript, this object would look something like the following:

var text_markov_chain = {
    "apple" : {
        "cake" : 0.2,
        "sauce" : 0.8
    },
    "transformer" : {
        "movie" : 0.95,
        "cat" : 0.025,
        "dog" : 0.025
    },
    "cat" : {
        "dog" : 0.5,
        "nap" : 0.5
    }
    // ...
}


So, for example, if the current word is transformer, then the next word we generate will have a 95% chance of being movie, and a 2.5% chance of being cat or dog respectively.

My question is two-fold:


  • What is the best way to represent this object in Java? Best, as in I care 50% about quick access and 50% about memory usage.

  • How can I store this object in a single database table, e.g. in MySQL?

Update: In response to @biziclop's answer and @SanjayTSharma's comment, below is the class I ended up writing (it's a work in progress; MIT license). It currently only generates first-order Markov chains.

import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class MarkovChain {
    HashMap<String, TreeMap<String, Float>> chain;
    Set<String> known_words;
    Random rand;

    /**
     * Generates a first order Markov Chain from the given text
     * @param input_text The text to parse
     */
    public MarkovChain(String input_text) {
        init(input_text, 1);
    }

    /**
     * Generates a nth order Markov Chain from the given text
     * @param input_text The text to parse
     * @param n The order of the Markov Chain
     */
    public MarkovChain(String input_text, int n) {
        init(input_text, n);
    }

    /**
     * Reads a Markov Chain from the given input stream. The object is assumed
     * to be binary and serialized
     * @param in The input stream, eg from a network port or file
     */
    public MarkovChain(InputStream in) {
        try {
            ObjectInputStream ob_in = new ObjectInputStream(in);
            chain = (HashMap<String, TreeMap<String, Float>>)ob_in.readObject();
            known_words = chain.keySet();
            ob_in.close();
            in.close();
        } catch (IOException e) {
            //e.printStackTrace();
            chain = null;
            known_words = null;
        } catch (ClassNotFoundException e) {
            //e.printStackTrace();
            chain = null;
            known_words = null;
        }
    }

    /**
     * Returns the next word, according to the Markov Chain probabilities 
     * @param current_word The current generated word
     */
    public String nextWord(String current_word) {
        if(current_word == null) return nextWord();

        // Then head off down the yellow-markov-brick-road
        TreeMap<String, Float> wordmap = chain.get(current_word);
        if(wordmap == null) {
            /* This *shouldn't* happen, but if we get a word that isn't in the
             * Markov Chain, choose another random one
             */
            return nextWord();
        }

        // Choose the next word based on an RV (Random Variable)
        float rv = rand.nextFloat();
        for(String word : wordmap.keySet()) {
            float prob = wordmap.get(word);
            rv -= prob;
            if(rv <= 0) {
                return word;
            }
        }

        /* We should never get here - if we do, then the probabilities have
         * been calculated incorrectly in the Markov Chain
         */
        assert false : "Probabilities in Markov Chain must sum to one!";
        return null;
    }

    /**
     * Returns the next word when the current word is unknown, irrelevant or
     * non-existent (at the start of the sequence) - picks randomly from known_words
     */
    public String nextWord() {
        return (String) known_words.toArray()[rand.nextInt(known_words.size())];
    }

    private void init(String input_text, int n) {
        if(input_text.length() <= 0) return;
        if(n <= 0) return;

        chain = new HashMap<String, TreeMap<String, Float>>();
        known_words = new HashSet<String>();
        rand = new Random(new Date().getTime());

        /** Generate the Markov Chain! **/
        StringTokenizer st = new StringTokenizer(input_text);

        while (st.hasMoreTokens()) {
            String word = st.nextToken();
            TreeMap<String, Float> wordmap = new TreeMap<String, Float>();

            // First check if the current word has previously been parsed
            if(known_words.contains(word)) continue;
            known_words.add(word);

            // Build the Markov probability table for this word
            StringTokenizer st_this_word = new StringTokenizer(input_text);
            String previous = "";
            while (st_this_word.hasMoreTokens()) {
                String next_word = st_this_word.nextToken();

                if(previous.equals(word)) {
                    if(wordmap.containsKey(next_word)) {
                        // Increment the number of counts for this word by 1
                        float num = wordmap.get(next_word);
                        wordmap.put(next_word, num + 1);
                    } else {
                        wordmap.put(next_word, 1.0f);
                    }
                }

                previous = next_word;
            } // End while (st_this_word.hasMoreTokens())

            /* The wordmap now contains each following word and its occurrence count.
             * Convert the counts to probabilities by dividing by the total number of
             * occurrences - the sum of the counts, not the number of distinct
             * following words (which is what values().size() would give)
             */
            float total_number_of_words = 0;
            for(float count : wordmap.values()) {
                total_number_of_words += count;
            }
            for(String k : wordmap.keySet()) {
                wordmap.put(k, wordmap.get(k)/total_number_of_words);
            }

            // Finally, we are ready to add this word and wordmap to the Markov chain
            chain.put(word, wordmap);

        } // End while (st.hasMoreTokens())

        // The (first order) Markov Chain has now been built!
    }
}
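As a quick sanity check of the counting-and-normalizing step, the same first-order table can be built in a single pass over the text, instead of re-tokenizing the input once per known word as the class above does. The class and method names below are illustrative, not part of the original project:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class ChainDemo {
    // Build a first-order transition table: word -> (next word -> probability).
    static Map<String, Map<String, Float>> build(String text) {
        Map<String, Map<String, Float>> counts = new HashMap<>();
        StringTokenizer st = new StringTokenizer(text);
        String prev = null;
        while (st.hasMoreTokens()) {
            String word = st.nextToken();
            if (prev != null) {
                // Count each (previous word, next word) bigram
                counts.computeIfAbsent(prev, k -> new HashMap<>())
                      .merge(word, 1.0f, Float::sum);
            }
            prev = word;
        }
        // Normalize each row's counts into probabilities that sum to one
        for (Map<String, Float> row : counts.values()) {
            float total = 0;
            for (float c : row.values()) total += c;
            for (Map.Entry<String, Float> e : row.entrySet()) {
                e.setValue(e.getValue() / total);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Float>> chain =
            build("apple sauce apple sauce apple sauce apple cake");
        System.out.println(chain.get("apple").get("sauce")); // 0.75
        System.out.println(chain.get("apple").get("cake"));  // 0.25
    }
}
```

The single pass makes the build O(n) in the number of tokens, which matters once the input text is more than a few kilobytes.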


Answer


By storing it in Java, I'm guessing you think about storing it in a way that's easy to generate a sequence from.


First you need a hashmap, with the words being the keys. The values of this hashmap will be a treemap with the keys being the cumulative probability and the value being the next word.

So it would look something like this:

    HashMap<String, TreeMap<Double, String>> words = new HashMap<String, TreeMap<Double,String>>();

    TreeMap<Double, String> appleMap = new TreeMap<Double, String>();
    appleMap.put( 0.2d, "cake");
    appleMap.put( 1.0d, "sauce");
    words.put( "apple", appleMap );

    TreeMap<Double, String> transformerMap = new TreeMap<Double, String>();
    transformerMap.put( 0.95d, "movie");
    transformerMap.put( 0.975d, "cat");
    transformerMap.put( 1.0d, "dog");
    words.put( "transformer", transformerMap );


It's very easy to generate the next word from this structure.

private String generateNextWord( HashMap<String, TreeMap<Double, String>> words, String currentWord ) {
    TreeMap<Double, String> probMap = words.get( currentWord );
    double d = Math.random();
    return probMap.ceilingEntry( d ).getValue();
}
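To see why the cumulative keys work, it helps to make the random draw a parameter instead of calling Math.random() inside the method, so the lookup can be exercised with fixed values. This is a deterministic sketch of the same idea, not the answer's code:

```java
import java.util.HashMap;
import java.util.TreeMap;

public class CumulativeDemo {
    // Same lookup as generateNextWord, but the draw is passed in:
    // ceilingEntry returns the entry with the smallest key >= draw.
    static String nextWord(HashMap<String, TreeMap<Double, String>> words,
                           String current, double draw) {
        return words.get(current).ceilingEntry(draw).getValue();
    }

    public static void main(String[] args) {
        HashMap<String, TreeMap<Double, String>> words = new HashMap<>();
        TreeMap<Double, String> transformerMap = new TreeMap<>();
        transformerMap.put(0.95d, "movie");
        transformerMap.put(0.975d, "cat");
        transformerMap.put(1.0d, "dog");
        words.put("transformer", transformerMap);

        // Draws in [0, 0.95] hit "movie", (0.95, 0.975] hit "cat", (0.975, 1.0] hit "dog"
        System.out.println(nextWord(words, "transformer", 0.50)); // movie
        System.out.println(nextWord(words, "transformer", 0.96)); // cat
        System.out.println(nextWord(words, "transformer", 0.99)); // dog
    }
}
```

The TreeMap lookup is O(log k) per word, versus the O(k) linear scan over the word map that the question's nextWord method uses.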

In a relational database you can simply have a single table with three columns: current word, next word and weight. So you're basically storing the edges of the state transition graph of your Markov chain.


You could also normalize it into two tables: a vertex table to store the words against word ids, and an edge table storing current word id, next word id and weight, but unless you want to store extra fields with your words, I don't think this is necessary.
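A minimal sketch of the single-table variant might look like the following; the table and column names (and the MySQL-flavored types) are illustrative, not from the original answer:

```sql
-- One row per edge of the state transition graph
CREATE TABLE markov_edge (
    current_word VARCHAR(64) NOT NULL,
    next_word    VARCHAR(64) NOT NULL,
    weight       FLOAT       NOT NULL,
    PRIMARY KEY (current_word, next_word)
);

-- Rebuilding the in-memory map for one word is then a single indexed query:
-- SELECT next_word, weight FROM markov_edge WHERE current_word = 'transformer';
```

The composite primary key both prevents duplicate edges and gives the SELECT above an index on current_word.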

