Array of structs with dynamic allocation runs very slow in C in comparison to Python
Question
I glued together (with the help of other SO users) a small C program that maps string labels to integer labels in an edgelist data structure. For instance, for the input file
Mike Andrew
Mike Jane
John Jane
the program outputs
1 2
1 3
4 3
However, I map huge edgelist files, and unfortunately the program runs very slowly in comparison to the Python alternative. Below I have pasted both programs, in C and in Python. I would appreciate any pointers on how to improve the speed of the C program.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>   /* for strcmp, strdup */

/* Initial number of maximal lines in a file */
enum { MAXL = 200 };

typedef struct {
    unsigned int first;
    unsigned int second;
} edge;

typedef struct {
    unsigned int hashed;
    char **map;
} hash;

int insertInMap(hash *map, char *entry)
{
    unsigned int i;
    for (i = 0; i < map->hashed; i++) {
        if (strcmp(map->map[i], entry) == 0)
            return i + 1;
    }
    /* Warning: no boundary check is added */
    map->map[map->hashed++] = strdup(entry);
    return map->hashed;
}

int main(void) {
    FILE *fp = NULL;
    char node1[30];
    char node2[30];
    int idx = 0;
    int n = 0, maxl = MAXL;
    edge *edges;
    hash map;

    edges = malloc(MAXL * sizeof(edge));
    map.map = malloc(2 * MAXL * sizeof(char *)); /* up to two new labels per line */
    map.hashed = 0;

    fp = fopen("./test.txt", "r");
    /* arrays decay to pointers, so no &; %29s guards the buffer size */
    while (fscanf(fp, "%29s %29s", node1, node2) == 2) {
        if (++n == maxl) { /* if limit reached, realloc lines */
            void *tmp = realloc(edges, (maxl + 40) * sizeof *edges);
            if (!tmp) { /* validate realloc succeeded */
                fprintf(stderr, "error: realloc - virtual memory exhausted.\n");
                break; /* on failure, exit with existing data */
            }
            edges = tmp;    /* assign reallocated block to edges */
            void *tmp1 = realloc(map.map, 2 * (maxl + 40) * sizeof(char *));
            if (!tmp1) {
                fprintf(stderr, "error: realloc - virtual memory exhausted.\n");
                break;
            }
            map.map = tmp1;
            maxl += 40;     /* update maxl to reflect new size */
        }
        edges[idx].first = insertInMap(&map, node1);
        edges[idx].second = insertInMap(&map, node2);
        idx++;
    }
    fclose(fp);

    for (int i = 0; i < idx; i++) {
        printf("%u %u\n", edges[i].first, edges[i].second);
    }
    free(edges);
    return 0;
}
The corresponding Python alternative:
import fileinput

i = 0
cui2int = {}
for line in fileinput.input():
    (cui1, cui2) = line.split()
    if cui1 in cui2int:
        int1 = cui2int[cui1]
    else:
        i += 1
        cui2int[cui1] = i
        int1 = i
    if cui2 in cui2int:
        int2 = cui2int[cui2]
    else:
        i += 1
        cui2int[cui2] = i
        int2 = i
    print(int1, int2)
Edit:
Below is modified code using the GLib hash implementation. It improved performance, but unfortunately I still have trouble with the output, which should be
1 2
1 3
4 3
instead of
0 0
0 1
1 1
Could someone take a look?
#include <stdio.h>
#include <stdlib.h>
#include <glib.h>
#include <stdint.h>

int main(void) {
    GHashTable *table = g_hash_table_new(g_int_hash, g_int_equal);
    FILE *fp = NULL;
    char node1[30];
    char node2[30];
    fp = fopen("./test.txt", "r");
    int i = 0;
    while (fscanf(fp, "%29s %29s", node1, node2) == 2) {
        /* g_strdup already allocates, so no separate malloc is needed */
        char *key1 = g_strdup(node1);
        char *key2 = g_strdup(node2);
        uint32_t *value = malloc(sizeof(uint32_t));
        *value = i;
        uint32_t *x;
        if (g_hash_table_contains(table, key1)) {
            x = (uint32_t *)g_hash_table_lookup(table, key1);
        } else {
            i++;
            g_hash_table_insert(table, (gpointer)key1, (gpointer)value);
            x = value;
        }
        uint32_t *y;
        if (g_hash_table_contains(table, key2)) {
            y = (uint32_t *)g_hash_table_lookup(table, key2);
        } else {
            g_hash_table_insert(table, (gpointer)key2, (gpointer)value);
            y = value;
        }
        printf("%d -- %d\n", *x, *y);
    }
    fclose(fp);
    g_hash_table_destroy(table);
    return 0;
}
Answer
The two programs use fundamentally different data structures with different time complexity. The Python program uses a dictionary, which is a highly tuned hash table with amortized O(1) performance for lookup and insertion.
So the Python program runs with O(number of words) asymptotic complexity.
Now, regarding your C program: what you have created is essentially just an array of key-value pairs. Inserting or retrieving a key takes O(size of the array), since in the worst case you loop through the whole array before finding a match.
If you do the math — each of the roughly 2n lookups scans the words seen so far, i.e. on the order of 1 + 2 + … + 2n comparisons — it comes out to O((number of words)²).
C++ has a built-in hash table implementation named unordered_map; you can use that if you have no trouble switching to C++. Or check out this question on SO to learn how to write your own hash table in C: What is a hash table and how do you make it in C?