CS50 Speller Segmentation Fault Issue During Misspelled Words
Problem Description
My code is causing a segmentation fault somewhere. I'm not entirely sure how. I don't think it's an issue with load, as the program begins listing off Misspelled words before abruptly stopping and giving me the seg fault error.
// Implements a dictionary's functionality
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>

#include "dictionary.h"

#define HASHTABLE_SIZE 80000

unsigned int count = 0;

// Represents a node in a hash table
typedef struct node
{
    char word[LENGTH + 1];
    struct node *next;
}
node;

// Number of buckets in hash table
const unsigned int N = HASHTABLE_SIZE;

// Hash table
node *table[N];

// Returns true if word is in dictionary else false
bool check(const char *word)
{
    node *tmp = NULL;
    int ch = hash(word);
    int len = strlen(word);
    char w[len + 1];
    for (int i = 0; i < len; i++)
    {
        w[i] = tolower(word[i]);
    }
    w[len] = '\0';
    tmp = table[ch];
    while (tmp->next != NULL)
    {
        if (strcmp(tmp->word, w) == 0)
        {
            return true;
        }
        if (tmp->next != NULL)
        {
            tmp = tmp->next;
        }
    }
    return false;
}

// Hashes word to a number
unsigned int hash(const char *word)
{
    int len = strlen(word);
    char key[len + 1];
    for (int p = 0; p < len; p++)
    {
        key[p] = tolower(word[p]);
    }
    key[len] = '\0';
    unsigned int hash = 0;
    for (int i = 0, n = strlen(key); i < n; i++)
        hash = (hash << 2) ^ key[i];
    return hash % HASHTABLE_SIZE;
}

// Loads dictionary into memory, returning true if successful else false
bool load(const char *dictionary)
{
    FILE *file = fopen(dictionary, "r");
    if (file == NULL)
    {
        printf("could not open file.\n");
        fclose(file);
        return false;
    }
    char temp[LENGTH + 1];
    while (fscanf(file, "%s", temp) != EOF)
    {
        node *tmp = malloc(sizeof(node));
        strcpy(tmp->word, temp);
        unsigned int code = hash(temp);
        count++;
        if (table[code] == NULL)
        {
            table[code] = tmp;
        }
        else if (table[code] != NULL)
        {
            node *pointer = table[code];
            while (pointer->next != NULL)
            {
                tmp->next = table[code];
                table[code] = tmp;
            }
            //YOU ARE HERE
        }
    }
    return true;
}

// Returns number of words in dictionary if loaded else 0 if not yet loaded
unsigned int size(void)
{
    node *tmp = NULL;
    for (int i = 0; i < N; i++)
    {
        if (table[i] != NULL)
        {
            tmp = table[i];
            while (tmp->next != NULL)
            {
                tmp = tmp->next;
                count++;
            }
        }
    }
    return count;
}

// Unloads dictionary from memory, returning true if successful else false
bool unload(void)
{
    node *tmp = NULL;
    node *del;
    for (int i = 0; i < N; i++)
    {
        tmp = table[i];
        while (tmp->next != NULL)
        {
            del = tmp;
            if (tmp->next != NULL)
            {
                tmp = tmp->next;
            }
            free(del);
        }
        return true;
    }
    return false;
}
When I run the program, I get:
~/pset5/speller/ $ ./speller dictionaries/large keys/her.txt
MISSPELLED WORDS
MISSPELLED
WORDS
Jonze
INT
Segmentation fault
So it appears to be properly loading the dictionary and the text.
Answer
You have a few problems relative to the requirements of CS50 Speller. Specifically, it requires:

Your implementation of check must be case-insensitive. In other words, if foo is in the dictionary, then check should return true given any capitalization thereof; none of foo, foO, fOo, fOO, Foo, FoO, FOo, and FOO should be considered misspelled.
What this means is that when you load the dictionary into the hash table, you must convert each dictionary word to lower-case before computing the hash. Otherwise, when check(word) converts a copy of word to lower-case, it will never compute the same hash if the original dictionary word was not converted to lower-case before hashing.
Your check(word) function isn't converting to lower-case before computing the hash either. This will cause you to miss the dictionary word that was hashed in its lower-case form. You segfault as well because you fail to check that tmp is not NULL before dereferencing tmp->next. But otherwise, you were on the right track with the basics of how to check a hash table.
Since you will convert to lower-case both before hashing and storing the dictionary word, and before hashing a copy of the word to check, it makes sense to use a simple string-to-lower helper. Then you can reduce your check() function to:
// string to lower
char *str2lower (char *str)
{
    if (!str) return NULL;
    char *p = str;

    for (; *p; p++)
        if (isupper ((unsigned char)*p))
            *p ^= ('A' ^ 'a');

    return str;
}

// Returns true if word is in dictionary else false
bool check(const char *word)
{
    char lcword[LENGTH + 1];        /* make a copy of word from txt to convert to lc */
    size_t len = strlen (word);     /* get length of word */
    unsigned h;

    if (len > LENGTH) {             /* validate word will fit */
        fprintf (stderr, "error: check() '%s' exceeds LENGTH.\n", word);
        return false;
    }

    memcpy (lcword, word, len + 1);             /* copy word to lower-case word */
    h = hash (str2lower (lcword));              /* convert to lower-case then hash */

    for (node *n = table[h]; n; n = n->next)    /* now loop over list nodes */
        if (strcmp (lcword, n->word) == 0)      /* compare lower-case words */
            return true;

    return false;
}
Next, though not discussed in the problem set, you should not skimp on hash-table size. There are 143091 words in dictionaries/large. Ideally, you want to keep the load factor of your hash table below 0.6 (no more than 60% of your buckets filled, to minimize collisions). I haven't tested the actual load factor for your table, but I wouldn't want anything less than N == 8000.
Update: I did check, and with N == 131072 your load factor loading the large dictionary using lh_strhash() would be 0.665, which is getting to the point where you would want to re-hash, but for your purposes it should be fine. (Notably, all of the additional storage doesn't improve the check times by more than a hundredth of a second, which indicates the lists are reasonably efficient even handling the additional collisions.)
Hash Function
You can experiment with several, but using /usr/share/dict/words (which is where large comes from), I have found the openSSL lh_strhash() hash function provides the minimum number of collisions while being quite efficient. You can implement your hash() function as a wrapper and quickly try a number of different hashes that way, e.g.
// openSSL lh_strhash
uint32_t lh_strhash (const char *s)
{
    uint64_t ret = 0, v;
    int64_t n = 0x100;
    int32_t r;

    if (!s || !*s) return (ret);

    for (; *s; s++) {
        v = n | (*s);
        n += 0x100;
        r = (int32_t)((v >> 2) ^ v) & 0x0f;
        ret = (ret << r) | (ret >> (32 - r));
        ret &= 0xFFFFFFFFL;
        ret ^= v * v;
    }
    return ((ret >> 16) ^ ret);
}

// Hashes word to a number
unsigned int hash (const char *word)
{
    return lh_strhash (word) % N;
}
Your load() function suffers from the same failure to convert to lower-case before hashing. You can't possibly permute and store every capitalization of every word in the dictionary in your hash table. Since you must perform a case-insensitive check(), it only makes sense to convert (to either upper or lower case -- be consistent) before hashing and storing.
Further, there is no need to iterate to the last node of a bucket's list before inserting a new entry (that is quite inefficient). Instead, simply use a method called "forward chaining": insert the new node at the bucket address, moving what was there to the new node's ->next pointer before setting the bucket to the address of the new node. That gives O(1) insertions. For example:
// Loads dictionary into memory, returning true if successful else false
bool load (const char *dictionary)
{
    char word[MAXC];
    FILE *fp = fopen (dictionary, "r");

    if (!fp) {
        perror ("fopen-dictionary");
        return false;
    }

    while (fgets (word, MAXC, fp)) {
        unsigned h;
        size_t len;
        node *htnode = NULL;

        word[(len = strcspn (word, "\n"))] = 0; /* trim \n */

        if (len > LENGTH) {
            fprintf (stderr, "error: word '%s' exceeds LENGTH.\n", word);
            continue;
        }
        if (!(htnode = malloc (sizeof *htnode))) {
            perror ("malloc-htnode");
            return false;
        }

        h = hash (str2lower (word));
        memcpy (htnode->word, word, len + 1);   /* copy word to htnode->word */
        htnode->next = table[h];                /* insert node at table[h] */
        table[h] = htnode;                      /* use forward-chaining for list */
        htsize++;                               /* increment table size */
    }
    fclose (fp);

    return htsize > 0;
}
As for the hash-table size, just add a global to dictionary.c and increment it in load() as done above (that is the htsize variable). That makes the table size() function as simple as:
// Hash table size
unsigned htsize;
...
// Returns number of words in dictionary if loaded else 0 if not yet loaded
unsigned int size (void)
{
    return htsize;
}
Your unload() is a bit convoluted, and will fail to free the allocated memory if there is a single node at table[i]. Instead, you can shorten your logic and accomplish what you need with:
// Unloads dictionary from memory, returning true if successful else false
bool unload(void)
{
    for (int i = 0; i < N; i++) {
        node *n = table[i];
        while (n) {
            node *victim = n;
            n = n->next;
            free (victim);
        }
    }
    htsize = 0;

    return true;
}
Example Use / Comparison With the Keys
Creating a test/ directory and then redirecting output to files in the test/ directory will allow you to compare the results with the expected results:
$ ./bin/speller texts/bible.txt > test/bible.txt
The keys/ directory contains the output from the "staff" code. This implementation matches the output of the keys, but includes the timing information as well (that's not something you can change -- it is hardcoded in speller.c, which you cannot modify per the restrictions on the exercise):
$ diff -uNb keys/bible.txt test/bible.txt
--- keys/bible.txt 2019-10-08 22:35:16.000000000 -0500
+++ test/bible.txt 2020-09-01 02:09:31.559728835 -0500
@@ -33446,3 +33446,9 @@
WORDS MISSPELLED: 33441
WORDS IN DICTIONARY: 143091
WORDS IN TEXT: 799460
+TIME IN load: 0.03
+TIME IN check: 0.51
+TIME IN size: 0.00
+TIME IN unload: 0.01
+TIME IN TOTAL: 0.55
+
(Note: the -b option allows diff to "ignore changes in the amount of white space", so it will ignore differences in line endings, like DOS "\r\n" versus Linux '\n' line endings.)
The only differences between the code output and the files in the keys/ directory are the lines marked with a '+' sign in the first column (the last six lines), which show the timing information.
Memory Use / Error Check
All memory is properly freed:
$ valgrind ./bin/speller texts/lalaland.txt > test/lalaland.txt
==10174== Memcheck, a memory error detector
==10174== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==10174== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==10174== Command: ./bin/speller texts/lalaland.txt
==10174==
==10174==
==10174== HEAP SUMMARY:
==10174== in use at exit: 0 bytes in 0 blocks
==10174== total heap usage: 143,096 allocs, 143,096 frees, 8,026,488 bytes allocated
==10174==
==10174== All heap blocks were freed -- no leaks are possible
==10174==
==10174== For counts of detected and suppressed errors, rerun with: -v
==10174== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Look things over and let me know if you have further questions.
If you are struggling with the details, this is the complete dictionary.c used. I have added the loadfactor() function at the end so you can compute the load factor for varying values of N, if you are interested:
// Implements a dictionary's functionality
#include "dictionary.h"

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <ctype.h>

// Represents a node in a hash table
typedef struct node
{
    char word[LENGTH + 1];
    struct node *next;
}
node;

// Number of buckets in hash table
#define N 131072

// Max Characters Per-Line of Input
#define MAXC 1024

// Hash table
node *table[N];

// Hash table size
unsigned htsize;

// string to lower
char *str2lower (char *str)
{
    if (!str) return NULL;
    char *p = str;

    for (; *p; p++)
        if (isupper ((unsigned char)*p))
            *p ^= ('A' ^ 'a');

    return str;
}

// Returns true if word is in dictionary else false
bool check(const char *word)
{
    char lcword[LENGTH + 1];        /* make a copy of word from txt to convert to lc */
    size_t len = strlen (word);     /* get length of word */
    unsigned h;

    if (len > LENGTH) {             /* validate word will fit */
        fprintf (stderr, "error: check() '%s' exceeds LENGTH.\n", word);
        return false;
    }

    memcpy (lcword, word, len + 1);             /* copy word to lower-case word */
    h = hash (str2lower (lcword));              /* convert to lower-case then hash */

    for (node *n = table[h]; n; n = n->next)    /* now loop over list nodes */
        if (strcmp (lcword, n->word) == 0)      /* compare lower-case words */
            return true;

    return false;
}

// openSSL lh_strhash
uint32_t lh_strhash (const char *s)
{
    uint64_t ret = 0, v;
    int64_t n = 0x100;
    int32_t r;

    if (!s || !*s) return (ret);

    for (; *s; s++) {
        v = n | (*s);
        n += 0x100;
        r = (int32_t)((v >> 2) ^ v) & 0x0f;
        ret = (ret << r) | (ret >> (32 - r));
        ret &= 0xFFFFFFFFL;
        ret ^= v * v;
    }
    return ((ret >> 16) ^ ret);
}

// Hashes word to a number
unsigned int hash (const char *word)
{
    return lh_strhash (word) % N;
}

// Loads dictionary into memory, returning true if successful else false
bool load (const char *dictionary)
{
    char word[MAXC];
    FILE *fp = fopen (dictionary, "r");

    if (!fp) {
        perror ("fopen-dictionary");
        return false;
    }

    while (fgets (word, MAXC, fp)) {
        unsigned h;
        size_t len;
        node *htnode = NULL;

        word[(len = strcspn (word, "\n"))] = 0; /* trim \n */

        if (len > LENGTH) {
            fprintf (stderr, "error: word '%s' exceeds LENGTH.\n", word);
            continue;
        }
        if (!(htnode = malloc (sizeof *htnode))) {
            perror ("malloc-htnode");
            return false;
        }

        h = hash (str2lower (word));
        memcpy (htnode->word, word, len + 1);   /* copy word to htnode->word */
        htnode->next = table[h];                /* insert node at table[h] */
        table[h] = htnode;                      /* use forward-chaining for list */
        htsize++;                               /* increment table size */
    }
    fclose (fp);

    return htsize > 0;
}

// Returns number of words in dictionary if loaded else 0 if not yet loaded
unsigned int size (void)
{
    return htsize;
}

// Unloads dictionary from memory, returning true if successful else false
bool unload(void)
{
    for (int i = 0; i < N; i++) {
        node *n = table[i];
        while (n) {
            node *victim = n;
            n = n->next;
            free (victim);
        }
    }
    htsize = 0;

    return true;
}

float loadfactor (void)
{
    unsigned filled = 0;

    for (int i = 0; i < N; i++)
        if (table[i])
            filled++;

    return (float)filled / N;
}