Linux socket programming, Error handling


Problem description

Hello,
Can someone please look at the code below and tell me what I am missing here?

Below is simple TCP client code that opens a connection to a server and sends a constant message in a loop. The code works fine as far as establishing the connection and sending the messages goes. The problem I have is error handling:
When the program starts running and the connection is established, it starts sending the messages. I then purposely shut down the server before the client has finished sending all the messages, to see how it handles the error (by checking the error code returned by send()). I was surprised that the first send() attempt after the server is closed still returns a success code. On the second attempt the client crashes (Linux terminates the program and I get the shell prompt back).

Why doesn't send() return an error code? Why does Linux terminate the program? How do I do robust error checking then?

Please note that the same code running under Windows behaves as I expected (proper error codes are returned). Please advise.

#include <sys/socket.h>   // socket(), connect(), send()
#include <netinet/in.h>   // struct sockaddr_in, htons()
#include <arpa/inet.h>    // inet_addr()
#include <unistd.h>       // close()
#include <string.h>       // memset()
#include <stdio.h>        // perror()
#include <time.h>         // nanosleep(), struct timespec
#include <iostream>

using std::cout;
using std::endl;

#define MILLI 1000        // milliseconds per second

// Performs a delay of the given number of milliseconds.
// Returns 0 on success, or the remaining milliseconds if the sleep was interrupted.
int Delay(int milliseconds)
{
	struct timespec timeOut, remains;

	timeOut.tv_sec  = milliseconds / MILLI;
	timeOut.tv_nsec = (milliseconds - timeOut.tv_sec * MILLI) * 1000000;
	if (nanosleep(&timeOut, &remains) == -1)
	{
		// Interrupted by a signal: report how much of the delay was left.
		return remains.tv_sec * MILLI + remains.tv_nsec / 1000000;
	}

	return 0;
}
/*----------------------------------------------------------------*/
// Connect to the server and send a constant message in a loop.
int StartConnection()
{
	char ipaddress[] = "192.168.100.20";
	int port = 1234;
	struct sockaddr_in sa;
	int sock;

	if ((sock = socket(AF_INET, SOCK_STREAM, 0)) < 0)
	{
		perror("unable to create socket");
		return -1;
	}

	memset(&sa, 0, sizeof(sa));
	sa.sin_family      = AF_INET;
	sa.sin_port        = htons((unsigned short)port);
	sa.sin_addr.s_addr = inet_addr(ipaddress);

	cout << "Connecting to " << ipaddress << ":" << port << endl;

	if (connect(sock, (struct sockaddr *)&sa, sizeof(sa)) < 0)
	{
		perror("Error connecting");
		close(sock);
		return -1;
	}

	cout << "Connection established" << endl;

	char data[] = "Sample data\n";
	int count = 10;
	int delay = 1000;

	for (int i = 0; i < count; i++)
	{
		cout << "Delaying " << delay / 1000
		     << " seconds before sending data ("
		     << i + 1 << " of " << count << ") ...";

		Delay(delay);
		int result = send(sock, data, sizeof(data), 0);
		if (result > 0)
		{
			cout << "success" << endl;
		}
		else
		{
			// This is where my problem is:
			// it never returns an error code
			// when the server shuts down in the middle.
			perror("error");
		}
	}

	close(sock);
	cout << "All done" << endl;

	return 0;
}

Recommended answer

Linux raises a SIGPIPE signal when you try to write to a socket whose peer has closed the connection, and by default this signal terminates your program. You need to either catch or ignore the signal, or suppress it by passing the MSG_NOSIGNAL flag to send(). See the man pages for send(), sigaction(), and socket().
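
As a rough sketch, both options might look something like this (the helper names IgnoreSigpipe and RobustSend below are just illustrative, not part of the original code):

#include <sys/socket.h>   // send(), MSG_NOSIGNAL
#include <signal.h>       // sigaction(), SIGPIPE, SIG_IGN
#include <errno.h>        // errno, EPIPE
#include <stdio.h>        // perror()

// Option 1: ignore SIGPIPE for the whole process, so that a write to a
// broken connection returns -1 with errno set to EPIPE instead of
// terminating the program. Call this once at startup, before any send().
void IgnoreSigpipe()
{
	struct sigaction act;
	act.sa_handler = SIG_IGN;   // discard the signal
	sigemptyset(&act.sa_mask);
	act.sa_flags = 0;
	sigaction(SIGPIPE, &act, NULL);
}

// Option 2: suppress the signal per call with MSG_NOSIGNAL and check the
// return value and errno yourself. Returns 0 on success, -1 on error.
int RobustSend(int sock, const void *buf, size_t len)
{
	ssize_t result = send(sock, buf, len, MSG_NOSIGNAL);
	if (result < 0)
	{
		if (errno == EPIPE)
			perror("peer closed the connection");
		else
			perror("send failed");
		return -1;
	}
	return 0;
}

Note that even with SIGPIPE out of the way, the first send() after the server goes away can still report success: the data is merely queued in the local kernel buffer, and the error (EPIPE or ECONNRESET) typically only shows up on a later call, once the peer's RST has arrived.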

