How do I obtain the latency between server and client in C#?


Question

I'm working on a C# server application for a game engine I'm writing in ActionScript 3. I'm using an authoritative server model to prevent cheating and ensure fair play. So far, everything works well:

When the client begins moving, it tells the server and starts rendering locally; the server then tells everyone else that client X has begun moving, along with details so they can also begin rendering. When the client stops moving, it tells the server, which performs calculations based on the time the client began moving and the client's render tick delay, and replies to everyone so they can update with the correct values.

The thing is, when I use the default 20ms tick delay in the server calculations, and the client moves for a rather long distance, there's a noticeable tilt forward when it stops. If I slightly increase the delay to 22ms, everything runs very smoothly on my local network, but from other locations the tilt is still there. After experimenting a little, I noticed that the extra delay needed is pretty much tied to the latency between client and server. I even boiled it down to a formula that works quite nicely: delay = 20 + (latency / 10).
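For reference, the empirical formula above is trivial to express as a helper. This is only a sketch of the questioner's own heuristic; the method name and the assumption that both values are in milliseconds are mine:

```csharp
using System;
using System.Diagnostics;

static class TickDelay
{
    // delay = 20 + (latency / 10), per the empirical formula above.
    // Both input and output are assumed to be in milliseconds.
    public static double ComputeMs(double latencyMs)
    {
        return 20.0 + (latencyMs / 10.0);
    }
}
```

For example, a measured latency of 50ms would give a tick delay of 25ms, and a latency of 0 falls back to the 20ms default.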

So, how would I go about obtaining the latency between a certain client and the server (I'm using asynchronous sockets)? The CPU effort can't be too high, so as not to slow the server down. Also, is this really the best approach, or is there a more efficient/easier way to do it?

Answer

Sorry that this doesn't directly answer your question, but generally speaking you shouldn't rely too heavily on measuring latency, because it can be quite variable. Not only that, you don't know whether the ping time you measure is even symmetrical, which is important. There's no point applying 10ms of latency correction if it turns out that the 20ms ping time is actually 19ms from server to client and 1ms from client to server. And latency in application terms is not the same as in networking terms: you may be able to ping a certain machine and get a response in 20ms, but if you're contacting a server on that machine that only processes network input 50 times a second, your responses will be delayed by an extra 0 to 20ms, and this will vary rather unpredictably.

That's not to say latency measurement doesn't have a place in smoothing out predictions, but it's not going to solve your problem, just clean it up a bit.
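If you do decide to sample latency, a cheap approach is to piggyback an occasional timestamped ping on the existing connection and average the round trips, which also smooths out the variability described above. A minimal sketch, assuming a hypothetical `LatencyTracker` that you wire into your own message framing (the ping message and the client-side echo are not shown):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Tracks round-trip times for one client. BeginPing() is called when the
// server sends a ping message; EndPing() when the client echoes the id back.
class LatencyTracker
{
    readonly Stopwatch clock = Stopwatch.StartNew();
    readonly Dictionary<int, long> pending = new Dictionary<int, long>();
    readonly Queue<double> samples = new Queue<double>();
    int nextId;

    // Returns the id to embed in the outgoing ping message.
    public int BeginPing()
    {
        int id = nextId++;
        pending[id] = clock.ElapsedMilliseconds;
        return id;
    }

    // Records a sample when the client echoes the id; unknown ids are ignored.
    public void EndPing(int id)
    {
        long sentAt;
        if (!pending.TryGetValue(id, out sentAt)) return;
        pending.Remove(id);
        samples.Enqueue(clock.ElapsedMilliseconds - sentAt);
        if (samples.Count > 8) samples.Dequeue(); // small rolling window
    }

    // Smoothed round-trip time in ms; halve it for a rough one-way estimate,
    // keeping in mind the asymmetry caveat above.
    public double AverageRttMs()
    {
        if (samples.Count == 0) return 0;
        double sum = 0;
        foreach (var s in samples) sum += s;
        return sum / samples.Count;
    }
}
```

The per-ping cost is a dictionary insert and remove, so this adds essentially no CPU load to an asynchronous socket server.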

On the face of it, the problem here seems to be that you send information in the first message which is then used to extrapolate data until the last message is received. If all else stays constant, the movement vector given in the first message multiplied by the time between the messages gives the server the correct end position that the client was in at roughly now - (latency / 2). But if the latency changes at all, the time between the messages grows or shrinks. The client may know he moved 10 units, but the server simulated him moving 9 or 11 units before being told to snap him back to 10 units.

The general solution is not to assume that latency will stay constant, but to send periodic position updates, which allow the server to verify and correct the client's position. With just two messages as you have now, all the error is found and corrected after the second message. With more messages, the error is spread over many more sample points, allowing for smoother and less visible correction.
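The periodic-update idea can be sketched as a server-side handler that sanity-checks each reported position against a maximum speed. The `Vector2` type, the `MaxSpeed` value, and the accept-or-reject rule are hypothetical placeholders for the game's own movement rules:

```csharp
using System;
using System.Diagnostics;

struct Vector2
{
    public double X, Y;
    public Vector2(double x, double y) { X = x; Y = y; }

    public static double Distance(Vector2 a, Vector2 b)
    {
        double dx = a.X - b.X, dy = a.Y - b.Y;
        return Math.Sqrt(dx * dx + dy * dy);
    }
}

class ServerPlayer
{
    const double MaxSpeed = 5.0; // units per second (a game rule, assumed)

    public Vector2 Position;
    double lastUpdateSeconds;

    // Called for every periodic position update, not just on start/stop.
    public void OnClientUpdate(Vector2 reported, double nowSeconds)
    {
        double dt = nowSeconds - lastUpdateSeconds;
        lastUpdateSeconds = nowSeconds;

        // The client could not have legally moved farther than this.
        double maxDistance = MaxSpeed * dt;
        if (Vector2.Distance(Position, reported) <= maxDistance)
        {
            Position = reported; // plausible move: accept the client's value
        }
        // Otherwise keep the server's position; the client will be
        // snapped back on the next state broadcast.
    }
}
```

Because each update carries a fresh position, any single correction is small, which is exactly the smoothing effect described above.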

It can never be perfect, though: all it takes is a lag spike in the last millisecond of movement and the server's representation will overshoot. You can't get around that if you're predicting future movement based on past events, as there's no real alternative to choosing either correct-but-late or incorrect-but-timely, since information takes time to travel. (Blame Einstein.)
