Web API logging best practice C#


Problem Description


Hi everyone,

I have a web API that communicates with multiple external services (SOAP, REST, etc.). All requests to the external services are made in parallel (60~70 requests created at a time), and I need to know the best practice for logging all requests and responses.

I need to know the best method in terms of performance, organized data for searching and filtering, minimum storage, etc.

What I have tried:

I tried SQL Server FILESTREAM, but this caused file system issues on the server, since the logging operations run under extremely high traffic.

Recommended Answer

You mentioned 60~70 requests created at a time; is that per second?
If installed on a powerful enough machine, SQL Server is good enough.

However, we need to think a little 'out of the box': stop worrying about a logging problem and start thinking about a pure SQL Server efficiency problem.

Store all rows in a heap table: no indexes, no clustering, because it takes work to write both to the table and then to an index. You just want to reduce SQL Server's write time by using a heap.
Have this heap use a single, large enough data file, maybe even on a server devoted to this, depending on the actual workload.
The target heap table should have a column that indicates whether the row has been 'processed' or 'transferred'; a column of type bit.
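
To make the write side concrete, here is a minimal C# sketch of batched logging into such a heap. The table name dbo.RequestLogHeap, its columns, and the one-second flush interval are my own assumptions, not part of the answer above; SqlBulkCopy is used because it keeps per-row write overhead low.

using System;
using System.Collections.Concurrent;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

// Assumed heap table (hypothetical name and columns), created once,
// with no indexes and no clustered key:
//   CREATE TABLE dbo.RequestLogHeap (
//       LoggedAtUtc datetime2     NOT NULL,
//       Service     char(20)      NOT NULL,
//       Request     nvarchar(max) NOT NULL,
//       Response    nvarchar(max) NOT NULL,
//       Transferred bit           NOT NULL DEFAULT (0));
public sealed class HeapLogger : IDisposable
{
    private readonly ConcurrentQueue<object[]> _pending = new ConcurrentQueue<object[]>();
    private readonly string _connectionString;
    private readonly Timer _flushTimer;

    public HeapLogger(string connectionString)
    {
        _connectionString = connectionString;
        // Flush once per second so the API threads never wait on SQL Server.
        _flushTimer = new Timer(_ => Flush(), null, 1000, 1000);
    }

    // Called from the request pipeline; only enqueues, never blocks on I/O.
    public void Log(string service, string request, string response) =>
        _pending.Enqueue(new object[] { DateTime.UtcNow, service, request, response, false });

    private void Flush()
    {
        var batch = new DataTable();
        batch.Columns.Add("LoggedAtUtc", typeof(DateTime));
        batch.Columns.Add("Service", typeof(string));
        batch.Columns.Add("Request", typeof(string));
        batch.Columns.Add("Response", typeof(string));
        batch.Columns.Add("Transferred", typeof(bool));
        while (_pending.TryDequeue(out var row)) batch.Rows.Add(row);
        if (batch.Rows.Count == 0) return;

        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            // One bulk write per batch instead of one INSERT per request.
            using (var bulk = new SqlBulkCopy(connection) { DestinationTableName = "dbo.RequestLogHeap" })
                bulk.WriteToServer(batch);
        }
    }

    public void Dispose()
    {
        _flushTimer.Dispose();
        Flush(); // drain whatever is still queued
    }
}

The trade-off is that a crash can lose up to one flush interval of queued rows, which is usually acceptable for request/response logging.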

Have a job or process that runs every so many seconds, programmed to do the following (a C# sketch follows the list):
1. Update all rows in the heap table to 'transferred'.
2. Copy those rows to a more structured table (maybe in another database) that has indexes and all of that; that is the table you want to use for querying, reporting, and filtering.
3. Delete the processed rows from the heap.
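
A sketch of that job, reusing the hypothetical dbo.RequestLogHeap from above and assuming an indexed reporting table dbo.RequestLog; wrapping the three steps in one transaction keeps rows from being lost or copied twice between steps:

using System.Data.SqlClient;

public static class HeapTransferJob
{
    // Runs every few seconds, e.g. from a scheduled task or a background service.
    // Table names dbo.RequestLogHeap and dbo.RequestLog are assumptions carried
    // over from the sketch above, not names given in the answer.
    public static void RunOnce(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var tx = connection.BeginTransaction())
            {
                // 1. Mark the current batch as transferred.
                Execute(connection, tx,
                    "UPDATE dbo.RequestLogHeap SET Transferred = 1 WHERE Transferred = 0;");
                // 2. Copy the marked rows into the indexed reporting table.
                Execute(connection, tx,
                    "INSERT INTO dbo.RequestLog (LoggedAtUtc, Service, Request, Response) " +
                    "SELECT LoggedAtUtc, Service, Request, Response " +
                    "FROM dbo.RequestLogHeap WHERE Transferred = 1;");
                // 3. Remove the transferred rows so the heap stays short.
                Execute(connection, tx,
                    "DELETE FROM dbo.RequestLogHeap WHERE Transferred = 1;");
                tx.Commit();
            }
        }
    }

    private static void Execute(SqlConnection connection, SqlTransaction tx, string sql)
    {
        using (var command = new SqlCommand(sql, connection, tx))
            command.ExecuteNonQuery();
    }
}

New rows that arrive while the job runs still have Transferred = 0, so they are simply picked up by the next run.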

Basically you should keep the heap table short and not run anything else on it (no reports).
Those are just rules of thumb, basic guidelines.


I have not been able to experiment with the following recommendation, but it is worth a look:
Do not use variable-length data types (varchar, nvarchar, varbinary); all rows should be fixed-length and, if possible, proportional to the size of a SQL Server page, which is 8 KB (see Understanding Pages and Extents).
The match-to-page-size is meant to avoid fragmentation, which can be a real problem with heaps; see 'SQL Server Heaps, and Their Fragmentation' on Simple Talk.
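
If you follow that advice, the logging code has to enforce the fixed length itself before writing. A hypothetical helper, assuming fixed-length columns such as nchar(1000) instead of the nvarchar(max) used in the earlier sketch (the width is an arbitrary choice):

public static class FixedWidth
{
    // Pads short values and truncates long ones so every row stores exactly
    // 'width' characters, matching a fixed-length column such as nchar(1000).
    public static string Fit(string value, int width)
    {
        if (string.IsNullOrEmpty(value)) return new string(' ', width);
        return value.Length >= width ? value.Substring(0, width) : value.PadRight(width);
    }
}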

Good luck

