How can I use an SD card for logging 16-bit data at 48 ksamples/s?


Background

My board incorporates an STM32 microcontroller with an SD/MMC card on SPI and samples analogue data at 48 ksamples/s. I am using the Keil Real-Time Library RTX kernel, and ELM FatFs.

I have a high-priority task that captures analogue data via DMA in blocks of 40 samples (40 x 16 bit); the data is passed via a queue of length 128 (which constitutes about 107 ms of sample buffering) to a second low-priority task that collates sample blocks into a 2560-byte buffer (this being a multiple of both the 512-byte SD sector size and the 40-sample block size). When this buffer is full (32 blocks, or approx 27 ms), the data is written to the file system.

Observation

By instrumenting the code, I can see that every 32 blocks, the data is written and that the write takes about 6 ms. This is sustained until (on FAT16) the file size gets to 1 MB, when the write operation takes 440 ms, by which time the queue fills and logging is aborted. If I format the card as FAT32, the file size before the 'long-write' event is 4 MB.

The fact that the file size at which this occurs changes between FAT16 and FAT32 suggests to me that it is not a limitation of the card but rather something that the file system does at the 1 MB or 4 MB boundaries that takes additional time.

It also appears that my tasks are being scheduled in a timely manner, and that the time is consumed in the ELM FatFs code only at the 1 MB (or 4 MB for FAT32) boundary.

The question

Is there an explanation or a solution? Is it a FAT issue, or rather specific to ELM's FatFs code perhaps?

I have considered using multiple files, but in my experience FAT does not handle large numbers of files in a single directory very well and this would simply fail also. Not using a file system at all and writing to the card raw would be a possibility, but ideally I'd like to read the data on a PC with standard drivers and no special software.

It occurred to me to try compiler optimisations to get the write time down; this seems to have an effect, but the write times became much more variable. At -O2 I did get an 8 MB file, but the results were inconsistent. I am now not sure whether there is a direct correlation between the file size and the point at which it fails; I have seen it fail in this way at various file lengths on no particular boundary. Maybe it is a card performance issue.

I further instrumented the code and applied a divide-and-conquer approach. This observation probably renders the question obsolete, and all previous observations erroneous or red herrings.

I finally narrowed it down to an instance of a multi-sector write (CMD25) where occasionally the "wait ready" polling of the card takes 174 ms for the first three sectors out of a block of five. The timeout for wait-ready is set to 500 ms, so it would happily busy-wait for that long. Using CMD24 (single-sector write) iteratively is much slower in the general case - 140 ms per sector - rather than just occasionally.

So it seems to be a behaviour of the card after all. I shall endeavour to try a range of SD and MMC cards.

Solution

The first thing to try could be quite easy: increase the queue depth to 640. That would give you 535 ms of buffering and should survive at least this particular file system event.

The second thing to look at is the configuration of the ELM FatFs. Many embedded file systems are very stingy with buffer usage by default. I've seen one that used a single 512 byte block buffer for all operations and it crawled for certain file system transactions. We gave it a couple of kilobytes and the thing became orders of magnitude faster.
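In ELM FatFs this buffering trade-off is controlled from `ffconf.h`; the exact option spelling varies by release (`_FS_TINY` in older releases, `FF_FS_TINY` in current ones). A non-tiny configuration gives each open file its own sector buffer rather than sharing the single buffer in the `FATFS` object, along the lines of:

```c
/* ffconf.h (excerpt) - option names as in recent FatFs releases;
 * older releases use the _FS_TINY / _MAX_SS spelling.              */
#define FF_FS_TINY  0    /* 0: each FIL object gets its own private
                          * sector buffer; 1: all files share the one
                          * buffer in the FATFS object (smallest RAM,
                          * but forces extra flush/reload cycles)     */
#define FF_MAX_SS   512  /* fixed 512-byte sectors for SD/MMC        */
```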

Both of the above are dependent on whether you have more RAM available, of course.

A third option would be to preallocate a huge file and then just overwrite the data during data collection. That would eliminate a number of expensive cluster allocation and FAT manipulation operations.
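A sketch of that preallocation, assuming ELM FatFs is available (`ff.h`); the size and function name here are illustrative assumptions, and error handling is trimmed. Newer FatFs releases (R0.12+) also offer `f_expand()` for the same purpose; on older releases the classic trick is to seek to the final size and back:

```c
#include "ff.h"

#define LOG_PREALLOC (64UL * 1024 * 1024)   /* e.g. 64 MB - an assumption */

FRESULT open_preallocated_log(FIL *fp, const char *path)
{
    FRESULT fr = f_open(fp, path, FA_CREATE_ALWAYS | FA_WRITE);
    if (fr != FR_OK) return fr;

    /* Seeking past EOF in write mode allocates the cluster chain up
     * front, so the logging loop never pays for cluster allocation
     * or FAT updates mid-capture.                                    */
    fr = f_lseek(fp, LOG_PREALLOC);
    if (fr == FR_OK && f_tell(fp) != LOG_PREALLOC)
        fr = FR_DENIED;                     /* disk full: partial alloc */
    if (fr == FR_OK)
        fr = f_lseek(fp, 0);                /* rewind and start logging */
    return fr;
}
```

The trade-off is that the file's recorded size is the preallocated size, so the reader on the PC side needs to know (or detect) where the valid data ends.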

Since compiler optimization affected this, you must also consider the possibility that it is a multi-threading issue. Are there other threads running that could disturb the lower priority reader thread? You should also try changing the buffering there to something other than a multiple of the sample size and flash block size in case you're hitting some kind of system resonance.
