Daylight Savings Time handling on persistent connections


Problem description

I assume I'm not the first person to have encountered this, but I
couldn't find anything in the FAQ or on the mailing lists recently.
My apologies if this is already documented somewhere...

My application logs data to a Postgres table continuously (once every
15 seconds), maintaining a persistent connection. Each datum is
logged with a time stamp (Postgres type "timestamp with time zone").
The application does not explicitly set the time zone, and does not
specify it when inserting the records. So everything just defaults to
the local time zone configured for the system, which is "US/Eastern".
This has been working fine all summer.

Until this morning, of course, when DST ended and "US/Eastern"
switched from GMT-4 to GMT-5. Everything logged fine up to 01:59 EDT
(05:59 UTC). Then the clock ticked to 01:00 EST (06:00 UTC), and I
got a unique constraint violation, because the database incorrectly
computed that I was trying to insert another record at 01:00 EDT
(05:00 UTC). I restarted the application when I noticed the problem
this morning, and now everything is working correctly.
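The collision described above comes from the fact that 01:00 local time occurred twice that morning. With a modern Python (3.9+, using `zoneinfo` and PEP 495's `fold` attribute, not the 2004-era Python 2.3/psycopg stack from the thread), the ambiguity can be demonstrated directly; this is a minimal sketch, not the original client code:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

eastern = ZoneInfo("America/New_York")

# On 2004-10-31 the wall-clock time 01:00:00 occurred twice in US/Eastern:
# first as EDT (UTC-4), then again an hour later as EST (UTC-5).
# fold=0 selects the earlier occurrence, fold=1 the later one.
first = datetime(2004, 10, 31, 1, 0, 0, tzinfo=eastern, fold=0)
second = datetime(2004, 10, 31, 1, 0, 0, tzinfo=eastern, fold=1)

print(first.astimezone(timezone.utc))   # 2004-10-31 05:00:00+00:00 (01:00 EDT)
print(second.astimezone(timezone.utc))  # 2004-10-31 06:00:00+00:00 (01:00 EST)
```

A server handed only the string "2004-10-31 01:00:00" has no way to tell these two instants apart, which is exactly the duplicate-key scenario above.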

My suspicion is that Postgres calculates the local offset from UTC
only once per session, during session initialization. Therefore, it
fails to notice when the local offset changes as a result of DST,
causing the problem I just described. It's hard for me to test this,
because I don't have a system I can freely muck with the clock on, but
it would completely explain this behavior.

Is this what's happening? Is it considered a bug? I can see making
the case for not changing the offset mid-session, but in that case it
should be explained more thoroughly in the documentation.

In my case, I think I'll have my app convert all times to UTC before
inserting them. This should avoid all such problems in the future.

PostgreSQL version (client and server) is 7.4.5, on i686 Debian sarge.
The client app is in python 2.3.4 using psycopg.

Thanks,

Randall Nortman

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend

Solution

On Sun, Oct 31, 2004 at 12:47:31PM -0500, Tom Lane wrote:

Randall Nortman <po***********@wonderclown.com> writes:

I can't reproduce the error without messing up my clock, but from my
logs, here's the text of the SQL sent to the server:


insert into sensor_readings_numeric (sensor_id, reading_ts, reading,
min, max) values (3, '2004-10-31 01:00:00', 0.540602, 0.519071837254,
0.551811824539)


And this came back:
ERROR: duplicate key violates unique constraint "sensor_readings_numeric_pkey"



Hmm ... and you were generating that timestamp string how exactly?
I suspect that you actually sent the same timestamp string twice, one
hour apart, in which case I'd have to call this an application bug.
You really need to include the timezone specification in order to
have an unambiguous timestamp string. It doesn't have to be UTC as you
previously suggested, but it does have to be labeled with the intended
zone.
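A labeled timestamp of the kind Tom describes can be produced with the standard library alone; a minimal sketch (the date and offset are the values from this thread, and any fixed UTC offset works the same way):

```python
from datetime import datetime, timezone, timedelta

# An unambiguous timestamp literal carries its own UTC offset.
# 01:00 EST is UTC-5, so the second pass through 01:00 is written as:
est = timezone(timedelta(hours=-5))
ts = datetime(2004, 10, 31, 1, 0, 0, tzinfo=est)
literal = ts.strftime("%Y-%m-%d %H:%M:%S%z")
print(literal)  # 2004-10-31 01:00:00-0500
```

With the offset included, the server no longer has to guess whether the string means EST or EDT.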



Ah, I see now. PostgreSQL is behaving a bit differently than I
expected. The timestamp string above is ambiguous in the timezone
US/Eastern -- it could be EST or EDT. I was expecting PostgreSQL to
resolve this ambiguity based on the current time when the SQL
statement is processed -- if it's currently EST, then the server would
assume that EST was intended, but if it's currently EDT, then it would
assume EDT. If this were the case, my code would be correct -- yes, I
tried to insert the same timestamp value twice, but the inserts were
issued when my local timezone was in different offsets from UTC.

But it appears that PostgreSQL always assumes EDT in this case,
regardless of the current time? I can see that being a good idea, for
its predictability. For example, a client running on a different host
than the server might have its clock off by a few minutes; this could
then cause the server to make a different assumption about the correct
time zone than the client. Even running on the same host, a delay
between the client issuing a command and the server processing it
could cause this problem.

So yeah, I see the wisdom of always specifying a time zone explicitly
in the query. In my case, it will probably be easiest to specify UTC,
because otherwise I have to figure out myself whether or not DST was
in effect when the sensor reading was generated. In my code, in fact,
timestamps are recorded as seconds since the epoch, in UTC, so it
makes little sense to convert to local time anyway. Right now,
psycopg (the python module I'm using for postgres access) is
generating the timestamp string for me (via
psycopg.TimestampFromTicks()). I just need to figure out how to get
it to generate the string with an explicit time zone, which I'm sure
is possible. And if not, I'll just generate the string myself.


Randall Nortman <po***********@wonderclown.com> writes:

My suspicion is that Postgres calculates the local offset from UTC
only once per session, during session initialization.



This is demonstrably not so. We might be able to figure out what
actually went wrong, if you would show us the exact commands your
application issued.

regards, tom lane



Randall Nortman <po***********@wonderclown.com> writes:

Ah, I see now. PostgreSQL is behaving a bit differently than I
expected. The timestamp string above is ambiguous in the timezone
US/Eastern -- it could be EST or EDT. I was expecting PostgreSQL to
resolve this ambiguity based on the current time when the SQL
statement is processed

I think this would be a very bad thing for it to do. It might seem
to make sense for a timestamp representing "now", but as soon as you
consider a timestamp that isn't "now" it becomes a sure way to shoot
yourself in the foot.

But it appears that PostgreSQL always assumes EDT in this case,
regardless of the current time?

Actually, the intended and documented behavior is that it should
interpret an ambiguous time as local standard time (e.g., EST not EDT).
That seems to be broken at the moment :-(, which is odd because I'm
quite certain I tested it last time we touched the relevant subroutine.

We have had varying and often platform-specific behaviors on this point
in past releases, but in 8.0 it should be possible to ensure consistent
results now that we are no longer at the mercy of the local libc's
timezone code.

Before I go off and try to fix it, does anyone have any objection to
the rule "interpret an ambiguous time as local standard time"?
This would normally mean picking the later of the two possible
interpretations, which might be the wrong choice for some applications.
(I notice that HPUX's cron is documented to choose the earlier
interpretation in comparable situations.)
In my code, in fact, timestamps are recorded as seconds since the
epoch, in UTC, so it makes little sense to convert to local time
anyway. Right now, psycopg (the python module I'm using for postgres
access) is generating the timestamp string for me (via
psycopg.TimestampFromTicks()). I just need to figure out how to get
it to generate the string with an explicit time zone, which I'm sure
is possible. And if not, I'll just generate the string myself.



Actually, your best bet is to forgo the conversion altogether. The
recommended way to get from a Unix epoch value to a timestamp is

'epoch'::timestamptz + NNNNN * '1 second'::interval

For example:

regression=# select 'epoch'::timestamptz + 1099251435 * '1 second'::interval;
        ?column?
------------------------
 2004-10-31 14:37:15-05
(1 row)

Or you can do

select 'epoch'::timestamptz + '1099251435 seconds'::interval;

which saves a couple microseconds at execution but requires assembling
the query string as a string. The latter is probably easy for your
application, but if say you were extracting the numeric value from a
database column, the former would be easier.
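Assembling the second (string) form from a Python client is a one-liner; a sketch (the helper name is hypothetical, and when binding untrusted values the multiplication form with a query parameter is the safer choice):

```python
def epoch_to_timestamptz_sql(seconds: int) -> str:
    # Build the literal-interval form shown above; the value is
    # interpolated into the query text, so it must be a trusted integer.
    return f"select 'epoch'::timestamptz + '{int(seconds)} seconds'::interval"

print(epoch_to_timestamptz_sql(1099251435))
# select 'epoch'::timestamptz + '1099251435 seconds'::interval
```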

regards, tom lane


