How to properly manage database deployment with SSDT and Visual Studio 2012 Database Projects?

Question

I'm in the research phase trying to adopt 2012 Database Projects on an existing small project. I'm a C# developer, not a DBA, so I'm not particularly fluent with best practices. I've been searching Google and Stack Overflow for a few hours now, but I still don't know how to handle some key deployment scenarios properly.

1) Over the course of several development cycles, how do I manage multiple versions of my database? If I have a client on v3 of my database and I want to upgrade them to v8, how do I manage this? We currently manage hand-crafted schema and data migration scripts for every version of our product. Do we still need to do this separately or is there something in the new paradigm that supports or replaces this?

2) If the schema changes in such a way that requires data to be moved around, what is the best way to handle this? I assume some work goes in the Pre-Deployment script to preserve the data and then the Post-Deploy script puts it back in the right place. Is that the way of it or is there something better?
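
For concreteness, the pre/post split described above might be sketched like this (all table and column names are hypothetical; this assumes a column is being moved out of `dbo.Customer` into a new `dbo.CustomerPhone` table):

```sql
/* Pre-deployment script (sketch) -- runs before SSDT applies schema changes.
   Stash the data that the schema change would otherwise destroy. */
IF COL_LENGTH('dbo.Customer', 'PhoneNumber') IS NOT NULL
   AND OBJECT_ID('dbo.__Stash_CustomerPhone', 'U') IS NULL
BEGIN
    SELECT CustomerId, PhoneNumber
    INTO dbo.__Stash_CustomerPhone      -- temporary holding table
    FROM dbo.Customer
    WHERE PhoneNumber IS NOT NULL;
END

/* Post-deployment script (sketch) -- runs after the schema change has
   created the new dbo.CustomerPhone table. */
IF OBJECT_ID('dbo.__Stash_CustomerPhone', 'U') IS NOT NULL
BEGIN
    INSERT INTO dbo.CustomerPhone (CustomerId, PhoneNumber)
    SELECT s.CustomerId, s.PhoneNumber
    FROM dbo.__Stash_CustomerPhone AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.CustomerPhone AS cp
                      WHERE cp.CustomerId = s.CustomerId);
    DROP TABLE dbo.__Stash_CustomerPhone;  -- clean up the holding table
END
```

The existence guards make both halves safe to rerun, which matters because SSDT executes these scripts on every publish, not just the one that changed the schema.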

3) Any other advice or guidance on how best to work with these new technologies is also greatly appreciated!

UPDATE: My understanding of the problem has grown a little since I originally asked this question and while I came up with a workable solution, it wasn't quite the solution I was hoping for. Here's a rewording of my problem:

The problem I'm having is purely data related. If I have a client on version 1 of my application and I want to upgrade them to version 5, I would have no problem doing so if their database had no data. I'd simply let SSDT intelligently compare schemas and migrate the database in one shot. Unfortunately clients have data, so it's not that simple. Schema changes from version 1 of my application to version 2 to version 3 (and so on) all impact data. My current strategy for managing data requires that I maintain a script for each version upgrade (1 to 2, 2 to 3, etc.). This prevents me from going straight from version 1 of my application to version 5, because I have no data migration script to go straight there. The prospect of creating custom upgrade scripts for every client, or of maintaining upgrade scripts from every version to every later version, is unmanageable. What I was hoping was that SSDT enables some sort of strategy that makes the data side of things easier to manage, maybe even as easy as the schema side. My recent experience with SSDT has not given me any hope of such a strategy existing, but I would love to find out differently.
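
The per-version chain described above could be pictured as a master upgrade script in SQLCMD mode (file names hypothetical); the pain point is exactly that every hop must exist and be replayed in order:

```sql
-- Master upgrade script (SQLCMD mode, sketch): a client on v1 moving to v5
-- must replay every hop, because each script assumes the schema and data
-- state its predecessor left behind. There is no direct v1 -> v5 script.
:r .\Upgrade\v1_to_v2.sql
:r .\Upgrade\v2_to_v3.sql
:r .\Upgrade\v3_to_v4.sql
:r .\Upgrade\v4_to_v5.sql
```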

Answer

I've been working on this myself, and I can tell you it's not easy.

First, to address the reply by JT - you cannot dismiss "versions", even with the declarative updating mechanics that SSDT has. SSDT does a "pretty decent" job (provided you know all the switches and gotchas) of moving any source schema to any target schema, and it's true that this doesn't require versioning per se, but it has no idea how to manage "data motion" (at least not that I can see!). So, just like DBProj, you're left to your own devices in Pre/Post scripts. Because the data motion scripts depend on a known start and end schema state, you cannot avoid versioning the DB. The "data motion" scripts, therefore, must be applied to a versioned snapshot of the schema, which means you cannot arbitrarily update a DB from v1 to v8 and expect the data motion scripts for v2 through v8 to work (presumably, you wouldn't need a v1 data motion script).

Sadly, I can't see any mechanism in SSDT publishing that allows me to handle this scenario in an integrated way. That means you'll have to add your own scaffolding.

The first trick is to track versions within the database (and SSDT project). I started using a trick in DBProj, and brought it over to SSDT, and after doing some research, it turns out that others are using this too. You can apply a DB Extended Property to the database itself (call it "BuildVersion" or "AppVersion" or something like that), and apply the version value to it. You can then capture this extended property in the SSDT project itself, and SSDT will add it as a script (you can then check the publish option that includes extended properties). I then use SQLCMD variables to identify the source and target versions being applied in the current pass. Once you identify the delta of versions between the source (project snapshot) and target (target db about to be updated), you can find all the snapshots that need to be applied. Sadly, this is tricky to do from inside the SSDT deployment, and you'll probably have to move it to the build or deployment pipeline (we use TFS automated deployments and have custom actions to do this).
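
A minimal version of that extended-property trick might look like this (the property name "BuildVersion" and the SQLCMD variable are illustrative; the actual computation of the version delta lives in the build/deployment pipeline, not in the script):

```sql
/* One-time setup, captured in the SSDT project so it travels with the
   schema: record the build version as a database-level extended property. */
EXEC sp_addextendedproperty @name = N'BuildVersion', @value = N'5';

/* In a pre/post-deployment script, read the target's current version and
   compare it with the version being published (supplied at publish time
   as a SQLCMD variable): */
DECLARE @current nvarchar(32) =
    (SELECT CONVERT(nvarchar(32), value)
     FROM sys.fn_listextendedproperty(N'BuildVersion',
          NULL, NULL, NULL, NULL, NULL, NULL));
PRINT N'Target is at version ' + ISNULL(@current, N'(none)')
    + N', publishing version $(BuildVersion)';

/* After the data motion succeeds, stamp the new version: */
EXEC sp_updateextendedproperty @name = N'BuildVersion',
     @value = N'$(BuildVersion)';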

The next hurdle is to keep snapshots of the schema with their associated data motion scripts. In this case, it helps to make the scripts as idempotent as possible (meaning you can rerun the scripts without any ill side-effects). It helps to split scripts that can safely be rerun from scripts that must be executed one time only. We're doing the same thing with static reference data (dictionary or lookup tables) - in other words, we have a library of MERGE scripts (one per table) that keep the reference data in sync, and these scripts are included in the post-deployment scripts (via the SQLCMD :r command). The important thing to note here is that you must execute them in the correct order in case any of these reference tables have FK references to each other. We include them in the main post-deploy script in order, and it helps that we created a tool that generates these scripts for us - it also resolves dependency order. We run this generation tool at the close of a "version" to capture the current state of the static reference data. All your other data motion scripts are basically going to be special-case and most likely will be single-use only. In that case, you can do one of two things: you can use an IF statement against the db build/app version, or you can wipe out the one-time scripts after creating each snapshot package.
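
A reference-data MERGE of the kind described, plus the version-guard for a one-time script, might be sketched as follows (table names, values, and the "BuildVersion" property are hypothetical):

```sql
/* Idempotent reference-data sync, one MERGE per lookup table; these files
   are :r-included from the post-deployment script in FK dependency order. */
MERGE INTO dbo.OrderStatus AS tgt
USING (VALUES
    (1, N'Open'),
    (2, N'Shipped'),
    (3, N'Closed')
) AS src (OrderStatusId, Name)
    ON tgt.OrderStatusId = src.OrderStatusId
WHEN MATCHED AND tgt.Name <> src.Name THEN
    UPDATE SET Name = src.Name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (OrderStatusId, Name) VALUES (src.OrderStatusId, src.Name)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;

/* A one-time data-motion script guarded by the db build/app version, so it
   becomes a no-op once the target is past that version (a missing property
   also skips the block, since NULL fails the comparison): */
IF (SELECT TRY_CONVERT(int, CONVERT(nvarchar(32), value))
    FROM sys.fn_listextendedproperty(N'BuildVersion',
         NULL, NULL, NULL, NULL, NULL, NULL)) < 4
BEGIN
    -- ... v3 -> v4 data motion goes here ...
    PRINT N'Applied v3 -> v4 data motion';
END
```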

It helps to remember that SSDT will disable FK check constraints and only re-enable them after the post-deployment scripts run. This gives you a chance to populate new non-null fields, for example (by the way, you have to enable the option to generate temporary "smart" defaults for non-null columns to make this work). However, FK check constraints are only disabled for tables that SSDT is recreating because of a schema change. For other cases, you are responsible for ensuring that data motion scripts run in the proper order to avoid check-constraint complaints (or you have to manually disable/re-enable them in your scripts).
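
For the cases SSDT does not cover, manually bracketing a data-motion step looks like this (constraint and table names are hypothetical):

```sql
/* Temporarily stop enforcing the FK for a data-motion step SSDT won't
   wrap for you: */
ALTER TABLE dbo.OrderLine NOCHECK CONSTRAINT FK_OrderLine_Order;

-- ... data motion that would transiently violate the FK ...

/* WITH CHECK re-validates all existing rows, so the constraint comes back
   trusted (a bare CHECK would leave it untrusted by the optimizer): */
ALTER TABLE dbo.OrderLine WITH CHECK CHECK CONSTRAINT FK_OrderLine_Order;
```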

DACPAC can help you because DACPAC is essentially a snapshot. It will contain several XML files describing the schema (similar to the build output of the project), but frozen in time at the moment you create it. You can then use SQLPACKAGE.EXE or the deploy provider to publish that package snapshot. I haven't quite figured out how to use the DACPAC versioning, because it's more tied to "registered" data apps, so we're stuck with our own versioning scheme, but we do put our own version info into the DACPAC filename.
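
Publishing one of those DACPAC snapshots from the command line, with the version handed in as a SQLCMD variable, might look like this (server, database, file, and variable names are illustrative):

```bat
REM Publish a previously captured snapshot with SqlPackage (sketch).
SqlPackage.exe /Action:Publish ^
    /SourceFile:"Snapshots\MyDb.v5.dacpac" ^
    /TargetServerName:"PRODSQL01" ^
    /TargetDatabaseName:"MyDb" ^
    /Variables:BuildVersion=5
```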

I wish I had a more conclusive and exhaustive example to provide, but we're still working out the issues here too.

One thing that really sucks about SSDT is that unlike DBProj, it's currently not extensible. Although it does a much better job than DBProj at a lot of different things, you can't override its default behavior unless you can find some method inside of pre/post scripts of getting around a problem. One of the issues we're trying to resolve right now is that the default method of recreating a table for updates (CCDR) really stinks when you have tens of millions of records.

-UPDATE: I haven't seen this post in some time, but apparently it's been active lately, so I thought I'd add a couple of important notes: if you are using VS2012, the June 2013 release of SSDT now has a Data Comparison tool built-in, and also provides extensibility points - that is to say, you can now include Build Contributors and Deployment Plan Modifiers for the project.
