Errors caused by write statements when running f2py-wrapped, PETSc-based fortran code in OpenMDAO


I am using f2py to wrap my PETSc-based fortran analysis code for use in OpenMDAO (as suggested in this post). Rather than have f2py do the full build, I use it only to generate the relevant .c, .pyf, etc. files and then compile and link them myself using mpif90.
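Concretely, the generation step looks something like the sketch below, driven through numpy's f2py API (analysis.f90 and module_name stand in for my actual file and module names, and the exact invocation is approximate):

import numpy.f2py

# Generate the signature file and then the C wrapper sources only,
# equivalent to `f2py -m module_name -h module_name.pyf analysis.f90`
# followed by `f2py module_name.pyf` on the command line.
numpy.f2py.run_main(["-m", "module_name", "-h", "module_name.pyf", "analysis.f90"])
numpy.f2py.run_main(["module_name.pyf"])  # emits module_namemodule.c

# Compiling analysis.f90 and the generated wrapper sources, and linking
# them into module_name.so, is then done by hand with mpif90.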

In a simple python environment, I can import my .so and run the code without any problems:

>>> import module_name
>>> module_name.execute()

expected code output...

However, when trying to do the same thing in an OpenMDAO component, I get the following error:

At line 72 of file driver.F90
Internal Error: list_formatted_write(): Bad type

This happens even when running in serial, and the error points to the first place in the fortran code where I use write(*,*). What could be different about running under OpenMDAO that might cause this issue? Might it have something to do with the need to pass a comm object, as mentioned in the answer to my original question? I am not doing that at the moment, as it was not clear to me from the relevant OpenMDAO example how that should be done in my case.
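For what it's worth, what I had in mind for passing the comm is something like this sketch (OpenMDAO 1.x Component API; the set_comm entry point on the fortran side is hypothetical, something I would have to add to the wrapped code):

from openmdao.core.component import Component
import module_name

class FortranAnalysis(Component):

    def solve_nonlinear(self, params, unknowns, resids):
        # self.comm is assigned by the framework during setup; py2f()
        # converts the mpi4py communicator into a fortran-usable handle.
        module_name.set_comm(self.comm.py2f())  # hypothetical fortran-side setter
        module_name.execute()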

When I try to find specific information about the error I'm getting, search results almost always point to the mpif90 or gfortran libraries and possibly needing to recompile or update them. However, that doesn't explain to me why my analysis would work perfectly well in a simple python script but not in OpenMDAO.

UPDATE: Per some others' suggestions, I've tried a few more things. Firstly, I get the error regardless of whether I run with mpiexec python <script> or merely python <script>. I do have the PETSc implementation set up, assuming that doesn't refer to anything beyond the if MPI block in this example.
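To be clear, the if MPI block I mean is the implementation-selection snippet, which as far as I understand boils down to:

from openmdao.core.mpi_wrap import MPI

if MPI:
    # PETSc-backed data vectors when running under mpirun
    from openmdao.core.petsc_impl import PetscImpl as impl
else:
    from openmdao.core.basic_impl import BasicImpl as impl

with the chosen impl then handed to the Problem. That much I do have set up.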

In my standalone test, I am able to successfully import a handful of things, including

from mpi4py import MPI
from petsc4py import PETSc
from openmdao.core.system import System
from openmdao.core.component import Component
from openmdao.core.basic_impl import BasicImpl
from openmdao.core._checks import check_connections, _both_names
from openmdao.core.driver import Driver
from openmdao.core.mpi_wrap import MPI, under_mpirun, debug
from openmdao.components.indep_var_comp import IndepVarComp
from openmdao.solvers.ln_gauss_seidel import LinearGaussSeidel
from openmdao.units.units import get_conversion_tuple
from openmdao.util.string_util import get_common_ancestor, nearest_child, name_relative_to
from openmdao.util.options import OptionsDictionary
from openmdao.util.dict_util import _jac_to_flat_dict

There's not much rhyme or reason to what I tested; I just went down a few random rabbit holes (more direction would be fantastic). Here are some of the things that do result in the error if they are imported in the same script:

from openmdao.core.group import Group
from openmdao.core.parallel_group import ParallelGroup
from openmdao.core.parallel_fd_group import ParallelFDGroup
from openmdao.core.relevance import Relevance
from openmdao.solvers.scipy_gmres import ScipyGMRES
from openmdao.solvers.ln_direct import DirectSolver

So it doesn't seem that the MPI imports are a problem? However, not knowing the OpenMDAO code too well, I am having trouble seeing the common thread in the problematic imports.

UPDATE 2: I should add that I'm becoming particularly suspicious of the networkx package. If my script is simply

import networkx as nx
import module_name
module_name.execute()

then I get the error. If I import my module before networkx, however (i.e. switch lines 1 and 2 in the above block), I don't get the error. More strangely, if I also import PETSc:

from petsc4py import PETSc
import networkx as nx
import module_name
module_name.execute()

Then everything works...

UPDATE 3: I'm running OS X El Capitan 10.11.6. I genuinely don't remember how I installed the python 2.7 I was using (I need to stay on 2.7 rather than 3.x at the moment); it was installed years ago and lives in /usr/local/bin. However, I switched to an anaconda installation, re-installed networkx, and still get the same error.

I've discovered that if I compile the f2py-wrapped stuff using gfortran (I assume this is what you guys do, yes?) rather than mpif90, I don't get the errors. Unfortunately, this causes the PETSc stuff in my fortran code to yield some strange errors, probably because those .f90/.F90 files are, according to the PETSc compilation rules, compiled by mpif90 even if I force the final compile to use gfortran.

UPDATE 4: I was finally able to solve the Internal Error: list_formatted_write() issue. By using mpif90 --showme I could see which flags mpif90 uses (since it's essentially just gfortran plus some flags). It turns out that omitting the flag -Wl,-flat_namespace got rid of those print-related errors.

Now I can import most things and run my code without a problem, with one important exception. If I have a PETSc-based fortran module (pc_fort_mod), then also importing PETSc into the python environment, i.e.

from petsc4py import PETSc
import pc_fort_mod
pc_fort_mod.execute()

results in PETSc errors in the fortran analysis (invalid matrices, unsuccessful preallocation). This seems plausible to me, since both would appear to be attempting to use the same PETSc libraries. Any idea if there is a way to do this so that the pc_fort_mod PETSc and the petsc4py PETSc don't clash? I guess a workaround may be to have two PETSc builds...

SOLVED: I'm told that the problem described in Update 4 ultimately should not be a problem: it should be possible to use PETSc simultaneously in python and fortran. I was ultimately able to resolve my error by using a self-compiled PETSc build rather than the Homebrew recipe.
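For anyone who lands here with the same problem, the pattern that works for me now (with the self-compiled PETSc) is just the block shown above, optionally with an explicit init call; whether petsc4py.init() is strictly required here is an assumption on my part:

import petsc4py
petsc4py.init()  # initialize PETSc (and MPI) for this process
from petsc4py import PETSc

import pc_fort_mod  # the f2py-wrapped, PETSc-based fortran module
pc_fort_mod.execute()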

Solution

I've never quite seen anything like this before, and we've used networkx with compiled fortran wrapped in f2py, running under MPI, many times.

I suggest that you remove and re-install your networkx package.

Which python are you using, and what OS are you running on? We've had very good luck with the anaconda python installation. You have to be a bit careful when installing petsc, though. Building from source and running the PETSc tests is the safest way.
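One quick sanity check after a source build: confirm that petsc4py was actually built against the PETSc installation you intend to use. petsc4py exposes its build configuration, e.g.:

import petsc4py

# Prints the PETSC_DIR/PETSC_ARCH that petsc4py was compiled against;
# if this isn't your source-built PETSc, the fortran and python sides
# may be loading different libraries.
print(petsc4py.get_config())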
