Help needed with the mathematics involved in raytracing a Mandelbulb programmatically


Problem description


Hello. I'm writing a basic raytracer for the Mandelbulb fractal. I have a fair knowledge of raytracing but have never raytraced fractals. I imagine it is basically the same, except the intersection function would have to solve the equation for a ray intersecting the fractal rather than the equation for a ray intersecting a sphere. The equation of the Mandelbulb is given in Wikipedia. So far so good, but I couldn't solve the equation of a ray intersecting the Mandelbulb myself. I am assuming that we have a ray R which starts at some point P and has direction vector D. Also, in the formula for the Mandelbulb, N and R are given. An exact solution would be really nice, plus some explanations/step-by-step solution, as I don't want to just copy-paste the solution. Also, I don't know how to find the normal to the Mandelbulb at the intersection point.
Also, are there faster methods for approximating a Mandelbulb?
So to sum it up:
1. How to solve the ray-Mandelbulb intersection equation?
2. What is the normal at the intersection?
3. Are there any faster methods for approximating a Mandelbulb?

link fixed - was missing http://


Thank you all for the replies. Here's what I found out so far.
(I have one question left; it is summed up at the end of this edit.)

1. We define a point p to be part of the Mandelbulb if, for the sequence Z = {z0 = 0, zN = (z(N-1))^k + p}, dot(zN, zN) doesn't grow to infinity as N approaches infinity. As with the 2D Mandelbrot set, it can be proven that if dot(zI, zI) > 4 for some natural number I, then dot(zN, zN) goes to infinity as N approaches infinity.
(k is given; different k values produce different Mandelbulbs, the most popular being k = 8,
dot(a, b) is the dot product of vectors a and b,
if z is a vector, the formula for z^k is given in the wiki link above).

In practice, we define a number I and calculate A = dot(zI, zI). If A <= 4, we assume the point is part of the Mandelbulb. Larger values of I give finer detail. Usually (according to various sites on Mandelbulbs) I == 10 is enough (haven't tested it yet).
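
As a minimal sketch of this membership test (the z^k step uses the spherical-coordinate power formula from the Wikipedia article; the helper names triplexPow/isInMandelbulb and the default constants are just illustrative):

// Escape-time membership test described above. Assumed helper names; the z^k
// step follows the spherical-coordinate ("triplex") formula from Wikipedia.
#include <cmath>

struct Vec3 { double x, y, z; };

// z^k in spherical coordinates:
// r^k * ( sin(k*theta)*cos(k*phi), sin(k*theta)*sin(k*phi), cos(k*theta) )
static Vec3 triplexPow(const Vec3 &z, double k)
{
    double r = std::sqrt(z.x*z.x + z.y*z.y + z.z*z.z);
    if (r == 0.0) return Vec3{0.0, 0.0, 0.0};
    double theta = std::acos(z.z / r);
    double phi   = std::atan2(z.y, z.x);
    double rk    = std::pow(r, k);
    return Vec3{ rk * std::sin(k*theta) * std::cos(k*phi),
                 rk * std::sin(k*theta) * std::sin(k*phi),
                 rk * std::cos(k*theta) };
}

// Point p is assumed to belong to the Mandelbulb if dot(z, z) never exceeds 4
// within maxIter iterations (I == 10 as suggested above).
static bool isInMandelbulb(const Vec3 &p, double k = 8.0, int maxIter = 10)
{
    Vec3 z{0.0, 0.0, 0.0};
    for (int i = 0; i < maxIter; ++i) {
        z = triplexPow(z, k);
        z.x += p.x; z.y += p.y; z.z += p.z;
        if (z.x*z.x + z.y*z.y + z.z*z.z > 4.0) return false; // escaped
    }
    return true;
}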

2. I'll assume whoever reads this doesn't know what raytracing is (because the maths involved isn't specific to raytracing). If you do, skip to the next paragraph. In ray tracing, you have a screen somewhere in space and you want to project (render) the world onto this screen. So what you do is cast a ray through each pixel, find which object this ray "hits" first, find some properties of the intersection point (color, distance to the pixel, normal of the surface, and others...) and calculate the color of the pixel from those properties.
In the Mandelbulb case we define a small distance E and then sample the ray at regular intervals of length E. The first point along the ray that is part of the Mandelbulb is where we "hit" it. Smaller Es give finer detail but slow down the calculation process.
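
A sketch of this constant-step sampling, reusing the illustrative Vec3 and isInMandelbulb helpers from the sketch above (marchRay and maxDistance are assumed names):

// March from P along the (normalized) direction D in steps of length E and
// return the first sample that falls inside the set, if any.
#include <optional>

static std::optional<Vec3> marchRay(const Vec3 &P, const Vec3 &D,
                                    double E, double maxDistance)
{
    for (double t = 0.0; t <= maxDistance; t += E) {
        Vec3 sample{ P.x + t*D.x, P.y + t*D.y, P.z + t*D.z };
        if (isInMandelbulb(sample))
            return sample;      // first "hit" along the ray
    }
    return std::nullopt;        // the ray missed the Mandelbulb
}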

3. I've found some optimization methods, but won't go into detail here. Doing the computations on the GPU must be noted, however, because it speeds up the process immensely, and it is a general optimization when raytracing.

Summary:
1. I have a "heightmap" of the Mandelbulb. That is, for each pixel the distance to the Mandelbulb is known. Now I want to make it "shiny". I won't be implementing the real properties of a fractal surface, as it is just black (as noted in Solution 1). For this purpose I'll need the normal at the intersection point of each ray with this approximation of the Mandelbulb. So far I haven't found how to do this (I've Googled a lot).
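
One way to get that normal, and essentially what the z-buffer trick in the code posted in the answers below does, is to treat the heightmap as a surface z = depth(x, y) and take central differences of the neighboring depth values. A minimal sketch, reusing the Vec3 type from the sketches above; the buffer layout and the pixel spacings dx, dy are assumptions:

// Estimate a shading normal from the per-pixel depth ("heightmap") values.
// depth[y][x] is the distance found for pixel (x, y); dx and dy are the
// world-space sizes of one pixel step.
#include <cmath>
#include <vector>

static Vec3 normalFromDepth(const std::vector< std::vector<double> > &depth,
                            int x, int y, double dx, double dy)
{
    double ddx = (depth[y][x + 1] - depth[y][x - 1]) / (2.0 * dx); // d(depth)/dx
    double ddy = (depth[y + 1][x] - depth[y - 1][x]) / (2.0 * dy); // d(depth)/dy

    // Normal of the height field, then normalized.
    Vec3 n{ -ddx, -ddy, 1.0 };
    double len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return Vec3{ n.x/len, n.y/len, n.z/len };
}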

PS: I wasn't sure if this is the right place to post this question, as it might be seen as a new one.
PPS: I'll post some working demos of the whole thing with explanations and code (maybe in an article) as soon as I have them working.
PPPS: To the author of Solution 1, I wasn't sure how to quote you: by your signature, which is just "SA", or by your name, so I quoted it like this. If you feel it shouldn't be like this, please contact me to fix it.

Recommended answers

I'll explain some of the mathematical ideas behind it in very simple terms, and then you can try to apply your knowledge of raytracing, but it is not going to be simple. It also depends on how realistic you want your model to be. The most realistic model is almost impossible to implement, so you might need some cheating, which is quite usual in virtual reality though. :-)

Strictly speaking, for a fractal, the angle and the point of intersection between a ray of light and the surface of the fractal set are truly undetermined. Such a thing simply does not exist, even theoretically. How can that be? It is because the surface itself is something esoteric; in particular, the surface measure is truly infinite. You should not be surprised too much when it comes to fractals.



(To be able to define a local normal, you always need to have something called a differentiable function. Please see:
http://en.wikipedia.org/wiki/Differentiable_function[^].

This is never the case with a fractal. In fact, you don't need a fractal to face a non-differentiable function: there is an uncountable set of non-differentiable functions not related to fractals in any sense of the word. Fractals are very special functions.)

[END EDIT #3]

When it comes to reflection of light, you should remember that in real-life physics it's possible to produce a piece of real material, something you can hold in your hand, whose surface is best described using fractal mathematics. There are no true fractals in real life, but there are no material objects confined by perfectly smooth surfaces either: every solid under normal conditions is composed of atoms, but for some objects a fractal surface is simply the best description. The fractal scaling goes down to a very small size, albeit not the size of an atom. And it's important to understand that such objects have wonderful physical properties, including some optical properties.

If this characteristic size gets smaller than the light wavelength (which is the common case; the wavelength is relatively big, and you can safely consider it to be something like half a micrometer), then from the standpoint of optical properties such objects can behave like true fractals. There is nothing amazing about their look: typically, they just look like very, very black bodies; some such surfaces were claimed to be the blackest recorded objects. Therefore, the most realistic model of reflection would be one considering diffraction of light on the fractal surface. I'm afraid you would run into nearly top-notch mathematical hardness if you want to study it:
http://www.sciencedirect.com/science/article/pii/S003960289800274X[^],
http://www.researchgate.net/publication/1867777_Elastic_Scattering_by_Deterministic_and_Random_Fractals_Self-Affinity_of_the_Diffraction_Spectrum[^],
http://www.researchgate.net/publication/225976935_Fractal_Characteristics_Investigation_on_Light_Scattering_from_Two_Dimensional_Rough_Surface[^],
http://www.researchgate.net/publication/26315277_Diffraction_by_fractal_metallic_supergratings[^].

Can it be simpler? Probably, but you will just face an unusually high volume of data processing (again, typical for fractals). The methods of raytracing use an approach that does not adequately describe real optics, even for smooth surfaces with features nowhere near the wavelength. In nature there are no "rays", but you can model things using idealized "rays" representing narrow bunches of light. What is the nature of a fractal surface? No bunch is narrow enough. The whole idea of a fractal is that it scales down to more and more detail observed at higher magnification. If you can observe some total number of tiny features at the limit of resolution (say, visible as a few pixels), and then increase the magnification, you will recognize just as many features in the magnified picture.

So, the cheating would be to re-render the fractal surface for each view of it, which is always done when fractals are visualized. This procedure always stops at some level of detail, otherwise you would calculate it infinitely. Then, you should evaluate the smallest characteristic feature of this "under-fractalled" surface, and use "rays" of even smaller diameter. In other words, the diameter of a ray should shrink with the fractal view. In this case, you can use your usual methods of tracing. Not quite realistic, but it would probably give a better idea of what a fractal looks like visually. I warned you, this is not easy at all.



So the answers to your questions:

1 and 2: such things do not exist for a fractal surface. They only exist for an "incomplete fractal", obtained from the finite number of iterations used to build a model of the fractal. You can render such a non-fractal surface for every separate view and operate on this surface in the usual way.
3. Be happy if you can solve this problem even slowly. More seriously: the problem will incur an unusually high volume of calculation. I would probably think about using the video card's GPU.

[END EDIT #1]



Perhaps my arbitrary use of the term "render" in "render of a surface" was confusing. This is absolutely not the same as your scene rendering. Here is what happens: with "normal" surfaces you can have a ready-to-trace surface model and work with it whenever you want to render a scene for certain lighting and a certain distance and orientation between the camera and the rendered object.

In the fractal case, there is no such thing. A complete fractal object never exists in a computer, which is a finite-state machine. This is because the representation of such an object "needs" an infinite number of iterations, and its informational capacity is infinite. In a way, it only exists in our imagination. So, the rendering is two-stage: each time you get a new distance and orientation between the camera and the rendered object, you first need to build up an approximate model of the fractal surface, using some finite number of iterations. You should apply some criterion related to the level of detail you obtain by scaling into the fragment of this self-similar object. At the second stage, you can ray-trace it.

[END EDIT #2]

So, wish you the best of luck,
—SA


I've added this code here for completeness. While I don't really consider it a solution in any sense of the word, it does illustrate the idea I'd mentioned, of maintaining a z-buffer in order to calculate surface normals.

It's based on the Julia set, for what it's worth. Also, it's from a time when Borland was the only compiler I had. Goodness, I think I still had Windows 3.11 when I last compiled it!

// 4DFRACTA.CPP
#include <math.h>
#include <graphics.h>
#include <conio.h>
#include <stdio.h>
#include <stdlib.h>
//#include <iostream.h>

//#include <fstream.h>


//	int zant=450;   //z-resolution. bigger zant -> better resolution
//	int zant1=25;   //z-resolution. bigger zant -> better resolution
	int zant=50;   //z-resolution. bigger zant -> better resolution
	int zant1=3;   //z-resolution. bigger zant -> better resolution
	int pixsize=2, vissize=1;

	double xmin=-1.66+0.5, xmax=1+0.5;
	double ymin=-1, ymax=1;
	double zmin=-1.7, zmax=1.7;
	int iter=6;

	double lightx=-1, lighty=1, lightz=-3;

	double vx=0, vy=0, vz=0;

	double cr=0.50;  //constant real value
	double ci=0.40;  //constant imaginary(1) value
	double cj=1;  //constant imaginary(2) value
	double ck=0.05;   //constant imaginary(3) value
	double wk=-0.55;   //4th dimension

  int background = 0;

	int maxcolor = 16;

	int sx,sy;
	double dx,dy,dz;
	double origx, origy, origz;
	double rminx, rminy, rminz;
	double dxx, dxy, dxz;
	double dyx, dyy, dyz;
	double dzx, dzy, dzz;
	double dzx1, dzy1, dzz1;
	double tempx, tempy, tempz;
	double cosx,cosy,cosz,sinx,siny,sinz;
	double z_buffer[640][10];
	int buffer_y;

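// Rotate the point (x,y,z) about the volume centre (origx,origy,origz) by the
// view angles vx, vy, vz, using the precomputed cosx..sinz values.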
void rotate3D(double &x,double &y,double &z)
{
	x-=origx;y-=origy;z-=origz;
	double xy=y*cosx-z*sinx;
	double xz=y*sinx+z*cosx;
	double xx=x;
	x=xx;
	y=xy;
	z=xz;
	double yx=x*cosy+z*siny;
	double yz=-x*siny+z*cosy;
	x=yx;
	z=yz;
	double zx=x*cosz-y*sinz;
	double zy=x*sinz+y*cosz;
	x=zx;
	y=zy;
	x+=origx;y+=origy;z+=origz;
}

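// Rotate the corners of the sampling box and derive the per-pixel increments
// (dxx..dyz) and the per-depth-step increments (dzx..dzz1) used when scanning.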
void rotatevalues()
{
	rminx=xmin;rminy=ymin;rminz=zmin;
	rotate3D(rminx, rminy, rminz);
	tempx=xmax;tempy=ymin;tempz=zmin;
	rotate3D(tempx, tempy, tempz);
	dxx=(tempx-rminx)/sx;dxy=(tempy-rminy)/sx;dxz=(tempz-rminz)/sx;
	tempx=xmin;tempy=ymax;tempz=zmin;
	rotate3D(tempx, tempy, tempz);
	dyx=(tempx-rminx)/sy;dyy=(tempy-rminy)/sy;dyz=(tempz-rminz)/sy;
	tempx=xmin;tempy=ymin;tempz=zmax;
	rotate3D(tempx, tempy, tempz);
	dzx=(tempx-rminx)/zant;dzy=(tempy-rminy)/zant;dzz=(tempz-rminz)/zant;
	dzx1=dzx/zant1;dzy1=dzy/zant1;dzz1=dzz/zant1;
}

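// Iterate the 4D Julia-style map z -> z^2 + c (the squaring rule is spelled
// out in the comment below), starting from the sample point (x,y,z) with
// fourth component wk and constant c = (cr,ci,cj,ck). Returns the squared
// length after at most 'iter' iterations; the caller treats values below the
// bailout of 2 as "inside".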
double calc_l(double x, double y, double z)
{     //  (x,y,z,w)^2 =
		//					( x*x - y*y - z*z - w*w ,
		//					  x*y + y*x + z*w - w*z ,
		//					  x*z + z*x - y*w + w*y ,
		//					  x*w + w*x + y*z - z*y ) }
	double lengde;
	double temp;
	double w=wk;
	int m=0;
	do {
	temp=x+x;
	x=x*x-y*y-z*z-w*w+cr;
	y=temp*y+ci;
	z=temp*z+cj;
	w=temp*w+ck;

	m++;
	lengde=x*x+y*y+z*z+w*w;
	} while ((m<iter) && (lengde<2));
 return lengde;
}

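// Estimate the surface normal at (cx,cy,cz) from the depths of the four
// neighbouring z-buffer samples (w=west, e=east, n=north, s=south) via a
// cross product, and return the cosine of the angle between that normal and
// the light direction, used as the diffuse shade.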
double calc_angle(double w,double e,double n,double s,double cx,double cy,double cz)
{
	double lightdx=cx-lightx;
	double lightdy=cy-lighty;
	double lightdz=cz-lightz;

	double lightlength=sqrt(lightdx*lightdx+lightdy*lightdy+lightdz*lightdz);

	double fx=/*(0)*(s-n)*/-(e-w)*(dy+dy);
	double fy=/*(e-w)*(0)*/-(dx+dx)*(s-n);
	double fz=(dx+dx)*(dy+dy)/*-(0)*(0)*/;

	double flength=sqrt(fx*fx+fy*fy+fz*fz);
	double c=(fx*lightdx+fy*lightdy+fz*lightdz)/(flength*lightlength);
	return c;
}

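// Shade and draw the band of 8 rows currently held in z_buffer (starting at
// screen row y), using calc_angle() on each pixel's neighbours, then shift
// the last two rows down so the next band overlaps correctly.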
void show_buffer(int y)
{
	double a;

	for (int t=1; t<sx; t++) {
	 for (int i=1; i<=8; i++) {
		if ((y+i)<sy) {
			a=calc_angle(z_buffer[t-1][i],z_buffer[t+1][i],z_buffer[t][i-1],z_buffer[t][i+1]
						 ,t*dx+xmin,(y+i)*dy+ymin,z_buffer[t][i]);
			if ((z_buffer[t][i]>zmax) && (background==0)) {
			 setfillstyle(1,0);
			} else if (a<0) {
			 setfillstyle(1,1);
			} else {
			 setfillstyle(1,1+(maxcolor-1)*a);
			}
			bar(t*vissize,(y+i)*vissize,t*vissize+vissize-1,(y+i)*vissize+vissize-1);
		}
	 }
	}


	for (t=0; t<640; t++) {
	 z_buffer[t][0]=z_buffer[t][8];
	 z_buffer[t][1]=z_buffer[t][9];
	}
	buffer_y=2;
}

void main()
{
	int pz, pz1;
	double l;

	int gdriver = VGA, gmode = VGAHI, errorcode;
//	errorcode = registerbgidriver((void(*)())EGAVGA_driver);
//				registerbgidriver(void (*driver)(void));
//	if (errorcode < 0) {
//		printf("Graphics error: %s\n", grapherrormsg(errorcode));
//		exit(1);
//	}

	initgraph(&gdriver, &gmode, "..\\bgi");	// escape the backslash so the BGI path is ..\bgi

	errorcode = graphresult();
	if (errorcode != grOk) {
		printf("Graphics error: %s\n", grapherrormsg(errorcode));
		exit(1);
	}

	for (int i=1; i<16; i++) {
	 setrgbpalette(i, 0, i*4, 0);
	 setpalette(i, i);
	}
	setrgbpalette(0,0,0,63);
	setpalette(0, 0);


//	sx=getmaxx()/pixsize;
//	sy=getmaxy()/pixsize;
	sx = 600;
	sy = 400;

	dx=(xmax-xmin)/sx;
	dy=(ymax-ymin)/sy;
	dz=(zmax-zmin)/zant;
	double dz1=dz/zant1;

	origx=(xmin+xmax)/2;
	origy=(ymin+ymax)/2;
	origz=(zmin+zmax)/2;


	int ve=1;
//  for (ve=0; ve<50; ve++) {  //only used when making animations
//	 vx=0;vy=0;vz=0;
	vx=vx/180*3.14159265;
	vy=vy/180*3.14159265;
	vz=vz/180*3.14159265;

	cosx=cos(vx);cosy=cos(vy);cosz=cos(vz);
	sinx=sin(vx);siny=sin(vy);sinz=sin(vz);

	rotatevalues();
	buffer_y=0;
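	// For each pixel: march along the view ray in coarse steps (dzx,dzy,dzz)
	// until a sample falls inside the set (or zant steps are exhausted), then
	// step back in fine sub-steps (dzx1,dzy1,dzz1) to refine the hit depth,
	// and store the result in the current z-buffer band.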
	for (int py=0; py<=sy; py++) {
	 for (int px=0; px<=sx; px++) {
		tempx=rminx+px*dxx+py*dyx/*+pz*dzx*/;
		tempy=rminy+px*dxy+py*dyy/*+pz*dzy*/;
		tempz=rminz+px*dxz+py*dyz/*+pz*dzz*/;
		pz=0;
		do {
			tempx+=dzx;
			tempy+=dzy;
			tempz+=dzz;
			l=calc_l(tempx,tempy,tempz);
			pz++;
		} while ((l>2) && (pz<zant));
		pz1=0;
		do {
			pz1++;
			tempx-=dzx1;
			tempy-=dzy1;
			tempz-=dzz1;
			l=calc_l(tempx,tempy,tempz);
		} while (l<2);
		if (pz < zant)
			z_buffer[px][buffer_y]=zmin+pz*dz-pz1*dz1;
		else
			z_buffer[px][buffer_y]=zmax+dz;
		setfillstyle(1,15-pz/10);
		bar(px*vissize,py*vissize,px*vissize+vissize-1,py*vissize+vissize-1);
		if (kbhit()) break;
	 }
	 buffer_y++;
	 if (buffer_y==10) show_buffer(py-9);
	 if (kbhit()) break;
	}
	if (!kbhit()) {
	 show_buffer(py-buffer_y);
	 printf("\7");
	}
	char answer = getch();
//  }  //end of FOR-loop. Only used when making animations
	closegraph();
}






See here for a very quickly ported online demo of the code (works best with Chrome: about a 4 s render in Chrome versus 15 s in IE):
http://jsfiddle.net/enhzflep/K76jd/[^]


Your question raised my interest - I hadn't heard of 3D Mandelbrot-related fractals before. After following some links I found this site with awesome pictures and also a couple of links to 3D visualization software for 3D fractals on page 2. Maybe there you can find something useful for your purposes.

