Motion vectors calculation


Problem description

I am working on the following code:

filename = 'C:\li_walk.avi';
hVidReader = vision.VideoFileReader(filename, 'ImageColorSpace', 'RGB','VideoOutputDataType', 'single');
hOpticalFlow = vision.OpticalFlow('OutputValue', 'Horizontal and vertical components in complex form', 'ReferenceFrameDelay', 3);
hMean1 = vision.Mean;
hMean2 = vision.Mean('RunningMean', true);
hMedianFilt = vision.MedianFilter;
hclose = vision.MorphologicalClose('Neighborhood', strel('line',5,45));
hblob = vision.BlobAnalysis('CentroidOutputPort', false, 'AreaOutputPort', true, 'BoundingBoxOutputPort', true, 'OutputDataType', 'double','MinimumBlobArea', 250, 'MaximumBlobArea', 3600, 'MaximumCount', 80);
herode = vision.MorphologicalErode('Neighborhood', strel('square',2));
hshapeins1 = vision.ShapeInserter('BorderColor', 'Custom', 'CustomBorderColor', [0 1 0]);
hshapeins2 = vision.ShapeInserter( 'Shape','Lines', 'BorderColor', 'Custom','CustomBorderColor', [255 255 0]);
htextins = vision.TextInserter('Text', '%4d', 'Location',  [1 1],'Color', [1 1 1], 'FontSize', 12);
sz = get(0,'ScreenSize');
pos = [20 sz(4)-300 200 200];
hVideo1 = vision.VideoPlayer('Name','Original Video','Position',pos);
pos(1) = pos(1)+220; % move the next viewer to the right
hVideo2 = vision.VideoPlayer('Name','Motion Vector','Position',pos);
pos(1) = pos(1)+220;
hVideo3 = vision.VideoPlayer('Name','Thresholded Video','Position',pos);
pos(1) = pos(1)+220;
hVideo4 = vision.VideoPlayer('Name','Results','Position',pos);
% Initialize variables used in plotting motion vectors.
lineRow   =  22;
firstTime = true;
motionVecGain  = 20;
borderOffset   = 5;
decimFactorRow = 5;
decimFactorCol = 5;
while ~isDone(hVidReader)  % Stop when end of file is reached
    frame  = step(hVidReader);  % Read input video frame
    grayFrame = rgb2gray(frame);
    ofVectors = step(hOpticalFlow, grayFrame);   % Estimate optical flow
    % The optical flow vectors are stored as complex numbers. Compute their
    % magnitude squared which will later be used for thresholding.
    y1 = ofVectors .* conj(ofVectors);
    % Compute the velocity threshold from the matrix of complex velocities.
    vel_th = 0.5 * step(hMean2, step(hMean1, y1));
    % Threshold the image and then filter it to remove speckle noise.
    segmentedObjects = step(hMedianFilt, y1 >= vel_th);
    % Thin-out the parts of the road and fill holes in the blobs.
    segmentedObjects = step(hclose, step(herode, segmentedObjects));
    % Estimate the area and bounding box of the blobs.
    [area, bbox] = step(hblob, segmentedObjects);
    % Select boxes inside ROI (below white line).
    Idx = bbox(:,1) > lineRow;
    % Based on blob sizes, filter out objects which can not be cars.
    % When the ratio between the area of the blob and the area of the
    % bounding box is above 0.4 (40%), classify it as a car.
    ratio = zeros(length(Idx), 1);
    ratio(Idx) = single(area(Idx,1))./single(bbox(Idx,3).*bbox(Idx,4));
    ratiob = ratio > 0.4;
    count = int32(sum(ratiob));    % Number of cars
    bbox(~ratiob, :) = int32(-1);
    % Draw bounding boxes around the tracked cars.
    y2 = step(hshapeins1, frame, bbox);
    % Display the number of cars tracked and a white line showing the ROI.
    y2(22:23,:,:)   = 1;   % The white line.
    y2(1:15,1:30,:) = 0;   % Background for displaying count
    result = step(htextins, y2, count);
    % Generate coordinates for plotting motion vectors.
    if firstTime
      [R C] = size(ofVectors);            % Height and width in pixels
      RV = borderOffset:decimFactorRow:(R-borderOffset);
      CV = borderOffset:decimFactorCol:(C-borderOffset);
      [Y X] = meshgrid(CV,RV);
      firstTime = false;
      sumu=0;
      sumv=0;
    end

    grayFrame = rgb2gray(frame);
    [ra ca na] = size(grayFrame);
    ofVectors = step(hOpticalFlow, grayFrame);   % Estimate optical flow

    ua = real(ofVectors);
    ia = ofVectors - ua;
    va = ia/complex(0,1);

    sumu = ua + sumu;
    sumv = va + sumv;
    [xa ya] = meshgrid(1:1:ca, ra:-1:1);


    % Calculate and draw the motion vectors.
    tmp = ofVectors(RV,CV) .* motionVecGain;
    lines = [Y(:), X(:), Y(:) + real(tmp(:)), X(:) + imag(tmp(:))];
    motionVectors = step(hshapeins2, frame, lines);
    % Display the results
    step(hVideo1, frame);            % Original video
    step(hVideo2, motionVectors);    % Video with motion vectors
    step(hVideo3, segmentedObjects); % Thresholded video
    step(hVideo4, result);           % Video with bounding boxes

    quiver(xa,ya,sumu,sumv)
end
release(hVidReader);

Please help me understand the following statements from the above code:

ua = real(ofVectors);
ia = ofVectors - ua;
va = ia/complex(0,1);

These are the horizontal (ua) and vertical (va) components of the motion vectors. What would the real part of ofVectors be? Please help me understand this code segment.

Solution

When the object hOpticalFlow is constructed in the third line of the code, its OutputValue property is set to 'Horizontal and vertical components in complex form'. The effect is that when you apply the step command to hOpticalFlow and an image (frame), you do not get just the magnitudes of the flow vectors, but complex numbers that represent these planar flow vectors. It is simply a compact way for the command to return the information. Once you have the complex numbers in ofVectors, which is the output of the step command, the command

ua = real(ofVectors);

stores the horizontal component of each vector in ua. After the command

ia = ofVectors - ua;

is executed, ia contains the imaginary parts (i.e., the vertical components of the flow vectors), because the real parts stored in ua are subtracted from the complex numbers in ofVectors. However, you still need to remove the imaginary unit from ia, so you divide by 0 + 1i. This is what the command

va = ia/complex(0,1);

does.
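
For reference, here is a minimal sketch of the same decomposition on a single, made-up flow value (0.3 + 0.4i is purely illustrative and not taken from the code above):

v  = 0.3 + 0.4i;       % hypothetical complex flow vector
u  = real(v);          % horizontal component: 0.3
iv = v - u;            % what remains is purely imaginary: 0 + 0.4i
w  = iv/complex(0,1);  % strip the imaginary unit: 0.4
w2 = imag(v);          % imag() gives the vertical component in one step: 0.4
m2 = v.*conj(v);       % squared magnitude, as used for thresholding (y1): 0.25

In other words, ua = real(ofVectors) and va = imag(ofVectors) would yield the same horizontal and vertical components as the three statements in the question.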
