Research Article

Automatic Representation and Segmentation of Video Sequences via a Novel Framework Based on the D-EVM and Kohonen Networks

Algorithm 9

Algorithm for the convolution process that generates the set DCValues.

Input: The D-EVM for the mask, the temporal position of the final frame, and the width and length of each frame.
Output: The set DCValues, which contains the value of the DC descriptor for every possible sub-animation.
(1)Procedure  AnimConv  (D-EVM mask, integer endFrame, integer width, integer length)
(2)   EVM currentResult; // Current intersection result
(3)   EVM frameSeq;
(4)   integer maxTimeShift;
(5)   integer x, maxXShift;
(6)   integer y, maxYShift;
(7)   real maskDC;
(8)   Set of reals DCValues;
(9)   integer timeShift; // Counter for every time shift.
   /* Obtaining the frame sequence required for the intersection operation. */
(10)      frameSeq ← maskAnimSections (mask, endFrame);
   /* Obtaining the maximum shift for the time and the spatial dimensions. */
(11)      maxTimeShift ← endFrame − mask.timeLength + 1;
(12)      maxXShift ← width − mask.xLength + 1;
(13)      maxYShift ← length − mask.yLength + 1;
(14)     timeShift ← 1; x ← 1; y ← 1;
(15)     while timeShift ≤ maxTimeShift do
(16)       while y ≤ maxYShift do
(17)        while x ≤ maxXShift do
(18)             currentResult ← maskIntersection (mask, frameSeq);
(19)             maskDC ← discreteCompactness (currentResult, mask.LcMin, mask.LcMax);
(20)             DCValues.addDCValue (maskDC);
(21)             mask.EVMTraslation (1, 1); // Shift the mask one position along the x axis.
(22)             x ← x + 1;
(23)         x ← 1; // Reset the horizontal counter before the next row.
(24)         mask.dimReset (1);
(25)         mask.EVMTraslation (2, 1); // Shift the mask one position along the y axis.
(26)         y ← y + 1;
(27)       y ← 1; // Reset the vertical counter before the next time shift.
(28)       mask.dimReset (2);
(29)       mask.EVMTraslation (); // Shift the mask one position along the time axis.
(30)       frameSeq ← maskAnimSections (mask, endFrame);
(31)       timeShift ← timeShift + 1;
(32)     return DCValues;
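The three nested shift loops can be sketched with dense binary arrays standing in for the D-EVM. The function below is an illustrative assumption, not the paper's implementation: `anim_conv`, the array layout (`frames[t][y][x]`), and the scoring callback `dc` are hypothetical names, the D-EVM Boolean intersection is replaced by an overlap count of "on" voxels, and `dc` stands in for the discrete compactness computation normalized by `LcMin`/`LcMax`.

```python
def anim_conv(frames, mask, dc):
    """Slide `mask` over every (time, y, x) shift of `frames`, collecting
    one descriptor value per placement, as in the three nested while loops.

    frames: list of 2-D binary grids, one per video frame.
    mask:   smaller 3-D binary block (time x y x x).
    dc:     function scoring one mask placement (stand-in for the
            discrete compactness descriptor).
    """
    t_len, y_len, x_len = len(mask), len(mask[0]), len(mask[0][0])
    # Maximum shifts, mirroring endFrame/width/length minus the mask extents.
    max_t = len(frames) - t_len + 1
    max_y = len(frames[0]) - y_len + 1
    max_x = len(frames[0][0]) - x_len + 1
    dc_values = []
    for t in range(max_t):            # time shifts
        for y in range(max_y):        # vertical shifts
            for x in range(max_x):    # horizontal shifts
                # Dense stand-in for maskIntersection: count voxels that
                # are "on" in both the mask and the current sub-animation.
                overlap = sum(
                    frames[t + dt][y + dy][x + dx] and mask[dt][dy][dx]
                    for dt in range(t_len)
                    for dy in range(y_len)
                    for dx in range(x_len)
                )
                dc_values.append(dc(overlap))
    return dc_values
```

For a two-frame 3x3 video and a 1x2x2 all-ones mask, this yields (2)(2)(2) = 8 descriptor values, one per possible sub-animation, which is exactly the cardinality of DCValues produced by the pseudocode above.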