-eigen2tensor

The -eigen2tensor command computes tensor fields for client commands like -smooth, given an ordered pair of input datasets representing vector lengths and orientations (eigenvalues and eigenvectors). Elements of these fields correspond with pixels of some initial operand image and typically quantify various dynamics in the locales of those pixels.

A related G'MIC command, -eigen, performs the complementary operation of extracting eigenvalue-eigenvector pairs from tensor fields. -eigen and -eigen2tensor essentially switch between two representations of an image's intensity dynamics: tensors on the one hand and eigenvalue-eigenvector datasets on the other.

The command takes no parameters but requires that the selected images form ordered datasets of eigenvalue-eigenvector pairs.
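
A minimal sketch of the calling pattern, with image.png standing in for any picture (the file name and the choice of -structuretensors as the analysis step are ours, not requirements of -eigen2tensor): unfold a tensor field into its eigenvalue-eigenvector pair with -eigen, then fold that pair straight back into a tensor field:

gmic -input image.png \
   -structuretensors[-1] \
   -eigen[-1] \
   -eigen2tensor[-2,-1]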

Tensor Fields

A tensor field is a specialized G'MIC dataset associated with a companion image. Elements of the tensor field are in a one-to-one correspondence with pixels of the companion image and form 2x2 or 3x3 matrices (tensors) that characterize particular aspects of the pixels' locales. Commonly those aspects include intensity gradients in a pixel's immediate neighborhood.

Tensor fields arise from an analysis undertaken by some other G'MIC command, such as -structuretensors or -diffusiontensors. The former detects image features, especially edges, and packages information about the strength and orientation of those features in a tensor field. The latter finds a smoothing geometry for an image, which then becomes an input to the -smooth command.
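
For instance, a typical (non-contrary) workflow, analyzing a real image and then smoothing it with the resulting geometry, might look like the following sketch. The file names and the amplitude of 50 are illustrative only, and -diffusiontensors is left at its default parameters:

gmic -input image.png \
   --diffusiontensors[-1] \
   -smooth[-2] [-1],50 \
   -rm[-1] \
   -o[-1] smoothed.png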

For single-slice images, with depths equal to one, the associated tensor fields consist of 2x2 matrices, which are sufficient to characterize local pixel dynamics along the x and y axes. Tensor fields composed of 3x3 matrices are necessary for companion images with greater depth, as dynamics can be measured along the z axis as well.

The data encoded in a tensor field may be "unfolded" by -eigen into a less compact but, for some types of computations, more accessible form consisting of a set of eigenvalue fields together with a set of affiliated eigenvectors, two separate datasets in all. For example, it is possible to compose a tensor field by hand, using the separated eigenvalue-eigenvector form as an intermediary. When one is finished with such a pair, -eigen2tensor can compact the intermediary into a single tensor field suitable for other G'MIC commands.

 The particular organization of these unfolded datasets depends on whether the companion image is a single-slice (depth == 1) or multi-slice (depth > 1) image.

  Single:

  1. Each dataset has two channels.
  2. The eigenvalue dataset is non-negative: (a) channel zero holds the first eigenvalue for each pixel in the companion image; (b) channel one holds the second eigenvalue.
  3. Each channel in the eigenvector dataset holds one component of the gradient vector: (a) channel zero contains the 'x' component, (b) channel one contains the 'y' component. These are scaled to form unit length vectors. That is, squaring the two channel values and adding them produces one, plus or minus a vanishingly small amount accounting for rounding error (on the order of 1×10⁻¹⁵); the sketch following these lists shows one way to verify this. The contour eigenvector is inferred, as it is always orthogonal to the gradient eigenvector.

  Multiple:

  1. Both eigenvalue and eigenvector datasets have the same number of slices as the operand image. Each slice has the same structure.
  2. Each eigenvalue slice has three channels; each eigenvector slice has six.
  3. Each eigenvalue slice is non-negative and, for each pixel, the three channels hold, respectively, the first, second and third eigenvalues.
  4. For each eigenvector slice, the six channels hold the x, y, and z components for two of the three eigenvectors. We can infer the third as it is orthogonal to the plane formed by the other two. These vectors have unit length.
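
For the single-slice case, one could inspect the layout described above with a sketch along these lines (image.png is again just a placeholder; -luminance and -structuretensors merely supply a plausible tensor field to unfold). The first -print reports two channels apiece for the eigenvalue and eigenvector images; the second reports a minimum and maximum of 1, give or take rounding, for the sum of the squared eigenvector channels:

gmic -input image.png \
   -luminance[-1] \
   -structuretensors[-1] \
   -eigen[-1] \
   -print \
   -sqr[-1] \
   -split[-1] c \
   -add[-2,-1] \
   -print[-1]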

Application: Depicting a Tensor Field

We can conjure tensor fields from the Aether and pretend that whatever dynamics they encode pertain to some noise image. The noise image is unrelated to the tensor field and that is atypical. Commonly, a tensor field relates to a corresponding image through the action of some analytic G'MIC command like -structuretensors or -diffusiontensors, but in this application we're pursuing Art. Perhaps, more precisely, we are using noise to visualize a tensor field. The results can be quite pretty.

In any case, we will first conjure from the Aether the eigenvalue and eigenvector components: four grayscale images which we will call EigenOne, EigenTwo, Cosine, and Sine. This terminology comes from "Do Your Own Diffusion Tensor Fields", which (we think) might be worth your while to read.

gmic -srand 35195 \
-repeat 2 \
   6,6,1,1,'2*u-1' \
   -resize2dx[-1] 300,5 \
   -cut[-1] -0.75,0.75 \
   -normalize[-1] 0,1 \
-done \

1a. EigenOne

1b. EigenTwo

We hereby declare that these two images are, respectively, EigenOne and EigenTwo, datasets describing pixel-level intensity dynamics for some yet-to-be-conjured PrettyPicture. Normally one computes such datasets from pre-existing pretty pictures, but we are in contrary moods and prefer putting carts before horses. Or, perhaps, computing the intensity dynamics of an image before the image even exists.

The datasets started out as 6 × 6 noise fields; note the '2*u-1' generator attached to the image specification. We resize with a cubic interpolator, truncate to the middle 75% of the range, and then normalize to the range [0,…1] so that, usually, one or the other eigenvalue dominates for each pixel of the PrettyPicture that doesn't exist yet. These are all heuristic decisions, made in the name of Art.

8,8,1,1,'2*u-1' \
-resize2dx[-1] 300,5 \
-normalize[-1] -1,1 \

2. Cosine

Next we conjure Cosine from a similarly randomized field, but we're working at the slightly finer resolution of 8 × 8, another heuristic. The range of orientations available to us runs from zero to π; correspondingly, Cosine runs from -1 to 1. If you are wondering about the half-circle range, we'll confess at this juncture that we are making a fake diffusion tensor field, one that we will feed to -smooth. Diffusing an image from northeast to southwest is pretty much the same as diffusing an image from southwest to northeast. The scrub marks are parallel no matter which end the arrowhead sits on, so there is no call for the complete circular range.
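
As an aside that leaves the running pipeline untouched, one way to eyeball the orientation field directly is to map Cosine back to an angle. Because Sine will be non-negative, the arccosine of Cosine recovers the orientation in the 0 … π range; something like the following, spliced in while Cosine is still the last image, would display it (the -display and cleanup steps are ours, purely for inspection):

--acos[-1] \
-normalize[-1] 0,255 \
-display[-1] \
-rm[-1] \
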
--sqr[-1] \
-sub[-1] 1 \
-mul[-1] -1 \
-sqrt[-1] \

3. Sine

We derive Sine from Cosine via everyone's favorite trigonometric identity: cos²u + sin²u = 1, though the computation represented in the pipeline looks more like this: sin u = √(1 - cos²u). The Sine of all possible orientations of eigenvectors ranges from 0 to 1, corresponding to the angular range of the half circle: 0 … π.
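
Should you want to convince yourself that the identity holds on the conjured data, another aside (again leaving the pipeline as it was) squares copies of Cosine and Sine, adds them, and prints the statistics; the reported minimum and maximum should both be 1, give or take rounding error:

--sqr[-2,-1] \
-add[-2,-1] \
-print[-1] \
-rm[-1] \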

We now have EigenOne, EigenTwo, Cosine and Sine conjured out of the Aether, image dynamics which correspond to no image we know about.

-append[-4,-3] c \
-append[-2,-1] c \

4a. EigenOne (Channel 0: Red) + EigenTwo (Channel 1: Green)

4b. Cosine (Channel 0: Red) + Sine (Channel 1: Green)

Here we assemble the inputs for our randomly derived tensor field. We append EigenOne and EigenTwo along the spectral (channel) dimension, ditto Cosine and Sine. This leaves on the image list the eigenvalue/eigenvector dataset pair that -eigen2tensor expects to find.

-eigen2tensor[-2,-1] \
100%,100%,1,3 \
-noise[-1] 8,2 \
-dilate_circ[-1] 3 \
-rgb2hsl[-1] \
-split[-1] c \
-mul[-3] 0.5 \
-add[-3] 135.0 \
-append[-3,-2,-1] c \
-hsl2rgb[-1] \
-normalize[-1] 0,255 \

5a. Tensor field produced from EigenOne, EigenTwo, Cosine and Sine.

5b. Emergence of PrettyPicture, a.k.a. Noise

Following the invocation of -eigen2tensor, we have a tensor field dataset that represents a smoothing geometry.

We have no idea what image the smoothing geometry is for, but, when in pursuit of a Higher Art, we don't sweat the details. No picture? Well, heck, there is always noise.

There is a serious side to this whimsy. Noise is a good way to gauge the behavior of smoothing tensor fields.

We use the salt-and-pepper variety of noise, which fills all channels uniformly with unit-sized impulses. Here, it is scattershot over a three-channel image and dilated somewhat.

The noise becomes our ersatz PrettyPicture. Being a uniform random pattern of points, this noise is especially sensitive to the aggregate behavior of the smoothing geometry encoded in the tensor field. We can imagine this field as being composed of small ellipses, at various sizes, eccentricities and orientations, with each ellipse centered on a pixel. Some will blur isotropically, much the same way in all directions. Other ellipses, the more eccentric ones, will be distinctly anisotropic in character and smooth only in particular directions.


-repeat 3 \
-smooth[-1] [-2],100 \
-done \
-rm[0] \
-normalize[-1] 0,255 \
-apply_gamma 3.5 \
-o[-1] ppict.png

6. PrettyPicture, now smoothed.

Our PrettyPicture after three passes using -smooth.

With practice, one can read the behavior of the tensor field from the faux color display. Saturated blue and red regions both correspond to very eccentric tensors; blue has a very strong vertical bias, red a horizontal one. These colors engender very anisotropic blurring: the noise grains elongate into hairlike streaks.

Other areas exhibit less strongly oriented blurring and correspond to areas of less saturated hue.

Finally, gray areas have little or no directional bias whatsoever. These areas behave very much like conventional isotropic blurring kernels.

Apart from orientation, the magnitudes of the major and minor axes of the blurring ellipses dictate how much 'smearing' prevails in the locality of the pixel. Our concocted tensor field has one 'dead spot' in the upper left-hand corner where the noise is unaltered; the corresponding region in the tensor field is a very dark gray. In other areas, the discrete noise grains have been smeared into uniform colors.