Tutorial
1. Conjure out of an arbitrary image a pattern that might produce reasonable-looking brush strokes.
2. Render the illusion of paint having been pushed by fingers or brushes along paths defined by these strokes, with concomitant tracings of bristles or fingerprints and ridging of paint pushed to either side.
3. Conjure up a decent enough lighting model so that our output image looks like it could be a photograph of a painting of our input image.
Image Fetch
Some ambersweet oranges, courtesy of the Agricultural Research Service, US Department of Agriculture. We reduce the image to 1024x1155. Such rescaling has a bearing on the outcome, since anisotropic smoothing is scale-dependent. Roughly, apparent finger size increases inversely with image area, while apparent dexterity improves in direct proportion to it. In brief, 'small pictures' == 'fat, clumsy fingers', and that's fine if that's what you want. If you want the appearance of smaller fingers or brushes, however, start with large images.
Find Finger Strokes
We add a quantized version of the image to the stack. As the amount of blurring increases, strokes become broader and their resolving power declines: we are modeling fatter fingers or broader brushes. Conversely, less blurring gives rise to more strokes, as if produced by smaller fingers or brushes. We equate a quantized band of uniform color with a 'brush stroke'. With this particular image, we envision a good many circular strokes building up to the highlights of the oranges.
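As a rough sketch of what this step amounts to, here is a small numpy stand-in for the G'MIC blur-and-quantize pair; the blur sigma, the number of levels and the helper name are illustrative assumptions, not the script's actual parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def quantize_strokes(image, sigma=4.0, levels=8):
    """Blur an RGB float image, then quantize it into a few uniform bands.

    Each band of uniform color stands in for a 'brush stroke'; a larger
    sigma merges detail and yields broader strokes (fatter fingers), while
    a smaller sigma yields more, finer strokes."""
    blurred = gaussian_filter(image.astype(float), sigma=(sigma, sigma, 0))
    lo, hi = blurred.min(), blurred.max()
    bands = np.floor((blurred - lo) / (hi - lo + 1e-9) * levels)
    bands = np.clip(bands, 0, levels - 1)
    return lo + (bands + 0.5) * (hi - lo) / levels   # mid-value of each band
```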
Generate a 'Finger Stroke Map'
The gradient_norm and threshold commands elucidate the borders between quantized regions, producing topographic contours that mark off regions of similar hue and luminance. Roughly, these correspond to finger paint strokes, but the analogy breaks down where gradients crowd together; the implied fingers would be impossibly small. To counter this, a threshold/dilation/thinning cycle merges tightly spaced gradients, with thinning finding the center of these aggregates. A second dilation finalizes the stroked regions and their boundaries. However, the final geometry awaits anisotropic smoothing. To that end, we derive a diffusion tensor field. We invoke diffusiontensors directly so that we may reuse the resulting diffusion tensor field again and again, as well as modify it from time to time.
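The gradient_norm → threshold → dilate → thin → dilate cycle can be sketched in numpy/scikit-image terms as follows; the threshold fraction, dilation counts and the helper name are illustrative, and skeletonize stands in for the thinning step:

```python
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.morphology import skeletonize

def stroke_map(quantized_gray, thresh=0.05, dilate_iter=2):
    """Trace the borders between quantized regions and merge tight clusters."""
    gy, gx = np.gradient(quantized_gray.astype(float))
    gnorm = np.hypot(gx, gy)                                  # gradient norm: region borders
    edges = gnorm > thresh * gnorm.max()                      # threshold to a binary contour map
    merged = binary_dilation(edges, iterations=dilate_iter)   # fuse tightly spaced gradients
    centers = skeletonize(merged)                             # thinning finds the aggregate centers
    return binary_dilation(centers, iterations=dilate_iter)   # finalize stroke boundaries
```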
Smoothing finger strokes
We generate a diffusion tensor field and harness it to smooth the finger stroke map. The results establish where paint ridging occurs (light areas), with dark/black areas representing the strokes themselves. The intent of this smoothing step is to reduce many of the sharp turns which arise from the thinned contour map; fingers or brushes rarely make sharp corners, particularly given the viscosity of real paint. Some areas do not break down neatly into strokes; these are the murky gray areas. They turn into smear regions not unlike those made by fingers too fat to capture fine detail (but the attempt is made anyway). We achieve quite violent smoothing through two means:
1. The parameters passed to diffusiontensors are quite extreme; the high anisotropy value and low sharpness value favor highly directional smoothing.
2. We make reduced-size copies of the tensor field and finger stroke map before invoking the smooth command. The action of the smooth command is scale-dependent and is much more pronounced at smaller scales, an effect we deliberately exploit (sketched below).
The diffusion tensor field produced in this step directly or indirectly influences the character of brush strokes throughout the remainder of this script. It first gives the finger stroke map its final shape; then, modified, the same tensor field largely emulates the bristle brush pattern. In this manner, the orientation of the brush stroke and bristle patterns match, because both are developed from the same underlying tensor field.
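The second trick, smoothing at a reduced scale, can be sketched like this; the reduction factor is an assumption, and a plain Gaussian filter stands in for the tensor-driven smooth command, purely to show why the same filter bites harder on a smaller copy:

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def smooth_at_small_scale(img, factor=0.5, sigma=2.0):
    """Shrink, smooth, then enlarge: at the reduced scale the same filter
    acts far more violently, rounding off the sharp corners left by thinning."""
    small = zoom(img.astype(float), factor, order=1)
    small = gaussian_filter(small, sigma)                     # stand-in for tensor-driven smoothing
    back = zoom(small, (img.shape[0] / small.shape[0],
                        img.shape[1] / small.shape[1]), order=1)
    return back[:img.shape[0], :img.shape[1]]
```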
Modify the diffusion tensor field
Readers of Do Your Own Diffusion Tensor Field will recall the breakdown of tensor fields into four gray scale images named EigenOne, EigenTwo, Cosine and Sine. Here, we break out EigenOne and EigenTwo and set them to one and zero, respectively, across the entire field. These two modifications give rise to extremely eccentric blurring kernels across the field which are, in turn, steered by the angles encoded in Cosine and Sine. It is through this diffusion tensor field, so modified, that we achieve the effects of brush bristles or fingerprints. Generally, EigenOne and EigenTwo together control the eccentricity of the diffusion kernels; by setting them to one and zero we garner extremely elliptic smoothing kernels, and the effect is that of fine bristles. We could make these bristles bigger and coarser by reducing EigenOne somewhat and increasing EigenTwo by a like amount; the two values should still sum to one. Such modifications give rise to less eccentric (i.e., “rounder”) smoothing kernels, and the effect will be akin to that of a brush with fewer, fatter bristles. Values of 0.7 for EigenOne and 0.3 for EigenTwo are about as far as one can go in this vein; bristles disappear at these settings and do not reappear until EigenOne is less than 0.3 and EigenTwo greater than 0.7. In this range, however, the brush strokes are effectively rotated by 90°, an interesting, if not entirely useful, effect. This apparent rotation arises from the nominally “long” axis of the blurring kernel becoming the shorter of the two, while the “short” axis becomes longer.
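For readers who like to see the arithmetic, here is a numpy sketch of how a 2x2 symmetric tensor field can be reassembled from the four gray scale images, with EigenOne and EigenTwo supplied as whole-field constants; the helper name is hypothetical and the stacking order of the tensor components is an assumption:

```python
import numpy as np

def tensors_from_eigen(eigen_one, eigen_two, cosine, sine):
    """Reassemble T = l1 * u u^T + l2 * v v^T per pixel, with eigenvectors
    u = (cos, sin) and v = (-sin, cos).

    eigen_one = 1, eigen_two = 0 gives maximally eccentric, bristle-like
    kernels; 0.7/0.3 gives rounder kernels and fatter bristles.  The two
    values are assumed to sum to one, as described above."""
    txx = eigen_one * cosine ** 2 + eigen_two * sine ** 2
    txy = (eigen_one - eigen_two) * cosine * sine
    tyy = eigen_one * sine ** 2 + eigen_two * cosine ** 2
    return np.stack([txx, txy, tyy], axis=-1)   # per-pixel (xx, xy, yy)

# Maximally eccentric field: EigenOne = 1 and EigenTwo = 0 everywhere.
# bristle_field = tensors_from_eigen(np.ones_like(cosine), np.zeros_like(cosine), cosine, sine)
```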
Emulating Paint
We emulate paint using a dull under-color and an oversaturated, noise-derived over-color, which gives the appearance of brush bristles. The bristles stem from salt-and-pepper impulse noise multiplied by the original image. We could think of the results as brush bristles frozen in time. To unfreeze time, we smooth these bristles with the modified diffusion tensor field from the previous step. Since that field is composed entirely of highly eccentric blurring kernels, these “bristles” diffuse along lines in accordance with the angles encoded in the Cosine and Sine fields. Since these derive from the “Finger Stroke Map,” the bristles extrude along paths nicely parallel to the edges in the stroke map.
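The frozen bristles can be sketched as an impulse-noise mask multiplied into the image; the noise density and helper name are illustrative, and the subsequent tensor-driven smoothing is not reproduced here:

```python
import numpy as np

def frozen_bristles(image, density=0.5, seed=None):
    """Impulse (salt-and-pepper style) noise multiplied by the image: a field
    of specks which, once smoothed along the modified tensor field, stretch
    into bristle-like streaks parallel to the stroke edges."""
    rng = np.random.default_rng(seed)
    mask = (rng.random(image.shape[:2]) < density).astype(float)   # 0/1 impulses
    return image.astype(float) * mask[..., None]
```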
For the under-color, we desaturate and darken the original image and smooth it with the same diffusion tensor field we applied to the “bristles” image in the previous frame.
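A minimal sketch of the under-color preparation, assuming an RGB image scaled to [0, 1]; the desaturation and darkening factors are illustrative:

```python
import numpy as np

def under_color(image, desaturate=0.6, darken=0.7):
    """Pull colors toward their luminance, then dim: a dull ground color."""
    img = image.astype(float)
    lum = img @ np.array([0.299, 0.587, 0.114])          # Rec. 601 luminance
    dull = img * (1.0 - desaturate) + lum[..., None] * desaturate
    return dull * darken
```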
Over and Under Colors
Smoothed, the over- and under-color images constitute the color component of the finger painting, endowed now with images of bristles (over-color) and ground (under-color). We have yet to introduce bumpiness: a sense that this color field is a surface with an irregular height, lit in some fashion. To that end, we proceed to the lighting model.
For surface lighting, we are interested only in the intensity changes along the width and height axes. To emulate light streaming in from a particular azimuth, we take the dot product of that direction with each element of the direction field. This produces a single-channel scalar field where the most positive values (white) suggest surfaces transverse to the incoming “light rays” and sloping into them. The most negative values (black) are also transverse but slope away. Intermediate shades of gray suggest surface orientations in between these extremes.
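The dot product at the heart of the lighting model can be sketched as follows, assuming a direction field stored as a two-channel array of (d/dx, d/dy) slope vectors; the helper name and the azimuth convention are illustrative:

```python
import numpy as np

def light_dot(direction_field, azimuth_deg):
    """Dot each per-pixel slope vector with a unit 'light ray' direction.

    Positive results: the surface tilts into the incoming light;
    negative results: it tilts away; zero: parallel to the ray."""
    theta = np.deg2rad(azimuth_deg)
    ray = np.array([np.cos(theta), np.sin(theta)])   # unit vector (x, y), y pointing down
    return direction_field @ ray                     # single-channel scalar field
```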
Generate Elevation Map
The first bit of information our lighting model needs is an elevation map, literally an image that indicates how high a point is from some “ground zero.” Not only does this give our model high and low points, but also the slopes from one elevation to another. Here, lighter values indicate higher elevations, with zero intensity representing the minimum, “ground zero” baseline elevation. We construct our height map from two sources: the image of stroke edges from which we generated our diffusion tensor field, at position 5 from the end of the image list, and the smoothed “bristles” over-color image we generated in the previous step, at position 2 from the end of the image list. We make a luminance-only version of the bristles and also scale it down by 70% with respect to the stroke edges. The final elevation map is the sum of these two sources, normalized to the range of zero to one. This image represents the height of our paint above the surface of the ground. Ridges made by fingers or brushes plowing through paint are generally higher (lighter) than the bristles.
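Reading "scaled down by 70%" as a weight of 0.3 on the bristle luminance, the elevation map can be sketched like this; the weight and the helper name are assumptions:

```python
import numpy as np

def elevation_map(stroke_edges, bristles_rgb, bristle_weight=0.3):
    """Sum the stroke-edge image with a scaled-down luminance of the
    bristles image, then normalize to [0, 1]."""
    lum = bristles_rgb.astype(float) @ np.array([0.299, 0.587, 0.114])
    height = stroke_edges.astype(float) + bristle_weight * lum
    return (height - height.min()) / (height.max() - height.min() + 1e-9)
```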
Generate Direction Field
The gradient command measures the change of intensity along the principal axes, width (x), height (y) and depth (z); these approximate the partial derivatives of the surface height along each of the principal directions. For our purposes, we only need the width and height axes, the command's default output. It delivers two single-channel datasets to the image list, the penultimate measuring rates of change along the width axis, the last measuring rates of change along the height axis. We append these single-channel datasets along the spectral axis to make a single object, the direction field. Each pixel in this field represents a vector expressing the steepness and direction of the slope, or gradient, “observed” at the corresponding pixel in the elevation map. For our purposes, these are the slopes from which incident light rays reflect.
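In numpy, the direction field amounts to stacking the two partial derivatives of the elevation map as channels; the helper name is hypothetical:

```python
import numpy as np

def direction_field(elevation):
    """Partial derivatives of the elevation map along width (x) and height (y),
    appended as a two-channel field of per-pixel slope vectors."""
    dy, dx = np.gradient(elevation.astype(float))   # numpy returns (d/drow, d/dcol)
    return np.stack([dx, dy], axis=-1)              # per-pixel (d/dx, d/dy)
```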
Shadow Dot Product
Up to now, we have been making remarks on surface lighting, which includes shadowing as well. It should not come as too big a surprise that shadows arise from very much the same mechanism: dot products on direction fields. In this sense, it might help to think of shadows as “negative light.” In the left-handed coordinate system common in computer graphics, the origin coincides with the upper left hand corner. The positive x axis points to the right, the positive y axis points down, and positive angular rotations are clockwise. In this example, we have chosen that incoming light streams in at -135° (225°). Our first dot product, then, is with a unit vector oriented at -135°, dotted with each vector in the direction field. We form this dot product in three phases:
1. Forming a single pixel image containing the vector components of the “shadow ray.”
2. Resizing this image to match the dimensions of the direction field, forming a constant “unit vector field.”
3. Multiplying the direction and unit vector fields, then adding the channel elements together, forming pixel-wise dot products.
The lightest values coincide with slopes at right angles to the “shadow ray.” The shadow dot product, depicted on the left, is initially a negative image of the shadows. We perform a -cut to isolate the range of shadow values and normalize this range to the interval [0, 1].
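The dot product itself can be formed exactly as in the light_dot sketch above, with an azimuth of -135°. The -cut and normalization can then be sketched as follows; the cut bounds here are placeholders, not the script's actual values:

```python
import numpy as np

def isolate_shadows(shadow_dot, lo=-1.0, hi=0.0):
    """Clamp the shadow dot product to [lo, hi], then map that range to [0, 1]."""
    d = np.clip(shadow_dot.astype(float), lo, hi)   # stand-in for the -cut
    return (d - lo) / (hi - lo)
```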
Shadow Multiplier
It is an old painterly trick to make shadows with complementary color, and that is the case here. Our ultimate aim is to make an image we can multiply with the emulated paint image. This entails multiplying our shadow dot product with the original image to pick up the color of the shadow regions, but in the direct sense. The multiplication complete, we then make a negative of the product, coloring the shadows in the complementary sense and setting the values so that shadows are dark instead of light.
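One way to read this step in numpy terms, assuming images scaled to [0, 1]; the helper is a loose sketch of the multiply-then-negate idea rather than the script's exact arithmetic:

```python
import numpy as np

def shadow_multiplier(original_rgb, shadow_field):
    """Multiply the shadow dot product into the original colors (the direct
    sense), then take the negative: shadows come out dark and complementary."""
    direct = original_rgb.astype(float) * shadow_field[..., None]
    return 1.0 - direct
```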
Composite Shadows
In these last three steps, we repeatedly form dot products for highlights and specular glints, then blend them with the color image. The image on the right is the initial set of blends: the over- and under-colors, then the shadow dot product. This gives a credible, but incomplete, sense of a finger painting. In the next step, we composite in highlights and specular components.
Composite Highlights
The process of generating highlights is procedurally the same as that for shadows; we choose a “light ray” direction of 45°, the exact opposite of the shadow ray, and we do not make a negative of the product.
Composite Specular
The specular glints are just the uncolored highlight dot product reused, distorted exponentially. We normalize the highlight dot product from a large negative value to zero, then raise Euler's number, e, to the elements of this field, obtaining scalars that cluster more closely to zero as we drive the lower normalization boundary more negative. In rougher parlance, “more negative” means “wetter, shinier” speculars.
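A small sketch of the exponential distortion, with the lower normalization bound exposed as a 'wetness' knob; both the helper and its default value are illustrative:

```python
import numpy as np

def specular(highlight_dot, wetness=6.0):
    """Map the highlight dot product onto [-wetness, 0], then exponentiate:
    most values crowd toward zero, leaving sharp glints near the maxima.
    A more negative lower bound (larger wetness) means shinier speculars."""
    h = highlight_dot.astype(float)
    h = (h - h.min()) / (h.max() - h.min() + 1e-9)   # normalize to [0, 1]
    return np.exp((h - 1.0) * wetness)               # shift to [-wetness, 0], apply e**x
```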