The Wet Bird


Artwork created in 2000

  • Renderer: POV-Ray (Megapov 0.4)
  • This picture won the March-April 2000 round of the Internet Ray-Tracing Competition (IRTC), whose topic was "City"

Concept

There are many city pictures on Oyonale. Cities are a favorite subject of mine, so the IRTC "City" topic was somehow perfect. Too perfect actually, because it came at a time when I was tired of making urban pictures. I didn't want to make another "something strange happens here" image, or model another building. I wanted fresh ideas that would involve the use of new techniques.

However, after having tried with limited success a couple of purely "fantastic" ideas, it became obvious that the only good subject would be a real city, possibly one city that would need no introduction, no explanation, the very picture of city life: well, the old Big Apple itself, New York City. There were many advantages to this choice. First, I had recent memories of the place. Second, there was an abundance of reference material, including many personal photographs. Choosing a real city also meant that I had to make it as real as possible, and photorealism is something I had avoided doing in 3D until now.

Of course, even with the city as the main attraction, the picture still lacked a concept. The Megapov documentation provided the solution: because meshes can be copied (almost) endlessly, they're good candidates for motion blur. So here it was: the picture would be about New York (actually a fantasy twin), and it would involve a motion-blurred character. Since motion blur is primarily a photographic effect, it was another excuse to make the picture highly realistic. The character could be a ghost from the past: a human being, like a 19th-century lady, or even an animal. I briefly ran experiments with a deer, but I decided that I had made enough "animals in the city" pictures. The character could also be a simple, hurried passer-by. In fact, I'm still not sure of what the blurred character really is.

First tests

Using a personal photo of Times Square as a basis, I did a first test image. The motion blur worked quite well, but the atmosphere was terrible. In fact the original photo wasn't artistic at all: it was precise but bland. So I went back to my collection of photographs and books, looking for inspiration. Finding the right one took a couple of days. It was a photo of a NY street under the rain. It was quite blurred and imprecise (the whole bottom of the picture showed a dark compact mass of people with no visible ground) but it was telling me something important: there's no need to actually model rain to obtain a rainy effect, at least in a still image. With the right colors and lighting, a proper fog and a good amount of blurring would do the trick. So I would use the picture as a guide for the color and shadow balancing. With the general atmosphere secured by the photographic reference, I would have more freedom to work on the picture's many other elements.

Atmosphere tuning

The first task was to create the general atmosphere, because it would tie the whole picture together. One potential problem was that the picture would be too dark. In the reference photo, the buildings were rather low, and there was a large expanse of visible sky. But because of that, the picture wasn't very spectacular. I wanted tall, dark buildings AND enough light in the scene. So I created my first test scene, which involved making randomly-sized dummy buildings, a basic sidewalk texture and some fog. Below is one of these first trial pictures:

Click on the picture for a larger version.

Well, there was still a long way to go, but these first images were essential for trying a good number of combinations of lights and hues, until I came upon something acceptable. In fact, the atmospheric tuning went on until the very last moments. Notably, because I did most of the modeling on a portable with a passive matrix screen, the colors all looked greenish when shown on a more reliable CRT. So, instead of wild-guessing the right colors, I sampled them directly from the reference photo, and later altered them to suit POV. In fact, the same color is used throughout the picture for the fog, the skies and most of the lights, but with different intensities and patterns. Finally, the atmosphere was the result of the following:

  • A sky plane with a bozo pattern for the sky. I needed a turbulent pattern to break up some of the banding that appears when using non-turbulent gradients. It may be due to the fact that my screen is configured with 65000 colors, but I didn't want the picture to be ugly on my own computer.
  • 2 vertical planes with a y gradient to simulate a ground fog effect and to block the view. A large one is located far away, at 2400 POV units, and a smaller one is halfway, at 1300 POV units.
  • A filtering fog.
  • 5 light sources (including 3 area lights), 4 of them in front of the camera and 1 behind.
  • 2 dark planes located behind the camera, used to kill any parasite reflection.
  • The focal blur of the distant sky and buildings.
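The fog-plane part of the list above can be sketched in scene code. This is only an illustration: the colors, gradient height and fog values are placeholders; only the 1300/2400-unit plane distances come from the text.

```pov
// Illustrative sketch of the layered atmosphere (placeholder values).
#declare C_Fog = color rgb <0.55, 0.52, 0.50>;  // one base color reused everywhere

#declare FogPlane = plane {
  z, 0
  hollow
  pigment {
    gradient y
    color_map {
      [0.0 rgbt <0.55, 0.52, 0.50, 0.0>]   // opaque near the ground
      [1.0 rgbt <0.55, 0.52, 0.50, 1.0>]   // fully transparent higher up
    }
    scale 300                              // placeholder fade height
  }
  finish { ambient 1 diffuse 0 }           // self-lit, so it reads as haze
}

object { FogPlane translate z*2400 }       // the large, distant plane
object { FogPlane translate z*1300 }       // the halfway plane

// Plus an ordinary filtering fog over the whole scene
fog { fog_type 1 distance 1500 color rgbf <0.55, 0.52, 0.50, 0.4> }
```

Reusing the same base color for the planes, the fog and the lights, with different intensities, is what keeps the whole scene tonally consistent.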

Lamp posts

I had a lot of reference pictures but the trouble was to find the right proportions, since verisimilitude was the key. So, using more or less detailed pictures of NY residents waiting for the "Walk" sign, I tried to estimate the relative heights and diameters of the various parts of the lamp post models, while keeping the models as parameterised as possible for poetic license. The head, trunk and foot of the post were modelled in sPatch and then exported to DXF to be converted to the usual mesh format.

Particularly difficult to model was the "Walk/Don't walk" box, whose precise geometry kept escaping me, in spite of numerous photographs of the device. I wanted the thing to be as versatile as possible, so that I could create various "corner" lamp posts with the same basic model. This proved to be very useful, and worth the sweating. The "don't walk" picture itself was first painted in Picture Publisher. The effect was good, but not that good. Then I found a real-life picture of it, and the effect was immediate. Instead of a regular POV picture, I had something that looked real, for a good reason: it's hard to beat reality.

Don't walk sign

Comparison of the two "Don't walk" lights, with the final one on the right. [close-up here]

A lot of time was also spent scanning or retrieving photos of real NY street signs, and then correcting them for parallax, color saturation and contrast. In a few cases, I had to recreate them from scratch when the original picture was too bad or too distorted. Here are the lamp posts and traffic lights. The signs' colors are very saturated, otherwise the fog would have killed them.

The source and images maps for the lamps are provided here.

Buildings

Modeling buildings usually starts as good fun, but always becomes the most boring POV-related activity. This was no exception. For a while, I kept putting off the moment when I would actually have to model them, by using dummies or even buildings from previous pictures that were unsuitable for this one. The dummies were first just a long box, then a series of randomly sized and colored boxes. They did fine, but when the picture started to get really complex, it became hard to test it without the realistic buildings.

Textures and dummies

The first step was to find good textures. I knew that I wasn't going to do it procedurally. A real wall or window structure is very complex, particularly at this scale, and using procedural textures to obtain photorealistic windows would have been a real waste of time and computer memory (I needed hundreds of them). So I took one of my favorite New York books and scanned a dozen building facades. Each photo was corrected for the parallax, made tileable, and the colors were brightened, contrasted and saturated. Then I devised a system that let me apply a wall/window texture on a box so that the image_map is always scaled with a correct y/x ratio and by an amount related to the theoretical building size, so that all the buildings created that way would be comparable. This system was particularly useful for creating dummy buildings: a loop would generate 20 or so dummies, each with its randomly chosen size and image maps.
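The scaling system could look something like the sketch below. Everything here is invented for illustration (the macro name, the facade file, the tile size); it only shows the idea of scaling an image map by a fixed tile height so facades of any size stay comparable.

```pov
// Hypothetical sketch of the facade-scaling idea: an image map applied to a
// box so that its tiles keep a fixed world-space size whatever the building
// dimensions. Macro name, file name and the 4-unit tile height are invented.
#macro Facade(Width, Height, ImageFile, TileHeight)
  box {
    <0, 0, 0>, <Width, Height, 1>
    pigment {
      image_map { png ImageFile }
      // one tile spans TileHeight units; image_map repeats to fill the box
      scale <TileHeight, TileHeight, 1>
    }
  }
#end

// A loop like the one described: 20 or so dummies of random sizes
#declare S = seed(123);
#declare i = 0;
#while (i < 20)
  object {
    Facade(20 + 30*rand(S), 60 + 140*rand(S), "facade01.png", 4)
    translate x * i * 60
  }
  #declare i = i + 1;
#end
```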

Below is an early render using the dummies (the building on the right was a dummy taken from a previous picture). Of course, every dummy was to be replaced by a real model.

Click on the picture to see a larger version.

The second building on the left

I started with the second building on the left. The main idea was to apply an image map to its height-field counterpart: the image is converted to black and white, and processed until all the windows are black and the window frames and wall parts are white (or light grey). When the original image map is applied, it makes very realistic windows (at least when seen from afar). The first tests were good but I soon came upon something very disturbing: in Megapov 0.4, a bug sends the raytracer into infinite loops when hitting some height fields (this has been fixed in later versions). This was a real problem, because I couldn't even foresee when it was going to happen. A single image could take several days to render since I had to stop and restart it several times. So the height fields had to go, and I had to replace them with CSG. And then there was another problem: because the image map wasn't straight, the windows' borders were fuzzy and didn't form square angles. I then cut up the original image into 34 little pieces: each piece was applied to its own CSG window, in a random fashion. The result was actually better than the height fields, because I had total control over the building structure.
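The "34 pieces applied at random" step might be sketched as follows. The file names, window dimensions and macro are all invented; the point is picking one of the 34 cut-up facade images per CSG window using POV's string functions.

```pov
// Hypothetical sketch: each CSG window gets one of 34 cut-up facade images
// at random. File names ("win01.png"..."win34.png") and geometry are invented.
#declare S = seed(42);

#macro Window(Px, Py)
  box {
    <0, 0, 0>, <1.5, 2.2, 0.1>
    pigment {
      image_map {
        // build a file name like "win07.png" from a random index 1..34
        png concat("win", str(1 + int(34*rand(S)), -2, 0), ".png")
        once
      }
      scale <1.5, 2.2, 1>   // stretch one image over one window
    }
    translate <Px, Py, 0>
  }
#end
```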

Below are 6 examples of these images. They were all heavily processed, with highlight/midtone/shadow correction, color saturation and blurring.

f01.jpg (3413 octets)
f02.jpg (3423 octets)
f03.jpg (3834 octets)
w01.jpg (2233 octets)
w02.jpg (2326 octets)
w03.jpg (2028 octets)

The first building on the left

The first building on the left was the closest to the camera, and thus it had to be very detailed. There's little to say here: it was pure CSG and it took a whole week to build, floor by floor. I used a real picture as a reference. If you go to (or live in) New York, you'll see a building like this in front of the Flatiron.

When the CSG was finished, I rendered it with an orthographic camera, and I used the resulting image as a basis for the image map. The dirty streaks were obtained with a "wind" filter that makes the dark parts (the windows) "bleed" in a given direction. After much blurring and other complicated processing (no photographs involved!), I had an image map that could be precisely superposed on the building. Below is a small version of the image map.

The last trick involved the windows' reflections. Since all the windows were identical, it was difficult to have them behave independently. I could have made a macro, but instead I used the "cells" pattern which, applied on a plane directly located behind the building, gave the right effect.
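The idea can be sketched like this; all the values are placeholders, not taken from the actual scene file.

```pov
// Illustrative sketch of the "cells" reflection trick: a self-lit plane just
// behind the window glass, with a random grey per cell, so that identical
// windows pick up slightly different reflected shades.
plane {
  z, -0.1                                   // just behind the facade
  hollow
  pigment {
    cells                                   // one random value per unit cube
    color_map { [0 rgb 0.05] [1 rgb 0.5] }  // from near-black to mid grey
    scale <3, 4, 1>                         // roughly one cell per window
  }
  finish { ambient 1 diffuse 0 }            // seen only through the reflections
}
```

Because the cells pattern gives each unit cube its own random value, one plane is enough to break the uniformity of hundreds of identical windows.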

The source and image map for this building are available here.

image map

The Chrysler building

After spending quite a long time on the lamp posts and on the first two buildings, I didn't want to model anything else for a while. I knew that the 3dcafe had several buildings available, including a model of the Chrysler building that I had used in 1996 in the picture on the right (not featured in the Book of Beginnings).


At the time, I had been somehow frustrated that I could not put a good texture on it. But now I had Steve Cox's UV Mapper! So I downloaded the building again, converted it to the obj format, created the map with the mapper and went to work on the texture. For this, photos of the real building were mixed with some of the material that I had previously scanned. This took a whole day to do. Since the map and mesh are very large, I won't release them, but here is how the map looks:

Chrysler building map

You'll notice how the building bottom was darkened to make it foggy (or smoggy). The building was converted to 3DS (with Poser) and then to the mesh2 format (with 3DS2POV). It's used twice in the picture (the second time only as "filling" material).

Another building followed exactly the same process (it's the smaller one at the foot of the Chrysler, on the right).

Other buildings

The other buildings were done more quickly, as unions of boxes with image maps mixed with procedural textures applied to them. I planned to add details but ran out of time.

The building on the right was taken from one of my previous pictures. It's a heavy (1200 lines!) CSG construct that took me a week to build a few months ago. I had to modify it a little so that it could fit in the picture. You can see this building in the picture Waiting for Noah (it is the one on the right).

Miscellaneous objects

Fire hydrant

It happened that Christophe Bouffartigue had created a very complex model of a fire hydrant, and gave me permission to use it in my picture (and to release the source file). Though the model was already very good (you can see it in his own IRTC entry), I wanted to improve the textures. So I spent a day or so on it, first replacing the spheres and cylinders with isosurfaces (so that I could add 3D noise) and then creating a few textures. Actually, it was overkill, because the model could have been used directly, but I had some fun doing so. This model also uses the "link" macro by Chris Colefax. The picture on the right shows how it looks with good lighting. Click here to see a close-up of the model in the final image.

The source for the fire hydrant is available here.

Fire hydrant

News vending machines

The news vending machines are simple CSG, based on photos I had taken in Washington DC in 1992. There is provision in the code for the image maps used to represent the newspaper and the ads.

The source for the vending machines is available here.

Trash can

There's a trash can… It is rather hidden now behind the news machines, but it used to be at the front of the picture. It was made with sPatch and uv-mapped with a TGA bitmap with an alpha channel.

Street and sidewalk

Since most of the picture's atmosphere would be based on the rainy effect, it was of the utmost importance to have a sidewalk and street texture that would reflect the light in a realistic way. I needed patches of darkness with a variable reflection. I thought that doing it procedurally would be nonsense, so I turned to my favorite painting program instead.

First, I drew a pattern of rectangles of various grey intensities. This image was used as a basis for two pictures: (1) a contour filter provided the height field itself (the cracks following the rectangles' outlines); (2) the blurred and slightly distorted image was used as an image pattern for the sidewalk texture. These images are shown on the right at 20% of their original size:

Height field bitmap Image pattern bitmap

The texture is a texture map mixing, through the image_pattern pattern (a feature available in Megapov and possible in POV 3.5), two textures with different finishes (variable reflection and normal). The "wet" effect was obtained by averaging several normal statements (a bump_map) of various sizes. The several layers of normals are used to simulate reflection blurring. I had tried real reflection blurring first but it was too slow for the purpose, so I used the classic technique instead.

Here is the code:



#declare c10=color rgb ;

#declare bsize=0.3;

#declare N1=normal{bump_map{png "bitmapsnormalsw"} bump_size bsize turbulence 0.3 scale 10000}
#declare N2=normal{bozo bsize turbulence 1 scale 0.001}
#declare N3=normal{bozo bsize turbulence 1 scale 0.01}
#declare N4=normal{bozo bsize turbulence 1 scale 0.1}
#declare N5=normal{bozo bsize turbulence 1 scale 1}

#declare txtSW1=texture{
    pigment{c10*0.1}
    normal{average normal_map {[2.0 N1][1 N2][1 N3][1 N4][1 N5]}}
    finish{ambient 0 diffuse 0.02 specular 0.002 roughness 0.1
        metallic 2 reflect_metallic reflection_type 1 reflection_min 0.0
        reflection_max 1 reflection_falloff 6 conserve_energy
    }
}

#declare txtSW2=texture{
    pigment{c10*0.1}
    normal{average normal_map {[1 N2][1 N3][0.4 N4][0.2 N5]} scale 3}
    finish{ambient 0 diffuse 0.02 specular 0.002 roughness 0.1
        reflection 0.1 reflect_metallic
    }
}

#declare txtStr1=texture{
    pigment{c10*0.01}
    normal{average normal_map {[2.0 N1][1 N2][1 N3][1 N4][2 N5]}}
    finish{ambient 0 diffuse 0 specular 0.002 roughness 0.1
        reflect_metallic reflection_type 1 reflection_min 0.0
        reflection_max 0.7 reflection_falloff 6 conserve_energy
    }
}

#declare txtStr2=texture{
    pigment{c10*0.01}
    normal{average normal_map {[1 N2][1 N3][0.4 N4][0.2 N5]} scale 3}
    finish{ambient 0 diffuse 0 specular 0.002 roughness 0.1
        reflection 0.1 reflect_metallic
    }
}

#declare matSW=material{
    texture {
        image_pattern { png "bitmapsswmap" }
        turbulence 0.01
        texture_map {[0 txtSW1 scale ][1 txtSW2 scale ]}
        scale rotate x*90
    }
    interior{ior 1.33}
}

This material was applied to the sidewalk height-field, and then this piece was replicated several times to form the whole sidewalk.

Later on, I drew some random trash elements with sPatch, exported them as DXF, converted them to meshes with Crossroads, and arranged the trash randomly with a while loop.

The street is an isosurface plane with noise3d. Its texture is similar to the sidewalk's. The crossing marks are made with height fields and isosurfaces.
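A noise-perturbed isosurface plane of this kind might look like the sketch below. It is written in POV 3.5 syntax (the original used MegaPov's noise3d); the amplitude, scale and container size are placeholders.

```pov
// Guessed reconstruction of a noise3d street: an isosurface whose surface is
// y = 0 plus a little 3D noise. All numeric values are illustrative.
#include "functions.inc"   // provides f_noise3d in POV 3.5

isosurface {
  function { y - 0.05*f_noise3d(x*2, 0, z*2) }        // gentle asphalt bumps
  contained_by { box { <-100,-1,-100>, <100,1,2500> } }
  max_gradient 1.5
  pigment { rgb 0.02 }                                 // near-black, as in txtStr1
  finish { specular 0.002 roughness 0.1 reflection 0.15 }
}
```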

The ghost

The ghost obviously had to be made with Poser. The standard business man was used, and his trenchcoat was taken from the Poser 4 wardrobe. The umbrella was found at Renderosity. Posing it wasn't difficult but exporting it to POV was less simple than it should have been. Because of various bugs in Poser's 3DS export, I had to export all the elements independently to the obj format, convert them to 3DS and then to POV. For those unfamiliar with Poser, here is our ghost in flesh and bones:

mouille_wip_ps

The ghost texture was mostly black with some reflection. It wasn't so important, since the ghost would be motion-blurred. The blurring involved both translation and rotation (rotate -y*clock*2 translate *clock) and only 10 samples.
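From memory of the MegaPov 0.4 feature, the motion-blur wrapper looked roughly like the sketch below; treat it as an approximation and check the MegaPov documentation for the exact syntax. "Ghost", the sample interval and the translation vector are placeholders (the original vector was lost in the text); only the rotate term and the 10 samples come from the paragraph above.

```pov
// Hedged sketch of a MegaPov-style motion-blurred object.
motion_blur {
  10, 0.05                        // 10 time samples over the shutter interval
  object {
    Ghost                         // hypothetical mesh identifier
    rotate -y*clock*2             // the rotation quoted above
    translate <0.3, 0, 0>*clock   // placeholder direction
  }
}
```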

Click here to see a close-up of the ghost.

Cars

The cars were a late addition to the picture. The deadline was approaching and I felt that the picture was somehow too symbolic for my own taste. Also, I had done quite a few empty cities before and this was becoming a personal gimmick. Then again, I felt that the large expanse of emptiness on the left enhanced the loneliness of the ghost. I wanted it to be lonely, but in a low-key sort of way. And the picture was too dark and verging on dull. Here is a snapshot of the picture a few days before the deadline: there are no cars, no bird, and a dummy building on the right. The Chrysler building doesn't have its terminal spire yet, which I would later make with CSG.

I then thought that putting in cars would add some weight to the scene, add colors and give it more balance. All the cars were found on various 3D model sites. Most were in 3DS format, and they were easily converted to the mesh2 format by 3DS2POV. Fortunately, they were all made of different parts textured independently. Unfortunately, it took me some time to figure out what these parts were and how to texture them, not to mention the vastly different units used by the model makers. Some cars were tiny while others were 1000 units large! Other models were positioned strangely, with the model apparently containing some environment data (lights, ground, sky). Each of the ten cars had to be processed individually, until I was able to texture and place it exactly. Below is an example of a car definition: the texture names not beginning with "txt" (like MBRASS1) are those found in the original 3DS file. The cars were put in an array.


#declare txtPaint=texture{pigment{Black} normal{dents -1 scale 0.01}
    finish{ambient 0 diffuse 1 specular 1 roughness 0.001 reflection 0.05}}
#declare txtRed=texture{pigment{rgb*2} finish{ambient 1 diffuse 0}}

#declare default_texture = texture{txtPaint}
#declare MBLACK_MATTE = texture{txtBlack}
#declare MMaterial__3_azul = texture{txtPaint}
#declare MBLK_PLASTISTEEL = texture{txtBlack}
#declare MMaterial__10 = texture{txtBlack}
#declare MMaterial__2_crom = texture{txtMetal}
#declare MMaterial__8_cris = texture{txtLight}
#declare MMaterial__6 = texture{txtRed}
#declare MMaterial__11 = texture{txtRed}
#declare MBRASS1 = texture{txtMetal}
#declare MBLACK_GLASS = texture{txtGlass}
#declare MBLACK_GLASS1 = texture{txtGlass}
#declare MBLACK_MOLDURAS = texture{txtBlack}
#declare MBEIGE_PLSTC = texture{txtWhite}
#declare MBEIGE_MATTEOBSCU = texture{txtBlack}
#declare MWHITE_MATTE = texture{txtWhite}

#include "carslimomepo_o.inc"

#declare Cars[7]=union{
    object{ pneu1 } object{ carroceria } object{ partegris } object{ ventanas }
    object{ parteinfne } object{ moldurasgr } object{ parrillapl } object{ faros }
    object{ calaveras } object{ cuartosdel } object{ calaverast } object{ rinesdelan }
    object{ rinestrase } object{ pneu02 } object{ asientotra } object{ sombrerera }
    object{ division } object{ cristdivis } object{ sunroofcrs } object{ marcossu01 }
    object{ suelo }
    object{ asientodel } object{ volante } object{ tableroins } object{ antena01 }
    translate
    scale 1.7/(0.165808+0.58554)
}

After that, I had to add better taillights, since a simple ambient 1 texture wasn't very realistic. This was done with emitting media put in boxes and spheres. Each media was positioned on the corresponding mesh by trial and error. Here is an example of the media code for the Thunderbird on the left:


#declare TLSmall=sphere{0,1
    texture{pigment{Clear} finish{ambient 0 diffuse 0}}
    interior{
        media{emission Red*3 density{spherical color_map{[0 Black] [1 White*2]}}}
        media{emission Yellow*4 density{spherical color_map{[0 Black] [1 White*4]}} scale 0.5}
    }
    hollow scale 1 translate y*0.5
}

#declare TLLargei=sphere{0,1
    texture{pigment{Clear} finish{ambient 0 diffuse 0}}
    interior{
        media{emission Red*3 density{spherical color_map{[0 Black] [1 White*2]}}}
        media{emission rgb*4 density{spherical color_map{[0 Black] [1 White*3]}} scale 0.5}
    }
    hollow translate
}

#declare TLargeCombo=union{
    object{TLLargei scale 1}object{TLLargei scale 1 translate x}object{TLLargei scale translate x*2}
    object{TLLargei scale translate x*3}object{TLLargei scale translate x*4}
    object{TLLargei scale translate x*5}object{TLLargei scale translate x*6}
    rotate z*-1 scale
}

#declare TLarge=union{
    object{TLargeCombo}
    sphere{0,1
        texture{pigment{Clear} finish{ambient 0 diffuse 0}}
        interior{ media{emission Red*3 density{spherical color_map{[0 Black] [1 White*2]}}}}
        hollow scale translate
    }
}

object{TLSmall scale translate } // tlsmall
object{TLSmall scale translate scale } // tlsmall
object{TLarge translate -0.5*x scale translate } //tlbig
object{TLarge translate -0.5*x scale translate scale } //tlbig

There is a 64 Thunderbird, a Toyota, a Porsche, 2 Camaros, 2 Mercedes (including a limo), a Jaguar, a Montecarlo (model by FastTraxxx) and some sort of low-detail Jeep. These 10 cars were replicated 12 times at no memory cost (thanks to the mesh format) and arranged in 3 lines. One last addition was a licence plate for the Thunderbird. I ran out of time to make exhaust fumes.

Here is a close-up of the T-Bird. The close-up also reveals the simplistic halo of the Toyota's taillights…

64 Thunderbird

Note: due to poor management on my part, I didn't keep track of the various authors of the cars, either because their names weren't included with the model, or because the reference was lost during the conversion process. If you recognize your work, please send me a note so that I can credit you properly.

The bird

After almost two months of modeling and tuning, I still felt that the final picture was not complete. It lacked a focus. Then I realised that the picture was actually made of layers: there was the "buildings" layer, the "cars" layer, the "ghost" layer. Each layer was on a smaller scale than the previous one, though no less important. This was very symbolic of the city as a fractal object, with each part having its own complexity whatever its size. All it needed was a final layer at the smallest scale. Hence the bird, which I could make very small and yet very visible. Since I didn't have the time to model a bird, I first downloaded a sPatch sparrow made by Jerome Livenais (http://www.chez.com/jrlivenais/vdesprit/present/gift_eng.htm). I modified it and uv-mapped it but, after a few test renders, decided that it was a bad choice. Though the model is good (see the picture on Jerome's page), it somehow looked like a bath toy when I put it on the lamp post. It ruined the picture! So I turned to the Web to do some bird-hunting. Finally, I came upon a bird picture that, after much processing (it uses an alpha channel), I put on a box. You can see the jpeg version below.

Bird map

Click here to see a close-up of the bird in the final image.

Final render

The picture was over. I knew about its many shortcomings: I could have added more trash (a coca-cola can, cigarette butts, empty bottles), more street signs, exhaust fumes for the cars, better textures for the buildings, better textures for the cars, better taillights, better crossing marks, detailed wiring for the street lights, more street furniture. But a deadline is a deadline and I was quite exhausted anyway.

The final render took about 21 hours on a Pentium II 350 MHz. Most of it was spent on the last lines, where there are many reflections.
