Photorealistic Render
Introduction
A render is a digital image produced by a computer program whose purpose is to generate it with the greatest possible photorealism from a model of a three-dimensional scene or scenario, interpreted from any of its perspectives.
The model is subjected to various processes, collectively called 3D rendering, which draw on photographic techniques and simulations of light distribution: ray tracing follows the geometric-optical paths of light rays and models their behavior according to the texturing of materials, creating a series of effects and illusions intended to resemble a specific "realistic" situation.
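The geometric-optical tracing mentioned above can be illustrated with a minimal sketch: the core of any ray tracer is testing whether a ray of light intersects an object. The function below (a hypothetical toy example, not a production renderer) solves the ray-sphere intersection analytically.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t to the nearest intersection of the ray
    origin + t*direction with a sphere, or None if the ray misses.
    Solves the quadratic |origin + t*direction - center|^2 = radius^2."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None  # only hits in front of the origin count

# A ray fired down the -z axis at a unit sphere centred 5 units away
# strikes its near surface at distance 4:
t = ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)
```

A full ray tracer repeats this test for every pixel's ray against every object, then shades the nearest hit according to the material and the simulated light sources.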
The graphics process focuses on the automatic conversion of wireframe models into flat images with photorealistic three-dimensional effects.
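The conversion from a 3D wireframe model to a flat image starts with perspective projection: each 3D vertex is mapped onto a 2D image plane. A minimal sketch, assuming a pinhole camera at the origin looking down the -z axis (the function name and focal length are illustrative):

```python
def project_point(x, y, z, focal_length=1.0):
    """Perspective-project a 3D point onto the z = -focal_length image
    plane. Points farther from the camera land closer to the image
    centre, which gives the flat image its depth cue."""
    if z >= 0:
        raise ValueError("point must be in front of the camera (z < 0)")
    scale = focal_length / -z
    return (x * scale, y * scale)

# One edge of a wireframe cube, mapped to 2D image coordinates:
edge_3d = [(-1.0, -1.0, -4.0), (1.0, -1.0, -4.0)]
edge_2d = [project_point(*p) for p in edge_3d]
```

Drawing lines between the projected vertices yields a wireframe image; the photorealistic effects described above are layered on top of this geometric foundation.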
Rendering methods
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming a scene after the setup has been finished in real life. Several different and often specialized rendering methods have been developed, ranging from non-realistic wireframe rendering through polygon-based rendering to more advanced techniques such as scanline rendering, ray tracing, or radiosity. Rendering a single image/frame can take from fractions of a second to days. In general, different methods are better suited to either photorealistic rendering or real-time rendering.
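Scanline rendering, one of the methods mentioned above, can be sketched in miniature: a polygon is filled one horizontal row of pixels at a time by finding where its edges cross each row. The function below is a hypothetical toy rasterizer for a single triangle, not a full renderer.

```python
import math

def rasterize_triangle(verts, width, height):
    """Fill a triangle on a width x height pixel grid, one scanline
    (row) at a time: for each row, compute where the triangle's edges
    cross it and fill the pixels between the crossings.
    Returns the set of lit (x, y) pixels."""
    lit = set()
    ys = [v[1] for v in verts]
    for y in range(max(0, min(ys)), min(height - 1, max(ys)) + 1):
        xs = []
        # walk the three edges, pairing each vertex with the next
        for (x0, y0), (x1, y1) in zip(verts, verts[1:] + verts[:1]):
            if y0 == y1:
                continue  # horizontal edge: no single crossing point
            if min(y0, y1) <= y < max(y0, y1):
                # x-coordinate where this edge crosses the scanline
                xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
        if len(xs) >= 2:
            for x in range(int(math.ceil(min(xs))), int(max(xs)) + 1):
                if 0 <= x < width:
                    lit.add((x, y))
    return lit

pixels = rasterize_triangle([(0, 0), (4, 0), (0, 4)], 10, 10)
```

Because it touches each pixel row only once, scanline rendering is typically much faster than ray tracing, which is one reason different methods suit real-time versus photorealistic work.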
Real time
Rendering of interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to display as much information as the eye can process in a fraction of a second, i.e., in one frame: in the case of a 30-frame-per-second animation, one frame spans 1/30 of a second. The main goal is to achieve the highest possible degree of photorealism at a minimum acceptable rendering speed (typically 24 frames per second, the minimum the human eye needs to successfully create the illusion of motion). Shortcuts that exploit the way the eye "perceives" the world can be applied, so the final image presented is not necessarily that of the real world, but one close enough for the human eye to tolerate. Rendering software can simulate visual effects such as lens flare, depth of field, or motion blur; these attempt to reproduce visual phenomena resulting from the optical characteristics of cameras and the human eye, and can lend an element of realism to a scene even when the effect is merely a simulated camera artifact. This is the basic method used in games, interactive worlds, and VRML. The rapid increase in computer processing power has enabled a progressively greater degree of realism even in real-time rendering, including techniques such as HDR rendering. Real-time rendering is usually polygonal and aided by the computer's GPU.
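The frame rates above translate directly into per-frame time budgets, which is why real-time renderers rely on shortcuts: everything must finish within a few milliseconds. A small illustrative helper (the function name is hypothetical):

```python
def frame_budget_ms(fps):
    """Time available to render a single frame, in milliseconds,
    at a given frames-per-second target."""
    return 1000.0 / fps

# At the 24 fps minimum for the illusion of motion, each frame gets
# roughly 41.7 ms; at 120 fps the renderer has only about 8.3 ms.
budget_24 = frame_budget_ms(24)
budget_120 = frame_budget_ms(120)
```

An offline photorealistic render, by contrast, may spend hours or days on a single frame, which is why the two goals call for different methods.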