I don't know of a good one, so I'll write one here. This focuses on how things work, rather than how to use existing APIs, because I've basically never used GL. What's below is short and doesn't have much math, but it should be enough to allow someone who knows linear algebra and 2-D graphics to both understand and rederive most of 3-D graphics.
To rotate a point cloud, you multiply each point by a rotation matrix to get the rotated point. A rotation matrix that rotates around the X-axis looks like
[[ 1  0  0]
 [ 0  c  s]
 [ 0 -s  c]]
where s and c are the sine and cosine of the angle you want to rotate by. Then you can do an orthographic projection by just dropping the Z coordinate, leaving only X and Y (which you may need to scale to your screen), or a perspective projection by dividing X and Y by Z. (Be wary of division by zero.)
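Here's a minimal sketch of both steps in plain Python (no libraries; the function names and the screen-centering constants are mine):

    import math

    def rotate_x(point, angle):
        # Multiply (x, y, z) by the X-axis rotation matrix above.
        x, y, z = point
        s, c = math.sin(angle), math.cos(angle)
        return (x, y * c + z * s, -y * s + z * c)

    def project(point, scale=30.0, cx=40.0, cy=12.0):
        # Perspective projection: divide X and Y by Z.  Z must be positive
        # (in front of the camera), or you'll divide by zero or project
        # points that are behind you.
        x, y, z = point
        return (cx + scale * x / z, cy - scale * y / z)

    print(project(rotate_x((1.0, 1.0, 5.0), 0.3)))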
The usual approach is to maintain the original points unrotated and make a rotated copy of them for every frame, instead of overwriting them with a rotated version every frame, so that numerical errors don't accumulate and you can get away with single-precision floating point. Also, conventionally, positive Z coordinates are in front of the camera and negative Z coordinates are behind it.
If the above isn't sufficiently clear, there's some code I wrote to generate an ASCII-art animation of a perspective-projected point cloud (the corners of a cube) at http://lists.canonical.org/pipermail/kragen-hacks/2012-April.... It's 15 lines of code, and the only library functions it depends on are Python's functions to sleep for a fraction of a second, write to stdout, and round to integer.
EXTRAS:
DISTANCE: For things that aren't points, you might also be interested in how far away they are from the camera, for example to scale them or to figure out which ones are in front. That's the Z coordinate after you rotate into camera space.
TRANSFORM COMPOSITION: If you want to rotate around two axes, it's probably better to multiply the two rotation matrices together once, then multiply each point by the resulting transformation matrix, rather than doing two matrix multiplies per point. You can also scale camera space to screen coordinates this way.
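For instance, a rough sketch with plain-Python 3x3 matrices; rot_y here is my own, following the same sign convention as the X-axis matrix above:

    import math

    def mat_mul(a, b):
        # 3x3 matrix product: applying mat_mul(a, b) is b first, then a.
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    def apply(m, p):
        # Matrix times column vector.
        return tuple(sum(m[i][k] * p[k] for k in range(3)) for i in range(3))

    def rot_x(t):
        s, c = math.sin(t), math.cos(t)
        return [[1, 0, 0], [0, c, s], [0, -s, c]]

    def rot_y(t):
        s, c = math.sin(t), math.cos(t)
        return [[c, 0, -s], [0, 1, 0], [s, 0, c]]

    m = mat_mul(rot_x(0.3), rot_y(0.5))       # compose once per frame
    points = [(1, 1, 1), (1, 1, -1)]
    rotated = [apply(m, p) for p in points]   # one multiply per point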
TRANSLATION: If you want to move the camera, you probably want to translate your points so the camera is at the origin before rotating them. If you represent your transformations as 4x4 matrices, with a possibly implicit fourth element in each point vector that is 1, you can represent translation in your transformation matrices too.
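A minimal sketch of that, assuming a camera sitting at (0, 0, -10) in world space:

    def translate(tx, ty, tz):
        # 4x4 homogeneous translation; point vectors are (x, y, z, 1).
        return [[1, 0, 0, tx],
                [0, 1, 0, ty],
                [0, 0, 1, tz],
                [0, 0, 0, 1]]

    def apply4(m, p):
        return tuple(sum(m[i][k] * p[k] for k in range(4)) for i in range(4))

    # Translate the world by +10 in Z so the camera ends up at the origin.
    print(apply4(translate(0, 0, 10), (1.0, 2.0, 3.0, 1.0)))
    # -> (1.0, 2.0, 13.0, 1.0)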
MULTIPLE SEPARATELY MOVING OBJECTS: A point cloud is a single rigid object. But whether you're drawing point clouds or something more complicated, it's often interesting to be able to move multiple objects separately. The usual way is to go from two coordinate systems, camera and world, to N + 2: camera, world, and one for each of the N objects. Each object has a transformation matrix that maps its object space into world space. You move the object by changing its transformation matrix.
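In sketch form; representing an object as a (points, matrix) pair is my own ad-hoc choice:

    def mat4_mul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]

    def apply4(m, p):
        return tuple(sum(m[i][k] * p[k] for k in range(4)) for i in range(4))

    def to_camera_space(objects, world_to_camera):
        # Compose world-to-camera with each object's object-to-world
        # matrix once, then transform that object's (unmodified) points.
        for points, obj_to_world in objects:
            m = mat4_mul(world_to_camera, obj_to_world)
            yield [apply4(m, p) for p in points]

    identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    scene = [([(0.0, 0.0, 5.0, 1.0)], identity)]
    print(list(to_camera_space(scene, identity)))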
POLYGONS: If you're drawing polygons, straight lines are still straight lines after rotation and after either perspective or orthographic projection, so you can just rotate and project the corners of each polygon into your canvas space, then connect them with 2-D straight lines (or fill the resulting 2-D triangle).
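A sketch, with print standing in for whatever 2-D line or fill routine you actually have:

    def project(p, scale=30.0, cx=40.0, cy=12.0):
        x, y, z = p
        return (cx + scale * x / z, cy - scale * y / z)

    triangle = [(0.0, 1.0, 5.0), (1.0, -1.0, 5.0), (-1.0, -1.0, 6.0)]
    corners = [project(p) for p in triangle]
    for i in range(3):
        a, b = corners[i], corners[(i + 1) % 3]
        print('2-D line from %r to %r' % (a, b))  # stand-in for draw_line(a, b)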
FLAT SHADING: The color resulting from ordinary illumination ("diffuse reflection") is the underlying color of the polygon, multiplied by the cosine of the angle between the surface normal (the perpendicular to the surface) and the direction of illumination. It's easiest to compute that cosine by taking a dot product of two unit vectors, and to compute the normal by normalizing a cross product of two of the sides. If you have more than one light source, add together the colors generated by each light source. You probably want to treat negative cosines as zero, or you'll get negative lighting on faces illuminated from behind.
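In code, something like this (vector helpers written out by hand; light_dir is a unit vector pointing from the surface toward the light):

    import math

    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def normalize(v):
        n = math.sqrt(dot(v, v))
        return tuple(x / n for x in v)

    def flat_shade(tri, base_color, light_dir):
        # Normal from the cross product of two sides, then the cosine
        # from a dot product of unit vectors, clamped at zero.
        normal = normalize(cross(sub(tri[1], tri[0]), sub(tri[2], tri[0])))
        brightness = max(0.0, dot(normal, light_dir))
        return tuple(c * brightness for c in base_color)

    tri = [(0, 0, 5), (0, 1, 5), (1, 0, 5)]
    print(flat_shade(tri, (1.0, 0.5, 0.2), normalize((0.0, 1.0, -1.0))))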
BACKFACE REMOVAL: If you're drawing a single convex object made of polygons, you can do correct hidden-surface removal just by not drawing polygons whose normal points away from the camera; with the conventions above, that's a positive dot product between the normal and the vector from the camera to the polygon (for an orthographic projection, simply a positive Z component). This is a useful optimization even if your object is more complicated, because it roughly halves the load on the heavier-weight algorithms below.
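A sketch of the dot-product version, reusing the same kind of vector helpers:

    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def facing_camera(tri):
        # Camera at the origin looking down +Z; the face is visible when
        # its normal points back toward the camera, i.e. the dot product
        # with the camera-to-face vector is negative.
        normal = cross(sub(tri[1], tri[0]), sub(tri[2], tri[0]))
        return dot(normal, tri[0]) < 0

    triangles = [[(0, 0, 5), (0, 1, 5), (1, 0, 5)],
                 [(0, 0, 5), (1, 0, 5), (0, 1, 5)]]
    print([facing_camera(t) for t in triangles])  # [True, False]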
HIDDEN SURFACE REMOVAL: If your polygons don't intersect, or intersect only at their edges, you can use the "painter's algorithm" to get correctly displayed hidden surfaces by just drawing them in order from furthest to closest. If they do intersect, you can either cut them up so they no longer intersect, or use a "Z-buffer", which tells you which object is closest to the camera at each pixel: as you draw, you check the Z-buffer for the currently closest Z coordinate at each pixel you're drawing, and if the relevant point on your object has a lower Z coordinate, you update that pixel in both the Z-buffer and the canvas.
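Sketches of both; sorting by a per-polygon depth key like the sum of corner Z coordinates is approximate, but it's the usual starting point:

    # Painter's algorithm: draw back-to-front.
    triangles = [[(0, 0, 5), (0, 1, 5), (1, 0, 5)],
                 [(0, 0, 9), (0, 1, 9), (1, 0, 9)]]
    back_to_front = sorted(triangles, key=lambda t: sum(p[2] for p in t),
                           reverse=True)

    # Z-buffer: a per-pixel depth test instead of sorting whole polygons.
    W, H = 80, 24
    zbuf = [[float('inf')] * W for _ in range(H)]
    canvas = [[' '] * W for _ in range(H)]

    def plot(x, y, z, color):
        if z < zbuf[y][x]:        # nearer than anything drawn here so far?
            zbuf[y][x] = z
            canvas[y][x] = color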
SMOOTH SHADING: You can get apparently smooth surfaces out of quite rough polygon grids by storing a separate surface normal at each vertex and then interpolating, instead of coloring the whole polygon a single flat color. You can either compute the colors at the corners of the polygon and interpolate those colors across each point you draw (Gouraud shading), or interpolate the normals and redo the lighting calculation at each point (Phong shading), which gives dramatically better results if you have specular highlights.
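Here's the idea along a single edge, in sketch form; Gouraud would instead interpolate the two already-computed corner colors:

    import math

    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        n = math.sqrt(dot(v, v))
        return tuple(x / n for x in v)

    def lerp(a, b, t):
        return tuple(x + (y - x) * t for x, y in zip(a, b))

    # Phong shading: interpolate the vertex normals, renormalize, and
    # redo the diffuse calculation at each step along the edge.
    n0, n1 = normalize((0, 0.2, -1)), normalize((0, -0.2, -1))
    light = normalize((0, 1, -1))
    for i in range(5):
        n = normalize(lerp(n0, n1, i / 4))
        print(max(0.0, dot(n, light)))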
SPECULAR HIGHLIGHTS: The diffuse-illumination calculation explained in "FLAT SHADING" above is sufficient for things that aren't shiny at all. For things that are somewhat shiny, you want "specular highlights", and the usual way to get those is to do the lighting calculation a second time, using the cosine of the angle between the direction to the viewer and the reflection of the light direction about the surface normal (rather than between the light and the normal), and raising that cosine to some power (called the "shininess" or "Phong exponent"). The 5th power is pretty shiny.
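A sketch of that second calculation; light_dir and view_dir are unit vectors from the surface point toward the light and the camera:

    import math

    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        n = math.sqrt(dot(v, v))
        return tuple(x / n for x in v)

    def specular(normal, light_dir, view_dir, shininess=5.0):
        # Reflect the light direction about the normal, then raise the
        # cosine between reflection and view direction to the exponent.
        d = dot(normal, light_dir)
        reflected = tuple(2 * d * n - l for n, l in zip(normal, light_dir))
        return max(0.0, dot(reflected, view_dir)) ** shininess

    print(specular((0.0, 0.0, -1.0), normalize((0.0, 1.0, -1.0)),
                   (0.0, 0.0, -1.0)))  # about 0.18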
FOG: Faraway things fade exponentially. That is, you take a per-unit-distance fade factor (a fraction slightly less than 1) to the power of the Z coordinate of the point on the object, and multiply the color of the object by the result.
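As a one-function sketch (multiplying fades toward black; blending toward a gray fog color by the same factor looks more like real haze):

    def fog(color, z, fade=0.97):
        # fade is the fraction of light surviving each unit of distance.
        f = fade ** z
        return tuple(c * f for c in color)

    print(fog((1.0, 0.5, 0.2), 20.0))  # about 54% of the original brightness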
TEXTURE MAPPING: If you want your surfaces not to be a single solid color, you can use a raster image (called a "texture") to map colors onto the surface. You just figure out where you are on the surface (by doing a matrix multiply from your surface point into "texture space") and figure out which texture pixel ("texel") you're at, or which ones you should interpolate between. (You can also use some other function to generate the color, rather than having an explicitly stored texture. The important thing is that it maps a 3-D point in object space to a color.) This is the start of the whole universe of "shaders", which represents a big part of current 3-D work. Another application of shaders is bump mapping:
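Sketches of both flavors: a nearest-neighbor lookup into a stored texture, and a procedural checkerboard that maps an object-space point straight to a color:

    import math

    def texel(texture, u, v):
        # Nearest-neighbor lookup, with u and v in [0, 1).
        h, w = len(texture), len(texture[0])
        return texture[int(v * h)][int(u * w)]

    def checker(p, size=1.0):
        # A color from a 3-D point in object space, no stored image needed.
        x, y, z = (math.floor(c / size) for c in p)
        return (1.0, 1.0, 1.0) if (x + y + z) % 2 == 0 else (0.1, 0.1, 0.1)

    texture = [[(0, 0, 0), (1, 1, 1)], [(1, 1, 1), (0, 0, 0)]]
    print(texel(texture, 0.75, 0.25), checker((0.5, 0.5, 0.5)))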
BUMP MAPPING: If you're doing Phong shading, you can get apparent texture (in the usual sense: something you could feel if you could touch the object) on your surfaces without having to transform more points, by simply perturbing the interpolated surface normals you're using to do your shading calculations. It's helpful if you perturb them in a deterministic way so that the texture moves with the surface.
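A sketch of that perturbation, with an arbitrary sine-based bump function of the object-space position (any deterministic function of position would do):

    import math

    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        n = math.sqrt(dot(v, v))
        return tuple(x / n for x in v)

    def bumped_normal(normal, obj_point, strength=0.2):
        # Wiggle the interpolated normal as a function of object-space
        # position so the bumps stick to the surface as it moves.
        x, y, z = obj_point
        wiggle = (math.sin(10 * x), math.sin(10 * y), math.sin(10 * z))
        return normalize(tuple(n + strength * w
                               for n, w in zip(normal, wiggle)))

    print(bumped_normal((0.0, 0.0, -1.0), (0.3, 0.7, 2.0)))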