Computing Vertex Normals
In OpenGL programming, computing the vertex normals of a 3D model matters a great deal: it directly affects how the model is rendered. I have not been working with OpenGL for long, and for quite a while the normal-vector calculation puzzled me; after some careful study and reading I finally got a working implementation of the algorithm. I summarize it below in the hope that it helps other beginners.
As you may know, in an OpenGL vertex/triangle model, how the surface reflects light depends on the vertex normals you supply. If the normals are computed correctly, the rendered surface looks smooth and glossy; otherwise it either looks faceted or comes out murky and hard to make out. Below, assume the model data comes from an AutoCAD DXF file. A DXF file stores only vertex coordinates and triangle vertex indices, with no normal information, so the normals must be computed by hand. From solid geometry we know that a vertex normal can be taken as the sum of the normals of all triangles sharing that vertex, so it suffices to compute each triangle's normal and then add it to the normals of that triangle's three vertices. Here are the key parts of the implementation, with comments:
void CEntity::ComputeNormalVector()
{
    int i, j;
    // Copy every vertex node of the linked list into a flat array
    // (indices 1..m_nVertexNum, matching the 1-based DXF indices).
    VERTEXLIST *pvl = new VERTEXLIST[m_nVertexNum + 1];
    VERTEXLIST *pvltemp = (VERTEXLIST *)m_vl.pNext;
    for (i = 1; i < m_nVertexNum + 1; i++)
    {
        *(pvl + i) = *pvltemp;
        (pvl + i)->NormalVertex = VERTEX(0, 0, 0); // clear the accumulator before summing
        pvltemp = (VERTEXLIST *)pvltemp->pNext;
    }
    // Copy every triangle-index node into a flat array.
    SEQUENCELIST *psl = new SEQUENCELIST[m_nSequenceNum];
    SEQUENCELIST *psltemp = (SEQUENCELIST *)m_sl.pNext;
    for (i = 0; i < m_nSequenceNum; i++)
    {
        *(psl + i) = *psltemp;
        psltemp = (SEQUENCELIST *)psltemp->pNext;
    }
    // Compute each triangle's normal.
    VERTEX v1, v2, v3;
    VERTEX temp_v1, temp_v2, v;
    for (i = 0; i < m_nSequenceNum; i++)
    {
        v1 = (pvl + (psl + i)->sequence.a)->vertex;
        v2 = (pvl + (psl + i)->sequence.b)->vertex;
        v3 = (pvl + (psl + i)->sequence.c)->vertex;         // fetch the three corner coordinates
        temp_v1 = Vector2v(v1, v2);                         // subtract coordinates to get an edge vector
        temp_v2 = Vector2v(v2, v3);                         // the two edge vectors of the triangle
        v = NormalizeVertex(CrossVertex(temp_v2, temp_v1)); // cross the two edges, then normalize
        (psl + i)->NormalVertex = v;                        // store the face normal
    }
    VERTEX vNormal = VERTEX(0, 0, 0);
    // Walk the triangles and add each face normal to its three vertex normals.
    for (j = 0; j < m_nSequenceNum; j++)
    {
        // fetch the current face normal
        vNormal = (psl + j)->NormalVertex;
        // add it to the normals of the triangle's three vertices
        (pvl + (psl + j)->sequence.a)->NormalVertex = ::AddVertex((pvl + (psl + j)->sequence.a)->NormalVertex, vNormal);
        (pvl + (psl + j)->sequence.b)->NormalVertex = ::AddVertex((pvl + (psl + j)->sequence.b)->NormalVertex, vNormal);
        (pvl + (psl + j)->sequence.c)->NormalVertex = ::AddVertex((pvl + (psl + j)->sequence.c)->NormalVertex, vNormal);
    }
    // Walk the vertices and copy the finished normals back into the VERTEXLIST linked list.
    pvltemp = (VERTEXLIST *)m_vl.pNext;
    for (i = 1; i < m_nVertexNum + 1; i++)
    {
        pvltemp->NormalVertex = (pvl + i)->NormalVertex;
        pvltemp = (VERTEXLIST *)pvltemp->pNext;
    }
    delete[] pvl;
    delete[] psl;   // free the temporary arrays
}
The two structures are as follows:
typedef struct {
    void    *pPre;
    VERTEX   vertex;
    VERTEX   NormalVertex;  // vertex normal
    void    *pNext;
} VERTEXLIST;               // vertex linked list
typedef struct {
    void    *pPre;
    SEQUENCE sequence;
    VERTEX   NormalVertex;  // triangle (face) normal
    void    *pNext;
} SEQUENCELIST;             // triangle-index linked list
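The code above relies on four helper functions (Vector2v, CrossVertex, NormalizeVertex, AddVertex) that the article does not show. The following is a minimal sketch of plausible implementations, under the assumption that VERTEX carries three float members x, y, z and a three-argument constructor, as its usage above suggests:

```cpp
#include <cmath>

// Assumed layout of VERTEX; the original definition is not shown in the article.
struct VERTEX {
    float x, y, z;
    VERTEX(float px = 0.0f, float py = 0.0f, float pz = 0.0f)
        : x(px), y(py), z(pz) {}
};

// Vector pointing from point a to point b (coordinate subtraction).
VERTEX Vector2v(const VERTEX &a, const VERTEX &b)
{
    return VERTEX(b.x - a.x, b.y - a.y, b.z - a.z);
}

// Cross product of two vectors.
VERTEX CrossVertex(const VERTEX &a, const VERTEX &b)
{
    return VERTEX(a.y * b.z - a.z * b.y,
                  a.z * b.x - a.x * b.z,
                  a.x * b.y - a.y * b.x);
}

// Scale a vector to unit length; pass degenerate input through unchanged.
VERTEX NormalizeVertex(const VERTEX &v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    if (len == 0.0f) return v;
    return VERTEX(v.x / len, v.y / len, v.z / len);
}

// Component-wise vector addition (used to accumulate face normals).
VERTEX AddVertex(const VERTEX &a, const VERTEX &b)
{
    return VERTEX(a.x + b.x, a.y + b.y, a.z + b.z);
}
```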
With the vertex normals computed, the next step is to build the display list:
id = glGenLists(1);
glNewList(id, GL_COMPILE);
glBegin(GL_TRIANGLES);
for (i = 0; i < nSnum; i++)   // walk every triangle
{
    v1 = (pvertex + psl->sequence.a)->vertex;        // vertex coordinates
    v  = (pvertex + psl->sequence.a)->NormalVertex;  // vertex normal
    glNormal3f(v.x, v.y, v.z);                       // set the normal
    glVertex3f(v1.x, v1.y, v1.z);                    // emit the vertex
    v2 = (pvertex + psl->sequence.b)->vertex;
    v  = (pvertex + psl->sequence.b)->NormalVertex;
    glNormal3f(v.x, v.y, v.z);
    glVertex3f(v2.x, v2.y, v2.z);
    v3 = (pvertex + psl->sequence.c)->vertex;
    v  = (pvertex + psl->sequence.c)->NormalVertex;
    glNormal3f(v.x, v.y, v.z);
    glVertex3f(v3.x, v3.y, v3.z);
    psl = (SEQUENCELIST *)psl->pNext;
}
glEnd();
glEndList();   // close the display list; render it later with glCallList(id)
In addition, correct lighting and material settings are a decisive factor in how an OpenGL scene looks. I will not go into detail here and only list the material and lighting parameters I used:
glShadeModel(GL_SMOOTH);
// material and light parameters
GLfloat mat_ambient[]     = { 0.6, 0.6, 0.6, 1.0 };   // ambient reflectance
GLfloat mat_diffuse[]     = { 0.2, 0.2, 0.2, 1.0 };   // diffuse reflectance
GLfloat mat_specular[]    = { 0.0, 0.0, 0.0, 1.0 };   // specular reflectance
GLfloat mat_shininess[]   = { 10.0 };
GLfloat light1_ambient[]  = { 0.6, 0.6, 0.6, 1.0 };
GLfloat light1_diffuse[]  = { 0.2, 0.2, 0.2, 1.0 };
GLfloat light1_specular[] = { 0.0, 0.0, 0.0, 0.0 };
GLfloat light1_position[] = { -400.0, -400.0, 200, 1.0 };
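These arrays only take effect once they are handed to OpenGL. The article does not show that step, so the following is a sketch of the fixed-function calls that would apply them, assuming the light is bound to GL_LIGHT1 (the light number is my assumption) and that a GL context is already current:

```cpp
// Apply the material to front faces; names match the arrays above.
glMaterialfv(GL_FRONT, GL_AMBIENT,   mat_ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE,   mat_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR,  mat_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);

// Bind the light parameters to GL_LIGHT1 and switch lighting on.
glLightfv(GL_LIGHT1, GL_AMBIENT,  light1_ambient);
glLightfv(GL_LIGHT1, GL_DIFFUSE,  light1_diffuse);
glLightfv(GL_LIGHT1, GL_SPECULAR, light1_specular);
glLightfv(GL_LIGHT1, GL_POSITION, light1_position);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT1);
```

Note that GL_POSITION is transformed by the modelview matrix in effect at the time of the glLightfv call, so the order of these calls relative to your camera setup matters.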
Wait... an article about how to calculate vertex normals for a mesh? This is very easy, right? Yes, but let's say you want high-quality normals at a faster speed than the usual implementations. Let's say you want the routine to be small too. OK, then this very short article is for you. I assume you know what a mesh is, what a vertex normal is, and what a cross product is. Yes? OK, let's start.
Getting rid of the divisions
Most mesh normalization implementations out there use the typical averaging of face normals to calculate the vertex normal. For that, one iterates over all the faces that contain the current vertex, accumulates their normals, and then divides by the number of faces used in the accumulation. Now think about what this division is doing to the normal. It does not change its direction, for sure, since dividing by a scalar only affects the length of the normal. And we actually do not care about this length at all, since we will most probably normalize it to unit length anyway. This means that right after the face normal accumulation we are done; we don't need any division. Thus we can skip not only this operation but also all the code that keeps track of how many faces affect each vertex.
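The argument is easy to check numerically: normalizing the averaged sum and normalizing the raw sum yield the same unit vector. A small self-contained sketch (plain structs of my own, not from any mesh library), comparing the two for a vertex shared by two faces:

```cpp
#include <cmath>

struct vec3 { float x, y, z; };

// Scale a vector to unit length.
vec3 normalize(const vec3 &v)
{
    const float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Average two face normals, then normalize (the usual approach).
vec3 accumulate_with_divide(const vec3 &a, const vec3 &b)
{
    vec3 s = { (a.x+b.x)/2.0f, (a.y+b.y)/2.0f, (a.z+b.z)/2.0f };
    return normalize(s);
}

// Just sum the face normals, then normalize (division skipped).
vec3 accumulate_without_divide(const vec3 &a, const vec3 &b)
{
    vec3 s = { a.x+b.x, a.y+b.y, a.z+b.z };
    return normalize(s);
}
```

For two face normals (0,0,1) and (1,0,0), both routines produce the same unit vector, since the division only rescales the intermediate sum.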
Getting rid of the normalizations (square roots and divisions)
Now think again what we are really doing when accumulating the face normals into vertex normals. We are making a linear combination of normals, with equal importance or weight for all of them. Is this good? Well, one would think that polygons with bigger area should probably contribute more to the final result. This indeed gives better quality vertex normals. So, let's try to calculate the area of each polygon... Hey, but wait! Open your primary school math book. Have a look to the definition of cross product, specially to the length of the cross product. Cheeses, the length of a cross product is proportional to the area of the paralepiped created by the two vectors involved in the product. And is not our triangle normal calculated actually as the cross product of two of its edges? Basically the cross product of these two edges will then give as a face normal with length proportional to the area of the triangle. For free! Ok, this means we must not normalice the face normals, and just accumulate them on the vertices so that we do a high quallity vertex normal calculation. We just skiped one vector normalization per face (meaning, one square root and a division, or an inverse-square root)!
Looping only once
Some implementations make two passes over the mesh to normalize it: one over the faces, to compute face normals and vertex-to-face connectivity, and a second one where this information is used to do the actual normal accumulation for each vertex. This is unnecessary; we can make the code a lot smaller and faster, and use less memory, by doing it all in one pass. For each face of the mesh, compute the face normal (without normalizing it, as just explained) and directly accumulate it into each vertex belonging to the face. After you are done with the faces, each vertex will have received all the face normals it was supposed to receive. That simple.
The code
To finish, let's put it all together in a small piece of code:
void Mesh_normalize( Mesh *myself )
{
    Vert     *vert = myself->vert;
    Triangle *face = myself->face;

    for( int i=0; i < myself->mNumVerts; i++ )
        vert[i].normal = vec3(0.0f);

    for( int i=0; i < myself->mNumFaces; i++ )
    {
        const int ia = face[i].v[0];
        const int ib = face[i].v[1];
        const int ic = face[i].v[2];

        const vec3 e1 = vert[ia].pos - vert[ib].pos;
        const vec3 e2 = vert[ic].pos - vert[ib].pos;
        const vec3 no = cross( e1, e2 );

        vert[ia].normal += no;
        vert[ib].normal += no;
        vert[ic].normal += no;
    }

    for( int i=0; i < myself->mNumVerts; i++ )
        vert[i].normal = normalize( vert[i].normal );
}
This is quite fast, fast enough to do lots of mesh normalizations per frame. On top of that, if you are using vertex shaders you may be interested in skipping the final per-vertex normalization and doing it in the shader instead (the last loop in the code above). Also, in some cases, like 64 or even 4 kilobyte demos, it's usual to have all allocated buffers automatically initialized to zero; in that case, if this is the first and only normalization of a given mesh, you may of course skip the first loop of the function too.