Texture Mapping

In the previous tutorials we started constructing objects in our virtual space and assigning colors to their vertices, obtaining multicolored cubes we could admire. In most cases, however, an object you create for a game, a simulation, etc. requires a more visually complex coating than a few simple colors. That’s where textures come in.

A texture is, in its simplest definition, an image applied to an object’s face much like wallpaper is applied to your bedroom wall. Usually the image is 2D, but it can also be 1D, 3D, a cube map, a texture array (built by stacking multiple textures), etc. The image has its own set of local texture coordinates which identify each corner and are instrumental in giving it the correct orientation when placed on a face.

Textured Cube

Before we move further, it is important to point out that within our code, images saved in an 8-bit format are problematic; every image you want to use should be saved as 24-bit (our recommendation is to open the file in Paint or any other image editing software and save it in a 24-bit format).

A couple of paragraphs ago we mentioned the texture coordinates which control the texture’s orientation. These are called UV coordinates and are similar to the XY coordinates of the classic 2D coordinate system.

Texture axes

The UV coordinates only take values between 0 and 1, with (0,0) at the origin and (1,1) at the opposite corner of the diagonal. Each texture coordinate is linked to its own vertex, thus describing which part of the texture is attached to that face. Since the coordinates can take any value between 0 and 1, you can also map just a portion of the image: for example, to show only a fragment of a crate after it has been destroyed, you can select the area of the image corresponding to that piece. Texture coordinate values can also be determined via interpolation, knowing the vertices’ positions relative to the original face and the texture coordinates associated with them.
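To make this concrete, here is a small hypothetical example (the values and names below are ours, not part of the tutorial code): these UV coordinates would map only the lower-left quarter of the image onto a quad whose corners are listed counter-clockwise, starting at the bottom-left.

#include <glm\glm.hpp>

//UV coordinates selecting only the lower-left quarter of the texture
const glm::vec2 quadUVs[4] = {
	glm::vec2(0.0f, 0.0f),	//bottom-left corner of the selected region
	glm::vec2(0.5f, 0.0f),	//bottom-right
	glm::vec2(0.5f, 0.5f),	//top-right
	glm::vec2(0.0f, 0.5f)	//top-left
};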

In some cases, the texture is a pattern which repeats itself over a large face (a wall, the ground, etc.). This is where the wrapping operation comes into play. It is controlled by the OpenGL function glTexParameteri(GLenum target, GLenum pname, GLint param), where target is the type of texture in use, pname is the parameter you want to set and param is the new value of that parameter. To play around with wrapping, the second parameter must be GL_TEXTURE_WRAP_S or GL_TEXTURE_WRAP_T, S and T being OpenGL’s names for the U and V axes.

The third parameter can be one of the following: GL_CLAMP_TO_EDGE, GL_MIRRORED_REPEAT, or GL_REPEAT. Each axis has an independent parameter, so you can have combinations of repeat and clamp, like in the following examples:

Top left: clamp on both axes; top right: clamp on S, repeat on T; bottom left: repeat on S, clamp on T; bottom right: repeat on both axes. (image source)

The clamp setting takes the last pixel of a row/column of the image and repeats it beyond that point until it reaches one of the face’s edges. Repeat is self-explanatory: once the texture has been plastered on part of the face and there is still room along an axis, the texture is tiled again until it reaches an edge. Mirrored repeat is similar, with the slight difference that along each axis consecutive copies are mirrored.
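For instance, assuming a 2D texture is already bound to GL_TEXTURE_2D, the bottom-left case from the image above (repeat on S, clamp on T) could be configured like this:

//tile the image along the S (U) axis and clamp it along the T (V) axis
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);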

Repeat mirrored Lena

Having introduced a few concepts regarding textures, let’s take a look at how to use them in code. At the end of the tutorial, your project should look like this:

VS structure texture

Source code for this tutorial can be found on our GitHub repository.

We need to add a new class called TextureLoader to our engine project. It loads a BMP file and returns a texture handle. This class is instantiated only once, in the Engine class constructor, and is exposed to the other projects through a get method (check the sources on GitHub).

#pragma once
#include <glew\glew.h>
#include <fstream>
#include <iostream>
#include <string>
#include "BMPHeaders.h" //check git for this class

namespace BasicEngine
{
	namespace Rendering
	{
		class TextureLoader
		{
		public:
			unsigned int LoadTexture(const std::string& filename,
			                         unsigned int width,
			                         unsigned int height);

		private:
			void LoadBMPFile(const std::string& filename,
			                 unsigned int& width,
			                 unsigned int& height,
			                 unsigned char*& data);
		};
	}
}

#include "TextureLoader.h"
using namespace BasicEngine::Rendering;





unsigned int TextureLoader::LoadTexture(const std::string& filename, 
                                        unsigned int width, 
                                        unsigned int height)
	unsigned char* data;
	LoadBMPFile(filename, width, height, data);

	//create the OpenGL texture
	unsigned int gl_texture_object;
	glGenTextures(1, &gl_texture_object);
	glBindTexture(GL_TEXTURE_2D, gl_texture_object);

	float maxAnisotropy;
	glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAnisotropy);
	glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAnisotropy);

	//when we work with textures of sizes not divisible by 4 we have to use the line reader
	//which loads the textures in OpenGL so as it can work with a 1 alligned memory (default is 4)
	glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

	//Generates texture
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);

	//eliminates the array from the RAM
	delete data;

	//creates the mipmap hierarchy

	//returns the texture object
	return gl_texture_object;

void TextureLoader::LoadBMPFile(const std::string& filename,
                                unsigned int& width,
                                unsigned int& height,
                                unsigned char*& data)
{
	//read the file
	std::ifstream file(filename.c_str(), std::ios::in | std::ios::binary);
	if (!file.good()){
		std::cout << "Texture Loader: Cannot open texture file " << filename << std::endl;
		width = 0;
		height = 0;
		data = nullptr;
		return;
	}

	//reads the headers
	Texture::BMP_Header h; Texture::BMP_Header_Info h_info;
	file.read((char*)&(h.type[0]), sizeof(char));
	file.read((char*)&(h.type[1]), sizeof(char));
	file.read((char*)&(h.f_lenght), sizeof(int));
	file.read((char*)&(h.rezerved1), sizeof(short));
	file.read((char*)&(h.rezerved2), sizeof(short));
	file.read((char*)&(h.offBits), sizeof(int));
	file.read((char*)&(h_info), sizeof(Texture::BMP_Header_Info));

	//assigning memory (a pixel has 3 components: R, G, B)
	data = new unsigned char[h_info.width * h_info.height * 3];

	//check if each row needs padding up to a multiple of 4 bytes
	long padd = 0;
	if ((h_info.width * 3) % 4 != 0) padd = 4 - (h_info.width * 3) % 4;

	width = h_info.width;
	height = h_info.height;

	long pointer;
	unsigned char r, g, b;
	//reading the pixel matrix
	for (unsigned int i = 0; i < height; i++)
	{
		for (unsigned int j = 0; j < width; j++)
		{
			file.read((char*)&b, 1);	//in bmp, the component order in the pixel is b, g, r (on Windows)
			file.read((char*)&g, 1);
			file.read((char*)&r, 1);

			pointer = (i * width + j) * 3;
			data[pointer] = r;
			data[pointer + 1] = g;
			data[pointer + 2] = b;
		}
		file.seekg(padd, std::ios_base::cur);
	}
}
It is worth mentioning that, as of this writing, the only files the project created for this tutorial series can turn into textures are 24-bit .bmp files.

When loading a texture, we first read the image and store it in a memory buffer. Afterwards, we ask OpenGL to reserve a name for it by calling glGenTextures(GLsizei n, GLuint *textures), which reserves n names and writes them into the array pointed to by textures.

Afterwards, the texture is bound to a texture unit using glBindTexture(GLenum target, GLuint texture). A texture unit is the slot through which a shader references a texture object. When a texture is bound this way it goes to texture unit 0, which is also the active unit by default, unless changed with glActiveTexture(GLenum texture).
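As a minimal sketch of this name/bind sequence (the variable name tex is ours, and we assume glew is included and a GL context is current):

GLuint tex;				//the name OpenGL reserves for our texture
glGenTextures(1, &tex);			//reserve 1 name and write it into tex
glActiveTexture(GL_TEXTURE0);		//make texture unit 0 the active one (it already is by default)
glBindTexture(GL_TEXTURE_2D, tex);	//bind the texture object to the active unit as a 2D texture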

Next up, there is a series of parameter operations: wrapping, mipmaps and filtering. We covered wrapping a few paragraphs ago, so let’s dig into mipmaps. A mipmap is a structure in which a texture is stored along with progressively smaller copies of itself. It is mainly used to speed up rendering where LOD (Level of Detail) is implemented, or whenever you simply need a smaller copy of the same texture. An example of an RGB mipmap is the following:

Mipmap of Lena

It’s important to note that ideally the texture should be square, with a width and height that are powers of 2 (512, 1024, 2048, etc.). In the classic storage scheme pictured above, the mipmap canvas is double the width and height of the original image: three of its four quarters hold the R, G and B channels of the full-size image, and the operation repeats inside the fourth quarter with the image reduced to half its size.
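As a side note, the number of levels in a full mipmap chain follows directly from halving the resolution until 1x1 is reached; the small helper below (ours, not part of the engine) makes this explicit.

#include <algorithm>
#include <cmath>

//number of mipmap levels for a width x height texture: halve until 1x1
unsigned int MipLevelCount(unsigned int width, unsigned int height)
{
	return 1 + static_cast<unsigned int>(std::floor(std::log2(std::max(width, height))));
}
//e.g. a 512 x 512 texture has 10 levels: 512, 256, 128, 64, 32, 16, 8, 4, 2, 1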

Texture filtering deals with cases where a texture is stretched beyond its original size or shrunk below it, so the color corresponding to a given texture coordinate has to be reconstructed from the surrounding texels. The two basic filters are:

  • GL_NEAREST – picks the texel closest to the coordinate
  • GL_LINEAR – a weighted average of the neighboring texels

It is clear how these two work, so let’s concentrate on the mipmap options (a short snippet combining them follows the list):

  • GL_NEAREST_MIPMAP_NEAREST – picks the mipmap level whose size best matches the pixel and samples it with nearest-neighbor filtering
  • GL_LINEAR_MIPMAP_NEAREST – picks the closest mipmap level and samples it with linear filtering
  • GL_NEAREST_MIPMAP_LINEAR – linearly blends the two mipmap levels closest in size to the pixel, sampling each with nearest-neighbor filtering
  • GL_LINEAR_MIPMAP_LINEAR – linearly blends the two closest mipmap levels, sampling each with linear filtering (trilinear filtering)
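To tie these options together, here is one possible filtering setup for the texture currently bound to GL_TEXTURE_2D; in our loader these calls would sit next to the glGenerateMipmap call that builds the mipmap hierarchy, and the exact choice of filters is a matter of taste and performance.

glGenerateMipmap(GL_TEXTURE_2D);	//build the whole mipmap chain from level 0

//minification: trilinear filtering (blend of the two closest mip levels)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
//magnification: plain linear filtering (mipmaps are irrelevant when magnifying)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);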

All models should be able to set and get multiple textures, so we have to add two new functions to our IGameObject interface, which will be implemented in the Model class.

virtual void SetTexture(std::string textureName, GLuint texture) = 0;
virtual const GLuint GetTexture(std::string textureName) const = 0;

We can have multiple textures for a single model, so we store these texture handles in a std::map inside Model. Check our GitHub repository again to see Model.cpp and Model.h.
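As a rough sketch of how Model could implement the two functions with a std::map (the member name textures is our assumption; the actual implementation is in the repository):

//inside Model.h (sketch): std::map<std::string, GLuint> textures;

void Model::SetTexture(std::string textureName, GLuint texture)
{
	if (texture == 0) return;		//ignore invalid texture handles
	textures[textureName] = texture;	//insert or overwrite by name
}

const GLuint Model::GetTexture(std::string textureName) const
{
	return textures.at(textureName);	//throws if the name was never set
}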

Now we add a new project called c2_3_DrawCubeTexture to our Visual Studio solution, where we prepare the texture to be plastered on the rotating cube from the previous tutorials. Nothing changes in CubeTexture.h except the name. In CubeTexture.cpp we need to change the Create and Draw methods.

#include "CubeTexture.h"
using namespace BasicEngine::Rendering;

#define PI 3.14159265



void CubeTexture::Create()
{
	GLuint vao;
	GLuint vbo;
	GLuint ibo;

	glGenVertexArrays(1, &vao);
	glBindVertexArray(vao);

	std::vector<VertexFormat> vertices;
	std::vector<unsigned int>  indices = { 0,  1,  2,  0,  2,  3,   //front
					       4,  5,  6,  4,  6,  7,   //right
					       8,  9,  10, 8,  10, 11 ,  //back
					       12, 13, 14, 12, 14, 15,  //left
					       16, 17, 18, 16, 18, 19,  //upper
					       20, 21, 22, 20, 22, 23}; //bottom
	vertices.push_back(VertexFormat(glm::vec3(-1.0, -1.0, 1.0),
                                        glm::vec2(0, 0)));
	vertices.push_back(VertexFormat(glm::vec3( 1.0, -1.0, 1.0),
                                        glm::vec2(1, 0)));
	vertices.push_back(VertexFormat(glm::vec3( 1.0,  1.0, 1.0),
                                        glm::vec2(1, 1)));
	vertices.push_back(VertexFormat(glm::vec3(-1.0,  1.0, 1.0),
                                        glm::vec2(0, 1)));

	vertices.push_back(VertexFormat(glm::vec3(1.0,  1.0,  1.0),
                                        glm::vec2(0, 0)));
	vertices.push_back(VertexFormat(glm::vec3(1.0,  1.0, -1.0),
                                        glm::vec2(1, 0)));
	vertices.push_back(VertexFormat(glm::vec3(1.0, -1.0, -1.0),
                                        glm::vec2(1, 1)));
	vertices.push_back(VertexFormat(glm::vec3(1.0, -1.0,  1.0),
                                        glm::vec2(0, 1)));

	vertices.push_back(VertexFormat(glm::vec3(-1.0, -1.0, -1.0),
                                        glm::vec2(0, 0)));
	vertices.push_back(VertexFormat(glm::vec3(1.0,  -1.0, -1.0),
                                        glm::vec2(1, 0)));
	vertices.push_back(VertexFormat(glm::vec3(1.0,   1.0, -1.0),
                                        glm::vec2(1, 1)));
	vertices.push_back(VertexFormat(glm::vec3(-1.0,  1.0, -1.0),
                                        glm::vec2(0, 1)));

	vertices.push_back(VertexFormat(glm::vec3(-1.0, -1.0, -1.0),
                                        glm::vec2(0, 0)));
	vertices.push_back(VertexFormat(glm::vec3(-1.0, -1.0,  1.0),
                                        glm::vec2(1, 0)));
	vertices.push_back(VertexFormat(glm::vec3(-1.0,  1.0,  1.0),
                                        glm::vec2(1, 1)));
	vertices.push_back(VertexFormat(glm::vec3(-1.0,  1.0, -1.0),
                                        glm::vec2(0, 1)));

	vertices.push_back(VertexFormat(glm::vec3( 1.0, 1.0,  1.0),
                                        glm::vec2(0, 0)));
	vertices.push_back(VertexFormat(glm::vec3(-1.0, 1.0,  1.0),
                                        glm::vec2(1, 0)));
	vertices.push_back(VertexFormat(glm::vec3(-1.0, 1.0, -1.0),
                                        glm::vec2(1, 1)));
	vertices.push_back(VertexFormat(glm::vec3( 1.0, 1.0, -1.0),
                                        glm::vec2(0, 1)));

	vertices.push_back(VertexFormat(glm::vec3(-1.0, -1.0, -1.0),
                                        glm::vec2(0, 0)));
	vertices.push_back(VertexFormat(glm::vec3( 1.0, -1.0, -1.0),
                                        glm::vec2(1, 0)));
	vertices.push_back(VertexFormat(glm::vec3( 1.0, -1.0,  1.0),
                                        glm::vec2(1, 1)));
	vertices.push_back(VertexFormat(glm::vec3(-1.0, -1.0,  1.0),
                                        glm::vec2(0, 1)));

	glGenBuffers(1, &vbo);
	glBindBuffer(GL_ARRAY_BUFFER, vbo);
	glBufferData(GL_ARRAY_BUFFER,
	             vertices.size() * sizeof(VertexFormat),
	             &vertices[0], GL_STATIC_DRAW);

	glGenBuffers(1, &ibo);
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
	glBufferData(GL_ELEMENT_ARRAY_BUFFER,
	             indices.size() * sizeof(unsigned int),
	             &indices[0], GL_STATIC_DRAW);

	//position on location 0, texture coordinates on location 1
	glEnableVertexAttribArray(0);
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(VertexFormat), (void*)0);
	glEnableVertexAttribArray(1);
	glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(VertexFormat),
	                      (void*)(offsetof(VertexFormat, VertexFormat::texture)));
	this->vao = vao;

	rotation_speed = glm::vec3(90.0, 90.0, 90.0);
	rotation = glm::vec3(0.0, 0.0, 0.0);
}

void CubeTexture::Update()
{
	rotation = 0.01f * rotation_speed + rotation;
	rotation_sin = glm::vec3(rotation.x * PI / 180, rotation.y * PI / 180, rotation.z * PI / 180);
}

void CubeTexture::Draw(const glm::mat4& projection_matrix,
                       const glm::mat4& view_matrix)
{
	glUseProgram(program);
	glBindVertexArray(vao);

	//bind our texture to texture unit 0 and point the sampler at that unit
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, this->GetTexture("Create"));
	unsigned int textureLocation = glGetUniformLocation(program, "texture1");
	glUniform1i(textureLocation, 0);

	glUniform3f(glGetUniformLocation(program, "rotation"),
	            rotation_sin.x, rotation_sin.y, rotation_sin.z);
	glUniformMatrix4fv(glGetUniformLocation(program, "view_matrix"),
	                   1, false, &view_matrix[0][0]);
	glUniformMatrix4fv(glGetUniformLocation(program, "projection_matrix"),
	                   1, false, &projection_matrix[0][0]);

	glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0);
}

The process is simple: create the index array, create the interleaved vertex and texture coordinate array, enable the texture (highlighted code above), set the correct parameters and finally draw.

The last things to examine are the vertex and fragment shaders used:

#version 450 core
layout(location = 0) in vec3 in_position;
layout(location = 1) in vec2 in_texture;

uniform mat4 projection_matrix, view_matrix;
uniform vec3 rotation;

out vec2 texcoord;

void main()
{
	texcoord = in_texture;
	mat4 rotate_x, rotate_y, rotate_z;

	rotate_x = mat4(1.0, 0.0, 0.0, 0.0,
					0.0, cos(rotation.x), sin(rotation.x), 0.0,
					0.0, -sin(rotation.x), cos(rotation.x), 0.0,
					0.0, 0.0, 0.0, 1.0);

	rotate_y = mat4(cos(rotation.y), 0.0, -sin(rotation.y), 0.0,
					0.0, 1.0, 0.0, 0.0,
					sin(rotation.y), 0.0, cos(rotation.y), 0.0,
					0.0, 0.0, 0.0, 1.0);

	rotate_z = mat4(cos(rotation.z), -sin(rotation.z), 0.0, 0.0,
					sin(rotation.z), cos(rotation.z), 0.0, 0.0,
					0.0, 0.0, 1.0, 0.0,
					0.0, 0.0, 0.0, 1.0);

	gl_Position = projection_matrix * view_matrix *
	              rotate_y * rotate_x * rotate_z * vec4(in_position, 1);
}


#version 450 core

layout(location = 0) out vec4 out_color;

uniform sampler2D texture1;

in vec2 texcoord;
void main()
{
	vec4 color = texture(texture1, texcoord);
	out_color = color;
}

As you can see, the only texture-related operation takes place within the fragment shader, which gives each fragment the corresponding color from the texture. The vertex shader receives each vertex along with its texture coordinates (in_texture) on location 1 and passes them along to the fragment shader via the in/out texcoord variable.

in and out are storage qualifiers used in GLSL to pass variables between shader stages. The uniform sampler2D gives us access to our texture; for the moment we have only one. Note the texture function, which takes the sampler and the texture coordinates as parameters and returns the final color (R, G, B, A).

Finally, main.cpp for our textured cube:

#pragma once
#include <BasicEngine\Engine.h>
#include "CubeTexture.h"
#include <iostream>

using namespace BasicEngine;

int main(int argc, char **argv)
{
	Engine* engine = new Engine();

	CubeTexture* cube = new CubeTexture();
	int program = engine->GetShader_Manager()->GetShader("cubeShader");
	if (program == 0)
	{
		std::cout << "invalid program...";
		std::cin.get();
	}
	unsigned int texture = engine->GetTexture_Loader()->LoadTexture("Textures\\Crate.bmp", 256, 256);

	cube->SetProgram(program);           //shader used by the cube (as in previous tutorials)
	cube->Create();                      //build the VAO/VBO/IBO
	cube->SetTexture("Create", texture); //the name must match the one used in Draw
	engine->GetModels_Manager()->SetModel("cube", cube);

	engine->Run();                       //start the main loop (check the repository for the exact engine setup)

	delete engine;
	return 0;
}

I hope you found this texture tutorial illuminating and enjoyable. Look forward to more in-depth discussions about what else you can do with textures in the following weeks.

For your “homework”, try to replace the crate texture with bamboo.bmp. Both Crate.bmp and bamboo.bmp can be found in our GitHub repository.


