Texture Mapping Continued

The previous tutorial introduced the foundation of texture mapping in OpenGL, and focused on illustrating how a .BMP texture file is correctly mapped to a cube mesh. This is a great starting point, and during the current tutorial we will take it one step further by introducing texture loading libraries, the notion of multitexturing, and UV animation.


This is what you should have at the end of this tutorial – multitexturing and UV animation for recreating the Diablo III arcane power effect in OpenGL.

There are various libraries and texture loading tools out there (DevIL, FreeImage, etc.). The one we will rely on in this tutorial is SOIL, because it's free, ships with OpenGL-specific functionality, and is pretty lightweight. Since an engine should ideally be platform independent, it's also worth knowing how to read and handle various texture file formats on your own. What if you would like to port your engine to PlayStation development and not just PC? Most of the libraries out there will let you down there, so being able to do some "parser monkey business" will benefit you in the future.
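Should you ever go down that route, here is what the first step of such "monkey business" looks like: a minimal, hypothetical sketch that probes a BMP file's header for its dimensions and bit depth. The field offsets follow the standard BITMAPINFOHEADER layout; the sketch assumes a little-endian host and an uncompressed file, and `ReadBmpInfo` is an illustrative helper, not part of the tutorial engine.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Minimal BMP header probe: extracts width, height, and bit depth
// from the BITMAPINFOHEADER. Assumes an uncompressed file and a
// little-endian host (BMP stores its fields little-endian).
struct BmpInfo {
	int32_t  width;
	int32_t  height;
	uint16_t bitsPerPixel;
};

bool ReadBmpInfo(const std::vector<uint8_t>& file, BmpInfo& out) {
	// A valid BMP starts with the magic bytes "BM" and has at least
	// a 14-byte file header plus a 40-byte BITMAPINFOHEADER (54 bytes).
	if (file.size() < 54 || file[0] != 'B' || file[1] != 'M')
		return false;
	std::memcpy(&out.width,        &file[18], sizeof(int32_t));   // biWidth
	std::memcpy(&out.height,       &file[22], sizeof(int32_t));   // biHeight
	std::memcpy(&out.bitsPerPixel, &file[28], sizeof(uint16_t));  // biBitCount
	return true;
}
```

From here, the pixel data offset (bytes 10-13 of the file header) tells you where to start reading the actual texels, which is essentially what SOIL does for you.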


Texture mapping using SOIL

Simple OpenGL Image Library (SOIL) is a small C library focused on loading textures. Download it from here. It is designed to be built as a compact static library, so let's use it as such in our Visual Studio solution.

  1. Locate the downloaded SOIL archive and unzip it to a location of your choice. Upon opening the folder you should have the following files:

    Contents of the unzipped SOIL folder.

    The folders of interest for us are projects, and src.

  2. Navigate to the projects folder. Locate the project folder VC9 and load SOIL.sln in Visual Studio. Once the files have all loaded, build the SOIL project. After the project has been built, feel free to quit Visual Studio and go back into the newly created VC9\Debug folder. Here you should find the static library SOIL.lib.

    The SOIL static library file generated after building the project with Visual Studio.

    Now copy SOIL.lib from this Debug folder into Dependencies\lib (located in the root folder of this tutorial series). At this point Dependencies\lib should contain static libraries for freeglut, glew32, and SOIL.
    Next we need to copy SOIL.h (located in the src folder where you have unzipped the SOIL archive) into Dependencies\include\soil. This is what your folders should look like:

    The tutorial Dependencies folder holding SOIL library, and SOIL header files.

    Finally, we have to tell Visual Studio where to find our newly added library in order to use it throughout the tutorials. This is done by firing up the tutorial solution in Visual Studio, and adding SOIL.lib in the engine Project Properties >> Linker >> Input >> Additional Dependencies.

    Then make sure that each project will depend on the basic engine at build time. In order to actually use SOIL, make sure you include SOIL.h in the source code as we will see in the following example:

    Overview of the tutorial solution hierarchy along with the newly added project (left). Basic Engine project properties menu (right).

  3. Now let’s try this out. Add a new project to your current solution. Use a similar naming convention to the rest of the projects in order to avoid confusion. Make sure the project has the same properties as the others (build dependency ticked for the BasicEngine library, etc.). For the first test, let’s recreate the previous tutorial, but this time using SOIL. All the files from the previous tutorial remain unchanged, except for Main.cpp.
    #include <BasicEngine\Engine.h>
    #include "CubeTextureAdvanced.h"
    #include "soil\SOIL.h"		 //new
    
    using namespace BasicEngine;
    
    int main(int argc, char **argv) {
    	Engine* engine = new Engine();
    	CubeTextureAdvanced* cube = new CubeTextureAdvanced();
    	int program = engine->GetShader_Manager()->GetShader("cubeShader");
    	if (program == 0) {
    		std::cout << "invalid program...";
    		std::cin.get();
    		return -1;
    	}
    	// just this line of code changes: SOIL loads the .BMP and hands back an OpenGL texture id
    	cube->SetTexture("Crate", SOIL_load_OGL_texture("Textures\\Crate.bmp", SOIL_LOAD_AUTO,
    	                                                SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS));
    	engine->GetModels_Manager()->SetModel("cube", cube);
    	delete engine;
    	return 0;
    }


    Result of the texturing project using SOIL.

Multitexturing and texture coordinate animation

For a final example in this tutorial we should look into multitexturing. And what better way to do this than to attempt to recreate the Diablo III arcane power UI globe?

This popular article on Diablo III resource bubbles sheds some light on how the arcane power effect is achieved – namely what geometry is used, what type of textures are mapped, and the order in which these textures are multiplied together.

At first glance this effect looks as if it’s a result of compositing a number of textures together (multitexturing). In addition to this observation, it’s also clear that the textures are in motion (UV animation). After a more careful examination, we notice that one texture is scrolling to the right, while the others are scrolling up, or diagonally. This is crucial information for achieving the desired effect.
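Before moving on, the scrolling itself is worth pinning down: it is nothing more than adding a time-dependent offset to one UV axis and letting the sampler wrap. A small CPU-side sketch of that wrap arithmetic, where `WrapRepeat` and `ScrollUV` are hypothetical helpers mimicking what the GL_REPEAT wrap mode does for us on the GPU:

```cpp
#include <cmath>

// GL_REPEAT keeps only the fractional part of a texture coordinate, so a
// UV that grows past 1.0 wraps back around -- this is what makes an
// ever-increasing time offset read as a seamless scroll.
float WrapRepeat(float coord) {
	return coord - std::floor(coord);	// e.g. 1.25 -> 0.25, -0.2 -> 0.8
}

// Scroll right: add the offset to u. Scroll up: subtract it from v,
// matching the direction convention used by the fragment shader later on.
void ScrollUV(float& u, float& v, float offset, bool horizontal) {
	if (horizontal) u = WrapRepeat(u + offset);
	else            v = WrapRepeat(v - offset);
}
```

This is also why the textures will later be loaded with a repeat flag: without wrapping, the scrolled coordinates would clamp at the texture edge and the animation would smear.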

Since morphing geometry in the vertex shader (i.e. transforming a cube or a plane to mimic a sphere) is out of the scope of the current tutorial, we will generate the data for a sphere on the client side.

Create yet another project making sure all the properties are consistent, just as you did before.

Overview of the multitexturing project's structure inside our solution.

Starting off, the header file needs a way of retrieving system time. I chose chrono for its accuracy and extended functionality. Feel free to replace it with any timing class of your choice.

In order to “animate” UVs we need to send the fragment shader the total amount of elapsed time since the application started. This is achieved by saving the time when the app starts, and then every frame, the delta time is updated and passed in a uniform to the fragment shader.

#pragma once
#include <BasicEngine\Rendering\Models\Model.h>
#include <chrono>

using namespace BasicEngine::Rendering::Models;

// Use the C++11 chrono functionality to set up a timer
// The timer tells the frag shader how much time has passed since the app commenced
typedef std::chrono::high_resolution_clock HiResTime;
typedef std::chrono::milliseconds MiliSec;			// required to track ticks in milliseconds
typedef std::chrono::duration<float> DeltaTime;		// required to convert from a chrono duration to float

class Multitexturing : public Model {

public:
	Multitexturing();
	~Multitexturing();

	// Generates the geometry, indices, and UVs for a sphere (same structure as the old gluSphere - "stacks and slices")
	// We could've used the cube and "morphed" it into a sphere in the vertex shader,
	// but that is out of the scope of this tutorial
	void CreateSphere(float radius, unsigned int rings, unsigned int sectors);

	// Draws a static sphere. We'll let the frag shader do the UV scrolling this time
	virtual void Draw(const glm::mat4& projection_matrix, const glm::mat4& view_matrix) override final;
};


Just as before, we’ll have to create the geometry (in our case a sphere) and make sure we store it in a buffer, along with its texture coordinates and indices.

There are a number of ways to computationally generate a sphere. I’ve chosen the “gluSphere” way – pass in a radius value, a number of vertical sectors, and a number of horizontal rings. Iterate over these and generate vertex data that is then stored in a vector.
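As a sanity check on that parametrization: every generated point should sit exactly radius units from the origin. Here is a small standalone version of the per-vertex math, where `SphereVertex` is a hypothetical helper using the same formulas as the generation loop below:

```cpp
#include <cmath>

const float PI = 3.1415927f;

// Computes one sphere vertex from its ring/sector indices, using the
// same "stacks and slices" formulas as CreateSphere():
//   the ring index drives latitude (y), the sector index drives longitude (x, z)
void SphereVertex(unsigned int ring, unsigned int sector,
                  unsigned int rings, unsigned int sectors, float radius,
                  float& x, float& y, float& z) {
	float r = static_cast<float>(ring)   / (rings - 1);		// 0..1 along latitude
	float s = static_cast<float>(sector) / (sectors - 1);	// 0..1 along longitude
	x = std::cos(2 * PI * s) * std::sin(PI * r) * radius;
	y = std::sin(-PI / 2 + PI * r) * radius;
	z = std::sin(2 * PI * s) * std::sin(PI * r) * radius;
}
```

Since x² + z² = sin²(πr)·radius² and y² = cos²(πr)·radius², every vertex has length exactly radius, confirming the loop really does trace out a sphere.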

#include "Multitexturing.h"

using namespace BasicEngine::Rendering;

const float PI = 3.1415927f;
int indicesSize;
std::chrono::time_point<HiResTime> startTime;		// must match the clock used for HiResTime::now()

Multitexturing::Multitexturing() {
}

Multitexturing::~Multitexturing() {
}

void Multitexturing::CreateSphere(float radius, unsigned int rings, unsigned int sectors) {

	// Generate a sphere
	std::vector<GLfloat> vertices;
	std::vector<GLfloat> texcoords;
	std::vector<GLushort> indices;

	float const RingsRecip   = 1.0f / (float)(rings - 1);
	float const SectorsRecip = 1.0f / (float)(sectors - 1);
	unsigned int countRings, countSectors;

	vertices.resize(rings * sectors * 3);						// *3 because it's a vec3 (position)
	texcoords.resize(rings * sectors * 2);						// *2 because it's a vec2 (UVs)

	std::vector<GLfloat>::iterator v = vertices.begin();
	std::vector<GLfloat>::iterator t = texcoords.begin();

	// Calculate the vertices' positions and their respective texture coordinates
	for (countRings = 0; countRings < rings; countRings++) {
		float const y = sin(-PI / 2 + PI * countRings * RingsRecip) * radius;
		for (countSectors = 0; countSectors < sectors; countSectors++) {
			float const x = cos(2 * PI * countSectors * SectorsRecip) * sin(PI * countRings * RingsRecip);
			float const z = sin(2 * PI * countSectors * SectorsRecip) * sin(PI * countRings * RingsRecip);

			*t++ = countSectors * SectorsRecip; *t++ = countRings * RingsRecip;

			*v++ = x * radius; *v++ = y; *v++ = z * radius;
		}
	}

	// Calculate the indices: one quad (4 indices) per ring/sector cell
	indices.resize((rings - 1) * (sectors - 1) * 4);
	std::vector<GLushort>::iterator i = indices.begin();
	for (countRings = 0; countRings < rings - 1; countRings++) {
		for (countSectors = 0; countSectors < sectors - 1; countSectors++) {
			*i++ = (countRings + 0) * sectors + countSectors;
			*i++ = (countRings + 0) * sectors + (countSectors + 1);
			*i++ = (countRings + 1) * sectors + (countSectors + 1);
			*i++ = (countRings + 1) * sectors + countSectors;
		}
	}

	// Use the previous tutorial's approach for storing everything in a VertexFormat(vec3 position, vec2 texCoords)
	// vao is the member inherited from Model, so Draw() can bind it later
	GLuint vbo, ibo;

	glGenVertexArrays(1, &vao);
	glBindVertexArray(vao);

	std::vector<VertexFormat> vertexData;
	for (unsigned int verts = 0, tex = 0; verts < vertices.size(); verts += 3, tex += 2) {
		vertexData.push_back(VertexFormat(glm::vec3(vertices[verts],
		                                            vertices[verts + 1],
		                                            vertices[verts + 2]),
		                                  glm::vec2(texcoords[tex], texcoords[tex + 1])));
	}

	glGenBuffers(1, &vbo);
	glBindBuffer(GL_ARRAY_BUFFER, vbo);
	glBufferData(GL_ARRAY_BUFFER, vertexData.size() * sizeof(VertexFormat), &vertexData[0], GL_STATIC_DRAW);

	glGenBuffers(1, &ibo);
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
	glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLushort), &indices[0], GL_STATIC_DRAW);

	glEnableVertexAttribArray(0);
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(VertexFormat), (void*)0);
	glEnableVertexAttribArray(1);
	glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(VertexFormat), (void*)(offsetof(VertexFormat, VertexFormat::texture)));

	indicesSize = indices.size();		// the index count must be passed to glDrawElements() later on

	startTime = HiResTime::now();		// assume the app starts...now!
}

void Multitexturing::Draw(const glm::mat4& projection_matrix, const glm::mat4& view_matrix) {

	glUseProgram(program);
	glBindVertexArray(vao);

	// Bind each texture to its own texture unit, then point the samplers at those units
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, this->GetTexture("BaseTexture"));
	glUniform1i(glGetUniformLocation(program, "nebulaTex1"), 0);

	glActiveTexture(GL_TEXTURE1);
	glBindTexture(GL_TEXTURE_2D, this->GetTexture("SecondTexture"));
	glUniform1i(glGetUniformLocation(program, "nebulaTex2"), 1);

	glActiveTexture(GL_TEXTURE2);
	glBindTexture(GL_TEXTURE_2D, this->GetTexture("ThirdTexture"));
	glUniform1i(glGetUniformLocation(program, "nebulaTex3"), 2);

	glActiveTexture(GL_TEXTURE3);
	glBindTexture(GL_TEXTURE_2D, this->GetTexture("AlphaChanTexture"));
	glUniform1i(glGetUniformLocation(program, "alphaChanTex"), 3);

	glActiveTexture(GL_TEXTURE4);
	glBindTexture(GL_TEXTURE_2D, this->GetTexture("RampTexture"));
	glUniform1i(glGetUniformLocation(program, "rampTex"), 4);

	auto      endTime = HiResTime::now();								// get the current time
	DeltaTime dt      = endTime - startTime;							// total elapsed time since the app started
	MiliSec   dtMS    = std::chrono::duration_cast<MiliSec>(dt);
	glUniform1f(glGetUniformLocation(program, "Timer"), (float)dtMS.count());	// tuck it in a uniform and pass it on to the shader

	glUniformMatrix4fv(glGetUniformLocation(program, "view_matrix"), 1, false, &view_matrix[0][0]);
	glUniformMatrix4fv(glGetUniformLocation(program, "projection_matrix"), 1, false, &projection_matrix[0][0]);

	// Draw the object twice since the textures are scrolling and
	// we do not wish to see overlapping geometry (due to the blend equation).
	// At this point make sure GL_BLEND, GL_CULL_FACE, and GL_DEPTH_TEST are enabled inside the SceneManager.
	// Note: GL_QUADS is only available in a compatibility context.

	glCullFace(GL_FRONT);	// cull the front faces, drawing the inside (back) of the sphere first
	glDrawElements(GL_QUADS, indicesSize, GL_UNSIGNED_SHORT, 0);

	glCullFace(GL_BACK);	// then cull the back faces, drawing the front on top
	glDrawElements(GL_QUADS, indicesSize, GL_UNSIGNED_SHORT, 0);
}

The vertex shader is trivial at this point, as it contains no rotation information.

#version 450 core

layout(location = 0) in vec3 in_position;
layout(location = 1) in vec2 in_texture;

uniform mat4 projection_matrix, view_matrix;

out vec2 texcoord;

void main() {
	texcoord = in_texture;
	gl_Position = projection_matrix * view_matrix * vec4(in_position, 1);
}
The fragment shader does all of the work this time – it takes in all of the textures we’ve specified with SOIL.

The first step is to use the timer as an offset for animating the UVs of each texture. This is done by adding it to and/or subtracting it from the x and y values of the UVs.
If we subtract it from the y value, the texture will appear to scroll up. If we add it to the x value, the texture will scroll to the right.

After setting this up, we need to isolate the “particles” that appear in the texture. We do this not by discarding fragments, but by reading the alpha texture’s value and passing that value into the alpha channel of one of the other textures.
Now that everything is set up, we can multiply all of these together.

Overview of the texture types used in this project along with the order in which they are combined.

#version 450 core

layout(location = 0) out vec4 out_color;

uniform sampler2D nebulaTex1;			
uniform sampler2D nebulaTex2;		
uniform sampler2D nebulaTex3;		
uniform sampler2D alphaChanTex;
uniform sampler2D rampTex;		
uniform float     Timer;

in vec2 texcoord;

void main() {
	float offset = Timer * 0.0001;
	vec4 firstColour  = texture (nebulaTex3,   vec2(texcoord.x + offset * 0.5, texcoord.y) * 1.8);
	vec4 secondColour = texture (nebulaTex1,   vec2(texcoord.x, texcoord.y - offset) * 1.5);
	vec4 thirdColour  = texture (nebulaTex2,   vec2(texcoord.x, texcoord.y - offset));
	vec4 aChanColour  = texture (alphaChanTex, vec2(texcoord.x, texcoord.y - offset));
	vec4 rampColour   = texture (rampTex,      vec2(texcoord.x, texcoord.y - offset) * 1.75); 

	// Use the greyscale value of the aChanColour as the alpha of the thirdColour.
	// This will basically set the alpha of the bright particles to a high value, and the rest to a low value
	// Hence isolating the particles from the rest of the texture values.
	vec4 particleColour = vec4(thirdColour.r, thirdColour.g, thirdColour.b, aChanColour.r);

	out_color = (firstColour * secondColour * 2) * particleColour * 2 * rampColour;
}

Finally, our Main initializes the engine and brings everything to life:

#include <BasicEngine\Engine.h>
#include "Multitexturing.h"
#include "soil\SOIL.h"

using namespace BasicEngine;

int main(int argc, char **argv) {
	Engine* engine = new Engine();

	Multitexturing* multitexSphere = new Multitexturing();
	int program = engine->GetShader_Manager()->GetShader("MultiTexSphereShader");
	if (program == 0) {
		std::cout << "Invalid program.";
		std::cin.get();
		return -1;
	}
	multitexSphere->CreateSphere(2, 24, 48);	// generate the sphere geometry and bind the ibo, vbo buffers

	// Make sure the SOIL flag is set to repeat since we will be "scrolling" the UV coordinates
	multitexSphere->SetTexture("BaseTexture",      SOIL_load_OGL_texture("Textures\\nebula1.png",   SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_TEXTURE_REPEATS));
	multitexSphere->SetTexture("SecondTexture",    SOIL_load_OGL_texture("Textures\\nebula2.png",   SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_TEXTURE_REPEATS));
	multitexSphere->SetTexture("ThirdTexture",     SOIL_load_OGL_texture("Textures\\nebula3.png",   SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_TEXTURE_REPEATS));
	multitexSphere->SetTexture("AlphaChanTexture", SOIL_load_OGL_texture("Textures\\alphaChan.png", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_TEXTURE_REPEATS));
	multitexSphere->SetTexture("RampTexture",      SOIL_load_OGL_texture("Textures\\ramp.png",      SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_TEXTURE_REPEATS));

	engine->GetModels_Manager()->SetModel("sphere", multitexSphere);

	delete engine;
	return 0;
}

In order to test this tutorial, head to our GitHub and clone the repository. My texture-making skills leave much to be desired (notice the inconsistent edges), but any picture of a galaxy or nebula should do fine for this example.


Result of the multitexturing tutorial.

I hope you’ve found the material covered here useful. In the next tutorial I’ll be extending the functionality of the Texture Loading class (a home-made alternative for external texture loading tools).

