Efficient parallax scrolling background using GLSL shaders

Posted on Wednesday, 18 December 2013


Take a look at this. The scrolling background has 4 layers, each with an alpha channel. Each layer is 2560x720. Looks pretty good I think!

"But Nicky," I hear you exclaim, "doesn't drawing each of those layers mean you're pushing an entire screen of pixels 4 times just to draw the background? That's horribly inefficient, isn't it?"

Yeah, on the desktop you might be fine, but on mobile you're asking for trouble. And even on the desktop, you don't want to push around pixels you don't need to. Wouldn't it be nice if you could draw the background in a single full screen quad? Yes. Yes it would. Here's how I did it.

As I'm using libGDX, I created a Mesh containing a single fullscreen quad, and loaded the four textures. This isn't the interesting part though: if you're using libGDX you'll already know how to do that, and if you're not, whatever framework you're using should let you do it pretty easily too.
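
Still, in case it helps, here's a minimal sketch of that setup in libGDX. It's not my exact code; the file names and the Repeat wrap mode are assumptions, and the attribute names just need to match the shaders below.

		// A single quad covering the screen in clip space (-1..1), with the
		// attributes the vertex shader expects: position, colour, texcoords.
		Mesh fullscreenQuad = new Mesh(true, 4, 0,
				new VertexAttribute(VertexAttributes.Usage.Position, 2, "a_position"),
				new VertexAttribute(VertexAttributes.Usage.ColorUnpacked, 4, "a_color"),
				new VertexAttribute(VertexAttributes.Usage.TextureCoordinates, 2, "a_texCoords"));
		fullscreenQuad.setVertices(new float[] {
				//  x,   y,    r, g, b, a,   u, v
				-1f, -1f,    1, 1, 1, 1,   0f, 1f,
				 1f, -1f,    1, 1, 1, 1,   1f, 1f,
				 1f,  1f,    1, 1, 1, 1,   1f, 0f,
				-1f,  1f,    1, 1, 1, 1,   0f, 0f });

		// Load the four layers (file names are placeholders). If they're meant
		// to tile as you scroll, give them Repeat wrapping on the u axis (note
		// GLES 2.0 only guarantees Repeat for power-of-two texture sizes).
		Array<Texture> backgrounds = new Array<Texture>();
		for (int i = 0; i < 4; i++) {
			Texture layer = new Texture(Gdx.files.internal("background" + i + ".png"));
			layer.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.ClampToEdge);
			backgrounds.add(layer);
		}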

The real work is going on in a shader. First, we have to bind each of the background textures to a different texture unit, so we can combine them in the shader. Here's what it looks like in libGDX:

		backgrounds.get(0).bind(0);
		backgrounds.get(1).bind(1);
		backgrounds.get(2).bind(2);
		backgrounds.get(3).bind(3);
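
One thing to watch out for: Texture.bind(unit) leaves that unit as the active texture unit, so if you draw anything with a SpriteBatch afterwards it will bind its texture to unit 3 instead of unit 0. If that bites you, switch back to unit 0 once you've finished binding:

		Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);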

Now we've got to tell the shader how far in the level our player has travelled, which textures we're using, and render our fullscreen mesh. That's simple too:

		parallaxShader.begin();
		parallaxShader.setUniformf("travelDistance", travelDistance);
		parallaxShader.setUniformi("u_texture0", 0);
		parallaxShader.setUniformi("u_texture1", 1);
		parallaxShader.setUniformi("u_texture2", 2);
		parallaxShader.setUniformi("u_texture3", 3);
		fullscreenQuad.render(parallaxShader, GL20.GL_TRIANGLE_FAN);
		parallaxShader.end();

travelDistance is a float; in my case it's camera.position.x / 1280 / 15, which moves the background at a pleasing speed.
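
In case it's useful, here's roughly how that fits together (a sketch; parallax.vert and parallax.frag are whatever file names you give the shader sources below):

		// Load once, up front.
		ShaderProgram parallaxShader = new ShaderProgram(
				Gdx.files.internal("parallax.vert"),
				Gdx.files.internal("parallax.frag"));
		if (!parallaxShader.isCompiled()) {
			throw new GdxRuntimeException(parallaxShader.getLog());
		}

		// Each frame, derive the scroll offset from the camera position.
		float travelDistance = camera.position.x / 1280f / 15f;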

Now the code that powers the parallaxShader. First, the vertex shader:

	attribute vec4 a_position;
	attribute vec4 a_color;
	attribute vec2 a_texCoords;
	uniform mat4 u_worldView;
	uniform float travelDistance;
	varying vec4 v_color;
	varying vec2 v_texCoords;
	varying vec2 t0c, t1c, t2c, t3c;
	void main()
	{
	    v_color = a_color;
	    v_texCoords = a_texCoords;
	    // The quad is already in clip space, so no matrix multiply is needed.
	    gl_Position = a_position;
	    // Offset each layer's u coordinate by a different fraction of
	    // travelDistance; the bigger the divisor, the slower the layer scrolls.
	    vec2 texCoord = v_texCoords;
	    t0c = texCoord;
	    t0c.x = t0c.x + travelDistance / 3.0;
	    t1c = texCoord;
	    t1c.x = t1c.x + travelDistance / 2.5;
	    t2c = texCoord;
	    t2c.x = t2c.x + travelDistance / 1.8;
	    t3c = texCoord;
	    t3c.x = t3c.x + travelDistance / 1.3;
	}

The four vec2 varyings hold the texture coordinates for each of the four background layers. For each one, we add travelDistance to the x coordinate, divided by a different factor so that each layer scrolls at a different rate; the bigger the divisor, the slower the layer moves, which is what makes it read as more distant. That leaves four sets of coordinates for the fragment shader to use.

We calculate the coordinates in the vertex shader because it only runs once per vertex (four times for the whole quad), and the hardware then interpolates the varyings across the quad for free. Doing the same arithmetic in the fragment shader works, but it runs for every pixel, and on older mobile GPUs adjusting texture coordinates in the fragment shader causes dependent texture reads, which can kill performance. Here's the fragment shader:

	#ifdef GL_ES
	#define LOWP lowp
	precision mediump float;
	#else
	#define LOWP
	#endif
	varying LOWP vec4 v_color;
	varying vec2 v_texCoords;
	varying vec2 t0c, t1c, t2c, t3c;
	uniform sampler2D u_texture0;
	uniform sampler2D u_texture1;
	uniform sampler2D u_texture2;
	uniform sampler2D u_texture3;
	void main()
	{
	  vec4 texel0, texel1, texel2, texel3;
	  texel0 = texture2D(u_texture0, t0c);
	  texel1 = texture2D(u_texture1, t1c);
	  texel2 = texture2D(u_texture2, t2c);
	  texel3 = texture2D(u_texture3, t3c);
	  // Start with the farthest layer, then composite each nearer layer
	  // over the result using that layer's alpha.
	  gl_FragColor = texel0;
	  gl_FragColor = vec4(texel1.a) * texel1 + vec4(1.0 - texel1.a) * gl_FragColor;
	  gl_FragColor = vec4(texel2.a) * texel2 + vec4(1.0 - texel2.a) * gl_FragColor;
	  gl_FragColor = vec4(texel3.a) * texel3 + vec4(1.0 - texel3.a) * gl_FragColor;
	}

Now that we're in the fragment shader, we can fetch a texel from each texture and blend them together. Those last three lines are doing a GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blend, so the alpha from each layer is taken into account and each pixel doesn't end up either bright white or totally black :)

All pretty simple! And if you wanted to take it to the next level, you could dynamically generate the shader code based on how many background textures you have. Just remember the limitations of your target platform should you want to use this technique on mobile.
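
As a rough illustration of that last idea, you could build the fragment shader source with a StringBuilder. This is just a sketch; buildParallaxFragmentShader is a hypothetical helper, and you'd generate the matching vertex shader and bind calls the same way:

		public static String buildParallaxFragmentShader(int layers) {
			StringBuilder sb = new StringBuilder();
			sb.append("#ifdef GL_ES\nprecision mediump float;\n#endif\n");
			for (int i = 0; i < layers; i++) {
				sb.append("varying vec2 t").append(i).append("c;\n");
				sb.append("uniform sampler2D u_texture").append(i).append(";\n");
			}
			sb.append("void main()\n{\n");
			sb.append("  vec4 texel;\n");
			// Farthest layer first, then blend each nearer layer over the top.
			sb.append("  gl_FragColor = texture2D(u_texture0, t0c);\n");
			for (int i = 1; i < layers; i++) {
				sb.append("  texel = texture2D(u_texture").append(i).append(", t").append(i).append("c);\n");
				sb.append("  gl_FragColor = texel.a * texel + (1.0 - texel.a) * gl_FragColor;\n");
			}
			sb.append("}\n");
			return sb.toString();
		}

You'd then pass the generated source to new ShaderProgram(vertexSource, fragmentSource) as usual.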