OpenGL, a noob’s guide for Android developers

Benjamin Monjoie
10 min read · Jun 3, 2017

French version available here : https://medium.com/@xzan/opengl-le-guide-du-noob-pour-d%C3%A9veloppeur-android-78f069c7214d

So, you decided to do some OpenGL on Android? Before we dive deeper, it’s important for you to know what you are getting yourself into. You are going to cry, beg and question what you thought you knew since primary school. Rest assured, it’s normal!

‘guardedRun’ is the method containing all the code executed by the thread which calls your Renderer. Oops!

What’s OpenGL?

To be sure we are talking about the same thing, let’s clear things up right away. OpenGL is a programming interface that lets you talk to the device’s graphics driver. The device could be a phone, a computer, a TV screen or anything else that supports OpenGL. Well yes, the device has to support it. As for Android devices, they support:
* OpenGL ES 1.0 & 1.1 since Android 1.0 (API 4)
* OpenGL ES 2.0 since Android 2.2 (API 8)
* OpenGL ES 3.0 since Android 4.3 (API 18) (almost)
* OpenGL ES 3.1 since Android 5.0 (API 21)

ES what now?…

You figured it out: it was too good to be true, there is a catch. Android doesn’t support OpenGL but OpenGL ES. OpenGL ES is a variant of the OpenGL specification for embedded systems.
OK, ok! It’s not that bad: there are differences, but nothing major. That means some code that works on your computer may not work as-is on your phone, but it will be close.
Phew!

The graphics driver?

“I thought Android was written in Java and that I didn’t have to care about the hardware unless I used native code (C or C++)”

You are almost right. When you use OpenGL, you talk directly to the graphics driver, so it is possible that the same Java code doesn’t behave the same way on all phones. But there will still be time to worry about that if and when such problems arise!

Let’s dive in !

I am only going to talk about OpenGL ES 2.0 because it is the version supported by most Android phones (Android 2.2+). Nevertheless, that should be enough to get you started and to springboard you into OpenGL ES 3.0 or 3.1 should you need them.

In the next sections, I will introduce some important points and gotchas without getting into implementation details. To know how everything fits together, go check the example project that goes along with this article: https://bitbucket.org/Xzan/opengl-example .
Please note that everything has been put into a single file on purpose, to make it easier to read.

Or, better yet, the amazing tutorial on Android developer’s website : https://developer.android.com/training/graphics/opengl/index.html

GLSurfaceView and the Renderer

We need to start somewhere and, generally, it’s best to start at the beginning. To avoid confusing you from the get-go, we’ll begin with what you need to know on the Android side, in Java.

In our case, we start by adding a view in which we are going to display the result of our OpenGL commands. This view is called GLSurfaceView and takes care of creating the thread for your OpenGL commands.
Then comes its interface, GLSurfaceView.Renderer, which will be called at 3 key moments on GLSurfaceView’s OpenGL thread:

  • onSurfaceCreated(GL10 gl, EGLConfig config)
  • onSurfaceChanged(GL10 gl, int width, int height)
  • onDrawFrame(GL10 gl)

Even though their names are pretty explicit, it’s important to know what you are going to do in each of them.

* In onSurfaceCreated, you initialize your program and your initial configuration. You can see this method as the View’s constructor. It is called once per Surface lifecycle, but the Surface can be destroyed and this method will be called again when the next one is created.

* onSurfaceChanged is a good place to create your textures (we’ll get back to that) and to (re)create whatever depends on your view’s size. You can see this method as View.onSizeChanged(int w, int h, int oldw, int oldh). This method is also not called often.

* Finally, onDrawFrame is called every time your view is about to be rendered on screen, in other words very often. You can see it as the View.onDraw(Canvas canvas) method, so the usual performance best practices apply here as well (e.g. do not instantiate objects in this method). A minimal skeleton of a Renderer is sketched right after this list.
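Here is what an empty implementation of these three callbacks could look like. This is only a sketch: the class name MyRenderer and the GL calls inside are illustrative, not taken from the example project.

import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class MyRenderer implements GLSurfaceView.Renderer {

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // One-time setup: clear color, shader compilation, program creation...
        GLES20.glClearColor(0f, 0f, 0f, 1f);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        // Size-dependent setup: viewport, projection matrix, textures...
        GLES20.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // Called for every frame: keep it lean, no object allocation here.
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    }
}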

Bonus: you can ask your view not to be redrawn every frame but only when it’s “dirty”. To achieve this, call GLSurfaceView.setRenderMode(int) with the parameter RENDERMODE_WHEN_DIRTY. Then, call GLSurfaceView.requestRender() to signal that your view is dirty.
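As a sketch of how this fits together with the hypothetical MyRenderer above (for example in your Activity’s onCreate), the setup could look like this:

GLSurfaceView glSurfaceView = new GLSurfaceView(context);
glSurfaceView.setEGLContextClientVersion(2);   // we target OpenGL ES 2.0
glSurfaceView.setRenderer(new MyRenderer());
// setRenderMode can only be called after setRenderer:
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);

// Later, whenever your content changes, mark the view as dirty:
glSurfaceView.requestRender();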

Note that SurfaceView isn’t a view like the others: it is rendered under your activity, in which a hole is punched to let you see your Surface through it. If you wish to obtain something “more classic”, you can use a TextureView. There is no GLTextureView, but you can find an implementation by Roman Nurik in Muzei: https://github.com/romannurik/muzei/blob/master/main/src/main/java/com/google/android/apps/muzei/render/GLTextureView.java .
In itself, it’s not impossible to do OpenGL without GLSurfaceView or GLTextureView, but it’s far simpler to use them than to take care of all of this by yourself, especially at the beginning.

How OpenGL works

Now it’s time to talk about OpenGL itself but, before we go further, we need an overview of OpenGL’s pipeline to understand it fully. In other words, the steps OpenGL goes through to build the picture rendered on screen from the values we feed it.

In this section, I may use some words that are unknown to you or not yet completely understood, such as shader, fragment or texture. These concepts are important and are explained later. Don’t hesitate to read this section again once you have finished the whole article, to fully understand what is explained here.

Wonderful representation done by https://www.ntu.edu.sg/home/ehchua/programming/opengl/CG_BasicsTheory.html
  1. We pass the vertices’ coordinates to the vertex shader. It transforms them and passes them to the “rasterizer”. These vertices will form the triangles which are the basic building blocks of a 3D scene.
  2. The rasterizer fills the triangle(s) with fragments so that they can be shown on screen. Fragments are sets of state from which the final pixels are computed.
  3. For each fragment, the fragment shader is called to give it a color to render on screen.
  4. The data is finally merged to be rendered on screen, or written into a texture as pixels.

GLSL programs (or shaders)

GLSL is short for OpenGL Shading Language and is the name of the language in which we program OpenGL. The keyword is “Shader”. This weird word you may have heard before, without understanding it, is actually simple: it’s the part of an OpenGL program which is executed on the GPU.

There are several types of shaders. Two of them are of interest to us:

  • The vertex shader: in charge of computing the rendered position. We feed it a set of attributes associated with a point in 3D space and it computes the position on screen. This set of attributes is most often composed of coordinates and a color (or texture coordinates). The vertex shader is called once per vertex.
  • The fragment shader: in charge of computing the color of each pixel. It receives the output of the vertex shader as input. This code is executed for each pixel of your image. To be clearer, the GPU optimizes most of its calls to be as fast as possible at rendering and is capable of computing the value of several pixels in parallel. We often picture this as the GPU computing every pixel simultaneously, but what matters is that we don’t start at the top-left corner and finish at the bottom-right one. If you compute a piece of information for pixel [0,0], it won’t be available when it’s time to compute pixel [0,1].

A very simple example of those two shaders can be found in the example project:

Vertex shader:

precision mediump float;
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
attribute vec4 vTextureCoordinate;
varying vec2 position;

void main() {
    gl_Position = uMVPMatrix * vPosition;
    position = vTextureCoordinate.xy;
}

Fragment shader:

precision mediump float;
uniform sampler2D uTexture;
varying vec2 position;

void main() {
    gl_FragColor = texture2D(uTexture, position);
}

In the vertex shader, we receive 3 parameters:

  • uMVPMatrix: a matrix which allows us to change the point of view, rotation and scale.
  • vPosition: the coordinates which will form our “strip”.
  • vTextureCoordinate: the texture coordinates for each of our vertices.

In the fragment shader, we receive 2 parameters:

  • uTexture: the texture containing the picture to be shown.
  • position: a parameter received from the vertex shader which contains the position of the pixel to display.

As I am sure you know, there are some shaders a lot more complex than those.
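To give an idea of how these shaders and their parameters are wired up from Java, here is a sketch of the usual compile/link/lookup sequence. The loadShader helper and the VERTEX_SHADER_SOURCE / FRAGMENT_SHADER_SOURCE string constants are assumptions of mine, not names from the example project.

// Hypothetical helper, not a name from the example project.
private static int loadShader(int type, String source) {
    int shader = GLES20.glCreateShader(type);
    GLES20.glShaderSource(shader, source);
    GLES20.glCompileShader(shader);
    return shader;
}

// Typically in onSurfaceCreated:
int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, VERTEX_SHADER_SOURCE);
int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, FRAGMENT_SHADER_SOURCE);

int program = GLES20.glCreateProgram();
GLES20.glAttachShader(program, vertexShader);
GLES20.glAttachShader(program, fragmentShader);
GLES20.glLinkProgram(program);

// Handles used later to feed the parameters described above:
int mvpMatrixHandle = GLES20.glGetUniformLocation(program, "uMVPMatrix");
int positionHandle = GLES20.glGetAttribLocation(program, "vPosition");
int textureCoordinateHandle = GLES20.glGetAttribLocation(program, "vTextureCoordinate");
int textureHandle = GLES20.glGetUniformLocation(program, "uTexture");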

Enter the matrix (Coordinate systems)

OpenGL’s coordinate system doesn’t care about the screen size, as shown in the picture below:

Image shamelessly stolen from https://developer.android.com/guide/topics/graphics/opengl.html#coordinate-mapping

This is why it’s important to pass the ratio and other kinds of information to the vertex shader. That’s the purpose of this line in the example project.

In the example project, and in a lot of other cases, the coordinates are passed as a “strip”. That means we pass the coordinates of the vertices of adjacent triangles which create the image we wish to show:

Schema of 4 triangles formed with the vertices A, B, C, D, E, F. Picture from Wikipedia: https://en.wikipedia.org/wiki/Triangle_strip

Still in the example project, the array passed is composed of 4 vertices, starting in the bottom left corner, going to the bottom right corner, then the top left corner, and finishing in the top right corner.

Left: what the strip would look like. Right: the triangles formed by this strip

The texture coordinates are slightly different, to respect the orientation in which the picture is loaded.

Also, if you look a bit closer, you’ll notice the position coordinates go from -1 to 1 whereas the texture coordinates go from 0 to 1. The position coordinates also contain depth as the Z coordinate, but it isn’t mandatory if you only do 2D.
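As an illustration, such coordinate arrays could look like the sketch below. The exact values in the example project may differ, in particular how the texture coordinates are flipped to match the picture’s orientation.

// Four vertices of the strip: bottom left, bottom right, top left, top right.
// Positions go from -1 to 1 (with Z = 0), texture coordinates from 0 to 1.
float[] positionCoordinates = {
        -1f, -1f, 0f,   // bottom left
         1f, -1f, 0f,   // bottom right
        -1f,  1f, 0f,   // top left
         1f,  1f, 0f,   // top right
};
float[] textureCoordinates = {
        0f, 1f,   // bottom left
        1f, 1f,   // bottom right
        0f, 0f,   // top left
        1f, 0f,   // top right
};

// OpenGL expects native-order buffers (java.nio.ByteBuffer / ByteOrder /
// FloatBuffer), not plain Java arrays:
FloatBuffer positionBuffer = ByteBuffer.allocateDirect(positionCoordinates.length * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer()
        .put(positionCoordinates);
positionBuffer.position(0);

Drawing these 4 vertices as a strip is then a single call to GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4).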

In the vertex shader given as an example, there is a multiplication between uMVPMatrix and vPosition. vPosition is a vector containing the coordinates mentioned earlier. uMVPMatrix is a matrix built with the utility methods provided by the Matrix class of the Android framework. GLSL is capable of handling matrix multiplications simply and efficiently.
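As a sketch of that matrix work: the frustum values below are illustrative rather than the ones from the example project, width and height would come from onSurfaceChanged, and mvpMatrixHandle is the handle from the earlier sketch.

// Building uMVPMatrix with android.opengl.Matrix:
float ratio = (float) width / height;
float[] projectionMatrix = new float[16];
float[] viewMatrix = new float[16];
float[] mvpMatrix = new float[16];

Matrix.frustumM(projectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
Matrix.setLookAtM(viewMatrix, 0, 0f, 0f, -3f, 0f, 0f, 0f, 0f, 1f, 0f);
Matrix.multiplyMM(mvpMatrix, 0, projectionMatrix, 0, viewMatrix, 0);

// In onDrawFrame, pass it to the vertex shader:
GLES20.glUniformMatrix4fv(mvpMatrixHandle, 1, false, mvpMatrix, 0);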
Another nuance: I used “vTextureCoordinate.xy”. This creates a size-2 vector containing the first and second values of the size-4 vector vTextureCoordinate. I could have written “vTextureCoordinate.xx” to create a size-2 vector with the first value of vTextureCoordinate as both of its values.

Note, a little subtlety you might encounter while reading shaders: to be more correct and respect naming conventions, I should have used “vTextureCoordinate.st”. STPQ replaces XYZW when talking about texture coordinates, and RGBA replaces it when talking about colors. In any case, using one or the other doesn’t change the execution, only the readability.

Textures

Textures are memory spaces in which the graphics processor (GPU) stores images, either to render them or to write new values into them.

Code to create a new texture in memory can be found here.
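For reference, a minimal texture-creation sketch usually looks roughly like this; the bitmap variable is assumed to exist, and the filtering/wrapping parameters are illustrative choices, not necessarily what the example project uses.

int[] textureIds = new int[1];
GLES20.glGenTextures(1, textureIds, 0);
int textureId = textureIds[0];

GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

// Upload an Android Bitmap into the texture (android.opengl.GLUtils):
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);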

Buffers (FBO)

A FrameBuffer Object (or FBO) sits on top of a texture and writes into it. Without an FBO, all the OpenGL commands you execute would be rendered on screen. This would be pretty annoying if you wanted to build an effect by combining other effects.

Code to create FBOs can be found here.
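Again as a sketch (textureId being the texture created in the previous section, variable names mine), attaching a texture to an FBO typically boils down to:

int[] framebufferIds = new int[1];
GLES20.glGenFramebuffers(1, framebufferIds, 0);
int framebufferId = framebufferIds[0];

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, framebufferId);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, textureId, 0);

if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER) != GLES20.GL_FRAMEBUFFER_COMPLETE) {
    throw new RuntimeException("Framebuffer is not complete");
}

// Everything drawn now ends up in the texture. Bind framebuffer 0 again
// when you want to draw on screen:
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);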

Advice

In the sample project, I voluntarily put everything into one file. I did it to ease the reading because, when I was learning, I noticed that having to hunt for the piece of code I was trying to understand would interrupt my train of thought and didn’t help me understand.
Nevertheless, once you’ve got the basics, I can only advise you to abstract your code as much as possible behind Java classes. For example, make a class that handles FBO creation for you, put your setup in a class extending GLSurfaceView, and extract the shaders’ logic into their own classes (as GPUImage does so well), etc. This will make the code cleaner and easier to read.

Crash often, but crash well. Debugging OpenGL code is particularly hard. Check for errors and check your states (e.g. the state of your FBOs or your shaders’ compilation errors) and throw a RuntimeException to help you see what is wrong and where. This will help you during development, but also when testing on different devices.
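Here is the kind of check I mean, as a sketch with hypothetical names:

// Fail loudly whenever OpenGL reports an error.
public static void checkGlError(String operation) {
    int error = GLES20.glGetError();
    if (error != GLES20.GL_NO_ERROR) {
        throw new RuntimeException(operation + ": glError 0x" + Integer.toHexString(error));
    }
}

// Same idea after compiling a shader:
int[] compileStatus = new int[1];
GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
if (compileStatus[0] == 0) {
    String log = GLES20.glGetShaderInfoLog(shader);
    GLES20.glDeleteShader(shader);
    throw new RuntimeException("Shader compilation failed: " + log);
}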

Finally, test manually on several devices and several Android versions. You won’t be able to catch everything, but you will already pick up on a lot of tiny details. Test on flagships like Samsung’s Galaxy S or Galaxy Note series, but also on more modest devices, and don’t forget some famous Chinese brands like Wiko or Xiaomi.
If you don’t own a lot of devices and/or want to play it safe, publish your app on the Play Store’s alpha and beta channels to get feedback from your users before it goes out to the wider audience.

Conclusion

Well, you got it: developing with OpenGL means a lot of boilerplate, mathematics and hair pulling. Nevertheless, rest assured, a lot of people went through this before you and faced the same problems you will face, so there will be help along the way.
I hope that through this article I was able to highlight some of the key concepts which will help you understand StackOverflow posts when you search for answers to your questions.

As mentioned earlier, the tutorial provided on the Android developers’ website is really good, even if it misses a few points which, I hope, have been explained here. It also comes with a great sample project which can be a huge source of inspiration and provides some utility methods that will help you a lot.
