
Rendering OpenMAX to OpenGL on the Raspberry Pi

OpenMAX generally forms a self-contained system which can display videos using the GPU. OpenGL ES is a distinct system that can use the GPU to draw graphics. There is a hook to allow OpenMAX videos to be displayed on OpenGL ES surfaces on the Raspberry Pi which is discussed in this chapter.

Resources

Files

The files used in this chapter are square.c and video.c.

EGLImage

OpenMAX

OpenMAX uses buffers to pass information into and out of components. Some components, such as video_render, communicate directly with the hardware to render their output buffers. In addition, OpenMAX allows an EGLImage to be used as a buffer by some components. Images written to an EGLImage can then be rendered by OpenGL ES or OpenVG.

The OpenMAX specification says exactly nothing about how to create an EGLImage; the data type is not part of OpenMAX. All it describes is how a component can be given an EGLImage to use as a buffer, which is done with the call OMX_UseEGLImage:

OMX_UseEGLImage(hComponent, 
                ppBufferHdr,
                nPortIndex,
                pAppPrivate,
                eglImage);

Even if you have an EGLImage, there is no guarantee that a component will be able to use it. It may not be set up to use this type of buffer, and if it can't, the call to OMX_UseEGLImage will return OMX_ErrorNotImplemented.

OpenMAX is silent about which components will be able to handle this buffer type. It is not part of the OpenMAX specification.
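Since there is no way to know in advance, the safe pattern is to try the call and check the result. In outline (the handle, port index and eglImage are assumed to exist already; this is a sketch, not a complete program):

```c
OMX_BUFFERHEADERTYPE *hdr = NULL;
OMX_ERRORTYPE r = OMX_UseEGLImage(handle, &hdr, port, NULL, eglImage);
if (r == OMX_ErrorNotImplemented) {
    /* this component cannot take EGLImage buffers: fall back to
       ordinary buffers via OMX_UseBuffer or OMX_AllocateBuffer */
} else if (r != OMX_ErrorNone) {
    /* some other failure, e.g. wrong port or component state */
}
```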

OpenGL

The type EGLImage is not part of the OpenGL ES specification either: that specification says nothing about how to display images from other sources.

eglCreateImageKHR

The function eglCreateImageKHR provides the missing link. The KHR_image_base specification defines a Khronos extension to EGL:

This extension defines a new EGL resource type that is suitable for sharing 2D arrays of image data between client APIs, the EGLImage. Although the intended purpose is sharing 2D image data, the underlying interface makes no assumptions about the format or purpose of the resource being shared, leaving those decisions to the application and associated client APIs.

The specification defines the functions

    EGLImageKHR eglCreateImageKHR(
                            EGLDisplay dpy,
                            EGLContext ctx,
                            EGLenum target,
                            EGLClientBuffer buffer,
                            const EGLint *attrib_list)

    EGLBoolean eglDestroyImageKHR(
                            EGLDisplay dpy,
                            EGLImageKHR image)

The display and context are the standard EGL display and context. The target is not tightly specified:

<target> specifies the type of resource being used as the EGLImage source (examples include two-dimensional textures in OpenGL ES contexts and VGImage objects in OpenVG contexts)

The buffer is the resource used, cast to type EGLClientBuffer.

For OpenGL ES, a value of EGL_GL_TEXTURE_2D_KHR is given in eglext.h and can be used to specify that the resource is an OpenGL ES texture. The buffer itself is an OpenGL ES texture id.

Broadcom GPU

There are two marvellous examples in the hello_pi source tree: hello_videocube and hello_teapot. The examples given later are based on these. However, they use OpenGL ES version one, whereas new applications should use OpenGL ES version two. Examination of these examples also reveals a number of issues.

Components

The Broadcom video_render component does not support EGLImage. Instead there is a new component, egl_render. This takes video or image input, but also has an output buffer, which must be set to the EGLImage.

Threads

OpenGL ES has a processing loop, which typically draws frames as quickly as possible. OpenMAX also has a processing loop as it feeds data through a component pipeline. One of these can be run in the main thread, but the other requires its own thread, easily given by pthreads.

The OpenMAX thread will be filling the EGLImage buffer; the OpenGL ES thread will be using this to draw a texture. Should there be synchronisation? The RPi examples have none, and it doesn't seem to be a problem.

Rendering a video into an OpenGL ES texture

Rendering using an EGLImage falls naturally into two sections: setting up the OpenGL ES environment and setting up the OpenMAX environment. These two are essentially disjoint, with the connection point being creation of the EGLImage in the OpenGL ES part and setting that as a buffer in the OpenMAX part. The two parts then each run in their own thread.

The OpenGL ES we use is borrowed from the image drawing program of the OpenGL ES chapter. The only substantive change is to the function CreateSimpleTexture2D, where we create an EGLImage and hand that to a new Posix thread which runs the OpenMAX code:
GLuint CreateSimpleTexture2D(ESContext *esContext)
{
   // Texture object handle
   GLuint textureId;
   UserData *userData = esContext->userData;
   
   //userData->width = esContext->width;
   //userData->height = esContext->height;

   // Generate a texture object
   glGenTextures ( 1, &textureId );

   // Bind the texture object
   glBindTexture ( GL_TEXTURE_2D, textureId );

   // Load the texture
   glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA, 
		  IMAGE_SIZE_WIDTH, IMAGE_SIZE_HEIGHT, 
		  0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );

   // Set the filtering mode
   glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
   glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
   //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);


   /* Create EGL Image */
   eglImage = eglCreateImageKHR(
                esContext->eglDisplay,
                esContext->eglContext,
                EGL_GL_TEXTURE_2D_KHR,
                textureId, // (EGLClientBuffer)esContext->texture,
                0);
    
   if (eglImage == EGL_NO_IMAGE_KHR)
   {
      printf("eglCreateImageKHR failed.\n");
      exit(1);
   }

   // Start rendering
   pthread_create(&thread1, NULL, video_decode_test, eglImage);

   return textureId;
}

The full file for the OpenGL ES code is square.c:

//
// Book:      OpenGL(R) ES 2.0 Programming Guide
// Authors:   Aaftab Munshi, Dan Ginsburg, Dave Shreiner
// ISBN-10:   0321502795
// ISBN-13:   9780321502797
// Publisher: Addison-Wesley Professional
// URLs:      http://safari.informit.com/9780321563835
//            http://www.opengles-book.com
//

// Simple_Texture2D.c
//
//    This is a simple example that draws a quad with a 2D
//    texture image. The purpose of this example is to demonstrate 
//    the basics of 2D texturing
//
#include <stdlib.h>
#include <stdio.h>
#include "esUtil.h"

#include "EGL/eglext.h"

#include <pthread.h>

#include "triangle.h"

typedef struct
{
   // Handle to a program object
   GLuint programObject;

   // Attribute locations
   GLint  positionLoc;
   GLint  texCoordLoc;

   // Sampler location
   GLint samplerLoc;

   // Texture handle
   GLuint textureId;

   GLubyte *image;
    int width, height;
} UserData;

static void* eglImage = 0;
static pthread_t thread1;

#define IMAGE_SIZE_WIDTH 1920
#define IMAGE_SIZE_HEIGHT 1080

GLuint CreateSimpleTexture2D(ESContext *esContext)
{
   // Texture object handle
   GLuint textureId;
   UserData *userData = esContext->userData;
   
   //userData->width = esContext->width;
   //userData->height = esContext->height;

   // Generate a texture object
   glGenTextures ( 1, &textureId );

   // Bind the texture object
   glBindTexture ( GL_TEXTURE_2D, textureId );

   // Load the texture
   glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA, 
		  IMAGE_SIZE_WIDTH, IMAGE_SIZE_HEIGHT, 
		  0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );

   // Set the filtering mode
   glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
   glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
   //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);


   /* Create EGL Image */
   eglImage = eglCreateImageKHR(
                esContext->eglDisplay,
                esContext->eglContext,
                EGL_GL_TEXTURE_2D_KHR,
                textureId, // (EGLClientBuffer)esContext->texture,
                0);
    
   if (eglImage == EGL_NO_IMAGE_KHR)
   {
      printf("eglCreateImageKHR failed.\n");
      exit(1);
   }

   // Start rendering
   pthread_create(&thread1, NULL, video_decode_test, eglImage);

   return textureId;
}


///
// Initialize the shader and program object
//
int Init ( ESContext *esContext )
{
    UserData *userData = esContext->userData;
    GLbyte vShaderStr[] =  
      "attribute vec4 a_position;   \n"
      "attribute vec2 a_texCoord;   \n"
      "varying vec2 v_texCoord;     \n"
      "void main()                  \n"
      "{                            \n"
      "   gl_Position = a_position; \n"
      "   v_texCoord = a_texCoord;  \n"
      "}                            \n";
   
    GLbyte fShaderStr[] =  
      "precision mediump float;                            \n"
      "varying vec2 v_texCoord;                            \n"
      "uniform sampler2D s_texture;                        \n"
      "void main()                                         \n"
      "{                                                   \n"
      "  gl_FragColor = texture2D( s_texture, v_texCoord );\n"
      "}                                                   \n";

   // Load the shaders and get a linked program object
   userData->programObject = esLoadProgram ( vShaderStr, fShaderStr );

   // Get the attribute locations
   userData->positionLoc = glGetAttribLocation ( userData->programObject, "a_position" );
   userData->texCoordLoc = glGetAttribLocation ( userData->programObject, "a_texCoord" );
   
   // Get the sampler location
   userData->samplerLoc = glGetUniformLocation ( userData->programObject, "s_texture" );

   // Load the texture
   userData->textureId = CreateSimpleTexture2D (esContext);

   glClearColor ( 0.5f, 0.5f, 0.5f, 1.0f );   

   return GL_TRUE;
}

///
// Draw a triangle using the shader pair created in Init()
//
void Draw ( ESContext *esContext )
{
   UserData *userData = esContext->userData;
   GLfloat vVertices[] = { -1.0f,  1.0f, 0.0f,  // Position 0
                            0.0f,  0.0f,        // TexCoord 0 
                           -1.0f, -1.0f, 0.0f,  // Position 1
                            0.0f,  1.0f,        // TexCoord 1
                            1.0f, -1.0f, 0.0f,  // Position 2
                            1.0f,  1.0f,        // TexCoord 2
                            1.0f,  1.0f, 0.0f,  // Position 3
                            1.0f,  0.0f         // TexCoord 3
                         };
   GLushort indices[] = { 0, 1, 2, 0, 2, 3 };
      
   // Set the viewport
   glViewport ( 0, 0, 1920, 1080); //esContext->width, esContext->height );
   
   // Clear the color buffer
   glClear ( GL_COLOR_BUFFER_BIT );

   // Use the program object
   glUseProgram ( userData->programObject );

   // Load the vertex position
   glVertexAttribPointer ( userData->positionLoc, 3, GL_FLOAT, 
                           GL_FALSE, 5 * sizeof(GLfloat), vVertices );
   // Load the texture coordinate
   glVertexAttribPointer ( userData->texCoordLoc, 2, GL_FLOAT,
                           GL_FALSE, 5 * sizeof(GLfloat), &vVertices[3] );

   glEnableVertexAttribArray ( userData->positionLoc );
   glEnableVertexAttribArray ( userData->texCoordLoc );

   // Bind the texture
   glActiveTexture ( GL_TEXTURE0 );
   glBindTexture ( GL_TEXTURE_2D, userData->textureId );

   // Set the sampler texture unit to 0
   glUniform1i ( userData->samplerLoc, 0 );

   glDrawElements ( GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices );

}

///
// Cleanup
//
void ShutDown ( ESContext *esContext )
{
   UserData *userData = esContext->userData;

   // Delete texture object
   glDeleteTextures ( 1, &userData->textureId );

   // Delete program object
   glDeleteProgram ( userData->programObject );
	
   free(esContext->userData);
}

int main ( int argc, char *argv[] )
{
   ESContext esContext;
   UserData  userData;

   int width = 1920, height = 1080;
   GLubyte *image;
   
   esInitContext ( &esContext );
   esContext.userData = &userData;

   esCreateWindow ( &esContext, "Simple Texture 2D", width, height, ES_WINDOW_RGB );

   if ( !Init ( &esContext ) )
      return 0;

   esRegisterDrawFunc ( &esContext, Draw );

   esMainLoop ( &esContext );

   ShutDown ( &esContext );
}

The code on the OpenMAX side is a bit more complicated. The rendering component changes from video_render to egl_render, and this component has an output port, 221. The EGLImage is attached to this port by

OMX_UseEGLImage(ILC_GET_HANDLE(egl_render), &eglBuffer, 221, NULL, eglImage)

The call should be made after the PortSettingsChanged event has been received from the video_decode component and the tunnel between the two components has been set up.

We need to fill the EGLImage buffer. This is done by the usual OMX_FillThisBuffer call. But what happens when it is full? There is no specification for this. What the Broadcom component appears to do is render the buffer's contents onto the EGL surface. The examples do not show any synchronisation technique; it appears to just happen.

After the buffer has been filled (and rendered?) it should be filled again. When a buffer is filled, the component makes a FillBufferDone callback, which is caught by the IL Client library. The library includes a hook, ilclient_set_fill_buffer_done_callback, by which we can register a callback function that just refills the buffer:

void my_fill_buffer_done(void* data, COMPONENT_T* comp)
{
    if (OMX_FillThisBuffer(ilclient_get_handle(egl_render), eglBuffer) != OMX_ErrorNone)
	{
	    printf("OMX_FillThisBuffer failed in callback\n");
	    exit(1);
	}
}

With these additions, the code to render the video to the EGLImage is in the file video.c :

/*
  Copyright (c) 2012, Broadcom Europe Ltd
  Copyright (c) 2012, OtherCrashOverride
  All rights reserved.

  Redistribution and use in source and binary forms, with or without
  modification, are permitted provided that the following conditions are met:
  * Redistributions of source code must retain the above copyright
  notice, this list of conditions and the following disclaimer.
  * Redistributions in binary form must reproduce the above copyright
  notice, this list of conditions and the following disclaimer in the
  documentation and/or other materials provided with the distribution.
  * Neither the name of the copyright holder nor the
  names of its contributors may be used to endorse or promote products
  derived from this software without specific prior written permission.

  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
  ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
  WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
  DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY
  DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
  (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
  LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
  ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
  SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/

// Video decode demo using OpenMAX IL through the ilclient helper library

#define JN 1

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>

#include "bcm_host.h"
#include "ilclient.h"

static OMX_BUFFERHEADERTYPE* eglBuffer = NULL;
static COMPONENT_T* egl_render = NULL;

static void* eglImage = 0;

void my_fill_buffer_done(void* data, COMPONENT_T* comp)
{
    if (OMX_FillThisBuffer(ilclient_get_handle(egl_render), eglBuffer) != OMX_ErrorNone)
	{
	    printf("OMX_FillThisBuffer failed in callback\n");
	    exit(1);
	}
}

int get_file_size(char *fname) {
    struct stat st;

    if (stat(fname, &st) == -1) {
	perror("Stat'ing img file");
	return -1;
    }
    return(st.st_size);
}

#define err2str(x) ""

OMX_ERRORTYPE read_into_buffer_and_empty(FILE *fp,
					 COMPONENT_T *component,
					 OMX_BUFFERHEADERTYPE *buff_header,
					 int *toread) {
    OMX_ERRORTYPE r;

    int buff_size = buff_header->nAllocLen;
    int nread = fread(buff_header->pBuffer, 1, buff_size, fp);


    buff_header->nFilledLen = nread;
    *toread -= nread;
    // printf("Read %d, %d still left\n", nread, *toread);

    if (*toread <= 0) {
	printf("Setting EOS on input\n");
	buff_header->nFlags |= OMX_BUFFERFLAG_EOS;
    }
    r = OMX_EmptyThisBuffer(ilclient_get_handle(component),
			    buff_header);
    if (r != OMX_ErrorNone) {
	fprintf(stderr, "Empty buffer error %s\n",
		err2str(r));
    }
    return r;
}

// Modified function prototype to work with pthreads
void *video_decode_test(void* arg)
{
    const char* filename = "/opt/vc/src/hello_pi/hello_video/test.h264";
    eglImage = arg;

    if (eglImage == 0)
	{
	    printf("eglImage is null.\n");
	    exit(1);
	}

    OMX_VIDEO_PARAM_PORTFORMATTYPE format;
    OMX_TIME_CONFIG_CLOCKSTATETYPE cstate;

    COMPONENT_T *video_decode = NULL;
    COMPONENT_T *list[3];  // last entry should be null
    TUNNEL_T tunnel[2]; // last entry should be null

    ILCLIENT_T *client;
    FILE *in;
    int status = 0;
    unsigned int data_len = 0;

    memset(list, 0, sizeof(list));
    memset(tunnel, 0, sizeof(tunnel));

    if((in = fopen(filename, "rb")) == NULL)
	return (void *)-2;

    if((client = ilclient_init()) == NULL)
	{
	    fclose(in);
	    return (void *)-3;
	}

    if(OMX_Init() != OMX_ErrorNone)
	{
	    ilclient_destroy(client);
	    fclose(in);
	    return (void *)-4;
	}

    // callback
    ilclient_set_fill_buffer_done_callback(client, my_fill_buffer_done, 0);

    // create video_decode
    if(ilclient_create_component(client, &video_decode, "video_decode", ILCLIENT_DISABLE_ALL_PORTS | ILCLIENT_ENABLE_INPUT_BUFFERS) != 0)
	status = -14;
    list[0] = video_decode;

    // create egl_render
    if(status == 0 && ilclient_create_component(client, &egl_render, "egl_render", ILCLIENT_DISABLE_ALL_PORTS | ILCLIENT_ENABLE_OUTPUT_BUFFERS) != 0)
	status = -14;
    list[1] = egl_render;

    set_tunnel(tunnel, video_decode, 131, egl_render, 220);
    ilclient_change_component_state(video_decode, OMX_StateIdle);

    memset(&format, 0, sizeof(OMX_VIDEO_PARAM_PORTFORMATTYPE));
    format.nSize = sizeof(OMX_VIDEO_PARAM_PORTFORMATTYPE);
    format.nVersion.nVersion = OMX_VERSION;
    format.nPortIndex = 130;
    format.eCompressionFormat = OMX_VIDEO_CodingAVC;

    if (status != 0) {
	fprintf(stderr, "Error has occurred %d\n", status);
	exit(1);
    }

    if(OMX_SetParameter(ILC_GET_HANDLE(video_decode), 
			OMX_IndexParamVideoPortFormat, &format) != OMX_ErrorNone) {
	fprintf(stderr, "Error setting port format\n");
	exit(1);
    }

    if(ilclient_enable_port_buffers(video_decode, 130, NULL, NULL, NULL) != 0) {
	fprintf(stderr, "Error enabling port buffers\n");
	exit(1);
    }


    OMX_BUFFERHEADERTYPE *buf;
    int port_settings_changed = 0;
    int first_packet = 1;

    ilclient_change_component_state(video_decode, OMX_StateExecuting);

    int toread = get_file_size(filename);
    // Read the first block so that the video_decode can get
    // the dimensions of the video and call port settings
    // changed on the output port to configure it
    while (toread > 0) {
	buf = 
	    ilclient_get_input_buffer(video_decode,
				      130,
				      1 /* block */);
	if (buf != NULL) {
	    read_into_buffer_and_empty(in,
				       video_decode,
				       buf,
				       &toread);

	    // If all the file has been read in, then
	    // we have to re-read this first block.
	    // Broadcom bug?
	    if (toread <= 0) {
		printf("Rewinding\n");
		// wind back to start and repeat
		//fp = freopen(IMG, "r", fp);
		rewind(in);
		toread = get_file_size(filename);
	    }
	}

	if (toread > 0 && ilclient_remove_event(video_decode, 
						OMX_EventPortSettingsChanged, 
						131, 0, 0, 1) == 0) {
	    printf("Removed port settings event\n");
	    break;
	} else {
	    // printf("No port settings seen yet\n");
	}
	// wait for first input block to set params for output port
	if (toread == 0) {
	    int err;
	    // wait for first input block to set params for output port
	    err = ilclient_wait_for_event(video_decode, 
					  OMX_EventPortSettingsChanged, 
					  131, 0, 0, 1,
					  ILCLIENT_EVENT_ERROR | ILCLIENT_PARAMETER_CHANGED, 
					  2000);
	    if (err < 0) {
		fprintf(stderr, "No port settings change\n");
		//exit(1);
	    } else {
		printf("Port settings changed\n");
		break;
	    }
	}
    }

    if(ilclient_setup_tunnel(tunnel, 0, 0) != 0)
	{
	    status = -7;
	    exit(1);
	}

    // Set egl_render to idle
    ilclient_change_component_state(egl_render, OMX_StateIdle);

    // Enable the output port and tell egl_render to use the texture as a buffer
    //ilclient_enable_port(egl_render, 221); THIS BLOCKS SO CANT BE USED
    if (OMX_SendCommand(ILC_GET_HANDLE(egl_render), OMX_CommandPortEnable, 221, NULL) != OMX_ErrorNone)
	{
	    printf("OMX_CommandPortEnable failed.\n");
	    exit(1);
	}

    if (OMX_UseEGLImage(ILC_GET_HANDLE(egl_render), &eglBuffer, 221, NULL, eglImage) != OMX_ErrorNone)
	{
	    printf("OMX_UseEGLImage failed.\n");
	    exit(1);
	}

    // Set egl_render to executing
    ilclient_change_component_state(egl_render, OMX_StateExecuting);


    // Request egl_render to write data to the texture buffer
    if(OMX_FillThisBuffer(ILC_GET_HANDLE(egl_render), eglBuffer) != OMX_ErrorNone)
	{
	    printf("OMX_FillThisBuffer failed.\n");
	    exit(1);
	}


   // now work through the file
    while (toread > 0) {
	OMX_ERRORTYPE r;

	// do we have a decode input buffer we can fill and empty?
	buf = 
	    ilclient_get_input_buffer(video_decode,
				      130,
				      1 /* block */);
	if (buf != NULL) {
	    read_into_buffer_and_empty(in,
				       video_decode,
				       buf,
				       &toread);
	}
    }

    sleep(2);

    // need to flush the renderer to allow video_decode to disable its input port
    ilclient_flush_tunnels(tunnel, 1);

    ilclient_disable_port_buffers(video_decode, 130, NULL, NULL, NULL);

    fclose(in);

    ilclient_disable_tunnel(tunnel);
    ilclient_teardown_tunnels(tunnel);

    ilclient_state_transition(list, OMX_StateIdle);
    ilclient_state_transition(list, OMX_StateLoaded);

    ilclient_cleanup_components(list);

    OMX_Deinit();

    ilclient_destroy(client);
    return (void *)status;
}

Conclusion

This chapter has shown how to render a video from OpenMAX onto an EGL surface using OpenGL ES. By changing the OpenGL ES program, complex effects such as spinning the image or rendering onto a "teapot" texture can be achieved.
Copyright © Jan Newmarch, jan@newmarch.name

"Programming AudioVideo on the Raspberry Pi GPU" by Jan Newmarch is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://jan.newmarch.name/RPi/.
