Dear Cerberus X User!

    As we prepare to transition the forum ownership from Mike to Phil (TripleHead GmbH), we need your explicit consent to transfer your user data in accordance with our amended Terms and Rules in order to be compliant with data protection laws.

    Important: If you accept the amended Terms and Rules, you agree to the transfer of your user data to the future forum owner!

    Please read the new Terms and Rules below, check the box to agree, and click "Accept" to continue enjoying your Cerberus X Forum experience. The deadline for consent is April 5, 2024.

    Do not accept the amended Terms and Rules if you do not wish your personal data to be transferred to the future forum owner!

    Accepting ensures:

    - Continued access to your account with a short break for the actual transfer.

    - Retention of your data under the same terms.

    Without consent:

    - You will no longer have access to your forum user account.

    - Your account and personal data will be deleted after April 5, 2024.

    - Public posts remain, but usernames indicating a real identity will be anonymized. If you disagree with the fictitious name chosen, you can contact us and we will find a name that is acceptable to you.

    We hope to keep you in our community and see you on the forum soon!

    All the best

    Your Cerberus X Team

Android experiment

Wingnut

I'm learning OO and 3D graphics, but before continuing with that I want to follow through on an experiment I said I would try with readpixels, because it actually seems doable. I know it's old GLES 2.0, but I won't try too hard. And if I'd need to make so many changes to the source code that they might cause problems, I guess I'll just quit trying.

Do you have a preference for which option to plump for in Cerberus?

Pseudo/C++ code :

Code:
'  Improves GLES 2.0 readpixels from ~100ms to ~5ms on Android

' INIT
' Extension prototype:
' EGLImageKHR eglCreateImageKHR(EGLDisplay dpy,          ' any valid display
'                               EGLContext ctx,          ' EGL_NO_CONTEXT
'                               EGLenum target,          ' EGL_NATIVE_BUFFER_ANDROID
'                               EGLClientBuffer buffer,  ' the ANativeWindowBuffer
'                               const EGLint *attrib_list)

' Allocate an ANativeWindowBuffer using the GraphicBuffer wrapper
GraphicBuffer *window = new GraphicBuffer(width, height, PIXEL_FORMAT_RGBA_8888,
    GraphicBuffer::USAGE_SW_READ_OFTEN | GraphicBuffer::USAGE_HW_TEXTURE);
struct ANativeWindowBuffer *buffer = window->getNativeBuffer();
EGLImageKHR image = eglCreateImageKHR(eglGetCurrentDisplay(), EGL_NO_CONTEXT,
    EGL_NATIVE_BUFFER_ANDROID, (EGLClientBuffer)buffer, attribs);

' USAGE :

' FBO OPTION 1 - back a texture with the image
glBindTexture(GL_TEXTURE_2D, texture_id);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);

' FBO OPTION 2 - back a renderbuffer with the image
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer_id);
glEGLImageTargetRenderbufferStorageOES(GL_RENDERBUFFER, image);

' READBACK - after rendering, lock the buffer and copy on the CPU
uint8_t *ptr;
window->lock(GraphicBuffer::USAGE_SW_READ_OFTEN, (void **)&ptr);
memcpy(pixels, ptr, width * height * 4);
window->unlock();
 
Did the comparison now

The existing old PIXELREAD works perfectly on most platforms :

* DESKTOP - you can read a lot of pixels, no problem at all. Amazing.

* HTML5 - same performance as DESKTOP, actually (impressive progress in web technologies), but readpixels introduces a constant stall here: it doesn't matter whether you read 1x1 pixel or 1280x768 pixels, the stall is the same size. Otherwise it is as quick as DESKTOP. Amazing, just a weird tiny cosmetic bump.

* iOS - same as DESKTOP. Amazing.

* Android - a total DISASTER. You have to keep the read at unusably small sizes just to keep the render loop from freezing on many Android devices.

The only reason I'm going to do this is to remove the cosmetic bump on HTML5 and to make Android usable.
 