I’ve been working on an Android Studio app that uses TensorFlow Lite’s GPU delegate to speed up inference. It uses a model that takes an input array with leading dimension n and produces an output array with the same leading dimension, where n is the number of 384-element inputs I feed in at a time; output row n depends only on input row n. For n = 1 I have no problems: both TF Lite’s CPU and GPU inference work fine (although GPU inference is slower, possibly because of the small input size). When I increase n above 1 and run the model, CPU compute still works, but GPU compute crashes the program. When I run the program on an emulated Pixel 3 XL I get this error message:
E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.example.mlptest, PID: 10405
java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: OpenCL library not loaded - dlopen failed: library "libOpenCL-pixel.so" not found
Falling back to OpenGL
TfLiteGpuDelegate Init: OpenGL ES 3.1 or above is required to use OpenGL inference.
TfLiteGpuDelegate Prepare: delegate is not initialized
Node number 4 (TfLiteGpuDelegateV2) failed to prepare.
When I run GPU compute on my personal phone, a Motorola Moto G7 Power, I get this error message:
E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.example.mlptest, PID: 16906
java.lang.IllegalStateException: Internal error: Unexpected failure when preparing tensor allocations:
TfLiteGpuDelegate Init: Index is out of range
TfLiteGpuDelegate Prepare: delegate is not initialized
Node number 4 (TfLiteGpuDelegateV2) failed to prepare.
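Since the crash only shows up for n > 1, one sanity check is that the input buffer handed to the interpreter actually scales with n. Here is a minimal, self-contained sizing sketch; the 384-element width comes from the question, while the direct-ByteBuffer layout and the class/method names are illustrative assumptions, not code from the original app:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class InputBufferSizing {
    static final int FEATURES = 384;     // per-example input length from the model
    static final int BYTES_PER_FLOAT = 4;

    // Allocate a direct buffer sized for n examples of 384 floats each,
    // in native byte order as TF Lite expects for direct buffers.
    static ByteBuffer allocateInput(int n) {
        ByteBuffer buf = ByteBuffer.allocateDirect(n * FEATURES * BYTES_PER_FLOAT);
        buf.order(ByteOrder.nativeOrder());
        return buf;
    }

    public static void main(String[] args) {
        System.out.println(allocateInput(1).capacity()); // 1536 bytes for n = 1
        System.out.println(allocateInput(4).capacity()); // 6144 bytes for n = 4
    }
}
```

If the model was converted with a fixed batch dimension of 1, a buffer sized for n > 1 will not match the tensor allocation the delegate prepares, which is consistent with the "Index is out of range" failure above.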
This crash happens as soon as the GPU delegate’s interpreter runs. I’m creating the delegate using these lines of code:
GpuDelegate delegate = new GpuDelegate();
Interpreter.Options options = (new Interpreter.Options()).addDelegate(delegate);
Initializing the interpreter with the options, then running it:
Interpreter tfliteGPU = new Interpreter(loadedFile, options);
And finally closing the delegate after my computation:
delegate.close();
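For reference, the steps above can be sketched as one method. This is my own consolidation, not code from the original app: the try/finally cleanup and the method/parameter names are assumptions, with loadedFile standing in for the MappedByteBuffer mentioned in the question, and the input/output arrays shaped [n][384] per the model description:

```java
import java.nio.MappedByteBuffer;

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

public class GpuInference {
    // Run one inference with the GPU delegate, always releasing native resources.
    static void runOnGpu(MappedByteBuffer loadedFile, float[][] input, float[][] output) {
        GpuDelegate delegate = new GpuDelegate();
        Interpreter tfliteGPU = null;
        try {
            Interpreter.Options options = new Interpreter.Options().addDelegate(delegate);
            tfliteGPU = new Interpreter(loadedFile, options);
            tfliteGPU.run(input, output);
        } finally {
            // Close the interpreter before closing the delegate it uses.
            if (tfliteGPU != null) tfliteGPU.close();
            delegate.close();
        }
    }
}
```

Closing in a finally block matters because both Interpreter and GpuDelegate hold native memory that the garbage collector will not reclaim if an exception is thrown mid-inference.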
The original TensorFlow model I am using was made in TensorFlow 1.x and converted from a frozen graph using the tflite_convert command. I’m running the app on TF Lite 2.2.0 and TF Lite GPU 2.2.0:
implementation 'org.tensorflow:tensorflow-lite:2.2.0'
implementation 'org.tensorflow:tensorflow-lite-gpu:2.2.0'
After a recommendation to try out the TensorFlow nightly builds:
implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly'
I switched the implementation lines in build.gradle to use 0.0.0-nightly and my problem went away. I can’t speak to what originally caused it, but this is what solved it.
Answered By – abehonest