In this post I would like to show how to use the native camera API to get image data for further processing (on the CPU and GPU). Using the native camera API might help to reduce the unnecessary JNI parts. You can find the sample project also on github.

I also wrote another post where I show how to do high-performance processing of images with C/C++ and RenderScript, and another blogpost which shows how to generate a video file from captured images using the MediaCodec API. If you want to see some GPU-accelerated effects that you can do with OpenGL ES2, you can also check out my Fake Snow Cam app. Even though in this app I'm using the Camera2 Java API (I also wanted to support Android 6), the particle snow effect that I apply is used from C++ code in the same way I show in this post. If you're an Android enthusiast who likes to learn more about Android internals, I highly recommend checking out my Bugjaeger app. It allows you to connect 2 Android devices through USB OTG and perform many of the tasks that are normally only accessible from a developer machine via ADB directly from your Android phone/tablet.

Hardware Abstraction Layer (HAL) is the standard interface that Android forces hardware vendors to implement. There are 2 camera HALs supported simultaneously - HAL 1 and HAL 3 (HAL 2 was just a temporary step between the two). HAL1 used operating modes to divide the functionality of the camera; according to the documentation, these operating modes were overlapping and it was hard to implement new features. HAL3 should overcome this disadvantage and give applications more power to control the camera. NDK's native camera is the equivalent of the camera2 interface, but in comparison to the camera2 API, the native camera doesn't support the HAL1 interface. This means that the native API won't list camera devices with LEGACY hardware level. Later in this post I'll show how to query for CameraMetadata.
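As a preview of the metadata querying shown later, here is a minimal sketch of enumerating cameras and reading the supported hardware level with the NDK API. The function name `listCameras` is mine; this requires API level 24+ and linking against `camera2ndk` (and `log` for logging):

```cpp
// Enumerate cameras with the NDK camera API and query the supported
// hardware level from each camera's characteristics.
#include <camera/NdkCameraManager.h>
#include <camera/NdkCameraMetadata.h>
#include <android/log.h>

void listCameras() {
    ACameraManager *manager = ACameraManager_create();

    ACameraIdList *idList = nullptr;
    if (ACameraManager_getCameraIdList(manager, &idList) != ACAMERA_OK) {
        ACameraManager_delete(manager);
        return;
    }

    for (int i = 0; i < idList->numCameras; ++i) {
        const char *id = idList->cameraIds[i];

        ACameraMetadata *metadata = nullptr;
        ACameraManager_getCameraCharacteristics(manager, id, &metadata);

        // Query the hardware level. Note that LEGACY devices won't even
        // appear in this list, unlike with the Java camera2 API.
        ACameraMetadata_const_entry entry;
        if (ACameraMetadata_getConstEntry(metadata,
                ACAMERA_INFO_SUPPORTED_HARDWARE_LEVEL, &entry) == ACAMERA_OK) {
            __android_log_print(ANDROID_LOG_INFO, "NativeCam",
                                "camera %s hardware level: %d",
                                id, entry.data.u8[0]);
        }
        ACameraMetadata_free(metadata);
    }

    ACameraManager_deleteCameraIdList(idList);
    ACameraManager_delete(manager);
}
```

This only runs on an Android device; there is no desktop equivalent of these headers.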
Android 7.0 Nougat (API level 24) introduced the native camera API, which finally allowed fine-grained control of the camera directly from C++. The new API allows you to access image data directly in C, without the need to pass it from Java. Therefore, there might be some (small) performance gain - I did not do any performance comparison myself, so I don't know for sure. Using the new API gives you one additional benefit: you can reduce the JNI glue code. If your image processing is done mostly in C++, but you still have to jump back and forth between Java and C, you might be required to add a lot of JNI glue code for the Java-to-C communication.

The NDK API provides no more control over the actual image processing pipeline than the Java one. The primary drawback is that, unlike the Java API, the NDK API only supports LIMITED or better camera devices; there's no compatibility support for LEGACY devices. It also doesn't yet support reprocessing, which is less often used for the kinds of continuous processing applications where the NDK API makes more sense. The performance may be slightly better for use cases where an app was just passing the camera image buffers via JNI into native code anyway, but the overhead of accessing direct ByteBuffers from Java objects is not very high. The API is primarily there to be used by applications that don't have much of a Java component, for simplicity of implementation - setting up a bunch of Java code just to pass buffers may be annoying. It's also there to be used as a stable interface for other native system components to get camera data, mostly for various OEM extensions; for example, OpenCV could use the NDK directly in their Android camera wrappers, instead of the private (not guaranteed to be stable) native interfaces OpenCV has used in the past.
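To make the "image data directly in C" point concrete, a sketch of receiving frames through `AImageReader` might look like the following. The function names and the 640x480 stream configuration are illustrative assumptions; it also assumes a capture session has already been configured to output into the reader's `ANativeWindow`. Link against `mediandk` and `log`:

```cpp
// Receive camera frames directly in native code via AImageReader,
// without passing ByteBuffers through Java.
#include <media/NdkImageReader.h>
#include <android/log.h>

static void onImageAvailable(void * /*context*/, AImageReader *reader) {
    AImage *image = nullptr;
    if (AImageReader_acquireLatestImage(reader, &image) != AMEDIA_OK)
        return;

    // YUV_420_888 has 3 planes; plane 0 is the luma (Y) plane.
    uint8_t *data = nullptr;
    int len = 0;
    AImage_getPlaneData(image, 0, &data, &len);
    __android_log_print(ANDROID_LOG_INFO, "NativeCam",
                        "got %d bytes of Y data", len);
    // ...process the pixels here, entirely in native code...

    AImage_delete(image);  // release the buffer back to the reader
}

AImageReader *createReader() {
    AImageReader *reader = nullptr;
    // 640x480 YUV stream, keeping up to 4 images in flight.
    AImageReader_new(640, 480, AIMAGE_FORMAT_YUV_420_888, 4, &reader);

    AImageReader_ImageListener listener{nullptr, onImageAvailable};
    AImageReader_setImageListener(reader, &listener);

    // The reader's ANativeWindow is what you pass as the output target
    // when configuring the capture session.
    ANativeWindow *window = nullptr;
    AImageReader_getWindow(reader, &window);
    return reader;
}
```

The callback fires on an internal thread, so any shared state touched inside it needs synchronization.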