  • CameraX image corruption issue
    Work Life/Misc Knowledge 2024. 1. 25. 11:52

    An error came up while converting an ImageProxy to a Bitmap in Android CameraX.

     

    If your CameraX version is 1.3.0 or higher, you can simply use ImageProxy.toBitmap(), but versions below that do not support the method.

     

    However, CameraX 1.3.0 requires compileSdkVersion to be 34 or higher:

    Dependency 'androidx.camera:camera-extensions:1.3.0' requires 'compileSdkVersion' to be set to 34 or higher.
    Compilation target for module ':app' is 'android-33'

     

    Of course I will upgrade compileSdkVersion/targetSdk to 34 eventually, but that is work for later, so for now I need an alternative that works while staying on the lower version.
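    For reference, staying on the last pre-1.3.0 CameraX line keeps the build working with compileSdk 33. A sketch (1.2.3 was the latest 1.2.x release at the time of writing; verify the exact version against the CameraX release notes):

    ```groovy
    // build.gradle (:app), compatible with compileSdk 33
    android {
        compileSdk 33
    }

    dependencies {
        def camerax_version = "1.2.3"
        implementation "androidx.camera:camera-core:$camerax_version"
        implementation "androidx.camera:camera-camera2:$camerax_version"
        implementation "androidx.camera:camera-lifecycle:$camerax_version"
        implementation "androidx.camera:camera-extensions:$camerax_version"
    }
    ```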

     

    1. First approach (!!! Warning: this is the WRONG way !!!)

    First, I found code that converts an ImageProxy to a Bitmap.

    (Reference: https://stackoverflow.com/questions/56772967/converting-imageproxy-to-bitmap )

     

    a. First, convert the ImageProxy to an Image,

    @SuppressLint("UnsafeOptInUsageError") Image image = imageProxy.getImage();

     

    b. then convert it with the following method.

    public Bitmap imageToBitmapFromAnalysis(Image image) {
        Image.Plane[] planes = image.getPlanes();
        ByteBuffer yBuffer = planes[0].getBuffer();
        ByteBuffer uBuffer = planes[1].getBuffer();
        ByteBuffer vBuffer = planes[2].getBuffer();
    
        int ySize = yBuffer.remaining();
        int uSize = uBuffer.remaining();
        int vSize = vBuffer.remaining();
    
        byte[] nv21 = new byte[ySize + uSize + vSize];
        //U and V are swapped
        yBuffer.get(nv21, 0, ySize);
        vBuffer.get(nv21, ySize, vSize);
        uBuffer.get(nv21, ySize + vSize, uSize);
    
        YuvImage yuvImage = new YuvImage(nv21, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(0, 0, yuvImage.getWidth(), yuvImage.getHeight()), 75, out);
    
        byte[] imageBytes = out.toByteArray();
        return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
    }
    

     

    But not every phone produced the image as it was actually captured. On some phones the image came out corrupted, as shown below.
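    My reading of why it fails on only some devices (an assumption, not verified on the failing phones): the snippet above blindly concatenates the three plane buffers, so it implicitly assumes the chroma planes are already interleaved in NV21 order with no row padding. On devices whose camera HAL pads each row (rowStride > width), the copied bytes no longer line up with the tight NV21 layout that YuvImage expects. A standalone sketch of the size mismatch, using hypothetical stride values:

    ```java
    public class Nv21SizeCheck {
        // Tight NV21 size for an image: Y plane plus an interleaved VU half-plane.
        static int tightNv21Size(int width, int height) {
            return width * height * 3 / 2;
        }

        // Bytes reported by a padded plane's buffer (rowStride > width).
        // The last row is typically not padded, hence the subtraction.
        static int paddedPlaneBytes(int width, int height, int rowStride) {
            return rowStride * height - (rowStride - width);
        }

        public static void main(String[] args) {
            int w = 640, h = 480;
            int rowStride = 768; // hypothetical padded stride
            System.out.println("tight NV21 size: " + tightNv21Size(w, h));
            System.out.println("padded Y plane:  " + paddedPlaneBytes(w, h, rowStride));
            // When the Y plane alone exceeds w*h, the concatenated buffer no
            // longer matches the NV21 layout, and the decoded JPEG comes out skewed.
            System.out.println("Y plane padded?  " + (paddedPlaneBytes(w, h, rowStride) > w * h));
        }
    }
    ```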

     

     

    2. Second approach (the solution)

    Reference: https://stackoverflow.com/questions/65822326/why-camerax-image-distorted-when-saved-in-android-phone-redmi-9t

     

    Digging into the problem via the link above: the buffer CameraX hands out is YUV_420_888, and the code above appears to have been written on the assumption that it is NV21.

     

    But you should not assume that every YUV_420_888 buffer is in NV21 layout; it may just as well be NV12, YU12, or YV12.
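    The semi-planar vs. planar distinction can be read off the plane metadata: on a YUV_420_888 Image, getPixelStride() of the U/V planes is 2 for the interleaved layouts (NV12/NV21-style) and 1 for the fully planar ones (YU12/YV12-style). A simplified, hypothetical classifier (telling NV12 from NV21 additionally requires comparing where the U and V buffers start, which is omitted here):

    ```java
    public class YuvLayout {
        // Classify the chroma layout of a YUV_420_888 image from the
        // pixel stride of its U/V planes (simplified illustration).
        static String chromaLayout(int uvPixelStride) {
            if (uvPixelStride == 2) {
                // U and V bytes interleaved in one half-resolution plane:
                // semi-planar, i.e. NV12 or NV21 depending on byte order.
                return "semi-planar (NV12 or NV21)";
            } else if (uvPixelStride == 1) {
                // U and V each in their own contiguous plane:
                // planar, i.e. YU12 (I420) or YV12 depending on plane order.
                return "planar (YU12 or YV12)";
            }
            return "unexpected stride";
        }
    }
    ```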

     

    So the suggested fix is to convert the YUV image to an RGB bitmap instead. (YuvToRgbConverter.kt)

     

    So I added the YuvToRgbConverter.kt file as-is, pulled it into the class that needs it, and initialized it.

     

    1. First, initialize a YuvToRgbConverter so the added file can be used

    YuvToRgbConverter yuvToRgbConverter = new YuvToRgbConverter(mContext);

     

    2. This could be done directly inside the ImageAnalysis analyzer, but it was getting too long, so I factored the work into a separate processImage method

    private ImageAnalysis.Analyzer setImageAnalyzer() {
        ImageAnalysis.Analyzer analyzer = new ImageAnalysis.Analyzer() {
            @Override
            public void analyze(@NonNull ImageProxy image) {
                processImage(image);
                image.close();
    
                try {
                    // Throttle analysis to roughly one frame per second
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };
        return analyzer;
    }

     

    3. The processImage method

       public void processImage(@NonNull ImageProxy imageProxy) {
    
            @SuppressLint("UnsafeOptInUsageError") Image image = imageProxy.getImage();
    
        Bitmap passportBitmap = Bitmap.createBitmap(imageProxy.getWidth(), imageProxy.getHeight(), Bitmap.Config.ARGB_8888); // create the Bitmap object
            yuvToRgbConverter.yuvToRgb(image, passportBitmap);
            
        // do whatever work you need with passportBitmap
      }

     

    • Convert the ImageProxy to an Image.
    • Create a Bitmap matching the ImageProxy's size.
    • Pass (1) the Image and (2) the Bitmap to YuvToRgbConverter's yuvToRgb method, and you're done.
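    For context on what yuvToRgb does internally: it repacks the image planes into one tight NV21 buffer, writing Y at offset 0 and interleaving V and U after it, with V at even indices and U at odd ones; that is where the pixelCount and pixelCount + 1 offsets in the file come from. A toy sketch of that index math:

    ```java
    public class Nv21Offsets {
        // Output index in a tight NV21 buffer for the i-th sample of each plane.
        static int yIndex(int i) {
            return i; // Y samples are written first, one per pixel
        }
        static int vIndex(int i, int pixelCount) {
            return pixelCount + 2 * i; // V goes in the even slots after Y
        }
        static int uIndex(int i, int pixelCount) {
            return pixelCount + 1 + 2 * i; // U goes in the odd slots after Y
        }
    }
    ```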

     

    YuvToRgbConverter.kt code:

    /*
     * Copyright 2020 The Android Open Source Project
     *
     * Licensed under the Apache License, Version 2.0 (the "License");
     * you may not use this file except in compliance with the License.
     * You may obtain a copy of the License at
     *
     *     https://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    
    package com.example.android.camera.utils
    
    import android.content.Context
    import android.graphics.Bitmap
    import android.graphics.ImageFormat
    import android.graphics.Rect
    import android.media.Image
    import android.renderscript.Allocation
    import android.renderscript.Element
    import android.renderscript.RenderScript
    import android.renderscript.ScriptIntrinsicYuvToRGB
    import android.renderscript.Type
    import java.nio.ByteBuffer
    
    /**
     * Helper class used to efficiently convert a [Media.Image] object from
     * [ImageFormat.YUV_420_888] format to an RGB [Bitmap] object.
     *
     * The [yuvToRgb] method is able to achieve the same FPS as the CameraX image
     * analysis use case on a Pixel 3 XL device at the default analyzer resolution,
     * which is 30 FPS with 640x480.
     *
     * NOTE: This has been tested in a limited number of devices and is not
     * considered production-ready code. It was created for illustration purposes,
     * since this is not an efficient camera pipeline due to the multiple copies
     * required to convert each frame.
     */
    class YuvToRgbConverter(context: Context) {
        private val rs = RenderScript.create(context)
        private val scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs))
    
        private var pixelCount: Int = -1
        private lateinit var yuvBuffer: ByteBuffer
        private lateinit var inputAllocation: Allocation
        private lateinit var outputAllocation: Allocation
    
        @Synchronized
        fun yuvToRgb(image: Image, output: Bitmap) {
    
            // Ensure that the intermediate output byte buffer is allocated
            if (!::yuvBuffer.isInitialized) {
                pixelCount = image.cropRect.width() * image.cropRect.height()
                // Bits per pixel is an average for the whole image, so it's useful to compute the size
                // of the full buffer but should not be used to determine pixel offsets
                val pixelSizeBits = ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888)
                yuvBuffer = ByteBuffer.allocateDirect(pixelCount * pixelSizeBits / 8)
            }
    
            // Rewind the buffer; no need to clear it since it will be filled
            yuvBuffer.rewind()
    
            // Get the YUV data in byte array form using NV21 format
            imageToByteBuffer(image, yuvBuffer.array())
    
            // Ensure that the RenderScript inputs and outputs are allocated
            if (!::inputAllocation.isInitialized) {
                // Explicitly create an element with type NV21, since that's the pixel format we use
                val elemType = Type.Builder(rs, Element.YUV(rs)).setYuvFormat(ImageFormat.NV21).create()
                inputAllocation = Allocation.createSized(rs, elemType.element, yuvBuffer.array().size)
            }
            if (!::outputAllocation.isInitialized) {
                outputAllocation = Allocation.createFromBitmap(rs, output)
            }
    
            // Convert NV21 format YUV to RGB
            inputAllocation.copyFrom(yuvBuffer.array())
            scriptYuvToRgb.setInput(inputAllocation)
            scriptYuvToRgb.forEach(outputAllocation)
            outputAllocation.copyTo(output)
        }
    
        private fun imageToByteBuffer(image: Image, outputBuffer: ByteArray) {
            assert(image.format == ImageFormat.YUV_420_888)
    
            val imageCrop = image.cropRect
            val imagePlanes = image.planes
    
            imagePlanes.forEachIndexed { planeIndex, plane ->
                // How many values are read in input for each output value written
                // Only the Y plane has a value for every pixel, U and V have half the resolution i.e.
                //
                // Y Plane            U Plane    V Plane
                // ===============    =======    =======
                // Y Y Y Y Y Y Y Y    U U U U    V V V V
                // Y Y Y Y Y Y Y Y    U U U U    V V V V
                // Y Y Y Y Y Y Y Y    U U U U    V V V V
                // Y Y Y Y Y Y Y Y    U U U U    V V V V
                // Y Y Y Y Y Y Y Y
                // Y Y Y Y Y Y Y Y
                // Y Y Y Y Y Y Y Y
                val outputStride: Int
    
                // The index in the output buffer the next value will be written at
                // For Y it's zero, for U and V we start at the end of Y and interleave them i.e.
                //
                // First chunk        Second chunk
                // ===============    ===============
                // Y Y Y Y Y Y Y Y    V U V U V U V U
                // Y Y Y Y Y Y Y Y    V U V U V U V U
                // Y Y Y Y Y Y Y Y    V U V U V U V U
                // Y Y Y Y Y Y Y Y    V U V U V U V U
                // Y Y Y Y Y Y Y Y
                // Y Y Y Y Y Y Y Y
                // Y Y Y Y Y Y Y Y
                var outputOffset: Int
    
                when (planeIndex) {
                    0 -> {
                        outputStride = 1
                        outputOffset = 0
                    }
                    1 -> {
                        outputStride = 2
                        // For NV21 format, U is in odd-numbered indices
                        outputOffset = pixelCount + 1
                    }
                    2 -> {
                        outputStride = 2
                        // For NV21 format, V is in even-numbered indices
                        outputOffset = pixelCount
                    }
                    else -> {
                        // Image contains more than 3 planes, something strange is going on
                        return@forEachIndexed
                    }
                }
    
                val planeBuffer = plane.buffer
                val rowStride = plane.rowStride
                val pixelStride = plane.pixelStride
    
                // We have to divide the width and height by two if it's not the Y plane
                val planeCrop = if (planeIndex == 0) {
                    imageCrop
                } else {
                    Rect(
                        imageCrop.left / 2,
                        imageCrop.top / 2,
                        imageCrop.right / 2,
                        imageCrop.bottom / 2
                    )
                }
    
                val planeWidth = planeCrop.width()
                val planeHeight = planeCrop.height()
    
                // Intermediate buffer used to store the bytes of each row
                val rowBuffer = ByteArray(plane.rowStride)
    
                // Size of each row in bytes
                val rowLength = if (pixelStride == 1 && outputStride == 1) {
                    planeWidth
                } else {
                    // Take into account that the stride may include data from pixels other than this
                    // particular plane and row, and that could be between pixels and not after every
                    // pixel:
                    //
                    // |---- Pixel stride ----|                    Row ends here --> |
                    // | Pixel 1 | Other Data | Pixel 2 | Other Data | ... | Pixel N |
                    //
                    // We need to get (N-1) * (pixel stride bytes) per row + 1 byte for the last pixel
                    (planeWidth - 1) * pixelStride + 1
                }
    
                for (row in 0 until planeHeight) {
                    // Move buffer position to the beginning of this row
                    planeBuffer.position(
                        (row + planeCrop.top) * rowStride + planeCrop.left * pixelStride)
    
                    if (pixelStride == 1 && outputStride == 1) {
                        // When there is a single stride value for pixel and output, we can just copy
                        // the entire row in a single step
                        planeBuffer.get(outputBuffer, outputOffset, rowLength)
                        outputOffset += rowLength
                    } else {
                        // When either pixel or output have a stride > 1 we must copy pixel by pixel
                        planeBuffer.get(rowBuffer, 0, rowLength)
                        for (col in 0 until planeWidth) {
                            outputBuffer[outputOffset] = rowBuffer[col * pixelStride]
                            outputOffset += outputStride
                        }
                    }
                }
            }
        }
    }