2020-07-24

I. Beauty framework and common problems

1. Beauty preview flow

(1) During the camera refactor, the beauty feature had to be merged into the MTK architecture. For code readability and lower coupling, we dropped the old camera's approach and made beauty a standalone mode instead of mixing it into the normal photo mode. That required a dedicated preview-container manager for beauty, modeled on the MTK code structure: EffectViewController.java. Its methods follow the MTK structure; only the preview container is swapped for our beauty view.

host/src/com/freeme/camera/ui/CameraAppUI.java

    public void onCreate() {
        ...

        //mPreviewManager = new PreviewManager(mApp);
        //Set gesture listener to receive touch event.
        //mPreviewManager.setOnTouchListener(new OnTouchListenerImpl());
        mNormalPreviewManager = new PreviewManager(mApp, true, false);
        mBeautyFacePreviewManager = new PreviewManager(mApp, false, false);
        //Beauty (effect) preview
        mEffectPreviewManager = new PreviewManager(mApp, false, true);
        mNormalPreviewManager.setOnTouchListener(new OnTouchListenerImpl());
        mBeautyFacePreviewManager.setOnTouchListener(new OnTouchListenerImpl());
        mEffectPreviewManager.setOnTouchListener(new OnTouchListenerImpl());
        mPreviewManager = mNormalPreviewManager;

        ...
    }

host/src/com/freeme/camera/ui/preview/PreviewManager.java

    public PreviewManager(IApp app, boolean isTextureView, boolean isEffectView) {
        ...

        //if (enabledValue == SURFACEVIEW_ENABLED_VALUE || appVersion == DEFAULT_APP_VERSION) {
        if (isTextureView) {
            mPreviewController = new TextureViewController(app);
        } else if (isEffectView) {
            mPreviewController = new EffectViewController(app);
        } else {
            mPreviewController = new BeautyFaceViewController(app);
        }

        ...
    }

(2) Beauty preview-container management flow

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/EffectMode.java

    public void resume(@Nonnull DeviceUsage deviceUsage) {
        ...
        prepareAndOpenCamera(false, mCameraId, false, false);

        ...
    }

    private void prepareAndOpenCamera(boolean needOpenCameraSync, String cameraId,
                                      boolean needFastStartPreview, boolean isFromSelectedCamera) {
        ...
        mIDeviceController.openCamera(info);
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/device/EffectDevice2Controller.java

    private void doOpenCamera(boolean sync) throws CameraOpenException {
        if (sync) {
            mCameraDeviceManager.openCameraSync(mCurrentCameraId, mDeviceCallback, null);
        } else {
            mCameraDeviceManager.openCamera(mCurrentCameraId, mDeviceCallback, null);
        }
    }

    public class DeviceStateCallback extends Camera2Proxy.StateCallback {

        @Override
        public void onOpened(@Nonnull Camera2Proxy camera2proxy) {
            mModeHandler.obtainMessage(MSG_DEVICE_ON_CAMERA_OPENED,
                    camera2proxy).sendToTarget();
        }
       ...
    }

    private class ModeHandler extends Handler {
        ...
        @Override
        public void handleMessage(Message msg) {
            switch (msg.what) {
                case MSG_DEVICE_ON_CAMERA_OPENED:
                    doCameraOpened((Camera2Proxy) msg.obj);
                    break;
                default:
                    break;
            }
        }
    }
    public void doCameraOpened(@Nonnull Camera2Proxy camera2proxy) {
        try {
            if (CameraState.CAMERA_OPENING == getCameraState()
                    && camera2proxy != null && camera2proxy.getId().equals(mCurrentCameraId)) {
                ...
                if (mPreviewSizeCallback != null) {
                        mPreviewSizeCallback.onPreviewSizeReady(new Size(mPreviewWidth,
                                mPreviewHeight));
                }
                ...
            }
        } catch (RuntimeException e) {
            e.printStackTrace();
        }
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/EffectMode.java

    public void onPreviewSizeReady(Size previewSize) {
        updatePictureSizeAndPreviewSize(previewSize);
    }

    private void updatePictureSizeAndPreviewSize(Size previewSize) {
        ...
        if (size != null && mIsResumed) {
            ...
            if (width != mPreviewWidth || height != mPreviewHeight) {
                onPreviewSizeChanged(width, height);
            }
        }
    }

    private void onPreviewSizeChanged(int width, int height) {
        ...
        mIApp.getAppUi().setPreviewSize(mPreviewHeight, mPreviewWidth, mISurfaceStatusListener);
        ...
    }

host/src/com/freeme/camera/ui/CameraAppUI.java

    public void setPreviewSize(final int width, final int height,
                               final ISurfaceStatusListener listener) {
        mApp.getActivity().runOnUiThread(new Runnable() {
            @Override
            public void run() {
                mPreviewManager.updatePreviewSize(width, height, listener);
                ...
            }
        });
    }

host/src/com/freeme/camera/ui/preview/PreviewManager.java

    public void updatePreviewSize(int width, int height, ISurfaceStatusListener listener) {
        ...
        if (mPreviewController != null) {
            mPreviewController.updatePreviewSize(width, height, listener);
        }
    }

host/src/com/freeme/camera/ui/preview/EffectViewController.java

    public void updatePreviewSize(int width, int height, ISurfaceStatusListener listener) {
        if (mPreviewWidth == width && mPreviewHeight == height) {
            ...
            if (mIsSurfaceCreated) {
                if (listener != null) {
                    ...
                    //Hand the preview container to the listener
                    listener.surfaceAvailable(((CameraActivity) mApp.getActivity()).getEffectView().getSurfaceTexture(),
                            mPreviewHeight, mPreviewWidth);
                }
            }
            return;
        }
        ...
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/EffectMode.java

    private class SurfaceChangeListener implements ISurfaceStatusListener {

        public void surfaceAvailable(Object surfaceObject, int width, int height) {
            if (mModeHandler != null) {
                mModeHandler.post(new Runnable() {
                    @Override
                    public void run() {
                        if (mIDeviceController != null && mIsResumed) {
                            mIDeviceController.updatePreviewSurface(surfaceObject);
                        }
                    }
                });
            }
        }

        ...
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/device/EffectDevice2Controller.java

    public void updatePreviewSurface(Object surfaceObject) {
        
        synchronized (mSurfaceHolderSync) {
            if (surfaceObject instanceof SurfaceHolder) {
                mPreviewSurface = surfaceObject == null ? null :
                        ((SurfaceHolder) surfaceObject).getSurface();
            } else if (surfaceObject instanceof SurfaceTexture) {
                mPreviewSurface = surfaceObject == null ? null :
                        new Surface((SurfaceTexture) surfaceObject);
            }
            boolean isStateReady = CameraState.CAMERA_OPENED == mCameraState;
            if (isStateReady && mCamera2Proxy != null) {
                boolean onlySetSurface = mSurfaceObject == null && surfaceObject != null;
                mSurfaceObject = surfaceObject;
                if (surfaceObject == null) {
                    stopPreview();
                } else if (onlySetSurface && mNeedSubSectionInitSetting) {
                    mOutputConfigs.get(0).addSurface(mPreviewSurface);
                    if (mSession != null) {
                        mSession.finalizeOutputConfigurations(mOutputConfigs);
                        mNeedFinalizeOutput = false;
                        if (CameraState.CAMERA_OPENED == getCameraState()) {
                            repeatingPreview(false);
                            configSettingsByStage2();
                            repeatingPreview(false);
                        }
                    } else {
                        mNeedFinalizeOutput = true;
                    }
                } else {
                    configureSession(false);
                }
            }
        }
    }
//From here on, the beauty preview container's SurfaceTexture is managed entirely according to the MTK code structure.
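The `finalizeOutputConfigurations` path in `updatePreviewSurface` relies on Camera2's deferred-surface mechanism (API 26+): a session can be configured with only the preview size before the Surface exists, and the Surface is attached later. A minimal sketch of that mechanism in isolation; the class and field names below are illustrative, not the project's:

```java
import android.graphics.SurfaceTexture;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.params.OutputConfiguration;
import android.hardware.camera2.params.SessionConfiguration;
import android.util.Size;
import android.view.Surface;

import java.util.Arrays;
import java.util.concurrent.Executor;

class DeferredSurfaceSketch {
    private OutputConfiguration mPreviewConfig;
    private CameraCaptureSession mSession; // delivered via the StateCallback

    /** Create the session before the preview Surface exists (API 26+). */
    void configureSession(CameraDevice device, int width, int height,
                          OutputConfiguration captureConfig,
                          Executor executor, CameraCaptureSession.StateCallback cb)
            throws CameraAccessException {
        // Size-only configuration: no Surface yet, only its class and dimensions.
        mPreviewConfig = new OutputConfiguration(new Size(width, height), SurfaceTexture.class);
        device.createCaptureSession(new SessionConfiguration(
                SessionConfiguration.SESSION_REGULAR,
                Arrays.asList(mPreviewConfig, captureConfig), executor, cb));
    }

    /** Called once surfaceAvailable() finally delivers the EffectView's SurfaceTexture. */
    void attachPreviewSurface(Surface previewSurface) throws CameraAccessException {
        mPreviewConfig.addSurface(previewSurface);
        mSession.finalizeOutputConfigurations(Arrays.asList(mPreviewConfig));
        // After this, repeating preview requests can target previewSurface.
    }
}
```

This matches the `mNeedSubSectionInitSetting` branch above: the session comes up early for a faster start, and the preview target is finalized as soon as the container is ready.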

(3) Beauty-effect rendering flow

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/byted/EffectView.java

    public void onDrawFrame(GL10 gl) {
        if (mCameraChanging || mIsPaused) {
            return;
        }
        //Update the texture image to the most recent frame from the image stream.
        mSurfaceTexture.updateTexImage();
        if(mPauseed){
            GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
            return;
        }
        //Clear the color buffer
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

        BytedEffectConstants.Rotation rotation = OrientationSensor.getOrientation();
        //Run ByteDance's beauty algorithm on the preview data to get the processed texture dstTexture
        dstTexture = mEffectRenderHelper.processTexture(mSurfaceTextureID, rotation, getSurfaceTimeStamp());

        synchronized (this) {
            if (mVideoEncoder != null) {
                //Beauty-video related, covered below
                mVideoEncoder.frameAvailableSoon();
            }
        }

        if (dstTexture != ShaderHelper.NO_TEXTURE) {
            //Draw the texture
            mEffectRenderHelper.drawFrame(dstTexture);
        }
        mFrameRator.addFrameStamp();
    }
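For context on where `mSurfaceTexture` and `mSurfaceTextureID` come from: a camera preview rendered through GL normally uses a `SurfaceTexture` backed by a `GL_TEXTURE_EXTERNAL_OES` texture, with each camera frame scheduling one render pass. A sketch of that standard setup, assuming a `GLSurfaceView` in `RENDERMODE_WHEN_DIRTY`; names are illustrative, not the project's:

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;

class PreviewTextureSketch {
    int oesTextureId;
    SurfaceTexture surfaceTexture;
    final float[] texMatrix = new float[16];

    /** Call on the GL thread, e.g. in onSurfaceCreated. */
    void createPreviewTexture(final GLSurfaceView view) {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        oesTextureId = tex[0];
        // The camera writes into an EXTERNAL_OES texture, not a plain TEXTURE_2D.
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, oesTextureId);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        surfaceTexture = new SurfaceTexture(oesTextureId);
        // Each camera frame triggers exactly one render pass.
        surfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
            @Override public void onFrameAvailable(SurfaceTexture st) {
                view.requestRender();
            }
        });
    }

    /** In onDrawFrame, before sampling the texture. */
    void latchFrame() {
        surfaceTexture.updateTexImage();              // latch the newest camera frame
        surfaceTexture.getTransformMatrix(texMatrix); // per-frame texture transform
    }
}
```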

(4) Common problems and typical fixes

  • Problems:
  1. The beauty preview freezes on the last frame of the previous mode: beauty and the normal modes use different preview containers, and the container was not switched and shown correctly during the mode change. Use Android Studio's Layout Inspector to check whether the EffectView is actually displayed.
  2. Black flash when switching into beauty mode: the root cause is again that beauty and the normal modes use different preview containers, so a transition animation was added between modes.
  3. Black flash when switching cameras in beauty mode: this needs the call sites of EffectView.setCameraId and EffectView.setPauseed adjusted within the MTK code structure. These two methods exist to stop rendering while the camera switches; otherwise artifacts such as reversed frames appear. The current behavior is acceptable.
  4. Other minor issues, e.g. the beauty control panel, are ordinary UI problems and easy to fix, so they are not covered here.

2. Beauty video, built on the ByteDance SDK

(1) The idea: see https://www.jianshu.com/p/9dc03b01bae3, which explains the approach in detail. In short, open a separate thread that draws the texture produced by the ByteDance SDK (the dstTexture mentioned above) onto our recording surface, i.e. MediaCodec.createInputSurface().
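That idea can be sketched in isolation: the recording thread creates an EGL context that shares the preview thread's context (so dstTexture is visible to both), wraps MediaCodec's input Surface in an EGL window surface, and draws into it once per frame. A minimal sketch, assuming EGL14 and a codec already configured for surface input; class and method names are illustrative:

```java
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLExt;
import android.opengl.EGLSurface;
import android.view.Surface;

class RecordingEglSketch {
    private EGLDisplay display;
    private EGLContext context;
    private EGLSurface windowSurface;

    /** sharedContext is EGL14.eglGetCurrentContext() captured on the preview GL thread. */
    void setup(EGLContext sharedContext, Surface codecInputSurface) {
        display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
        int[] version = new int[2];
        EGL14.eglInitialize(display, version, 0, version, 1);

        int[] attribs = {
                EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8, EGL14.EGL_BLUE_SIZE, 8,
                EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                EGL14.EGL_NONE};
        EGLConfig[] configs = new EGLConfig[1];
        int[] num = new int[1];
        EGL14.eglChooseConfig(display, attribs, 0, configs, 0, 1, num, 0);

        // Sharing with the preview context makes dstTexture usable from this thread.
        context = EGL14.eglCreateContext(display, configs[0], sharedContext,
                new int[]{EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE}, 0);
        // The codec's input Surface becomes an ordinary EGL render target.
        windowSurface = EGL14.eglCreateWindowSurface(display, configs[0], codecInputSurface,
                new int[]{EGL14.EGL_NONE}, 0);
    }

    /** Per frame: draw dstTexture, stamp the time, and hand the frame to the encoder. */
    void drawFrame(Runnable drawDstTexture, long timestampNs) {
        EGL14.eglMakeCurrent(display, windowSurface, windowSurface, context);
        drawDstTexture.run();  // e.g. mEffectRenderHelper.drawFrame(dstTexture)
        EGLExt.eglPresentationTimeANDROID(display, windowSurface, timestampNs);
        EGL14.eglSwapBuffers(display, windowSurface);  // queues the frame into MediaCodec
    }
}
```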

(2) Taking the video stream as an example: there are two threads, the rendering (drawing) thread described above and a recording thread (the video-encoding thread).

feature/mode/effectvideo/src/com/freeme/camera/feature/mode/effectvideo/EffectVideoMode.java

    private void startRecording() {
        ...
        mModeHandler.postDelayed(new Runnable() {
            @Override
            public void run() {
                mSurfaceView.startRecording(mCurrentVideoFilename, EffectVideoMode.this,
                        "on".equals(mSettingManager.getSettingController().queryValue("key_microphone")), mOrientationHint);
            }
        }, 300);
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/byted/EffectView.java

    public void startRecording(String currentDescriptorName, MediaMuxerListener mediaMuxerListener, boolean isRecordAudio, int orientation) {
        try {
            //Create a media muxer for the given path; it writes the encoded audio/video into the target file
            mMuxer = new MediaMuxerWrapper(currentDescriptorName, mediaMuxerListener);
            if (true) {
                //Video encoder; only the object is created here, with mMuxer passed in so the encoded video can later be written to the file
                new MediaVideoEncoder(mMuxer, mMediaEncoderListener, mImageHeight, mImageWidth);
            }
            if (isRecordAudio) {
                //Audio encoder
                new MediaAudioEncoder(mMuxer, mMediaEncoderListener);
            }
            //The actual preparation of the audio/video encoders happens here; video is used as the example below
            mMuxer.prepare();
            mMuxer.setOrientationHint(orientation);
            //Start recording
            mMuxer.startRecording();
        } catch (final IOException e) {
            Log.e(TAG, "startCapture:", e);
        }
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/encoder/MediaVideoEncoder.java    

    public MediaVideoEncoder(final MediaMuxerWrapper muxer, final MediaEncoderListener listener, final int width, final int height) {
        super(muxer, listener);
        if (DEBUG) Log.i(TAG, "MediaVideoEncoder: ");
        mWidth = width;
        mHeight = height;
        //Rendering thread; RenderHandler implements Runnable, covered below
        mRenderHandler = RenderHandler.createHandler("VideoRenderThread");
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/glutils/RenderHandler.java

    public static final RenderHandler createHandler(final String name) {
        final RenderHandler handler = new RenderHandler();
        synchronized (handler.mSync) {
            //Start the rendering thread; it waits for draw commands before rendering
            new Thread(handler, !TextUtils.isEmpty(name) ? name : TAG).start();
        }
        return handler;
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/encoder/MediaEncoder.java

    public MediaEncoder(final MediaMuxerWrapper muxer, final MediaEncoderListener listener) {
        ...
        mWeakMuxer = new WeakReference<MediaMuxerWrapper>(muxer);
        muxer.addEncoder(this);
        mListener = listener;
        synchronized (mSync) {
            mBufferInfo = new MediaCodec.BufferInfo();
            //Encoding thread; MediaEncoder implements Runnable, covered below
            new Thread(this, getClass().getSimpleName()).start();
            try {
                mSync.wait();
            } catch (final InterruptedException e) {
            }
        }
    }
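The constructor above uses a small startup handshake: it starts the worker thread and then blocks on mSync until run() signals that the thread is live, so the object is never used before its thread exists. The pattern in isolation (plain Java; names are illustrative). Note this sketch guards the wait with a loop, which also protects against spurious wakeups, unlike the bare wait() above:

```java
class EncoderThreadSketch implements Runnable {
    private final Object sync = new Object();
    private boolean ready;

    EncoderThreadSketch() {
        synchronized (sync) {
            new Thread(this, "encoder-sketch").start();
            // Block until run() has signaled readiness, guaranteeing the
            // thread is live before the constructor returns.
            while (!ready) {
                try {
                    sync.wait();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
    }

    @Override
    public void run() {
        synchronized (sync) {
            ready = true;
            sync.notify();  // release the constructor
        }
        // ...the drain/encode loop would follow here...
    }

    boolean isReady() {
        synchronized (sync) {
            return ready;
        }
    }
}
```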

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/encoder/MediaMuxerWrapper.java

    public void prepare() throws IOException {
        if (mVideoEncoder != null)
            mVideoEncoder.prepare();
        if (mAudioEncoder != null)
            mAudioEncoder.prepare();
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/encoder/MediaVideoEncoder.java

    private static final String MIME_TYPE = "video/avc";

    protected void prepare() throws IOException {
        //Format, bit rate, frame rate, key-frame interval: standard Android MediaFormat setup, not covered further
        final MediaFormat format = MediaFormat.createVideoFormat(MIME_TYPE, mWidth, mHeight);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);    // API >= 18
        format.setInteger(MediaFormat.KEY_BIT_RATE, calcBitRate());
        format.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        //Create the encoder; a video encoder in this case
        mMediaCodec = MediaCodec.createEncoderByType(MIME_TYPE);
        mMediaCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        //The key step: this is the recording surface mentioned earlier, onto which the ByteDance-processed texture will be drawn
        mSurface = mMediaCodec.createInputSurface();    // API >= 18
        //Start the codec
        mMediaCodec.start();
        if (DEBUG) Log.i(TAG, "prepare finishing");
        if (mListener != null) {
            try {
                //Callback: the encoder is ready
                mListener.onPrepared(this);
            } catch (final Exception e) {
                Log.e(TAG, "prepare:", e);
            }
        }
    }
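calcBitRate() is referenced above but not shown. In the widely circulated encoder sample this code resembles, it is a simple bits-per-pixel heuristic; a sketch under that assumption (the BPP value is illustrative, not confirmed from this project):

```java
class BitRateSketch {
    static final float BPP = 0.25f;    // assumed bits-per-pixel budget
    static final int FRAME_RATE = 25;  // must match MediaFormat.KEY_FRAME_RATE

    static int calcBitRate(int width, int height) {
        // Budget BPP bits for every pixel of every frame.
        return (int) (BPP * FRAME_RATE * width * height);
    }
}
```

For 1280x720 at 25 fps this gives 5.76 Mbps, a reasonable H.264 target for that resolution.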

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/byted/EffectView.java

    private final MediaEncoder.MediaEncoderListener mMediaEncoderListener = new MediaEncoder.MediaEncoderListener() {
        @Override
        public void onPrepared(final MediaEncoder encoder) {
            if (encoder instanceof MediaVideoEncoder) {
                setVideoEncoder((MediaVideoEncoder) encoder);
            } else if (encoder instanceof MediaAudioEncoder) {
                mAudioEncoder = (MediaAudioEncoder) encoder;
            }
        }

        ...
    };

    public void setVideoEncoder(final MediaVideoEncoder encoder) {
        queueEvent(new Runnable() {
            @Override
            public void run() {
                synchronized (this) {
                    if (encoder != null) {
                        //Three key parameters here: 1. the GLContext associated with the EffectView, handed to the video encoder to build its EGL environment
                        //2. dstTexture, familiar by now: the texture processed by the ByteDance beauty SDK
                        //3. mEffectRenderHelper, also familiar: the beauty flow calls mEffectRenderHelper.drawFrame(dstTexture) to draw the texture onto the preview container; the same "brush" will now draw the processed texture onto the recording surface
                        //At this point the picture is clear: we have the environment, the texture, and the brush; all that remains is to draw.
                        encoder.setEglContext(EGL14.eglGetCurrentContext(), dstTexture, mEffectRenderHelper);
                    }
                    mVideoEncoder = encoder;
                }
            }
        });
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/byted/EffectView.java

    public void onDrawFrame(GL10 gl) {
        ...
        dstTexture = mEffectRenderHelper.processTexture(mSurfaceTextureID, rotation, getSurfaceTimeStamp());

        synchronized (this) {
            if (mVideoEncoder != null) {
                //Kick the recording thread
                mVideoEncoder.frameAvailableSoon();
            }
        }

        if (dstTexture != ShaderHelper.NO_TEXTURE) {
            mEffectRenderHelper.drawFrame(dstTexture);
        }
        mFrameRator.addFrameStamp();
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/encoder/MediaVideoEncoder.java

    public boolean frameAvailableSoon() {
        boolean result;
        if (result = super.frameAvailableSoon())
            mRenderHandler.draw(null);
        return result;
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/encoder/MediaEncoder.java

    public boolean frameAvailableSoon() {
        synchronized (mSync) {
            if (!mIsCapturing || mRequestStop || isPause) {
                return false;
            }
            mRequestDrain++;
            mSync.notifyAll();
        }
        return true;
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/glutils/RenderHandler.java
    //Rendering thread
    public final void run() {
        ...
        for (; ; ) {
            ...

            if (localRequestDraw) {
                if ((mEglCore != null) && mTexId >= 0) {
                    mInputWindowSurface.makeCurrent();
                    
                    GLES20.glClearColor(1.0f, 1.0f, 0.0f, 1.0f);
                    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
                    //Each time the beauty preview renders a frame, render one here as well
                    mEffectRenderHelper.drawFrame(mTexId);
                    mInputWindowSurface.swapBuffers();
                }
            } else {
                synchronized (mSync) {
                    try {
                        mSync.wait();
                    } catch (final InterruptedException e) {
                        break;
                    }
                }
            }
        }
        ...
    }

feature/mode/effect/src/com/freeme/camera/feature/mode/effect/encoder/MediaEncoder.java
    //Encoding thread
    public void run() {
        synchronized (mSync) {
            mRequestStop = false;
            mRequestDrain = 0;
            mSync.notify();
        }
        final boolean isRunning = true;
        boolean localRequestStop;
        boolean localRequestDrain;
        while (isRunning) {
            ...
            if (localRequestDrain) {
                //Drain the encoded data and write it to the muxer, i.e. pull the encoded buffers out and let the muxer write them into the target video file
                drain();
            } else {
                synchronized (mSync) {
                    try {
                        mSync.wait();
                    } catch (final InterruptedException e) {
                        break;
                    }
                }
            }
        } // end of while
        if (DEBUG) Log.d(TAG, "Encoder thread exiting");
        synchronized (mSync) {
            mRequestStop = true;
            mIsCapturing = false;
        }
    }


    protected void drain() {
        //This method is a bit long, but the flow follows Google's prescribed pattern; see the source for details. Only the key steps are noted here: fetching the encoded data and writing it to the video file
        if (mMediaCodec == null) return;
        if (isPause) return;
        ByteBuffer[] encoderOutputBuffers = mMediaCodec.getOutputBuffers();
        int encoderStatus, count = 0;
        final MediaMuxerWrapper muxer = mWeakMuxer.get();
        if (muxer == null) {
            Log.w(TAG, "muxer is unexpectedly null");
            return;
        }
        LOOP:
        while (mIsCapturing) {
            //1. Get encoded data, waiting at most TIMEOUT_USEC (= 10 ms)
            encoderStatus = mMediaCodec.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
            if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                // wait 5 counts(=TIMEOUT_USEC x 5 = 50msec) until data/EOS come
                if (!mIsEOS) {
                    if (++count > 5)
                        break LOOP;        // out of while
                }
            } else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                if (DEBUG) Log.v(TAG, "INFO_OUTPUT_BUFFERS_CHANGED");
                // this should not come when encoding
                //2. Retrieve the set of output buffers
                encoderOutputBuffers = mMediaCodec.getOutputBuffers();
            } else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                if (DEBUG) Log.v(TAG, "INFO_OUTPUT_FORMAT_CHANGED");
                // this status indicate the output format of codec is changed
                // this should come only once before actual encoded data
                // but this status never come on Android4.3 or less
                // and in that case, you should treat when MediaCodec.BUFFER_FLAG_CODEC_CONFIG come.
                if (mMuxerStarted) {    // second time request is error
                    throw new RuntimeException("format changed twice");
                }
                // get output format from codec and pass them to muxer
                // getOutputFormat should be called after INFO_OUTPUT_FORMAT_CHANGED otherwise crash.
                final MediaFormat format = mMediaCodec.getOutputFormat(); // API >= 16
                mTrackIndex = muxer.addTrack(format);
                mMuxerStarted = true;
                if (!muxer.start()) {
                    // we should wait until muxer is ready
                    synchronized (muxer) {
                        while (!muxer.isStarted())
                            try {
                                muxer.wait(100);
                            } catch (final InterruptedException e) {
                                break LOOP;
                            }
                    }
                }
            } else if (encoderStatus < 0) {
                // unexpected status
                if (DEBUG)
                    Log.w(TAG, "drain:unexpected result from encoder#dequeueOutputBuffer: " + encoderStatus);
            } else {
                //3. The encoded data
                final ByteBuffer encodedData = encoderOutputBuffers[encoderStatus];
                if (encodedData == null) {
                    // this never should come...may be a MediaCodec internal error
                    throw new RuntimeException("encoderOutputBuffer " + encoderStatus + " was null");
                }
                if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                    // You should set the output format on the muxer here when targeting Android 4.3 or less,
                    // but MediaCodec#getOutputFormat can not be called here (INFO_OUTPUT_FORMAT_CHANGED has not come yet),
                    // therefore we should expand and prepare the output format from the buffer data.
                    // This sample is for API>=18(>=Android 4.3), just ignore this flag here
                    if (DEBUG) Log.d(TAG, "drain:BUFFER_FLAG_CODEC_CONFIG");
                    mBufferInfo.size = 0;
                }

                if (mBufferInfo.size != 0) {
                    // encoded data is ready, clear waiting counter
                    count = 0;
                    if (!mMuxerStarted) {
                        // muxer is not ready...this would be a programming failure.
                        throw new RuntimeException("drain:muxer hasn't started");
                    }
                // 4. Write the encoded data to the muxer
                    mBufferInfo.presentationTimeUs = getPTSUs();
                    muxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
                    prevOutputPTSUs = mBufferInfo.presentationTimeUs;
                }
                // return buffer to encoder
                mMediaCodec.releaseOutputBuffer(encoderStatus, false);
                if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    // when EOS come.
                    mIsCapturing = false;
                    break;      // out of while
                }
            }
        }
    }

//That completes the flow of rendering the video and writing the encoded data into the target file; audio works the same way and is simpler.
//Why build the new beauty video at all: first, the visual quality; second, an asynchrony bug between rendering and resource release inside the Tutu SDK made the old beauty video highly error-prone, that partnership has ended, and we cannot modify the SDK.
//The new beauty video is stable now; with the flow understood, analyze any future issue case by case.

3. FaceCute mode overview

(1) How FaceCute works: using the face-coordinate data returned by the Tutu beauty SDK, the third-party library libgdx (itself a wrapper over OpenGL) draws the FaceCute stickers on top of the texture.

(2) A quick look at the flow

feature/mode/beautyface/src/com/freeme/camera/feature/mode/beautyface/BeautyFaceView.java

    public void onDrawFrame(GL10 gl10) {
        mSurfaceTexture.updateTexImage();
        if (mPauseed) {
            return;
        }
        ...

        GLES20.glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

        // Filter-engine processing; the returned textureID is of type TEXTURE_2D
        int textureWidth = mMeasuredHeight;/*mDrawBounds.height();*/
        int textureHeight = mMeasuredWidth;/*mDrawBounds.width();*/

        textureHeight = (int) (textureHeight / SAMPLER_RATIO);
        textureWidth = (int) (textureWidth / SAMPLER_RATIO);

        if (mDrawBounds.width() >= 972) {
            textureHeight = (int) (textureHeight / SAMPLER_RATIO);
            textureWidth = (int) (textureWidth / SAMPLER_RATIO);
        }

        ...
        final int textureId = mFilterEngine.processFrame(mOESTextureId, textureWidth, textureHeight);
        
        textureProgram.draw(textureId);
        
        if (mCameraActivity.getCurrentCameraMode() == FreemeSceneModeData.FREEME_SCENE_MODE_FC_ID) {
            FaceAligment[] faceAligments = mFilterEngine.getFaceFeatures();
            float deviceAngle = mFilterEngine.getDeviceAngle();
            //Draw the FaceCute effect
            mFunnyFaceView.render(deviceAngle, faceAligments);
            //FaceCute capture
            mFunnyFaceView.capture();
        }
    }

feature/mode/facecute/src/com/freeme/camera/feature/mode/facecute/gles/FunnyFaceView.java

    public void render(float deviceAngle, FaceAligment[] faceAligments) {
        if (!mIsShowing || mIsSwitching || mIsDispose) {
            return;
        }
        long time = System.nanoTime();
        deltaTime = (time - lastFrameTime) / 1000000000.0f;
        lastFrameTime = time;
        mStateTime += deltaTime;
        if (faceAligments != null && faceAligments.length > 0) {
            ...
            int faceW = (int) face.width();
            int faceH = (int) face.height();
            int abs = Math.abs(faceH - faceW);
            //Common problem: the FaceCute effect does not appear
            //Causes: 1. It depends on Tutu SDK face data; if the face is too far away to be detected, there is no effect.
            //2. Rendering checks the face width/height ratio; removing the check causes flicker and white flashes. The check was improved by factoring in screen density, so most projects now render correctly.
            if (faceW < mFaceMinSizePx || faceW > mFaceMaxSizePx || abs > 70 * mDensity) {
                mCamera.showOrNotFFBNoFaceIndicator(true);
                return;
            }
            ...
            drawItem(scale, 0, angle, landmarkInfo);
            mSpriteBatch.end();
            mCamera.showOrNotFFBNoFaceIndicator(false);
        } else {
            mCamera.showOrNotFFBNoFaceIndicator(true);
        }
    }

    private void drawItem(float scale, int orientation, float angle, LandmarkInfo markInfo) {
        if (mCurrItemList != null) {
            for (ItemInfo item : mCurrItemList) {
                TextureRegion currRegion = item.anim.getKeyFrame(mStateTime, true);
                AnchorInfo anchor = computeAnchorInfo(item, markInfo, scale, orientation);
                drawElements(currRegion, anchor, scale, orientation, angle);
            }
        }
    }

    private void drawElements(TextureRegion currRegion, AnchorInfo anchor, float scale,
                              int orientation, float angle) {
        ...
        //Draw
        mSpriteBatch.draw(currRegion, x, y, orignX, orignY, orignW, orignH, scale, scale,
                finalAngle);
    }

    public void capture() {
        if (mIsNeedCapture) {
            mIsNeedCapture = false;
            handleRGB565Data();
        }
    }

    private void handleRGB565Data() {
        long time = System.currentTimeMillis();
        final int data[] = this.getJpegDataFromGpu565(0, 0, mWidth, mHeight);
        ...
    }

    public int[] getJpegDataFromGpu565(int x, int y, int w, int h) {
        int size = w * h;
        ByteBuffer buf = ByteBuffer.allocateDirect(size * 4);
        buf.order(ByteOrder.nativeOrder());
        //glReadPixels
        GLES20.glReadPixels(x, y, w, h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, buf);
        int data[] = new int[size];
        buf.asIntBuffer().get(data);
        buf = null;
        return data;
    }
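One caveat around getJpegDataFromGpu565: glReadPixels returns rows bottom-up relative to Bitmap's top-down convention (and the RGBA bytes land as ABGR ints on a little-endian device), so the capture path has to at least flip the rows before building a Bitmap. The row flip in isolation (plain Java; the helper name is illustrative):

```java
class PixelFlipSketch {
    /** Reorder bottom-up GL rows into top-down Bitmap rows. */
    static int[] flipRows(int[] pixels, int width, int height) {
        int[] out = new int[pixels.length];
        for (int row = 0; row < height; row++) {
            System.arraycopy(pixels, row * width, out, (height - 1 - row) * width, width);
        }
        return out;
    }
}
```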
//The FaceCute flow is fairly simple and depends on the Tutu beauty SDK; the common problem is the one described above.

That covers the three beauty features: beauty, beauty video, and FaceCute.

II. Plugins and common problems

1. External plugins: model, child; watermark, blockbuster; QR-code scan

(1) External plugin framework: see documents/FreemeOS/other/training/Camera/pluginmanager/Android插件化开发.md. The previous camera owner documented the plugin framework's ins and outs in detail before leaving; read that document.

(2) QR-code scan as an example

feature/mode/qrcodescan/src/com/freeme/camera/feature/mode/qrcodescan/QrCodeScanMode.java

    //Camera API2: preview data is obtained through an ImageReader
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireNextImage();
        //image->plane->buffer->byte[]
        //getBytesFromImageAsType: fills the output according to the requested type; QR decoding needs the luminance (Y) data, so the UV data is simply appended after the Y data
        mIApp.getmPluginManagerAgent().blendOutput(CameraUtil.getBytesFromImageAsType(image, 1), FreemeSceneModeData.FREEME_SCENE_MODE_QRCODE_ID);
        image.close();
    }
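getBytesFromImageAsType itself is not shown; per the comment it appends the UV data after the Y plane. Assuming the target layout is NV21 (full Y plane, then interleaved V/U, which ZXing tolerates since PlanarYUVLuminanceSource only reads the Y portion), the core packing step might look like this. This pure-Java sketch assumes tightly packed planes; real Image planes also carry rowStride/pixelStride that must be handled:

```java
class Nv21PackSketch {
    /**
     * Pack separate Y/U/V planes (already de-strided, 4:2:0 subsampled)
     * into NV21: the full Y plane, then interleaved V/U bytes.
     */
    static byte[] toNv21(byte[] y, byte[] u, byte[] v, int width, int height) {
        byte[] out = new byte[width * height * 3 / 2];
        System.arraycopy(y, 0, out, 0, width * height);
        int pos = width * height;
        for (int i = 0; i < u.length; i++) {
            out[pos++] = v[i];  // NV21 stores V first...
            out[pos++] = u[i];  // ...then U, for each chroma sample
        }
        return out;
    }
}
```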

common/src/com/freeme/camera/common/pluginmanager/PluginManagerAgent.java

    public byte[] blendOutput(byte[] jpegData, int mode) {
        if (mModules != null && mModules.size() > 0) {
            IPluginModuleEntry plugin = mModules.get(mode, null);
            if (plugin != null) {
                return plugin.blendOutput(jpegData);
            }
        }
        return null;
    }

FreemeCameraPlugin/CameraQrCodeScan/app/src/main/java/com/freeme/cameraplugin/qrcodescan/QrCodeScan.java

    public byte[] blendOutput(byte[] jpegData) {
        if (QrCodeScanView.sFramingRect == null) {
            return super.blendOutput(jpegData);
        }
        synchronized (mDecodeHandlerObject) {
            if (mDecodeHandler != null && !mIsCoding) {
                mIsCoding = true;
                Point cameraResolution = mCameraConfigManager.getCameraResolution();
                Message message = mDecodeHandler.obtainMessage(MSG_START_DECODE, cameraResolution.x, cameraResolution.y, jpegData);
                message.sendToTarget();
            } else {
                Log.d(TAG, "Got preview callback, but no handler for it");
            }
        }
        return super.blendOutput(jpegData);
    }

    public void handleMessage(Message msg) {
        super.handleMessage(msg);
        if (msg.what == MSG_START_DECODE) {
            decode((byte[]) msg.obj, msg.arg1, msg.arg2);
        } else if (msg.what == MSG_QUIT_DECODE) {
            Looper.myLooper().quit();
        }
    }

    private void decode(byte[] data, int width, int height) {
           
            Result rawResult = null;
            Log.i(TAG, "decode bate length : " + data.length + ",width : " + width + ",height : " + height);
            //modify here
            byte[] rotatedData = new byte[data.length];
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++)
                    rotatedData[x * height + height - y - 1] = data[x + y * width];
            }
            ...
            PlanarYUVLuminanceSource source = buildLuminanceSource(rotatedData, width, height, rect);
            BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
            try {
                //Decode with Google's ZXing library
                rawResult = multiFormatReader.decodeWithState(bitmap);
            } catch (ReaderException re) {
                // continue
            } finally {
                multiFormatReader.reset();
            }

            ...
        }
    }
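The index arithmetic in the rotation loop above is easy to get wrong, so here is a self-contained sketch of the same mapping (the class and method names are mine, not from the project):

```java
// Standalone sketch of the rotation loop in decode() above: rotates a
// width×height luminance (Y) plane 90° clockwise so that a portrait-held
// phone produces an upright image for the decoder.
public class Rotate90 {
    public static byte[] rotateClockwise(byte[] data, int width, int height) {
        byte[] out = new byte[data.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // source pixel (x, y) lands at row x, column (height - 1 - y)
                // of the rotated image, whose row stride is height
                out[x * height + height - y - 1] = data[x + y * width];
            }
        }
        return out;
    }
}
```

After this call the dimensions are swapped: the rotated plane is height pixels wide and width pixels tall, which is why the downstream luminance-source call must use the swapped values.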
//Common scanning issue: the QR code is not recognized. Every case I have run into came down to a focus problem on the device — have the project team check autofocus.
//A note on resources failing to load: this happens when the screen size does not meet the backend's validation criteria.
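blendOutput() above drops incoming frames while a decode is still in flight (the mIsCoding flag guarded by a lock). That gating pattern, reduced to plain Java, can be sketched as follows — the class and method names are hypothetical, and the real decode runs asynchronously on a handler thread rather than inline:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch of the frame-dropping gate that blendOutput() implements with
// mIsCoding: at most one decode runs at a time, and frames arriving while a
// decode is in flight are silently dropped instead of queuing up.
public class FrameGate {
    private final AtomicBoolean decoding = new AtomicBoolean(false);

    // returns true if the frame was accepted and decoded, false if dropped
    public boolean offerFrame(Runnable decodeTask) {
        if (!decoding.compareAndSet(false, true)) {
            return false; // a decode is already running: drop this frame
        }
        try {
            decodeTask.run();
        } finally {
            decoding.set(false); // corresponds to clearing mIsCoding after decode
        }
        return true;
    }
}
```

Dropping frames rather than queuing them keeps preview latency bounded: a stale frame is worthless for scanning, and the next preview callback arrives within tens of milliseconds anyway.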

2. Built-in plugins (modes): fake-SLR (bokeh) mode and portrait mode

(1) Principle: overlay a BvirtualView on top of the preview container (TextureView)

feature/mode/slr/src/com/freeme/camera/feature/mode/slr/BvirtualView.java

    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        canvas.setDrawFilter(mPFDF);
        drawTrueBgVirtualWithCanvas(canvas);
        drawDiaphragm(canvas);
    }

    private void drawTrueBgVirtualWithCanvas(Canvas canvas) {
        ...
        //get a bitmap of the current preview frame
        Bitmap preview = ((CameraAppUI) mApp.getAppUi()).getmPreviewManager().getPreviewBitmap(sampleFactor);//mScreenShotProvider.getPreviewFrame(sampleFactor);
        ...
        if (preview != null && mBlur != null) {
            ...
            //blur the whole frame with Android's built-in ScriptIntrinsicBlur
            Bitmap bgBlurBitmap = mBlur.blurBitmap(preview, mBlurDegress);
            if (SHOW_PREVIEW_DEBUG_LOG) {
                time1 = System.currentTimeMillis();
                Log.e(TAG, "blur bitmap :" + (time1 - time0) + " ms");
                time0 = System.currentTimeMillis();
            }
            BlurInfo info = new BlurInfo();
            info.x = (int) (mOnSingleX / apectScale);
            info.y = (int) (mOnSingleY / apectScale);
            info.inRadius = (int) (IN_SHARPNESS_RADIUS * scale / apectScale);
            info.outRadius = (int) (OUT_SHARPNESS_RADIUS * scale / apectScale);
            //composite the sharp region back over the blur with the in-house blur library
            //source: https://github.com/azmohan/BvArithmetic
            SmoothBlurJni.smoothRender(bgBlurBitmap, preview, info);
            if (SHOW_PREVIEW_DEBUG_LOG) {
                time1 = System.currentTimeMillis();
                Log.e(TAG, "smooth render :" + (time1 - time0) + " ms");
            }
            Matrix matrix = new Matrix();
            matrix.setScale(apectScale, apectScale);
            //draw the result
            canvas.drawBitmap(bgBlurBitmap, matrix, null);
            preview.recycle();
            bgBlurBitmap.recycle();
        }
    }
//Common issue: jank. The root cause is that this fake-SLR pipeline is resource-hungry — it layers an extra view on top of the preview and blurs every frame on the CPU.
//It can be mitigated by lowering the following values in BvirtualView.java:
    private final static int IN_SHARPNESS_RADIUS = 200;
    private final static int OUT_SHARPNESS_RADIUS = 320;
    private static int REFERENCE_ASPECT_SIZE = 720;
    private static int SUPPORT_MAX_ASPECT_SIZE = 720;
//For a fundamental fix, do what beauty mode does: blur the texture with an OpenGL shader and then draw it into the preview container.
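SmoothBlurJni.smoothRender() composites the sharp preview back over the blurred copy around the touch point, keeping a circle of inRadius sharp and fading out to fully blurred at outRadius; the native source lives at the GitHub link above. As a rough illustration only — the grayscale planes and the linear falloff below are my assumptions, not the library's actual algorithm:

```java
// Illustrative sketch of sharp/blur compositing around a focus point.
// sharp and blurred are width×height grayscale planes; the blend is
// written back into blurred. Pixels within inRadius of (cx, cy) stay
// sharp, pixels beyond outRadius stay blurred, and the ring in between
// blends linearly between the two.
public class SmoothBlend {
    public static void render(float[] blurred, float[] sharp, int width, int height,
                              int cx, int cy, int inRadius, int outRadius) {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                double d = Math.hypot(x - cx, y - cy);
                float w; // weight of the sharp image at this pixel
                if (d <= inRadius) w = 1f;
                else if (d >= outRadius) w = 0f;
                else w = (float) ((outRadius - d) / (outRadius - inRadius));
                int i = y * width + x;
                blurred[i] = w * sharp[i] + (1f - w) * blurred[i];
            }
        }
    }
}
```

This also shows why shrinking IN_SHARPNESS_RADIUS and OUT_SHARPNESS_RADIUS helps performance: the smaller the blend ring, the fewer pixels need the per-pixel distance and weight computation relative to the cheap all-blurred region.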