
Using FFmpeg Filters to Implement Playback Speed Change

FFmpeg filters can apply basic processing to raw audio/video streams (cropping, scaling, speed changes, noise reduction), add effects (watermarks, subtitles, transitions, visual filters), repair defects (removing background noise, correcting color), and adapt streams to different scenarios (resizing for a target device, mixing to match a multi-channel output). This post uses filters to implement playback speed change.

    AVFilterGraph* m_filterGraph = nullptr;
    AVFilterContext* m_videoSrcFilterCtx = nullptr;
    AVFilterContext* m_videoSinkFilterCtx = nullptr;
    AVFilterContext* m_audioSrcFilterCtx = nullptr;
    AVFilterContext* m_audioSinkFilterCtx = nullptr;

These are the member variables involved. m_filterGraph manages the filters: we need it both when creating filters and when configuring the filter graph. In short, we create one such object and pass it to every subsequent filter-related call that takes an AVFilterGraph parameter.
The other four are the video source filter context, the video sink filter context, the audio source filter context, and the audio sink filter context. We feed AVFrames into the source contexts and pull new AVFrames out of the sink contexts.
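
Before any filter can be created, the graph itself must be allocated. A minimal sketch of its lifecycle, using the member variables above:

    // Allocate the graph up front; every filter created below lives inside it.
    m_filterGraph = avfilter_graph_alloc();
    if (!m_filterGraph) {
        return false;
    }
    // ... create filters, link them, call avfilter_graph_config() ...

    // On teardown, one call releases the graph together with every
    // AVFilterContext created in it, and resets the pointer to nullptr.
    avfilter_graph_free(&m_filterGraph);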

    const AVFilter* buffersrc = avfilter_get_by_name("buffer");
    const AVFilter* buffersink = avfilter_get_by_name("buffersink");
    const AVFilter* setpts = avfilter_get_by_name("setpts");
    const AVFilter* abuffersrc = avfilter_get_by_name("abuffer");
    const AVFilter* abuffersink = avfilter_get_by_name("abuffersink");
    const AVFilter* atempo = avfilter_get_by_name("atempo");

Here we look up several AVFilter objects: buffer is the video source filter and buffersink the video sink filter, while abuffer and abuffersink are their audio counterparts. The setpts filter rewrites the PTS of video frames, and the atempo filter changes audio playback speed.

    int ret = 0;
    const AVFilter* buffersrc = avfilter_get_by_name("buffer");
    const AVFilter* buffersink = avfilter_get_by_name("buffersink");
    AVRational time_base = m_videoCodecContext->time_base;
    char args[512];
    snprintf(args, sizeof(args),
             "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             m_videoCodecContext->width, m_videoCodecContext->height, m_videoCodecContext->pix_fmt,
             time_base.num, time_base.den,
             m_videoCodecContext->sample_aspect_ratio.num, m_videoCodecContext->sample_aspect_ratio.den);
    ret = avfilter_graph_create_filter(&m_videoSrcFilterCtx, buffersrc, "in", args, NULL, m_filterGraph);
    if (ret < 0) {
        std::cout << "failed to create the buffer filter" << std::endl;
        return false;
    }
    ret = avfilter_graph_create_filter(&m_videoSinkFilterCtx, buffersink, "out", NULL, NULL, m_filterGraph);
    if (ret < 0) {
        std::cout << "failed to create the buffersink filter" << std::endl;
        return false;
    }
    AVFilterContext* videoSpeedCtx = nullptr;
    const AVFilter* setpts = avfilter_get_by_name("setpts");
    if ((ret = avfilter_graph_create_filter(&videoSpeedCtx, setpts, "speed",
                                            "0.5*PTS", nullptr, m_filterGraph)) < 0) {
        std::cout << "failed to create the video speed filter" << std::endl;
        return false;
    }

avfilter_graph_create_filter creates and initializes a filter. The first parameter receives the filter context, the second is the AVFilter, and the third is an instance name, used mainly for logging and graph dumps, so any label will do. The fourth parameter is an options string describing the filter's settings, the fifth is normally NULL, and the sixth is the filter graph.
The fourth parameter deserves special attention: for the source filters, it describes the properties of the incoming video/audio frames.
Video parameters

Required:

  1. video_size (frame size)
     Format: widthxheight
     Example: 1280x720
  2. pix_fmt (pixel format)
     Format: integer
     Example: 0 (AV_PIX_FMT_YUV420P)
  3. time_base (time base)
     Format: num/den
     Example: 1/1000

Recommended:

  4. pixel_aspect (pixel aspect ratio)
     Format: num/den
     Example: 1/1
     Describes the pixel shape, which affects the displayed proportions.

Optional (depending on what the filter chain needs):

  5. frame_rate (frame rate)
     Format: num/den or a decimal
     Example: 30000/1001 (29.97 fps)
     Note: set this when the filter chain needs to know the frame rate.
  6. sar (sample aspect ratio)
     Format: num/den
     Example: 1/1
     Note: similar to pixel_aspect; sometimes both are set.
  7. sws_param (scaling parameters)
     Format: string
     Example: flags=bicubic
     Note: selects the scaling algorithm when resizing is needed.

Audio parameters

  1. time_base (time base)
     Format: num/den
     Example: 1/44100
  2. sample_rate (sample rate)
     Format: integer
     Example: 44100, 48000
  3. sample_fmt (sample format)
     Format: string
     Example: s16, fltp
  4. channel_layout (channel layout)
     Format: hexadecimal or string
     Example: 0x3, stereo, mono

The sink filters need no options string (pass NULL). For setpts and atempo, the fourth argument is the filter's own option string, such as "0.5*PTS" or "tempo=2.0".
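
For reference, the abuffer args string is assembled the same way as the video one; this mirrors the initFilter() code in the full listing further below:

    // Render the channel layout as a string such as "stereo" or "mono",
    // then build the abuffer options from the audio decoder context.
    char ch_layout_str[256] = {0};
    av_channel_layout_describe(&m_audioCodecContext->ch_layout,
                               ch_layout_str, sizeof(ch_layout_str));
    char aargs[512];
    snprintf(aargs, sizeof(aargs),
             "time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=%s",
             m_audioCodecContext->time_base.num, m_audioCodecContext->time_base.den,
             m_audioCodecContext->sample_rate,
             av_get_sample_fmt_name(m_audioCodecContext->sample_fmt),
             ch_layout_str);
    // Typical result: "time_base=1/44100:sample_rate=44100:sample_fmt=fltp:channel_layout=stereo"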

    if (avfilter_link(m_videoSrcFilterCtx, 0, videoSpeedCtx, 0) < 0 ||
        avfilter_link(videoSpeedCtx, 0, m_videoSinkFilterCtx, 0) < 0) {
        std::cerr << "failed to link the video filters" << std::endl;
        return false;
    }

avfilter_link connects two filters. The code above routes data from m_videoSrcFilterCtx to videoSpeedCtx, then from videoSpeedCtx to m_videoSinkFilterCtx. The flow is easy to picture: data starts at the source filter, passes into the intermediate filter for processing, moves on to the next filter, and finally arrives at the sink filter.
The 0 arguments are the indices of the output and input pads. Some filters have multiple inputs or outputs: to merge two audio streams, for example, the first stream would go into the mixing filter's first input pad and the second stream into its second pad, as the sketch below shows.
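
To illustrate the pad indices, here is a hypothetical sketch (not part of this player) that mixes two audio streams with the amix filter; src1Ctx and src2Ctx stand for two abuffer source contexts created elsewhere:

    // amix takes the number of inputs as an option; each source then
    // connects to a different input pad of the same filter instance.
    AVFilterContext* mixCtx = nullptr;
    const AVFilter* amix = avfilter_get_by_name("amix");
    int ret = avfilter_graph_create_filter(&mixCtx, amix, "mix",
                                           "inputs=2", nullptr, m_filterGraph);
    if (ret >= 0) {
        ret = avfilter_link(src1Ctx, 0, mixCtx, 0);     // first stream  -> input pad 0
    }
    if (ret >= 0) {
        ret = avfilter_link(src2Ctx, 0, mixCtx, 1);     // second stream -> input pad 1
    }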

    if ((ret = avfilter_graph_config(m_filterGraph, nullptr)) < 0) {
        std::cerr << "filter graph configuration failed" << std::endl;
        return false;
    }

This call validates all the links and formats in the filter graph and configures it.
That completes the filter setup. From here on, we only need to feed AVFrames into the source filter context and collect new AVFrames from the sink filter context:

    av_buffersrc_add_frame(m_videoSrcFilterCtx, frame);
    while (av_buffersink_get_frame(m_videoSinkFilterCtx, filterFrame) >= 0) {
        // do something
    }

Note that although setpts only rewrites the PTS of video frames, in practice filterFrame's format can differ from frame's, so the parameters for format conversion should be taken from filterFrame.
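
Instead of inspecting the first output frame, the negotiated output parameters can also be read from the sink once avfilter_graph_config() has succeeded; a sketch using the buffersink accessors from libavfilter:

    // After avfilter_graph_config(), the sink knows what the graph negotiated;
    // frames returned by av_buffersink_get_frame() will carry these values.
    AVPixelFormat outFmt = (AVPixelFormat)av_buffersink_get_format(m_videoSinkFilterCtx);
    int outWidth  = av_buffersink_get_w(m_videoSinkFilterCtx);
    int outHeight = av_buffersink_get_h(m_videoSinkFilterCtx);
    AVRational outTimeBase = av_buffersink_get_time_base(m_videoSinkFilterCtx);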

Next is an alternative way to wire up the filters. The difference from the first approach is that the filters don't have to be linked by hand: we create AVFilterInOut objects, bind them to the source and sink filters, and let avfilter_graph_parse_ptr link everything automatically.

    m_filterGraph = avfilter_graph_alloc();
    if (!m_filterGraph) {
        return false;
    }
    // create the video filters
    int ret = 0;
    const AVFilter* buffersrc = avfilter_get_by_name("buffer");
    const AVFilter* buffersink = avfilter_get_by_name("buffersink");
    AVRational time_base = m_videoCodecContext->time_base;
    char args[512];
    snprintf(args, sizeof(args),
             "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             m_videoCodecContext->width, m_videoCodecContext->height, m_videoCodecContext->pix_fmt,
             time_base.num, time_base.den,
             m_videoCodecContext->sample_aspect_ratio.num, m_videoCodecContext->sample_aspect_ratio.den);
    ret = avfilter_graph_create_filter(&m_videoSrcFilterCtx, buffersrc, "in",
                                       args, NULL, m_filterGraph);
    if (ret < 0) {
        std::cout << "failed to create the buffer filter" << std::endl;
        return false;
    }
    // set up the buffersink
    ret = avfilter_graph_create_filter(&m_videoSinkFilterCtx, buffersink, "out",
                                       NULL, NULL, m_filterGraph);
    if (ret < 0) {
        std::cout << "failed to create the buffersink filter" << std::endl;
        return false;
    }

The first part is much the same: allocate the filter graph, then create the source and sink filters.

    AVFilterInOut* outputs = avfilter_inout_alloc();
    AVFilterInOut* inputs = avfilter_inout_alloc();
    if (!outputs || !inputs) {
        return false;
    }
    outputs->name = av_strdup("in");
    outputs->filter_ctx = m_videoSrcFilterCtx;
    outputs->pad_idx = 0;
    outputs->next = NULL;
    inputs->name = av_strdup("out");
    inputs->filter_ctx = m_videoSinkFilterCtx;
    inputs->pad_idx = 0;
    inputs->next = NULL;
    if ((ret = avfilter_graph_parse_ptr(m_filterGraph, "setpts=PTS/2",
                                        &inputs, &outputs, NULL)) < 0) {
        std::cout << "failed to parse the filter description" << std::endl;
        return false;
    }
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);

Two AVFilterInOut objects are created and filled in here. Note that outputs corresponds to the source filter and inputs to the sink filter; think of it as data flowing out of the source filter into the AVFilterInOut, hence "outputs".
So why not name inputs "outputs" instead? That comes down to avfilter_graph_parse_ptr.
avfilter_graph_parse_ptr is declared as follows:

int avfilter_graph_parse_ptr(AVFilterGraph *graph, const char *filters,
                             AVFilterInOut **inputs, AVFilterInOut **outputs,
                             void *log_ctx);

The AVFilterInOut **inputs parameter corresponds to the inputs object in the code above, and likewise for outputs.

avfilter_graph_parse_ptr(m_filterGraph, "setpts=PTS/2", &inputs, &outputs, NULL)

avfilter_graph_parse_ptr builds the filter chain automatically from the filters description it is given.
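
This is where the description-based approach pays off: one string can describe a whole chain. A sketch (setpts and scale are both standard libavfilter filters):

    // Halve the timestamps (2x speed) and then scale to 1280x720,
    // all expressed in a single description string.
    ret = avfilter_graph_parse_ptr(m_filterGraph, "setpts=PTS/2,scale=1280:720",
                                   &inputs, &outputs, NULL);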

    if ((ret = avfilter_graph_config(m_filterGraph, nullptr)) < 0) {
        std::cerr << "filter graph configuration failed" << std::endl;
        return false;
    }

Finally, avfilter_graph_config must be called to configure the filter graph, just as before.

That said, after the speed change the audio and video drift slightly apart during playback; I'm not yet sure where the problem lies.
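
One thing that may be worth checking (an assumption on my part, not verified here): the decode loops convert PTS to seconds using the codec context's time_base, while frames pulled from buffersink carry the sink's negotiated time base, and the two can differ. Converting with the sink's value would look like this:

    // Assumption: use the time base negotiated by the sink instead of the codec
    // context's, since the filter graph may rescale timestamps on the way through.
    AVRational sinkTimeBase = av_buffersink_get_time_base(m_videoSinkFilterCtx);
    double pts_in_seconds = av_q2d(sinkTimeBase) * filterFrame->pts;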

The full VideoDecoder.h:

#ifndef VIDEODECODER_H
#define VIDEODECODER_H
#include <atomic>
#include <functional>
#include <string>
#include <vector>
#include <mutex>
#include <condition_variable>
extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libavutil/avutil.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"
#include "libavutil/channel_layout.h"
#include "libswresample/swresample.h"
#include "libavutil/opt.h"
#include "libavutil/time.h"#include "libavfilter/avfilter.h"      // 滤镜系统
#include "libavfilter/buffersrc.h"      // 缓冲源滤镜
#include "libavfilter/buffersink.h"     // 缓冲接收滤镜
#include "libavutil/pixdesc.h"}
#include "Timer.h"
#include "VideoStructs.h"using namespace std;class VideoDecoder
{
public:VideoDecoder();~VideoDecoder();bool open(const string& filePath);VideoParams getVideoParam() { return m_videoParam;}AudioParams getAudioParam() { return m_audioParam;}bool isStop() { return m_stop; }void addVideoCallback(std::function<bool(AVFrame*)> callback) {m_videoCallbacks.emplace_back(callback);}void addAudioCallback(std::function<bool(AVFrame*)> callback) {m_audioCallbacks.emplace_back(callback);}void play();void pause();void stop();void close();private:void showError(int ret);void readPacket();void decodeVideo();void decodeAudio();// 格式转换bool initSwrContext(const AVFrame* frame);bool initSwsContext(const AVFrame* frame);// 滤镜bool initFilter();bool initFilterByDescribe();// 硬解码AVHWDeviceType findSupportedHWDeviceType(const AVCodec* codec);bool initHWDevice();bool initHWFramesContext();bool hwFrameToSwFrame(AVFrame* hwFrame, AVFrame* swFrame);private:VideoParams m_videoParam;AudioParams m_audioParam;// 播放控制及音视频同步bool m_play = false;bool m_stop = true;Timer* m_timer;std::vector<AVPacket> m_videoPackets;std::vector<AVPacket> m_audioPackets;std::mutex m_videoMutex;std::mutex m_audioMutex;std::condition_variable m_videoAvailable;std::condition_variable m_audioAvailable;std::mutex m_fullMutex;std::condition_variable m_queueFull;std::mutex m_playMutex;std::condition_variable m_playCondition;std::atomic<int> m_threadCount;// 回调std::vector<std::function<bool(AVFrame*)>> m_videoCallbacks;std::vector<std::function<bool(AVFrame*)>> m_audioCallbacks;// 格式转化SwsContext* m_swsContext = nullptr;SwrContext* m_swrContext = nullptr;                 // 音频格式转换实例// 解码AVFormatContext* m_formatContext = nullptr;int m_videoIndex = -1;                              // 视频流所在索引int m_audioIndex = -1;                              // 音频流所在索引AVCodecContext* m_videoCodecContext = nullptr;      // 视频解码器实例AVCodecContext* m_audioCodecContext = nullptr;      // 音频解码器实例// 硬解码AVBufferRef* m_hwDeviceCtx = nullptr;   // 硬件设备上下文AVBufferRef* m_hwFramesCtx = nullptr;   // 硬件帧上下文AVHWDeviceType m_hwDeviceType = AV_HWDEVICE_TYPE_NONE; // 硬件设备类型(如NVDEC)enum AVPixelFormat m_hwPixFmt = AV_PIX_FMT_NONE; // 硬件像素格式(如AV_PIX_FMT_CUDA)bool m_bHardWare = false;// 滤镜AVFilterGraph* m_filterGraph = nullptr;AVFilterContext* m_videoSrcFilterCtx = nullptr;AVFilterContext* m_videoSinkFilterCtx = nullptr;AVFilterContext* m_audioSrcFilterCtx = nullptr;AVFilterContext* m_audioSinkFilterCtx = nullptr;
};#endif // VIDEODECODER_H

The full VideoDecoder.cpp:

#include "VideoDecoder.h"
#include <iostream>
#include <thread>VideoDecoder::VideoDecoder() {m_timer = new Timer;
}VideoDecoder::~VideoDecoder()
{delete m_timer;
}bool VideoDecoder::open(const string &filePath)
{close();int ret = avformat_open_input(&m_formatContext, filePath.c_str(), NULL, NULL);if (ret < 0) {showError(ret);close();return false;}// 查找流信息ret = avformat_find_stream_info(m_formatContext, NULL);if (ret < 0) {close();return false;}// 查找视频流m_videoIndex = av_find_best_stream(m_formatContext, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);if (m_videoIndex == -1) {close();return false;}// 查找视频解码器AVStream *videoStream = m_formatContext->streams[m_videoIndex];const AVCodec *videoCodec = avcodec_find_decoder(videoStream->codecpar->codec_id);if (videoCodec == nullptr) {close();return false;}// 创建视频解码器m_videoCodecContext = avcodec_alloc_context3(videoCodec);if (m_videoCodecContext == nullptr) {close();return false;}// 把视频流中的编解码参数复制给解码器的实例avcodec_parameters_to_context(m_videoCodecContext, videoStream->codecpar);// 打开视频解码器实例ret = avcodec_open2(m_videoCodecContext, NULL, NULL);if (ret < 0) {close();return ret;}if (m_videoCodecContext->time_base.num <= 0 || m_videoCodecContext->time_base.den <= 0) {// 强制同步为流的 time_basem_videoCodecContext->time_base = videoStream->time_base;}m_videoParam.bitRate = m_videoCodecContext->bit_rate;m_videoParam.width = m_videoCodecContext->width;m_videoParam.height = m_videoCodecContext->height;m_videoParam.fps = m_videoCodecContext->framerate.num / m_videoCodecContext->framerate.den ;m_videoParam.gopSize = m_videoCodecContext->gop_size;m_videoParam.maxBFrames = m_videoCodecContext->max_b_frames;/*m_hwDeviceType = findSupportedHWDeviceType(videoCodec); // 查找支持的硬件类型if (m_hwDeviceType != AV_HWDEVICE_TYPE_NONE) {if (!initHWDevice()) { // 初始化硬件设备m_bHardWare = false;m_hwDeviceType = AV_HWDEVICE_TYPE_NONE;m_hwPixFmt = AV_PIX_FMT_NONE;// 硬解码失败,仍尝试软解码(释放已绑定的硬件上下文)if (m_videoCodecContext->hw_device_ctx) {av_buffer_unref(&m_videoCodecContext->hw_device_ctx);m_videoCodecContext->hw_device_ctx = nullptr;}}}*/// 打开音频解码器实例m_audioIndex = av_find_best_stream(m_formatContext, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);if (m_audioIndex >= 0) {AVStream *audioStream = m_formatContext->streams[m_audioIndex];const AVCodec *audioCodec = avcodec_find_decoder(audioStream->codecpar->codec_id);if (audioCodec == nullptr) {close();return false;}m_audioCodecContext = avcodec_alloc_context3(audioCodec);if (m_audioCodecContext == nullptr) {close();return false;}avcodec_parameters_to_context(m_audioCodecContext, audioStream->codecpar);ret = avcodec_open2(m_audioCodecContext, audioCodec, NULL);if (ret < 0) {close();return false;}m_audioParam.sampleRate = m_audioCodecContext->sample_rate;m_audioParam.sampleFmt = m_audioCodecContext->sample_fmt;m_audioParam.bitRate = m_audioCodecContext->bit_rate;m_audioParam.channels = m_audioCodecContext->ch_layout.nb_channels;m_audioParam.samples = m_audioCodecContext->frame_size;}initFilter();//initFilterByDescribe();return true;
}void VideoDecoder::play()
{if (m_stop) {m_stop = false;m_play = true;m_timer->start();std::thread([this](){readPacket();m_threadCount--;}).detach();std::thread([this](){decodeVideo();m_threadCount--;}).detach();std::thread([this](){decodeAudio();m_threadCount--;}).detach();m_threadCount = 3;} else {m_timer->start();m_play = true;m_playCondition.notify_all();}}void VideoDecoder::pause()
{m_play = false;m_timer->pause();
}void VideoDecoder::stop()
{m_stop = true;m_timer->stop();// 唤醒所有条件变量,避免线程无法退出m_queueFull.notify_all();m_videoAvailable.notify_all();m_audioAvailable.notify_all();while (m_threadCount > 0) {std::this_thread::sleep_for(std::chrono::milliseconds(100));}
}void VideoDecoder::close() {if (m_swsContext != nullptr) {sws_free_context(&m_swsContext);}if (m_swrContext != nullptr) {swr_free(&m_swrContext);}if (m_formatContext != nullptr) {avformat_close_input(&m_formatContext);}if (m_videoCodecContext != nullptr) {avcodec_free_context(&m_videoCodecContext);}if (m_audioCodecContext != nullptr) {avcodec_free_context(&m_audioCodecContext);}
}void VideoDecoder::showError(int ret)
{char errorBuf[1024];av_strerror(ret, errorBuf, sizeof(errorBuf));std::cerr << errorBuf << std::endl;
}void VideoDecoder::readPacket()
{AVPacket* packet = av_packet_alloc();while (!m_stop && av_read_frame(m_formatContext, packet) >= 0) { // 轮询数据包if (packet->stream_index == m_videoIndex) {m_videoMutex.lock();m_videoPackets.emplace_back(*packet);m_videoMutex.unlock();m_videoAvailable.notify_all();} else if (packet->stream_index == m_audioIndex) {m_audioMutex.lock();m_audioPackets.emplace_back(*packet);m_audioMutex.unlock();m_audioAvailable.notify_all();}{std::unique_lock<std::mutex> lock(m_fullMutex);m_queueFull.wait(lock, [this]{return m_videoPackets.size() <= 10 && m_audioPackets.size() <= 10 || m_stop;});}}av_packet_free(&packet);stop();
}void VideoDecoder::decodeVideo()
{AVFrame* frame = av_frame_alloc();AVFrame* filterFrame = av_frame_alloc();AVFrame* convertFrame = av_frame_alloc();AVFrame* swFrame = nullptr;if (m_bHardWare) {swFrame = av_frame_alloc();}while (!m_stop) {{std::unique_lock<std::mutex> lock(m_videoMutex);m_videoAvailable.wait(lock, [this]{return !m_videoPackets.empty() || m_stop;});}if (m_stop) {break;}while (!m_play) {std::this_thread::sleep_for(std::chrono::milliseconds(5));}m_videoMutex.lock();AVPacket packet = m_videoPackets.front();m_videoPackets.erase(m_videoPackets.begin());m_videoMutex.unlock();m_queueFull.notify_all();int ret = avcodec_send_packet(m_videoCodecContext, &packet);av_packet_unref(&packet);while ( ret >= 0) {ret = avcodec_receive_frame(m_videoCodecContext, frame);if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {break;} else if (ret < 0) {return;}if (m_bHardWare) {hwFrameToSwFrame(frame, swFrame);int ret = av_buffersrc_add_frame(m_videoSrcFilterCtx, swFrame);if (ret < 0) {std::cerr << "推送视频帧到源滤镜失败" << std::endl;showError(ret);continue;}} else {int ret = av_buffersrc_add_frame(m_videoSrcFilterCtx, frame);if (ret < 0) {std::cerr << "推送视频帧到源滤镜失败" << std::endl;showError(ret);continue;}}while (av_buffersink_get_frame(m_videoSinkFilterCtx, filterFrame) >= 0) {if (!m_swsContext) {initSwsContext(filterFrame);}convertFrame->format = AV_PIX_FMT_RGB24;convertFrame->width = m_videoCodecContext->width;convertFrame->height = m_videoCodecContext->height;convertFrame->pts = filterFrame->pts;int ret = av_frame_get_buffer(convertFrame, 0);if (ret < 0) {showError(ret);return;}ret = sws_scale(m_swsContext,filterFrame->data, filterFrame->linesize,0, filterFrame->height,convertFrame->data, convertFrame->linesize);av_frame_copy_props(convertFrame, filterFrame);double pts_in_seconds = av_q2d(m_videoCodecContext->time_base) * convertFrame->pts;double diff = pts_in_seconds - m_timer->elapsedtime();//while (diff > 0.001 && !m_stop) {//std::this_thread::sleep_for(std::chrono::milliseconds(5));//diff = pts_in_seconds - m_timer->elapsedtime();//}for (const auto& callback : m_videoCallbacks) {callback(convertFrame);}av_frame_unref(filterFrame);av_frame_unref(convertFrame);}av_frame_unref(frame);}}av_frame_free(&frame);av_frame_free(&convertFrame);av_frame_free(&filterFrame);if (m_bHardWare) {av_frame_free(&swFrame);}
}void VideoDecoder::decodeAudio()
{AVFrame *frame = av_frame_alloc();AVFrame *convertFrame = av_frame_alloc();AVFrame *filterFrame = av_frame_alloc();while (!m_stop) {{std::unique_lock<std::mutex> lock(m_audioMutex);m_audioAvailable.wait(lock, [this]{return !m_audioPackets.empty() || m_stop;});}if (m_stop) {break;}while (!m_play) {std::this_thread::sleep_for(std::chrono::milliseconds(5));}m_audioMutex.lock();AVPacket packet = m_audioPackets.front();m_audioPackets.erase(m_audioPackets.begin());m_audioMutex.unlock();m_queueFull.notify_all();int ret = avcodec_send_packet(m_audioCodecContext, &packet);av_packet_unref(&packet);while (ret >= 0) {ret = avcodec_receive_frame(m_audioCodecContext, frame);if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {break;} else if (ret < 0) {return;}ret = av_buffersrc_add_frame(m_audioSrcFilterCtx, frame);if (ret < 0) {std::cerr << "推送音频帧到源滤镜失败" << std::endl;showError(ret);continue;}while (av_buffersink_get_frame(m_audioSinkFilterCtx, filterFrame) >= 0) {if (!m_swrContext) {initSwrContext(filterFrame);}int samples = swr_get_out_samples(m_swrContext, filterFrame->nb_samples);if (samples < 0) {av_frame_free(&convertFrame);return;}convertFrame->format = AV_SAMPLE_FMT_S16;int ret = av_channel_layout_copy(&convertFrame->ch_layout, &filterFrame->ch_layout);if (ret < 0) {showError(ret);return;}convertFrame->nb_samples = samples;ret = av_frame_get_buffer(convertFrame, 0);if (ret < 0) {showError(ret);av_frame_free(&convertFrame);return;}convertFrame->pts = filterFrame->pts;samples = swr_convert(m_swrContext,convertFrame->data, samples,(const uint8_t **)filterFrame->data, filterFrame->nb_samples);if (samples < 0) {av_frame_free(&convertFrame);showError(samples);return;}double pts_in_seconds = av_q2d(m_audioCodecContext->time_base) * convertFrame->pts;double diff = pts_in_seconds - m_timer->elapsedtime();while (diff > 0.001 && !m_stop) {std::this_thread::sleep_for(std::chrono::milliseconds(5));diff = pts_in_seconds - m_timer->elapsedtime();}convertFrame->nb_samples = samples;//av_frame_copy_props(convertFrame, filterFrame);//convertFrame->sample_rate = filterFrame->sample_rate;for (const auto& callback : m_audioCallbacks) {callback(convertFrame);}av_frame_unref(filterFrame);av_frame_unref(convertFrame);}av_frame_unref(frame);}}av_frame_free(&frame);av_frame_free(&filterFrame);av_frame_free(&convertFrame);
}bool VideoDecoder::initSwrContext(const AVFrame* frame)
{if (!m_audioCodecContext) {return false;}int ret = swr_alloc_set_opts2(&m_swrContext,&m_audioCodecContext->ch_layout,AV_SAMPLE_FMT_S16,m_audioCodecContext->sample_rate,&frame->ch_layout,(AVSampleFormat)frame->format,frame->sample_rate,0, NULL);if (ret < 0) {showError(ret);return false;}ret = swr_init(m_swrContext);if (ret < 0) {showError(ret);swr_free(&m_swrContext);m_swrContext = nullptr;return false;}return true;
}bool VideoDecoder::initSwsContext(const AVFrame* frame)
{// 创建转换上下文并指定颜色空间m_swsContext = sws_getContext(frame->width, frame->height, (AVPixelFormat)frame->format,m_videoCodecContext->width, m_videoCodecContext->height, AV_PIX_FMT_RGB24,SWS_BILINEAR, nullptr, nullptr, nullptr);return m_swsContext != nullptr;
}bool VideoDecoder::initFilter()
{m_filterGraph = avfilter_graph_alloc();if (!m_filterGraph) {return false;}// 创建视频滤镜int ret = 0;const AVFilter* buffersrc = avfilter_get_by_name("buffer");const AVFilter* buffersink = avfilter_get_by_name("buffersink");AVRational time_base = m_videoCodecContext->time_base;char args[512];snprintf(args, sizeof(args),"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",m_videoCodecContext->width, m_videoCodecContext->height, m_videoCodecContext->pix_fmt,time_base.num, time_base.den,m_videoCodecContext->sample_aspect_ratio.num, m_videoCodecContext->sample_aspect_ratio.den);ret = avfilter_graph_create_filter(&m_videoSrcFilterCtx, buffersrc, "in", args, NULL, m_filterGraph);if (ret < 0) {std::cout << "无法创建 buffer 滤镜" << std::endl;return false;}ret = avfilter_graph_create_filter(&m_videoSinkFilterCtx, buffersink, "out", NULL, NULL, m_filterGraph);if (ret < 0) {std::cout << "无法创建 buffersink 滤镜" << std::endl;return false;}AVFilterContext* videoSpeedCtx = nullptr;const AVFilter* setpts = avfilter_get_by_name("setpts");if ((ret = avfilter_graph_create_filter(&videoSpeedCtx,setpts, "speed","0.5*PTS", nullptr, m_filterGraph)) < 0) {std::cout << "无法创建视频滤镜 " << std::endl;return -1;}if (avfilter_link(m_videoSrcFilterCtx, 0, videoSpeedCtx, 0) < 0 ||avfilter_link(videoSpeedCtx, 0, m_videoSinkFilterCtx, 0) < 0) {std::cerr << "连接视频滤镜失败" << std::endl;return false;}const AVFilter* abuffersrc = avfilter_get_by_name("abuffer");const AVFilter* abuffersink = avfilter_get_by_name("abuffersink");char ch_layout_str[256] = {0};size_t buf_size = sizeof(ch_layout_str);av_channel_layout_describe(&m_audioCodecContext->ch_layout, ch_layout_str, buf_size);char aargs[512];ret = snprintf(aargs, sizeof(aargs),"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=%s",m_audioCodecContext->time_base.num, m_audioCodecContext->time_base.den, m_audioCodecContext->sample_rate,av_get_sample_fmt_name(m_audioCodecContext->sample_fmt), ch_layout_str);ret = avfilter_graph_create_filter(&m_audioSrcFilterCtx, abuffersrc, "in", aargs, NULL, m_filterGraph);if (ret < 0) {std::cout << "无法创建 buffer 滤镜" << std::endl;return false;}ret = avfilter_graph_create_filter(&m_audioSinkFilterCtx, abuffersink, "out", NULL, NULL, m_filterGraph);if (ret < 0) {std::cout << "无法创建 abuffersink 滤镜" << std::endl;showError(ret);return false;}// atempo 无法处理超过两倍的加速,但是可以通过叠加滤镜进行处理AVFilterContext* audioSpeedCtx = nullptr;const AVFilter* atempo = avfilter_get_by_name("atempo");if (avfilter_graph_create_filter(&audioSpeedCtx, atempo, "atempo", "tempo=2.0", nullptr, m_filterGraph) < 0) {std::cerr << "创建音频倍速滤镜失败 " << std::endl;return false;}if (avfilter_link(m_audioSrcFilterCtx, 0, audioSpeedCtx, 0) < 0 ||avfilter_link(audioSpeedCtx, 0, m_audioSinkFilterCtx, 0) < 0) {std::cerr << "连接音频滤镜失败" << std::endl;return false;}if ((ret = avfilter_graph_config(m_filterGraph, nullptr)) < 0) {std::cerr << "滤镜图配置失败 " << std::endl;return -1;}return true;
}bool VideoDecoder::initFilterByDescribe()
{m_filterGraph = avfilter_graph_alloc();if (!m_filterGraph) {return false;}// 创建视频滤镜int ret = 0;const AVFilter* buffersrc = avfilter_get_by_name("buffer");const AVFilter* buffersink = avfilter_get_by_name("buffersink");AVRational time_base = m_videoCodecContext->time_base;char args[512];snprintf(args, sizeof(args),"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",m_videoCodecContext->width, m_videoCodecContext->height, m_videoCodecContext->pix_fmt,time_base.num, time_base.den,m_videoCodecContext->sample_aspect_ratio.num, m_videoCodecContext->sample_aspect_ratio.den);ret = avfilter_graph_create_filter(&m_videoSrcFilterCtx, buffersrc, "in",args, NULL, m_filterGraph);if (ret < 0) {std::cout << "无法创建 buffer 滤镜" << std::endl;return false;}// 设置buffersink参数ret = avfilter_graph_create_filter(&m_videoSinkFilterCtx, buffersink, "out",NULL, NULL, m_filterGraph);if (ret < 0) {std::cout << "无法创建 buffersink 滤镜" << std::endl;return false;}// 这里需要注意的是 outputs 对应的是输入滤镜,inputs 对应的是输出滤镜// 可以理解为数据从输入滤镜输出到 AVFilterInOut ,所以是 output,AVFilterInOut* outputs = avfilter_inout_alloc();AVFilterInOut* inputs = avfilter_inout_alloc();if (!outputs || !inputs) {return false;}outputs->name = av_strdup("in");outputs->filter_ctx = m_videoSrcFilterCtx;outputs->pad_idx = 0;outputs->next = NULL;inputs->name = av_strdup("out");inputs->filter_ctx = m_videoSinkFilterCtx;inputs->pad_idx = 0;inputs->next = NULL;if ((ret = avfilter_graph_parse_ptr(m_filterGraph, "setpts=PTS/2",&inputs, &outputs, NULL)) < 0) {std::cout << "无法解析滤镜描述" << std::endl;return false;}avfilter_inout_free(&inputs);avfilter_inout_free(&outputs);const AVFilter* abuffersrc = avfilter_get_by_name("abuffer");const AVFilter* abuffersink = avfilter_get_by_name("abuffersink");char ch_layout_str[256] = {0};size_t buf_size = sizeof(ch_layout_str);av_channel_layout_describe(&m_audioCodecContext->ch_layout, ch_layout_str, buf_size);char aargs[512];ret = snprintf(aargs, sizeof(aargs),"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=%s",m_audioCodecContext->time_base.num, m_audioCodecContext->time_base.den, m_audioCodecContext->sample_rate,av_get_sample_fmt_name(m_audioCodecContext->sample_fmt), ch_layout_str);ret = avfilter_graph_create_filter(&m_audioSrcFilterCtx, abuffersrc, "in",aargs, NULL, m_filterGraph);if (ret < 0) {std::cout << "无法创建 buffer 滤镜" << std::endl;return false;}ret = avfilter_graph_create_filter(&m_audioSinkFilterCtx, abuffersink, "out",NULL, NULL, m_filterGraph);if (ret < 0) {std::cout << "无法创建 abuffersink 滤镜" << std::endl;showError(ret);return false;}AVFilterInOut* ainputs = avfilter_inout_alloc();AVFilterInOut* aoutputs = avfilter_inout_alloc();if (!aoutputs || !ainputs) {return false;}aoutputs->name = av_strdup("in");aoutputs->filter_ctx = m_audioSrcFilterCtx;aoutputs->pad_idx = 0;aoutputs->next = NULL;ainputs->name = av_strdup("out");ainputs->filter_ctx = m_audioSinkFilterCtx;ainputs->pad_idx = 0;ainputs->next = NULL;ret = avfilter_graph_parse_ptr(m_filterGraph, "atempo=2", &ainputs, &aoutputs, NULL);if (ret < 0) {av_log(NULL, AV_LOG_ERROR, "Cannot parse audio filter string\n");showError(ret);return false;}if ((ret = avfilter_graph_config(m_filterGraph, NULL)) < 0) {std::cout << "滤镜图配置失败" << std::endl;return false;}avfilter_inout_free(&ainputs);avfilter_inout_free(&aoutputs);return true;
}AVHWDeviceType VideoDecoder::findSupportedHWDeviceType(const AVCodec* codec) {for (int i = 0;; i++) {const AVCodecHWConfig* hwConfig = avcodec_get_hw_config(codec, i);if (hwConfig != nullptr) {m_hwPixFmt = hwConfig->pix_fmt; // 记录硬件像素格式return hwConfig->device_type;} else {break;}}return AV_HWDEVICE_TYPE_NONE;
}bool VideoDecoder::initHWDevice()
{if (m_hwDeviceType == AV_HWDEVICE_TYPE_NONE || !m_videoCodecContext) {return false;}// 创建硬件设备上下文(NULL表示自动选择设备,如默认GPU)int ret = av_hwdevice_ctx_create(&m_hwDeviceCtx, m_hwDeviceType,NULL, NULL, 0);if (ret < 0) {showError(ret);std::cerr << "创建硬件设备上下文失败(如GPU未识别)" << std::endl;return false;}// 将硬件设备上下文绑定到解码器m_videoCodecContext->hw_device_ctx = av_buffer_ref(m_hwDeviceCtx);if (!m_videoCodecContext->hw_device_ctx) {showError(AVERROR(ENOMEM));av_buffer_unref(&m_hwDeviceCtx);m_hwDeviceCtx = nullptr;return false;}// 初始化硬件帧上下文(管理硬件帧池)if (!initHWFramesContext()) {av_buffer_unref(&m_hwDeviceCtx);m_hwDeviceCtx = nullptr;return false;}m_bHardWare = true;return true;
}bool VideoDecoder::initHWFramesContext()
{if (!m_hwDeviceCtx || !m_videoCodecContext) return false;// 分配硬件帧上下文m_hwFramesCtx = av_hwframe_ctx_alloc(m_hwDeviceCtx);if (!m_hwFramesCtx) {std::cerr << "分配硬件帧上下文失败" << std::endl;return false;}// 设置硬件帧参数AVHWFramesContext* hwFramesCtx = (AVHWFramesContext*)m_hwFramesCtx->data;hwFramesCtx->format = m_hwPixFmt;                  // 硬件像素格式if (m_hwPixFmt == AV_PIX_FMT_CUDA || m_hwPixFmt == AV_PIX_FMT_VULKAN) {hwFramesCtx->sw_format = AV_PIX_FMT_NV12;} else if (m_hwPixFmt == AV_PIX_FMT_VAAPI) {hwFramesCtx->sw_format = AV_PIX_FMT_YUV420P;}hwFramesCtx->width = m_videoCodecContext->width;   // 视频宽度hwFramesCtx->height = m_videoCodecContext->height; // 视频高度hwFramesCtx->initial_pool_size = 16;               // 帧池大小(建议 8-32)// 3. 初始化硬件帧上下文int ret = av_hwframe_ctx_init(m_hwFramesCtx);if (ret < 0) {showError(ret);av_buffer_unref(&m_hwFramesCtx);std::cerr << "初始化硬件帧上下文失败" << std::endl;return false;}// 4. 将硬件帧上下文绑定到解码器m_videoCodecContext->hw_frames_ctx = av_buffer_ref(m_hwFramesCtx);if (!m_videoCodecContext->hw_frames_ctx) {std::cerr << "绑定硬件帧上下文到解码器失败" << std::endl;av_buffer_unref(&m_hwFramesCtx);return false;}return true;
}bool VideoDecoder::hwFrameToSwFrame(AVFrame *hwFrame, AVFrame *swFrame)
{if (!hwFrame || !swFrame) {return false;}AVHWFramesContext* hwFramesCtx = (AVHWFramesContext*)m_hwFramesCtx->data;av_frame_unref(swFrame);swFrame->format = hwFramesCtx->sw_format;swFrame->width = hwFrame->width;swFrame->height = hwFrame->height;int ret = av_frame_get_buffer(swFrame, 0);if (ret < 0) {showError(ret);std::cerr << "分配软件帧内存失败" << std::endl;return false;}// 硬件帧数据传输到软件帧(GPU -> CPU)ret = av_hwframe_transfer_data(swFrame, hwFrame, 0);if (ret < 0) {showError(ret);std::cerr << "硬件帧转软件帧失败(可能是GPU资源不足)" << std::endl;return false;}// 将 时间戳(pts) 音频的采样率(sample rate) 视频的像素宽高比(sample aspect ratio)等字段从源帧(src)复制到目标帧(dst)av_frame_copy_props(swFrame, hwFrame);return true;
}

The full AudioPlayer.h:

#ifndef AUDIOPLAYER_H
#define AUDIOPLAYER_H
#include <SDL.h>
#include <vector>
#include <mutex>
#include <condition_variable>
#include "VideoStructs.h"
struct AVFrame;
using namespace std;
class AudioPlayer
{
public:
    AudioPlayer();
    ~AudioPlayer();
    bool initSdl(const AudioParams& param);
    void audioCallbackImpl(Uint8* stream, int len);
    static void audioCallback(void* userdata, Uint8* stream, int len);
    bool displayAudio(AVFrame* frame);
private:
    void closeSdl();
private:
    vector<AVFrame*> m_frames;
    std::mutex m_mutex;
    std::condition_variable m_available;
    std::condition_variable m_full;
    SDL_AudioDeviceID m_deviceId;
    bool m_stop = false;
    AVFrame* m_currentFrame = nullptr;
    int m_offset;
};

#endif // AUDIOPLAYER_H

The full AudioPlayer.cpp:

#include "AudioPlayer.h"
#include <iostream>
#include <thread>
extern "C" {
#include "libavcodec/avcodec.h"
}
AudioPlayer::AudioPlayer() {
}

AudioPlayer::~AudioPlayer()
{
    m_stop = true;
    m_available.notify_all();
    closeSdl();
}

bool AudioPlayer::initSdl(const AudioParams &param)
{
    // initialize the SDL audio subsystem
    if (SDL_Init(SDL_INIT_AUDIO) < 0) {
        std::cerr << "SDL initialization failed: " << SDL_GetError() << std::endl;
        return false;
    }
    SDL_AudioSpec desired_spec, obtained_spec;
    desired_spec.freq = param.sampleRate;               // sample rate
    desired_spec.format = AUDIO_S16SYS;                 // 16-bit signed integers
    desired_spec.channels = 2;
    desired_spec.silence = 0;                           // silence value
    desired_spec.samples = param.samples;               // buffer size
    if (desired_spec.samples <= 0) {
        desired_spec.samples = 1024;
    }
    desired_spec.callback = audioCallback;              // callback function
    desired_spec.userdata = this;                       // pass the this pointer
    m_deviceId = SDL_OpenAudioDevice(nullptr, 0, &desired_spec, &obtained_spec, 0);
    if (m_deviceId == 0) {
        std::cerr << "failed to open the audio device: " << SDL_GetError() << std::endl;
        return false;
    }
    SDL_PauseAudioDevice(m_deviceId, 0);
    return true;
}

void AudioPlayer::audioCallbackImpl(Uint8 *stream, int len)
{
    SDL_memset(stream, 0, len);
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_available.wait(lock, [this] {
            return m_frames.size() > 4 || m_deviceId == 0 || m_stop;
        });
    }
    if (m_currentFrame == nullptr) {
        m_currentFrame = m_frames.front();
        m_frames.erase(m_frames.begin());
        m_offset = 0;
    }
    int offset = 0;
    while (len > 0 && !m_stop) {
        if (m_deviceId == 0 || m_stop) {
            return;
        }
        // how many bytes this callback still needs
        int bytes_to_copy = std::min(len, m_currentFrame->linesize[0] - m_offset);
        // copy data into the audio buffer
        memcpy(stream + offset, m_currentFrame->data[0] + m_offset, bytes_to_copy);
        // advance the positions
        offset += bytes_to_copy;
        m_offset += bytes_to_copy;
        len -= bytes_to_copy;
        if (m_currentFrame->linesize[0] == m_offset) {
            std::unique_lock<std::mutex> lock(m_mutex);
            av_frame_free(&m_currentFrame);
            m_currentFrame = m_frames.front();
            m_frames.erase(m_frames.begin());
            m_offset = 0;
        }
    }
}

void AudioPlayer::audioCallback(void *userdata, Uint8 *stream, int len)
{
    AudioPlayer* player = static_cast<AudioPlayer*>(userdata);
    player->audioCallbackImpl(stream, len);
}

bool AudioPlayer::displayAudio(AVFrame *frame)
{
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        AVFrame* cloneFrame = av_frame_alloc();
        if (av_frame_ref(cloneFrame, frame) != 0) {
            av_frame_free(&cloneFrame);
            return false;
        }
        m_frames.emplace_back(cloneFrame);
    }
    m_available.notify_one();
    return true;
}

void AudioPlayer::closeSdl()
{
    SDL_AudioStatus status = SDL_GetAudioDeviceStatus(m_deviceId);
    if (status == SDL_AUDIO_PLAYING) {
        SDL_PauseAudioDevice(m_deviceId, 1);
    }
    if (m_deviceId != 0) {
        SDL_CloseAudioDevice(m_deviceId);
        m_deviceId = 0;
    }
    SDL_QuitSubSystem(SDL_INIT_AUDIO);
}

The full VideoWidget.h:

#include <QOpenGLWidget>
#include <QOpenGLFunctions>
#include <QOpenGLShaderProgram>
#include <QOpenGLTexture>
#include <QMutex>
#include <SDL.h>

struct AVFrame;

class VideoWidget : public QOpenGLWidget, protected QOpenGLFunctions {
    Q_OBJECT
public:
    explicit VideoWidget(QWidget *parent = nullptr);
    ~VideoWidget() override;
    // receive a decoded frame and refresh the display
    bool displayVideo(AVFrame* frame);
    // choose whether to preserve the aspect ratio
    void setKeepAspectRatio(bool keep);
    // clear the displayed content
    void clear();
protected:
    void initializeGL() override;
    void resizeGL(int w, int h) override;
    void paintGL() override;
private:
    bool initShader();
    void createTextures(int width, int height);
    void updateTextures(const AVFrame* frame);
    // recompute the vertex coordinates (aspect-ratio handling)
    void resetVertices(int windowWidth, int windowHeight);
private:
    QOpenGLShaderProgram *m_program = nullptr;  // shader program
    QOpenGLTexture* m_texture = nullptr;
    QMutex m_mutex;                             // lock for cross-thread access
    AVFrame* m_currentFrame = nullptr;          // frame waiting to be rendered
    int m_frameWidth = 0;                       // frame width
    int m_frameHeight = 0;                      // frame height
    bool m_hasNewFrame = false;                 // whether a new frame is pending
    bool m_keepAspectRatio = true;              // whether to preserve the aspect ratio
    GLfloat m_vertices[8] = {0};                // vertex coordinate array
    bool m_isPlaying = false;                   // playback state flag
    QString m_errorMsg;                         // error message
};

The full VideoWidget.cpp:

#include "VideoWidget.h"
#include <QDebug>
#include <QPainter>
#include <QEvent>
extern "C" {
#include "libavcodec/avcodec.h"
}

VideoWidget::VideoWidget(QWidget *parent)
    : QOpenGLWidget(parent) {
    setAutoFillBackground(false);
    setAttribute(Qt::WA_OpaquePaintEvent);
    setAcceptDrops(true);
}

VideoWidget::~VideoWidget() {
    makeCurrent();
    // release the texture
    if (m_texture) {
        delete m_texture;
        m_texture = nullptr;
    }
    delete m_program;
    // release the cached frame
    if (m_currentFrame) {
        av_frame_free(&m_currentFrame);
        m_currentFrame = nullptr;
    }
    doneCurrent();
}

void VideoWidget::setKeepAspectRatio(bool keep) {
    if (m_keepAspectRatio != keep) {
        m_keepAspectRatio = keep;
        resetVertices(width(), height());
        update();
    }
}

void VideoWidget::clear() {
    QMutexLocker locker(&m_mutex);
    if (m_currentFrame) {
        av_frame_free(&m_currentFrame);
        m_currentFrame = nullptr;
    }
    m_hasNewFrame = false;
    m_isPlaying = false;
    m_errorMsg.clear();
    update();
}

bool VideoWidget::displayVideo(AVFrame *frame) {
    QMutexLocker locker(&m_mutex);  // lock to protect shared state
    // release the old frame
    if (m_currentFrame) {
        av_frame_free(&m_currentFrame);
    }
    // reference the new frame (so the decoder can't free it out from under us)
    m_currentFrame = av_frame_alloc();
    if (av_frame_ref(m_currentFrame, frame) != 0) {
        qWarning() << "failed to reference the video frame";
        av_frame_free(&m_currentFrame);
        return false;
    }
    // update the frame dimensions (if they changed)
    m_frameWidth = m_currentFrame->width;
    m_frameHeight = m_currentFrame->height;
    resetVertices(m_frameWidth, m_frameHeight);
    m_hasNewFrame = true;
    m_isPlaying = true;
    m_errorMsg.clear();
    // trigger a repaint
    update();
    return true;
}

void VideoWidget::initializeGL() {
    initializeOpenGLFunctions();
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // black background
    glEnable(GL_TEXTURE_2D);
    // initialize the shaders (an RGB shader instead of a YUV one)
    if (!initShader()) {
        m_errorMsg = "shader initialization failed";
        qFatal("%s", m_errorMsg.toUtf8().constData());
    }
    // initialize the vertex coordinates
    resetVertices(width(), height());
    // enable the vertex attributes (0: vertex coords, 1: texture coords)
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
}

void VideoWidget::resizeGL(int w, int h) {
    glViewport(0, 0, w, h);  // set the viewport to the window size
    resetVertices(w, h);     // recompute the vertex coordinates
}

void VideoWidget::paintGL() {
    glClear(GL_COLOR_BUFFER_BIT); // clear the color buffer
    QMutexLocker locker(&m_mutex);
    if (!m_isPlaying && !m_hasNewFrame) {
        return;
    }
    if (m_hasNewFrame && m_currentFrame) {
        // create the RGB texture on first render or when the frame size changes
        if (!m_texture || m_frameWidth != m_currentFrame->width || m_frameHeight != m_currentFrame->height) {
            createTextures(m_frameWidth, m_frameHeight);
        }
        // upload the new RGB texture data
        updateTextures(m_currentFrame);
        m_hasNewFrame = false;
    }
    // bind the shader and draw (only a single RGB texture to bind)
    if (m_program && m_texture) {
        m_program->bind();
        m_texture->bind(0);
        m_program->setUniformValue("u_texture", 0);
        static const GLfloat texCoords[] = {
            0.0f, 1.0f,  // bottom left
            1.0f, 1.0f,  // bottom right
            0.0f, 0.0f,  // top left
            1.0f, 0.0f   // top right
        };
        // pass the vertex and texture coordinates
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, m_vertices);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, texCoords);
        // draw the quad (GL_TRIANGLE_STRIP renders the 4 vertices efficiently)
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        m_program->release();
        m_texture->release();
    }
}

bool VideoWidget::initShader() {
    m_program = new QOpenGLShaderProgram;
    // vertex shader: pass vertex and texture coordinates through
    const char* vertexShader = R"(
        #version 120
        attribute vec2 vertexIn;       // vertex coordinate (location 0)
        attribute vec2 textureIn;      // texture coordinate (location 1)
        varying vec2 textureOut;       // texture coordinate passed to the fragment shader
        void main() {
            gl_Position = vec4(vertexIn, 0.0, 1.0);
            textureOut = textureIn;
        })";
    // fragment shader: sample the RGB texture directly
    const char* fragmentShader = R"(
        #version 120
        #ifdef GL_ES
        precision mediump float; // precision fallback for mobile devices
        #endif
        varying vec2 textureOut;       // texture coordinate from the vertex shader
        uniform sampler2D u_texture;   // the RGB texture (only one)
        void main() {
            // sample the RGB texture directly (no YUV-to-RGB math needed)
            vec3 rgbColor = texture2D(u_texture, textureOut).rgb;
            // output the RGB color (alpha = 1.0, opaque)
            gl_FragColor = vec4(rgbColor, 1.0);
        })";
    // compile and link the shaders
    if (!m_program->addShaderFromSourceCode(QOpenGLShader::Vertex, vertexShader)) {
        qWarning() << "vertex shader error:" << m_program->log();
        return false;
    }
    if (!m_program->addShaderFromSourceCode(QOpenGLShader::Fragment, fragmentShader)) {
        qWarning() << "fragment shader error:" << m_program->log();
        return false;
    }
    if (!m_program->link()) {
        qWarning() << "shader link error:" << m_program->log();
        return false;
    }
    return m_program->bind();
}

void VideoWidget::createTextures(int width, int height) {
    // release the old texture (avoid a leak)
    if (m_texture) {
        delete m_texture;
        m_texture = nullptr;
    }
    // create a 2D texture for the frame data
    m_texture = new QOpenGLTexture(QOpenGLTexture::Target2D);
    m_texture->create();
    m_texture->setSize(width, height); // texture size = frame size
    m_texture->setFormat(QOpenGLTexture::RGBAFormat);
    // filtering: linear interpolation (smooth when scaled)
    m_texture->setMinMagFilters(QOpenGLTexture::Linear, QOpenGLTexture::Linear);
    // wrapping: clamp to edge (avoids artifacts at the texture border)
    m_texture->setWrapMode(QOpenGLTexture::ClampToEdge);
    m_texture->allocateStorage(); // allocate the texture memory
}

void VideoWidget::updateTextures(const AVFrame* frame) {
    if (!frame || frame->format != AV_PIX_FMT_RGB24) {
        m_errorMsg = "unsupported frame format (only RGB24 is supported)";
        qWarning() << m_errorMsg;
        return;
    }
    GLint prevRowLength;
    glGetIntegerv(GL_UNPACK_ROW_LENGTH, &prevRowLength);
    glBindTexture(GL_TEXTURE_2D, m_texture->textureId());
    int pixelPerRow = frame->linesize[0] / 3;
    glPixelStorei(GL_UNPACK_ROW_LENGTH, pixelPerRow); // pixels per source row
    glTexSubImage2D(GL_TEXTURE_2D, 0,           // texture target, mip level
                    0, 0,                       // texture offset (x, y)
                    frame->width, frame->height,// size of the region to update
                    GL_RGB,                     // pixel data format (RGB order)
                    GL_UNSIGNED_BYTE,           // pixel data type (unsigned bytes)
                    frame->data[0]              // RGB24 data pointer
    );
    // restore the previous pixel storage mode
    glPixelStorei(GL_UNPACK_ROW_LENGTH, prevRowLength);
    glBindTexture(GL_TEXTURE_2D, 0); // unbind the texture
}

void VideoWidget::resetVertices(int windowWidth, int windowHeight) {
    if (windowWidth <= 0 || windowHeight <= 0 || !m_keepAspectRatio ||
        m_frameWidth <= 0 || m_frameHeight <= 0) {
        // fill mode
        const GLfloat fullScreenVertices[] = {
            -1.0f, -1.0f,
             1.0f, -1.0f,
            -1.0f,  1.0f,
             1.0f,  1.0f
        };
        memcpy(m_vertices, fullScreenVertices, sizeof(fullScreenVertices));
        return;
    }
    // aspect-ratio-preserving scaling
    float videoAspect = static_cast<float>(m_frameWidth) / m_frameHeight;
    float windowAspect = static_cast<float>(windowWidth) / windowHeight;
    float scaleX = 1.0f;
    float scaleY = 1.0f;
    if (videoAspect > windowAspect) {
        // video is wider: scale by width
        scaleX = windowAspect / videoAspect;
    } else {
        // video is taller: scale by height
        scaleY = videoAspect / windowAspect;
    }
    // compute the vertex coordinates
    const GLfloat vertices[] = {
        -scaleX, -scaleY,
         scaleX, -scaleY,
        -scaleX,  scaleY,
         scaleX,  scaleY
    };
    memcpy(m_vertices, vertices, sizeof(vertices));
}

The full Timer.h:

#ifndef TIMER_H
#define TIMER_H
#include <mutex>
class Timer
{
public:
    Timer();
    void start();
    void pause();
    void stop();
    double elapsedtime();
private:
    double currentTimestamp();
private:
    std::mutex m_timeMutex;
    double m_lastclock;      // timestamp of the last update
    double m_elapsedtime;
};

#endif // TIMER_H

The full Timer.cpp:

#include "Timer.h"
#include <chrono>

Timer::Timer() {
    m_elapsedtime = 0;
}

void Timer::start()
{
    m_lastclock = currentTimestamp();
}

void Timer::pause()
{
    m_timeMutex.lock();
    m_elapsedtime += currentTimestamp() - m_lastclock;
    m_lastclock = currentTimestamp();
    m_timeMutex.unlock();
}

void Timer::stop()
{
    m_timeMutex.lock();
    m_elapsedtime = 0;
    m_timeMutex.unlock();
}

double Timer::elapsedtime()
{
    m_timeMutex.lock();
    m_elapsedtime += currentTimestamp() - m_lastclock;
    m_lastclock = currentTimestamp();
    m_timeMutex.unlock();
    return m_elapsedtime;
}

double Timer::currentTimestamp()
{
    auto now = std::chrono::system_clock::now();
    auto duration = now.time_since_epoch();
    return std::chrono::duration<double>(duration).count();
}
