All protocols now fully support the VP8/VP9/AV1 codecs; enhanced-rtmp adds Opus support (#4498)

Features implemented:
- RTP: add VP8, VP9 and AV1 encoding support
- Implement the extra_data interface required for MP4 recording
- Extend RTMP with support for Opus, VP8, VP9 and AV1

Known issues:
- With enhanced RTMP enabled, ffmpeg cannot yet play the VP8 format; the other formats are supported
- VP9 and AV1 tend to stutter at the start of playback and recover after a few seconds; the cause is unknown for now

---------

Co-authored-by: xia-chu <771730766@qq.com>
This commit is contained in:
mtdxc
2025-10-16 19:26:46 +08:00
committed by GitHub
parent 046bdecd1e
commit b003eb3eec
26 changed files with 1609 additions and 69 deletions

View File

@@ -17,8 +17,8 @@ jobs:
with:
vcpkgDirectory: '${{github.workspace}}/vcpkg'
vcpkgTriplet: x64-windows-static
# 2024.06.01
vcpkgGitCommitId: '47364fbc300756f64f7876b549d9422d5f3ec0d3'
# 2025.07.11
vcpkgGitCommitId: 'efcfaaf60d7ec57a159fc3110403d939bfb69729'
vcpkgArguments: 'openssl libsrtp[openssl] usrsctp'
- name: Build

View File

@@ -36,7 +36,7 @@
- [Who is using zlmediakit?](https://github.com/ZLMediaKit/ZLMediaKit/issues/511)
- Full IPv6 network support
- Multi-track mode supported (multiple video/audio tracks in one stream)
- All protocols support H264/H265/AAC/G711/OPUS/MP3; partial support for VP8/VP9/AV1/JPEG/MP3/H266/ADPCM/SVAC/G722/G723/G729
- All protocols support H264/H265/AAC/G711/OPUS/MP3/VP8/VP9/AV1; partial support for JPEG/H266/ADPCM/SVAC/G722/G723/G729
## Project positioning
@@ -57,7 +57,7 @@
- Server/client fully support Basic/Digest login authentication, with a fully asynchronous, configurable authentication interface
- H265 encoding supported
- Server supports RTSP push (both `rtp over udp` and `rtp over tcp`)
- Supports H264/H265/AAC/G711/OPUS/MJPEG/MP3 encodings; other encodings can be forwarded but not converted to other protocols
- Supports H264/H265/AAC/G711/OPUS/MJPEG/MP3/VP8/VP9/AV1 encodings; other encodings can be forwarded but not converted to other protocols
- RTMP[S]
- RTMP[S] playback server; supports converting RTSP/MP4/HLS to RTMP
@@ -70,25 +70,25 @@
- Supports H264/H265/AAC/G711/OPUS/MP3 encodings; other encodings can be forwarded but not converted to other protocols
- Supports [RTMP-H265](https://github.com/ksvc/FFmpeg/wiki)
- Supports [RTMP-OPUS](https://github.com/ZLMediaKit/ZLMediaKit/wiki/RTMP%E5%AF%B9H265%E5%92%8COPUS%E7%9A%84%E6%94%AF%E6%8C%81)
- Supports [enhanced-rtmp(H265)](https://github.com/veovera/enhanced-rtmp)
- Supports [enhanced-rtmp(H265/VP8/VP9/AV1/OPUS)](https://github.com/veovera/enhanced-rtmp)
- HLS
- Supports HLS file (mpegts/fmp4) generation, with a built-in HTTP file server
- Cookie-based tracking can model HLS playback as a long-lived connection, enabling on-demand HLS pulling, playback statistics and similar features
- Supports an HLS player; can pull HLS streams and convert them to rtsp/rtmp/mp4
- Supports H264/H265/AAC/G711/OPUS/MP3 encodings
- Supports H264/H265/AAC/G711/OPUS/MP3/VP8/VP9/AV1 encodings
- Multi-track mode supported
- TS
- Supports http[s]-ts live streaming
- Supports ws[s]-ts live streaming
- Supports H264/H265/AAC/G711/OPUS/MP3 encodings
- Supports H264/H265/AAC/G711/OPUS/MP3/VP8/VP9/AV1 encodings
- Multi-track mode supported
- fMP4
- Supports http[s]-fmp4 live streaming
- Supports ws[s]-fmp4 live streaming
- Supports H264/H265/AAC/G711/OPUS/MJPEG/MP3 encodings
- Supports H264/H265/AAC/G711/OPUS/MJPEG/MP3/VP8/VP9/AV1 encodings
- Multi-track mode supported
- HTTP[S] and WebSocket
@@ -103,7 +103,7 @@
- GB28181 and RTP push
- Supports a UDP/TCP RTP (PS/TS/ES) push server; streams can be converted to RTSP/RTMP/HLS and other protocols
- Supports converting RTSP/RTMP/HLS and other protocols into an RTP push client, with TCP/UDP modes, corresponding RESTful APIs, and active/passive modes
- Supports H264/H265/AAC/G711/OPUS/MP3 encodings
- Supports H264/H265/AAC/G711/OPUS/MP3/VP8/VP9/AV1 encodings
- Supports es/ps/ts/ehome RTP push
- Supports es/ps RTP relay
- Supports GB28181 active pull mode
@@ -113,7 +113,7 @@
- MP4 VOD and recording
- Supports recording to FLV/HLS/MP4
- RTSP/RTMP/HTTP-FLV/WS-FLV support MP4 file VOD with seek
- Supports H264/H265/AAC/G711/OPUS/MP3 encodings
- Supports H264/H265/AAC/G711/OPUS/MP3/VP8/VP9/AV1 encodings
- Multi-track mode supported
- WebRTC
@@ -131,13 +131,13 @@
- Supports webrtc over tcp mode
- Excellent NACK and jitter buffer algorithms with outstanding packet-loss resilience
- Supports the whip/whep protocols
- Supported encodings are the same as for the rtsp protocol
- [Supports ice-full; can act as a webrtc client to pull, push, and run in p2p mode](./webrtc/USAGE.md)
- [SRT support](./srt/srt.md)
- Others
- Rich RESTful APIs and web hook events
- Simple telnet debugging supported
- Hot reload of the configuration file
- Hot reload of the configuration file and SSL certificates
- Traffic statistics, push/pull stream authentication and other events
- Virtual host support for isolating different domains
- On-demand stream pulling; pulls are shut down automatically when nobody is watching
@@ -172,12 +172,9 @@
- 1. Supports pulling rtsp-ts/hls/http-ts/rtp-multicast/udp-multicast streams and converting protocols; supports a TS passthrough mode (no demuxing) to the rtsp-ts/hls/http-ts/srt protocols.
- 2. Supports receiving rtsp-ts/srt pushes; supports a TS passthrough mode (no demuxing) to the rtsp-ts/hls/http-ts/srt protocols.
- 3. The features above also support demuxing TS into ES streams and converting to rtsp/rtmp/flv/http-ts/hls/hls-fmp4/mp4/fmp4/webrtc and other protocols.
- VP9/AV1 version
- Adds full av1/vp9 encoding support; the rtmp/rtsp/ts/ps/hls/mp4/fmp4 and other protocols fully support av1/vp9.
- Others
- Supports s3/minio cloud storage with direct in-memory stream writes, solving the recording-file IO bottleneck.
- Supports s3/minio cloud storage with direct in-memory stream writes, solving the recording-file IO bottleneck; supports reading and downloading from s3 cloud storage over http.
- Supports ONVIF device scanning and adding pull streams.
- Supports the GA1400 view API.
@@ -224,7 +221,6 @@ bash build_docker_images.sh
- [jessibuca](https://github.com/langhuihui/jessibuca) A wasm-based player with H265 support
- [wsPlayer](https://github.com/v354412101/wsPlayer) An MSE-based websocket-fmp4 player
- [BXC_gb28181Player](https://github.com/any12345com/BXC_gb28181Player) A C++ video stream player supporting the GB28181 national-standard protocol
- [RTCPlayer](https://github.com/leo94666/RTCPlayer) An Android-based RTC player
- Web management sites
- [zlm_webassist](https://github.com/1002victor/zlm_webassist) This project's companion web management project with separated front and back ends

View File

@@ -10,23 +10,43 @@
#include "AV1.h"
#include "AV1Rtp.h"
#include "VpxRtmp.h"
#include "Extension/Factory.h"
#include "Extension/CommonRtp.h"
#include "Extension/CommonRtmp.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
Sdp::Ptr AV1Track::getSdp(uint8_t payload_type) const {
return std::make_shared<DefaultSdp>(payload_type, *this);
bool AV1Track::inputFrame(const Frame::Ptr &frame) {
char *dataPtr = frame->data() + frame->prefixSize();
if (0 == aom_av1_codec_configuration_record_init(&_context, dataPtr, frame->size() - frame->prefixSize())) {
_width = _context.width;
_height = _context.height;
//InfoL << _width << "x" << _height;
}
return VideoTrackImp::inputFrame(frame);
}
Track::Ptr AV1Track::clone() const {
return std::make_shared<AV1Track>(*this);
}
Buffer::Ptr AV1Track::getExtraData() const {
if (_context.bytes <= 0)
return nullptr;
auto ret = BufferRaw::create(4 + _context.bytes);
ret->setSize(aom_av1_codec_configuration_record_save(&_context, (uint8_t *)ret->data(), ret->getCapacity()));
return ret;
}
void AV1Track::setExtraData(const uint8_t *data, size_t size) {
if (aom_av1_codec_configuration_record_load(data, size, &_context) > 0) {
_width = _context.width;
_height = _context.height;
}
}
namespace {
CodecId getCodec() {
@@ -34,8 +54,7 @@ CodecId getCodec() {
}
Track::Ptr getTrackByCodecId(int sample_rate, int channels, int sample_bit) {
// AV1 is a video codec; these parameters actually mean width, height, fps
return std::make_shared<AV1Track>(sample_rate, channels, sample_bit);
return std::make_shared<AV1Track>();
}
Track::Ptr getTrackBySdp(const SdpTrack::Ptr &track) {
@@ -51,15 +70,15 @@ RtpCodec::Ptr getRtpDecoderByCodecId() {
}
RtmpCodec::Ptr getRtmpEncoderByTrack(const Track::Ptr &track) {
return std::make_shared<CommonRtmpEncoder>(track);
return std::make_shared<VpxRtmpEncoder>(track);
}
RtmpCodec::Ptr getRtmpDecoderByTrack(const Track::Ptr &track) {
return std::make_shared<CommonRtmpDecoder>(track);
return std::make_shared<VpxRtmpDecoder>(track);
}
Frame::Ptr getFrameFromPtr(const char *data, size_t bytes, uint64_t dts, uint64_t pts) {
return std::make_shared<FrameFromPtr>(CodecAV1, (char *)data, bytes, dts, pts);
return std::make_shared<AV1FrameNoCacheAble>((char *)data, bytes, dts, pts, 0);
}
} // namespace
@@ -73,4 +92,4 @@ CodecPlugin av1_plugin = { getCodec,
getRtmpDecoderByTrack,
getFrameFromPtr };
}//namespace mediakit
} // namespace mediakit

View File

@@ -13,9 +13,35 @@
#include "Extension/Frame.h"
#include "Extension/Track.h"
#include "aom-av1.h"
namespace mediakit {
template <typename Parent>
class AV1FrameHelper : public Parent {
public:
friend class FrameImp;
//friend class toolkit::ResourcePool_l<Av1FrameHelper>;
using Ptr = std::shared_ptr<AV1FrameHelper>;
template <typename... ARGS>
AV1FrameHelper(ARGS &&...args)
: Parent(std::forward<ARGS>(args)...) {
this->_codec_id = CodecAV1;
}
bool keyFrame() const override {
auto ptr = (uint8_t *) this->data() + this->prefixSize();
return (*ptr & 0x78) >> 3 == 1;
}
bool configFrame() const override { return false; }
bool dropAble() const override { return false; }
bool decodeAble() const override { return true; }
};
/// AV1 frame classes
using AV1Frame = AV1FrameHelper<FrameImp>;
using AV1FrameNoCacheAble = AV1FrameHelper<FrameFromPtr>;
/**
 * AV1 video track
*/
@@ -23,12 +49,17 @@ class AV1Track : public VideoTrackImp {
public:
using Ptr = std::shared_ptr<AV1Track>;
AV1Track(int width = 0, int height = 0, int fps = 0) : VideoTrackImp(CodecAV1, width, height, fps) {}
AV1Track() : VideoTrackImp(CodecAV1) {}
private:
Sdp::Ptr getSdp(uint8_t payload_type) const override;
Track::Ptr clone() const override;
bool inputFrame(const Frame::Ptr &frame) override;
toolkit::Buffer::Ptr getExtraData() const override;
void setExtraData(const uint8_t *data, size_t size) override;
protected:
aom_av1_t _context = {0};
};
}//namespace mediakit
#endif //ZLMEDIAKIT_AV1_H
} // namespace mediakit
#endif
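The `keyFrame()` override above inspects the first OBU header byte: bits 6..3 carry `obu_type`, and type 1 (`OBU_SEQUENCE_HEADER`) is treated as the keyframe marker. A minimal sketch of that header layout (struct and helper names here are illustrative, not part of ZLMediaKit):

```cpp
#include <cassert>
#include <cstdint>

// One-byte OBU header (AV1 spec, section 5.3.2):
// forbidden(1) | obu_type(4) | extension_flag(1) | has_size_field(1) | reserved(1)
struct ObuHeader {
    uint8_t type;        // obu_type, 4 bits
    bool has_extension;  // obu_extension_flag
    bool has_size_field; // obu_has_size_field
};

inline ObuHeader parse_obu_header(uint8_t b) {
    ObuHeader h;
    h.type = static_cast<uint8_t>((b & 0x78) >> 3); // same mask as keyFrame() above
    h.has_extension = (b & 0x04) != 0;
    h.has_size_field = (b & 0x02) != 0;
    return h;
}

constexpr uint8_t kObuSequenceHeader = 1; // keyframe marker used by AV1FrameHelper
```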

View File

@@ -7,7 +7,7 @@
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "AV1.h"
#include "AV1Rtp.h"
#include <algorithm>
#include <cstring>
@@ -309,8 +309,7 @@ AV1RtpDecoder::AV1RtpDecoder() {
}
void AV1RtpDecoder::obtainFrame() {
_frame = FrameImp::create();
_frame->_codec_id = CodecAV1;
_frame = FrameImp::create<AV1Frame>();
}
AV1RtpDecoder::AggregationHeader AV1RtpDecoder::parseAggregationHeader(uint8_t header) {
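The header handled by `parseAggregationHeader` is the one-byte aggregation header from the AV1 RTP payload format: `|Z|Y|W|W|N|-|-|-|`, where Z marks a fragment continuing the previous packet, Y marks an OBU element that continues in the next packet, W is a 2-bit OBU element count, and N flags the first packet of a new coded video sequence. A sketch of the bit layout (struct and function names are illustrative, not ZLMediaKit's):

```cpp
#include <cassert>
#include <cstdint>

// AV1 RTP aggregation header: |Z|Y|W|W|N|-|-|-|
struct Av1AggHeader {
    bool z;    // Z: first OBU element continues a fragment from the previous packet
    bool y;    // Y: last OBU element continues in the next packet
    uint8_t w; // W: number of OBU elements (0 means length fields are present)
    bool n;    // N: first packet of a new coded video sequence
};

inline Av1AggHeader parse_av1_agg_header(uint8_t b) {
    return { (b & 0x80) != 0, (b & 0x40) != 0,
             static_cast<uint8_t>((b >> 4) & 0x03), (b & 0x08) != 0 };
}
```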

View File

@@ -13,18 +13,35 @@
#include "Extension/Factory.h"
#include "Extension/CommonRtp.h"
#include "Extension/CommonRtmp.h"
#include "riff-acm.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
Track::Ptr G711Track::clone() const {
return std::make_shared<G711Track>(*this);
Buffer::Ptr G711Track::getExtraData() const {
struct wave_format_t wav = {0};
wav.wFormatTag = getCodecId() == CodecG711A ? WAVE_FORMAT_ALAW : WAVE_FORMAT_MULAW;
wav.nChannels = getAudioChannel();
wav.nSamplesPerSec = getAudioSampleRate();
wav.nAvgBytesPerSec = 8000;
wav.nBlockAlign = 1;
wav.wBitsPerSample = 8;
auto buff = BufferRaw::create(18 + wav.cbSize);
wave_format_save(&wav, (uint8_t*)buff->data(), buff->size());
return buff;
}
Sdp::Ptr G711Track::getSdp(uint8_t payload_type) const {
return std::make_shared<DefaultSdp>(payload_type, *this);
void G711Track::setExtraData(const uint8_t *data, size_t size) {
struct wave_format_t wav;
if (wave_format_load(data, size, &wav) > 0) {
// Successfully parsed WAVE format header
_sample_rate = wav.nSamplesPerSec;
_channels = wav.nChannels;
_codecid = (wav.wFormatTag == WAVE_FORMAT_ALAW) ? CodecG711A : CodecG711U;
} else {
WarnL << "Failed to parse G711 extra data";
}
}
namespace {

View File

@@ -18,19 +18,16 @@ namespace mediakit{
/**
* G711音频通道
* G711 audio channel
* [AUTO-TRANSLATED:57f8bc08]
*/
class G711Track : public AudioTrackImp{
public:
using Ptr = std::shared_ptr<G711Track>;
G711Track(CodecId codecId, int sample_rate, int channels, int sample_bit) : AudioTrackImp(codecId, 8000, 1, 16) {}
toolkit::Buffer::Ptr getExtraData() const override;
void setExtraData(const uint8_t *data, size_t size) override;
private:
Sdp::Ptr getSdp(uint8_t payload_type) const override;
Track::Ptr clone() const override;
Track::Ptr clone() const override { return std::make_shared<G711Track>(*this); }
};
}//namespace mediakit

View File

@@ -133,7 +133,7 @@ static inline void bytestream2_put_be16(PutByteContext *p, uint16_t value) {
}
}
static inline void bytestream2_put_be24(PutByteContext *p, uint16_t value) {
static inline void bytestream2_put_be24(PutByteContext *p, uint32_t value) {
if (!p->eof && (p->buffer_end - p->buffer >= 2)) {
p->buffer[0] = value >> 16;
p->buffer[1] = value >> 8;

View File

@@ -11,16 +11,32 @@
#include "Opus.h"
#include "Extension/Factory.h"
#include "Extension/CommonRtp.h"
#include "Extension/CommonRtmp.h"
#include "OpusRtmp.h"
#include "opus-head.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
void OpusTrack::setExtraData(const uint8_t *data, size_t size) {
opus_head_t header;
if (opus_head_load(data, size, &header) > 0) {
// Successfully parsed Opus header
_sample_rate = header.input_sample_rate;
_channels = header.channels;
}
}
Sdp::Ptr OpusTrack::getSdp(uint8_t payload_type) const {
return std::make_shared<DefaultSdp>(payload_type, *this);
Buffer::Ptr OpusTrack::getExtraData() const {
struct opus_head_t opus = { 0 };
opus.version = 1;
opus.channels = getAudioChannel();
opus.input_sample_rate = getAudioSampleRate();
// opus.pre_skip = 120;
opus.channel_mapping_family = 0;
auto ret = BufferRaw::create(29);
ret->setSize(opus_head_save(&opus, (uint8_t *)ret->data(), ret->getCapacity()));
return ret;
}
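`opus_head_save` above writes an OpusHead identification header; with `channel_mapping_family` 0 that header is 19 bytes (RFC 7845 §5.1). A sketch of the layout (an illustrative helper, not the media-server implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// OpusHead (RFC 7845): "OpusHead"(8) version(1) channels(1) pre_skip(2,LE)
// input_sample_rate(4,LE) output_gain(2,LE) mapping_family(1) = 19 bytes
inline std::vector<uint8_t> make_opus_head(uint8_t channels, uint32_t rate, uint16_t pre_skip) {
    std::vector<uint8_t> buf(19, 0);
    std::memcpy(buf.data(), "OpusHead", 8);
    buf[8] = 1; // version
    buf[9] = channels;
    buf[10] = pre_skip & 0xFF;
    buf[11] = pre_skip >> 8;
    for (int i = 0; i < 4; ++i) buf[12 + i] = (rate >> (8 * i)) & 0xFF;
    // output_gain (buf[16..17]) stays 0; mapping_family (buf[18]) stays 0
    return buf;
}
```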
namespace {
@@ -46,11 +62,11 @@ RtpCodec::Ptr getRtpDecoderByCodecId() {
}
RtmpCodec::Ptr getRtmpEncoderByTrack(const Track::Ptr &track) {
return std::make_shared<CommonRtmpEncoder>(track);
return std::make_shared<OpusRtmpEncoder>(track);
}
RtmpCodec::Ptr getRtmpDecoderByTrack(const Track::Ptr &track) {
return std::make_shared<CommonRtmpDecoder>(track);
return std::make_shared<OpusRtmpDecoder>(track);
}
Frame::Ptr getFrameFromPtr(const char *data, size_t bytes, uint64_t dts, uint64_t pts) {

View File

@@ -19,23 +19,20 @@ namespace mediakit {
/**
* Opus帧音频通道
* Opus frame audio channel
* [AUTO-TRANSLATED:522e95da]
*/
class OpusTrack : public AudioTrackImp{
class OpusTrack : public AudioTrackImp {
public:
using Ptr = std::shared_ptr<OpusTrack>;
OpusTrack() : AudioTrackImp(CodecOpus,48000,2,16){}
private:
// 克隆该Track [AUTO-TRANSLATED:9a15682a]
// Clone this Track
Track::Ptr clone() const override {
return std::make_shared<OpusTrack>(*this);
}
// 生成sdp [AUTO-TRANSLATED:663a9367]
// Generate sdp
Sdp::Ptr getSdp(uint8_t payload_type) const override ;
toolkit::Buffer::Ptr getExtraData() const override;
void setExtraData(const uint8_t *data, size_t size) override;
};
}//namespace mediakit

94
ext-codec/OpusRtmp.cpp Normal file
View File

@@ -0,0 +1,94 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "OpusRtmp.h"
#include "Rtmp/utils.h"
#include "Common/config.h"
#include "Extension/Factory.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
void OpusRtmpDecoder::inputRtmp(const RtmpPacket::Ptr &pkt) {
auto data = pkt->data();
int size = pkt->size();
auto flags = (uint8_t)data[0];
auto codec = (RtmpAudioCodec)(flags >> 4);
auto type = flags & 0x0F;
data++; size--;
if (codec == RtmpAudioCodec::FOURCC) {
// @todo parse enhanced audio header and check fourcc
data += 4;
size -= 4;
if (type == (uint8_t)RtmpPacketType::PacketTypeSequenceStart) {
getTrack()->setExtraData((uint8_t *)data, size);
} else {
outputFrame(data, size, pkt->time_stamp, pkt->time_stamp);
}
} else {
outputFrame(data, size, pkt->time_stamp, pkt->time_stamp);
}
}
void OpusRtmpDecoder::outputFrame(const char *data, size_t size, uint32_t dts, uint32_t pts) {
RtmpCodec::inputFrame(Factory::getFrameFromPtr(getTrack()->getCodecId(), data, size, dts, pts));
}
////////////////////////////////////////////////////////////////////////
OpusRtmpEncoder::OpusRtmpEncoder(const Track::Ptr &track) : RtmpCodec(track) {
_enhanced = mINI::Instance()[Rtmp::kEnhanced];
}
bool OpusRtmpEncoder::inputFrame(const Frame::Ptr &frame) {
auto packet = RtmpPacket::create();
if (_enhanced) {
uint8_t flags = ((uint8_t)RtmpAudioCodec::FOURCC << 4) | (uint8_t)RtmpPacketType::PacketTypeCodedFrames;
packet->buffer.push_back(flags);
packet->buffer.append("Opus", 4);
} else {
uint8_t flags = getAudioRtmpFlags(getTrack());
packet->buffer.push_back(flags);
}
packet->buffer.append(frame->data(), frame->size());
packet->body_size = packet->buffer.size();
packet->time_stamp = frame->dts();
packet->chunk_id = CHUNK_AUDIO;
packet->stream_index = STREAM_MEDIA;
packet->type_id = MSG_AUDIO;
// Output rtmp packet
RtmpCodec::inputRtmp(packet);
return true;
}
void OpusRtmpEncoder::makeConfigPacket() {
auto extra_data = getTrack()->getExtraData();
if (!extra_data || !extra_data->size())
return;
auto pkt = RtmpPacket::create();
if (_enhanced) {
uint8_t flags = ((uint8_t)RtmpAudioCodec::FOURCC << 4) | (uint8_t)RtmpPacketType::PacketTypeSequenceStart;
pkt->buffer.push_back(flags);
pkt->buffer.append("Opus", 4);
} else {
uint8_t flags = getAudioRtmpFlags(getTrack());
pkt->buffer.push_back(flags);
}
pkt->buffer.append(extra_data->data(), extra_data->size());
pkt->body_size = pkt->buffer.size();
pkt->chunk_id = CHUNK_AUDIO;
pkt->stream_index = STREAM_MEDIA;
pkt->time_stamp = 0;
pkt->type_id = MSG_AUDIO;
RtmpCodec::inputRtmp(pkt);
}
} // namespace mediakit
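When `_enhanced` is set, both `inputFrame()` and `makeConfigPacket()` above begin the audio message with an enhanced-RTMP ExAudioTagHeader: sound format 9 (FOURCC) in the high nibble, the AudioPacketType in the low nibble, then the 4-byte fourcc. A sketch of those first five bytes (assumes `RtmpAudioCodec::FOURCC == 9`, `PacketTypeSequenceStart == 0` and `PacketTypeCodedFrames == 1`, per the enhanced-rtmp spec; helper names are illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

constexpr uint8_t kRtmpAudioFourCC = 9;     // ExAudioTagHeader sound format
constexpr uint8_t kPacketSequenceStart = 0; // carries codec config (e.g. OpusHead)
constexpr uint8_t kPacketCodedFrames = 1;   // carries raw coded frames

// First five bytes of an enhanced-RTMP audio message for a fourcc codec.
inline std::string make_ex_audio_header(uint8_t packet_type, const char fourcc[4]) {
    std::string out;
    out.push_back(static_cast<char>((kRtmpAudioFourCC << 4) | (packet_type & 0x0F)));
    out.append(fourcc, 4);
    return out;
}
```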

51
ext-codec/OpusRtmp.h Normal file
View File

@@ -0,0 +1,51 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_OPUS_RTMPCODEC_H
#define ZLMEDIAKIT_OPUS_RTMPCODEC_H
#include "Rtmp/RtmpCodec.h"
#include "Extension/Track.h"
namespace mediakit {
/**
 * RTMP decoder class
 * Demultiplexes Opus over RTMP into OpusFrame
*/
class OpusRtmpDecoder : public RtmpCodec {
public:
using Ptr = std::shared_ptr<OpusRtmpDecoder>;
OpusRtmpDecoder(const Track::Ptr &track) : RtmpCodec(track) {}
void inputRtmp(const RtmpPacket::Ptr &rtmp) override;
protected:
void outputFrame(const char *data, size_t size, uint32_t dts, uint32_t pts);
};
/**
 * RTMP packetizer class
*/
class OpusRtmpEncoder : public RtmpCodec {
bool _enhanced = false;
public:
using Ptr = std::shared_ptr<OpusRtmpEncoder>;
OpusRtmpEncoder(const Track::Ptr &track);
bool inputFrame(const Frame::Ptr &frame) override;
void makeConfigPacket() override;
};
} // namespace mediakit
#endif // ZLMEDIAKIT_OPUS_RTMPCODEC_H

79
ext-codec/VP8.cpp Normal file
View File

@@ -0,0 +1,79 @@
#include "VP8.h"
#include "VP8Rtp.h"
#include "VpxRtmp.h"
#include "Extension/Factory.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
bool VP8Track::inputFrame(const Frame::Ptr &frame) {
char *dataPtr = frame->data() + frame->prefixSize();
if (frame->keyFrame()) {
if (frame->size() - frame->prefixSize() < 10)
return false;
_width = ((dataPtr[7] << 8) + dataPtr[6]) & 0x3FFF;
_height = ((dataPtr[9] << 8) + dataPtr[8]) & 0x3FFF;
webm_vpx_codec_configuration_record_from_vp8(&_vpx, &_width, &_height, dataPtr, frame->size() - frame->prefixSize());
// InfoL << _width << "x" << _height;
}
return VideoTrackImp::inputFrame(frame);
}
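The width/height extraction in `inputFrame()` follows the VP8 uncompressed data chunk layout (RFC 6386 §9.1): a 3-byte frame tag whose lowest bit is 0 for key frames, then on key frames the start code `0x9D 0x01 0x2A` followed by 14-bit width and height fields (each with a 2-bit scale). A standalone sketch (names are illustrative; the start-code check is an extra assumption the ZLMediaKit code omits):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

struct Vp8FrameInfo {
    bool key_frame;
    int width;  // valid only on key frames
    int height;
};

// Parse the VP8 uncompressed header; key frames need at least 10 bytes.
inline bool parse_vp8_header(const uint8_t *p, size_t size, Vp8FrameInfo &out) {
    if (size < 3) return false;
    out.key_frame = (p[0] & 0x01) == 0; // frame_type bit is 0 for key frames
    out.width = out.height = 0;
    if (!out.key_frame) return true;
    if (size < 10 || p[3] != 0x9D || p[4] != 0x01 || p[5] != 0x2A) return false;
    out.width = ((p[7] << 8) | p[6]) & 0x3FFF;  // same masks as VP8Track::inputFrame
    out.height = ((p[9] << 8) | p[8]) & 0x3FFF;
    return true;
}
```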
Buffer::Ptr VP8Track::getExtraData() const {
auto ret = BufferRaw::create(8 + _vpx.codec_intialization_data_size);
ret->setSize(webm_vpx_codec_configuration_record_save(&_vpx, (uint8_t *)ret->data(), ret->getCapacity()));
return ret;
}
void VP8Track::setExtraData(const uint8_t *data, size_t size) {
webm_vpx_codec_configuration_record_load(data, size, &_vpx);
}
namespace {
CodecId getCodec() {
return CodecVP8;
}
Track::Ptr getTrackByCodecId(int sample_rate, int channels, int sample_bit) {
return std::make_shared<VP8Track>();
}
Track::Ptr getTrackBySdp(const SdpTrack::Ptr &track) {
return std::make_shared<VP8Track>();
}
RtpCodec::Ptr getRtpEncoderByCodecId(uint8_t pt) {
return std::make_shared<VP8RtpEncoder>();
}
RtpCodec::Ptr getRtpDecoderByCodecId() {
return std::make_shared<VP8RtpDecoder>();
}
RtmpCodec::Ptr getRtmpEncoderByTrack(const Track::Ptr &track) {
return std::make_shared<VpxRtmpEncoder>(track);
}
RtmpCodec::Ptr getRtmpDecoderByTrack(const Track::Ptr &track) {
return std::make_shared<VpxRtmpDecoder>(track);
}
Frame::Ptr getFrameFromPtr(const char *data, size_t bytes, uint64_t dts, uint64_t pts) {
return std::make_shared<VP8FrameNoCacheAble>((char *)data, bytes, dts, pts, 0);
}
} // namespace
CodecPlugin vp8_plugin = { getCodec,
getTrackByCodecId,
getTrackBySdp,
getRtpEncoderByCodecId,
getRtpDecoderByCodecId,
getRtmpEncoderByTrack,
getRtmpDecoderByTrack,
getFrameFromPtr };
} // namespace mediakit

49
ext-codec/VP8.h Normal file
View File

@@ -0,0 +1,49 @@
#ifndef ZLMEDIAKIT_VP8_H
#define ZLMEDIAKIT_VP8_H
#include "Extension/Frame.h"
#include "Extension/Track.h"
#include "webm-vpx.h"
namespace mediakit {
template <typename Parent>
class VP8FrameHelper : public Parent {
public:
friend class FrameImp;
//friend class toolkit::ResourcePool_l<VP8FrameHelper>;
using Ptr = std::shared_ptr<VP8FrameHelper>;
template <typename... ARGS>
VP8FrameHelper(ARGS &&...args)
: Parent(std::forward<ARGS>(args)...) {
this->_codec_id = CodecVP8;
}
bool keyFrame() const override {
auto ptr = (uint8_t *) this->data() + this->prefixSize();
return !(*ptr & 0x01);
}
bool configFrame() const override { return false; }
bool dropAble() const override { return false; }
bool decodeAble() const override { return true; }
};
/// VP8 frame classes
using VP8Frame = VP8FrameHelper<FrameImp>;
using VP8FrameNoCacheAble = VP8FrameHelper<FrameFromPtr>;
class VP8Track : public VideoTrackImp {
public:
VP8Track() : VideoTrackImp(CodecVP8) {}
Track::Ptr clone() const override { return std::make_shared<VP8Track>(*this); }
bool inputFrame(const Frame::Ptr &frame) override;
toolkit::Buffer::Ptr getExtraData() const override;
void setExtraData(const uint8_t *data, size_t size) override;
private:
webm_vpx_t _vpx = {0};
};
} // namespace mediakit
#endif

356
ext-codec/VP8Rtp.cpp Normal file
View File

@@ -0,0 +1,356 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "VP8Rtp.h"
#include "Extension/Frame.h"
#include "Common/config.h"
namespace mediakit{
const int16_t kNoPictureId = -1;
const int8_t kNoTl0PicIdx = -1;
const uint8_t kNoTemporalIdx = 0xFF;
const int kNoKeyIdx = -1;
// internal bits
constexpr int kXBit = 0x80;
constexpr int kNBit = 0x20;
constexpr int kSBit = 0x10;
constexpr int kKeyIdxField = 0x1F;
constexpr int kIBit = 0x80;
constexpr int kLBit = 0x40;
constexpr int kTBit = 0x20;
constexpr int kKBit = 0x10;
constexpr int kYBit = 0x20;
constexpr int kFailedToParse = 0;
// VP8 payload descriptor
// https://datatracker.ietf.org/doc/html/rfc7741#section-4.2
//
// 0 1 2 3 4 5 6 7
// +-+-+-+-+-+-+-+-+
// |X|R|N|S|R| PID | (REQUIRED)
// +-+-+-+-+-+-+-+-+
// X: |I|L|T|K| RSV | (OPTIONAL)
// +-+-+-+-+-+-+-+-+
// I: |M| PictureID | (OPTIONAL)
// +-+-+-+-+-+-+-+-+
// | PictureID |
// +-+-+-+-+-+-+-+-+
// L: | TL0PICIDX | (OPTIONAL)
// +-+-+-+-+-+-+-+-+
// T/K: |TID|Y| KEYIDX | (OPTIONAL)
// +-+-+-+-+-+-+-+-+
struct RTPVideoHeaderVP8 {
void InitRTPVideoHeaderVP8();
int Size() const;
int Write(uint8_t *data, int size) const;
int Read(const uint8_t *data, int data_length);
bool isFirstPacket() const { return beginningOfPartition && partitionId == 0; }
friend bool operator!=(const RTPVideoHeaderVP8 &lhs, const RTPVideoHeaderVP8 &rhs) { return !(lhs == rhs); }
friend bool operator==(const RTPVideoHeaderVP8 &lhs, const RTPVideoHeaderVP8 &rhs) {
return lhs.nonReference == rhs.nonReference && lhs.pictureId == rhs.pictureId && lhs.tl0PicIdx == rhs.tl0PicIdx && lhs.temporalIdx == rhs.temporalIdx
&& lhs.layerSync == rhs.layerSync && lhs.keyIdx == rhs.keyIdx && lhs.partitionId == rhs.partitionId
&& lhs.beginningOfPartition == rhs.beginningOfPartition;
}
bool nonReference; // Frame is discardable.
int16_t pictureId; // Picture ID index, 15 bits;
// kNoPictureId if PictureID does not exist.
int8_t tl0PicIdx; // TL0PIC_IDX, 8 bits;
// kNoTl0PicIdx means no value provided.
uint8_t temporalIdx; // Temporal layer index, or kNoTemporalIdx.
bool layerSync; // This frame is a layer sync frame.
// Disabled if temporalIdx == kNoTemporalIdx.
int8_t keyIdx; // 5 bits; kNoKeyIdx means not used.
int8_t partitionId; // VP8 partition ID
bool beginningOfPartition; // True if this packet is the first
// in a VP8 partition. Otherwise false
};
void RTPVideoHeaderVP8::InitRTPVideoHeaderVP8() {
nonReference = false;
pictureId = kNoPictureId;
tl0PicIdx = kNoTl0PicIdx;
temporalIdx = kNoTemporalIdx;
layerSync = false;
keyIdx = kNoKeyIdx;
partitionId = 0;
beginningOfPartition = false;
}
int RTPVideoHeaderVP8::Size() const {
bool tid_present = this->temporalIdx != kNoTemporalIdx;
bool keyid_present = this->keyIdx != kNoKeyIdx;
bool tl0_pid_present = this->tl0PicIdx != kNoTl0PicIdx;
bool pid_present = this->pictureId != kNoPictureId;
int ret = 2;
if (pid_present)
ret += 2;
if (tl0_pid_present)
ret++;
if (tid_present || keyid_present)
ret++;
return ret == 2 ? 1 : ret;
}
int RTPVideoHeaderVP8::Write(uint8_t *data, int size) const {
int ret = 0;
bool tid_present = this->temporalIdx != kNoTemporalIdx;
bool keyid_present = this->keyIdx != kNoKeyIdx;
bool tl0_pid_present = this->tl0PicIdx != kNoTl0PicIdx;
bool pid_present = this->pictureId != kNoPictureId;
uint8_t x_field = 0;
if (pid_present)
x_field |= kIBit;
if (tl0_pid_present)
x_field |= kLBit;
if (tid_present)
x_field |= kTBit;
if (keyid_present)
x_field |= kKBit;
uint8_t flags = 0;
if (x_field != 0)
flags |= kXBit;
if (this->nonReference)
flags |= kNBit;
// Create header as first packet in the frame. NextPacket() will clear it
// after first use.
flags |= kSBit;
data[ret++] = flags;
if (x_field == 0) {
return ret;
}
data[ret++] = x_field;
if (pid_present) {
const uint16_t pic_id = static_cast<uint16_t>(this->pictureId);
data[ret++] = (0x80 | ((pic_id >> 8) & 0x7F));
data[ret++] = (pic_id & 0xFF);
}
if (tl0_pid_present) {
data[ret++] = this->tl0PicIdx;
}
if (tid_present || keyid_present) {
uint8_t data_field = 0;
if (tid_present) {
data_field |= this->temporalIdx << 6;
if (this->layerSync)
data_field |= kYBit;
}
if (keyid_present) {
data_field |= (this->keyIdx & kKeyIdxField);
}
data[ret++] = data_field;
}
return ret;
}
int RTPVideoHeaderVP8::Read(const uint8_t *data, int data_length) {
// RTC_DCHECK_GT(data_length, 0);
int parsed_bytes = 0;
// Parse mandatory first byte of payload descriptor.
bool extension = (*data & 0x80) ? true : false; // X bit
this->nonReference = (*data & 0x20) ? true : false; // N bit
this->beginningOfPartition = (*data & 0x10) ? true : false; // S bit
this->partitionId = (*data & 0x07); // PID field
data++;
parsed_bytes++;
data_length--;
if (!extension)
return parsed_bytes;
if (data_length == 0)
return kFailedToParse;
// Optional X field is present.
bool has_picture_id = (*data & 0x80) ? true : false; // I bit
bool has_tl0_pic_idx = (*data & 0x40) ? true : false; // L bit
bool has_tid = (*data & 0x20) ? true : false; // T bit
bool has_key_idx = (*data & 0x10) ? true : false; // K bit
// Advance data and decrease remaining payload size.
data++;
parsed_bytes++;
data_length--;
if (has_picture_id) {
if (data_length == 0)
return kFailedToParse;
this->pictureId = (*data & 0x7F);
if (*data & 0x80) {
data++;
parsed_bytes++;
if (--data_length == 0)
return kFailedToParse;
// PictureId is 15 bits
this->pictureId = (this->pictureId << 8) + *data;
}
data++;
parsed_bytes++;
data_length--;
}
if (has_tl0_pic_idx) {
if (data_length == 0)
return kFailedToParse;
this->tl0PicIdx = *data;
data++;
parsed_bytes++;
data_length--;
}
if (has_tid || has_key_idx) {
if (data_length == 0)
return kFailedToParse;
if (has_tid) {
this->temporalIdx = ((*data >> 6) & 0x03);
this->layerSync = (*data & 0x20) ? true : false; // Y bit
}
if (has_key_idx) {
this->keyIdx = *data & 0x1F;
}
data++;
parsed_bytes++;
data_length--;
}
return parsed_bytes;
}
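`Write()` and `Read()` above serialize the RFC 7741 payload descriptor drawn in the comment block. A minimal round trip covering just the mandatory byte plus the optional 15-bit PictureID (illustrative helpers, not the encoder's full logic):

```cpp
#include <cassert>
#include <cstdint>

// Mandatory byte |X|R|N|S|R|PID|, optional X byte |I|L|T|K|RSV|,
// then M + 15-bit PictureID when I is set (RFC 7741 section 4.2).
inline int write_min_vp8_desc(uint8_t *out, bool start, int16_t picture_id) {
    int n = 0;
    uint8_t first = start ? 0x10 : 0x00; // S bit
    if (picture_id >= 0) first |= 0x80;  // X bit: extension byte follows
    out[n++] = first;
    if (picture_id >= 0) {
        out[n++] = 0x80;                              // I bit in the X byte
        out[n++] = 0x80 | ((picture_id >> 8) & 0x7F); // M bit + high 7 bits
        out[n++] = picture_id & 0xFF;
    }
    return n;
}

inline int16_t read_picture_id(const uint8_t *p) {
    if (!(p[0] & 0x80) || !(p[1] & 0x80)) return -1; // no X byte / no I bit
    return static_cast<int16_t>(((p[2] & 0x7F) << 8) | p[3]);
}
```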
/////////////////////////////////////////////////
// VP8RtpDecoder
VP8RtpDecoder::VP8RtpDecoder() {
obtainFrame();
}
void VP8RtpDecoder::obtainFrame() {
_frame = FrameImp::create<VP8Frame>();
}
bool VP8RtpDecoder::inputRtp(const RtpPacket::Ptr &rtp, bool key_pos) {
auto seq = rtp->getSeq();
bool ret = decodeRtp(rtp);
if (!_gop_dropped && seq != (uint16_t)(_last_seq + 1) && _last_seq) {
_gop_dropped = true;
WarnL << "start drop vp8 gop, last seq:" << _last_seq << ", rtp:\r\n" << rtp->dumpString();
}
_last_seq = seq;
return ret;
}
bool VP8RtpDecoder::decodeRtp(const RtpPacket::Ptr &rtp) {
auto payload_size = rtp->getPayloadSize();
if (payload_size <= 0) {
// No actual payload
return false;
}
auto payload = rtp->getPayload();
auto stamp = rtp->getStampMS();
auto seq = rtp->getSeq();
RTPVideoHeaderVP8 info;
int offset = info.Read(payload, payload_size);
if (!offset) {
//_frame_drop = true;
return false;
}
bool start = info.isFirstPacket();
if (start) {
_frame->_pts = stamp;
_frame->_buffer.clear();
_frame_drop = false;
}
if (_frame_drop) {
// This frame is incomplete
return false;
}
if (!start && seq != (uint16_t)(_last_seq + 1)) {
// Middle and trailing RTP packets must have consecutive seq numbers; a gap means RTP loss, so this incomplete frame must be dropped
_frame_drop = true;
_frame->_buffer.clear();
return false;
}
// Append data
_frame->_buffer.append((char *)payload + offset, payload_size - offset);
bool end = rtp->getHeader()->mark;
if (end) {
// Ensure the next frame is only assembled once its first packet is received
_frame_drop = true;
// 该帧最后一个rtp包,输出frame [AUTO-TRANSLATED:a648aaa5]
// The last rtp packet of this frame, output frame
outputFrame(rtp);
}
return (info.isFirstPacket() && (payload[offset] & 0x01) == 0);
}
void VP8RtpDecoder::outputFrame(const RtpPacket::Ptr &rtp) {
if (_frame->dropAble()) {
// 不参与dts生成 [AUTO-TRANSLATED:dff3b747]
// Not involved in dts generation
_frame->_dts = _frame->_pts;
} else {
// rtsp没有dts那么根据pts排序算法生成dts [AUTO-TRANSLATED:f37c17f3]
// Rtsp does not have dts, so dts is generated according to the pts sorting algorithm
_dts_generator.getDts(_frame->_pts, _frame->_dts);
}
if (_frame->keyFrame() && _gop_dropped) {
_gop_dropped = false;
InfoL << "new gop received, rtp:\r\n" << rtp->dumpString();
}
if (!_gop_dropped || _frame->configFrame()) {
RtpCodec::inputFrame(_frame);
}
obtainFrame();
}
////////////////////////////////////////////////////////////////////////
bool VP8RtpEncoder::inputFrame(const Frame::Ptr &frame) {
RTPVideoHeaderVP8 info;
info.InitRTPVideoHeaderVP8();
info.beginningOfPartition = true;
info.nonReference = !frame->dropAble();
uint8_t header[20];
int header_size = info.Write(header, sizeof(header));
int pdu_size = getRtpInfo().getMaxSize() - header_size;
const char *ptr = frame->data() + frame->prefixSize();
size_t len = frame->size() - frame->prefixSize();
bool key = frame->keyFrame();
bool mark = false;
for (size_t pos = 0; pos < len; pos += pdu_size) {
if (len - pos <= pdu_size) {
pdu_size = len - pos;
mark = true;
}
auto rtp = getRtpInfo().makeRtp(TrackVideo, nullptr, pdu_size + header_size, mark, frame->pts());
if (rtp) {
uint8_t *payload = rtp->getPayload();
memcpy(payload, header, header_size);
memcpy(payload + header_size, ptr + pos, pdu_size);
RtpCodec::inputRtp(rtp, key);
}
key = false;
header[0] &= (~kSBit); // Clear 'Start of partition' bit.
}
return true;
}
} // namespace mediakit

66
ext-codec/VP8Rtp.h Normal file
View File

@@ -0,0 +1,66 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_VP8RTPCODEC_H
#define ZLMEDIAKIT_VP8RTPCODEC_H
#include "VP8.h"
// for DtsGenerator
#include "Common/Stamp.h"
#include "Rtsp/RtpCodec.h"
namespace mediakit {
/**
 * VP8 RTP decoder class
 * Demultiplexes VP8 over rtsp-rtp into VP8Frame
*/
class VP8RtpDecoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<VP8RtpDecoder>;
VP8RtpDecoder();
/**
 * Input a VP8 RTP packet
 * @param rtp the RTP packet
 * @param key_pos this parameter is ignored
*/
bool inputRtp(const RtpPacket::Ptr &rtp, bool key_pos = true) override;
private:
bool decodeRtp(const RtpPacket::Ptr &rtp);
void outputFrame(const RtpPacket::Ptr &rtp);
void obtainFrame();
private:
bool _gop_dropped = false;
bool _frame_drop = true;
uint16_t _last_seq = 0;
VP8Frame::Ptr _frame;
DtsGenerator _dts_generator;
};
/**
 * VP8 RTP packetizer class
*/
class VP8RtpEncoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<VP8RtpEncoder>;
bool inputFrame(const Frame::Ptr &frame) override;
private:
uint16_t _pic_id = 0;
};
}//namespace mediakit
#endif //ZLMEDIAKIT_VP8RTPCODEC_H

76
ext-codec/VP9.cpp Normal file
View File

@@ -0,0 +1,76 @@
#include "VP9.h"
#include "VP9Rtp.h"
#include "VpxRtmp.h"
#include "Extension/Factory.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
bool VP9Track::inputFrame(const Frame::Ptr &frame) {
char *dataPtr = frame->data() + frame->prefixSize();
if (frame->keyFrame()) {
if (frame->size() - frame->prefixSize() < 10)
return false;
webm_vpx_codec_configuration_record_from_vp9(&_vpx, &_width, &_height, dataPtr, frame->size() - frame->prefixSize());
}
return VideoTrackImp::inputFrame(frame);
}
Buffer::Ptr VP9Track::getExtraData() const {
auto ret = BufferRaw::create(8 + _vpx.codec_intialization_data_size);
ret->setSize(webm_vpx_codec_configuration_record_save(&_vpx, (uint8_t *)ret->data(), ret->getCapacity()));
return ret;
}
void VP9Track::setExtraData(const uint8_t *data, size_t size) {
webm_vpx_codec_configuration_record_load(data, size, &_vpx);
}
namespace {
CodecId getCodec() {
return CodecVP9;
}
Track::Ptr getTrackByCodecId(int sample_rate, int channels, int sample_bit) {
return std::make_shared<VP9Track>();
}
Track::Ptr getTrackBySdp(const SdpTrack::Ptr &track) {
return std::make_shared<VP9Track>();
}
RtpCodec::Ptr getRtpEncoderByCodecId(uint8_t pt) {
return std::make_shared<VP9RtpEncoder>();
}
RtpCodec::Ptr getRtpDecoderByCodecId() {
return std::make_shared<VP9RtpDecoder>();
}
RtmpCodec::Ptr getRtmpEncoderByTrack(const Track::Ptr &track) {
return std::make_shared<VpxRtmpEncoder>(track);
}
RtmpCodec::Ptr getRtmpDecoderByTrack(const Track::Ptr &track) {
return std::make_shared<VpxRtmpDecoder>(track);
}
Frame::Ptr getFrameFromPtr(const char *data, size_t bytes, uint64_t dts, uint64_t pts) {
return std::make_shared<VP9FrameNoCacheAble>((char *)data, bytes, dts, pts, 0);
}
} // namespace
CodecPlugin vp9_plugin = { getCodec,
getTrackByCodecId,
getTrackBySdp,
getRtpEncoderByCodecId,
getRtpDecoderByCodecId,
getRtmpEncoderByTrack,
getRtmpDecoderByTrack,
getFrameFromPtr };
} // namespace mediakit

ext-codec/VP9.h Normal file

@@ -0,0 +1,49 @@
#ifndef ZLMEDIAKIT_VP9_H
#define ZLMEDIAKIT_VP9_H
#include "Extension/Frame.h"
#include "Extension/Track.h"
#include "webm-vpx.h"
namespace mediakit {
template <typename Parent>
class VP9FrameHelper : public Parent {
public:
friend class FrameImp;
//friend class toolkit::ResourcePool_l<VP9FrameHelper>;
using Ptr = std::shared_ptr<VP9FrameHelper>;
template <typename... ARGS>
VP9FrameHelper(ARGS &&...args)
: Parent(std::forward<ARGS>(args)...) {
this->_codec_id = CodecVP9;
}
bool keyFrame() const override {
auto ptr = (uint8_t *) this->data() + this->prefixSize();
return (*ptr & 0x80);
}
bool configFrame() const override { return false; }
bool dropAble() const override { return false; }
bool decodeAble() const override { return true; }
};
/// VP9 frame classes
using VP9Frame = VP9FrameHelper<FrameImp>;
using VP9FrameNoCacheAble = VP9FrameHelper<FrameFromPtr>;
class VP9Track : public VideoTrackImp {
public:
VP9Track() : VideoTrackImp(CodecVP9) {};
Track::Ptr clone() const override { return std::make_shared<VP9Track>(*this); }
bool inputFrame(const Frame::Ptr &frame) override;
toolkit::Buffer::Ptr getExtraData() const override;
void setExtraData(const uint8_t *data, size_t size) override;
private:
webm_vpx_t _vpx = {0};
};
} // namespace mediakit
#endif

ext-codec/VP9Rtp.cpp Normal file

@@ -0,0 +1,320 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "VP9Rtp.h"
#include "Extension/Frame.h"
#include "Common/config.h"
namespace mediakit{
const int16_t kNoPictureId = -1;
const int8_t kNoTl0PicIdx = -1;
const uint8_t kNoTemporalIdx = 0xFF;
const int kNoKeyIdx = -1;
struct VP9ResolutionLayer {
int width;
int height;
};
struct RTPPayloadVP9 {
bool hasPictureID = false;
bool interPicturePrediction = false;
bool hasLayerIndices = false;
bool flexibleMode = false;
bool beginningOfLayerFrame = false;
bool endingOfLayerFrame = false;
bool hasScalabilityStructure = false;
bool largePictureID = false;
int pictureID = -1;
int temporalID = -1;
bool isSwitchingUp = false;
int spatialID = -1;
bool isInterLayeredDepUsed = false;
int tl0PicIdx = -1;
int referenceIdx = -1;
bool additionalReferenceIdx = false;
int spatialLayers = -1;
bool hasResolution = false;
bool hasGof = false;
int numberOfFramesInGof = -1;
std::vector<VP9ResolutionLayer> resolutions;
int parse(unsigned char* data, int dataLength);
bool keyFrame() const { return beginningOfLayerFrame && !interPicturePrediction; }
std::string dump() const {
char line[64] = {0};
snprintf(line, sizeof(line), "%c%c%c%c%c%c%c- %d %d, %d %d",
hasPictureID ? 'I' : ' ',
interPicturePrediction ? 'P' : ' ',
hasLayerIndices ? 'L' : ' ',
flexibleMode ? 'F' : ' ',
beginningOfLayerFrame ? 'B' : ' ',
endingOfLayerFrame ? 'E' : ' ',
hasScalabilityStructure ? 'V' : ' ',
pictureID, tl0PicIdx,
spatialID, temporalID);
return line;
}
};
//
// VP9 format:
//
// Payload descriptor (Flexible mode F = 1)
// 0 1 2 3 4 5 6 7
// +-+-+-+-+-+-+-+-+
// |I|P|L|F|B|E|V|-| (REQUIRED)
// +-+-+-+-+-+-+-+-+
// I: |M| PICTURE ID | (REQUIRED)
// +-+-+-+-+-+-+-+-+
// M: | EXTENDED PID | (RECOMMENDED)
// +-+-+-+-+-+-+-+-+
// L: | T |U| S |D| (CONDITIONALLY RECOMMENDED)
// +-+-+-+-+-+-+-+-+ -
// P,F: | P_DIFF |N| (CONDITIONALLY REQUIRED) - up to 3 times
// +-+-+-+-+-+-+-+-+ -
// V: | SS |
// | .. |
// +-+-+-+-+-+-+-+-+
//
// Payload descriptor (Non flexible mode F = 0)
//
// 0 1 2 3 4 5 6 7
// +-+-+-+-+-+-+-+-+
// |I|P|L|F|B|E|V|-| (REQUIRED)
// +-+-+-+-+-+-+-+-+
// I: |M| PICTURE ID | (RECOMMENDED)
// +-+-+-+-+-+-+-+-+
// M: | EXTENDED PID | (RECOMMENDED)
// +-+-+-+-+-+-+-+-+
// L: | T |U| S |D| (CONDITIONALLY RECOMMENDED)
// +-+-+-+-+-+-+-+-+
// | TL0PICIDX | (CONDITIONALLY REQUIRED)
// +-+-+-+-+-+-+-+-+
// V: | SS |
// | .. |
// +-+-+-+-+-+-+-+-+
#define kIBit 0x80
#define kPBit 0x40
#define kLBit 0x20
#define kFBit 0x10
#define kBBit 0x08
#define kEBit 0x04
#define kVBit 0x02
int RTPPayloadVP9::parse(unsigned char *data, int dataLength) {
const unsigned char* dataPtr = data;
// Parse mandatory first byte of payload descriptor
this->hasPictureID = (*dataPtr & kIBit); // I bit
this->interPicturePrediction = (*dataPtr & kPBit); // P bit
this->hasLayerIndices = (*dataPtr & kLBit); // L bit
this->flexibleMode = (*dataPtr & kFBit); // F bit
this->beginningOfLayerFrame = (*dataPtr & kBBit); // B bit
this->endingOfLayerFrame = (*dataPtr & kEBit); // E bit
this->hasScalabilityStructure = (*dataPtr & kVBit); // V bit
dataPtr++;
if (this->hasPictureID) {
this->largePictureID = (*dataPtr & 0x80); // M bit
this->pictureID = (*dataPtr & 0x7F);
if (this->largePictureID) {
dataPtr++;
this->pictureID = (this->pictureID << 8) | (*dataPtr & 0xFF); // 15-bit picture ID: high 7 bits came from the previous byte
}
dataPtr++;
}
if (this->hasLayerIndices) {
this->temporalID = (*dataPtr & 0xE0) >> 5; // T bits
this->isSwitchingUp = (*dataPtr & 0x10); // U bit
this->spatialID = (*dataPtr & 0x0E) >> 1; // S bits
this->isInterLayeredDepUsed = (*dataPtr & 0x01); // D bit
if (this->flexibleMode) { // marked in webrtc code
do {
dataPtr++;
this->referenceIdx = (*dataPtr & 0xFE) >> 1;
this->additionalReferenceIdx = (*dataPtr & 0x01); // D bit
} while (this->additionalReferenceIdx);
} else {
dataPtr++;
this->tl0PicIdx = (*dataPtr & 0xFF);
}
dataPtr++;
}
if (this->flexibleMode && this->interPicturePrediction) {
/* Skip reference indices */
uint8_t nbit;
do {
uint8_t p_diff = (*dataPtr & 0xFE) >> 1;
nbit = (*dataPtr & 0x01);
dataPtr++;
} while (nbit);
}
if (this->hasScalabilityStructure) {
this->spatialLayers = (*dataPtr & 0xE0) >> 5; // N_S bits
this->hasResolution = (*dataPtr & 0x10); // Y bit
this->hasGof = (*dataPtr & 0x08); // G bit
dataPtr++;
if (this->hasResolution) {
for (int i = 0; i <= this->spatialLayers; i++) {
int width = (dataPtr[0] << 8) + dataPtr[1];
dataPtr += 2;
int height = (dataPtr[0] << 8) + dataPtr[1];
dataPtr += 2;
// InfoL << "got vp9 " << width << "x" << height;
this->resolutions.push_back({ width, height });
}
}
if (this->hasGof) {
this->numberOfFramesInGof = *dataPtr & 0xFF; // N_G bits
dataPtr++;
for (int frame_index = 0; frame_index < this->numberOfFramesInGof; frame_index++) {
// TODO(javierc): Read these values if needed
int reference_indices = (*dataPtr & 0x0C) >> 2; // R bits
dataPtr++;
for (int reference_index = 0; reference_index < reference_indices; reference_index++) {
dataPtr++;
}
}
}
}
return dataPtr - data;
}
////////////////////////////////////////////////////
VP9RtpDecoder::VP9RtpDecoder() {
obtainFrame();
}
void VP9RtpDecoder::obtainFrame() {
_frame = FrameImp::create<VP9Frame>();
}
bool VP9RtpDecoder::inputRtp(const RtpPacket::Ptr &rtp, bool key_pos) {
auto seq = rtp->getSeq();
bool is_gop = decodeRtp(rtp);
if (!_gop_dropped && seq != (uint16_t)(_last_seq + 1) && _last_seq) {
_gop_dropped = true;
WarnL << "start drop VP9 gop, last seq:" << _last_seq << ", rtp:\r\n" << rtp->dumpString();
}
_last_seq = seq;
return is_gop;
}
bool VP9RtpDecoder::decodeRtp(const RtpPacket::Ptr &rtp) {
auto payload_size = rtp->getPayloadSize();
if (payload_size < 1) {
// No actual payload
return false;
}
auto payload = rtp->getPayload();
auto stamp = rtp->getStampMS();
auto seq = rtp->getSeq();
RTPPayloadVP9 info;
int offset = info.parse(payload, payload_size);
// InfoL << rtp->dumpString() << "\n" << info.dump();
bool start = info.beginningOfLayerFrame;
if (start) {
_frame->_pts = stamp;
_frame->_buffer.clear();
_frame_drop = false;
}
if (_frame_drop) {
// This frame is incomplete
return false;
}
if (!start && seq != (uint16_t)(_last_seq + 1)) {
// Middle or trailing rtp packets must have consecutive seq numbers; otherwise rtp loss occurred, the frame is incomplete and must be dropped
_frame_drop = true;
_frame->_buffer.clear();
return false;
}
// Append data
_frame->_buffer.append((char *)payload + offset, payload_size - offset);
if (info.endingOfLayerFrame) { // rtp->getHeader()->mark
// Ensure the next packet must start a new frame (beginningOfLayerFrame)
_frame_drop = true;
// Last rtp packet of this frame, output the frame
outputFrame(rtp);
}
return info.keyFrame();
}
void VP9RtpDecoder::outputFrame(const RtpPacket::Ptr &rtp) {
if (_frame->dropAble()) {
// Not involved in dts generation
_frame->_dts = _frame->_pts;
} else {
// Rtsp does not have dts, so dts is generated according to the pts sorting algorithm
_dts_generator.getDts(_frame->_pts, _frame->_dts);
}
if (_frame->keyFrame() && _gop_dropped) {
_gop_dropped = false;
InfoL << "new gop received, rtp:\r\n" << rtp->dumpString();
}
if (!_gop_dropped || _frame->configFrame()) {
// InfoL << _frame->pts() << " size=" << _frame->size();
RtpCodec::inputFrame(_frame);
}
obtainFrame();
}
////////////////////////////////////////////////////////////////////////
bool VP9RtpEncoder::inputFrame(const Frame::Ptr &frame) {
uint8_t header[20] = { 0 };
int nheader = 1;
header[0] = kBBit;
bool key = frame->keyFrame();
if (!key)
header[0] |= kPBit;
#if 1
header[0] |= kIBit;
if (++_pic_id > 0x7FFF) {
_pic_id = 0;
}
header[1] = (0x80 | ((_pic_id >> 8) & 0x7F));
header[2] = (_pic_id & 0xFF);
nheader += 2;
#endif
const char *ptr = frame->data() + frame->prefixSize();
int len = frame->size() - frame->prefixSize();
int pdu_size = getRtpInfo().getMaxSize() - nheader;
bool mark = false;
for (size_t pos = 0; pos < len; pos += pdu_size) {
if (len - pos <= pdu_size) {
pdu_size = len - pos;
header[0] |= kEBit;
mark = true;
}
auto rtp = getRtpInfo().makeRtp(TrackVideo, nullptr, pdu_size + nheader, mark, frame->pts());
if (rtp) {
uint8_t *payload = rtp->getPayload();
memcpy(payload, header, nheader);
memcpy(payload + nheader, ptr + pos, pdu_size);
RtpCodec::inputRtp(rtp, key);
}
key = false;
header[0] &= (~kBBit); // Clear 'Begin of partition' bit.
}
return true;
}
} // namespace mediakit
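The encoder above always advertises a 15-bit picture ID (M = 1) and toggles the B/E/P bits per fragment, following the descriptor diagram earlier in this file. A standalone sketch of that header serialization (hypothetical helper, not part of the patch) makes the bit layout explicit:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Bit masks from the mandatory first descriptor byte (see the diagram above).
enum : uint8_t { kI = 0x80, kP = 0x40, kB = 0x08, kE = 0x04 };

// Hypothetical helper mirroring VP9RtpEncoder::inputFrame: emits the first
// byte plus a 15-bit picture ID in extended (M = 1) form.
// Returns the number of header bytes written.
static size_t write_vp9_desc(uint8_t *out, uint16_t pic_id, bool key, bool begin) {
    size_t n = 0;
    uint8_t first = kI;              // I bit: picture ID always present
    if (!key)  first |= kP;          // P bit: inter-picture predicted frame
    if (begin) first |= kB;          // B bit: first rtp packet of the frame
    out[n++] = first;
    out[n++] = 0x80 | ((pic_id >> 8) & 0x7F); // M = 1, high 7 bits of pic_id
    out[n++] = pic_id & 0xFF;                 // low 8 bits of pic_id
    return n;
}
```

On the receive side this is exactly the `largePictureID` path of `RTPPayloadVP9::parse`, which reassembles the two bytes back into one 15-bit value.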

ext-codec/VP9Rtp.h Normal file

@@ -0,0 +1,64 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_VP9RTPCODEC_H
#define ZLMEDIAKIT_VP9RTPCODEC_H
#include "VP9.h"
// for DtsGenerator
#include "Common/Stamp.h"
#include "Rtsp/RtpCodec.h"
namespace mediakit {
/**
* VP9 rtp decoder class
* Demultiplexes VP9 over rtsp-rtp into VP9Frame
*/
class VP9RtpDecoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<VP9RtpDecoder>;
VP9RtpDecoder();
/**
* Input a VP9 rtp packet
* @param rtp the rtp packet
* @param key_pos this parameter is ignored
*/
bool inputRtp(const RtpPacket::Ptr &rtp, bool key_pos = true) override;
private:
bool decodeRtp(const RtpPacket::Ptr &rtp);
void outputFrame(const RtpPacket::Ptr &rtp);
void obtainFrame();
private:
bool _gop_dropped = false;
bool _frame_drop = true;
uint16_t _last_seq = 0;
VP9Frame::Ptr _frame;
DtsGenerator _dts_generator;
};
/**
* VP9 rtp packetizer class
*/
class VP9RtpEncoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<VP9RtpEncoder>;
bool inputFrame(const Frame::Ptr &frame) override;
private:
uint16_t _pic_id = 0;
};
}//namespace mediakit
#endif //ZLMEDIAKIT_VP9RTPCODEC_H

ext-codec/VpxRtmp.cpp Normal file

@@ -0,0 +1,175 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "VpxRtmp.h"
#include "Rtmp/utils.h"
#include "Common/config.h"
#include "Extension/Factory.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
void VpxRtmpDecoder::inputRtmp(const RtmpPacket::Ptr &pkt) {
if (_info.codec == CodecInvalid) {
// First, determine if it is an enhanced rtmp
parseVideoRtmpPacket((uint8_t *)pkt->data(), pkt->size(), &_info);
}
if (_info.is_enhanced) {
// Enhanced rtmp
parseVideoRtmpPacket((uint8_t *)pkt->data(), pkt->size(), &_info);
if (!_info.is_enhanced || _info.codec != getTrack()->getCodecId()) {
throw std::invalid_argument("Invalid enhanced-rtmp packet!");
}
auto data = (uint8_t *)pkt->data() + RtmpPacketInfo::kEnhancedRtmpHeaderSize;
auto size = pkt->size() - RtmpPacketInfo::kEnhancedRtmpHeaderSize;
switch (_info.video.pkt_type) {
case RtmpPacketType::PacketTypeSequenceStart: {
getTrack()->setExtraData(data, size);
break;
}
case RtmpPacketType::PacketTypeCodedFramesX:
case RtmpPacketType::PacketTypeCodedFrames: {
auto pts = pkt->time_stamp;
if (RtmpPacketType::PacketTypeCodedFrames == _info.video.pkt_type) {
CHECK_RET(size > 3);
// SI24 = [CompositionTime Offset]
int32_t cts = (load_be24(data) + 0xff800000) ^ 0xff800000;
pts += cts;
data += 3;
size -= 3;
}
outputFrame((char*)data, size, pkt->time_stamp, pts);
break;
}
default:
WarnL << "Unknown pkt_type: " << (int)_info.video.pkt_type;
break;
}
} else {
CHECK_RET(pkt->size() > 5);
uint8_t *cts_ptr = (uint8_t *)(pkt->buffer.data() + 2);
int32_t cts = (load_be24(cts_ptr) + 0xff800000) ^ 0xff800000;
// Chinese domestic extension (cf. h265 = 12): Vpx over classic rtmp
if (pkt->isConfigFrame()) {
getTrack()->setExtraData((uint8_t *)pkt->data() + 5, pkt->size() - 5);
} else {
outputFrame(pkt->data() + 5, pkt->size() - 5, pkt->time_stamp, pkt->time_stamp + cts);
}
}
}
void VpxRtmpDecoder::outputFrame(const char *data, size_t size, uint32_t dts, uint32_t pts) {
RtmpCodec::inputFrame(Factory::getFrameFromPtr(getTrack()->getCodecId(), data, size, dts, pts));
}
////////////////////////////////////////////////////////////////////////
VpxRtmpEncoder::VpxRtmpEncoder(const Track::Ptr &track) : RtmpCodec(track) {
_enhanced = mINI::Instance()[Rtmp::kEnhanced];
}
bool VpxRtmpEncoder::inputFrame(const Frame::Ptr &frame) {
auto packet = RtmpPacket::create();
packet->buffer.resize(8 + frame->size());
char *buff = packet->data();
int32_t cts = frame->pts() - frame->dts();
if (_enhanced) {
auto header = (RtmpVideoHeaderEnhanced *)buff;
header->enhanced = 1;
header->frame_type = frame->keyFrame() ? (int)RtmpFrameType::key_frame : (int)RtmpFrameType::inter_frame;
switch (frame->getCodecId()) {
case CodecVP8: header->fourcc = htonl((uint32_t)RtmpVideoCodec::fourcc_vp8); break;
case CodecVP9: header->fourcc = htonl((uint32_t)RtmpVideoCodec::fourcc_vp9); break;
case CodecAV1: header->fourcc = htonl((uint32_t)RtmpVideoCodec::fourcc_av1); break;
default: break;
}
buff += RtmpPacketInfo::kEnhancedRtmpHeaderSize;
if (cts) {
header->pkt_type = (uint8_t)RtmpPacketType::PacketTypeCodedFrames;
set_be24(buff, cts);
buff += 3;
} else {
header->pkt_type = (uint8_t)RtmpPacketType::PacketTypeCodedFramesX;
}
} else {
// flags
uint8_t flags = 0;
switch (getTrack()->getCodecId()) {
case CodecVP8: flags = (uint8_t)RtmpVideoCodec::vp8; break;
case CodecVP9: flags = (uint8_t)RtmpVideoCodec::vp9; break;
case CodecAV1: flags = (uint8_t)RtmpVideoCodec::av1; break;
default: break;
}
flags |= (uint8_t)(frame->keyFrame() ? RtmpFrameType::key_frame : RtmpFrameType::inter_frame) << 4;
buff[0] = flags;
buff[1] = (uint8_t)RtmpH264PacketType::h264_nalu;
// cts
set_be24(&buff[2], cts);
buff += 5;
}
packet->time_stamp = frame->dts();
memcpy(buff, frame->data(), frame->size());
buff += frame->size();
packet->body_size = buff - packet->data();
packet->chunk_id = CHUNK_VIDEO;
packet->stream_index = STREAM_MEDIA;
packet->type_id = MSG_VIDEO;
// Output rtmp packet
RtmpCodec::inputRtmp(packet);
return true;
}
void VpxRtmpEncoder::makeConfigPacket() {
auto extra_data = getTrack()->getExtraData();
if (!extra_data || !extra_data->size())
return;
auto pkt = RtmpPacket::create();
pkt->body_size = 5 + extra_data->size();
pkt->buffer.resize(pkt->body_size);
auto buff = pkt->buffer.data();
if (_enhanced) {
auto header = (RtmpVideoHeaderEnhanced *)buff;
header->enhanced = 1;
header->pkt_type = (int)RtmpPacketType::PacketTypeSequenceStart;
header->frame_type = (int)RtmpFrameType::key_frame;
switch (getTrack()->getCodecId()) {
case CodecVP8: header->fourcc = htonl((uint32_t)RtmpVideoCodec::fourcc_vp8); break;
case CodecVP9: header->fourcc = htonl((uint32_t)RtmpVideoCodec::fourcc_vp9); break;
case CodecAV1: header->fourcc = htonl((uint32_t)RtmpVideoCodec::fourcc_av1); break;
default: break;
}
} else {
uint8_t flags = 0;
switch (getTrack()->getCodecId()) {
case CodecVP8: flags = (uint8_t)RtmpVideoCodec::vp8; break;
case CodecVP9: flags = (uint8_t)RtmpVideoCodec::vp9; break;
case CodecAV1: flags = (uint8_t)RtmpVideoCodec::av1; break;
default: break;
}
flags |= ((uint8_t)RtmpFrameType::key_frame << 4);
buff[0] = flags;
buff[1] = (uint8_t)RtmpH264PacketType::h264_config_header;
// cts
memset(buff + 2, 0, 3);
}
memcpy(buff+5, extra_data->data(), extra_data->size());
pkt->chunk_id = CHUNK_VIDEO;
pkt->stream_index = STREAM_MEDIA;
pkt->time_stamp = 0;
pkt->type_id = MSG_VIDEO;
RtmpCodec::inputRtmp(pkt);
}
} // namespace mediakit
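Both decoder paths above recover the SI24 CompositionTime offset with the same `(x + 0xff800000) ^ 0xff800000` trick. A minimal sketch (reimplementing `load_be24` locally, as an assumption about its behavior) shows why this sign-extends a 24-bit two's-complement value:

```cpp
#include <cassert>
#include <cstdint>

// Assumed behavior of the project's load_be24: big-endian 24-bit load.
static uint32_t load_be24(const uint8_t *p) {
    return (uint32_t(p[0]) << 16) | (uint32_t(p[1]) << 8) | p[2];
}

// Adding 0xff800000 wraps past 2^32 (clearing bits 23..31) exactly when
// the 24-bit sign bit is set; the XOR then fills those bits with ones for
// negative values, or clears them back for non-negative ones.
static int32_t cts_from_si24(const uint8_t *p) {
    return int32_t((load_be24(p) + 0xff800000u) ^ 0xff800000u);
}
```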

ext-codec/VpxRtmp.h Normal file

@@ -0,0 +1,54 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_VPX_RTMPCODEC_H
#define ZLMEDIAKIT_VPX_RTMPCODEC_H
#include "Rtmp/RtmpCodec.h"
#include "Extension/Track.h"
namespace mediakit {
/**
* Rtmp decoder class
* Demultiplexes Vpx over rtmp into VpxFrame
*/
class VpxRtmpDecoder : public RtmpCodec {
public:
using Ptr = std::shared_ptr<VpxRtmpDecoder>;
VpxRtmpDecoder(const Track::Ptr &track) : RtmpCodec(track) {}
void inputRtmp(const RtmpPacket::Ptr &rtmp) override;
protected:
void outputFrame(const char *data, size_t size, uint32_t dts, uint32_t pts);
protected:
RtmpPacketInfo _info;
};
/**
* Rtmp packetizer class
*/
class VpxRtmpEncoder : public RtmpCodec {
bool _enhanced = false;
public:
using Ptr = std::shared_ptr<VpxRtmpEncoder>;
VpxRtmpEncoder(const Track::Ptr &track);
bool inputFrame(const Frame::Ptr &frame) override;
void makeConfigPacket() override;
};
} // namespace mediakit
#endif // ZLMEDIAKIT_VPX_RTMPCODEC_H


@@ -21,6 +21,8 @@ namespace mediakit {
static std::unordered_map<int, const CodecPlugin *> s_plugins;
REGISTER_CODEC(vp8_plugin);
REGISTER_CODEC(vp9_plugin);
REGISTER_CODEC(h264_plugin);
REGISTER_CODEC(h265_plugin);
REGISTER_CODEC(av1_plugin);
@@ -97,10 +99,15 @@ static CodecId getVideoCodecIdByAmf(const AMFValue &val) {
if (val.type() != AMF_NULL) {
auto type_id = (RtmpVideoCodec)val.as_integer();
switch (type_id) {
case RtmpVideoCodec::fourcc_avc1:
case RtmpVideoCodec::h264: return CodecH264;
case RtmpVideoCodec::fourcc_hevc:
case RtmpVideoCodec::h265: return CodecH265;
case RtmpVideoCodec::av1:
case RtmpVideoCodec::fourcc_av1: return CodecAV1;
case RtmpVideoCodec::vp8:
case RtmpVideoCodec::fourcc_vp8: return CodecVP8;
case RtmpVideoCodec::vp9:
case RtmpVideoCodec::fourcc_vp9: return CodecVP9;
default: WarnL << "Unsupported codec: " << (int)type_id; return CodecInvalid;
}
@@ -191,15 +198,16 @@ AMFValue Factory::getAmfByCodecId(CodecId codecId) {
GET_CONFIG(bool, enhanced, Rtmp::kEnhanced);
switch (codecId) {
case CodecAAC: return AMFValue((int)RtmpAudioCodec::aac);
case CodecH264: return AMFValue((int)RtmpVideoCodec::h264);
case CodecH264: return enhanced ? AMFValue((int)RtmpVideoCodec::fourcc_avc1) : AMFValue((int)RtmpVideoCodec::h264);
case CodecH265: return enhanced ? AMFValue((int)RtmpVideoCodec::fourcc_hevc) : AMFValue((int)RtmpVideoCodec::h265);
case CodecG711A: return AMFValue((int)RtmpAudioCodec::g711a);
case CodecG711U: return AMFValue((int)RtmpAudioCodec::g711u);
case CodecOpus: return AMFValue((int)RtmpAudioCodec::opus);
case CodecADPCM: return AMFValue((int)RtmpAudioCodec::adpcm);
case CodecMP3: return AMFValue((int)RtmpAudioCodec::mp3);
case CodecAV1: return AMFValue((int)RtmpVideoCodec::fourcc_av1);
case CodecVP9: return AMFValue((int)RtmpVideoCodec::fourcc_vp9);
case CodecAV1: return enhanced ? AMFValue((int)RtmpVideoCodec::fourcc_av1) : AMFValue((int)RtmpVideoCodec::av1);
case CodecVP8: return enhanced ? AMFValue((int)RtmpVideoCodec::fourcc_vp8) : AMFValue((int)RtmpVideoCodec::vp8);
case CodecVP9: return enhanced ? AMFValue((int)RtmpVideoCodec::fourcc_vp9) : AMFValue((int)RtmpVideoCodec::vp9);
default: return AMFValue(AMF_NULL);
}
}


@@ -195,16 +195,21 @@ public:
_fps = fps;
}
VideoTrackImp(CodecId codec_id) {
_codec_id = codec_id;
_fps = 30;
}
int getVideoWidth() const override { return _width; }
int getVideoHeight() const override { return _height; }
float getVideoFps() const override { return _fps; }
bool ready() const override { return true; }
bool ready() const override { return _width > 0 && _height > 0; }
Track::Ptr clone() const override { return std::make_shared<VideoTrackImp>(*this); }
Sdp::Ptr getSdp(uint8_t payload_type) const override;
CodecId getCodecId() const override { return _codec_id; }
private:
protected:
CodecId _codec_id;
int _width = 0;
int _height = 0;
@@ -324,7 +329,7 @@ public:
Track::Ptr clone() const override { return std::make_shared<AudioTrackImp>(*this); }
Sdp::Ptr getSdp(uint8_t payload_type) const override;
private:
protected:
CodecId _codecid;
int _sample_rate;
int _channels;


@@ -271,6 +271,8 @@ CodecId parseVideoRtmpPacket(const uint8_t *data, size_t size, RtmpPacketInfo *i
switch ((RtmpVideoCodec)ntohl(enhanced_header->fourcc)) {
case RtmpVideoCodec::fourcc_av1: info->codec = CodecAV1; break;
case RtmpVideoCodec::fourcc_vp9: info->codec = CodecVP9; break;
case RtmpVideoCodec::fourcc_vp8: info->codec = CodecVP8; break;
case RtmpVideoCodec::fourcc_avc1: info->codec = CodecH264; break;
case RtmpVideoCodec::fourcc_hevc: info->codec = CodecH265; break;
default: WarnL << "Rtmp video codec not supported: " << std::string((char *)data + 1, 4);
}
@@ -292,6 +294,21 @@ CodecId parseVideoRtmpPacket(const uint8_t *data, size_t size, RtmpPacketInfo *i
info->video.h264_pkt_type = (RtmpH264PacketType)classic_header->h264_pkt_type;
break;
}
case RtmpVideoCodec::vp8: {
CHECK(size >= 5, "Invalid rtmp buffer size: ", size); // classic header: flags + pkt_type + 3-byte cts
info->codec = CodecVP8;
break;
}
case RtmpVideoCodec::vp9: {
CHECK(size >= 5, "Invalid rtmp buffer size: ", size);
info->codec = CodecVP9;
break;
}
case RtmpVideoCodec::av1: {
CHECK(size >= 5, "Invalid rtmp buffer size: ", size);
info->codec = CodecAV1;
break;
}
default: WarnL << "Rtmp video codec not supported: " << (int)classic_header->codec_id; break;
}
}


@@ -306,11 +306,15 @@ enum class RtmpVideoCodec : uint32_t {
screen_video2 = 6, // Screen video version 2
h264 = 7, // avc
h265 = 12, // Chinese domestic extension
av1 = 13,
vp8 = 14,
vp9 = 15,
// Enhanced rtmp FourCC
fourcc_vp8 = MKBETAG('v', 'p', '0', '8'),
fourcc_vp9 = MKBETAG('v', 'p', '0', '9'),
fourcc_av1 = MKBETAG('a', 'v', '0', '1'),
fourcc_avc1 = MKBETAG('a', 'v', 'c', '1'),
fourcc_hevc = MKBETAG('h', 'v', 'c', '1')
};
@@ -375,6 +379,7 @@ enum class RtmpAudioCodec : uint8_t {
mp3 = 2,
g711a = 7,
g711u = 8,
FOURCC = 9, // Enhanced audio
aac = 10,
opus = 13 // 国内扩展
};