Pushing RTP to RTMP with GStreamer in Kurento


RTP-to-RTMP code based on GStreamer

FLV does not support 48000 Hz audio; it supports up to 44.1 kHz. FLV also does not support the Opus audio codec, so the audio must be transcoded.
 
1. Push the stream to RTP with ffmpeg.

Using the sample FLV file shipped with SRS:

ffmpeg -re -stream_loop -1 -i ./doc/source.200kbps.768x320.flv -an -vcodec h264 -f rtp rtp://127.0.0.1:5004 -vn -acodec libopus -f rtp rtp://127.0.0.1:5003

After the command runs, the SDP description below can be extracted from its output (note the two ports, 5004/5003, and the payload format 96 = H264):

SDP:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
t=0 0
a=tool:libavformat 57.83.100
m=video 5004 RTP/AVP 96
c=IN IP4 127.0.0.1
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1
m=audio 5003 RTP/AVP 97
c=IN IP4 127.0.0.1
b=AS:96
a=rtpmap:97 opus/48000/2
a=fmtp:97 sprop-stereo=1

This SDP content is passed to the processOffer(sdp) of Kurento's RtpEndpoint, i.e. the SDP describes the output format of the stream sink. The FLV source stream carries AAC audio, which needs to be converted to Opus.
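Every later step (the udpsrc ports in the GStreamer pipeline, the clock rates in the caps) is read off `m=` and `a=` lines like the ones above. As a minimal illustration, the port can be pulled out of an SDP media line with a few lines of C (the helper `sdp_media_port` is our own sketch, not a library function):

```c
#include <stdio.h>
#include <assert.h>

/* Extract the UDP port from an SDP media line such as
 * "m=audio 5003 RTP/AVP 97".  Returns -1 if the line is not
 * an m= line.  (Illustrative helper only.) */
int sdp_media_port(const char *line)
{
    char kind[16];
    int port;
    if (sscanf(line, "m=%15s %d RTP/AVP", kind, &port) == 2)
        return port;
    return -1;
}
```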

Pushing the stream as AAC

ffmpeg -re -stream_loop -1 -i audio_opus.mp4 -an -vcodec h264 -f rtp rtp://127.0.0.1:59000 -vn -acodec aac -f rtp rtp://127.0.0.1:49000

The resulting SDP is:

v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
t=0 0
a=tool:libavformat 57.83.100
m=video 59000 RTP/AVP 96
c=IN IP4 127.0.0.1
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1
m=audio 49000 RTP/AVP 97
c=IN IP4 127.0.0.1
b=AS:128
a=rtpmap:97 MPEG4-GENERIC/48000/2
a=fmtp:97 profile-level-id=1;mode=AAC-hbr;sizelength=13;indexlength=3;indexdeltalength=3; config=119056E500

Here the audio output is AAC.
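The `config=119056E500` blob in the fmtp line above is a hex-encoded MPEG-4 AudioSpecificConfig; its first two bytes (0x11 0x90) already encode the same profile, sample rate and channel count the SDP states in clear text (AAC-LC, 48000 Hz, 2 channels). A small sketch of decoding those two bytes, with the bit layout taken from the MPEG-4 audio spec (the helper names are ours):

```c
#include <assert.h>

/* Leading fields of an MPEG-4 AudioSpecificConfig:
 * 5 bits audio object type, 4 bits sampling-frequency index,
 * 4 bits channel configuration. */
static const int asc_rates[] = {
    96000, 88200, 64000, 48000, 44100, 32000, 24000,
    22050, 16000, 12000, 11025, 8000, 7350
};

int asc_object_type(unsigned char b0) { return b0 >> 3; } /* 2 = AAC-LC */

int asc_sample_rate(unsigned char b0, unsigned char b1)
{
    int idx = ((b0 & 0x07) << 1) | (b1 >> 7);
    return idx < 13 ? asc_rates[idx] : -1;
}

int asc_channels(unsigned char b1) { return (b1 >> 3) & 0x0F; }
```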

2. Push the RTP stream to RTMP ( rtmp://192.168.16.133/live/[key] )

2.1 Push a file directly to RTMP with ffmpeg

ffmpeg -re -i doc/source.200kbps.768x320.flv -c copy \
    -f flv -y rtmp://192.168.16.133/live/livestream

ffmpeg -re -stream_loop -1 -i audio_opus.mp4  -vcodec copy -acodec aac  -f flv -y rtmp://192.168.16.133/live/55000

2.2 Push RTP to RTMP with ffmpeg, driven by the SDP:

ffmpeg \
    -protocol_whitelist "file,udp,rtp" \
    -i 127.0.0.1_55000.sdp \
    -vcodec copy \
    -acodec copy \
    -f flv \
    rtmp://192.168.16.133:1935/live/55000
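The `127.0.0.1_55000.sdp` file fed to ffmpeg above is just the SDP text from step 1 saved to disk. A sketch of generating such a file programmatically, with the two ports as parameters (the `build_sdp` helper and the fixed payload types 96/97 mirror the SDPs shown earlier; this is illustrative, not a library API):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Write a minimal SDP for one H264 video stream and one AAC audio
 * stream, matching the layout of the SDPs printed by ffmpeg above.
 * Returns the number of characters written (excluding the NUL). */
int build_sdp(char *buf, size_t len, int video_port, int audio_port)
{
    return snprintf(buf, len,
        "v=0\n"
        "o=- 0 0 IN IP4 127.0.0.1\n"
        "s=No Name\n"
        "t=0 0\n"
        "m=video %d RTP/AVP 96\n"
        "c=IN IP4 127.0.0.1\n"
        "a=rtpmap:96 H264/90000\n"
        "a=fmtp:96 packetization-mode=1\n"
        "m=audio %d RTP/AVP 97\n"
        "c=IN IP4 127.0.0.1\n"
        "a=rtpmap:97 MPEG4-GENERIC/48000/2\n",
        video_port, audio_port);
}
```

Saving the output under the expected name (with the ports actually in use) reproduces the file the ffmpeg command reads.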

2.3 Receive the RTP ports directly and push: use GStreamer to push the stream to the RTMP server:

gst-launch-1.5 -em \
  rtpbin name=rtpbin latency=5 \
  udpsrc port=5003 caps="application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS" ! rtpbin.recv_rtp_sink_0 \
    rtpbin. ! rtpopusdepay ! opusdec ! audioconvert ! audioresample ! voaacenc bitrate=48000 ! aacparse ! mux. \
  udpsrc port=5004 caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264" ! rtpbin.recv_rtp_sink_1 \
    rtpbin. ! rtph264depay ! h264parse ! mux. \
  flvmux name=mux streamable=true ! rtmpsink sync=false location=rtmp://192.168.16.133/live/55000

The ports, clock rates and encoding names above all come from the SDP of the published stream. (Encoding the audio with avenc_aac makes playback freeze after one frame, hence voaacenc!)

Miscellaneous:

Pushing a camera to RTMP

gst-launch-1.0 -v v4l2src ! 'video/x-raw, width=640, height=480, framerate=30/1' \
! queue ! videoconvert ! omxh264enc ! h264parse ! flvmux ! rtmpsink location='rtmp://{MY_IP}/rtmp/live'

Fetching the FLV stream:

gst-launch-1.0 rtmpsrc location='rtmp://{MY_IP}/rtmp/live' ! filesink location='rtmpsrca.flv'

Reference: https://stackoverflow.com/questions/38495163/rtmp-streaming-via-gstreamer-1-0-appsrc-to-rtmpsink


Installation: the latest version is currently 1.5; Kurento's omni build all installs it automatically.
apt-get install -y \
  gstreamer1.5-libav \
  gstreamer1.5-plugins-bad \
  gstreamer1.5-plugins-base \
  gstreamer1.5-plugins-good \
  gstreamer1.5-tools
API documentation:
Tutorial: https://gstreamer.freedesktop.org/documentation/tutorials/basic/hello-world.html?gi-language=c
https://gstreamer.freedesktop.org/documentation/libav/avenc_aac.html?gi-language=c

GStreamer code implementing the RTP-to-RTMP push:

#include <string.h>
#include <math.h>
 
#include <gst/gst.h>
 
#define VIDEO_CAPS "application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264"
#define AUDIO_CAPS "application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS"
 
/* will be called when rtpbin has validated a payload that we can depayload */
static void
pad_added_cb(GstElement *rtpbin, GstPad *new_pad, GstElement *depay)
{
    char *pad_name = GST_PAD_NAME(new_pad);
    char *depay_name = gst_element_get_name(depay);
    if (strstr(pad_name, "recv_rtp_src_0_") && strstr(depay_name, "audiodepay"))
    {
        GstPad *sinkpad;
        GstPadLinkReturn lres;
 
        g_print("new payload on rtpbin: %s %s %s\n",
                gst_element_get_name(rtpbin), GST_PAD_NAME(new_pad), gst_element_get_name(depay));
 
        sinkpad = gst_element_get_static_pad(depay, "sink");
        g_assert(sinkpad);
 
        lres = gst_pad_link(new_pad, sinkpad);
        g_assert(lres == GST_PAD_LINK_OK);
        gst_object_unref(sinkpad);
    }
    else if (strstr(pad_name, "recv_rtp_src_1_") && strstr(depay_name, "videodepay"))
    {
        GstPad *sinkpad;
        GstPadLinkReturn lres;
 
        g_print("new payload on rtpbin: %s %s %s\n",
                gst_element_get_name(rtpbin), GST_PAD_NAME(new_pad), gst_element_get_name(depay));
 
        sinkpad = gst_element_get_static_pad(depay, "sink");
        g_assert(sinkpad);
 
        lres = gst_pad_link(new_pad, sinkpad);
        g_assert(lres == GST_PAD_LINK_OK);
        gst_object_unref(sinkpad);
    }
}
 
int main(int argc, char *argv[])
{
    GMainLoop *loop;
    GstElement *pipeline;
 
    GstElement *rtpbin;
    GstElement *audiosrc, *audiodepay, *audiodec, *audiores, *audioconv, *audiosink;
    GstElement *videosrc, *videodepay, *videosink;
    GstElement *flvmux, *rtmpsink;
 
    gboolean res;
    GstCaps *caps;
    GstPadLinkReturn lres;
    GstPad *srcpad, *audio_sinkpad, *video_sinkpad;
 
    gst_init(&argc, &argv);
    pipeline = gst_pipeline_new(NULL);
    g_assert(pipeline);
 
    /* the rtpbin element */
    rtpbin = gst_element_factory_make("rtpbin", "rtpbin");
    g_assert(rtpbin);
    gst_bin_add(GST_BIN(pipeline), rtpbin);
    /* the sources */
    audiosrc = gst_element_factory_make("udpsrc", "audiosrc");
    g_assert(audiosrc);
    g_object_set(audiosrc, "port", 5003, NULL);
    caps = gst_caps_from_string(AUDIO_CAPS);
    g_object_set(audiosrc, "caps", caps, NULL);
    gst_caps_unref(caps);
    gst_bin_add(GST_BIN(pipeline), audiosrc);
 
    videosrc = gst_element_factory_make("udpsrc", "videosrc");
    g_assert(videosrc);
    g_object_set(videosrc, "port", 5004, NULL);
    caps = gst_caps_from_string(VIDEO_CAPS);
    g_object_set(videosrc, "caps", caps, NULL);
    gst_caps_unref(caps);
    gst_bin_add(GST_BIN(pipeline), videosrc);
 
    /* now link all to the rtpbin, start by getting an RTP sinkpad for session 0 */
    srcpad = gst_element_get_static_pad(audiosrc, "src");
    audio_sinkpad = gst_element_get_request_pad(rtpbin, "recv_rtp_sink_0");
    lres = gst_pad_link(srcpad, audio_sinkpad);
    g_assert(lres == GST_PAD_LINK_OK);
    gst_object_unref(srcpad);
 
    srcpad = gst_element_get_static_pad(videosrc, "src");
    video_sinkpad = gst_element_get_request_pad(rtpbin, "recv_rtp_sink_1");
    lres = gst_pad_link(srcpad, video_sinkpad);
    g_assert(lres == GST_PAD_LINK_OK);
    gst_object_unref(srcpad);
 
    /* the depayloading and decoding */
    audiodepay = gst_element_factory_make("rtpopusdepay", "audiodepay");
    g_assert(audiodepay);
    audiodec = gst_element_factory_make("opusdec", "audiodec");
    g_assert(audiodec);
    /* the audio playback and format conversion */
    audioconv = gst_element_factory_make("audioconvert", "audioconv");
    g_assert(audioconv);
    audiores = gst_element_factory_make("audioresample", "audiores");
    g_assert(audiores);
    audiosink = gst_element_factory_make("voaacenc", "audiosink"); /* AAC encoder feeding flvmux; avenc_aac freezes after one frame */
    g_assert(audiosink);
    /* add depayloading and playback to the pipeline and link */
    gst_bin_add_many(GST_BIN(pipeline), audiodepay, audiodec, audioconv,
                     audiores, audiosink, NULL);
    res = gst_element_link_many(audiodepay, audiodec, audioconv, audiores,
                                audiosink, NULL);
    g_assert(res == TRUE);
 
    videodepay = gst_element_factory_make("rtph264depay", "videodepay");
    g_assert(videodepay);
    videosink = gst_element_factory_make("h264parse", "videosink");
    g_assert(videosink);
    gst_bin_add_many(GST_BIN(pipeline), videodepay, videosink, NULL);
    res = gst_element_link_many(videodepay, videosink, NULL);
    g_assert(res == TRUE);
 
    // flvmux
    flvmux = gst_element_factory_make("flvmux", "flvmux");
    g_assert(flvmux);
    g_object_set(flvmux, "streamable", TRUE, NULL);
    gst_bin_add(GST_BIN(pipeline), flvmux);
 
    res = gst_element_link(audiosink, flvmux);
    g_assert(res == TRUE);
    res = gst_element_link(videosink, flvmux);
    g_assert(res == TRUE);
 
    rtmpsink = gst_element_factory_make("rtmpsink", "rtmpsink");
    g_assert(rtmpsink);
    g_object_set(rtmpsink, "sync", FALSE, NULL);
    g_object_set(rtmpsink, "location", "rtmp://u1802/live/demo2", NULL);
    gst_bin_add(GST_BIN(pipeline), rtmpsink);
    res = gst_element_link(flvmux, rtmpsink);
    g_assert(res == TRUE);
 
    /* the RTP pad that we have to connect to the depayloader will be created
   * dynamically so we connect to the pad-added signal, pass the depayloader as
   * user_data so that we can link to it. */
    g_signal_connect(rtpbin, "pad-added", G_CALLBACK(pad_added_cb), audiodepay);
    g_signal_connect(rtpbin, "pad-added", G_CALLBACK(pad_added_cb), videodepay);
 
    /* set the pipeline to playing */
    g_print("starting receiver pipeline\n");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
 
    /* we need to run a GLib main loop to get the messages */
    loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);
 
    g_print("stopping receiver pipeline\n");
    gst_element_set_state(pipeline, GST_STATE_NULL);
 
    g_main_loop_unref(loop);
    gst_object_unref(audio_sinkpad);
    gst_object_unref(video_sinkpad);
    /* the elements were added to the pipeline, which owns them;
       unreffing the pipeline releases them too */
    gst_object_unref(pipeline);
    return 0;
}
 

Reference: https://www.tianjiaguo.com/2019/11/gstreamer-rtp2rtmp/

GStreamer-based distribution of real-time video streams

1. Overview: GStreamer is a powerful, extensible, reusable, cross-platform framework for streaming-media applications. The framework consists of roughly three parts: the application-layer API, the core framework, and extension plugins. The application-layer API mainly exposes the framework to applications of all kinds.

 


GStreamer

ships with the following three executables:
  • gst-inspect-1.0
    lists the GStreamer plugins that are installed
  • gst-launch-1.0
    builds a GStreamer pipeline; in short a pipe model: stack some processing on a data stream and collect the output
  • ges-launch-1.0
    prototyping tool for GStreamer Editing Services

GStreamer Pipeline Examples

Video test source
# play the test source
gst-launch-1.0 videotestsrc ! autovideosink

# generate a 1280x720 video source and play it
gst-launch-1.0 -v videotestsrc ! video/x-raw, framerate=25/1,width=1280,height=720 ! autovideosink
Camera data
# play the camera feed
gst-launch-1.0 v4l2src ! autovideosink

# request the size, format and frame rate we need
gst-launch-1.0 v4l2src ! video/x-raw,format=YUY2,width=320,height=240,framerate=20/1 ! autovideosink
Scaling and cropping
gst-launch-1.0 v4l2src ! video/x-raw,format=YUY2,width=640,height=480,framerate=15/1 \
 ! aspectratiocrop aspect-ratio=16/9 ! videoconvert ! autovideosink
Encoding and muxing
Single stream
# encode the video to H.264 with x264 and put it into an MPEG-TS transport stream:
gst-launch-1.0 -v videotestsrc ! video/x-raw,framerate=25/1, width=640, height=360 \
! x264enc ! mpegtsmux ! filesink location=test.ts

# play a local file
gst-launch-1.0 -v playbin uri=file:///home/frank/test.ts
RTMP到RTP
gst-launch-1.0 -v  rtmpsrc location=rtmp://172.17.230.220/live/123 ! flvdemux ! h264parse ! rtph264pay config-interval=-1 pt=111 ! udpsink host=121.199.37.143 port=15004

gst-launch-1.0 -v rtmpsrc location=rtmp://172.17.230.220/live/123 \
    ! flvdemux name=demux demux.audio ! queue ! decodebin ! audioconvert ! audioresample \
    ! opusenc ! rtpopuspay timestamp-offset=0 ! udpsink host=121.199.37.143 port=15002 \
    demux.video ! queue ! h264parse ! rtph264pay timestamp-offset=0 config-interval=-1 \
    ! udpsink host=121.199.37.143 port=15004

Using the GStreamer API

References


Installing GStreamer on Linux: https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c#getting-the-tutorials-source-code

1. Installing the development libraries
Ubuntu already ships the gstreamer runtime, so only a few development packages are needed:
libgstreamer0.10-0
libgstreamer0.10-dev
libgstreamer0.10-0-dbg

# sudo apt-get install libgstreamer0.10-0 libgstreamer0.10-dev libgstreamer0.10-0-dbg


2. Testing the development libraries
#include <stdio.h>
#include <gst/gst.h>

int main (int argc, char *argv[])
{
    const gchar *nano_str;
    guint major, minor, micro, nano;
    gst_init (&argc, &argv);
    gst_version (&major, &minor, &micro, &nano);
    if (nano == 1)
        nano_str = "(CVS)";
    else if (nano == 2)
        nano_str = "(Prerelease)";
    else
        nano_str = "";
    printf ("This program is linked against GStreamer %d.%d.%d %s\n",
            major, minor, micro, nano_str);
    return 0;
}

3. Compile and run

Check the library dependencies; the tutorial sources can be fetched with:

pkg-config --cflags --libs gstreamer-1.0

git clone https://gitlab.freedesktop.org/gstreamer/gst-docs
gcc -Wall $(pkg-config --cflags --libs gstreamer-1.5) gstest.c -o hello   # does not work on Ubuntu: with the libs before the source file the linker cannot find them!
gcc basic-tutorial-1.c -o basic-tutorial-1 `pkg-config --cflags --libs gstreamer-1.5`

./hello 
Or equivalently:
gcc  gstest.c -o a.out -pthread -I/usr/include/gstreamer-1.5 -I/usr/lib/x86_64-linux-gnu/gstreamer-1.5/include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -lgstreamer-1.5 -lgobject-2.0 -lglib-2.0

Output:

This program is linked against GStreamer 1.8.1 (CVS)


References:

Assorted AAC notes: https://telecom.altanai.com/category/telecom-architectures/telecom-info/

GStreamer C++ examples: https://github.com/GNOME/gstreamermm

 
 
