Articles tagged ffmpeg


Hardware/VAAPI – FFmpeg
update:2021-9-30
Device Selection
The libva driver needs to be attached to a DRM device to work. This can be connected either directly or via a running X server. When working standalone, it is generally best to use a DRM render node (/dev/dri/renderD*) - only use a connection via X if you actually want to deal with surfaces inside X (with DRI2, for example).

In ffmpeg, a named global device can be created using the -init_hw_device option:

ffmpeg -init_hw_device vaapi=foo:/dev/dri/renderD128

With the decode hwaccel, each input stream can then be given a previously initialised device with the -hwaccel_device option:

ffmpeg -init_hw_device vaapi=foo:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_device foo -i ...

If only one stream is being used, -hwaccel_device can also accept a device path directly:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i ...
Where filters require a device (for example, the hwupload filter), the device used in a filter graph can be specified with the -filter_hw_device option:

ffmpeg -init_hw_device vaapi=foo:/dev/dri/renderD128 -i ... -filter_hw_device foo -filter_complex ...hwupload... ...
If you have multiple usable devices in the same machine (for example, an Intel integrated GPU and an AMD discrete graphics card), they can be used simultaneously to decode different streams:

ffmpeg -init_hw_device vaapi=intel:/dev/dri/renderD128 -init_hw_device vaapi=amd:/dev/dri/renderD129 -hwaccel vaapi -hwaccel_device intel -i ... -hwaccel vaapi -hwaccel_device amd -i ...
(See http://www.ffmpeg.org/ffmpeg.html#toc-Advanced-Video-options for more detail about these options.)

Finally, the -vaapi_device option may be more convenient in single-device cases with filters.

ffmpeg -vaapi_device /dev/dri/renderD128

acts equivalently to:

ffmpeg -init_hw_device vaapi=vaapi0:/dev/dri/renderD128 -filter_hw_device vaapi0

Surface Formats
The hardware codecs used by VAAPI are not able to access frame data in arbitrary memory. Therefore, all frame data needs to be uploaded to hardware surfaces connected to the appropriate device before being used. All VAAPI hardware surfaces in ffmpeg are represented by the vaapi pixfmt (the internal layout is not visible here, though).

The hwaccel decoders normally output frames in the associated hardware format, but by default the ffmpeg utility downloads the output frames to normal memory before passing them to the next component. This allows the hwaccel to be used standalone to speed up decoding without any additional options:

ffmpeg -hwaccel vaapi ... -i input.mp4 -c:v libx264 ... output.mp4

For other outputs, the option -hwaccel_output_format can be used to specify the format to be used. This can be a software format (which formats are usable depends on the driver), or it can be the vaapi hardware format to indicate that the surface should not be downloaded.

For example, to decode only and do nothing with the result:

ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi ... -i input.mp4 -f null -

This can be used to test the speed / CPU use of the decoder only (the download operation typically adds a large amount of additional overhead).

When decoder output is in hardware surfaces, the frames will be given to following filters or encoders in that form. The scale_vaapi and deinterlace_vaapi filters act on vaapi format frames to scale and deinterlace them respectively. There are also some generic filters - hwdownload, hwupload and hwmap - which support all hardware formats, including VAAPI (see http://www.ffmpeg.org/ffmpeg-filters.html#hwdownload).

For example, take an interlaced input, decode, deinterlace, scale to 720p, download to normal memory and encode with libx264:

ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi ... -i interlaced_input.mp4 -vf 'deinterlace_vaapi,scale_vaapi=w=1280:h=720,hwdownload,format=nv12' -c:v libx264 ... progressive_output.mp4

Encoding
The encoders only accept input as VAAPI surfaces. If the input is in normal memory, it will need to be uploaded before giving the frames to the encoder - in the ffmpeg utility, the hwupload filter can be used for this. It will upload to a surface with the same layout as the software frame, so it may be necessary to add a format filter immediately before to get the input into the right format (hardware generally wants the nv12 layout, but most software functions use the yuv420p layout). The hwupload filter also requires a device to upload to, which needs to be defined before the filter graph is created.

So, to use the default decoder for some input, then upload frames to VAAPI and encode with H.264 and default settings:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4

If the input is known to be hardware-decodable, then we can use the hwaccel:

ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 -i input.mp4 -c:v h264_vaapi output.mp4

Finally, when the input may or may not be hardware decodable we can do:

ffmpeg -init_hw_device vaapi=foo:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device foo -i input.mp4 -filter_hw_device foo -vf 'format=nv12|vaapi,hwupload' -c:v h264_vaapi output.mp4

This works because the decoder will output either vaapi surfaces (if the hwaccel is usable) or software frames (if it isn't). In the first case, it matches the vaapi format and hwupload does nothing (it passes through hardware frames unchanged). In the second case, it matches the nv12 format and converts whatever the input is to that, then uploads. Performance will likely vary by a large amount depending which path is chosen, though.

The supported encoders are:

H.262 / MPEG-2 part 2 mpeg2_vaapi
H.264 / MPEG-4 part 10 (AVC) h264_vaapi
H.265 / MPEG-H part 2 (HEVC) hevc_vaapi
MJPEG / JPEG mjpeg_vaapi
VP8 vp8_vaapi
VP9 vp9_vaapi
For an explanation of codec options, see http://www.ffmpeg.org/ffmpeg-codecs.html#VAAPI-encoders.

Mapping options from libx264
No CRF-like mode is currently supported. The only constant-quality mode is CQP (constant quantisation parameter), which has no adaptivity to scene content. It does, however, allow different quality settings for different frame types, to improve compression by spending fewer bits on unreferenced B-frames - see the (i|b)_q(factor|offset) options. CQP mode cannot be combined with a maximum bitrate or buffer size.
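
For example, a CQP encode that spends fewer bits on B-frames might look like this (a sketch; the qp and b_qfactor values are illustrative, and which options are honoured varies by driver):

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi -qp 22 -bf 2 -b_qfactor 1.1 output.mp4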

CBR and VBR modes are supported, though the output of them varies significantly by driver and device (default is VBR, set -maxrate equal to -b:v for CBR). HRD buffering options (rc_max_rate, rc_buffer_size) are functional, and the encoder will generate buffering_period and pic_timing SEI when appropriate.
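
For example, CBR with an explicit HRD buffer (a sketch; a buffer of twice the bitrate is just a common starting point):

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi -b:v 5M -maxrate 5M -bufsize 10M output.mp4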

There is no complete analogue of the -preset option. The -compression_level option controls the local speed/quality tradeoff in the encoder (that is, the amount of effort expended on trying to get the best results from local choices like motion estimation and mode decision), using a nebulous per-device scale. The argument is a small integer, from 1 up to some limit dependent on the device (not more than 7) - higher values are faster / lower stream quality. Separately, some hardware (Intel gen9) supports a low-power mode with more restricted features. It is accessible via the -low_power option.
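
For example, to favour speed over stream quality (a sketch; which values are accepted depends on the device):

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi -compression_level 7 -b:v 5M output.mp4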

Neither two-pass encoding nor lookahead are supported at all - only local rate control is possible. VBR mode should do a reasonably good job at getting close to an overall bitrate target, but quality will vary significantly through a stream if the complexity varies.

Full Examples
All of these examples assume the input and output files will contain one video stream (audio will need to be considered separately). It is assumed that VAAPI is usable via the DRM device node /dev/dri/renderD128.

Decode-only
Decode an input with hardware if possible, output in normal memory to encode with libx264:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i input.mp4 -c:v libx264 -crf 20 output.mp4

Decode an input with hardware, deinterlace it if it was interlaced, downscale, then download to normal memory to encode with libx264 (will fail if the input is not supported by the hardware decoder):

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -vf 'deinterlace_vaapi=rate=field:auto=1,scale_vaapi=w=640:h=360,hwdownload,format=nv12' -c:v libx264 -crf 20 output.mp4

Decode an input and discard the output (this can be used as a crude benchmark of the decoder):

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -f null -

Encode-only
Encode an input with H.264 at 5Mbps VBR:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi -b:v 5M output.mp4

As previous, but use constrained baseline profile only for compatibility with old devices:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi -b:v 5M -profile 578 -bf 0 output.mp4

Encode with H.264 at good constant quality:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi -qp 18 output.mp4

Encode with 10-bit H.265 at 15Mbps VBR (recent hardware required - Kaby Lake or later Intel):

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=p010,hwupload' -c:v hevc_vaapi -b:v 15M -profile 2 output.mp4

Scale to 720p and encode with H.264 at 5Mbps CBR:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'hwupload,scale_vaapi=w=1280:h=720:format=nv12' -c:v h264_vaapi -b:v 5M -maxrate 5M output.mp4

Encode with VP9 at 5Mbps VBR:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v vp9_vaapi -b:v 5M output.webm

Encode with VP9 at good constant quality, using pseudo-B-frames to improve compression:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v vp9_vaapi -global_quality 50 -bf 1 -bsf:v vp9_raw_reorder,vp9_superframe output.webm

Camera Capture
Capture a raw stream from a V4L2 camera device and encode it as H.264:

ffmpeg -vaapi_device /dev/dri/renderD128 -f v4l2 -video_size 1920x1080 -i /dev/video0 -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4

Capture an MJPEG stream from a V4L2 camera device (e.g. a UVC webcam), decode it and encode it as H.264:

ffmpeg -f v4l2 -input_format mjpeg -video_size 1920x1080 -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i /dev/video0 -vf 'scale_vaapi=format=nv12' -c:v h264_vaapi output.mp4

The extra scale_vaapi instance is needed here to convert the VAAPI surfaces to the correct format for encoding - webcams will typically supply images in YUV 4:2:2 format.

Screen Capture
Capture the screen from X and encode with H.264 at reasonable constant-quality:

ffmpeg -vaapi_device /dev/dri/renderD128 -f x11grab -video_size 1920x1080 -i :0 -vf 'hwupload,scale_vaapi=format=nv12' -c:v h264_vaapi -qp 24 output.mp4

Note that it is also possible to do the format conversion (RGB to YUV) on the CPU - this is slower, but might be desirable if other filters are going to be applied:

ffmpeg -vaapi_device /dev/dri/renderD128 -f x11grab -video_size 1920x1080 -i :0 -vf 'format=nv12,hwupload' -c:v h264_vaapi -qp 24 output.mp4

Capture the screen from the first active KMS plane:

ffmpeg -device /dev/dri/card0 -f kmsgrab -i - -vf 'hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' -c:v h264_vaapi -qp 24 output.mp4

Compared to capturing through X as in the previous examples, this should use much less CPU (all surfaces stay on the GPU side) and can work outside X (on VTs or in Wayland), but can only capture whole planes and requires DRM master or CAP_SYS_ADMIN to run.
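
If running kmsgrab as a normal user, one way to satisfy the CAP_SYS_ADMIN requirement is to grant the capability to the ffmpeg binary (an assumption about your setup; adjust the path to your binary):

sudo setcap cap_sys_admin+ep "$(which ffmpeg)"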

Transcode
Hardware-only transcode to H.264 at 2Mbps CBR:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -c:v h264_vaapi -b:v 2M -maxrate 2M output.mp4
Decode, deinterlace if interlaced, scale to 720p, encode with H.265 at 5Mbps VBR:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -vf 'deinterlace_vaapi=rate=field:auto=1,scale_vaapi=w=1280:h=720' -c:v hevc_vaapi -b:v 5M output.mp4
Transcode to 10-bit H.265 at 15Mbps VBR (the input can be 10-bit, but need not be):

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -vf 'scale_vaapi=format=p010' -c:v hevc_vaapi -profile 2 -b:v 15M output.mp4
Transcode to H.264 in constrained baseline profile at level 3 and 1Mbps CBR for compatibility with old devices:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -vf 'fps=30,scale_vaapi=w=640:h=-2:format=nv12' -c:v h264_vaapi -profile 578 -level 30 -bf 0 -b:v 1M -maxrate 1M output.mp4
Decode the input, then pick a frame from it every 10 seconds to make a sequence of JPEG screenshots at high quality:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -r 1/10 -c:v mjpeg_vaapi -global_quality 90 -f image2 output%03d.jpeg
Burn subtitles into the video while transcoding:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -vf 'scale_vaapi,hwmap=mode=read+write+direct,format=nv12,ass=subtitles.ass,hwmap' -c:v h264_vaapi -b:v 2M -maxrate 2M output.mp4
(Note that the scale_vaapi filter is required here to copy the frames - without it, the subtitles would be drawn directly on the reference frames being used by the decoder at the same time.)

Transcode to two different outputs (one at constant-quality and one at constant-bitrate) from the same input:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -filter_complex 'split[cq][cb]' -map '[cq]' -c:v h264_vaapi -qp 18 output-cq.mp4 -map '[cb]' -c:v h264_vaapi -b:v 5M -maxrate 5M output-cb.mp4
Transcode for multiple streaming formats (one H.264 and one VP9, with the same parameters) from the same input:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -filter_complex 'split[h264][vp9]' -map '[h264]' -c:v h264_vaapi -b:v 5M output-h264.mp4 -map '[vp9]' -c:v vp9_vaapi -b:v 5M output-vp9.webm
Decode on one device, download, upload to a second device, and encode:

ffmpeg -init_hw_device vaapi=decdev:/dev/dri/renderD128 -init_hw_device vaapi=encdev:/dev/dri/renderD129 -hwaccel vaapi -hwaccel_device decdev -hwaccel_output_format vaapi -i input.mp4 -filter_hw_device encdev -vf 'hwdownload,format=nv12,hwupload' -c:v h264_vaapi -b:v 5M output.mp4
Other
Use the VAAPI deinterlacer standalone to attempt to make a software transcode run faster (this may actually make things slower - the additional copying to the GPU and back is quite a large overhead):

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload,deinterlace_vaapi=rate=field,hwdownload,format=nv12' -c:v libx264 -crf 24 output.mp4
Referenced from: https://trac.ffmpeg.org/wiki/Hardware/VAAPI

ffmpeg -f x11grab -video_size 1920x1080 -i :0  output.mp4
ffmpeg version 4.1.7 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-17ubuntu1~20.04)
  configuration: --disable-static --enable-shared
  libavutil      56. 22.100 / 56. 22.100
  libavcodec     58. 35.100 / 58. 35.100
  libavformat    58. 20.100 / 58. 20.100
  libavdevice    58.  5.100 / 58.  5.100
  libavfilter     7. 40.101 /  7. 40.101
  libswscale      5.  3.100 /  5.  3.100
  libswresample   3.  3.100 /  3.  3.100
[x11grab @ 0x55e93e579740] Stream #0: not enough frames to estimate rate; consider increasing probesize
Input #0, x11grab, from ':0':
  Duration: N/A, start: 1632986648.181873, bitrate: N/A
    Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1920x1080, 29.97 fps, 1000k tbr, 1000k tbn, 1000k tbc
File 'output.mp4' already exists. Overwrite ? [y/N] y
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> mpeg4 (native))
Press [q] to stop, [?] for help
Output #0, mp4, to 'output.mp4':
  Metadata:
    encoder         : Lavf58.20.100
    Stream #0:0: Video: mpeg4 (mp4v / 0x7634706D), yuv420p, 1920x1080, q=2-31, 200 kb/s, 29.97 fps, 30k tbn, 29.97 tbc
    Metadata:
      encoder         : Lavc58.35.100 mpeg4
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
frame=  894 fps= 44 q=31.0 Lsize=    7757kB time=00:00:29.79 bitrate=2132.5kbits/s dup=292 drop=290 speed=1.46x    
video:7752kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.060582%

Build FFmpeg QSV · Intel-Media-SDK/MediaSDK Wiki
update:2021-9-30
Install all required common packages. You can get the media stack in two ways:

Build from sources:

Intel Media Driver for VAAPI (aka iHD) (gmmlib and LibVA required)
Intel Media SDK
Do not forget to export the environment variables:

export LIBVA_DRIVERS_PATH=/path/to/iHD_driver
export LIBVA_DRIVER_NAME=iHD
export LD_LIBRARY_PATH=/path/to/msdk/lib
export PKG_CONFIG_PATH=/path/to/msdk/lib/pkgconfig
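
To confirm the driver loads correctly, vainfo (from the libva-utils package) is a quick check; it should report the iHD driver and the supported profiles (assuming vainfo is installed):

vainfo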

Starting from Ubuntu 19.04, Intel media stack components are available for installation via apt-get (see: Intel media stack on Ubuntu).

sudo apt-get install libva-dev libmfx-dev intel-media-va-driver-non-free
export LIBVA_DRIVER_NAME=iHD

To build ffplay (as part of ffmpeg) you also need to install an additional dependency:

sudo apt-get install libsdl2-dev

Build FFmpeg
Get ffmpeg sources
git clone https://github.com/ffmpeg/ffmpeg
cd ffmpeg

Configure and build FFmpeg
Configure ffmpeg for use with VAAPI and the Media SDK. The key flag is --enable-libmfx.

./configure --arch=x86_64 --disable-yasm --enable-vaapi --enable-libmfx
make

If you need a debug version of ffmpeg you can try

./configure --arch=x86_64 --disable-yasm --enable-vaapi --enable-libmfx \
        --enable-debug=3 --disable-stripping --extra-cflags=-gstabs+ \
        --disable-optimizations

make
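
Once built, a quick smoke test of the QSV path is a transcode using the h264_qsv decoder and encoder (a sketch; input.mp4 is a placeholder and the iHD driver must be active):

./ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 -c:v h264_qsv -b:v 5M output.mp4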
Referenced from: https://github.com/Intel-Media-SDK/MediaSDK/wiki/Build-FFmpeg-QSV

FFmpeg on Wikipedia
FFmpeg is free and open-source software that can record, convert and stream audio and video in many formats. It includes libavcodec, an audio/video codec library used by many other projects, and libavformat, an audio/video container format conversion library.

The "FF" in "FFmpeg" stands for "Fast Forward". Some newcomers wrote to the FFmpeg project leader asking whether FF meant "Fast Free" or "Fast Fourier"; the reply was: "Just for the record, the original meaning of "FF" in FFmpeg is "Fast Forward"..."

The project was originally started by the French programmer Fabrice Bellard and is now maintained by Michael Niedermayer. Many FFmpeg developers are also members of the MPlayer project, and FFmpeg is hosted on the MPlayer project's server.

On 13 March 2011 a group of FFmpeg developers decided to fork the project as Libav, together with a set of rules on how the project should continue to be developed and maintained.

FFmpeg command-line tools
Command-line applications:
ffmpeg: converts video or audio files between formats
ffplay: a simple player based on SDL and the FFmpeg libraries
ffprobe: displays information about media files; compare MediaInfo
Common ffmpeg options
Option details can be shown with ffmpeg -h; details such as codec names can be shown with ffmpeg -formats.

The more commonly used options are listed below:

Main options

-i: set the input file name.
-f: set the output format.
-y: overwrite the output file if it already exists.
-fs: stop the conversion once the specified file size is exceeded.
-t: set the duration of the output, in seconds.
-ss: start converting from the specified time, in seconds.
When -ss and -t are used together, conversion starts at the -ss time and runs for the -t duration; for example, -ss 00:00:01.00 -t 00:00:10.00 converts from 00:00:01.00 to 00:00:11.00 (see the example after this list).
-title: set the title.
-timestamp: set the timestamp.
-vsync: duplicate or drop frames to keep audio and video in sync.
-c: set the codecs for the output file.
-metadata: change the metadata of the output file.
-help: show help information.
Video options
-b:v: set the video bitrate; the default is 200 kbit/s.
-r: set the frame rate; the default is 25.
-s: set the frame width and height.
-aspect: set the display aspect ratio.
-vn: skip video; used when only the audio should be processed.
-vcodec (-c:v): set the video codec; if not set, the same codec as the input file is used.
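
A minimal sketch combining several of these options (file names are placeholders): take 10 seconds starting at 00:00:01, re-encode the video at 1 Mbit/s and 25 fps, and overwrite the output if it already exists:

ffmpeg -y -ss 00:00:01.00 -t 00:00:10.00 -i input.mp4 -b:v 1M -r 25 clip.mp4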

Batch-extract audio on Linux with an ffmpeg shell script

cat cc.sh 
#!/bin/sh

folder="."
id=0
mkdir -p output                        # the output directory must exist
for file_a in "${folder}"/*
do
    id=$((id+1))                       # POSIX arithmetic ('let' is bash-only)
    in_filename=$(basename "$file_a")
    num=$(printf "%02d" "$id")
    out_filename="output/${num}.aac"
    # only process .mp4 and .ts files
    if [ "${in_filename##*.}" = "mp4" ] || [ "${in_filename##*.}" = "ts" ]; then
        ffmpeg -i "$in_filename" -vn "$out_filename"
    fi
done

ffmpeg: download an m3u8 stream

ffmpeg -i http://.../playlist.m3u8 -c copy -bsf:a aac_adtstoasc output.mp4

ffmpeg aac_adtstoasc explained

The official documentation is here: https://ffmpeg.org/ffmpeg-bitstream-filters.html#aac_005fadtstoasc
Convert MPEG-2/4 AAC ADTS to an MPEG-4 Audio Specific Configuration bitstream.
This filter creates an MPEG-4 AudioSpecificConfig from an MPEG-2/4 ADTS header and removes the ADTS header.
This filter is required for example when copying an AAC stream from a raw ADTS AAC or an MPEG-TS container to MP4A-LATM, to an FLV file, or to MOV/MP4 files and related formats such as 3GP or M4A. Please note that it is auto-inserted for MP4A-LATM and MOV/MP4 and related formats.

1) When wrapping the raw output of an AAC encoder (ADTS headers + ES stream) into MP4, FLV, MOV and similar containers, the ADTS header must first be converted into an MPEG-4 AudioSpecificConfig (extracting the codec parameters), and the ADTS headers must be stripped from the raw stream (leaving only the ES stream).
2) Conversely, when demuxing an AAC stream (ES only) out of an MP4, FLV or MOV file, an ADTS header (carrying the codec parameters) has to be prepended to the extracted AAC stream.
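
For direction 2), no explicit bitstream filter is needed on the ffmpeg command line, because the adts muxer writes the ADTS headers itself (input.mp4 is a placeholder):

ffmpeg -i input.mp4 -vn -c:a copy output.aac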

ubuntu ffmpeg screen capture

ffmpeg -f x11grab -video_size 1920x1030 -framerate 50 -i :0.0 -vf format=yuv420p output.mp4

Capturing your Desktop / Screen Recording for Linux
Use the x11grab device:

ffmpeg -video_size 1024x768 -framerate 25 -f x11grab -i :0.0+100,200 output.mp4

This will grab the image from the desktop, starting with the upper-left corner at x=100, y=200, with a width and height of 1024x768.

If you need audio too, you can use ALSA (see Capture/ALSA for more info):

ffmpeg -video_size 1024x768 -framerate 25 -f x11grab -i :0.0+100,200 -f alsa -ac 2 -i hw:0 output.mkv

Or the pulse input device (see Capture/PulseAudio for more info):

ffmpeg -video_size 1024x768 -framerate 25 -f x11grab -i :0.0+100,200 -f pulse -ac 2 -i default output.mkv

ffmpeg configure flags

ffmpeg -version

ffmpeg version N-105038-g30322ebe3c Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-17ubuntu1~20.04)
configuration: --enable-libx264 --enable-libpulse --enable-gpl --enable-openssl --enable-nonfree --enable-x86asm --enable-libmp3lame --enable-libx265 --enable-librtmp

ffmpeg codec and format lists

$ ffmpeg -codecs
$ ffmpeg -encoders
$ ffmpeg -decoders
$ ffmpeg -formats

ffmpeg error: OpenSSL <3.0.0 is incompatible with the gpl

Add --enable-nonfree to the configure flags.

ffmpeg: record system audio

pactl list short sources

1 alsa_output.pci-0000_00_1f.3.analog-stereo.monitor module-alsa-card.c s16le 2ch 48000Hz IDLE
2 alsa_input.pci-0000_00_1f.3.analog-stereo module-alsa-card.c s16le 2ch 48000Hz SUSPENDED

ffmpeg: record audio from PulseAudio

ffmpeg -f pulse -i 1 -ac 1 out.mp3

You can reference sources either by number: -f pulse -i 5, or by name -f pulse -i alsa_input.pci-0000_00_1b.0.analog-stereo, or just use -f pulse -i default to use the source currently set as default in pulseaudio.

ffmpeg: record from the microphone

ffmpeg -f pulse -i alsa_input.pci-0000_00_1b.0.analog-stereo -ac 1 recording.m4a

ffmpeg: record what is currently playing (monitor source)

ffmpeg -f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor -ac 2 recording.m4a

Linux: record desktop video and audio together

ffmpeg -video_size 1024x768 -framerate 25 -f x11grab -i :0.0+100,200 -f pulse -ac 2 -i 1 output.mkv

Move the moov atom to the beginning of the video file using FFmpeg

ffmpeg -i input_video_file.mp4 -vcodec copy -acodec copy -movflags faststart output_video_file.mp4

ffmpeg: split an mp4 into segments

ffmpeg -i input.mp4 -c copy -segment_time 30 -f segment output%03d.mp4

ffmpeg -i input.mp4 -c copy -segment_time 30 -f segment -segment_start_number 1 -individual_header_trailer 1 -break_non_keyframes 1 -reset_timestamps 1 output%03d.mp4

FFmpeg: push a stream to an RTSP server
Pushing with FFmpeg
Note: run rtsp-simple-server before pushing; download it from: https://github.com/bluenviron/mediamtx/releases

Push over UDP

ffmpeg -re -i input.mp4 -c copy -f rtsp rtsp://127.0.0.1:8554/stream

Push over TCP

ffmpeg -re -i input.mp4 -c copy -rtsp_transport tcp -f rtsp rtsp://127.0.0.1:8554/stream

Looped push

ffmpeg -re -stream_loop -1 -i input.mp4 -c copy -f rtsp rtsp://127.0.0.1:8554/stream

Where:

-re reads the input at its native rate, as a live stream;

-stream_loop is the number of times to loop the input; -1 loops forever;

-i is the input file;

-f is the output format to push to.
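
To verify that the server is serving the stream, it can be played back with ffplay (assuming the default mediamtx port used above):

ffplay -rtsp_transport tcp rtsp://127.0.0.1:8554/stream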
Referenced from: https://blog.csdn.net/chan1987818/article/details/128219230

ffmpeg: generate a white video

ffmpeg -f lavfi -i color=White:1280x720:d=360 -format rgb32 -f matroska  test1.mp4

ls -lh test1.mp4 
-rw-rw-r-- 1 hesy hesy 412K  5月  7 10:05 test1.mp4
ffmpeg -i test1.mp4 
Input #0, matroska,webm, from 'test1.mp4':
  Metadata:
    ENCODER         : Lavf58.76.100
  Duration: 00:06:00.00, start: 0.000000, bitrate: 9 kb/s
  Stream #0:0: Video: h264 (High), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 1k tbn, 50 tbc (default)

ffmpeg -i test1.mp4 -vcodec libx264 -preset ultrafast -b:v 2000k  output1.mp4

ffmpeg -i output1.mp4 -vf "subtitles=output.srt:force_style='FontName=Arial,FontSize=18,PrimaryColour=&H00FF00&,Outline=2,Shadow=1,BackColour=&H000000&,Bold=-1,Alignment=2,MarginV=135'" output2.mp4

ffmpeg: make a video from a single image

ffmpeg -loop 1 -i 1.png -s 1280x720 -t 100 output.mp4

-loop 1: required because there is only one image (loop that image)
-t: duration of the generated video, in seconds; it must be given, otherwise the video is generated without limit
-s: output video resolution
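
A variant that also adds an audio track and stops when the audio ends (a sketch; audio.mp3 is a placeholder, and -tune stillimage is a libx264 hint for static content):

ffmpeg -loop 1 -i 1.png -i audio.mp3 -c:v libx264 -tune stillimage -c:a aac -shortest -s 1280x720 output.mp4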

ffmpeg fade transitions (positional syntax is fade=type:start_frame:nb_frames):

ffmpeg -i 1.mp4 -vf fade=in:0:5 out.mp4   fade in over the first 5 frames
ffmpeg -i 1.mp4 -vf fade=out:35:40 out.mp4   fade out starting at frame 35, over 40 frames
ffmpeg -i 1.mp4 -vf fade=in:0:3,fade=out:37:40 out.mp4   fade in over the first 3 frames, fade out from frame 37
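
fade can also be driven in seconds rather than frames, via the named options t (type), st (start time) and d (duration); a sketch assuming an input of roughly 10 seconds:

ffmpeg -i 1.mp4 -vf "fade=t=in:st=0:d=1,fade=t=out:st=9:d=1" out.mp4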

ffmpeg: 2x2 video grid

ffmpeg -i 1.mp4 -i 2.mp4 -i  3.mp4 -i  4.mp4 -filter_complex "nullsrc=size=640x480[base];[0:v] setpts=PTS-STARTPTS,scale=320x240[upperleft];[1:v]setpts=PTS-STARTPTS,scale=320x240[upperright];[2:v]setpts=PTS-STARTPTS, scale=320x240[lowerleft];[3:v]setpts=PTS-STARTPTS,scale=320x240[lowerright];[base][upperleft]overlay=shortest=1[tmp1];[tmp1][upperright]overlay=shortest=1:x=320[tmp2];[tmp2][lowerleft]overlay=shortest=1:y=240[tmp3];[tmp3][lowerright]overlay=shortest=1:x=320:y=240" out.mp4

1.mp4 to 4.mp4 are the input file paths

Assorted video effects with ffmpeg
Straight to the commands:

//fade in
ffmpeg -i in.mp4 -vf fade=in:0:90 out.mp4
//black and white
ffmpeg -i in.mp4 -vf lutyuv="u=128:v=128" out.mp4
//sharpen
ffmpeg -i in.mp4 -vf unsharp=luma_msize_x=7:luma_msize_y=7:luma_amount=2.5 out.mp4
//blur (negative unsharp)
ffmpeg -i in.mp4 -vf unsharp=7:7:-2:7:7:-2 out.mp4
//vignette
ffmpeg -i in.mp4 -vf vignette=PI/4 out.mp4
//flickering vignette
ffmpeg -i in.mp4 -vf vignette='PI/4+random(1)*PI/50':eval=frame out.mp4
//shaky camera
ffmpeg -i in.mp4 -vf crop="in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2+((in_h-out_h)/2)*sin(n/7)" out.mp4
//color cycling
ffmpeg -i in.mp4 -vf hue="H=2*PI*t:s=sin(2*PI*t)+1" out.mp4
//box blur
ffmpeg -i in.mp4 -vf boxblur=5:1:cr=0:ar=0 out.mp4
//mirror (left half mirrored onto the right)
ffmpeg -i in.mp4 -vf "crop=iw/2:ih:0:0,split[left][tmp];[tmp]hflip[right];[left]pad=iw*2[a];[a][right]overlay=w" out.mp4
//horizontal flip
ffmpeg -i in.mp4 -vf "geq=p(W-X\,Y)" out.mp4
//vertical flip
ffmpeg -i in.mp4 -vf vflip out.mp4
//emboss
ffmpeg -i in.mp4 -vf format=gray,geq=lum_expr='(p(X,Y)+(256-p(X-4,Y-4)))/2' out.mp4
//uniform noise
ffmpeg -i in.mp4 -vf noise=alls=20:allf=t+u out.mp4

Source: https://www.cnblogs.com/famhuai/p/ffmpeg.html

Remove watermarks with delogo
ffmpeg -i input.mp4 -vf delogo=x=0:y=0:w=100:h=60 output.mp4
Syntax: -vf delogo=x:y:w:h[:t[:show]]
x:y coordinates relative to the top-left corner
w:h width and height of the logo
show: defaults to 0; if set to 1, a green rectangle is drawn around the specified area to help find suitable x, y, w and h values.

Use the enable option
The delogo filter supports the enable option (timeline support). You can see if a filter supports this option with ffmpeg -filters.

Apply delogo filter between 5-10 seconds

ffmpeg -i input.mp4 -vf "delogo=x=1539:y=23:w=353:h=93:enable='between(t,5,10)'" -c:a copy output.mp4

Apply delogo to multiple areas

ffmpeg -i input.mp4 -vf "delogo=x=1539:y=23:w=353:h=93,delogo=x=100:y=24:w=100:h=72" -c:a copy output.mp4

Apply delogo to two squares between 5-10 seconds

ffmpeg -i input.mp4 -vf "delogo=x=1539:y=23:w=353:h=93:enable='between(t,5,10)',delogo=x=100:y=24:w=100:h=72:enable='between(t,5,10)'" -c:a copy output.mp4

ffmpeg -i yun.mp4 -t 10 -vf "delogo=x=20:y=30:w=175:h=55:show=1,delogo=x=1020:y=30:w=210:h=60:show=1" output.mp4

Remove logos from multiple regions

ffmpeg -i output.mp4 -vf "delogo=x=200:y=600:w=800:h=100:show=0:enable='between(n,0,50)',delogo=x=20:y=15:w=300:h=80:show=0:enable='between(t,11,26)',delogo=x=20:y=15:w=300:h=80:show=0:enable='between(t,54,68)',delogo=x=20:y=15:w=300:h=80:show=0:enable='between(t,94,108)',delogo=x=20:y=15:w=300:h=80:show=0:enable='between(t,130,142)',delogo=x=20:y=15:w=300:h=80:show=0:enable='between(t,165,177)',delogo=x=20:y=15:w=300:h=80:show=0:enable='between(t,201,216)'"  output1.mp4

ffmpeg: prepend a 3-second image intro to a video

ffmpeg -y -loop 1 -framerate 1 -t 3 -i /storage/emulated/0/1/input.png -i /storage/emulated/0/1/input.mp4 -f lavfi -t 3.0 -i anullsrc=channel_layout=stereo:sample_rate=44100 -filter_complex "[0:v]scale=iw:ih[outv0];[1:v]scale=iw:ih[outv1];[outv0][outv1]concat=n=2:v=1:a=0:unsafe=1[outv];[2:a][1:a]concat=n=2:v=0:a=1[outa]" -map [outv] -map [outa] -r 15 -b 1M -f mp4 -vcodec libx264 -c:a aac -pix_fmt yuv420p -s 960x540 -preset superfast /storage/emulated/0/1/result.mp4

Simplified version of the 3-second image intro, tested and verified; the image and video resolutions need not match, as they are converted automatically

ffmpeg -y -loop 1 -framerate 1 -t 3 -i 2005.png -i output.mp4 -f lavfi -t 3.0 -i anullsrc=channel_layout=stereo:sample_rate=44100 -filter_complex "[0:v][1:v]concat=n=2:v=1:a=0:unsafe=1[outv];[2:a][1:a]concat=n=2:v=0:a=1[outa]" -map [outv] -map [outa] -r 25 -b 1M -f mp4 -vcodec libx264 -c:a aac -pix_fmt yuv420p -s 1280x720 -preset superfast result.mp4

Reference: https://cloud.tencent.com/developer/article/1672609

Append a 3-second image outro to a video

ffmpeg -y -i /storage/emulated/0/1/input.mp4 -loop 1 -framerate 25 -t 3.0 -i /storage/emulated/0/1/input.png -f lavfi -t 3.0 -i anullsrc=channel_layout=stereo:sample_rate=44100 -filter_complex "[0:v]scale=iw:ih[outv0];[1:v]scale=iw:ih[outv1];[outv0][outv1]concat=n=2:v=1:a=0:unsafe=1[outv];[0:a][2:a]concat=n=2:v=0:a=1[outa]" -map [outv] -map [outa] -r 25 -b 1M -f mp4 -vcodec libx264 -c:a aac -pix_fmt yuv420p -s 960x540 -preset superfast /storage/emulated/0/1/result.mp4

Make a video from an image (with animation)

ffmpeg -y -loop 1 -r 25 -i /storage/emulated/0/1/input.png -vf zoompan=z=1.1:x='if(eq(x,0),100,x-1)':s='960*540' -t 10 -pix_fmt yuv420p /storage/emulated/0/1/result.mp4

Video to GIF

ffmpeg -y -ss 0 -t 7 -i /storage/emulated/0/1/input.mp4 -r 5 -s 280x606 -preset superfast /storage/emulated/0/1/result.gif

GIF to video

ffmpeg -y -i /storage/emulated/0/1/input.gif -pix_fmt yuv420p -preset superfast /storage/emulated/0/1/result.mp4

Add a watermark

ffmpeg -y -i /storage/emulated/0/1/input.mp4 -i /storage/emulated/0/1/input.png -filter_complex "[0:v]scale=iw:ih[outv0];[1:0]scale=0.0:0.0[outv1];[outv0][outv1]overlay=0:0" -preset superfast /storage/emulated/0/1/result.mp4

Add background music (with independent volume control for the original audio and the music)

ffmpeg -y -i /storage/emulated/0/1/input.mp4 -i /storage/emulated/0/1/input.mp3 -filter_complex "[0:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,volume=0.2[a0];[1:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,volume=1[a1];[a0][a1]amix=inputs=2:duration=first[aout]" -map [aout] -ac 2 -c:v copy -map 0:v:0 -preset superfast /storage/emulated/0/1/result.mp4

Grab a single frame at a specified time

ffmpeg -y -i /storage/emulated/0/1/input.mp4 -f image2 -ss 00:00:03 -vframes 1 -preset superfast /storage/emulated/0/1/result.jpg

Video to images (one frame per second)

ffmpeg -y -i /storage/emulated/0/1/input.mp4 -f image2 -r 1 -q:v 10 -preset superfast /storage/emulated/0/1/%3d.jpg

FFmpeg documentation index: https://ffmpeg.org/documentation.html

FFmpeg filters documentation: https://ffmpeg.org/ffmpeg-filters.html

Concatenate videos

ffmpeg -y -f concat -safe 0 -i Cam01.txt -c copy Cam01.mp4

Concatenate audio files

ffmpeg -y -i "concat:123.mp3|124.mp3" -acodec copy output.mp3

Audio processing
Concatenating audio

ffmpeg -y -i "concat:123.mp3|124.mp3" -acodec copy output.mp3
-i specifies the input

concat:123.mp3|124.mp3 lists the audio files to be joined

-acodec copy output.mp3 copies the audio stream into the new file without re-encoding

Audio mixing

ffmpeg -y -i 124.mp3 -i 123.mp3 -filter_complex amix=inputs=2:duration=first:dropout_transition=2 -f mp3 remix.mp3
-i specifies the inputs

-filter_complex invokes ffmpeg's filter functionality, which is very powerful; see the documentation for details

amix mixes multiple audio inputs into a single output

inputs=2 means there are 2 audio inputs; use the corresponding number for more

duration determines the length of the final output file:

longest | shortest | first (the first file)

dropout_transition is the transition time, in seconds, for volume renormalization when an input stream ends. The default value is 2 seconds.

-f mp3 sets the output file format

Audio trimming
ffmpeg -y -i 124.mp3 -vn -acodec copy -ss 00:00:00 -t 00:01:32 output.mp3
-i specifies the input

-acodec copy output.mp3 copies the audio stream into the new file without re-encoding

-ss the time to start cutting from

-t the length of audio to cut

Video format conversion

ffmpeg -i input.avi -c:a copy -c:v copy output.mp4
ffmpeg -i input.mp4 output.ts

Split/merge video and audio streams

ffmpeg -i input-video -vn -c:a copy output-audio //extract the audio stream
//-vn is no video.
//-acodec copy says use the same audio stream that's already in there.

ffmpeg -i input-video -c:v copy -an output-video //extract the video stream
//-an is no audio.
//-vcodec copy says use the same video stream that's already in there.
ffmpeg -i input-video -c:v copy -an output-video -c:a copy -vn output-audio //extract video and audio at the same time
ffmpeg -i video_file -i audio_file -c:v copy -c:a copy output_file //merge video and audio

Cut a video segment

ffmpeg -ss 5 -i input.mp4 -t 10 -c:v copy -c:a copy output.mp4
//-ss 5 starts the cut at second 5 of the input; -t 10 cuts at most 10 seconds.
//-c:v copy -c:a copy leave the video and audio encoding unchanged and just copy the streams, which is much faster.

Cut from 10 s to 20 s

ffmpeg -i input.mp4 -c:v libx264 -filter:v select="between(t, 10, 20)" out.mp4

Take the first 10 frames

ffmpeg -i input.mp4 -c:v libx264 -filter:v select="gt(n, -1)" -vframes 10 out.mp4
ffmpeg -i input.mp4 -c:v libx264 -filter:v select="between(n, 0, 9)" out.mp4

Take frames 10 to 20

ffmpeg -i input.mp4 -c:v libx264 -filter:v select="between(n, 10, 20)" out.mp4

Convert a video to m3u8 (HLS)

ffmpeg -i yoona.mp4 -c copy -map 0 -f segment -segment_list yoona.m3u8 -segment_time 10 yoona-%04d.ts

Push a video file to an RTMP server

ffmpeg -re -i jack.mp4 -c copy -f flv rtmp://host/live/test

Slice a video into segments

ffmpeg -i input.mp4 -c copy -f segment -segment_format mp4 -segment_time 300 -reset_timestamps 1 test_output-%d.mp4
This splits input.mp4 into segments of 5 minutes each.

Cut at specific time points

ffmpeg -i input.mp4 -c copy -f segment -segment_format mp4 -segment_times 60,120,150 -reset_timestamps 1 test_output-%d.mp4
Splits at the 60-second, 120-second and 150-second marks.

Video screenshots
Grab a screenshot at timestamp 01:23:45

ffmpeg -ss 01:23:45 -i input -vframes 1 -q:v 2 output.jpg

-i input file the path to the input file
-ss 01:23:45 seek the position to the specified timestamp
-vframes 1 only handle one video frame
-q:v 2 to control output quality. Full range is a linear scale of 1-31 where a lower value results in a higher quality. 2-5 is a good range to try.
output.jpg output filename, should have a well-known extension

Grab the image at frame number 34 (frame numbers start from 0):

ffmpeg -i <input> -vf "select=eq(n,34)" -vframes 1 out.png

Resize a video/image

ffmpeg -i sample.jpg -s WxH out.jpg

Extract the YUV planes of a video

ffmpeg -i jack.mp4 -filter_complex "extractplanes=y+u+v[y][u][v]" -map "[y]" jack_y.mp4 -map "[u]" jack_u.mp4 -map "[v]" jack_v.mp4

Convert between an image sequence and a video

ffmpeg -i %04d.jpg output.mp4
ffmpeg -i input.mp4 %04d.jpg
//The first command encodes 0001.jpg, 0002.jpg, 0003.jpg, ... into output.mp4;
//the second does the reverse, turning input.mp4 into 0001.jpg and so on.
//%04d.jpg means a sequence of jpg files named with zero-padded 4-digit integers starting from 1.
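
To control the frame rate of the image sequence (the image2 demuxer assumes 25 fps by default) and keep the output widely playable, something like this can be used (a sketch):

ffmpeg -framerate 30 -i %04d.jpg -c:v libx264 -pix_fmt yuv420p output.mp4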

Add a watermark to a video
ffmpeg -i xizong.mp4 -i fleight.jpg -filter_complex "overlay=W-w-5:5" -codec:a copy xizong_fleight.mp4
//watermark in the top-right corner, with a 5-pixel margin.

Rotate a video with transpose
Rotate 90 degrees clockwise

ffmpeg -i input -vf transpose=1 output
Rotate 180 degrees

ffmpeg -i in.mp4 -vf "transpose=1,transpose=1" out.mp4

Color grading with lut3d
Applies a color lookup table (LUT). Three interpolation algorithms are available: 'nearest', 'trilinear' and 'tetrahedral'. Command-line usage:

ffmpeg -i log.mp4 -vf lut3d="file=DK79.cube" out.mp4

Convert an image/video to black and white
Convert an image to black and white

ffmpeg -i sample.png -vf hue=s=0 output.png

//convert a video to black and white
ffmpeg -i julin_5s.mp4 -vf hue=s=0 -c:a copy julin_monochrome.mp4

Crop a video
Use the crop filter; the syntax is:

crop=ow[:oh[:x[:y[:keep_aspect]]]]
ow and oh are the width and height of the output after cropping,
x and y are the coordinates in the input video where cropping starts,
keep_aspect: 1 keeps the output aspect ratio equal to the input's, 0 does not.
Crop the left third, middle third and right third of the input:
ffmpeg -i input -vf crop=iw/3:ih:0:0 output
ffmpeg -i input -vf crop=iw/3:ih:iw/3:0 output
ffmpeg -i input -vf crop=iw/3:ih:iw/3*2:0 output
Crop the middle half of the frame:
ffmpeg -i input -vf crop=iw/2:ih/2 output

Scale a video
Scale a 1920x1080 input down to 960x540:

ffmpeg -i input.mp4 -vf scale=960:540 output.mp4
//ps: if 540 is replaced by -1, i.e. scale=960:-1, that also works; ffmpeg tells the scale filter to keep the original aspect ratio in the output.
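
If the target codec requires even dimensions (as yuv420p does), use -2 instead of -1 so the computed size is rounded to a multiple of 2:

ffmpeg -i input.mp4 -vf scale=960:-2 output.mp4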

Remove watermarks with delogo
ffmpeg -i input.mp4 -vf delogo=x=0:y=0:w=100:h=60 output.mp4
Using the delogo filter
It works by simple interpolation of the pixels around the logo: set a rectangle covering the logo, and in each direction the value of the next pixel just outside the rectangle is used to compute the interpolated pixel values inside it.

Syntax: -vf delogo=x:y:w:h[:t[:show]]
x:y coordinates relative to the top-left corner
w:h width and height of the logo
show: defaults to 0; if set to 1, a green rectangle is drawn around the specified area to help find suitable x, y, w and h values.
Add black bars to a video or image with pad
For example, to take a 1280x534 source and add black bars to make it 1280x720, the following command can be used.

ffmpeg -i input.mp4 -vf pad=1280:720:0:93:black output.mp4
From left to right the arguments are width, height, x and y: width and height are the padded output size, and x and y give the position of the input video inside it.
Black is the default colour, so black may be omitted.
The 93 in the command above is computed as (720-534)/2.
If the source is 1920x800, the full filter chain should be:

-vf 'scale=1280:534,pad=1280:720:0:93:black'

This first scales the video down to 1280x534, then pads it to 1280x720, placing the 1280x534 video at x=0, y=93.
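
The centring arithmetic can also be left to the filter, since pad position expressions may reference the input and output sizes (assuming the input already fits within 1280x720):

ffmpeg -i input.mp4 -vf 'pad=1280:720:(ow-iw)/2:(oh-ih)/2' output.mp4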

Concatenate videos in sequence
Use FFmpeg's concat demuxer.

First create a text file listing the files to be joined.

filelist.txt:

file 'input1.mp4'
file 'input2.mp4'
file 'input3.mp4'
Command:

ffmpeg -f concat -i filelist.txt -c copy output.mp4

Side-by-side video

ffmpeg -ss 00:05:00 -i v1.mp4 -ss 00:05:00 -i v2.mp4 -filter_complex "[0:v]crop=iw/2:ih:0:0[v1];[1:v]crop=iw/2:ih:iw/2:0[v2];[v1][v2]hstack[v3];[v3]drawbox=iw/2-1:0:2:ih[v4]" -map [v4] -map 0:a -c:a copy -c:v libx264 -t 00:01:00 -y out.mp4

Print per-frame information with showinfo

ffmpeg -hide_banner -i v.mp4 -vf showinfo -frames:v 1 -f null /dev/null

Generate a solid-color image
Generate a black image

ffmpeg -f lavfi -i color=Black:640x480 -frames:v 1 -y black.jpg

Print the frame number / pts as a watermark on video frames with drawtext
Print the frame number

ffmpeg -i test.mp4 -vf drawtext=fontcolor=red:fontsize=40:fontfile=msyh.ttf:line_spacing=7:text=%{n}:x=50:y=50 -vsync 0 -y out.mp4
Print the frame pts

ffmpeg -i test.mp4 -vf drawtext=fontcolor=red:fontsize=30:fontfile=msyh.ttf:line_spacing=7:text=%{pts}:x=50:y=50 -vsync 0 -y out.mp4
Print the frame type

ffmpeg -i test.mp4 -vf drawtext=fontcolor=red:fontsize=20:fontfile=msyh.ttf:line_spacing=7:text=%{pict_type}:x=50:y=50 -vsync 0 -y out.mp4
Burn the frame pts (timestamp) into the video as a watermark, with millisecond precision

ffmpeg -i test.mp4 -vf "drawtext=fontsize=60:text='%{pts\:hms}'" -c:v libx264 -c:a copy -f mp4 output.mp4 -y

Enable DEBUG log output in ffmpeg or ffplay:
$ ffplay -v debug $URL
Enable debug logging from the API:

av_log_set_level(AV_LOG_DEBUG);

Write FFmpeg command logs to a file
Redirecting the output of ffmpeg or ffprobe to a file can produce an empty file, because the log goes to stderr.
The following form works.

ffprobe xxx > file 2>&1
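
Alternatively, the -report flag writes a complete log to a file named ffmpeg-YYYYMMDD-HHMMSS.log in the current directory:

ffmpeg -report -i input.mp4 output.mp4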

Reference: https://zhuanlan.zhihu.com/p/544522172

FFmpeg: add a background image to a video

ffmpeg -loop 1 -i 1.jpg -i input.mp4 -filter_complex 'overlay=(W-w)/2:(H-h)/2:shortest=1,format=yuv420p' -c:a copy test.mp4

FFmpeg: speed a 1080p video up 4x and resize it to 720p

ffmpeg -r 100 -i 20240506160722.mp4 -r 25 -s 1280x720 -b 1500k -y  4x.mp4

FFmpeg: transcode to an mp4 that iOS can play

ffmpeg -i result.mp4 -c:v libx264 -pix_fmt yuv420p -profile:v high -level:v 4.1 -c:a copy  output.mp4 

Overlay multiple videos

ffmpeg -i 2.mp4 -i s.mp4 -filter_complex "[0][1]overlay=0:638[v1];[v1][1]overlay=128:638[v2];[v2][1]overlay=256:638[v3];[v3][1]overlay=384:638[v4];[v4][1]overlay=512:638[v5];[v5][1]overlay=640:638[v6];[v6][1]overlay=768:638[v7];[v7][1]overlay=896:638[v8];[v8][1]overlay=1024:638[v9];[v9][1]overlay=1152:638" output1.mp4
ffmpeg -i output1.mp4 -i s.mp4 -filter_complex "[0][1]overlay=0:638[v1];[v1][1]overlay=128:638[v2];[v2][1]overlay=256:638[v3];[v3][1]overlay=384:638[v4];[v4][1]overlay=512:638[v5];[v5][1]overlay=640:638[v6];[v6][1]overlay=768:638[v7];[v7][1]overlay=896:638[v8];[v8][1]overlay=1024:638[v9];[v9][1]overlay=1152:638" output2.mp4

FFmpeg ALL Chinese-language documentation, a very detailed reference:
https://www.jishuchi.com/books/ffmpeg-all

Draw a grid on a video
ffmpeg -i ji_ok.mp4 -vf "drawgrid=w=iw/3:h=ih/3:t=5:color=white@0.6" ji_grid.mp4

Generate a black video
ffmpeg -f lavfi -i color=Black:1280x720:d=211 black.mp4

Place a foreground video on top of the black video
ffmpeg -i black.mp4 -i j0v.mp4 -filter_complex "[0][1]overlay=x=75:y=0" b_j0v.mp4

Place multiple foreground videos on top of the black video
ffmpeg -i black.mp4 -i j0v.mp4 -i j1v.mp4 -filter_complex "[0][1]overlay=x=75:y=0[v1];[v1][2]overlay=x=75:y=650" b_ji.mp4

Draw a box on a video
ffmpeg -i ji_ok.mp4 -vf "drawbox=x=80:w=900:h=650:c=white" -t 10 j1.mp4
View the help for a filter
ffmpeg -h filter=drawbox

Show the current frame number on the video
ffmpeg -i ji_ok.mp4 -vf "drawtext=fontfile=Arial.ttf: text='%{frame_num}': start_number=1: x=(w-tw)/2: y=h-(2*lh): fontcolor=black: fontsize=20: box=1: boxcolor=white: boxborderw=5" j1.mp4

Snow filter effect video:
root@3cf1c295cf5f:/usr/local/bin# xvfb-run -a --server-args="-screen 0 1280x720x24 -ac -nolisten tcp -dpi 96 +extension RANDR" ffmpeg -i /home/hesy/output.mp4 -filter_complex "plusglshader=sdsource=snow_shader.gl:vxsource=star_vertex.gl" -c:v libx264 -pix_fmt yuv420p -profile:v high -level:v 4.1 -c:a copy result.mp4

Chroma keying (matting)
ffmpeg -i 44s.mp4 -stream_loop -1 -i dance.mp4 -filter_complex "[1:v]scale=720x900,colorkey=0x95949a:0.01:1[1v];[0:v][1v]overlay=1:2" -t 45 out.mp4

Record audio from an application
Load the snd_aloop module:

modprobe snd-aloop pcm_substreams=1
Set the default ALSA audio output to one substream of the Loopback device in your .asoundrc (or /etc/asound.conf)

.asoundrc

pcm.!default { type plug slave.pcm "hw:Loopback,0,0" }
You can now record audio from a running application using:

ffmpeg -f alsa -ac 2 -ar 44100 -i hw:Loopback,1,0 out.wav

Fixing frequent packet loss when relaying streams with ffmpeg

Adding -rtsp_transport tcp to the command fixes it; streaming then ran for many hours without errors.

ffmpeg -rtsp_transport tcp -i "rtsp://admin:password@192.168.1.1/h264/ch1/sub/av_stream" -c:v libx264 -c:a copy -s 384x288 -rtsp_transport tcp -stimeout 15000000 -max_delay 500000 -f rtsp -r 50 -an "rtsp://www.baidu.com/94fea02d64444399ac4546e5298eec96_kejian"

Old API:

//declare:
AVBitStreamFilterContext* h264bsfc = av_bitstream_filter_init("h264_mp4toannexb");
//use:
av_bitstream_filter_filter(h264bsfc, in_stream->codec, NULL, &pkt.data, &pkt.size, pkt.data, pkt.size, 0);
//free:
av_bitstream_filter_close(h264bsfc);

New API:

//declare
AVBSFContext *bsf_ctx = nullptr;
const AVBitStreamFilter *pfilter = av_bsf_get_by_name("h264_mp4toannexb");
av_bsf_alloc(pfilter, &bsf_ctx);
//use:
av_bsf_send_packet(bsf_ctx, &packet);
av_bsf_receive_packet(bsf_ctx, &packet);
//free:
av_bsf_free(&bsf_ctx);

H.264 has two packaging formats. One is Annex B, the traditional format, in which NALUs are separated by start codes (0x000001 or 0x00000001); it is used in MPEG-TS streaming, and VLC's codec information shows it as h264.

The other is AVCC, generally used in MP4, MKV and FLV containers, in which NALUs are prefixed by their length; VLC's codec information shows it as avc1.

Conversion between the two formats is needed in many scenarios, and FFmpeg provides a bitstream filter (bsf) named h264_mp4toannexb for this purpose.

For example, to convert mp4 to raw h264 the following command can be used:

mp4->h264: sudo ffmpeg -i test.mp4 -codec copy -bsf:v h264_mp4toannexb -f h264 test.h264
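
The reverse direction needs no explicit bsf on the command line; the mp4 muxer converts Annex B input to the length-prefixed form automatically:

ffmpeg -i test.h264 -c copy test_out.mp4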

How to use a bsf:

(1) Use av_bsf_get_by_name to look up the AVBitStreamFilter by name.

(2) Use av_bsf_alloc to allocate the AVBSFContext and its internal variables.

(3) Set the required codec parameters and time_base on the AVBSFContext.

(4) After the parameters are set, call av_bsf_init to initialize the bsf.

(5) Feed input data with av_bsf_send_packet, and fetch the processed output with av_bsf_receive_packet.

(6) When processing is finished, call av_bsf_free to release the context and its internal buffers.
(6)处理结束后,av_bsf_free清理用于上下文和内部缓冲区的内存。