Articles tagged ffmpeg

Hardware/VAAPI – FFmpeg
update:2021-9-30
Device Selection
The libva driver needs to be attached to a DRM device to work. This can be connected either directly or via a running X server. When working standalone, it is generally best to use a DRM render node (/dev/dri/renderD*) - only use a connection via X if you actually want to deal with surfaces inside X (with DRI2, for example).

In ffmpeg, a named global device can be created using the -init_hw_device option:

ffmpeg -init_hw_device vaapi=foo:/dev/dri/renderD128

With the decode hwaccel, each input stream can then be given a previously initialised device with the -hwaccel_device option:

ffmpeg -init_hw_device vaapi=foo:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_device foo -i ...

If only one stream is being used, -hwaccel_device can also accept a device path directly:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i ...
Where filters require a device (for example, the hwupload filter), the device used in a filter graph can be specified with the -filter_hw_device option:

ffmpeg -init_hw_device vaapi=foo:/dev/dri/renderD128 -i ... -filter_hw_device foo -filter_complex ...hwupload... ...
If you have multiple usable devices in the same machine (for example, an Intel integrated GPU and an AMD discrete graphics card), they can be used simultaneously to decode different streams:

ffmpeg -init_hw_device vaapi=intel:/dev/dri/renderD128 -init_hw_device vaapi=amd:/dev/dri/renderD129 -hwaccel vaapi -hwaccel_device intel -i ... -hwaccel vaapi -hwaccel_device amd -i ...
(See <http://www.ffmpeg.org/ffmpeg.html#toc-Advanced-Video-options> for more detail about these options.)

Finally, the -vaapi_device option may be more convenient in single-device cases with filters.

ffmpeg -vaapi_device /dev/dri/renderD128

acts equivalently to:

ffmpeg -init_hw_device vaapi=vaapi0:/dev/dri/renderD128 -filter_hw_device vaapi0

Surface Formats
The hardware codecs used by VAAPI are not able to access frame data in arbitrary memory. Therefore, all frame data needs to be uploaded to hardware surfaces connected to the appropriate device before being used. All VAAPI hardware surfaces in ffmpeg are represented by the vaapi pixfmt (the internal layout is not visible here, though).

The hwaccel decoders normally output frames in the associated hardware format, but by default the ffmpeg utility downloads the output frames to normal memory before passing them to the next component. This allows the decoder to be used standalone to speed up decoding without any additional options:

ffmpeg -hwaccel vaapi ... -i input.mp4 -c:v libx264 ... output.mp4

For other outputs, the option -hwaccel_output_format can be used to specify the format to be used. This can be a software format (which formats are usable depends on the driver), or it can be the vaapi hardware format to indicate that the surface should not be downloaded.

For example, to decode only and do nothing with the result:

ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi ... -i input.mp4 -f null -

This can be used to test the speed / CPU use of the decoder only (the download operation typically adds a large amount of additional overhead).

When decoder output is in hardware surfaces, the frames will be given to following filters or encoders in that form. The scale_vaapi and deinterlace_vaapi filters act on vaapi format frames to scale and deinterlace them respectively. There are also some generic filters - hwdownload, hwupload and hwmap - which support all hardware formats, including VAAPI (see <http://www.ffmpeg.org/ffmpeg-filters.html#hwdownload>).

For example, take an interlaced input, decode, deinterlace, scale to 720p, download to normal memory and encode with libx264:

ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi ... -i interlaced_input.mp4 -vf 'deinterlace_vaapi,scale_vaapi=w=1280:h=720,hwdownload,format=nv12' -c:v libx264 ... progressive_output.mp4

Encoding
The encoders only accept input as VAAPI surfaces. If the input is in normal memory, it will need to be uploaded before giving the frames to the encoder - in the ffmpeg utility, the hwupload filter can be used for this. It will upload to a surface with the same layout as the software frame, so it may be necessary to add a format filter immediately before to get the input into the right format (hardware generally wants the nv12 layout, but most software functions use the yuv420p layout). The hwupload filter also requires a device to upload to, which needs to be defined before the filter graph is created.

So, to use the default decoder for some input, then upload frames to VAAPI and encode with H.264 and default settings:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4

If the input is known to be hardware-decodable, then we can use the hwaccel:

ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 -i input.mp4 -c:v h264_vaapi output.mp4

Finally, when the input may or may not be hardware decodable we can do:

ffmpeg -init_hw_device vaapi=foo:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device foo -i input.mp4 -filter_hw_device foo -vf 'format=nv12|vaapi,hwupload' -c:v h264_vaapi output.mp4

This works because the decoder will output either vaapi surfaces (if the hwaccel is usable) or software frames (if it isn't). In the first case, it matches the vaapi format and hwupload does nothing (it passes through hardware frames unchanged). In the second case, it matches the nv12 format and converts whatever the input is to that, then uploads. Performance will likely vary by a large amount depending which path is chosen, though.

The supported encoders are:

H.262 / MPEG-2 part 2 mpeg2_vaapi
H.264 / MPEG-4 part 10 (AVC) h264_vaapi
H.265 / MPEG-H part 2 (HEVC) hevc_vaapi
MJPEG / JPEG mjpeg_vaapi
VP8 vp8_vaapi
VP9 vp9_vaapi
For an explanation of codec options, see <http://www.ffmpeg.org/ffmpeg-codecs.html#VAAPI-encoders>.

Mapping options from libx264
No CRF-like mode is currently supported. The only constant-quality mode is CQP (constant quantisation parameter), which has no adaptivity to scene content. It does, however, allow different quality settings for different frame types, to improve compression by spending fewer bits on unreferenced B-frames - see the (i|b)_q(factor|offset) options. CQP mode cannot be combined with a maximum bitrate or buffer size.
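A sketch of such a CQP encode (the device path, input file, and the specific offset value are assumptions, and this needs working VAAPI hardware):

```shell
# Constant-quality (CQP) encode: -qp sets the base quantisation parameter,
# and b_qoffset (an illustrative value) raises the QP used for B-frames so
# that unreferenced B-frames consume fewer bits.
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
  -vf 'format=nv12,hwupload' \
  -c:v h264_vaapi -qp 22 -bf 2 -b_qoffset 2 output.mp4
```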

CBR and VBR modes are supported, though the output of them varies significantly by driver and device (default is VBR, set -maxrate equal to -b:v for CBR). HRD buffering options (rc_max_rate, rc_buffer_size) are functional, and the encoder will generate buffering_period and pic_timing SEI when appropriate.

There is no complete analogue of the -preset option. The -compression_level option controls the local speed/quality tradeoff in the encoder (that is, the amount of effort expended on trying to get the best results from local choices like motion estimation and mode decision), using a nebulous per-device scale. The argument is a small integer, from 1 up to some limit dependent on the device (not more than 7) - higher values are faster / lower stream quality. Separately, some hardware (Intel gen9) supports a low-power mode with more restricted features. It is accessible via the -low_power option.
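As an illustrative sketch (the device path and input file are assumptions, and the accepted -compression_level range depends on the device):

```shell
# Ask for a faster, lower-effort encode; per the per-device scale described
# above, higher values are faster with lower stream quality. -low_power
# selects the restricted low-power path on hardware that supports it.
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
  -vf 'format=nv12,hwupload' \
  -c:v h264_vaapi -compression_level 4 -low_power 1 -b:v 5M output.mp4
```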

Neither two-pass encoding nor lookahead are supported at all - only local rate control is possible. VBR mode should do a reasonably good job at getting close to an overall bitrate target, but quality will vary significantly through a stream if the complexity varies.

Full Examples
All of these examples assume the input and output files will contain one video stream (audio will need to be considered separately). It is assumed that VAAPI is usable via the DRM device node /dev/dri/renderD128.

Decode-only
Decode an input with hardware if possible, output in normal memory to encode with libx264:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i input.mp4 -c:v libx264 -crf 20 output.mp4

Decode an input with hardware, deinterlace it if it was interlaced, downscale, then download to normal memory to encode with libx264 (will fail if the input is not supported by the hardware decoder):

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -vf 'deinterlace_vaapi=rate=field:auto=1,scale_vaapi=w=640:h=360,hwdownload,format=nv12' -c:v libx264 -crf 20 output.mp4

Decode an input and discard the output (this can be used as a crude benchmark of the decoder):

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -f null -

Encode-only
Encode an input with H.264 at 5Mbps VBR:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi -b:v 5M output.mp4

As previous, but use constrained baseline profile only for compatibility with old devices:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi -b:v 5M -profile 578 -bf 0 output.mp4

Encode with H.264 at good constant quality:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi -qp 18 output.mp4

Encode with 10-bit H.265 at 15Mbps VBR (recent hardware required - Kaby Lake or later Intel):

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=p010,hwupload' -c:v hevc_vaapi -b:v 15M -profile 2 output.mp4

Scale to 720p and encode with H.264 at 5Mbps CBR:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'hwupload,scale_vaapi=w=1280:h=720:format=nv12' -c:v h264_vaapi -b:v 5M -maxrate 5M output.mp4

Encode with VP9 at 5Mbps VBR:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v vp9_vaapi -b:v 5M output.webm

Encode with VP9 at good constant quality, using pseudo-B-frames to improve compression:

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v vp9_vaapi -global_quality 50 -bf 1 -bsf:v vp9_raw_reorder,vp9_superframe output.webm

Camera Capture
Capture a raw stream from a V4L2 camera device and encode it as H.264:

ffmpeg -vaapi_device /dev/dri/renderD128 -f v4l2 -video_size 1920x1080 -i /dev/video0 -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4

Capture an MJPEG stream from a V4L2 camera device (e.g. a UVC webcam), decode it and encode it as H.264:

ffmpeg -f v4l2 -input_format mjpeg -video_size 1920x1080 -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i /dev/video0 -vf 'scale_vaapi=format=nv12' -c:v h264_vaapi output.mp4

The extra scale_vaapi instance is needed here to convert the VAAPI surfaces to the correct format for encoding - webcams will typically supply images in YUV 4:2:2 format.

Screen Capture
Capture the screen from X and encode with H.264 at reasonable constant-quality:

ffmpeg -vaapi_device /dev/dri/renderD128 -f x11grab -video_size 1920x1080 -i :0 -vf 'hwupload,scale_vaapi=format=nv12' -c:v h264_vaapi -qp 24 output.mp4

Note that it is also possible to do the format conversion (RGB to YUV) on the CPU - this is slower, but might be desirable if other filters are going to be applied:

ffmpeg -vaapi_device /dev/dri/renderD128 -f x11grab -video_size 1920x1080 -i :0 -vf 'format=nv12,hwupload' -c:v h264_vaapi -qp 24 output.mp4

Capture the screen from the first active KMS plane:

ffmpeg -device /dev/dri/card0 -f kmsgrab -i - -vf 'hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' -c:v h264_vaapi -qp 24 output.mp4

Compared to capturing through X as in the previous examples, this should use much less CPU (all surfaces stay on the GPU side) and can work outside X (on VTs or in Wayland), but can only capture whole planes and requires DRM master or CAP_SYS_ADMIN to run.

Transcode
Hardware-only transcode to H.264 at 2Mbps CBR:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -c:v h264_vaapi -b:v 2M -maxrate 2M output.mp4
Decode, deinterlace if interlaced, scale to 720p, encode with H.265 at 5Mbps VBR:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -vf 'deinterlace_vaapi=rate=field:auto=1,scale_vaapi=w=1280:h=720' -c:v hevc_vaapi -b:v 5M output.mp4
Transcode to 10-bit H.265 at 15Mbps VBR (the input can be 10-bit, but need not be):

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -vf 'scale_vaapi=format=p010' -c:v hevc_vaapi -profile 2 -b:v 15M output.mp4
Transcode to H.264 in constrained baseline profile at level 3 and 1Mbps CBR for compatibility with old devices:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -vf 'fps=30,scale_vaapi=w=640:h=-2:format=nv12' -c:v h264_vaapi -profile 578 -level 30 -bf 0 -b:v 1M -maxrate 1M output.mp4
Decode the input, then pick a frame from it every 10 seconds to make a sequence of JPEG screenshots at high quality:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -r 1/10 -c:v mjpeg_vaapi -global_quality 90 -f image2 output%03d.jpeg
Burn subtitles into the video while transcoding:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -vf 'scale_vaapi,hwmap=mode=read+write+direct,format=nv12,ass=subtitles.ass,hwmap' -c:v h264_vaapi -b:v 2M -maxrate 2M output.mp4
(Note that the scale_vaapi filter is required here to copy the frames - without it, the subtitles would be drawn directly on the reference frames being used by the decoder at the same time.)

Transcode to two different outputs (one at constant-quality and one at constant-bitrate) from the same input:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -filter_complex 'split [cq][cb]' -map '[cq]' -c:v h264_vaapi -qp 18 output-cq.mp4 -map '[cb]' -c:v h264_vaapi -b:v 5M -maxrate 5M output-cb.mp4
Transcode for multiple streaming formats (one H.264 and one VP9, with the same parameters) from the same input:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mp4 -filter_complex 'split [h264][vp9]' -map '[h264]' -c:v h264_vaapi -b:v 5M output-h264.mp4 -map '[vp9]' -c:v vp9_vaapi -b:v 5M output-vp9.webm
Decode on one device, download, upload to a second device, and encode:

ffmpeg -init_hw_device vaapi=decdev:/dev/dri/renderD128 -init_hw_device vaapi=encdev:/dev/dri/renderD129 -hwaccel vaapi -hwaccel_device decdev -hwaccel_output_format vaapi -i input.mp4 -filter_hw_device encdev -vf 'hwdownload,format=nv12,hwupload' -c:v h264_vaapi -b:v 5M output.mp4
Other
Use the VAAPI deinterlacer standalone to attempt to make a software transcode run faster (this may actually make things slower - the additional copying to the GPU and back is quite a large overhead):

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload,deinterlace_vaapi=rate=field,hwdownload,format=nv12' -c:v libx264 -crf 24 output.mp4
Referenced from: https://trac.ffmpeg.org/wiki/Hardware/VAAPI

ffmpeg -f x11grab -video_size 1920x1080 -i :0  output.mp4
ffmpeg version 4.1.7 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-17ubuntu1~20.04)
  configuration: --disable-static --enable-shared
  libavutil      56. 22.100 / 56. 22.100
  libavcodec     58. 35.100 / 58. 35.100
  libavformat    58. 20.100 / 58. 20.100
  libavdevice    58.  5.100 / 58.  5.100
  libavfilter     7. 40.101 /  7. 40.101
  libswscale      5.  3.100 /  5.  3.100
  libswresample   3.  3.100 /  3.  3.100
[x11grab @ 0x55e93e579740] Stream #0: not enough frames to estimate rate; consider increasing probesize
Input #0, x11grab, from ':0':
  Duration: N/A, start: 1632986648.181873, bitrate: N/A
    Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1920x1080, 29.97 fps, 1000k tbr, 1000k tbn, 1000k tbc
File 'output.mp4' already exists. Overwrite ? [y/N] y
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> mpeg4 (native))
Press [q] to stop, [?] for help
Output #0, mp4, to 'output.mp4':
  Metadata:
    encoder         : Lavf58.20.100
    Stream #0:0: Video: mpeg4 (mp4v / 0x7634706D), yuv420p, 1920x1080, q=2-31, 200 kb/s, 29.97 fps, 30k tbn, 29.97 tbc
    Metadata:
      encoder         : Lavc58.35.100 mpeg4
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
frame=  894 fps= 44 q=31.0 Lsize=    7757kB time=00:00:29.79 bitrate=2132.5kbits/s dup=292 drop=290 speed=1.46x    
video:7752kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.060582%

Build FFmpeg QSV · Intel-Media-SDK/MediaSDK Wiki
update:2021-9-30
Install all the required common packages. You can get the Intel media stack in two ways:

Build from sources:

Intel Media Driver for VAAPI (aka iHD) (gmmlib and LibVA required)
Intel Media SDK
Do not forget to export the environment variables:

export LIBVA_DRIVERS_PATH=/path/to/iHD_driver
export LIBVA_DRIVER_NAME=iHD
export LD_LIBRARY_PATH=/path/to/msdk/lib
export PKG_CONFIG_PATH=/path/to/msdk/lib/pkgconfig

Starting from Ubuntu 19.04, Intel media stack components are available for installation via apt-get (see: Intel media stack on Ubuntu).

sudo apt-get install libva-dev libmfx-dev intel-media-va-driver-non-free
export LIBVA_DRIVER_NAME=iHD

To build ffplay (as part of ffmpeg) you also need to install additional dependencies:

sudo apt-get install libsdl2-dev

Build FFmpeg
Get ffmpeg sources
git clone https://github.com/ffmpeg/ffmpeg
cd ffmpeg

Configure and build FFmpeg
Configure ffmpeg for use with VAAPI and the MediaSDK. The key option is --enable-libmfx:

./configure --arch=x86_64 --disable-yasm --enable-vaapi --enable-libmfx
make

If you need a debug version of ffmpeg you can try

./configure --arch=x86_64 --disable-yasm --enable-vaapi --enable-libmfx \
        --enable-debug=3 --disable-stripping --extra-cflags=-gstabs+ \
        --disable-optimizations
make
Referenced from: https://github.com/Intel-Media-SDK/MediaSDK/wiki/Build-FFmpeg-QSV

ffmpeg on Wikipedia
FFmpeg is free and open-source software that can record, convert and stream audio and video in many formats. It includes libavcodec, an audio/video codec library used by many other projects, and libavformat, an audio/video container format conversion library.

The "FF" in "FFmpeg" stands for "Fast Forward". Some newcomers have written to the FFmpeg project leader asking whether FF means "Fast Free" or "Fast Fourier"; the reply was: "Just for the record, the original meaning of "FF" in FFmpeg is "Fast Forward"..."

The project was originally started by the French programmer Fabrice Bellard and is now maintained by Michael Niedermayer. Many FFmpeg developers are also members of the MPlayer project, within which FFmpeg was designed and developed as the server-side component.

On 13 March 2011, some of the FFmpeg developers decided to form a separate project, Libav, and drew up a set of rules for its continued development and maintenance.

The ffmpeg command-line tools
Command-line applications:
ffmpeg: converts video or audio files between formats
ffplay: a simple player based on SDL and the FFmpeg libraries
ffprobe: displays information about media files; see MediaInfo
Common ffmpeg options
The full option list is shown by ffmpeg -h; details such as format and codec names are shown by ffmpeg -formats.

The following are the most commonly used options:

Main options

-i: sets the input file name.
-f: sets the output format.
-y: overwrites the output file if it already exists.
-fs: stops the conversion once the specified file size is exceeded.
-t: sets the duration of the output file, in seconds.
-ss: starts converting from the specified time, in seconds.
When -ss and -t are used together, conversion starts at the -ss time and runs for the -t duration; for example, -ss 00:00:01.00 -t 00:00:10.00 converts the span from 00:00:01.00 to 00:00:11.00.
-title: sets the title.
-timestamp: sets the timestamp.
-vsync: duplicates or drops frames to keep audio and video in sync.
-c: sets the codec for the output file.
-metadata: changes the output file's metadata.
-help: shows help information.
Video options
-b:v: sets the video bitrate; the default is 200 kbit/s. (for units, see the notes below)
-r: sets the frame rate; the default is 25.
-s: sets the frame width and height.
-aspect: sets the display aspect ratio.
-vn: skips video processing; used when only the audio is to be processed.
-vcodec (-c:v): sets the video codec; if unset, the same codec as the input is used.

Batch-extract audio with ffmpeg from a Linux shell

cat cc.sh
#!/bin/sh

# Number each file in the folder and extract the audio track of every
# .mp4 / .ts file to output/NN.aac.
folder="."
id=0
mkdir -p output
for file_a in "${folder}"/*
do
    id=$((id + 1))
    in_filename=$(basename "$file_a")
    num=$(printf '%02d' "$id")
    out_filename="output/$num.aac"
    if [ "${in_filename##*.}" = "mp4" ] || [ "${in_filename##*.}" = "ts" ]; then
        ffmpeg -i "$in_filename" -vn "$out_filename"
    fi
done

Download an m3u8 stream with ffmpeg

ffmpeg -i http://.../playlist.m3u8 -c copy -bsf:a aac_adtstoasc output.mp4

Notes on the ffmpeg aac_adtstoasc bitstream filter

The official documentation is at https://ffmpeg.org/ffmpeg-bitstream-filters.html#aac_005fadtstoasc
Convert MPEG-2/4 AAC ADTS to an MPEG-4 Audio Specific Configuration bitstream.
This filter creates an MPEG-4 AudioSpecificConfig from an MPEG-2/4 ADTS header and removes the ADTS header.
This filter is required for example when copying an AAC stream from a raw ADTS AAC or an MPEG-TS container to MP4A-LATM, to an FLV file, or to MOV/MP4 files and related formats such as 3GP or M4A. Please note that it is auto-inserted for MP4A-LATM and MOV/MP4 and related formats.

1) When wrapping the raw stream produced by an AAC encoder (ADTS headers + ES) into a format such as MP4, FLV or MOV, the ADTS header must first be converted into an MPEG-4 AudioSpecificConfig (extracting the relevant codec parameters), and the ADTS headers must be stripped from the raw stream (leaving only the ES).
2) Conversely, when demuxing an AAC stream (ES only) out of an MP4, FLV or MOV file, an ADTS header (containing the codec parameters) must be prepended to the extracted AAC stream.
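Both directions can be exercised with a synthetic input (file names are placeholders; ffmpeg's built-in aac encoder and the sine lavfi source are assumed):

```shell
# Make a short MP4 containing an AAC track.
ffmpeg -y -f lavfi -i sine=frequency=440:duration=3 -c:a aac audio.mp4

# MP4 -> raw ADTS AAC: the ADTS headers are added automatically when
# writing the ADTS (.aac) muxer, as in case 2) above.
ffmpeg -y -i audio.mp4 -vn -c:a copy audio.aac

# Raw ADTS AAC -> MP4: aac_adtstoasc strips the ADTS headers and builds
# the AudioSpecificConfig, as in case 1) above (it is auto-inserted for
# MP4, but shown explicitly here).
ffmpeg -y -i audio.aac -c:a copy -bsf:a aac_adtstoasc roundtrip.mp4
```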

ubuntu ffmpeg screen capture

ffmpeg -f x11grab -video_size 1920x1030 -framerate 50 -i :0.0 -vf format=yuv420p output.mp4

Capturing your Desktop / Screen Recording for Linux
Use the x11grab device:

ffmpeg -video_size 1024x768 -framerate 25 -f x11grab -i :0.0+100,200 output.mp4

This will grab the image from the desktop, starting with the upper-left corner at x=100, y=200, with a width and height of 1024⨉768.

If you need audio too, you can use ALSA (see Capture/ALSA for more info):

ffmpeg -video_size 1024x768 -framerate 25 -f x11grab -i :0.0+100,200 -f alsa -ac 2 -i hw:0 output.mkv

Or the pulse input device (see Capture/PulseAudio for more info):

ffmpeg -video_size 1024x768 -framerate 25 -f x11grab -i :0.0+100,200 -f pulse -ac 2 -i default output.mkv

ffmpeg build configuration

ffmpeg -version

ffmpeg version N-105038-g30322ebe3c Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-17ubuntu1~20.04)
configuration: --enable-libx264 --enable-libpulse --enable-gpl --enable-openssl --enable-nonfree --enable-x86asm --enable-libmp3lame --enable-libx265 --enable-librtmp

Listing ffmpeg codecs and formats

$ ffmpeg -codecs
$ ffmpeg -encoders
$ ffmpeg -decoders
$ ffmpeg -formats

ffmpeg error: OpenSSL <3.0.0 is incompatible with the gpl

Add --enable-nonfree to the configure options.

Recording system audio with ffmpeg

pactl list short sources

1 alsa_output.pci-0000_00_1f.3.analog-stereo.monitor module-alsa-card.c s16le 2ch 48000Hz IDLE
2 alsa_input.pci-0000_00_1f.3.analog-stereo module-alsa-card.c s16le 2ch 48000Hz SUSPENDED

Recording system audio with ffmpeg via pulse

ffmpeg -f pulse -i 1 -ac 1 out.mp3

You can reference sources either by number: -f pulse -i 5, or by name -f pulse -i alsa_input.pci-0000_00_1b.0.analog-stereo, or just use -f pulse -i default to use the source currently set as default in pulseaudio.

Recording from a microphone with ffmpeg

ffmpeg -f pulse -i alsa_input.pci-0000_00_1b.0.analog-stereo -ac 1 recording.m4a

Recording playback (monitor) audio with ffmpeg

ffmpeg -f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor -ac 2 recording.m4a

Recording desktop video and audio on Linux

ffmpeg -video_size 1024x768 -framerate 25 -f x11grab -i :0.0+100,200 -f pulse -ac 2 -i 1 output.mkv

Move the moov atom to the beginning of the video file using FFmpeg

ffmpeg -i input_video_file.mp4 -vcodec copy -acodec copy -movflags faststart output_video_file.mp4

Splitting an MP4 with ffmpeg

ffmpeg -i input.mp4 -c copy -segment_time 30 -f segment output%03d.mp4

ffmpeg -i input.mp4 -c copy -segment_time 30 -f segment -segment_start_number 1 -individual_header_trailer 1 -break_non_keyframes 1 -reset_timestamps 1 output%03d.mp4

Pushing a stream to an RTSP server with FFmpeg
Note: start rtsp-simple-server before pushing the stream; download it from https://github.com/bluenviron/mediamtx/releases

Push over UDP

ffmpeg -re -i input.mp4 -c copy -f rtsp rtsp://127.0.0.1:8554/stream

Push over TCP

ffmpeg -re -i input.mp4 -c copy -rtsp_transport tcp -f rtsp rtsp://127.0.0.1:8554/stream

Looped push

ffmpeg -re -stream_loop -1 -i input.mp4 -c copy -f rtsp rtsp://127.0.0.1:8554/stream

Where:

-re reads the input at its native rate (i.e., as a live stream);

-stream_loop sets the number of times to loop over the input; -1 loops forever;

-i sets the input file;

-f forces the output format.
Referenced from: https://blog.csdn.net/chan1987818/article/details/128219230