
"Ubuntu is a Linux distribution based on Debian and aimed mainly at desktop use. Ubuntu has three official editions: Desktop, Server, and Core for IoT devices and robots. Since version 17.10, Ubuntu has used GNOME as its default desktop environment. Ubuntu is one of the best-known Linux distributions and is currently the most widely used Linux version."

This was deployed and tested on Ubuntu 22.04 with an NVIDIA GeForce RTX 3090 (24 GB of VRAM). According to the official documentation, a consumer-grade graphics card is enough to run it.

Official model download: https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B
GitHub repository: https://github.com/Wan-Video/Wan2.1

Installation:

git clone https://github.com/Wan-Video/Wan2.1.git
cd Wan2.1
pip install -r requirements.txt

During the pip install you may hit the step where building flash-attn seems to hang forever:
Building wheel for flash-attn (setup.py) 

Plenty of people online have run into the same problem; the quickest fix is to download a prebuilt wheel instead of compiling it yourself.

The prebuilt wheels are published here: https://github.com/Dao-AILab/flash-attention/releases

Pick the one that matches your environment, for example:
https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.4cxx11abiFALSE-cp310-cp310-linux_x86_64.whl

"Your environment" here means the versions of torch, Python, and CUDA. The wheel linked above (cu12torch2.4cxx11abiFALSE-cp310) covers torch 2.4 and newer, CUDA 12, and Python 3.10.

To check the torch version:

import torch
print(torch.__version__)
2.6.0+cu124

To check the CUDA version (the torch version string above already shows it is 12.4):

import torch
print(torch.version.cuda)
12.4
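
The Python version (the third thing encoded in the wheel name, cp310 here) can be checked the same way:

import sys
print(f"{sys.version_info.major}.{sys.version_info.minor}")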

Once the wheel is downloaded, install it directly with pip:

pip install flash_attn-2.7.4.post1+cu12torch2.4cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
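
A quick sanity check that the prebuilt wheel installed and imports cleanly (flash_attn is the package's import name):

python3 -c "import flash_attn; print(flash_attn.__version__)"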

To be safe, run the requirements install again:

pip install -r requirements.txt

Next, download the model. On a home machine only the 1.3B model is realistic to try (it supposedly needs only 8 GB of VRAM, but in practice it used about 18 GB here); the 14B model is out of reach.
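
If VRAM is tight, generate.py also accepts --offload_model True and --t5_cpu (both appear in the generation-args log further down; --t5_cpu keeps the T5 text encoder on the CPU). A variant of the command used later would look like this (prompt elided, output name arbitrary, memory savings not measured here):

python3 generate.py --task t2v-1.3B --size 832*480 --ckpt_dir ../Wan2.1-T2V-1.3B --offload_model True --t5_cpu --sample_shift 8 --sample_guide_scale 6 --prompt "..." --frame_num=81 --save_file=out.mp4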

Download the model:

pip install modelscope
modelscope download Wan-AI/Wan2.1-T2V-1.3B --local_dir ./Wan2.1-T2V-1.3B
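
The same download can also be done from Python (a sketch; it assumes a modelscope version new enough to support the local_dir argument, and the CLI above is what was actually used):

from modelscope import snapshot_download

# Pulls roughly 17 GB of weights into ./Wan2.1-T2V-1.3B
snapshot_download('Wan-AI/Wan2.1-T2V-1.3B', local_dir='./Wan2.1-T2V-1.3B')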

Now be patient; the download is neither small nor huge, about 17 GB:

~/wan/Wan2.1-T2V-1.3B$ du -h -d1
252K    ./examples
6.3M    ./assets
20K    ./._____temp
21M    ./google
17G    .

Once the model has finished downloading, it's time to fire it up.

python3 generate.py  --task t2v-1.3B --size 832*480 --ckpt_dir ../Wan2.1-T2V-1.3B --sample_shift 8 --sample_guide_scale 6 --prompt "纪实摄影风格, 中景镜头,一位20岁精致妆容的韩国美女,露出白皙的皮肤, 充满青春与活力。半身特写,锐利的边缘" --frame_num=81 --save_file=15.mp4

A quick rundown of the arguments:
../Wan2.1-T2V-1.3B is the directory the model weights were downloaded to.
frame_num is the number of frames. The log below shows sample_fps is 16, and the frame count is expected to have the form 4n+1 (81 = 4×20 + 1, consistent with the temporal VAE stride of 4 in the model config); if you don't set it, the default is 81 frames. See the quick check after this list.
save_file is the name of the output file; if you don't set it, the generated file gets a very long name taken straight from the prompt.
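
A quick check of the frame-count rule and the expected clip length (a sketch based on the values above, not code from the repository):

frame_num = 81
fps = 16  # sample_fps from the model config in the log below
assert (frame_num - 1) % 4 == 0, "frame_num should have the form 4n+1"
print(f"duration = {frame_num / fps:.2f} s")  # 5.06 s, matching the ffmpeg output further down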
Below is the program's output:

[2025-02-28 10:23:09,245] INFO: offload_model is not specified, set to True.
[2025-02-28 10:23:09,245] INFO: Generation job args: Namespace(task='t2v-1.3B', size='832*480', frame_num=81, ckpt_dir='../Wan2.1-T2V-1.3B', offload_model=True, ulysses_size=1, ring_size=1, t5_fsdp=False, t5_cpu=False, dit_fsdp=False, save_file='15.mp4', prompt='纪实摄影风格,中景镜头,一位20岁精致妆容的韩国美女,露出白皙的皮肤, 充满青春与活力。半身特写,锐利的边缘', use_prompt_extend=False, prompt_extend_method='local_qwen', prompt_extend_model=None, prompt_extend_target_lang='ch', base_seed=6044133464219050096, image=None, sample_solver='unipc', sample_steps=50, sample_shift=8.0, sample_guide_scale=6.0)
[2025-02-28 10:23:09,246] INFO: Generation model config: {'__name__': 'Config: Wan T2V 1.3B', 't5_model': 'umt5_xxl', 't5_dtype': torch.bfloat16, 'text_len': 512, 'param_dtype': torch.bfloat16, 'num_train_timesteps': 1000, 'sample_fps': 16, 'sample_neg_prompt': '色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走', 't5_checkpoint': 'models_t5_umt5-xxl-enc-bf16.pth', 't5_tokenizer': 'google/umt5-xxl', 'vae_checkpoint': 'Wan2.1_VAE.pth', 'vae_stride': (4, 8, 8), 'patch_size': (1, 2, 2), 'dim': 1536, 'ffn_dim': 8960, 'freq_dim': 256, 'num_heads': 12, 'num_layers': 30, 'window_size': (-1, -1), 'qk_norm': True, 'cross_attn_norm': True, 'eps': 1e-06}
[2025-02-28 10:23:09,246] INFO: Input prompt: 纪实摄影风格,中景镜头,一位20岁精致妆容的韩国美女,露出白皙的皮肤, 充满青春与活力。半身特写,锐利的边缘
[2025-02-28 10:23:09,246] INFO: Creating WanT2V pipeline.
[2025-02-28 10:24:11,819] INFO: loading ../Wan2.1-T2V-1.3B/models_t5_umt5-xxl-enc-bf16.pth
[2025-02-28 10:24:19,237] INFO: loading ../Wan2.1-T2V-1.3B/Wan2.1_VAE.pth
[2025-02-28 10:24:19,634] INFO: Creating WanModel from ../Wan2.1-T2V-1.3B
[2025-02-28 10:24:21,254] INFO: Generating video ...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [08:03<00:00, 9.67s/it]
[2025-02-28 10:32:55,448] INFO: Saving generated video to 15.mp4
[2025-02-28 10:32:56,941] INFO: Finished.

It took 8 minutes 3 seconds to generate just 5 seconds of video. The result:

ffmpeg -i 15.mp4 -hide_banner
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '15.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf61.1.100
  Duration: 00:00:05.06, start: 0.000000, bitrate: 4443 kb/s
  Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 832x480, 4440 kb/s, 16 fps, 16 tbr, 16384 tbn, 32 tbc (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
      encoder         : Lavc61.3.100 libx264

The resulting video: https://const.net.cn/ai/15.mp4

Screenshot:
截图 2025-02-28 10-37-04.png

The face came out too big; we wanted a half-body shot, so adjust the prompt and run it again:

python3 generate.py  --task t2v-1.3B --size 832*480 --ckpt_dir ../Wan2.1-T2V-1.3B --sample_shift 8 --sample_guide_scale 6 --prompt "纪实摄影风格, 中景镜头,一位20岁精致妆容的韩国美女,穿着白色的T-shirt,露出白皙的皮肤,充满青春与活力。半身特写,锐利的边缘" --frame_num=81 --save_file=16.mp4

While it runs, you can also check how the GPU is being used:

nvidia-smi 
Fri Feb 28 10:40:36 2025       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01             Driver Version: 535.183.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3090        Off | 00000000:65:00.0 Off |                  N/A |
| 72%   65C    P2             348W / 350W |  18472MiB / 24576MiB |    100%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A   1282546      C   python3                                   18466MiB |
+---------------------------------------------------------------------------------------+

So the VRAM usage really is about 18 GB.
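
To watch the usage continuously during a run instead of sampling it once, nvidia-smi can also be polled (optional):

nvidia-smi --query-gpu=timestamp,memory.used,utilization.gpu --format=csv -l 5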

The generated video: https://const.net.cn/ai/16.mp4
Video screenshot:

截图 2025-02-28 10-50-17.png
If you'd rather not set the model up yourself, you can also try it on Alibaba's website or in the app; registering with a phone number is enough. Each daily login gives 50 "inspiration points", generating a video deducts some of them, and checking in daily in the Tongyi app adds another 50, which should be enough for a quick taste. Official site: https://tongyi.aliyun.com/wanxiang/creation

For PaddleOCR, the official installation and deployment documentation I followed is here: https://paddlepaddle.github.io/PaddleOCR/latest/ppocr/quick_start.html#211

1. Set up a Python virtual environment.

mkdir paddleocr
cd paddleocr
python -m venv .
cd bin
source ./activate

After that, the shell prompt will show the (paddleocr) prefix.

2. Install the dependencies.

pip install --upgrade pip
pip install pysocks -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install paddlepaddle -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install paddleocr -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install pymupdf -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install pyfftw -i https://pypi.tuna.tsinghua.edu.cn/simple
sudo apt install -y ccache
sudo apt install libgomp1

3. Verify the installation.

paddleocr -h

4. OCR a PDF file.

paddleocr --image_dir ./2.pdf --use_angle_cls true --use_gpu false
paddleocr --image_dir ./2.pdf --use_angle_cls true --use_gpu false --savefile true

The output looks like this:

[2025/04/22 11:11:53] ppocr INFO: for usage help, please use paddleocr --help
[2025/04/22 11:11:53] ppocr DEBUG: Namespace(help='==SUPPRESS==', use_gpu=False, use_xpu=False, use_npu=False, use_mlu=False, use_gcu=False, ir_optim=True, use_tensorrt=False, min_subgraph_size=15, precision='fp32', gpu_mem=500, gpu_id=0, image_dir='./2.pdf', page_num=0, det_algorithm='DB', det_model_dir='/home/hesy/.paddleocr/whl/det/ch/ch_PP-OCRv4_det_infer', det_limit_side_len=960, det_limit_type='max', det_box_type='quad', det_db_thresh=0.3, det_db_box_thresh=0.6, det_db_unclip_ratio=1.5, max_batch_size=10, use_dilation=False, det_db_score_mode='fast', det_east_score_thresh=0.8, det_east_cover_thresh=0.1, det_east_nms_thresh=0.2, det_sast_score_thresh=0.5, det_sast_nms_thresh=0.2, det_pse_thresh=0, det_pse_box_thresh=0.85, det_pse_min_area=16, det_pse_scale=1, scales=[8, 16, 32], alpha=1.0, beta=1.0, fourier_degree=5, rec_algorithm='SVTR_LCNet', rec_model_dir='/home/hesy/.paddleocr/whl/rec/ch/ch_PP-OCRv4_rec_infer', rec_image_inverse=True, rec_image_shape='3, 48, 320', rec_batch_num=6, max_text_length=25, rec_char_dict_path='/media/hesy/Elements/python-all/paddleocr/lib/python3.11/site-packages/paddleocr/ppocr/utils/ppocr_keys_v1.txt', use_space_char=True, vis_font_path='./doc/fonts/simfang.ttf', drop_score=0.5, e2e_algorithm='PGNet', e2e_model_dir=None, e2e_limit_side_len=768, e2e_limit_type='max', e2e_pgnet_score_thresh=0.5, e2e_char_dict_path='./ppocr/utils/ic15_dict.txt', e2e_pgnet_valid_set='totaltext', e2e_pgnet_mode='fast', use_angle_cls=True, cls_model_dir='/home/hesy/.paddleocr/whl/cls/ch_ppocr_mobile_v2.0_cls_infer', cls_image_shape='3, 48, 192', label_list=['0', '180'], cls_batch_num=6, cls_thresh=0.9, enable_mkldnn=False, cpu_threads=10, use_pdserving=False, warmup=False, sr_model_dir=None, sr_image_shape='3, 32, 128', sr_batch_num=1, draw_img_save_dir='./inference_results', save_crop_res=False, crop_res_save_dir='./output', use_mp=False, total_process_num=1, process_id=0, benchmark=False, save_log_path='./log_output/', show_log=True, use_onnx=False, onnx_providers=False, onnx_sess_options=False, return_word_box=False, output='./output', table_max_len=488, table_algorithm='TableAttn', table_model_dir=None, merge_no_span_structure=True, table_char_dict_path=None, formula_algorithm='LaTeXOCR', formula_model_dir=None, formula_char_dict_path=None, formula_batch_num=1, layout_model_dir=None, layout_dict_path=None, layout_score_threshold=0.5, layout_nms_threshold=0.5, kie_algorithm='LayoutXLM', ser_model_dir=None, re_model_dir=None, use_visual_backbone=True, ser_dict_path='../train_data/XFUND/class_list_xfun.txt', ocr_order_method=None, mode='structure', image_orientation=False, layout=True, table=True, formula=False, ocr=True, recovery=False, recovery_to_markdown=False, use_pdf2docx_api=False, invert=False, binarize=False, alphacolor=(255, 255, 255), lang='ch', det=True, rec=True, type='ocr', savefile=True, ocr_version='PP-OCRv4', structure_version='PP-StructureV2')
[2025/04/22 11:11:54] ppocr INFO: ./2.pdf

The generated files end up in the output directory, for example:

head 2.txt 

[[[132.0, 132.0], [552.0, 132.0], [552.0, 211.0], [132.0, 211.0]],
('财富的真相', 0.9918341636657715)] [[[129.0, 232.0], [555.0, 233.0],
[555.0, 262.0], [129.0, 261.0]], ('一种学校不教却人人需要的知识',
0.9969227910041809)] [[[300.0, 326.0], [384.0, 326.0], [384.0, 348.0], [300.0, 348.0]], ('李笑来著', 0.9979466795921326)] [[[248.0, 942.0],
[309.0, 942.0], [309.0, 965.0], [248.0, 965.0]], ('WGS',
0.6117124557495117)] [[[320.0, 948.0], [436.0, 948.0], [436.0, 966.0], [320.0, 966.0]], ('广东经济出版社', 0.9972420930862427)]
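
The same conversion can also be driven from the Python API instead of the CLI; a minimal sketch assuming the paddleocr 2.x interface shown in the log above, with parameters mirroring the CLI flags:

from paddleocr import PaddleOCR

# First run downloads the PP-OCRv4 models to ~/.paddleocr, just like the CLI.
# page_num=0 processes every page of the PDF.
ocr = PaddleOCR(use_angle_cls=True, use_gpu=False, lang='ch', page_num=0)

# One result list per page; each line is [box, (text, confidence)],
# the same layout as the 2.txt shown above.
result = ocr.ocr('./2.pdf', cls=True)
for page in result:
    if not page:  # blank pages come back empty
        continue
    for box, (text, score) in page:
        print(text, score)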

Fixes for a few errors encountered along the way:
Error 1:
ERROR: Could not install packages due to an OSError: Missing dependencies for SOCKS support.
WARNING: There was an error checking the latest version of pip.

Fix:

unset all_proxy
unset ALL_PROXY
pip install pysocks -i https://pypi.tuna.tsinghua.edu.cn/simple

Error 2:
Running paddleocr -h prints this warning:
/home/hesy/paddleocr/lib/python3.11/site-packages/paddle/utils/cpp_extension/extension_utils.py:711: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md
warnings.warn(warning_message)

Fix:

sudo apt install -y ccache

Error 3:
Running paddleocr -h fails with:
ImportError: libgomp-24e2ab19.so.1.0.0: cannot open shared object file: No such file or directory

Fix:

pip install pyfftw
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/paddleocr/lib/python3.11/site-packages/pyFFTW.libs