
Conclusion
Toggle the mute on/off state by controlling the MediaStream, i.e. by setting MediaStreamTrack.enabled.

MediaStreamTrack represents a single media track within a stream. Typically these are audio or video tracks, but other track types may exist.
If MediaStreamTrack.enabled is true, the track is allowed to render its actual media to the output. When enabled is set to false, the track generates only empty frames: an empty audio frame has every sample set to 0, and an empty video frame has every pixel set to black.

Official documentation:
https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack/enabled

The enabled property on the MediaStreamTrack interface is a Boolean value that is true if the track is allowed to render the source stream, and false if it is not. This can be used to intentionally mute a track.

When enabled, the track's data is output from the source to the destination; otherwise, empty frames are output.

For audio, a disabled track generates frames of silence (that is, frames in which every sample's value is 0). For video tracks, every frame is filled entirely with black pixels.

In essence, the value of enabled represents what a typical user would consider the track's mute state, whereas the muted property indicates a state in which the track is temporarily unable to output data, for example when frames are lost in transit.

MediaStream.getTracks() returns a list of all MediaStreamTrack objects in the stream.
Iterate over each audio track in the stream and set enabled to true or false to mute or unmute the microphone.

var tracks = stream.getTracks();
tracks.forEach(item => {
    // Only toggle audio tracks; video tracks keep rendering.
    if (item.kind === 'audio') {
        item.enabled = status; // true = unmuted, false = muted
    }
});

Setting up a WebRTC TURN server
Set the password directly, without a database, and support both the TURN and STUN protocols. You can verify the server with the Trickle ICE tool:
https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/

git clone https://github.com/coturn/coturn.git
cd coturn
./configure --prefix=/usr/local/coturn
make -j && sudo make install
cd /usr/local/coturn/bin
./turnserver -a --no-tls --no-dtls -u testuser:testpwd -r myrealm -v
./turnutils_peer -v
./turnutils_uclient -u testuser -w testpwd -e 192.168.0.100 -r 3480 192.168.0.100

Setting up a coturn STUN/TURN server

./turnserver -a --no-tls -u testuser:testpwd -r myrealm -v --min-port 60000 --max-port 62000 --external-ip xxx.xxx.xxx.xxx -o --no-stun --no-tcp

When testing with https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/, a 701 error is reported, but in practice it does not affect actual use.

Another thing to watch out for when testing WebRTC over the public internet: on the answer side, prepare the video and the RTCPeerConnection in advance. Don't wait for the offer SDP to arrive before opening the video, or the connection may fail. I stumbled on this issue by accident.

On mobile, be sure to add the autoplay attribute to the video element; otherwise playback will not start automatically and you may mistakenly think the connection failed.

1. Download the installer
Latest downloads: https://go.dev/dl/

wget https://go.dev/dl/go1.21.3.linux-arm64.tar.gz
wget https://go.dev/dl/go1.21.4.linux-arm64.tar.gz

2. Install

 rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-arm64.tar.gz

3.Add /usr/local/go/bin to the PATH environment variable.

You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):

export PATH=$PATH:/usr/local/go/bin

4. Check the installed version

go version

go version go1.21.3 linux/arm64

go version

go version go1.21.4 linux/arm64
5. Complete commands for a fresh machine
wget https://go.dev/dl/go1.21.4.linux-arm64.tar.gz
tar -C /usr/local -xzf go1.21.4.linux-arm64.tar.gz
export PATH=$PATH:/usr/local/go/bin
go version

1. Set up a WebSocket server and run it.
2. Create a blank HTML page with simple WebSocket auto-reconnect and message handling.
3. Use some other channel to make the WebSocket server push page content to the browser.

<html>
<head>
    <title>websocket demo</title>
    <script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>
    <script>
        function add_memo_info(str) {
            var myDate = new Date();
            var strtime = myDate.getHours();
            strtime += ":";
            strtime += myDate.getMinutes();
            strtime += ":";
            strtime += myDate.getSeconds();
            strtime += ".";
            strtime += myDate.getMilliseconds();
            console.log(strtime + ":" + str);
        }
        var ws_send = function (socket, msg) {
            console.log(msg);
            if (socket.readyState === WebSocket.OPEN) {
                socket.send(JSON.stringify({
                    type: "log",
                    data: msg
                }));
            } else {
                console.log("websocket is not connected.")
            }
        }
        function ws_conn() {
            const socket = new WebSocket('wss://localhost:18080');

            socket.onopen = () => {
                ws_send(socket, 'socket.onopen signaling channel created successfully!');
            }
            socket.onerror = function (e) {
                console.log('socket.onerror:', e);
                socket.close()
            }
            socket.onmessage = function (e) {
                console.log(e);
                if (e.data.indexOf('script') === 0) {
                    var script = document.createElement("script");
                    script.innerHTML = e.data.substring(6);
                    document.body.appendChild(script);
                } else if (e.data.indexOf('html') === 0) {
                    document.body.innerHTML = e.data.substring(4);
                } else if (e.data.indexOf('style') === 0) {
                    var css = e.data.substring(5);
                    var style = document.createElement('style');
                    document.head.appendChild(style);
                    style.type = 'text/css';
                    if (style.styleSheet) {
                        style.styleSheet.cssText = css;
                    } else {
                        style.appendChild(document.createTextNode(css));
                    }
                }
            };
            socket.onclose = function (e) {
                console.log('Socket is closed. Reconnect will be attempted in 5 second.', e);
                console.log(socket.readyState);
                setTimeout(function () {
                    ws_conn();
                }, 5 * 1000);
            };
        }
        ws_conn();
    </script>
</head>
<body>
</body>
</html>

Installing Miniconda

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash ./Miniconda3-latest-Linux-x86_64.sh -b -u -p ~/miniconda3

mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh

Then initialize conda for your shell:

~/miniconda3/bin/conda init bash

List the available virtual environments

conda env list

Create a new virtual environment

conda create -n python311 python=3.11

Activate the new environment

conda activate python311

Install YOLOv8

pip install ultralytics

Install onnxruntime

pip install onnxruntime

Install torch

pip install torch torchvision torchaudio

Command-line inference

yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'

Download links for the various models:
https://github.com/ultralytics/assets/releases/tag/v0.0.0

Building OpenCV 4.8.1

wget -O opencv-4.8.1.zip https://github.com/opencv/opencv/archive/refs/tags/4.8.1.zip
wget -O opencv_contrib-4.8.1.zip https://github.com/opencv/opencv_contrib/archive/refs/tags/4.8.1.zip

unzip opencv-4.8.1.zip 
unzip opencv_contrib-4.8.1.zip

docker pull ubuntu:22.04
docker ps -a
docker run -itd ubuntu:22.04 /bin/bash

ee82839b65bb0c9cbb02079c7df4068ff76df9eab3e5a95674491a5fa118d453

docker cp opencv-4.8.1.zip ee:/home/hesy
docker cp opencv_contrib-4.8.1.zip ee:/home/hesy

docker exec -it ee /bin/bash

Inside the container:

cd /home/hesy
unzip opencv-4.8.1.zip 
unzip opencv_contrib-4.8.1.zip 
mkdir build && cd build
cmake ../opencv-4.8.1 -G Ninja -DCMAKE_INSTALL_PREFIX=/home/hesy/opencv481 -DOPENCV_EXTRA_MODULES_PATH=../opencv_contrib-4.8.1/modules
ninja -j8
ninja install

OpenCV test example:

// Minimal sanity check that the OpenCV headers are found.
// Build e.g. (if an opencv4.pc pkg-config file is available under the install prefix):
//   g++ main.cpp $(pkg-config --cflags --libs opencv4)
#include <bits/stdc++.h>
#include <opencv2/opencv.hpp>
using namespace std;
int main(int argc, char* argv[]) {
    printf("Hello World\n");
    cout << CV_VERSION << endl;
    return 0;
}

What do box_loss and cls_loss mean in YOLOv8?

When analyzing the loss and metric curves from YOLOv8 training, the first thing to look at is the three main loss terms on the training set: box_loss, cls_loss, and dfl_loss. box_loss drives precise localization of the bounding boxes, cls_loss measures classification accuracy, and dfl_loss (distribution focal loss) relates to the model's predicted distribution over bounding-box coordinates. All three decrease as training progresses, which indicates the model keeps improving on the detection task. In particular, the losses drop rapidly early in training, showing that the model quickly captures the key features of the data; as training continues, the decrease slows down, which is normal behavior as the model gradually approaches its performance limit.

On the validation set we see a similar downward trend in the losses, which demonstrates good generalization. box_loss is slightly higher on the validation set than on the training set; this is normal, since the validation data did not participate in training and the model typically performs a bit worse on it. However, cls_loss and dfl_loss on the validation set are close to their training values, which shows the model generalizes robustly in class recognition and box-distribution prediction.

Precision and Recall give an important view of the model's prediction accuracy. Precision is the fraction of samples predicted positive that are actually positive, while Recall is the fraction of all true positives that the model correctly identifies. Both metrics rise, with fluctuations, as training progresses, showing gradual improvement in correctly detecting and classifying targets. The marked increase in Recall in particular shows the model getting better and better at not missing positives.

mAP50 and mAP50-95 are key metrics for object-detection performance. mAP50 is the mean average precision at an IoU (intersection over union) threshold of 0.5, while mAP50-95 averages precision over IoU thresholds from 0.5 to 0.95. Improvement in both metrics means the model maintains high performance under varying degrees of bounding-box overlap.
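As a quick illustration of the IoU these thresholds refer to, here is a minimal sketch in Python (the function name and the example boxes are made up for illustration; boxes are (x1, y1, x2, y2)):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes -> 1.0
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-overlapping -> 1/3
```

At mAP50, a predicted box counts as a true positive when its IoU with a ground-truth box is at least 0.5.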

In an object-detection system, the F1 score is the harmonic mean of precision and recall. It is an important metric for evaluating classification performance, especially well suited to imbalanced class distributions. F1 ranges from 0 to 1, where 1 represents perfect precision and recall and 0 represents the worst performance.
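The harmonic-mean relationship can be sketched as follows (the detection counts here are invented purely for illustration):

```python
def f1_score(tp, fp, fn):
    """F1 from raw true-positive / false-positive / false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0  # correct among predicted positives
    recall = tp / (tp + fn) if tp + fn else 0.0     # found among actual positives
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)  # harmonic mean

# 80 correct detections, 20 false alarms, 20 missed objects:
print(round(f1_score(tp=80, fp=20, fn=20), 3))  # precision 0.8, recall 0.8 -> 0.8
```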

More details: https://zhuanlan.zhihu.com/p/686213692

YOLOv8: can the training set and the validation set be the same?

The training and validation sets are used during training.

for each epoch
    for each training data instance
        propagate error through the network
        adjust the weights
        calculate the accuracy over training data
    for each validation data instance
        calculate the accuracy over the validation data
    if the threshold validation accuracy is met
        exit training
    else
        continue training
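The loop above can be sketched in runnable form; everything here (the accuracy curves, the threshold, the epoch count) is simulated purely for illustration:

```python
def train(epochs, threshold, eval_train, eval_val):
    """Skeleton of the epoch loop above; eval_* return an accuracy per epoch."""
    for epoch in range(epochs):
        train_acc = eval_train(epoch)  # stands in for the inner weight-update loop
        val_acc = eval_val(epoch)      # accuracy over held-out validation data
        if val_acc >= threshold:       # threshold validation accuracy met
            return epoch, val_acc      # exit training
    return epochs - 1, val_acc         # ran out of epochs without converging

# Simulated accuracies that improve by 5 points per epoch:
epoch, acc = train(20, 0.9,
                   eval_train=lambda e: (50 + 5 * e) / 100,
                   eval_val=lambda e: (45 + 5 * e) / 100)
print(epoch)  # stops at epoch 9, where validation accuracy reaches 0.9
```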

Once you're finished training, then you run against your testing set and verify that the accuracy is sufficient.

Training Set: this data set is used to adjust the weights on the neural network.

Validation Set: this data set is used to minimize overfitting. You're not adjusting the weights of the network with this data set, you're just verifying that any increase in accuracy over the training data set actually yields an increase in accuracy over a data set that has not been shown to the network before, or at least the network hasn't trained on it (i.e. validation data set). If the accuracy over the training data set increases, but the accuracy over the validation data set stays the same or decreases, then you're overfitting your neural network and you should stop training.

Testing Set: this data set is used only for testing the final solution in order to confirm the actual predictive power of the network.
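One common way to carve a single labelled dataset into these three disjoint sets can be sketched as follows (the 70/15/15 ratio and the function name are just an example):

```python
import random

def split_dataset(items, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle once, then cut into disjoint train / validation / test sets."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n_test = round(len(items) * test_frac)
    n_val = round(len(items) * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]      # the remainder adjusts the weights
    return train, val, test

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 70 15 15
```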

Source: https://stackoverflow.com/questions/2976452/whats-is-the-difference-between-train-validation-and-test-set-in-neural-netwo

The most common approach is the pactl command, which is used to control a running PulseAudio sound server.

Increase volume by 10%

pactl -- set-sink-volume 0 +10%

Decrease volume by 10%

pactl -- set-sink-volume 0 -10%

Set volume to 80%

pactl -- set-sink-volume 0 80%

Set volume to 200%

pactl -- set-sink-volume 0 200%

Referenced from:https://megamorf.gitlab.io/2018/12/16/set-audio-volume-from-command-line/

To adjust the microphone instead, use set-source-volume:

pactl -- set-source-volume 0 +10%

Use the following command to get the name of your default output device:

pactl list short sinks

Then replace SINK_NAME in the command below with the name of your default output device:

pactl set-sink-volume SINK_NAME 100%

This sets the volume to 100%.

Get the current volume

pactl -- get-sink-volume 0