
[Screenshot: iostat -x 5 output]
Command used: iostat -x 5
You can see that the SSD's utilization is already maxed out.
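For anyone following along: the column I watch in iostat -x output is %util (the rightmost one), which is roughly how busy the device is. The excerpt below is only an illustration with made-up numbers and device names, not my actual output:

Device            r/s      w/s      rkB/s     wkB/s   ...   %util
nvme0n1       4200.00    35.00   520000.0    1200.0   ...   99.80
sda              8.00     2.00      900.0      64.0   ...    1.20

If %util sits near 100 for the training disk, the data pipeline is likely I/O-bound rather than GPU-bound.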

The dataset used to sit on an HDD, and training was painfully slow.
So I moved it to the SSD and then continued training with the resume parameter,
but!!!! it remembered the history and still wouldn't read the data from the SSD.
I had changed the path in the config file too, but it kept going back to the old one.
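My understanding (treat this as an assumption, I haven't traced every code path) is that resume restores the training arguments that were saved inside the checkpoint itself, including the old data path, which is why editing data.yaml or moving the files has no effect. A quick way to check what a checkpoint thinks its dataset is; the run directory here is hypothetical:

import torch

# hypothetical path to the run being resumed; point this at your own runs/ folder
ckpt = torch.load("runs/detect/train69/weights/last.pt", map_location="cpu")
# "train_args" is where, as far as I can tell, YOLOv8 stores the original
# training options inside the checkpoint
print(ckpt.get("train_args", {}).get("data"))  # prints the dataset path baked into the checkpoint

If that still prints the old HDD path, resume will keep reading from there no matter what the yaml on disk says.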

[Screenshot: training log output]
The 100% utilization at this point comes from scanning the dataset.
Since the dataset is 15–20 GB, that is still quite a lot of data.

engine/trainer: task=detect, mode=train, model=/home/justin/Desktop/code/python_project/Jersey-Number/yolov8n.pt, data=/home/justin/Desktop/code/python_project/Jersey-Number/datasets/20240511_four_in_1/data_head_person_hoop_number/data.yaml, epochs=1000, time=None, patience=100, batch=64, imgsz=640, save=True, save_period=-1, cache=False, device=[0, 1], workers=8, project=None, name=train70, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=runs/detect/train70
Overriding model.yaml nc=80 with nc=4

                   from  n    params  module                                       arguments
  0                  -1  1       464  ultralytics.nn.modules.conv.Conv             [3, 16, 3, 2]
  1                  -1  1      4672  ultralytics.nn.modules.conv.Conv             [16, 32, 3, 2]
  2                  -1  1      7360  ultralytics.nn.modules.block.C2f             [32, 32, 1, True]
  3                  -1  1     18560  ultralytics.nn.modules.conv.Conv             [32, 64, 3, 2]
  4                  -1  2     49664  ultralytics.nn.modules.block.C2f             [64, 64, 2, True]
  5                  -1  1     73984  ultralytics.nn.modules.conv.Conv             [64, 128, 3, 2]
  6                  -1  2    197632  ultralytics.nn.modules.block.C2f             [128, 128, 2, True]
  7                  -1  1    295424  ultralytics.nn.modules.conv.Conv             [128, 256, 3, 2]
  8                  -1  1    460288  ultralytics.nn.modules.block.C2f             [256, 256, 1, True]
  9                  -1  1    164608  ultralytics.nn.modules.block.SPPF            [256, 256, 5]
 10                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 11             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 12                  -1  1    148224  ultralytics.nn.modules.block.C2f             [384, 128, 1]
 13                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 14             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 15                  -1  1     37248  ultralytics.nn.modules.block.C2f             [192, 64, 1]
 16                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]
 17            [-1, 12]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 18                  -1  1    123648  ultralytics.nn.modules.block.C2f             [192, 128, 1]
 19                  -1  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]
 20             [-1, 9]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 21                  -1  1    493056  ultralytics.nn.modules.block.C2f             [384, 256, 1]
 22        [15, 18, 21]  1    752092  ultralytics.nn.modules.head.Detect           [4, [64, 128, 256]]
Model summary: 225 layers, 3011628 parameters, 3011612 gradients, 8.2 GFLOPs
Transferred 319/355 items from pretrained weights
DDP: debug command /home/justin/miniconda3/bin/python -m torch.distributed.run --nproc_per_node 2 --master_port 41127 /home/justin/.config/Ultralytics/DDP/_temp_uog7ddsr140402595641744.py
WARNING:__main__:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
*****************************************
Ultralytics YOLOv8.2.1 🚀 Python-3.11.0 torch-2.3.0+cu121 CUDA:0 (NVIDIA GeForce RTX 4090, 24210MiB)
                                                          CUDA:1 (NVIDIA GeForce RTX 4090, 24188MiB)
TensorBoard: Start with 'tensorboard --logdir runs/detect/train70', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=4
Transferred 319/355 items from pretrained weights
Freezing layer 'model.22.dfl.conv.weight'
AMP: running Automatic Mixed Precision (AMP) checks with YOLOv8n...
/home/justin/miniconda3/lib/python3.11/site-packages/torch/nn/modules/conv.py:456: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv2d(input, weight, bias, self.stride,
AMP: checks passed ✅
train: Scanning /home/justin/Desktop/code/python_project/Jersey-Number/datasets/20240511_four_in_1/data_head_person_hoop_number/train/

This is the part I was looking at:

train: WARNING ⚠️ /home/justin/Desktop/code/python_project/Jersey-Number/datasets/20240511_four_in_1/data_head_person_hoop_number/train/images/284193,42a000df17be3d.jpg: 1 duplicate labels removed
train: WARNING ⚠️ /home/justin/Desktop/code/python_project/Jersey-Number/datasets/20240511_four_in_1/data_head_person_hoop_number/train/images/284193,575c000f3f01e40.jpg: 1 duplicate labels removed
train: WARNING ⚠️ /home/justin/Desktop/code/python_project/Jersey-Number/datasets/20240511_four_in_1/data_head_person_hoop_number/train/images/284193,70d2000c58fbf86.jpg: 1 duplicate labels removed
train: WARNING ⚠️ /home/justin/Desktop/code/python_project/Jersey-Number/datasets/20240511_four_in_1/data_head_person_hoop_number/train/images/284193,880000198e8148.jpg: 1 duplicate labels removed

From those warnings I could see the path was wrong; then I started training from scratch (no resume), and it worked.
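My actual invocation isn't shown above, so here is just a sketch of what a fresh run (no resume) looks like with the Ultralytics Python API, reusing the settings from the log above:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start again from the pretrained weights, no resume
model.train(
    data="/home/justin/Desktop/code/python_project/Jersey-Number/datasets/20240511_four_in_1/data_head_person_hoop_number/data.yaml",
    epochs=1000,
    imgsz=640,
    batch=64,
    device=[0, 1],
    name="train70",
)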

However, it turned out not to help much: the data really is on the SSD now, but training is still slow, and I suspect the huge number of tiny files is to blame. Sigh... So there is a reason why even "hello world"-level deep learning examples store their data in a single pkl file: it keeps the data from being shredded into that many small pieces ===== just my personal understanding.
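To make that point concrete, a minimal sketch of the "pack the small files into one pkl" idea; the directory layout here is illustrative, and this naive version holds the whole split in memory:

import pickle
from pathlib import Path

root = Path("datasets/20240511_four_in_1/data_head_person_hoop_number/train")  # illustrative path

# read every tiny image/label file once, sequentially, and bundle them together
packed = {
    "images": {p.name: p.read_bytes() for p in sorted((root / "images").glob("*.jpg"))},
    "labels": {p.name: p.read_text() for p in sorted((root / "labels").glob("*.txt"))},
}

with open("train_packed.pkl", "wb") as f:
    pickle.dump(packed, f, protocol=pickle.HIGHEST_PROTOCOL)

Loading one big sequential file is far kinder to the disk than tens of thousands of small random reads; for a 15–20 GB split you would want a chunked or memory-mapped format rather than a single in-memory dict, but the principle is the same.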
