Interrupt training

wudong 2022-11-29 16:14:09 +08:00
parent c3a6574372
commit 0bafaf6e1d
13 changed files with 1418 additions and 956 deletions

.gitignore vendored

@@ -1,301 +1,301 @@
### Python template
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
### IntelliJ IDEA ###
.idea/
*.iws
*.iml
*.ipr
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# spec
manage.spec
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
staticfiles/
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# pyenv
.python-version
# Environments
.venv
venv/
ENV/
.vscode
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
### Node template
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
# nyc test coverage
.nyc_output
# Bower dependency directory (https://bower.io/)
bower_components
# node-waf configuration
.lock-wscript
# Compiled binary addons (http://nodejs.org/api/addons.html)
build/Release
# Dependency directories
node_modules/
jspm_packages/
# Typescript v1 declaration files
typings/
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
### Linux template
*~
# temporary files which can be created if a process still has a handle open of a deleted file
.fuse_hidden*
# KDE directory preferences
.directory
# Linux trash folder which might appear on any partition or disk
.Trash-*
# .nfs files are created when an open file is removed but is still being accessed
.nfs*
### VisualStudioCode template
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
### Windows template
# Windows thumbnail cache files
Thumbs.db
ehthumbs.db
ehthumbs_vista.db
# Dump file
*.stackdump
# Folder config file
Desktop.ini
# Recycle Bin used on file shares
$RECYCLE.BIN/
# Windows Installer files
*.cab
*.msi
*.msm
*.msp
# Windows shortcuts
*.lnk
### macOS template
# General
*.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two \r
Icon
# Thumbnails
._*
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
### SublimeText template
# Cache files for Sublime Text
*.tmlanguage.cache
*.tmPreferences.cache
*.stTheme.cache
# Workspace files are user-specific
*.sublime-workspace
# Project files should be checked into the repository, unless a significant
# proportion of contributors will probably not be using Sublime Text
# *.sublime-project
# SFTP configuration file
sftp-config.json
# Package control specific files
Package Control.last-run
Package Control.ca-list
Package Control.ca-bundle
Package Control.system-ca-bundle
Package Control.cache/
Package Control.ca-certs/
Package Control.merged-ca-bundle
Package Control.user-ca-bundle
oscrypto-ca-bundle.crt
bh_unicode_properties.cache
# Sublime-github package stores a github token in this file
# https://packagecontrol.io/packages/sublime-github
GitHub.sublime-settings
### Vim template
# Swap
[._]*.s[a-v][a-z]
[._]*.sw[a-p]
[._]s[a-v][a-z]
[._]sw[a-p]
# Session
Session.vim
# Temporary
.netrwhist
# Auto-generated tag files
tags
### VirtualEnv template
# Virtualenv
[Bb]in
[Ii]nclude
[Ll]ib
[Ll]ib64
[Ss]cripts
pyvenv.cfg
pip-selfcheck.json
.env
### Project template
izan/media/
.pytest_cache/
!app/yolov5/yolov5s.pt
*.pt
*.pdparams
*.onnx


@@ -1,29 +1,29 @@
import os
# Root directory
ROOT_PATH = os.path.split(os.path.abspath(__name__))[0]
# Enable debug
DEBUG = True
# Secret key
SECRET_KEY = 'WugjsfiYBEVsiQfiSwEbIOEAGnOIFYqoOYHEIK'
# Database configuration
SQLALCHEMY_DATABASE_URI = 'postgresql+psycopg2://deepLearner:dp2021@124.71.203.3:5432/demo'
# SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://demo:demo123@192.168.2.9:3306/flask_demo'
SQLALCHEMY_TRACK_MODIFICATIONS = False
# Echo raw SQL statements when querying
SQLALCHEMY_ECHO = True
# SQLALCHEMY_DATABASE_URI = 'sqlite:///{}'.format(os.path.join(ROOT_PATH, 'demo.db'))
# SQLALCHEMY_TRACK_MODIFICATIONS = False
# Database configuration
db = {
    'host': '127.0.0.1',
    'user': 'root',
    'password': 'sdust2020',
    'port': 6379,
    'database': 'school',
    'charset': 'utf8',
    'db': 0
}


@@ -1,8 +1,8 @@
from .default import * # NOQA F401
# Database configuration
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://demo:demo123@192.168.2.9:3306/flask_demo'
# SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://demo:demo123@192.168.2.9:3306/flask_demo?allowPublicKeyRetrieval=true&useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai'
SQLALCHEMY_TRACK_MODIFICATIONS = False
# Echo raw SQL statements when querying
SQLALCHEMY_ECHO = True


@@ -1,36 +1,44 @@
"""
@Time 2022/11/15 10:13
@Auth
@File global_var.py
@IDE PyCharm
@MottoABC(Always Be Coding)
@Desc
"""
import json
from app.utils.redis_config import redis_client
def _init(): # 初始化
dict = {}
redis_client.__setattr__("_global_dict", json.dumps(dict))
def set_value(key, value):
# 定义一个全局变量
dict = redis_client.get_redis().get("_global_dict")
if dict is None:
dict = {}
dict[key] = value
# redis_client.get_redis().set("_global_dict", json.dumps(dict))
redis_client.__setattr__("_global_dict", json.dumps(dict))
def get_value(key):
# 获得一个全局变量,不存在则提示读取对应变量失败
try:
return redis_client.get_redis().get("_global_dict")[key]
except Exception as e:
print(e)
print('读取' + key + '失败\r\n')
"""
@Time 2022/11/15 10:13
@Auth
@File global_var.py
@IDE PyCharm
@MottoABC(Always Be Coding)
@Desc
"""
import multiprocessing
def _init(): # 初始化
# 中断标志
global _global_dict
_global_dict = {}
# # ws列表存储
# global _active_connections
# _active_connections = multiprocessing.Manager().list()
# # ws字典存储
# global _active_connections_dist
# _active_connections_dist = multiprocessing.Manager().dict()
# def get_active_connections():
# return _active_connections
# def get_active_connections_dist():
# return _active_connections_dist
def set_value(key, value):
# 定义一个全局变量
_global_dict[key] = value
def get_value(key):
# 获得一个全局变量,不存在则提示读取对应变量失败
try:
return _global_dict[key]
except:
print('读取' + key + '失败\r\n')
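
The in-process flag above is what the /change_ifKillDIct route in AlgorithmController.py toggles through set_value(id, type). Below is a minimal sketch of how a training loop could poll that flag; the task id and the loop skeleton are illustrative assumptions, not part of this commit. Note that start_train_algorithm launches training in a separate multiprocessing.Process, so a plain module-level dict set in the Flask process is not automatically visible there (the commented-out multiprocessing.Manager lines hint at a shared-state alternative).

# Sketch only: assumes _init() was called once at startup of the process
# that runs the training loop.
from app.configs.global_var import get_value, set_value

task_id = '194741569180540928_19_train'  # hypothetical task id
set_value(task_id, False)                # flag cleared when training starts

# inside the epoch loop of the training routine (illustrative):
# for epoch in range(epochs):
#     if get_value(task_id):             # set via GET /api/change_ifKillDIct
#         break                          # stop training early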


@@ -1,3 +1,3 @@
from .default import * # NOQA F401
DEBUG = False


@@ -1,14 +1,14 @@
import os
from .default import ROOT_PATH
from .default import * # NOQA F401
TEST_BASE_DIR = os.path.join(ROOT_PATH, '.test')
SQLALCHEMY_DATABASE_URI = 'sqlite:///{}'.format(
    os.path.join(TEST_BASE_DIR, 'demo.db'))
# SQLALCHEMY_ECHO = True
TESTING = True
if not os.path.exists(TEST_BASE_DIR):
    os.makedirs(TEST_BASE_DIR)


@@ -1,486 +1,486 @@
"""
@Time 2022/9/20 16:17
@Auth
@File AlgorithmController.py
@IDE PyCharm
@MottoABC(Always Be Coding)
@Desc算法接口
"""
import json
from functools import wraps
from threading import Thread
from multiprocessing import Process
from time import sleep
from flask import Blueprint, request
from app.schemas.TrainResult import Report, ProcessValueList
from app.utils.RedisMQTool import Task
from app.utils.StandardizedOutput import output_wrapped
from app.utils.redis_config import redis_client
from app.utils.websocket_tool import manager
from app.configs.global_var import set_value
import sys
from pathlib import Path
from pynvml import *
# FILE = Path(__file__).resolve()
# ROOT = FILE.parents[0] # YOLOv5 root directory
# if str(ROOT) not in sys.path:
# sys.path.append(str(ROOT)) # add ROOT to PATH
# sys.path.append("/mnt/sdc/algorithm/AICheck-MaskRCNN/app/maskrcnn_ppx")
# import ppx as pdx
bp = Blueprint('AlgorithmController', __name__)
ifKillDict = {}
def start_train_algorithm():
    """
    Invoke the training algorithm
    """
    def wrapTheFunction(func):
        @wraps(func)
        @bp.route('/start_train_algorithm', methods=['get'])
        def wrapped_function():
            param = request.args.get('param')
            id = request.args.get('id')
            dict = manager.active_connections_dist
            # t = Thread(target=func, args=(param, id))
            t = Process(target=func, args=(param, id, dict[id]), name=id)
            set_value(key=id, value=False)
            t.start()
            return output_wrapped(0, 'success', '成功')
        return wrapped_function
    return wrapTheFunction
def start_test_algorithm():
    """
    Invoke the validation algorithm
    """
    def wrapTheFunction(func):
        @wraps(func)
        @bp.route('/start_test_algorithm', methods=['get'])
        def wrapped_function_test():
            param = request.args.get('param')
            id = request.args.get('id')
            t = Thread(target=func, args=(param, id))
            t.start()
            return output_wrapped(0, 'success', '成功')
        return wrapped_function_test
    return wrapTheFunction
def start_detect_algorithm():
    """
    Invoke the detection algorithm
    """
    def wrapTheFunction(func):
        @wraps(func)
        @bp.route('/start_detect_algorithm', methods=['get'])
        def wrapped_function_detect():
            param = request.args.get('param')
            id = request.args.get('id')
            t = Thread(target=func, args=(param, id))
            t.start()
            return output_wrapped(0, 'success', '成功')
        return wrapped_function_detect
    return wrapTheFunction
def start_download_pt():
    """
    Download the model
    """
    def wrapTheFunction(func):
        @wraps(func)
        @bp.route('/start_download_pt', methods=['get'])
        def wrapped_function_start_download_pt():
            param = request.args.get('param')
            data = func(param)
            return output_wrapped(0, 'success', data)
        return wrapped_function_start_download_pt
    return wrapTheFunction
def algorithm_process_value():
    """
    Get intermediate values, published via Redis pub/sub
    """
    def wrapTheFunction(func):
        @wraps(func)
        def wrapped_function(*args, **kwargs):
            data = func(*args, **kwargs)
            print(data)
            Task(redis_conn=redis_client.get_redis(), channel="ceshi").publish_task(
                data={'code': 0, 'msg': 'success', 'data': data})
            return output_wrapped(0, 'success', data)
        return wrapped_function
    return wrapTheFunction
def algorithm_process_value_websocket():
    """
    Get intermediate values, published via WebSocket
    """
    def wrapTheFunction(func):
        @wraps(func)
        def wrapped_function(*args, **kwargs):
            data = func(*args, **kwargs)
            id = data["id"]
            data_res = {'code': 0, "type": 'connected', 'msg': 'success', 'data': data}
            manager.send_message_proj_json(message=data_res, id=id)
            return data
        return wrapped_function
    return wrapTheFunction
def algorithm_kill_value_websocket():
    """
    Get the kill value, published via WebSocket
    """
    def wrapTheFunction(func):
        @wraps(func)
        def wrapped_function(*args, **kwargs):
            data = func(*args, **kwargs)
            id = data["id"]
            data_res = {'code': 1, "type": 'kill', 'msg': 'success', 'data': data}
            manager.send_message_proj_json(message=data_res, id=id)
            return data
        return wrapped_function
    return wrapTheFunction
def algorithm_error_value_websocket():
    """
    Get the error value, published via WebSocket
    """
    def wrapTheFunction(func):
        @wraps(func)
        def wrapped_function(*args, **kwargs):
            data = func(*args, **kwargs)
            id = data["id"]
            data_res = {'code': 2, "type": 'error', 'msg': 'fail', 'data': data}
            manager.send_message_proj_json(message=data_res, id=id)
            return data
        return wrapped_function
    return wrapTheFunction
def obtain_train_param():
    """
    Get training parameters
    """
    def wrapTheFunction(func):
        @wraps(func)
        @bp.route('/obtain_train_param', methods=['get'])
        def wrapped_function_train_param(*args, **kwargs):
            data = func(*args, **kwargs)
            return output_wrapped(0, 'success', data)
        return wrapped_function_train_param
    return wrapTheFunction
def obtain_test_param():
    """
    Get validation parameters
    """
    def wrapTheFunction(func):
        @wraps(func)
        @bp.route('/obtain_test_param', methods=['get'])
        def wrapped_function_test_param(*args, **kwargs):
            data = func(*args, **kwargs)
            return output_wrapped(0, 'success', data)
        return wrapped_function_test_param
    return wrapTheFunction
def obtain_detect_param():
    """
    Get detection parameters
    """
    def wrapTheFunction(func):
        @wraps(func)
        @bp.route('/obtain_detect_param', methods=['get'])
        def wrapped_function_inf_param(*args, **kwargs):
            data = func(*args, **kwargs)
            return output_wrapped(0, 'success', data)
        return wrapped_function_inf_param
    return wrapTheFunction
def obtain_download_pt_param():
    """
    Get model-download parameters
    """
    def wrapTheFunction(func):
        @wraps(func)
        @bp.route('/obtain_download_pt_param', methods=['get'])
        def wrapped_function_obtain_download_pt_param(*args, **kwargs):
            data = func(*args, **kwargs)
            return output_wrapped(0, 'success', data)
        return wrapped_function_obtain_download_pt_param
    return wrapTheFunction
@bp.route('/change_ifKillDIct', methods=['get'])
def change_ifKillDIct():
    """
    Modify the global variable (the interrupt flag)
    """
    id = request.args.get('id')
    type = request.args.get('type')
    set_value(id, type)
    return output_wrapped(0, 'success')
# @start_train_algorithm()
# def start(param: str):
#     """
#     Example
#     """
#     print(param)
#     process_value_list = ProcessValueList(name='1', value=[])
#     report = Report(rate_of_progess=0, process_value=[process_value_list], id='1')
#
#     @algorithm_process_value_websocket()
#     def process(v: int):
#         print(v)
#         report.rate_of_progess = ((v + 1) / 10) * 100
#         report.precision[0].value.append(v)
#         return report.dict()
#
#     for i in range(10):
#         process(i)
#     return report.dict()
from setparams import TrainParams
import os
from app.schemas.TrainResult import DetectProcessValueDice, DetectReport
from app import file_tool
def error_return(id: str, data):
    """
    Return when the algorithm fails
    """
    data_res = {'code': 2, "type": 'error', 'msg': 'fail', 'data': data}
    manager.send_message_proj_json(message=data_res, id=id)
# Start training
@start_train_algorithm()
def train_R0DY(params_str, id, getsomething):
    print('**********************************')
    print(getsomething)
    print('**********************************')
    manager.active_connections_dist[id] = getsomething
    print('**********************************')
    print(manager.active_connections_dist)
    print('**********************************')
    print(params_str)
    print('**********************************')
    from app.yolov5.train_server import train_start
    params = TrainParams()
    params.read_from_str(params_str)
    print(params.get('device').value)
    data_list = file_tool.get_file(ori_path=params.get('DatasetDir').value, type_list=params.get('CLASS_NAMES').value)
    weights = params.get('resumeModPath').value  # absolute path of the initial model
    img_size = params.get('img_size').value
    savemodel = os.path.splitext(params.get('saveModDir').value)[0] + '_' + str(img_size) + '.pt'  # append the image size to the model name
    epoches = params.get('epochnum').value
    batch_size = params.get('batch_size').value
    device = params.get('device').value
    # try:
    train_start(weights, savemodel, epoches, img_size, batch_size, device, data_list, id, getsomething)
    print("train down!")
    # except Exception as e:
    #     print(repr(e))
    #     error_return(id=id, data=repr(e))
# Start validation
@start_test_algorithm()
def validate_RODY(params_str, id):
    from app.yolov5.validate_server import validate_start
    params = TrainParams()
    params.read_from_str(params_str)
    weights = params.get('modPath').value  # absolute path of the model to validate
    (filename, extension) = os.path.splitext(weights)  # split the filename from its extension
    img_size = int(filename.split('ROD')[1].split('_')[2])  # get the image size
    # v_num = int(filename.split('ROD')[1].split('_')[1])  # get the version number
    output = params.get('outputPath').value
    batch_size = params.get('batch_size').default
    device = params.get('device').value
    validate_start(weights, img_size, batch_size, device, output, id)
@start_detect_algorithm()
def detect_RODY(params_str, id):
    from app.yolov5.detect_server import detect_start
    params = TrainParams()
    params.read_from_str(params_str)
    weights = params.get('modPath').value  # absolute path of the detection model
    input = params.get('inputPath').value
    outpath = params.get('outputPath').value
    # (filename, extension) = os.path.splitext(weights)  # split the filename from its extension
    # img_size = int(filename.split('ROD')[1].split('_')[2])  # get the image size
    # v_num = int(filename.split('ROD')[1].split('_')[1])  # get the version number
    # batch_size = params.get('batch_size').default
    device = params.get('device').value
    detect_start(input, weights, outpath, device, id)
@start_download_pt()
def Export_model_RODY(params_str):
    from app.yolov5.export import Start_Model_Export
    import zipfile
    params = TrainParams()
    params.read_from_str(params_str)
    exp_inputPath = params.get('exp_inputPath').value  # model path
    print('输入模型:', exp_inputPath)
    exp_device = params.get('device').value
    imgsz = params.get('imgsz').value
    modellist = Start_Model_Export(exp_inputPath, exp_device, imgsz)
    exp_outputPath = exp_inputPath.replace('pt', 'zip')  # archive file
    print('模型路径:', exp_outputPath)
    zipf = zipfile.ZipFile(exp_outputPath, 'w')
    for file in modellist:
        zipf.write(file, arcname=Path(file).name)  # compress the TorchScript and ONNX models
    return exp_outputPath
@obtain_train_param()
def returnTrainParams():
    nvmlInit()
    gpuDeviceCount = nvmlDeviceGetCount()  # number of Nvidia GPUs
    _kernel = [f"cuda:{a}" for a in range(gpuDeviceCount)]
    params_list = [
        {"index": 0, "name": "epochnum", "value": 10, "description": '训练轮次', "default": 100, "type": "I", 'show': True},
        {"index": 1, "name": "batch_size", "value": 4, "description": '批次图像数量', "default": 1, "type": "I",
         'show': True},
        {"index": 2, "name": "img_size", "value": 640, "description": '训练图像大小', "default": 640, "type": "I",
         'show': True},
        {"index": 3, "name": "device", "value": f'{_kernel[0]}', "description": '训练核心', "default": f'{_kernel[0]}', "type": "E",
         "items": _kernel, 'show': False},  # _kernel
        {"index": 4, "name": "saveModDir", "value": "E:/alg_demo-master/alg_demo/app/yolov5/best.pt",
         "description": '保存模型路径',
         "default": "./app/maskrcnn/saved_model/test.pt", "type": "S", 'show': False},
        {"index": 5, "name": "resumeModPath", "value": '/yolov5s.pt',
         "description": '继续训练路径', "default": '', "type": "S",
         'show': False},
        {"index": 6, "name": "resumeMod", "value": '', "description": '继续训练模型', "default": '', "type": "E", "items": '',
         'show': True},
        {"index": 7, "name": "CLASS_NAMES", "value": ['hole', '456'], "description": '类别名称', "default": '', "type": "L",
         "items": '',
         'show': False},
        {"index": 8, "name": "DatasetDir", "value": "E:/aicheck/data_set/11442136178662604800/ori",
         "description": '数据集路径',
         "default": "./app/maskrcnn/datasets/test", "type": "S", 'show': False}  # ORI_PATH
    ]
    # {"index": 9, "name": "saveEpoch", "value": 2, "description": '保存模型轮次', "default": 2, "type": "I", 'show': True}]
    params_str = json.dumps(params_list)
    return params_str
@obtain_test_param()
def returnValidateParams():
    # nvmlInit()
    # gpuDeviceCount = nvmlDeviceGetCount()  # number of Nvidia GPUs
    # _kernel = [f"cuda:{a}" for a in range(gpuDeviceCount)]
    params_list = [
        {"index": 0, "name": "modPath", "value": "E:/alg_demo-master/alg_demo/app/yolov5/圆孔_123_RODY_1_640.pt",
         "description": '验证模型路径', "default": "./app/maskrcnn/saved_model/test.pt", "type": "S", 'show': False},
        {"index": 1, "name": "batch_size", "value": 1, "description": '批次图像数量', "default": 1, "type": "I",
         'show': False},
        {"index": 2, "name": "img_size", "value": 640, "description": '训练图像大小', "default": 640, "type": "I",
         'show': False},
        {"index": 3, "name": "outputPath", "value": 'E:/aicheck/data_set/11442136178662604800/val_results/',
         "description": '输出结果路径',
         "default": './app/maskrcnn/datasets/M006B_waibi/res', "type": "S", 'show': False},
        {"index": 4, "name": "device", "value": "0", "description": '训练核心', "default": "cuda", "type": "S",
         "items": '', 'show': False}  # _kernel
    ]
    # {"index": 9, "name": "saveEpoch", "value": 2, "description": '保存模型轮次', "default": 2, "type": "I", 'show': True}]
    params_str = json.dumps(params_list)
    return params_str
@obtain_detect_param()
def returnDetectParams():
    # nvmlInit()
    # gpuDeviceCount = nvmlDeviceGetCount()  # number of Nvidia GPUs
    # _kernel = [f"cuda:{a}" for a in range(gpuDeviceCount)]
    params_list = [
        {"index": 0, "name": "inputPath", "value": 'E:/aicheck/data_set/11442136178662604800/input/',
         "description": '输入图像路径', "default": './app/maskrcnn/datasets/M006B_waibi/JPEGImages', "type": "S",
         'show': False},
        {"index": 1, "name": "outputPath", "value": 'E:/aicheck/data_set/11442136178662604800/val_results/',
         "description": '输出结果路径',
         "default": './app/maskrcnn/datasets/M006B_waibi/res', "type": "S", 'show': False},
        {"index": 2, "name": "modPath", "value": "E:/alg_demo-master/alg_demo/app/yolov5/圆孔_123_RODY_1_640.pt",
         "description": '模型路径', "default": "./app/maskrcnn/saved_model/test.pt", "type": "S", 'show': False},
        {"index": 3, "name": "device", "value": "0", "description": '推理核', "default": "cpu", "type": "S",
         'show': False},
    ]
    # {"index": 9, "name": "saveEpoch", "value": 2, "description": '保存模型轮次', "default": 2, "type": "I", 'show': True}]
    params_str = json.dumps(params_list)
    return params_str
@obtain_download_pt_param()
def returnDownloadParams():
    params_list = [
        {"index": 0, "name": "exp_inputPath", "value": 'E:/alg_demo-master/alg_demo/app/yolov5/圆孔_123_RODY_1_640.pt',
         "description": '转化模型输入路径',
         "default": 'E:/alg_demo-master/alg_demo/app/yolov5/圆孔_123_RODY_1_640.pt/',
         "type": "S", 'show': False},
        {"index": 1, "name": "device", "value": 'gpu', "description": 'CPU或GPU', "default": 'gpu', "type": "S",
         'show': False},
        {"index": 2, "name": "imgsz", "value": 640, "description": '图像大小', "default": 640, "type": "I",
         'show': True}
    ]
    params_str = json.dumps(params_list)
    return params_str
if __name__ == '__main__':
    par = returnTrainParams()
    print(par)
    id = '1'
    train_R0DY(par, id)
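
For reference, a minimal client-side sketch of exercising these routes; the host/port and the requests usage are assumptions, while the /api prefix and the id format are taken from the nohup.out log later in this commit. It also assumes output_wrapped returns JSON with the payload under 'data' (matching the data_res dicts above) and that a WebSocket connection for the id is already registered, since start_train_algorithm reads manager.active_connections_dist[id] before forking.

import requests

BASE = 'http://127.0.0.1:5000/api'         # assumed host and port of the Flask app
task_id = '194741569180540928_19_train'    # id format seen in the nohup.out log

# fetch the default training parameters as a JSON string
param_str = requests.get(f'{BASE}/obtain_train_param').json()['data']

# start training: the route clears the interrupt flag and forks a Process
requests.get(f'{BASE}/start_train_algorithm', params={'param': param_str, 'id': task_id})

# later, request an early stop by flipping the flag
requests.get(f'{BASE}/change_ifKillDIct', params={'id': task_id, 'type': 'true'})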
"""
@Time 2022/9/20 16:17
@Auth
@File AlgorithmController.py
@IDE PyCharm
@MottoABC(Always Be Coding)
@Desc算法接口
"""
import json
from functools import wraps
from threading import Thread
from multiprocessing import Process
from time import sleep
from flask import Blueprint, request
from app.schemas.TrainResult import Report, ProcessValueList
from app.utils.RedisMQTool import Task
from app.utils.StandardizedOutput import output_wrapped
from app.utils.redis_config import redis_client
from app.utils.websocket_tool import manager
from app.configs.global_var import set_value
import sys
from pathlib import Path
from pynvml import *
# FILE = Path(__file__).resolve()
# ROOT = FILE.parents[0] # YOLOv5 root directory
# if str(ROOT) not in sys.path:
# sys.path.append(str(ROOT)) # add ROOT to PATH
# sys.path.append("/mnt/sdc/algorithm/AICheck-MaskRCNN/app/maskrcnn_ppx")
# import ppx as pdx
bp = Blueprint('AlgorithmController', __name__)
ifKillDict = {}
def start_train_algorithm():
"""
调用训练算法
"""
def wrapTheFunction(func):
@wraps(func)
@bp.route('/start_train_algorithm', methods=['get'])
def wrapped_function():
param = request.args.get('param')
id = request.args.get('id')
dict = manager.active_connections_dist
# t = Thread(target=func, args=(param, id))
t = Process(target=func, args=(param, id, dict[id]), name=id)
set_value(key=id, value=False)
t.start()
return output_wrapped(0, 'success', '成功')
return wrapped_function
return wrapTheFunction
def start_test_algorithm():
"""
调用验证算法
"""
def wrapTheFunction(func):
@wraps(func)
@bp.route('/start_test_algorithm', methods=['get'])
def wrapped_function_test():
param = request.args.get('param')
id = request.args.get('id')
t = Thread(target=func, args=(param, id))
t.start()
return output_wrapped(0, 'success', '成功')
return wrapped_function_test
return wrapTheFunction
def start_detect_algorithm():
"""
调用检测算法
"""
def wrapTheFunction(func):
@wraps(func)
@bp.route('/start_detect_algorithm', methods=['get'])
def wrapped_function_detect():
param = request.args.get('param')
id = request.args.get('id')
t = Thread(target=func, args=(param, id))
t.start()
return output_wrapped(0, 'success', '成功')
return wrapped_function_detect
return wrapTheFunction
def start_download_pt():
"""
下载模型
"""
def wrapTheFunction(func):
@wraps(func)
@bp.route('/start_download_pt', methods=['get'])
def wrapped_function_start_download_pt():
param = request.args.get('param')
data = func(param)
return output_wrapped(0, 'success', data)
return wrapped_function_start_download_pt
return wrapTheFunction
def algorithm_process_value():
"""
获取中间值, redis订阅发布
"""
def wrapTheFunction(func):
@wraps(func)
def wrapped_function(*args, **kwargs):
data = func(*args, **kwargs)
print(data)
Task(redis_conn=redis_client.get_redis(), channel="ceshi").publish_task(
data={'code': 0, 'msg': 'success', 'data': data})
return output_wrapped(0, 'success', data)
return wrapped_function
return wrapTheFunction
def algorithm_process_value_websocket():
"""
获取中间值, websocket发布
"""
def wrapTheFunction(func):
@wraps(func)
def wrapped_function(*args, **kwargs):
data = func(*args, **kwargs)
id = data["id"]
data_res = {'code': 0, "type": 'connected', 'msg': 'success', 'data': data}
manager.send_message_proj_json(message=data_res, id=id)
return data
return wrapped_function
return wrapTheFunction
def algorithm_kill_value_websocket():
"""
获取kill值, websocket发布
"""
def wrapTheFunction(func):
@wraps(func)
def wrapped_function(*args, **kwargs):
data = func(*args, **kwargs)
id = data["id"]
data_res = {'code': 1, "type": 'kill', 'msg': 'success', 'data': data}
manager.send_message_proj_json(message=data_res, id=id)
return data
return wrapped_function
return wrapTheFunction
def algorithm_error_value_websocket():
"""
获取error值, websocket发布
"""
def wrapTheFunction(func):
@wraps(func)
def wrapped_function(*args, **kwargs):
data = func(*args, **kwargs)
id = data["id"]
data_res = {'code': 2, "type": 'error', 'msg': 'fail', 'data': data}
manager.send_message_proj_json(message=data_res, id=id)
return data
return wrapped_function
return wrapTheFunction
def obtain_train_param():
"""
获取训练参数
"""
def wrapTheFunction(func):
@wraps(func)
@bp.route('/obtain_train_param', methods=['get'])
def wrapped_function_train_param(*args, **kwargs):
data = func(*args, **kwargs)
return output_wrapped(0, 'success', data)
return wrapped_function_train_param
return wrapTheFunction
def obtain_test_param():
"""
获取验证参数
"""
def wrapTheFunction(func):
@wraps(func)
@bp.route('/obtain_test_param', methods=['get'])
def wrapped_function_test_param(*args, **kwargs):
data = func(*args, **kwargs)
return output_wrapped(0, 'success', data)
return wrapped_function_test_param
return wrapTheFunction
def obtain_detect_param():
"""
获取测试参数
"""
def wrapTheFunction(func):
@wraps(func)
@bp.route('/obtain_detect_param', methods=['get'])
def wrapped_function_inf_param(*args, **kwargs):
data = func(*args, **kwargs)
return output_wrapped(0, 'success', data)
return wrapped_function_inf_param
return wrapTheFunction
def obtain_download_pt_param():
"""
获取下载模型参数
"""
def wrapTheFunction(func):
@wraps(func)
@bp.route('/obtain_download_pt_param', methods=['get'])
def wrapped_function_obtain_download_pt_param(*args, **kwargs):
data = func(*args, **kwargs)
return output_wrapped(0, 'success', data)
return wrapped_function_obtain_download_pt_param
return wrapTheFunction
@bp.route('/change_ifKillDIct', methods=['get'])
def change_ifKillDIct():
"""
修改全局变量
"""
id = request.args.get('id')
type = request.args.get('type')
set_value(id, type)
return output_wrapped(0, 'success')
# @start_train_algorithm()
# def start(param: str):
# """
# 例子
# """
# print(param)
# process_value_list = ProcessValueList(name='1', value=[])
# report = Report(rate_of_progess=0, process_value=[process_value_list], id='1')
#
# @algorithm_process_value_websocket()
# def process(v: int):
# print(v)
# report.rate_of_progess = ((v + 1) / 10) * 100
# report.precision[0].value.append(v)
# return report.dict()
#
# for i in range(10):
# process(i)
# return report.dict()
from setparams import TrainParams
import os
from app.schemas.TrainResult import DetectProcessValueDice, DetectReport
from app import file_tool
def error_return(id: str, data):
"""
算法出错返回
"""
data_res = {'code': 2, "type": 'error', 'msg': 'fail', 'data': data}
manager.send_message_proj_json(message=data_res, id=id)
# 启动训练
@start_train_algorithm()
def train_R0DY(params_str, id, getsomething):
print('**********************************')
print(getsomething)
print('**********************************')
manager.active_connections_dist[id] = getsomething
print('**********************************')
print(manager.active_connections_dist)
print('**********************************')
print(params_str)
print('**********************************')
from app.yolov5.train_server import train_start
params = TrainParams()
params.read_from_str(params_str)
print(params.get('device').value)
data_list = file_tool.get_file(ori_path=params.get('DatasetDir').value, type_list=params.get('CLASS_NAMES').value)
weights = params.get('resumeModPath').value # 初始化模型绝对路径
img_size = params.get('img_size').value
savemodel = os.path.splitext(params.get('saveModDir').value)[0] + '_' + str(img_size) + '.pt' # 模型命名加上图像参数
epoches = params.get('epochnum').value
batch_size = params.get('batch_size').value
device = params.get('device').value
#try:
train_start(weights, savemodel, epoches, img_size, batch_size, device, data_list, id, getsomething)
print("train down!")
# except Exception as e:
# print(repr(e))
# error_return(id=id,data=repr(e))
# 启动验证程序
@start_test_algorithm()
def validate_RODY(params_str, id):
from app.yolov5.validate_server import validate_start
params = TrainParams()
params.read_from_str(params_str)
weights = params.get('modPath').value # 验证模型绝对路径
(filename, extension) = os.path.splitext(weights) # 文件名与后缀名分开
img_size = int(filename.split('ROD')[1].split('_')[2]) # 获取图像参数
# v_num = int(filename.split('ROD')[1].split('_')[1]) #获取版本号
output = params.get('outputPath').value
batch_size = params.get('batch_size').default
device = params.get('device').value
validate_start(weights, img_size, batch_size, device, output, id)
@start_detect_algorithm()
def detect_RODY(params_str, id):
from app.yolov5.detect_server import detect_start
params = TrainParams()
params.read_from_str(params_str)
weights = params.get('modPath').value # 检测模型绝对路径
input = params.get('inputPath').value
outpath = params.get('outputPath').value
# (filename, extension) = os.path.splitext(weights) # 文件名与后缀名分开
# img_size = int(filename.split('ROD')[1].split('_')[2]) #获取图像参数
# v_num = int(filename.split('ROD')[1].split('_')[1]) #获取版本号
# batch_size = params.get('batch_size').default
device = params.get('device').value
detect_start(input, weights, outpath, device, id)
@start_download_pt()
def Export_model_RODY(params_str):
from app.yolov5.export import Start_Model_Export
import zipfile
params = TrainParams()
params.read_from_str(params_str)
exp_inputPath = params.get('exp_inputPath').value # 模型路径
print('输入模型:', exp_inputPath)
exp_device = params.get('device').value
imgsz = params.get('imgsz').value
modellist = Start_Model_Export(exp_inputPath, exp_device, imgsz)
exp_outputPath = exp_inputPath.replace('pt', 'zip') # 压缩文件
print('模型路径:',exp_outputPath)
zipf = zipfile.ZipFile(exp_outputPath, 'w')
for file in modellist:
zipf.write(file, arcname=Path(file).name) # 将torchscript和onnx模型压缩
return exp_outputPath
@obtain_train_param()
def returnTrainParams():
nvmlInit()
gpuDeviceCount = nvmlDeviceGetCount() # 获取Nvidia GPU块数
_kernel = [f"cuda:{a}" for a in range(gpuDeviceCount)]
params_list = [
{"index": 0, "name": "epochnum", "value": 10, "description": '训练轮次', "default": 100, "type": "I", 'show': True},
{"index": 1, "name": "batch_size", "value": 4, "description": '批次图像数量', "default": 1, "type": "I",
'show': True},
{"index": 2, "name": "img_size", "value": 640, "description": '训练图像大小', "default": 640, "type": "I",
'show': True},
{"index": 3, "name": "device", "value": f'{_kernel[0]}', "description": '训练核心', "default": f'{_kernel[0]}', "type": "E",
"items": _kernel, 'show': False}, # _kernel
{"index": 4, "name": "saveModDir", "value": "E:/alg_demo-master/alg_demo/app/yolov5/best.pt",
"description": '保存模型路径',
"default": "./app/maskrcnn/saved_model/test.pt", "type": "S", 'show': False},
{"index": 5, "name": "resumeModPath", "value": '/yolov5s.pt',
"description": '继续训练路径', "default": '', "type": "S",
'show': False},
{"index": 6, "name": "resumeMod", "value": '', "description": '继续训练模型', "default": '', "type": "E", "items": '',
'show': True},
{"index": 7, "name": "CLASS_NAMES", "value": ['hole', '456'], "description": '类别名称', "default": '', "type": "L",
"items": '',
'show': False},
{"index": 8, "name": "DatasetDir", "value": "E:/aicheck/data_set/11442136178662604800/ori",
"description": '数据集路径',
"default": "./app/maskrcnn/datasets/test", "type": "S", 'show': False} # ORI_PATH
]
# {"index": 9, "name": "saveEpoch", "value": 2, "description": '保存模型轮次', "default": 2, "type": "I", 'show': True}]
params_str = json.dumps(params_list)
return params_str
@obtain_test_param()
def returnValidateParams():
# nvmlInit()
# gpuDeviceCount = nvmlDeviceGetCount() # 获取Nvidia GPU块数
# _kernel = [f"cuda:{a}" for a in range(gpuDeviceCount)]
params_list = [
{"index": 0, "name": "modPath", "value": "E:/alg_demo-master/alg_demo/app/yolov5/圆孔_123_RODY_1_640.pt",
"description": '验证模型路径', "default": "./app/maskrcnn/saved_model/test.pt", "type": "S", 'show': False},
{"index": 1, "name": "batch_size", "value": 1, "description": '批次图像数量', "default": 1, "type": "I",
'show': False},
{"index": 2, "name": "img_size", "value": 640, "description": '训练图像大小', "default": 640, "type": "I",
'show': False},
{"index": 3, "name": "outputPath", "value": 'E:/aicheck/data_set/11442136178662604800/val_results/',
"description": '输出结果路径',
"default": './app/maskrcnn/datasets/M006B_waibi/res', "type": "S", 'show': False},
{"index": 4, "name": "device", "value": "0", "description": '训练核心', "default": "cuda", "type": "S",
"items": '', 'show': False} # _kernel
]
# {"index": 9, "name": "saveEpoch", "value": 2, "description": '保存模型轮次', "default": 2, "type": "I", 'show': True}]
params_str = json.dumps(params_list)
return params_str
@obtain_detect_param()
def returnDetectParams():
# nvmlInit()
# gpuDeviceCount = nvmlDeviceGetCount() # 获取Nvidia GPU块数
# _kernel = [f"cuda:{a}" for a in range(gpuDeviceCount)]
params_list = [
{"index": 0, "name": "inputPath", "value": 'E:/aicheck/data_set/11442136178662604800/input/',
"description": '输入图像路径', "default": './app/maskrcnn/datasets/M006B_waibi/JPEGImages', "type": "S",
'show': False},
{"index": 1, "name": "outputPath", "value": 'E:/aicheck/data_set/11442136178662604800/val_results/',
"description": '输出结果路径',
"default": './app/maskrcnn/datasets/M006B_waibi/res', "type": "S", 'show': False},
{"index": 2, "name": "modPath", "value": "E:/alg_demo-master/alg_demo/app/yolov5/圆孔_123_RODY_1_640.pt",
"description": '模型路径', "default": "./app/maskrcnn/saved_model/test.pt", "type": "S", 'show': False},
{"index": 3, "name": "device", "value": "0", "description": '推理核', "default": "cpu", "type": "S",
'show': False},
]
# {"index": 9, "name": "saveEpoch", "value": 2, "description": '保存模型轮次', "default": 2, "type": "I", 'show': True}]
params_str = json.dumps(params_list)
return params_str
@obtain_download_pt_param()
def returnDownloadParams():
params_list = [
{"index": 0, "name": "exp_inputPath", "value": 'E:/alg_demo-master/alg_demo/app/yolov5/圆孔_123_RODY_1_640.pt',
"description": '转化模型输入路径',
"default": 'E:/alg_demo-master/alg_demo/app/yolov5/圆孔_123_RODY_1_640.pt/',
"type": "S", 'show': False},
{"index": 1, "name": "device", "value": 'gpu', "description": 'CPU或GPU', "default": 'gpu', "type": "S",
'show': False},
{"index": 2, "name": "imgsz", "value": 640, "description": '图像大小', "default": 640, "type": "I",
'show': True}
]
params_str = json.dumps(params_list)
return params_str
if __name__ == '__main__':
par = returnTrainParams()
print(par)
id='1'
train_R0DY(par,id)


@@ -1,33 +1,33 @@
import logging
from flask import Blueprint, app
from app.exts import redisClient
from app.utils.StandardizedOutput import output_wrapped
bp = Blueprint('WebStatus', __name__)
@bp.route('/ping', methods=['GET'])
def ping():
    """ For health check.
    """
    res = output_wrapped(0, 'pong', '')
    return res
@bp.route('/redis/set', methods=['post'])
def redis_set():
    redisClient.set('foo', 'bar', ex=60*60*6)
    res = output_wrapped(0, 'set foo', '')
    return res
@bp.route('/redis/get', methods=['get'])
def redis_get():
    """ For health check.
    """
    the_food = redisClient.get('foo')
    if not the_food:
        return output_wrapped(5006, 'foo', "")
    return output_wrapped(0, 'foo', the_food.decode("utf-8"))


@@ -1,4 +1,4 @@
from app.core.common_utils import import_subs
__all__ = import_subs(locals(), modules_only=True)


@@ -1,9 +1,9 @@
path: null
train: /mnt/sdc/aicheck/IntelligentizeAI/data_set/193120735164768256/trained/images/train/
val: /mnt/sdc/aicheck/IntelligentizeAI/data_set/193120735164768256/trained/images/val/
train: /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/190857268466688000/trained/images/train/
val: /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/190857268466688000/trained/images/val/
test: null
names:
0: hole
1: '456'
2: dog
3: cat
2: zui
3: mianbang

nohup.out

@@ -1,51 +1,478 @@
nohup: ignoring input
2022-11-24 08:58:34,262 INFO sqlalchemy.engine.Engine select pg_catalog.version()
2022-11-24 08:58:34,262 INFO sqlalchemy.engine.Engine [raw sql] {}
2022-11-24 08:58:34,267 INFO sqlalchemy.engine.Engine select current_schema()
2022-11-24 08:58:34,267 INFO sqlalchemy.engine.Engine [raw sql] {}
2022-11-24 08:58:34,272 INFO sqlalchemy.engine.Engine show standard_conforming_strings
2022-11-24 08:58:34,272 INFO sqlalchemy.engine.Engine [raw sql] {}
2022-11-24 08:58:34,277 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2022-11-24 08:58:34,277 INFO sqlalchemy.engine.Engine COMMIT
export: data=app/yolov5/data/coco128.yaml, weights=/mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.pt, imgsz=[640, 640], batch_size=1, device=0, half=False, inplace=False, train=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=11, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['torchscript', 'onnx']
2022-11-29 08:44:22,208 INFO sqlalchemy.engine.Engine select pg_catalog.version()
2022-11-29 08:44:22,209 INFO sqlalchemy.engine.Engine [raw sql] {}
2022-11-29 08:44:22,214 INFO sqlalchemy.engine.Engine select current_schema()
2022-11-29 08:44:22,214 INFO sqlalchemy.engine.Engine [raw sql] {}
2022-11-29 08:44:22,219 INFO sqlalchemy.engine.Engine show standard_conforming_strings
2022-11-29 08:44:22,219 INFO sqlalchemy.engine.Engine [raw sql] {}
2022-11-29 08:44:22,225 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2022-11-29 08:44:22,225 INFO sqlalchemy.engine.Engine COMMIT
detect_server: id=195095688265211904_17_detect, weights=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights//核酸检测_190857268466688000_R-ODY_17_640.pt, source=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/195095688265211904/camera, output=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/195095688265211904/results, data=app/yolov5/data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=0, view_img=False, save_txt=True, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=app/yolov5/runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 2022-11-7 Python-3.8.13 torch-1.8.0+cu111 CUDA:0 (Tesla T4, 15110MiB)
2022-11-29 08:44:36.902619: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-29 08:44:37.033367: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2022-11-29 08:44:37.070603: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-11-29 08:44:37.607302: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/cv2/../../lib64::/usr/local/cuda/lib64
2022-11-29 08:44:37.607416: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/cv2/../../lib64::/usr/local/cuda/lib64
2022-11-29 08:44:37.607434: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Fusing layers...
192.168.0.20 - - [2022-11-29 08:44:42] "GET /api/obtain_detect_param HTTP/1.1" 200 1110 0.000672
192.168.0.20 - - [2022-11-29 08:44:42] "GET /api/start_detect_algorithm?param=%5B%7B%22index%22%3A+0%2C+%22name%22%3A+%22inputPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2F195095688265211904%2Fcamera%22%2C+%22description%22%3A+%22%5Cu8f93%5Cu5165%5Cu56fe%5Cu50cf%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fdatasets%2FM006B_waibi%2FJPEGImages%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+1%2C+%22name%22%3A+%22outputPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2F195095688265211904%2Fresults%22%2C+%22description%22%3A+%22%5Cu8f93%5Cu51fa%5Cu7ed3%5Cu679c%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fdatasets%2FM006B_waibi%2Fres%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+2%2C+%22name%22%3A+%22modPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2Fweights%2F%2F%5Cu6838%5Cu9178%5Cu68c0%5Cu6d4b_190857268466688000_R-ODY_17_640.pt%22%2C+%22description%22%3A+%22%5Cu6a21%5Cu578b%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fsaved_model%2Ftest.pt%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+3%2C+%22name%22%3A+%22device%22%2C+%22value%22%3A+%220%22%2C+%22description%22%3A+%22%5Cu63a8%5Cu7406%5Cu6838%22%2C+%22default%22%3A+%22cpu%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%5D&id=195095688265211904_17_detect HTTP/1.1" 200 161 0.002945
detect_server: id=195095688265211904_17_detect, weights=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights//核酸检测_190857268466688000_R-ODY_17_640.pt, source=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/195095688265211904/camera, output=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/195095688265211904/results, data=app/yolov5/data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=0, view_img=False, save_txt=True, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=app/yolov5/runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 2022-11-7 Python-3.8.13 torch-1.8.0+cu111 CUDA:0 (Tesla T4, 15110MiB)
Fusing layers...
Model summary: 213 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
------进入websocket
Model summary: 213 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
image 1/1 /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/195095688265211904/camera/微信截图_20221129084118.png: 640x512 1 zui, 1 mianbang, 23.9ms
Speed: 0.4ms pre-process, 23.9ms inference, 1.5ms NMS per image at shape (1, 3, 640, 640)
image 1/1 /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/195095688265211904/camera/微信截图_20221129084118.png: 640x512 1 zui, 1 mianbang, 21.2ms
Speed: 0.4ms pre-process, 21.2ms inference, 1.1ms NMS per image at shape (1, 3, 640, 640)
192.168.0.20 - - [2022-11-29 08:46:07] "GET /api/obtain_detect_param HTTP/1.1" 200 1110 0.000634
192.168.0.20 - - [2022-11-29 08:46:07] "GET /api/start_detect_algorithm?param=%5B%7B%22index%22%3A+0%2C+%22name%22%3A+%22inputPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2F195095688265211904%2Fcamera%22%2C+%22description%22%3A+%22%5Cu8f93%5Cu5165%5Cu56fe%5Cu50cf%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fdatasets%2FM006B_waibi%2FJPEGImages%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+1%2C+%22name%22%3A+%22outputPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2F195095688265211904%2Fresults%22%2C+%22description%22%3A+%22%5Cu8f93%5Cu51fa%5Cu7ed3%5Cu679c%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fdatasets%2FM006B_waibi%2Fres%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+2%2C+%22name%22%3A+%22modPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2Fweights%2F%2F%5Cu6838%5Cu9178%5Cu68c0%5Cu6d4b_190857268466688000_R-ODY_17_640.pt%22%2C+%22description%22%3A+%22%5Cu6a21%5Cu578b%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fsaved_model%2Ftest.pt%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+3%2C+%22name%22%3A+%22device%22%2C+%22value%22%3A+%220%22%2C+%22description%22%3A+%22%5Cu63a8%5Cu7406%5Cu6838%22%2C+%22default%22%3A+%22cpu%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%5D&id=195095688265211904_17_detect HTTP/1.1" 200 161 0.002091
detect_server: id=195095688265211904_17_detect, weights=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights//核酸检测_190857268466688000_R-ODY_17_640.pt, source=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/195095688265211904/camera, output=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/195095688265211904/results, data=app/yolov5/data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=0, view_img=False, save_txt=True, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=app/yolov5/runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 2022-11-7 Python-3.8.13 torch-1.8.0+cu111 CUDA:0 (Tesla T4, 15110MiB)
Fusing layers...
Model summary: 213 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
image 1/1 /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/195095688265211904/camera/微信截图_20221129084118.png: 640x512 1 zui, 1 mianbang, 12.3ms
Speed: 0.4ms pre-process, 12.3ms inference, 0.8ms NMS per image at shape (1, 3, 640, 640)
------进入websocket
192.168.0.20 - - [2022-11-29 08:46:19] "GET /api/obtain_detect_param HTTP/1.1" 200 1110 0.000607
192.168.0.20 - - [2022-11-29 08:46:19] "GET /api/start_detect_algorithm?param=%5B%7B%22index%22%3A+0%2C+%22name%22%3A+%22inputPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2F195095688265211904%2Fcamera%22%2C+%22description%22%3A+%22%5Cu8f93%5Cu5165%5Cu56fe%5Cu50cf%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fdatasets%2FM006B_waibi%2FJPEGImages%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+1%2C+%22name%22%3A+%22outputPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2F195095688265211904%2Fresults%22%2C+%22description%22%3A+%22%5Cu8f93%5Cu51fa%5Cu7ed3%5Cu679c%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fdatasets%2FM006B_waibi%2Fres%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+2%2C+%22name%22%3A+%22modPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2Fweights%2F%2F%5Cu6838%5Cu9178%5Cu68c0%5Cu6d4b_190857268466688000_R-ODY_17_640.pt%22%2C+%22description%22%3A+%22%5Cu6a21%5Cu578b%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fsaved_model%2Ftest.pt%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+3%2C+%22name%22%3A+%22device%22%2C+%22value%22%3A+%220%22%2C+%22description%22%3A+%22%5Cu63a8%5Cu7406%5Cu6838%22%2C+%22default%22%3A+%22cpu%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%5D&id=195095688265211904_17_detect HTTP/1.1" 200 161 0.002050
detect_server: id=195095688265211904_17_detect, weights=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights//核酸检测_190857268466688000_R-ODY_17_640.pt, source=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/195095688265211904/camera, output=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/195095688265211904/results, data=app/yolov5/data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=0, view_img=False, save_txt=True, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=app/yolov5/runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 2022-11-7 Python-3.8.13 torch-1.8.0+cu111 CUDA:0 (Tesla T4, 15110MiB)
Fusing layers...
Model summary: 213 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
image 1/1 /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/195095688265211904/camera/微信截图_20221129084118.png: 640x512 1 zui, 1 mianbang, 12.3ms
Speed: 0.4ms pre-process, 12.3ms inference, 0.8ms NMS per image at shape (1, 3, 640, 640)
------进入websocket
192.168.0.20 - - [2022-11-29 09:46:07] "GET /api/obtain_train_param HTTP/1.1" 200 1936 0.003772
------进入websocket
------进入websocket
192.168.0.20 - - [2022-11-29 09:52:50] "GET /api/obtain_train_param HTTP/1.1" 200 1936 0.000666
------进入websocket
------进入websocket
------进入websocket
-----------回调消息成功------------
-----------回调消息成功------------
-----------回调消息成功------------
-----------回调消息成功------------
-----------回调消息成功------------
-----------回调消息成功------------
-----------回调消息成功------------
-----------回调消息成功------------
-----------回调消息成功------------
192.168.0.20 - - [2022-11-29 10:06:37] "GET /api/obtain_train_param HTTP/1.1" 200 1936 0.000656
------进入websocket
存储ws连接对象
requirements: /mnt/sdc/algorithm/R-ODY/app/yolov5/requirements.txt not found, check failed.
True
requirements: /mnt/sdc/algorithm/R-ODY/app/yolov5/requirements.txt not found, check failed.
True
存储ws连接对象
图片总数量: 1
处理成功数量: 1
处理失败数量: 0
图片总数量: 1
处理成功数量: 1
处理失败数量: 0
requirements: /mnt/sdc/algorithm/R-ODY/app/yolov5/requirements.txt not found, check failed.
True
Total images: 1
Successfully processed: 1
Failed: 0
storing ws connection object
requirements: /mnt/sdc/algorithm/R-ODY/app/yolov5/requirements.txt not found, check failed.
True
Total images: 1
Successfully processed: 1
Failed: 0
storing ws connection object
storing ws connection object
storing ws connection object
storing ws connection object
storing ws connection object
storing ws connection object
None
None
None
None
None
None
None
None
None
storing ws connection object
192.168.0.20 - - [2022-11-29 10:06:49] "GET /api/start_train_algorithm?param=%5B%7B%22index%22%3A+0%2C+%22name%22%3A+%22epochnum%22%2C+%22value%22%3A+10%2C+%22description%22%3A+%22%5Cu8bad%5Cu7ec3%5Cu8f6e%5Cu6b21%22%2C+%22default%22%3A+100%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+1%2C+%22name%22%3A+%22batch_size%22%2C+%22value%22%3A+4%2C+%22description%22%3A+%22%5Cu6279%5Cu6b21%5Cu56fe%5Cu50cf%5Cu6570%5Cu91cf%22%2C+%22default%22%3A+1%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+2%2C+%22name%22%3A+%22img_size%22%2C+%22value%22%3A+640%2C+%22description%22%3A+%22%5Cu8bad%5Cu7ec3%5Cu56fe%5Cu50cf%5Cu5927%5Cu5c0f%22%2C+%22default%22%3A+640%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+3%2C+%22name%22%3A+%22device%22%2C+%22value%22%3A+%22cuda%3A0%22%2C+%22description%22%3A+%22%5Cu8bad%5Cu7ec3%5Cu6838%5Cu5fc3%22%2C+%22default%22%3A+%22cuda%3A0%22%2C+%22type%22%3A+%22E%22%2C+%22items%22%3A+%5B%22cuda%3A0%22%2C+%22cuda%3A1%22%5D%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+4%2C+%22name%22%3A+%22saveModDir%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2Fweights%2F1128test_194741569180540928_R-ODY_19.pt%22%2C+%22description%22%3A+%22%5Cu4fdd%5Cu5b58%5Cu6a21%5Cu578b%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fsaved_model%2Ftest.pt%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+5%2C+%22name%22%3A+%22resumeModPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2Fweights%2F%2Fyolov5s.pt%22%2C+%22description%22%3A+%22%5Cu7ee7%5Cu7eed%5Cu8bad%5Cu7ec3%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+6%2C+%22name%22%3A+%22resumeMod%22%2C+%22value%22%3A+%22%22%2C+%22description%22%3A+%22%5Cu7ee7%5Cu7eed%5Cu8bad%5Cu7ec3%5Cu6a21%5Cu578b%22%2C+%22default%22%3A+%22%22%2C+%22type%22%3A+%22E%22%2C+%22items%22%3A+%22%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+7%2C+%22name%22%3A+%22CLASS_NAMES%22%2C+%22value%22%3A+%5B%22hole%22%2C+%22456%22%2C+%22aeroplane%22%2C+%22tvmonitor%22%2C+%22train%22%2C+%22boat%22%2C+%22dog%22%2C+%22chair%22%2C+%22bird%22%2C+%22bicycle%22%2C+%22person%22%2C+%22bottle%22%2C+%22sheep%22%2C+%22cat%22%5D%2C+%22description%22%3A+%22%5Cu7c7b%5Cu522b%5Cu540d%5Cu79f0%22%2C+%22default%22%3A+%22%22%2C+%22type%22%3A+%22L%22%2C+%22items%22%3A+%22%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+8%2C+%22name%22%3A+%22DatasetDir%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2F194741569180540928%2Fori%22%2C+%22description%22%3A+%22%5Cu6570%5Cu636e%5Cu96c6%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fdatasets%2Ftest%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%5D&id=194741569180540928_19_train HTTP/1.1" 200 161 0.057693
deleting image data
deleting json data
train_server: weights=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights//yolov5s.pt, savemodel=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/1128test_194741569180540928_R-ODY_19_640.pt, cfg=, data=/mnt/sdc/algorithm/R-ODY/app/yolov5/data/coco128.yaml, hyp=/mnt/sdc/algorithm/R-ODY/app/yolov5/data/hyps/hyp.scratch-low.yaml, epochs=10, batch_size=4, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=None, image_weights=False, device=cuda:0, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=/mnt/sdc/algorithm/R-ODY/app/yolov5/runs/train, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest
Weights & Biases: run 'pip install wandb' to automatically track and visualize YOLOv5 🚀 runs in Weights & Biases
ClearML: run 'pip install clearml' to automatically track, visualize and remotely train YOLOv5 🚀 in ClearML
TensorBoard: Start with 'tensorboard --logdir /mnt/sdc/algorithm/R-ODY/app/yolov5/runs/train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=14
from n params module arguments
0 -1 1 3520 app.yolov5.models.common.Conv [3, 32, 6, 2, 2]
1 -1 1 18560 app.yolov5.models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 app.yolov5.models.common.C3 [64, 64, 1]
3 -1 1 73984 app.yolov5.models.common.Conv [64, 128, 3, 2]
4 -1 2 115712 app.yolov5.models.common.C3 [128, 128, 2]
5 -1 1 295424 app.yolov5.models.common.Conv [128, 256, 3, 2]
6 -1 3 625152 app.yolov5.models.common.C3 [256, 256, 3]
7 -1 1 1180672 app.yolov5.models.common.Conv [256, 512, 3, 2]
8 -1 1 1182720 app.yolov5.models.common.C3 [512, 512, 1]
9 -1 1 656896 app.yolov5.models.common.SPPF [512, 512, 5]
10 -1 1 131584 app.yolov5.models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 app.yolov5.models.common.Concat [1]
13 -1 1 361984 app.yolov5.models.common.C3 [512, 256, 1, False]
14 -1 1 33024 app.yolov5.models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 app.yolov5.models.common.Concat [1]
17 -1 1 90880 app.yolov5.models.common.C3 [256, 128, 1, False]
18 -1 1 147712 app.yolov5.models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 app.yolov5.models.common.Concat [1]
20 -1 1 296448 app.yolov5.models.common.C3 [256, 256, 1, False]
21 -1 1 590336 app.yolov5.models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 app.yolov5.models.common.Concat [1]
23 -1 1 1182720 app.yolov5.models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 51243 app.yolov5.models.yolo.Detect [14, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model summary: 270 layers, 7057387 parameters, 7057387 gradients, 16.1 GFLOPs
**********************************
[<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06880>]
**********************************
**********************************
{'195095688265211904_17_detect': [<geventwebsocket.websocket.WebSocket object at 0x7fd0e269a340>, <geventwebsocket.websocket.WebSocket object at 0x7fcf55fe9820>, <geventwebsocket.websocket.WebSocket object at 0x7fcfb9ef6760>, <geventwebsocket.websocket.WebSocket object at 0x7fcf55fe9e20>], '194741569180540928_14_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06760>], '194741569180540928_15_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06a60>], '194741569180540928_16_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06520>], '194741569180540928_17_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf5499d100>], '194741569180540928_18_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf5499d460>], '194741569180540928_19_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06880>]}
**********************************
[{"index": 0, "name": "epochnum", "value": 10, "description": "\u8bad\u7ec3\u8f6e\u6b21", "default": 100, "type": "I", "show": true}, {"index": 1, "name": "batch_size", "value": 4, "description": "\u6279\u6b21\u56fe\u50cf\u6570\u91cf", "default": 1, "type": "I", "show": true}, {"index": 2, "name": "img_size", "value": 640, "description": "\u8bad\u7ec3\u56fe\u50cf\u5927\u5c0f", "default": 640, "type": "I", "show": true}, {"index": 3, "name": "device", "value": "cuda:0", "description": "\u8bad\u7ec3\u6838\u5fc3", "default": "cuda:0", "type": "E", "items": ["cuda:0", "cuda:1"], "show": false}, {"index": 4, "name": "saveModDir", "value": "/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/1128test_194741569180540928_R-ODY_19.pt", "description": "\u4fdd\u5b58\u6a21\u578b\u8def\u5f84", "default": "./app/maskrcnn/saved_model/test.pt", "type": "S", "show": false}, {"index": 5, "name": "resumeModPath", "value": "/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights//yolov5s.pt", "description": "\u7ee7\u7eed\u8bad\u7ec3\u8def\u5f84", "default": "", "type": "S", "show": false}, {"index": 6, "name": "resumeMod", "value": "", "description": "\u7ee7\u7eed\u8bad\u7ec3\u6a21\u578b", "default": "", "type": "E", "items": "", "show": true}, {"index": 7, "name": "CLASS_NAMES", "value": ["hole", "456", "aeroplane", "tvmonitor", "train", "boat", "dog", "chair", "bird", "bicycle", "person", "bottle", "sheep", "cat"], "description": "\u7c7b\u522b\u540d\u79f0", "default": "", "type": "L", "items": "", "show": false}, {"index": 8, "name": "DatasetDir", "value": "/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/194741569180540928/ori", "description": "\u6570\u636e\u96c6\u8def\u5f84", "default": "./app/maskrcnn/datasets/test", "type": "S", "show": false}]
**********************************
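Editor's note: the debug dump above suggests the service keeps a per-task registry mapping a task id such as 194741569180540928_19_train to the list of WebSocket connections subscribed to it, which is what the repeated "storing ws connection object" lines add to. A rough sketch of that pattern follows; the names (WS_CLIENTS, register_ws, push_progress) are assumptions and not taken from the repository code.

# Hedged sketch of the per-task WebSocket registry implied by the dict dump above.
from collections import defaultdict

WS_CLIENTS = defaultdict(list)  # task id -> list of live geventwebsocket WebSocket objects

def register_ws(task_id, ws):
    # corresponds to the "storing ws connection object" log lines
    WS_CLIENTS[task_id].append(ws)

def push_progress(task_id, message):
    # broadcast a progress message to every socket registered for this task,
    # dropping connections that have already closed
    for ws in list(WS_CLIENTS[task_id]):
        if ws.closed:
            WS_CLIENTS[task_id].remove(ws)
        else:
            ws.send(message)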
cuda:0
Images: ['2007_000032.jpg', '2007_000241.jpg', '2007_000068.jpg', '4.jpg', '3.jpg', '2007_000033.jpg', '10.jpg', '2007_000042.jpg', '7.jpg', '2007_000170.jpg', '2007_001583.jpg', '8.jpg', '2007_000187.jpg', '1.jpg', '2007_001457.jpg', '2007_000061.jpg', '2007_000027.jpg', '2007_000063.jpg', '2007_000129.jpg', '5.jpg', '2007_000123.jpg', '2007_000121.jpg', '9.jpg', '2007_000175.jpg', '2007_000039.jpg', '2007_001430.jpg', '6.jpg', '2007_001585.jpg', '2.jpg']
Image path /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/194741569180540928/ori/images/2007_000032.jpg
1111
Label /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/194741569180540928/ori/labels/2007_000032.json
2222
ROOT############### /mnt/sdc/algorithm/R-ODY/app/yolov5
opt.device: cuda:0
device: cuda:0
get in train()
Process 194741569180540928_19_train:
Traceback (most recent call last):
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/flask_sockets.py", line 40, in __call__
handler, values = adapter.match()
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/werkzeug/routing.py", line 1945, in match
raise NotFound()
werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/mnt/sdc/algorithm/R-ODY/app/controller/AlgorithmController.py", line 327, in train_R0DY
train_start(weights, savemodel, epoches, img_size, batch_size, device, data_list, id, getsomething)
File "/mnt/sdc/algorithm/R-ODY/app/yolov5/train_server.py", line 733, in train_start
main(opt,data_list,id,getsomething)
File "/mnt/sdc/algorithm/R-ODY/app/yolov5/train_server.py", line 630, in main
train(opt.hyp, opt, device, data_list,id,getsomething,callbacks)
File "/mnt/sdc/algorithm/R-ODY/app/yolov5/train_server.py", line 168, in train
model = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 673, in to
return self._apply(convert)
File "/mnt/sdc/algorithm/R-ODY/app/yolov5/models/yolo.py", line 136, in _apply
self = super()._apply(fn)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 387, in _apply
module._apply(fn)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 387, in _apply
module._apply(fn)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 387, in _apply
module._apply(fn)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 409, in _apply
param_applied = fn(param)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 671, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/cuda/__init__.py", line 160, in _lazy_init
raise RuntimeError(
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
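Editor's note: this RuntimeError is PyTorch refusing to initialize CUDA inside a fork()ed child. The training job is launched as a separate process from AlgorithmController.train_R0DY, so the child must be created with the 'spawn' start method (a fresh interpreter) before anything touches CUDA. A minimal sketch of the fix, assuming the controller uses the standard multiprocessing module, is shown below; train_entry and the task id argument are hypothetical placeholders.

# Hedged sketch, assuming the training subprocess is created with multiprocessing.
import multiprocessing as mp

def train_entry(task_id):
    import torch
    # CUDA is initialized only here, inside the spawned child process
    model = torch.nn.Linear(8, 2).to("cuda:0")
    ...

if __name__ == "__main__":
    ctx = mp.get_context("spawn")   # instead of the default 'fork' on Linux
    p = ctx.Process(target=train_entry, args=("194741569180540928_19_train",))
    p.start()
    p.join()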
192.168.0.20 - - [2022-11-29 10:21:16] "GET /api/obtain_train_param HTTP/1.1" 200 1936 0.000878
------entering websocket
storing ws connection object
192.168.0.20 - - [2022-11-29 10:21:27] "GET /api/start_train_algorithm?param=%5B%7B%22index%22%3A+0%2C+%22name%22%3A+%22epochnum%22%2C+%22value%22%3A+10%2C+%22description%22%3A+%22%5Cu8bad%5Cu7ec3%5Cu8f6e%5Cu6b21%22%2C+%22default%22%3A+100%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+1%2C+%22name%22%3A+%22batch_size%22%2C+%22value%22%3A+4%2C+%22description%22%3A+%22%5Cu6279%5Cu6b21%5Cu56fe%5Cu50cf%5Cu6570%5Cu91cf%22%2C+%22default%22%3A+1%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+2%2C+%22name%22%3A+%22img_size%22%2C+%22value%22%3A+640%2C+%22description%22%3A+%22%5Cu8bad%5Cu7ec3%5Cu56fe%5Cu50cf%5Cu5927%5Cu5c0f%22%2C+%22default%22%3A+640%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+3%2C+%22name%22%3A+%22device%22%2C+%22value%22%3A+%22cuda%3A0%22%2C+%22description%22%3A+%22%5Cu8bad%5Cu7ec3%5Cu6838%5Cu5fc3%22%2C+%22default%22%3A+%22cuda%3A0%22%2C+%22type%22%3A+%22E%22%2C+%22items%22%3A+%5B%22cuda%3A0%22%2C+%22cuda%3A1%22%5D%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+4%2C+%22name%22%3A+%22saveModDir%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2Fweights%2F1128test_194741569180540928_R-ODY_20.pt%22%2C+%22description%22%3A+%22%5Cu4fdd%5Cu5b58%5Cu6a21%5Cu578b%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fsaved_model%2Ftest.pt%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+5%2C+%22name%22%3A+%22resumeModPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2Fweights%2F%2Fyolov5s.pt%22%2C+%22description%22%3A+%22%5Cu7ee7%5Cu7eed%5Cu8bad%5Cu7ec3%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+6%2C+%22name%22%3A+%22resumeMod%22%2C+%22value%22%3A+%22%22%2C+%22description%22%3A+%22%5Cu7ee7%5Cu7eed%5Cu8bad%5Cu7ec3%5Cu6a21%5Cu578b%22%2C+%22default%22%3A+%22%22%2C+%22type%22%3A+%22E%22%2C+%22items%22%3A+%22%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+7%2C+%22name%22%3A+%22CLASS_NAMES%22%2C+%22value%22%3A+%5B%22hole%22%2C+%22456%22%2C+%22aeroplane%22%2C+%22tvmonitor%22%2C+%22train%22%2C+%22boat%22%2C+%22dog%22%2C+%22chair%22%2C+%22bird%22%2C+%22bicycle%22%2C+%22person%22%2C+%22bottle%22%2C+%22sheep%22%2C+%22cat%22%5D%2C+%22description%22%3A+%22%5Cu7c7b%5Cu522b%5Cu540d%5Cu79f0%22%2C+%22default%22%3A+%22%22%2C+%22type%22%3A+%22L%22%2C+%22items%22%3A+%22%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+8%2C+%22name%22%3A+%22DatasetDir%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2F194741569180540928%2Fori%22%2C+%22description%22%3A+%22%5Cu6570%5Cu636e%5Cu96c6%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fdatasets%2Ftest%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%5D&id=194741569180540928_20_train HTTP/1.1" 200 161 0.050371
deleting image data
deleting json data
train_server: weights=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights//yolov5s.pt, savemodel=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/1128test_194741569180540928_R-ODY_20_640.pt, cfg=, data=/mnt/sdc/algorithm/R-ODY/app/yolov5/data/coco128.yaml, hyp=/mnt/sdc/algorithm/R-ODY/app/yolov5/data/hyps/hyp.scratch-low.yaml, epochs=10, batch_size=4, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=None, image_weights=False, device=cuda:0, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=/mnt/sdc/algorithm/R-ODY/app/yolov5/runs/train, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest
Weights & Biases: run 'pip install wandb' to automatically track and visualize YOLOv5 🚀 runs in Weights & Biases
ClearML: run 'pip install clearml' to automatically track, visualize and remotely train YOLOv5 🚀 in ClearML
TensorBoard: Start with 'tensorboard --logdir /mnt/sdc/algorithm/R-ODY/app/yolov5/runs/train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=14
from n params module arguments
0 -1 1 3520 app.yolov5.models.common.Conv [3, 32, 6, 2, 2]
1 -1 1 18560 app.yolov5.models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 app.yolov5.models.common.C3 [64, 64, 1]
3 -1 1 73984 app.yolov5.models.common.Conv [64, 128, 3, 2]
4 -1 2 115712 app.yolov5.models.common.C3 [128, 128, 2]
5 -1 1 295424 app.yolov5.models.common.Conv [128, 256, 3, 2]
6 -1 3 625152 app.yolov5.models.common.C3 [256, 256, 3]
7 -1 1 1180672 app.yolov5.models.common.Conv [256, 512, 3, 2]
8 -1 1 1182720 app.yolov5.models.common.C3 [512, 512, 1]
9 -1 1 656896 app.yolov5.models.common.SPPF [512, 512, 5]
10 -1 1 131584 app.yolov5.models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 app.yolov5.models.common.Concat [1]
13 -1 1 361984 app.yolov5.models.common.C3 [512, 256, 1, False]
14 -1 1 33024 app.yolov5.models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 app.yolov5.models.common.Concat [1]
17 -1 1 90880 app.yolov5.models.common.C3 [256, 128, 1, False]
18 -1 1 147712 app.yolov5.models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 app.yolov5.models.common.Concat [1]
20 -1 1 296448 app.yolov5.models.common.C3 [256, 256, 1, False]
21 -1 1 590336 app.yolov5.models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 app.yolov5.models.common.Concat [1]
23 -1 1 1182720 app.yolov5.models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 51243 app.yolov5.models.yolo.Detect [14, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model summary: 270 layers, 7057387 parameters, 7057387 gradients, 16.1 GFLOPs
**********************************
[<geventwebsocket.websocket.WebSocket object at 0x7fcf5499d7c0>]
**********************************
**********************************
{'195095688265211904_17_detect': [<geventwebsocket.websocket.WebSocket object at 0x7fd0e269a340>, <geventwebsocket.websocket.WebSocket object at 0x7fcf55fe9820>, <geventwebsocket.websocket.WebSocket object at 0x7fcfb9ef6760>, <geventwebsocket.websocket.WebSocket object at 0x7fcf55fe9e20>], '194741569180540928_14_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06760>], '194741569180540928_15_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06a60>], '194741569180540928_16_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06520>], '194741569180540928_17_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf5499d100>], '194741569180540928_18_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf5499d460>], '194741569180540928_19_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06880>], '194741569180540928_20_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf5499d7c0>]}
**********************************
[{"index": 0, "name": "epochnum", "value": 10, "description": "\u8bad\u7ec3\u8f6e\u6b21", "default": 100, "type": "I", "show": true}, {"index": 1, "name": "batch_size", "value": 4, "description": "\u6279\u6b21\u56fe\u50cf\u6570\u91cf", "default": 1, "type": "I", "show": true}, {"index": 2, "name": "img_size", "value": 640, "description": "\u8bad\u7ec3\u56fe\u50cf\u5927\u5c0f", "default": 640, "type": "I", "show": true}, {"index": 3, "name": "device", "value": "cuda:0", "description": "\u8bad\u7ec3\u6838\u5fc3", "default": "cuda:0", "type": "E", "items": ["cuda:0", "cuda:1"], "show": false}, {"index": 4, "name": "saveModDir", "value": "/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/1128test_194741569180540928_R-ODY_20.pt", "description": "\u4fdd\u5b58\u6a21\u578b\u8def\u5f84", "default": "./app/maskrcnn/saved_model/test.pt", "type": "S", "show": false}, {"index": 5, "name": "resumeModPath", "value": "/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights//yolov5s.pt", "description": "\u7ee7\u7eed\u8bad\u7ec3\u8def\u5f84", "default": "", "type": "S", "show": false}, {"index": 6, "name": "resumeMod", "value": "", "description": "\u7ee7\u7eed\u8bad\u7ec3\u6a21\u578b", "default": "", "type": "E", "items": "", "show": true}, {"index": 7, "name": "CLASS_NAMES", "value": ["hole", "456", "aeroplane", "tvmonitor", "train", "boat", "dog", "chair", "bird", "bicycle", "person", "bottle", "sheep", "cat"], "description": "\u7c7b\u522b\u540d\u79f0", "default": "", "type": "L", "items": "", "show": false}, {"index": 8, "name": "DatasetDir", "value": "/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/194741569180540928/ori", "description": "\u6570\u636e\u96c6\u8def\u5f84", "default": "./app/maskrcnn/datasets/test", "type": "S", "show": false}]
**********************************
cuda:0
Images: ['2007_000032.jpg', '2007_000241.jpg', '2007_000068.jpg', '4.jpg', '3.jpg', '2007_000033.jpg', '10.jpg', '2007_000042.jpg', '7.jpg', '2007_000170.jpg', '2007_001583.jpg', '8.jpg', '2007_000187.jpg', '1.jpg', '2007_001457.jpg', '2007_000061.jpg', '2007_000027.jpg', '2007_000063.jpg', '2007_000129.jpg', '5.jpg', '2007_000123.jpg', '2007_000121.jpg', '9.jpg', '2007_000175.jpg', '2007_000039.jpg', '2007_001430.jpg', '6.jpg', '2007_001585.jpg', '2.jpg']
Image path /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/194741569180540928/ori/images/2007_000032.jpg
1111
Label /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/194741569180540928/ori/labels/2007_000032.json
2222
ROOT############### /mnt/sdc/algorithm/R-ODY/app/yolov5
opt.device: cuda:0
device: cuda:0
get in train()
Process 194741569180540928_20_train:
Traceback (most recent call last):
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/flask_sockets.py", line 40, in __call__
handler, values = adapter.match()
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/werkzeug/routing.py", line 1945, in match
raise NotFound()
werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/mnt/sdc/algorithm/R-ODY/app/controller/AlgorithmController.py", line 327, in train_R0DY
train_start(weights, savemodel, epoches, img_size, batch_size, device, data_list, id, getsomething)
File "/mnt/sdc/algorithm/R-ODY/app/yolov5/train_server.py", line 733, in train_start
main(opt,data_list,id,getsomething)
File "/mnt/sdc/algorithm/R-ODY/app/yolov5/train_server.py", line 630, in main
train(opt.hyp, opt, device, data_list,id,getsomething,callbacks)
File "/mnt/sdc/algorithm/R-ODY/app/yolov5/train_server.py", line 168, in train
model = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 673, in to
return self._apply(convert)
File "/mnt/sdc/algorithm/R-ODY/app/yolov5/models/yolo.py", line 136, in _apply
self = super()._apply(fn)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 387, in _apply
module._apply(fn)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 387, in _apply
module._apply(fn)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 387, in _apply
module._apply(fn)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 409, in _apply
param_applied = fn(param)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 671, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/cuda/__init__.py", line 160, in _lazy_init
raise RuntimeError(
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
192.168.0.20 - - [2022-11-29 10:21:55] "GET /api/obtain_train_param HTTP/1.1" 200 1936 0.000899
------entering websocket
storing ws connection object
192.168.0.20 - - [2022-11-29 10:22:24] "GET /api/start_train_algorithm?param=%5B%7B%22index%22%3A+0%2C+%22name%22%3A+%22epochnum%22%2C+%22value%22%3A+10%2C+%22description%22%3A+%22%5Cu8bad%5Cu7ec3%5Cu8f6e%5Cu6b21%22%2C+%22default%22%3A+100%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+1%2C+%22name%22%3A+%22batch_size%22%2C+%22value%22%3A+4%2C+%22description%22%3A+%22%5Cu6279%5Cu6b21%5Cu56fe%5Cu50cf%5Cu6570%5Cu91cf%22%2C+%22default%22%3A+1%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+2%2C+%22name%22%3A+%22img_size%22%2C+%22value%22%3A+640%2C+%22description%22%3A+%22%5Cu8bad%5Cu7ec3%5Cu56fe%5Cu50cf%5Cu5927%5Cu5c0f%22%2C+%22default%22%3A+640%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+3%2C+%22name%22%3A+%22device%22%2C+%22value%22%3A+%22cuda%3A0%22%2C+%22description%22%3A+%22%5Cu8bad%5Cu7ec3%5Cu6838%5Cu5fc3%22%2C+%22default%22%3A+%22cuda%3A0%22%2C+%22type%22%3A+%22E%22%2C+%22items%22%3A+%5B%22cuda%3A0%22%2C+%22cuda%3A1%22%5D%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+4%2C+%22name%22%3A+%22saveModDir%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2Fweights%2F%5Cu6838%5Cu9178%5Cu68c0%5Cu6d4b_190857268466688000_R-ODY_18.pt%22%2C+%22description%22%3A+%22%5Cu4fdd%5Cu5b58%5Cu6a21%5Cu578b%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fsaved_model%2Ftest.pt%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+5%2C+%22name%22%3A+%22resumeModPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2Fweights%2F%2Fyolov5s.pt%22%2C+%22description%22%3A+%22%5Cu7ee7%5Cu7eed%5Cu8bad%5Cu7ec3%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+6%2C+%22name%22%3A+%22resumeMod%22%2C+%22value%22%3A+%22%2F1128test_194741569180540928_R-ODY_13_640.pt%22%2C+%22description%22%3A+%22%5Cu7ee7%5Cu7eed%5Cu8bad%5Cu7ec3%5Cu6a21%5Cu578b%22%2C+%22default%22%3A+%22%22%2C+%22type%22%3A+%22E%22%2C+%22items%22%3A+%22%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+7%2C+%22name%22%3A+%22CLASS_NAMES%22%2C+%22value%22%3A+%5B%22hole%22%2C+%22456%22%2C+%22zui%22%2C+%22mianbang%22%5D%2C+%22description%22%3A+%22%5Cu7c7b%5Cu522b%5Cu540d%5Cu79f0%22%2C+%22default%22%3A+%22%22%2C+%22type%22%3A+%22L%22%2C+%22items%22%3A+%22%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+8%2C+%22name%22%3A+%22DatasetDir%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2F190857268466688000%2Fori%22%2C+%22description%22%3A+%22%5Cu6570%5Cu636e%5Cu96c6%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fdatasets%2Ftest%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%5D&id=190857268466688000_18_train HTTP/1.1" 200 161 0.050019
deleting image data
deleting json data
192.168.0.20 - - [2022-11-29 10:22:26] "GET /api/obtain_download_pt_param HTTP/1.1" 200 792 0.000835
train_server: weights=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights//yolov5s.pt, savemodel=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/核酸检测_190857268466688000_R-ODY_18_640.pt, cfg=, data=/mnt/sdc/algorithm/R-ODY/app/yolov5/data/coco128.yaml, hyp=/mnt/sdc/algorithm/R-ODY/app/yolov5/data/hyps/hyp.scratch-low.yaml, epochs=10, batch_size=4, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=None, image_weights=False, device=cuda:0, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=/mnt/sdc/algorithm/R-ODY/app/yolov5/runs/train, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest
Weights & Biases: run 'pip install wandb' to automatically track and visualize YOLOv5 🚀 runs in Weights & Biases
ClearML: run 'pip install clearml' to automatically track, visualize and remotely train YOLOv5 🚀 in ClearML
TensorBoard: Start with 'tensorboard --logdir /mnt/sdc/algorithm/R-ODY/app/yolov5/runs/train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=4
from n params module arguments
0 -1 1 3520 app.yolov5.models.common.Conv [3, 32, 6, 2, 2]
1 -1 1 18560 app.yolov5.models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 app.yolov5.models.common.C3 [64, 64, 1]
3 -1 1 73984 app.yolov5.models.common.Conv [64, 128, 3, 2]
4 -1 2 115712 app.yolov5.models.common.C3 [128, 128, 2]
5 -1 1 295424 app.yolov5.models.common.Conv [128, 256, 3, 2]
6 -1 3 625152 app.yolov5.models.common.C3 [256, 256, 3]
7 -1 1 1180672 app.yolov5.models.common.Conv [256, 512, 3, 2]
8 -1 1 1182720 app.yolov5.models.common.C3 [512, 512, 1]
9 -1 1 656896 app.yolov5.models.common.SPPF [512, 512, 5]
10 -1 1 131584 app.yolov5.models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 app.yolov5.models.common.Concat [1]
13 -1 1 361984 app.yolov5.models.common.C3 [512, 256, 1, False]
14 -1 1 33024 app.yolov5.models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 app.yolov5.models.common.Concat [1]
17 -1 1 90880 app.yolov5.models.common.C3 [256, 128, 1, False]
18 -1 1 147712 app.yolov5.models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 app.yolov5.models.common.Concat [1]
20 -1 1 296448 app.yolov5.models.common.C3 [256, 256, 1, False]
21 -1 1 590336 app.yolov5.models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 app.yolov5.models.common.Concat [1]
23 -1 1 1182720 app.yolov5.models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 24273 app.yolov5.models.yolo.Detect [4, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model summary: 270 layers, 7030417 parameters, 7030417 gradients, 16.0 GFLOPs
**********************************
[<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06e80>]
**********************************
**********************************
{'195095688265211904_17_detect': [<geventwebsocket.websocket.WebSocket object at 0x7fd0e269a340>, <geventwebsocket.websocket.WebSocket object at 0x7fcf55fe9820>, <geventwebsocket.websocket.WebSocket object at 0x7fcfb9ef6760>, <geventwebsocket.websocket.WebSocket object at 0x7fcf55fe9e20>], '194741569180540928_14_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06760>], '194741569180540928_15_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06a60>], '194741569180540928_16_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06520>], '194741569180540928_17_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf5499d100>], '194741569180540928_18_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf5499d460>], '194741569180540928_19_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06880>], '194741569180540928_20_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf5499d7c0>], '190857268466688000_18_train': [<geventwebsocket.websocket.WebSocket object at 0x7fcf54d06e80>]}
**********************************
[{"index": 0, "name": "epochnum", "value": 10, "description": "\u8bad\u7ec3\u8f6e\u6b21", "default": 100, "type": "I", "show": true}, {"index": 1, "name": "batch_size", "value": 4, "description": "\u6279\u6b21\u56fe\u50cf\u6570\u91cf", "default": 1, "type": "I", "show": true}, {"index": 2, "name": "img_size", "value": 640, "description": "\u8bad\u7ec3\u56fe\u50cf\u5927\u5c0f", "default": 640, "type": "I", "show": true}, {"index": 3, "name": "device", "value": "cuda:0", "description": "\u8bad\u7ec3\u6838\u5fc3", "default": "cuda:0", "type": "E", "items": ["cuda:0", "cuda:1"], "show": false}, {"index": 4, "name": "saveModDir", "value": "/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/\u6838\u9178\u68c0\u6d4b_190857268466688000_R-ODY_18.pt", "description": "\u4fdd\u5b58\u6a21\u578b\u8def\u5f84", "default": "./app/maskrcnn/saved_model/test.pt", "type": "S", "show": false}, {"index": 5, "name": "resumeModPath", "value": "/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights//yolov5s.pt", "description": "\u7ee7\u7eed\u8bad\u7ec3\u8def\u5f84", "default": "", "type": "S", "show": false}, {"index": 6, "name": "resumeMod", "value": "/1128test_194741569180540928_R-ODY_13_640.pt", "description": "\u7ee7\u7eed\u8bad\u7ec3\u6a21\u578b", "default": "", "type": "E", "items": "", "show": true}, {"index": 7, "name": "CLASS_NAMES", "value": ["hole", "456", "zui", "mianbang"], "description": "\u7c7b\u522b\u540d\u79f0", "default": "", "type": "L", "items": "", "show": false}, {"index": 8, "name": "DatasetDir", "value": "/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/190857268466688000/ori", "description": "\u6570\u636e\u96c6\u8def\u5f84", "default": "./app/maskrcnn/datasets/test", "type": "S", "show": false}]
**********************************
cuda:0
Images: ['IMG_20221117_132941~1.jpg', 'IMG_20221117_133002~1.jpg', 'IMG_20221117_152005~1.jpg', 'IMG_20221117_133005~1.jpg', 'IMG_20221117_132939~1.jpg', 'IMG_20221117_151945~1.jpg', 'IMG_20221117_133009~1.jpg', 'IMG_20221117_152023~1.jpg', 'IMG_20221117_133035~1.jpg', 'IMG_20221117_132947~1.jpg', 'IMG_20221117_151925~1.jpg', 'IMG_20221117_152026~1.jpg', 'IMG_20221117_133037~1.jpg', 'IMG_20221117_152018~1.jpg', 'IMG_20221117_152002~1.jpg', 'IMG_20221117_152004~1.jpg', 'IMG_20221117_152019~1.jpg', 'IMG_20221117_133006~1.jpg', 'IMG_20221117_152020~1.jpg', 'IMG_20221117_151959~1.jpg', 'IMG_20221117_152024~1.jpg', 'IMG_20221117_151921~1.jpg', 'IMG_20221117_151923~1.jpg', 'IMG_20221117_133038~1.jpg', 'IMG_20221117_151943~1.jpg', 'IMG_20221117_151924~1.jpg', 'IMG_20221117_152022~1.jpg', 'IMG_20221117_133032~1.jpg', 'IMG_20221117_151957~1.jpg', 'IMG_20221117_151939~1.jpg', 'IMG_20221117_133040~1.jpg', 'IMG_20221117_151946~1.jpg', 'IMG_20221117_151944~1.jpg', 'IMG_20221117_133007~1.jpg', 'IMG_20221117_132946~1.jpg', 'IMG_20221117_133004~1.jpg', 'IMG_20221117_152001~1.jpg', 'IMG_20221117_151941~1.jpg', 'IMG_20221117_151919~1.jpg', 'IMG_20221117_132944~1.jpg']
Image path /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/190857268466688000/ori/images/IMG_20221117_132941~1.jpg
1111
Label /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/190857268466688000/ori/labels/IMG_20221117_132941~1.json
2222
ROOT############### /mnt/sdc/algorithm/R-ODY/app/yolov5
opt.device: cuda:0
device: cuda:0
get in train()
Process 190857268466688000_18_train:
Traceback (most recent call last):
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/flask_sockets.py", line 40, in __call__
handler, values = adapter.match()
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/werkzeug/routing.py", line 1945, in match
raise NotFound()
werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/mnt/sdc/algorithm/R-ODY/app/controller/AlgorithmController.py", line 327, in train_R0DY
train_start(weights, savemodel, epoches, img_size, batch_size, device, data_list, id, getsomething)
File "/mnt/sdc/algorithm/R-ODY/app/yolov5/train_server.py", line 733, in train_start
main(opt,data_list,id,getsomething)
File "/mnt/sdc/algorithm/R-ODY/app/yolov5/train_server.py", line 630, in main
train(opt.hyp, opt, device, data_list,id,getsomething,callbacks)
File "/mnt/sdc/algorithm/R-ODY/app/yolov5/train_server.py", line 168, in train
model = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 673, in to
return self._apply(convert)
File "/mnt/sdc/algorithm/R-ODY/app/yolov5/models/yolo.py", line 136, in _apply
self = super()._apply(fn)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 387, in _apply
module._apply(fn)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 387, in _apply
module._apply(fn)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 387, in _apply
module._apply(fn)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 409, in _apply
param_applied = fn(param)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/nn/modules/module.py", line 671, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/torch/cuda/__init__.py", line 160, in _lazy_init
raise RuntimeError(
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
export: data=app/yolov5/data/coco128.yaml, weights=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/核酸检测_190857268466688000_R-ODY_17_640.pt, imgsz=[640, 640], batch_size=1, device=0, half=False, inplace=False, train=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=11, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['torchscript', 'onnx']
YOLOv5 🚀 2022-11-7 Python-3.8.13 torch-1.8.0+cu111 CUDA:0 (Tesla T4, 15110MiB)
Fusing layers...
Model summary: 213 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
PyTorch: starting from /mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.pt with output shape (1, 25200, 9) (13.8 MB)
PyTorch: starting from /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/核酸检测_190857268466688000_R-ODY_17_640.pt with output shape (1, 25200, 9) (13.8 MB)
TorchScript: starting export with torch 1.8.0+cu111...
TorchScript: export success ✅ 0.9s, saved as /mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.torchscript (27.3 MB)
TorchScript: export success ✅ 1.0s, saved as /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/核酸检测_190857268466688000_R-ODY_17_640.torchscript (27.3 MB)
ONNX: starting export with onnx 1.12.0...
ONNX: export success ✅ 1.8s, saved as /mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.onnx (27.2 MB)
ONNX: export success ✅ 1.6s, saved as /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/核酸检测_190857268466688000_R-ODY_17_640.onnx (27.2 MB)
Export complete (6.5s)
Results saved to /mnt/sdc/aicheck/IntelligentizeAI/data_set/weights
Detect: python detect.py --weights /mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.onnx
Validate: python val.py --weights /mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.onnx
PyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', '/mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.onnx')
Export complete (2.8s)
Results saved to /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights
Detect: python detect.py --weights /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/核酸检测_190857268466688000_R-ODY_17_640.onnx
Validate: python val.py --weights /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/核酸检测_190857268466688000_R-ODY_17_640.onnx
PyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', '/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/核酸检测_190857268466688000_R-ODY_17_640.onnx')
Visualize: https://netron.app
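Editor's note: the export step above writes a TorchScript and an ONNX file next to the .pt weights (the 核酸检测_190857268466688000_R-ODY_17_640.onnx path printed above). The ONNX file can be run without PyTorch; a sketch using onnxruntime follows, assuming onnxruntime is installed and using a dummy 1x3x640x640 input instead of a real letterboxed image.

# Hedged sketch: running the exported ONNX model with onnxruntime.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/核酸检测_190857268466688000_R-ODY_17_640.onnx",
    providers=["CPUExecutionProvider"],
)
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
(pred,) = sess.run(None, {sess.get_inputs()[0].name: dummy})
print(pred.shape)  # expected (1, 25200, 9), matching the "output shape" line above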
192.168.0.20 - - [2022-11-24 08:59:15] "GET /api/start_download_pt?param=%5B%7B%22index%22%3A+0%2C+%22name%22%3A+%22exp_inputPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2Faicheck%2FIntelligentizeAI%2Fdata_set%2Fweights%2Fces2_193120735164768256_R-ODY_2_640.pt%22%2C+%22description%22%3A+%22%5Cu8f6c%5Cu5316%5Cu6a21%5Cu578b%5Cu8f93%5Cu5165%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22E%3A%2Falg_demo-master%2Falg_demo%2Fapp%2Fyolov5%2F%5Cu5706%5Cu5b54_123_RODY_1_640.pt%2F%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+1%2C+%22name%22%3A+%22device%22%2C+%22value%22%3A+%22gpu%22%2C+%22description%22%3A+%22CPU%5Cu6216GPU%22%2C+%22default%22%3A+%22gpu%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+2%2C+%22name%22%3A+%22imgsz%22%2C+%22value%22%3A+640%2C+%22description%22%3A+%22%5Cu56fe%5Cu50cf%5Cu5927%5Cu5c0f%22%2C+%22default%22%3A+640%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%5D&id=875 HTTP/1.1" 200 240 7.984953
export: data=app/yolov5/data/coco128.yaml, weights=/mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.pt, imgsz=[640, 640], batch_size=1, device=0, half=False, inplace=False, train=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=11, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['torchscript', 'onnx']
YOLOv5 🚀 2022-11-7 Python-3.8.13 torch-1.8.0+cu111 CUDA:0 (Tesla T4, 15110MiB)
192.168.0.20 - - [2022-11-29 10:22:31] "GET /api/start_download_pt?param=%5B%7B%22index%22%3A+0%2C+%22name%22%3A+%22exp_inputPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2Fweights%2F%5Cu6838%5Cu9178%5Cu68c0%5Cu6d4b_190857268466688000_R-ODY_17_640.pt%22%2C+%22description%22%3A+%22%5Cu8f6c%5Cu5316%5Cu6a21%5Cu578b%5Cu8f93%5Cu5165%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22E%3A%2Falg_demo-master%2Falg_demo%2Fapp%2Fyolov5%2F%5Cu5706%5Cu5b54_123_RODY_1_640.pt%2F%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+1%2C+%22name%22%3A+%22device%22%2C+%22value%22%3A+%22gpu%22%2C+%22description%22%3A+%22CPU%5Cu6216GPU%22%2C+%22default%22%3A+%22gpu%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+2%2C+%22name%22%3A+%22imgsz%22%2C+%22value%22%3A+640%2C+%22description%22%3A+%22%5Cu56fe%5Cu50cf%5Cu5927%5Cu5c0f%22%2C+%22default%22%3A+640%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%5D&id=737 HTTP/1.1" 200 270 3.030957
------entering websocket
Input model: /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/核酸检测_190857268466688000_R-ODY_17_640.pt
['torchscript', 'onnx']
['torchscript', 'onnx']
('torchscript', 'onnx', 'openvino', 'engine', 'coreml', 'saved_model', 'pb', 'tflite', 'edgetpu', 'tfjs')
True
Model path: /mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/核酸检测_190857268466688000_R-ODY_17_640.zip
storing ws connection object
192.168.0.20 - - [2022-11-29 10:44:19] "GET /api/start_train_algorithm?param=%5B%7B%22index%22%3A+0%2C+%22name%22%3A+%22epochnum%22%2C+%22value%22%3A+10%2C+%22description%22%3A+%22%5Cu8bad%5Cu7ec3%5Cu8f6e%5Cu6b21%22%2C+%22default%22%3A+100%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+1%2C+%22name%22%3A+%22batch_size%22%2C+%22value%22%3A+4%2C+%22description%22%3A+%22%5Cu6279%5Cu6b21%5Cu56fe%5Cu50cf%5Cu6570%5Cu91cf%22%2C+%22default%22%3A+1%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+2%2C+%22name%22%3A+%22img_size%22%2C+%22value%22%3A+640%2C+%22description%22%3A+%22%5Cu8bad%5Cu7ec3%5Cu56fe%5Cu50cf%5Cu5927%5Cu5c0f%22%2C+%22default%22%3A+640%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+3%2C+%22name%22%3A+%22device%22%2C+%22value%22%3A+%22cuda%3A0%22%2C+%22description%22%3A+%22%5Cu8bad%5Cu7ec3%5Cu6838%5Cu5fc3%22%2C+%22default%22%3A+%22cuda%3A0%22%2C+%22type%22%3A+%22E%22%2C+%22items%22%3A+%5B%22cuda%3A0%22%2C+%22cuda%3A1%22%5D%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+4%2C+%22name%22%3A+%22saveModDir%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2Fweights%2F1128test_194741569180540928_R-ODY_21.pt%22%2C+%22description%22%3A+%22%5Cu4fdd%5Cu5b58%5Cu6a21%5Cu578b%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fsaved_model%2Ftest.pt%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+5%2C+%22name%22%3A+%22resumeModPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2Fweights%2F%2Fyolov5s.pt%22%2C+%22description%22%3A+%22%5Cu7ee7%5Cu7eed%5Cu8bad%5Cu7ec3%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+6%2C+%22name%22%3A+%22resumeMod%22%2C+%22value%22%3A+%22%22%2C+%22description%22%3A+%22%5Cu7ee7%5Cu7eed%5Cu8bad%5Cu7ec3%5Cu6a21%5Cu578b%22%2C+%22default%22%3A+%22%22%2C+%22type%22%3A+%22E%22%2C+%22items%22%3A+%22%22%2C+%22show%22%3A+true%7D%2C+%7B%22index%22%3A+7%2C+%22name%22%3A+%22CLASS_NAMES%22%2C+%22value%22%3A+%5B%22hole%22%2C+%22456%22%2C+%22aeroplane%22%2C+%22tvmonitor%22%2C+%22train%22%2C+%22boat%22%2C+%22dog%22%2C+%22chair%22%2C+%22bird%22%2C+%22bicycle%22%2C+%22person%22%2C+%22bottle%22%2C+%22sheep%22%2C+%22cat%22%5D%2C+%22description%22%3A+%22%5Cu7c7b%5Cu522b%5Cu540d%5Cu79f0%22%2C+%22default%22%3A+%22%22%2C+%22type%22%3A+%22L%22%2C+%22items%22%3A+%22%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+8%2C+%22name%22%3A+%22DatasetDir%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2FIntelligentizeAI%2FIntelligentizeAI%2Fdata_set%2F194741569180540928%2Fori%22%2C+%22description%22%3A+%22%5Cu6570%5Cu636e%5Cu96c6%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22.%2Fapp%2Fmaskrcnn%2Fdatasets%2Ftest%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%5D&id=194741569180540928_21_train HTTP/1.1" 200 161 0.051209
deleting image data
deleting json data
train_server: weights=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights//yolov5s.pt, savemodel=/mnt/sdc/IntelligentizeAI/IntelligentizeAI/data_set/weights/1128test_194741569180540928_R-ODY_21_640.pt, cfg=, data=/mnt/sdc/algorithm/R-ODY/app/yolov5/data/coco128.yaml, hyp=/mnt/sdc/algorithm/R-ODY/app/yolov5/data/hyps/hyp.scratch-low.yaml, epochs=10, batch_size=4, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=None, image_weights=False, device=cuda:0, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=/mnt/sdc/algorithm/R-ODY/app/yolov5/runs/train, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest
Weights & Biases: run 'pip install wandb' to automatically track and visualize YOLOv5 🚀 runs in Weights & Biases
ClearML: run 'pip install clearml' to automatically track, visualize and remotely train YOLOv5 🚀 in ClearML
TensorBoard: Start with 'tensorboard --logdir /mnt/sdc/algorithm/R-ODY/app/yolov5/runs/train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=14
Fusing layers...
Model summary: 213 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
PyTorch: starting from /mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.pt with output shape (1, 25200, 9) (13.8 MB)
TorchScript: starting export with torch 1.8.0+cu111...
TorchScript: export success ✅ 0.8s, saved as /mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.torchscript (27.3 MB)
ONNX: starting export with onnx 1.12.0...
ONNX: export success ✅ 1.6s, saved as /mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.onnx (27.2 MB)
Export complete (2.6s)
Results saved to /mnt/sdc/aicheck/IntelligentizeAI/data_set/weights
Detect: python detect.py --weights /mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.onnx
Validate: python val.py --weights /mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.onnx
PyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', '/mnt/sdc/aicheck/IntelligentizeAI/data_set/weights/ces2_193120735164768256_R-ODY_2_640.onnx')
Visualize: https://netron.app
192.168.0.20 - - [2022-11-24 08:59:17] "GET /api/start_download_pt?param=%5B%7B%22index%22%3A+0%2C+%22name%22%3A+%22exp_inputPath%22%2C+%22value%22%3A+%22%2Fmnt%2Fsdc%2Faicheck%2FIntelligentizeAI%2Fdata_set%2Fweights%2Fces2_193120735164768256_R-ODY_2_640.pt%22%2C+%22description%22%3A+%22%5Cu8f6c%5Cu5316%5Cu6a21%5Cu578b%5Cu8f93%5Cu5165%5Cu8def%5Cu5f84%22%2C+%22default%22%3A+%22E%3A%2Falg_demo-master%2Falg_demo%2Fapp%2Fyolov5%2F%5Cu5706%5Cu5b54_123_RODY_1_640.pt%2F%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+1%2C+%22name%22%3A+%22device%22%2C+%22value%22%3A+%22gpu%22%2C+%22description%22%3A+%22CPU%5Cu6216GPU%22%2C+%22default%22%3A+%22gpu%22%2C+%22type%22%3A+%22S%22%2C+%22show%22%3A+false%7D%2C+%7B%22index%22%3A+2%2C+%22name%22%3A+%22imgsz%22%2C+%22value%22%3A+640%2C+%22description%22%3A+%22%5Cu56fe%5Cu50cf%5Cu5927%5Cu5c0f%22%2C+%22default%22%3A+640%2C+%22type%22%3A+%22I%22%2C+%22show%22%3A+true%7D%5D&id=875 HTTP/1.1" 200 240 2.709875
from n params module arguments
0 -1 1 3520 app.yolov5.models.common.Conv [3, 32, 6, 2, 2]
1 -1 1 18560 app.yolov5.models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 app.yolov5.models.common.C3 [64, 64, 1]
3 -1 1 73984 app.yolov5.models.common.Conv [64, 128, 3, 2]
4 -1 2 115712 app.yolov5.models.common.C3 [128, 128, 2]
5 -1 1 295424 app.yolov5.models.common.Conv [128, 256, 3, 2]
6 -1 3 625152 app.yolov5.models.common.C3 [256, 256, 3]
7 -1 1 1180672 app.yolov5.models.common.Conv [256, 512, 3, 2]
8 -1 1 1182720 app.yolov5.models.common.C3 [512, 512, 1]
9 -1 1 656896 app.yolov5.models.common.SPPF [512, 512, 5]
10 -1 1 131584 app.yolov5.models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 app.yolov5.models.common.Concat [1]
13 -1 1 361984 app.yolov5.models.common.C3 [512, 256, 1, False]
14 -1 1 33024 app.yolov5.models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 app.yolov5.models.common.Concat [1]
17 -1 1 90880 app.yolov5.models.common.C3 [256, 128, 1, False]
18 -1 1 147712 app.yolov5.models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 app.yolov5.models.common.Concat [1]
20 -1 1 296448 app.yolov5.models.common.C3 [256, 256, 1, False]
21 -1 1 590336 app.yolov5.models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 app.yolov5.models.common.Concat [1]
23 -1 1 1182720 app.yolov5.models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 51243 app.yolov5.models.yolo.Detect [14, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]

27
train_log.txt Normal file
View File

@ -0,0 +1,27 @@
nohup: ignoring input
2022-11-28 17:42:23,653 INFO sqlalchemy.engine.Engine select pg_catalog.version()
2022-11-28 17:42:23,653 INFO sqlalchemy.engine.Engine [raw sql] {}
2022-11-28 17:42:23,659 INFO sqlalchemy.engine.Engine select current_schema()
2022-11-28 17:42:23,659 INFO sqlalchemy.engine.Engine [raw sql] {}
2022-11-28 17:42:23,663 INFO sqlalchemy.engine.Engine show standard_conforming_strings
2022-11-28 17:42:23,664 INFO sqlalchemy.engine.Engine [raw sql] {}
2022-11-28 17:42:23,669 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2022-11-28 17:42:23,669 INFO sqlalchemy.engine.Engine COMMIT
Traceback (most recent call last):
File "./app/run.py", line 134, in <module>
server.serve_forever()
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/gevent/baseserver.py", line 398, in serve_forever
self.start()
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/gevent/baseserver.py", line 336, in start
self.init_socket()
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/gevent/pywsgi.py", line 1545, in init_socket
StreamServer.init_socket(self)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/gevent/server.py", line 180, in init_socket
self.socket = self.get_listener(self.address, self.backlog, self.family)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/gevent/server.py", line 192, in get_listener
return _tcp_listener(address, backlog=backlog, reuse_addr=cls.reuse_addr, family=family)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/gevent/server.py", line 288, in _tcp_listener
sock.bind(address)
File "/home/wd/anaconda3/envs/aicheck_RODY/lib/python3.8/site-packages/gevent/_socketcommon.py", line 563, in bind
return self._sock.bind(address)
OSError: [Errno 98] Address already in use: ('192.168.0.20', 6914)
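Editor's note: the final OSError means another process is already bound to 192.168.0.20:6914, most likely a previous instance of ./app/run.py that was never stopped, so the new gevent WSGIServer cannot bind. The old listener has to be found and killed before restarting; a small stdlib-only pre-flight check (a sketch, not part of the repository) is shown below.

# Hedged sketch: probe whether the service address from the traceback is still occupied.
import socket

def port_free(host, port):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

if not port_free("192.168.0.20", 6914):
    raise SystemExit("port 6914 is still in use; stop the previous server first "
                     "(e.g. `lsof -i :6914` then `kill <pid>`)")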