mmclassification: Training on Your Own Dataset

Table of Contents

    • Install from source
    • Dataset preparation
    • Config files
    • Training
    • Appendix

Install from source

git clone https://github.com/open-mmlab/mmpretrain.git
cd mmpretrain
pip install -U openmim && mim install -e .
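
After installing, a quick import is enough to confirm that the editable install is the one Python actually picks up (a minimal sketch; the model-name pattern is just an example):

import mmpretrain
from mmpretrain import list_models

print(mmpretrain.__version__)             # should match the version you just installed
print(list_models('*mobilenet*')[:5])     # a few of the bundled MobileNet configs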

Here are the versions I used:

/media/xp/data/pydoc/mmlab/mmpretrain$ pip show mmcv mmpretrain mmengine
Name: mmcv
Version: 2.1.0
Summary: OpenMMLab Computer Vision Foundation
Home-page: https://github.com/open-mmlab/mmcv
Author: MMCV Contributors
Author-email: openmmlab@gmail.com
License: UNKNOWN
Location: /home/xp/anaconda3/envs/py3/lib/python3.8/site-packages
Requires: addict, mmengine, numpy, packaging, Pillow, pyyaml, yapf
Required-by: 
---
Name: mmpretrain
Version: 1.2.0
Summary: OpenMMLab Model Pretraining Toolbox and Benchmark
Home-page: https://github.com/open-mmlab/mmpretrain
Author: MMPretrain Contributors
Author-email: openmmlab@gmail.com
License: Apache License 2.0
Location: /media/xp/data/pydoc/mmlab/mmpretrain
Editable project location: /media/xp/data/pydoc/mmlab/mmpretrain
Requires: einops, importlib-metadata, mat4py, matplotlib, modelindex, numpy, rich
Required-by: 
---
Name: mmengine
Version: 0.10.3
Summary: Engine of OpenMMLab projects
Home-page: https://github.com/open-mmlab/mmengine
Author: MMEngine Authors
Author-email: openmmlab@gmail.com
License: UNKNOWN
Location: /home/xp/anaconda3/envs/py3/lib/python3.8/site-packages
Requires: addict, matplotlib, numpy, opencv-python, pyyaml, rich, termcolor, yapf
Required-by: mmcv

Dataset preparation

I use a cat-and-dog classification dataset as the example. My training set is laid out as follows:

/media/xp/data/image/deep_image/mini_cat_and_dog$ tree -L 2
.
├── train
│   ├── cat
│   └── dog
└── val
    ├── cat
    └── dog
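
CustomDataset only needs one subfolder per class under train and val, as shown above. If your raw images are not split yet, a small script along these lines can produce that layout (a rough sketch; the source and destination paths are hypothetical):

import os
import random
import shutil

def split_train_val(src_root, dst_root, val_ratio=0.1, seed=0):
    """Copy class subfolders from src_root into dst_root/train and dst_root/val."""
    random.seed(seed)
    for cls in sorted(os.listdir(src_root)):
        cls_dir = os.path.join(src_root, cls)
        if not os.path.isdir(cls_dir):
            continue
        files = sorted(os.listdir(cls_dir))
        random.shuffle(files)
        n_val = max(1, int(len(files) * val_ratio))
        for i, name in enumerate(files):
            split = 'val' if i < n_val else 'train'
            dst_dir = os.path.join(dst_root, split, cls)
            os.makedirs(dst_dir, exist_ok=True)
            shutil.copy(os.path.join(cls_dir, name), os.path.join(dst_dir, name))

# Hypothetical paths: adjust them to your own data.
split_train_val('/path/to/raw_cat_and_dog', '/path/to/mini_cat_and_dog', val_ratio=0.1)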

Note: some of my training images turned out to be corrupt. mmcv uses OpenCV as the backend for reading images, so it is best to filter out broken files first; otherwise training fails with cv imencode errors or image-not-found errors. The script below removes images that OpenCV cannot open.

import cv2 as cv
import os

def find_all_image_files(root_dir):
    image_files = []
    for root, dirs, files in os.walk(root_dir):
        for file in files:
            if file.endswith('.jpg') or file.endswith('.png'):
                image_files.append(os.path.join(root, file))
    return image_files

def is_bad_image(image_file):
    try:
        img = cv.imread(image_file)
        if img is None:
            return True
        return False
    except:
        return True

def remove_bad_images(root_dir):
    image_files = find_all_image_files(root_dir)
    for image_file in image_files:
        if is_bad_image(image_file):
            os.remove(image_file)
            print(f"Removed bad image: {image_file}")

remove_bad_images("/media/xp/data/image/deep_image/mini_cat_and_dog")
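
Note that cv.imread still decodes truncated JPEGs (which is where the "Corrupt JPEG data" warnings in the training log below come from), so if you want a stricter filter you can additionally verify each file with Pillow, which mmcv already depends on (an optional sketch, not part of the original script):

from PIL import Image

def is_bad_image_strict(image_file):
    """Return True if Pillow cannot fully decode the file."""
    try:
        with Image.open(image_file) as img:
            img.verify()   # checks file integrity without decoding pixels
        with Image.open(image_file) as img:
            img.load()     # actually decodes, catching truncated data
        return False
    except Exception:
        return True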

Config files

Training, testing, and model conversion in the mmlab projects are all driven by config files. A config consists of three basic components: the dataset, the model, and the runtime. Most model configs inherit these three components from the _base_ directory and then override a few options to train different models on different datasets.
During training, MMEngine saves the fully merged config into the work_dir, and you can later copy that file, modify it, and keep everything in a single config, which is easier to manage. If you prefer that style, just copy the config in the appendix, adapt it, and train with it.
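
If you want to see exactly what a config resolves to before training, mmengine's Config can load and dump the merged result yourself (a minimal sketch; the output filename is just an example, and the path refers to the config file created below):

from mmengine.config import Config

cfg = Config.fromfile('configs/mobilenet_v3/my_mobilenetv3.py')  # the config created below
print(cfg.pretty_text)                    # the merged, human-readable config
cfg.dump('merged_my_mobilenetv3.py')      # same content MMEngine saves under work_dir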
Below is the config I modified to train MobileNet V3.

  • Add a file named my_mobilenetv3.py under the configs/mobilenet_v3 directory:
    configs/mobilenet_v3/my_mobilenetv3.py
_base_ = [
    # '../_base_/models/mobilenet_v3/mobilenet_v3_small_075_imagenet.py',
    '../_base_/datasets/my_custom.py',
    '../_base_/default_runtime.py',
]

# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(type='MobileNetV3', arch='small_075'),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='StackedLinearClsHead',
        num_classes=2,
        in_channels=432,
        mid_channels=[1024],
        dropout_rate=0.2,
        act_cfg=dict(type='HSwish'),
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        init_cfg=dict(type='Normal', layer='Linear', mean=0., std=0.01, bias=0.),
        topk=(1, 1)))
# model = dict(backbone=dict(norm_cfg=dict(type='BN', eps=1e-5, momentum=0.1)))

my_image_size = 128
my_max_epochs = 300
my_batch_size = 128

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='RandomResizedCrop',
        scale=my_image_size,
        backend='pillow',
        interpolation='bicubic'),
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),
    dict(
        type='AutoAugment',
        policies='imagenet',
        hparams=dict(pad_val=[round(x) for x in [128, 128, 128]])),
    dict(
        type='RandomErasing',
        erase_prob=0.2,
        mode='rand',
        min_area_ratio=0.02,
        max_area_ratio=1 / 3,
        fill_color=[128, 128, 128],
        fill_std=[50, 50, 50]),
    dict(type='PackInputs'),
]

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='ResizeEdge',
        scale=my_image_size,
        edge='short',
        backend='pillow',
        interpolation='bicubic'),
    dict(type='CenterCrop', crop_size=my_image_size),
    dict(type='PackInputs'),
]

train_dataloader = dict(dataset=dict(pipeline=train_pipeline))
val_dataloader = dict(dataset=dict(pipeline=test_pipeline))
test_dataloader = val_dataloader

# schedule settings
optim_wrapper = dict(
    optimizer=dict(
        type='RMSprop',
        lr=0.064,
        alpha=0.9,
        momentum=0.9,
        eps=0.0316,
        weight_decay=1e-5))

param_scheduler = dict(type='StepLR', by_epoch=True, step_size=2, gamma=0.973)

train_cfg = dict(by_epoch=True, max_epochs=600, val_interval=10)
val_cfg = dict()
test_cfg = dict()

# NOTE: `auto_scale_lr` is for automatically scaling LR
# based on the actual training batch size.
# base_batch_size = (8 GPUs) x (128 samples per GPU)
auto_scale_lr = dict(base_batch_size=my_batch_size)
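
The head's in_channels=432 has to match the channel width of the last feature map produced by MobileNetV3 with arch='small_075'. If you swap the backbone or arch, a quick forward pass through the backbone alone shows the value to use (a hedged sketch; the 128 input size matches my_image_size above):

import torch
from mmpretrain.models import MobileNetV3

backbone = MobileNetV3(arch='small_075')
backbone.eval()
with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 128, 128))
# The channel dimension of the last output is what the head's in_channels must be.
print([f.shape for f in feats])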
  • Create my_custom.py under the configs/_base_/datasets/ directory:
# dataset settings
dataset_type = 'CustomDataset'
data_preprocessor = dict(
    num_classes=2,
    # RGB format normalization parameters
    mean=[128, 128, 128],
    std=[50, 50, 50],
    # convert image from BGR to RGB
    to_rgb=True,
)

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='ResizeEdge', scale=128, edge='short'),
    dict(type='CenterCrop', crop_size=128),
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),
    dict(type='PackInputs'),
]

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='ResizeEdge', scale=128, edge='short'),
    dict(type='CenterCrop', crop_size=128),
    dict(type='PackInputs'),
]

train_dataloader = dict(
    batch_size=32,
    num_workers=1,
    dataset=dict(
        type=dataset_type,
        data_root='/media/xp/data/image/deep_image/mini_cat_and_dog',
        data_prefix='train',
        with_label=True,
        pipeline=train_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=True),
)

val_dataloader = dict(
    batch_size=32,
    num_workers=1,
    dataset=dict(
        type=dataset_type,
        data_root='/media/xp/data/image/deep_image/mini_cat_and_dog',
        data_prefix='val',
        with_label=True,
        pipeline=test_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=False),
)

val_evaluator = dict(type='Accuracy', topk=(1, 1))

# If you want standard test, please manually configure the test dataset
test_dataloader = val_dataloader
test_evaluator = val_evaluator
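
Before launching a full run, it is worth instantiating the dataset once by hand to confirm that the folder layout and the inferred class names look right (a hedged sketch; CustomDataset derives the classes from the subfolder names):

from mmpretrain.datasets import CustomDataset

ds = CustomDataset(
    data_root='/media/xp/data/image/deep_image/mini_cat_and_dog',
    data_prefix='train',
    with_label=True,
    pipeline=[])                      # empty pipeline: just enumerate samples
print(len(ds))                        # number of training images found
print(ds.metainfo.get('classes'))     # expected to be the subfolder names, e.g. cat/dog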

Training

$ python tools/train.py configs/mobilenet_v3/my_mobilenetv3.py 

Output:

04/22 10:09:07 - mmengine - INFO - 
------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.8.18 (default, Sep 11 2023, 13:40:15) [GCC 11.2.0]
    CUDA available: False
    MUSA available: False
    numpy_random_seed: 1921958984
    GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
    PyTorch: 2.2.2
    PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.2.2, USE_CUDA=0, USE_CUDNN=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,
    TorchVision: 0.17.2
    OpenCV: 4.9.0
    MMEngine: 0.10.3

Runtime environment:
    cudnn_benchmark: False
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: 1921958984
    deterministic: False
    Distributed launcher: none
    Distributed training: False
    GPU number: 1
------------------------------------------------------------
04/22 10:09:08 - mmengine - WARNING - "FileClient" will be deprecated in future. Please use io functions in https://mmengine.readthedocs.io/en/latest/api/fileio.html#file-io
04/22 10:09:08 - mmengine - WARNING - "HardDiskBackend" is the alias of "LocalBackend" and the former will be deprecated in future.
04/22 10:09:08 - mmengine - INFO - Checkpoints will be saved to /media/xp/data/pydoc/mmlab/mmpretrain/work_dirs/my_mobilenetv3.
Corrupt JPEG data: 214 extraneous bytes before marker 0xd9
04/22 10:09:17 - mmengine - INFO - Exp name: my_mobilenetv3_20240422_100907
04/22 10:09:17 - mmengine - INFO - Epoch(train)   [1][98/98]  lr: 6.4000e-02  eta: 1:31:37  time: 0.0913  data_time: 0.0129  loss: 11.2596
04/22 10:09:17 - mmengine - INFO - Saving checkpoint at 1 epochs
Corrupt JPEG data: 214 extraneous bytes before marker 0xd9
04/22 10:09:26 - mmengine - INFO - Exp name: my_mobilenetv3_20240422_100907
04/22 10:09:26 - mmengine - INFO - Epoch(train)   [2][98/98]  lr: 6.4000e-02  eta: 1:30:36  time: 0.0905  data_time: 0.0129  loss: 0.7452
04/22 10:09:26 - mmengine - INFO - Saving checkpoint at 2 epochs
Corrupt JPEG data: 214 extraneous bytes before marker 0xd9
04/22 10:09:35 - mmengine - INFO - Exp name: my_mobilenetv3_20240422_100907
04/22 10:09:35 - mmengine - INFO - Epoch(train)   [3][98/98]  lr: 6.2272e-02  eta: 1:29:30  time: 0.0841  data_time: 0.0059  loss: 0.7198
04/22 10:09:35 - mmengine - INFO - Saving checkpoint at 3 epochs
Corrupt JPEG data: 214 extraneous bytes before marker 0xd9
04/22 10:09:44 - mmengine - INFO - Exp name: my_mobilenetv3_20240422_100907
04/22 10:09:44 - mmengine - INFO - Epoch(train)   [4][98/98]  lr: 6.2272e-02  eta: 1:29:02  time: 0.0856  data_time: 0.0047  loss: 0.6938
04/22 10:09:44 - mmengine - INFO - Saving checkpoint at 4 epochs
Corrupt JPEG data: 214 extraneous bytes before marker 0xd9
04/22 10:09:53 - mmengine - INFO - Exp name: my_mobilenetv3_20240422_100907
04/22 10:09:53 - mmengine - INFO - Epoch(train)   [5][98/98]  lr: 6.0591e-02  eta: 1:28:42  time: 0.0877  data_time: 0.0100  loss: 0.7128
04/22 10:09:53 - mmengine - INFO - Saving checkpoint at 5 epochs
Corrupt JPEG data: 214 extraneous bytes before marker 0xd9
04/22 10:10:02 - mmengine - INFO - Exp name: my_mobilenetv3_20240422_100907
04/22 10:10:02 - mmengine - INFO - Epoch(train)   [6][98/98]  lr: 6.0591e-02  eta: 1:28:32  time: 0.0857  data_time: 0.0069  loss: 0.7214
04/22 10:10:02 - mmengine - INFO - Saving checkpoint at 6 epochs
Corrupt JPEG data: 214 extraneous bytes before marker 0xd9
04/22 10:10:11 - mmengine - INFO - Exp name: my_mobilenetv3_20240422_100907
04/22 10:10:11 - mmengine - INFO - Epoch(train)   [7][98/98]  lr: 5.8955e-02  eta: 1:28:11  time: 0.0860  data_time: 0.0063  loss: 0.7113
04/22 10:10:11 - mmengine - INFO - Saving checkpoint at 7 epochs
Corrupt JPEG data: 214 extraneous bytes before marker 0xd9
04/22 10:10:20 - mmengine - INFO - Exp name: my_mobilenetv3_20240422_100907
04/22 10:10:20 - mmengine - INFO - Epoch(train)   [8][98/98]  lr: 5.8955e-02  eta: 1:28:05  time: 0.0881  data_time: 0.0083  loss: 0.6989
04/22 10:10:20 - mmengine - INFO - Saving checkpoint at 8 epochs
Corrupt JPEG data: 214 extraneous bytes before marker 0xd9
04/22 10:10:29 - mmengine - INFO - Exp name: my_mobilenetv3_20240422_100907
04/22 10:10:29 - mmengine - INFO - Epoch(train)   [9][98/98]  lr: 5.7363e-02  eta: 1:28:23  time: 0.0883  data_time: 0.0077  loss: 0.6874
04/22 10:10:29 - mmengine - INFO - Saving checkpoint at 9 epochs
Corrupt JPEG data: 214 extraneous bytes before marker 0xd9
04/22 10:10:39 - mmengine - INFO - Exp name: my_mobilenetv3_20240422_100907
04/22 10:10:39 - mmengine - INFO - Epoch(train)  [10][98/98]  lr: 5.7363e-02  eta: 1:28:28  time: 0.0894  data_time: 0.0068  loss: 0.7028
04/22 10:10:39 - mmengine - INFO - Saving checkpoint at 10 epochs
04/22 10:10:39 - mmengine - INFO - Epoch(val) [10][3/3]    accuracy/top1: 60.8696  data_time: 0.0411  time: 0.0650
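
Once checkpoints start landing in work_dirs, you can spot-check predictions with mmpretrain's inference API (a sketch; the checkpoint filename and the test image path are assumptions, substitute whichever epoch you actually saved):

from mmpretrain import ImageClassificationInferencer

inferencer = ImageClassificationInferencer(
    model='configs/mobilenet_v3/my_mobilenetv3.py',
    pretrained='work_dirs/my_mobilenetv3/epoch_300.pth',  # assumed checkpoint name
    classes=['cat', 'dog'],                               # in case class names are not in the checkpoint
    device='cpu')
result = inferencer('/path/to/some_cat.jpg')[0]           # hypothetical test image
print(result['pred_class'], result['pred_score'])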

Appendix

  • Dataset preparation: see the official documentation.
  • The complete training config, with the three components merged into one file, ready to be modified and used for training directly:

my_train_batch_size = 64
my_val_batch_size = 16
my_image_size = 128
my_max_epochs = 300
my_checkpoints_interval = 10  # 10 epochs to save a checkpoint

my_train_dataset_root = '/media/xp/data/image/deep_image/mini_cat_and_dog'
my_train_data_prefix = 'train'
my_val_dataset_root = '/media/xp/data/image/deep_image/mini_cat_and_dog'
my_val_data_prefix = 'val'
my_test_dataset_root = '/media/xp/data/image/deep_image/mini_cat_and_dog'
my_test_data_prefix = 'test'

work_dir = './work_dirs/my_mobilenetv3'
my_class_names = ['cat', 'dog']

auto_scale_lr = dict(base_batch_size=128)
data_preprocessor = dict(
    mean=[128, 128, 128], num_classes=2, std=[50, 50, 50], to_rgb=True)
dataset_type = 'CustomDataset'
default_hooks = dict(
    checkpoint=dict(interval=my_checkpoints_interval, type='CheckpointHook'),
    logger=dict(interval=100, type='LoggerHook'),
    param_scheduler=dict(type='ParamSchedulerHook'),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    timer=dict(type='IterTimerHook'),
    visualization=dict(enable=False, type='VisualizationHook'))
default_scope = 'mmpretrain'
env_cfg = dict(
    cudnn_benchmark=False,
    dist_cfg=dict(backend='nccl'),
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
launcher = 'none'
load_from = None
log_level = 'INFO'

model = dict(
    backbone=dict(arch='small_075', type='MobileNetV3'),
    head=dict(
        act_cfg=dict(type='HSwish'),
        dropout_rate=0.2,
        in_channels=432,
        init_cfg=dict(bias=0.0, layer='Linear', mean=0.0, std=0.01, type='Normal'),
        loss=dict(loss_weight=1.0, type='CrossEntropyLoss'),
        mid_channels=[1024],
        num_classes=len(my_class_names),
        topk=(1, 1),
        type='StackedLinearClsHead'),
    neck=dict(type='GlobalAveragePooling'),
    type='ImageClassifier')

optim_wrapper = dict(
    optimizer=dict(
        alpha=0.9,
        eps=0.0316,
        lr=0.064,
        momentum=0.9,
        type='RMSprop',
        weight_decay=1e-05))
param_scheduler = dict(by_epoch=True, gamma=0.973, step_size=2, type='StepLR')
randomness = dict(deterministic=False, seed=None)
resume = False

test_cfg = dict()
test_dataloader = dict(
    batch_size=my_val_batch_size,
    collate_fn=dict(type='default_collate'),
    dataset=dict(
        data_prefix='val',
        data_root=my_val_dataset_root,
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                backend='pillow',
                edge='short',
                interpolation='bicubic',
                scale=my_image_size,
                type='ResizeEdge'),
            dict(crop_size=my_image_size, type='CenterCrop'),
            dict(type='PackInputs'),
        ],
        type='CustomDataset',
        with_label=True),
    num_workers=1,
    persistent_workers=True,
    pin_memory=True,
    sampler=dict(shuffle=False, type='DefaultSampler'))
test_evaluator = dict(topk=(1, 1), type='Accuracy')
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        backend='pillow',
        edge='short',
        interpolation='bicubic',
        scale=my_image_size,
        type='ResizeEdge'),
    dict(crop_size=my_image_size, type='CenterCrop'),
    dict(type='PackInputs'),
]

train_cfg = dict(by_epoch=True, max_epochs=my_max_epochs, val_interval=10)
train_dataloader = dict(
    batch_size=my_train_batch_size,
    collate_fn=dict(type='default_collate'),
    dataset=dict(
        data_prefix=my_train_data_prefix,
        data_root=my_train_dataset_root,
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                backend='pillow',
                interpolation='bicubic',
                scale=my_image_size,
                type='RandomResizedCrop'),
            dict(direction='horizontal', prob=0.5, type='RandomFlip'),
            dict(
                hparams=dict(pad_val=[128, 128, 128]),
                policies='imagenet',
                type='AutoAugment'),
            dict(
                erase_prob=0.2,
                fill_color=[128, 128, 128],
                fill_std=[50, 50, 50],
                max_area_ratio=0.3333333333333333,
                min_area_ratio=0.02,
                mode='rand',
                type='RandomErasing'),
            dict(type='PackInputs'),
        ],
        type='CustomDataset',
        with_label=True),
    num_workers=1,
    persistent_workers=True,
    pin_memory=True,
    sampler=dict(shuffle=True, type='DefaultSampler'))
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        backend='pillow',
        interpolation='bicubic',
        scale=my_image_size,
        type='RandomResizedCrop'),
    dict(direction='horizontal', prob=0.5, type='RandomFlip'),
    dict(
        hparams=dict(pad_val=[128, 128, 128]),
        policies='imagenet',
        type='AutoAugment'),
    dict(
        erase_prob=0.2,
        fill_color=[128, 128, 128],
        fill_std=[50, 50, 50],
        max_area_ratio=0.3333333333333333,
        min_area_ratio=0.02,
        mode='rand',
        type='RandomErasing'),
    dict(type='PackInputs'),
]

val_cfg = dict()
val_dataloader = dict(
    batch_size=my_val_batch_size,
    collate_fn=dict(type='default_collate'),
    dataset=dict(
        data_prefix=my_val_data_prefix,
        data_root=my_val_dataset_root,
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                backend='pillow',
                edge='short',
                interpolation='bicubic',
                scale=my_image_size,
                type='ResizeEdge'),
            dict(crop_size=my_image_size, type='CenterCrop'),
            dict(type='PackInputs'),
        ],
        type='CustomDataset',
        with_label=True),
    num_workers=1,
    persistent_workers=True,
    pin_memory=True,
    sampler=dict(shuffle=False, type='DefaultSampler'))
val_evaluator = dict(topk=(1, 1), type='Accuracy')

vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='UniversalVisualizer', vis_backends=[dict(type='LocalVisBackend')])

