07-04 Thu: Issues and Solutions When Installing vLLM (LLMs_Inference) from Source

Date: 2024-07-04 09:48:09 · Version: V0.1 · Author: Song Quanheng (宋全恒) · Description: Document created

Introduction

 Since I recently need to integrate features into vLLM, I must be able to debug my own repository, LLMs_Inference. This document records the complete process of building and installing it from source.

 Reference:

  • Build from source

 In the normal case, simply running the commands below completes the source build and installation:

git clone https://github.com/vllm-project/vllm.git
cd vllm
# export VLLM_INSTALL_PUNICA_KERNELS=1 # optionally build for multi-LoRA capability
pip install -e .  # This may take 5-10 minutes.

 In practice, however, it was considerably more troublesome. Since the LLMs_Inference repository is forked from the vllm repository, the two should in theory behave identically.

Repository overview

 The repository contains several dependency files.


These files record the project's dependencies so that it can be installed and configured for a particular environment.

  • requirements.txt: lists all the dependencies the project needs, with their version constraints, so that everything can be installed in one step.

  • requirements-cpu.txt / requirements-cuda.txt / requirements-rocm.txt / requirements-neuron.txt: dependency lists for specific hardware or compute environments. For example, requirements-cuda.txt holds the dependencies related to CUDA (Compute Unified Device Architecture, NVIDIA's parallel computing platform and programming model); requirements-rocm.txt covers ROCm (Radeon Open Compute platform, AMD's open compute platform); requirements-neuron.txt presumably relates to dedicated neural accelerator chips.

 requirements-dev.txt holds the extra dependencies for the development environment: not required to run the project, but needed for developing, testing, and building it.

Whether building vLLM from source requires installing all of these files depends on your needs and use case.

If you plan to run or develop vLLM in a specific hardware environment (CUDA, ROCm, etc.), install the requirements file that matches that environment.

Taking a vLLM install as an example, the usual flow is: create and activate a conda environment, check the PyTorch version and other dependencies pinned in requirements.txt, then install.
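The selection described above can be sketched as a small helper. The mapping from build target to file follows the listing in this section and is illustrative only; the repository's setup.py is the authoritative source of this logic.

```shell
# Pick the requirements files for a given build target (illustrative mapping).
# Usage: requirements_files <cpu|cuda|rocm|neuron> [dev]
requirements_files() {
    target="$1"
    extra="$2"
    case "$target" in
        cpu|cuda|rocm|neuron) ;;
        *) echo "unknown target: $target" >&2; return 1 ;;
    esac
    files="requirements.txt requirements-$target.txt"
    if [ "$extra" = "dev" ]; then
        files="$files requirements-dev.txt"
    fi
    echo "$files"
}

# For a CUDA development setup, this would drive pip as:
for f in $(requirements_files cuda dev); do echo "pip install -r $f"; done
```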

Preparing the vLLM development environment

Installing directly on the host

Fixing the torch dependency problem

(llms_inference) yuzailiang@ubuntu:/mnt/self-define/sunning/lmdeploy/LLMs_Inference$ python -c "import torch; print('device count:',torch.cuda.device_count(), 'available: ', torch.cuda.is_available())"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/yuzailiang/anaconda3/envs/llms_inference/lib/python3.9/site-packages/torch/__init__.py", line 237, in <module>
    from torch._C import *  # noqa: F403
ImportError: /home/yuzailiang/anaconda3/envs/llms_inference/lib/python3.9/site-packages/torch/lib/../../nvidia/cusparse/lib/libcusparse.so.12: undefined symbol: __nvJitLinkAddData_12_1, version libnvJitLink.so.12

 As shown above, after the direct install, torch==2.3.0 was present but could not use the GPU.

Solution:

 Install the torch dependencies separately:

pip install torch==2.3.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

 After installing, verify that torch can drive CUDA and access the GPUs:

(llms_inference) yuzailiang@ubuntu:/mnt/self-define/sunning/lmdeploy/LLMs_Inference$ python -c "import torch; print('device count:',torch.cuda.device_count(), 'available: ', torch.cuda.is_available())"
device count: 8 available:  True

Continuing with pip install -e .

 The problem persisted:

packages/torch/__init__.py", line 237, in <module>
    from torch._C import *  # noqa: F403
ImportError: /home/yuzailiang/anaconda3/envs/llms_inference/lib/python3.9/site-packages/torch/lib/../../nvidia/cusparse/lib/libcusparse.so.12: undefined symbol: __nvJitLinkAddData_12_1, version libnvJitLink.so.12

Trying cu121: "Couldn't find CUDA library root."

pip install torch==2.3.0 torchvision torchaudio --index-url  https://download.pytorch.org/whl/cu121
Building wheels for collected packages: vllmBuilding editable for vllm (pyproject.toml) ... errorerror: subprocess-exited-with-error× Building editable for vllm (pyproject.toml) did not run successfully.│ exit code: 1╰─> [139 lines of output]running editable_wheelcreating /tmp/pip-wheel-c7m73v0l/.tmp-t6j0dz53/vllm.egg-infowriting /tmp/pip-wheel-c7m73v0l/.tmp-t6j0dz53/vllm.egg-info/PKG-INFOwriting dependency_links to /tmp/pip-wheel-c7m73v0l/.tmp-t6j0dz53/vllm.egg-info/dependency_links.txtwriting requirements to /tmp/pip-wheel-c7m73v0l/.tmp-t6j0dz53/vllm.egg-info/requires.txtwriting top-level names to /tmp/pip-wheel-c7m73v0l/.tmp-t6j0dz53/vllm.egg-info/top_level.txtwriting manifest file '/tmp/pip-wheel-c7m73v0l/.tmp-t6j0dz53/vllm.egg-info/SOURCES.txt'reading manifest file '/tmp/pip-wheel-c7m73v0l/.tmp-t6j0dz53/vllm.egg-info/SOURCES.txt'reading manifest template 'MANIFEST.in'adding license file 'LICENSE'writing manifest file '/tmp/pip-wheel-c7m73v0l/.tmp-t6j0dz53/vllm.egg-info/SOURCES.txt'creating '/tmp/pip-wheel-c7m73v0l/.tmp-t6j0dz53/vllm-0.4.2+cu120.dist-info'creating /tmp/pip-wheel-c7m73v0l/.tmp-t6j0dz53/vllm-0.4.2+cu120.dist-info/WHEELrunning build_pyrunning build_ext-- The CXX compiler identification is GNU 9.4.0-- Detecting CXX compiler ABI info-- Detecting CXX compiler ABI info - done-- Check for working CXX compiler: /usr/bin/c++ - skipped-- Detecting CXX compile features-- Detecting CXX compile features - done-- Build type: RelWithDebInfo-- Target device: cuda-- Found Python: /home/yuzailiang/anaconda3/envs/llms_inference/bin/python3.9 (found version "3.9.19") found components: Interpreter Development.Module-- Found python matching: /home/yuzailiang/anaconda3/envs/llms_inference/bin/python3.9.-- Found CUDA: /usr/local/cuda-12.0 (found version "12.0")CMake Error at /tmp/pip-build-env-xxhrqd7n/overlay/lib/python3.9/site-packages/cmake/data/share/cmake-3.30/Modules/Internal/CMakeCUDAFindToolkit.cmake:148 (message):Couldn't find CUDA library root.Call Stack 
(most recent call first):subprocess.CalledProcessError: Command '['cmake', '/mnt/self-define/sunning/lmdeploy/LLMs_Inference', '-G', 'Ninja', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/tmp/tmpwkyqp9r6.build-lib/vllm', '-DCMAKE_ARCHIVE_OUTPUT_DIRECTORY=/tmp/tmpuvdjn65m.build-temp', '-DVLLM_TARGET_DEVICE=cuda', '-DCMAKE_CXX_COMPILER_LAUNCHER=ccache', '-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache', '-DVLLM_PYTHON_EXECUTABLE=/home/yuzailiang/anaconda3/envs/llms_inference/bin/python3.9', '-DNVCC_THREADS=1', '-DCMAKE_JOB_POOL_COMPILE:STRING=compile', '-DCMAKE_JOB_POOLS:STRING=compile=256']' returned non-zero exit status 1.


The environment here is quite confusing, because the build log shows cu120, i.e. CUDA 12.0, being used.

Building wheels for collected packages: vllm
  Building editable for vllm (pyproject.toml) ... error
subprocess.CalledProcessError: Command '['cmake', '/mnt/self-define/sunning/lmdeploy/LLMs_Inference', '-G', 'Ninja', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/tmp/tmplnbtei_9.build-lib/vllm', '-DCMAKE_ARCHIVE_OUTPUT_DIRECTORY=/tmp/tmpdu5xwhxm.build-temp', '-DVLLM_TARGET_DEVICE=cuda', '-DCMAKE_CXX_COMPILER_LAUNCHER=ccache', '-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache', '-DVLLM_PYTHON_EXECUTABLE=/home/yuzailiang/anaconda3/envs/llms_inference/bin/python3.9', '-DNVCC_THREADS=1', '-DCMAKE_JOB_POOL_COMPILE:STRING=compile', '-DCMAKE_JOB_POOLS:STRING=compile=256']' returned non-zero exit status 1.

 Yet nvidia-smi reports a CUDA version of 12.4, which seemed strange at first.

(base) yuzailiang@ubuntu:/mnt/self-define/sunning/lmdeploy/LLMs_Inference$ nvidia-smi 
Tue Jul  9 09:02:46 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14              Driver Version: 550.54.14      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+---------------------

 Meanwhile, there is no cuda-12.4 under /usr/local at all:

(base) yuzailiang@ubuntu:/usr/local$ ll | grep cuda
lrwxrwxrwx  1 root root   22 Jul  8 05:22 cuda -> /etc/alternatives/cuda/
lrwxrwxrwx  1 root root   25 Jul  8 05:22 cuda-12 -> /etc/alternatives/cuda-12/
drwxr-xr-x 17 root root 4096 Jun 25 08:42 cuda-12.0/
drwxr-xr-x 15 root root 4096 Jul  8 05:22 cuda-12.1/
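This mismatch is actually expected: nvidia-smi reports the highest CUDA version the installed driver supports, while the toolkits that cmake can find live under /usr/local. A small sketch for checking both, assuming the output formats shown in the listings above:

```shell
# The driver-supported CUDA version, from nvidia-smi's banner line.
# Note: this is NOT the installed toolkit version.
driver_cuda_version() {
    grep -o 'CUDA Version: [0-9.]*' | awk '{print $3}'
}

# Installed toolkit versions, from names like 'cuda-12.0' in an ls listing.
toolkit_versions() {
    grep -o 'cuda-[0-9][0-9]*\.[0-9][0-9]*' | sed 's/cuda-//' | sort -u
}

# On the host these would be driven as:
#   nvidia-smi | driver_cuda_version    (what the driver supports)
#   ls /usr/local | toolkit_versions    (what cmake can actually find)
```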

Abandoning the host environment for a container

After losing several days to this, I switched to a Docker image to set up the development environment.

docker run -d --name smoothquant --gpus all -v /mnt/self-define:/mnt/self-define -p 8022:22 -it llm_inference:v1.0

The basic idea is to start a container, mount the necessary directories into it, deploy an sshd service inside, and let VS Code treat the container as a standalone Linux machine, enabling breakpoint debugging.

 For details, see "Starting a container from an image and adding an sshd service".

 The basic procedure is as follows:

1. Start the container, granting it all GPUs and the appropriate volume mounts and port mappings.
2. Install the sshd service in the container (the steps differ between Ubuntu and CentOS).
3. Configure the sshd service and change the root password.
4. Start the sshd service.
5. Connect VS Code directly to the container (treating it as a remote Linux server).
6. Set up the debugging environment and breakpoints. See 05-16 Thu: Setting up a remote debugging environment with VS Code (https://blog.csdn.net/lk142500/article/details/138969211?spm=1001.2014.3001.5502)
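The steps above can be sketched as a command sequence. An Ubuntu-based image is assumed, as in this document, and the root password is a placeholder to change:

```shell
# 1. Start the container with all GPUs, the needed mounts, and an SSH port mapping.
docker run -d --name smoothquant --gpus all \
    -v /mnt/self-define:/mnt/self-define -p 8022:22 -it llm_inference:v1.0

# 2-4. Inside the container (docker exec -it smoothquant bash):
apt-get update && apt-get install -y openssh-server     # Ubuntu; CentOS uses yum instead
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
echo 'root:CHANGE_ME' | chpasswd                        # placeholder root password
service ssh start

# 5-6. In VS Code, use Remote-SSH to connect to root@<host> on port 8022,
#      then set breakpoints as in a normal Linux workspace.
```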

Editing CMakeLists.txt

At line 178 of LLMs_Inference/CMakeLists.txt, change

FetchContent_Declare(
  cutlass
  GIT_REPOSITORY https://github.com/nvidia/cutlass.git
  # CUTLASS 3.5.0
  GIT_TAG 7d49e6c7e2f8896c47f586706e67e1fb215529dc
)

to

FetchContent_Declare(
  cutlass
  SOURCE_DIR /cutlass
  # GIT_REPOSITORY https://github.com/nvidia/cutlass.git
  # CUTLASS 3.5.0
  # GIT_TAG 7d49e6c7e2f8896c47f586706e67e1fb215529dc
)
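This change makes CMake's FetchContent use a local cutlass checkout at /cutlass instead of cloning it from GitHub during the build, which helps when the build machine has poor access to GitHub. The checkout has to exist first; a hedged sketch, pinning the same commit the original GIT_TAG referenced:

```shell
# Clone cutlass once into the container and pin the CUTLASS 3.5.0 commit
# that the original GIT_TAG pointed at.
git clone https://github.com/nvidia/cutlass.git /cutlass
cd /cutlass
git checkout 7d49e6c7e2f8896c47f586706e67e1fb215529dc
```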

Running the source install

pip install -e .
root@145206f3e691:/mnt/self-define/sunning/lmdeploy/LLMs_Inference# pip install -e .
Obtaining file:///mnt/self-define/sunning/lmdeploy/LLMs_Inference
  Installing build dependencies ... done
  Checking if build backend supports build_editable ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (5.9.8)
Requirement already satisfied: prometheus-fastapi-instrumentator>=7.0.0 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (7.0.0)
Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (3.9.3)
Requirement already satisfied: outlines==0.0.34 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (0.0.34)
Requirement already satisfied: lm-format-enforcer==0.10.1 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (0.10.1)
Requirement already satisfied: tokenizers>=0.19.1 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (0.19.1)
Requirement already satisfied: ninja in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (1.11.1.1)
Requirement already satisfied: pydantic>=2.0 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (2.6.4)
Requirement already satisfied: prometheus-client>=0.18.0 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (0.20.0)
Requirement already satisfied: cmake>=3.21 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (3.28.4)
Requirement already satisfied: tiktoken>=0.6.0 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (0.7.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (1.26.4)
Requirement already satisfied: openai in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (1.31.0)
Requirement already satisfied: vllm-flash-attn==2.5.8.post2 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (2.5.8.post2)
Requirement already satisfied: fastapi in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (0.110.0)
Requirement already satisfied: uvicorn[standard] in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (0.29.0)
Requirement already satisfied: torch==2.3.0 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (2.3.0)
Requirement already satisfied: ray>=2.9 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (2.10.0)
Requirement already satisfied: transformers>=4.40.0 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (4.42.3)
Requirement already satisfied: py-cpuinfo in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (9.0.0)
Requirement already satisfied: xformers==0.0.26.post1 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (0.0.26.post1)
Requirement already satisfied: sentencepiece in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (0.2.0)
Requirement already satisfied: vllm-nccl-cu12<2.19,>=2.18 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (2.18.1.0.4.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (4.10.0)
Requirement already satisfied: nvidia-ml-py in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (12.555.43)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (2.31.0)
Requirement already satisfied: filelock>=3.10.4 in /usr/local/lib/python3.10/dist-packages (from vllm==0.4.2+cu124) (3.13.1)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from lm-format-enforcer==0.10.1->vllm==0.4.2+cu124) (6.0.1)
Requirement already satisfied: interegular>=0.3.2 in /usr/local/lib/python3.10/dist-packages (from lm-format-enforcer==0.10.1->vllm==0.4.2+cu124) (0.3.3)
Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from lm-format-enforcer==0.10.1->vllm==0.4.2+cu124) (24.0)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.10/dist-packages (from outlines==0.0.34->vllm==0.4.2+cu124) (1.6.0)
Requirement already satisfied: numba in /usr/local/lib/python3.10/dist-packages (from outlines==0.0.34->vllm==0.4.2+cu124) (0.59.1)
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.10/dist-packages (from outlines==0.0.34->vllm==0.4.2+cu124) (3.0.0)
Requirement already satisfied: joblib in /usr/local/lib/python3.10/dist-packages (from outlines==0.0.34->vllm==0.4.2+cu124) (1.3.2)
Requirement already satisfied: scipy in /usr/local/lib/python3.10/dist-packages (from outlines==0.0.34->vllm==0.4.2+cu124) (1.12.0)
Requirement already satisfied: diskcache in /usr/local/lib/python3.10/dist-packages (from outlines==0.0.34->vllm==0.4.2+cu124) (5.6.3)
Requirement already satisfied: referencing in /usr/local/lib/python3.10/dist-packages (from outlines==0.0.34->vllm==0.4.2+cu124) (0.34.0)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from outlines==0.0.34->vllm==0.4.2+cu124) (3.1.3)
Requirement already satisfied: lark in /usr/local/lib/python3.10/dist-packages (from outlines==0.0.34->vllm==0.4.2+cu124) (1.1.9)
Requirement already satisfied: jsonschema in /usr/local/lib/python3.10/dist-packages (from outlines==0.0.34->vllm==0.4.2+cu124) (4.21.1)
Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (10.3.2.106)
Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (11.0.2.54)
Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (8.9.2.26)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (12.1.105)
Requirement already satisfied: triton==2.3.0 in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (2.3.0)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (1.12)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (12.1.105)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (2024.2.0)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (12.1.105)
Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (11.4.5.107)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (3.2.1)
Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (12.1.105)
Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (12.1.3.1)
Requirement already satisfied: nvidia-nccl-cu12==2.20.5 in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (2.20.5)
Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /usr/local/lib/python3.10/dist-packages (from torch==2.3.0->vllm==0.4.2+cu124) (12.1.0.106)
Requirement already satisfied: nvidia-nvjitlink-cu12 in /usr/local/lib/python3.10/dist-packages (from nvidia-cusolver-cu12==11.4.5.107->torch==2.3.0->vllm==0.4.2+cu124) (12.4.99)
Requirement already satisfied: starlette<1.0.0,>=0.30.0 in /usr/local/lib/python3.10/dist-packages (from prometheus-fastapi-instrumentator>=7.0.0->vllm==0.4.2+cu124) (0.36.3)
Requirement already satisfied: annotated-types>=0.4.0 in /usr/local/lib/python3.10/dist-packages (from pydantic>=2.0->vllm==0.4.2+cu124) (0.6.0)
Requirement already satisfied: pydantic-core==2.16.3 in /usr/local/lib/python3.10/dist-packages (from pydantic>=2.0->vllm==0.4.2+cu124) (2.16.3)
Requirement already satisfied: msgpack<2.0.0,>=1.0.0 in /usr/local/lib/python3.10/dist-packages (from ray>=2.9->vllm==0.4.2+cu124) (1.0.8)
Requirement already satisfied: aiosignal in /usr/local/lib/python3.10/dist-packages (from ray>=2.9->vllm==0.4.2+cu124) (1.3.1)
Requirement already satisfied: frozenlist in /usr/local/lib/python3.10/dist-packages (from ray>=2.9->vllm==0.4.2+cu124) (1.4.1)
Requirement already satisfied: protobuf!=3.19.5,>=3.15.3 in /usr/local/lib/python3.10/dist-packages (from ray>=2.9->vllm==0.4.2+cu124) (5.26.1)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.10/dist-packages (from ray>=2.9->vllm==0.4.2+cu124) (8.1.7)
Requirement already satisfied: regex>=2022.1.18 in /usr/local/lib/python3.10/dist-packages (from tiktoken>=0.6.0->vllm==0.4.2+cu124) (2023.12.25)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->vllm==0.4.2+cu124) (2.2.1)
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->vllm==0.4.2+cu124) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->vllm==0.4.2+cu124) (3.6)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->vllm==0.4.2+cu124) (2024.2.2)
Requirement already satisfied: huggingface-hub<1.0,>=0.16.4 in /usr/local/lib/python3.10/dist-packages (from tokenizers>=0.19.1->vllm==0.4.2+cu124) (0.23.4)
Requirement already satisfied: safetensors>=0.4.1 in /usr/local/lib/python3.10/dist-packages (from transformers>=4.40.0->vllm==0.4.2+cu124) (0.4.2)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers>=4.40.0->vllm==0.4.2+cu124) (4.66.2)
Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->vllm==0.4.2+cu124) (1.9.4)
Requirement already satisfied: async-timeout<5.0,>=4.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->vllm==0.4.2+cu124) (4.0.3)
Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->vllm==0.4.2+cu124) (23.2.0)
Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->vllm==0.4.2+cu124) (6.0.5)
Requirement already satisfied: httpx<1,>=0.23.0 in /usr/local/lib/python3.10/dist-packages (from openai->vllm==0.4.2+cu124) (0.27.0)
Requirement already satisfied: anyio<5,>=3.5.0 in /usr/local/lib/python3.10/dist-packages (from openai->vllm==0.4.2+cu124) (4.3.0)
Requirement already satisfied: distro<2,>=1.7.0 in /usr/local/lib/python3.10/dist-packages (from openai->vllm==0.4.2+cu124) (1.9.0)
Requirement already satisfied: sniffio in /usr/local/lib/python3.10/dist-packages (from openai->vllm==0.4.2+cu124) (1.3.1)
Requirement already satisfied: h11>=0.8 in /usr/local/lib/python3.10/dist-packages (from uvicorn[standard]->vllm==0.4.2+cu124) (0.14.0)
Requirement already satisfied: websockets>=10.4 in /usr/local/lib/python3.10/dist-packages (from uvicorn[standard]->vllm==0.4.2+cu124) (12.0)
Requirement already satisfied: python-dotenv>=0.13 in /usr/local/lib/python3.10/dist-packages (from uvicorn[standard]->vllm==0.4.2+cu124) (1.0.1)
Requirement already satisfied: httptools>=0.5.0 in /usr/local/lib/python3.10/dist-packages (from uvicorn[standard]->vllm==0.4.2+cu124) (0.6.1)
Requirement already satisfied: watchfiles>=0.13 in /usr/local/lib/python3.10/dist-packages (from uvicorn[standard]->vllm==0.4.2+cu124) (0.21.0)
Requirement already satisfied: uvloop!=0.15.0,!=0.15.1,>=0.14.0 in /usr/local/lib/python3.10/dist-packages (from uvicorn[standard]->vllm==0.4.2+cu124) (0.19.0)
Requirement already satisfied: exceptiongroup>=1.0.2 in /usr/local/lib/python3.10/dist-packages (from anyio<5,>=3.5.0->openai->vllm==0.4.2+cu124) (1.2.0)
Requirement already satisfied: httpcore==1.* in /usr/local/lib/python3.10/dist-packages (from httpx<1,>=0.23.0->openai->vllm==0.4.2+cu124) (1.0.5)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->outlines==0.0.34->vllm==0.4.2+cu124) (2.1.5)
Requirement already satisfied: jsonschema-specifications>=2023.03.6 in /usr/local/lib/python3.10/dist-packages (from jsonschema->outlines==0.0.34->vllm==0.4.2+cu124) (2023.12.1)
Requirement already satisfied: rpds-py>=0.7.1 in /usr/local/lib/python3.10/dist-packages (from jsonschema->outlines==0.0.34->vllm==0.4.2+cu124) (0.18.0)
Requirement already satisfied: llvmlite<0.43,>=0.42.0dev0 in /usr/local/lib/python3.10/dist-packages (from numba->outlines==0.0.34->vllm==0.4.2+cu124) (0.42.0)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch==2.3.0->vllm==0.4.2+cu124) (1.3.0)
Installing collected packages: vllm
  Attempting uninstall: vllm
    Found existing installation: vllm 0.4.2+cu124
    Uninstalling vllm-0.4.2+cu124:
      Successfully uninstalled vllm-0.4.2+cu124
  Running setup.py develop for vllm
Successfully installed vllm-0.4.2+cu124
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv

Connecting VS Code to the container and setting up Python debugging

 This amounts to connecting VS Code directly into the container environment. The detailed procedure is in:

07-09 Tue: Starting a container from an image, adding openssh, and debugging a Python project with VS Code breakpoints

 That note records the container startup, the installation and configuration of openssh-server, and starting the sshd service.

05-16 Thu: Setting up a remote Python debugging environment with VS Code (CSDN blog)

Summary

 In deep-learning development, a whole team typically shares one base environment, which is complex and error-prone. Preparing a container and isolating the working environment turns out to be a good alternative that reduces the chance of problems.

 Based on the practice in this document, to set up for further development on a repository that is normally installed via pip, the basic steps are:

  1. Obtain the source code via git clone or other means.
  2. Install the dependencies.
  3. Build from source.
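For this document's environment, the three steps map onto the commands used in the earlier sections. The cu118 index matched the host toolkit here; adapt it to your own environment:

```shell
# 1. Obtain the source (LLMs_Inference is a fork of vllm)
git clone https://github.com/vllm-project/vllm.git
cd vllm

# 2. Install dependencies; pulling torch from a wheel index that matches the
#    local CUDA toolkit avoided the libnvJitLink symbol error seen earlier
pip install torch==2.3.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

# 3. Build and install from source, in editable mode for debugging
pip install -e .
```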

 Once built from source, the program can be debugged with breakpoints. While writing this document, in order to connect VS Code to the container, I also wrote "07-09 Tue: Starting a container from an image, adding openssh, and debugging a Python project with VS Code breakpoints".
