1. docker-compose startup aborts partway through
This is usually caused by insufficient memory or an underpowered machine. docker-compose.yml configures a timeout for each service's healthcheck, and on a slow machine the services take longer to come up than the defaults allow, so the timeouts need to be extended.
I raised the healthcheck timeout to 120s and the retries to 10, and the stack started successfully:
```yaml
services:
  elasticsearch:
    container_name: es-container-local
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.2
    user: root
    privileged: true
    # ports:
    #   - 9200:9200
    #   - 9300:9300
    restart: on-failure
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    volumes:
      - ${DOCKER_VOLUME_DIRECTORY:-.}/third_party/es/plugins:/usr/share/elasticsearch/plugins
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/es/data:/usr/share/elasticsearch/data
    command: >
      /bin/bash -c "
        mkdir -p /usr/share/elasticsearch/data /usr/share/elasticsearch/plugins &&
        chown -R elasticsearch:elasticsearch /usr/share/elasticsearch &&
        su elasticsearch -c '/usr/share/elasticsearch/bin/elasticsearch'
      "
    healthcheck:
      test: curl --fail http://localhost:9200/_cat/health || exit 1
      interval: 10s
      timeout: 120s
      retries: 10
  etcd:
    container_name: milvus-etcd-local
    image: quay.io/coreos/etcd:v3.5.5
    environment:
      - ETCD_AUTO_COMPACTION_MODE=revision
      - ETCD_AUTO_COMPACTION_RETENTION=1000
      - ETCD_QUOTA_BACKEND_BYTES=4294967296
      - ETCD_SNAPSHOT_COUNT=50000
    volumes:
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/etcd:/etcd
    command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
    healthcheck:
      test: ["CMD", "etcdctl", "endpoint", "health"]
      interval: 10s
      timeout: 120s
      retries: 10
  minio:
    container_name: milvus-minio-local
    image: minio/minio:RELEASE.2023-03-20T20-16-18Z
    environment:
      MINIO_ACCESS_KEY: minioadmin
      MINIO_SECRET_KEY: minioadmin
    # ports:
    #   - "9001:9001"
    #   - "9000:9000"
    volumes:
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/minio:/minio_data
    command: minio server /minio_data --console-address ":9001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 10s
      timeout: 120s
      retries: 10
  standalone:
    container_name: milvus-standalone-local
    image: milvusdb/milvus:v2.4.8
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    command: ["milvus", "run", "standalone"]
    security_opt:
      - seccomp:unconfined
    environment:
      ETCD_ENDPOINTS: etcd:2379
      MINIO_ADDRESS: minio:9000
    volumes:
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/milvus:/var/lib/milvus
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9091/healthz"]
      interval: 10s
      start_period: 90s
      timeout: 120s
      retries: 10
    # ports:
    #   - "19530:19530"
    #   - "9091:9091"
    depends_on:
      - "etcd"
      - "minio"
  mysql:
    container_name: mysql-container-local
    privileged: true
    image: mysql:8.4
    # ports:
    #   - "3306:3306"
    command: --max-connections=10000
    environment:
      - MYSQL_ROOT_PASSWORD=123456
    volumes:
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/mysql:/var/lib/mysql
  qanything_local:
    container_name: qanything-container-local
    image: xixihahaliu01/qanything-linux:v1.5.1
    command: /bin/bash -c "cd /workspace/QAnything && bash scripts/entrypoint.sh"
    privileged: true
    shm_size: '8gb'
    volumes:
      - ${DOCKER_VOLUME_DIRECTORY:-.}/:/workspace/QAnything/
    ports:
      - "8777:8777"
    environment:
      - NCCL_LAUNCH_MODE=PARALLEL
      - GPUID=${GPUID:-0}
      - USER_IP=${USER_IP:-localhost}
    depends_on:
      standalone:
        condition: service_healthy
      mysql:
        condition: service_started
      elasticsearch:
        condition: service_healthy
    tty: true
    stdin_open: true

networks:
  default:
    name: QAnything
```
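To get a feel for what these healthcheck settings buy you, here is a back-of-envelope calculation (my simplification of Docker's documented healthcheck semantics, not anything from QAnything itself) of roughly how long a container can keep failing probes before it is marked unhealthy:

```python
# Rough upper bound on how long Docker keeps probing a container before
# declaring it unhealthy. Simplified assumptions from Docker's documented
# healthcheck semantics: probes run every `interval` seconds, each probe may
# take up to `timeout` seconds before it counts as a failure, `retries`
# consecutive failures mark the container unhealthy, and failures during
# `start_period` do not count toward retries.
def max_unhealthy_delay(interval_s: int, timeout_s: int,
                        retries: int, start_period_s: int = 0) -> int:
    return start_period_s + retries * (interval_s + timeout_s)

# The milvus-standalone healthcheck above:
# interval 10s, timeout 120s, retries 10, start_period 90s.
print(max_unhealthy_delay(10, 120, 10, start_period_s=90))  # 1390 seconds
```

So with these settings a slow machine gets over 20 minutes of grace before `depends_on: condition: service_healthy` gives up, instead of failing within a couple of minutes on the defaults.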
2. After startup, no file can be parsed and "milvus insert error" is shown
At first I had no idea where to start, but since the message mentioned an insert error, I looked for the insert-related log directory: /root/QAnything/logs/insert_logs. There I found one very distinctive error: Cannot connect to host localhost:9001 ssl:default [Connection refused]. A connection to port 9001 was being refused. My first thought was that the MinIO service had not started (its console is bound to 9001 in the compose file above), but after a round of checking, MinIO was fine and had nothing to do with it.
So I kept digging. Since the failing call went to localhost, it was probably something inside the core container itself, which led me to the qanything_kernel folder. Parsing a file needs the parsing models, and from past experience that pointed to the embedding and rerank services.
In the code I traced the source to /root/QAnything/qanything_kernel/dependent_server/embedding_server/embedding_server.py, then went into the container to investigate:
docker exec -it qanything-container-local bash
Inside the container, listing the processes with
ps -aux
showed the service had never been started at all. Starting the embedding service manually inside the container with
python3 -u qanything_kernel/dependent_server/embedding_server/embedding_server.py
it timed out and exited after 30s, never finishing startup. At this point the diagnosis was clear: the fix is to extend the service's startup timeout.
The exit message itself linked to the official docs for exactly this problem: https://sanic.dev/en/guide/deployment/manager.html#worker-ack — simply configure a longer worker-ack timeout to replace the default 30s.
Edit the file qanything_kernel/dependent_server/embedding_server/embedding_server.py — note that you edit the path outside the container, since the local directory is bind-mounted into it.
Apply the same change to the rerank file /root/QAnything/qanything_kernel/dependent_server/rerank_server/rerank_server.py:
```python
from sanic import Sanic
from sanic.response import json
from sanic.worker.manager import WorkerManager
from qanything_kernel.dependent_server.rerank_server.rerank_async_backend import RerankAsyncBackend
from qanything_kernel.configs.model_config import LOCAL_RERANK_MODEL_PATH, LOCAL_RERANK_THREADS
from qanything_kernel.utils.general_utils import get_time_async
import argparse

# Raise the worker-ack threshold so slow model loading does not kill the worker
WorkerManager.THRESHOLD = 6000

# Parse the external `mode` argument
parser = argparse.ArgumentParser()
```
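One detail worth knowing: according to the Sanic worker-ack documentation linked above, `WorkerManager.THRESHOLD` is measured in 0.1-second ticks rather than seconds, so 6000 corresponds to a 600-second startup budget. A tiny helper to make the conversion explicit (the tick size is taken from the Sanic docs, not from this project's code):

```python
# Sanic's WorkerManager.THRESHOLD is counted in 0.1-second ticks,
# so a startup budget in seconds must be multiplied by 10.
def ack_threshold_ticks(seconds: float) -> int:
    """Convert a worker-ack startup budget in seconds to THRESHOLD ticks."""
    return int(seconds * 10)

print(ack_threshold_ticks(600))  # 6000, the value used above
print(ack_threshold_ticks(30))   # 300, roughly the default 30s budget
```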
After making these edits, restart the containers and the services come up.
Summary
An underpowered server made the services start slowly, which tripped startup timeouts in two places: containers failed their healthchecks (fix: increase the healthcheck timeout and retries), and the embedding/rerank services were killed before finishing startup (fix: increase Sanic's worker-ack threshold). When a service refuses to start or dies partway through startup, check for timeout settings first.
For more details, see:
Personal blog: https://www.dataeast.cn/
CSDN blog: https://blog.csdn.net/siberiaWarpDrive
Bilibili channel: https://space.bilibili.com/25871614?spm_id_from=333.1007.0.0
Follow the "曲速引擎 Warp Drive" WeChat official account