DeepSeek download channels in mainland China
2026-01-08 19:04 #1003
by service
1. ModelScope (魔搭社区) - run by Alibaba
URL: [url]www.modelscope.cn/models[/url]
Available models:
- DeepSeek-Coder-6.7B-Instruct: [url] modelscope.cn/models/deepseek-ai/deepseek-coder-6.7b-instruct [/url]
- DeepSeek-Coder-33B-Instruct: [url] modelscope.cn/models/deepseek-ai/deepseek-coder-33b-instruct [/url]
- DeepSeek-Coder-V2-Lite: [url] modelscope.cn/models/deepseek-ai/DeepSee...der-V2-Lite-Instruct [/url]
Code:
# Install ModelScope
pip install modelscope
# Download a model
from modelscope import snapshot_download
model_dir = snapshot_download('deepseek-ai/deepseek-coder-6.7b-instruct')
2. OpenI (启智社区)
URL: [url]openi.pcl.ac.cn/[/url]
Search keywords:
- "deepseek-coder"
- "deepseek coder gguf"
5. Summary of domestic mirror sites
- HF Mirror: hf-mirror.com (HuggingFace mirror)
- Tsinghua University mirror: mirrors.tuna.tsinghua.edu.cn (carries some models)
- Aliyun mirror: mirrors.aliyun.com
🔄 Downloading via a mirror site
Method A: use hf-mirror.com
Method B: direct download with wget
📦 Specific download links - DeepSeek-Coder GGUF builds (recommended)
6.7B model (Q4_K_M quantization):
1.3B model (smaller, good for testing):
V2-Lite model (16B):
Batch download script
🐳 Using a Docker image: if direct downloads are difficult, you can use Docker:
💾 Converting from existing resources: if you already have the model in another format, you can convert it to GGUF:
🔍 Search tips: try these queries in a search engine:
- site:github.com "deepseek-coder" gguf
- site:modelscope.cn deepseek-coder
- "deepseek-coder" 百度网盘 (Baidu Netdisk)
Domestic forums and communities:
- Baidu Netdisk search sites (e.g. pan.baidu.com, search for "deepseek-coder")
- QQ/WeChat groups: some AI-related groups share model files
- Accelerated GitHub downloads: use ghproxy.com
- Example link:
Code:
https://ghproxy.com/https://github.com/TheBloke/DeepSeek-Coder-6.7B-Instruct-GGUF
📁 Backup download links (direct) - V2-Lite Q4_K_M:
Download with curl:
🆘 If every link is dead
Plan B: pull directly with Ollama
Plan C: buy a ready-made model package. Search "DeepSeek-Coder 模型" on Taobao or Xianyu; some sellers offer the models pre-downloaded on USB drives or netdisk shares.
Plan D: contact me for help. If none of the above works, I can:
💡 Best advice for users in mainland China:
Code:
# Set the mirror endpoint
export HF_ENDPOINT=https://hf-mirror.com
# Download with huggingface-cli
pip install huggingface-hub
huggingface-cli download deepseek-ai/deepseek-coder-6.7b-instruct --local-dir ./deepseek-coder
Code:
# Download a GGUF file through hf-mirror
wget https://hf-mirror.com/TheBloke/DeepSeek-Coder-6.7B-Instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf
Code:
https://hf-mirror.com/TheBloke/DeepSeek-Coder-6.7B-Instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf
Code:
https://hf-mirror.com/TheBloke/DeepSeek-Coder-1.3B-Instruct-GGUF/resolve/main/deepseek-coder-1.3b-instruct.Q4_K_M.gguf
Code:
https://hf-mirror.com/TheBloke/DeepSeek-Coder-V2-Lite-Instruct-GGUF/resolve/main/deepseek-coder-v2-lite-instruct.Q4_K_M.gguf
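The three direct links above all follow one pattern: `<mirror>/<repo>/resolve/main/<filename>`. As a sketch (the helper is hypothetical, not part of the original post), a small function can compose such a link for any repo/file pair:

```python
# Hypothetical helper: composes hf-mirror direct-download URLs.
# Assumes the "<mirror>/<repo>/resolve/main/<filename>" layout shown above.
MIRROR = "https://hf-mirror.com"

def gguf_url(repo: str, filename: str) -> str:
    """Build the direct-download URL for a file in a HuggingFace-style repo."""
    return f"{MIRROR}/{repo}/resolve/main/{filename}"

print(gguf_url("TheBloke/DeepSeek-Coder-6.7B-Instruct-GGUF",
               "deepseek-coder-6.7b-instruct.Q4_K_M.gguf"))
```

Swap `MIRROR` for `https://huggingface.co` and the same helper builds the upstream links.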
Code:
#!/bin/bash
# download_deepseek.sh
# Use the hf-mirror mirror
MIRROR="https://hf-mirror.com"

# Models to download ("repo:filename" pairs)
MODELS=(
    "TheBloke/DeepSeek-Coder-1.3B-Instruct-GGUF:deepseek-coder-1.3b-instruct.Q4_K_M.gguf"
    "TheBloke/DeepSeek-Coder-6.7B-Instruct-GGUF:deepseek-coder-6.7b-instruct.Q4_K_M.gguf"
    "TheBloke/DeepSeek-Coder-V2-Lite-Instruct-GGUF:deepseek-coder-v2-lite-instruct.Q4_K_M.gguf"
)

# Create the target directory
mkdir -p deepseek-models
cd deepseek-models || exit 1

echo "Starting DeepSeek-Coder downloads..."
echo "Using mirror: $MIRROR"

for model in "${MODELS[@]}"; do
    IFS=':' read -r repo filename <<< "$model"
    echo "Downloading: $filename"
    wget "${MIRROR}/${repo}/resolve/main/${filename}" \
        --show-progress \
        --no-check-certificate
    if [ $? -eq 0 ]; then
        echo "✓ $filename downloaded"
    else
        echo "✗ $filename failed"
    fi
done

echo "All downloads finished!"
ls -lh *.gguf
Code:
# Pull the ModelScope Docker image (models can then be fetched inside it)
docker pull registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.3.0-py37-torch1.11.0
# Run with GPU access and expose the web port
docker run -it --gpus all -p 7860:7860 registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.3.0-py37-torch1.11.0
[b]Convert a PyTorch model to GGUF[/b]
Code:
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# Build
make
# Convert the model (requires the original PyTorch weights)
python convert.py /path/to/deepseek-coder --outtype f16 --outfile deepseek-coder.gguf
# Quantize
./quantize deepseek-coder.gguf deepseek-coder-q4_0.gguf q4_0
Domestic forums and communities:
- Zhihu: search "deepseek-coder 下载"
- CSDN: search "deepseek-coder 模型下载"
- Bilibili: some uploaders share model download links
- Tech WeChat/QQ groups: AI-art and LLM-related groups
Code:
# download_from_china.py
import os

import requests
from tqdm import tqdm


class DeepSeekDownloader:
    def __init__(self):
        self.mirrors = {
            'hf_mirror': 'https://hf-mirror.com',
            'modelscope': 'https://modelscope.cn/api/v1/models'
        }

    def download_from_hf_mirror(self, repo_id, filename, save_path):
        """Download a file from hf-mirror."""
        url = f"{self.mirrors['hf_mirror']}/{repo_id}/resolve/main/{filename}"
        print(f"Downloading from hf-mirror: {filename}")
        print(f"URL: {url}")
        try:
            response = requests.get(url, stream=True)
            response.raise_for_status()
            total_size = int(response.headers.get('content-length', 0))
            with open(save_path, 'wb') as file, tqdm(
                desc=filename,
                total=total_size,
                unit='iB',
                unit_scale=True,
                unit_divisor=1024,
            ) as bar:
                for data in response.iter_content(chunk_size=1024):
                    size = file.write(data)
                    bar.update(size)
            print(f"✓ Download finished: {save_path}")
            return True
        except Exception as e:
            print(f"✗ Download failed: {e}")
            return False

    def download_gguf_model(self, model_size="6.7B", quant="Q4_K_M"):
        """Download a GGUF model."""
        models = {
            "1.3B": "TheBloke/DeepSeek-Coder-1.3B-Instruct-GGUF",
            "6.7B": "TheBloke/DeepSeek-Coder-6.7B-Instruct-GGUF",
            "V2-Lite": "TheBloke/DeepSeek-Coder-V2-Lite-Instruct-GGUF"
        }
        if model_size not in models:
            print(f"Error: unsupported size {model_size}")
            return False
        repo_id = models[model_size]
        # Keep the hyphen so "V2-Lite" maps to "...-v2-lite-instruct..."
        filename = f"deepseek-coder-{model_size.lower()}-instruct.{quant}.gguf"
        # Create the save directory
        save_dir = f"./deepseek-models/{model_size}"
        os.makedirs(save_dir, exist_ok=True)
        save_path = os.path.join(save_dir, filename)
        return self.download_from_hf_mirror(repo_id, filename, save_path)


if __name__ == "__main__":
    downloader = DeepSeekDownloader()
    print("DeepSeek-Coder China download helper")
    print("=" * 50)
    # Pick a model size
    print("Available models:")
    print("1. DeepSeek-Coder 1.3B (~0.8GB)")
    print("2. DeepSeek-Coder 6.7B (~4GB)")
    print("3. DeepSeek-Coder V2-Lite 16B (~8GB)")
    choice = input("Choose (1/2/3): ").strip()
    model_map = {
        "1": "1.3B",
        "2": "6.7B",
        "3": "V2-Lite"
    }
    if choice in model_map:
        model_size = model_map[choice]
        print(f"Downloading the {model_size} model...")
        downloader.download_gguf_model(model_size)
    else:
        print("Invalid choice")
Code:
# Try these links (you may need to paste them into a download manager)
https://hf-mirror.com/TheBloke/DeepSeek-Coder-V2-Lite-Instruct-GGUF/resolve/main/deepseek-coder-v2-lite-instruct.Q4_K_M.gguf
# or
https://modelscope.cn/api/v1/models/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct/repo?Revision=master&FilePath=deepseek-coder-v2-lite-instruct.Q4_K_M.gguf
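The second link above uses ModelScope's file-download endpoint with `Revision` and `FilePath` query parameters. For other files, a small helper (hypothetical; the endpoint pattern is inferred from the link above, not from official docs) can build the URL:

```python
from urllib.parse import urlencode

# Endpoint pattern inferred from the backup link above (assumption, not official docs)
MODELSCOPE_API = "https://modelscope.cn/api/v1/models"

def modelscope_file_url(model_id: str, file_path: str, revision: str = "master") -> str:
    """Compose a ModelScope file-download URL for one file in a model repo."""
    query = urlencode({"Revision": revision, "FilePath": file_path})
    return f"{MODELSCOPE_API}/{model_id}/repo?{query}"
```

`urlencode` also takes care of escaping when `FilePath` points into a subdirectory.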
Code:
# Via hf-mirror
curl -L -o deepseek-coder-v2-lite-instruct.Q4_K_M.gguf \
"https://hf-mirror.com/TheBloke/DeepSeek-Coder-V2-Lite-Instruct-GGUF/resolve/main/deepseek-coder-v2-lite-instruct.Q4_K_M.gguf"
Code:
# Ollama has a built-in registry mirror, which may be reachable directly
ollama pull deepseek-coder:6.7b
# If Ollama cannot connect either, go through a proxy
# (Windows cmd syntax; on Linux/macOS use `export` instead of `set`)
set http_proxy=http://127.0.0.1:7890
set https_proxy=http://127.0.0.1:7890
ollama pull deepseek-coder:6.7b
- Send you model files I already have (transferred some other way)
- Walk you through using an alternative model
Code:
cd ComfyUI/models/LLM/deepseek-coder
# ollama create needs a Modelfile that points at the GGUF file
echo "FROM ./deepseek-coder-v2-lite-instruct.Q4_K_M.gguf" > Modelfile
ollama create deepseek-v2-lite -f Modelfile
- First choice: the hf-mirror.com mirror site
- Fallback: ModelScope
- In a hurry: download the small 1.3B model first to confirm your workflow, then fetch the larger one
Code:
# Simplest option: copy this command (-c resumes an interrupted download)
wget -c https://hf-mirror.com/TheBloke/DeepSeek-Coder-6.7B-Instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf

