bolt.diy: a tool for automatically building web and mobile apps with multiple AI models
A while ago I wrote about bolt.new: "bolt.new: a tool for automatically building web and mobile apps with AI" (CSDN blog).
But bolt.new only supports Anthropic's claude-3-5-sonnet-20240620 model, which greatly limits its AI capability. Is there a way to use other models? Yes: bolt.diy works with all kinds of large AI models! Official repo: https://github.com/stackblitz-labs/bolt.diy
Download the source code
git clone https://github.com/stackblitz-labs/bolt.diy
Install and start
Enter the bolt.diy directory and run:
npm install -g pnpm
pnpm install
pnpm run dev
After starting, the output looks like this:
pnpm run dev
> bolt@0.0.7 dev E:\github\bolt.diy
> node pre-start.cjs && remix vite:dev
★═══════════════════════════════════════★
B O L T . D I Y
⚡️ Welcome ⚡️
★═══════════════════════════════════════★
📍 Current Version Tag: v"0.0.7"
📍 Current Commit Version: "0202aef"
Please wait until the URL appears here
★═══════════════════════════════════════★
warn Data fetching is changing to a single fetch in React Router v7
┃ You can use the `v3_singleFetch` future flag to opt-in early.
┃ -> https://remix.run/docs/en/2.13.1/start/future-flags#v3_singleFetch
┗
➜ Local: http://localhost:5173/
➜ Network: use --host to expose
➜ press h + enter to show help
Open http://localhost:5173/ in a browser.
The log output shows the list of supported AI providers:
INFO LLMManager Registering Provider: Anthropic
INFO LLMManager Registering Provider: Cohere
INFO LLMManager Registering Provider: Deepseek
INFO LLMManager Registering Provider: Google
INFO LLMManager Registering Provider: Groq
INFO LLMManager Registering Provider: HuggingFace
INFO LLMManager Registering Provider: Hyperbolic
INFO LLMManager Registering Provider: Mistral
INFO LLMManager Registering Provider: Ollama
INFO LLMManager Registering Provider: OpenAI
INFO LLMManager Registering Provider: OpenRouter
INFO LLMManager Registering Provider: OpenAILike
INFO LLMManager Registering Provider: Perplexity
INFO LLMManager Registering Provider: xAI
INFO LLMManager Registering Provider: Together
INFO LLMManager Registering Provider: LMStudio
INFO LLMManager Registering Provider: AmazonBedrock
INFO LLMManager Registering Provider: Github
Configuring a model
After the page loads, the configuration page appears:
But why couldn't I find any settings for Ollama, LMStudio, or OpenAI-compatible models?
Never mind, let's just try OpenAI.
Got it: you have to set up a .env file. The project ships a .env.example file:
# Rename this file to .env once you have filled in the below environment variables!
# Get your GROQ API Key here -
# https://console.groq.com/keys
# You only need this environment variable set if you want to use Groq models
GROQ_API_KEY=
# Get your HuggingFace API Key here -
# https://huggingface.co/settings/tokens
# You only need this environment variable set if you want to use HuggingFace models
HuggingFace_API_KEY=
# Get your Open AI API Key by following these instructions -
# https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key
# You only need this environment variable set if you want to use GPT models
OPENAI_API_KEY=
# Get your Anthropic API Key in your account settings -
# https://console.anthropic.com/settings/keys
# You only need this environment variable set if you want to use Claude models
ANTHROPIC_API_KEY=
# Get your OpenRouter API Key in your account settings -
# https://openrouter.ai/settings/keys
# You only need this environment variable set if you want to use OpenRouter models
OPEN_ROUTER_API_KEY=
# Get your Google Generative AI API Key by following these instructions -
# https://console.cloud.google.com/apis/credentials
# You only need this environment variable set if you want to use Google Generative AI models
GOOGLE_GENERATIVE_AI_API_KEY=
# You only need this environment variable set if you want to use oLLAMA models
# DONT USE http://localhost:11434 due to IPV6 issues
# USE EXAMPLE http://127.0.0.1:11434
OLLAMA_API_BASE_URL=
# You only need this environment variable set if you want to use OpenAI Like models
OPENAI_LIKE_API_BASE_URL=
# You only need this environment variable set if you want to use Together AI models
TOGETHER_API_BASE_URL=
# You only need this environment variable set if you want to use DeepSeek models through their API
DEEPSEEK_API_KEY=
# Get your OpenAI Like API Key
OPENAI_LIKE_API_KEY=
# Get your Together API Key
TOGETHER_API_KEY=
# You only need this environment variable set if you want to use Hyperbolic models
#Get your Hyperbolics API Key at https://app.hyperbolic.xyz/settings
#baseURL="https://api.hyperbolic.xyz/v1/chat/completions"
HYPERBOLIC_API_KEY=
HYPERBOLIC_API_BASE_URL=
# Get your Mistral API Key by following these instructions -
# https://console.mistral.ai/api-keys/
# You only need this environment variable set if you want to use Mistral models
MISTRAL_API_KEY=
# Get the Cohere Api key by following these instructions -
# https://dashboard.cohere.com/api-keys
# You only need this environment variable set if you want to use Cohere models
COHERE_API_KEY=
# Get LMStudio Base URL from LM Studio Developer Console
# Make sure to enable CORS
# DONT USE http://localhost:1234 due to IPV6 issues
# Example: http://127.0.0.1:1234
LMSTUDIO_API_BASE_URL=
# Get your xAI API key
# https://x.ai/api
# You only need this environment variable set if you want to use xAI models
XAI_API_KEY=
# Get your Perplexity API Key here -
# https://www.perplexity.ai/settings/api
# You only need this environment variable set if you want to use Perplexity models
PERPLEXITY_API_KEY=
# Get your AWS configuration
# https://console.aws.amazon.com/iam/home
# The JSON should include the following keys:
# - region: The AWS region where Bedrock is available.
# - accessKeyId: Your AWS access key ID.
# - secretAccessKey: Your AWS secret access key.
# - sessionToken (optional): Temporary session token if using an IAM role or temporary credentials.
# Example JSON:
# {"region": "us-east-1", "accessKeyId": "yourAccessKeyId", "secretAccessKey": "yourSecretAccessKey", "sessionToken": "yourSessionToken"}
AWS_BEDROCK_CONFIG=
# Include this environment variable if you want more logging for debugging locally
VITE_LOG_LEVEL=debug
# Get your GitHub Personal Access Token here -
# https://github.com/settings/tokens
# This token is used for:
# 1. Importing/cloning GitHub repositories without rate limiting
# 2. Accessing private repositories
# 3. Automatic GitHub authentication (no need to manually connect in the UI)
#
# For classic tokens, ensure it has these scopes: repo, read:org, read:user
# For fine-grained tokens, ensure it has Repository and Organization access
VITE_GITHUB_ACCESS_TOKEN=
# Specify the type of GitHub token you're using
# Can be 'classic' or 'fine-grained'
# Classic tokens are recommended for broader access
VITE_GITHUB_TOKEN_TYPE=classic
# Example Context Values for qwen2.5-coder:32b
#
# DEFAULT_NUM_CTX=32768 # Consumes 36GB of VRAM
# DEFAULT_NUM_CTX=24576 # Consumes 32GB of VRAM
# DEFAULT_NUM_CTX=12288 # Consumes 26GB of VRAM
# DEFAULT_NUM_CTX=6144 # Consumes 24GB of VRAM
DEFAULT_NUM_CTX=
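The provider list printed at startup maps onto these variables: a provider only becomes usable once its API key or base URL is non-empty. The gating can be sketched as a tiny .env parser. This is an illustrative version only, not bolt.diy's actual code, and the variable-to-provider mapping below is my assumption based on the file above:

```javascript
// Which env variable "switches on" each provider (inferred from .env.example).
const PROVIDER_VARS = {
  OpenAI: "OPENAI_API_KEY",
  Anthropic: "ANTHROPIC_API_KEY",
  Ollama: "OLLAMA_API_BASE_URL",
  OpenAILike: "OPENAI_LIKE_API_BASE_URL",
  LMStudio: "LMSTUDIO_API_BASE_URL",
};

// Parse KEY=value lines, skipping comments and blanks; strip surrounding quotes.
function parseEnv(text) {
  const env = {};
  for (const line of text.split("\n")) {
    if (line.trim().startsWith("#")) continue;
    const m = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*)\s*$/);
    if (m) env[m[1]] = m[2].replace(/^["']|["']$/g, "");
  }
  return env;
}

// A provider is enabled only when its variable has a non-empty value.
function enabledProviders(text) {
  const env = parseEnv(text);
  return Object.keys(PROVIDER_VARS).filter((p) => env[PROVIDER_VARS[p]]);
}
```

With this mental model, an empty `OPENAI_API_KEY=` line explains why a provider silently stays hidden.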
I decided to use my self-hosted AI server instead. Create a .env file with:
OPENAI_API_KEY="your_api_key_here"
OPENAI_BASE_URL="http://192.168.1.5:1337/v1"
After some trial and error, this is what I ended up with:
OPENAI_LIKE_API_BASE_URL="http://192.168.1.5:1337/v1"
LMSTUDIO_API_BASE_URL="http://192.168.1.5:1337/v1"
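Note the use of an IP address rather than a hostname: .env.example explicitly warns against `http://localhost:...` because of IPv6 issues (on newer Node versions `localhost` may resolve to `::1` while the model server listens only on IPv4). A small helper can normalize base URLs accordingly. This is an illustrative sketch, not part of bolt.diy:

```javascript
// Rewrite "localhost" to "127.0.0.1", per the .env.example warning that
// localhost can resolve to ::1 (IPv6) and fail against IPv4-only servers.
function normalizeBaseUrl(raw) {
  const url = new URL(raw);
  if (url.hostname === "localhost") url.hostname = "127.0.0.1";
  return url.toString().replace(/\/$/, ""); // drop trailing slash for consistency
}
```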
But even with this configured, nothing happened, and I still couldn't see the Ollama, LMStudio, or OpenAI-compatible models at all.
Then I realized the config file being read was .env.production. I added the two lines above to it. Still no luck.
The docs say: Click the settings icon in the sidebar to open the settings menu
But I couldn't find any such button! It turned out I was using the 360 browser, which fills the left edge with ad icons, so I had mistaken the settings icon at the very bottom for yet another ad.
Like this:
That bottom-most settings icon belongs to bolt.diy. Seriously, who hides it like that?
Click it, enable the local-model options, and at last Ollama, LMStudio, and OpenAI-compatible models can be configured!
INFO api.llmcall Generating response Provider: OpenAILike, Model: deepseek-v3
And with that, the model is finally configured!
Testing
I used this prompt:
Build me a "Raiden"-style shoot-'em-up game page in HTML5. The plane can move left and right using the arrow keys. Scenery and obstacles scroll from top to bottom. Pressing the space bar fires bullets; when a bullet hits an obstacle, the obstacle disappears and 10 points are scored. Click "Start Game" to begin; reaching 100 points clears the level and displays "Congratulations, you win!"
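The core rules in that prompt (a bullet removes the obstacle it hits, +10 points each, win at 100) boil down to rectangle collision plus a score counter. A minimal sketch of that logic, my own illustrative version rather than anything bolt.diy generated:

```javascript
// Axis-aligned rectangle overlap: the usual hit test for bullets vs. obstacles.
function hits(a, b) {
  return a.x < b.x + b.w && a.x + a.w > b.x &&
         a.y < b.y + b.h && a.y + a.h > b.y;
}

// One step of the prompt's scoring rule: remove hit obstacles, add 10 points
// per hit, and flag a win once the score reaches 100.
function resolveHits(state) {
  for (const bullet of state.bullets) {
    state.obstacles = state.obstacles.filter((ob) => {
      if (hits(bullet, ob)) { state.score += 10; return false; }
      return true;
    });
  }
  state.won = state.score >= 100;
  return state;
}
```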
After running it, things got stuck here:
Analysis Complete
Determining Files to Read
No idea why.
But at least it really did generate a lot of files automatically.