After repeatedly suffering through Cursor rebuilding the project from scratch, I tried a different approach: generate an initial version with bolt first, then refine it with Cursor. That way I got the result I wanted in about two hours. Here is a record of the main steps, for fellow developers' reference.
Register for bolt. Open the official site: https://bolt.new/
Official site page
Describe the requirement
[Prompt] I want to build a document-QA web site like the one above, using the llamaindex, PyPDF2, vue, openai and Chroma stack. Please implement it to these requirements; the UI should follow the attached screenshot.
Here I picked a document-chat site I like as a reference, took a screenshot, and uploaded it to the chat as a supplement to the prompt. Since I have a technical background in document Q&A, I also pinned down the framework myself by specifying a stack suitable for enterprise-grade production.
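For context, the core of a stack like this is retrieval-augmented generation: extract text from PDFs (PyPDF2), split it into chunks, embed the chunks into Chroma, and at query time retrieve the most relevant chunks to pass to the OpenAI model. A minimal sketch of the chunking step, with illustrative names that are not the generated project's actual code:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so retrieval can return
    focused passages while preserving context across chunk boundaries."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks
```

In practice llamaindex handles this (plus embedding and retrieval) for you; the sketch only illustrates the idea behind the stack you are asking bolt to generate.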
[TIP] If you have no development background, don't pick the stack yourself: ask it to recommend the most suitable one, and you can still get a result you are happy with.
Code download page
Import the code project into Cursor
For installing and registering Cursor, see my previous article, which walks through the steps in detail.
Cursor interface
Analyze and summarize the project source
[Prompt] @Codebase This is a template project for a document-QA web site. Please create a design.md file in the project root and summarize the project's directory structure and key technical points in it, so you can refer to it when we discuss requirements later.
First conversation screenshot
Contents of design.md
First, try starting the project
Follow the startup steps it provides, one by one. If anything errors, paste the complete error message into the chat and apply the fix it proposes.
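For a FastAPI backend plus Vue frontend like the one generated here, the suggested startup steps typically look something like the following (directory and file names are assumptions based on the project layout seen later in the logs; follow Cursor's actual instructions):

```shell
# Backend (assumed layout): create a venv, install deps, run uvicorn
cd backend
python -m venv venv
venv\Scripts\activate            # Windows; on Linux/macOS: source venv/bin/activate
pip install -r requirements.txt
uvicorn main:app --reload        # serves on http://127.0.0.1:8000

# Frontend (assumed Vue project)
cd ../frontend
npm install
npm run dev
```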
[Prompt] How do I start this project?
Project startup conversation screenshot
(venv) PS D:\code\cursor_space\qa_chat_doc\backend> uvicorn main:app --reload
INFO:     Will watch for changes in these directories: ['D:\\code\\cursor_space\\qa_chat_doc\\backend']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [15984] using WatchFiles
Process SpawnProcess-1:
Traceback (most recent call last):
  File "E:\soft\anaconda3\Lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "E:\soft\anaconda3\Lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\uvicorn\_subprocess.py", line 78, in subprocess_started
    target(sockets=sockets)
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\uvicorn\server.py", line 62, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "E:\soft\anaconda3\Lib\asyncio\runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "E:\soft\anaconda3\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "E:\soft\anaconda3\Lib\asyncio\base_events.py", line 687, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\uvicorn\server.py", line 69, in serve
    config.load()
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\uvicorn\config.py", line 458, in load
    self.loaded_app = import_from_string(self.app)
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\uvicorn\importer.py", line 24, in import_from_string
    raise exc from None
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\uvicorn\importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "E:\soft\anaconda3\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 995, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "D:\code\cursor_space\qa_chat_doc\backend\main.py", line 7, in <module>
    from llama_index import VectorStoreIndex, SimpleDirectoryReader
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\__init__.py", line 24, in <module>
    from llama_index.indices import (
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\indices\__init__.py", line 4, in <module>
    from llama_index.indices.composability.graph import ComposableGraph
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\indices\composability\__init__.py", line 4, in <module>
    from llama_index.indices.composability.graph import ComposableGraph
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\indices\composability\graph.py", line 7, in <module>
    from llama_index.indices.base import BaseIndex
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\indices\base.py", line 6, in <module>
    from llama_index.chat_engine.types import BaseChatEngine, ChatMode
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\chat_engine\__init__.py", line 1, in <module>
    from llama_index.chat_engine.condense_plus_context import CondensePlusContextChatEngine
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\chat_engine\condense_plus_context.py", line 7, in <module>
    from llama_index.chat_engine.types import (
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\chat_engine\types.py", line 17, in <module>
    from llama_index.memory import BaseMemory
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\memory\__init__.py", line 1, in <module>
    from llama_index.memory.chat_memory_buffer import ChatMemoryBuffer
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\memory\chat_memory_buffer.py", line 9, in <module>
    from llama_index.storage.chat_store import BaseChatStore, SimpleChatStore
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\storage\__init__.py", line 3, in <module>
    from llama_index.storage.storage_context import StorageContext
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\storage\storage_context.py", line 26, in <module>
    from llama_index.vector_stores.simple import DEFAULT_PERSIST_FNAME as VECTOR_STORE_FNAME
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\vector_stores\__init__.py", line 31, in <module>
    from llama_index.vector_stores.myscale import MyScaleVectorStore
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\vector_stores\myscale.py", line 10, in <module>
    from llama_index.readers.myscale import (
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\readers\__init__.py", line 20, in <module>
    from llama_index.readers.download import download_loader
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\readers\download.py", line 10, in <module>
    from llama_index.download.module import (
  File "D:\code\cursor_space\qa_chat_doc\backend\venv\Lib\site-packages\llama_index\download\module.py", line 12, in <module>
    import pkg_resources
ModuleNotFoundError: No module named 'pkg_resources'
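The ModuleNotFoundError above is a common one: pkg_resources is provided by the setuptools package, which newer Python environments no longer install into every virtualenv by default. The usual fix, and the kind of one-liner Cursor will suggest, is simply (run inside the activated venv; this is my note, not the article's original text):

```shell
pip install setuptools
```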
Fix
Error-fix example 2:
Error feedback
Automatic code fix
Getting a model key
I use OpenAI's models. Because of network restrictions I buy access through a proxy site, https://api.juheai.top. I've been using it for a while and find it acceptable; I haven't tried many alternatives, so choose whatever channel suits you.
Proxy site home page
Model pricing
The model key for this project is configured in the .env file; change it according to your own channel.
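A .env file holds key/value pairs that the backend reads at startup, typically via python-dotenv. For an OpenAI-compatible proxy the entries would look something like this (the variable names follow the common OpenAI-SDK convention and the values are placeholders; check the project's own .env for the exact names it expects):

```
# .env — read by the backend at startup
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxx
OPENAI_API_BASE=https://api.juheai.top/v1
```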
Model configuration file
Initial UI
Document Q&A
Backend processing log
Finally, a few personal takeaways. The above is roughly the whole development process, and overall it went smoothly. The biggest difference between AI-assisted programming and coding by hand is that with AI you act as a coach, while coding by hand you are coach and trainee in one. About 80% of the time went into fixing bugs, so be mentally prepared: you will hit all sorts of problems at first. Don't panic; your job is mostly to copy and paste, and the actual fixing is the AI's. Starting with simple games like 2048 or Snake is a good way to build confidence.
Source: 阳光裂缝