A Comprehensive Tutorial on Agentic AI: From Basic Prompt Responses to Fully Autonomous Code Generation and Execution



In this tutorial, we walk through five levels of agentic architecture, from the simplest language-model call to a fully autonomous code-generating and code-executing system, and the whole tutorial is designed to run seamlessly on Google Colab. Starting with a simple "processor" that merely echoes the model's output, you will progressively build routing logic, integrate external tools, orchestrate multi-step workflows, and ultimately enable the model to plan, validate, refine, and execute its own Python code. Throughout each section, you will find detailed explanations, self-contained demo functions, and clear prompts that show how to balance human control with machine autonomy in real AI applications.


import os
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
import re
import json
import time
import random
from IPython.display import clear_output

We import the core Python and third-party libraries: os and time for environment and execution control, and Hugging Face Transformers (pipeline, AutoTokenizer, AutoModelForCausalLM) for model loading and inference. We also use re and json to parse LLM outputs, random for seeding and mock data, and clear_output to keep the Colab interface tidy.


MODEL_NAME = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

def get_model_and_tokenizer():
    if not hasattr(get_model_and_tokenizer, "model"):
        print(f"Loading model {MODEL_NAME}...")
        tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
        model = AutoModelForCausalLM.from_pretrained(
            MODEL_NAME,
            torch_dtype=torch.float16,
            device_map="auto",
            low_cpu_mem_usage=True
        )
        get_model_and_tokenizer.model = model
        get_model_and_tokenizer.tokenizer = tokenizer
        print("Model loaded successfully!")
    return get_model_and_tokenizer.model, get_model_and_tokenizer.tokenizer

We define MODEL_NAME to point at the TinyLlama 1.1B chat model and implement a lazy-loading helper, get_model_and_tokenizer, which downloads and initializes the tokenizer and model only on the first call, in half precision with automatic device placement, caches them as attributes on the function object, and returns the cached instances on every subsequent call to minimize overhead.
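A quick way to verify the caching behavior is to call the helper twice and confirm the same objects come back each time; a minimal check along these lines (not required by the tutorial itself) would be:

# First call downloads and loads the model; second call should hit the cache
model_a, tokenizer_a = get_model_and_tokenizer()
model_b, tokenizer_b = get_model_and_tokenizer()
assert model_a is model_b and tokenizer_a is tokenizer_b  # same cached instances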



def generate_text(prompt, max_length=512):
    model, tokenizer = get_model_and_tokenizer()
    messages = [{"role": "user", "content": prompt}]
    formatted_prompt = tokenizer.apply_chat_template(messages, tokenize=False)
    inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=max_length,
            do_sample=True,
            temperature=0.7,
            top_p=0.9,
        )
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
    response = generated_text.split("ASSISTANT: ")[-1].strip()
    return response

The generate_text function wraps the TinyLlama inference pipeline: it fetches the cached model and tokenizer, formats the user prompt with the chat template, tokenizes the input and moves it to the model's device, then generates a response with sampling parameters. Once generation completes, it decodes the output and extracts the assistant's reply by splitting on the "ASSISTANT: " marker.
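Before layering any agent logic on top, it is worth smoke-testing this helper directly; any prompt will do, for example:

# Illustrative smoke test of the generation helper
reply = generate_text("Explain what an AI agent is in one sentence.")
print(reply)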

At the simplest level, the code defines a straightforward text-generation pipeline that treats the model purely as a language processor. When the user supplies a prompt, the simple_processor function calls the generate_text helper to produce a free-form response and displays it directly. This level demonstrates the most basic interaction pattern: take input, produce output, with program flow remaining entirely under human control.


def simple_processor(prompt):
    """Level 1: Simple Processor - Model has no impact on program flow"""
    response = generate_text(prompt)
    return response

def demo_level1():
    print("\n" + "="*50)
    print("LEVEL 1: SIMPLE PROCESSOR DEMO")
    print("="*50)
    print("At this level, the AI has no control over program flow.")
    print("It simply takes input and produces output.\n")
    user_input = input("Enter your question or prompt: ") or "Write a short poem about artificial intelligence."
    print("\nProcessing your request...\n")
    output = simple_processor(user_input)
    print("OUTPUT:")
    print("-"*50)
    print(output)
    print("-"*50)

The simple_processor function embodies the Simple Processor level of the agent hierarchy, treating the model purely as a text generator: it accepts a user-supplied prompt, delegates to generate_text, and returns whatever the model produces, with no branching or decision logic. The accompanying demo_level1 routine provides a minimal interactive loop: it prints a clear header, requests user input (with a sensible default), calls simple_processor, and displays the raw output, demonstrating the most basic prompt-to-response workflow in which the AI has no influence over program flow.

Level 2 introduces conditional routing based on the model's own classification. The router_agent function first asks the model to classify a query as "technical", "creative", or "factual", then normalizes that classification and dispatches the query to the matching specialized handler (handle_technical_query, handle_creative_query, or handle_factual_query). This routing mechanism gives the model partial control over program flow, steering the path the subsequent interaction takes.


def router_agent(user_query):
    """Level 2: Router - Model determines basic program flow"""
    category_prompt = f"""Classify the following query into one of these categories:
'technical', 'creative', or 'factual'.
Query: {user_query}
Return ONLY the category name and nothing else."""
    category_response = generate_text(category_prompt)
    category = category_response.lower()
    if "technical" in category:
        category = "technical"
    elif "creative" in category:
        category = "creative"
    else:
        category = "factual"
    print(f"Query classified as: {category}")
    if category == "technical":
        return handle_technical_query(user_query)
    elif category == "creative":
        return handle_creative_query(user_query)
    else:
        return handle_factual_query(user_query)

def handle_technical_query(query):
    system_prompt = f"""You are a technical assistant. Provide detailed technical explanations.
User query: {query}"""
    response = generate_text(system_prompt)
    return f"[Technical Response]\n{response}"

def handle_creative_query(query):
    system_prompt = f"""You are a creative assistant. Be imaginative and inspiring.
User query: {query}"""
    response = generate_text(system_prompt)
    return f"[Creative Response]\n{response}"

def handle_factual_query(query):
    system_prompt = f"""You are a factual assistant. Provide accurate information concisely.
User query: {query}"""
    response = generate_text(system_prompt)
    return f"[Factual Response]\n{response}"

def demo_level2():
    print("\n" + "="*50)
    print("LEVEL 2: ROUTER DEMO")
    print("="*50)
    print("At this level, the AI determines basic program flow.")
    print("It decides which processing path to take.\n")
    user_query = input("Enter your question or prompt: ") or "How do neural networks work?"
    print("\nProcessing your request...\n")
    result = router_agent(user_query)
    print("OUTPUT:")
    print("-"*50)
    print(result)
    print("-"*50)

The router_agent function implements router behavior by first asking the model to classify the user's query as "technical", "creative", or "factual", then normalizing that classification and dispatching to the corresponding handler (handle_technical_query, handle_creative_query, or handle_factual_query), each of which wraps the original query in an appropriate system-style prompt before calling generate_text. The demo_level2 routine provides a clean CLI-style interface: it prints a header, accepts input (with a default), invokes router_agent, and displays the categorized response, showing how the model can exercise basic control over program flow through its choice of processing path.

Level 3 gives the model the ability to decide which external tools to invoke by embedding a JSON-based function-selection protocol. The tool_calling_agent function presents the user's question alongside a menu of candidate tools (a weather lookup, an information search, date/time retrieval, or a direct response) and instructs the model to return a valid JSON message naming the chosen tool and its parameters. After extracting the JSON object with a regular expression, the code falls back safely to a direct response if parsing fails. Finally, the model integrates the tool's result into a coherent answer.


def tool_calling_agent(user_query):
    """Level 3: Tool Calling - Model determines how functions are executed"""
    tool_selection_prompt = f"""Based on the user query, select the most appropriate tool from the following list:
1. get_weather: Get the current weather for a location
2. search_information: Search for specific information on a topic
3. get_date_time: Get current date and time
4. direct_response: Provide a direct response without using tools

USER QUERY: {user_query}

INSTRUCTIONS:
- Return your response in valid JSON format
- Include the tool name and any required parameters
- For get_weather, include location parameter
- For search_information, include query and depth parameter (basic or detailed)
- For get_date_time, include timezone parameter (optional)
- For direct_response, no parameters needed

Example output format: {{"tool": "get_weather", "parameters": {{"location": "New York"}}}}"""
    tool_selection_response = generate_text(tool_selection_prompt)
    try:
        json_match = re.search(r'({.*})', tool_selection_response, re.DOTALL)
        if json_match:
            tool_selection = json.loads(json_match.group(1))
        else:
            print("Could not parse tool selection. Defaulting to direct response.")
            tool_selection = {"tool": "direct_response", "parameters": {}}
    except json.JSONDecodeError:
        print("Invalid JSON in tool selection. Defaulting to direct response.")
        tool_selection = {"tool": "direct_response", "parameters": {}}
    tool_name = tool_selection.get("tool", "direct_response")
    parameters = tool_selection.get("parameters", {})
    print(f"Selected tool: {tool_name}")
    if tool_name == "get_weather":
        location = parameters.get("location", "Unknown")
        tool_result = get_weather(location)
    elif tool_name == "search_information":
        query = parameters.get("query", user_query)
        depth = parameters.get("depth", "basic")
        tool_result = search_information(query, depth)
    elif tool_name == "get_date_time":
        timezone = parameters.get("timezone", "UTC")
        tool_result = get_date_time(timezone)
    else:
        return generate_text(f"Please provide a helpful response to: {user_query}")
    final_prompt = f"""User Query: {user_query}
Tool Used: {tool_name}
Tool Result: {json.dumps(tool_result)}

Based on the user's query and the tool result above, provide a helpful response."""
    final_response = generate_text(final_prompt)
    return final_response

def get_weather(location):
    weather_conditions = ["Sunny", "Partly cloudy", "Overcast", "Light rain",
                          "Heavy rain", "Thunderstorms", "Snowy", "Foggy"]
    temperatures = {
        "cold": list(range(-10, 10)),
        "mild": list(range(10, 25)),
        "hot": list(range(25, 40))
    }
    location_hash = sum(ord(c) for c in location)
    condition_index = location_hash % len(weather_conditions)
    season = ["winter", "spring", "summer", "fall"][location_hash % 4]
    temp_range = temperatures["cold"] if season in ["winter", "fall"] else temperatures["hot"] if season == "summer" else temperatures["mild"]
    temperature = random.choice(temp_range)
    return {
        "location": location,
        "temperature": f"{temperature}°C",
        "conditions": weather_conditions[condition_index],
        "humidity": f"{random.randint(30, 90)}%"
    }

def search_information(query, depth="basic"):
    mock_results = [
        f"First result about {query}",
        f"Second result discussing {query}",
        f"Third result analyzing {query}"
    ]
    if depth == "detailed":
        mock_results.extend([
            f"Fourth detailed analysis of {query}",
            f"Fifth comprehensive overview of {query}",
            f"Sixth academic paper on {query}"
        ])
    return {
        "query": query,
        "results": mock_results,
        "depth": depth,
        "sources": [f"source{i}.com" for i in range(1, len(mock_results) + 1)]
    }

def get_date_time(timezone="UTC"):
    current_time = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime())
    return {
        "current_datetime": current_time,
        "timezone": timezone
    }

def demo_level3():
    print("\n" + "="*50)
    print("LEVEL 3: TOOL CALLING DEMO")
    print("="*50)
    print("At this level, the AI selects which tools to use and with what parameters.")
    print("It can process the results from tools to create a final response.\n")
    user_query = input("Enter your question or prompt: ") or "What's the weather like in San Francisco?"
    print("\nProcessing your request...\n")
    result = tool_calling_agent(user_query)
    print("OUTPUT:")
    print("-"*50)
    print(result)
    print("-"*50)

In the Level 3 implementation, the tool_calling_agent function prompts the model to choose from a predefined set of utilities, such as a weather lookup, a mock web search, or date/time retrieval, by returning a JSON object naming the selected tool and its parameters. It then parses that JSON safely, invokes the corresponding Python function to obtain structured data, and makes a follow-up model call that weaves the tool's output into a coherent, user-facing response.
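To see the selection protocol in isolation, the following sketch runs the same regex-and-fallback parsing against a hand-written model response; the sample string is made up purely for illustration:

import re
import json

# A hypothetical raw model response that wraps the JSON in extra prose
sample_response = 'Sure! {"tool": "get_weather", "parameters": {"location": "San Francisco"}}'

match = re.search(r'({.*})', sample_response, re.DOTALL)
try:
    selection = json.loads(match.group(1)) if match else {"tool": "direct_response", "parameters": {}}
except json.JSONDecodeError:
    # Malformed JSON degrades gracefully to a direct response
    selection = {"tool": "direct_response", "parameters": {}}

print(selection["tool"])        # -> get_weather
print(selection["parameters"])  # -> {'location': 'San Francisco'}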

Level 4 extends the tool-calling pattern into a multi-step agent that manages its own workflow and state. The MultiStepAgent class maintains an internal memory of user inputs, tool outputs, and agent actions. Each iteration builds a planning prompt that summarizes the entire memory and asks the model either to choose one of several tools, such as a simulated web search, information extraction, text summarization, or report creation, or to declare the task complete and produce the final output. After executing the chosen tool and appending its result back to memory, the process repeats until the model signals a "complete" action or the maximum number of steps is reached. Finally, the agent compiles its memory into a coherent final response. This structure shows how an LLM can orchestrate complex multi-stage processes while consulting external functions and refining its plan based on earlier results.


class MultiStepAgent:
    """Level 4: Multi-Step Agent - Model controls iteration and program continuation"""
    def __init__(self):
        self.tools = {
            "search_web": self.search_web,
            "extract_info": self.extract_info,
            "summarize_text": self.summarize_text,
            "create_report": self.create_report
        }
        self.memory = []
        self.max_steps = 5

    def run(self, user_task):
        self.memory.append({"role": "user", "content": user_task})
        steps_taken = 0
        while steps_taken < self.max_steps:
            # NOTE: the source listing is truncated from this point on; the loop body,
            # tool stubs, and demo_level4 below are a minimal reconstruction based on
            # the description in the surrounding text, not the original code.
            steps_taken += 1
            context = "\n".join(f"{m['role']}: {m['content']}" for m in self.memory)
            planning_prompt = f"""Task history:
{context}

Choose the next action. Available tools: search_web, extract_info, summarize_text, create_report.
Respond ONLY with JSON: {{"action": "<tool name, or 'complete' to finish>", "input": "<input for the tool>"}}"""
            decision = generate_text(planning_prompt)
            match = re.search(r'({.*})', decision, re.DOTALL)
            try:
                action = json.loads(match.group(1)) if match else {"action": "complete"}
            except json.JSONDecodeError:
                action = {"action": "complete"}
            if action.get("action") == "complete" or action.get("action") not in self.tools:
                break
            tool_result = self.tools[action["action"]](action.get("input", ""))
            self.memory.append({"role": "system", "content": f"{action['action']} result: {tool_result}"})
        final_context = "\n".join(f"{m['role']}: {m['content']}" for m in self.memory)
        return generate_text(f"Based on this task history, write the final response:\n{final_context}")

    # Simulated tools: each delegates to the model so the demo stays self-contained
    def search_web(self, query):
        return generate_text(f"Simulate concise web search results for: {query}")

    def extract_info(self, text):
        return generate_text(f"Extract the key facts from: {text}")

    def summarize_text(self, text):
        return generate_text(f"Summarize this concisely: {text}")

    def create_report(self, content):
        return generate_text(f"Create a short structured report from: {content}")

def demo_level4():
    print("\n" + "="*50)
    print("LEVEL 4: MULTI-STEP AGENT DEMO")
    print("="*50)
    print("At this level, the AI controls iteration and program continuation.")
    print("It decides which tools to run, in what order, and when to stop.\n")
    user_task = input("Enter a research or writing task: ") or "Research renewable energy trends and create a short report"
    print("\nProcessing your request...\n")
    agent = MultiStepAgent()
    result = agent.run(user_task)
    print("OUTPUT:")
    print("-"*50)
    print(result)
    print("-"*50)

The MultiStepAgent class maintains an evolving memory of user input and tool outputs, then repeatedly prompts the LLM to decide its next action, whether to search the web, extract information, summarize text, create a report, or finish the task, executing the chosen tool and appending its result to memory until the task completes or the step limit is reached. In doing so, it demonstrates how a Level 4 agent orchestrates multi-step workflows by letting the model control iteration and program continuation.
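Outside the demo menu, the agent can also be driven directly; the task string below is just an example:

# Direct invocation of the multi-step agent (example task)
agent = MultiStepAgent()
report = agent.run("Summarize the pros and cons of solar versus wind power")
print(report)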

At the highest level, the AutonomousAgent class demonstrates a closed-loop system in which the model not only plans and executes but also generates, validates, refines, and runs new Python code. After recording the user's task, the agent asks the model to produce a detailed plan, then prompts it to generate self-contained solution code, automatically stripping any Markdown formatting. A subsequent validation step queries the model about syntax or logic problems; if issues are found, the model is asked to refine the code. The validated code is then wrapped in sandbox utilities, such as a safe print that captures output to a buffer and result-capture logic, and executed in a restricted local environment. Finally, the agent synthesizes a professional report explaining what was done, how it was accomplished, and the final results. This level demonstrates a truly autonomous AI system that can extend its own capabilities through dynamic code creation and execution.


class AutonomousAgent:
    """Level 5: Fully Autonomous Agent - Model creates & executes new code"""
    def __init__(self):
        self.memory = []

    def run(self, user_task):
        self.memory.append({"role": "user", "content": user_task})
        print(" Planning solution approach...")
        planning_message = self.plan_solution(user_task)
        self.memory.append({"role": "assistant", "content": planning_message})
        print(" Generating solution code...")
        generated_code = self.generate_solution_code()
        self.memory.append({"role": "assistant", "content": f"Generated code: ```python\n{generated_code}\n```"})
        print(" Validating code...")
        validation_result = self.validate_code(generated_code)
        if not validation_result["valid"]:
            print(" Code validation found issues - refining...")
            refined_code = self.refine_code(generated_code, validation_result["issues"])
            self.memory.append({"role": "assistant", "content": f"Refined code: ```python\n{refined_code}\n```"})
            generated_code = refined_code
        else:
            print(" Code validation passed")
        try:
            print(" Executing solution...")
            execution_result = self.safe_execute_code(generated_code, user_task)
            self.memory.append({"role": "system", "content": f"Execution result: {execution_result}"})
            # Generate a final report
            print(" Creating final report...")
            final_report = self.create_final_report(execution_result)
            return final_report
        except Exception as e:
            return f"Error executing the solution: {str(e)}\n\nGenerated code was:\n```python\n{generated_code}\n```"

    def plan_solution(self, task):
        prompt = f"""Task: {task}
You are an autonomous problem-solving agent. Create a detailed plan to solve this task. Include:
1. Breaking down the task into subtasks
2. What algorithms or approaches you'll use
3. What data structures are needed
4. Any external resources or libraries required
5. Expected challenges and how to address them
Provide a step-by-step plan.
"""
        return generate_text(prompt)

    def generate_solution_code(self):
        context = "Task and planning information:\n"
        for item in self.memory:
            if item["role"] == "user":
                context += f"USER TASK: {item['content']}\n\n"
            elif item["role"] == "assistant":
                context += f"PLANNING: {item['content']}\n\n"
        prompt = f"""{context}
Generate clean, efficient Python code that solves this task. Include comments to explain the code.
The code should be self-contained and able to run inside a Python script or notebook.
Only include the Python code itself without any markdown formatting.
"""
        code = generate_text(prompt)
        code = re.sub(r'^```python\n|```$', '', code, flags=re.MULTILINE)
        return code

    def validate_code(self, code):
        prompt = f"""Code to validate:
```python
{code}
```
Examine the code for the following issues:
1. Syntax errors
2. Logic errors
3. Inefficient implementations
4. Security concerns
5. Missing error handling
6. Import statements for unavailable libraries
If the code has any issues, describe them in detail. If the code looks good, state "No issues found."
"""
        validation_response = generate_text(prompt)
        if "no issues" in validation_response.lower() or "code looks good" in validation_response.lower():
            return {"valid": True, "issues": None}
        else:
            return {"valid": False, "issues": validation_response}

    def refine_code(self, original_code, issues):
        prompt = f"""Original code:
```python
{original_code}
```
Issues identified:
{issues}
Please provide a corrected version of the code that addresses these issues.
Only include the Python code itself without any markdown formatting.
"""
        refined_code = generate_text(prompt)
        refined_code = re.sub(r'^```python\n|```$', '', refined_code, flags=re.MULTILINE)
        return refined_code

    def safe_execute_code(self, code, user_task):
        safe_imports = """
# Standard library imports
import math
import random
import re
import time
import json
from datetime import datetime

# Define a function to capture printed output
captured_output = []
original_print = print

def safe_print(*args, **kwargs):
    output = " ".join(str(arg) for arg in args)
    captured_output.append(output)
    original_print(output)

print = safe_print

# Define a result variable to store the final output
result = None

# Function to store the final result
def store_result(value):
    global result
    result = value
    return value
"""
        result_capture = """
# Store the final result if not already done
if 'result' not in locals() or result is None:
    try:
        # Look for variables that might contain the final result
        potential_results = [var for var in locals() if not var.startswith('_') and var not in
                             ['math', 'random', 're', 'time', 'json', 'datetime',
                              'captured_output', 'original_print', 'safe_print',
                              'result', 'store_result']]
        if potential_results:
            # Use the last defined variable as the result
            store_result(locals()[potential_results[-1]])
    except:
        pass
"""
        full_code = safe_imports + "\n# User code starts here\n" + code + "\n\n" + result_capture
        code_lines = code.split('\n')
        first_lines = code_lines[:3]
        print(f"\nExecuting (first 3 lines):\n{first_lines}")
        local_env = {}
        try:
            exec(full_code, {}, local_env)
            return {
                "output": local_env.get('captured_output', []),
                "result": local_env.get('result', "No explicit result returned")
            }
        except Exception as e:
            return {"error": str(e)}

    def create_final_report(self, execution_result):
        if isinstance(execution_result.get('output'), list):
            output_text = "\n".join(execution_result.get('output', []))
        else:
            output_text = str(execution_result.get('output', ''))
        result_text = str(execution_result.get('result', ''))
        error_text = execution_result.get('error', '')
        context = "Task history:\n"
        for item in self.memory:
            if item["role"] == "user":
                context += f"USER TASK: {item['content']}\n\n"
        prompt = f"""{context}
EXECUTION OUTPUT:
{output_text}

EXECUTION RESULT:
{result_text}

{f"ERROR: {error_text}" if error_text else ""}

Create a final report that explains the solution to the original task. Include:
1. What was done
2. How it was accomplished
3. The final results
4. Any insights or conclusions drawn from the analysis
Format the report in a professional, easy to read manner.
"""
        return generate_text(prompt)

def demo_level5():
    print("\n" + "="*50)
    print("LEVEL 5: FULLY AUTONOMOUS AGENT DEMO")
    print("="*50)
    print("At this level, the AI generates and executes code to solve complex problems.")
    print("It can create, validate, refine, and run custom code solutions.\n")
    user_task = input("Enter a data analysis or computational task: ") or "Analyze a dataset of numbers [10, 45, 65, 23, 76, 12, 89, 32, 50] and create visualizations of the distribution"
    print("\nProcessing your request... (this may take a minute or two)\n")
    agent = AutonomousAgent()
    result = agent.run(user_task)
    print("\nFINAL REPORT:")
    print("-"*50)
    print(result)
    print("-"*50)

The AutonomousAgent class embodies full autonomy: it maintains a running memory of the user's task and systematically orchestrates five core phases: planning, code generation, validation, safe execution, and reporting. When a run starts, the agent prompts the model for a detailed solution plan and stores it in memory. It then asks the model to produce self-contained Python code based on that plan, strips any Markdown formatting, and validates the code by querying the model about syntax, logic, performance, and security concerns. If validation surfaces problems, the agent instructs the model to refine the code until it passes. The final code is wrapped in a sandboxed execution harness, complete with a captured-output buffer and automatic result extraction, and run in an isolated local environment. Finally, the agent generates a polished, professional report by feeding the execution results back into the model, explaining what was done, how it was accomplished, and what insights were gained. The accompanying demo_level5 function provides a simple interactive loop: it accepts a user task, runs the agent, and presents the comprehensive final report.
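As with Level 4, the autonomous agent can be invoked directly with any computational task; the task below is illustrative:

# Direct invocation of the fully autonomous agent (example task)
agent = AutonomousAgent()
report = agent.run("Compute descriptive statistics for the list [3, 7, 1, 9, 4] and report the mean and median")
print(report)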

def main():
    while True:
        clear_output(wait=True)
        print("\n" + "="*50)
        print("AI AGENT LEVELS DEMO")
        print("="*50)
        print("\nThis notebook demonstrates the 5 levels of AI agents:")
        print("1. Simple Processor - Model has no impact on program flow")
        print("2. Router - Model determines basic program flow")
        print("3. Tool Calling - Model determines how functions are executed")
        print("4. Multi-Step Agent - Model controls iteration and program continuation")
        print("5. Fully Autonomous Agent - Model creates & executes new code")
        print("6. Quit")
        choice = input("\nSelect a level to demo (1-6): ")
        if choice == "1":
            demo_level1()
        elif choice == "2":
            demo_level2()
        elif choice == "3":
            demo_level3()
        elif choice == "4":
            demo_level4()
        elif choice == "5":
            demo_level5()
        elif choice == "6":
            print("\nThank you for exploring the AI Agent levels!")
            break
        else:
            print("\nInvalid choice. Please select 1-6.")
        input("\nPress Enter to return to the main menu...")

if __name__ == "__main__":
    main()

Finally, the main function presents a simple interactive menu loop: it clears the Colab output for readability, lists all five agent levels plus a quit option, dispatches the user's choice to the corresponding demo function, and then waits for input before returning to the menu. This structure provides a centralized, CLI-style interface that lets you explore each agent level in sequence without any manual orchestration.

By working through these five levels, we have gained hands-on insight into the principles of agentic AI and the trade-offs among control, flexibility, and autonomy. We have seen systems evolve from simple prompt-response behavior into complex decision-making pipelines and even self-modifying code execution. Whether you aim to build intelligent assistants, construct data pipelines, or experiment with emerging AI capabilities, this progression offers a solid roadmap for designing robust, scalable agents.

Source: 51CTO一点号
