Calling Out to LLMs
Codegen natively integrates with large language models (LLMs) via the codebase.ai(…) method, which lets you use an LLM to help generate, modify, and analyze code.
Configuration
Before using AI capabilities, you need to provide an OpenAI API key via codebase.set_ai_key(…):
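A minimal sketch (the key value is a placeholder, and codebase is assumed to be the Codebase instance available inside your codemod):

```python
# Register the OpenAI API key before making any AI-powered calls
codebase.set_ai_key("sk-your-openai-key")  # placeholder; use your own key
```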
Calling Codebase.ai(…)
The Codebase.ai(…) method takes three key arguments:
- prompt: Clear instruction for what you want the AI to do
- target: The symbol (function, class, etc.) being operated on - its source code will be provided to the AI
- context: Additional information you want to provide to the AI, which you can gather using GraphSitter’s analysis tools
Codegen does not automatically provide any context to the LLM. It does not “understand” your codebase; it only sees the context you pass in.
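For example, a minimal call looks like this (the function name is illustrative, and the snippet assumes the usual symbol lookup and edit helpers):

```python
# Pick a concrete function to operate on
function = codebase.get_function("process_data")  # hypothetical function name

# Only the prompt and the target's source are sent to the LLM
new_source = codebase.ai(
    prompt="Add input validation and error handling to this function",
    target=function,
)

# Apply the result to the codebase
function.edit(new_source)
```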
The context parameter can include:
- A single symbol (its source code will be provided)
- A list of related symbols
- A dictionary mapping descriptions to symbols/values
- Nested combinations of the above
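For instance, a single call can mix symbols, lists, and labeled values (the symbol names shown are illustrative assumptions):

```python
helper = codebase.get_function("normalize_record")                  # hypothetical helper function
models = [codebase.get_class("User"), codebase.get_class("Order")]  # hypothetical classes

result = codebase.ai(
    prompt="Refactor this function to reuse the helper and model classes in context",
    target=function,
    context={
        "helper to reuse": helper,   # single symbol
        "related models": models,    # list of symbols
        "style notes": "Prefer early returns and explicit type hints",  # plain value
    },
)
```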
How Context Works
The AI doesn’t automatically know about your codebase. Instead, you provide relevant context in two steps (see the sketch below):
- Use GraphSitter’s static analysis to gather information about the target symbol.
- Pass that information to the AI via the context argument.
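A sketch of both steps, assuming the usual GraphSitter symbol properties (dependencies, usages) are available:

```python
# Step 1: gather context with static analysis
function = codebase.get_function("process_order")  # hypothetical function name
deps = function.dependencies                        # symbols this function depends on (assumed property)
callers = list(function.usages)                     # places that use this function (assumed property)

# Step 2: pass that information to the AI alongside the target
new_source = codebase.ai(
    prompt="Refactor this function without breaking its callers",
    target=function,
    context={
        "dependencies": deps,
        "callers": callers,
    },
)
function.edit(new_source)
```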
Common Use Cases
Code Generation
Generate new code or refactor existing code:
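For example, a sketch that refactors one function (the symbol name is illustrative):

```python
# Ask the AI to refactor an existing function and apply the result
function = codebase.get_function("fetch_data")  # hypothetical function name
new_impl = codebase.ai(
    prompt="Refactor this function to use async/await instead of callbacks",
    target=function,
)
function.edit(new_impl)
```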
Documentation
Generate and format documentation:
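For example, a sketch that writes a docstring (set_docstring is an assumed docstring-editing helper):

```python
# Generate a docstring for a function and attach it
function = codebase.get_function("parse_config")  # hypothetical function name
docstring = codebase.ai(
    prompt="Write a concise Google-style docstring for this function",
    target=function,
)
function.set_docstring(docstring)  # assumed docstring helper
```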
Code Analysis and Improvement
Use AI to analyze and improve code:
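For example, a sketch that targets long functions for simplification (the length threshold is arbitrary):

```python
# Scan every function and ask the AI to simplify the long ones
for function in codebase.functions:
    if len(function.source) > 1000:  # illustrative threshold
        improved = codebase.ai(
            prompt="Simplify this function and remove duplicated logic",
            target=function,
        )
        function.edit(improved)
```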
Contextual Modifications
Make changes with full context awareness:
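For example, a sketch that passes class and call-site information along with the target (the symbol names and the parent_class / call_sites properties are illustrative assumptions):

```python
# Modify a method using knowledge of its class and its call sites
method = codebase.get_class("OrderService").get_method("submit")  # hypothetical names
new_source = codebase.ai(
    prompt="Update this method to validate its inputs before processing",
    target=method,
    context={
        "parent class": method.parent_class,    # assumed analysis property
        "call sites": list(method.call_sites),  # assumed analysis property
    },
)
method.edit(new_source)
```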
Best Practices
- Provide Relevant Context: pass only the symbols and information the AI actually needs for the task.
- Gather Comprehensive Context: use GraphSitter’s static analysis to collect related symbols before calling the AI.
- Handle AI Limits: keep the default limit of 150 AI requests per codemod execution in mind, and raise it with set_session_options(…) when needed.
- Review Generated Code: always review AI-generated code before applying it.
Limitations and Safety
- The AI doesn’t automatically know about your codebase - you must provide relevant context
- AI-generated code should always be reviewed
- Default limit of 150 AI requests per codemod execution
- Use set_session_options(…) to adjust these limits (see the example below)
You can also use codebase.set_session_options(…) to increase the execution time and the number of operations allowed in a session. This is useful for larger tasks or more complex operations that require additional resources. Adjust the max_seconds and max_transactions parameters to suit your needs:
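A sketch of raising the limits (max_seconds and max_transactions come from the text above; max_ai_requests is an assumed name for the AI request limit):

```python
# Raise the session limits for a larger codemod run
codebase.set_session_options(
    max_ai_requests=200,   # assumed parameter name for the AI request limit
    max_seconds=300,       # allow up to five minutes of execution time
    max_transactions=500,  # allow more code modifications in one session
)
```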