You can monitor what your agents are doing in real time.
Access the Dashboard: Open your web browser and navigate to http://localhost:5006
To enable it, create the agent with the dashboard=True parameter:
agent = Agent(
    name="MyAgent",
    instructions="You are a helpful assistant.",
    functions=[],
    dashboard=True  # Enable the dashboard
)
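If you want the agent to do more than chat, you can pass callables in functions. Here is a minimal sketch, assuming the framework accepts plain Python functions as tools (the get_time helper below is purely illustrative, not part of the framework):

from datetime import datetime, timezone

def get_time() -> str:
    """Return the current UTC time as an ISO-8601 string."""
    return datetime.now(timezone.utc).isoformat()

agent = Agent(
    name="MyAgent",
    instructions="You are a helpful assistant.",
    functions=[get_time],  # the agent can now call get_time as a tool
    dashboard=True,
)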
Of course, this assumes you already have your swarm buzzing happily along.
All of the agent parameters are listed below; an example configuration that combines several of them follows the list:
• name (Type: str): The name of the agent. Default is "Agent".
• model (Type: str): Specifies the language model to be used by the agent. Default is "gpt-4o".
• instructions (Type: Union[str, Callable[[], str]]): A string or callable that provides instructions for the agent's behavior. Default is "You are a helpful agent.".
• functions (Type: List[AgentFunction]): A list of functions that the agent can call to perform tasks.
• tool_choice (Type: str): Determines how the agent chooses tools:
  • "required": Forces the LLM to choose one of the provided functions.
  • "auto": Lets the LLM decide if any tool needs to be called.
  • "none": No function calls will be made.
• parallel_tool_calls (Type: bool): If set to True, allows the agent to call multiple functions simultaneously. Default is True.
• context_variables (Type: dict): A dictionary of additional context variables available for functions and agent instructions. Default is {}.
• max_turns (Type: int): The maximum number of conversational turns allowed. Default is infinity (float("inf")).
• model_override (Type: str): An optional string to override the model being used by the agent. Default is None.
• execute_tools (Type: bool): If set to False, interrupts execution and immediately returns a message when an agent tries to call a function. Default is True.
• stream (Type: bool): If set to True, enables streaming responses from the agent. Default is False.
• debug (Type: bool): Enables debugging mode for additional insights during execution.
• id (Type: str): A unique identifier for the agent instance.
• llm (Type: LanguageModelInstance): The specific language model instance used by the agent.
• template (Type: str): A template used for formatting responses.
• max_loops (Type: int): Maximum number of loops the agent can run.
• stopping_condition (Type: Callable): A callable function that determines when to stop looping.
• loop_interval (Type: float): Interval in seconds between loops.
• retry_attempts (Type: int): Number of retry attempts for failed LLM calls.
• retry_interval (Type: float): Interval in seconds between retry attempts.
• return_history (Type: bool): Indicates whether to return the conversation history.
• stopping_token (Type: str): A token that stops the agent from looping when present in the response.
• dynamic_loops (Type: bool): Allows dynamic determination of loop counts based on conditions.
• interactive (Type: bool): Indicates whether to run in interactive mode.
• dashboard (Type: bool): Indicates whether to display a dashboard during operation.
These parameters provide flexibility in defining how an Agent behaves and interacts with users or other agents.
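As promised above, here is a sketch that combines several of the documented parameters in one configuration. It only uses parameters from the list, but whether they can all be passed together as keyword arguments depends on your framework version, so treat it as illustrative rather than canonical (word_count is a made-up tool for the example):

def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

research_agent = Agent(
    name="ResearchAgent",
    model="gpt-4o",
    instructions="You are a meticulous research assistant.",
    functions=[word_count],
    tool_choice="auto",        # let the LLM decide when to call a tool
    parallel_tool_calls=True,  # allow several tool calls in one turn
    context_variables={"user_name": "Alice"},
    max_turns=10,              # cap the conversation length
    retry_attempts=3,          # retry failed LLM calls
    retry_interval=2.0,        # seconds between retries
    stream=False,
    debug=True,                # extra insight while developing
    dashboard=True,            # watch it live at http://localhost:5006
)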