Environments Package

The Environments package defines the simulation spaces, boundary conditions, and environmental forces that affect agent behavior. Environments provide the context in which multi-agent interactions occur and can include obstacles, boundaries, fields, and external forces.

Environments Module

The environments module provides different types of environments for multi-agent swarm simulations. These environments define the physical space in which agents operate and interact with their surroundings.

Available Environments

  • Environment : Abstract base class for defining environments.

  • EmptyEnvironment : A static environment with no external forces acting on agents.

  • ShepherdingEnvironment : A dynamic environment where the goal moves along a linear path.

Usage

To use an environment, import it and instantiate it with the path to a YAML configuration file:

from swarmsim.Environments import ShepherdingEnvironment
env = ShepherdingEnvironment(config_path="config.yaml")
env.update()
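
In a typical simulation loop, the environment is advanced once per step and queried for the forces it exerts on the agents. The loop below is a minimal sketch; the agents variable and the 2000-step horizon are placeholders for whatever populations and duration the simulation actually uses:

# Minimal sketch of a per-step loop; `agents` is a placeholder for the
# simulation's population objects and 2000 is an arbitrary horizon.
for step in range(2000):
    env.update()                     # advance the environment (e.g. move the goal)
    forces = env.get_forces(agents)  # (num_agents, 2) array of environmental forces
    # ... integrate the agent dynamics using `forces` ...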

Modules

  • base_environment : Defines the abstract Environment class, which serves as a base for all environments.

  • empty_environment : Implements EmptyEnvironment, which has no external forces.

  • shepherding_environment : Implements ShepherdingEnvironment, where the goal moves dynamically.

Examples

Example YAML configuration:

environment:
    dimensions: [100, 100]
    goal_radius: 5
    goal_pos: [0, 0]
    final_goal_pos: [20, -20]
    num_steps: 2000

This will create an environment where the goal moves from (0, 0) to (20, -20) over 2000 steps.

class swarmsim.Environments.Environment(config_path)[source]

Bases: ABC

Abstract base class for an environment in which agents operate.

This class reads configuration parameters from a YAML file and requires derived classes to implement the get_forces, get_info, and update methods.

Parameters:

config_path (str) – Path to the YAML configuration file containing environment parameters.

Variables:

dimensions (tuple of (int, int)) – The dimensions of the environment in 2D (width, height), loaded from the configuration file.

Config requirements:

dimensions (tuple of (int, int), optional) – The dimensions of the environment in 2D. Default is (100, 100).

Raises:
  • FileNotFoundError – If the configuration file is not found.

  • KeyError – If required environment parameters are missing in the configuration file.

Examples

Example YAML configuration:

environment:
    dimensions: [200, 150]

This will set the environment dimensions to (200, 150).

__init__(config_path)[source]

Initializes the Environment with configuration parameters from a YAML file.

Parameters:

config_path (str) – Path to the YAML configuration file.

abstractmethod get_forces(agents)[source]

Computes the forces exerted by the environment on the agents.

This method must be implemented by subclasses to define the environmental forces acting on agents in the simulation.

Parameters:

agents (list) – A list of agent objects for which the environmental forces are being computed.

Returns:

np.ndarray – An array representing the forces exerted on each agent.

abstractmethod get_info()[source]

Retrieves environment-specific information for logging.

This method must be implemented by subclasses to return relevant environment data.

Returns:

dict – A dictionary containing information about the environment.

abstractmethod update()[source]

Updates the environment state.

This method must be implemented by subclasses to define how the environment evolves over time.
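
For illustration, a minimal concrete subclass could look like the sketch below. The constant drift force and its value are assumptions made for the example; only the three abstract methods and the config_path constructor come from the documented interface.

import numpy as np

from swarmsim.Environments import Environment


class ConstantDriftEnvironment(Environment):
    """Illustrative environment that applies the same drift force to every agent."""

    def __init__(self, config_path):
        super().__init__(config_path)
        self.drift = np.array([0.1, 0.0])  # assumed constant force, not a package parameter

    def get_forces(self, agents):
        # One identical force row per agent: shape (num_agents, 2)
        return np.tile(self.drift, (len(agents), 1))

    def get_info(self):
        return {"drift": self.drift}

    def update(self):
        pass  # nothing evolves over time in this static example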

class swarmsim.Environments.EmptyEnvironment(config_path)[source]

Bases: Environment

An environment with no forces acting on the agents.

This environment represents a simple static environment where agents are not influenced by external forces. The environment dimensions are loaded from a YAML configuration file.

Parameters:

config_path (str) – Path to the YAML configuration file containing environment parameters.

Variables:

dimensions (tuple of (int, int)) – The dimensions of the environment in 2D (width, height), inherited from the Environment base class.

Config requirements:

dimensions (tuple of (int, int), optional) – The dimensions of the environment in 2D. Default is (50, 50).

Raises:
  • FileNotFoundError – If the configuration file is not found.

  • KeyError – If required environment parameters are missing in the configuration file.

Examples

Example YAML configuration:

environment:
    dimensions: [50, 50]

This will set the environment dimensions to (50, 50).

__init__(config_path)[source]

Initializes the EmptyEnvironment with the configuration parameters from a YAML file.

Parameters:

config_path (str) – Path to the YAML configuration file.

get_forces(agents)[source]

Computes the forces exerted by the environment on the agents.

Since this is an empty environment, no external forces are exerted on the agents. This method returns an array of zeros, representing zero force applied to each agent.

Parameters:

agents (list) – A list of agent objects for which the environmental forces are being computed.

Returns:

np.ndarray – An array of shape (num_agents, 2), where each row is [0, 0] indicating no force.

get_info()[source]

Retrieves environment-specific information for logging.

Since this is an empty environment, it returns an empty dictionary.

Returns:

dict – An empty dictionary {}.

update()[source]

Updates the environment state.

Since this is a static environment with no forces or dynamic elements, this method does nothing.
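
A quick check of the force-free behavior, as a sketch: it assumes a config.yaml with an environment section and that get_forces only needs the number of agents passed in (a real simulation would pass its population objects instead of the placeholders below):

import numpy as np

from swarmsim.Environments import EmptyEnvironment

env = EmptyEnvironment("config.yaml")

agents = [object() for _ in range(10)]  # placeholder agents for the sketch
forces = env.get_forces(agents)

print(forces.shape)            # expected: (10, 2)
print(np.allclose(forces, 0))  # expected: True, no environmental forces
env.update()                   # no-op for a static environment
print(env.get_info())          # expected: {}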

class swarmsim.Environments.ShepherdingEnvironment(config_path)[source]

Bases: EmptyEnvironment

Dynamic environment for shepherding tasks with moving goal positions.

This environment extends EmptyEnvironment to support shepherding scenarios where the goal position moves along a predefined trajectory. The goal starts at an initial position and moves toward a final destination over a specified number of simulation steps, creating dynamic targets for shepherding controllers.

The environment is designed for multi-agent shepherding tasks where herder agents must guide target agents to a moving goal region. The linear goal movement creates realistic scenarios where the target location changes predictably over time.

Parameters:

config_path (str) – Path to the YAML configuration file containing environment parameters.

Variables:
  • goal_radius (float) – Radius of the goal region where agents are considered to have reached the target.

  • goal_pos (np.ndarray) – Current 2D position of the goal center.

  • final_goal_pos (np.ndarray) – Final 2D coordinates where the goal will stop moving.

  • num_steps (int) – Total number of simulation steps over which the goal moves.

  • step_count (int) – Current simulation step counter for tracking goal movement progress.

  • start_step (int) – Simulation step at which goal movement begins (allows for initial stationary period).

  • direction (np.ndarray) – Per-step displacement vector from the initial to the final goal position; see Notes for how it is computed.

Config Requirements:
  • The YAML configuration file must contain the following parameters under the environment section

  • goal_radius (float, optional) – Radius of the goal region in environment units. Default is 5.0.

  • goal_pos (list of float, optional) – Initial 2D coordinates [x, y] of the goal position. Default is [0, 0].

  • final_goal_pos (list of float) – Final 2D coordinates [x, y] where the goal movement terminates. Required parameter.

  • num_steps (int, optional) – Number of simulation steps for the complete goal trajectory. Default is 2000.

  • start_step (int, optional) – Simulation step when goal movement begins. Default is 0.

  • dimensions (list of int, optional) – Environment dimensions [width, height]. Inherited from EmptyEnvironment.

Notes

The goal movement follows a linear trajectory:

goal_pos(t) = initial_pos + (t - start_step) / num_steps * (final_pos - initial_pos)

where t is the current step count. The goal remains stationary before start_step and after reaching the final position.
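
For a concrete sense of the trajectory, the formula can be evaluated directly with the values from the package-level example (initial position [0, 0], final position [20, -20], 2000 steps, start_step 0); the snippet below only reproduces the arithmetic and is independent of the package:

import numpy as np

initial_pos = np.array([0.0, 0.0])
final_pos = np.array([20.0, -20.0])
num_steps, start_step = 2000, 0

def goal_pos(t):
    # Clamp progress to [0, 1]: stationary before start_step and after num_steps
    progress = np.clip((t - start_step) / num_steps, 0.0, 1.0)
    return initial_pos + progress * (final_pos - initial_pos)

print(goal_pos(0))     # [0. 0.]
print(goal_pos(1000))  # [ 10. -10.]  halfway along the trajectory
print(goal_pos(2500))  # [ 20. -20.]  clamped at the final position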

Key features:

  • Linear Movement: Goal moves in a straight line from the initial to the final position

  • Configurable Timing: Movement start time and duration are adjustable

  • Goal Region: Circular region around the goal position with the specified radius

  • Status Tracking: Provides information about goal state and agent proximity

Examples

Example YAML configuration:

environment:
    dimensions: [100, 100]
    goal_radius: 8.0
    goal_pos: [0, 0]
    final_goal_pos: [30, -20]
    num_steps: 1500
    start_step: 100

Advanced usage with shepherding simulation:

from swarmsim.Environments import ShepherdingEnvironment
from swarmsim.Populations import BrownianMotion
from swarmsim.Controllers import ShepherdingLamaController

# Create environment
env = ShepherdingEnvironment('config.yaml')

# Create populations
targets = BrownianMotion('config.yaml', 'Targets')
herders = BrownianMotion('config.yaml', 'Herders')

# Create shepherding controller
controller = ShepherdingLamaController(
    population=herders,
    targets=targets,
    environment=env,
    config_path='config.yaml'
)

# Advance the goal along its trajectory (call once per simulation step)
env.update()

# Inspect the current goal state
info = env.get_info()
print(f"Goal position: {env.goal_pos}")
print(f"Goal region radius: {info['Goal region radius']}")

Applications include:

  • Dynamic shepherding scenarios

  • Moving target tracking problems

  • Adaptive goal-seeking behaviors

  • Multi-phase shepherding tasks

__init__(config_path)[source]

Initialize the ShepherdingEnvironment with dynamic goal movement.

Parameters:

config_path (str) – Path to the YAML configuration file containing environment parameters.

Raises:
  • FileNotFoundError – If the configuration file cannot be found.

  • KeyError – If required parameters are missing from the configuration.

Notes

The initialization computes the step-wise movement vector based on the linear trajectory from initial to final goal position over the specified number of steps.

get_info()[source]

Retrieve current environment state information for logging and monitoring.

Returns:

dict – Dictionary containing environment status with keys:

  • ‘Goal region radius’: float

    Current radius of the goal region

  • ‘Goal region center’: np.ndarray

    Current 2D coordinates of the goal center

Notes

This information can be used by:

  • Loggers to track goal movement over time

  • Controllers to access the current goal state

  • Renderers to visualize the goal region

  • Analysis tools to compute performance metrics

update()[source]

Advance the goal position along its trajectory.

This method is called at each simulation timestep to update the goal position. The goal moves linearly from its initial position toward the final position over the specified number of steps, starting at the configured start step.

Notes

The goal movement algorithm:

  1. Step Counting: Increment internal step counter

  2. Movement Window: Check if current step is within movement period

  3. Position Update: Add direction vector to current goal position

  4. Boundary Conditions: Goal stops at final position

Movement occurs only when:

  • Current step > start_step (allows an initial stationary period)

  • Current step < start_step + num_steps (prevents overshoot)

The direction vector is pre-computed during initialization as:

direction = (final_pos - initial_pos) / num_steps

After num_steps, the goal remains stationary at the final position.
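
Putting the pieces together, the documented rule can be sketched as a small helper; this is a paraphrase of the behavior described above, not the package source:

import numpy as np

def advance_goal(goal_pos, direction, step_count, start_step, num_steps):
    """Sketch of the documented per-step goal update (not the actual implementation)."""
    step_count += 1
    if start_step < step_count < start_step + num_steps:
        goal_pos = goal_pos + direction  # direction = (final_pos - initial_pos) / num_steps
    return goal_pos, step_count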

Package Overview

Environments in SwarmSim provide:

  • Boundary Conditions: Define simulation space limits and boundary behaviors

  • Environmental Forces: External fields, currents, and forces affecting agents

  • Obstacle Management: Static and dynamic obstacles within the simulation space

  • Spatial Organization: Grid-based and continuous space representations

  • Physical Realism: Realistic environmental physics and constraints

All environment classes inherit from the base environment interface, ensuring consistent integration with simulators and other components.

Core Modules

Base Environment Interface

The foundation class that defines the standard interface for all environment implementations.

class swarmsim.Environments.base_environment.Environment(config_path)[source]

Bases: ABC

Abstract base class for an environment in which agents operate.

This class reads configuration parameters from a YAML file and requires derived classes to implement the get_forces, get_info, and update methods.

Parameters:

config_path (str) – Path to the YAML configuration file containing environment parameters.

Variables:

dimensions (tuple of (int, int)) – The dimensions of the environment in 2D (width, height), loaded from the configuration file.

Config requirements:

dimensions (tuple of (int, int), optional) – The dimensions of the environment in 2D. Default is (100, 100).

Raises:
  • FileNotFoundError – If the configuration file is not found.

  • KeyError – If required environment parameters are missing in the configuration file.

Examples

Example YAML configuration:

environment:
    dimensions: [200, 150]

This will set the environment dimensions to (200, 150).

__init__(config_path)[source]

Initializes the Environment with configuration parameters from a YAML file.

Parameters:

config_path (str) – Path to the YAML configuration file.

abstractmethod get_forces(agents)[source]

Computes the forces exerted by the environment on the agents.

This method must be implemented by subclasses to define the environmental forces acting on agents in the simulation.

Parameters:

agents (list) – A list of agent objects for which the environmental forces are being computed.

Returns:

np.ndarray – An array representing the forces exerted on each agent.

abstractmethod get_info()[source]

Retrieves environment-specific information for logging.

This method must be implemented by subclasses to return relevant environment data.

Returns:

dict – A dictionary containing information about the environment.

abstractmethod update()[source]

Updates the environment state.

This method must be implemented by subclasses to define how the environment evolves over time.

Key Features:

  • Standardized Interface: Consistent API across all environment types

  • Environment Dimensions: Simulation space size loaded from the YAML configuration

  • Force Computation: Calculate environmental forces on agents

  • State Updates and Logging: update and get_info hooks for evolving and reporting environment state

  • Extensible Design: Easy to extend for specialized environmental conditions

Empty Environment

A minimal environment implementation with no external forces and configurable dimensions.

class swarmsim.Environments.empty_environment.EmptyEnvironment(config_path)[source]

Bases: Environment

An environment with no forces acting on the agents.

This environment represents a simple static environment where agents are not influenced by external forces. The environment dimensions are loaded from a YAML configuration file.

Parameters:

config_path (str) – Path to the YAML configuration file containing environment parameters.

Variables:

dimensions (tuple of (int, int)) – The dimensions of the environment in 2D (width, height), inherited from the Environment base class.

Config requirements:

dimensions (tuple of (int, int), optional) – The dimensions of the environment in 2D. Default is (50, 50).

Raises:
  • FileNotFoundError – If the configuration file is not found.

  • KeyError – If required environment parameters are missing in the configuration file.

Examples

Example YAML configuration:

environment:
    dimensions: [50, 50]

This will set the environment dimensions to (50, 50).

__init__(config_path)[source]

Initializes the EmptyEnvironment with the configuration parameters from a YAML file.

Parameters:

config_path (str) – Path to the YAML configuration file.

get_forces(agents)[source]

Computes the forces exerted by the environment on the agents.

Since this is an empty environment, no external forces are exerted on the agents. This method returns an array of zeros, representing zero force applied to each agent.

Parameters:

agents (list) – A list of agent objects for which the environmental forces are being computed.

Returns:

np.ndarray – An array of shape (num_agents, 2), where each row is [0, 0] indicating no force.

get_info()[source]

Retrieves environment-specific information for logging.

Since this is an empty environment, it returns an empty dictionary.

Returns:

dict – An empty dictionary {}.

update()[source]

Updates the environment state.

Since this is a static environment with no forces or dynamic elements, this method does nothing.

Key Features:

  • Minimal Overhead: No environmental forces or obstacles

  • Configurable Dimensions: Environment size defined via the YAML configuration

  • Performance: Optimized for speed in basic simulations

  • Foundation: Good starting point for custom environments

Applications:

  • Basic algorithm testing

  • Theoretical studies without environmental effects

  • Performance benchmarking

  • Educational demonstrations

Shepherding Environment

Specialized environment for shepherding and herding simulations.

class swarmsim.Environments.shepherding_environment.ShepherdingEnvironment(config_path)[source]

Bases: EmptyEnvironment

Dynamic environment for shepherding tasks with moving goal positions.

This environment extends EmptyEnvironment to support shepherding scenarios where the goal position moves along a predefined trajectory. The goal starts at an initial position and moves toward a final destination over a specified number of simulation steps, creating dynamic targets for shepherding controllers.

The environment is designed for multi-agent shepherding tasks where herder agents must guide target agents to a moving goal region. The linear goal movement creates realistic scenarios where the target location changes predictably over time.

Parameters:

config_path (str) – Path to the YAML configuration file containing environment parameters.

Variables:
  • goal_radius (float) – Radius of the goal region where agents are considered to have reached the target.

  • goal_pos (np.ndarray) – Current 2D position of the goal center.

  • final_goal_pos (np.ndarray) – Final 2D coordinates where the goal will stop moving.

  • num_steps (int) – Total number of simulation steps over which the goal moves.

  • step_count (int) – Current simulation step counter for tracking goal movement progress.

  • start_step (int) – Simulation step at which goal movement begins (allows for initial stationary period).

  • direction (np.ndarray) – Per-step displacement vector from the initial to the final goal position; see Notes for how it is computed.

Config Requirements:
  • The YAML configuration file must contain the following parameters under the environment section

  • goal_radius (float, optional) – Radius of the goal region in environment units. Default is 5.0.

  • goal_pos (list of float, optional) – Initial 2D coordinates [x, y] of the goal position. Default is [0, 0].

  • final_goal_pos (list of float) – Final 2D coordinates [x, y] where the goal movement terminates. Required parameter.

  • num_steps (int, optional) – Number of simulation steps for the complete goal trajectory. Default is 2000.

  • start_step (int, optional) – Simulation step when goal movement begins. Default is 0.

  • dimensions (list of int, optional) – Environment dimensions [width, height]. Inherited from EmptyEnvironment.

Notes

The goal movement follows a linear trajectory:

goal_pos(t) = initial_pos + (t - start_step) / num_steps * (final_pos - initial_pos)

where t is the current step count. The goal remains stationary before start_step and after reaching the final position.

Key features:

  • Linear Movement: Goal moves in a straight line from the initial to the final position

  • Configurable Timing: Movement start time and duration are adjustable

  • Goal Region: Circular region around the goal position with the specified radius

  • Status Tracking: Provides information about goal state and agent proximity

Examples

Example YAML configuration:

environment:
    dimensions: [100, 100]
    goal_radius: 8.0
    goal_pos: [0, 0]
    final_goal_pos: [30, -20]
    num_steps: 1500
    start_step: 100

Advanced usage with shepherding simulation:

from swarmsim.Environments import ShepherdingEnvironment
from swarmsim.Populations import BrownianMotion
from swarmsim.Controllers import ShepherdingLamaController

# Create environment
env = ShepherdingEnvironment('config.yaml')

# Create populations
targets = BrownianMotion('config.yaml', 'Targets')
herders = BrownianMotion('config.yaml', 'Herders')

# Create shepherding controller
controller = ShepherdingLamaController(
    population=herders,
    targets=targets,
    environment=env,
    config_path='config.yaml'
)

# Advance the goal along its trajectory (call once per simulation step)
env.update()

# Inspect the current goal state
info = env.get_info()
print(f"Goal position: {env.goal_pos}")
print(f"Goal region radius: {info['Goal region radius']}")

Applications include:

  • Dynamic shepherding scenarios

  • Moving target tracking problems

  • Adaptive goal-seeking behaviors

  • Multi-phase shepherding tasks

__init__(config_path)[source]

Initialize the ShepherdingEnvironment with dynamic goal movement.

Parameters:

config_path (str) – Path to the YAML configuration file containing environment parameters.

Raises:
  • FileNotFoundError – If the configuration file cannot be found.

  • KeyError – If required parameters are missing from the configuration.

Notes

The initialization computes the step-wise movement vector based on the linear trajectory from initial to final goal position over the specified number of steps.

get_info()[source]

Retrieve current environment state information for logging and monitoring.

Returns:

dict – Dictionary containing environment status with keys:

  • ‘Goal region radius’: float

    Current radius of the goal region

  • ‘Goal region center’: np.ndarray

    Current 2D coordinates of the goal center

Notes

This information can be used by:

  • Loggers to track goal movement over time

  • Controllers to access the current goal state

  • Renderers to visualize the goal region

  • Analysis tools to compute performance metrics

update()[source]

Advance the goal position along its trajectory.

This method is called at each simulation timestep to update the goal position. The goal moves linearly from its initial position toward the final position over the specified number of steps, starting at the configured start step.

Notes

The goal movement algorithm:

  1. Step Counting: Increment internal step counter

  2. Movement Window: Check if current step is within movement period

  3. Position Update: Add direction vector to current goal position

  4. Boundary Conditions: Goal stops at final position

Movement occurs only when:

  • Current step > start_step (allows an initial stationary period)

  • Current step < start_step + num_steps (prevents overshoot)

The direction vector is pre-computed during initialization as:

direction = (final_pos - initial_pos) / num_steps

After num_steps, the goal remains stationary at the final position.

Key Features:

  • Target Zones: Defined goal areas for shepherding tasks

  • Moving Goal: Goal region travels along a linear trajectory with a configurable start step and duration

  • Status Reporting: get_info exposes the goal radius and center for logging and rendering

  • Force-Free Dynamics: Inherits the zero-force behavior of EmptyEnvironment

Applications:

  • Livestock herding simulation

  • Robot shepherding algorithms

  • Crowd control studies

  • Multi-agent coordination research


Best Practices

  1. Start Simple: Begin with EmptyEnvironment and add complexity gradually

  2. Force Validation: Ensure environmental forces are physically reasonable

  3. Boundary Handling: Choose appropriate boundary conditions for your scenario

  4. Performance Testing: Profile environmental force computations for large populations

  5. Visualization: Visualize force fields and environmental features for debugging

  6. Parameter Tuning: Systematically explore environmental parameter effects
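
As a quick aid to points 2 and 5, a small helper like the sketch below can summarize the environmental forces for the current agent configuration before committing to a long run; env and agents stand in for whatever environment and population objects the simulation uses:

import numpy as np

def summarize_forces(env, agents):
    """Print summary statistics of the environmental forces as a sanity check."""
    forces = env.get_forces(agents)             # (num_agents, 2)
    magnitudes = np.linalg.norm(forces, axis=1)
    print(f"force magnitude: min={magnitudes.min():.3g}, "
          f"mean={magnitudes.mean():.3g}, max={magnitudes.max():.3g}")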
